The following happened to me in an IRC chatroom, long enough ago that I was still hanging around in IRC chatrooms. Time has fuzzed the memory and my report may be imprecise.

    So there I was, in an IRC chatroom, when someone reports that a friend of his needs medical advice. His friend says that he’s been having sudden chest pains, so he called an ambulance, and the ambulance showed up, but the paramedics told him it was nothing, and left, and now the chest pains are getting worse. What should his friend do?

    I was confused by this story. I remembered reading about homeless people in New York who would call ambulances just to be taken someplace warm, and how the paramedics always had to take them to the emergency room, even on the 27th iteration. Because if they didn’t, the ambulance company could be sued for lots and lots of money. Likewise, emergency rooms are legally obligated to treat anyone, regardless of ability to pay.1 So I didn’t quite understand how the described events could have happened. Anyone reporting sudden chest pains should have been hauled off by an ambulance instantly.

    And this is where I fell down as a rationalist. I remembered several occasions where my doctor would completely fail to panic at the report of symptoms that seemed, to me, very alarming. And the Medical Establishment was always right. Every single time. I had chest pains myself, at one point, and the doctor patiently explained to me that I was describing chest muscle pain, not a heart attack. So I said into the IRC channel, “Well, if the paramedics told your friend it was nothing, it must really be nothing—they’d have hauled him off if there was the tiniest chance of serious trouble.”

    Thus I managed to explain the story within my existing model, though the fit still felt a little forced . . .

    Later on, the fellow comes back into the IRC chatroom and says his friend made the whole thing up. Evidently this was not one of his more reliable friends.

    I should have realized, perhaps, that an unknown acquaintance of an acquaintance in an IRC channel might be less reliable than a published journal article. Alas, belief is easier than disbelief; we believe instinctively, but disbelief requires a conscious effort.2

    So instead, by dint of mighty straining, I forced my model of reality to explain an anomaly that never actually happened. And I knew how embarrassing this was. I knew that the usefulness of a model is not what it can explain, but what it can’t. A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation.

    Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.

    We are all weak, from time to time; the sad part is that I could have been stronger. I had all the information I needed to arrive at the correct answer, I even noticed the problem, and then I ignored it. My feeling of confusion was a Clue, and I threw my Clue away.

    I should have paid more attention to that sensation of still feels a little forced. It’s one of the most important feelings a truthseeker can have, a part of your strength as a rationalist. It is a design flaw in human cognition that this sensation manifests as a quiet strain in the back of your mind, instead of a wailing alarm siren and a glowing neon sign reading:

    Either Your Model Is False Or This Story Is Wrong.

    1 And the hospital absorbs the costs, which are enormous, so hospitals are closing their emergency rooms . . . It makes you wonder what’s the point of having economists if we’re just going to ignore them.

    2 From McCluskey (2007), “Truth Bias”: “[P]eople are more likely to correctly judge that a truthful statement is true than that a lie is false. This appears to be a fairly robust result that is not just a function of truth being the correct guess where the evidence is weak—it shows up in controlled experiments where subjects have good reason not to assume truth[.]”

    And from Gilbert et al. (1993), “You Can’t Not Believe Everything You Read”: “Can people comprehend assertions without believing them? [...] Three experiments support the hypothesis that comprehension includes an initial belief in the information comprehended.”


    It's strange that it sounds like a rationalist is saying that he should have listened to his instincts. A true rationalist should be able to examine all the evidence without having to rely on feelings to make a judgment, or would be able to truly understand the source of his feelings, in which case it's more than just a feeling. The unfortunate thing is that people are more likely to remember the cases where they didn't listen to their feelings and the feelings turned out to be correct in the end, than all the times when the feelings were wrong.

    The "quiet strain in the b...

    A rationalist should acknowledge their irrationality; to do otherwise would be irrational.

    How do you face this situation as a rationalist?
    Sorry, since when does "quiet strain in the back of your mind" automatically translate to "irrational"? This particular quiet voice is usually _right_; surely that makes it rational?
    To my mind, this question relates to the accuracy of intuitions and the problems that arise from relying on them. In the original post, my take is that the "quiet strain in the back of your mind" refers to the observation that people whose opinion you value in a chatroom are discarding your opinion, which is based on a single piece of "anec-data"; a rationalist in his right mind, taking a step back on a good day, would automatically discard such a datum as the sole model through which reality ought to be interpreted. While this answers your question, my broader take is that untrained intuition is just a mashup feeling of what seems right or wrong in a situation, and feelings are not to be confounded with reality. Unless... in the rare occurrence that these feelings have been thoroughly trained to be right, by which I mean conditioning the mind through repetition to the point that, for example, a veteran mathematician would "feel" or "intuit" that something is wrong with a mathematical proof at a glance, without going through the details. Yet there ought to be human limits on such trained intuition and feeling; thus, by default, relying on them must be a last resort or a matter of physical survival - which is what intuition is better used for (to me) - rather than being extrapolated as a proxy for rationality.

    Anon, see Why Truth?:

    When people think of "emotion" and "rationality" as opposed, I suspect that they are really thinking of System 1 and System 2 - fast perceptual judgments versus slow deliberative judgments. Deliberative judgments aren't always true, and perceptual judgments aren't always false; so it is very important to distinguish that dichotomy from "rationality". Both systems can serve the goal of truth, or defeat it, according to how they are used.

    "I should have paid more attention to that sensation of still feels a little forced."

    The force that you would have had to counter was the impetus to be polite. In order to boldly follow your models, you would have had to tell the person on the other end of the chat that you didn't believe his friend. You could have less boldly held your tongue, but that wouldn't have satisfied your drive to understand what was going on. Perhaps a compromise action would have been to point out the unlikelihood, (which you did: "they'd have hauled him off if there was the tiniest chance of serious trouble"), and ask for a report on the eventual outcome.

    Given the constraints of politeness, I don't know how you can do better. If you were talking to people who knew you better, and understood your viewpoint on rationality, you might expect to be forgiven for giving your bald assessment of the unlikeliness of the report.

    Not necessarily.

    You can assume the paramedics did not follow the proper procedure, and that his friend ought to go to the emergency room himself to verify that he is OK. People do make mistakes.

    The paramedics are potentially unreliable as well, though given the litigious nature of our society I would fully expect the paramedics to be extremely reliable in taking people to the emergency room, which would still cast doubt on the friend.

    Still, if you want to be polite, just say "if you are concerned, you should go to the emergency room anyway" and keep your doubts about the man's veracity to yourself. No doubt the truth would have come out at that point as well.

    I saw someone on FB reposting this post today. It makes an interesting point about not doubting your own models in certain circumstances, I guess, but the original post leaves out relevant issues of trust and pragmatism. Sure, people probably gullibly believe untrue stories more often than they should, but biases also often cause us to discount anecdotes that are actually representative of real, lived experiences (such as the subtle experiences of those who suffer from racism and sexism). Just because a bug is unusual or difficult to locally replicate/experience doesn't mean you should discount the bug reports. Also, (obviously) faith in even medical experts/institutions should not be absolute. Finally, there's nothing wrong with offering someone good advice even if you think they may have lied to you or are trolling... there's still a chance they were not trolling, and arming them with good information might be good for them in the short term or long term.
    That article is written as though "are you sure that was sexism" literally means "you had better prove it is sexism with 100% certainty, or I won't believe you". That is not what it means. It's not a demand for 100% certainty; it's a demand for better evidence. You don't have to be treating the world like a computer in order to think that you should try to rule out innocent explanations before proclaiming someone guilty. Also, while the author claims that the standard he quotes makes it impossible to prove sexism, his own standard has the opposite problem: according to it, it's impossible to prove anyone innocent of sexism. People don't favor uncertainty over assumption because they're computer geeks; people favor uncertainty over assumption because there are such things as false positives, and they have enough of a cost that avoiding them is worthwhile.

    Reminds me of a family dinner where the topic of the credit union my grandparents had started came up.

    According to my grandmother, the state auditor was a horribly sexist fellow.  He came and audited their books every single month, telling everyone who would listen that it was because he "didn't think a woman could be a successful credit union manager."

    This, of course, got my new-agey aunts and cousins all up-in-arms about how horrible it was that that kind of sexism was allowed back in the 60s and 70s.  They really wanted to make sure everyone knew they didn't approve, so the conversation dragged on and on...

    And about the time everyone was all thoroughly riled up and angry from the stories of the mean, vindictive things this auditor had done because the credit union was run by a woman, my grandfather decided to get in on the ruckus and told his story about the auditor...

    Seems like the very first time the auditor had come through, the auditor spent several hours going over the books and couldn't make it all balance correctly.  He was all-fired sure this brand new credit union was up to something shady.  Finally, my grandfather (who was the credit union accountant...

    In its strongest form, not believing system 1 amounts to not believing perceptions, hence not believing in empiricism. This is possibly the oldest of philosophical mistakes, made by Plato, possibly Siddhartha, and probably others even earlier.


    Sounds like good old cognitive dissonance. Your mental model was not matching the information being presented.

    That feeling of cognitive dissonance is a piece of information to be considered in arriving at your decision. If something doesn't feel right, usually either the model or the facts are wrong or incomplete.


    "And this is where I fell down as a rationalist. I remembered several occasions where my doctor would completely fail to panic at the report of symptoms that seemed, to me, very alarming. And the Medical Establishment was always right. Every single time. I had chest pains myself, at one point, and the doctor patiently explained to me that I was describing chest muscle pain, not a heart attack. So I said into the IRC channel, "Well, if the paramedics told your friend it was nothing, it must really be nothing - they'd have hauled him off if there was the tiniest chance of serious trouble.""

    My own "hold on a second" detector is pinging mildly at that particular bit. Specifically, isn't there a touch of an observer selection effect there? If the docs had been wrong and you ended up dying as a result, you wouldn't have been around to make that deduction, so you're (Well, anyone is) effectively biased to retroactively observe outcomes in which if the doctor did say you're not in a life threatening situation, you're genuinely not?

    Or am I way off here?


    A valid point, Psy-Kosh, but I've seen this happen to a friend too. She was walking along the streets one night when a strange blur appeared across her vision, with bright floating objects. Then she was struck by a massive headache. I had her write down what the blur looked like, and she put down strange half-circles missing their left sides.

    That point was when I really started to get worried, because it looked like lateral neglect - something that I'd heard a lot about, in my studies of neurology, as a symptom of lateralized brain damage from strokes.

    The funny thing was, nobody in the medical profession seemed to think this was a problem. The medical advice line from her health insurance said it was a "yellow light" for which she should see a doctor in the next day or two. Yellow light?! With a stroke, you have to get the right medication within the first three hours to prevent permanent brain damage! So we went to the emergency room - reluctantly, because California has enormously overloaded emergency rooms - and the nurse who signed us in certainly didn't seem to think those symptoms were very alarming.

    The thing is, of course, that non-doctors are legally prohib...

    I had a similar experience with my girlfriend, except the symptoms were significantly more alarming. She was, among other things, unable to remember many common nouns. I would point and say 'What is that swinging room separator?" and she would be unable to figure out "door". I was aware from the start that the symptoms might have been due to a migraine aura, having looked up the symptoms on Wikipedia, but was advised by 811 to take her to the hospital immediately. The symptoms were gone before we arrived. Five hours later (a strong hint that at least the triage people thought it wasn't an emergency), a doctor had diagnosed it as a silent migraine.

    Okie, and yeah, I imagine you would have noticed.

    Also, of course, docs that habitually misdiagnose would presumably be sued or worse to oblivion by friends and family of the deceased. I was just unsure about the actual strength of that one thing I mentioned.

    I think one would be the closest to truth by replying: "I don't quite believe that your story is true, but if it is, you should... etc" because there is no way for you to surely know whether he was bluffing or not. You have to admit both cases are possible even if one of them is highly improbable.

    Doesn't any model contain the possibility, however slight, of seeing the unexpected? Sure this didn't fit with your model perfectly — and as I read the story and placed myself in your supposed mental state while trying to understand the situation, I felt a great deal of similar surprise — but jumping to the conclusion that someone was just totally fabricating is something that deserves to be weighed against other explanations for this deviation from your model.

    Your model states that pretty much under all circumstances an ambulance is going to pick up a pat...

    See antiprediction.
    That's certainly sensible, and in But There's Still a Chance Eliezer makes examples where this seems strong. In the above example, it depends a whole lot on how much belief you have in people (or, rather, lines of IRC chat). I think then that your strength as a rationalist comes in balancing that uncertainty against your prior trust in people. At which point, instead of predicting the negative, I'd seek more information.
    The level of "trust" you have in a person should be inversely proportional to the sensationalism of the claim that he's making. If a person tells you he was abducted by a UFO, you demand evidence. If a person tells you that on the way to work he slipped and fell down, and you have no concrete reason to doubt the story in particular or the person in general, you take that at face value. It is a reasonable assumption that a perfect stranger in all likelihood will NOT be delusional or a compulsive liar. DP
    That makes sense if you're only evaluating complete strangers. In other words, your uncertainty about the population-inferred trustworthiness of a person is pretty high, and so instead the mere (Occam Factor style) complexity of their statement is the overruling component of your decision. In the stated case, this isn't a totally random stranger. I feel quite justified in having a less-than-uninformative prior about trusting IRC ghosts. In this case, my rationally acquired prejudice overrules an inference about the truth of even somewhat ordinary tales.
    The author did not mention anything about an exceptionally high percentage of liars in IRC relative to the general population (which would be quite relevant to his statement); therefore there's no reason to believe that such had been HIS experience in the past. Given that, there is no reason for HIM to presume that the percentage of compulsive liars in IRC would be different from the general population. YOUR experiences may, of course, be drastically different, but they are not the subject of discussion here. DP

    I don't see that you did anything at all irrational. You're talking to a complete stranger on the internet. He doesn't know you, and cannot have any possible interest in deceiving you. He tells you a fairly detailed story and asks for your advice. For him to make the whole thing up just for kicks would be an example of highly irrational and fairly unlikely behavior.

    Conversely, a person's panicking over chest pains and calling the ambulance is a comparatively frequent occurrence. Your having read somewhere something about ambulance policies does not amount to hav...

    You're talking to a complete stranger on the internet. He doesn't know you, and cannot have any possible interest in deceiving you.

    There's plenty of evidence that some people (a smallish minority, I think) will deceive strangers for the fun of it.

    Which, as I said later on in the same paragraph, is irrational and unlikely behavior. Therefore, when lacking any factual evidence, the reasonable presumption is that that's not the case. DP
    I think many of us have actually encountered liars on the Internet. I'm not sure what you mean when you say "lacking any factual evidence".
    I presume that you have encountered liars in the real world as well. Do you, on that basis, habitually assume that a random stranger engaging in casual conversation with you is a liar? My point is that pathological liars are a small minority. So if you're dealing with a person that you know absolutely nothing about, and who does not have any conceivable reason to lie to you, there is nothing unreasonable in assuming that he's telling you the truth, unless you have factual evidence (i.e. you have accurate, verifiable knowledge of ambulance policies) that contradicts what he's saying. DP
    I think at this point the questions have become (a) "how many bits of evidence does it take to raise 'someone is lying' to prominence as a hypothesis?" and (b) "how many bits of evidence can I assign to 'someone is lying' after evaluating the probability of this story based on what I know?" I believe your argument is that a > b (specifically, that a is large and b is small), where the post asserts that a < b. I'm not going to say that's unreasonable, given that all we know is what Eliezer Yudkowsky wrote, but often actual experience has much more detail than any feasible summary - I'm willing to grant him the benefit of the doubt, given that his tiny note of discord got the right answer in this instance.
    My argument is what I stated, nothing more. Namely that there is nothing unreasonable about assuming that a perfect stranger that you're having a casual conversation with is not trying to deceive you. I already laid out my reasoning for it. I'm not sure what more I can add. DP
    "Do you, on that basis, habitually assume that a random stranger engaging in casual conversation with you is a liar?" Yes. Absolutely. Almost /everyone/ lies to complete strangers sometimes. Who among us has never given an enhanced and glamourfied story about who they are to a stranger they struck up a conversation with on a train? Never? Really? Not even /once/?
    If everyone regularly talked to strangers on trains, and exactly once lied to such a stranger, it would still be pretty safe to assume that any given train-stranger is being honest with you.
    Actually, yes, you're entirely right. In conversations I've had about this with friends - good grief, there's a giant flashing anecdata alert if ever I did see one, but it's the best we've got to go off here - I would suspect that people do it often enough that it's a reasonable thing to consider in a situation like the one being discussed here, though. Not that I think it's a bad thing that the person in question didn't, mind you. It would be a very easy option not to consider.
    Yes, they deceive strangers in particular ways that have the potential to bring enjoyment to the deceiver. The story here doesn't strike me as one of those cases -- would it bring the deceiver any mirth to hear people's medical advice about chest pains? Probably not. That would be more likely if the story were something like, "um, I've got these strange warts on my..." (And I say this as someone who's trolled IRC with similar requests for advice.)
    Wait, why not?
    I read somewhere that if I spin about and click my heels 3 times I will be transported to the land of Oz. Does that qualify as a concrete reason to believe that such a land does indeed exist? DP

    I read somewhere that if I spin about and click my heels 3 times I will be transported to the land of Oz. Does that qualify as a concrete reason to believe that such a land does indeed exist?

    That indeed serves as evidence for that fact, though we have much stronger evidence to the contrary.

    N.B. You do not need to sign your comments; your username appears above every one.

    That indeed serves as evidence for that fact, though we have much stronger evidence to the contrary.

    And not just because clicking the heels three times is more canonically (and more often) said to be the way to return to Kansas from Oz, and not the way to get to Oz.

    So the fact that something was written somewhere is sufficient to meet your criteria for considering it evidence? I take it you have actually tried clicking your heels to check whether or not you would be teleported to Oz then? Also, does my signing my comments offend you? DP

    Also, does my signing my comments offend you?

    It hurts aesthetically by disrupting uniformity of standard style.

    Fair enough. It's a habit of mine that I'm not married to. If members of this board take issue with it, I can stop.

    Yes. It's really sucky evidence. This doesn't remotely follow and is far weaker evidence than other available sources. For a start, everyone knows that you get to Oz with tornadoes and concussions. It makes you look like an outsider who isn't able to follow simple social conventions and may have a tendency towards obstinacy. (Since you asked...)
    "This doesn't remotely follow and is far weaker evidence than other available sources. For a start, everyone knows that you get to Oz with tornadoes and concussions." Let's not get bogged down in the specific procedure of getting to Oz. My point was that if you truly adopt merely seeing something written somewhere as your standard for evidence, you commit yourself to analyzing and weighing the merits of EVERYTHING you read about EVERYWHERE. Do you mean to tell me that when you read a fairy tale you truly consider whether or not what's written there is true? That you don't just dismiss it offhand without giving it a second thought? "It makes you look like an outsider who isn't able to follow simple social conventions and may have a tendency towards obstinacy. (Since you asked...)" Like I said above to Vladimir, it's not a big deal, but you're reading quite a bit into a simple habit.
    The fact that something is really written is true; whether it implies that the written statements themselves are true is a separate theoretical question. Yes, ideally you'd want to take into account everything you observe in order to form an accurate idea of future expected events (observable or not). Of course, it's not quite possible, but not for the want of motivation.
    Well, I didn't think I needed to clarify that I'm not questioning whether or not something that's written is really written. Of course, I'm questioning the truthfulness of the actual statement. Or not so much its truthfulness, but rather whether or not it can be considered evidence. Though I realize that you take issue with arguing over word definitions, to me the word evidence has a certain meaning that goes beyond every random written sentence, whisper or rumor that you encounter.
    The fact that something is written, or not written, is evidence about the way world is, and hence to some extent evidence about any hypothesis about the world. Whether it's strong evidence about a given hypothesis is a different question, and whether the statement written/not written is correct is yet another question. (See also the links from this page.)

    Though I realize that you take issue with arguing over word definitions, to me the word evidence has certain meaning that goes beyond every random written sentence, whisper or rumor that you encounter.

    Around these parts, a claim that B is evidence for A is a taken to be equivalent to claiming that B is more probable if A is true than if not-A is true. Something can be negligible evidence without being strictly zero evidence, as in your example of a fairy story.
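    The definition above lends itself to a quick numerical sketch. The following illustration uses invented numbers (not anything from this thread): B counts as evidence for A whenever B is more probable under A than under not-A, and the size of the resulting update depends on the likelihood ratio, which is why something can be genuine evidence yet negligible.

```python
def posterior(prior, p_b_given_a, p_b_given_not_a):
    """Bayes' rule: P(A|B) from the prior P(A) and the two likelihoods of B."""
    joint_a = prior * p_b_given_a
    joint_not_a = (1 - prior) * p_b_given_not_a
    return joint_a / (joint_a + joint_not_a)

# B is nine times more likely under A than under not-A: a substantial update.
strong = posterior(0.5, 0.9, 0.1)  # 0.9

# B is only very slightly more likely under A (say, a fairy tale
# mentioning A): still evidence by the definition above, but the
# posterior barely moves off the 0.5 prior.
weak = posterior(0.5, 0.500001, 0.5)
```

    On this reading, "negligible but not strictly zero evidence" just means the likelihood ratio P(B|A)/P(B|not-A) is barely above 1.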

    Let's not get bogged down in the specific procedure of getting to Oz. My point was that if you truly adopt merely seeing something written somewhere as your standard for evidence, you commit yourself to analyzing and weighing the merits of EVERYTHING you read about EVERYWHERE.

    No, you can acknowledge that something is evidence while also believing that it's arbitrarily weak. Let's not confuse the practical question of how strong evidence has to be before it becomes worth the effort to use it ("standard of evidence") with the epistemic question of what things are evidence at all. Something being written down, even in a fairy tale, is evidence for its truth; it's just many orders of magnitude short of the evidential strength necessary for us to consider it likely.

    Vladimir, Cyan, and jimrandomh, since you essentially said the same thing, consider this reply to be addressed to all three of you. Answer me honestly, when reading a fairy tale, do you really stop to consider what's written there, qualify its worth as evidence, and compare it to everything else you know that might contradict it, before making the decision that the probability of the fairy tale being true is extremely low? Do you really not just dismiss it offhand as not true without a second thought?
    When I pick up a work of fiction, I do not spend time assessing its veracity. If I read a book of equally fantastic claims which purports to be true, I do spend a little time. You might want to peruse bounded rationality for an overview.
    So you would then agree that merely the fact that something is written SOMEWHERE, does not automatically qualify it as evidence? (Incidentally that is my original point, which in spite of seeming as common sense as common sense can be, has attracted a surprising amount of disagreement.)
    You have to specify what it purports to be evidence of before I can give you an answer that isn't a tangent. Edited to add: Maybe I can do better than the above sentence. I affirm that the existence of this book is negligible but not strictly zero evidence for the claims detailed therein.
    There may be sense in which this is common sense, but you were purposely using it tendentiously, which is why people responded in the technical way that they did. Eliezer said that he read something "somewhere", obviously intending to say that he read it somewhere that he considered trustworthy at the time, not in a fairy tale.
    Well, what can I say? I simply don't consider the vague recollection of reading something somewhere credible evidence of anything, and I stand by that. However, the amount of people that took issue with this statement did open my eyes to the fact that the definition of word "evidence" is not as clear cut as I thought it to be. Not sure if there's any way to resolve this difference of opinion though.
    The easy solution is to stop arguing about the definition of evidence. This community uses it to mean one thing, you're using it to mean something else, and any sort of conflict goes away as soon as people make clear which definition they're using. Since this community already has an accepted definition, you would be safe in assuming that that definition is what other posters here have in mind when they use the word "evidence". By the same token, you should probably find a more precise way to refer to the definition of evidence that you are using in order to avoid being misinterpreted.
    Sticking an adjective in front of the word evidence seems to work. "Evidence" includes things that give you 10^-15 bits of information; on the other hand "good evidence", "usable evidence" and "credible evidence" all imply that the strength of the evidence is at least not exponentially tiny.
    I thought that "evidence", unmodified, would mean non-trivial evidence; otherwise, everything has to count as evidence because it will have some connection to the hypothesis, however weak. To specify a kind of evidence that includes the 1e-15 bit case, I think you would need to say "weak evidence" or "very weak evidence". But I'm not the authority on this: How do others here interpret the term "evidence" (in the Bayesian sense) when it's unmodified?
    I'm sympathetic to both views. I have encountered a number of disputes that revolve around using these two different senses of the word, and am nonetheless blindsided by them consistently. I try to always specify the strength of evidence in some sense when using the word. I think when I do use it unmodified I tend to use it in the technical sense (including even weak evidence). It would be odd if 'evidence' excluded weak evidence, since then 'weak evidence' would be a contradiction in terms, or you could see people arguing things like "When I said 'weak evidence' I didn't mean the 1e-15 bit case, since that's not evidence at all!"
    Hmm. Maybe the strength of the evidence isn't the right thing to use, but rather the confidence with which we know the sign of the correlation.
    If I were talking to a Bayesian, I would interpret it as meaning that "B is evidence for A" if a rough calculation shows that P(A|B) > P(A). I don't generally expect rationalists to even mention individual data points unless P(A|B)/P(A) is large, but if someone else gave the data as an example, then I wouldn't expect the ratio to necessarily be large when a Bayesian referred to it as evidence. So for example, I could see a Bayesian asserting that the writing of the Bible is evidence for a global flood some 5,000 years ago, but I'd be deeply surprised if a Bayesian brought this up in almost any context, because the evidence is so weak (in this case P(A|B) > P(A), but P(A|B)/P(A) is very close to 1).
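    The gap between "technically evidence" and "evidence worth mentioning" can be sketched numerically. Here is a minimal Python illustration (all probabilities are invented for the example), measuring the strength of evidence as the base-2 log of the likelihood ratio, i.e. in bits:

```python
import math

def evidence_in_bits(p_b_given_a: float, p_b_given_not_a: float) -> float:
    """Strength of observing B as evidence for A, in bits:
    the log-2 likelihood ratio P(B|A) / P(B|~A)."""
    return math.log2(p_b_given_a / p_b_given_not_a)

# Strong evidence: an observation that occurs 99% of the time when A
# is true and only 1% of the time when it is false.
strong = evidence_in_bits(0.99, 0.01)      # roughly +6.6 bits

# Technically-evidence-but-negligible: B is barely more likely under A.
weak = evidence_in_bits(0.5 + 1e-15, 0.5)  # on the order of 1e-15 bits

print(strong, weak)
```

Both numbers are positive, so both observations "are evidence" in the technical sense; only the first is worth bringing up in conversation.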
    I agree, this sounds exactly right to me. Unfortunately, I remember that in a lot of Robin Hanson's earlier OvercomingBias posts, my reaction to them would be, "Yes, B is technically evidence in favor of A, but it's extremely weak -- why even mention it?" For example, Suicide Rock. (I think I have a picture of one of those somewhere...)
    That's fair enough. However, judging by what I've read, this community's definition of evidence seems to cover just about anything ever written about anything. How would you then differentiate evidence from rumor, hearsay, speculation, etc.?
    The wiki should be a good starting point for answering this question. What is Evidence? may also be helpful. Short version: rumor, hearsay, and speculation are evidence, albeit of a very weak variety.
    Well that clarifies things quite a bit. I find this definition of evidence surprising, especially in this community, but very interesting. I'll have to sleep on it. Thank you for the references.
    Rumor, hearsay, etc. fall under our definition of evidence, just weak evidence, or probably very indirect (for example, if there is a rumor that A, it might constitute evidence against A being true, given other things you know).
    As noted by jimrandomh, saying 'credible evidence' does make an effort to differentiate between different sorts of evidence. If your claim was simply that reading something was not evidence, then you should not have to qualify the word when you use it now. I imagine for those of us who seem to be disagreeing with you, we would agree that that does not constitute 'credible evidence' for some values of 'credible'.
    That's really clever. I always thought that "credible evidence" was a bit redundant, actually. I just used it as a figure of speech without thinking about it, but under my definition of evidence, credibility is pretty much implicit. It has been made abundantly clear to me, however, that this community's definition differs substantially, so that's the definition I will use when posting here going forward.
    No, but only because that would be cognitively burdensome. We're boundedly rational.
    Immediate observation is only that something is written. That it's also true is a theoretical hypothesis about that immediate observation. That what you are reading is a fairy tale is evidence against the things written there being true, so the theory that what's written in a fairy tale is true is weak. On the other hand, the fact that you observe the words of a given fairy tale is strong evidence that the person (author) whose name is printed on the cover really existed.
    All that is indisputably true. But you didn't really answer my question on whether or not you give enough consideration to what's written in a fairy tale (not whether or not it's written, not who it's written by, but the actual claims made therein) to truly consider it evidence to be incorporated into or excluded from your model of the world.
    Evidence isn't usually something you "include" in your model of the world, it's something you use to categorize models of the world into correct and incorrect ones. Evidence is usually something not interesting in itself, but interesting instrumentally because of the things it's connected to (caused by).
    That is because it is a bad question and one of a form for which you have already received responses.
    This doesn't remotely follow either. Go and research the concept of evidence more. I care little about your signature. I merely describe the social behaviour of humans. What actually does annoy me is if people refuse to use markdown syntax for quotes once they have been prompted. Click the help link below the comment box - consider yourself prompted.
    Duly noted. God forbid I do something that annoys you. Won't be able to live with myself.

    As always, I recommend against sarcasm, which can hide errors in reasoning that would be more obvious when you speak straightforwardly.

    It was a comment on wedrifid's implicit assumption that I should care about what annoys him and bizarre expectation that I would adjust my behavior because I was "prompted" (not asked politely mind you) by him. Not sure what part of that is not obvious to you.
    Generally, when some minor formatting issue annoys a long-standing member of an internet community it is a good idea to listen to what they have to say. Many internet fora have standard rules about formatting and style that aren't explicitly expressed. These rules are convenient because they make reading easier for everyone. There's also a status/signaling aspect in that not using standard formatting signals someone is an outsider. Refusing to adopt standard format and styling signals an implicit lack of identification with a community. Even if one doesn't identify with a group, the effort it takes to conform to formatting norms is generally small enough that the overall gain is positive.
    You're absolutely right. I have no problem using indentation for quotes; as a matter of fact, I was wondering how to do that. It's his condescending tone that I took issue with. In retrospect, though, I should have just ignored it, but I let my temper get the best of me. I'll try to keep counter-productive comments to a minimum in the future.
    Indentation happens by putting a greater-than sign at the beginning of the line. Thus:

        > The quick brown fox jumps over the lazy dog.

    becomes an indented quote.
    I'm not sure of the particulars of your situation, but I personally encounter people lying on the internet orders of magnitude more times than I do people having chest pains.

    An alternative explanation? You put your energy into solving a practical problem with a large downside (minimizing the loss function in nerdese). Yes, to be perfectly rational you should have said: "the guy is probably lying, but if he is not then...".

    It is a design flaw in human cognition that this sensation manifests as a quiet strain in the back of your mind, instead of a wailing alarm siren and a glowing neon sign reading "EITHER YOUR MODEL IS FALSE OR THIS STORY IS WRONG."

    I wouldn't call it a flaw; blaring alarms can be a nuisance. Ideally you could adjust the sensitivity settings . . . hence the popularity of alcohol.

    Thank you, Eliezer. Now I know how to dissolve Newcomb-type problems.

    I simply recite, "I just do not believe what you have told me about this intergalactic superintelligence Omega".

    And of course, since I do not believe, the hypothetical questions asked by Newcomb problem enthusiasts become beneath my notice; my forming a belief about how to act rationally in this contrary-to-fact hypothetical situation cannot pay the rent.

    Fair enough (upvoted); but I'm pretty sure Parfit's Hitchhiker is analogous to Newcomb's Problem, and that's an absolutely possible real-world scenario. Eliezer presents it in chapter 7 of his TDT document.

    This sort of brings to my mind Pirsig's discussions about problem solving in ZATAOMM. You get that feeling of confusion when you are looking at a new problem, but that feeling is actually a really natural, important part of the process. I think the strangest thing to me is that this feeling tends to occur in a kind of painful way -- there is some stress associated with the confusion. But as you say, and as Pirsig says, that stress is really a positive indication of the maturation of an understanding.

    I'm not sure that listening to one's intuitions is enough to cause accurate model changes. Perhaps it is not rational to hold a single model in your head, since your information is incomplete. Instead, one can consciously examine the situation from multiple perspectives; in this way the nicer (simpler, more consistent, whatever your metric is) model's response can be applied. Alternatively, you could legitimately assume that all the models you hold have merit and produce a response that balances their outcomes, e.g. if your model of the medical profession is wrong ... (read more)

    Considering that medical errors apparently kill more people than car accidents each year in the United States, I suspect the establishment is not in fact infallible.

    Citation needed? I know I'm coming to this rather late, but a quick check of the 2010 CDC report on deaths in the US gives "Complications of medical and surgical care" as causing 2,490 deaths, whereas transport accidents caused 37,961 deaths (35,332 of which were classified as 'motor vehicle deaths'). The only other thing I can see that might put medical errors under a different heading is "Accidental poisoning and exposure to noxious substances" at 33,041, which combines to still fewer deaths than transport accidents, even without removing those poisonings which are not medical errors. (This poisoning category appears to include a lot of recreational drug overdoses, judging by the way it sharply increases in the 15-24 age group and then drops off after 54, whereas time spent in hospital presumably increases with age.)

    On the other hand, a 2012 New York Times Op-Ed claims 98,000 deaths from medical errors a year. This number is so much larger than what the CDC reports that I must be misreading something. That would be about 1 in 20 people who die in the US dying due to medical error (original source from 1999). Actually checking that source, 98,000 deaths/year is the upper bound given (the lower bound is 44,000 deaths/year). The report also recommends a 50% reduction in these deaths within 5 years (so by 2004), and Wikipedia mentions a 2006 study claiming that they successfully prevented 120,000 deaths in an 18-month period, but I can't find this study. A 2001 followup appears to focus on suggestions for improvements rather than on giving new data for our question. 3 minutes on Google Scholar didn't turn up any recent estimates. This entire sub-field appears to rely very heavily upon that one source - at least in the US.
Also of interest is "Actual Causes of Death in the US" which classifies deaths by 'mistake made' (so to speak) - the top killer being tobacco use, then poor diet/low exercise, alcohol, microbial agents, toxic agents, car accidents, firearms,
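    The comparison above is easy to check directly (figures as quoted in the comment):

```python
# Sanity check of the 2010 CDC category totals quoted above.
medical_complications = 2_490   # "Complications of medical and surgical care"
accidental_poisoning = 33_041   # includes many non-medical overdoses
transport_accidents = 37_961

combined = medical_complications + accidental_poisoning
print(combined)                        # 35531
print(combined < transport_accidents)  # True
```

Even with the entire poisoning category charitably counted as medical error, the combined total still falls short of transport-accident deaths.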
    If a doctor makes a mistake treating a patient from a vehicle accident, what heading does it get reported under? (I ask the question in earnest, to anybody who might know the answer - because depending on what the answer is, it could explain the discrepancy.)

    From TvTropes:

    "According to legend, one night the students of Baron Cuvier (one of the founders of modern paleontology and comparative anatomy) decided to play a trick on their instructor. They fashioned a medley of skins, skulls and other animal parts (including the head and legs of a deer) into a credibly monstrous costume. One brave fellow then donned the chimeric assemblage, crept into the Baron's bedroom when he was asleep and growled "Cuvier, wake up! I am going to eat you!" Cuvier woke up, took one look at the deer parts that formed part of the costume and sniffed "Impossible! You have horns and hooves!" (one would think "what sort of animals have horns and hooves" is common knowledge).

    More likely he was saying "Impossible! You have horns and hooves (and are therefore not a predator)." The prank is more commonly reported as: "Cuvier, wake up! I am the Devil! I am going to eat you!" His response was "Divided hoof; graminivorous! It cannot be done." Apparently Satan is vegan. Don't comment that some deer have been seen eating meat or entrails; I occasionally grab the last slice of my bud's pizza, but that doesn't classify me as a scavenger."

    How do you face this situation as a rationalist?

    I think more context is necessary. Sorry.
    I believe the evidence is that the initial urge of A is more credible than the rationalization of B. That is, when students change answers on multiple choice tests, they are more likely to turn a right answer to a wrong answer than a wrong answer to a right answer. (I don't know if that generalizes to a true-false setting.)
    It matters why "B sounds more plausible to your mind." If it's because you remembered a new fact, or if you reworked the problem and came out with B, change the answer (after checking that your work was correct and everything). But many multiple-choice tests are written so that there is one right answer, one wrong answer, and two plausible-sounding answers, so you shouldn't change an answer just because B is starting to sound plausible.
    There are two modes of reasoning that are useful that I'd like to briefly discuss: inside view and outside view. Inside view uses models with small reach / high specificity. Outside view uses models with large reach / high generality. Inside view arguments are typically easier to articulate, and thus often more convincing, but there are often many reasons to prefer outside view arguments. (Generally speaking, there are classes of decisions where inside view estimates are likely to be systematically biased, and so using the outside view is better.)

    When wondering whether to switch an answer, the inside view recommends estimating which answer is better. The outside view recommends looking at the situation you're in: "when people have switched answers in the past, has it generally helped or hurt?" There are times when switching leads to the better result. But the trouble is that you need to know that ahead of time - and so, as you suggest, there may be reasons to switch that you can identify as strong reasons. But the decision whether to apply the inside or outside view (or whether you collect enough data to increase the specificity of your outside view approach) is itself a decision you have to make correctly, which you probably want to use the outside view to track, rather than just trusting your internal assessment at the time.

    I feel really uncomfortable with this idea: "EITHER YOUR MODEL IS FALSE OR THIS STORY IS WRONG."

    I think this statement suffers from the same limitations as propositional logic; consequently, it is not applicable to many real-life situations.

    Most of the time, our model contains rules of this type (at least if we are rationalists): Event A occurs in situation B with probability C, where C is not 0 or 1. Also, life experience teaches us that we should update the probabilities in our model over time. So besides the uncertainty caused by the probabili... (read more)

    This post frustrated me for a while, because it seems right but not helpful. Saying to myself, "I should be confused by fiction" doesn't influence my present decision.

    First, concretize. Let's say I have a high-level world model. A few of them, perhaps, to reduce the chance that one bad example results in a bad principle.

    "My shower produces hot water in the morning." "I have fresh milk to last the next two days." "The roads are no longer slippery."

    What do these models exclude? "The water will be cold", "t... (read more)

    [This comment is no longer endorsed by its author]

    Your strength as a rationalist is your ability to be more confused by fiction than by reality.

    Yet, when a person of even moderate cleverness wishes to deceive you, this "strength" can be turned against you. Context is everything.

    As Donald DeMarco asks in "Are Your Lights On?", WHO is it that is bringing me this problem?

    Alas, belief is easier than disbelief; we believe instinctively, but disbelief requires a conscious effort.

    Looking through Google Scholar for citations of Gilbert 1990 and Gilbert 1993, I see 2 replications which question the original effect:

    • Hasson, U., Simmons, J. P. and Todorov, A. 2005: Believe it or not: on the possibility of suspending belief. Psychological Science, 16, 566–71
    • Richter, T., Schroeder, S. and Wohrmann, B. 2009: You don’t have to believe everything you read: background knowledge p
    ... (read more)

    Eliezer's model:

    The Medical Establishment is always right.

    Information given:

    • Person is feeling chest pain.
    • Paramedics say hospitalization is unnecessary.

    Possible scenarios mentioned in the story:

    1. Person is feeling chest pain and is having a heart attack.
    2. Person is feeling chest pain but does not need to be hospitalized.
    3. Person is lying.

    Between the model and the information given, only Scenario 1 can be ruled false; Scenarios 2 and 3 are both possible. If Eliezer is going to beat himself up for not knowing better, it should be because Scenario 3 did n... (read more)

    The way you phrase it hides the crucial part of the story. Rephrasing:

    1. Person is telling the truth.
       a) They are having a heart attack, but the paramedics judged wrongly, dismissed it, and didn't take him to the hospital.
       b) They are not having a heart attack; the paramedics judged rightly, dismissed it, and didn't take him to the hospital.
    2. Person is lying.

    Eliezer is saying that he should have known scenario 1 is wrong because, regardless of whether or not the paramedics think it's legit, they would have taken the person to the hospital anyway. So 1a and 1b must be wrong, leaving 2. Or, if I were going to add to your model, I would add "The Medical Establishment always takes you in the ambulance if you call for a medical reason." Then, when the information given is "Paramedics say hospitalization is unnecessary," that would have been a direct conflict between model and information, where Eliezer had to choose between rejecting the model and rejecting the information.
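    The conflict this comment describes can be made concrete as a toy Bayesian update. All priors and likelihoods below are invented for illustration; the point is only that an observation which is near-impossible under both "telling the truth" hypotheses pushes almost all posterior weight onto "lying":

```python
# Hypotheses: 1a (heart attack, paramedics wrong), 1b (no heart attack,
# paramedics right but dismissive), 2 (person is lying).
priors = {"1a": 0.05, "1b": 0.55, "2": 0.40}

# Likelihood of the reported observation -- "paramedics said it was
# nothing and left" -- under each hypothesis. If paramedics essentially
# always transport genuine chest-pain calls, the observation is
# near-impossible under 1a and 1b, but easy to produce in a made-up story.
likelihood = {"1a": 0.01, "1b": 0.01, "2": 0.9}

unnorm = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}
print(posterior)  # hypothesis 2 dominates
```

With these made-up numbers, "person is lying" ends up with well over 90% of the posterior, even though it started as a minority hypothesis.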

    I see two senses (or perhaps not-actually-qualitatively-different-but-still-useful-to-distinguish cases?) of 'I notice I'm confused':

    (1) Noticing factual confusion, as in the example in this post. (2) Noticing confusion when trying to understand a concept or phenomenon, or to apply a concept.

    Example of (2): (A) "Hrm, I thought I understood what, "Colorless green ideas sleep furiously" means when I first heard it; the words seemed to form a meaningful whole based on the way they fell together. But when I actually try to concretise what that co... (read more)

    It might be useful to identify a third type: (3) Noticing argumentative confusion. Example of (3): "Hrm, those fringe ideas seem convincing after reading the arguments for them on this LessWrong website. But I still feel a lingering hesitation to adopt the ideas as strongly as lots of these people seem to have, though I'm not sure why." (Confusion as pointer to epistemic learned helplessness) As in the parent to this comment, (3) is not necessarily qualitatively distinct (e.g. argumentative confusion could be recast as factual confusion: "Hrm, I'm confused by this hesitation I observe in myself to fully endorse these fringe ideas after seeing such seemingly-decisive arguments. Maybe this means something." (Observations of internal reaction are still observations about which one can be factually confused).

    Was a mistake really made in this instance? Is it not correct to conclude 'there was no problem'? Yes, the author did not realise the story was fictional; but what part of his conclusion implied the story was not fictional?

    Furthermore, is it good to berate oneself because one does not immediately realise something? In this case, the author did not immediately realise the story was fictional. But evidently the author was already working toward that conclusion by throwing doubt on parts of the story. And the evidence the author had was obviously inconclusive;... (read more)

    This looks like an instance of the Dunning-Kruger effect to me. Despite your own previous failures in diagnosis, you still felt competent to give medical advice to a stranger in a potentially life-threatening situation.

    In this case, the "right answer" is not an analysis of the reliability of your friend's account, it is "get a second opinion, stat". This is especially true seeing as how you believed the description you gave above.

    If a paramedic tells me "it's nothing", I complain to his or her superiors, because that is not a ... (read more)

    Of course, it's also possible to overdo it. If you hear something odd or confusing, and it conflicts with belief that you are emotionally attached to, the natural reaction is to ignore the evidence that doesn't fit your worldview, thus missing an opportunity to correct a mistaken belief.

    On the other hand, if you hear something odd or confusing, and it conflicts with belief or assumption that you aren't emotionally attached to, then you shouldn't forget about the prior evidence in light of new evidence. The state of confusion should act as a trigger mechanism telling you to tally up all the evidence, and decide which piece doesn't fit.

    It is a design flaw in human cognition...

    Since I think evolution makes us quite fit for our current environment, I don't think cognitive biases are design flaws. In the above example you imply that even though you had the information available to guess the truth, your guess was another one and it was false; therefore you experienced a flaw in your cognition.

    My hypothesis is that reaching the truth or communicating it in the IRC may not have been the end objective of your cognitive process, in this case just to dismiss the issue as something that was not impor... (read more)

    See, this is why it's a bad idea to use the language of design when talking about evolution. Evolution doesn't have a design. It optimizes locally according to a complex landscape of physical and sexual incentives, and in the EEA that usually would have favored fast and frugal heuristics. Often it still does; if you're driving a car or running away from a bear, you don't want to drop what you're doing and work out the globally optimal path before taking action. That's all well and good. But things have changed in the last 12,000 years; we spend more time doing long-range planning and optimization work, for example, and less time running away from tigers and hitting each other on the head with clubs. Evolution works slowly, and we haven't reached a local maximum for our environment yet, nor are we likely to in the near future as we continue to reshape it; we're left with a set of cognitive tools, therefore, that are often poorly adapted to our goals. It's these that we seek to compensate for, when and where doing so is appropriate. While our goals are informed by biology, though, their biological influences are no "truer", no more "correct", than any other. We certainly shouldn't treat them as gospel; if they turn out to be in tension with the environment, as in many cases they have, evolution will be quite happy to select against them.
    They're design flaws insofar as that there are far better possibilities. Just because something doesn't fail entirely, doesn't mean its design is any good. This is the same as above. This might also be relevant. Many of us do not (consciously) want to gain competitive advantages compared to other people but rather raise the sanity waterline.
    Good for survival, but not for truth-seeking. Epistemic and instrumental rationality are different things.
    And even in terms of survival, human neurology isn't that great. It was good enough to get our species to survive until now, but it's nowhere close to optimal.

    Is EY saying that if something doesn't feel right, it isn't? I've been working on this rationalist koan for weeks and can't figure out something more believable! I feel like a doofus!

    No. Two possibilities, not just one:

    This article actually made me ask "Wait, is this even true?" when I read an article with weird claims; then I research whether the source is trustworthy, and sometimes it turns out that it isn't.

    Trying to understand this.

    I *knew* that the usefulness of a model is not what it can explain, but what it can’t. A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation.

    I think what Yud means there is that a good model will break quickly if it's wrong. It only explains a very small set of things, because the universe is very specific. So it's good that it doesn't explain many, many things.

    It's a bit like David Deutsch arguing that models should be sensitive to small changes: all of their elements should be important.