The following happened to me in an IRC chatroom, long enough ago that I was still hanging around in IRC chatrooms. Time has fuzzed the memory and my report may be imprecise.
So there I was, in an IRC chatroom, when someone reports that a friend of his needs medical advice. His friend says that he’s been having sudden chest pains, so he called an ambulance, and the ambulance showed up, but the paramedics told him it was nothing, and left, and now the chest pains are getting worse. What should his friend do?
I was confused by this story. I remembered reading about homeless people in New York who would call ambulances just to be taken someplace warm, and how the paramedics always had to take them to the emergency room, even on the 27th iteration. Because if they didn’t, the ambulance company could be sued for lots and lots of money. Likewise, emergency rooms are legally obligated to treat anyone, regardless of ability to pay.1 So I didn’t quite understand how the described events could have happened. Anyone reporting sudden chest pains should have been hauled off by an ambulance instantly.
And this is where I fell down as a rationalist. I remembered several occasions where my doctor would completely fail to panic at the report of symptoms that seemed, to me, very alarming. And the Medical Establishment was always right. Every single time. I had chest pains myself, at one point, and the doctor patiently explained to me that I was describing chest muscle pain, not a heart attack. So I said into the IRC channel, “Well, if the paramedics told your friend it was nothing, it must really be nothing—they’d have hauled him off if there was the tiniest chance of serious trouble.”
Thus I managed to explain the story within my existing model, though the fit still felt a little forced . . .
Later on, the fellow comes back into the IRC chatroom and says his friend made the whole thing up. Evidently this was not one of his more reliable friends.
I should have realized, perhaps, that an unknown acquaintance of an acquaintance in an IRC channel might be less reliable than a published journal article. Alas, belief is easier than disbelief; we believe instinctively, but disbelief requires a conscious effort.2
So instead, by dint of mighty straining, I forced my model of reality to explain an anomaly that never actually happened. And I knew how embarrassing this was. I knew that the usefulness of a model is not what it can explain, but what it can't. A hypothesis that forbids nothing permits everything, and thereby fails to constrain anticipation.
Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.
We are all weak, from time to time; the sad part is that I could have been stronger. I had all the information I needed to arrive at the correct answer, I even noticed the problem, and then I ignored it. My feeling of confusion was a Clue, and I threw my Clue away.
I should have paid more attention to that sensation of "still feels a little forced." It's one of the most important feelings a truthseeker can have, a part of your strength as a rationalist. It is a design flaw in human cognition that this sensation manifests as a quiet strain in the back of your mind, instead of a wailing alarm siren and a glowing neon sign reading:
Either Your Model Is False Or This Story Is Wrong.
1 And the hospital absorbs the costs, which are enormous, so hospitals are closing their emergency rooms . . . It makes you wonder what’s the point of having economists if we’re just going to ignore them.
2 From McCluskey (2007), “Truth Bias”: “[P]eople are more likely to correctly judge that a truthful statement is true than that a lie is false. This appears to be a fairly robust result that is not just a function of truth being the correct guess where the evidence is weak—it shows up in controlled experiments where subjects have good reason not to assume truth[.]” http://www.overcomingbias.com/2007/08/truth-bias.html .
And from Gilbert et al. (1993), “You Can’t Not Believe Everything You Read”: “Can people comprehend assertions without believing them? [...] Three experiments support the hypothesis that comprehension includes an initial belief in the information comprehended.”
This post frustrated me for a while, because it seems right but not helpful. Saying to myself, "I should be confused by fiction" doesn't influence my present decision.
First, concretize. Let's say I have a few high-level world models, several rather than just one, to reduce the chance that one bad example results in a bad principle.
"My shower produces hot water in the morning." "I have fresh milk to last the next two days." "The roads are no longer slippery."
What do these models exclude? "The water will be cold", "the milk will be spoiled", "I'll see someone sliding at an intersection" are easy ones. Then there are weirder ones like "I don't even own a shower", "Someone drank all my milk in the middle of the night", and "the roads are closed off due to an earthquake".
I could say, "My model as stated technically disallows all of these things, so if I see any of them, I should make a huge update", but that's unrealistic. The use of "easy" and "weird" implicitly shows that I'm already thinking about hypotheses not as strictly allowing and disallowing, but as resulting in greater and lesser probabilistic gains and hits to my confidence.
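To put that "easy" versus "weird" grading in concrete terms, here's a minimal sketch (with entirely made-up probabilities) of the idea that a model's confusion at an observation can be measured as surprisal, the negative log of the probability the model assigned to it; low-probability "weird" observations should produce much bigger updates, not a categorically different kind of update:

```python
import math

# Hypothetical numbers: how much probability my "fresh milk" model
# assigns to each possible observation. All values are invented for
# illustration.
p_given_model = {
    "milk tastes fine": 0.90,                 # what the model expects
    "milk is spoiled": 0.07,                  # an "easy" exception
    "someone drank it all overnight": 0.001,  # a "weird" exception
}

def surprisal_bits(p):
    """Shannon surprisal: how confused the model is by an observation."""
    return -math.log2(p)

for obs, p in p_given_model.items():
    print(f"{obs}: {surprisal_bits(p):.1f} bits of surprise")
```

On these made-up numbers the spoiled-milk observation costs about 3.8 bits of surprise and the drained-carton one about 10; the point is only that "weird" is a matter of degree, not a separate category.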
Even if I do give up entirely on "I have fresh milk", I usually replace it with something that is consistent with the old reasoning (not just the old observations). Perhaps I reason, "The milk should have been fresh but spoiled because of a temporary power outage last night." That's actually a bad example, because it's not something I'd jump to if I didn't have other observations indicating a power outage. Let's try again: "The milk should have been fresh, but oh dang, it wasn't." Yes, that looks like something I'd think. What about the others? My first explanations would probably be "The roads are a little slippery in some places" and "The water heater is acting up".
So what did we just see in this totally fictional but mildly plausible-sounding anecdote? Sometimes a failed hypothesis becomes: failed hypothesis + some noise. Other times the replacement looks pretty different, like the water heater explanation. Let's think about the first type. Is this small model-distance update heuristic justified? The new model clearly gives more probability mass to our actual observations, but that's the representativeness heuristic, totally insufficient to judge whether the theory is acceptable. For that we look to Bayes:
P(H|E) = P(H) * P(E|H) / P(E)
P(E) will be the same for all hypotheses we consider, so just ignore that. P(E|H) is pretty high, since we added noise to make sure the hypothesis would predict evidence. What about P(H)? How do I practically compare the prior for different hypotheses? How do I know when adding noise to my model is good enough vs. when I need to search for new hypotheses?
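As a toy answer to that question, here is a sketch (with made-up priors and likelihoods) of the comparison the formula licenses: since P(E) is shared, rank hypotheses by P(H) * P(E|H). A "noise" hypothesis gets a high prior because it barely changes the old model; a structurally new story explains the evidence better but pays for it in prior:

```python
# Toy comparison of two explanations for "the milk was spoiled".
# All priors and likelihoods are invented numbers; the point is the
# shape of the computation, not the values.

def posterior_score(prior, likelihood):
    # P(H|E) is proportional to P(H) * P(E|H); P(E) cancels when
    # comparing hypotheses against the same evidence.
    return prior * likelihood

# H1: old model plus noise -- "milk is usually fresh, but sometimes
# it just spoils early". High prior (it's a small edit to the old
# model), moderate likelihood for the spoiled-milk observation.
h1 = posterior_score(prior=0.30, likelihood=0.5)

# H2: a structurally new story -- "a power outage last night warmed
# the fridge". It explains the evidence almost perfectly, but without
# independent signs of an outage its prior is much lower.
h2 = posterior_score(prior=0.01, likelihood=0.95)

print(h1, h2)  # 0.15 vs. 0.0095
```

With these invented numbers the small-edit hypothesis wins (0.15 vs. 0.0095); it would take independent evidence of an outage, raising that prior, to flip the comparison.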
Let's think of six different methods to guess whether our new hypothesis will be good enough.
Now the critical stage!
1. You don't have time to remember the last four months. Don't even think about hypothesis priors unless you've already spent more than a minute trying to decide something. Milk is not a big deal; save your cognitive energy for the higher-order bits of your life. Also, four months is kind of food-spoiling specific; time frames would have to be adapted for different problems.

2. That is not Solomonoff induction in any way. We don't even have a language for formally expressing high-level concepts like "spoiled milk", unless you look at brain architecture to figure out how brains classify reality. Also, "compare" is not concrete enough.

3. Emotional salience fails us badly in abstract situations. Thinking of disconfirming evidence is painful; our brains won't easily present squicky things.

4. "Arbitrarily decide" is not an actionable procedure.

5. This one actually seems kind of okay, unless you're just as likely to give sugar to senseless wizards.

6. I'm not sure small updates have small changes in consequence value, but doing more thinking when costs are high generally doesn't seem horrible. Maybe we should add in something to keep us from thinking longer just to procrastinate, though.
Conclusions! Priors over explanations are -hard-. Sometimes we naturally make new hypotheses, sometimes we just add some noise. Maybe take the outside view of yourself if you have time! Maybe take the outside view of the hypothesis by having a wizard tell it to you if not. Your strength as a rationalist? Not drinking spoiled milk, not wasting time thinking about spoiled milk, noticing squicks, successfully doubting yourself when you feel a squick, believing some things because they work really well even if they sound crazy when a wizard says them.