Update Yourself Incrementally

Politics is the mind-killer.  Debate is war, arguments are soldiers.  There is the temptation to search for ways to interpret every possible experimental result to confirm your theory, like securing a citadel against every possible line of attack.  This you cannot do.  It is mathematically impossible. For every expectation of evidence, there is an equal and opposite expectation of counterevidence.

But it's okay if your cherished belief isn't perfectly defended.  If the hypothesis is that the coin comes up heads 95% of the time, then one time in twenty you will see what looks like contrary evidence.  This is okay.  It's normal.  It's even expected, so long as you've got nineteen supporting observations for every contrary one.  A probabilistic model can take a hit or two, and still survive, so long as the hits don't keep on coming in.

Yet it is widely believed, especially in the court of public opinion, that a true theory can have no failures and a false theory no successes.

You find people holding up a single piece of what they conceive to be evidence, and claiming that their theory can 'explain' it, as though this were all the support that any theory needed.  Apparently a false theory can have no supporting evidence; it is impossible for a false theory to fit even a single event.  Thus, a single piece of confirming evidence is all that any theory needs.

It is only slightly less foolish to hold up a single piece of probabilistic counterevidence as disproof, as though it were impossible for a correct theory to have even a slight argument against it.  But this is how humans have argued for ages and ages, trying to defeat all enemy arguments, while denying the enemy even a single shred of support.  People want their debates to be one-sided; they are accustomed to a world in which their preferred theories have not one iota of antisupport.  Thus, allowing a single item of probabilistic counterevidence would be the end of the world.

I just know someone in the audience out there is going to say, "But you can't concede even a single point if you want to win debates in the real world!  If you concede that any counterarguments exist, the Enemy will harp on them over and over—you can't let the Enemy do that!  You'll lose!  What could be more viscerally terrifying than that?"

Whatever.  Rationality is not for winning debates, it is for deciding which side to join.  If you've already decided which side to argue for, the work of rationality is done within you, whether well or poorly.  But how can you, yourself, decide which side to argue?  If choosing the wrong side is viscerally terrifying, even just a little viscerally terrifying, you'd best integrate all the evidence.

Rationality is not a walk, but a dance.  On each step in that dance your foot should come down in exactly the correct spot, neither to the left nor to the right.  Shifting belief upward with each iota of confirming evidence. Shifting belief downward with each iota of contrary evidence.  Yes, down.  Even with a correct model, if it is not an exact model, you will sometimes need to revise your belief down.

If an iota or two of evidence happens to countersupport your belief, that's okay.  It happens, sometimes, with probabilistic evidence for non-exact theories.  (If an exact theory fails, you are in trouble!)  Just shift your belief downward a little—the probability, the odds ratio, or even a nonverbal weight of credence in your mind. Just shift downward a little, and wait for more evidence. If the theory is true, supporting evidence will come in shortly, and the probability will climb again.  If the theory is false, you don't really want it anyway.

The problem with using black-and-white, binary, qualitative reasoning is that any single observation either destroys the theory or it does not.  When not even a single contrary observation is allowed, it creates cognitive dissonance and has to be argued away. And this rules out incremental progress; it rules out correct integration of all the evidence.  Reasoning probabilistically, we realize that on average, a correct theory will generate a greater weight of support than countersupport.  And so you can, without fear, say to yourself:  "This is gently contrary evidence, I will shift my belief downward".  Yes, down.  It does not destroy your cherished theory.  That is qualitative reasoning; think quantitatively.
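The quantitative version of "shift your belief a little" is just multiplying your odds by a likelihood ratio for each observation. Here is a minimal Python sketch, using the essay's 95%-heads coin against a fair coin and a made-up sequence of flips, showing a theory absorbing one contrary observation and surviving:

```python
# Incremental Bayesian updating in odds form: each observation multiplies
# the odds by its likelihood ratio, so belief shifts up a little on each
# piece of supporting evidence and down a little on each contrary one.

def update_odds(prior_odds, likelihood_ratio):
    """One incremental update: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# H1: the coin comes up heads 95% of the time; H2: a fair coin.
lr_heads = 0.95 / 0.50  # each heads multiplies the odds for H1 by 1.9
lr_tails = 0.05 / 0.50  # each tails multiplies the odds for H1 by 0.1

odds = 1.0  # start indifferent between H1 and H2
for flip in "H" * 19 + "T":  # hypothetical data: 19 heads, 1 tails
    odds = update_odds(odds, lr_heads if flip == "H" else lr_tails)

# The lone tails knocks the odds down by a factor of ten, but the
# nineteen heads more than compensate: the theory takes the hit and survives.
print(odds)
```

The single contrary flip costs a factor of ten, yet the final odds still overwhelmingly favor H1; nothing had to be argued away.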

For every expectation of evidence, there is an equal and opposite expectation of counterevidence. On every occasion, you must, on average, anticipate revising your beliefs downward as much as you anticipate revising them upward.  If you think you already know what evidence will come in, then you must already be fairly sure of your theory—probability close to 1—which doesn't leave much room for the probability to go further upward.  And however unlikely it seems that you will encounter disconfirming evidence, the resulting downward shift must be large enough to precisely balance the anticipated gain on the other side.  The weighted mean of your expected posterior probability must equal your prior probability.

How silly is it, then, to be terrified of revising your probability downward, if you're bothering to investigate a matter at all?  On average, you must anticipate as much downward shift as upward shift from every individual observation.

It may perhaps happen that an iota of antisupport comes in again, and again and again, while new support is slow to trickle in.  You may find your belief drifting downward and further downward.  Until, finally, you realize from which quarter the winds of evidence are blowing against you.  In that moment of realization, there is no point in constructing excuses.  In that moment of realization, you have already relinquished your cherished belief.  Yay!  Time to celebrate!  Pop a champagne bottle or send out for pizza!  You can't become stronger by keeping the beliefs you started with, after all.


Part of the Against Rationalization subsequence of How To Actually Change Your Mind

Next post: "One Argument Against An Army"

Previous post: "Knowing About Biases Can Hurt People"

Here's the thing.

I could read a book and find that the arguments in it are "valid" - that it is impossible, or at least unlikely, for the premises to be true and the conclusion false. What I can't do by reading, however, is determine whether the premises are true.

In the infamous Alien Autopsy "documentary", there were three specific claims made for the authenticity of the video.

1) An expert from Kodak examined the film, and verified that it is as old as was claimed.
2) A pathologist was interviewed, who said that the autopsy portrayed was done in the manner that an actual autopsy would have been done.
3) An expert from Spielberg's movie studio testified that modern special effects could not duplicate the scenes in the video.

If you accept these statements as true, it becomes reasonable to accept that the footage was actually showing what it appeared to show: an autopsy of dead aliens.

Upon seeing these claims, though, my response was along the lines of "I defy the data." As it turns out, all three of those statements were blatant lies. There was no expert from Kodak who verified the film. Kodak offered to verify the film, but was denied access. Many other pathologists said that the way the autopsy was performed in the film was absurd, and that no competent pathologist would ever do an autopsy on an unknown organism in that manner because it would be completely useless. The person from Spielberg's movie studio was selectively quoted and was very angry about it. What he really said was that the film was good for whatever grade B studio happened to have produced it.

I could read your book, but I believe that it is more likely that the statements in the book are wrong than it is that psi exists. As Thomas Jefferson did not say, "It is easier to believe that two Yankee professors [Profs. Silliman and Kingsley of Yale] would lie than that stones would fall from the sky."

The burden of proof is on you, Matthew. Many, many claims of the existence of "psi" have been shown to be bogus, so I give further claims of that nature very little credence. Either tell us about a repeatable experiment - copy a few paragraphs from that book if you have to - or we're going to ignore you.

You want me to believe precognition has been scientifically established? Give me one single research protocol which reliably (90% probability) produces results at the p < 0.01 significance level for events 30 minutes in the future.

If the effect is real, however small, there will exist some number of subjects/trials that reliably amplifies the effect to any given level of statistical significance.

Matthew C:

I don't understand why the Million Dollar Challenge hasn't been won. I've spent some time in the JREF forums and as far as I can see the challenge is genuine and should be easily winnable by anyone with powers you accept. The remote viewing, for instance, that I see on your blog. That's trivial to turn into a good protocol. Why doesn't someone just go ahead and prove these things exist? It'd be good for everyone involved. I see you say: "But for the far larger community of psi deniers who have not read the literature of evidence for psi, and get all your information from the Shermers and Randis of the world, I have a simple message: you are uninformed." So obviously you think that either Randi has bad information or is deliberately sharing bad information. That's fine. If the Challenge is set up correctly it shouldn't matter what Randi does or does not believe/know/whatever. I can only conclude there is at least one serious flaw in the Challenge. Could you tell me what it is?

"Got protocol? Yes or no?"

If there were any actual evidence, somebody would have claimed Randi's million-dollar prize years ago. I wasn't able to find a copy of "The Irreducible Mind" online; it doesn't have a Wikipedia article and apparently isn't that popular. A quick Google of the authors reveals that only one (Bruce Greyson) has a Wikipedia article (http://en.wikipedia.org/wiki/Bruce_Greyson). The lead author, Edward F. Kelly, is employed as a professor of "Perceptual Studies" at the University of Virginia Health System (http://www.healthsystem.virginia.edu/internet/personalitystudies/Edbio.cfm) and has a PhD from Harvard in "Psycholinguistics/Cognitive Science". The authors seem to work mainly within the field of psychology, asserting that it has "no explanation" for the human mind (http://www.amazon.com/Irreducible-Mind-hard-find-contemporary/dp/customer-reviews/0742547922).

As for the other two links, the first one sounds like nonsense; the "research" was not peer-reviewed, replicated or verified and was "released exclusively to the Daily Mail", a well-known London tabloid (http://en.wikipedia.org/wiki/Daily_Mail). The article he linked is from The Evening Standard, another British tabloid (http://en.wikipedia.org/wiki/The_Evening_Standard), and asserts that "Virtually all the great scientific formulae which explain how the world works allow information to flow backwards and forwards through time - they can work either way, regardless.", as well as a great deal of other obvious nonsense. The second one lists a number of anecdotes, none of which have sources, identifying references or even names.

So there's no reproducible protocol, then?

I have better things to do with my time.

Matthew C - it sounds more like you're trying to sell a book than produce a testable experiment.

Matthew: As far as I can tell, psi is not a hypothesis that constrains the probability density of predictions; it simply says "anything goes, anything can happen". As such, isn't it just an instance of radical skepticism? The thing is, radical skeptical arguments don't change anticipations or prescribe changes in behavior. Taken seriously, it's not clear that such hypotheses even constitute arguments for their own advocacy. Maybe if I draw attention to the unknowable demons behind the curtain I will be better able to deal with them, but maybe that will cause them to eat me. I don't see how an expected-value calculation holds that the former is more likely than the latter, just as I don't see how a god who punishes atheists is any less likely than one who punishes believers. Related question: what evidence would cause you to relinquish the psi hypothesis?

John, Stuart, let's do the math:

H1: "the coin will come up heads 95% of the time."

Whether a given coinflip is evidence for or against H1 depends not only on the value of that coinflip, but on what other hypotheses you are comparing H1 to. So let's introduce...

H2: "the coin will come up heads 50% of the time."

By Bayes' Theorem (odds form), the odds conditional upon the data D are:

p(H1|D) / p(H2|D) = [p(H1) / p(H2)] * [p(D|H1) / p(D|H2)]

So when we see the data, our odds are multiplied by the likelihood ratio p(D|H1)/p(D|H2).

If D = heads, our likelihood ratio is:

p(heads|H1) / p(heads|H2) = .95 / .5 = 1.9.

If D = tails, our likelihood ratio is:

p(tails|H1) / p(tails|H2) = .05 / .5 = 0.1.

If you prefer to measure evidence in decibels, then a result of heads is 10log10(1.9) ~= +2.8db of evidence and a result of tails is 10log10(0.1) = -10.0db of evidence.

The same result is true regardless of how you group the coinflips; if you get nothing but heads, that is even stronger evidence for H1 than if you get 95% heads and 5% tails. This is true because we are only comparing it to hypothesis H2. If we introduce hypothesis H3:

H3: "the coin will come up heads 99% of the time."

Then we can also measure the likelihood ratio p(D|H1) / p(D|H3).

Plugging in "heads" or "tails", we get:

p(heads|H1) / p(heads|H3) = 0.95 / 0.99 = 0.9595...

p(tails|H1) / p(tails|H3) = 0.05 / 0.01 = 5.0

So a result of heads is about -0.18 db of evidence for H1, and a result of tails is about +7.0 db of evidence.
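For readers who want to check the arithmetic in this comment, a short Python sketch reproduces the likelihood ratios and decibel values given above:

```python
import math

# Evidence in decibels: 10 * log10 of the likelihood ratio.
def decibels(likelihood_ratio):
    return 10 * math.log10(likelihood_ratio)

print(round(decibels(0.95 / 0.50), 1))  # heads, H1 vs H2: 2.8
print(round(decibels(0.05 / 0.50), 1))  # tails, H1 vs H2: -10.0
print(round(decibels(0.95 / 0.99), 2))  # heads, H1 vs H3: -0.18
print(round(decibels(0.05 / 0.01), 1))  # tails, H1 vs H3: 7.0
```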

If you have a uniform prior on [0, 1] for the frequency of a heads, then you can use Laplace's Rule of Succession.
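As a sketch of what that looks like: under a uniform prior, Laplace's rule gives the probability of heads on the next flip as (h + 1) / (n + 2) after observing h heads in n flips:

```python
# Laplace's Rule of Succession: with a uniform prior over the coin's unknown
# heads-frequency, after seeing h heads in n flips the probability that the
# next flip comes up heads is (h + 1) / (n + 2).

def rule_of_succession(heads, flips):
    return (heads + 1) / (flips + 2)

print(rule_of_succession(0, 0))    # no data yet: 0.5
print(rule_of_succession(19, 20))  # 19 heads in 20 flips: 20/22, about 0.91
```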

"If the hypothesis is that the coin comes up heads 95% of the time, then one time in twenty you will see what looks like contrary evidence."

My question here assumes that you mean one in twenty times you get a tails (if you mean one in twenty times you get a heads, then I'm also confused but for different reasons).

Surely if I have a hypothesis that a coin will land heads 95% of the time (and therefore tails 5% of the time), then every cluster of results in which 1/20 are tails is actually supporting evidence. If I toss a coin X times (where X is some number for which 95% is a meaningful description of the outcomes: X >= 20) and 1 out of every 20 tosses is tails, that is solid evidence in support of my hypothesis - if, as you say, "one in twenty times" I see a tails, that is very strong evidence that my 95% hypothesis is accurate...

Have I misread your point, or am I thinking about this from the wrong angle?

There is more discussion of this post here as part of the Rerunning the Sequences series.

Have I misread your point, or am I thinking about this from the wrong angle?

Maybe the belief here is "the next flip of the coin will be heads". Then each head causes your confidence in that belief to increase, while each tail causes a decrease in that confidence.

You're right, though; the belief "the coin is heads 94-96% of the time" behaves according to more complicated rules. Even if it is true, every so often you will still get evidence that contradicts your belief - such as twenty tails in a row. But not often, and Eliezer's point still applies.

Actually, rather than rehashing the entire psi debate here, I'd much prefer you just read the material instead. Chapter 3 of Irreducible Mind is particularly powerful, and I will send excerpts to anyone who gives me a US postal address or PO box (email mcromer @t blast dawt com). The natural history of these phenomena is very easily available, and often very well documented.

McCabe's single-paragraph dismissal of an 800-page book with hundreds of footnotes that he hasn't read, based on Wikipedia entries, seems to be the precise opposite of the raison d'être of Overcoming Bias. And Yudkowsky, I simply dare you to read this book. You talk the good talk here about The Way and the search for truth. I dare you to expose yourself to some of the meticulously documented lacunae in your worldview by reading Irreducible Mind. I appeal to your sense of intellectual pride. Chapter 3 is a good place to start . . .