There are a lot of explanations of Bayes' Theorem out there, so I won't get into the technicalities. I will get into why it should change how you think. This post is pretty introductory, so feel free to skip it if you don't feel like there's anything about Bayes' Theorem that you don't understand.

For a while I was reading LessWrong and not seeing what the big deal about Bayes' Theorem was. Sure, probability is in the mind and all, but I didn't see why it was so important to insist on Bayesian methods. For me they were a tool, rather than a way of thinking. This summary also helped someone in the DC group.

After using the Anki deck, a thought occurred to me:

Bayes' theorem means that when judging how likely a hypothesis is after seeing an event, not only do I need to think about how likely the hypothesis said the event was, I also need to consider everything else that could have possibly made that event more likely.

To illustrate:

$$P(H|e) = \frac{P(e|H)P(H)}{P(e)}$$

This pretty clearly shows how you need to consider P(e|H), but that's slightly more obvious than the rest of it.

If you write it out the way that you would compute it you get:

$$P(H|e) = \frac{P(e|H)P(H)}{\sum_h P(e|h)P(h)}$$

where h is an element of the hypothesis space.

This means that every way that e could have happened is important, on top of (or should I say under?) just how much probability the hypothesis assigned to e.

This is because P(e) gets a contribution from every hypothesis under which e could happen. More mathematically: P(e) is the sum, over all possible hypotheses, of the joint probability of the event and that hypothesis, which is computed as the probability of the hypothesis times the probability of the event given the hypothesis.

In LaTeX:

$$P(e) = \sum_h P(h)P(e|h)$$

where h is an element of the hypothesis space.
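The computation above can be sketched in a few lines of Python. The hypotheses, priors, and likelihoods here are invented for illustration (a lost-key scenario, not from the post):

```python
# Hypothetical example: three hypotheses about where a lost key is,
# with priors P(h) and likelihoods P(e|h) for the evidence e =
# "the key is not in my coat pocket". All numbers are made up.
priors = {"coat": 0.5, "desk": 0.3, "car": 0.2}
likelihoods = {"coat": 0.1, "desk": 0.9, "car": 0.9}  # P(e|h)

# P(e) = sum over all hypotheses h of P(e|h) * P(h)
p_e = sum(likelihoods[h] * priors[h] for h in priors)

# P(h|e) = P(e|h) * P(h) / P(e), computed for every hypothesis at once
posteriors = {h: likelihoods[h] * priors[h] / p_e for h in priors}

print(p_e)         # approximately 0.5
print(posteriors)  # roughly {'coat': 0.1, 'desk': 0.54, 'car': 0.36}
```

Note that the denominator is the same sum for every hypothesis, which is why the posteriors are guaranteed to add up to 1: each hypothesis is weighed against all the others.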


I do think Bayes is an improvement over frequentist methods. But I also think it's not the main place where people go wrong.

The main failure I see is the "What a coincidence!" failure. People notice an event that is a one-in-a-billion chance. They say to themselves, "Wow! There was only one chance in a billion for that to happen by chance. This means there must be something special about me/the universe/a higher power/whatever."

They completely neglect two other numbers.

  1. The number of events that occur every day. You get 86,400 seconds every day, and something unlikely could occur during any of the ones you spend awake.
  2. The number of other possible unlikely events that you would also have noticed. This is a pretty big number in its own right.

Once you've taken these into account, you discover that you actually ought to expect to experience a steady flow of apparently coincidental unlikely events throughout your life. It would be notable if it didn't happen.
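A back-of-envelope version of this argument, with all the numbers being rough illustrative assumptions (one noticeable event per waking second, 16 waking hours a day, a 70-year span), can be checked directly:

```python
# Illustrative assumptions, not measurements: one noticeable event
# per waking second, 16 waking hours per day, over 70 years.
n = 16 * 3600 * 70 * 365          # lifetime count of observed events

# Probability that one SPECIFIC one-in-a-billion coincidence happens
# at least once somewhere in that stream of events:
p_at_least_once = 1 - (1 - 1e-9) ** n

print(n)                 # about 1.47 billion events
print(p_at_least_once)   # roughly 0.77
```

So even a single, pre-specified one-in-a-billion coincidence is more likely than not to happen to you at some point; multiply that by the second number above (all the other coincidences you would also have noticed) and a steady flow of them is what you should expect.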

I suspect this is the most important step for most people - the principle of thinking numerically about the things that happen in an orderly way. Going on from there to Bayes' theorem is then a matter of being taught how to do the maths correctly.

It might be a good idea to put Bayes' Theorem into English (I haven't seen that anywhere). My attempt would be this:

To judge the hypothesis H given the evidence e, you need to know how well H explains e AND the base-rate of H AND compare that with all alternative hypotheses.

So even if you don't have any numbers to plug into Bayes' Theorem, the theorem is still a nerdy reminder not to neglect base-rate and to think of alternative explanations.
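When you do have numbers, the base-rate point becomes vivid. This is the standard medical-test illustration with made-up figures (a 1% base rate, a 90% sensitive test, a 9% false-positive rate), not anything from the post itself:

```python
# Made-up numbers for the classic base-rate illustration.
p_h = 0.01                # base rate of hypothesis H (e.g. has the condition)
p_e_given_h = 0.90        # how well H explains the evidence e (test sensitivity)
p_e_given_not_h = 0.09    # how well the alternative explains e (false positives)

# P(e) sums over both hypotheses: H and not-H
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# P(H|e): H explains e well, but the base rate drags the posterior down
p_h_given_e = p_e_given_h * p_h / p_e

print(round(p_h_given_e, 3))  # roughly 0.092
```

Even though the test "explains" a positive result ten times better than the alternative does, the low base rate means a positive result leaves H at under 10%: exactly the AND-of-three-things structure in the English version above.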

Remember that "H causes e" and "H implies e" are two very different statements. The map is not the territory.

In order to show that H causes e you would have to show that the probabilities always factor as P(e & H) = P(H)P(e|H) and not as P(e & H) = P(e)P(H|e).

For example, rain causes wet grass, but wet grass does not cause rain, even though the Bayesian inference goes both ways.

> In order to show that H causes e you would have to show that the probabilities always factor as P(e & H) = P(H)P(e|H) and not as P(e & H) = P(e)P(H|e).

Both of these are mathematical identities. It is not possible for one to hold and not the other; both are always true.

Causal analysis of probabilities is a lot more complicated.
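The point that both factorizations are identities can be verified on any joint distribution. Here is a toy rain/wet-grass distribution with invented numbers:

```python
# Toy joint distribution over (H = rain, e = wet grass); the four
# probabilities are invented for illustration and sum to 1.
p_joint = {(True, True): 0.28, (True, False): 0.02,
           (False, True): 0.14, (False, False): 0.56}

p_h = sum(v for (h, e), v in p_joint.items() if h)   # P(H)
p_e = sum(v for (h, e), v in p_joint.items() if e)   # P(e)
p_he = p_joint[(True, True)]                          # P(e & H)

p_e_given_h = p_he / p_h   # P(e|H), by definition of conditional probability
p_h_given_e = p_he / p_e   # P(H|e)

# Both factorizations hold, regardless of which way causation runs:
assert abs(p_he - p_h * p_e_given_h) < 1e-12   # P(e & H) = P(H)P(e|H)
assert abs(p_he - p_e * p_h_given_e) < 1e-12   # P(e & H) = P(e)P(H|e)
```

Both equalities are true by the very definition of conditional probability, so neither can distinguish "rain causes wet grass" from "wet grass causes rain"; that is why causal analysis needs machinery beyond the joint distribution.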


Thinking is itself an act that modifies your subjective probabilities, whatever reference class you pick.

> ...persuasion is powerfully affected by the amount of self-talk that occurs in response to a message.[2] The degree to which the self-talk supports the message and the confidence that recipients express in the validity of that self-talk further support the cognitive response model.