[ Question ]

Why is Bayesianism important for rationality?

by Chris_Leong · 1 min read · 1st Sep 2020 · 24 comments

Bayes' Theorem · Rationality

My impression from the Sequences is that Eliezer considers Bayesianism to be a core element of rationality. Some people have even referred to the community as Bayesian Rationalists. I've always found this curious, since most of the time it seemed like more of a technicality. Why is Bayesianism important, or why did Eliezer consider it important?


4 Answers

In my opinion, "a rationalism" (i.e., a set of memes designed for an intellectual community focused on the topic of clear thinking itself) requires a few components to work.

  • It requires a story in which more is possible (as compared with "how you would reason otherwise" or "how other people reason" or such).
    • The first component is an overarching theory of reasoning. This is a framework in which we can understand what reason is, analyze good and bad reasoning, and offer advice about reasoning.
    • The second component is an account of how this is not the default. If there is a simple notion of good reasoning, but also everyone is already quite good at that, then there is not a strong motivation to learn about it, practice it, or form a community around it.

The sequences told a story in which the first role was mostly played by a form of Bayesianism, and the second was mostly played by the heuristics and biases literature. The LessWrong memeplex has evolved somewhat over time, including forming some distinct subdivisions with slightly different answers to those two questions. 

Most notably, I think CFAR has changed its ideas about these two components quite a bit. One version I heard once: the sequences might give you the impression that people are overall pretty bad at Bayesian reasoning, and the best way to become more rational is to specifically de-bias yourself by training Bayesian reasoning and un-training all the known biases or coming up with ways to compensate for them. Initially, this was the vision of CFAR as well. But what CFAR found was that humans are actually really really good at Bayesian reasoning, when other psychological factors are not getting in the way. So CFAR pivoted to a model more focused on removing blockers rather than increasing basic reasoning skills.

Note that this is a different answer to the second question, but keeps Bayesianism as the overarching theory of rationality. (Also keep in mind that this is, quite probably, a pretty bad summary of how views have changed since the beginning of CFAR.)

Eliezer has now written Inadequate Equilibria, which offers a significantly different version of the second component. I could understand starting there and getting an impression of what's important about rationalism that is quite distant from Bayesianism: there, the primary story is that social blockers keep people from reasoning well, and the primary antidote is thinking for yourself rather than going with the crowd. Why is Bayesianism important for that? Well, the answer is that Bayesianism offers a nuts-and-bolts theory of how to think. You need some such theory in order to ground attempts at self-improvement (otherwise you run the risk of making haphazard changes without any standard by which to judge whether you are thinking better or worse). And the quality of the theory has a significant bearing on how well the self-improvement will turn out!

See Eliezer's post Beautiful Probability, and Yvain on 'probabilism'; there's a core disagreement about what sort of knowledge is possible, and unless you're thinking about things in Bayesian terms, you will get hopelessly confused.
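
To make the point in Beautiful Probability concrete: the Bayesian answer depends only on how likely the observed data are under each hypothesis, not on the experimenter's intentions or stopping rule. Here is a minimal sketch of that calculation (my own illustration, not something from the post), assuming a uniform prior over a coin's bias:

```python
# A minimal sketch (my own illustration, not from Eliezer's post): a grid
# approximation of the Bayesian update for a coin's bias. Whether the
# experimenter planned "flip 10 times" or "flip until 6 heads appear", the
# observed data (6 heads, 4 tails) give the same posterior, because the
# stopping rule only multiplies the likelihood by a constant that
# normalization cancels out.

def posterior(heads, tails, grid_size=101):
    grid = [i / (grid_size - 1) for i in range(grid_size)]       # candidate biases
    likelihood = [p ** heads * (1 - p) ** tails for p in grid]   # uniform prior assumed
    total = sum(likelihood)
    return grid, [lk / total for lk in likelihood]

grid, post = posterior(heads=6, tails=4)
print("posterior mean bias:", sum(p * w for p, w in zip(grid, post)))  # ~0.58
```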

I don't know how many people, if any, are actually going around in daily life trying to assign or calculate probabilities (conditional or otherwise) or directly apply Bayes' theorem. However, there are core insights that come from learning to think about probability theory coherently that are extremely non-obvious to almost everyone, and require deliberate practice. This includes seemingly simple things like "Mathematical theorems hold whether or not you understand them," "Questions of truth and probability have right answers, and if you get the wrong answers you'll fail to make optimal decisions," or "It's valuable, psychologically and for interpersonal communication, to be able to assign numerical estimates of your confidence in various beliefs or hypotheses." Other more subtle ones like "it is fundamentally impossible to be 100% certain of anything" are also important, and *much* harder to explain to people who aren't aware of the math that defines the relevant terms.
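
The last point, for instance, falls straight out of Bayes' rule written in odds form. A minimal sketch (my own illustration; the specific numbers are arbitrary):

```python
# A minimal sketch (my own illustration; the numbers are arbitrary) of Bayes'
# rule in odds form. A probability of exactly 0 or 1 corresponds to odds of
# 0 or infinity, so no finite likelihood ratio can ever move it -- the formal
# content behind "it is impossible to be 100% certain of anything".

def update(prior, likelihood_ratio):
    """Posterior probability after evidence with the given likelihood ratio
    P(evidence | H) / P(evidence | not-H)."""
    if prior in (0.0, 1.0):
        return prior                                  # certainty never updates
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

print(update(0.50, 10))      # ~0.91: strong evidence moves a 50% prior a lot
print(update(0.999, 0.001))  # ~0.50: even near-certainty yields to enough evidence
print(update(1.0, 1e-9))     # 1.0: literal certainty ignores all evidence
```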

My day job as a research analyst involves making a lot of estimates about a lot of things based on fairly loose and imprecise evidence. In recent years I've been involved in helping train a lot of my coworkers. I find myself paraphrasing ideas from the Sequences constantly (recommending people read them has been less helpful; most won't, and in any case transfer of learning is hard). I notice that their writing, speaking, and thinking become a lot more precise, with fewer mistakes and impossibilities, when I ask them to try doing simple mental exercises like "In your head, assign a probability estimate to everything you claim will happen or think is true now, and add appropriate 'likeliness' quantifiers to your sentences based on that."
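
For concreteness, here is roughly what that exercise amounts to if you wrote it down; the probability bands and phrasing below are my own invention, not a standard scale:

```python
# A minimal sketch of the exercise described above. The probability bands and
# wording are my own invention, not the author's.

QUALIFIERS = [
    (0.99, "almost certainly"),
    (0.90, "very likely"),
    (0.70, "likely"),
    (0.50, "about as likely as not"),
    (0.30, "unlikely"),
    (0.10, "very unlikely"),
    (0.00, "almost certainly not"),
]

def qualifier(p):
    """Map a numeric confidence estimate to a verbal likeliness quantifier."""
    for threshold, phrase in QUALIFIERS:
        if p >= threshold:
            return phrase
    return QUALIFIERS[-1][1]

print(qualifier(0.80), "the vendor will deliver on schedule")     # "likely ..."
print(qualifier(0.15), "the outage was caused by the migration")  # "very unlikely ..."
```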

Also, I've had multiple people tell me that they won't, or even literally can't, make numerical assumptions and estimates without numerical data to back them up, sometimes with very strict ideas about what counts as data. The fact that their colleagues manage to make such assumptions and get useful answers isn't enough to persuade them otherwise. Math is often more likely to get through to such people.

I wrote a LessWrong post that addressed this: What Bayesianism Taught Me