In my opinion, "a rationalism" (i.e., a set of memes designed for an intellectual community focused on the topic of clear thinking itself) requires a few components to work:
- The first component is an overarching theory of reasoning. This is a framework in which we can understand what reason is, analyze good and bad reasoning, and offer advice about reasoning.
- The second component is an account of how this is not the default: a story in which more is possible (as compared with "how you would reason otherwise" or "how other people reason" or such). If there is a simple notion of good reasoning, but everyone is already quite good at it, then there is not a strong motivation to learn about it, practice it, or form a community around it.
The Sequences told a story in which the first role was mostly played by a form of Bayesianism, and the second was mostly played by the heuristics and biases literature. The LessWrong memeplex has evolved somewhat over time, including forming some distinct subdivisions with slightly different answers to those two questions.
Most notably, I think CFAR's ideas about these two components have changed quite a bit. One version I heard: the Sequences might give you the impression that people are overall pretty bad at Bayesian reasoning, and that the best way to become more rational is to specifically de-bias yourself by training Bayesian reasoning and un-training all the known biases, or coming up with ways to compensate for them. Initially, this was CFAR's vision as well. But what CFAR found was that humans are actually quite good at Bayesian reasoning when other psychological factors are not getting in the way. So CFAR pivoted to a model focused more on removing blockers than on increasing basic reasoning skills.
Note that this is a different answer to the second question, but it keeps Bayesianism as the overarching theory of rationality. (Also keep in mind that this is, quite probably, a pretty bad summary of how views have changed since the beginning of CFAR.)
Eliezer has since written Inadequate Equilibria, which offers a significantly different version of the second component. I could imagine someone starting there and coming away with an impression of what's important about rationalism that is quite distant from Bayesianism: there, the primary story about what blocks good reasoning is social, and the primary antidote is thinking for yourself rather than going with the crowd. Why is Bayesianism important for that? The answer is that Bayesianism offers a nuts-and-bolts theory of how to think. You need some such theory in order to ground attempts at self-improvement (otherwise you risk making haphazard changes without any standard by which to judge whether you are thinking better or worse). And the quality of the theory has a significant bearing on how well the self-improvement turns out!
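To make the "nuts-and-bolts" character of Bayesianism concrete, here is a minimal sketch of a single Bayes-rule belief update. The function name and all the numbers are invented purely for illustration; nothing here is drawn from CFAR's curriculum or the texts discussed above.

```python
def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) via Bayes' rule.

    prior: P(hypothesis) before seeing the evidence.
    p_evidence_if_true: P(evidence | hypothesis).
    p_evidence_if_false: P(evidence | not hypothesis).
    """
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Illustrative numbers: a hypothesis you give 10% credence, plus evidence
# that is 3x more likely if the hypothesis is true than if it is false.
posterior = bayes_update(prior=0.10, p_evidence_if_true=0.6, p_evidence_if_false=0.2)
print(round(posterior, 3))  # 0.25
```

The point of a theory like this is exactly the grounding role described above: it gives a definite standard ("did my credence move the way the likelihoods say it should?") against which a self-improvement attempt can be judged.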