No Really, Why Aren't Rationalists Winning?
Reply to: Extreme Rationality: It's Not That Great; Extreme Rationality: It Could Be Great; The Craft and the Community; and Why Don't Rationalists Win?

I'm going to say something which might be extremely obvious in hindsight: if LessWrong had originally been targeted at and introduced to an audience of competent businesspeople and self-improvement and health buffs instead of an audience of STEM specialists and Harry Potter fans, things would have been drastically different. Rationalists would be winning.

Right now, rationalists aren't winning. Rationality helps us choose which charities to donate to, and as Scott Alexander pointed out in 2009, it gives clarity-of-mind benefits. However, as he also pointed out in the same article, rationality doesn't seem to be helping us win in individual career or interpersonal/social areas of life. It's been nearly ten years since then, and I have yet to see any sign that this has changed.

I considered the possibility that I just hadn't heard about other rationalists' practical successes, either because I didn't become a rationalist until around 2015 or simply because no one was talking about them. Then I realized that was silly. If rationalists had started winning, at least one person would have posted about it here on lesswrong.com. I recently spoke to Scott Alexander, and he said he still agreed with everything in his article.

So rationalists aren't winning. Why not?

The Bayesian Conspiracy podcast (if I recall correctly) proposed the following explanation in one of their episodes: rationality can only help us improve a limited amount relative to where we started out. On this view, rationalists who started out at a lower level of life success, cognitive functioning, or talent cannot outperform non-rationalists who started out at a sufficiently high level.

This argument is fundamentally a cop-out. When others win in places where we fail, it makes sense to ask, "How? What knowledge, skills, qualities, or experience do they have that we lack?"