The main way I've seen people turn ideologically crazy [Linkpost]
This linkpost is in part a response to @Raemon's comment about why the procedure Raemon described doesn't work in practice to deal with the selection effects I talked about in my last post.

In that post, I described a selection effect where believers in an argument can come to believe that their idea is true and that their critics are crazy/wrong/trolling/dumb, no matter what argument is used. The believers are half-right: for most topics, bounded computation means that most critics' random error swamps any truth signal/evidence. But the believers incorrectly perceive this as evidence that their theory is correct, because they don't realize that most possible critics of an idea will have bad criticisms regardless of whether the idea is correct.

This was part of what I was trying to get at when I said that selection effects are very, very hard to deal with using only arguments. Once you can't rely on the general public to know whether or not an argument is true, it becomes much, much easier to create bubbles/selection effects that distort your thinking, and if you have no source of grounding other than arguments, this can easily lead to false beliefs.

In practice, the way we generally ameliorate selection effects is either by having ground-truth feedback, or by having subject matter that is easy to verify, as in mathematics and physics, such that people outside your ideological bubble can correct you if you are wrong. Absent these, we get fields that confidently produce a lot of clearly ideological nonsense, like nutrition/health studies, sociology, and psychology. There are other problems in these fields, but selection bias is a big contributor.

Andy Masley has some tools at the end of the linkpost to help you avoid selection bias more generally:

> I’ve seen people around me and in the general public turn ideologically crazy at a higher rate…
My own take on rationalization/motivated reasoning is that, at the end of the day, no form of ethics can meaningfully slow it down if the person either can't credibly commit their future self, or simply isn't bound by and doesn't want to follow ethical rules. So the motivated-reasoning critique isn't EA-specific; rather, it shows two things:
1. People are more selfish than they think they are, and care less about virtues than they think they do, so motivated reasoning comes very easily.
2. We can't credibly commit our future selves to do certain things, especially over long timeframes, and even when people do care about virtues, motivated reasoning still harms their thinking.
Motivated reasoning, IMO, is a pretty deep-seated problem within our own brains, and is probably unsolvable in the near term.