It seems that the human mind constantly, and without being aware of it, does the following: it reaches a conclusion first and only afterwards supplies the justification for why that conclusion is true. Compare this with first taking into account all the available evidence and then building a conclusion from what the evidence says.

This mechanism is probably at work all the time. One hypothesis for its purpose is that it acts as a kind of compression algorithm. If you have carefully considered the evidence on some question and reached a conclusion, it makes sense to store only the conclusion rather than the entire process by which you reached it. From the conclusion, you can then reconstruct the argument. This lets you focus on reconstructing the reasoning behind the conclusion instead of re-evaluating the argument in full detail every time.

This already has great potential to mislead: each time you reconstruct the argument, you rehearse the points that support one specific outcome, while the arguments for the opposite outcome may be lost over time. This probably happens at least to some extent, especially with arguments for the other side that you never explicitly refute.

However, we also have many innate intuitions, for example that incest is wrong. Here the mechanism works the same way, except that you may never have evaluated any evidence at all. You never went through the evidence and reached a conclusion; you simply wrote down your bottom line based on how you feel about certain thoughts. This could happen because we are genetically predisposed to feel negatively about incest, but also because we are socially conditioned to dislike it. In general, both genetics and social conditioning can determine how you feel about certain thoughts.

The same algorithm may then try to construct a justifying argument without considering the specifics of the situation. Additional information may simply be ignored, possibly because of the implicit assumption that you already evaluated that piece of evidence when forming your original conclusion (unless it sounds very alien to you). How likely you are to do this also depends on how certain you are that you possess the truth: the more certain you are that your conclusion is correct, the less likely you are to evaluate new information (at least this seems to be the default). Strikingly, this appears to happen even with information you know you have never encountered before; if you are certain enough of your conclusion, you may ignore it anyway.
