Previously: Red Flags for Rationalization


It is often said that the best way to resist temptation is to avoid it. This includes the temptation to rationalize. You can avoid rationalization by denying it the circumstances in which it would take place.

When you avoid rationalization, you don't need to worry about an infinite regress of metacognition, nor about overcompensation. As a handy bonus, you can demonstrate to others that you weren't rationalizing, which would otherwise require a lot of introspection and trust.

Here are three ways to do it:

Double Blinding

Identify the thing that would control your rationalization and arrange not to know it.

The trope namer is experimental science. You might be tempted to give better care to the experimental group than to the control group, but not if you don't know which is which. In many cases, you can maintain the blinding in statistical analysis as well, comparing "group A" to "group B" and only learning which is which after doing the math.
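
For a concrete picture, here's a minimal Python sketch of blinded analysis; the group names, data, and shuffling scheme are all illustrative:

```python
import random
from statistics import mean

def blinded_comparison(groups):
    """Compare two groups under opaque labels; unblind only after the math.

    `groups` maps real names (e.g. "treatment", "control") to lists of data.
    """
    real_names = list(groups)
    random.shuffle(real_names)  # the analyst never sees this order
    masked = {"group A": groups[real_names[0]],
              "group B": groups[real_names[1]]}

    # All analysis happens on the masked labels.
    diff = mean(masked["group A"]) - mean(masked["group B"])
    print(f"mean(group A) - mean(group B) = {diff:+.3f}")

    # Only after the math is done do we reveal which group was which.
    print("unblinding:", dict(zip(["group A", "group B"], real_names)))

# Hypothetical data, for illustration only.
blinded_comparison({"treatment": [5.1, 4.8, 5.3], "control": [4.9, 4.7, 5.0]})
```

The design point is that the mapping from masked labels to real groups is created before any analysis happens, and revealed only after you've committed to the comparison.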

Similarly, if you are evaluating people (e.g. for a job) and are worried about subconscious sexism (or overcompensation for it), write a script or ask a friend to strip all gender indicators from the application.
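
A rough sketch of what such a script might look like, assuming applications arrive as plain text; the word list is deliberately incomplete and purely illustrative (a real redactor would also need to handle names, photos, and so on):

```python
import re

# Illustrative, incomplete list of gendered terms.
GENDERED_TERMS = ["he", "she", "him", "her", "his", "hers",
                  "mr", "mrs", "ms", "chairman", "chairwoman"]

PATTERN = re.compile(r"\b(" + "|".join(GENDERED_TERMS) + r")\b",
                     re.IGNORECASE)

def strip_gender_markers(application_text: str) -> str:
    """Replace gendered words with a neutral placeholder before review."""
    return PATTERN.sub("[redacted]", application_text)

print(strip_gender_markers("Ms. Smith said she led her team of five."))
# -> "[redacted]. Smith said [redacted] led [redacted] team of five."
```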

Unfortunately, this technique requires you to anticipate the rationalization risk. Once you notice you might be rationalizing, it's usually too late to double-blind.

End-to-End Testing

A logical argument is a series of mental steps that build on each other. If the argument is direct, every step must be correct. In that respect, it is somewhat similar to a computer program. You wouldn't write a nontrivial computer program, look it over, say "that looks right", and push it to production. You would test it, with as close as possible to a full end-to-end test, before trusting it.
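
To make the software analogy concrete, here's a toy end-to-end test of a hypothetical two-step pipeline; note that the test checks the final answer against known input, rather than vouching for the plausibility of each intermediate step:

```python
def total_owed(invoice_lines):
    """Hypothetical pipeline: parse "item,price" lines, then sum the prices."""
    prices = [float(line.split(",")[1]) for line in invoice_lines]  # step 1: parse
    return sum(prices)                                              # step 2: sum

def test_total_owed_end_to_end():
    # Feed known input through the whole pipeline and check the final answer,
    # rather than eyeballing each step and declaring it correct.
    assert total_owed(["widget,2.50", "gadget,7.50"]) == 10.0

test_total_owed_end_to_end()
```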

What does testing look like for an argument? Take a claim near the end of the argument which can be observed. Go and observe it. Since you have a concrete prediction, it should be a lot easier to make the observation.

Once you've got that, you don't need the long chain of argument that got you there. So it doesn't matter if you were rationalizing along the way. The bit at the end of the argument which you haven't chopped off still needs scrutinizing, but it's shorter, so you can give each bit more attention.

Suppose you're organizing some sort of event, and you don't want to bother planning cleanup because you'll just tell the crowd at the end, "ok, everybody clean up now". You expect this to work out because of the known agentiness and competence of the crowd, the overlap with HOPE attendees, estimates of the difficulty of the cleanup tasks.... There's a lot of thinking that could have gone wrong, and a lot of communal pride pressuring you to rationalize. Instead of examining each question, try asking these people to clean up in some low-stakes situation.

(Do be careful to actually observe. I've seen people use their arguments to make predictions, then update on those predictions as if they were observations.)

This approach is also highly effective against non-rationalization errors in analysis, and it's especially good at spotting problems involving unknown unknowns.

The Side of Safety

Sometimes, you don't need to know.

Suppose you're about to drive a car, and are estimating whether you gain nontrivial benefit from a seatbelt. You conclude you will not, but note this could be ego-driven rationalization causing you to overestimate your driving talent. You could try to re-evaluate more carefully, or you could observe that the costs of wearing a seatbelt are trivial.

When you're uncertain about the quality of your reasoning, it makes sense to hold a probability density function over posteriors for a yes-no question. But when the payoff table is lopsided enough, you might find the vast bulk of the PDF is on one side of the decision threshold. And then you can just decide.
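
As a back-of-the-envelope sketch with made-up numbers for the seatbelt case: even after smearing your estimate across an order of magnitude in either direction to account for possible rationalization, nearly all the probability mass can land on the same side of the decision threshold.

```python
import numpy as np

# Made-up numbers: wearing the seatbelt costs 1 unit of hassle, and is worth
# 10_000 units in the cases where it matters. Wearing wins whenever the
# probability p that it matters exceeds cost/benefit = 1e-4.
COST, BENEFIT = 1.0, 10_000.0
threshold = COST / BENEFIT

# Instead of trusting a point estimate of p, hold a distribution over it:
# log10(p) ~ Normal(-2, 1), i.e. your estimate of p = 1e-2 could be off by
# an order of magnitude in either direction.
rng = np.random.default_rng(0)
p_samples = 10.0 ** rng.normal(loc=-2.0, scale=1.0, size=100_000)

mass_favoring_seatbelt = np.mean(p_samples > threshold)
print(f"decision threshold: p = {threshold}")
print(f"posterior mass favoring the seatbelt: {mass_favoring_seatbelt:.3f}")
# With these illustrative numbers, roughly 98% of the mass sits above the
# threshold, so the decision survives any worries about rationalization.
```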


Next: Testing for Rationalization
