Epistemic status: Had a valley-of-bad-rationality problem, did some thinking about how to solve it in my own case, and thought the results might be useful for other people who have the same problem. (To be clear, I don't yet claim this solution works; I haven't really tried it yet, but it seems like the kind of thing that might work. The claim here is "this might be worth trying" rather than "this works," and the target audience is myself and people sufficiently similar to me.)

Epistemic effort: Maybe 2-3 hours of writing cumulatively. Thanks to Elephantiskon for feedback.

EDIT: Here's a summary of the post, which I was told was much clearer than the original:

There's a thing people (including me) sometimes do, where they (unreflectively) assume that the conclusions of motivated reasoning are always wrong, and dismiss them out of hand. That seems like a bad plan. Instead, try going into System II mode and reexamining conclusions you think might be the result of motivated reasoning, rather than immediately dismissing them. This isn't to say that System II processes are completely immune to motivated reasoning, far from it, but "apply extra scrutiny" seems like a better strategy than "dismiss out of hand."
This habit of [automatically dismissing anything that seems like it might be the result of motivated reasoning] can lead to decision paralysis and pathological self-doubt. The point of this post is to correct for that somewhat.

It sometimes seems like a substantial fraction of my reasoning is driven by an awareness of the insidiousness of motivated reasoning, and a desire to avoid it. In particular, I sometimes have thoughts like the following:

Brain: I would like to go to philosophy graduate school.
Me: But 80,000 Hours says it's usually not a good idea to go to philosophy graduate school...
Brain: But none of the other options I can come up with seem especially realistic. Combine that with the fact that grad school can be made to be a good path if you do it right, and it actually seems like a pretty good option.
Me: But I started off wanting to go to graduate school because it seemed like fun. Seems pretty suspicious that it would turn out to be my best option, despite the fact that, according to 80k, for most EAs it's not. Are you sure you're not engaging in motivated reasoning?
Brain: Are you sure you're not engaging in motivated reasoning? Are you sure you're not just trying to make a decision that's socially defensible to our in-group (other EAs)?

Um, what??

I seem to be reasoning as if there were a general principle that, if there's a plausible way that I might be using motivated reasoning to come to a particular conclusion, that conclusion must be wrong. In other words, my brain has decided that anything tagged as "probably based on motivated reasoning" is false. Another way of thinking about this is that I'm using "that's probably based on motivated reasoning" as a fully general excuse against myself.

While being averse to motivated reasoning seems reasonable, the general principle that any conclusion arrived at by motivated reasoning must be false seems ridiculous when I write it out. Obviously, my coming to a conclusion by motivated reasoning doesn't have any effect on whether or not it's true -- it's already true or already false, no matter what sort of reasoning I used.[1]

A better process for dealing with motivated reasoning might be:

If

1. Getting the right (true) answer matters in a particular case, and

2. There's a plausible reason to suspect that I might be coming to my answer in that case on the basis of motivated reasoning,

then it is worth it to:

a. Go into System II/slow thinking/manual mode/whatever.

b. Ask yourself what you would do if your (potentially motivated-reasoning-generated) conclusion were true, and what you would do if it were false (cf. leave a line of retreat, split-and-commit, see the dark world).[2]

c. Use explicit, gears-based, model-based reasoning to check the conclusion (e.g. list out all important considerations in a doc; if the problem is quantitative, make a spreadsheet; etc.).

Then, whatever answer comes out of that process, trust it until new information comes along, and then rerun the process.

To sum up: if you have a habit of dismissing a belief when you notice it might be the result of motivated reasoning, it might be worth it to replace that habit with the habit of reevaluating the belief instead.


[1] To be clear, I do think the basic idea that [if something seems to be the result of motivated reasoning, that's evidence against it] is probably correct. I just think that you shouldn't update all the way to "this is false," since the thing might still be true.

[2] I think the basic idea behind why reasoning hypothetically in this way helps is this: it takes the focus off of deciding whether X is true (which is the step that's suspect) and puts it onto deciding what that would lead to. I like to think of it as first "fixing" X as true, and then "fixing" X as false.


7 comments

Meta-comment: looks like you tried to include HTML in this post. You'll need to convert it to markdown for it to appear correctly. The main issue seems to be links, and it's pretty easy to format them: put the text between square brackets and the link following between parentheses. I don't think I can type an example here without it being consumed, but here's a good reference.

Nope, this was actually our fault. Our HTML parser sometimes chokes if you end inline styles in the middle of a link (i.e. have a half-italicized link or a half-underlined link). I fixed it. Sorry for the inconvenience.

Ah, thanks! What happened was that I wrote the post in the LW editor, copied it over to Google Docs for feedback (including links), added some more links while it was in the Google Doc, then copy-and-pasted it back. So that might have been where the weird link formatting came from.

Noticing motivated reasoning gives you a chance to use more formal and auditable reasoning, which is less susceptible to bias from motivation.

That's not to say that the conclusions of motivated reasoning are always wrong, just that the mechanisms for knowing whether they're wrong, and by how much, are highly suspect, and you should cross-check with other means.

If this is intended as a summary of the post, I'd say it doesn't quite seem to capture what I was getting at. If I had to give my own one-paragraph summary, it would be this:

There's a thing people (including me) sometimes do, where they (unreflectively) assume that the conclusions of motivated reasoning are always wrong, and dismiss them out of hand. That seems like a bad plan. Instead, try going into System II mode and reexamining conclusions you think might be the result of motivated reasoning, rather than immediately dismissing them. This isn't to say that System II processes are completely immune to motivated reasoning, far from it, but "apply extra scrutiny" seems like a better strategy than "dismiss out of hand."

Something that was in the background of the post, but I don't think I adequately brought out, is that this habit of [automatically dismissing anything that seems like it might be the result of motivated reasoning] can lead to decision paralysis and pathological self-doubt. The point of this post is to somewhat correct for that. Perhaps it's an overcorrection, but I don't think it is.

I found this one-paragraph summary way clearer than the OP, and suggest adding it at the beginning.