Today's post, Motivated Stopping and Motivated Continuation, was originally published on 28 October 2007. A summary (taken from the LW wiki):

When the evidence we've seen points towards a conclusion that we like or dislike, there is a temptation to stop the search for evidence prematurely, or to insist that more evidence is needed.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Why Are Individual IQ Differences OK?, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

One way to lessen these biases - a very pricey one, only to be used if other methods don't work AND the bias is causing you large problems - is to make yourself comfortable with the idea of explicitly not doing the thing you're motivated to avoid, so that you at least get a known unknown instead of an unknown unknown.

Isn't this just another way of saying "confirmation bias"?

About the only cure I've found for that is multiple personality disorder. Pretend you are trying to refute yourself, and want to find the strongest argument. Then pretend you have no opinion, and just want to understand the problem.

I find that helpful on personal issues. Pretend you're your own big brother, looking for a solution for you. Solutions are obvious once you get your own ego and angst out of the way - once it isn't your problem, but your little brother's problem.

Jaynes talks about optional stopping from a statistical perspective, which makes for an interesting paper. I don't think he was as clear on the solution as he needed to be. The issue is whether you know the data themselves, or only that you passed a test in which optional stopping was used. If you clearly identify the state of knowledge you are conditioning on, the problems go away. I think that paper is a good one for really clarifying your thinking about Bayesian statistics and how it's your state of knowledge that determines the probability.
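
To make the comment's distinction concrete, here is a minimal Python sketch. It is my own illustration, not code from the post or from Jaynes's paper: the coin of unknown bias, the uniform prior, the "6 heads before 20 flips" stopping rule, and every function name are assumptions invented for the example. Conditioning on the full flip record gives a posterior in which the stopping rule drops out; conditioning only on the coarse event "the test was passed" does not.

```python
# A sketch under invented assumptions (coin of unknown bias theta, uniform
# prior, a made-up "6 heads before 20 flips" stopping rule); it is not code
# from the post or from Jaynes's paper.

import random

random.seed(0)


def flips_optional_stopping(theta, target_heads=6, max_flips=20):
    """Flip a coin of bias theta until target_heads heads or max_flips flips."""
    flips = []
    while sum(flips) < target_heads and len(flips) < max_flips:
        flips.append(random.random() < theta)
    return flips


def posterior_given_data(flips, grid_size=101):
    """Posterior over theta given the full flip record, under a uniform prior.

    The likelihood is proportional to theta**heads * (1 - theta)**tails whether
    the experimenter fixed the number of flips in advance or stopped optionally,
    so once you condition on the data the stopping rule drops out.
    """
    heads = sum(flips)
    tails = len(flips) - heads
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    weights = [t ** heads * (1 - t) ** tails for t in grid]
    total = sum(weights)
    return grid, [w / total for w in weights]


def posterior_given_only_success(grid_size=101, trials=2000):
    """Posterior over theta given ONLY the event "the stopping rule reported
    success" (6 heads reached before 20 flips), not the flips themselves.

    Here the stopping rule does matter: the likelihood of the coarse event,
    P(success | theta), depends on the protocol. It is estimated by simulation
    at each grid value of theta.
    """
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    weights = []
    for t in grid:
        hits = sum(sum(flips_optional_stopping(t)) >= 6 for _ in range(trials))
        weights.append(hits / trials)
    total = sum(weights)
    return grid, [w / total for w in weights]


if __name__ == "__main__":
    record = flips_optional_stopping(theta=0.4)
    grid, post = posterior_given_data(record)
    mean_data = sum(t * p for t, p in zip(grid, post))
    print(f"{sum(record)} heads in {len(record)} flips; "
          f"posterior mean given the data: {mean_data:.3f}")

    grid, post = posterior_given_only_success()
    mean_coarse = sum(t * p for t, p in zip(grid, post))
    print(f"posterior mean given only 'the test was passed': {mean_coarse:.3f}")
```

Running it prints two posterior means: one computed from the actual flips, which would be identical whether the record came from a fixed-sample or an optional-stopping protocol, and one computed from the bare fact that the stopping rule reported success, which depends on the protocol.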

I'm currently having trouble accessing the Wiki, so I wasn't able to post this summary. Could someone copy and paste it for me?