Some of this post is an expansion of topics covered by Lukeprog here.

1. Knowing about biases (doesn't stop you being biased)

Imagine you had to teach a course that would help people to become less biased. What would you teach? A natural idea, tempting enough in theory, might be that you should teach the students about all of the biases that influence their decision making. Once someone knows that they suffer from overconfidence in their ability to predict future events, surely they will adjust their confidence accordingly.

Readers of Less Wrong will be aware that it's more complicated than that.

There is a mass of research showing that knowing about cognitive biases does not stop someone from being biased. Quattrone et al. (1981) showed that anchoring effects are not decreased by instructing subjects to avoid the bias. Similarly, Pohl and Hell (1996) demonstrated that the same applies to hindsight bias. Finally, Arzy et al. (2009) showed that including a misleading detail in the description of a medical case significantly decreased diagnostic accuracy, and accuracy did not improve when doctors were warned that such information might be present.

2. Consider the opposite (but not too much)

So what does lead to debiasing? As Lukeprog mentioned, one well-supported tactic is "consider the opposite", which involves simply considering some reasons that an initial judgement might be incorrect. This has been shown to help counter overconfidence and hindsight bias as well as anchoring. See, for example, Arkes (1991) or Mussweiler et al. (2000) for studies along these lines.

There are two more things worth noting about this tactic. The first is that Soll and Klayman (2004) demonstrated that a related tactic has positive results in relation to overconfidence. In their experiment, Soll and Klayman asked subjects to give an interval such that they were 80% sure that the answer to a question lay within it. So they asked for estimates of things like the birth year of Oliver Cromwell, and subjects would need to provide an early year and a late year such that they were 80% sure that Cromwell was born somewhere between those two years. These subjects exhibited substantial overconfidence - they were right far less than 80% of the time.

However, another group of subjects were asked two questions. For the first, they were asked to pick a year such that they were 90% sure Cromwell wasn't born before it. For the second, they were asked to pick a year such that they were 90% sure that Cromwell wasn't born after it. Subjects still displayed overconfidence in response to this format, but to a far lesser extent. Yet the two questions, taken together, are equivalent to the single 80% interval (eta: though see this comment)! Being forced to consider arguments for both ends of the interval seemed to lead to better calibrated estimates. Further studies have attempted to improve on this result through more sophisticated tactics along the same lines (see, for example, Speirs-Bridge et al., 2009).
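To see the arithmetic behind that equivalence (a quick sketch; here X stands for the quantity being estimated, such as Cromwell's birth year, and "low" and "high" are the two elicited bounds): if each one-sided bound is missed with probability 10%, and the true value can only fall outside the interval on one side, then the two bounds together define an interval with 80% intended coverage:

\[
P(\text{low} \le X \le \text{high}) = 1 - P(X < \text{low}) - P(X > \text{high}) = 1 - 0.10 - 0.10 = 0.80
\]

As a commenter notes below, the converse doesn't hold: an 80% interval need not put 10% of the probability in each tail, which is part of why the two formats aren't strictly interchangeable.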

The second thing worth noting is that considering too many reasons that an initial judgement might be incorrect is counterproductive (see Roese, 2004, or Sanna et al., 2002). After a certain point, it becomes increasingly difficult for a person to generate reasons they might have been incorrect. This then serves to convince them that their idea must be right; otherwise it would be easier to come up with reasons against the claim. At this point, the technique ceases to have a debiasing effect. While the exact number of reasons that one should consider is likely to differ from case to case, Sanna et al. (2002) found a debiasing effect when subjects were asked to consider 2 reasons against their initial conclusion but not when they were asked to consider 10. Consequently, it seems plausible that the ideal number of arguments to consider will be closer to 2 than 10.

So consider the opposite but not too much.


3. Provide reasons

There is also evidence that providing reasons for your decision or judgement can help to mitigate biases. Arkes et al. (1988) demonstrated that asking people to provide a rationale for a judgement can reduce hindsight bias in that judgement.

Similar results have been demonstrated in relation to framing effects. Miller and Fagley (1991) presented participants with a series of scenarios about how to respond to a disease outbreak. One group saw the scenarios with a positive frame while the other saw them with a negative frame. This framing influenced which response programs the participants selected: those in the negative frame group selected responses at a different frequency to those in the positive frame group, despite the scenarios being the same. However, when the groups were asked to provide a reason for their decision, both groups selected responses at about the same frequency (though Sieck and Yates (1997) demonstrated that this approach does not work for all types of framing questions).

So provide reasons for your decisions.

4. Get some training

There is also evidence that some biases can be trained away. Specifically, Larrick et al. (1990) showed that the sunk cost fallacy can be avoided through training, and Fong et al. (1986) presented similar research with regard to judgements about sample variability.

Larrick (2004) claims that this training is most effective when an abstract principle is taught along with concrete examples. He also suggests that the training should involve examples showing how the principle works in context. The process of training involves not just learning the rule but also figuring out when to apply it and then (hopefully) coming to apply it automatically.

This seems like the sort of thing that could be run in the discussion section of Less Wrong or at face-to-face meetups.

5. Reference class forecasting

The final technique I want to discuss is reference class forecasting, which has been discussed by both Robin and Eliezer. On Less Wrong, this topic is often discussed in terms of the inside and the outside view. Reference class forecasting is basically the idea that in predicting how long a project will take, one should not try to figure out how long each component of the project will take but should instead ask how long it has taken you (or others) to complete similar tasks in the past.
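To make that concrete, here is a minimal sketch (in Python) of one way to apply the outside view to time estimates. It assumes you keep a log of estimated versus actual durations for similar past tasks; the function name and the example numbers are hypothetical, purely for illustration:

```python
import statistics

def reference_class_forecast(inside_view_estimate, past_estimates, past_actuals):
    """Adjust an inside-view estimate using how similar past tasks actually turned out."""
    # How badly did estimates for similar tasks miss?
    overrun_ratios = [actual / estimate
                      for estimate, actual in zip(past_estimates, past_actuals)]
    typical_overrun = statistics.median(overrun_ratios)
    # Scale the component-by-component plan by the typical historical overrun.
    return inside_view_estimate * typical_overrun

# Hypothetical log of similar past tasks (days estimated vs. days actually taken).
past_estimates = [10, 8, 15, 6]
past_actuals = [18, 12, 30, 9]

# A "3 week" inside-view plan becomes roughly a 5 week outside-view forecast.
print(reference_class_forecast(21, past_estimates, past_actuals))  # roughly 35 days
```

The essential move is the same whether you do it with a script or on the back of an envelope: let the track record of similar tasks, not the decomposition of this one, set the forecast.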

This approach has been shown to be effective in overcoming the planning fallacy. For example, Osberg and Shrauger (1986) demonstrated that those instructed to consider their performance in similar cases in the past were better able to predict their performance in new projects.

So in predicting how long a task will take, use the outside view, not the inside view.

6. Concluding remarks

I'm sure there's nothing here that will surprise most Less Wrong readers, but I hope that having it all together in one place is useful. For anyone who's interested, I got a lot of the information for this post from Richard P. Larrick's chapter 'Debiasing' in the Blackwell Handbook of Judgment and Decision Making, which is a good book all round.

References

Arkes, H.R. 1991, 'Costs and benefits of judgement errors: Implications for debiasing', Psychological Bulletin, vol. 110, no. 3, pp. 486-498.

Arkes, H.R., Faust, D., Guilmette, T.J. & Hart, K. 1988, 'Eliminating the Hindsight Bias', Journal of Applied Psychology, vol. 73, pp. 305-307.

Fong, G.T., Krantz, D.H. & Nisbett, R.E. 1986, 'The effects of statistical training on thinking about everyday problems', Cognitive Psychology, vol. 18, pp. 253-292.

Larrick, R.P. 2004, 'Debiasing', in Blackwell Handbook of Judgment and Decision Making, Blackwell Publishing, Oxford, pp. 316-337.

Miller, P.M. & Fagley, N.S. 1991, 'The Effects of Framing, Problem Variations, and Providing Rationale on Choice', Personality and Social Psychology Bulletin, vol. 17, no. 5, pp. 517-522.

Mussweiler, T., Strack, F. & Pfeiffer, T. 2000, 'Overcoming the Inevitable Anchoring Effect: Considering the Opposite Compensates for Selective Accessibility', Personality and Social Psychology Bulletin, vol. 26, no. 9, pp. 1142-1150.

Osberg, T.M. & Shrauger, J.S. 1986, 'Self-prediction: Exploring the parameters of accuracy', Journal of Personality and Social Psychology, vol. 51, no. 5, pp. 1044-1057.

Pohl, R.F. & Hell, W. 1996, 'No Reduction in Hindsight Bias after Complete Information and Repeated Testing', Organizational Behavior and Human Decision Processes, vol. 67, no. 1, pp. 49-58.

Quattrone, G.A., Lawrence, C.P., Finkel, S.E. & Andrus, D.C. 1981, 'Explorations in anchoring: The effects of prior range, anchor extremity, and suggestive hints', unpublished manuscript, Stanford University.

Roese, N.J. 2004, 'Twisted Pair: Counterfactual Thinking and the Hindsight Bias', in Blackwell Handbook of Judgment and Decision Making, Blackwell Publishing, Oxford, pp. 258-273.

Sanna, L.J., Schwarz, N., Stocker, S.L. 2002, 'When Debiasing Backfires: Accessible Content and Accessibility Experiences in Debiasing Hindsight', Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 28, no. 3, pp. 497-502.

Soll, J.B. & Klayman, J. 2004, 'Overconfidence in Interval Estimates', Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 30, no. 2, pp. 299-314.

Speirs-Bridge, A., Fidler, F., McBride, M., Flander, L., Cumming, G. & Burgman, M. 2009, 'Reducing overconfidence in the interval judgements of experts', Risk Analysis, vol. 30, no. 3, pp. 512-523.

Comments

Technically speaking, the two questions about Cromwell's birth are not equivalent: if you have a 90% lower bound and a 90% upper bound, that does give you an 80% confidence interval, but not all 80% confidence intervals will have a 90% lower bound and a 90% upper bound. For example, if you give "now" as the upper bound of your interval and an 80% lower bound, that will still give an 80% interval. That may be part of the reason behind the higher error rate for the 80% interval: there are many ways to build such an interval, so it's easier to get mixed up.

Otherwise, interesting article, but in parts it underestimates the social complexity of the task: for the planning fallacy, using the "outside view" is usually a good thing, but it's very hard to make use of it in a professional context (neither my boss nor my customers are usually receptive to it; they ask "how long for that feature and that one and that one" and don't care much about "last time we did a project of that complexity, we took twice as long as initially planned"). Debiasing yourself is very important, but sometimes it's not enough; you have to debias others too, and that's even harder...

In my experience, debiasing others who have strongly held opinions is far more effort than it's worth; a better road seems to be to help them debias themselves. Plant the seed and move on, coming back to assess and perhaps water it later on. I don't try to cut down their tree... as it were.

By the way, I highly recommend this strategy for post-writing. There are hundreds of fascinating studies and results buried in the footnotes of my posts that I didn't take the time to expand more usefully. More like this!

If you or anyone plans to write more posts like this, please feel encouraged to do so. Great work!

Excellent post!

The single hardest bias to avoid is the bias blind spot, wherein people think (maybe not overtly, but on a gut level) that they are less biased than other people.

I've never found any studies, papers, web pages, anecdotes, or anything at all on how to fix it, so I don't really know what to do about it. I have a conjecture, though, which intuitively seems like it might work (but take it cum grano salis):

To avoid bias blind spot, think "What biases are going into this decision?" every single time you make an important decision or come to an important conclusion. Then actually correct for them.

The problem with this is that people who currently take biases into account in their decisions (e.g. Less Wrong readers and psychologists) end up not knowing how much their decisions are affected by the bias, and consequently they don't know how much to adjust. So they don't adjust at all. It ends up being an "Okay, something's wrong here, but I can't fix it, so I'll just do the same thing I was going to do before" scenario.

The single hardest bias to avoid is the bias blind spot, wherein people think (maybe not overtly, but on a gut level) that they are less biased than other people.

Half actually are less biased than the median. Not bad considering how deluded some other biases make us!

Did you know that 50% of our teachers are below average? This is unacceptable. We need to improve our education system!

Okay, lessdazed's comment is getting up-voted a lot, which I think stems from how his post misinterprets mine in a humorous way.

less biased than other people.

was bad wording on my part, I'm afraid. What I really meant was:

significantly less biased than the norm.

which, in turn, really means:

less biased than they actually are.

This is what I really meant, and what the bias blind spot is really about: not noticing, or not correcting for, your own biases, which are there regardless of whether they occur more or less often than in other people. (Remember: it is useless to be superior.)

On "Give Reasons": I have read of a study (mentioned, for example here as being in Lehrer's "How we Decide") that students given a poster of their choice were less happy with their decision some months later i they had been asked to give reasons for their choice than if they were just given the poster with no questions. The study hypothesized that students chose based off of easily-explainable aspects rather than the aspects that actually affected their preferences.

So be careful what you give reasons for. Perhaps more aesthetic decisions should be left to the initial impression. For some things, we don't want to overcome our biases.

"Provide reasons" helps for me. Many times I think something is obviously true, and when I start writing a blog post about it, where I have to explain and justify, I realize, mid-paragraph, that what I'm writing is not quite correct, and I have to rethink it.

Despite the fact that this has happened to me several times, my gut still doesn't quite say "it may not be that obvious, and you may be somewhat wrong." Rather, my gut now says, "the argument, written down, may not be as simple as you think."

So I feel like I still have a ways to go.

Fong et al. (1986) presented similar research with regard to judgements about sample variability.

You're missing this from your list of references:

Fong, G. T., Krantz, D. H., & Nisbett, R. E. (1986). The effects of statistical training on thinking about everyday problems. Cognitive Psychology, 18, 253-292.

(I'm reading it right now; it looks promising)

Fixed. Thanks.