Cashing Out Cognitive Biases as Behavior



We believe that susceptibility to cognitive biases leads to bad decisions and suboptimal performance. I’d like to look at 2 interesting studies:

  1. Parker & Fischhoff 2005: “Decision-making competence: External validation through an individual-differences approach”

    compiled a number of questions covering 7 cognitive biases, and then asked their 110 18–19-year-old subjects about impulsiveness, number of sexual partners, etc.; the subjects also supplied some IQ, education, and thinking-style metrics. The components of their ‘DMC’ battery:

    • Consistency in risk perception
    • Recognizing social norms
    • Resistance to sunk costs
    • Resistance to framing
    • Applying decision rules
    • Path independence
    • Under/overconfidence
  2. Bruine de Bruin et al 2007: “Individual Differences in Adult Decision-Making Competence”

    They used the DMC as well, but also developed what we might call a 34-item index of bad decisions (the DOI): ever bought clothes you never wore, rented a movie you didn’t watch, been expelled, filed for bankruptcy, forfeited your driver’s license, missed an airplane, bounced a check, drunk until you vomited, etc. (pg 18–19 for the full list). The subjects were 360 18–88-year-olds (average 48), with many of the same metrics gathered (education/IQ/thinking style).

Before continuing further, it might be interesting to write down what you expect the results to be. Does controlling for IQ eliminate all interesting correlations? Do a few of the fallacies correlate with each other, or all, or none? Does education increase, decrease, or not affect susceptibility? Does fallacy susceptibility correlate strongly with risky behavior, >0.5? Does it correlate strongly with the DOI results, >0.5? Less for either? And so on.

Are we done? Good, but first I’d like to discuss why I was reading these papers: I recently received my copy of Keith Stanovich’s 2010 book, Rationality & The Reflective Mind. (It cost a cool $50 because I couldn’t find anyone who would pirate the ebook version from Oxford Scholarship Online. So it goes.) It’s fairly interesting - I’m going to have to edit my DNB FAQ based on chapter 3 - and presents a two-process model of IQ and rationality, arguing that lack of reflection or meta-cognition explains why IQ tests can accurately measure IQ but still fail to correlate as much as one would expect with performance measures like the Cognitive Reflection Test. Naturally, one of the first things I did was go through the index and look at the pages dealing with sunk cost so I could use them in my essay; Bruine de Bruin et al 2007 was the first useful reference to pop up, and once I read that, I had to return to Parker & Fischhoff 2005, which I had previously used only as a citation for the lack of correlation between IQ and sunk cost vulnerability.

The sunk cost material in these 2 studies is interesting: they replicated the minimal correlation of sunk cost avoidance with IQ, but sunk cost (and ‘path independence’) exhibited fascinating behaviors compared to the other biases/fallacies measured: sunk cost & path independence correlated minimally with the other biases/fallacies, their Cronbach’s alphas were almost uselessly low (in the first study, 0.03! Wikipedia tells me acceptable reliability only starts at 0.50…), education did not help much, age helped some, and sunk cost had low correlations with risky behavior and the DOI even before corrections (e.g. 0.13 after controlling for decision-making styles).
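For concreteness, Cronbach’s alpha is just a function of how much a subscale’s items co-vary relative to their individual noise. A minimal sketch (the data here is simulated, not drawn from either study) shows why an alpha of 0.03 means the sunk-cost items barely hang together as a scale:

```python
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)),
    for an (n_subjects, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
trait = rng.normal(size=500)
# 4 items all tapping one underlying trait: alpha comes out high
consistent = trait[:, None] + 0.5 * rng.normal(size=(500, 4))
# 4 mutually unrelated items: alpha hovers near zero, like the 0.03 above
unrelated = rng.normal(size=(500, 4))
```

With these (made-up) parameters the first matrix yields an alpha above 0.9, while the second stays near zero: an alpha of 0.03 tells you the subscale’s items are essentially measuring different things.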

This alone is very interesting. I wound up arguing as I read through the sunk cost literature that it was probably not a serious issue, but this is actually a far more striking result than I expected. I expected sunk cost to correlate with the other biases, just on the general type-1/type-2 reasoning account (people falling for the intuitively appealing sunk cost answer and not reflecting carefully on the question’s exact logical structure); I expected to have to point out that this is just a correlation which could be explained away in that general fashion, and to point out further that this wouldn’t show that training sunk cost avoidance would improve any bad behavior - any more than memorizing vocabulary genuinely improves your IQ score. But it turns out there’s not much of a correlation for me to explain away! And we’re talking about some 470 test subjects here - more than most of the studies I use in the essay in the first place!

Now, for the other questions. The fallacies don’t factor well into any general ‘rationality quotient’: a single factor explains only 25% of the variance in Parker & Fischhoff. Bruine de Bruin does a little better:

“Table 4 further shows a two-factor solution using the principal factors method with oblimin rotation, which allows nonorthogonal factors. The two factors account for 46.2% of the variance and are correlated (r = .30, p < .001). Except for Resistance to Sunk Costs and Path Independence, all tasks have loadings of at least .30 on the first factor. These loadings resemble those of the one-factor solution. Recognizing Social Norms, Resistance to Sunk Costs, and Path Independence have a higher loading on the second factor, but the latter remains under .30. The two-factor solution does not correspond to the three-factor solution reported for the Y-DMC (Parker & Fischhoff, 2005). Nor does either factor solution correspond to any of the three task characteristics highlighted in Table 1: response mode, criterion, or general decision-making skills.”

In contrast, I believe g as a single factor accounts for more than 50% of variance on IQ tests. If we look at table 3, page 8 of Bruine de Bruin, we see how each fallacy correlates with the others: the correlations tend to be 0–0.3, with nothing higher than 0.43 (‘Applying Decision Rules’ × ‘Consistency in Risk Perception’).
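The contrast can be made concrete with a toy calculation: for a k-task battery in which every pair of tasks correlates at the same r, the correlation matrix’s largest eigenvalue is 1 + (k−1)r, so the first factor’s variance share follows directly from the average inter-correlation. A sketch under that idealized uniform-correlation assumption (not the studies’ actual matrices):

```python
import numpy as np

def first_factor_share(k, r):
    """Variance share of the first principal component of a k x k
    correlation matrix with uniform off-diagonal correlation r."""
    corr = np.full((k, k), r)
    np.fill_diagonal(corr, 1.0)
    eigvals = np.linalg.eigvalsh(corr)
    return eigvals.max() / eigvals.sum()

# 7 DMC tasks inter-correlating ~0.125: first factor takes ~25% of variance
# IQ subtests inter-correlating ~0.45: first factor (g) takes >50%
print(first_factor_share(7, 0.125), first_factor_share(7, 0.45))
```

So weak pairwise correlations of ~0.1–0.15 are exactly what a 25%-of-variance single factor looks like; a g-like 50%+ factor requires average inter-correlations around 0.45.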

And the practical cash out of susceptibility to this grab bag? Well, in Bruine de Bruin, we get correlations with the DOI (uncontrolled for SES/IQ/age/style) not exceeding 0.26 for any of the 7 components, and an overall correlation of 0.29. (SES: 0.2; IQ: 0.26; age: 0.31; style: 0.14.)

Is 0.3 the correlation you expected? Would training on those 7 fallacies affect the underlying cause of lower performance on the DOI? How much training would this take? Would it be worthwhile?