This is the second part in a mini-sequence presenting content from Keith E. Stanovich's excellent book What Intelligence Tests Miss: The psychology of rational thought. It will culminate in a review of the book itself.

Noting that there are many different kinds of bias, Keith Stanovich proposes a classification scheme with two primary categories: the Cognitive Miser and Mindware Problems. Today, I will discuss the Cognitive Miser category, which has the subcategories of Default to the Autonomous Mind, Serial Associative Cognition with a Focal Bias, and Override Failure.

The Cognitive Miser

Cognitive science suggests that our brains use two different kinds of systems for reasoning: Type 1 and Type 2. Type 1 is quick, dirty and parallel, and requires little energy. Type 2 is energy-consuming, slow and serial. Because Type 2 processing is expensive and can only work on one or at most a couple of things at a time, humans have evolved to default to Type 1 processing whenever possible. We are "cognitive misers" - we avoid unnecessarily spending Type 2 cognitive resources and prefer to use Type 1 heuristics, even though this might be harmful in a modern-day environment.

Stanovich further subdivides Type 2 processing into what he calls the algorithmic mind and the reflective mind. He argues that the reason why high-IQ people can fall prey to bias almost as easily as low-IQ people is that intelligence tests measure the effectiveness of the algorithmic mind, whereas many reasons for bias can be found in the reflective mind. An important function of the algorithmic mind is to carry out cognitive decoupling - to create copies of our mental representations about things, so that the copies can be used in simulations without affecting the original representations. For instance, a person wondering how to get a fruit down from a high tree will imagine various ways of getting to the fruit, and by doing so he operates on a mental concept that has been copied and decoupled from the concept of the actual fruit. Even when he imagines the things he might do to the fruit, he never confuses the fruit he has imagined in his mind with the fruit that's still hanging in the tree (the two concepts are decoupled). If he did, he might end up believing that he could get the fruit down by simply imagining himself taking it down. High performance on IQ tests indicates an advanced ability for cognitive decoupling.

In contrast, the reflective mind embodies various higher-level goals as well as thinking dispositions. Various psychological tests of thinking dispositions measure things such as the tendency to collect information before making up one's mind, the tendency to seek various points of view before coming to a conclusion, the disposition to think extensively about a problem before responding, the tendency to calibrate the degree of strength of one's opinion to the degree of evidence available, the tendency to think about future consequences before taking action, the tendency to explicitly weigh pluses and minuses of situations before making a decision, and the tendency to seek nuance and avoid absolutism. All things being equal, a high-IQ person would have a better chance of avoiding bias if they stopped to think things through, but a higher algorithmic efficiency doesn't help them if it's not in their nature to ever bother doing so. In tests of rational thinking where the subjects are explicitly instructed to consider the issue in a detached and objective manner, there's a correlation of .3 to .4 between IQ and test performance. But if such instructions are not given, and people are free to reason in a biased or unbiased way as they wish (as in real life), the correlation between IQ and rationality falls to nearly zero!

Modeling the mind purely in terms of Type 1 and Type 2 systems would do a poor job of explaining why intelligent people only do better at good thinking if you tell them in advance what "good thinking" is. The situation is much better explained by a three-level model, in which the reflective mind may choose to "make a function call" to the algorithmic mind, which in turn will attempt to override the autonomous Type 1 processes. A failure of rationality may happen if the reflective mind fails to activate the algorithmic mind, if the algorithmic mind is activated but engages only shallowly, or if the algorithmic mind fails to override the autonomous mind. This gives us a three-way classification of this kind of bias.
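As a rough illustration of that structure (a toy sketch of my own, not from the book; the function and field names are invented), here is the three-level model written out as Python, with one branch for each of the three failure modes discussed in the sections below:

```python
# Toy sketch of the three-level model (illustrative only, not from the book).
# Each "if not ..." branch corresponds to one failure mode discussed below.
def respond(problem, reflective_engages, simulates_fully, override_succeeds):
    gut_response = problem["intuitive_answer"]          # autonomous (Type 1) output

    if not reflective_engages:
        return gut_response                              # default to the autonomous mind

    if not simulates_fully:
        return problem["focal_answer"]                   # serial associative cognition
                                                         # with a focal bias

    considered_response = problem["simulated_answer"]    # full cognitive decoupling
    if not override_succeeds:
        return gut_response                              # override failure

    return considered_response
```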

Default to the Autonomous Mind

Defaulting to the autonomous mind is the most shallow kind of thought, where no Type 2 processing is done at all. The reflective mind fails to react and activate the algorithmic mind. Stanovich considers biases such as impulsively associative thinking and affect substitution (evaluating something primarily based on the affect, or emotional reaction, it evokes) to be caused by this failure.

Serial Associative Cognition with a Focal Bias

In this mode of thinking, Type 2 processes are engaged, but they are too conservative in their use of resources. For instance, consider the following problem (answer in rot13 below):

Jack is looking at Anne, but Anne is looking at George. Jack is married, but George is not. Is a married person looking at an unmarried person? A) Yes B) No C) Cannot be determined.

Gur pbeerpg nafjre, juvpu yrff guna 20 creprag bs crbcyr trg, vf N. Vs Naar vf zneevrq, gura gur nafjre vf "Lrf", orpnhfr fur'f ybbxvat ng Trbetr jub'f hazneevrq. Vs Naar vf hazneevrq, gura gur nafjre vf "Lrf" orpnhfr fur'f orvat ybbxrq ng ol Wnpx, jub'f zneevrq.
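A brute-force version of the same reasoning (my own sketch, not from the book; running it prints the answer, so treat it as a spoiler like the rot13 text above):

```python
# Enumerate both possibilities for Anne's marital status and check, in each
# case, whether some married person is looking at some unmarried person.
looking_at = {"Jack": "Anne", "Anne": "George"}
known_status = {"Jack": True, "George": False}  # True = married

outcomes = set()
for anne_married in (True, False):
    married = dict(known_status, Anne=anne_married)
    outcomes.add(any(married[a] and not married[b] for a, b in looking_at.items()))

if outcomes == {True}:
    print("Yes")
elif outcomes == {False}:
    print("No")
else:
    print("Cannot be determined")
```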

In this example, people frequently concentrate too much on a single detail and get the answer wrong. There are numerous biases of a similar kind. For instance, when asked to guess the number of murders in Detroit (which is located in Michigan), people give a higher number than when asked to guess the number of murders in the entire state of Michigan. This is because people are using crude, affect-laden images of the locations in question to generate their guess. Vividness, salience and accessibility of various pieces of information have an overly strong effect on our thinking, becoming the main focal point of our evaluation. Focal bias is also involved in biases such as framing effects (the presented frame is taken as focal), the Wason selection task, motivated cognition, and confirmation bias.

Override Failure

In an override failure, Type 2 processes notice that Type 1 systems are attempting to apply rules or heuristics that are not applicable to the situation at hand. As a result, the Type 2 processes attempt to initiate an override and take the Type 1 systems offline, but for whatever reason they fail to do so. Override failures can be divided into two categories: "cold" and "hot" ones. For an example of a "cold" case, consider the following syllogism:

  • Premise: All living things need water
  • Premise: Roses need water
  • Conclusion: Roses are living things

The above reasoning is invalid ("living thing" implies "needs water", but "needs water" does not imply "living thing"), but many people will instinctively accept it, because the conclusion happens to be true. It's an example of a situation requiring a cold override, where you need to override a natural response with a rule-based one. In another example, test subjects were presented with two cans of jelly beans. One of the cans had nine white jelly beans and one red jelly bean. The other had eight red jelly beans and ninety-two white jelly beans. The subjects were told to pick one of the cans and then draw a jelly bean at random from their chosen can: if they got a red one, they'd win a dollar. Most picked the can with one red jelly bean (a 10% chance), but 30 to 40 percent of the subjects picked the one with the worse (8%) odds. Many of them knew that they were making a mistake, but the larger absolute number of red beans was too enticing. One commented afterwards: "I picked the one with more red jelly beans because it looked like there were more ways to get a winner, even though I knew there were also more whites, and that the percents were against me."
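For reference, the odds in the two cans (a bit of arithmetic of my own, not from the book):

```python
# Probability of drawing a red (winning) bean from each can.
def win_probability(red, white):
    return red / (red + white)

print(win_probability(red=1, white=9))    # 0.10 -> the 10% can
print(win_probability(red=8, white=92))   # 0.08 -> the 8% can
```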

A "hot" override, on the other hand, is one where strong emotions are involved. In what's likely to be a somewhat controversial example around here, Stanovich discusses the trolley problem. He notes that most people would choose to flip the switch sending the trolley to the track where it kills one person instead of five, but that most people would also say "no" to pushing the fat man on the tracks. He notes that this kind of a scenario feels more "yucky". Brain scans of people being presented various variations of this dilemma show more emotional activity in the more personal variations. The people answering "yes" to the "fat man"-type dilemmas took a longer time to answer, and scans of their brain indicated activity in the regions associated with overriding the emotional brain. They were using Type 2 processing to override the effects of Type 1 emotions.

Stanovich identifies denominator neglect (the jelly bean problem), belief bias effects ("roses are living things"), self-control problems such as the inability to delay gratification, as well as moral judgement failures as being caused by an override failure.

38 comments

I find the post not jargony or heavy - it is fine. Type 1 is obviously uninterrupted, unconscious cognition and Type 2 is cognition that rises to consciousness in order to use working memory. This is why Type 2 is slower - it is done stepwise with a trip through consciousness between each major step in cognition rather than continuous cognition. That is why it is restrictive too - it can only deal with the limited number of items that can be held in working memory, at least at the point where working memory is used. But Type 2 is more controlled and therefore can be more logical, because as steps in the cognition process pass through consciousness they can be accessed by parts of the cortex that were not involved in the cognition but can judge it, like parts of the frontal cortex. I personally like Type 1 and Type 2 rather than unconscious and conscious because it avoids the implication that unconscious and conscious are both ways to do cognition. Consciousness is only awareness of otherwise unconscious cognition. I am voting up.

Can we perhaps come up with some better names than "Type 1" and "Type 2"? Those aren't suggestive at all.

Can we go further than this and declare a blanket moratorium on "1 and 2" or "a and b" taxonomies?


Talk to the statisticians. They've been using "Type I error" and "Type II error" instead of "false positive" and "false negative" for ages.

In this case, though, I had much less trouble than with the statistical errors. Possibly because those are essentially the same thing, differentiated only by which hypothesis is "null". Here, though, a Type 1 system and a Type 2 system are actually very different things. Plus as others have mentioned the ordering on the systems does make sense.

Talk to the statisticians. They've been using "Type I error" and "Type II error" instead of "false positive" and "false negative" for ages.

They're still bad names. It's like making new word processor documents and leaving them titled "Untitled 1" and "Untitled 2" instead of something descriptive.

I remember learning this and absolutely hating statisticians for it

Talk to the statisticians. They've been using "Type I error" and "Type II error" instead of "false positive" and "false negative" for ages.

Some people can not be saved. That is absolutely idiotic.

Yes please!

Where does that come from, anyway? I read this book and found it much easier to digest than this post - though I'm fairly new and have hardly read the other sequences.

EDIT: 'sequence' seemed a bit harsh; I liked What Cost for Irrationality?

Huh. I wondered why this post isn't getting any more upvotes. Is that because it's hard to understand? Am I trying to say too much in too few words?

Am I trying to say too much in too few words?

More words would be a bad thing. Perhaps too much content and too many arbitrary new names for things for one post.

For my part I just hadn't had time to read the post until now so the vote just required patience. Another consideration is that people haven't had their attention constantly dragged back to the post, prompting them to read it and vote. There just isn't all that much that is controversial in the content to prompt extensive debate or analysis. I just read "a bunch of biases that I already know and categorizations that I'm neither for nor against."

It's a little dense and jargon-y. I feel like I haven't loaded terms in my working memory and so need to re-look them up. But who knows? Maybe it's something trivial like formatting.

Then again, it's late here and I should take another crack when I'm well rested.

Rather than too much jargon, it seemed there were too many vague names given to existing concepts. These seem to be shortcomings of the text's author.

I would have more confidence in the author's model if there was a failure mode given for each element of the cognitive model. Unfortunately the chart I created to display the lack of correspondence keeps collapsing when I post it...

The coupling of type 1 and 2 override failures seems weak; a logical failure is not the same as an ethical dispute except in the minds of pure utilitarians.

"Serial Associative Cognition with a Focal Bias, " which I might have referred to as mere "focal bias," might make use of this fun psych experiment {rot13 -spoiler}tbevyyn movie.

Once the next segment of the review comes along, it may be worthwhile to compare it against Wikipedia's list of fallacies and cognitive distortions as an initial check for completeness.

It's a little dense and jargon-y. I feel like I haven't loaded terms in my working memory and so need to re-look them up.

You are perhaps referring to short term memory there. Or "haven't kept the terms in my working memory". Trying to solve the problem with Jack, Anne and George would displace "Type 1 and 2" from working memory regardless and you would need them in short term memory to keep engaging with them once finished with that task.

I thought working memory had subsumed short term memory conceptually. Anyway, yes, I meant 'kept'.

It seems to vary (up to the extent of working memory being considered a part of long term memory which happens to be the subject of focus!). I suspect 'medium-term memory' is more what I am referring to but some just classify that as 'long term memory that you possibly will not bother consolidating all that much'. Whatever it is when we 'get' the thing we are reading but then fill our working memory with random numbers.

I think "System 1" is becoming standard, supplanting the old-fashioned "Type 1." I think the mnemonic is that "System 1" acts first and only sometimes falls back to "System 2."

That's true; it was a mistake to say they aren't suggestive at all. There does seem to be a natural ordering here. I guess it's not so bad in this context. "System 1/2" is a bit more distinctive, I agree.

Type 1: Implicit reasoning

Type 2: Explicit reasoning

Oh, also: the OP refers to Type 1 as being "autonomous" and Type 2 as being "algorithmic", so another option would be to just stick with those words.

He uses "Autonomous" for Type 1. "Conscious" might work for Type 2.

You threw in an "Autonomous" instead of "Type 1" in the middle there. Apart from being a lousy name (I just don't think the association is at all right), it made me assume that you were introducing a new type. Particularly because the previous sentence said you were moving to a 3-type classification!

Of course, if we think it's exhaustive, we can just name one and call the other "non-" that...

Because Type 2 processing is expensive and can only work on one or at most a couple of things at a time, humans have evolved to default to Type 1 processing whenever possible.

We also got Type 1 from our animal heritage and evolution basically hacked Type 2 on for a few animals but mostly for us. We haven't evolved a tendency to use Type 2 because we mostly suck at it. It also relies on reasoning from consciously known premises. Those who are most inclined to override Type 1 with Type 2 often get bitten in the arse by it because much of what most people believe is crap that just sounds good when signalling.

You write "We haven't evolved a tendency to use Type 2 because we mostly suck at it."

Maybe "type 2" is generally expensive, as opposed to specifically expensive for humans because humans happen to mostly suck at it. It seems pretty common in successful search-based solutions to AI problems (like planning, pathing, or adversarial games) to use something analogous to the "type 1" vs. "type 2" split, moving a considerable amount of logic into "type 1"-like endpoint evaluation and/or heuristic hints for the search, then layering "type 2"-like search over that and treating the search as expensive. Even in problems that have been analyzed to death by hundreds or thousands of independent programmers (playing chess, e.g.) that design theme persists and enjoys competitive success. The competitive success of this theme across hundreds of independent designs doesn't eliminate the possibility that this is just a reflection of a blind spot in human's design ability, but it seems to me that that the success at least casts considerable doubt on the blind spot explanation. Perhaps we should take seriously the idea that the two layer theme and/or the relative expense of the two layers are properties of good solutions to this kind of problem, not merely idiosyncrasies of the human brain.

Maybe "type 2" is generally expensive, as opposed to specifically expensive for humans because humans happen to mostly suck at it.

I suggest, rather, that type 2 is universally cheaper if done efficiently and that we use type 1 preferentially because our "type 2" thought is essentially emulated in "type 1" architecture.

Most of what I did when creating AI was finding more intelligent ways to program the type 2 system such that I could reduce the calls to the expensive type 1 evaluation system.

I would be interested in more details on that, because my experience very much matches what wnewman said: it's more efficient to do as many things as reasonably possible in blind hardwired code than with search or explicit logical inference.

What sort of problems were you trying to solve, and what sort of architecture were you using that you found type 2 more efficient than type 1?

Perhaps I am merely extracting a different message from the "Type 1" and "Type 2" distinction. When I look at the function of the human brain "Type 1" is orders of magnitude more powerful when implemented in mammal architecture than the pathetic excuse for "type 2" that we have. 7 bits and 200 hz? Seriously? How is Type 2 supposed to work with that? But it occurs to me that there are all sorts of arbitrary distinctions that can be drawn between '1' and '2' when generalized away from humans. My own interpretation is obviously going to be biased by different prior models.

The case where conscious processes override the correct answer of the autonomous processes is worth mentioning too. Conformity bias seems to fit in that category, for example.

One of the cans had nine white jelly beans and one red jelly bean. The other had eight white jelly beans and ninety-two red jelly beans. The subjects were told to pick one of the cans and then draw a jelly bean at random from their chosen can: if they got a red one, they'd win a dollar. Most picked the can with one red jelly bean (a 10% chance) but 30 to 40 percent of the subjects picked the one with the worse (8%) odds.

If there are 92 red and 8 white jelly bellies, there is a 92% chance of getting a red jelly belly. Did you invert the colors?

...yes, I did. I seem to be awful at correcting for belief bias effects: I hope it's only for silly writing mistakes like these, and not in general...

Anyway, thanks. Corrected.

"I picked the one with more red jelly beans because it looked like there were more ways to get a winner, even though I knew there were also more whites, and that the percents were against me."

I think that cognitive psychology experiments should be legally obliged to sterilize people like that. Think of the long term benefit. ;-)

Note that this is not a serious suggestion

Wait, rationality is genetic? Since when?

Wait, rationality is genetic? Since when?

Possibly not, but the desire to sterilize inconvenient people probably is.

Wait, rationality is genetic? Since when?

This raises a question: has anyone done twin studies to determine whether any aspects of rationality do have a genetic element?

IQ improves rationality somewhat, and IQ is substantially heritable.

Furthermore, consider reasoning with a prior over the heritability coefficient.

I see - you're right that I don't have any data against heritability of rationality.

(And JoshuaZ will need to take the IQ into account when doing the twin studies.)