Oct 10, 2007
Suppose you ask subjects to press one button if a string of letters forms a word, and another button if the string does not form a word. (E.g., "banack" vs. "banner".) Then you show them the string "water". Later, they will more quickly identify the string "drink" as a word. This is known as "cognitive priming"; this particular form would be "semantic priming" or "conceptual priming".
The fascinating thing about priming is that it occurs at such a low level—priming speeds up identifying letters as forming a word, which one would expect to take place before you deliberate on the word's meaning.
Priming also reveals the massive parallelism of spreading activation: if seeing "water" activates the word "drink", it probably also activates "river", or "cup", or "splash"... and this activation spreads, from the semantic linkage of concepts, all the way back to recognizing strings of letters.
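The spreading-activation picture can be caricatured as a toy graph traversal. Everything below — the concepts, the link structure, the decay factor, and the latency numbers — is invented for illustration; this is a sketch of the idea, not a model of real neural data:

```python
# Toy model of spreading activation in a semantic network.
# All concepts, links, and numbers are hypothetical illustrations.

DECAY = 0.5  # fraction of activation passed along each link

# Hypothetical semantic links from each concept to its associates.
links = {
    "water": ["drink", "river", "cup", "splash"],
    "drink": ["cup", "thirst"],
}

def spread(source, depth=2):
    """Return the residual activation each concept receives from `source`."""
    activation = {source: 1.0}
    frontier = {source: 1.0}
    for _ in range(depth):
        nxt = {}
        for node, act in frontier.items():
            for neighbor in links.get(node, []):
                passed = act * DECAY
                activation[neighbor] = activation.get(neighbor, 0.0) + passed
                nxt[neighbor] = nxt.get(neighbor, 0.0) + passed
        frontier = nxt
    return activation

def decision_time(word, primed):
    """Simulated lexical-decision latency: a baseline minus any priming."""
    BASELINE_MS = 600
    return BASELINE_MS - 200 * primed.get(word, 0.0)

primed = spread("water")
# "drink" was pre-activated by "water", so it is identified faster
# than an unrelated word that received no activation:
assert decision_time("drink", primed) < decision_time("zebra", primed)
```

Note how activation fans out in parallel — "water" primes "drink", "river", "cup", and "splash" all at once — which is the massive parallelism the paragraph above describes.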
Priming is subconscious and unstoppable, an artifact of the human neural architecture. Trying to stop yourself from priming is like trying to stop the spreading activation of your own neural circuits. Try to say aloud the color—not the meaning, but the color—of the following letter-string: "GREEN"
In Mussweiler and Strack (1999), subjects were asked the anchoring question: "Is the annual mean temperature in Germany higher or lower than 5 Celsius / 20 Celsius?" Afterward, on a word-identification task, subjects given the low anchor of 5 Celsius were faster at identifying words like "cold" and "snow", while subjects given the high anchor were faster at identifying "hot" and "sun". This demonstrates a non-adjustment mechanism for anchoring: priming compatible thoughts and memories.
The more general result is that completely uninformative, known false, or totally irrelevant "information" can influence estimates and decisions. In the field of heuristics and biases, this more general phenomenon is known as contamination. (Chapman and Johnson 2002.)
Early research in heuristics and biases discovered anchoring effects, such as subjects giving lower estimates of the percentage of UN countries found within Africa after first being asked whether the percentage was more or less than 10, and higher estimates after being asked whether it was more or less than 65. This effect was originally attributed to subjects adjusting from the anchor as a starting point, stopping as soon as they reached a plausible value, and under-adjusting because they were stopping at one end of a confidence interval. (Tversky and Kahneman 1974.)
Tversky and Kahneman's early hypothesis still appears to be the correct explanation in some circumstances, notably when subjects generate the initial estimate themselves (Epley and Gilovich 2001). But modern research seems to show that most anchoring is actually due to contamination, not sliding adjustment. (Hat tip to Unnamed for reminding me of this—I'd read the Epley/Gilovich paper years ago, as a chapter in Heuristics and Biases, but forgotten it.)
Your grocery store probably has annoying signs saying "Limit 12 per customer" or "5 for $10". Are these signs effective at getting customers to buy in larger quantities? You probably think you're not influenced. But someone must be, because these signs have been shown to work, which is why stores keep putting them up. (Wansink et al. 1998.)
Yet the most fearsome aspect of contamination is that it serves as yet another of the thousand faces of confirmation bias. Once an idea gets into your head, it primes information compatible with it—and thereby ensures its continued existence. Never mind the selection pressures for winning political arguments; confirmation bias is built directly into our hardware, associational networks priming compatible thoughts and memories. An unfortunate side effect of our existence as neural creatures.
A single fleeting image can be enough to prime associated words for recognition. Don't think it takes anything more to set confirmation bias in motion. All it takes is that one quick flash, and the bottom line is already decided, for we change our minds less often than we think...
Chapman, G.B. and Johnson, E.J. 2002. Incorporating the irrelevant: Anchors in judgments of belief and value. In Gilovich, T., Griffin, D. and Kahneman, D. (eds.), Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press.
Epley, N. and Gilovich, T. 2001. Putting adjustment back in the anchoring and adjustment heuristic: Differential processing of self-generated and experimenter-provided anchors. Psychological Science, 12: 391-396.
Mussweiler, T. and Strack, F. 1999. Comparing is believing: A selective accessibility model of judgmental anchoring. European Review of Social Psychology, 10: 135-167.
Tversky, A. and Kahneman, D. 1974. Judgment under uncertainty: Heuristics and biases. Science, 185: 1124-1131.
Wansink, B., Kent, R.J. and Hoch, S.J. 1998. An Anchoring and Adjustment Model of Purchase Quantity Decisions. Journal of Marketing Research, 35(February): 71-81.