Suppose you ask subjects to press one button if a string of letters forms a word, and another button if the string does not form a word (e.g., “banack” vs. “banner”). Then you show them the string “water.” Later, they will more quickly identify the string “drink” as a word. This is known as “cognitive priming”; this particular form would be “semantic priming” or “conceptual priming.”
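
    To make the setup concrete, here is a minimal sketch of what a lexical-decision trial loop might look like. It is illustrative only: the trial list is invented, and timing keypresses with input() is crude compared to the calibrated presentation software real experiments use.

```python
# Illustrative sketch of a lexical-decision task, not a real experimental setup.
# Timing via input() is crude; actual studies use calibrated presentation software.
import time

# Hypothetical trial list: (letter string, is it a word?). Note that "water"
# precedes "drink", so any speedup on "drink" would reflect semantic priming.
TRIALS = [("water", True), ("banack", False), ("drink", True), ("banner", True)]

for letter_string, is_word in TRIALS:
    start = time.monotonic()
    response = input(f"Is '{letter_string}' a word? (y/n) ").strip().lower() == "y"
    rt_ms = (time.monotonic() - start) * 1000
    print(f"  correct={response == is_word}  reaction time={rt_ms:.0f} ms")
```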

    The fascinating thing about priming is that it occurs at such a low level—priming speeds up identifying letters as forming a word, which one would expect to take place before you deliberate on the word’s meaning.

    Priming also reveals the massive parallelism of spreading activation: if seeing “water” activates the word “drink,” it probably also activates “river,” or “cup,” or “splash” . . . and this activation spreads, from the semantic linkage of concepts, all the way back to recognizing strings of letters.
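
    As a toy illustration of spreading activation (a sketch only; the network, decay constant, and threshold below are invented, not taken from any model in the priming literature), consider activation bleeding outward from “water” through a small hand-built semantic network:

```python
# Toy spreading-activation sketch (illustrative only; not a model from the
# priming literature). The links and constants are made up.
from collections import defaultdict

# A tiny hand-built semantic network: each concept links to related concepts.
SEMANTIC_LINKS = {
    "water": ["drink", "river", "cup", "splash"],
    "drink": ["cup", "thirst"],
    "river": ["splash"],
}

def spread_activation(source, initial=1.0, decay=0.5, threshold=0.1):
    """Propagate activation outward from `source`, halving at each hop,
    until it falls below `threshold`. Returns concept -> activation level."""
    activation = defaultdict(float)
    frontier = [(source, initial)]
    while frontier:
        node, level = frontier.pop()
        if level < threshold or level <= activation[node]:
            continue
        activation[node] = level
        for neighbor in SEMANTIC_LINKS.get(node, []):
            frontier.append((neighbor, level * decay))
    return dict(activation)

if __name__ == "__main__":
    # Seeing "water" leaves "drink", "river", etc. partially active,
    # so a later lexical decision on those strings completes faster.
    print(spread_activation("water"))
```

    Activating “water” leaves “drink,” “river,” and the rest partially active, which is the sense in which a later lexical decision on those strings could complete faster.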

    Priming is subconscious and unstoppable, an artifact of the human neural architecture. Trying to stop yourself from priming is like trying to stop the spreading activation of your own neural circuits.

    Try making a set of index cards with words like Brown written in randomly assigned colors: a red Green, a blue Yellow, and so on. Try to say aloud the color—not the meaning, but the color—of the letter-strings.
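
    If you would rather not cut out index cards, here is a rough terminal version of the same exercise (a sketch that assumes a terminal supporting standard ANSI color codes; the word list is arbitrary):

```python
# Quick-and-dirty Stroop-style demo: prints color words in mismatched ink
# colors using ANSI escape codes. Terminal support is assumed, not guaranteed.
import random

ANSI = {"red": "\033[31m", "green": "\033[32m",
        "yellow": "\033[33m", "blue": "\033[34m"}
RESET = "\033[0m"

words = list(ANSI)
for word in random.sample(words, len(words)):
    # Pick an ink color that never matches the word itself.
    ink = random.choice([c for c in words if c != word])
    print(f"{ANSI[ink]}{word.upper()}{RESET}")

# Try to name the ink colors aloud, top to bottom, as fast as you can.
```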

    In Mussweiler and Strack’s experiment, subjects were asked an anchoring question: “Is the annual mean temperature in Germany higher or lower than 5°C / 20°C?”1 Afterward, on a word-identification task, subjects given the low (5°C) anchor were faster to identify words like “cold” and “snow,” while subjects given the high (20°C) anchor were faster to identify “hot” and “sun.” This shows a non-adjustment mechanism for anchoring: priming compatible thoughts and memories.

    The more general result is that completely uninformative, known false, or totally irrelevant “information” can influence estimates and decisions. In the field of heuristics and biases, this more general phenomenon is known as contamination.2

    Early research in heuristics and biases discovered anchoring effects, such as subjects giving lower (higher) estimates of the percentage of UN countries found within Africa, depending on whether they were first asked if the percentage was more or less than 10% (65%). This effect was originally attributed to subjects adjusting from the anchor as a starting point, stopping as soon as they reached a plausible value, and under-adjusting because they were stopping at one end of a confidence interval.3
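
    As a worked illustration of that adjustment story (the “plausible range” of 20–35% below is invented for illustration, not taken from the study), an adjuster who starts at the anchor and stops at the first plausible value lands at opposite edges of the same confidence interval depending on which anchor they start from:

```python
# Toy illustration of anchor-then-adjust with early stopping (invented numbers,
# not data from Tversky and Kahneman's study).
def adjust_from_anchor(anchor, plausible_low=20, plausible_high=35, step=1):
    """Move from the anchor toward the plausible range and stop at its edge."""
    estimate = anchor
    while estimate < plausible_low:
        estimate += step      # adjust upward from a low anchor
    while estimate > plausible_high:
        estimate -= step      # adjust downward from a high anchor
    return estimate

print(adjust_from_anchor(10))   # low anchor  -> stops at 20 (low edge)
print(adjust_from_anchor(65))   # high anchor -> stops at 35 (high edge)
```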

    Tversky and Kahneman’s early hypothesis still appears to be the correct explanation in some circumstances, notably when subjects generate the initial estimate themselves. But modern research seems to show that most anchoring is actually due to contamination, not sliding adjustment.4

    Your grocery store probably has annoying signs saying “Limit 12 per customer” or “5 for $10.” Are these signs effective at getting customers to buy in larger quantities? You probably think you’re not influenced. But someone must be, because these signs have been shown to work. Which is why stores keep putting them up.5

    Yet the most fearsome aspect of contamination is that it serves as yet another of the thousand faces of confirmation bias.6 Once an idea gets into your head, it primes information compatible with it—and thereby ensures its continued existence. Never mind the selection pressures for winning political arguments; confirmation bias is built directly into our hardware, associational networks priming compatible thoughts and memories. An unfortunate side effect of our existence as neural creatures.

    A single fleeting image can be enough to prime associated words for recognition. Don’t think it takes anything more to set confirmation bias in motion. All it takes is that one quick flash, and the bottom line is already decided, for we change our minds less often than we think . . .

    1Thomas Mussweiler and Fritz Strack, “Comparing Is Believing: A Selective Accessibility Model of Judgmental Anchoring,” European Review of Social Psychology 10, no. 1 (1999): 135–167.

    2Gretchen B. Chapman and Eric J. Johnson, “Incorporating the Irrelevant: Anchors in Judgments of Belief and Value,” in Heuristics and Biases, ed. Thomas Gilovich, Dale Griffin, and Daniel Kahneman (New York: Cambridge University Press, 2002), 120–138.

    3Tversky and Kahneman, “Judgment Under Uncertainty.”

    4Nicholas Epley and Thomas Gilovich, “Putting Adjustment Back in the Anchoring and Adjustment Heuristic: Differential Processing of Self-Generated and Experimenter-Provided Anchors,” Psychological Science 12, no. 5 (2001): 391–396.

    5Brian Wansink, Robert J. Kent, and Stephen J. Hoch, “An Anchoring and Adjustment Model of Purchase Quantity Decisions,” Journal of Marketing Research 35, no. 1 (1998): 71–81, http://www.jstor.org/stable/3151931.

    6See “The Third Alternative,” “Knowing About Biases Can Hurt You,” “One Argument Against An Army,” “What Evidence Filtered Evidence?”, and “Rationalization.” And “Hindsight Devalues Science,” “Fake Causality,” and “Positive Bias: Look into the Dark” in Map and Territory. And the rest of this book.

    "Yet the most fearsome aspect of contamination is that it serves as yet another of the thousand faces of confirmation bias. Once an idea gets into your head, it primes information compatible with it - and thereby ensures its continued existence."

    I am not sure I understand this. Once an idea gets into my head, my brain should prime all information related to the idea, not just information that is compatible with the idea. I am of course not denying the existence of confirmation bias, just trying to understand how priming in particular can promote it.

    Once an idea gets into my head, my brain should prime all information related to the idea, not just information that is compatible with the idea.

    Because the terrifying truth is that compatible information is primed much more strongly than contrary information. Both are logically related, yes; but the brain is not, in that respect, logical. It should be, but it isn't. If someone asks you whether the average temperature in Germany is more or less than 5 degrees Celsius, "cold" is primed more than "hot". That is just how our brain sorta works.

    What can we do about this? Can we reduce the effects of contamination by consciously avoiding contaminating input before making an important decision? Or does consciously avoiding it contaminate us?

    I had to look at the html source where you said "Try to say aloud the color - not the meaning, but the color - of the following letter-string: "GREEN"" because I'm colorblind and I couldn't tell what color it was. Small amounts of red or green appear to be BOTH red and green simultaneously haha (show me a giant field of green and I can tell it's green most of the time, but show me a dot of green on a field of white and I have no clue, same with red). I guess that really isn't relevant to anything said here, I just thought it was funny considering the point of the exercise.

    Same here. I had to look at the HTML source for the color code: #ff3300. But I figured that it wasn't green before I looked, because I guess I had been primed to expect it not to be the case. At least I think I did.

    Yeah. Somebody should change it to Blue. Blue-Yellow colour-blindness is far more rare than red-green, so more people would "get" the example ;)

    Same here. Though the fact that I initially thought it was green, then managed to resolve it as red is probably a good example of priming in itself.

    Is it a statistical artifact, however, or a genuine intellectual one? That is, those who genuinely have no clue whatsoever in regard to the number of UN nations in Africa might take information about it as a weak sort of evidence - I don't know, so I'll go with a figure I've encountered that is associated with this question. Similarly, someone who is not familiar with pricing may see a "Limit 12" and believe, because of the presence of the sign, that the pricing - regardless of what it is, because they don't have comparative information - is extremely good.

    Which is to say, your examples may come from subject-matter ignorance rather than priming, and conceptual priming may not be quite as contaminative as these studies suggest.

    Adrian, priming still works even if subjects see that the number came from a Wheel-of-Fortune-style random spin.

    Which still doesn't say anything about the impact of priming on an individual's decision-making process regarding a matter they are well-informed on - because weak correlation is still better than no correlation.

    Another practical example of this: When asking for ideas don't give examples of the ideas. Today I asked someone for a list of various non-mammal animal prints. For clarification I used the examples of bird feathers and monarch butterflies. But I had already thought of those and was looking for more. It took a little while to get feathers, butterflies, and mammals out of her head. Once we had moved on, I got some great answers, but the beginning was tricky.

    The annoying part for me was that I wouldn't have spent any more time by just asking for animal prints and after she thought of mammals telling her, "We got those already, what else do you have?" Of course, I realized this one sentence too late. Ah well.

    "What's their house number? Is it number 73?" <- never do this!

    Yep. This is even more obvious with kids. Asking "What happened?" is much more likely to result in the truth than asking, "What happened? Did you hit him?"

    Or, "How old are you?" versus, "How old are you? Are you five?"

    On the other hand, if you want to use this to your advantage, you can ask, "Do you want fries with that?" Relatedly, a server friend of mine has noticed that the easiest way to get higher tabs is to start nodding when asking if they want extras.

    If you look for this behavior in interviews you will do much better. It is surprising how much the people interviewing you want you to succeed, and how often they will prime the answers to the questions they are asking. (Or not, I guess, considering that if you succeed, they take that as evidence you are a valuable asset to their company...)

    While I respect priming and contamination as a bias, I think you've overdramatized it in this article. Similar exaggerations of scientific findings for shock purposes had, up until recently, made me paranoid about attacks on my decision-making process, and not just cognitive bias either. In fact, this was before I read LW; I don't think I even considered cognitive biases other than what you call contamination here, and it still seriously screwed me up emotionally and socially.

    So yes, concepts will cause someone to think of related, maybe compatible concepts. No, this is not mind control, and no, a flashed image on the screen will not rewrite all your utility functions, make you a paperclip maximizer, and kill your dog.

    Thank you. I started to feel like I was reading the patter of a Derren Brown act.

    Re-reading this post just now, I find it funny that I thought your comment over-dramatized, and much more than the post itself.

    It's almost like you've been primed to think of rewritten utility functions and paperclip-maximizers by something in this post other than its explicit contents.

    I first heard of cognitive priming on a TED talk where a guy from Skeptic magazine was explaining 'pseudoscience and weird beliefs'. They played a popular song backward, most of the audience couldn't hear anything that sounded like words. But when the supposed 'lyrics' of the backward song were put on the screen, everyone could clearly hear the words 'satan' and '666' and entire sentences that were supposedly there. It was easy to hear once we were 'primed' for it, even though normally no one would have heard anything but gibberish.

    Sounds a lot like Simon Singh's demonstration with "Stairway to Heaven".

    Much the same trick can work, of course, with a song played forwards that has (entirely different) words. Here's one particularly nice example.

    I wonder whether that would have worked with better sound quality. I listened to it once without looking at the subtitles, and I couldn't understand a word.

    "Yet the most fearsome aspect of contamination is that it serves as yet another of the thousand faces of confirmation bias."

    A horrible thing, if you look at it from the standpoint of the cognitive process of an [individual] ant. (Not that a lot of cognition is expected to go on in the head of a single ant.) And some useful insights into the cognitive process of the anthill as a whole, if you try to look at it from another angle.

    Our subcultures actually do some cognition. They get something done. They do come up with some workable models of the real world. Then we tend to attach some label (say, "Newton") to the results... without going into all the complexity contained in that particular subculture.

    http://mat33.livejournal.com/716213.html?thread=683189#t683189

    The fascinating thing about priming is that it occurs at such a low level—priming speeds up identifying letters as forming a word, which one would expect to take place before you deliberate on the word's meaning.

    I would not expect this to take place before deliberating on a word's meaning. Think about it. How would you know if a string of letters is a word? If it corresponds to a meaning. Thus you have to search for a meaning in order to determine if the string of letters is a word. If it were a string of letters like alskjdfljasdfl, it would be obvious sooner, since it's unpronounceable and visually jarring, but something like "banack" could be a word, if it only had a meaning attached to it. So you have to check to see if there is a meaning there. So it doesn't seem all that strange to me that if you prime the neural pathways of a word's meaning, you'd recognize it as a word sooner.

    Did the experiments referenced here replicate?

    https://en.wikipedia.org/wiki/Priming_(psychology)

    "Although semantic, associative, and form priming are well established,[70] some longer-term priming effects were not replicated in further studies, casting doubt on their effectiveness or even existence.[71] Nobel laureate and psychologist Daniel Kahneman has called on priming researchers to check the robustness of their findings in an open letter to the community, claiming that priming has become a "poster child for doubts about the integrity of psychological research."[72] Other critics have asserted that priming studies suffer from major publication bias,[73] experimenter effect[66] and that criticism of the field is not dealt with constructively.[74]"