Today's post, Priming and Contamination, was originally published on 10 October 2007. A summary (taken from the LW wiki):

Contamination by Priming is a problem arising from how facts are implicitly introduced into the data set you attend to. When you are primed with a concept, facts related to that concept come to mind more easily. As a result, the data set your mind selects becomes tilted toward elements related to that concept, even if it has no bearing on the question you are trying to answer. Your thinking becomes contaminated, shifted in a particular direction. The data set in your focus of attention becomes less representative of the phenomenon you are trying to model and more representative of the concepts you were primed with.
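
To make the mechanism concrete, here is a minimal toy simulation (my own illustration of the summary above, not anything from the post itself; the memory contents, the boost factor, and the recall and estimate functions are all hypothetical). The idea: priming multiplies the retrieval weight of related facts, so the sample you average over stops being representative.

```python
import random

# Toy model of the summary above (hypothetical, for illustration only):
# memory holds (concept, evidence) facts; priming a concept multiplies
# the retrieval weight of related facts, so recall is no longer a
# representative sample of what you actually know.

random.seed(0)

# 20% of stored facts support "risk" (evidence 1.0), 80% are neutral (0.0),
# so a representative sample should yield an estimate near 0.2.
memory = [("risk", 1.0)] * 20 + [("neutral", 0.0)] * 80

def recall(memory, primed_concept=None, boost=5.0, k=1000):
    """Sample k facts (with replacement); facts matching the primed
    concept are `boost` times easier to retrieve."""
    weights = [boost if concept == primed_concept else 1.0
               for concept, _ in memory]
    return random.choices(memory, weights=weights, k=k)

def estimate(sample):
    """Answer the question by averaging whatever evidence came to mind."""
    return sum(evidence for _, evidence in sample) / len(sample)

print("unprimed estimate:", estimate(recall(memory)))         # near 0.2
print("primed estimate:  ", estimate(recall(memory, "risk"))) # pulled upward
```

With the primed concept's facts five times easier to retrieve, the primed estimate lands near 0.56 rather than the true base rate of 0.2: the sample has become more representative of the prime than of the phenomenon, which is exactly the tilt the summary describes.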


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was A Priori, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

Comments (2):

[anonymous] · 13y

The post focuses entirely on the scary side of this phenomenon, which is worrying to some, but I'm sure priming is also an essential part of many processes (like speech recognition), and trying to eliminate it would leave you unable to carry out a conversation over a telephone.

[anonymous] · 13y

I don't think Eliezer was suggesting that contamination be "eliminated." (That's probably not even possible without physically modifying the brain--as Luke points out here, we are essentially built out of cognitive biases.) Instead, he's arguing that we need to be aware of our own thought processes and notice when our brains come to a conclusion because of a priming effect; this allows us to correct for the contamination and make more accurate estimates.

ETA: Biases that cannot be totally eliminated are usually still worth reducing.