Which cognitive biases should we trust in?

by Andy_McKenzie · 1st Jun 2012 · 42 comments

17


There have been (at least) a couple of attempts on LW to make Anki flashcards from Wikipedia's famous List of Cognitive Biases, here and here. However, stylistically they are not my type of flashcard, with too much info in the "answer" section. 

Further, and more troublingly, I'm not sure whether all of the biases in the flashcards are real, generalizable effects; or, if they are real, whether they have effect sizes large enough to be worth the effort to learn & disseminate. Psychology is an academic discipline with all of the baggage that entails. Psychology is also one of the least tangible sciences, which is not helpful.

There are studies showing that Wikipedia is no less reliable than more conventional sources, but this is in aggregate, and it seems plausible (though difficult to detect without diligently checking sources) that the set of cognitive bias articles on Wikipedia has high variance in quality.

We do have some knowledge of how many of them were made, in that LW user nerfhammer wrote a bunch. But, as far as I can tell, s/he didn't discuss how s/he selected biases to include. (Though, s/he is obviously quite knowledgeable on the subject, see e.g. here.)

As the articles stand today, many (e.g., here, here, here, here, and here) only cite research from one study/lab. I do not want to come across as whining: the authors who wrote these on Wikipedia are awesome. But, as a consumer the lack of independent replication makes me nervous. I don't want to contribute to information cascades. 

Nevertheless, I do still want to make flashcards for at least some of these biases, because I am relatively sure that there are some strong, important, widespread biases out there. 

So, I am asking LW whether you all have any ideas about, on the meta level, 

1) how we should go about deciding/indexing which articles/biases capture legit effects worth knowing,

and, on the object level,

2) which of the biases/heuristics/fallacies are actually legit (like, a list). 

Here are some of my ideas. First, for how to decide: 

- Only include biases that are mentioned by prestigious sources like Kahneman in his new book. Upside: authoritative. Downside: potentially throwing out some good info and putting too much faith in one source. 

- Only include biases whose Wikipedia articles cite at least two primary articles that share none of the same authors. Upside: establishes some degree of consensus in the field. Downside: won't actually vet the articles for quality, and it rests on the presumably false assumption that the Wikipedia pages will reflect the state of knowledge in the field. 

- Search for the name of the bias (or any bold, alternative names on Wikipedia) on Google scholar, and only accept those with, say, >30 citations. Upside: less of a sampling bias of what is included on Wikipedia, which is likely to be somewhat arbitrary. Downside: information cascades occur in academia too, and this method doesn't filter for actual experimental evidence (e.g., there could be lots of reviews discussing the idea).  

- Make some sort of a voting system where experts (surely some frequent this site) can weigh in on what they think of the primary evidence for a given bias. Upside: rather than counting articles, evaluates the actual evidence for the bias. Downside: seems hard to reach the scale (~8-12+ people voting) needed to make this useful. 

- Build some arbitrarily weighted rating scale that takes into account some or all of the above. Upside: meta. Downside: garbage in, garbage out, and the first three features seem highly correlated anyway. 
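To make the last option concrete, here is a minimal sketch of what such a composite rating might look like. Everything here is hypothetical: the weights, the thresholds (the >30-citation bar and the two-independent-papers bar from above), the function name, and the example numbers are all placeholders, not real measurements.

```python
def bias_score(citations, independent_sources, expert_votes,
               w_citations=0.4, w_sources=0.3, w_votes=0.3):
    """Combine three (likely correlated) signals into a single 0-1 score.

    citations: Google Scholar citation count for the bias's name.
    independent_sources: number of cited primary papers with disjoint author sets.
    expert_votes: list of 0/1 votes on whether the primary evidence holds up.
    """
    cite_signal = min(citations / 30, 1.0)             # saturates at the >30-citation bar
    source_signal = min(independent_sources / 2, 1.0)  # saturates at two independent papers
    vote_signal = sum(expert_votes) / len(expert_votes) if expert_votes else 0.0
    return (w_citations * cite_signal
            + w_sources * source_signal
            + w_votes * vote_signal)

# Made-up numbers: a heavily replicated bias vs. a single-lab effect.
anchoring = bias_score(citations=500, independent_sources=4, expert_votes=[1, 1, 1, 0])
obscure = bias_score(citations=12, independent_sources=1, expert_votes=[1, 0, 0, 0])
print(anchoring > obscure)  # True: the composite ranks the replicated bias higher
```

Of course, this inherits the garbage-in-garbage-out problem noted above: if the three inputs are highly correlated, the weighting is mostly cosmetic.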

Second, for which biases to include. I'm just going off of which ones I have heard of and/or look legit on a fairly quick run through. Note that those annotated with a (?) are ones I am especially unsure about. 

- anchoring

- availability

- bandwagon effect

- base rate neglect

- choice-supportive bias

- clustering illusion

- confirmation bias

- conjunction fallacy (is subadditivity a subset of this?) 

- conservatism (?) 

- context effect (aka state-dependent memory) 

- curse of knowledge (?) 

- contrast effect

- decoy effect (aka independence of irrelevant alternatives) 

- Dunning–Kruger effect (?) 

- duration neglect

- empathy gap

- expectation bias

- framing

- gambler's fallacy

- halo effect

- hindsight bias

- hyperbolic discounting 

- illusion of control

- illusion of transparency

- illusory correlation

- illusory superiority

- illusion of validity (?) 

- impact bias

- information bias (? aka failure to consider value of information)

- in-group bias (this is clearly real, but I'm not sure I'd call it a bias)

- escalation of commitment (aka sunk cost/loss aversion/endowment effect; note, contra Gwern, that I do think this is a useful fallacy to know about, if overrated)

- false consensus (related to projection bias) 

- Forer effect

- fundamental attribution error (related to the just-world hypothesis) 

- familiarity principle (aka mere exposure effect) 

- moral licensing (aka moral credential) 

- negativity bias (seems controversial & it's troubling that there is also a positivity bias) 

- normalcy bias (related to existential risk?) 

- omission bias

- optimism bias (related to overconfidence)

- outcome bias (aka moral luck) 

- outgroup homogeneity bias

- peak-end rule

- primacy

- planning fallacy

- reactance (aka contrarianism) 

- recency

- representativeness

- self-serving bias 

- social desirability bias

- status quo bias

Happy to hear any thoughts! 