Participation in the LW Community Associated with Less Bias
**Summary:** CFAR included 5 questions on the 2012 LW Survey which were adapted from the heuristics and biases literature, based on five different cognitive biases or reasoning errors. LWers, on the whole, showed less bias than is typical in the published research (on all 4 questions where this was testable), but did show clear evidence of bias on 2-3 of those 4 questions. Further, those with closer ties to the LW community (e.g., those who had read more of the sequences) showed significantly less bias than those with weaker ties (on 3 out of the 4-5 questions where that was testable). These results all held when controlling for measures of intelligence.

**METHOD & RESULTS**

Being less susceptible to cognitive biases or reasoning errors is one sign of rationality (see the work of Keith Stanovich & his colleagues, for example). You'd hope that a community dedicated to rationality would be less prone to these biases, so I selected 5 cognitive biases and reasoning errors from the heuristics & biases literature to include on the LW survey. There are two possible patterns of results which would point in this direction:

* **High scores:** LWers show less bias than other populations that have answered these questions (like students at top universities).
* **Correlation with strength of LW exposure:** those who have read the sequences (or have been around LW a long time, have high karma, attend meetups, or make posts) score better than those who have not.

The 5 biases were selected in part because they can be tested with everyone answering the same questions; I also preferred biases that haven't been discussed in detail on LW. On some questions there is a definitive wrong answer; on others there is reason to believe that a bias will tend to lead people toward one answer (so that, even though there might be good reasons for an individual to choose that answer, in the aggregate it is evidence of bias if more people choose it).

This is only one quick, rough survey. If the res
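The "correlation with exposure, controlling for intelligence" check can be sketched as an ordinary least-squares regression with the intelligence measure as a covariate. This is a minimal illustration on simulated data, not the survey analysis itself: the variables `exposure`, `iq`, and `bias` are hypothetical stand-ins for the survey measures, with effect sizes chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical simulated data: an intelligence proxy, an LW-exposure
# measure correlated with it, and a bias score that decreases with
# exposure even after accounting for intelligence.
iq = rng.normal(0.0, 1.0, n)
exposure = 0.3 * iq + rng.normal(0.0, 1.0, n)
bias = -0.5 * exposure - 0.2 * iq + rng.normal(0.0, 1.0, n)

# OLS fit of bias ~ intercept + exposure + iq.
X = np.column_stack([np.ones(n), exposure, iq])
coefs, *_ = np.linalg.lstsq(X, bias, rcond=None)
intercept, b_exposure, b_iq = coefs

# A negative b_exposure here is the analogue of "more exposure,
# less bias, holding intelligence fixed."
print(f"exposure coefficient (controlling for iq): {b_exposure:.3f}")
```

The point of the covariate column is that a raw exposure-bias correlation could be driven by intelligence alone; the partial coefficient `b_exposure` is what the "held when controlling for intelligence" claim refers to.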
The median researcher hypothesis seems false. Something like an 80/20 distribution seems much more plausible, and is presumably more like what you'd find for measurable proxies of 'influence on a field' like number of publications in "top tier" journals, or number of researchers in the field who were your grad student. Voting "no".
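The 80/20 intuition is easy to check numerically. A minimal sketch, assuming a heavy-tailed lognormal distribution as a stand-in for per-researcher output (the parameters are illustrative, not fitted to any real bibliometric data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-researcher publication counts drawn from a
# heavy-tailed lognormal distribution (sigma chosen arbitrarily).
pubs = rng.lognormal(mean=0.0, sigma=1.5, size=10_000)

# Share of total output produced by the most prolific 20%.
pubs_sorted = np.sort(pubs)[::-1]
top20 = pubs_sorted[: len(pubs) // 5]
share = top20.sum() / pubs.sum()

print(f"top 20% of researchers account for {share:.0%} of output")
```

Under these parameters the top quintile ends up with roughly three quarters of the total, i.e. something like the 80/20 pattern described above rather than influence concentrated at the median.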