Thank you to everyone who took the 2012 Less Wrong Survey (the survey is now closed. Do not try to take it.) Below the cut, this post contains the basic survey results, a few more complicated analyses, and the data available for download so you can explore it further on your own. You may want to compare these to the results of the 2011 Less Wrong Survey.

Part 1: Population

How many of us are there?

The short answer is that I don't know.

The 2011 survey ran 33 days and collected 1090 responses. This year's survey ran 23 days and collected 1195 responses. The average number of new responses during the last week was about five per day, so even if I had kept this survey open as long as the last one I probably wouldn't have gotten more than about 1250 responses. That means at most a 15% year on year growth rate, which is pretty abysmal compared to the 650% growth rate in two years we saw last time.

About half of these responses were from lurkers; over half of the non-lurker remainder had commented but never posted to Main or Discussion. That means there were only about 600 non-lurkers.

But I am skeptical of these numbers. I hang out with some people who are very closely associated with the greater Less Wrong community, and a lot of them didn't know about the survey until I mentioned it to them in person. I know some people who could plausibly be described as focusing their lives around the community who just never took the survey for one reason or another. One lesson of this survey may be that the community is no longer limited to people who check Less Wrong very often, if at all. One friend didn't see the survey because she hangs out on the #lesswrong channel more than the main site. Another mostly just goes to meetups. So I think this represents only a small sample of people who could justly be considered Less Wrongers.

The question of "how quickly is LW growing" is also complicated by the high turnover. Over half the people who took this survey said they hadn't participated in the survey last year. I tried to break this down by combining a few sources of information, and I think our 1200 respondents include 500 people who took last year's survey, 400 people who were around last year but didn't take the survey for some reason, and 300 new people.

As expected, there's lower turnover among regulars than among lurkers. Of people who have posted in Main, about 75% took the survey last year; of people who only lurked, about 75% hadn't.

This view of a very high-turnover community with lots of people not taking the survey is consistent with Vladimir Nesov's data (http://lesswrong.com/lw/e4j/number_of_members_on_lesswrong/77xz) showing 1390 people who have written at least ten comments. But the survey includes only about 600 people who have at least commented; 800ish of Vladimir's accounts are either gone or didn't take the census.

Part 2: Categorical Data

SEX:
Man: 1057, 89.2%
Woman: 120, 10.1%
Other: 2, 0.2%
No answer: 6, 0.5%

GENDER:
M (cis): 1021, 86.2%
F (cis): 105, 8.9%
M (trans f->m): 3, 0.3%
F (trans m->f): 16, 1.3%
Other: 29, 2.4%
No answer: 11, 0.9%

ORIENTATION:
Heterosexual: 964, 80.7%
Bisexual: 135, 11.4%
Homosexual: 28, 2.4%
Asexual: 24, 2%
Other: 28, 2.4%
No answer: 14, 1.2%

RELATIONSHIP STYLE:

Prefer monogamous: 639, 53.9%
Prefer polyamorous: 155, 13.1%
Uncertain/no preference: 358, 30.2%
Other: 21, 1.8%
No answer: 12, 1%

NUMBER OF CURRENT PARTNERS:
0: 591, 49.8%
1: 519, 43.8%
2: 34, 2.9%
3: 12, 1%
4: 5, 0.4%
6: 1, 0.1%
7: 1, 0.1% (and this person added "really, not trolling")
Confusing or no answer: 20, 1.8%

RELATIONSHIP STATUS:
Single: 628, 53%
Relationship: 323, 27.3%
Married: 220, 18.6%
No answer: 14, 1.2%

RELATIONSHIP GOALS:
Not looking for more partners: 707, 59.7%
Looking for more partners: 458, 38.6%
No answer: 20, 1.7%

COUNTRY:
USA: 651, 54.9%
UK: 103, 8.7%
Canada: 74, 6.2%
Australia: 59, 5%
Germany: 54, 4.6%
Israel: 15, 1.3%
Finland: 15, 1.3%
Russia: 13, 1.1%
Poland: 12, 1%

These are all the countries with greater than 1% of Less Wrongers, but other, more exotic locales included Kenya, Pakistan, and Iceland, with one user each. You can see the full table here.

This data also allows us to calculate Less Wrongers per capita:


Finland: 1/366,666
Australia: 1/389,830
Canada: 1/472,972
USA: 1/483,870
Israel: 1/533,333
UK: 1/603,883
Germany: 1/1,518,518
Poland: 1/3,166,666
Russia: 1/11,538,462
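
For anyone who wants to reproduce these figures, a minimal sketch of the per-capita arithmetic (country population divided by respondent count) in R; the population numbers below are rough illustrative estimates, not the exact ones used above:

# Per-capita figures: country population divided by number of respondents from that country.
# Population figures here are rough round numbers purely for illustration; substitute better ones.
respondents <- c(Finland = 15, Australia = 59, Canada = 74, USA = 651)
population  <- c(Finland = 5.4e6, Australia = 23e6, Canada = 35e6, USA = 315e6)
round(population / respondents)   # people per Less Wronger, by country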

RACE:
White, non-Hispanic: 1003, 84.6%
East Asian: 50, 4.2%
Hispanic: 47, 4.0%
Indian Subcontinental: 28, 2.4%
Black: 8, 0.7%
Middle Eastern: 4, 0.3%
Other: 33, 2.8%
No answer: 12, 1%

WORK STATUS:
Student: 476, 40.7%
For-profit work: 364, 30.7%
Self-employed: 95, 8%
Unemployed: 81, 6.8%
Academics (teaching): 54, 4.6%
Government: 46, 3.9%
Non-profit: 44, 3.7%
Independently wealthy: 12, 1%
No answer: 13, 1.1%

PROFESSION:
Computers (practical): 344, 29%
Math: 109, 9.2%
Engineering: 98, 8.3%
Computers (academic): 72, 6.1%
Physics: 66, 5.6%
Finance/Econ: 65, 5.5%
Computers (AI): 39, 3.3%
Philosophy: 36, 3%
Psychology: 25, 2.1%
Business: 23, 1.9%
Art: 22, 1.9%
Law: 21, 1.8%
Neuroscience: 19, 1.6%
Medicine: 15, 1.3%
Other social science: 24, 2%
Other hard science: 20, 1.7%
Other: 123, 10.4%
No answer: 27, 2.3%

DEGREE:
Bachelor's: 438, 37%
High school: 333, 28.1%
Master's: 192, 16.2%
Ph.D: 71, 6%
2-year: 43, 3.6%
MD/JD/professional: 24, 2%
None: 55, 4.6%
Other: 15, 1.3%
No answer: 14, 1.2%

POLITICS:
Liberal: 427, 36%
Libertarian: 359, 30.3%
Socialist: 326, 27.5%
Conservative: 35, 3%
Communist: 8, 0.7%
No answer: 30, 2.5%

You can see the exact definitions given for each of these terms on the survey.

RELIGIOUS VIEWS:
Atheist, not spiritual: 880, 74.3%
Atheist, spiritual: 107, 9.0%
Agnostic: 94, 7.9%
Committed theist: 37, 3.1%
Lukewarm theist: 27, 2.3%
Deist/Pantheist/etc: 23, 1.9%
No answer: 17, 1.4%

FAMILY RELIGIOUS VIEWS:
Lukewarm theist: 392, 33.1%
Committed theist: 307, 25.9%
Atheist, not spiritual: 161, 13.6%
Agnostic: 149, 12.6%
Atheist, spiritual: 46, 3.9%
Deist/Pantheist/Etc: 32, 2.7%
Other: 84, 7.1%

RELIGIOUS BACKGROUND:
Other Christian: 517, 43.6%
Catholic: 295, 24.9%
Jewish: 100, 8.4%
Hindu: 21, 1.8%
Traditional Chinese: 17, 1.4%
Mormon: 15, 1.3%
Muslim: 12, 1%

Raw data is available here.

MORAL VIEWS:

Consequentialism: 735, 62%
Virtue Ethics: 166, 14%
Deontology: 50, 4.2%
Other: 214, 18.1%
No answer: 20, 1.7%

NUMBER OF CHILDREN
0: 1044, 88.1%
1: 51, 4.3%
2: 48, 4.1%
3: 19, 1.6%
4: 3, 0.3%
5: 2, 0.2%
6: 1, 0.1%
No answer: 17, 1.4%

WANT MORE CHILDREN?

No: 438, 37%
Maybe: 363, 30.7%
Yes: 366, 30.9%
No answer: 16, 1.4%

LESS WRONG USE:
Lurkers (no account): 407, 34.4%
Lurkers (with account): 138, 11.7%
Posters (comments only): 356, 30.1%
Posters (comments + Discussion only): 164, 13.9%
Posters (including Main): 102, 8.6%

SEQUENCES:
Never knew they existed until this moment: 99, 8.4%
Knew they existed; never looked at them: 23, 1.9%
Read < 25%: 227, 19.2%
Read ~ 25%: 145, 12.3%
Read ~ 50%: 164, 13.9%
Read ~ 75%: 203, 17.2%
Read ~ all: 306, 24.9%
No answer: 16, 1.4%

Dear 8.4% of people: there is this collection of old blog posts called the Sequences. It is by Eliezer, the same guy who wrote Harry Potter and the Methods of Rationality. It is really good! If you read it, you will understand what we're talking about much better!

REFERRALS:
Been here since Overcoming Bias: 265, 22.4%
Referred by a link on another blog: 23.5%
Referred by a friend: 147, 12.4%
Referred by HPMOR: 262, 22.1%
No answer: 35, 3%

BLOG REFERRALS:

Common Sense Atheism: 20 people
Hacker News: 20 people
Reddit: 15 people
Unequally Yoked: 7 people
TV Tropes: 7 people
Marginal Revolution: 6 people
gwern.net: 5 people
RationalWiki: 4 people
Shtetl-Optimized: 4 people
XKCD fora: 3 people
Accelerating Future: 3 people

These are all the sites that referred at least three people in a way that was obvious to disentangle from the raw data. You can see a more complete list, including the long tail, here.

MEETUPS:
Never been to one: 834, 70.5%
Have been to one: 320, 27%
No answer: 29, 2.5%

CATASTROPHE:
Pandemic (bioengineered): 272, 23%
Environmental collapse: 171, 14.5%
Unfriendly AI: 160, 13.5%
Nuclear war: 155, 13.1%
Economic/Political collapse: 137, 11.6%
Pandemic (natural): 99, 8.4%
Nanotech: 49, 4.1%
Asteroid: 43, 3.6%

The wording of this question was "which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?"

CRYONICS STATUS:
No, don't want to: 275, 23.2%
No, still thinking: 472, 39.9%
No, procrastinating: 178, 15%
No, unavailable: 120, 10.1%
Yes, signed up: 44, 3.7%
Never thought about it: 46, 3.9%
No answer: 48, 4.1%

VEGETARIAN:
No: 906, 76.6%
Yes: 147, 12.4%
No answer: 130, 11%

For comparison, 3.2% of US adults are vegetarian.


SPACED REPETITION SYSTEMS
Don't use them: 511, 43.2%
Do use them: 235, 19.9%
Never heard of them: 302, 25.5%

Dear 25.5% of people: spaced repetition systems are nifty, mostly free computer programs that allow you to study and memorize facts more efficiently. See for example http://ankisrs.net/

HPMOR:
Never read it: 219, 18.5%
Started, haven't finished: 190, 16.1%
Read all of it so far: 659, 55.7%

Dear 18.5% of people: Harry Potter and the Methods of Rationality is a Harry Potter fanfic about rational thinking written by Eliezer Yudkowsky (the guy who started this site). It's really good. You can find it at http://www.hpmor.com/.


ALTERNATIVE POLITICS QUESTION:

Progressive: 429, 36.3%
Libertarian: 278, 23.5%
Reactionary: 30, 2.5%
Conservative: 24, 2%
Communist: 22, 1.9%
Other: 156, 13.2%

ALTERNATIVE ALTERNATIVE POLITICS QUESTION:
Left-Libertarian: 102, 8.6%
Progressive: 98, 8.3%
Libertarian: 91, 7.7%
Pragmatist: 85, 7.2%
Social Democrat: 80, 6.8%
Socialist: 66, 5.6%
Anarchist: 50, 4.1%
Futarchist: 29, 2.5%
Moderate: 18, 1.5%
Moldbuggian: 19, 1.6%
Objectivist: 11, 0.9%

These are the only ones that had more than ten people. Other responses notable for their unusualness were Monarchist (5 people), fascist (3 people, plus one who was up for fascism but only if he could be the leader), conservative (9 people), and a bunch of people telling me politics was stupid and I should feel bad for asking the question. You can see the full table here.

CAFFEINE:
Never: 162, 13.7%
Rarely: 237, 20%
At least 1x/week: 207, 17.5%
Daily: 448, 37.9%
No answer: 129, 10.9%

SMOKING:
Never: 896, 75.7%
Used to: 105, 8.9%
Still do: 51, 4.3%
No answer: 131, 11.1%

For comparison, about 28.4% of the US adult population smokes.

NICOTINE (OTHER THAN SMOKING):
Never used: 916, 77.4%
Rarely use: 82, 6.9%
>1x/month: 32, 2.7%
Every day: 14, 1.2%
No answer: 139, 11.7%

MODAFINIL:
Never: 76.5%
Rarely: 78, 6.6%
>1x/month: 48, 4.1%
Every day: 9, 0.8%
No answer: 143, 12.1%

TRUE PRISONERS' DILEMMA:
Defect: 341, 28.8%
Cooperate: 316, 26.7%
Not sure: 297, 25.1%
No answer: 229, 19.4%

FREE WILL:
Not confused: 655, 55.4%
Somewhat confused: 296, 25%
Confused: 81, 6.8%
No answer: 151, 12.8%

TORTURE VS. DUST SPECKS
Choose dust specks: 435, 36.8%
Choose torture: 261, 22.1%
Not sure: 225, 19%
Don't understand: 22, 1.9%
No answer: 240, 20.3%

SCHRODINGER EQUATION:
Can't calculate it: 855, 72.3%
Can calculate it: 175, 14.8%
No answer: 153, 12.9%

PRIMARY LANGUAGE:
English: 797, 67.3%
German: 54, 4.5%
French: 13, 1.1%
Finnish: 11, 0.9%
Dutch: 10, 0.9%
Russian: 15, 1.3%
Portuguese: 10, 0.9%

These are all the languages with ten or more speakers, but we also have everything from Marathi to Tibetan. You can see the full table here.

NEWCOMB'S PROBLEM
One-box: 726, 61.4%
Two-box: 78, 6.6%
Not sure: 53, 4.5%
Don't understand: 86, 7.3%
No answer: 240, 20.3%

ENTREPRENEUR:
Don't want to start business: 447, 37.8%
Considering starting business: 334, 28.2%
Planning to start business: 96, 8.1%
Already started business: 112, 9.5%
No answer: 194, 16.4%

ANONYMITY:
Post using real name: 213, 18%
Easy to find real name: 256, 21.6%
Hard to find name, but wouldn't bother me if someone did: 310, 26.2%
Anonymity is very important: 170, 14.4%
No answer: 234, 19.8%

HAVE YOU TAKEN A PREVIOUS LW SURVEY?
No: 559, 47.3%
Yes: 458, 38.7%
No answer: 116, 14%

TROLL TOLL POLICY:
Disapprove: 194, 16.4%
Approve: 178, 15%
Haven't heard of this: 375, 31.7%
No opinion: 249, 21%
No answer: 187, 15.8%

MYERS-BRIGGS
INTJ: 163, 13.8%
INTP: 143, 12.1%
ENTJ: 35, 3%
ENTP: 30, 2.5%
INFP: 26, 2.2%
INFJ: 25, 2.1%
ISTJ: 14, 1.2%
No answer: 715, 60%

This includes all types with greater than 10 people. You can see the full table here.

Part 3: Numerical Data

Except where indicated otherwise, all the numbers below are given in the format:

mean+standard_deviation (25% level, 50% level/median, 75% level) [n = number of data points]
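
For anyone exploring the released data, here is a minimal sketch in R of how this format can be reproduced for any numeric column, assuming the .csv has been loaded as a data frame `lw` (the column name `Age` is just an example):

# Summarize a numeric column in the "mean + sd (25%, median, 75%) [n]" format used below.
summarize_column <- function(x) {
  x <- as.numeric(as.character(x))   # survey columns often read in as factors/strings
  x <- x[!is.na(x)]
  sprintf("%.1f + %.1f (%s) [n = %d]",
          mean(x), sd(x),
          paste(round(quantile(x, c(0.25, 0.5, 0.75)), 1), collapse = ", "),
          length(x))
}
# Example (assumes the released data is loaded as `lw` and has an `Age` column):
# summarize_column(lw$Age)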

INTELLIGENCE:

IQ (self-reported): 138.7 + 12.7 (130, 138, 145) [n = 382]
SAT (out of 1600): 1485.8 + 105.9 (1439, 1510, 1570) [n = 321]
SAT (out of 2400): 2319.5 + 1433.7 (2155, 2240, 2320)
ACT: 32.7 + 2.3 (31, 33, 34) [n = 207]
IQ (on iqtest.dk): 125.63 + 13.4 (118, 130, 133) [n = 378]

I am going to harp on these numbers because in the past some people have been pretty quick to ridicule this survey's intelligence numbers as completely useless and impossible and so on.

According to IQ Comparison Site, an SAT score of 1485/1600 corresponds to an IQ of about 144. According to Ivy West, an ACT of 33 corresponds to an SAT of 1470 (and thence to IQ of 143).

So if we consider self-report, SAT, ACT, and iqtest.dk as four measures of IQ, these come out to 139, 144, 143, and 126, respectively.

All of these are pretty close except iqtest.dk. I ran a correlation between all of them and found that self-reported IQ is correlated with SAT scores at the 1% level and iqtest.dk at the 5% level, but SAT scores and iqtest.dk are not correlated with each other.
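
A sketch of the kind of pairwise tests involved, assuming the released data is loaded as `lw` and that the intelligence measures live in columns named something like `IQ`, `SAT`, and `IQTest` (hypothetical names; check them against the actual .csv):

# Pairwise correlations between the different intelligence measures.
# Column names are assumptions; check names(lw) against the released file.
to_num <- function(x) as.numeric(as.character(x))
cor.test(to_num(lw$IQ), to_num(lw$SAT))      # self-report vs. SAT
cor.test(to_num(lw$IQ), to_num(lw$IQTest))   # self-report vs. iqtest.dk
cor.test(to_num(lw$SAT), to_num(lw$IQTest))  # SAT vs. iqtest.dk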

Of all these, I am least likely to trust iqtest.dk. First, it's a random Internet IQ test. Second, it correlates poorly with the other measures. Third, a lot of people have complained in the comments to the survey post that it exhibits some weird behavior.

But iqtest.dk gave us the lowest number! And even it said the average was 125 to 130! So I suggest that we now have pretty good, pretty believable evidence that the average IQ for this site really is somewhere in the 130s, and that self-reported IQ isn't as terrible a measure as one might think.

AGE:
27.8 + 9.2 (22, 26, 31) [n = 1185]

LESS WRONG USE:
Karma: 1078 + 2939.5 (0, 4.5, 136) [n = 1078]
Months on LW: 26.7 + 20.1 (12, 24, 40) [n = 1070]
Minutes/day on LW: 19.05 + 24.1 (5, 10, 20) [n = 1105]
Wiki views/month: 3.6 + 6.3 (0, 1, 5) [n = 984]
Wiki edits/month: 0.1 + 0.8 (0, 0, 0) [n = 984]

PROBABILITIES:
Many Worlds: 51.6 + 31.2 (25, 55, 80) [n = 1005]
Aliens (universe): 74.2 + 32.6 (50, 90, 99) [n = 1090]
Aliens (galaxy): 42.1 + 38 (5, 33, 80) [n = 1081]
Supernatural: 5.9 + 18.6 (0, 0, 1) [n = 1095]
God: 6 + 18.7 (0, 0, 1) [n = 1098]
Religion: 3.8 + 15.5 (0, 0, 0.8) [n = 1113]
Cryonics: 18.5 + 24.8 (2, 8, 25) [n = 1100]
Antiagathics: 25.1 + 28.6 (1, 10, 35) [n = 1094]
Simulation: 25.1 + 29.7 (1, 10, 50) [n = 1039]
Global warming: 79.1 + 25 (75, 90, 97) [n = 1112]
No catastrophic risk: 71.1 + 25.5 (55, 80, 90) [n = 1095]
Space: 20.1 + 27.5 (1, 5, 30) [n = 953]

CALIBRATION:
Year of Bayes' birth: 1767.5 + 109.1 (1710, 1780, 1830) [n = 1105]
Confidence: 33.6 + 23.6 (20, 30, 50) [n= 1082]

MONEY:
Income/year: 50,913 + 60644.6 (12000, 35000, 74750) [n = 644]
Charity/year: 444.1 + 1152.4 (0, 30, 250) [n = 950]
SIAI/CFAR charity/year: 309.3 + 3921 (0, 0, 0) [n = 961]
Aging charity/year: 13 + 184.9 (0, 0, 0) [n = 953]

TIME USE:
Hours online/week: 42.4 + 30 (21, 40, 59) [n = 944]
Hours reading/week: 30.8 + 19.6 (18, 28, 40) [n = 957]
Hours writing/week: 7.9 + 9.8 (2, 5, 10) [n = 951]

POLITICAL COMPASS:
Left/Right: -2.4 + 4 (-5.5, -3.4, -0.3) [n = 476]
Libertarian/Authoritarian: -5 + 2 (-6.2, -5.2, -4)

BIG 5 PERSONALITY TEST:
Big 5 (O): 60.6 + 25.7 (41, 65, 84) [n = 453]
Big 5 (C): 35.2 + 27.5 (10, 30, 58) [n = 453]
Big 5 (E): 30.3 + 26.7 (7, 22, 48) [n = 454]
Big 5 (A): 41 + 28.3 (17, 38, 63) [n = 453]
Big 5 (N): 36.6 + 29 (11, 27, 60) [n = 449]

These scores are in percentiles, so LWers are more Open, but less Conscientious, Agreeable, Extraverted, and Neurotic than average test-takers. Note that people who take online psychometric tests are probably a pretty skewed category already, so this tells us nothing. Also, several people got confusing results on this test or found it different from other tests they had taken, and I am pretty unsatisfied with it and don't trust the results.

AUTISM QUOTIENT
AQ: 24.1 + 12.2 (17, 24, 30) [n = 367]

This test says the average control subject got 16.4 and 80% of those diagnosed with autism spectrum disorders get 32+ (which of course doesn't tell us what percent of people above 32 have autism...). If we trust them, most LWers are more autistic than average.

CALIBRATION:

Reverend Thomas Bayes was born in 1701. Survey takers were asked to guess this date within 20 years, so anyone who guessed between 1681 and 1721 was recorded as getting a correct answer. The percent of people who answered correctly is recorded below, stratified by the confidence they gave of having guessed correctly and with the number of people at that confidence level.

0-5: 10% [n = 30]
5-15: 14.8% [n = 183]
15-25: 10.3% [n = 242]
25-35: 10.7% [n = 225]
35-45: 11.2% [n = 98]
45-55: 17% [n = 118]
55-65: 20.1% [n = 62]
65-75: 26.4% [n = 34]
75-85: 36.4% [n = 33]
85-95: 60.2% [n = 20]
95-100: 85.7% [n = 23]
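
For anyone who wants to reproduce the bucketing from the released data, a rough sketch, assuming hypothetical column names `BayesGuess` and `BayesConfidence` (check the actual names in the .csv):

# Score each guess as correct if it falls within 20 years of Bayes' birth (1701),
# then compute the fraction correct within each stated-confidence bucket.
guess   <- as.numeric(as.character(lw$BayesGuess))       # guessed year of birth
conf    <- as.numeric(as.character(lw$BayesConfidence))  # stated probability of being right (0-100)
correct <- guess >= 1681 & guess <= 1721
bucket  <- cut(conf, breaks = c(0, 5, 15, 25, 35, 45, 55, 65, 75, 85, 95, 100),
               include.lowest = TRUE)
tapply(correct, bucket, mean, na.rm = TRUE)   # observed accuracy per confidence bucket
table(bucket)                                 # number of respondents per bucket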

Here's a classic calibration chart. The blue line is perfect calibration. The orange line is you guys. And the yellow line is average calibration from an experiment I did with untrained subjects a few years ago (which of course was based on different questions and so not directly comparable).

The results are atrocious; when Less Wrongers are 50% certain, they only have about a 17% chance of being correct. On this problem, at least, they are as bad as or worse than the general population at avoiding overconfidence bias.

My hope was that this was the result of a lot of lurkers who don't know what they're doing stumbling upon the survey and making everyone else look bad, so I ran a second analysis. This one used only the responses of people who had been in the community at least two years and had accumulated at least 100 karma; this limited my sample size to about 210 people.
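
The restriction to veterans is just a subset of the same data; a minimal sketch, again with hypothetical column names (`KarmaScore`, and `TimeinCommunity` measured in months):

# Repeat the calibration analysis on established community members only:
# at least two years in the community and at least 100 karma.
veterans <- subset(lw, as.numeric(as.character(TimeinCommunity)) >= 24 &
                       as.numeric(as.character(KarmaScore)) >= 100)
nrow(veterans)   # should land in the low hundreds, consistent with the ~210 mentioned above
# ...then rerun the bucketing above on `veterans` instead of `lw`.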

I'm not going to post exact results, because I made some minor mistakes which means they're off by a percentage point or two, but the general trend was that they looked exactly like the results above: atrocious. If there is some core of elites who are less biased than the general population, they are well past the 100 karma point and probably too rare to feel confident even detecting at this kind of a sample size.

I really have no idea what went so wrong.  Last year's results were pretty good - encouraging, even. I wonder if it's just an especially bad question. Bayesian statistics is pretty new; one would expect Bayes to have been born in rather more modern times. It's also possible that I've handled the statistics wrong on this one; I wouldn't mind someone double-checking my work.

Or we could just be really horrible. If we haven't even learned to avoid the one bias that we can measure super well and which is most susceptible to training, what are we even doing here? Some remedial time at PredictionBook might be in order.

HYPOTHESIS TESTING:

I tested a few of the possible hypotheses that were proposed in the survey design threads.

Are people who understand quantum mechanics more likely to believe in Many Worlds? We perform a t-test, checking whether one's probability of the MWI being true depends on whether or not one can solve the Schrodinger Equation. People who could solve the equation assigned MWI an average probability of 54.3%, compared to 51.3% among those who could not. The p-value is 0.26, meaning a difference at least this large would arise by chance about 26% of the time. Therefore, we fail to establish that people's probability of MWI varies with understanding of quantum mechanics.
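
A sketch of how this comparison can be run against the released data, assuming hypothetical column names `PManyWorlds` and `SchrodingerEquation` coded as "Yes"/"No" (the real names and codings in the .csv may differ):

# Does P(Many Worlds) differ between people who can and can't solve the Schrodinger equation?
pmwi  <- as.numeric(as.character(lw$PManyWorlds))
schro <- as.character(lw$SchrodingerEquation)   # assumed to be "Yes" / "No"
keep  <- schro %in% c("Yes", "No") & !is.na(pmwi)
t.test(pmwi[keep] ~ factor(schro[keep]))        # Welch two-sample t-test of mean probability by group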

Are there any interesting biological correlates of IQ? We run a correlation between self-reported IQ, height, maternal age, and paternal age. The correlations are in the expected direction but not significant.

Are there differences in the ways men and women interact with the community? I had sort of vaguely gotten the impression that women were proportionally younger, newer to the community, and more likely to be referred via HPMOR. The average age of women on LW is 27.6 compared to 27.7 for men; obviously this difference is not significant. 14% of the people referred via HPMOR were women compared to about 10% of the community at large, but this difference is pretty minor. Women were on average newer to the community - 21 months vs. 39 for men - but to my surprise a t-test was unable to declare this significant. Maybe I'm doing it wrong?

Does the amount of time spent in the community affect one's beliefs in the same way as in previous surveys? I ran some correlations and found that it does. People who have been around longer continue to be more likely to believe in MWI, less likely to believe in aliens in the universe (though not in our galaxy), and less likely to believe in God (though not religion). There was no effect on cryonics this time.

In addition, the classic correlations between different beliefs continue to hold true. There is an obvious cluster of God, religion, and the supernatural. There's also a scifi cluster of cryonics, antiagathics, MWI, aliens, the Simulation Hypothesis, and catastrophic risk (this cluster also seems to include global warming, for some reason).
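
A sketch of how such clusters can be looked for, assuming the probability questions appear under hypothetical column names like those below (adjust to match the released file):

# Correlation matrix over the probability questions, to see which beliefs cluster together.
# Column names are hypothetical; check names(lw) against the released .csv.
probs <- lw[, c("PGod", "PReligion", "PSupernatural",
                "PCryonics", "PAntiAgathics", "PManyWorlds", "PAliens", "PSimulation")]
probs <- as.data.frame(lapply(probs, function(x) as.numeric(as.character(x))))
round(cor(probs, use = "pairwise.complete.obs"), 2)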

Are there any differences between men and women with regard to their belief in these clusters? We run a t-test between men and women. Men and women assign about the same probability to God (men: 5.9, women: 6.2, p = .86), with similar results for the rest of the religion cluster, but men assign much higher probabilities to, for example, antiagathics (men: 24.3, women: 10.5, p < .001) and the rest of the scifi cluster.

DESCRIPTIONS OF LESS WRONG

Survey users were asked to submit a description of Less Wrong in 140 characters or less. I'm not going to post all of them, but here is a representative sample:

- "Probably the most sensible philosophical resource avaialble."
- "Contains the great Sequences, some of Luke's posts, and very little else."
- "The currently most interesting site I found ont the net."
- "EY cult"
- "How to think correctly, precisely, and efficiently."
- "HN for even bigger nerds."
- "Social skills philosophy and AI theorists on the same site, not noticing each other."
- "Cool place. Any others like it?"
- "How to avoid predictable pitfalls in human psychology, and understand hard things well: The Website."
- "A bunch of people trying to make sense of the wold through their own lens, which happens to be one of calculation and rigor"
- "Nice."
- "A font of brilliant and unconventional wisdom."
- "One of the few sane places on Earth."
- "Robot god apocalypse cult spinoff from Harry Potter."
- "A place to converse with intelligent, reasonably open-minded people."
- "Callahan's Crosstime Saloon"
- "Amazing rational transhumanist calming addicting Super Reddit"
- "Still wrong"
- "A forum for helping to train people to be more rational"
- "A very bright community interested in amateur ethical philosophy, mathematics, and decision theory."
- "Dying. Social games and bullshit now >50% of LW content."
- "The good kind of strange, addictive, so much to read!"
- "Part genuinely useful, part mental masturbation."
- "Mostly very bright and starry-eyed adults who never quite grew out of their science-fiction addiction as adolescents."
- "Less Wrong: Saving the world with MIND POWERS!"
- "Perfectly patternmatches the 'young-people-with-all-the-answers' cliche"
- "Rationalist community dedicated to self-improvement."
- "Sperglord hipsters pretending that being a sperglord hipster is cool." (this person's Autism Quotient was two points higher than LW average, by the way)
- "An interesting perspective and valuable database of mental techniques."
- "A website with kernels of information hidden among aspy nonsense."
- "Exclusive, elitist, interesting, potentially useful, personal depression trigger."
- "A group blog about rationality and related topics. Tends to be overzealous about cryogenics and other pet ideas of Eliezer Yudkowsky."
- "Things to read to make you think better."
- "Excellent rationality. New-age self-help. Worrying groupthink."
- "Not a cult at all."
- "A cult."
- "The new thing for people who would have been Randian Objectivists 30 years ago."
- "Fascinating, well-started, risking bloat and failure modes, best as archive."
- "A fun, insightful discussion of probability theory and cognition."
- "More interesting than useful."
- "The most productive and accessible mind-fuckery on the Internet."
- "A blog for rationality, cognitive bias, futurism, and the Singularity."
- "Robo-Protestants attempting natural theology."
- "Orderly quagmire of tantalizing ideas drawn from disagreeable priors."
- "Analyze everything. And I do mean everything. Including analysis. Especially analysis. And analysis of analysis."
- "Very interesting and sometimes useful."
- "Where people discuss and try to implement ways that humans can make their values, actions, and beliefs more internally consistent."
- "Eliezer Yudkowsky personality cult."
- "It's like the Mormons would be if everyone were an atheist and good at math and didn't abstain from substances."
- "Seems wacky at first, but gradually begins to seem normal."
- "A varied group of people interested in philosophy with high Openness and a methodical yet amateur approach."
- "Less Wrong is where human algorithms go to debug themselves."
- "They're kind of like a cult, but that doesn't make them wrong."
- "A community blog devoted to nerds who think they're smarter than everyone else."
- "90% sane! A new record!"
- "The Sequences are great. LW now slowly degenerating to just another science forum."
- "The meetup groups are where it's at, it seems to me. I reserve judgment till I attend one."
- "All I really know about it is this long survey I took."
- "The royal road of rationality."
- "Technically correct: The best kind of correct!"
- "Full of angry privilege."
- "A sinister instrument of billionaire Peter Thiel."
- "Dangerous apocalypse cult bent on the systematic erasure of traditional values and culture by any means necessary."
- "Often interesting, but I never feel at home."
- "One of the few places I truly feel at home, knowing that there are more people like me."
- "Currently the best internet source of information-dense material regarding cog sci, debiasing, and existential risk."
- "Prolific and erudite writing on practical techniques to enhance the effectiveness of our reason."
- "An embarrassing Internet community formed around some genuinely great blog writings."
- "I bookmarked it a while ago and completely forgot what it is about. I am taking the survey to while away my insomnia."
- "A somewhat intimidating but really interesting website that helps refine rational thinking."
- "A great collection of ways to avoid systematic bias and come to true and useful conclusions."
- "Obnoxious self-serving, foolish trolling dehumanizing pseudointellectualism, aesthetically bankrupt."
- "The cutting edge of human rationality."
- "A purveyor of exceedingly long surveys."

PUBLIC RELEASE

That last commenter was right. This survey had vastly more data than any previous incarnation; although there are many more analyses I would like to run I am pretty exhausted and I know people are anxious for the results. I'm going to let CFAR analyze and report on their questions, but the rest should be a community effort. So I'm releasing the survey to everyone in the hopes of getting more information out of it. If you find something interesting you can either post it in the comments or start a new thread somewhere.

The data I'm providing is the raw data EXCEPT:

- I deleted a few categories that I removed halfway through the survey for various reasons
- I deleted 9 entries that were duplicates of other entries, i.e. someone pressed 'submit' twice.
- I deleted the timestamp, which would have made people extra-identifiable, and sorted people by their CFAR random number to remove time order information.
- I removed one person whose information all came out as weird symbols.
- I numeralized some of the non-numeric data, especially on the number of months in community question. This is not the version I cleaned up fully, so you will get to experience some of the same pleasure I did working with the rest.
- I deleted 117 people who either didn't answer the privacy question or who asked me to keep them anonymous, leaving 1067 people.

Here it is: Data in .csv format, Data in Excel format

COMMENTS
[-]Aharon530

Hi Yvain,

please state a definite end date next year. Filling out the survey didn't have a really high priority for me, but knowing that I had "about a month" made me put it off. Had I known that the last possible day was the 26th of November, I probably would have fit it in sometime in between other stuff.

6John_Maxwell
Hm, could it be that the longer survey format this time around cut down on the number of responses as well?
-1[anonymous]
So, what "cut down on the number of responses"?
7Ralith
I presume that John's intended implication was that community growth was greater than indicated by the number of respondents.

The calibration question is an n=1 sample on one of the two important axes (those axes being who's answering, and what question they're answering). Give a question that's harder than it looks, and people will come out overconfident on average; give a question that's easier than it looks, and they'll come out underconfident on average. Getting rid of this effect requires a pool of questions, so that it'll average out.

Yep. (Or as Yvain suggests, give a question which is likely to be answered with a bias in a particular direction.)

It's not clear what you can conclude from the fact that 17% of all people who answered a single question at 50% confidence got it right, but you can't conclude from it that if you asked one of these people a hundred binary questions and they answered "yes" at 50% confidence, that person would only get 17% right. The latter is what would deserve to be called "atrocious"; I don't believe the adjective applies to the results observed in the survey.

I'm not even sure that you can draw the conclusion "not everyone in the sample is perfectly calibrated" from these results. Well, the people who were 100% sure they were wrong, and happened to be correct, are definitely not perfectly calibrated; but I'm not sure what we can say of the rest.

7CarlShulman
I have often pondered this problem with respect to some of the traditional heuristics and biases studies, e.g. the "above-average driver" effect. If people consult their experiences of subjective difficulty at doing a task, and then guess they are above average for the ones that feel easy, and below average for the ones that feel hard, this will to some degree track their actual particular strengths and weaknesses. Plausibly a heuristic along these lines gives overall better predictions than guessing "I am average" about everything. However, if we focus in on activities that happen to be unusually easy-feeling or hard-feeling in general, then we can make the heuristics look bad by only showing their successes and not their failures. Although the name "heuristics and biases" does reflect this notion: we have heuristics because they usually work, but they produce biases in some cases as an acceptable loss.
2steven0461
I would agree that this explains the apparent atrocious calibration. It's worth an edit to the main post. No reason to beat ourselves up needlessly. People were answering different questions in the sense that they each had an interval of their own choosing to assign a probability to, but obviously different people's performance here was going to be strongly correlated. Bayes just happens to be the kind of guy who was born surprisingly early. If everyone had literally been asked to assign a probability to the exact same proposition, like "Bayes was born before 1750" or "this coin will come up heads", that would have been a more extreme case. We'd have found that events that people predicted with probability x% actually happened either 0% or 100% of the time, and it wouldn't mean people were infinitely badly calibrated.
-1A1987dM
All of that also applies to the year calibration questions in previous surveys and yet people did much better in those.
8steven0461
Because they weren't about events that occurred surprisingly early.
0[anonymous]
Yes, and this is probably worth an edit to the original post. For a more extreme example, consider what would happen if you asked a large group of people to assess the probability that the same coin would come up heads. You'd find that events that people said would happen 50% of the time happened either 0% or 100% of the time, but it would be wrong to conclude they were atrociously calibrated.
[-]gwern370

I previously mentioned that item non-response might be a good measure of Conscientiousness. Before doing anything fancy with non-response, I first checked that there was a correlation with the questionnaire reports. The correlation is zero:

R> lwc <- subset(lw, !is.na(as.integer(as.character(BigFiveC))))
R> missing_answers <- apply(lwc, 1, function(x) sum(sapply(x, function(y) is.na(y) || as.character(y)==" ")))
R> cor.test(as.integer(as.character(lwc$BigFiveC)), missing_answers)

    Pearson's product-moment correlation

data:  as.integer(as.character(lwc$BigFiveC)) and missing_answers
t = -0.0061, df = 421, p-value = 0.9952
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 -0.09564  0.09505
sample estimates:
       cor
-0.0002954
# visualize to see if we made some mistake somewhere
R> plot(as.integer(as.character(lwc$BigFiveC)), missing_answers)

I am completely surprised. The results in the economics paper looked great and the rationale is very plausible. Yet... The 2 sets of data here have the right ranges, there's plenty of variation in both dimension, I'm sure I'm catching most of the item non-responses or N... (read more)

[-]Kindly220

There is a correlation of 0.13 between non-responses and N.

Of course, there's also a correlation of -0.13 between C and the random number generator.

People who had seen the RNG give a large number were primed to feel unusually reckless when taking the Big 5 test. Duh. (Just kidding.)

6NancyLebovitz
Were you expecting that people with high C would or wouldn't skip questions? I can see arguments either way. Conscientious people might skip questions they don't have answers to or that they aren't willing to put the time into to give a good answer, or they might put in the work to have answers they consider good to as many questions as possible. Is it feasible to compare wrong sort of answer with C? Is it possible that the test for C wasn't very good?
8gwern
Wouldn't; that was the claim of the linked paper. Not really, if it wasn't caught by the no-answer check or the NA check. As I said, it came out as expected for LW as a whole, and it did correlate with income once the CS salaries were removed... Hard to know what ground-truth there could be to check the scores against.
4Vaniver
I am also surprised by this. I wonder about the effect of "I'm taking this survey so I don't have to go to bed / do work / etc.," but I wouldn't have expected that to be as large as the diligence effect. Also, perhaps look at nonresponse by section? I seem to recall the C part being after the personality test, which might be having some selection effects.
1gwern
What do you mean? I can't compare non-response with anyone who didn't supply a C score, and there were plenty of questions to non-response on after the personality test section.
3Vaniver
It seems to me that other survey non-response may be uncorrelated with C once you condition on taking a long personality survey, especially if the personality survey doesn't allow nonresponse. (I seem to recall taking all of the optional surveys and considering the personality one the most boring. I don't know how much that generalizes to other people.) The first way that comes to mind to gather information for this is to compare the nonresponse of people who supplied personality scores and people who didn't, but that isn't a full test unless you can come up with another way to link the nonresponse to C. I was thinking it might help to break down the responses by section, and seeing if nonresponse to particular sections was correlated with C, but the result could only be that some sections are anticorrelated if a few are correlated. So that probably won't get you anything.
1gwern
Why would the strong correlation go away after adding a floor? That would simply restrict the range... if that were true, we'd expect to see a cutoff for all C scores but in fact we see plenty of very low C scores being reported. Yes. You'd expect, by definition, that people who answered the personality questions would have fewer non-responses than the people who didn't... That's pretty obvious and true:

R> lwc <- subset(lw, !is.na(as.integer(as.character(BigFiveC))))
R> missing_answers1 <- apply(lwc, 1, function(x) sum(sapply(x, function(y) is.na(y) || as.character(y)==" ")))
R> lwnc <- subset(lw, is.na(as.integer(as.character(BigFiveC))))
R> missing_answers2 <- apply(lwnc, 1, function(x) sum(sapply(x, function(y) is.na(y) || as.character(y)==" ")))
R> t.test(missing_answers1, missing_answers2)

    Welch Two Sample t-test

data:  missing_answers1 and missing_answers2
t = -25.19, df = 806.8, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -18.77 -16.05
sample estimates:
mean of x mean of y
    9.719    27.129

I really have no idea what went so wrong [with the question about Bayes' birth year]

Note also that in the last two surveys the mean and median answers were approximately correct, whereas this time even the first quartile answer was too late by almost a decade. So it's not just a matter of overconfidence -- there also was a systematic error. Note that Essay Towards Solving a Problem in the Doctrine of Chances was published posthumously when Bayes would have been 62; if people estimated the year it was published and assumed that he had been approximately in his thirties (as I did), that would explain half of the systematic bias.

7Sam_Jaques
To expand on this: Confidence intervals that are accurate for multiple judgements by the same person may not be accurate for the same judgement made by multiple people. Normally, we can group everyone's responses and measure how many people were actually right when they said they were 70% sure. The reason this should average out to 70% is that the error is caused by independent variations in each person's estimate. If there's a systematic error, then even if we each accounted for the possibility of systematic error in our confidence levels, we would all still fail at the same time when that error actually shows up.
2Alejandro1
I had a vaguely right idea for the year of publication, and didn't know it was posthumous, but assumed that it was published in his middle-to-old age and so got the question right.
1Cakoluchiam
This question was biased against people who don't believe in history. For my answer, which was wildly wrong, I guesstimated by interpolating backward using the rate of technological and cultural advance in various cultures throughout my lifetime, the dependency of such advances on Bayesian and related logics, with an adjustment for known wars and persecution of scientists and an assumption that Bayes lived in the western world. I should have realized that my confidence on estimates of each of these (except the last) was not very good and that I really shouldn't have had any more than marginal confidence in my answer, but I was hoping that the sheer number of assumptions I made would approach statistical mean of my confidences and that the overestimates would counterbalance the underestimates. The real lesson I learned from this exercise is that I shouldn't have such high confidence in my ability to produce and compound a statistically significant number of assumptions with associated confidence levels.
0Manfred
Have you read Malcolm Gladwell - Blink? It's a fun book that doesn't take too long, which hella makes up for the occasional failure of rigor. Anyhow, the conclusion is that even on hard problems, expert-trusted models will still have very few parameters. And those parameters don't have to be the same things you'd use if you were a perfect reasoner - what's important is that you can use it as an indicator.
0magfrump
I personally had error bars of 75 years on my confidence and was 74 years off. I'm not sure if I translated that correctly into percent certainty of being within 20 years of correct, but I felt okay about the result. This might be another source of systematic error?

On IQ Accuracy:

As Yvain says, "people have been pretty quick to ridicule this survey's intelligence numbers as completely useless and impossible and so on" because if they're true, it means that the average LessWronger is gifted. Yvain added a few questions to the 2012 survey, including the ACT and SAT questions and the Myers-Briggs personality type question that I requested (I'll explain why this is interesting), and that give us a few other things to check against, which has made the figures more believable. The ridicule may be an example of the "virtuous doubt" that Luke warns about in Overconfident Pessimism, so it makes sense to "consider the opposite":

The distribution of Myers-Briggs personality types on LessWrong replicates the Mensa pattern. This is remarkable since the patterns of personality types here are, in many significant ways, the exact opposite of what you'd find in the regular population. For instance, the introverted rationalists and idealists are each about 1% of the population. Here, they are the majority and it's the artisans and guardians who are relegated to 1% or less of our population.

Mensa's personality test results we... (read more)

Alternate possibility: The distribution of personality types in Mensa/LW relative to everyone else is an artifact produced by self-identified smart people trying to signal their intelligence by answering 'yes' to traits that sound like the traits they ought to have.

e.g. I know that a number of the T/F questions are along the lines of "I use logic to make decisions (Y/N)", which is a no-brainer if you're trying to signal intelligence.

A hypothetical way to get around this would be to have your partner/family member/best friend next to you as you take the test, ready to call you out when your self-assessment diverges from your actual behaviour ("hold on, what about that time you decided not to go to the concert of [band you love] because you were angry about an unrelated thing?")

5Epiphany
Ok, it's possible that all of the following happened:

* Most of the 1000 people decided to lie about their IQ on the LessWrong survey.
* Most of the liars realized that their personality test results were going to be compared with Mensa's personality type results, and it dawned on them that this would bring their IQ lie into question.
* Most of the liars decided that instead of simply skipping the personality test question, or taking it to experience the enjoyment of finding out their type, they were going to fudge the personality test results, too.
* Most of the liars actually had the patience to do an additional 72 questions specifically for the purpose of continuing to support a lie when they had just slogged through 100 questions.
* Most of the liars did all of that extra work (researching the IQ correlation with the SAT and the ACT and fudging 72 personality type questions) when it would have been so much easier to put their real IQ in the box, or simply skip the IQ question completely because it is not required.
* Most of the liars succeeded in fudging their personality types.

This is, of course, possible, but it is likely to be more complicated than it at first seems. They'd have to be lucky that enough of the questions give away their intelligence correlation in the wording (we haven't verified that). They'd have to have enough of an understanding of what intelligent people are like that they'd choose the right ones. Questions like these are likely to confuse a non-gifted person trying to guess which answers will make them look gifted:

"You are more interested in a general idea than in the details of its realization" (Do intelligent people like ideas or details more?)
"Strict observance of the established rules is likely to prevent a good outcome" (Either could be the smarter answer, depending who you ask.)
"You believe the best decision is one that can be easily changed" (It's smart to leave your options open, but it's also more intelle
1private_messaging
"Lie" is a strawman. One could report an estimate, mis-remember, report the other "IQ" (mental age / chronological age metric), or one may have took any one of entirely faulty online tests that report IQ as high to increase the referral rate (some are bad enough to produce >100 if the answers are filled in at random).
1Epiphany
This would be a good point in the event that we were not discussing IQ scores generated by an IQ test selected by Yvain, which many people took at the same time as filling out the survey. This method (and timing) rules out problems due to relying on estimates alone and most of the potential for mis-remembering (neither of which should be assumed to be likely to result in an average score that's 30 points too high, as mistakes like these could go in either direction), and, assuming that the IQ test Yvain selected was pretty good, it also rules out the problem of the test being seriously skewed.

If you would like to continue this line of argument, one effective method of producing doubt would be to go to the specific IQ test in question, fill out all of the answers randomly, and report the IQ that it produces. If you want to generate a full-on update regarding those particular test results, complete with Yvain being likely to refrain from recommending this source during his next survey, write a script that fills out the test randomly and reports the results so that multiple people can run it and see for themselves what average IQ the test produces after a large number of trials. You may want to check to see whether Yvain or Gwern or someone has already done this before going to the trouble.

Also, there really were people whose concern it was that people were lying on the survey. Your "lie is a strawman" perception appears to have been formed due to not having read the (admittedly massive number of) comments on this.
0private_messaging
Look. People misremember (and remember the largest value, and so on) in the way most favourable to themselves. While mistakes can in principle go in either direction, in practice they don't. If you ask men to report their penis size (quite literally), they over-estimate; if you ask them to measure, they still overestimate, but not by as much. This sort of error is absolutely the norm in surveys. More so here, as the calibration (on the Bayes date-of-birth question at least) was comparatively very bad. The situation is anything but symmetric, given that the results are rather far from the mean on a Gaussian.

Furthermore, given the interest in self-improvement, people here are likely to have tried to improve their test scores by practice, which would have a considerably lower effect on iqtest.dk unless you practice the Raven's matrices specifically. The low scores on iqtest.dk are particularly interesting in light of the fact that scores on that test are a result of better assignment of priors / processing of probabilities (fundamentally, one needs to pick the choice which results in the simplest - highest-probability - overall pattern; if one is overconfident about the pattern one sees being the best, one's score is lowered, so poor calibration will hurt that test more).
1Epiphany
I intuit that this is likely to be a popular view among sceptics, but I do not recall ever being presented with research that supports this by anyone. To avoid the lure of "undiscriminating scepticism", I am requesting to see the evidence of this.

I agree that, for numerous reasons, self-reported IQ scores, SAT scores, ACT scores and any other scores are likely to have some amount of error, and I think it's likely for the room for error to be pretty big. On that we agree. An average thirty points higher than normal seems to me to be quite a lot more than "pretty big". That's the difference between an IQ in the normal range and an IQ large enough to qualify for every definition of gifted. To use your metaphor, that's like having a 6-incher and saying it's 12. I can see guys unconsciously saying it's 7 if it's 6, or maybe even 8. But I have a hard time believing that most of these people have let their imaginations run so far away with them as to accidentally believe that they're Mensa level gifted when they're average. I'd bet that there was a significant amount of error, but not an average of 30 points.

If you agree with those two, then whether we agree overall just depends on what specific belief we're each supporting. I think these beliefs are supported:

* The SAT, ACT, self-reported IQ and / or iqtest.dk scores found on the survey are not likely to be highly accurate.
* Despite inaccuracies, it's very likely that the average LessWrong member has an IQ above average - in other words, I don't think that the scores reported on the survey are so inaccurate that I should believe that most LessWrongers actually have just an average IQ.
* LessWrong is (considering a variety of pieces of evidence, not just the survey) likely to have more gifted people than you'd find by random chance.

Do we agree on those three beliefs? If not, then please phrase the belief(s) you want to support.
9CCC
Even if every self-reported IQ is exactly correct, the average of the self-reported IQ values can still be (and likely will still be) higher than the average of the readership's IQ values. Consider two readers, Tom and Jim. Tom does an IQ test, and gets a result of 110. Jim does an IQ test, and gets a result of 90. Tom and Jim are both given the option to fill in a survey, which asks (among other questions) what their IQ is. Neither Tom nor Jim intend to lie. However, Jim seems significantly more likely to decide not to participate; while Tom may decide to fill in the survey as a minor sort of showing off. This effect will skew the average upwards. Perhaps not 30 points upwards... but it's an additional source of bias, independent of any bias in individual reported values.
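
A toy simulation of this effect, under the assumption that willingness to respond rises with measured IQ; the readership distribution and the response curve below are made up purely for illustration:

# Toy model: everyone reports honestly, but higher-IQ readers are more likely to respond at all.
set.seed(1)
true_iq   <- rnorm(10000, mean = 120, sd = 15)       # hypothetical readership
p_respond <- plogis((true_iq - 120) / 10)            # response probability rises with IQ
responded <- runif(10000) < p_respond
mean(true_iq)              # average IQ of the whole readership
mean(true_iq[responded])   # average IQ among those who self-report: noticeably higher
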
5Vaniver
I remember looking into this when I looked at the survey data. There were only a handful of people who reported two-digit IQs, which is consistent with both the concealment hypothesis and the high average intelligence hypothesis. If you assume that nonresponders have an IQ of 100 on average, the average IQ across everyone drops down to 112. (I think this assumption is mostly useful for demonstrative purposes; I suspect that the prevalence of people with two-digit IQs on LW is lower than in the general population.) (You could do some more complicated stuff if you had a functional form for concealment that you wanted to predict, but it's not obvious to me that IQs on LW actually follow a normal distribution, which would make it hard to separate the oddities of concealment from the oddities of the LW population.)
0Epiphany
Ah! Good point! Karma for you! Now I will think about whether there is a way to figure out the truth despite this. Ideas?
3CCC
Hmmm. Tricky.

* Select a random sampling of people (such as by picking names from the phonebook). Ask each person whether they would like to fill in a survey which asks, among other things, for their IQ. If a sufficiently large, representative sample is taken, the average IQ of the sample is likely to be 100 (confirm if possible). Compare this to the average reported IQ, in order to get an idea of the size of the bias.
* Select a random sampling of lesswrongers, and ask them for their IQs. If they all respond, this should cut out the self-selection bias (though the odds are that at least some of them won't respond, putting us back at square one).

It's probably also worth noting that this is a known problem in statistics which is not easy to compensate for.
2somervta
There's also the selection effect of only getting answers from "people who, when asked, can actually name their IQ".
3satt
As one of the sceptics, I might as well mention a specific feature of the self-reported IQs that made me pretty sure they're inflated. (Even before I noticed this feature, I expected the IQs to be inflated because, well, they're self-reported. Note that I'm not saying people must be consciously lying, though I wouldn't rule it out. Also, I agree with your three bullet points but still find an average LW IQ of 138-139 implausibly high.)

The survey has data on education level as well as IQ. Education level correlates well with IQ, so if the self-reported IQ & education data are accurate, the subsample of LWers who reported having a "high school" level of education (or less) should have a much lower average IQ. But in fact the mean IQ of the 34% of LWers with a high school education or less was 136.5, only 2.2 points less than the overall mean.

There is a pretty obvious bias in that calculation: a lot of LWers are young and haven't had time to complete their education, however high their IQs. This stacks the deck in my favour because it means the high-school-or-less group includes a lot of people who are going to get degrees but haven't yet, which could exaggerate the IQ of the high-school-or-less group. I can account for this bias by looking only at the people who said they were ≥29 years old. Among that older group, only 13% had a high school education or less...but the mean IQ of that 13% was even higher* at 139.6, almost equal to the mean IQ of 140.0 for older LWers in general. The sample sizes aren't huge but I think they're too big to explain this near-equality away as statistical noise.

So IQ or education level or age was systematically misreported, and the most likely candidate is IQ, 'cause almost everyone knows their age & education level, and nerds probably have more incentive to lie on a survey about their IQ than about their age or education level.

----------------------------------------

* Assuming people start university at age 18, take 3 years to g
5gwern
And I suspect if you look at the American population for that age cohort, you'll find a lot higher a percentage than 13% which have a "high school education or less"... All you've shown is that of the highschool-educated populace, LW attracts the most intelligent end, the people who are the dropouts for whatever reason. Which for high-IQ people is not that uncommon (and one reason the generic education/IQ correlation isn't close to unity). LW filters for IQ and so only smart highschool dropouts bother to hang out here? Hardly a daring or special pleading sort of suggestion. And if we take your reasoning at face-value that the general population-wide IQ/education correlate must hold here, it would suggest that there would be hardly any autodidacts on LW (clearly not the case), such as our leading 'high school education or less' member, Eliezer Yudkowsky.
2satt
Right, but even among LWers I'd still expect the dropouts to have a lower average IQ if all that's going on here is selection by IQ. Sketch the diagram. Put down an x-axis (representing education) and a y-axis (IQ). Put a big slanted ellipse over the x-axis to represent everyone aged 29+. Now (crudely, granted) model the selection by IQ by cutting horizontally through the ellipse somewhere above its centroid. Then split the sample that's above the horizontal line by drawing a vertical line. That's the boundary between the high-school-or-less group and everyone else. Forget about everyone below the horizontal line because they're winnowed out. That leaves group A (the high-IQ people with less education) and group B (the high-IQ people with more). Even with the filtering, group A is visibly going to have a lower average IQ than B. So even though A comprises "the most intelligent end" of the less educated group, there remains a lingering correlation between education level and IQ in the high-IQ sample; A scores less than B. The correlation won't be as strong as the general population-wide correlation you refer to, but an attenuated correlation is still a correlation.
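
A toy simulation of this sketch, with made-up numbers, showing that an IQ/education gap lingers even after selection on IQ:

# Toy version of the sketch: IQ and "education" are correlated in the population;
# select on IQ, then compare mean IQ across education levels within the selected group.
set.seed(1)
n   <- 100000
iq  <- rnorm(n, 100, 15)
edu <- 0.5 * (iq - 100) / 15 + sqrt(0.75) * rnorm(n)   # latent education, correlated ~0.5 with IQ
sel <- iq > 130                                        # crude stand-in for the LW selection filter
less_edu <- edu < median(edu[sel])                     # lower-education half of the selected group
mean(iq[sel & less_edu])    # group A (less educated, selected): well above 130, but...
mean(iq[sel & !less_edu])   # group B (more educated, selected): higher on average
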
4Nornagest
It seems implausible to me that education level would screen off the same parts of the IQ distribution in LW as it does in the general population, at least at its lower levels. It's not too unreasonable to expect LWers with PhDs to have higher IQs than the local mean, but anyone dropping out of high school or declining to enter college because they dislike intellectual pursuits, say, seems quite unlikely to appreciate what we tend to talk about here.
1satt
Upvoted. If I repeat the exercise for the PhD holders, I find they have a mean IQ of 146.5 in the older subsample, compared to 140.0 for the whole older subsample, which is consistent with what you wrote.
0EHeller
How significant is that difference?
0satt
I did a back-of-the-R-session guesstimate before I posted and got a two-tailed p-value of roughly 0.1, so not significant by the usual standard, but I figured that was suggestive enough. Doing it properly, I should really compare the PhD holders' IQ to the IQ of the non-PhD holders (so the samples are disjoint). Of the survey responses that reported an IQ score and an age of 29+, 13 were from people with PhDs (mean IQ 146.5, SD 14.8) and 135 were from people without (mean IQ 139.3, SD 14.3). Doing a t-test I get t = 1.68 with 14.2 degrees of freedom, giving p = 0.115.
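For anyone checking the arithmetic, the same Welch test can be reproduced in R from the summary statistics alone (the helper function below is mine, not survey code):

welch <- function(m1, s1, n1, m2, s2, n2) {
  se2 <- s1^2 / n1 + s2^2 / n2
  t   <- (m1 - m2) / sqrt(se2)
  df  <- se2^2 / ((s1^2 / n1)^2 / (n1 - 1) + (s2^2 / n2)^2 / (n2 - 1))
  c(t = t, df = df, p.two.sided = 2 * pt(-abs(t), df))
}
welch(146.5, 14.8, 13, 139.3, 14.3, 135)   # t ~ 1.68, df ~ 14.2, p ~ 0.115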
0Nornagest
It's a third of a SD and change (assuming a 15-point SD, which is the modern standard), which isn't too shabby; comparable, for example, with the IQ difference between managerial and professional workers. Much smaller than the difference between the general population and PhDs within it, though; that's around 25 points.
1EHeller
I was really asking about sample size, as I was too lazy to grab the raw data.
0private_messaging
Yes, and even without any particular expectation of inflation, once you see IQs that are very high, you can be quite sure IQs tend to be inflated simply because of the prior being the bell curve. Any time I see "undiscriminating scepticism" mentioned, it's a plea to simply ignore necessarily low priors when evidence is too weak to change conclusions. Of course, it's not true "undiscriminating scepticism". If LW had undergone psychologist-administered IQ testing and those were the results, and then there was a lot of scepticism, you could claim that there's some excessive scepticism. But as it is, rational processing of probabilities is not going to discriminate that much based on self-reported data.
2private_messaging
Sceptics in that case, I suppose, being anyone who actually does the most basic "Bayesian" reasoning, such as starting with a Gaussian prior when you should (and understanding how an imperfect correlation between self-reported IQ and actual IQ would work on that prior, i.e. regression towards the mean when you are measuring by proxy). I suspect there's a certain level of Dunning-Kruger effect at play, whereby those least capable of probabilistic reasoning would think themselves most capable (further evidenced by calibration; even though the question may have been to blame, I'm pretty sure most people believed that a bad question couldn't have that much of an impact).

Wikipedia to the rescue, given that a lot of stuff is behind the paywall... http://en.wikipedia.org/wiki/Illusory_superiority#IQ "The disparity between actual IQ and perceived IQ has also been noted between genders by British psychologist Adrian Furnham, in whose work there was a suggestion that, on average, men are more likely to overestimate their intelligence by 5 points, while women are more likely to underestimate their IQ by a similar margin." and, more amusingly, http://en.wikipedia.org/wiki/Human_penis_size#Erect_length

Just about any internet forum would select for people owning a computer and having an internet connection, and thus cut off the poor, mentally disabled, and so on, improving the average. So when you state it this way - mere "above average" - it is a set of completely unremarkable beliefs. It'd be interesting to check how common advanced degrees are among white Americans with an actual IQ of 138 and above, but I can't find any info.
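As a deliberately oversimplified illustration of the regression-toward-the-mean point: if true IQ is distributed N(100, 15) and self-reports sit on the same scale but correlate only r with the truth, the best guess for someone's true score shrinks toward 100. The r = 0.6 below is an arbitrary number chosen for illustration, not an estimate:

# expected true IQ given a self-report, assuming equal SDs and correlation r
shrink <- function(reported, r = 0.6, mu = 100) mu + r * (reported - mu)
shrink(c(130, 140, 150))   # 118, 124, 130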
1Vaniver
This was one of the things I checked when I looked into the IQ results from the survey here and here. One of the things I thought was particularly interesting was that there was a positive correlation between self-reported IQ and iqtest.dk (which is still self-reported, and could have been lied on, but hopefully this is only deliberate lies, rather than fuzzy memory effects) among posters and a negative correlation among lurkers. This comment might also be interesting. I endorse Epiphany's three potential explanations, and would quantify the last one: I strongly suspect the average IQ of LWers is at least one standard deviation above the norm. I would be skeptical of the claim that it's two standard deviations above the norm, given the data we have.
2private_messaging
Wow, that's quite interesting - that's some serious Dunning-Kruger. A scatterplot could be of interest. One thing to keep in mind is that even given a prior that errors can go either way equally, when you have obtained a result far from the mean, you must expect that errors (including systematic errors) were predominantly in that direction. The other issue is that in 1000 people, about 1 will have an IQ of >=146 or so, while something around 10 will have fairly severe narcissism (and this is not just your garden variety of overestimating oneself, but the level where it interferes with normal functioning). A self-reported IQ of 146 is thus not really a good sign overall. Interestingly, some people do not understand that and go on about how others "punish" them for making poorly supported statements of exceptionality, while it is merely a matter of correct probabilistic reasoning. The actual data is even worse than what comparisons of prevalence would suggest - 25% of people put themselves in the top 1% in some circumstances. Yes, an average of 115 would be possible.
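The bell-curve base rate behind that "about 1 in 1000" figure is easy to check in R (standard IQ scale, mean 100, SD 15):

1 - pnorm(146, mean = 100, sd = 15)   # ~0.0011, i.e. roughly 1 person in 900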
3Vaniver
The actual data is linked in the post near the end. If you drop three of the lurkers- who self-reported 180, 162, and 156 but scored 102, 108, and 107- then the correlation is positive (but small). (Both samples look like trapezoids, which is kind of interesting, but might be explained by people using different standard deviations.)
0[anonymous]
That sounds pretty high to me. I haven't looked into narcissism as such, but I remember seeing similar numbers for antisocial personality disorder when I was looking into that, which surprised me; the confusion went away, however, when I noticed that I was looking at the prevalence in therapy rather than the general population. Something similar, perhaps?
0ygert
You know, people do lie to themselves. It's a sad but true (and well known around here) fact about human psychology that humans have surprisingly bad models of themselves. It is simply true that if you asked a bunch of people selected at random about their (self-reported) IQ scores, you would get an average of more than 100. One would hope that LessWrongers are good enough at detecting bias to mostly dodge that bullet, but the evidence of whether or not we actually are that good at it is scarce at best.
2Epiphany
Your unintentional-lie explanation does not explain how the SAT scores ended up so closely synchronised to the IQ scores - as we know, one common sign of a lie is that the details do not add up. Synchronising one's SAT scores to the same level as one's IQ scores would most likely require conscious effort, making the discrepancy obvious to the LessWrong members who took the survey. If you would argue that they were likely to have chosen corresponding SAT scores in some way that did not require them to become consciously aware of discrepancies, how would you support the argument that they synched them by accident? If not, then would you support the argument that LessWrong members consciously lied about it?

Linda Silverman, a giftedness researcher, has observed that parents are actually pretty decent at assessing their child's intellectual abilities despite the obvious cause for bias. "In this study, 84% of the children whose parents indicated that they fit three-fourths of the characteristics tested above 120 IQ." (An unpublished study, unfortunately.) http://www.gifteddevelopment.com/PDF_files/scalersrch.pdf

This isn't exactly the same as managing knowledge of one's own intellectual abilities, but if you would have expected parents to be hideously biased when assessing their children's abilities, and a giftedness researcher reports that they probably are not, then shouldn't you also consider that your concern that most LessWrong members subconsciously inflate their own IQ scores by a whopping 30 points (if that is your perception) may be far less of a problem than you thought?
8NonComposMentis
Scores on standardized tests like SAT and ACT can be improved via hard work and lots of practice -- there are abundant practice books out there for such tests. It is entirely conceivable that those self-reported IQs were generated via comparing scores on these standardized tests against IQ-conversion charts. I.e., with very hard work, the apparent IQs are in the 130+ range according to these standardised tests; but when it comes to tests that measure your native intelligence (e.g., iqtest.dk), the scores are significantly lower. In future years, it would be advisable for the questionnaire to ask participants how much time they spent in total to prepare for tests such as SAT and ACT -- and even then you might not get honest answers. That brings me to the point of lying... Not necessarily true. If the survey results show that LWers generally have IQs in the gifted range, then it allows LWers to signal their intelligence to others just by identifying themselves as LWers. People would assume that you probably have an IQ in the gifted range if you tell them that you read LW. In this case, everyone has an incentive to fudge the numbers. erratio has also pointed out that participants might have answered those personality tests untruthfully in order to signal intelligence, so I shan't belabour the point here.
3Epiphany
Ok, now here is a motive! I still find it difficult to believe that:

1. Most of 1000 people care so much about status that they're willing to prioritize it over truth, especially since this is LessWrong, where we gather around the theme of rationality. If there's anyplace you'd think it would be unlikely to find a lot of people lying about things on a survey, it's here.

2. The people who take the survey know that their IQ contribution is going to be watered down by the 1000 other people taking the survey. Unless they have collaborated by PM and made a pact to fudge their IQ test figures, these frequently math-oriented people must know that fudging their IQ figure is going to have very, very little impact on the average that Yvain calculates. I do not know why they'd see the extra work as worthwhile considering the expected amount of impact. Thinking that fudging only one of the IQs is going to be worthwhile is essentially falling for a Pascal's mugging.

3. Registration at LessWrong is free and it's not exclusive. At all. How likely is it, do you think, that this group of rationality-loving people has reasoned that claiming to have joined a group that anybody can join is a good way to brag about their awesomeness? I suppose you can argue that people who have karma on their accounts can point to that and say "I got karma in a gifted group", but lurkers don't have that incentive. All lurkers can say is "I read LessWrong," but that is harder to prove and even less meaningful than "I joined LessWrong".

Putting the numbers where our mouths are: If the average IQ for lurkers / people with low karma on LessWrong is pretty close to the average IQ for posters and/or people with karma on LessWrong, would you say that the likelihood of post-making/karma-bearing LessWrongers lying on the survey in order to increase others' status perceptions of them is pretty low? Do you want to get these numbers? I'll probably get them later if you don't, but I have a pile of LW me

From the public dataset:

165 out of 549 responses without reported positive karma (30%) self-reported an IQ score; the average response was 138.44.

181 out of 518 responses with reported positive karma (34%) self-reported an IQ score; the average response was 138.25.

One of the curious features of the self-reports is how many of the IQs are divisible by 5. Among lurkers, we had 2 151s, 1 149, and 10 150s.

I think the average self-response is basically worthless, since it's only a third of responders and they're likely to be wildly optimistic.

So, what about the Raven's test? In total, 188 responders with positive karma (36%), and 164 responders without positive karma (30%) took the Raven's test, with averages of 126.9 and 124.4. Noteworthy is the new max and min- the highest scorer on the Raven's test claimed to get 150, and the three sub-100 scores were 3, 18, and 66 (of which I suspect only the last isn't a typo or error of some sort).

Only 121 users both self-reported IQ and took the Raven's test. The correlation between their mean-adjusted self-reported IQ and mean-adjusted Raven's test was an abysmal .2. Among posters with positive karma, the correlation was .45; among posters without positive karma, the correlation was -.11.
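For anyone who wants to reproduce these splits, a rough R sketch (the column names IQ, IQTest and KarmaScore are guesses based on other snippets in this thread, and the file name is a placeholder for the public CSV linked in the post):

lw    <- read.csv("2012.csv", stringsAsFactors = FALSE)   # placeholder file name
iq    <- as.numeric(lw$IQ)          # self-reported IQ
raven <- as.numeric(lw$IQTest)      # iqtest.dk (Raven's) result
karma <- as.numeric(lw$KarmaScore)
both  <- !is.na(iq) & !is.na(raven)
pos   <- both & !is.na(karma) & karma > 0
non   <- both & (is.na(karma) | karma == 0)
cor(iq[both], raven[both])   # overall correlation across the ~121 paired responses
cor(iq[pos],  raven[pos])    # responders with positive karma
cor(iq[non],  raven[non])    # responders without positive karma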

3Epiphany
Thank you for these numbers, Vaniver! I should have thanked you sooner. I had become quite busy (partly with preparing my new endless September post) so I did not show up to thank you promptly. Sorry about that.
2Vaniver
You're welcome!
0NonComposMentis
I have thought of that. But a person who wants to lie about his IQ would think this way: If I lie and other LWers do not, it is true that my impact on the average calculated IQ will be negligible, but at least it will not be negative; but if I lie and most other LWers also lie, then the collective upward bias will lead to a very positive result which would portray me in a good light when I associate myself with other LWers. So there is really no incentive to not lie. (I'm not saying that they definitely lied; I'm merely pointing out that this is something to think about.) Fair point; but very often the kind of clubs you join does indicate something about your personality and interests, regardless of whether you are actually an active/contributing member or not. Saying "I read LessWrong" or "I joined LessWrong" certainly signals to me that you are more intelligent than someone who joined, say, Justin Bieber's fan club, or the Twilight fan-fiction club. And if there are numbers showing that LW readers tend to have IQs in the gifted range, naturally I would think that X is probably quite intelligent just by virtue of the fact that X reads LW. One last point is that LWers might not be deliberately lying: Perhaps they were merely victim to the Dunning-Kruger effect when self-reporting IQs. I am not sure if there are any studies showing that intelligent people are generally less likely to fall prey to the Dunning-Kruger effect. Last but not least, I would again like to suggest that future surveys include questions asking people how much time they spent on average preparing for exams such as the SAT and the ACT -- as I pointed out previously, scores on such exams can be very significantly improved just by studying hard, whereas tests like iqtest.dk actually measure your native intelligence.
1Epiphany
Not true. It would probably take at least 20 minutes to fudge all the stuff that has to be fudged, and when you're already fatigued from filling out survey questions, that's even less desirable. At best, this would be falling for a Pascal's mugging. True that some people may. But would the majority of survey participants... at a site about rationality? They were not asked to assess their own IQ; they were asked to report the results of a real assessment. To report something other than the results of a real assessment is a type of lie in this case. That's a suggestion for Yvain. I don't assist with the surveys.
8gwern
Make a copy and post it. Most browsers have the ability to print/save pages as PDFs or various forms of HTML.

Ok I managed to dig it up!

E/I   | S/N   | T/F   | J/P     (Category)
------+-------+-------+------------------------------
75/25 | 75/25 | 55/45 | 50/50   (Overall population)
27/73 | 10/90 | 75/25 | 65/35   (Mensans)
15/85 | 03/97 | 88/12 | 54/46   (LessWrongers) *

From the December 1993 Mensa Bulletin.

* The LessWrongers were added by me, using the same calculation method as in the comment where I test my personality type predictions and are based on the 2012 survey results.

4DaFranker
Thanks for the analysis. I agree with your conclusion. On a less relevant note, it does feel good to see more evidence that the community we hang out with is smart and awesome.

This also explains a lot of things. People regard IQ as if it is meaningless, just a number, and they often get defensive when intellectual differences are acknowledged. I spent a lot of time doing research on adult giftedness (though I'm most interested in highly gifted+ adults) and, assuming the studies were done in a way that is useful (I've heard there are problems with this), and my personal experiences talking to gifted adults are halfway decent as representations of the gifted adult population, there are a plethora of differences that gifted adults have. For instance, in "You're Calling Who A Cult Leader?" Eliezer is annoyed with the fact that people assume that high praise is automatic evidence that a person has joined a cult. What he doesn't touch on is that there are very significant neurological differences between people in just about every way you could think of, including emotional excitability. People assume that others are like themselves, and this causes all manner of confusion. Eliezer is clearly gifted and intense and he probably experiences admiration with a higher level of emotional intensity than most. If the readers of LessWrong and Hacker Ne... (read more)

Eliezer is clearly gifted and intense and he probably experiences admiration with a higher level of emotional intensity than most. If the readers of LessWrong and Hacker News are gifted, same goes for many of them. To those who feel so strongly, excited praise may seem fairly normal. To all those who do not, it probably looks crazy.

Would you predict then that people who're not gifted are in general markedly less inclined to praise things with a high level of intensity?

This seems to me to be falsified by everyday experience. See fan reactions to Twilight, for a ready-to-hand example.

My hypothesis would simply be that different people experience emotional intensity as a reaction to different things. Thus, some think we are crazy and cultish, while also totally weird for getting excited about boring and dry things like math and rationality... while some of us think that certain people who are really interested in the lives of celebrities are crazy and shallow, while also totally weird for getting excited about boring and bad things like Twilight.

This also leads each group to think that the other doesn't get similar levels of emotional intensity, because only the group's own type of "emotional intensity" is classified as valid intensity and the other group's intensity is classified as madness, if it's recognized at all. I've certainly made the mistake of assuming that other people must live boring and uninteresting lives, simply because I didn't realize that they genuinely felt very strongly about the things that I considered boring. (Obligatory link.)

(Of course, I'm not denying there being variation in the "emotional intensity" trait in general, but I haven't seen anything to suggest that the median of this trait would be considerably different in gifted and non-gifted populations.)

1Epiphany
Ok, where do I find them?
-1Desrtopa
If you have to go looking, you're lucky. If you want to find them in person, the latest Twilight movie is still in theaters, although you've missed the people who made a point of seeing it on the day of the premier.
3Epiphany
Haha, I guess so. I am very, very nerdy. I had fun getting worldly in my teens and early 20's, but I've learned that most people alienate me, so I've isolated myself into as much of an "ivory tower" as possible. (Which consists of me doing things like getting on my computer Saturday evenings and nerding so hard that I forget to eat.) Not really. What did they do when you saw them? How do we distinguish the difference between the kind of fanaticism that mentally unbalanced people display for, say, a show that is considered by many to have unhealthy themes and the kind of excitement that normal people display for the things they love? Maybe Twilight isn't the best example here.
6Desrtopa
I didn't. I don't particularly have to go out of my way to find Twilight fans, but if I did, I wouldn't. I think you're dramatically overestimating the degree to which fans of Twilight are psychologically abnormal. Harlequin romance was already an incredibly popular genre known for having unhealthy themes. Twilight, like Eragon, is a mostly typical work of its genre with a few distinguishing factors which sufficed to garner it extra attention, which expanded to the point of explosive popularity as it started drawing in people who weren't already regular consumers of the genre.
5Epiphany
I wouldn't be surprised if this is true. This still does not answer the question "What sample can we use that filters out fanaticism from mentally unbalanced people to compare the type of excitement that gifted people feel to the type of excitement that everyone else feels?" Not to assume that no gifted people are mentally unbalanced... I suppose we'd really have to filter those out of both groups.
4Eugine_Nier
Taboo "mentally unbalanced".
6Eugine_Nier
What distinction are you trying to make here?
9RobertLumley
we will all be brain-dead in 70 years.
0Epiphany
It's true that the downward trend can't go on forever, and to say that it's definitely going to continue would be (all by itself, without some other arguments) an appeal to history or slippery slope fallacy. However, when we see a trend as consistent and as potentially meaningful as the one below, it makes sense to start wondering why it is happening: IQ Trend Analysis
9RobertLumley
I was mostly just trying to point out that you are extrapolating from a sample size of three points. Three points which have a tremendous amount of common causes that could explain the variation. Furthermore you aren't extrapolating 10% further from the span of your data, which might be ok, but actually 100% further. You're extrapolating for as long as we have data, which is... absurd.
0Epiphany
One, I am used to seeing the term "sample size" applied to things like the people being studied, not the number of points used in a calculation. If there is some valid use of the term "sample size" that I am not aware of, would you mind pointing me in the correct direction?

Two, I am not sure where you're getting "three points" from. If you mean the number of IQ points that LessWrong has lost on the studies, then it was 7.18 points, not three. Two points per year, which could be explained in other ways, sure. No matter what the trend, it could be explained in other ways. Even if it was ten points per year we could still say something like "The smartest people got bored taking the same survey over and over and stopped." There are always multiple ways to explain data. That possibility of other explanations does not rule out the potential that LessWrong is losing intelligent people.

Not sure what these 10% and 100% figures correspond to. If I am to understand why you said that, you will have to be specific about what you mean. Including all of the data rather than just a piece of the data is bad why?
4RobertLumley
Three points referred to the number of surveys taken, which I didn't bother to look up, but I believe is three. 10% and 100% referred to the time span over which these data points referred to, ie. three years. Basically, I might be OK with you making a prediction for the next three months (still probably not) but extrapolating for three years based on three years of data seems a bit much to me.
-3Epiphany
Oh I see. The problem here is that "if the trend continues" is not a prediction. "I predict the trend will continue" would be a prediction. Please read more carefully the next time. You confused me quite a bit.
9RobertLumley
If you're not making a prediction, then it's about as helpful as saying "If the moon crashes into North America next year, LW communities will largely cease to exist."
3DaFranker
Looks like Aumann at work. My own readings, though more specifically on teenage giftedness in the 145+ range, along with stuff on ASD and asperger, heavily corroborate with this.

When I was 17, my (direct) family and I had strong suspicions that I was in this range of giftedness - suspicions which were never reliably tested, and thus neither confirmed nor infirmed. It's still up in the air and I still don't know whether I fit into some category of gifted or special individuals, but at some point I realized that it wasn't all that important and that I just didn't care. I might have to explore the question a bit more in depth if I decide to return into the official educational system at some point (I mean, having a paper certifying that you're a genius would presumably kind of help when making a pitch at university to let you in without the prerequisite college credit because you already know the material).

Just mentioning all of the above to explain a bit where my data comes from. Both of my parents and myself were all reading tons of books, references, papers and other information along with several interviews with various psychology professionals for around three months.

Also, and this may be another relevant point, the only recognized, official IQ test I ever took was during that time, and I had a score of "above 130"² (verbal statement) and reportedly placed in the 98th and 99th percentiles on the two sections of a modified WAIS test. The actual normalized score was not included in the report (that psychologist(?¹) sucked, and also probably couldn't do the statistics involved correctly in the first place). However, I was warned that the test lost statistical significance / representativeness / whatever above 125, and so that even if I had an IQ of 170+ that test wouldn't have been able to tell - it had been calibrated for mentally deficient teenagers and very low IQ scores (and was only a one-hour test, and only ten of the questions were written, the rest dyn
3someonewrongonthenet
Was Mensa's test conducted on the internet? The internet has a systematic bias in personalities. For example, subscriber counts for each personality type's subreddit favor Introversion and Intuition:

4,828 INTJ
4,457 INTP
1,817 INFP
1,531 INFJ
4A1987dM
IAWYC, but "the internet" is way too broad for what you actually mean -- ISTM that a supermajority of teenagers and young adults in developed countries uses it daily, though plenty of them mostly use it for Facebook, YouTube and similar and probably have never heard of Reddit. (Even I never use Reddit unless I'm following a link to a particular thread from somewhere else -- but the first letter of my MBTI is E so this kind of confirms your point.)
0someonewrongonthenet
Yeah...by "internet" what I meant was sites that most people do not know about - sites that you would only stumble upon in the course of extensive net usage. I once described it to a friend as "deep" vs "shallow" internet, with depth corresponding to the extent to which a typical visitor to the website uses the internet. Even within a website (say reddit) a smaller sub-reddit would be "deeper" than a main one. I myself am actually a counterexample to my own "extroverts don't use the internet as much" notion...but I'm only a moderate extrovert. (ENTP or ENFP depending on the test...ENTP description fits better. I listed ENTP in the survey.)
3Eugine_Nier
By that definition, there are many nearly disconnected "deep internets".
2someonewrongonthenet
Yes... I'm confused. Is this supposed to be a flaw in the definition? The idea here is to use relative obscurity to describe the degree to which a site is visited only by Internet users who do heavy exploring. There are only a few "shallow" regions... Facebook, Wikipedia, Twitter... the shallowest being Google. These are all high traffic, and even people who never use computers have heard some of these words. There are many deep regions, on the other hand, and most are disconnected.
0Eugine_Nier
It is if you then proceed to claim to have statistics over users of the "deep internet".
0satt
Yeah, different websites have different personality skews, which complicates things. Fortunately there's evidence against Mensa having used an online sample: Epiphany said the results were published in December 1993. It's fairly easy to give a survey to an Internet forum nowadays, but where would Mensa have found an online sample back in '93? IRC? Usenet? (There is a rec.org.mensa where people posted about personality and the Myers-Briggs back in 1993, but the only relevant post that year was someone asking about Mensans' personalities to no avail.)
1Epiphany
I don't have any more data than that, sorry. To suggest that people on the internet may have certain personality types is a good suggestion, but it raises two questions:

* Might your example of Reddit be similar to LW because LW gets lots of users from Reddit? (Or put another way, if the average LessWronger is gifted, maybe "the apple doesn't fall far from the tree" and Reddit has lots of gifted people, too.)

* Might gifted people gather in large numbers on the internet because it's easier to find people with similar interests? (Just because people on the internet tend to have those personality types, it doesn't mean they're not gifted.)

As for "the internet" having a systematic bias in personalities, I would like to see the evidence of this that's not based on a biased sample. It's likely that the places you go to find people like you will, well, have people like you, so even if you (or somebody else on one of those sites) observed a pattern in personality types across sites they hang out on, the sample is likely to be biased.
2Kindly
I'd say "LW has about as many gifted people as Reddit (proportionally)" should be a sort of null hypothesis: if this is true, then people on LessWrong are not actually surprisingly smart.
6gwern
I wouldn't say that's a reasonable null. Reddit has like 8 million users; 2% of the 310m American population is just 6.2m, so it would be difficult for Reddit to be 100% gifted while LW could easily be. The size disparity is so large that such a null seems more than a little weird.
0Kindly
I don't think I understand your objection. If LW were 100% gifted (while Reddit, presumably, is not?) wouldn't that be evidence that there's some sort of IQ selection at work? (or, conceivably, that just being on LW makes people smarter, although I think that's not supposed to be a thing).
4gwern
I'm saying that we could, just from knowing how big Reddit is, reject out of hand all sorts of proportions of gifted because it would be nigh impossible; a set of nulls (the proportions 0-100%), many of which (all >75%) we can reject before collecting any data is a pretty strange choice to make!
2Kindly
Well, really what I want to ask is: is LW any different, IQ-wise, from a random selection of Redditors of the same size? Possibly stating it in terms of a proportion of "gifted" people is misleading, but that's not as interesting anyway.
4gwern
I don't see the difference. A random selection of Redditors is going to depend on what Reddit overall looks like...
0Kindly
Well, I don't see the difference either, but I'm still not entirely sure what about this hypothesis seems unreasonable to you, so I was hoping this reformulation would help. The reasoning behind it is as follows: I figure a generic discussion board on the Internet has roughly the same IQ distribution as Reddit. If LW has a high average IQ, but so does Reddit, then presumably these are both due to the selection effect of "someone who posts on an online discussion board". So to see if LW is genuinely smarter, we should be comparing it to Reddit, not to the Normal(100,15) distribution.
2gwern
I would be shocked if that were true. Even after having grown stupendously, Reddit is still better than most discussion boards I happen to read.
0Kindly
Okay, fair enough. I don't actually have much experience with Reddit. I still think it's a reasonable reference class. For one thing, LW runs on Reddit-based code. In particular, I would say that being significantly smarter than Reddit is a good cutoff for the feeling of smugness to start kicking in.
2Nominull
Maybe it just means Reddit-folk are surprisingly smart? I mean, IQ 130 corresponds to 98th percentile. The usual standard for surprise is 95th percentile.
1someonewrongonthenet
That's a good point - I hadn't considered sample bias. Extending that point, though, LessWrong and Mensa are a biased sample in more than the simple fact that the people are gifted: it is only a subset of gifted people that choose to participate in Mensa. It should be mentioned, I'm using "internet" as shorthand for the "deep" internet... not Facebook. I'm talking websites that most people do not use, that you'd have to spend a lot of time on the internet to find. As such, the "internet" hypothesis would predict a greater bias towards smaller sub-reddits.

Anyway, I was mostly posing an alternate hypothesis. When I first noticed the trend on the personality forums, this is what I thought was happening:

* Slacking off / internet addiction selects for Perceiving and low Conscientiousness.
* Non-social-networking internet use selects for Introversion.
* Any forum discussing an idea without immediate practical benefits selects for iNtuition.

And then, factor in LessWrong/giftedness...

* If it's a math/science/logic topic, it selects for Thinking and iNtuition.
* High scores on Raven's matrices select for Thinking, iNtuition.
* High scores on working memory components select for Judging.
* The ACT/SAT additionally select for Conscientiousness.
* Strong mathematical affinity shifts those on the border of NTP and NTJ into *NTJ (people prefer dealing with intellectually ordered systems, even if they have messy rooms and chaotic lifestyles).
* A scientific/engineering ideology creates a shift towards the concrete (empirical evidence, practical gains in technology, etc.), shifting those on the border of NTJ and STJ into ISTJ.

In summary, I think LW and Mensa surveys are attracting a special subset of idea-driven and logical people (iNtuitives and Thinkers) who are likely to use the internet often enough to spot the survey (Introverts).
2Epiphany
That's much nicer and much more detailed. Questions this raises:

1. Might the "deep" internet you refer to be selecting for gifted people? (I think this is likely!)

2. Do we have figures on personality types and IQs for internet forums in general, not from a biased sample set? These figures would test your theory.
0someonewrongonthenet
I agree with (1), but would claim that it also selectively attracts introverts (and I'm unsure whether or not it will bias J-P to the P side).

(2) For each of these, I tried not to look at the data after finding the poll. I made predictions first. Just for fun / to correct for hindsight bias, anyone reading might want to do the same. To play, don't click on the link or read my prediction until you make yours. Also, here is some data which claims to represent the general population - http://mbtitruths.blogspot.com/2011/02/real-statistics.html - for comparison. I've already seen similar data on another site, so I won't state my predictions on this one.

A website posts stats for people who have taken the test. Unlike the above simple random sample, this selects for internet users. http://www.personalitypage.com/html/demographics.html Prediction: I'd consider this "shallow internet", so very weak biases to (I). The general population is (S); I'd expect a weak bias to (N) but not enough to overcome the general population's S centering. Result: apparently I suck at predictions. In hindsight all of the top three would be predicted to score high "Fi" on a Jungian cognitive function test, and Fi in theory would be more interested in taking personality tests. But that's hindsight, and I'm not sure whether the connection between MBTI and Jung has been verified empirically.

Here is a "deep internet" forum that I wouldn't ever visit... Christian singles chat forum! This should not suffer from the sample bias you mentioned earlier (he stated that websites I visit are likely to have users with similar personalities to me [ENTP]). http://christianchat.com/christian-singles-forum/34516-meyers-briggs-type-indicator-mbti-poll.html Prediction: I tried my best not to look at the data despite the high visual salience as soon as you open that link. Here is my prediction: I'd predict strong biases towards Introversion (because internet), slight biases towards iNtuition (because religion is idea-ba
0[anonymous]
I'm inclined to believe the survey results myself, but there is a third possibility. If a certain personality type (or distribution of types) reflects a desire to associate with gifted people, or to be seen as gifted, we'd likely expect that to be heavily overrepresented in MENSA; that's pretty much the reason the club exists, after all. We might also expect people with those desires to be less inclined to share average or poor IQ results, or even to falsify results. If the same personality type is overrepresented here, then we have a plausible cause for similar personality test results and for exaggerated IQ reporting, without necessarily implying that the actual IQ distributions are similar.
0Epiphany
Looking at Groups of IQs:

I acknowledge that the sample set for the highest IQ groups is, of course, rather small, but that's all we've got. What's been happening with the numbers for the highest IQ groups, if indicative of what's really happening, is not encouraging. The highest two groups have decreased in numbers while the lowest two have increased. Also, it looks like the prominence of each group has shifted over time such that the highest group went from being roughly 1/5 to 1/20 and the moderately gifted and normal groups have grown substantially.

Exceptionally Gifted Respondents (Self-Reported IQ of 160 or more):
2009: 11 (7%) | 2011: 27 (3%) | 2012: 22 (2%) (Decreased)

Highly Gifted Respondents (Self-Reported IQ between 145-159):
2009: 17 (11%) | 2011: 88 (9%) | 2012: 81 (7%) (Decreased)

Moderately Gifted Respondents (Self-Reported IQ between 132-144):
2009: 22 (14%) | 2011: 125 (13%) | 2012: 149 (11%) (Increased)

Normal Respondents (Self-Reported IQ between 100-131):
2009: 11 (7%) | 2011: 91 (10%) | 2012: 94 (9%) (Increased)

Each Group as a Percentage of Total IQ Respondents, by Year:

2009 (61 total IQ respondents): 18% Exceptionally Gifted, 28% Highly Gifted, 36% Moderately Gifted, 18% Normal IQ
2011 (331 total IQ respondents): 8% Exceptionally Gifted, 27% Highly Gifted, 38% Moderately Gifted, 28% Normal IQ
2012 (346 total IQ respondents): 6% Exceptionally Gifted, 23% Highly Gifted, 43% Moderately Gifted, 27% Normal IQ
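For reference, a small R sketch of how the band percentages can be recomputed from one year's public CSV (the IQ column name follows gwern's snippets elsewhere in this thread; the file name and the choice to drop blank and sub-100 entries are assumptions of the sketch):

lw <- read.csv("2012.csv", stringsAsFactors = FALSE)   # placeholder file name
iq <- as.numeric(lw$IQ)
iq <- iq[!is.na(iq) & iq >= 100]                       # match the bands defined above
bands <- cut(iq, breaks = c(100, 132, 145, 160, Inf), right = FALSE,
             labels = c("Normal", "Moderately gifted", "Highly gifted", "Exceptionally gifted"))
round(100 * prop.table(table(bands)))                  # percentage of IQ respondents per band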
0hyporational
I don't find it that hard to see why Lesswrong and Mensa would both select for introverted personalities. Do you? I think most sensible people can deduce that IQ is positively correlated with SAT and ACT and all of them are positively correlated with "status". I agree that SAT and ACT are more difficult to fudge though. I haven't ever done either of them. Can they be easily redone several times? Do (smart) people liberally talk about their scores in the US? Many people do IQ tests of different calibers several times and could just remember or report the best result they've gotten. There are different levels of dishonesty. "Lying" is a bit crude.
0alfredmacdonald
I don't think anyone on Less Wrong has lied about their IQ. (Addendum: not enough to seriously alter the results, anyway.) If you come up with a "valuing the truth" measure, LessWrong would score pretty highly on it, considering the elaborate ways people who post here go about finding true statements in the first place. To lie about your IQ would mean you'd have to know to some degree what your real IQ is, and then exaggerate from there.

However, I do think it's more likely than you mention that most people on LessWrong self-reporting IQ simply don't know what their IQ is in absolutely certain terms, since to know your adult IQ you'd have to see a psychometricist and receive an administered IQ test. iqtest.dk is normed by Mensa Denmark, so it's far more reliable than self-reports. You don't know where the self-reported IQ figures are coming from -- they could be from a psychometricist measuring adult IQ, or they could be from somewhere far less reliable. It could be that they know their childhood IQ was measured at somewhere around 135, for example, and are going by memory. Or they could know by memory that their SAT is 99th percentile and spent a minute to look up what 99th percentile is for IQ, not knowing it's not a reliable proxy. Or they might have taken an online test somewhere that gave ~140 and are recalling that number. Who knows? Either way, I consider "don't attribute to malice what you can attribute to cognitive imperfection" a good mantra here.

126 is actually higher than a lot of people think. As an average for a community, that's really high -- probably higher than all groups I can think of except math professors, physics professors and psychometricists themselves. It's certainly higher than the averages for MIT and Harvard, anyway.

About the similarity between self-reported IQ and SAT scores: SAT scores pre-1994 (which many of the scores on here are not likely to fall into) are not reliable as IQ test proxies; Mensa no longer accepts them. This is
-11Epiphany
  • "Robot god apocalypse cult spinoff from Harry Potter."

That should be on a T-shirt.

5Nornagest
I think that's my favorite description on that list.
3Tenoke
I'd buy that shirt. This is instant classic.
1Tripitaka
http://www.spreadshirt.com/design-your-own-t-shirt-C59/product/103759664/view/1/sb/l I think it's a nice robot, but maybe some of our art-inclined people would like to design a robot god that's got a Harry-Potterish feel about it?
7Bugmaster
I'm envisioning a robot in the classic Sistine Chapel God pose, only with menacingly glowing red eyes. Instead of pointing with its finger, it's holding a wand. There's a wizard hat on its head. The image could be done in silhouette, for that extra-stylized look. If I had any artistic skill, I'd draw it myself :-/
2thomblake
Spinoff is misspelled.
0Tripitaka
sigh Fixed: http://www.spreadshirt.com/design-your-own-t-shirt-C59/product/103760337/view/1/sb/l
1Bugmaster
This link takes me to a blank T-shirt design UI...
0[anonymous]
Myspace Fun Flash Generator

But I am skeptical of these numbers. I hang out with some people who are very closely associated with the greater Less Wrong community, and a lot of them didn't know about the survey until I mentioned it to them in person. I know some people who could plausibly be described as focusing their lives around the community who just never took the survey for one reason or another. One lesson of this survey may be that the community is no longer limited to people who check Less Wrong very often, if at all. One friend didn't see the survey because she hangs out on the #lesswrong channel more than the main site. Another mostly just goes to meetups. So I think this represents only a small sample of people who could justly be considered Less Wrongers.

Yeah, this also fits my observations--I suspect that reading LW and hanging out with LW types in real life are substitute goods.

[-]Tenoke220

Some of the 'descriptions of LessWrong' can make for a great quote on the back of Yudkowsky's book.

[-]Pablo210

Obnoxious self-serving, foolish trolling dehumanizing pseudointellectualism, aesthetically bankrupt.

;-)

Pratchett always includes a quote that calls him a "complete amateur," so there is some precedent for ostentatiously including negative reviews.

1alfredmacdonald
I have always despised the term "pseudointellectualism" since there isn't exactly a set of criteria for a pseudointellectual, nor is there a process of accreditation for becoming an intellectual; the closest thing I'm aware of is, perhaps, a doctorate, but the world isn't exactly short of Ph.D.s who put out crap. There are numerous graduate programs where the GRE/GPA combination to get in is barely above the undergrad averages, for example.
2Nisan
I'd like to have one of these quotes in cross-stitch to hang on my wall. (Hint: Christmas is around the corner!)

Before even reading the full details, I want to congratulate you on the impressive amount of work. The survey period is possibly my favorite time of the year on LessWrong!

EDIT: The links for the raw csv/xls data at the bottom don't seem to work for me.

6Scott Alexander
Thank you. That should be fixed now.
2Cthulhoo
It's indeed working, thank you!

Top 100 Users' Data, aka Karma 1000+

I was thinking about the fact that there is probably a difference between active LWers and lurkers or newbies. So I looked at the data for the Top 100 users (actually Top 107, because there was a tie). This happily coincided with the nice Schelling point of 1000 karma (makes sense, because people are likely to round to that number). To me, this reads as "has been actively posting for at least a year".

So, some data on 1000+ karma people:

Slightly more likely to be male:
92.5% Male, 7.4% Female

NOTE: First percentage is for 1000+ users, second number is for all survey respondents

Much more likely to be polyamorous:

Prefer mono 36% v. 54%
Prefer poly 24% v. 13%
Uncertain 33% v. 30%
Other 4% v. 2%

About the same Age:
average 28.6 v. 27.8

About as likely to be single
51% v. 53%

Equally likely to be vegetarian
12%

Much more likely to use modafinil at least once per month:
15% v. 4%

About equal on intelligence tests

SAT out of 1600: 1509 v. 1486
SAT out of 2400: 2260 v. 2319
Self reported IQ: 138.5 v. 138.7
online IQ test: 127 v. 126
ACT score: ... (read more)
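A rough R version of the subsetting described above, assuming the public CSV is loaded into a data frame lw as in gwern's snippets (the column names and the 1000-karma cutoff are taken on that assumption):

karma <- as.numeric(lw$KarmaScore)
top   <- !is.na(karma) & karma >= 1000
sum(top)                                        # should be roughly 107
mean(as.numeric(lw$IQ)[top],  na.rm = TRUE)     # self-reported IQ, 1000+ karma
mean(as.numeric(lw$IQ)[!top], na.rm = TRUE)     # everyone else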

3William_Quixote
Multiplying text by 1 or adding zero can often force auto-conversion in Excel. You can do this by copying a cell that contains 1, highlighting the data, and using Paste Special with "values" and "multiply". Shortcut keys: copy the 1, highlight the data, then Alt+E, S, then V, M, Enter.
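For anyone working from the raw CSV in R rather than Excel, the equivalent cleanup is a one-liner (sketch; assumes the data frame lw from the other snippets in this thread):

# coerce a text column to numbers; genuinely non-numeric entries become NA
lw$KarmaScore <- suppressWarnings(as.numeric(as.character(lw$KarmaScore)))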
2[anonymous]
I had prepared the following post before I came across the one by daenerys. Here are the statistics of the members who claim to have 4000 karma or more. The sample was too small and I was too lazy to fix the data, so I used medians (I did it manually). Yvain can definitely do a better job since he has the data already fixed and can access the unpublished data.

Probabilities: PManyWorld: 67.5% | PAliens: 80% | PAliens2: 20% | Psupernatural: 0.1% | PGod: 0.03% | PReligion: 0.00000005 | PCryonics: 8.5 | PAntiagathics: 20% | PSimulation: 10% | PWarming: 80% | PGlobalcatastrophicrisk: 75% | Singularity: 2070

TypeofGlobalCatastrophicRisk: 10 Unfriendly AI, 7 Pandemic bioengineered, 3 Nanotech / grey goo, 2 Nuclear war, 1 Unknown Unknowns, 1 unsure

Personality: MyersBriggs: 5 INTJ, 2 INTP, 2 ENFP, 1 ENTJ, 1 ISTP | BigFiveO: 80 | BigFiveC: 35 | BigFiveE: 37 | BigFiveA: 38 | BigFiveN: 37 | IQTest: 135 | AutismScore: 23

Politics: 5 Socialist, 12 Liberal, 3 Conservative, 6 Libertarian | AlternativeAlternativePolitics: 3 Moldbuggian, 2 Futarchist, 1 Technocratic, 1 Pragmatist (the rest were unremarkable) | PoliticalCompassLeftRight: 1.25 | PoliticalCompassLiberty: -5.28

Vegetarians: 16% | SRS: 36%
4wedrifid
0 INFPs with over 4k? Well, it looks like that has outed me as not filling in this year's survey! Well, unless I was the type to be squeamish about revealing karma or identifying information in such a case (not likely!)

"Eliezer Yudkowsky personality cult."
"The new thing for people who would have been Randian Objectivists 30 years ago."
"A sinister instrument of billionaire Peter Thiel."

Nope, no one guessed whose sinister instrument this site is. Muaha.

[-][anonymous]180

So I suggest that we now have pretty good, pretty believable evidence that the average IQ for this site really is somewhere in the 130s, and that self-reported IQ isn't as terrible a measure as one might think.

This still suffers from selection bias - I'd imagine that people with lower IQ are more likely to leave the field blank than people with higher IQ.

[-]gwern120

This still suffers from selection bias - I'd imagine that people with lower IQ are more likely to leave the field blank than people with higher IQ.

I think this is only true if we're going to also assume that the selection bias is operating on ACT and SAT scores. But we know they correlate with IQ, and quite a few respondents included ACT/SAT1600/SAT2400 data while they didn't include the IQ; so all we have to do is take for each standardized test the subset of people with IQ scores and people without, and see if the latter have lower scores indicating lower IQs. The results seem to indicate that while there may be a small difference in means between the groups on the 3 scores, it's neither of large effect size nor statistical significance.

ACT:

R> lwa <- subset(lw, !is.na(as.integer(ACTscoreoutof36)))
R> lwiq <- subset(lwa, !is.na(as.integer(IQ)))
R> lwiqnot <- subset(lwa, is.na(as.integer(IQ)))
R> t.test(lwiq$ACTscoreoutof36, lwiqnot$ACTscoreoutof36, alternative="less")

    Welch Two Sample t-test

data:  lwiq$ACTscoreoutof36 and lwiqnot$ACTscoreoutof36 
t = 0.5088, df = 141.9, p-value = 0.6942
alternative hypothesis: true difference in means is l
... (read more)
2magfrump
I'm interested in this analysis but I don't think the results are presented nicely, and I am not THAT interested. If someone else wants to summarize the parent I promise to upvote you.
8gwern
I... thought I did summarize it nicely:
5magfrump
That is actually better than I remembered immediately after reading it; with the data coming after the discussion my brain pattern-completed to expect a conclusion after the data. Also the paragraph is a little bit dense; a paragraph break before the last sentence might make it a little more readable in my mind. I had already upvoted your post, regardless :)
2Kindly
Indeed, more than 2/3 of responders left the field blank, so the real IQ could be pretty much anything.

Or we could just be really horrible. If we haven't even learned to avoid the one bias that we can measure super well and which is most susceptible to training, what are we even doing here?

You're fun to read. Posts explaining things and introducing terms that connect subjects and form patterns trigger reward mechanisms in the brain. This is uncorrelated with actually applying any lessons in daily life.

Two questions you might want to ask next year are "Do you think it is practical and advantageous to reduce people's biases via standardized exercises?" and "Has reading LW inspired you to try and reduce your own biases?"

If we haven't even learned to avoid the one bias that we can measure super well and which is most susceptible to training, what are we even doing here?

This sounds like a job for cognitive psychology!

"Well-calibrated" should probably be improved to "well-calibrated about X"-- it's plausible that people have better and worse calibration about different subjects, and the samples in the survey only explored a tiny part of calibration space.

[-]gwern140

The 2011 survey ran 33 days and collected 1090 responses. This year's survey ran 23 days and collected 1195 responses.

Why did you close it early? That seems entirely unnecessary.

One friend didn't see the survey because she hangs out on the #lesswrong channel more than the main site.

I put a link and exhortation prominently in the #lesswrong topic from the day the survey opened to the day it closed.

M (trans f->m): 3, 0.3% / F (trans m->f): 16, 1.3%

3 vs 16 seems like quite a difference, even allowing for the small sample size. Is this consistent with the larger population?

Prefer polyamorous: 155, 13.1%...NUMBER OF CURRENT PARTNERS:... [>1 partners = 4.5%]

So ~3x more people prefer polyamory than are actually engaged in it...

Referred by HPMOR: 262, 22.1%

Impressive.

gwern.net: 5 people

Woot! And I'm not even trying or linking LW especially often.

(I am also pleased by the nicotine and modafinil results, although you dropped a number in 'Never: 76.5%')

TROLL TOLL POLICY: Disapprove: 194, 16.4% Approve: 178, 15%

So more people are against than for. Not exactly a mandate for its use.

Are people who understand quantum mechanics more likely to believe in Ma

... (read more)

So ~3x more people prefer polyamory than are actually engaged in it...

I would not describe this as an accurate conclusion. For one thing, I currently have one partner who has other partners, so I think I am unambiguously "currently engaged in polyamory" even though I would have put 1 on the survey.

For another, I think it is reasonable to say that someone who is in a relationship with exactly one other person, but is not monogamous with that person (i.e. is available to enter further relationships) is engaged in polyamory.

8gwern
Do you think your situation explains 2/3s of those who prefer polyamory?
0gwillen
Well, I think you can probably break it down as follows, given just the data we have:

* 0 partners
* 1 partner, looking
* 1 partner, not looking
* 2+ partners

Of those, I would say the second and fourth are unambiguously practicing poly, the third could go either way but you could say is presumptively mono, and the first probably doesn't count (since they are actively practicing neither mono nor poly). If someone wants to run those numbers, I'd be curious how they come out.
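A starting point in R, for whoever picks this up (the column names RelationshipStyle and NumberofCurrentPartners are guesses at the survey's headers, and the "looking" split would need whichever field the survey actually used for it):

style    <- lw$RelationshipStyle                   # assumes the public CSV is loaded as lw
partners <- as.numeric(lw$NumberofCurrentPartners)
groups   <- cut(partners, breaks = c(-1, 0, 1, Inf),
                labels = c("0 partners", "1 partner", "2+ partners"))
table(style, groups)   # relationship-style preference vs. current partner count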
3gwern
The second could be people looking for replacements for their current partner, no? I wouldn't call that unambiguous.
2Cakoluchiam
I don't agree that the first doesn't count. The Relationship Style question was about preferred style, not current active situation. It could be that 2/3 of the polyamorous people just can't get a date (lord knows I've been there). (ETA:) Or, in the case of not looking, don't want a date right now (somewhere I've also been).
2DaFranker
I'm in the "no preference" camp, not the poly specifically, but I'm certainly there. LessWrong does seem to indirectly filter for people who are there, simply because people who aren't are less likely to take an interest in things that would lead them to LW, IME.
2JoeW
TL;DR - I think it's not that simple. Opinion is divided as to whether poly is an orientation or a lifestyle (something one is vs. something one does). i.e. saying someone with no partners is practising neither mono nor poly is like saying someone with no partners is not currently engaged in homo-/bi-/hetero-sexuality. (However I would accept a claim that they were engaged in asexuality.)
2thomblake
This is a good point. I wonder if it's worth even making the distinction between "lifestyle" and "act". Thus, poly could be an orientation ("I'm not poly because I don't want multiple partners"), lifestyle ("I'm not poly because I don't have and I'm not actively seeking multiple partners"), and act ("I'm not poly because I don't currently have multiple partners"). I used to always use the "act" definition when discussing sexual orientation ("I don't have one - I haven't had sex with anyone lately") to the confusion of all interlocutors.
6JoeW
Heh, in fact I started but then deleted as a derail some discussion of problems in activist and academic discussions of sexual orientation - what are we to make of someone whose claimed orientation (identification) does not match their current and past behaviour, which might in turn be different again to their stated actual preferences. I'm not current in my academic reading of sexuality, but when I was, anyone researching from a public health perspective went with behaviour, while psychologists and sociologists were split between identification and preference. Queer activism seems to have generally gone with identification as primary, although I'm not as current there as I used to be. The trumping argument there was actually precisely your situation, where to accept behaviour as primary meant that no virgins had any orientation, and that does not agree with our intuitions or most peoples' personal experiences. There's also a bi-activism point which says that position means the only "true" bisexuals are people engaged in mixed-gender group sex. (This is intended as reductio ad absurdem but I've heard people use it seriously.) Poly seems to be more complicated still, q.v. distinctions between swinging, "monogamish", open relationships, polyfidelity and polyamory. I know multiple examples of dyadic couples who regularly have sex with other people but identify as monogamous, and of couples who aren't currently involved with anyone else, aren't looking, but are firm in their poly identification. I guess my TL;DR is that I'm entirely untroubled by an apparent difference between preference and practice, and if the survey had asked similar questions about sexual orientation preference & practice, we would have seen "discrepancies" there too.

3 vs 16 seems like quite a difference, even allowing for the small sample size. Is this consistent with the larger population?

What struck me was not the difference in numbers of FtM and MtF, but the fact that more than ten percent of the survey population that identifies as female is MtF.

6Cakoluchiam
Hypothesis: those directly affected by the troll policy (trolls) are more likely to have strong disapproval than those unaffected by the troll policy are to have strong approval. In my opinion, a strong moderation policy should only fail a motion to increase security if disapproval wins a plurality over approval and abstention combined, rather than by a direct comparison of disapproval against approval. (withdrawn as it applies to LW, whose trolls are apparently less trolly than other sites I'm used to)
[-]gwern240

Hypothesis: those directly affected by the troll policy (trolls) are more likely to have strong disapproval than those unaffected by the troll policy are to have strong approval.

Hypothesis rejected when we operationalize 'trolls' as 'low karma':

R> lwtroll <- lw[!is.na(lw$KarmaScore),]
R> lwtroll <- lwtroll[lwtroll$TrollToll=="Agree with toll" | lwtroll$TrollToll=="Disagree with toll",]
R> # disagree=3, agree=2; so:
R> # if positive correlation, higher karma associates with disagreement
R> # if negative correlation, higher karma associates with agreement
R> # we are testing hypothesis higher karma = lower score/higher agreement
R> cor.test(as.integer(lwtroll$TrollToll), lwtroll$KarmaScore, alternative="less")

    Pearson's product-moment correlation

data:  as.integer(lwtroll$TrollToll) and lwtroll$KarmaScore 
t = 1.362, df = 315, p-value = 0.9129
alternative hypothesis: true correlation is less than 0 
95 percent confidence interval:
 -1.0000  0.1679 
sample estimates:
    cor 
0.07653
R> # a log-transform of the karma scores doesn't help:
R> cor.test(as.integer(lwtroll$TrollToll), log1p(lwtroll$KarmaScore), altern
... (read more)

If this were anywhere but a site dedicated to rationality, I would expect trolls to self-report their karma scores much higher on a survey than they actually are, but that data is pretty staggering. I accept the rejection of the hypothesis, and withdraw my opinion insofar as it applies to this site.

4thomblake
I wonder, if you split out poly/mono preference and number of partners, whether the number who prefer poly but have <2 partners would be significantly different from the number who prefer mono but have <1 partner. Now that I've wondered this out loud, I feel like I should have just asked a computer.
8DaFranker
I was about to reply the same thing. The quoted statement doesn't sound particularly more surprising than "Most people prefer to be in a relationship, but only a fraction of those are actually engaged in one".
7Kindly
Would it be more surprising to find people that prefer poly relationships, but only have one partner and aren't looking for more, than to find people that prefer mono relationships, but have no partners and aren't looking for any? Among those with firm mono/poly preferences, there are 15% of the former (24% if we also include people that prefer poly, have no partners, and aren't looking for more) and 14% of the latter.
6Kindly
Also, roughly 2/7 of people that prefer poly are single, while roughly 3/7 of people that prefer mono are.
4thomblake
Thanks, computer!
2Kindly
Oh, I forgot to answer your actual question. Slightly over 2/3 of people that prefer poly have 0 or 1 partners. Edit: Although I guess this much was evident from the data if we assume that people that prefer mono won't have 2 or more partners. I guess the group that doesn't have a firm mono/poly preference (which I ignored entirely) could confuse things a bit.
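
A sketch of how those proportions could be read off directly with a cross-tabulation, again with hypothetical column names (PreferredRelationshipStyle, NumberofCurrentPartners) and with the answer wordings from the write-up standing in for whatever the CSV actually uses:

R> lw <- read.csv("lw-2012.csv")
R> # hypothetical column names and codings -- adjust to the actual CSV:
R> style <- as.character(lw$PreferredRelationshipStyle)
R> partners <- suppressWarnings(as.integer(as.character(lw$NumberofCurrentPartners)))
R> firm <- style %in% c("Prefer monogamous", "Prefer polyamorous")
R> # rows: firm preference; columns: 0, 1, or 2+ current partners; cells: row proportions
R> round(prop.table(table(style[firm], pmin(partners[firm], 2)), margin=1), 2)

If the codings match, the roughly-2/7 and roughly-3/7 single fractions quoted above should appear in the first column of that table.
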
0thomblake
So, people that prefer mono are more likely to have their preferred number of partners, but people who prefer poly have more partners.
2DaFranker
Not by that much, but yes, I suppose a tad more. Thanks for clearing this up.
3thomblake
As I understand it, there isn't good data. Stereotypically, there are more MtF than FtM. But according to Wikipedia, a Swedish study found a ratio of 1.4:1 in favor of MtF for those requesting sexual reassignment surgery, and 1:1 for those going through with it. Of course, this is the sort of Internet community where I'd expect some folks to identify as trans without wanting to go through surgery at all.
[-]gwern150

After I posted my comment, I realized that 3 vs 16 might just reflect the overall gender ratio of LW: if there's no connection between that stuff and finding LW interesting (a claim which may or may not be surprising depending on your background theories and beliefs), then 3 vs 16 might be a smaller version of the larger gender sample of 120 vs 1057. The respective decimals are 0.1875 and 0.1135, which is not dramatic-looking. The statistics for whether membership differs between the two pairs:

R> M <- as.table(rbind(c(120, 1057), c(3,16)))
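R> # (the columns below group people by sex assigned at birth: 120 cis women plus 3 FtM, vs. 1057 cis men plus 16 MtF)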
R> dimnames(M) <- list(status=c("c","t"), sex=c("F","M"))
R> M
      sex
status    F    M
     c  120 1057
     t    3   16

R> chisq.test(M, simulate.p.value = TRUE, B = 20000000)

    Pearson's Chi-squared test with simulated p-value (based on 2e+07 replicates)

data:  M 
X-squared = 0.6342, df = NA, p-value = 0.4346

(So it's not even close to the usual significance level. As intuitively makes sense: remove or add one person in the right category, and the ratio changes a fair bit.)

Under this theory, it seems (with low statistical confidence of course) that LW-interest is perhaps correlated with biological sex rather than gender identity, or perhaps with assigned-gender-during-childhood. Which is kind of interesting.

8Emile
Does anybody know if this holds for other preferences that tend to vary heavily by gender? Are MtF transsexuals heavily into, say, programming or science fiction? (I know of several transsexual game developers/designers, all MtF).
8TorqueDrifter
I don't know of any such data. I'd imagine that there's less of a psychological barrier to engaging in traditionally "gendered" interests for most transgendered people (that is, if you think a lot about gender being a social construct, you're probably going to care less about a cultural distinction between "tv shows for boys" and "tv shows for girls"). Beyond that I can't really speculate. Edit: here's me continuing to speculate anyway. A transgendered person is more likely than a cisgendered person to have significant periods of their life in which they are perceived as having different genders, and therefore is likely to be more fully exposed to cultural expectations for each.
5thomblake
FWIW, I have the opposite intuition. Transgendered people (practically by definition) care about gender a lot, so presumably would care more about those cultural distinctions. Contrast the gender skeptic: "What do you mean, you were assigned male but are really female? There's no 'really' about it - gender is just a social construct, so do whatever you want."
[-][anonymous]150

It's more complicated than that. Gender nonconformity in childhood is frequently punished, so a great many trans people have some very powerful incentives to suppress or constrain our interests early in life, or restrict our participation in activities for which an expressed interest earns censure or worse.

Pragmatically, gender is also performed, and there are a lot of subtle little things about it that cisgender people don't necessarily have innately either, but which are learned and transmitted culturally, many of which are the practical aspects of larger stuff (putting on makeup and making it look good is a skill, and it consists of lots of tiny subskills). Due to the aforementioned process, trans people very frequently don't get a chance to acquire those skills during the phase when their cis counterparts are learning them, or face more risks for doing so.

Finally, at least in the West: Trans medical and social access were originally predicated on jumping through an awful lot of very heteronormative hoops, and that framework still heavily influences many trans communities, particularly for older folks. This aspect is changing much faster thanks to the internet, but you still o... (read more)

3TorqueDrifter
Yeah, no idea how good my intuitions are here. I don't have much experience with the subject, and frankly have a little difficulty vividly imagining what it's like to have strong feelings about one's own gender. So let's go read Jandila's comments instead of this one.
5[anonymous]
It's a common inside joke amongst SF-loving, programmer trans women that there are a lot of SF-loving, programmer trans women, or that trans women are especially and unusually common in those fields. But they usually don't socialize with large swathes of other trans women who come unsorted by any other criterion save "trans and women"; I think this is an availability bias coupled with a bit of "I've found my tribe!" thinking.
2A1987dM
Yep, I'd guess that matters a great deal. (IIRC certain radical feminists dislike male-to-female transsexuals for that reason.)
4TorqueDrifter
That's the explanation I'd lean towards myself. As for the radical-feminists-versus-transsexuals thing - there seems to be a fair amount of tension between the gender/sexuality theories of different parts of the queer and feminist movements, which are generally glossed over in favor of cooperation due to common goals. Which, actually, is somewhat heartening.

After I posted my comment, I realized that 3 vs 16 might just reflect the overall gender ratio of LW

Now I feel dumb for not even noticing that. "In a group where most people were born males, why is it the case that most trans people were born males?" doesn't even seem like a question.

3A1987dM
That sounds like hindsight bias. If there were 16 trans men and 3 trans women, you'd be saying ‘"In a group where most people currently identify as men, why is it the case that most trans people currently identify as men?" doesn't even seem like a question.’
0VAuroch
I can attest that this reasoning occurred to me knowing only that there were 1.3% trans women; my prediction was 'based on my experience with trans people, this probably reflects upbringing-assigned gender, so I expect to see fewer trans men'.
0DaFranker
Haha, that's a great way to look at it. Had skipped over this myself too! Now it makes me wonder which would be more significant between this and the apparent prominence of M->F over F->M that I just read some stats about (if the stats are true/reliable, 0.7 conf there).
0thomblake
link?
2DaFranker
Oh, heh, sorry. I mentioned them in a different subthread around here. The linked PDF has a few fun numbers, but I didn't notice any obvious dates or timelines. The main website hosting it has a bit more data and references, from what little I looked into.
2DaFranker
Hmm. Thanks for the link to that wikipedia page. Interesting... ...the definitions given on that wikipedia page seem to imply that I'm strongly queer and/or andro*, at least in terms of my experiences and gender-identity. Had never noticed nor cared (which, apparently, is a component of some variants of andro-somethings). I'm (very visibly) biologically male and "identify" (socially) as male for obvious reasons (AKA don't care if miscategorized, as long as the stereotyping isn't too harmful), and I'm attracted mostly to females because of instinct (I guess?) and practical issues (e.g. disdain of anal sex). Oh well, one more thing to consider when trying to figure out why people get confused by my behaviors. I've always (in recent years anyway) thought of myself as "human with penis".
[-][anonymous]180

I'm attracted mostly to females because of instinct (I guess?) and practical issues (e.g. disdain of anal sex).

If you can't think of practical ways for two people with penises to have sex that don't involve anal, you might just need better porn.

3DaFranker
Haha, true. Then again, I'm guessing that looking at actual male-male porn (which I've never done yet) would decrease the odds of that happening.
3A1987dM
Same here. (But one of the reasons why I identify as male in spite of being somewhat psychologically androgynous is that I take exception with the notion that if someone doesn't have sufficiently masculine (feminine) traits, he (she) is not a ‘real’ man (woman). And I'm almost exclusively attracted to females, almost exclusively because of ‘instinct’ (a.k.a. males just don't give me a boner; is there a better word than “instinct”?) but also because I'd like to have biological children some day.) Maybe the next survey should include the Bem Sex Role Inventory. (According to this, I'm slightly above median for both masculinity and femininity, and slightly more feminine than masculine.)
2Scott Alexander
Yes, but I imagined someone like Eliezer might have the hypothesis that the math naturally leads to MWI and rationalists who understood the math would realize that.
2DaFranker
Might be close enough to assume it's due to the small sample. No idea how reliable those numbers are, nor how they compare with elsewhere in the world. The main website that hosts that PDF should have more complete data that could be cross-referenced, if someone wants to take the time to do that.
0thomblake
Interesting. Going to the source of some of those numbers, it doesn't look like there was clear specification of what they meant by "sexual orientation", so that line of the chart is actually entirely meaningless to me. Anyone have a good guess as to how people would have answered?
2DaFranker
AFAICT It seems to be answered in terms of the sex of their partners post-transition, i.e. a hetero MTF would prefer sexually-male partners. The fact that the 59% stat for history of rape is symmetrical for MTF and FTM really bugs me, though. It seems to imply weird causal arrows pointing in completely opposite directions depending on whether you were originally male or female, based on my prior knowledge. Which seems very scary, because it could also imply that MTFs are a dozen decibels more likely to be targets of rape than average females. Now I wonder if that has been taken into account when looking at the mental health stats.
1thomblake
Yeah, somewhere in there are some pretty disturbing violent crime stats. A notable proportion of violent crime in one country was towards trans people.
2NancyLebovitz
Overview for the United States
0[anonymous]
Like "FTM: 35% Heterosexual, 33% Bisexual, 18% Gay, 12% Lesbian".
0AlexMennen
No, the data showed people who could solve the Schrodinger Equation being more likely to accept MWI, contrary to shminux's hypothesis, so the p-value would be 0.13 in a one-tailed test for the opposite of shminux's hypothesis. I guess that means the p-value for a one-tailed test for shminux's hypothesis would be 0.87.
0A1987dM
Well, there also are nine times as many male-born males as female-born females, for that matter.
0gwern
See http://lesswrong.com/lw/fp5/2012_survey_results/7xfh

Thank you for this public service. It seems definitely helpful for the community, and possibly helpful for historians :-)

and possibly helpful for historians :-)

I now have this mental image of future sociology grad students working on their theses by reading through every article and comment ever posted on Less Wrong, and then analyzing us.

I now have an image of those sociologists giving up on reading everything and writing scripts to do some sort of ngram or inverse-markov analysis, then mis-applying statistics to draw wrong conclusions from it. Am I cynical yet?

4Kaj_Sotala
I was actually thinking of the kind of sociology thesis that doesn't use any statistics, and is rather a purely qualitative analysis.
4magfrump
I now have an image of farther future sociologists writing scathing commentaries on the irony of poorly-used statistical measures of this community.
7Armok_GoB
I'm imagining them being vast posthumans with specialized modalities for it that can't really be called "reading".

According to IQ Comparison Site, an SAT score of 1485/1600 corresponds to an IQ of about 144. According to Ivy West, an ACT of 33 corresponds to an SAT of 1470 (and thence to IQ of 143).

Only if you took the SAT before 1994. Here are the percentiles for SATs taken in 2012; someone at the 97th percentile would get ~760 on math and ~730 on critical reading, adding up to 1490 (leaving alone the writing section to keep it within 1600), and the 97th percentile corresponds to an IQ of 128.
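
For reference, the percentile-to-IQ conversion being used here is just the normal quantile on a mean-100, SD-15 scale; a one-line check in R:

R> 100 + 15 * qnorm(0.97)   # 97th percentile on a mean-100, SD-15 IQ scale; roughly 128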

Here's a classic calibration chart.

An important part of a calibration chart for a person is how often they give answers at each confidence level. Looking at your table, I would focus on the large number of answers between 10% and 30%.

I'll also point out that fixed windows are a pretty bad way to do elicitation. I tend to come at the calibration question from the practical side: how do we get useful probabilities out of subject-matter experts without those people being experts at calibration? Adopting those strategies seems more useful than making people experts at calibration.

[-]gwern120

Are people who understand quantum mechanics more likely to believe in Many Worlds? We perform a t-test, checking whether one's probability of the MWI being true depends on whether or not one can solve the Schrodinger Equation. People who could solve the equation had on average a 54.3% probability of MWI, compared to 51.3% in those who could not. The p-value is 0.26; there is a 26% probability this occurs by chance. Therefore, we fail to establish that people's probability of MWI varies with understanding of quantum mechanics.

Some Bayesian analysis using the BEST MCMC library for normal two-group comparisons:

R> lw <- read.csv("lw-2012.csv")
R> 
R> lwm <- subset(lw, !(" " == as.character(SchrodingerEquation)))
R> lwm <- subset(lwm, !is.na(as.integer(as.character(PManyWorlds))))
R> mwiyes <- as.integer(as.character(subset(lwm, SchrodingerEquation == "Yes")$PManyWorlds))
R> mwino <- as.integer(as.character(subset(lwm, SchrodingerEquation == "No")$PManyWorlds))
R> 
R> source("BEST.R")
R> mcmcChain = BESTmcmc(mwino, mwiyes)
R> show(postInfo)
           SUMMARY.INFO
PARAMETER       mean 
... (read more)
4gjm
For what it's worth, I interpreted his "there is a 26% probability this occurs by chance" exactly as "if there's no real difference, there's a 26% probability of getting this sort of result by chance alone" or equivalently "conditional on the null hypothesis Pr(something at least this good) = 26%". I'd expect that someone who was making the classic error would have said "there is a 26% probability this occurred by chance".

When you discuss the calibration results, could you mention that the survey-takers were told what constituted a correct answer? I didn't take the survey and it isn't obvious from reading this post. Also, could you include a plug for PredictionBook around there? You've included lots of other helpful plugs.

7Scott Alexander
Done.
4Academian
Maybe a plug for the Credence Game too? ;) It's less in touch with real life than prediction book, but a lot faster.
1Eugine_Nier
I wonder whether consequentialism endorsement and possibly some of the probability questions correlate with the two family background questions.
2gwern
Two? I see FamilyReligion but I dunno what your other one is. But to test family & MoralViews:

R> lw <- read.csv("2012.csv")
R> lwr <- subset(lw, as.character(FamilyReligion) != " ")
R> lwr <- subset(lwr, as.character(MoralViews) != " " & as.character(MoralViews) != "Other / no answer")
R> levels(lwr$FamilyReligion); levels(lwr$MoralViews)
[1] " "                         "Agnostic"                  "Atheist and not spiritual"
[4] "Atheist but spiritual"     "Committed theist"          "Deist/Pantheist/etc"
[7] "Lukewarm theist"           "Mixed / Other"
[1] " "                                      "Accept / lean toward consequentialism"
[3] "Accept / lean toward deontology"        "Accept / lean toward virtue ethics"
[5] "Other / no answer"
R>
R> cor.test(as.integer(lwr$FamilyReligion), as.integer(lwr$MoralViews))

    Pearson's product-moment correlation

data:  as.integer(lwr$FamilyReligion) and as.integer(lwr$MoralViews)
t = -0.6631, df = 858, p-value = 0.5075
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 -0.08935  0.04429
sample estimates:
     cor
-0.02263

I wondered if maybe the levels were screwing things up, even though they're in a logical order which should show any correlation if it exists, so I binned all the results into just binary 'atheist' and 'theist' (as it were), and looked at a chi-squared:

R> fr <- sapply(as.integer(lwr$FamilyReligion), function(x) if(x>4) {1} else {0})
R> mv <- sapply(as.integer(lwr$MoralViews), function(x) if(x>2) {1} else {0})
R> ct <- chisq.test(fr,mv); ct

    Pearson's Chi-squared test with Yates' continuity correction

data:  fr and mv
X-squared = 2e-04, df = 1, p-value = 0.9894

R> ct$expected; ct$observed
   mv
fr      0      1
  0 200.6  58.43
  1 465.4 135.57
   mv
fr    0   1
  0 200  59
  1 466 135

I am a little surprised. Maybe I messed up somehow.
0Eugine_Nier
The one about which religion.
0gwern
That's FamilyReligion then... I don't see why there'd be two such questions about family religion as you seem to think.
0Eugine_Nier
I meant RELIGIOUS BACKGROUND.
6gwern
That field has 41 levels, oy gevalt (I particularly like the religious background "Mother: Jewish; Fat"). Someone else can figure out that analysis!
4A1987dM
;-D (Yvain should use larger text fields the next time.)
[-]gwern130

The lesson I have drawn from the survey is that free-response text fields are the devil and no one is to be trusted with them.

[-][anonymous]100

Yvain, I rechecked the calibration survey results, and encourage someone to recheck my recheck further:

First, these strata overlap... is 5 in 0-5 or 5-15? The N doesn't actually match either one when I recheck.

Secondly, I am not sure what program you used to calculate the statistics, but when I checked in Excel, some people had answered with probabilities rather than percentages, and those got pulled in as numbers less than one. I tried to clean that up for these figures. (I also removed someone who answered 150.)

Thirdly, there are 20 people in this N. You can be either 60% correct (12 correct), or 65% correct (13 correct), but 60.2% correct in this line seems weird. 85-95: 60.2% [n = 20]

Here was my attempt at recalculating those figures: N after data cleaning was 998.

0-<5: 9.1% [n = 2/22]

5-<15: 13.7% [n = 25/183]

15-<25: 9.3% [n = 21/226]

25-<35: 10% [n = 20/200]

35-<45: 11.1% [n = 10/90]

45-<55: 17.3% [n = 19/110]

55-<65: 20.8% [n = 11/53]

65-<75: 22.6% [n = 7/31]

75-<85: 36.7% [n = 11/30]

85-<95: 63.2% [n = 12/19]

95-100: 88.2% [n = 30/34]

I express low confidence in these remarks because I haven't rechecked this or gone into detail about data cleaning, but my brief take is:

1: Yes, there were some e... (read more)
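
A hedged sketch of one way to do that cleaning and half-open binning in R, with hypothetical column names (CalibrationAnswer for the stated confidence, CalibrationCorrect for whether the respondent's answer was right); the real headers and codings will differ:

R> lw <- read.csv("lw-2012.csv")
R> # hypothetical column names -- adjust to the actual CSV headers:
R> conf <- suppressWarnings(as.numeric(gsub("%", "", as.character(lw$CalibrationAnswer))))
R> conf <- ifelse(!is.na(conf) & conf < 1, conf * 100, conf)    # treat answers like 0.6 as 60%
R> ok <- !is.na(conf) & conf >= 0 & conf <= 100                 # drop blanks and out-of-range answers like 150
R> correct <- lw$CalibrationCorrect[ok] == "Yes"                # assumed coding of correctness
R> bins <- cut(conf[ok], breaks=c(0,5,15,25,35,45,55,65,75,85,95,100), right=FALSE, include.lowest=TRUE)
R> round(100 * tapply(correct, bins, mean), 1)   # percent correct within each half-open confidence bin
R> table(bins)                                   # the n for each bin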

5gwern
I think the calibration data needs additional cleaning. Eyeballing, I see % signs, decimals, and English comments.

In the fair coin questions, there were two people answering 49.9, one 49.9999, one 49.999999, and one 51. :-/

3Tripitaka
Here is a paper which shows that natural coin tosses are not fair, with a 51:49 bias toward whichever side is facing up at the start of the toss. Maybe ask for the probability on an idealized coin toss next year? (edit: fixed the markup)
7A1987dM
Certain tossing techniques can bias the results much more than that, as described in Probability Theory by Jaynes. But the survey did ask about a “fair coin” (emphasis added).
4dbaupp
(For the [text](url) link syntax to work, you need the full URL, i.e. including the http:// bit at the start: http://comptop.stanford.edu/preprints/heads.pdf)
0TrE
Were they excluded from the probabilities questions?
2Cakoluchiam
It was stated that they should give the obvious answer and that surveys that didn't follow the rules would be thrown out... but maybe 50% isn't as obvious as 99.99% of the population thinks it is. Is there any reason the prompt for the question shouldn't have explicitly stated "(The obvious answer is the correctly formatted value equivalent to p=0.5 or 50%)"?
0Eugine_Nier
My working theory is that they were trolling.
0Cakoluchiam
Either way, should we or shouldn't we have trusted the rest of their answers to be statistically reliable?
1EricHerboso
I see no reason to throw out their responses. They appear simply not to be familiar with the terminology. Someone who does not know that a "fair coin" is defined as having .5 probability for each side might envision it as a real physical coin that doesn't have two heads.

Other Christian: 517, 43.6%
Catholic: 295, 24.9%

Now that I think about it, lumping Protestants and Orthodox Christians together while keeping Catholics separate is about as bizarre as it gets.

Pandemic (bioengineered): 272, 23%
Environmental collapse: 171, 14.5%
Unfriendly AI: 160, 13.5%
Nuclear war: 155, 13.1%
Economic/Political collapse: 137, 11.6%
Pandemic (natural): 99, 8.4%
Nanotech: 49, 4.1%
Asteroid: 43, 3.6%

This is one question where the results really surprised me. Combining natural and bioengineered pandemics, almost a third of respondents picked pandemic as the top near-term x-risk, which was almost twice as many as the next highest risk. I wonder if the x-risk discussions we tend to have may be somewhat misallocated.

Note that the question on the survey was not about existential risks:

Type of Global Catastrophic Risk

Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?

I answered bio-engineered pandemics, but would have answered differently for x-risks.

4A1987dM
Note that x-risks as defined by that questions are not the same as x-risks as defined by Bostrom. In principle, a catastrophe might kill 95% of the population but humanity could later recover and colonize the galaxy, or a different type of catastrophe might only kill 5% of the population but permanently prevent humans from creating extraterrestrial settlements, thereby setting a ceiling to economic growth forever.
4V_V
So, if extraterrestrial settlements are unlikely to be ever created regardless of any catastrophe, the point is moot.
0A1987dM
I think that the likes of Bostrom would consider anything that would prevent us from establishing extraterrestrial settlements to be a catastrophe itself, even though it's ‘business as usual’.
1V_V
Then the 'catastrophe' could quite possibly be intrinsic to the laws of physics and the structure of the solar system.
1MugaSofer
Many are.
0NancyLebovitz
I think I went for political/economic collapse, but with no very great certainty. This is probably a question which could lead to some interesting discussion. Wiping out 90% or so of the human race without killing everyone seems unlikely in general. It wasn't on the list, but I'd probably go for infrastructure disaster-- something which could include more than one of the listed items.
2V_V
Less likely than killing 100% of the human race? Why? Remember that humanity went through bottlenecks where the total population was reduced to tens of thousands, scattered in pockets of hundreds to thousands. Humanity survived the Toba supereruption in prehistoric times, and would probably survive the Chicxulub impact if it happened today. Other than an impact powerful enough to sterilize the biosphere, I don't see many things capable of obliterating the human species in the foreseeable future. Pandemics don't have a 100% kill rate (at least the natural ones; maybe an engineered one could, but who would be foolish enough to create such a thing?)
1MugaSofer
So many people.
1Eugine_Nier
A disgruntled microbiologist?
2V_V
I'm not an expert, but I don't think that a single individual, or even a small team, could do that. The genetic variety created and maintained by sexual reproduction pretty much ensures that no single infection mechanism is effective on all individuals: key components such as the cell surface proteins and the immune system show a large phenotypic variability even among people of common ancestry living in a small geographic region (that's also the reason why finding compatible organs for transplants is difficult). Even for the most infectious pathogens, there is always a sizeable part of the population that is completely or partially immune. In order to create an artificial pathogen capable of infecting and killing everybody, you would have to engineer multiple redundant infection mechanisms tailored to every relevant phenotypic variation, including the rare ones. Even if your pathogen killed 99.99% of the human population, far more than any natural pathogen ever did, there would still be 700,000 people left, more than enough to repopulate the planet.
3MugaSofer
Is this actually true? Of course, few diseases would actually have good odds of infecting everyone, but surely that's more a matter of exposure. [EDIT: or how you define "partial immunity".]
0V_V
By "partial immunity" I mean that you catch the disease, but only in attenuated form, maybe even subclinical or asymptomatic, and usually develop full immunity afterwards. This happened even with higly infectious diseases such as the medieval Black Death (Yersinia pestis), malaria, smallpox, and now happens with HIV. AFAIK, a superbug capable of infecting and killing everyone doesn't seem to be biologically plausible, at least without extensive genetic engineering.
0MugaSofer
Well, genetic engineering is a common part of scenarios like this. However, it was my understanding that not all natural diseases grant immunity to survivors. I'm not an expert, of course.
7[anonymous]
Tetanus doesn't grant immunity if you actually get it and survive. The bacteria are normally soil/intestinal organisms, and they don't grow within you to a high enough number for your immune system to get a good look at them; their toxin is just potent enough that even at low concentrations it kills you. There are also protist pathogens which express vast quantities of a particular coat protein on their surface, such that when you form an adaptive immune response against them it is almost certainly against that protein - and in something like one in 10^9 cell divisions their DNA rearranges so that they start expressing a different coat protein, evading the last immune response that their host managed to raise and resetting back to no immunity.
-2MugaSofer
Aha, I knew it! That's really interesting, actually.
0DaFranker
I've been led to understand that this was usually the other way around, or that the mechanism that allowed their survival in the first place was "change something in the immune system, see if it works, repeat until it does". Through some magical process of biology or chemistry afterwards, the found solution is then "remembered" and ready to be deployed again if the disease returns. I'm not quite sure whether anyone understands the exact mechanism behind this magic, but I certainly don't (yet).

* By "the other way around", I mean a selection effect; they survived because they were already more resistant and had the right biological configuration ready to become immune to it or somesuch. I'm not clear on the details, this is all second-hand (but from people who knew what they were talking about, or so it seemed at the time).
* ETA: Got curious. Looks like there's a pretty good understanding of the matter in the field after all. +1 esteem for immunology and +0.2 for scientific medicine in general. And those are some really great wikipedia articles.
-2MugaSofer
Oh, yeah, I know about that. I understood that it didn't work on everything, though. (Well, it doesn't work on the common cold, for a start, although I'm not sure if that kind of constant low-level mutation is feasible for more ... powerful ... diseases. EDIT: turns out it is.)
0NancyLebovitz
I don't know about 90% of the human race, but after the recent tunnel collapse in Japan, I think infrastructure disaster is looking a lot more likely, or possibly slow, grinding infrastructure failure. You could make a case that too much is taken by elites, or that too much is given away, but I think the big problem is that building is fun and maintenance is boring.
[-]CCC80

This survey looks like it was a massive amount of work to analyse. Three cheers for Yvain!

These are the results of the CFAR questions; I have also posted this as its own Discussion section post.

SUMMARY: The CFAR questions were all adapted from the heuristics and biases literature, based on five different cognitive biases or reasoning errors. LWers, on the whole, showed less bias than is typical in the published research (on all 4 questions where this was testable), but did show clear evidence of bias on 2-3 of those 4 questions. Further, those with closer ties to the LW community (e.g., those who had read more of the sequences) showed signifi... (read more)

MORE DETAILED RESULTS

There were 5 questions related to strength of membership in the LW community which I standardized and combined into a single composite measure of LW exposure (LW use, sequence reading, time in community, karma, meetup attendance); this was the main predictor variable I used (time per day on LW also seems related, but I found out while analyzing last year's survey that it doesn't hang together with the others or associate the same way with other variables). I analyzed the results using a continuous measure of LW exposure, but to simplify reporting, I'll give the results below by comparing those in the top third on this measure of LW exposure with those in the bottom third.

There were 5 intelligence-related measures which I combined into a single composite measure of Intelligence (SAT out of 2400, SAT out of 1600, ACT, previously-tested IQ, extra credit IQ test); I used this to control for intelligence and to compare the effects of LW exposure with the effects of Intelligence (for the latter, I did a similar split into thirds). Sample sizes: 1101 people answered at least one of the CFAR questions; 1099 of those answered at least one LW exposure question and 835 ... (read more)
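
For anyone who wants to reproduce that kind of composite from the public data, a minimal sketch of the standardize-and-average approach in R; the five column names below are placeholders for the actual exposure items, and ordinal items (e.g. sequence reading) would first need to be recoded to numeric scores:

R> lw <- read.csv("lw-2012.csv")
R> # hypothetical column names for the five exposure items -- adjust to the actual CSV headers:
R> items <- c("LessWrongUse", "SequenceReading", "TimeInCommunity", "KarmaScore", "MeetupAttendance")
R> exposure <- lw[, items]
R> exposure[] <- lapply(exposure, function(x) suppressWarnings(as.numeric(as.character(x))))
R> z <- scale(exposure)                       # standardize each item to mean 0, sd 1
R> lw$LWexposure <- rowMeans(z, na.rm=TRUE)   # composite = mean of the available standardized items
R> # split into thirds for the top-vs-bottom comparison described above:
R> breaks <- quantile(lw$LWexposure, probs=c(0, 1/3, 2/3, 1), na.rm=TRUE)
R> lw$LWtercile <- cut(lw$LWexposure, breaks, include.lowest=TRUE, labels=c("bottom","middle","top"))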