Thanks to everyone who took the 2014 Less Wrong Census/Survey. Extra thanks to Ozy, who did a lot of the number crunching work.

This year's results are below. Some of them may make more sense in the context of the original survey questions, which can be seen here. Please do not try to take the survey now; it is over, and your responses will not be counted.

I. Population

There were 1503 respondents over 27 days. The last survey got 1636 people over 40 days. The last three full days of this survey saw nineteen, six, and four responses, for an average of about ten per day. If we assume the next thirteen days would also have averaged about ten responses each - which is generous, since responses tend to trail off with time - we would have gotten about as many people as the last survey. There is no good evidence here of a decline in population, although the numbers are perhaps compatible with a very small decline.

II. Demographics

Sex
Female: 179, 11.9%
Male: 1311, 87.2%

Gender
F (cisgender): 150, 10.0%
F (transgender MtF): 24, 1.6%
M (cisgender): 1245, 82.8%
M (transgender FtM): 5, 0.3%
Other: 64, 4.3%

Sexual Orientation
Asexual: 59, 3.9%
Bisexual: 216, 14.4%
Heterosexual: 1133, 75.4%
Homosexual: 47, 3.1%
Other: 35, 2.3%

[This question was poorly worded and should have acknowledged that people can both be asexual and have a specific orientation; as a result it probably vastly undercounted our asexual readers]

Relationship Style
Prefer monogamous: 778, 51.8%
Prefer polyamorous: 227, 15.1%
Uncertain/no preference: 464, 30.9%
Other: 23, 1.5%

Number of Partners
0: 738, 49.1%
1: 674, 44.8%
2: 51, 3.4%
3: 17, 1.1%
4: 7, 0.5%
5: 1, 0.1%
Lots and lots: 3, 0.2%

Relationship Goals
Currently not looking for new partners: 648, 43.1%
Open to new partners: 467, 31.1%
Seeking more partners: 370, 24.6%

[22.2% of people who don’t have a partner aren’t looking for one.]


Relationship Status
Married: 274, 18.2%
Relationship: 424, 28.2%
Single: 788, 52.4%

[6.9% of single people have at least one partner; 1.8% have more than one.]

Living With
Alone: 345, 23.0%
With parents and/or guardians: 303, 20.2%
With partner and/or children: 411, 27.3%
With roommates: 428, 28.5%

Children
0: 1317, 81.6%
1: 66, 4.4%
2: 78, 5.2%
3: 17, 1.1%
4: 6, 0.4%
5: 3, 0.2%
6: 1, 0.1%
Lots and lots: 1, 0.1%

Want More Children?
Yes: 549, 36.1%
Uncertain: 426, 28.3%
No: 516, 34.3%

[418 of the people who don’t have children don’t want any, suggesting that the LW community is 27.8% childfree.]

Country
United States: 822, 54.7%
United Kingdom: 116, 7.7%
Canada: 88, 5.9%
Australia: 83, 5.5%
Germany: 62, 4.1%
Russia: 26, 1.7%
Finland: 20, 1.3%
New Zealand: 20, 1.3%
India: 17, 1.1%
Brazil: 15, 1.0%
France: 15, 1.0%
Israel: 15, 1.0%

Lesswrongers Per Capita
Finland: 1/271,950
New Zealand: 1/223,550
Australia: 1/278,674
United States: 1/358,390
Canada: 1/399,545
Israel: 1/537,266
United Kingdom: 1/552,586
Germany: 1/1,290,323
France: 1/4,402,000
Russia: 1/5,519,231
Brazil: 1/13,360,000
India: 1/73,647,058
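The per-capita figures above are simply each country's population divided by its respondent count. A minimal sketch of the calculation (the population figure below is back-derived from the table and is an assumption, not survey data):

```python
# One Less Wronger per N residents: N = population / respondents.
# The Finland population figure here is an assumption chosen to be
# consistent with the table above, not a number from the survey.
populations = {"Finland": 5_439_000}
respondents = {"Finland": 20}

def per_capita(country):
    """Return N such that the country has one respondent per N residents."""
    return round(populations[country] / respondents[country])

print(per_capita("Finland"))  # 271950, i.e. 1/271,950
```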

Race
Asian (East Asian): 59, 3.9%
Asian (Indian subcontinent): 33, 2.2%
Black: 12, 0.8%
Hispanic: 32, 2.1%
Middle Eastern: 9, 0.6%
Other: 50, 3.3%
White (non-Hispanic): 1294, 86.1%

Work Status
Academic (teaching): 86, 5.7%
For-profit work: 492, 32.7%
Government work: 59, 3.9%
Homemaker: 8, 0.5%
Independently wealthy: 9, 0.6%
Nonprofit work: 58, 3.9%
Self-employed: 122, 8.1%
Student: 553, 36.8%
Unemployed: 103, 6.9%

Profession
Art: 22, 1.5%
Biology: 29, 1.9%
Business: 35, 2.3%
Computers (AI): 42, 2.8%
Computers (other academic): 106, 7.1%
Computers (practical): 477, 31.7%
Engineering: 104, 6.9%
Finance/Economics: 71, 4.7%
Law: 38, 2.5%
Mathematics: 121, 8.1%
Medicine: 32, 2.1%
Neuroscience: 18, 1.2%
Philosophy: 36, 2.4%
Physics: 65, 4.3%
Psychology: 31, 2.1%
Other: 157, 10.2%
Other “hard science”: 25, 1.7%
Other “social science”: 34, 2.3%

Degree
None: 74, 4.9%
High school: 347, 23.1%
2 year degree: 64, 4.3%
Bachelors: 555, 36.9%
Masters: 278, 18.5%
JD/MD/other professional degree: 44, 2.9%
PhD: 105, 7.0%
Other: 24, 1.4%

III. Mental Illness

535 respondents answered "no" to all the mental illness questions. Upper bound: 64.4% of the LW population is mentally ill.
393 respondents answered "yes" to at least one mental illness question. Lower bound: 26.1% of the LW population is mentally ill. Gosh, we have a lot of self-diagnosers.

Depression
Yes, I was formally diagnosed: 273, 18.2%
Yes, I self-diagnosed: 383, 25.5%
No: 759, 50.5%

OCD
Yes, I was formally diagnosed: 30, 2.0%
Yes, I self-diagnosed: 76, 5.1%
No: 1306, 86.9%

Autism spectrum

Yes, I was formally diagnosed: 98, 6.5%
Yes, I self-diagnosed: 168, 11.2%
No: 1143, 76.0%

Bipolar

Yes, I was formally diagnosed: 33, 2.2%
Yes, I self-diagnosed: 49, 3.3%
No: 1327, 88.3%

Anxiety disorder
Yes, I was formally diagnosed: 139, 9.2%
Yes, I self-diagnosed: 237, 15.8%
No: 1033, 68.7%

BPD
Yes, I was formally diagnosed: 5, 0.3%
Yes, I self-diagnosed: 19, 1.3%
No: 1389, 92.4%

[Ozy says: RATIONALIST BPDERS COME BE MY FRIEND]

Schizophrenia
Yes, I was formally diagnosed: 7, 0.5%
Yes, I self-diagnosed: 7, 0.5%
No: 1397, 92.9%

IV. Politics, Religion, Ethics

Politics
Communist: 9, 0.6%
Conservative: 67, 4.5%
Liberal: 416, 27.7%
Libertarian: 379, 25.2%
Social Democratic: 585, 38.9%

[The big change this year was that we changed "Socialist" to "Social Democratic". Even though the description stayed the same, about eight points worth of Liberals switched to Social Democrats, apparently more willing to accept that label than "Socialist". The overall supergroups Libertarian vs. (Liberal, Social Democratic) vs. Conservative remain mostly unchanged.]

Politics (longform)
Anarchist: 40, 2.7%
Communist: 9, 0.6%
Conservative: 23, 1.9%
Futarchist: 41, 2.7%
Left-Libertarian: 192, 12.8%
Libertarian: 164, 10.9%
Moderate: 56, 3.7%
Neoreactionary: 29, 1.9%
Social Democrat: 162, 10.8%
Socialist: 89, 5.9%

[Amusing politics answers include anti-incumbentist, having-well-founded-opinions-is-hard-but-I’ve-come-to-recognize-the-pragmatism-of-socialism-I-don’t-know-ask-me-again-next-year, pirate, progressive social democratic environmental liberal isolationist freedom-fries loving pinko commie piece of shit, republic-ist aka read the federalist papers, romantic reconstructionist, social liberal fiscal agnostic, technoutopian anarchosocialist (with moderate snark), whatever it is that Scott is, and WHY ISN’T THERE AN OPTION FOR NONE SO I CAN SIGNAL MY OBVIOUS OBJECTIVITY WITH MINIMAL EFFORT. Ozy would like to point out to the authors of manifestos that no one will actually read their manifestos except zir, and they might want to consider posting them to their own blogs.]


American Parties
Democratic Party: 221, 14.7%
Republican Party: 55, 3.7%
Libertarian Party: 26, 1.7%
Other party: 16, 1.1%
No party: 415, 27.6%
Non-Americans who really like clicking buttons: 415, 27.6%

Voting

Yes: 881, 58.6%
No: 444, 29.5%
My country doesn’t hold elections: 5, 0.3%

Religion

Atheist and not spiritual: 1054, 70.1%
Atheist and spiritual: 150, 10.0%
Agnostic: 156, 10.4%
Lukewarm theist: 44, 2.9%
Deist/pantheist/etc.: 22, 1.5%
Committed theist: 60, 4.0%

Religious Denomination
Christian (Protestant): 53, 3.5%
Mixed/Other: 32, 2.1%
Jewish: 31, 2.0%
Buddhist: 30, 2.0%
Christian (Catholic): 24, 1.6%
Unitarian Universalist or similar: 23, 1.5%

[Amusing denominations include anti-Molochist, CelestAI, cosmic engineers, Laziness, Thelema, Resimulation Theology, and Pythagorean. The Cultus Deorum Romanorum practitioner still needs to contact Ozy so they can be friends.]

Family Religion
Atheist and not spiritual: 213, 14.2%
Atheist and spiritual: 74, 4.9%
Agnostic: 154, 10.2%
Lukewarm theist: 541, 36.0%
Deist/Pantheist/etc.: 28, 1.9%
Committed theist: 388, 25.8%

Religious Background
Christian (Protestant): 580, 38.6%
Christian (Catholic): 378, 25.1%
Jewish: 141, 9.4%
Christian (other non-protestant): 88, 5.9%
Mixed/Other: 68, 4.5%
Unitarian Universalism or similar: 29, 1.9%
Christian (Mormon): 28, 1.9%
Hindu: 23, 1.5%

Moral Views
Accept/lean towards consequentialism: 901, 60.0%
Accept/lean towards deontology: 50, 3.3%
Accept/lean towards natural law: 48, 3.2%
Accept/lean towards virtue ethics: 150, 10.0%
Accept/lean towards contractualism: 79, 5.3%
Other/no answer: 239, 15.9%

Meta-ethics
Constructivism: 474, 31.5%
Error theory: 60, 4.0%
Non-cognitivism: 129, 8.6%
Subjectivism: 324, 21.6%
Substantive realism: 209, 13.9%

V. Community Participation


Less Wrong Use
Lurker: 528, 35.1%
I’ve registered an account: 221, 14.7%
I’ve posted a comment: 419, 27.9%
I’ve posted in Discussion: 207, 13.8%
I’ve posted in Main: 102, 6.8%

Sequences
Never knew they existed until this moment: 106, 7.1%
Knew they existed, but never looked at them: 42, 2.8%
Some, but less than 25%: 270, 18.0%
About 25%: 181, 12.0%
About 50%: 209, 13.9%
About 75%: 242, 16.1%
All or almost all: 427, 28.4%

Meetups
Yes, regularly: 154, 10.2%
Yes, once or a few times: 325, 21.6%
No: 989, 65.8%

Community

Yes, all the time: 112, 7.5%
Yes, sometimes: 191, 12.7%
No: 1163, 77.4%

Romance
Yes: 82, 5.5%
I didn’t meet them through the community but they’re part of the community now: 79, 5.3%
No: 1310, 87.2%

CFAR Events
Yes, in 2014: 45, 3.0%
Yes, in 2013: 60, 4.0%
Both: 42, 2.8%
No: 1321, 87.9%

CFAR Workshop
Yes: 109, 7.3%
No: 1311, 87.2%

[A couple percent more people answered 'yes' to each of meetups, physical interactions, CFAR attendance, and romance this time around, suggesting the community is very very gradually becoming more IRL. In particular, the number of people meeting romantic partners through the community increased by almost 50% over last year.]

HPMOR
Yes: 897, 59.7%
Started but not finished: 224, 14.9%
No: 254, 16.9%

Referrals
Referred by a link: 464, 30.9%
HPMOR: 385, 25.6%
Been here since the Overcoming Bias days: 210, 14.0%
Referred by a friend: 199, 13.2%
Referred by a search engine: 114, 7.6%
Referred by other fiction: 17, 1.1%

[Amusing responses include “a rationalist that I follow on Tumblr”, “I’m a student of tribal cultishness”, and “It is difficult to recall details from the Before Time. Things were brighter, simpler, as in childhood or a dream. There has been much growth, change since then. But also loss. I can't remember where I found the link, is what I'm saying.”]

Blog Referrals
Slate Star Codex: 40, 2.6%
Reddit: 25, 1.6%
Common Sense Atheism: 21, 1.3%
Hacker News: 20, 1.3%
Gwern: 13, 1.0%

VI. Other Categorical Data

Cryonics Status
Don’t understand/never thought about it: 62, 4.1%
Don’t want to: 361, 24.0%
Considering it: 551, 36.7%
Haven’t gotten around to it: 272, 18.1%
Unavailable in my area: 126, 8.4%
Yes: 64, 4.3%

Type of Global Catastrophic Risk
Asteroid strike: 64, 4.3%
Economic/political collapse: 151, 10.0%
Environmental collapse: 218, 14.5%
Nanotech/grey goo: 47, 3.1%
Nuclear war: 239, 15.8%
Pandemic (bioengineered): 310, 20.6%
Pandemic (natural): 113, 7.5%
Unfriendly AI: 244, 16.2%

[Amusing answers include ennui/eaten by Internet, Friendly AI, “Greens so weaken the rich countries that barbarians conquer us”, and Tumblr.]

Effective Altruism (do you self-identify)
Yes: 422, 28.1%
No: 758, 50.4%

[Despite some impressive outreach by the EA community, numbers are largely the same as last year]


Effective Altruism (do you participate in community)
Yes: 191, 12.7%
No: 987, 65.7%

Vegetarian
Vegan: 31, 2.1%
Vegetarian: 114, 7.6%
Other meat restriction: 252, 16.8%
Omnivore: 848, 56.4%

Paleo Diet

Yes: 33, 2.2%
Sometimes: 209, 13.9%
No: 1111, 73.9%

Food Substitutes
Most of my calories: 8, 0.5%
Sometimes: 101, 6.7%
Tried: 196, 13.0%
No: 1052, 70.0%

Gender Default
I only identify with my birth gender by default: 681, 45.3%
I strongly identify with my birth gender: 586, 39.0%

Books
<5: 198, 13.2%
5 - 10: 384, 25.5%
10 - 20: 328, 21.8%
20 - 50: 264, 17.6%
50 - 100: 105, 7.0%
> 100: 49, 3.3%

Birth Month
Jan: 109, 7.3%
Feb: 90, 6.0%
Mar: 123, 8.2%
Apr: 126, 8.4%
Jun: 107, 7.1%
Jul: 109, 7.3%
Aug: 120, 8.0%
Sep: 94, 6.3%
Oct: 111, 7.4%
Nov: 102, 6.8%
Dec: 106, 7.1%

[Despite my hope of something turning up here, these results don't deviate from chance]

Handedness
Right: 1170, 77.8%
Left: 143, 9.5%
Ambidextrous: 37, 2.5%
Unsure: 12, 0.8%

Previous Surveys
Yes: 757, 50.7%
No:  598, 39.8%

Favorite Less Wrong Posts (all > 5 listed)
An Alien God: 11
Joy In The Merely Real: 7
Dissolving Questions About Disease: 7
Politics Is The Mind Killer: 6
That Alien Message: 6
A Fable Of Science And Politics: 6
Belief In Belief: 5
Generalizing From One Example: 5
Schelling Fences On Slippery Slopes: 5
Tsuyoku Naritai: 5

VII. Numeric Data

[Format: mean ± SD (1st quartile, median, 3rd quartile) [number of responses]]

Age: 27.67 ± 8.679 (22, 26, 31) [1490]
IQ: 138.25 ± 15.936 (130.25, 139, 146) [472]
SAT out of 1600: 1470.74 ± 113.114 (1410, 1490, 1560) [395]
SAT out of 2400: 2210.75 ± 188.94 (2140, 2250, 2320) [310]
ACT out of 36: 32.56 ± 2.483 (31, 33, 35) [244]
Time in Community: 2010.97 ± 2.174 (2010, 2011, 2013) [1317]
Time on LW: 15.73 ± 95.75 (2, 5, 15) [1366]
Karma Score: 555.73 ± 2181.791 (0, 0, 155) [1335]

P Many Worlds: 47.64 ± 30.132 (20, 50, 75) [1261]
P Aliens: 71.52 ± 34.364 (50, 90, 99) [1393]
P Aliens (Galaxy): 41.2 ± 38.405 (2, 30, 80) [1379]
P Supernatural: 6.68 ± 20.271 (0, 0, 1) [1386]
P God: 8.26 ± 21.088 (0, 0.01, 3) [1376]
P Religion: 4.99 ± 18.068 (0, 0, 0.5) [1384]
P Cryonics: 22.34 ± 27.274 (2, 10, 30) [1399]
P Anti-Agathics: 24.63 ± 29.569 (1, 10, 40) [1390]
P Simulation: 24.31 ± 28.2 (1, 10, 50) [1320]
P Warming: 81.73 ± 24.224 (80, 90, 98) [1394]
P Global Catastrophic Risk: 72.14 ± 25.620 (55, 80, 90) [1394]
Singularity: 2143.44 ± 356.643 (2060, 2090, 2150) [1177]

[The mean for this question is almost entirely dependent on which stupid responses we choose to delete as outliers; the median practically never changes]
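The point about outliers is easy to demonstrate: with even one absurd answer mixed in, the mean swings wildly while the median barely moves. A quick illustration on invented numbers (not actual survey responses):

```python
from statistics import mean, median

# Hypothetical Singularity-year answers, including one absurd outlier.
answers = [2045, 2060, 2080, 2090, 2100, 2150, 2300, 999999]

print(mean(answers))    # dragged far upward by a single response
print(median(answers))  # 2095: unaffected by how extreme the outlier is

# Deleting the outlier changes the mean drastically, the median barely at all.
trimmed = [a for a in answers if a < 3000]
print(mean(trimmed))
print(median(trimmed))  # 2090
```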


Abortion: 4.38 ± 1.032 (4, 5, 5) [1341]
Immigration: 4 ± 1.078 (3, 4, 5) [1310]
Taxes: 3.14 ± 1.212 (2, 3, 4) [1410] (from 1 - should be lower to 5 - should be higher)
Minimum Wage: 3.21 ± 1.359 (2, 3, 4) [1298] (from 1 - should be lower to 5 - should be higher)
Feminism: 3.67 ± 1.221 (3, 4, 5) [1332]
Social Justice: 3.15 ± 1.385 (2, 3, 4) [1309]
Human Biodiversity: 2.93 ± 1.201 (2, 3, 4) [1321]
Basic Income: 3.94 ± 1.087 (3, 4, 5) [1314]
Great Stagnation: 2.33 ± .959 (2, 2, 3) [1302]
MIRI Mission: 3.90 ± 1.062 (3, 4, 5) [1412]
MIRI Effectiveness: 3.23 ± .897 (3, 3, 4) [1336]

[Remember, all of these are asking you to rate your belief in/agreement with the concept on a scale of 1 (bad) to 5 (great)]

Income: 54129.37 ± 66818.904 (10,000, 30,800, 80,000) [923]
Charity: 1996.76 ± 9492.71 (0, 100, 800) [1009]
MIRI/CFAR: 511.61 ± 5516.608 (0, 0, 0) [1011]
XRisk: 62.50 ± 575.260 (0, 0, 0) [980]
Older siblings: 0.51 ± .914 (0, 0, 1) [1332]
Younger siblings: 1.08 ± 1.127 (0, 1, 1) [1349]
Height: 178.06 ± 11.767 (173, 179, 184) [1236]
Hours Online: 43.44 ± 25.452 (25, 40, 60) [1221]
Bem Sex Role Masculinity: 42.54 ± 9.670 (36, 42, 49) [1032]
Bem Sex Role Femininity: 42.68 ± 9.754 (36, 43, 50) [1031]
Right Hand: .97 ± 0.67 (.94, .97, 1.00)
Left Hand: .97 ± .048 (.94, .97, 1.00)

VIII. Fishing Expeditions

[correlations, in descending order]

SAT Scores out of 1600/SAT Scores out of 2400 .844 (59)
P Supernatural/P God .697 (1365)
Feminism/Social Justice .671 (1299)
P God/P Religion .669 (1367)
P Supernatural/P Religion .631 (1372)
Charity Donations/MIRI and CFAR Donations .619 (985)
P Aliens/P Aliens 2 .607 (1376)
Taxes/Minimum Wage .587 (1287)
SAT Score out of 2400/ACT Score .575 (89)
Age/Number of Children .506 (1480)
P Cryonics/P Anti-Agathics .484 (1385)
SAT Score out of 1600/ACT Score .480 (81)
Minimum Wage/Social Justice .456 (1267)
Taxes/Social Justice .427 (1281)
Taxes/Feminism .414 (1299)
MIRI Mission/MIRI Effectiveness .395 (1331)
P Warming/Taxes .385 (1261)
Taxes/Basic Income .383 (1285)
Minimum Wage/Feminism .378 (1286)
P God/Abortion -.378 (1266)
Immigration/Feminism .365 (1296)
P Supernatural/Abortion -.362 (1276)
Feminism/Human Biodiversity -.360 (1306)
MIRI and CFAR Donations/Other XRisk Charity Donations .345 (973)
Social Justice/Human Biodiversity -.341 (1288)
P Religion/Abortion -.326 (1275)
P Warming/Minimum Wage .324 (1248)
Minimum Wage/Basic Income .312 (1276)
P Warming/Basic Income .306 (1260)
Immigration/Social Justice .294 (1278)
P Anti-Agathics/MIRI Mission .293 (1351)
P Warming/Feminism .285 (1281)
P Many Worlds/P Anti-Agathics .276 (1245)
Social Justice/Femininity .267 (990)
Minimum Wage/Human Biodiversity -.264 (1274)
Immigration/Human Biodiversity -.263 (1286)
P Many Worlds/MIRI Mission .263 (1233)
P Aliens/P Warming .262 (1365)
P Warming/Social Justice .257 (1262)
Taxes/Human Biodiversity -.252 (1291)
Social Justice/Basic Income .251 (1281)
Feminism/Femininity .250 (1003)
Older Siblings/Younger Siblings -.243 (1321)
Charity Donations/Other XRisk Charity Donations .240 (957)
P Anti-Agathics/P Simulation .238 (1312)
Abortion/Minimum Wage .229 (1293)
Feminism/Basic Income .227 (1297)
Abortion/Feminism .226 (1321)
P Cryonics/MIRI Mission .223 (1360)
Immigration/Basic Income .208 (1279)
P Many Worlds/P Cryonics .202 (1251)
Number of Current Partners/Femininity: .202 (1029)
P Warming/Immigration .202 (1260)
P Warming/Abortion .201 (1289)
Abortion/Taxes .198 (1304)
Age/P Simulation .197 (1313)
Political Interest/Masculinity .194 (1011)
P Cryonics/MIRI Effectiveness .191 (1285)
Abortion/Social Justice .191 (1301)
P Simulation/MIRI Mission .188 (1290)
P Many Worlds/P Warming .188 (1240)
Age/Number of Current Partners .184 (1480)
P Anti-Agathics/MIRI Effectiveness .183 (1277)
P Many Worlds/P Simulation .181 (1211)
Abortion/Immigration .181 (1304)
Number of Current Partners/Number of Children .180 (1484)
P Cryonics/P Simulation .174 (1315)
P Global Catastrophic Risk/MIRI Mission -.174 (1359)
Minimum Wage/Femininity .171 (981)
Abortion/Basic Income .170 (1302)
Age/P Cryonics -.165 (1391)
Immigration/Taxes .165 (1293)
P Warming/Human Biodiversity -.163 (1271)
P Aliens 2/P Warming .160 (1353)
Abortion/Younger Siblings -.155 (1292)
P Religion/Meditate .155 (1189)
Feminism/Masculinity -.155 (1004)
Immigration/Femininity .155 (988)
P Supernatural/Basic Income -.153 (1246)
P Supernatural/P Warming -.152 (1361)
Number of Current Partners/Karma Score .152 (1332)
P Many Worlds/MIRI Effectiveness .152 (1181)
Age/MIRI Mission -.150 (1404)
P Religion/P Warming -.150 (1358)
P Religion/Basic Income -.146 (1245)
P God/Basic Income -.146 (1237)
Human Biodiversity/Femininity -.145 (999)
P God/P Warming -.144 (1351)
Taxes/Femininity .142 (987)
Number of Children/Younger Siblings .138 (1343)
Number of Current Partners/Masculinity: .137 (1030)
P Many Worlds/P God -.137 (1232)
Age/Charity Donations .133 (1002)
P Anti-Agathics/P Global Catastrophic Risk -.132 (1373)
P Warming/Masculinity -.132 (992)
P Global Catastrophic Risk/MIRI and CFAR Donations -.132 (982)
P Supernatural/Singularity .131 (1148)
P God/Taxes -.130 (1240)
Age/P Anti-Agathics -.128 (1382)
P Aliens/Taxes .127 (1258)
Feminism/Great Stagnation -.127 (1287)
P Many Worlds/P Supernatural -.127 (1241)
P Aliens/Abortion .126 (1284)
P Anti-Agathics/Great Stagnation -.126 (1248)
P Anti-Agathics/P Warming .125 (1370)
Age/P Aliens .124 (1386)
P Aliens/Minimum Wage .124 (1245)
P Aliens/P Global Catastrophic Risk .122 (1363)
Age/MIRI Effectiveness -.122 (1328)
Age/P Supernatural .120 (1370)
P Supernatural/MIRI Mission -.119 (1345)
P Many Worlds/P Religion -.119 (1238)
P Religion/MIRI Mission -.118 (1344)
Political Interest/Social Justice .118 (1304)
P Anti-Agathics/MIRI and CFAR Donations .118 (976)
Human Biodiversity/Basic Income -.115 (1262)
P Many Worlds/Abortion .115 (1166)
Age/Karma Score .114 (1327)
P Aliens/Feminism .114 (1277)
P Many Worlds/P Global Catastrophic Risk -.114 (1243)
Political Interest/Femininity .113 (1010)
Number of Children/P Simulation -.112 (1317)
P Religion/Younger Siblings .112 (1275)
P Supernatural/Taxes -.112 (1248)
Age/Masculinity .112 (1027)
Political Interest/Taxes .111 (1305)
P God/P Simulation .110 (1296)
P Many Worlds/Basic Income .110 (1139)
P Supernatural/Younger Siblings .109 (1274)
P Simulation/Basic Income .109 (1195)
Age/P Aliens 2 .107 (1371)
MIRI Mission/Basic Income .107 (1279)
Age/Great Stagnation .107 (1295)
P Many Worlds/P Aliens .107 (1253)
Number of Current Partners/Social Justice .106 (1304)
Human Biodiversity/Great Stagnation .105 (1285)
Number of Children/Abortion -.104 (1337)
Number of Current Partners/P Cryonics -.102 (1396)
MIRI Mission/Abortion .102 (1305)
Immigration/Great Stagnation -.101 (1269)
Age/Political Interest .100 (1339)
P Global Catastrophic Risk/Political Interest .099 (1295)
P Aliens/P Religion -.099 (1357)
P God/MIRI Mission -.098 (1335)
P Aliens/P Simulation .098 (1308)
Number of Current Partners/Immigration .098 (1305)
P God/Political Interest .098 (1274)
P Warming/P Global Catastrophic Risk .096 (1377)

In addition to the Left/Right factor we had last year, this data seems to me to have an Agrees with the Sequences Factor-- the same people tend to believe in many-worlds, cryo, atheism, simulationism, MIRI’s mission and effectiveness, anti-agathics, etc. Weirdly, belief in global catastrophic risk is negatively correlated with most of the Agrees with Sequences things. Someone who actually knows how to do statistics should run a factor analysis on this data.
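As a starting point for anyone taking up that suggestion, here is a rough sketch of the idea using plain NumPy, with the first principal component of the correlation matrix standing in for a proper factor analysis. The data is synthetic (a single latent factor driving three observed variables), not the survey file, so treat it purely as an illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: one latent "agrees with the Sequences" factor
# drives three observed variables; a fourth variable is unrelated noise.
n = 500
latent = rng.normal(size=n)
observed = np.column_stack([
    latent + rng.normal(scale=0.8, size=n),  # e.g. P(Many Worlds)
    latent + rng.normal(scale=0.8, size=n),  # e.g. P(Cryonics)
    latent + rng.normal(scale=0.8, size=n),  # e.g. MIRI mission
    rng.normal(size=n),                      # unrelated variable
])

# The leading eigenvector of the correlation matrix approximates the
# dominant factor's loadings on each variable.
corr = np.corrcoef(observed, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)  # eigenvalues in ascending order
loadings = eigvecs[:, -1]

# The three latent-driven variables load heavily; the noise variable doesn't.
print(np.round(np.abs(loadings), 2))
```

On the real data one would substitute the survey columns for `observed` and use a dedicated factor-analysis routine rather than PCA, but the shape of the computation is the same.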

IX. Digit Ratios

After sanitizing the digit ratio numbers, the following correlations came up:

Digit ratio R hand was correlated with masculinity at a level of -0.180 p < 0.01
Digit ratio L hand was correlated with masculinity at a level of -0.181 p < 0.01
Digit ratio R hand was slightly correlated with femininity at a level of +0.116 p < 0.05

Holy #@!$ the feminism thing ACTUALLY HELD UP. There is a 0.144 correlation between right-handed digit ratio and feminism, p < 0.01. And an 0.112 correlation between left-handed digit ratio and feminism, p < 0.05.

The only other political position that correlates with digit ratio is immigration. There is a 0.138 correlation between left-handed digit ratio and belief in open borders, p < 0.01, and an 0.111 correlation between right-handed digit ratio and belief in open borders, p < 0.05.

No digit correlation with abortion, taxes, minimum wage, social justice, human biodiversity, basic income, or great stagnation.

Okay, need to rule out that this is all confounded by gender. I ran a few analyses on men and women separately.

On men alone, the connection to masculinity holds up. Restricting the sample to men, right-handed digit ratio correlates with masculinity at -0.157, p < 0.01, and left-handed at -0.134, p < 0.05. Right-handed also correlates with femininity at 0.120, p < 0.05. The feminism correlation holds up as well: among men, right-handed digit ratio correlates with feminism at 0.149, p < 0.01, while left-handed just barely fails to reach significance. Both right and left correlate with immigration at 0.135, p < 0.05.

On women alone, the Bem masculinity correlation is the highest correlation we're going to get in this entire study. Right hand is -0.433, p < 0.01. Left hand is -0.299, p < 0.05. Femininity trends toward significance but doesn't get there. The feminism correlation trends toward significance but doesn't get there. In general there was too small a sample size of women to pick up anything but the most whopping effects.

Since digit ratio is related to testosterone and testosterone sometimes affects risk-taking, I wondered if it would correlate with any of the calibration answers. I selected people who had answered Calibration Question 5 incorrectly and ran an analysis to see if digit ratio was correlated with tendency to be more confident in the incorrect answer. No effect was found.

Other things that didn't correlate with digit ratio: IQ, SAT, number of current partners, tendency to work in mathematical professions.
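For anyone who wants to double-check, the subgroup analysis described above boils down to computing Pearson's r within each gender separately. A sketch on randomly generated data (not the survey responses; `pearson_r` is a hand-rolled helper, and no p-values are computed here):

```python
import numpy as np

rng = np.random.default_rng(1)

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Fake respondents with a built-in negative ratio/masculinity relationship.
n = 200
gender = rng.choice(["M", "F"], size=n)
ratio = rng.normal(0.97, 0.03, size=n)  # digit ratio
masculinity = 40 - 100 * (ratio - 0.97) + rng.normal(0, 5, size=n)

# Correlate within each gender separately, as in the analysis above.
for g in ("M", "F"):
    mask = gender == g
    print(g, round(pearson_r(ratio[mask], masculinity[mask]), 3))
```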

...I still can't believe this actually worked. The finger-length/feminism connection ACTUALLY WORKED. What a world. What a world. Someone may want to double-check these results before I get too excited.

X. Calibration


There were ten calibration questions on this year's survey. Along with answers, they were:

1. What is the largest bone in the body? Femur
2. What state was President Obama born in? Hawaii
3. Off the coast of what country was the battle of Trafalgar fought? Spain
4. What Norse God was called the All-Father? Odin
5. Who won the 1936 Nobel Prize for his work in quantum physics? Heisenberg
6. Which planet has the highest density? Earth
7. Which Bible character was married to Rachel and Leah? Jacob
8. What organelle is called "the powerhouse of the cell"? Mitochondria
9. What country has the fourth-highest population? Indonesia
10. What is the best-selling computer game? Minecraft

I ran calibration scores for everybody based on how well they did on the ten calibration questions. These failed to correlate with IQ, SAT, LW karma, or any of the things you might expect to be measures of either intelligence or previous training in calibration; they didn't differ by gender, correlates of community membership, or any mental illness [deleted section about correlating with MWI and MIRI, this was an artifact].

Your answers looked like this:



The red line represents perfect calibration. Where answers dip below the line, it means you were overconfident; when they go above, it means you were underconfident.

It looks to me like everyone was horrendously underconfident on all the easy questions, and horrendously overconfident on all the hard questions. To give an example of how horrendous, people who were 50% sure of their answers to question 10 got it right only 13% of the time; people who were 100% sure only got it right 44% of the time. Obviously those numbers should be 50% and 100% respectively.
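For the curious, a calibration curve like the one described is built by bucketing answers according to stated confidence and comparing each bucket's confidence level to its actual hit rate. A minimal sketch on invented answers (the numbers below are made up for illustration, not the real data):

```python
from collections import defaultdict

# (stated confidence in %, answered correctly?) - invented for illustration
answers = [
    (50, False), (50, False), (50, True), (50, False),
    (100, True), (100, False), (100, True), (100, False),
]

buckets = defaultdict(list)
for conf, correct in answers:
    buckets[conf].append(correct)

# Perfect calibration means each bucket's hit rate equals its confidence.
for conf in sorted(buckets):
    hits = buckets[conf]
    accuracy = 100 * sum(hits) / len(hits)
    print(f"said {conf}% sure -> right {accuracy:.0f}% of the time")
```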

This builds upon results from previous surveys in which your calibration was also horrible. This is not a human universal - people who put even a small amount of training into calibration can become very well calibrated very quickly. This is a sign that most Less Wrongers continue to neglect the very basics of rationality and are incapable of judging how much evidence they have on a given issue. Veterans of the site do no better than newbies on this measure.

XI. Wrapping Up

To show my appreciation for everyone completing this survey, including the arduous digit ratio measurements, I have randomly chosen a person to receive a $30 monetary prize. That person is...the person using the public key "The World Is Quiet Here". If that person tells me their private key, I will give them $30.

I have removed 73 people who wished to remain private, deleted the Private Keys, and sanitized a very small amount of data. Aside from that, here are the raw survey results for your viewing and analyzing pleasure:

(as Excel)

(as SPSS)

(as CSV)

2014 Survey Results

The number of Asians (both East and South) among American readers is pretty surprisingly low - 43/855 ~= 5%. This despite Asians being, e.g., ~15% of the Ivy League student body (it'd be much higher without affirmative action), and close to 50% of Silicon Valley workers.

Being South Asian myself - I suspect that the high-achieving immigrant-and-immigrant-descended populations gravitate towards technical fields and Ivy Leagues for different reasons than American whites do. Coming from hardship and generally being less WEIRD, they psychologically share more in common with the middle class and even blue-collar workers than the Ivy League upper class - they see it as a path to success rather than some sort of grand purposeful undertaking. (One of the Asian professional communities I participated in articulated this and other differences in attitude as a reason that Asians often find themselves getting passed over for higher-level management positions, as something to be overcome.)

Lesswrong tends to appeal to abstract, starry-eyed types. I hate to use the word "privilege", but there are some hard-to-quantify things, like the amount of time spent talking about lesswrong-y keywords like "free will" or "utilitarianism", which are going to influence the numbers here. (Not that Asians don't like chatting about philosophy, but they certainly have less time for it and also they tend to focus on somewhat different topics during philosophical... (read more)

laofmoonster:
East Asian - mostly agreed. I think WEIRDness is the biggest factor. WEIRD thought emphasizes precision and context-independent formalization. I am pretty deracinated myself, but my thinking style is low-precision, tolerant of apparent paradoxes and context-sensitive. The difference is much like the analytic-continental divide in Western philosophy. I recommend Richard Nisbett's book The Geography of Thought, which contrasts WEIRD thought with East Asian thought. 37 Ways Words Can Be Wrong (and LW as a whole) is important because of how brittle WEIRD concepts can be. (I have some crackpot ideas about maps and territories inspired by Jean Baudrillard. He's French, of course...)
skeptical_lurker:
Is affirmative action being used against Asians even though they are a minority?

"Used against", to me, implies active planning that may or may not exist here; but the pragmatic effects of the policy as implemented in American universities do seem to negatively affect Asians.

skeptical_lurker:
Ahh, the old 'malicious or incompetent' dichotomy.
Nornagest:
I'm a big believer in Hanlon's razor, especially as it applies to policy.

There's pretty unambiguous statistical evidence that it happens. The Asian Ivy League percentage has remained basically fixed for 20 years despite the college-age Asian population doubling (and Asian SAT scores increasing slightly).

Vaniver:
I've noticed this for a while. Might be interesting to look at this by referral source?

Calibration Score

Using a log scoring rule, I calculated a total accuracy+calibration score for the ten questions together. There's an issue that this assumes the questions are binary when they're not: someone who is 0% sure that Thor is the right answer to the mythology question gets the same score (0) as the person who is 100% sure that Odin is the right answer to the mythology question. I ignored infinitely low scores for the correlation part.

I replicated the MWI correlation, but I noticed something weird: all of the really low scorers gave really low probabilities to MWI. The worst scorer had a score of -18, which corresponds to giving about 1.6% probability to the right answer. What appears to have happened is that they misunderstood the survey and answered in fractions instead of percents: they got 9 out of 10 questions right, but lost 2 points every time they assigned 1% or slightly less than 1% to the right answer (i.e. they meant to express near-certainty by writing 1 or 0.99) and only lost 0.0013 points when they assigned 0.3% probability to a wrong answer.

When I drop the 30 lowest scorers, the direction of the relationship flips- now, people with better log scores (i.e. close... (read more)

Luke_A_Somers:
I've always wanted to visit 100. Can you show the distribution of overall calibration scores? You only talked about the extreme cases and the differences across P(MWI), but you clearly have it.
Vaniver:
Picture included, tragic mistakes excluded*. The percentage at the bottom is a mapping from the score to probabilities using the inverse of "if you had answered every question right with probability p, what score would you have?", and so is not anything like the mean probability given. Don't take either of the two perfect scores seriously; as mentioned in the grandparent, this scoring rule isn't quite right because it counts answering incorrectly with 0% probability as the same as answering correctly with 100% probability. (One answered 'asdf' to everything with 0% probability, the other left 9 blank with 0% probability and answered Odin with 100% probability.) Bins have equal width in log-space. * I could have had a spike at 0, but that seems not quite fair since it was specified that '100' and '0' would be treated as '100-epsilon' and 'epsilon' respectively, and it's only a Tragic Mistake if you actually answer 0 instead of epsilon.
0Luke_A_Somers
Yeah, that's not a particularly strong scoring method, due to its abusability. I wonder what a better one would be? Of course, it wouldn't help unless people knew that it was going to be used, and care. Fraction correct times this calibration score? Number correct times the product rather than the average of what you did there? Bayes score, with naming the 'wrong' thing yielding a penalty to account for the multiplicity of wrong answers (say, each wrong answer has a 50% hit so even being 100% sure you're wrong is only as good as 50% sure you're right, when you are right)?
2Vaniver
The primary property you want to maintain with a scoring rule is that the best probability to provide is your true probability. I know that the Bayes score generalizes to multiple choice questions, which implies to me that it most likely works with a multiplicity for wrong answers, so long as the multiplicity is close to the actual multiplicity.
2Luke_A_Somers
I think the primary property you want to maintain is that it's best to provide the answer you consider most likely, otherwise it's best to say 'sdfkhasflk' - 0% to all of them you aren't certain of. Multiple choice would make the scoring clearer, but that constraint could well make the calibration easier.
0MTGandP
Sort-of related question: How do you compute calibration scores?
0Vaniver
I was using a logarithmic scoring rule, with a base of 10. (What base you use doesn't really matter.) The Excel formula for the first question (I'm pretty sure I didn't delete any columns, so it should line up) was: =IF(EJ2,LOG(EU2/100),LOG(1-EU2/100))
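For anyone who'd rather not decipher spreadsheet columns, here's a rough Python equivalent of that formula (the column references are Vaniver's; the function below just captures the logic):

```python
import math

def log_score(correct, prob_percent):
    """Base-10 log score for one calibration question, treated as binary.

    Mirrors =IF(EJ2,LOG(EU2/100),LOG(1-EU2/100)): if the answer was right,
    score log10 of the stated probability; if wrong, log10 of its
    complement. A perfect answer (100% on the right answer) scores 0 and
    everything else is negative. Note that 100% on a wrong answer (or 0%
    on a right one) would go to negative infinity, hence the survey's
    convention of treating 0 and 100 as epsilon and 100-epsilon.
    """
    p = prob_percent / 100.0
    return math.log10(p) if correct else math.log10(1.0 - p)

# The fractions-instead-of-percents mistake described above: writing "1"
# (meaning certainty) when the answer is right costs about 2 points per
# question, which is how a 9-of-10 respondent ends up near -18 overall.
```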

MIRI Mission/MIRI Effectiveness .395 (1331)

This result sets off my halo effect alarm.

Once again pandemic is the leading cat risk. It was the leading cat risk last year. http://lesswrong.com/lw/jj0/2013_survey_results/aekk It was the leading cat risk the year before that. http://lesswrong.com/lw/fp5/2012_survey_results/7xz0

Pandemics are the risk LWers are most afraid of and to my knowledge we as a community have expended almost no effort on preventing them.

So this year I resolve that my effort towards pandemic prevention will be greater than simply posting a remark about how it's the leading risk.

Clearly, we haven't been doing enough to increase other risks. We can't let pandemic stay in the lead.

1Ander
Get to work on making more AIs everyone!

GiveWell has looked into global catastrophic risks in general, plus pandemic preparedness in particular. My impression is that quite a bit more is spent per year on biosecurity (around 6 billion in the US) than on other catastrophic risks such as AI.

[-][anonymous]110

Pandemics may be the largest risk, but the marginal contribution a typical LWer can make is probably very low, and not their comparative advantage. Let the WHO do its work, and turn your attention to underconsidered risks.

427chaos
Money can be donated.
4someonewrongonthenet
I'm not so sure about that. Isn't the effective altruist focus on global poverty/disease reducing the risk of pandemic? I know very little about epidemiology, but it seems as if a lot of scary diseases (AIDS, Ebola...) would never have spread to the human population if certain regions of the third world had better medical infrastructure.
2William_Quixote
That's fair. It's certainly true that poverty reduction also reduces pandemic risk. But it does so indirectly and slowly. There are probably faster ways to reduce pandemic risk than working on poverty.

WHY ISN’T THERE AN OPTION FOR NONE SO I CAN SIGNAL MY OBVIOUS OBJECTIVITY WITH MINIMAL EFFORT

This is why I didn't vote on the politics question.

This is a sign that most Less Wrongers continue to neglect the very basics of rationality and are incapable of judging how much evidence they have on a given issue. Veterans of the site do no better than newbies on this measure.

Theory: People use this site as a geek / intellectual social outlet and/or insight porn and/or self-help site more than they seriously try to get progressively better at rationality. At least, I know that applies to me :).

This definitely belongs on the next survey!

Why do you read LessWrong? [ ] Rationality improvement [ ] Insight Porn [ ] Geek Social Fuzzies [ ] Self-Help Fuzzies [ ] Self-Help Utilons [ ] I enjoy reading the posts

3MTGandP
And then check if the "rationality improvement" people do better on calibration. (I'm guessing they don't.)
1homunq
[ ] Wow, these people are smart. [ ] Wow, these people are dumb. [ ] Wow, these people are freaky. [ ] That's a good way of putting it, I'll remember that. (For me, it's all of the above. "Insight porn" is probably the biggest, but it doesn't dominate.)
0Sabiola
[x] All of the above
7[anonymous]
Is that actually nonobvious? It's sure as hell what I'm here for. I mean, I do actually generally want to be more rational about stuff, but I can't get that by reading a website. Inculcating better habits and reflexes requires long hours spent on practicing better habits and reflexes so I can move the stuff System 2 damn well knows already down into System 1.

I decided to take a look at overconfidence (rather than calibration) on the 10 calibration questions.

For each person, I added up the probabilities that they assigned to getting each of the 10 questions correct, and then subtracted the number of correct answers. Positive numbers indicate overconfidence (fewer correct answers than they predicted they'd get), negative numbers indicate underconfidence (more correct answers than they predicted). Note that this is somewhat different from calibration: you could get a good score on this if you put 40% on each question and get 40% of them right (showing no ability to distinguish between what you know and what you don't), or if you put 99% on the ones you get wrong and 1% on the ones you get right. But this overconfidence score is easy to calculate, has a nice distribution, and is informative about the general tendency to be overconfident.
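A minimal sketch of that computation in Python (the data layout is hypothetical; each respondent contributes ten stated probabilities and ten right/wrong flags):

```python
def overconfidence_score(probs_percent, correct_flags):
    """Sum of stated probabilities (as fractions) minus questions correct.

    Positive means overconfident (fewer correct answers than predicted),
    negative means underconfident.
    """
    predicted = sum(p / 100.0 for p in probs_percent)
    actual = sum(1 for c in correct_flags if c)
    return predicted - actual

# As noted above, this measures overall confidence, not discrimination:
# someone who says 40% on every question and gets 4 of 10 right scores 0,
# despite showing no ability to tell what they know from what they don't.
overconfidence_score([40] * 10, [True] * 4 + [False] * 6)
```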

After cleaning up the data set in a few ways (which I'll describe in a reply to this comment), the average overconfidence score was 0.39. On average, people expected to get 4.79 of the 10 questions correct, but only got 4.40 correct. My impression is that this gap (4 percentage points) is smallish compared ... (read more)

Details on data cleanup:

In the publicly available data set, I restricted my analysis to people who:

  • entered a number on each of the 10 calibration probability estimates
  • did not enter any estimates larger than 100
  • entered at least one estimate larger than 1
  • entered something on each of the 10 calibration guesses
  • did not enter a number for any of the 10 calibration guesses

Failure to meet any of these criteria generally indicated either a failure to understand the format of the calibration questions, or a decision to skip one or more of the questions. Each of these criteria eliminated at least 1 person, leaving a sample of 1141 people.

I counted as "correct":

  • any answer which Scott/Ozy counted as correct
  • any answer to question 1 (largest bone) which began with "fem" (e.g., "femer")
  • any answer to question 2 (Obama's state) which began with "haw" (e.g., "Hawii")
  • any answer to question 4 (Norse god) which began with "od" or "wo" (e.g., "Wotan")
  • any answer to question 8 (cell) which began with "mito" (e.g., "Mitochondira")

These seem to cover the most common ... (read more)

And here's an analysis of calibration.

If a person was perfectly calibrated, then each 10% increase in their probability estimate would translate into a 10% higher likelihood of getting the answer correct. If you plot probability estimates on the x axis and whether or not the event happened on the y axis, then you should get a slope of 1 (the line y=x). But people tend to be miscalibrated - out of the questions where they say "90%", they might only get 70% correct. This results in a shallower slope (in this example, the line would go through the point (90,70) instead of (90,90)) - a slope less than 1.

I took the 1141 people's answers to the 10 calibration questions as 11410 data points, plotted them on an x-y graph (with the probability estimate as the x value and a y value of 100 if it's correct and 0 if it's incorrect), and ran an ordinary linear regression to find the slope of the line fit to all 11410 data points.

That line had a slope of 0.91. In other words, if a LWer gave a probability estimate that was 10 percentage points higher, then on average the claim was 9.1 percentage points more likely to be true. Not perfect calibration, but not bad.
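The slope computation can be sketched in plain Python (ordinary least squares on the pooled (estimate, outcome) points; the data below is a synthetic example, not the survey's):

```python
def calibration_slope(estimates, outcomes):
    """OLS slope of outcome (100 if correct, 0 if not) on stated probability.

    A slope of 1.0 means perfect calibration on average; below 1.0 means
    high confidence isn't fully borne out.
    """
    n = len(estimates)
    mean_x = sum(estimates) / n
    mean_y = sum(outcomes) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(estimates, outcomes))
    var = sum((x - mean_x) ** 2 for x in estimates)
    return cov / var

# A perfectly calibrated toy sample: 90% claims right 9 times in 10, and
# 10% claims right 1 time in 10, give a slope of exactly 1.
xs = [90] * 10 + [10] * 10
ys = [100] * 9 + [0] + [100] + [0] * 9
calibration_slope(xs, ys)  # → 1.0
```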

If we look at various s... (read more)

Myth: Americans think they know a lot about other countries but really are clueless.

Verdict: Self-cancelling prophecy.

Method: Semi-humorous generalization from a single data series, hopefully inspiring replication instead of harsh judgment :)

I decided to do some analysis about what makes people overconfident about certain subjects, and decided to start with an old stereotype. I compared how people did on the population calibration question (#9) based on their country.

Full disclosure: I'm Israeli (currently living in the US) and would've guessed Japan with 50% confidence, but I joined LW (unlurked) two days after the end of the survey.

I normalized every probability by rounding extreme confidence values to 1% and 99%, counted any answer that seemed close enough to a misspelling of Indonesia as correct, and scored each answer according to the log rule.
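In code, that normalization step is just a clamp before scoring (a sketch; the 1/99 bounds are the ones described above):

```python
import math

def clamped_log_score(correct, prob_percent):
    """Round extreme confidences into [1, 99] so 0% and 100% answers
    don't produce infinite log scores, then apply the base-10 log rule."""
    p = min(max(prob_percent, 1.0), 99.0) / 100.0
    return math.log10(p) if correct else math.log10(1.0 - p)
```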

Results: Americans didn't have a strong showing with an average score of -0.0071, but the rest of the world really sucked with an average of -0.0296. The reason? While the correct answer rate was almost identical (28.3% v 28.8%) Americans were much less confident in their answers: 42.4% confidence v 46.3% (p<0.01).

Dear Americans, you don't know (significantly) less about the world than everyone else, but at least you internalized the fact that you don't know much*!

Next up: how people who grew up in a religious household do on the Biblical calibration question.

*Unlike cocky Israelis like me.

I'm losing a lot of confidence in the digit ratio/masculinity femininity stuff. I'm not seeing a number of things I'd expect to see.

First, my numbers for correlations don't match up with yours. With filters on for female gendered, and answering all of BemSexRoleF, BemSexRoleM, RightHand, and LeftHand, I get a correlation of only -0.34 for RightHand and BemSexRoleM, not -0.433 as you say. I get various other differences as well, all weaker correlations than you describe. Perhaps differences in filtering explain this? The gap between -0.34 and -0.433 seems too large for that to be the whole story, though.

Second, Bem masculinity and femininity actually seem to have a positive correlation, albeit tiny. So more masculine people are... more feminine? This makes no sense and makes me more likely to throw out the entire data set.

Thirdly, I don't see any huge differences between Cisgender Men, Transgender Men, Cisgender Women, or Transgender Women on digit ratios. I would expect to see this as well. I get 95% confidence intervals (mean +/- 3*sigma/sqrt(n), formatted [Lower Right - Upper Right / Lower Left - Upper Left]) for the categories as follows:

  • F (Cis): [0.949 - 0.996 / 0.956 - 1.004]
  • M (Cis): [0.962 - 0.978 / 0.
... (read more)
[-]simon120

There isn't necessarily any problem with a small positive correlation between masculinity and femininity. The abstract of what I think is the original paper (I couldn't find an ungated version) says that "The dimensions of masculinity and femininity are empirically and logically independent."

2Baisius
It's not clear that this maps to colloquial use of the terms "feminine" and "masculine" then. I think most would consider them opposite ends of the same spectrum.
7Nornagest
There are aspects of the Western gender roles that are opposed to each other at least to some extent: emotionality vs. stoicism, active vs. passive romantic performance. But there are also aspects that aren't. Blue is not anti-pink. Skill at sewing doesn't forbid skill at fixing cars. These might resolve in people's perceptions to positions on some kind of spectrum of male vs. female presentation, but they won't show up that way on surveys measuring conformity with stereotype. Indeed, that suggests a possible mechanism for these results. Assume for a moment that people prefer to occupy some particular point on the perception spectrum. But people often like stuff for reasons other than gender coding, so it'll sometimes happen that people will be into stuff with gender coding inconsistent with how they'd prefer to be seen. That creates pressure to take up other stuff with countervailing coding. If people respond to that pressure, the net result is a weak positive correlation between stuff with masculine and feminine coding.

I would be really interested in hearing from one of the fourteen schizophrenic rationalists. Given that one of the most prominent symptoms of schizophrenia is delusional thinking, a.k.a. irrationality... I wonder how this plays out in someone who has read the Sequences. Do these people have less severe symptoms as a result? When your brain decides to turn against you, is there a way to win?

I also find it fascinating that bisexuality is vastly overrepresented here (14.4% in LW vs. 1-2% in US), while homosexuality is not. My natural immediate interpretation of this is that bisexuality is a choice. I think Eliezer said once that he would rather be bisexual than straight, because it would allow for more opportunities to have fun. This seems like an attitude many LW members might share, given that polyamory a.k.a. pursuing a weird dating strategy because it's more fun is very popular in this community. (I personally also share Eliezer's attitude, but unfortunately I'm pretty sure I'm straight.) So to me it seems logical that the large number of bisexuals may come from a large number of people-who-want-to-be-bisexual actually becoming so. This seems more likely to me than some aspect or... (read more)

I also find it fascinating that bisexuality is vastly overrepresented here.

I don't. Compare it with the OkCupid data analysis. Bisexuality could be more of a signal. Admittedly at least in the (quite large) OkCupid data.

5gothgirl420666
Oh, wow, that's incredibly strange/interesting, I had never seen that before. Thanks for sharing. The fact that young bi men are almost always closeted gay men, while old bi men are almost always closeted straight men, is particularly baffling.
7Izeinwinter
The first part does not actually follow from the data with any rigor - "Go online to meet people of the same sex, find opposite-sex partners in real life" is a perfectly reasonable strategy, simply because online dating avoids the whole "I'm straight" shot down in flames thing, which must get really old really quickly. The older guys listing bisexuality and only messaging women, though? Ehhr... what?
7CBHacking
Best guesses at an explanation for that one: 1) A lot of older men had some homosexual experimentation in their past, decided that they therefore count as bi, but are now only interested in heterosexual relationships. 2) A lot of older men choose to signal what they believe to be the desirable characteristic of "sexual adventurousness" to their actual target sexual partner, which is younger women.
1I_fail_at_brevity
Plus, there may be many bisexual men specifically looking for a partner they can breed with. Based roughly on barely remembered male fertility age statistics, I'd guess men would be most interested in fathering children in the 25-45 age range, and there does seem to be a bit of a hump in the data in that range.
7Nornagest
Hypothesis: a large fraction of young men in those results are coming to terms with their sexuality, while a large fraction of old men are trying to signal sexual adventurousness?
4gothgirl420666
Yeah, that's what I thought too. I'm just surprised that bisexuality would be something so many men imagine (perhaps correctly?) women are attracted to.
6Vaniver
I don't find the first part baffling; there's a trope that many gay men go through bisexuality on their way to accepting their homosexuality. (I had a brief period where I identified as bi because I wasn't fully ready to identify as gay.)
5Toggle
Same. It's easier to tell people that you have a left hand than it is to tell people you're left-handed, so to speak.
5Username
Nah, selection bias. You don't go on OK Cupid as a bi man to find men - that's Grindr or other similar sites. Much easier and quicker and more straightforward. But if you're a bi man looking for women, OK Cupid is a good place to go.
3Error
I'd be interested to see the orientation numbers broken down by sex/gender. My personal experience is that geek/nerd women seem to be bisexual at surprisingly high rates. I'm wondering if having typically-male personal pursuits (e.g. LW) is correlated with typically-male sexual interests (i.e. liking women). I'm in that boat. Feels like I'm missing out on half the potential fun. :-(
9Vaniver
Using the "Sex" (not gender) and "Sexuality" columns, omitting blanks, asexuals, and others:

  • Male Heterosexual: 999
  • Male Bisexual: 142
  • Male Homosexual: 40
  • Female Heterosexual: 79
  • Female Bisexual: 62
  • Female Homosexual: 6

So the male/female ratio by sexuality is: Heterosexual: 12.6, Bisexual: 2.3, Homosexual: 6.7. The sexuality percentage by sex is: Male: 84.6% / 12.0% / 3.4%; Female: 53.7% / 42.2% / 4.1%. So while female bisexuality is almost as common as female heterosexuality here, the total bisexual ratio resembles the male bisexual ratio closely, as you would expect from the male/female ratio being so high overall (8 men per woman in this restricted sample).
[-]Error130

almost as common as female heterosexuality here, as you would expect

I initially misparsed this as "the female bisexuality rate is as expected." I see that isn't what you meant, but had to re-read two or three times. Just FYI.

I feel like a 42.2% bisexuality rate among LW women is surprising enough to say something, but I'm not sure what.

7JohannesDahlstrom
It is interesting. IME in real life and in OkCupid, female self-identification as bisexual correlates quite strongly with the geek/liberal/poly/kinky meme complex (edit: mirroring your experiences, didn't read carefully enough). Out of my top matches in OkCupid, over 80% of women interested in men seem to self-report as bisexual. However, also IME, bisexual identification usually doesn't imply being biromantic! Many of those women have had, or would like to have, sexual experiences with other women, but still may prefer men in romantic relationships almost exclusively. FWIW, I support adding a question about romantic orientation in the next survey.
6buybuydandavis
Great line from OkCupid:
4CBHacking
Anecdotally, this matches my experience (both on OKC and the "bisexual but hereroromantic" thing with three of my four most recent sexual partners).
3Vaniver
Grammar modified to be clearer, thanks for pointing that out.
0roystgnr
All I've come up with is a half-formed joke about how human females really are intrinsically attractive after all.
1Error
While an appealing hypothesis, if that were the case I would expect roughly the same percentage for the general public. The wiki of a million lies suggests the actual rate for the general public is in the low single digits.
5Vulture
As clever as this phrase is, it is tragically ambiguous. I'm guessing 65% chance Wikipedia, 30% RationalWiki, 3% our local wiki, 2% other. How did I do?
6Error
I meant Wikipedia. I've actually never heard the phrase applied to any other wiki. It's certainly not original to me.
2Vulture
Thanks!
3Alsadius
None of the other wikis you list are big enough to have more than maybe 75,000 lies.
1Nornagest
Are you counting talk pages? I'd expect those to have a higher density of lies than the main namespace.
0Alsadius
Sure, but I'd expect that smaller wikis have exponentially less talk, because there's fewer people to do the talking.
-4Unknowns
It seems that women are borderline bisexual by nature. For example, heterosexual women are significantly more likely to want to dance with other women than heterosexual men are to dance with other men, and the same thing is true for all sorts of other activities that have some kind of borderline relationship with sexual activities. So perhaps there is a kind of implicit bisexuality there which is more often made explicit in the case of Less Wrong women than other women, perhaps on account of higher introspection or the like.
8KPier
I am suspicious of this as an explanation. Most straight-identified women I know who will dance with/jokingly flirt with other women are in fact straight and not 'implicitly bisexual'; plenty of them live in environments where there'd be no social cost to being bisexual, and they are introspective enough that 'they are actually just straight and don't interpret those behaviors as sexual/romantic' seems most likely. Men face higher social penalties for being gay or bisexual (and presumably for being thought to be gay or bisexual) which seems a more likely explanation for why they don't do things that could be perceived as showing romantic interest toward men (like dancing or 'joking' flirting) than that women are borderline bisexual by nature.
7NancyLebovitz
Men who aren't bisexual are missing considerably less than half the potential fun, since the proportion of men who are gay or bisexual is fairly low.
4gothgirl420666
Yeah, but gay men are also more promiscuous.
0Alsadius
Is your comparison "than straight men" or "than straight women" here?
-3cameroncowan
I think being bi is simply being open-minded to all the potentials of relationships. But I agree the number of people with whom you might be engaged in sex or romance does not significantly increase. But I think the dual sexuality thing is dumb because sexuality is fluid. If I had a nickel for every time I went to bed with a "straight" man, we could have a nice dinner.

May is missing from Birth Month.

[-][anonymous]110

I think it's pretty astounding that nobody at Less Wrong was born in May. I'm not sure why Scott doesn't think that's a deviation from randomness.

2imuli
May is in the data; a copy-paste error is much less astounding than nobody being born in May. 119 respondents, nothing surprising here.
4gjm
You might consider the hypothesis that FrameBenignly appreciates this and was making a joke. This seems much more likely to me than that s/he actually thinks no one said they were born in May. (Of course, maybe I'm missing a meta-joke where you pretend to take FrameBenignly at face value just as s/he pretended to take the alleged survey data at face value. But then maybe you're now missing a meta-meta-joke where I pretend to take you at face value...)
5[anonymous]
I'd make a triple-meta joke, but there's a two-meta limit on all month of birth jokes.
2imuli
Oh no! I forgot to leave my evidence.
0TheOtherDave
I see what you did there.

Thanks for showing us that there are autistic cryonics patients in the world. I am more likely to sign up when I am old enough to legally do so without parental permission, because now I know I wouldn't be the only autistic person in the future, no matter what happens when people develop a prenatal autism test.

8NancyLebovitz
I believe that if cryonics works, people will tend to associate with those from their home era.

Thanks for doing this!

[This question was poorly worded and should have acknowledged that people can both be asexual and have a specific orientation; as a result it probably vastly undercounted our asexual readers]

I find the "vastly" part dubious, given that 3% asexual already seems disproportionately large (general population seems to be about 1%). I would expect for asexuals to be overrepresented, and I do think the question wording means the survey's estimate underestimates the true proportion, but I don't think that it's, say, actually 10% instead of actually 4%.

0Richard Korzekwa
Why do you expect this? It seems reasonable if I think in terms of stereotypes. Also, I guess LWers might be more likely to recognize that they are asexual.
1Vaniver
Mostly the negative relationship between intelligence and interest in sex / sexual activity, especially when nerds are involved.
1Lumifer
There might be a negative relationship between intelligence and success in having sex, which is a different issue not connected to asexuals.
1alienist
Well, it's possible the asexuals got that way from accepting that they were never going to have sex. Also smart people are more likely to take ideas seriously, including the idea prevalent in many social circles that having a sex drive is evil. See Scott Aaronson's recent comment about how he once begged to be chemically castrated.
3Lumifer
While many things are possible, I don't think this is quite the way it works with asexuals... On the contrary, I think smart people are more likely to recognize that certain "prevalent in many social circles" ideas are bullshit or outright malicious. Scott Aaronson's problems in this respect did not arise because he is very smart.
0lalaithion
I think that, while it is indeed possible for asexuality to arise that way, most evidence seems to point away from that conclusion....
0Wes_W
Keep in mind also that other non-heterosexual orientations are also overrepresented, and I don't think anyone is quite sure why, but the same effect may well apply to asexuals.
4ssica3003
I think it's less a case of over-representation and more a case of a group of people who believe strongly in giving honest answers to survey questions in order to get good data and who are reasonably sure their privacy will be protected. Most surveys on sexuality suffer from reluctance to self-report. This (and last year's) figure for bisexuality in particular is more in line with my anecdotal & lived experience than 'official' survey data on the topic (bisexual people <1% population). Bisexual people (and bisexual men in particular) do exist! Yay! (We knew that, lol).
4alienist
That's probably a function of your social circle.
0[anonymous]
What actually demonstrates this? I know plenty of nerds with healthy sex drives.
4Good_Burning_Plastic
See e.g. the post "Intelligence and Intercourse" on the blog Gene Expression (though it appears to only mention studies about people in the US).

Good job on running the survey and analyzing the data! I do wish that one of the extra credit questions had asked whether or not readers were fans of My Little Pony: Friendship is Magic.

P Supernatural: 6.68 ± 20.271 (0, 0, 1) [1386]

P God: 8.26 ± 21.088 (0, 0.01, 3) [1376]

The question for P(Supernatural) explicitly said "including God." So either LW assigns a median probability of at least one in 10,000 that God created the universe and then did nothing, or there's a bad case of conjunction fallacy.

9epursimuove
4Scott Garrabrant
Conjunctions do not work with medians that way. From what you quoted, it is entirely possible that the median probability for that claim is 0. You can figure it out from the raw data.
3TheMajor
I don't understand. Since existence of God is explicitly included in the question about the existence of supernatural things, everybody should have put P(God) < P(Supernatural), and therefore the median also is lower (since for every entry P(God) there is a higher entry P(Supernatural) by that same person). So the result above should be weak evidence that a significant proportion of the LW'ers fell prey to the conjunction fallacy here, right?
2Scott Garrabrant
No, I think that a god that does not interfere with the physical universe at all counts as not supernatural by the wording of the question. My point was that the median of the difference of two data sets is not the difference of the median. (although it is still evidence of a problem)
0[anonymous]
Something else I noticed: Agnostic: 156, 10.4%; Lukewarm theist: 44, 2.9%; Deist/pantheist/etc.: 22, 1.5%; Committed theist: 60, 4.0%. A true agnostic should be at 50% on the probability of God, but we'll say 25-75% as reasonable. A lukewarm theist should be 50-100%. I don't like the deist wording, but we'll say 50-100% for them, and 75-100% for the committed theists. We then get: 10.4×0.25 + 2.9×0.5 + 1.5×0.5 + 4.0×0.75 = 7.8% P(God) as our lower bound, compared to the 8.26% actual. That's assuming all the atheists assigned a 0% probability to God. So it seems everybody is very close to their minimum on this; even likely below the minimum for some of them. My guess is a lot of people have some major inconsistencies in their views on God's existence.
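The weighted lower bound above works out as claimed; a quick check (group shares from the survey, minimum-probability floors as assumed in the parent comment):

```python
# (share of respondents in %, assumed minimum P(God) for that group)
groups = [
    (10.4, 0.25),  # agnostic
    (2.9, 0.50),   # lukewarm theist
    (1.5, 0.50),   # deist/pantheist/etc.
    (4.0, 0.75),   # committed theist
]
# Atheists are assumed to contribute 0, so this is a lower bound.
lower_bound = sum(share * floor for share, floor in groups)
print(round(lower_bound, 1))  # → 7.8 (percent), vs. the 8.26 mean reported
```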

1319 people supplied a probability of God that was not blank or "idk" or the equivalent thereof as well as a non-blank religion. I was going to do results for both religious views and religious background, but religious background was a write-in so no thanks.

Literally every group had at least one member who supplied a P(God) of 0 and a P(God) of 100.

0Richard Korzekwa
Okay, I'll bite: What does someone mean when they say they are Atheist, and they think P(God) = 100% ?
6jbay
According to Descartes: for any X, P(X exists | X is taking the survey) = 100%, and also that 100% certainty of anything on the part of X is only allowed in this particular case. Therefore, if X says they are Atheist, and that P(God exists | X is taking the survey) = 100%, then X is God, God is taking the survey, and happens to be an Atheist.
3Nornagest
Either they're actually a misotheist, or they're using a nonstandard definition of "God" or of "atheist" (though I think at least the former was defined on the survey), or they misunderstood the question, or they're trolling.
1Alsadius
Presumably "Yeah, God exists, but why should I care?". Or trolling/misunderstanding the question.
0Val
Wouldn't that be the very definition of a deist or an agnostic, instead of an atheist?
0Alsadius
I didn't say that they were good at defining terms.

Do you have some links to calibration training? I'm curious how they handle model error (the error when your model is totally wrong).

For question 10 for example, I'm guessing that many more people would have gotten the correct answer if the question was something like "Name the best selling PC game, where best selling solely counts units not gross, number of box purchases and not subscriptions, and also does not count games packaged with other software?" instead of "What is the best-selling computer game of all time?". I'm guessing mos... (read more)

I'm curious how they handle model error (the error when your model is totally wrong).

They punish it. That is, your stated credence should include both your 'inside view' error of "How confident is my mythology module in this answer?" and your 'outside view' error of "How confident am I in my mythology module?"

One of the primary benefits of playing a Credence Game like this one is it gives you a sense of those outside view confidences. I am, for example, able to tell which of two American postmasters general came first at the 60% level, simply by using the heuristic of "which of these names sounds more old-timey?", but am at the 50% level (i.e. pure chance) in determining which sports team won a game by comparing their names.

But it seems hard to guess beforehand that the question you thought you were answering wasn't the question that you were being asked!

This is the sort of thing you learn by answering a bunch of questions from the same person, or by having a lawyer-sense of "how many qualifications would I need to add or remove to this sentence to be sure?".

0whateverfor
OK, so all that makes sense and seems basically correct, but I don't see how you get from there to being able to map confidence for persons across a question the same way you can for questions across a person. Adopting that terminology, I'm saying for a typical Less Wrong user, they likely have a similar understanding-the-question module. This module will be right most of the time and wrong some of the time, so they correctly apply the outside view error afterwards on each of their estimates. Since the understanding-the-question module is similar for each person, though, the actual errors aren't evenly distributed across questions, so they will underestimate on "easy" questions and overestimate on "hard" ones, if easy and hard are determined afterwards by percentage that get the answer correct.
0Vaniver
That seems reasonable to me, yes, as an easy way for a question to be 'hard' is if most answerers interpret it differently from the questioner.

Yayy! I was having a shitty day, and seeing these results posted lifted my spirits. Thank you for that! Below are my assorted thoughts:

I'm a little disappointed that the correlation between height and P(supernatural)-and-similar didn't hold up this year, because it was really fun trying to come up with explanations for that that weren't prima facie moronic. Maybe that should have been a sign it wasn't a real thing.

The digit ratio thing is indeed delicious. I love that stuff. I'm surprised there wasn't a correlation to sexual orientation, though, since I se... (read more)

I remember answering the computer games question and at first feeling like I knew the answer. Then I realized the feeling I was having was that I had a better shot at the question than the average person that I knew, not that I knew the answer with high confidence. Once I mentally counted up all the games that I thought might be it, then considered all the games I probably hadn't even thought of (of which Minecraft was one), I realized I had no idea what the right answer was and put something like 5% confidence in The Sims 3 (which at least is a top ten game). But the point is that I think I almost didn't catch my mistake before it was too late, and this kind of error may be common.

I was confident in my incorrect computer game answer because I had recently read the Wikipedia page List of best-selling video games, remembered the answer from it, and unthinkingly assumed that "video games" was the same as "computer games".

8[anonymous]
The correct answer is Tetris. The question should have been "What is the best-selling personal computer game of all time?" Mobile phones are technically computers too. I'm not sure how much difference that would have made.
0habeuscuppus
I interpreted the question to include mobile devices and answered Tetris with high confidence. It would be interesting to see the results of the question if we accepted either Tetris or Minecraft as the correct answer, since both are correct depending on whether "computer" was meant to mean "IBM PC compatible" or "video-game-playing platform".
1emr
On the computer game question: Isn't there an implicit "X is true and X will be marked correct by the rater"? You'd hope these two are clearly aligned, but if you've taken many real-world quizzes, you'll recognize the difference.
0devas
I think the computer games question has to do with tribal identity: people who love a particularly well-known game might be more inclined to list it as the best seller ever, and put down higher confidence, because they love it so much. Kind of like how owners of PlayStations and Xboxes will debate the superiority of their technical specs regardless of whether they're actually superior.
9Jiro
I think the computer games result has to do with it being a bad question. There are many legitimate answers depending on how you interpret the question, including my answer: Minesweeper is bundled with Windows and thus has probably sold more copies than anything else.
5Vulture
Is it really a "bad question"? Shouldn't a good calibrator be able to account for model error?
1devas
Depends on whether you consider "being able to comprehensively understand questions that may be misleading" to be a subset of calibration skills.
0devas
Good point, I hadn't thought of that.

It looks to me like everyone was horrendously underconfident on all the easy questions, and horrendously overconfident on all the hard questions.

I think that this is what correct calibration overall looks like, since you don't know in advance which questions are easy and which ones are tricky. I would be quite impressed if a group of super-calibrators had correct calibration curves on every question, rather than on average over a set of questions.
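This selection effect is easy to show with a toy simulation (made-up numbers for illustration, not survey data). Here answerers are well calibrated on average, but some questions carry a hidden "trickiness" penalty that no one can see in advance; once questions are sorted into "easy" and "hard" after the fact by observed accuracy, the group looks overconfident on the hard half and slightly underconfident on the easy half:

```python
import random

random.seed(0)
N_QUESTIONS, N_ANSWERERS = 200, 50

questions = []
for _ in range(N_QUESTIONS):
    # Hidden trickiness: on some questions everyone's real chance of being
    # right is lower than their (otherwise honest) confidence suggests.
    trick = random.random() < 0.3
    confs, correct = [], 0
    for _ in range(N_ANSWERERS):
        conf = random.uniform(0.5, 1.0)          # stated confidence
        p_true = conf * (0.5 if trick else 1.0)  # unseen penalty on tricky questions
        confs.append(conf)
        correct += random.random() < p_true
    questions.append((sum(confs) / N_ANSWERERS, correct / N_ANSWERERS))

# Classify questions as "hard"/"easy" *after the fact* by observed accuracy.
questions.sort(key=lambda q: q[1])
hard, easy = questions[:N_QUESTIONS // 2], questions[N_QUESTIONS // 2:]

def avg(pairs, i):
    return sum(p[i] for p in pairs) / len(pairs)

print(f"hard: mean confidence {avg(hard, 0):.2f}, mean accuracy {avg(hard, 1):.2f}")
print(f"easy: mean confidence {avg(easy, 0):.2f}, mean accuracy {avg(easy, 1):.2f}")
```

The stated confidences are identical on both halves; only the after-the-fact sorting makes one half look overconfident.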

0[anonymous]
Can you explain your point more fully? I don't think I understand it or can identify the antecedents. For instance, what do you mean by calibration?
0Douglas_Knight
Right, Dunning-Kruger is just regression to the mean.
0orthonormal
No, that's false. It's possible (and common) for a person to be wildly overconfident on a pretty broad domain of questions.
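For what it's worth, the regression-to-the-mean mechanism itself is easy to simulate (toy numbers, not real data): give everyone an unbiased but noisy self-estimate, bin people by *observed* test score as the Dunning-Kruger plots do, and the characteristic pattern appears even though no one is systematically overconfident:

```python
import random

random.seed(2)

people = []
for _ in range(10_000):
    skill = random.gauss(50, 10)             # true ability
    test = skill + random.gauss(0, 10)       # observed score: ability + luck
    estimate = skill + random.gauss(0, 10)   # self-estimate: unbiased around ability
    people.append((test, estimate))

# Bin by observed test score, as the classic Dunning-Kruger plots do.
people.sort()
quartiles = [people[i * 2500:(i + 1) * 2500] for i in range(4)]
for i, q in enumerate(quartiles, start=1):
    mean_test = sum(t for t, _ in q) / len(q)
    mean_est = sum(e for _, e in q) / len(q)
    print(f"quartile {i}: mean score {mean_test:.1f}, mean self-estimate {mean_est:.1f}")
```

The bottom quartile's self-estimates come out above its scores and the top quartile's below, purely because luck is part of the observed score. That doesn't rule out genuine overconfidence on top of this, which is orthonormal's point.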
70tog

It's interesting to compare these results to those of the 2014 Survey of Effective Altruists. These will be released soon, but here are some initial ways in which effective altruists who took this survey compare to LessWrong census takers:

  • Somewhat less male (75% vs 87%)
  • More in the UK
  • Equally atheist/agnostic
  • More consequentialist (74% vs 60%)
  • Much more vegan/vegetarian (34% vs 10%)
  • Witty, attractive, and great company at parties
0Username
For someone who isn't an EA, and therefore shouldn't take the survey, is there a place where we can see the results?
0RyanCarey
AFAIK, the results aren't out yet, but they'll go on effective-altruism.com when they are.
0tog
Correct, Peter Hurford is working on them and will I believe finish them soon.

I think that there are better analyses of calibration which could be done than the ones that are posted here.

For example, I think it's better to combine all 10 questions into a single graph rather than looking at each one separately.

The pattern of overconfidence on hard questions and underconfidence on easy questions is actually what you'd expect to find, even if people are well-calibrated. One thing that makes a question easy is if the obvious guess is the correct answer (like a question about Confederate Civil War generals where the correct answer is R... (read more)
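A minimal sketch of that pooled analysis, using simulated (confidence, correct) pairs in place of the real survey responses: group all answers across all ten questions by stated confidence, then compare each bucket's stated confidence to its actual accuracy.

```python
import random
from collections import defaultdict

random.seed(1)

# Simulated (confidence, correct) pairs pooled across all questions; with
# real survey data these would come straight from the responses.
answers = []
for _ in range(5000):
    conf = random.choice([0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
    answers.append((conf, random.random() < conf))  # well-calibrated by construction

# One calibration curve for the whole pool, rather than one per question.
buckets = defaultdict(lambda: [0, 0])  # conf -> [n answers, n correct]
for conf, correct in answers:
    buckets[conf][0] += 1
    buckets[conf][1] += correct

for conf in sorted(buckets):
    n, right = buckets[conf]
    print(f"stated {conf:.0%}: actual {right / n:.0%} over {n} answers")
```

Because these simulated answerers are calibrated by construction, each bucket's accuracy lands near its stated confidence; deviations from that diagonal in the real data are what the pooled graph would expose.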

0Rob Bensinger
I would count minor typos (like 'mitochondira'), spelling errors (like 'mitocondria'), and trivial variants (like 'mitochondrium') as correct. I'd count major typos, where the pronunciation would be substantially different from the correct name, as neither correct nor incorrect -- ditch the data, since it may require too many judgment calls about whether someone's zeroing in on the correct name, and since the respondents themselves weren't told how spelling-sensitive their calibration should be. So my system says: Heisenberg, Hawai'i, Odin, Jacob, Indonesia.
2Sabiola
Hawaii is both in the 'Also correct' and the 'Neither' list.
0Rob Bensinger
One uses a typewriter apostrophe ('), the other doesn't.
1arundelo
No, bbleeker is saying that "Hawaii" (no apostrophe) is in both lists.
5Rob Bensinger
Ah. "Hawaii or Arkansas" is its own entry, and was typed in as an answer.

This is not a human universal: people who put even a small amount of training into calibration can become very well calibrated very quickly. This is a sign that most Less Wrongers continue to neglect the very basics of rationality and are incapable of judging how much evidence they have on a given issue. Veterans of the site do no better than newbies on this measure.

Can someone who's done calibration training comment on whether it really seems to represent the ability to "judge how much evidence you have on a given issue", as opposed to the ability to accurately translate brain-based probability estimates into numerical probability estimates?

8Vaniver
As I interpret it, the two are distinct but calibration training does both. That is, there's both a "subjective feeling of certainty"->"probability number" model that's being trained, and that model probably ought to be trained for every field independently (that is, determining how much subjective feeling of certainty you should have in different cases). There appears to be some transfer but I don't think it's as much as Yvain seems to be postulating.
0John_Maxwell
Have you done calibration training? Do you recommend it? I think I remember someone from CFAR saying that it was kind of a niche skill, and to my knowledge it hasn't been incorporated into their curriculum (although Andrew Critch created a calibration Android app?)
0Vaniver
I've done a moderate amount of training. I think that the credence game is fun enough to put an hour or two into, but I think the claim that it's worth putting serious effort into rests on the claim that it transfers (or that probabilistic judgments are common enough in your professional field that it makes sense to train those sorts of judgments).
2John_Maxwell
CFAR calibration games
I tried the game for a while... many of the questions are pretty hard IMO (especially the "which of these top-10 ranked things was ranked higher" ones), which makes it a bit difficult to learn to differentiate easy & hard questions.
Other calibration quizzes