Thanks to everyone who took the 2013 Less Wrong Census/Survey. Extra thanks to Ozy, who helped me out with the data processing and statistics work, and to everyone who suggested questions.
This year's results are below. Some of them may make more sense in the context of the original survey questions, which can be seen here. Please do not take the survey now; it is over and your responses will not be counted.
Part I. Population
1636 people answered the survey.
Compare this to 1195 people last year, and 1090 people the year before that. It would seem the site is growing, but we do have to consider that each survey ran for a different length of time: the last survey ran for 23 days, while this one ran for 40.
However, almost everyone who takes the survey does so within the first few weeks it is available. 1506 respondents answered within the first 23 days, showing that even if this survey had run only as long as last year's, there would still have been growth.
As we will see lower down, growth is smooth across all categories of users (lurkers, commenters, posters) EXCEPT people who have posted to Main, the number of which remains nearly the same from year to year.
We continue to have very high turnover - only 40% of respondents this year say they also took the survey last year.
II. Categorical Data
SEX:
Female: 161, 9.8%
Male: 1453, 88.8%
Other: 1, 0.1%
Did not answer: 21, 1.3%
[[Ozy is disappointed that we've lost 50% of our intersex readers.]]
GENDER:
F (cisgender): 140, 8.6%
F (transgender MtF): 20, 1.2%
M (cisgender): 1401, 85.6%
M (transgender FtM): 5, 0.3%
Other: 49, 3%
Did not answer: 21, 1.3%
SEXUAL ORIENTATION:
Asexual: 47, 2.9%
Bisexual: 188, 12.2%
Heterosexual: 1287, 78.7%
Homosexual: 45, 2.8%
Other: 39, 2.4%
Did not answer: 19, 1.2%
RELATIONSHIP STYLE:
Prefer monogamous: 829, 50.7%
Prefer polyamorous: 234, 14.3%
Other: 32, 2.0%
Uncertain/no preference: 520, 31.8%
Did not answer: 21, 1.3%
NUMBER OF CURRENT PARTNERS:
0: 797, 48.7%
1: 728, 44.5%
2: 66, 4.0%
3: 21, 1.3%
4: 1, 0.1%
6: 3, 0.2%
Did not answer: 20, 1.2%
RELATIONSHIP STATUS:
Married: 304, 18.6%
Relationship: 473, 28.9%
Single: 840, 51.3%
RELATIONSHIP GOALS:
Looking for more relationship partners: 617, 37.7%
Not looking for more relationship partners: 993, 60.7%
Did not answer: 26, 1.6%
HAVE YOU DATED SOMEONE YOU MET THROUGH THE LESS WRONG COMMUNITY?
Yes: 53, 3.3%
I didn't meet them through the community but they're part of the community now: 66, 4.0%
No: 1482, 90.5%
Did not answer: 35, 2.1%
COUNTRY:
United States: 895, 54.7%
United Kingdom: 144, 8.8%
Canada: 107, 6.5%
Australia: 69, 4.2%
Germany: 68, 4.2%
Finland: 35, 2.1%
Russia: 22, 1.3%
New Zealand: 20, 1.2%
Israel: 17, 1.0%
France: 16, 1.0%
Poland: 16, 1.0%
LESS WRONGERS PER CAPITA:
Finland: 1/154,685
New Zealand: 1/221,650
Canada: 1/325,981
Australia: 1/328,659
United States: 1/350,726
United Kingdom: 1/439,097
Israel: 1/465,176
Germany: 1/1,204,264
Poland: 1/2,408,750
France: 1/4,106,250
Russia: 1/6,522,727
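In case anyone wants to replicate these, the per-capita figures are just national populations divided by respondent counts. A minimal sketch (the population figure here is an assumed ~2013 estimate for Finland, chosen for illustration, not taken from the survey):

```python
def one_in(population, respondents):
    """Express survey respondents per capita as '1 in N people'."""
    return round(population / respondents)

# Assumed ~2013 population estimate for Finland (illustrative only)
finland_pop = 5_413_975

print(f"Finland: 1/{one_in(finland_pop, 35):,}")
```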
RACE:
Asian (East Asian): 60, 3.7%
Asian (Indian subcontinent): 37, 2.3%
Black: 11, 0.7%
Middle Eastern: 9, 0.6%
White (Hispanic): 73, 4.5%
White (non-Hispanic): 1373, 83.9%
Other: 51, 3.1%
Did not answer: 22, 1.3%
WORK STATUS:
Academics (teaching): 77, 4.7%
For-profit work: 552, 33.7%
Government work: 55, 3.4%
Independently wealthy: 14, 0.9%
Non-profit work: 46, 2.8%
Self-employed: 103, 6.3%
Student: 661, 40.4%
Unemployed: 105, 6.4%
Did not answer: 23, 1.4%
PROFESSION:
Art: 27, 1.7%
Biology: 26, 1.6%
Business: 44, 2.7%
Computers (AI): 47, 2.9%
Computers (other academic computer science): 107, 6.5%
Computers (practical): 505, 30.9%
Engineering: 128, 7.8%
Finance/economics: 92, 5.6%
Law: 36, 2.2%
Mathematics: 139, 8.5%
Medicine: 31, 1.9%
Neuroscience: 13, 0.8%
Philosophy: 41, 2.5%
Physics: 92, 5.6%
Psychology: 34, 2.1%
Statistics: 23, 1.4%
Other hard science: 31, 1.9%
Other social science: 43, 2.6%
Other: 139, 8.5%
Did not answer: 38, 2.3%
DEGREE:
None: 84, 5.1%
High school: 444, 27.1%
2 year degree: 68, 4.2%
Bachelor's: 554, 33.9%
Master's: 323, 19.7%
MD/JD/other professional degree: 31, 2.0%
PhD: 90, 5.5%
Other: 22, 1.3%
Did not answer: 19, 1.2%
POLITICAL:
Communist: 11, 0.7%
Conservative: 64, 3.9%
Liberal: 580, 35.5%
Libertarian: 437, 26.7%
Socialist: 502, 30.7%
Did not answer: 42, 2.6%
COMPLEX POLITICAL WITH WRITE-IN:
Anarchist: 52, 3.2%
Conservative: 16, 1.0%
Futarchist: 42, 2.6%
Left-libertarian: 142, 8.7%
Liberal: 5
Moderate: 53, 3.2%
Pragmatist: 110, 6.7%
Progressive: 206, 12.6%
Reactionary: 40, 2.4%
Social democrat: 154, 9.5%
Socialist: 135, 8.2%
Did not answer: 26.2%
[[All answers with more than 1% of the Less Wrong population included. Other answers which made Ozy giggle included "are any of you kings?! why do you CARE?!", "Exclusionary: you are entitled to an opinion on nuclear power when you know how much of your power is nuclear", "having-well-founded-opinions-is-really-hard-ist", "kleptocrat", "pirate", and "SPECIAL FUCKING SNOWFLAKE."]]
AMERICAN PARTY AFFILIATION:
Democratic Party: 226, 13.8%
Libertarian Party: 31, 1.9%
Republican Party: 58, 3.5%
Other third party: 19, 1.2%
Not registered: 447, 27.3%
Did not answer or non-American: 856, 52.3%
VOTING:
Yes: 936, 57.2%
No: 450, 27.5%
My country doesn't hold elections: 2, 0.1%
Did not answer: 249, 15.2%
RELIGIOUS VIEWS:
Agnostic: 165, 10.1%
Atheist and not spiritual: 1163, 71.1%
Atheist but spiritual: 132, 8.1%
Deist/pantheist/etc.: 36, 2.2%
Lukewarm theist: 53, 3.2%
Committed theist: 64, 3.9%
RELIGIOUS DENOMINATION (IF THEIST):
Buddhist: 22, 1.3%
Christian (Catholic): 44, 2.7%
Christian (Protestant): 56, 3.4%
Jewish: 31, 1.9%
Mixed/Other: 21, 1.3%
Unitarian Universalist or similar: 25, 1.5%
[[This includes all religions with more than 1% of Less Wrongers. Minority religions include Dzogchen, Daoism, various sorts of Paganism, Simulationist, a very confused secular humanist, Kopmist, Discordian, and a Cultus Deorum Romanum practitioner whom Ozy wants to be friends with.]]
FAMILY RELIGION:
Agnostic: 129, 11.6%
Atheist and not spiritual: 225, 13.8%
Atheist but spiritual: 73, 4.5%
Committed theist: 423, 25.9%
Deist/pantheist, etc.: 42, 2.6%
Lukewarm theist: 563, 34.4%
Mixed/other: 97, 5.9%
Did not answer: 24, 1.5%
RELIGIOUS BACKGROUND:
Bahai: 3, 0.2%
Buddhist: 13, 0.8%
Christian (Catholic): 418, 25.6%
Christian (Mormon): 38, 2.3%
Christian (Protestant): 631, 38.4%
Christian (Quaker): 7, 0.4%
Christian (Unitarian Universalist or similar): 32, 2.0%
Christian (other non-Protestant): 99, 6.1%
Christian (unknown): 3, 0.2%
Eckankar: 1, 0.1%
Hindu: 29, 1.8%
Jewish: 136, 8.3%
Muslim: 12, 0.7%
Native American Spiritualist: 1, 0.1%
Mixed/Other: 85, 5.3%
Sikhism: 1, 0.1%
Traditional Chinese: 11, 0.7%
Wiccan: 1, 0.1%
None: 8, 0.4%
Did not answer: 107, 6.7%
MORAL VIEWS:
Accept/lean towards consequentialism: 1049, 64.1%
Accept/lean towards deontology: 77, 4.7%
Accept/lean towards virtue ethics: 197, 12.0%
Other/no answer: 276, 16.9%
Did not answer: 37, 2.3%
CHILDREN
0: 1414, 86.4%
1: 77, 4.7%
2: 90, 5.5%
3: 25, 1.5%
4: 7, 0.4%
5: 1, 0.1%
6: 2, 0.1%
Did not answer: 20, 1.2%
MORE CHILDREN:
Have no children, don't want any: 506, 31.3%
Have no children, uncertain if want them: 472, 29.2%
Have no children, want children: 431, 26.7%
Have no children, didn't answer: 5, 0.3%
Have children, don't want more: 124, 7.6%
Have children, uncertain if want more: 25, 1.5%
Have children, want more: 53, 3.2%
HANDEDNESS:
Right: 1256, 76.6%
Left: 145, 9.5%
Ambidextrous: 36, 2.2%
Not sure: 7, 0.4%
Did not answer: 182, 11.1%
LESS WRONG USE:
Lurker (no account): 584, 35.7%
Lurker (account): 221, 13.5%
Poster (comment, no post): 495, 30.3%
Poster (Discussion, not Main): 221, 12.9%
Poster (Main): 103, 6.3%
SEQUENCES:
Never knew they existed: 119, 7.3%
Knew they existed, didn't look at them: 48, 2.9%
~25% of the Sequences: 200, 12.2%
~50% of the Sequences: 271, 16.6%
~75% of the Sequences: 225, 13.8%
All the Sequences: 419, 25.6%
Did not answer: 24, 1.5%
MEETUPS:
No: 1134, 69.3%
Yes, once or a few times: 307, 18.8%
Yes, regularly: 159, 9.7%
HPMOR:
No: 272, 16.6%
Started it, haven't finished: 255, 15.6%
Yes, all of it: 912, 55.7%
CFAR WORKSHOP ATTENDANCE:
Yes, a full workshop: 105, 6.4%
A class but not a full-day workshop: 40, 2.4%
No: 1446, 88.3%
Did not answer: 46, 2.8%
PHYSICAL INTERACTION WITH LW COMMUNITY:
Yes, all the time: 94, 5.7%
Yes, sometimes: 179, 10.9%
No: 1316, 80.4%
Did not answer: 48, 2.9%
VEGETARIAN:
No: 1201, 73.4%
Yes: 213, 13.0%
Did not answer: 223, 13.6%
SPACED REPETITION:
Never heard of them: 363, 22.2%
No, but I've heard of them: 495, 30.2%
Yes, in the past: 328, 20.0%
Yes, currently: 219, 13.4%
Did not answer: 232, 14.2%
HAVE YOU TAKEN PREVIOUS INCARNATIONS OF THE LESS WRONG SURVEY?
Yes: 638, 39.0%
No: 784, 47.9%
Did not answer: 215, 13.1%
PRIMARY LANGUAGE:
English: 1009, 67.8%
German: 58, 3.6%
Finnish: 29, 1.8%
Russian: 25, 1.6%
French: 17, 1.0%
Dutch: 16, 1.0%
Did not answer: 15.2%
[[This includes all answers that more than 1% of respondents chose. Other languages include Urdu, both Czech and Slovakian, Latvian, and Love.]]
ENTREPRENEUR:
I don't want to start my own business: 617, 37.7%
I am considering starting my own business: 474, 29.0%
I plan to start my own business: 113, 6.9%
I've already started my own business: 156, 9.5%
Did not answer: 277, 16.9%
EFFECTIVE ALTRUIST:
Yes: 468, 28.6%
No: 883, 53.9%
Did not answer: 286, 17.5%
WHO ARE YOU LIVING WITH?
Alone: 348, 21.3%
With family: 420, 25.7%
With partner/spouse: 400, 24.4%
With roommates: 450, 27.5%
Did not answer: 19, 1.3%
DO YOU GIVE BLOOD?
No: 646, 39.5%
No, only because I'm not allowed: 157, 9.6%
Yes: 609, 37.2%
Did not answer: 225, 13.7%
GLOBAL CATASTROPHIC RISK:
Pandemic (bioengineered): 374, 22.8%
Environmental collapse including global warming: 251, 15.3%
Unfriendly AI: 233, 14.2%
Nuclear war: 210, 12.8%
Economic/political collapse: 175, 10.7%
Pandemic (natural): 145, 8.8%
Asteroid strike: 65, 3.9%
Nanotech/grey goo: 57, 3.5%
Didn't answer: 99, 6.0%
CRYONICS STATUS:
Never thought about it / don't understand it: 69, 4.2%
No, and don't want to: 414, 25.3%
No, still considering: 636, 38.9%
No, would like to: 265, 16.2%
No, would like to, but it's unavailable: 119, 7.3%
Yes: 66, 4.0%
Didn't answer: 68, 4.2%
NEWCOMB'S PROBLEM:
Don't understand/prefer not to answer: 92, 5.6%
Not sure: 103, 6.3%
One box: 1036, 63.3%
Two box: 119, 7.3%
Did not answer: 287, 17.5%
GENOMICS:
Yes: 177, 10.8%
No: 1219, 74.5%
Did not answer: 241, 14.7%
REFERRAL TYPE:
Been here since it started in the Overcoming Bias days: 285, 17.4%
Referred by a friend: 241, 14.7%
Referred by a search engine: 148, 9.0%
Referred by HPMOR: 400, 24.4%
Referred by a link on another blog: 373, 22.8%
Referred by a school course: 1, 0.1%
Other: 160, 9.8%
Did not answer: 29, 1.9%
REFERRAL SOURCE:
Common Sense Atheism: 33
Slate Star Codex: 20
Hacker News: 18
Reddit: 18
TVTropes: 13
Y Combinator: 11
Gwern: 9
RationalWiki: 8
Marginal Revolution: 7
Unequally Yoked: 6
Armed and Dangerous: 5
Shtetl Optimized: 5
Econlog: 4
StumbleUpon: 4
Yudkowsky.net: 4
Accelerating Future: 3
Stares at the World: 3
xkcd: 3
David Brin: 2
Freethoughtblogs: 2
Felicifia: 2
Givewell: 2
hatrack.com: 2
HPMOR: 2
Patri Friedman: 2
Popehat: 2
Overcoming Bias: 2
Scientiststhesis: 2
Scott Young: 2
Stardestroyer.net: 2
TalkOrigins: 2
Tumblr: 2
[[This includes all sources with more than one referral; needless to say there was a long tail]]
III. Numeric Data
(in the form mean + stdev (1st quartile, 2nd quartile, 3rd quartile) [n = number responding])
Age: 27.4 + 8.5 (22, 25, 31) [n = 1558]
Height: 176.6 cm + 16.6 (173, 178, 183) [n = 1267]
Karma Score: 504 + 2085 (0, 0, 100) [n = 1438]
Time in community: 2.62 years + 1.84 (1, 2, 4) [n = 1443]
Time on LW: 13.25 minutes/day + 20.97 (2, 10, 15) [n = 1457]
IQ: 138.2 + 13.6 (130, 138, 145) [n = 506]
SAT out of 1600: 1474 + 114 (1410, 1490, 1560) [n = 411]
SAT out of 2400: 2207 + 161 (2130, 2240, 2330) [n = 333]
ACT out of 36: 32.8 + 2.5 (32, 33, 35) [n = 265]
P(Aliens in observable universe): 74.3 + 32.7 (60, 90, 99) [n = 1496]
P(Aliens in Milky Way): 44.9 + 38.2 (5, 40, 85) [n = 1482]
P(Supernatural): 7.7 + 22 (0E-9, .000055, 1) [n = 1484]
P(God): 9.1 + 22.9 (0E-11, .01, 3) [n = 1490]
P(Religion): 5.6 + 19.6 (0E-11, 0E-11, .5) [n = 1497]
P(Cryonics): 22.8 + 28 (2, 10, 33) [n = 1500]
P(AntiAgathics): 27.6 + 31.2 (2, 10, 50) [n = 1493]
P(Simulation): 24.1 + 28.9 (1, 10, 50) [n = 1400]
P(ManyWorlds): 50 + 29.8 (25, 50, 75) [n = 1373]
P(Warming): 80.7 + 25.2 (75, 90, 98) [n = 1509]
P(Global catastrophic risk): 72.9 + 25.41 (60, 80, 95) [n = 1502]
Singularity year: 1.67E+11 + 4.089E+12 (2060, 2090, 2150) [n = 1195]
[[Of course, this question was hopelessly screwed up by people who insisted on filling the whole answer field with 9s, or other such nonsense. I went back and eliminated all outliers - answers with more than 4 digits or answers in the past - which changed the results to: 2150 + 226 (2060, 2089, 2150)]]
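The outlier rule described in that note (drop answers with more than four digits or in the past) can be sketched mechanically; the function name and the use of the survey year as the cutoff are my own framing:

```python
def trim_singularity_answers(raw_years, survey_year=2013):
    """Keep only plausible calendar years: at most 4 digits
    (filtering out walls of 9s) and not in the past."""
    return [y for y in raw_years if survey_year <= y <= 9999]

# A wall of 9s and a year in the past both get dropped:
print(trim_singularity_answers([2060, 2090, 999999999, 1999]))
```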
Yearly Income: $73,226 +423,310 (10,000, 37,000, 80,000) [n = 910]
Yearly Charity: $1181.16 + 6037.77 (0, 50, 400) [n = 1231]
Yearly Charity to MIRI/CFAR: $307.18 + 4205.37 (0, 0, 0) [n = 1191]
Yearly Charity to X-risk (excluding MIRI or CFAR): $6.34 + 55.89 (0, 0, 0) [n = 1150]
Number of Languages: 1.49 + .8 (1, 1, 2) [n = 1345]
Older Siblings: 0.5 + 0.9 (0, 0, 1) [n = 1366]
Time Online/Week: 42.7 hours + 24.8 (25, 40, 60) [n = 1292]
Time Watching TV/Week: 4.2 hours + 5.7 (0, 2, 5) [n = 1316]
[[The next nine questions ask respondents to rate how favorable they are to the political idea or movement above on a scale of 1 to 5, with 1 being "not at all favorable" and 5 being "very favorable". You can see the exact wordings of the questions on the survey.]]
Abortion: 4.4 + 1 (4, 5, 5) [n = 1350]
Immigration: 4.1 + 1 (3, 4, 5) [n = 1322]
Basic Income: 3.8 + 1.2 (3, 4, 5) [n = 1289]
Taxes: 3.1 + 1.3 (2, 3, 4) [n = 1296]
Feminism: 3.8 + 1.2 (3, 4, 5) [n = 1329]
Social Justice: 3.6 + 1.3 (3, 4, 5) [n = 1263]
Minimum Wage: 3.2 + 1.4 (2, 3, 4) [n = 1290]
Great Stagnation: 2.3 + 1 (2, 2, 3) [n = 1273]
Human Biodiversity: 2.7 + 1.2 (2, 3, 4) [n = 1305]
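The summary format used throughout this section can be reproduced with the standard library alone; this is a sketch, and the survey's actual processing may have differed (e.g. in quartile method):

```python
import statistics

def summarize(values):
    """Render a numeric column in the post's format:
    mean + stdev (Q1, Q2, Q3) [n = number responding]."""
    q1, q2, q3 = statistics.quantiles(values, n=4)  # default 'exclusive' method
    return (f"{statistics.mean(values):.1f} + {statistics.stdev(values):.1f} "
            f"({q1:g}, {q2:g}, {q3:g}) [n = {len(values)}]")

print(summarize([1, 2, 3, 4, 5]))  # → "3.0 + 1.6 (1.5, 3, 4.5) [n = 5]"
```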
IV. Bivariate Correlations
Ozy ran bivariate correlations between all the numerical data and recorded all correlations that were significant at the .001 level in order to maximize the chance that these are genuine results. The format is variable/variable: Pearson correlation (n). Yvain is not hugely on board with the idea of running correlations between everything and seeing what sticks, but will grudgingly publish the results because of the very high bar for significance (p < .001 on ~800 correlations suggests < 1 spurious result) and because he doesn't want to have to do it himself.
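A sketch of that screen: all pairwise Pearson correlations, with pairwise deletion of missing answers (which is why each correlation below reports a different n). The variable names and missing-data handling here are assumptions; a real run would also compute p-values (e.g. with scipy.stats.pearsonr) and keep only those below .001:

```python
import math
from itertools import combinations

def pearson_r(xs, ys):
    """Pearson correlation using only the rows where both variables
    were answered (pairwise deletion); returns (r, n)."""
    pairs = [(x, y) for x, y in zip(xs, ys) if x is not None and y is not None]
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = math.sqrt(sum((x - mx) ** 2 for x, _ in pairs))
    sy = math.sqrt(sum((y - my) ** 2 for _, y in pairs))
    return cov / (sx * sy), n

def screen(columns):
    """All pairwise correlations between named columns."""
    return {(a, b): pearson_r(columns[a], columns[b])
            for a, b in combinations(columns, 2)}
```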
Less Political:
SAT score (1600)/SAT score (2400): .835 (56)
Charity/MIRI and CFAR donations: .730 (1193)
SAT score out of 2400/ACT score: .673 (111)
SAT score out of 1600/ACT score: .544 (102)
Number of children/age: .507 (1607)
P(Cryonics)/P(AntiAgathics): .489 (1515)
SAT score out of 1600/IQ: .369 (173)
MIRI and CFAR donations/XRisk donations: .284 (1178)
Number of children/ACT score: -.279 (269)
Income/charity: .269 (884)
Charity/Xrisk charity: .262 (1161)
P(Cryonics)/P(Simulation): .256 (1419)
P(AntiAgathics)/P(Simulation): .253 (1418)
Number of current partners/age: .238 (1607)
Number of children/SAT score (2400): -.223 (345)
Number of current partners/number of children: .205 (1612)
SAT score out of 1600/age: -.194 (422)
Charity/age: .175 (1259)
Time on Less Wrong/IQ: -.164 (492)
P(Warming)/P(GlobalCatastrophicRisk): .156 (1522)
Number of current partners/IQ: .155 (521)
P(Simulation)/age: -.153 (1420)
Immigration/P(ManyWorlds): .150 (1195)
Income/age: .150 (930)
P(Cryonics)/age: -.148 (1521)
Income/children: .145 (931)
P(God)/P(Simulation): .142 (1409)
Number of children/P(Aliens): .140 (1523)
P(AntiAgathics)/Hours Online: .138 (1277)
Number of current partners/karma score: .137 (1470)
Abortion/P(ManyWorlds): .122 (1215)
Feminism/Xrisk charity donations: -.122 (1104)
P(AntiAgathics)/P(ManyWorlds): .118 (1381)
P(Cryonics)/P(ManyWorlds): .117 (1387)
Karma score/Great Stagnation: .114 (1202)
Hours online/P(simulation): .114 (1199)
P(Cryonics)/Hours Online: .113 (1279)
P(AntiAgathics)/Great Stagnation: -.111 (1259)
Basic income/hours online: .111 (1200)
P(GlobalCatastrophicRisk)/Great Stagnation: -.110 (1270)
Age/X risk charity donations: .109 (1176)
P(AntiAgathics)/P(GlobalCatastrophicRisk): -.109 (1513)
Time on Less Wrong/age: -.108 (1491)
P(AntiAgathics)/Human Biodiversity: .104 (1286)
Immigration/Hours Online: .104 (1226)
P(Simulation)/P(GlobalCatastrophicRisk): -.103 (1421)
P(Supernatural)/height: -.101 (1232)
P(GlobalCatastrophicRisk)/height: .101 (1249)
Number of children/hours online: -.099 (1321)
P(AntiAgathics)/age: -.097 (1514)
Karma score/time on LW: .096 (1404)
This year for the first time P(Aliens) and P(Aliens2) are entirely uncorrelated with each other. Time in Community, Time on LW, and IQ are not correlated with anything particularly interesting, suggesting all three fail to change people's views.
Results we find amusing: high-IQ and high-karma people have more romantic partners, suggesting that those are attractive traits. There is definitely a Cryonics/Antiagathics/Simulation/Many Worlds cluster of weird beliefs, which younger people and people who spend more time online are slightly more likely to have - weirdly, that cluster seems slightly less likely to believe in global catastrophic risk. Older people and people with more children have more romantic partners (it'd be interesting to see if that holds true for the polyamorous). People who believe in anti-agathics and global catastrophic risk are less likely to believe in a great stagnation (presumably because both of the above rely on inventions). People who spend more time on Less Wrong have lower IQs. Height is, bizarrely, correlated with belief in the supernatural and global catastrophic risk.
All political viewpoints are correlated with each other in pretty much exactly the way one would expect. They are also correlated with one's level of belief in God, the supernatural, and religion. There are minor correlations with some of the beliefs and number of partners (presumably because polyamory), number of children, and number of languages spoken. We are doing terribly at avoiding Blue/Green politics, people.
More Political:
P(Supernatural)/P(God): .736 (1496)
P(Supernatural)/P(Religion): .667 (1492)
Minimum wage/taxes: .649 (1299)
P(God)/P(Religion): .631 (1496)
Feminism/social justice: .619 (1293)
Social justice/minimum wage: .508 (1262)
P(Supernatural)/abortion: -.469 (1309)
Taxes/basic income: .463 (1285)
P(God)/abortion: -.461 (1310)
Social justice/taxes: .456 (1267)
P(Religion)/abortion: -.413
Basic income/minimum wage: .392 (1283)
Feminism/taxes: .391 (1318)
Feminism/minimum wage: .391 (1312)
Feminism/human biodiversity: -.365 (1331)
Immigration/feminism: .355 (1336)
P(Warming)/taxes: .340 (1292)
Basic income/social justice: .311 (1270)
Immigration/social justice: .307 (1275)
P(Warming)/feminism: .294 (1323)
Immigration/human biodiversity: -.292 (1313)
P(Warming)/basic income: .290 (1287)
Social justice/human biodiversity: -.289 (1281)
Basic income/feminism: .284 (1313)
Human biodiversity/minimum wage: -.273 (1293)
P(Warming)/social justice: .271 (1261)
P(Warming)/minimum wage: .262 (1284)
Human biodiversity/taxes: -.251 (1270)
Abortion/feminism: .239 (1356)
Abortion/social justice: .220 (1292)
P(Warming)/immigration: .215 (1315)
Abortion/immigration: .211 (1353)
P(Warming)/abortion: .192 (1340)
Immigration/taxes: .186 (1322)
Basic income/taxes: .174 (1249)
Abortion/taxes: .170 (1328)
Abortion/minimum wage: .169 (1317)
P(warming)/human biodiversity: -.168 (1301)
Abortion/basic income: .168 (1314)
Immigration/Great Stagnation: -.163 (1281)
P(God)/feminism: -.159 (1294)
P(Supernatural)/feminism: -.158 (1292)
Human biodiversity/Great Stagnation: .152 (1287)
Social justice/Great Stagnation: -.135 (1242)
Number of languages/taxes: -.133 (1242)
P(God)/P(Warming): -.132 (1491)
P(Supernatural)/immigration: -.131 (1284)
P(Religion)/immigration: -.129 (1296)
P(God)/immigration: -.127 (1286)
P(Supernatural)/P(Warming): -.125 (1487)
P(Supernatural)/social justice: -.125 (1227)
P(God)/taxes: -.145
Minimum wage/Great Stagnation: -.124 (1269)
Immigration/minimum wage: .122 (1308)
Great Stagnation/taxes: -.121 (1270)
P(Religion)/P(Warming): -.113 (1505)
P(Supernatural)/taxes: -.113 (1265)
Feminism/Great Stagnation: -.112 (1295)
Number of children/abortion: -.112 (1386)
P(Religion)/basic income: -.108 (1296)
Number of current partners/feminism: .108 (1364)
Basic income/human biodiversity: -.106 (1301)
P(God)/Basic Income: -.105 (1255)
Number of current partners/basic income: .105 (1320)
Human biodiversity/number of languages: .103 (1253)
Number of children/basic income: -.099 (1322)
Number of children/P(Warming): -.091 (1535)
V. Hypothesis Testing
A. Do people in the effective altruism movement donate more money to charity? Do they donate a higher percent of their income to charity? Are they just generally more altruistic people?
1265 people told us how much they give to charity; of those, 450 gave nothing. On average, effective altruists (n = 412) donated $2503 to charity, and other people (n = 853) donated $523 - obviously a significant result. Effective altruists gave on average $800 to MIRI or CFAR, whereas others gave $53. Effective altruists gave on average $16 to other x-risk related charities; others gave only $2.
In order to calculate percent donated I divided charity donations by income in the 947 people helpful enough to give me both numbers. Of those 947, 602 donated nothing to charity, and so had a percent donated of 0. At the other extreme, three people donated 50% of their (substantial) incomes to charity, and 55 people donated at least 10%. I don't want to draw any conclusions about the community from this because the people who provided both their income numbers and their charity numbers are a highly self-selected sample.
303 effective altruists donated, on average, 3.5% of their income to charity, compared to 645 others who donated, on average, 1% of their income to charity. A small but significant (p < .001) victory for the effective altruism movement.
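The percent-donated comparison is straightforward: divide each respondent's donation by their income, then average within each group. A sketch, under the assumption that income and charity figures are paired per respondent:

```python
def mean_percent_donated(incomes, donations):
    """Average of per-respondent (donation / income), in percent.
    Respondents reporting zero or missing income are skipped."""
    pcts = [d / i * 100 for i, d in zip(incomes, donations) if i]
    return sum(pcts) / len(pcts)

# One person gives 1% of income, another gives 10%: group mean is 5.5%
print(mean_percent_donated([10_000, 50_000], [100, 5_000]))
```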
But are they more compassionate people in general? After throwing out the people who said they wanted to give blood but couldn't for one or another reason, I got 1255 survey respondents giving me an unambiguous answer (yes or no) about whether they'd ever given blood. I found that 51% of effective altruists had given blood compared to 47% of others - a difference which did not reach statistical significance.
Finally, at the end of the survey I had a question offering respondents a chance to cooperate (raising the value of a potential monetary prize to be given out by raffle to a random respondent) or defect (decreasing the value of the prize, but increasing their own chance of winning the raffle). 73% of effective altruists cooperated compared to 70% of others - an insignificant difference.
Conclusion: effective altruists give more money to charity, both absolutely and as a percent of income, but are no more likely (or perhaps only slightly more likely) to be compassionate in other ways.
B. Can we finally resolve this IQ controversy that comes up every year?
The story so far - our first survey in 2009 found an average IQ of 146. Everyone said this was stupid, no community could possibly have that high an average IQ, it was just people lying and/or reporting results from horrible Internet IQ tests.
Although IQ fell somewhat the next few years - to 140 in 2011 and 139 in 2012 - people continued to complain. So in 2012 we started asking for SAT and ACT scores, which are known to correlate well with IQ and are much harder to get wrong. These scores confirmed the 139 IQ result on the 2012 test. But people still objected that something must be up.
This year our IQ has fallen further to 138 (no Flynn Effect for us!) but for the first time we asked people to describe the IQ test they used to get the number. So I took a subset of the people with the most unimpeachable IQ tests - ones taken after the age of 15 (when IQ is more stable), and from a seemingly reputable source. I counted a source as reputable either if it name-dropped a specific scientifically validated IQ test (like WAIS or Raven's Progressive Matrices), if it was performed by a reputable institution (a school, a hospital, or a psychologist), or if it was a Mensa exam proctored by a Mensa official.
This subgroup of 101 people with very reputable IQ tests had an average IQ of 139 - exactly the same as the average among survey respondents as a whole.
I don't know for sure that Mensa is on the level, so I tried again deleting everyone who took a Mensa test - leaving just the people who could name-drop a well-known test or who knew it was administered by a psychologist in an official setting. This caused a precipitous drop all the way down to 138.
The IQ numbers have time and time again answered every challenge raised against them and should be presumed accurate.
C. Can we predict who does or doesn't cooperate on prisoner's dilemmas?
As mentioned above, I included a prisoner's dilemma type question in the survey, offering people the chance to make a little money by screwing all the other survey respondents over.
Tendency to cooperate on the prisoner's dilemma was most highly correlated with items in the general leftist political cluster identified by Ozy above. It was most notable for support for feminism, with which it had a correlation of .15, significant at the p < .01 level, and minimum wage, with which it had a correlation of .09, also significant at p < .01. It was also significantly correlated with belief that other people would cooperate on the same question.
I compared two possible explanations for this result. First, leftists are starry-eyed idealists who believe everyone can just get along - therefore, they expected other people to cooperate more, which made them want to cooperate more. Or, second, most Less Wrongers are white, male, and upper class, meaning that support for leftist values - which often favor nonwhites, women, and the lower class - is itself a symbol of self-sacrifice and altruism, which one would expect to correlate with a question testing self-sacrifice and altruism.
I tested the "starry-eyed idealist" hypothesis by checking whether leftists were more likely to believe other people would cooperate. They were not - the correlation was not significant at any level.
I tested the "self-sacrifice" hypothesis by testing whether the feminism correlation went away in women. For women, supporting feminism is presumably not a sign of willingness to self-sacrifice to help an out-group, so we would expect the correlation to disappear.
In the all-female sample, the correlation between feminism and PD cooperation shrank from .15 to a puny .04, whereas the correlation between minimum wage and PD cooperation stayed exactly the same at .09. This provides some small level of support for the hypothesis that the leftist correlation with PD cooperation represents a willingness to self-sacrifice among a population who are not themselves helped by leftist values.
(on the other hand, neither leftists nor cooperators were more likely to give money to charity, so if this is true it's a very selective form of self-sacrifice)
VI. Monetary Prize
1389 people answered the prize question at the bottom. 71.6% of these [n = 995] cooperated; 28.4% [n = 394] defected.
The prize goes to a person whose two word phrase begins with "eponymous". If this person posts below (or PMs or emails me) the second word in their phrase, I will give them $60 * 71.6%, or about $43. I can pay to a PayPal account, a charity of their choice that takes online donations, or a snail-mail address via check.
VII. Calibration Questions
The population of Europe, according to designated arbiter Wikipedia, is 739 million people.
People were really really bad at giving their answers in millions. I got numbers anywhere from 3 (really? three million people in Europe?) to 3 billion (3 million billion people = 3 quadrillion). I assume some people thought they were answering in billions, others in thousands, and other people thought they were giving a straight answer in number of individuals.
My original plan was to just adjust these to make them fit, but this quickly encountered some pitfalls. Suppose someone wrote 1 million (as one person did). Could I fairly guess they meant 100 million, even though there's really no way to guess that from the text itself? 1 billion? Maybe they just thought there were really one million people in Europe?
If I was too aggressive correcting these, everyone would get close to the right answer not because they were smart, but because I had corrected their answers. If I wasn't aggressive enough, I would end up with some guy who answered 3 quadrillion Europeans totally distorting the mean.
I ended up deleting 40 answers that implied there were fewer than ten million or more than eight billion Europeans, on the grounds that people probably weren't really that far off and it was probably some kind of data-entry error, and correcting everyone who entered a reasonable answer in individuals to an answer in millions, as the question asked.
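One plausible mechanical version of that cleanup rule (the actual corrections involved some judgment calls, so treat this as a sketch, not the exact procedure used):

```python
def normalize_europe_answer(x):
    """Interpret an answer to 'population of Europe in millions'.
    Answers that look like raw head-counts are scaled down; anything
    implying fewer than 10 million or more than 8 billion Europeans
    is discarded (returns None)."""
    if x >= 1_000_000:          # almost certainly entered in individuals
        x = x / 1_000_000
    if not 10 <= x <= 8000:     # plausible range, in millions of people
        return None
    return x

print(normalize_europe_answer(739_000_000))  # head-count answer → 739.0
print(normalize_europe_answer(3))            # implies 3M Europeans → None
```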
The remaining 1457 people who can either follow simple directions or at least fail to follow them in a predictable way estimated an average European population in millions of 601 + 35.6 (380, 500, 750).
Respondents were told to aim for within 10% of the real value, which means they wanted between 665 million and 812 million. 18.7% of people [n = 272] got within that window.
I divided people up into calibration brackets of [0,5], [6,15], [16, 25] and so on. The following are what percent of people in each bracket were right.
[0,5]: 7.7%
[6,15]: 12.4%
[16,25]: 15.1%
[26,35]: 18.4%
[36,45]: 20.6%
[46,55]: 15.4%
[56,65]: 16.5%
[66,75]: 21.2%
[76,85]: 36.4%
[86,95]: 48.6%
[96,100]: 100%
Among people who should know better (those who have read all or most of the Sequences and have > 500 karma, a group of 162 people):
[0,5]: 0
[6,15]: 17.4%
[16,25]: 25.6%
[26,35]: 16.7%
[36,45]: 26.7%
[46,55]: 25%
[56,65]: 0%
[66,75]: 8.3%
[76,85]: 40%
[86,95]: 66.6%
[96,100]: 66.6%
Clearly, the people who should know better don't.
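The bracketing used for both tables above can be sketched as follows (the function names are mine; the bracket boundaries are the ones stated in the text):

```python
from collections import Counter

def bracket(confidence):
    """Map a 0-100 confidence answer to the brackets used above:
    [0,5], [6,15], [16,25], ..., [86,95], [96,100]."""
    if confidence <= 5:
        return (0, 5)
    if confidence >= 96:
        return (96, 100)
    lo = 6 + 10 * ((confidence - 6) // 10)
    return (lo, lo + 9)

def hit_rates(answers):
    """answers: iterable of (confidence, was_correct) pairs.
    Returns percent correct within each bracket."""
    totals, hits = Counter(), Counter()
    for conf, correct in answers:
        b = bracket(conf)
        totals[b] += 1
        hits[b] += bool(correct)
    return {b: 100 * hits[b] / totals[b] for b in totals}
```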
This graph represents your performance relative to ideal performance. Dipping below the blue ideal line represents overconfidence; rising above it represents underconfidence. With few exceptions you were very overconfident. Note that there were so few "elite" LWers at certain levels that the graph becomes very noisy and probably isn't representing much; that huge drop at 60 represents like two or three people. The orange "typical LWer" line is much more robust.
There is one other question that gets at the same idea of overconfidence. 651 people were willing to give a valid 90% confidence interval on what percent of people would cooperate (this is my fault; I only added this question about halfway through the survey, once I realized it would be interesting to investigate). I deleted four for giving extremely high outliers like 9999%, which threw off the results, leaving 647 valid answers. The average confidence interval was [28.3, 72.0], which just BARELY contains the correct answer of 71.6%. Of the 647 of you, only 346 (53.5%) gave 90% confidence intervals that included the correct answer!
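The coverage check is simple to state in code: count what fraction of the stated intervals actually contain the true value (well-calibrated 90% intervals should cover it about 90% of the time, not 53.5%). A sketch:

```python
def coverage(intervals, truth=71.6):
    """Share of (low, high) confidence intervals containing the true value.
    Intervals outside 0-100 or with low > high are treated as invalid."""
    valid = [(lo, hi) for lo, hi in intervals if 0 <= lo <= hi <= 100]
    hits = sum(lo <= truth <= hi for lo, hi in valid)
    return hits / len(valid)
```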
Last year I complained about horrible performance on calibration questions, but we all decided it was probably just a fluke caused by a particularly weird question. This year's results suggest that was no fluke and that we haven't even learned to overcome the one bias that we can measure super-well and which is most easily trained away. Disappointment!
VIII. Public Data
There's still a lot more to be done with this survey. User:Unnamed has promised to analyze the "Extra Credit: CFAR Questions" section (not included in this post), but so far no one has looked at the "Extra Credit: Questions From Sarah" section, which I didn't really know what to do with. And of course this is the most complete survey yet for seeking classic findings like "People who disagree with me about politics are stupid and evil".
1480 people - over 90% of the total - kindly allowed me to make their survey data public. I have included all their information except the timestamp (which would make tracking pretty easy) including their secret passphrases (by far the most interesting part of this exercise was seeing what unusual two word phrases people could come up with on short notice).
Next survey, I'd be interested in seeing statistics involving:
Excellent write-up and I look forward to next year's.
I'd like:
Oh, we are really self-serving elitist overconfident pricks, aren't we?
Repeating complaints from last year:
The 2012 estimate from SATs was about 128, since the 1994 renorming destroyed the old relationship between the SAT and IQ. Our average SAT (on 1600) was again about 1470, which again maps to less than 130, but not by much. (And, again, self-reported average probably overestimates actual population average.)
I still think you're asking this question in a way that's particularly hard for people to get right. (The issue isn't the fact you ask about, but what sort of answers you look for.)
You've clearly got an error in your calibration chart; you can't have 2 out of 3 elite LWers be right in the [95,100] categor... (read more)
What if the people who have taken IQ tests are on average smarter than the people who haven't? My impression is that people mostly take IQ tests when they're somewhat extreme: either low and trying to qualify for assistive services or high and trying to get "gifted" treatment. If we figure lesswrong draws mostly from the high end, then we should expect the IQ among test-takers to be higher than what we would get if we tested random people who had not previously been tested.
The IQ Question read: "Please give the score you got on your most recent PROFESSIONAL, SCIENTIFIC IQ test - no Internet tests, please! All tests should have the standard average of 100 and stdev of 15."
Among the subset of people making their data public (n=1480), 32% (472) put an answer here. Those 472 reports average 138, in line with past numbers. But 32% is low enough that we're pretty vulnerable to selection bias.
(I've never taken an IQ test, and left this question blank.)
This sounds plausible, but from looking at the data, I don't think this is happening in our sample. In particular, if this were the case, then we would expect the SAT scores of those who did not submit IQ data to be different from those who did submit IQ data. I ran an Anderson–Darling test on each of the following pairs of distributions:
The p-values came out as 0.477 and 0.436 respectively, which means that the Anderson–Darling test was unable to disting... (read more)
The reported SAT numbers are very high, but the reported IQ scores are extremely high. The mean reported SAT score, if received on the modern 1600 test, corresponds to an IQ in the upper 120s, not the upper 130s. The mean reported SAT2400 score was 2207, which corresponds to 99th but not 99.5th percentile. 99th percentile is an IQ of 135, which suggests that the self-reports may not be that off compared to the SAT self-reports.
The second word in the winning secret phrase is pony (chosen because you can't spell the former without the latter); I'll accept the prize money via PayPal to main att zackmdavis daht net.
(As I recall, I chose to Defect after looking at the output of one call to Python's random.random() and seeing a high number, probably point-eight-something. But I shouldn't get credit for following my proposed procedure (which turned out to be wrong anyway) because I don't remember deciding beforehand that I was definitely using a "result > 0.8 means Defect" convention (when "result < 0.2 means Defect" is just as natural). I think I would have chosen Cooperate if the random number had come up less than 0.8, but I haven't actually observed the nearby possible world where it did, so it's at least possible that I was rationalizing.)
(Also, I'm sorry for being bad at reading; I don't actually think there are seven hundred trillion people in Europe.)
Hey, it's not too late: if you should have made such a commitment, then the mere fact that you didn't actually do so shouldn't stop you now. Go ahead, flip a coin; if it comes up heads, you pay me $200; if it comes up tails, I'll ask Yvain to give you the $42.96.
Precommitment is a solved problem which doesn't need a trusted website. For example, simplicio could've released a hash precommitment (made using a local hash utility like sha512sum) to Yvain after taking the survey and just now unveiled that input, if he was serious about the counterfactual. (He would also want to replace the 'flip a coin' with e.g. 'total number of survey participants was odd'.)
You can even still easily do a verifiable coin flip now. For example, you could pick a commonly observable future event like a property of a Bitcoin block 24 hours from now, or you could both post a hash precommitment of a random bit, then when both are posted, each releases the chosen bit, verifies the other's hash, and XOR the 2 bits to choose the winner.
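The commit-and-reveal coin flip described above can be sketched in a few lines of Python (a toy illustration, not a hardened protocol; the function names are mine):

```python
import hashlib
import secrets

def commit(bit: int) -> tuple[str, str]:
    """Return (commitment, opening). The random nonce hides the bit until reveal."""
    nonce = secrets.token_hex(16)
    opening = f"{bit}:{nonce}"
    commitment = hashlib.sha512(opening.encode()).hexdigest()
    return commitment, opening

def verify(commitment: str, opening: str) -> int:
    """Check the revealed opening matches the earlier commitment; return the bit."""
    assert hashlib.sha512(opening.encode()).hexdigest() == commitment
    return int(opening.split(":", 1)[0])

# Each party commits to a random bit, exchanges commitments, then reveals.
c1, o1 = commit(secrets.randbelow(2))
c2, o2 = commit(secrets.randbelow(2))
winner_bit = verify(c1, o1) ^ verify(c2, o2)  # fair as long as one party is honest
```

XOR-ing the two bits means neither party can bias the outcome without breaking the hash, since either honest random bit alone makes the result uniform.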
Yvain - Next year, please include a question asking if the person taking the survey uses PredictionBook. I'd be curious to see if these people are better calibrated.
Thanks for doing this!
Results from previous years: 2009 2011 2012
The standard way to fix this is to run them on half the data only and then test their predictive power on the other half. This eliminates almost all spurious correlations.
Does that actually work better than just setting a higher bar for significance? My gut says that data is data and chopping it up cleverly can't work magic.
Cross validation is actually hugely useful for predictive models. For a simple correlation like this, it's less of a big deal. But if you are fitting a local linearly weighted regression line for instance, chopping the data up is absolutely standard operating procedure.
That's roughly what Yvain did, by taking into consideration the number of correlations tested when setting the significance level.
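The split-half screening procedure described above can be sketched as follows (simulated pure-noise data, so confirmations on the held-out half should be rare; this is an illustration, not Yvain's actual analysis):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Split-half screening: find correlations on one half of the data,
# then confirm them on the held-out half. With pure noise, candidates
# found at p < 0.05 should mostly fail to replicate.
n, n_vars = 1000, 20
data = rng.normal(size=(n, n_vars))
y = rng.normal(size=n)
half = n // 2

# Screen on the first half at p < 0.05 ...
candidates = [j for j in range(n_vars)
              if stats.pearsonr(data[:half, j], y[:half])[1] < 0.05]
# ... then confirm only those that also hold on the second half.
confirmed = [j for j in candidates
             if stats.pearsonr(data[half:, j], y[half:])[1] < 0.05]
```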
Hypothesis: the predictions on the population of Europe are bimodal, split between people thinking of geographical Europe (739M) vs people thinking of the EU (508M). I'm going to go check the data and report back.
Here is a kernel density estimate of the "true" distribution, with bootstrapped pointwise 95% confidence bands from 999 resamples:
It looks plausibly bimodal, though one might want to construct a suitable hypothesis test for unimodality versus multimodality. Unfortunately, as you noted, we cannot distinguish between the hypothesis that the bimodality is due to rounding (at 500 M) versus the hypothesis that the bimodality is due to ambiguity between Europe and the EU. This holds even if a hypothesis test rejects a unimodal model, but if anyone is still interested in testing for unimodality, I suggest considering Efron and Tibshirani's approach using the bootstrap.
Edit: Updated the plot. I switched from adaptive bandwidth to fixed bandwidth (because it seems to achieve higher efficiency), so parts of what I wrote below are no longer relevant—I've put these parts in square brackets.
Plot notes: [The adaptive bandwidth was achieved with Mathematica's built-in "Adaptive" option for SmoothKernelDistribution, which is horribly documented; I think it uses the same algorithm as 'akj' in R's quantreg package.] A Gaussian kernel was used with the bandwidth set according to ... (read more)
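For anyone wanting to reproduce this kind of plot, here is a minimal sketch of a fixed-bandwidth KDE with bootstrapped pointwise bands (simulated bimodal guesses stand in for the real survey answers, and 199 resamples instead of 999 to keep it quick):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Hypothetical "population of Europe" guesses in millions, with modes
# near the EU (~508M) and geographical Europe (~739M) figures.
guesses = np.concatenate([rng.normal(500, 40, 120), rng.normal(740, 50, 80)])
grid = np.linspace(300, 900, 200)

density = gaussian_kde(guesses)(grid)

# Pointwise 95% band from bootstrap resamples of the data.
boot = np.array([
    gaussian_kde(rng.choice(guesses, guesses.size, replace=True))(grid)
    for _ in range(199)
])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
```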
You might as well ask, "Who is the president of America?" and then follow up with, "Ha ha got you! America is a continent, you meant USA."
Hah, my score almost doubled from last year.
I'm not culture.
In some social circles I might behave in one way, in others another way. In different situations I act differently depending on how strongly I want to communicate a demand.
Not sure how much sense it makes to take the arithmetic mean of probabilities when the odds vary over many orders of magnitude. If the average is, say, 30%, then it hardly matters whether someone answers 1% or .000001%. Also, it hardly matters whether someone answers 99% or 99.99999%.
I guess the natural way to deal with this would be to average (i.e., take the arithmetic mean of) the order of magnitude of the odds (i.e., log[p/(1-p)], where p is someone's answer). Using this method, it would make a difference whether someone is "pretty certain" or "extremely certain" that a certain statement is true or false.
Does anyone know what the standard way for dealing with this issue is?
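The log-odds averaging proposed above looks like this in code (`logit`/`inv_logit` are my names for the transform and its inverse):

```python
import math

def logit(p: float) -> float:
    """Map a probability to log-odds."""
    return math.log(p / (1.0 - p))

def inv_logit(x: float) -> float:
    """Map log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def mean_log_odds(probs: list[float]) -> float:
    """Average answers in log-odds space, then map back to a probability."""
    return inv_logit(sum(map(logit, probs)) / len(probs))

answers = [0.01, 0.000001, 0.5]
arithmetic = sum(answers) / len(answers)  # ~0.17; barely notices 1% vs 0.0001%
log_odds = mean_log_odds(answers)         # much smaller; extreme answers count
```

Unlike the arithmetic mean, this aggregate moves noticeably when someone replaces 1% with 0.0001%.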
Thanks for taking the time to conduct and then analyze this survey!
What surprised me:
What disappointed me:
And a comment at the end:
Given that LW explicitly tries to e... (read more)
To me it has always sounded right. I'm MENSA-level (at least according to the test the local MENSA association gave me) and LessWrong is the first forum I ever encountered where I've considered myself below-average -- where I've found not just one or two but several people who can think faster and deeper than me.
With only 500 people responding to the IQ question, it is entirely possible that this is simply a selection effect. I.e. only people with high IQ test themselves or report their score while lower IQ people keep quiet.
There's nothing necessarily wrong with this. You are assuming that feminism is purely a matter of personal preference, incorrectly I feel. If you reduce feminism to simply asking "should women have the right to vote" then you should in fact find a correlation between that and "is there such a thing as global warming", because the correct answer in each case is yes.
Not saying I am necessarily in favour of modern day feminism, but it does bother me that people simply assume that social issues are independent of fact. This sounds like "everyone is entitled to their opinion" nonsense to me.
What I find more s... (read more)
I've heard GMOs described as the left equivalent for global warming-- maybe there should be a question about GMOs on next survey.
Not necessarily a joke.
Helpful for letting us know there are bad people out there that will seek to sabotage the value of a survey even without any concrete benefit to themselves other than the LOLZ of the matter. But I think we are already aware of the existence of bad people.
As for your "I suspect that I am not alone", I ADBOC (agree denotationally but object connotationally). Villains exist, but I suspect villains are rarer than they believe themselves to be, since in order to excuse their actions they need imagine the whole world populated with villains (while denying that it's an act of villainy they describe).
Well, I'm also a European (with a Master's Degree in Computer Science) who didn't give my number in millions, and I could have my MENSA-acceptance letter scanned and posted if anyone disbelieves me on my provided IQ.
So bollocks on that. You are implying that people like me are liars just because we are careless readers or careless typists. Lying is a whole different thing than mere carelessness.
There could be some measurement bias here. I was on the fence about whether I should identify myself as an effective altruist, but I had just been reminded of the fact that I hadn't donated any money to charity in the last year, and decided that I probably shouldn't be identifying as an effective altruist myself despite having philosophical agreements with the movement.
This is blasphemy against Saint Boole.
There's something strange about the analysis posted.
How is it that 100% of the general population with high (>96%) confidence got the correct answer, but only 66% of a subset of that population? Looking at the provided data, it looks like 3 out of 4 people (none with high Karma scores) who gave the highest confidence were right.
(Predictably, the remaining person with high confidence answered 500 million, which is almost the exact population of the European Union (or, in the popular parlance "Europe"). I almost made the same mistake, before realizing that a) "Europe" might be intended to include Russia, or part of Russia, plus other non-EU states and b) I don't know the population of those countries, and can't cover both bases. So in response, I kept the number and decreased my confidence value. Regrettably, 500 million can signify both tremendous confidence and very little confidence, which makes it hard to do an analysis of this effect.)
Well played :)
Nice work Yvain and Ozy, and well done to Zack for winning the MONETARY REWARD.
I continue to be bad at estimating but well calibrated.
(Also, I'm sure that this doesn't harm the data to any significant degree but I appear to appear twice in the data, both rows 548 and 552 in the xls file, with row 548 being more complete.)
No, I don't. To explain why, let me point out that your list of four questions neatly divides into two halves.
Your first two questions are empirically testable questions about what reality is. As such they are answerable by the usual science-y means and a rational person will have to accept the answers.
Your last two questions are value-based questions about what should be. They are not answerable by science and the answers are culturally determined. It is perfectly possible to be very rational and at the same time believe that, say, homosexuality is a great evil.
Rationality does not determine values.
While I think you're making a good point, and LW should definitely listen to it, this:
Is phrased a bit strongly, and
Is a word I almost never see outside of a mindkilled context, though at least it's in a sentence, here. (People who use "Disgusting" as the entirety of a sentence are basically wearing a giant "I AM MINDKILLED" flag as a coat, in my experience.)
I think that people should feel free to mention their emotions, but they should also express them in a manner that recognizes said emotions are two-place words. X is horrified/disgusted by Y.
Something may be 'disgusting' you, and that's a useful datapoint, but to say that something is 'disgusting' as if it's an inherent characteristic of the thing pretty much puts a stopper to the conversation. What could the response be, other than "No, it's not"?
How would you feel about someone who said things like "Homosexuality is disgusting." as opposed to someone saying something like "Homosexuality icks me out."? I think you would probably see the latter sentence as less of a conversation-killer than the former.
Politics as in "politics is the mind-killer" doesn't mean "involvement with the polis"; it means "entanglement with factional identity". We routinely touch on the former; insofar as "raising the sanity waterline" can be taken as a goal, for example, it's inextricably political in that sense. But most of the stuff we've historically talked about here isn't strongly factionalized in the mainstream.
If you're posting on something that is and you stop to consider its reception, of course, you're engaging in politics in both senses. But that's the exception here, not the rule.
I expected that the second word in my passphrase would stay secret no matter what and the first word would only be revealed if I won the game.
Well, thank goodness I didn't pick anything too embarrassing.
Some thoughts on the correlations:
At first I saw that IQ seems to correlate with fewer children (a not uncommon observation):
But then I noticed that number of children obviously correlates with age, and age with IQ (somewhat):
So it may be that older people just have lower IQ (Flynn effect).
Something to think about:
This can be read as: smarter people stay on LW for a shorter time. It seems to imply that over time LW will degrade in smarts. But it could also just mean that smarter people turn over faster (thus also entering faster).
On the other hand most human endeavors tend toward the mean over time.
Older people (like me ahem) either take longer to notice LW or the community is spreading from younger to older people slowly.
This made me laugh:
Guess who does the voting :-)
I don't know if this is the LW hug or something but I'm having trouble downloading the xls. Also, will update with what the crap my passphrase actually means, because it's in Lojban and mildly entertaining IIRC.
EDIT: Felt like looking at some other entertaining passphrases. Included with comment.
sruta'ulor maftitnab {mine! scarf-fox magic-cakes!(probably that kind)}
Afgani-san Azerbai-chan {there... are no words}
DEFECTORS RULE
do mlatu {a fellow lojbanist!}
lalxu daplu {and another?}
telephone fonxa {and another! please get in contact with me. please.}
xagfu'a ... (read more)
Valid point. Thanks for the clarification.
Though in my experience, even women seem to think that the part that comes after is in fact more laborious than the carrying part - and that part can be shared equally between genders. Of course, it usually/traditionally isn't, so I guess that's a point towards male bias too.
Lowest-hanging? I consider having children to be quite a huge investment of my personal resources. How is that a low-hanging fruit?
I should have looked at the data set. The answer is that zero people reported having 5 partners.
The links to the public data given at the end appear to be broken. They give internal links to Less Wrong instead of redirecting to Slate Star Codex. These links should work:
sav xls csv
Even if people don't have fully integrated beliefs in destructive policies, their beliefs can be integrated enough to lead to destructive behavior.
The Muslims who throw acid in their daughters' faces may not have an absolute preference for disfigured daughters, but they may prefer disfigured daughters over being attacked by their neighbors for permitting their daughters more freedom than is locally acceptable-- or prefer to not be attacked by the imagined opinions (of other Muslims and/or of Allah) which they're carrying in their minds.
Also, even though it may not be a terminal value, I'd say there are plenty of people who take pleasure in hurting people, and more who take pleasure in seeing other people hurt.
While I understand the sentiment here (and I know a number of women who share it), I'm not sure this is correct. I was under the impression that eugenic impulses and pro-natalism were close to evenly split among the genders, and if there was an imbalance, it was that women were more likely to be interested in having babies and in having good babies. It may be eas... (read more)
It looks like lots of people put themselves as atheist, but still answered the religion question as Unitarian Universalist, in spite of the fact that the question said to answer your religion only if you are a theist.
I was looking forward to data on how many LW people are UU, but I have no way of predicting how many people followed the rules as written for the question, and how many people followed the rules as (I think they were) intended.
We should make sure to word that question differently next year, so that people who identify as atheist and religious know to answer the question.
N.B.: Average IQ drops to 135 when only considering tests administered at an adult age -- those "IQ 172 at age 7" entries shouldn't be taken as authoritative for adult IQ.
Could someone who voted for unfriendly AI explain how nanotech or biotech isn't much more of a risk than unfriendly AI (I'll assume MIRI's definition here)?
I ask this question because it seems to me that even given a technological singularity there should be enough time for "unfriendly humans" to use precursors to fully fledged artificial general intelligence (e.g. advanced tool AI) in order to solve nanotechnology or advanced biotech. Technologies which themselves will enable unfriendly huma... (read more)
Accidental grey goo is unlikely to get out of the lab. If I design a nanite to self-replicate and spread through a living brain to report useful data to me, and I have an integer overflow bug in the "stop reproducing" code so that it never stops, I will probably kill the patient but that's it. Because the nanites are probably using glucose+O2 as their energy source. I never bothered to design them for anything else. Similarly if I sent solar-powered nanites to clean up Chernobyl I probably never gave them copper-refining capability -- plenty of copper wiring to eat there -- but if I botch the growth code they'll still stop when there's no more pre-refined copper to eat. Designing truly dangerous grey goo is hard and would have to be a deliberate effort.
As for stopping grey goo, why not? There'll be something that destroys it. Extreme heat, maybe. And however fast it spreads, radio goes faster. So someone about to get eaten radios a far-off military base saying "help! grey goo!" and the bomber planes full of incendiaries come forth to meet it.
Contrast uFAI, which has thought of this before it surfaces, and has already radioed forged orders to take all the bomber planes apart for maintenance or something.
It was interesting to see how very average I am (as a member of Less Wrong). My feelings of being an outsider (here at least) have diminished.
I've also resolved to do two things this year, thanks in part to this survey: 1) sign the hell up for cryonics already and 2) take a professional IQ test.
For cryonics, the number of yeses compared to the number who want to or are still considering is a bit of a wake-up call for me.
So, I was going through the xls, and saw the "passphrase" column. "Wait, what? Won't the winner's passphrase be in here?"
Not sure if this is typos or hitting the wrong entry field, but two talented individuals managed to get 1750 and 2190 out of 1600 on the SAT.
I was curious about the breakdown of romance (whether or not you met your partner through LW) and sexuality. For "men" and "women," I just used sex- any blanks or others are excluded. Numbers are Yes/No/I didn't meet them through community but they're part... (read more)
I think it's more likely he was misusing the word “literally”/wearing belief as attire (in technical terms, bullshitting) than he actually really believed that. After all I guess he could tell boys and girl apart without looking between their legs, couldn't he?
I assume you're getting downvoted because liking Pinterest is not one of the most salient things about women in general, nor about the class of women we'd like frequenting this site. If part of the reason talented women don't end up here is that women are stereotyped as vapid, then appealing to a site at the low end of the intellectual spectrum as your prototype for 'place we can find women' only exacerbates the problem.
I've just noticed there was no Myers-Briggs question this year. Why?
I gave blood before I was an EA but stopped because I didn't think it was effective. Does being veg*n correlate with calling oneself an EA? That seems like a more effective intervention.
Some unique passphrases that weren't so unique (I removed the duplicates from people who took the survey twice). You won't want to reuse your passphrase for next year's survey!
But you can always find harm if you allow for feelings of disgust, or take into account competition in sexual markets (i.e. if having sex with X is a substitute for having sex with Y then Y might be harmed if someone is allowed to have sex with X.)
Perhaps, but there is always the odd statistical outlier. I go to church every week, and I visit this forum, for example.
The standard reply to this is that many people hurt themselves by their choices, and that justifies intervention. (Even if we hastily add an "else" after "anyone," note that hurting yourself hurts anyone who cares about you, and thus the set of acts which harm no one is potentially empty.)
Don't feed the troll. :D
Were there enough CFAR workshoppers to check CFAR attendance against calibration?
There are (very probably around) 1.7x10^11 galaxies in the observable universe. So I don't understand how P(Aliens in Milky Way) can be so close to P(Aliens in observable universe). If P(Aliens in an average galaxy) = 0.0000000001, P(Aliens in observable universe) should be around 1-(1-0.0000000001)^(1.7x10^11)=0.9999999586. I know there are other factors that influence these numbers, but still, even if there's a only a very s... (read more)
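The arithmetic in this comment is easy to check (assuming, as the comment does, that galaxies host aliens independently with the same probability):

```python
N_GALAXIES = 1.7e11  # rough count of galaxies in the observable universe

def p_anywhere(p_per_galaxy: float) -> float:
    """P(at least one alien civilization in the observable universe),
    under the independence assumption."""
    return 1.0 - (1.0 - p_per_galaxy) ** N_GALAXIES

# Even a tiny per-galaxy probability nearly saturates:
print(p_anywhere(1e-10))  # ~0.9999999586, matching the figure in the comment
```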
Things that stuck out to me:
HPMOR: - Yes, all of it: 912, 55.7% REFERRAL TYPE: Referred by HPMOR: 400, 24.4%
EY's Harry Potter fanfic is more popular around here than I'd thought.
PHYSICAL INTERACTION WITH LW COMMUNITY: Yes, all the time: 94, 5.7% Yes, sometimes: 179, 10.9%
CFAR WORKSHOP ATTENDANCE: Yes, a full workshop: 105, 6.4% A class but not a full-day workshop: 40, 2.4%
LESS WRONG USE: Poster (Discussion, not Main): 221, 12.9% Poster (Main): 103, 6.3%
~6% at the maximum "buy-in" levels on these 3 items. My guess is they are all made up of a si... (read more)
Gray goo designs don't need to be built up with minuscule steps, each of which makes evolutionary sense, like the evolved biosphere was. This might open up designs that are feasible to invent, very difficult to evolve naturally, and sufficiently different from anything in the natural biosphere to do serious damage even without a billion years of evolutionary optimization.
For the second year in a row Pandemic is the leading cat risk. If you include natural and designed it has twice the support of the next highest cat risk.
The correlations with number of partners seem like they confound two very different questions: "in a relationship or not?" and "poly or not, and if so how poly?". This makes correlations with things like IQ and age less interesting. It seems like it would be more informative to look at the variables "n >= 1" and "value of n, conditional on n >= 1".
(Too lazy to redo those analyses myself right now, and probably ever. Sorry. If someone else does I'll be interested in the results, though.)
Really? Given his history I think the answer is pretty clear that he's not the kind of person who's out to argue that legalizing pedophilia is a clear-cut issue.
He also said something about wanting to avoid the kind of controversy that causes downvoting.
It's wrong on a biological level. From my physiology lecture: women blink twice as much as men. They have less water in their bodies.
So you are claiming either: "Childre... (read more)
Technically you are correct, so you can read my above argument as figuratively "accurate to one decimal place". The important thing is that there's nothing mysterious going on here in a linguistic or metaethical sense.
Let's introduce Charlie.
"I think women should be barefoot and pregnant" is a factual statement about Charlie's preferences which are themselves reducible to the physical locations of atoms in the universe (specifically Charlie's brain).
Interesting. That's a rather basic and low-level disagreement.
So, let's take a look at Alice and Bob. Alice says "I like the color green! We should paint all the buildings in town green!". Bob says "I like the color blue! We should paint all the buildings in town blue!". Are these statements meaningless? Or are they reducible to factual matters?
By the way, your position was quite popular historically. The Roman Catholic church was (and still is) a big proponent.
Yes, but I wouldn't expect that sentiment to really be all that gender-biased, though.
Not everybody see their lives as a big genetic experiment where their goal is to out-breed the opponents.
^ See this? This is one of the reasons this forum is 90% male.
In fact, most people don't - judging by those numbers.
This isn't about out-breeding opponents. This is about the consequences of dysgenic selection against intelligence.
As Yvain pointed out in his post on a similar topic, far more women than men go to church across all denominations, including ones that don't even let women in leadership positions. I recommend you update your model about what kinds of things drive off women.
The issue is that the impact of actions on the future is progressively harder to predict over longer timespans, and ignorance of even the sign of the true utility difference due to an action makes the expected utility differences small. Thus unusual concerns with the grand future leave people free to pick whatever actions make them feel good about themselves, with no real direction towards any future good; such actions are then easily rationalized.
I find it odd that 66.2% of LWers are "liberal" or "socialist" but only 13.8% of LWers consider themselves affiliated with the Democrat party. Can anybody explain this?
First reason: by European standards, I imagine the Democrat party is still quite conservative. Median voter theorem and all that. Second reason: "affiliated" probably implies more endorsement than "it's not quite as bad as the other party". It could also be both of these together.
What's the problem with someone external writing an article about how LW is a group who think they are high IQ?
"I'm part of a community, you live in a bubble, he's out of touch."
I would like to see how percent of positive karma, rather than total karma, correlates with the other survey responses. I find the former a more informative measure than the latter.
Noting that this was suggested to me by the algorithm, and presumably shouldn't be eligible for that.
Gamete donation is lower-hanging fruit.
I think you misunderstood me. Of course I don't mean that the terms "facts" and "values" represent the same thing. Saying that a preference itself is wrong is nonsense in the same way that claiming that a piece of cheese is wrong is nonsensical. It's a category error. When I say I reject a strict fact-value dichotomy I mean that I reject the notion that statements regarding values should somehow be treated differently from statements regarding facts, in the same way that I reject the notion of faith inhabiting a separate magisterium from... (read more)
I partly agree, but a tradition that developed under certain conditions isn't necessarily optimal under different conditions (e.g. much better technology and medicine, less need for manual labour, fewer stupid people (at least for now), etc.).
Otherwise, we'd be even better off just executing our evolved adaptations, which had even more time to develop.
With modern medicine not in any meaningful sense.
What definition of "considered a person" are you using that makes the above even a remotely valid deduction?
If you have problems with doing things as a means to an end, might I recommend a forum where consequentialism isn't the default moral theory.
The question I was trying to answer wasn't whether they were right, it was whether a rational actor could hold those opinions. That has a lot less to do with factual accuracy and a lot more to do with internal consistency.
As to the correctness of normative claims -- well, that's a fairly subtle question. Deontological claims are often entangled with factual ones (e.g. the existence-of-God thing), so that's at least one point of grounding, but even from a consequential perspective you need an optimization objective. Rational actors may disagree on exactly what that objective is, and reasonable-sounding objectives often lead to seriously counterintuitive prescriptions in some cases.
In all of these cases, the people breaking with the conclusion you presumably believe to be obvious often do so because they believe the existing research to be hopelessly corrupt. This is of course a rather extraordinary statement, and I'm pretty sure they'd be wrong about it (that is, as sure as I can be with a casual knowledge of each field and a decent grasp of statistics), but bad science isn't exactly unheard of. Given the right set of priors, I can see a rational person holding each of these opinions at least for a time.
In the latter two, they might additionally have different standards for "should" than you're used to.
Yes, but the closer you get to lightspeed the bigger problem you have with any collision with any small particle.
Did you select cooperate or defect on the prisoner dilemma question?
Really? Von Neumann machines (the universal assembler self-replicating variety, not the computer architecture) versus regular ol' mitosis, and you think mitosis would win out?
I've only ever heard "building self-replicating machinery on a nano-scale is really hard" as the main argument against the immediacy of that particular x-risk, never "even if there were self-replicators on a nano-scale, they would have a hard time out-competing the existing biosphere". Can you elaborate?
What's the best way to import the data into R without having to run as.numeric(as.character(...)) on all the numeric variables, like the probabilities?
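For comparison, here is the analogous cleanup in Python/pandas rather than R — a sketch with made-up column names and made-up cell values, since I don't have the raw file in front of me. `pd.to_numeric` with `errors="coerce"` converts a string column to floats in one pass, turning blanks and junk entries into NaN:

```python
import io
import pandas as pd

# Hypothetical excerpt: probability columns arrive as strings because of blank
# and non-numeric entries -- the same problem that forces
# as.numeric(as.character(...)) in R.
csv = io.StringIO("PGod,PSimulation\n5,60\n,\n10%,not sure\n")
df = pd.read_csv(csv)

# Coerce every probability column in one pass; unparseable entries become NaN.
for col in ["PGod", "PSimulation"]:
    df[col] = pd.to_numeric(df[col], errors="coerce")

print(df)
```

In R itself, the round trip is usually forced by text cells turning the column into a factor, so reading with `stringsAsFactors=FALSE` and sensible `na.strings` should avoid the problem at import time.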
It seems like the effect of effective altruism on charity donations is relatively independent of income.
If I fit a straight linear model that predicts charity donation from effective altruism, the effect is $1851 ± $416. If I add income to the model, the effect shrinks to $1751 ± $392.
Furthermore, being an effective altruist doesn't have a significant effect on income (I tried a few different ways to control for it).
Results are on Google Docs.
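The shrink-when-controlling pattern in those numbers can be illustrated with a purely synthetic numpy sketch. The data below is invented, not the survey's; the effect sizes are chosen only to mimic the qualitative behaviour (a mild EA-income link plus an income effect on donations makes the EA coefficient drop once income enters the model):

```python
import numpy as np

# Synthetic data only: invented effect sizes, not the survey's.
rng = np.random.default_rng(0)
n = 500
ea = rng.integers(0, 2, n).astype(float)                  # 1 = effective altruist
income = 40_000 + 10_000 * ea + rng.normal(0, 15_000, n)  # mild EA-income link
charity = 500 + 1_500 * ea + 0.01 * income + rng.normal(0, 800, n)

def ols_coefs(y, *xs):
    """OLS coefficients via least squares; intercept first."""
    X = np.column_stack((np.ones(len(y)),) + xs)
    return np.linalg.lstsq(X, y, rcond=None)[0]

ea_only = ols_coefs(charity, ea)[1]
ea_with_income = ols_coefs(charity, ea, income)[1]
print(f"EA effect alone: {ea_only:.0f}, controlling for income: {ea_with_income:.0f}")
```

Since income here is partly downstream of EA status, the income-free model attributes some of income's effect to EA, which is exactly the small shrinkage the parent reports.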
I'm extremely surprised and confused. Is there an explanation for how these probabilities are so high?
Well, we apparently have 3.9% of "committed theists", 3.2% of "lukewarm theists", and 2.2% of "deists, pantheists, etc.". If these groups put Pr(God) at 90%, 60%, and 40% respectively (these numbers are derived from a sophisticated scientific process of rectal extraction), then they contribute 6.3% of the overall Pr(God), requiring an average Pr(God) of about 3.1% from the rest of the LW population. If enough respondents defined "God" broadly enough, that doesn't seem altogether crazy.
If those groups put Pr(religion) at 90%, 30%, and 10%, then they contribute about 4.7% to the overall Pr(religion), suggesting ~1% for the rest of the population. Again, that doesn't seem crazy.
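The back-of-the-envelope figures can be checked mechanically. Note that the 9.1% overall mean Pr(God) used below is my assumption about the survey mean (chosen so the pieces are consistent with the 3.1% remainder), not a quoted figure:

```python
# Checking the back-of-the-envelope weighted averages in the comment above.
fractions = [0.039, 0.032, 0.022]   # committed theists, lukewarm theists, deists etc.
pr_god    = [90, 60, 40]            # guessed Pr(God) per group, in %
pr_relig  = [90, 30, 10]            # guessed Pr(religion) per group, in %

god_contrib = sum(f * p for f, p in zip(fractions, pr_god))      # contribution to mean
relig_contrib = sum(f * p for f, p in zip(fractions, pr_relig))

# Assuming an overall mean Pr(God) of about 9.1% (my assumption, not a quoted
# survey figure), the remaining ~90.7% of respondents must average:
rest_avg = (9.1 - god_contrib) / (1 - sum(fractions))
print(round(god_contrib, 1), round(relig_contrib, 1), round(rest_avg, 1))
```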
So the real question is more or less equivalent to: How come there are so many committed theists on LW? Which we can frame two ways: (1) How come LW isn't more effective in helping people recognize that their religion is wrong? or (2) How come LW isn't more effective in driving religious people away? To which I would say (1) recognizing that your religion is wrong is really hard and (2) I hope LW is very ineffective in driving religious people away.
(For those who expect meta-level opinions on these topics to be perturbed by object-level opinions and wish to discount or adjust: I am an atheist; I don't remember what probabilities I gave but they would be smaller than any I have mentioned above.)
I wonder if I can claim credit for either of the Freethought Blogs referrals.
(I'm an ex-FTBer. I think Zinnia Jones is the only other current or former FTBer involved in LessWrong.)
Some things that took me by surprise:
People here are more favorable toward abortion than feminism. I always thought of the former as secondary to the latter, though I suppose the "favorable" phrasing makes the survey sensitive to opinions of the term itself.
Mean SAT (out of 1600) is 1474? Really, people? 1410 is the 96th percentile, and here it's merely the bottom of the fourth quartile. I guess the only people who remembered their scores were those who were proud of them. (And I know this runs right along with the IQ discussion.)
It looks like you created the 2014 survey before I got around to posting my comment for this one. Oh well. Hopefully you will still find my comment useful. :)
Some answer choices from the survey weren't included in the results, without any explanation as to why. Does that mean no one selected them? If so, I suggest editing the post to make that clear.
I noticed that 13.6% of respondents chose not to answer the "vegetarian" question. I think it would have helped if you provided additional choices for "vegan" and "pescatarian".
In the general sense that all claims must abide by the usual requirements of validity and soundness of logic, sure.
In fact, you might say that mathematics is really just a very pure form of logic, while science deals with more murky, more complicated matters. But the essential principle is the same: You better make sure that the output follows logically from the input, or else you're not doing it right.
But the original context was "we should". Sophronius obviously intended the sentence to refer to everyone. I don't see anything relative about his use of words.
But he says "We should" not "I want" because there is the implication that I should also paint the buildings blue. But if the only reason I should do so is because he wants me to, it raises the question of why I should do what he wants. And if he answers "You should do what I want because it's what I want", it's a tautology.
Depends on where you are.
You mention a "very confused secular humanist." What other answers did that person provide that mark him/her/zer as confused?
There's some subtlety here. I believe that ethical propositions are ultimately reducible to physical facts (involving idealized preference satisfaction, although I don't think it'd be productive to dive into the metaethical rabbit hole here), a...
Well, given that Charlie indeed genuinely holds that preference, then no, he is not wrong to hold that preference. I don't even know what it would mean for a preference to be wrong. Rather, his preferences might conflict with the preferences of others, who might object to this state of reality by calling it "wrong", which seems like the mind-projection fallacy to me. There is nothing mysterious about this.
Similarly, the person in the original example of mine is not wrong to think men kissing each other is icky, but he IS wrong to conclude that there ...
Including as basso singers? ;-)
(As you worded your sentence, I would agree with it, but I would also add "But employers should be allowed to not hire them.")
Could you explain how a dysgenic society could result in 90% of the human population dying by 2100? To me that seems wildly overblown.
How do you support that assumption?
Again, assuming that whatever makes activity A beneficial to some people and not other people isn...
If one is known for using drugs, then every unusual claim he makes is dismissed as a literal pipe dream. It is a huge blow to authority.
I don't understand how P(Simulation) can be so much higher than P(God) and P(Supernatural). Seems to me that "the stuff going on outside the simulation" would have to be supernatural by definition. The beings that created the simulation would be supernatural intelligent entities who created the universe, aka gods. How do people justify giving lower probabilities for supernatural than for simulation?
Please explain, then, without using the word 'sky', what exactly you mean by "turning the sky green".
I had parsed that as "ensuring that a person, looking upwards during the daytime and not seeing an intervening obstacle (such as a ceiling, an aeroplane, or a cloud) would honestly identify the colour that he sees as 'green'." It is now evident that this is not what you had meant by the phrase.
The "did not answer" option seems to be distorting the perception of the results. Perhaps presenting the data with those percentages removed would make it more straightforward to visualise.
Percentages that include the non-respondents are misleading; at first glance you could be mistaken for thinking there is a significant population of non-English speakers, as less than 70% of people who ...
I would be interested to see Eliezer's responses.
Not quite. The averages might roughly work, but the correlations appear off. For instance this:
Is about half of what you'd expect.
Why not? If we're such smarty-pants, maybe we should learn how to shut up and multiply. There are lots of people. Let's go with the 146 value. Roughly 1 in 1000 people have an IQ >= 146. That high-IQ people congregate at a rationality site shouldn't shock anyone. The site is easily accessible to all of the Anglospher...
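The 1-in-1000 figure is easy to verify; a minimal sketch, assuming the usual mean-100, SD-15 IQ scale:

```python
from math import erfc, sqrt

# Upper tail of a normal distribution: P(X >= x) = erfc(z / sqrt(2)) / 2,
# where z is the standard score (x - mean) / sd.
def normal_tail(x, mean=100.0, sd=15.0):
    z = (x - mean) / sd
    return erfc(z / sqrt(2)) / 2

tail = normal_tail(146)   # fraction of the population with IQ >= 146
print(f"about 1 in {1 / tail:.0f}")
```

IQ 146 is z ≈ 3.07, so the tail probability comes out close to one in a thousand, as claimed.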
"Finally, at the end of the survey I had a question offering respondents a chance to cooperate (raising the value of a potential monetary prize to be given out by raffle to a random respondent) or defect (decreasing the value of the prize, but increasing their own chance of winning the raffle). 73% of effective altruists cooperated compared to 70% of others - an insignificant difference."
Assuming an EA thinks they will use the money better than the typical other winner, the most altruistic thing to do could be to increase their chances of winning, even at the cost of a lower prize. Or maybe they like the person putting up the prize, in which case they would prefer it to be smaller.
Well then, I disagree. Since I just did a whole circle of the mulberry bush with Sophronius I'm not inclined to do another round. Instead I'll just state my position.
I think that statements which do not describe reality but instead speak of preferences, values, and "should"s are NOT "factually either true or false". They cannot be unconditionally true or false at all. Instead, they can be true or false conditional on the specified value system and if you specify a different value system, the true/false value may change. To rephrase it i...
Yes they are, but the same sentence can state different logical propositions depending on where, when and by whom it is uttered.
Well, if you should drink more because you're dehydrated, then you're right to say that not everyone is bound by that, but people in similar circumstances are (i.e. dehydrated, with no other reason not to drink). Or are you saying that there are ultimately personal shoulds?
I think we're pretty close to someone declaring that egoism isn't a valid moral position, again.
Revealed preferences of women buying shoes and contraception?
Then again, medicine doesn't disproportionately drive off women either, and I'm not under the impression that doctors are less likely to be atheistic/rationalistic/high-Openness/etc. than the general population (indeed, they include 1.9% of LW survey respondents, which is about one order of magnitude higher than my out-of-my-ass^WFermi estimate for the general population).
That depends on what you mean by "divided equally". I think it should be divided based on comparative advantage.
Ok right. I agree.
Suppose A is beneficial to 80% of males and 40% of females, and detrimental to 20% of males and 60% of females; why would you expect, in a perfect world, to see 60% of males and 60% of females attempting activity A?
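To make the arithmetic explicit, using the hypothetical numbers from the question: if people attempt A exactly when it benefits them, the overall rate averages to 60% while neither sex-specific rate is 60%.

```python
# Hypothetical numbers from the comment: A benefits 80% of males and 40% of
# females. If everyone attempts A exactly when it benefits them:
male_rate = 0.80
female_rate = 0.40

# With equal numbers of males and females, the overall attempt rate is 60%,
# but nothing pulls either sex-specific rate toward 60%.
overall = (male_rate + female_rate) / 2
print(male_rate, female_rate, overall)
```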
Were there any significant differences between lurkers and posters? Would be interesting to see if that indicates any entry barriers to commenting.
Why did you interview Gowers anyway? It's not like he has any domain knowledge in artificial intelligence.
Tapping out now.
How do you use a drug without possessing it at some point? Isn't admitting use of drugs a fortiori an admission of possession of drugs?
All of mathematics? Dunno. I'm not even sure what that phrase refers to. But sure, there exist mathematical problems that humans can't solve unaided, but which can be solved by tools we create.
Formatting: I find the reports a bit difficult to scan, because each line contains two numbers (absolute numbers, relative percents) which are not vertically aligned. An absolute value on one line may sit just below a percentage on another line, and the numbers may be similar, which makes it difficult to e.g. quickly find the highest value in the set.
I think this could be significantly improved with a trivial change: write the numbers at the beginning of the line; that will make them better aligned. For even better legibility, insert a separator (wider than just a...
Is grey goo the only extinction type scenario possible if humans solve advanced nanotechnology? And do you really need an AI whose distance from an intelligence explosion is under 5 years in order to guide something like grey goo?
But yes, this is an answer to my original question. Thanks.
Presumably for the same reason there is no data on people with 7, 8, 9, 10...n partners: no one claimed to have them. Since there was only 1 person who claimed 4 partners and 3 people who claimed 6, it's perfectly plausible that there simply was no such respondent.
We should have an answer wiki with ideas for the next survey.
I'm sorry, I'm not sure what you're saying? I'm aware of what "EA" stands for, if that's the confusion.