The number of Asians (both East and South) among American readers is surprisingly low: 43/855 ~= 5%. This is despite Asians being, e.g., ~15% of the Ivy League student body (it'd be much higher without affirmative action), and close to 50% of Silicon Valley workers.
Being South Asian myself, I suspect that high-achieving immigrant and immigrant-descended populations gravitate towards technical fields and the Ivy League for different reasons than American whites do. Coming from hardship and generally being less WEIRD, they psychologically share more in common with the middle class and even blue-collar workers than with the Ivy League upper class: they see it as a path to success rather than some sort of grand purposeful undertaking. (One of the Asian professional communities I participated in articulated this and other differences in attitude as a reason Asians often find themselves passed over for higher-level management positions, and as something to be overcome.)
LessWrong tends to appeal to abstract, starry-eyed types. I hate to use the word "privilege", but there are some hard-to-quantify things, like how much time people spend talking about LessWrong-y keywords like "free will" or "utilitarianism", which are going to influence the numbers here. (Not that Asians don't like chatting about philosophy, but they certainly have less time for it, and they tend to focus on somewhat different topics during philosophical...
"Used against", to me, implies active planning that may or may not exist here; but the pragmatic effects of the policy as implemented in American universities do seem to negatively affect Asians.
There's pretty unambiguous statistical evidence that it happens. The Asian Ivy League percentage has remained basically fixed for 20 years despite the college-age Asian population doubling (and Asian SAT scores increasing slightly).
Calibration Score
Using a log scoring rule, I calculated a total accuracy+calibration score for the ten questions together. One issue is that this treats the questions as binary when they're not: someone who is 0% sure that Thor is the right answer to the mythology question gets the same score (0) as someone who is 100% sure that Odin is the right answer. I ignored infinitely low scores for the correlation part.
I replicated the MWI correlation, but I noticed something weird: all of the really low scorers gave really low probabilities to MWI. The worst scorer had a score of -18, which corresponds to giving about 1.6% probability to the right answer. What appears to have happened is that they misunderstood the survey and answered in fractions instead of percents. They got 9 out of 10 questions right, but lost 2 points every time they assigned 1% or slightly less than 1% to the right answer (i.e. they meant to express near-certainty by writing 1 or 0.99), and only lost 0.0013 points when they assigned 0.3% probability to a wrong answer.
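For concreteness, here's a minimal sketch of the base-10 log rule used above, treating each question as binary; it also reproduces the roughly -18 total from the fraction-vs-percent mix-up:

```python
import math

def log_score(p_right):
    """Base-10 log of the probability assigned to the right answer.

    0 for certainty in the right answer, -infinity as the probability
    approaches 0; e.g. log10(0.01) = -2, which is why answering "1"
    (meant as ~100%) instead of "99" costs about 2 points per question.
    """
    return -math.inf if p_right == 0 else math.log10(p_right)

def total_score(answers):
    """answers: list of (stated_percent, answered_correctly) pairs,
    treating each question as binary (right/wrong), as described above."""
    total = 0.0
    for pct, correct in answers:
        p = pct / 100.0                      # the survey asked for percents
        total += log_score(p if correct else 1 - p)
    return total

# A respondent who answered in fractions: nine right answers at "1" (meant
# as near-certainty) and one wrong answer at "0.3" scores about -18.
print(total_score([(1, True)] * 9 + [(0.3, False)]))
```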
When I drop the 30 lowest scorers, the direction of the relationship flips: now, people with better log scores (i.e. close...
Once again pandemic is the leading cat risk. It was the leading cat risk last year. http://lesswrong.com/lw/jj0/2013_survey_results/aekk It was the leading cat risk the year before that. http://lesswrong.com/lw/fp5/2012_survey_results/7xz0
Pandemics are the risk LWers are most afraid of and to my knowledge we as a community have expended almost no effort on preventing them.
So this year I resolve that my effort towards pandemic prevention will be greater than simply posting a remark about how it's the leading risk.
Clearly, we haven't been doing enough to increase other risks. We can't let pandemic stay in the lead.
GiveWell has looked into global catastrophic risks in general, plus pandemic preparedness in particular. My impression is that quite a bit more is spent per year on biosecurity (around $6 billion in the US) than on other catastrophic risks such as AI.
Pandemics may be the largest risk, but the marginal contribution a typical LWer can make is probably very low, and not their comparative advantage. Let the WHO do its work, and turn your attention to underconsidered risks.
WHY ISN’T THERE AN OPTION FOR NONE SO I CAN SIGNAL MY OBVIOUS OBJECTIVITY WITH MINIMAL EFFORT
This is why I didn't vote on the politics question.
This is a sign that most Less Wrongers continue to neglect the very basics of rationality and are incapable of judging how much evidence they have on a given issue. Veterans of the site do no better than newbies on this measure.
Theory: People use this site as a geek / intellectual social outlet and/or insight porn and/or self-help site more than they seriously try to get progressively better at rationality. At least, I know that applies to me :).
This definitely belongs on the next survey!
Why do you read LessWrong? [ ] Rationality improvement [ ] Insight Porn [ ] Geek Social Fuzzies [ ] Self-Help Fuzzies [ ] Self-Help Utilons [ ] I enjoy reading the posts
I decided to take a look at overconfidence (rather than calibration) on the 10 calibration questions.
For each person, I added up the probabilities that they assigned to getting each of the 10 questions correct, and then subtracted the number of correct answers. Positive numbers indicate overconfidence (fewer correct answers than they predicted they'd get), negative numbers indicate underconfidence (more correct answers than they predicted). Note that this is somewhat different from calibration: you could get a good score on this if you put 40% on each question and get 40% of them right (showing no ability to distinguish between what you know and what you don't), or if you put 99% on the ones you get wrong and 1% on the ones you get right. But this overconfidence score is easy to calculate, has a nice distribution, and is informative about the general tendency to be overconfident.
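Here's a minimal sketch of that overconfidence score, assuming the answers sit in a pandas DataFrame; the filename and column names are hypothetical placeholders rather than the actual headers of the public data set:

```python
import pandas as pd

def overconfidence_score(row, n_questions=10):
    """Sum of stated probabilities (as fractions) minus number correct.

    Positive = overconfident, negative = underconfident. The columns
    CalibrationAnswer1..10 (stated percents) and Correct1..10 (0/1) are
    hypothetical stand-ins for whatever the public data set actually uses.
    """
    expected = sum(row[f"CalibrationAnswer{i}"] / 100.0 for i in range(1, n_questions + 1))
    actual = sum(row[f"Correct{i}"] for i in range(1, n_questions + 1))
    return expected - actual

# df = pd.read_csv("2014_survey_public.csv")   # hypothetical filename
# df["overconfidence"] = df.apply(overconfidence_score, axis=1)
# print(df["overconfidence"].mean())           # the comment above reports ~0.39
```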
After cleaning up the data set in a few ways (which I'll describe in a reply to this comment), the average overconfidence score was 0.39. On average, people expected to get 4.79 of the 10 questions correct, but only got 4.40 correct. My impression is that this gap (4 percentage points) is smallish compared ...
Details on data cleanup:
In the publicly available data set, I restricted my analysis to people who:
Failure to meet any of these criteria generally indicated either a failure to understand the format of the calibration questions, or a decision to skip one or more of the questions. Each of these criteria eliminated at least 1 person, leaving a sample of 1141 people.
I counted as "correct":
These seem to cover the most common ...
And here's an analysis of calibration.
If a person were perfectly calibrated, then each 10-percentage-point increase in their probability estimate would translate into a 10-percentage-point higher chance of getting the answer correct. If you plot probability estimates on the x axis and whether or not the event happened on the y axis, then you should get a slope of 1 (the line y=x). But people tend to be miscalibrated: out of the questions where they say "90%", they might only get 70% correct. This results in a shallower slope (in this example, the line would go through the point (90,70) instead of (90,90)), i.e. a slope less than 1.
I took the 1141 people's answers to the 10 calibration questions as 11410 data points, plotted them on an x-y graph (with the probability estimate as the x value and a y value of 100 if it's correct and 0 if it's incorrect), and ran an ordinary linear regression to find the slope of the line fit to all 11410 data points.
That line had a slope of 0.91. In other words, if a LWer gave a probability estimate that was 10 percentage points higher, then on average the claim was 9.1 percentage points more likely to be true. Not perfect calibration, but not bad.
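Here's a minimal sketch of that pooled regression, using toy numbers rather than the survey data:

```python
import numpy as np

def calibration_slope(prob_estimates, outcomes):
    """OLS slope of correctness (0/100) on stated probability, pooled over
    all respondents and questions, as described above.

    prob_estimates: stated probabilities in percent (one entry per person-question)
    outcomes: 100 if that answer was correct, 0 otherwise
    Perfect calibration gives a slope of 1; the pooled LW data gave ~0.91.
    """
    slope, intercept = np.polyfit(prob_estimates, outcomes, deg=1)
    return slope

# Toy example (not the survey numbers):
x = np.array([10, 50, 50, 90, 90, 99])
y = np.array([0, 0, 100, 100, 0, 100])
print(calibration_slope(x, y))
```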
If we look at various s...
Myth: Americans think they know a lot about other countries but really are clueless.
Verdict: Self-cancelling prophecy.
Method: Semi-humorous generalization from a single data series, hopefully inspiring replication instead of harsh judgment :)
I decided to do some analysis about what makes people overconfident about certain subjects, and decided to start with an old stereotype. I compared how people did on the population calibration question (#9) based on their country.
Full disclosure: I'm Israeli (currently living in the US) and would've guessed Japan with 50% confidence, but I joined LW (unlurked) two days after the end of the survey.
I normalized every probability by rounding extreme confidence values in to 1% and 99%, counted as correct any answer that looked like a spelling (or misspelling) of Indonesia, and scored each answer according to the log rule.
Results: Americans didn't have a strong showing with an average score of -0.0071, but the rest of the world really sucked with an average of -0.0296. The reason? While the correct answer rate was almost identical (28.3% v 28.8%), Americans were much less confident in their answers: 42.4% confidence v 46.3% (p<0.01).
Dear Americans, you don't know (significantly) less about the world than everyone else, but at least you internalized the fact that you don't know much*!
Next up: how people who grew up in a religious household do on the Biblical calibration question.
*Unlike cocky Israelis like me.
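For concreteness, here's a rough sketch of the scoring method described above; the filename and column names are hypothetical, the Indonesia matcher is a crude stand-in, and the exact normalization may differ, so don't expect it to reproduce the numbers exactly:

```python
import math
import pandas as pd

def clamp(pct):
    """Round extreme confidences in to the 1%-99% range, as described above."""
    return min(max(pct, 1.0), 99.0)

def looks_like_indonesia(answer):
    """Very crude stand-in for 'close enough to a misspelling of Indonesia'."""
    return isinstance(answer, str) and answer.strip().lower().startswith("indon")

def question9_score(row):
    # CalibrationQuestion9 / CalibrationAnswer9 are hypothetical column names.
    p = clamp(row["CalibrationAnswer9"]) / 100.0
    correct = looks_like_indonesia(row["CalibrationQuestion9"])
    return math.log10(p) if correct else math.log10(1 - p)

# df = pd.read_csv("2014_survey_public.csv")     # hypothetical filename
# df["q9_score"] = df.apply(question9_score, axis=1)
# print(df.groupby(df["Country"] == "United States")["q9_score"].mean())
```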
I'm losing a lot of confidence in the digit ratio/masculinity femininity stuff. I'm not seeing a number of things I'd expect to see.
First, my numbers for correlations don't match up with yours. With filters on for female gendered, and answering all of BemSexRoleF, BemSexRoleM, RightHand, and LeftHand, I get a correlation of only -0.34 for RightHand and BemSexRoleM, not -0.433 as you say. I get various other differences as well, all weaker correlations than you describe. Perhaps differences in filtering explain this, though the gap between -.34 and -.433 seems too large for that to be the whole story.
Second, Bem masculinity and femininity actually seem to have a positive correlation, albeit tiny. So more masculine people are... more feminine? This makes no sense and makes me more likely to throw out the entire data set.
Thirdly, I don't see any huge differences between Cisgender Men, Transgender Men, Cisgender Women, or Transgender Women on digit ratios. I would expect to see this as well. I get 95% confidence intervals (mean +/- 3*sigma/sqrt(n), formatted [Lower Right - Upper Right / Lower Left - Upper Left]) for the categories as follows:
There isn't necessarily any problem with a small positive correlation between masculinity and femininity. The abstract of what I think is the original paper (I couldn't find an ungated version) says that "The dimensions of masculinity and femininity are empirically and logically independent."
I would be really interested in hearing from one of the fourteen schizophrenic rationalists. Given that one of the most prominent symptoms of schizophrenia is delusional thinking, a.k.a. irrationality... I wonder how this plays out in someone who has read the Sequences. Do these people have less severe symptoms as a result? When your brain decides to turn against you, is there a way to win?
I also find it fascinating that bisexuality is vastly overrepresented here (14.4% in LW vs. 1-2% in US), while homosexuality is not. My immediate interpretation of this is that bisexuality is a choice. I think Eliezer said once that he would rather be bisexual than straight, because it would allow for more opportunities to have fun. This seems like an attitude many LW members might share, given that polyamory (a.k.a. pursuing a weird dating strategy because it's more fun) is very popular in this community. (I personally also share Eliezer's attitude, but unfortunately I'm pretty sure I'm straight.) So to me it seems logical that the large number of bisexuals may come from a large number of people-who-want-to-be-bisexual actually becoming so. This seems more likely to me than some aspect or...
I also find it fascinating that bisexuality is vastly overrepresented here.
I don't. Compare it with the OkCupid data analysis: bisexuality could be more of a signal, at least in the (quite large) OkCupid data.
almost as common as female heterosexuality here, as you would expect
I initially misparsed this as "the female bisexuality rate is as expected." I see that isn't what you meant, but had to re-read two or three times. Just FYI.
I feel like a 42.2% bisexuality rate among LW women is surprising enough to say something, but I'm not sure what.
I think it's pretty astounding that nobody at Less Wrong was born in May. I'm not sure why Scott doesn't think that's a deviation from randomness.
Thanks for showing us that there are autistic cryonics patients in the world. I am more likely to sign up when I am old enough to legally do so without parental permission, because now I know I wouldn't be the only autistic person in the future, no matter what happens when people develop a prenatal autism test.
Thanks for doing this!
[This question was poorly worded and should have acknowledged that people can both be asexual and have a specific orientation; as a result it probably vastly undercounted our asexual readers]
I find the "vastly" part dubious, given that 3% asexual already seems disproportionately large (general population seems to be about 1%). I would expect for asexuals to be overrepresented, and I do think the question wording means the survey's estimate underestimates the true proportion, but I don't think that it's, say, actually 10% instead of actually 4%.
Good job on running the survey and analyzing the data! I do wish that one of the extra credit questions had asked whether or not readers were fans of My Little Pony: Friendship is Magic.
P Supernatural: 6.68 + 20.271 (0, 0, 1) [1386]
P God: 8.26 + 21.088 (0, 0.01, 3) [1376]
The question for P(Supernatural) explicitly said "including God." So either LW assigns a median probability of at least one in 10,000 that God created the universe and then did nothing, or there's a bad case of conjunction fallacy.
1319 people supplied a probability of God that was not blank or "idk" or the equivalent thereof as well as a non-blank religion. I was going to do results for both religious views and religious background, but religious background was a write-in so no thanks.
Literally every group had at least one member who supplied a P(God) of 0 and at least one who supplied a P(God) of 100.
Do you have some links to calibration training? I'm curious how they handle model error (the error when your model is totally wrong).
For question 10, for example, I'm guessing that many more people would have gotten the correct answer if the question had been something like "Name the best-selling PC game, where best-selling counts only units (not gross revenue) and box purchases (not subscriptions), and does not count games packaged with other software" instead of "What is the best-selling computer game of all time?". I'm guessing mos...
I'm curious how they handle model error (the error when your model is totally wrong).
They punish it. That is, your stated credence should include both your 'inside view' error of "How confident is my mythology module in this answer?" and your 'outside view' error of "How confident am I in my mythology module?"
One of the primary benefits of playing a Credence Game like this one is it gives you a sense of those outside view confidences. I am, for example, able to tell which of two American postmasters general came first at the 60% level, simply by using the heuristic of "which of these names sounds more old-timey?", but am at the 50% level (i.e. pure chance) in determining which sports team won a game by comparing their names.
But it seems hard to guess beforehand that the question you thought you were answering wasn't the question that you were being asked!
This is the sort of thing you learn by answering a bunch of questions from the same person, or by having a lawyer-sense of "how many qualifications would I need to add or remove to this sentence to be sure?".
Yayy! I was having a shitty day, and seeing these results posted lifted my spirits. Thank you for that! Below are my assorted thoughts:
I'm a little disappointed that the correlation between height and P(supernatural)-and-similar didn't hold up this year, because it was really fun trying to come up with explanations for that that weren't prima facie moronic. Maybe that should have been a sign it wasn't a real thing.
The digit ratio thing is indeed delicious. I love that stuff. I'm surprised there wasn't a correlation to sexual orientation, though, since I se...
I remember answering the computer games question and at first feeling like I knew the answer. Then I realized the feeling I was having was that I had a better shot at the question than the average person that I knew, not that I knew the answer with high confidence. Once I mentally counted up all the games that I thought might be it, then considered all the games I probably hadn't even thought of (of which Minecraft was one), I realized I had no idea what the right answer was and put something like 5% confidence in The Sims 3 (which at least is a top ten game). But the point is that I think I almost didn't catch my mistake before it was too late, and this kind of error may be common.
I was confident in my incorrect computer game answer because I had recently read the Wikipedia page "List of best-selling video games", remembered the answer, and unthinkingly assumed that "video games" was the same as "computer games".
It looks to me like everyone was horrendously underconfident on all the easy questions, and horrendously overconfident on all the hard questions.
I think that this is what correct calibration overall looks like, since you don't know in advance which questions are easy and which ones are tricky. I would be quite impressed if a group of super-calibrators had correct calibration curves on every question, rather than on average over a set of questions.
It's interesting to compare these results to those of the 2014 Survey of Effective Altruists. These will be released soon, but here are some initial ways in which effective altruists who took this survey compare to LessWrong census takers:
I think that there are better analyses of calibration which could be done than the ones that are posted here.
For example, I think it's better to combine all 10 questions into a single graph rather than looking at each one separately.
The pattern of overconfidence on hard questions and underconfidence on easy questions is actually what you'd expect to find, even if people are well-calibrated. One thing that makes a question easy is if the obvious guess is the correct answer (like a question about Confederate Civil War generals where the correct answer is R...
This is not a human universal - people who put even a small amount of training into calibration can become very well calibrated very quickly. This is a sign that most Less Wrongers continue to neglect the very basics of rationality and are incapable of judging how much evidence they have on a given issue. Veterans of the site do no better than newbies on this measure.
Can someone who's done calibration training comment on whether it really seems to represent the ability to "judge how much evidence you have on a given issue", as opposed to accurately translate brain-based probability estimates in to numerical probability estimates?
Thanks to everyone who took the 2014 Less Wrong Census/Survey. Extra thanks to Ozy, who did a lot of the number crunching work.
This year's results are below. Some of them may make more sense in the context of the original survey questions, which can be seen here. Please do not try to take the survey, as it is over and your results will not be counted.
I. Population
There were 1503 respondents over 27 days. The last survey got 1636 people over 40 days. The last three full days of the survey saw nineteen, six, and four responses, for an average of about ten. If we assume the next thirteen days had also gotten an average of ten responses - which is generous, since responses tend to trail off with time - then we would have gotten about as many people as the last survey. There is no good evidence here of a decline in population, although the data are perhaps compatible with a very small decline.
II. Demographics
Sex
Female: 179, 11.9%
Male: 1311, 87.2%
Gender
F (cisgender): 150, 10.0%
F (transgender MtF): 24, 1.6%
M (cisgender): 1245, 82.8%
M (transgender FtM): 5, 0.3%
Other: 64, 4.3%
Sexual Orientation
Asexual: 59, 3.9%
Bisexual: 216, 14.4%
Heterosexual: 1133, 75.4%
Homosexual: 47, 3.1%
Other: 35, 2.3%
[This question was poorly worded and should have acknowledged that people can both be asexual and have a specific orientation; as a result it probably vastly undercounted our asexual readers]
Relationship Style
Prefer monogamous: 778, 51.8%
Prefer polyamorous: 227, 15.1%
Uncertain/no preference: 464, 30.9%
Other: 23, 1.5%
Number of Partners
0: 738, 49.1%
1: 674, 44.8%
2: 51, 3.4%
3: 17, 1.1%
4: 7, 0.5%
5: 1, 0.1%
Lots and lots: 3, 0.2%
Relationship Goals
Currently not looking for new partners: 648, 43.1%
Open to new partners: 467, 31.1%
Seeking more partners: 370, 24.6%
[22.2% of people who don’t have a partner aren’t looking for one.]
Relationship Status
Married: 274, 18.2%
Relationship: 424, 28.2%
Single: 788, 52.4%
[6.9% of single people have at least one partner; 1.8% have more than one.]
Living With
Alone: 345, 23.0%
With parents and/or guardians: 303, 20.2%
With partner and/or children: 411, 27.3%
With roommates: 428, 28.5%
Children
0: 1317, 81.6%
1: 66, 4.4%
2: 78, 5.2%
3: 17, 1.1%
4: 6, 0.4%
5: 3, 0.2%
6: 1, 0.1%
Lots and lots: 1, 0.1%
Want More Children?
Yes: 549, 36.1%
Uncertain: 426, 28.3%
No: 516, 34.3%
[418 of the people who don’t have children don’t want any, suggesting that the LW community is 27.8% childfree.]
Country
United States: 822, 54.7%
United Kingdom: 116, 7.7%
Canada: 88, 5.9%
Australia: 83, 5.5%
Germany: 62, 4.1%
Russia: 26, 1.7%
Finland: 20, 1.3%
New Zealand: 20, 1.3%
India: 17, 1.1%
Brazil: 15, 1.0%
France: 15, 1.0%
Israel: 15, 1.0%
Lesswrongers Per Capita
Finland: 1/271,950
New Zealand: 1/223,550
Australia: 1/278,674
United States: 1/358,390
Canada: 1/399,545
Israel: 1/537,266
United Kingdom: 1/552,586
Germany: 1/1,290,323
France: 1/4,402,000
Russia: 1/5,519,231
Brazil: 1/13,360,000
India: 1/73,647,058
Race
Asian (East Asian): 59, 3.9%
Asian (Indian subcontinent): 33, 2.2%
Black: 12, 0.8%
Hispanic: 32, 2.1%
Middle Eastern: 9, 0.6%
Other: 50, 3.3%
White (non-Hispanic): 1294, 86.1%
Work Status
Academic (teaching): 86, 5.7%
For-profit work: 492, 32.7%
Government work: 59, 3.9%
Homemaker: 8, 0.5%
Independently wealthy: 9, 0.6%
Nonprofit work: 58, 3.9%
Self-employed: 122, 8.1%
Student: 553, 36.8%
Unemployed: 103, 6.9%
Profession
Art: 22, 1.5%
Biology: 29, 1.9%
Business: 35, 2.3%
Computers (AI): 42, 2.8%
Computers (other academic): 106, 7.1%
Computers (practical): 477, 31.7%
Engineering: 104, 6.9%
Finance/Economics: 71, 4.7%
Law: 38, 2.5%
Mathematics: 121, 8.1%
Medicine: 32, 2.1%
Neuroscience: 18, 1.2%
Philosophy: 36, 2.4%
Physics: 65, 4.3%
Psychology: 31, 2.1%
Other: 157, 10.2%
Other “hard science”: 25, 1.7%
Other “social science”: 34, 2.3%
Degree
None: 74, 4.9%
High school: 347, 23.1%
2 year degree: 64, 4.3%
Bachelors: 555, 36.9%
Masters: 278, 18.5%
JD/MD/other professional degree: 44, 2.9%
PhD: 105, 7.0%
Other: 24, 1.4%
III. Mental Illness
535 answer “no” to all the mental illness questions. Upper bound: 64.4% of the LW population is mentally ill.
393 answer “Yes, I was formally diagnosed” to at least one mental illness question. Lower bound: 26.1% of the LW population is mentally ill. Gosh, we have a lot of self-diagnosers.
Depression
Yes, I was formally diagnosed: 273, 18.2%
Yes, I self-diagnosed: 383, 25.5%
No: 759, 50.5%
OCD
Yes, I was formally diagnosed: 30, 2.0%
Yes, I self-diagnosed: 76, 5.1%
No: 1306, 86.9%
Autism spectrum
Yes, I was formally diagnosed: 98, 6.5%
Yes, I self-diagnosed: 168, 11.2%
No: 1143, 76.0%
Bipolar
Yes, I was formally diagnosed: 33, 2.2%
Yes, I self-diagnosed: 49, 3.3%
No: 1327, 88.3%
Anxiety disorder
Yes, I was formally diagnosed: 139, 9.2%
Yes, I self-diagnosed: 237, 15.8%
No: 1033, 68.7%
BPD
Yes, I was formally diagnosed: 5, 0.3%
Yes, I self-diagnosed: 19, 1.3%
No: 1389, 92.4%
[Ozy says: RATIONALIST BPDERS COME BE MY FRIEND]
Schizophrenia
Yes, I was formally diagnosed: 7, 0.5%
Yes, I self-diagnosed: 7, 0.5%
No: 1397, 92.9%
IV. Politics, Religion, Ethics
Politics
Communist: 9, 0.6%
Conservative: 67, 4.5%
Liberal: 416, 27.7%
Libertarian: 379, 25.2%
Social Democratic: 585, 38.9%
[The big change this year was that we changed "Socialist" to "Social Democratic". Even though the description stayed the same, about eight points worth of Liberals switched to Social Democrats, apparently more willing to accept that label than "Socialist". The overall supergroups Libertarian vs. (Liberal, Social Democratic) vs. Conservative remain mostly unchanged.]
Politics (longform)
Anarchist: 40, 2.7%
Communist: 9, 0.6%
Conservative: 23, 1.9%
Futarchist: 41, 2.7%
Left-Libertarian: 192, 12.8%
Libertarian: 164, 10.9%
Moderate: 56, 3.7%
Neoreactionary: 29, 1.9%
Social Democrat: 162, 10.8%
Socialist: 89, 5.9%
[Amusing politics answers include anti-incumbentist, having-well-founded-opinions-is-hard-but-I’ve-come-to-recognize-the-pragmatism-of-socialism-I-don’t-know-ask-me-again-next-year, pirate, progressive social democratic environmental liberal isolationist freedom-fries loving pinko commie piece of shit, republic-ist aka read the federalist papers, romantic reconstructionist, social liberal fiscal agnostic, technoutopian anarchosocialist (with moderate snark), whatever it is that Scott is, and WHY ISN’T THERE AN OPTION FOR NONE SO I CAN SIGNAL MY OBVIOUS OBJECTIVITY WITH MINIMAL EFFORT. Ozy would like to point out to the authors of manifestos that no one will actually read their manifestos except zir, and they might want to consider posting them to their own blogs.]
American Parties
Democratic Party: 221, 14.7%
Republican Party: 55, 3.7%
Libertarian Party: 26, 1.7%
Other party: 16, 1.1%
No party: 415, 27.6%
Non-Americans who really like clicking buttons: 415, 27.6%
Voting
Yes: 881, 58.6%
No: 444, 29.5%
My country doesn’t hold elections: 5, 0.3%
Religion
Atheist and not spiritual: 1054, 70.1%
Atheist and spiritual: 150, 10.0%
Agnostic: 156, 10.4%
Lukewarm theist: 44, 2.9%
Deist/pantheist/etc.: 22, 1.5%
Committed theist: 60, 4.0%
Religious Denomination
Christian (Protestant): 53, 3.5%
Mixed/Other: 32, 2.1%
Jewish: 31, 2.0%
Buddhist: 30, 2.0%
Christian (Catholic): 24, 1.6%
Unitarian Universalist or similar: 23, 1.5%
[Amusing denominations include anti-Molochist, CelestAI, cosmic engineers, Laziness, Thelema, Resimulation Theology, and Pythagorean. The Cultus Deorum Romanorum practitioner still needs to contact Ozy so they can be friends.]
Family Religion
Atheist and not spiritual: 213, 14.2%
Atheist and spiritual: 74, 4.9%
Agnostic: 154, 10.2%
Lukewarm theist: 541, 36.0%
Deist/Pantheist/etc.: 28, 1.9%
Committed theist: 388, 25.8%
Religious Background
Christian (Protestant): 580, 38.6%
Christian (Catholic): 378, 25.1%
Jewish: 141, 9.4%
Christian (other non-protestant): 88, 5.9%
Mixed/Other: 68, 4.5%
Unitarian Universalism or similar: 29, 1.9%
Christian (Mormon): 28, 1.9%
Hindu: 23, 1.5%
Moral Views
Accept/lean towards consequentialism: 901, 60.0%
Accept/lean towards deontology: 50, 3.3%
Accept/lean towards natural law: 48, 3.2%
Accept/lean towards virtue ethics: 150, 10.0%
Accept/lean towards contractualism: 79, 5.3%
Other/no answer: 239, 15.9%
Meta-ethics
Constructivism: 474, 31.5%
Error theory: 60, 4.0%
Non-cognitivism: 129, 8.6%
Subjectivism: 324, 21.6%
Substantive realism: 209, 13.9%
V. Community Participation
Less Wrong Use
Lurker: 528, 35.1%
I’ve registered an account: 221, 14.7%
I’ve posted a comment: 419, 27.9%
I’ve posted in Discussion: 207, 13.8%
I’ve posted in Main: 102, 6.8%
Sequences
Never knew they existed until this moment: 106, 7.1%
Knew they existed, but never looked at them: 42, 2.8%
Some, but less than 25%: 270, 18.0%
About 25%: 181, 12.0%
About 50%: 209, 13.9%
About 75%: 242, 16.1%
All or almost all: 427, 28.4%
Meetups
Yes, regularly: 154, 10.2%
Yes, once or a few times: 325, 21.6%
No: 989, 65.8%
Community
Yes, all the time: 112, 7.5%
Yes, sometimes: 191, 12.7%
No: 1163, 77.4%
Romance
Yes: 82, 5.5%
I didn’t meet them through the community but they’re part of the community now: 79, 5.3%
No: 1310, 87.2%
CFAR Events
Yes, in 2014: 45, 3.0%
Yes, in 2013: 60, 4.0%
Both: 42, 2.8%
No: 1321, 87.9%
CFAR Workshop
Yes: 109, 7.3%
No: 1311, 87.2%
[A couple percent more people answered 'yes' to each of meetups, physical interactions, CFAR attendance, and romance this time around, suggesting the community is very very gradually becoming more IRL. In particular, the number of people meeting romantic partners through the community increased by almost 50% over last year.]
HPMOR
Yes: 897, 59.7%
Started but not finished: 224, 14.9%
No: 254, 16.9%
Referrals
Referred by a link: 464, 30.9%
HPMOR: 385, 25.6%
Been here since the Overcoming Bias days: 210, 14.0%
Referred by a friend: 199, 13.2%
Referred by a search engine: 114, 7.6%
Referred by other fiction: 17, 1.1%
[Amusing responses include “a rationalist that I follow on Tumblr”, “I’m a student of tribal cultishness”, and “It is difficult to recall details from the Before Time. Things were brighter, simpler, as in childhood or a dream. There has been much growth, change since then. But also loss. I can't remember where I found the link, is what I'm saying.”]
Blog Referrals
Slate Star Codex: 40, 2.6%
Reddit: 25, 1.6%
Common Sense Atheism: 21, 1.3%
Hacker News: 20, 1.3%
Gwern: 13, 1.0%
VI. Other Categorical Data
Cryonics Status
Don’t understand/never thought about it: 62, 4.1%
Don’t want to: 361, 24.0%
Considering it: 551, 36.7%
Haven’t gotten around to it: 272, 18.1%
Unavailable in my area: 126, 8.4%
Yes: 64, 4.3%
Type of Global Catastrophic Risk
Asteroid strike: 64, 4.3%
Economic/political collapse: 151, 10.0%
Environmental collapse: 218, 14.5%
Nanotech/grey goo: 47, 3.1%
Nuclear war: 239, 15.8%
Pandemic (bioengineered): 310, 20.6%
Pandemic (natural): 113, 7.5%
Unfriendly AI: 244, 16.2%
[Amusing answers include ennui/eaten by Internet, Friendly AI, “Greens so weaken the rich countries that barbarians conquer us”, and Tumblr.]
Effective Altruism (do you self-identify)
Yes: 422, 28.1%
No: 758, 50.4%
[Despite some impressive outreach by the EA community, numbers are largely the same as last year]
Effective Altruism (do you participate in community)
Yes: 191, 12.7%
No: 987, 65.7%
Vegetarian
Vegan: 31, 2.1%
Vegetarian: 114, 7.6%
Other meat restriction: 252, 16.8%
Omnivore: 848, 56.4%
Paleo Diet
Yes: 33, 2.2%
Sometimes: 209, 13.9%
No: 1111, 73.9%
Food Substitutes
Most of my calories: 8, 0.5%
Sometimes: 101, 6.7%
Tried: 196, 13.0%
No: 1052, 70.0%
Gender Default
I only identify with my birth gender by default: 681, 45.3%
I strongly identify with my birth gender: 586, 39.0%
Books
<5: 198, 13.2%
5 - 10: 384, 25.5%
10 - 20: 328, 21.8%
20 - 50: 264, 17.6%
50 - 100: 105, 7.0%
> 100: 49, 3.3%
Birth Month
Jan: 109, 7.3%
Feb: 90, 6.0%
Mar: 123, 8.2%
Apr: 126, 8.4%
Jun: 107, 7.1%
Jul: 109, 7.3%
Aug: 120, 8.0%
Sep: 94, 6.3%
Oct: 111, 7.4%
Nov: 102, 6.8%
Dec: 106, 7.1%
[Despite my hope of something turning up here, these results don't deviate from chance]
Handedness
Right: 1170, 77.8%
Left: 143, 9.5%
Ambidextrous: 37, 2.5%
Unsure: 12, 0.8%
Previous Surveys
Yes: 757, 50.7%
No: 598, 39.8%
Favorite Less Wrong Posts (all > 5 listed)
An Alien God: 11
Joy In The Merely Real: 7
Dissolving Questions About Disease: 7
Politics Is The Mind Killer: 6
That Alien Message: 6
A Fable Of Science And Politics: 6
Belief In Belief: 5
Generalizing From One Example: 5
Schelling Fences On Slippery Slopes: 5
Tsuyoku Naritai: 5
VII. Numeric Data
Age: 27.67 + 8.679 (22, 26, 31) [1490]
IQ: 138.25 + 15.936 (130.25, 139, 146) [472]
SAT out of 1600: 1470.74 + 113.114 (1410, 1490, 1560) [395]
SAT out of 2400: 2210.75 + 188.94 (2140, 2250, 2320) [310]
ACT out of 36: 32.56 + 2.483 (31, 33, 35) [244]
Time in Community: 2010.97 + 2.174 (2010, 2011, 2013) [1317]
Time on LW: 15.73 + 95.75 (2, 5, 15) [1366]
Karma Score: 555.73 + 2181.791 (0, 0, 155) [1335]
P Many Worlds: 47.64 + 30.132 (20, 50, 75) [1261]
P Aliens: 71.52 + 34.364 (50, 90, 99) [1393]
P Aliens (Galaxy): 41.2 + 38.405 (2, 30, 80) [1379]
P Supernatural: 6.68 + 20.271 (0, 0, 1) [1386]
P God: 8.26 + 21.088 (0, 0.01, 3) [1376]
P Religion: 4.99 + 18.068 (0, 0, 0.5) [1384]
P Cryonics: 22.34 + 27.274 (2, 10, 30) [1399]
P Anti-Agathics: 24.63 + 29.569 (1, 10, 40) [1390]
P Simulation: 24.31 + 28.2 (1, 10, 50) [1320]
P Warming: 81.73 + 24.224 (80, 90, 98) [1394]
P Global Catastrophic Risk: 72.14 + 25.620 (55, 80, 90) [1394]
Singularity: 2143.44 + 356.643 (2060, 2090, 2150) [1177]
[The mean for this question is almost entirely dependent on which stupid responses we choose to delete as outliers; the median practically never changes]
Abortion: 4.38 + 1.032 (4, 5, 5) [1341]
Immigration: 4 + 1.078 (3, 4, 5) [1310]
Taxes: 3.14 + 1.212 (2, 3, 4) [1410] (from 1 - should be lower to 5 - should be higher)
Minimum Wage: 3.21 + 1.359 (2, 3, 4) [1298] (from 1 - should be lower to 5 - should be higher)
Feminism: 3.67 + 1.221 (3, 4, 5) [1332]
Social Justice: 3.15 + 1.385 (2, 3, 4) [1309]
Human Biodiversity: 2.93 + 1.201 (2, 3, 4) [1321]
Basic Income: 3.94 + 1.087 (3, 4, 5) [1314]
Great Stagnation: 2.33 + .959 (2, 2, 3) [1302]
MIRI Mission: 3.90 + 1.062 (3, 4, 5) [1412]
MIRI Effectiveness: 3.23 + .897 (3, 3, 4) [1336]
[Remember, all of these are asking you to rate your belief in/agreement with the concept on a scale of 1 (bad) to 5 (great)]
Income: 54129.37 + 66818.904 (10,000, 30,800, 80,000) [923]
Charity: 1996.76 + 9492.71 (0, 100, 800) [1009]
MIRI/CFAR: 511.61 + 5516.608 (0, 0, 0) [1011]
XRisk: 62.50 + 575.260 (0, 0, 0) [980]
Older siblings: 0.51 + .914 (0, 0, 1) [1332]
Younger siblings: 1.08 + 1.127 (0, 1, 1) [1349]
Height: 178.06 + 11.767 (173, 179, 184) [1236]
Hours Online: 43.44 + 25.452 (25, 40, 60) [1221]
Bem Sex Role Masculinity: 42.54 + 9.670 (36, 42, 49) [1032]
Bem Sex Role Femininity: 42.68 + 9.754 (36, 43, 50) [1031]
Right Hand: .97 + 0.67 (.94, .97, 1.00)
Left Hand: .97 + .048 (.94, .97, 1.00)
VIII. Fishing Expeditions
[correlations, in descending order]
SAT Scores out of 1600/SAT Scores out of 2400 .844 (59)
P Supernatural/P God .697 (1365)
Feminism/Social Justice .671 (1299)
P God/P Religion .669 (1367)
P Supernatural/P Religion .631 (1372)
Charity Donations/MIRI and CFAR Donations .619 (985)
P Aliens/P Aliens 2 .607 (1376)
Taxes/Minimum Wage .587 (1287)
SAT Score out of 2400/ACT Score .575 (89)
Age/Number of Children .506 (1480)
P Cryonics/P Anti-Agathics .484 (1385)
SAT Score out of 1600/ACT Score .480 (81)
Minimum Wage/Social Justice .456 (1267)
Taxes/Social Justice .427 (1281)
Taxes/Feminism .414 (1299)
MIRI Mission/MIRI Effectiveness .395 (1331)
P Warming/Taxes .385 (1261)
Taxes/Basic Income .383 (1285)
Minimum Wage/Feminism .378 (1286)
P God/Abortion -.378 (1266)
Immigration/Feminism .365 (1296)
P Supernatural/Abortion -.362 (1276)
Feminism/Human Biodiversity -.360 (1306)
MIRI and CFAR Donations/Other XRisk Charity Donations .345 (973)
Social Justice/Human Biodiversity -.341 (1288)
P Religion/Abortion -.326 (1275)
P Warming/Minimum Wage .324 (1248)
Minimum Wage/Basic Income .312 (1276)
P Warming/Basic Income .306 (1260)
Immigration/Social Justice .294 (1278)
P Anti-Agathics/MIRI Mission .293 (1351)
P Warming/Feminism .285 (1281)
P Many Worlds/P Anti-Agathics .276 (1245)
Social Justice/Femininity .267 (990)
Minimum Wage/Human Biodiversity -.264 (1274)
Immigration/Human Biodiversity -.263 (1286)
P Many Worlds/MIRI Mission .263 (1233)
P Aliens/P Warming .262 (1365)
P Warming/Social Justice .257 (1262)
Taxes/Human Biodiversity -.252 (1291)
Social Justice/Basic Income .251 (1281)
Feminism/Femininity .250 (1003)
Older Siblings/Younger Siblings -.243 (1321)
Charity Donations/Other XRisk Charity Donations .240 (957)
P Anti-Agathics/P Simulation .238 (1312)
Abortion/Minimum Wage .229 (1293)
Feminism/Basic Income .227 (1297)
Abortion/Feminism .226 (1321)
P Cryonics/MIRI Mission .223 (1360)
Immigration/Basic Income .208 (1279)
P Many Worlds/P Cryonics .202 (1251)
Number of Current Partners/Femininity: .202 (1029)
P Warming/Immigration .202 (1260)
P Warming/Abortion .201 (1289)
Abortion/Taxes .198 (1304)
Age/P Simulation .197 (1313)
Political Interest/Masculinity .194 (1011)
P Cryonics/MIRI Effectiveness .191 (1285)
Abortion/Social Justice .191 (1301)
P Simulation/MIRI Mission .188 (1290)
P Many Worlds/P Warming .188 (1240)
Age/Number of Current Partners .184 (1480)
P Anti-Agathics/MIRI Effectiveness .183 (1277)
P Many Worlds/P Simulation .181 (1211)
Abortion/Immigration .181 (1304)
Number of Current Partners/Number of Children .180 (1484)
P Cryonics/P Simulation .174 (1315)
P Global Catastrophic Risk/MIRI Mission -.174 (1359)
Minimum Wage/Femininity .171 (981)
Abortion/Basic Income .170 (1302)
Age/P Cryonics -.165 (1391)
Immigration/Taxes .165 (1293)
P Warming/Human Biodiversity -.163 (1271)
P Aliens 2/P Warming .160 (1353)
Abortion/Younger Siblings -.155 (1292)
P Religion/Meditate .155 (1189)
Feminism/Masculinity -.155 (1004)
Immigration/Femininity .155 (988)
P Supernatural/Basic Income -.153 (1246)
P Supernatural/P Warming -.152 (1361)
Number of Current Partners/Karma Score .152 (1332)
P Many Worlds/MIRI Effectiveness .152 (1181)
Age/MIRI Mission -.150 (1404)
P Religion/P Warming -.150 (1358)
P Religion/Basic Income -.146 (1245)
P God/Basic Income -.146 (1237)
Human Biodiversity/Femininity -.145 (999)
P God/P Warming -.144 (1351)
Taxes/Femininity .142 (987)
Number of Children/Younger Siblings .138 (1343)
Number of Current Partners/Masculinity: .137 (1030)
P Many Worlds/P God -.137 (1232)
Age/Charity Donations .133 (1002)
P Anti-Agathics/P Global Catastrophic Risk -.132 (1373)
P Warming/Masculinity -.132 (992)
P Global Catastrophic Risk/MIRI and CFAR Donations -.132 (982)
P Supernatural/Singularity .131 (1148)
P God/Taxes -.130 (1240)
Age/P Anti-Agathics -.128 (1382)
P Aliens/Taxes .127 (1258)
Feminism/Great Stagnation -.127 (1287)
P Many Worlds/P Supernatural -.127 (1241)
P Aliens/Abortion .126 (1284)
P Anti-Agathics/Great Stagnation -.126 (1248)
P Anti-Agathics/P Warming .125 (1370)
Age/P Aliens .124 (1386)
P Aliens/Minimum Wage .124 (1245)
P Aliens/P Global Catastrophic Risk .122 (1363)
Age/MIRI Effectiveness -.122 (1328)
Age/P Supernatural .120 (1370)
P Supernatural/MIRI Mission -.119 (1345)
P Many Worlds/P Religion -.119 (1238)
P Religion/MIRI Mission -.118 (1344)
Political Interest/Social Justice .118 (1304)
P Anti-Agathics/MIRI and CFAR Donations .118 (976)
Human Biodiversity/Basic Income -.115 (1262)
P Many Worlds/Abortion .115 (1166)
Age/Karma Score .114 (1327)
P Aliens/Feminism .114 (1277)
P Many Worlds/P Global Catastrophic Risk -.114 (1243)
Political Interest/Femininity .113 (1010)
Number of Children/P Simulation -.112 (1317)
P Religion/Younger Siblings .112 (1275)
P Supernatural/Taxes -.112 (1248)
Age/Masculinity .112 (1027)
Political Interest/Taxes .111 (1305)
P God/P Simulation .110 (1296)
P Many Worlds/Basic Income .110 (1139)
P Supernatural/Younger Siblings .109 (1274)
P Simulation/Basic Income .109 (1195)
Age/P Aliens 2 .107 (1371)
MIRI Mission/Basic Income .107 (1279)
Age/Great Stagnation .107 (1295)
P Many Worlds/P Aliens .107 (1253)
Number of Current Partners/Social Justice .106 (1304)
Human Biodiversity/Great Stagnation .105 (1285)
Number of Children/Abortion -.104 (1337)
Number of Current Partners/P Cryonics -.102 (1396)
MIRI Mission/Abortion .102 (1305)
Immigration/Great Stagnation -.101 (1269)
Age/Political Interest .100 (1339)
P Global Catastrophic Risk/Political Interest .099 (1295)
P Aliens/P Religion -.099 (1357)
P God/MIRI Mission -.098 (1335)
P Aliens/P Simulation .098 (1308)
Number of Current Partners/Immigration .098 (1305)
P God/Political Interest .098 (1274)
P Warming/P Global Catastrophic Risk .096 (1377)
In addition to the Left/Right factor we had last year, this data seems to me to have an Agrees with the Sequences Factor-- the same people tend to believe in many-worlds, cryo, atheism, simulationism, MIRI’s mission and effectiveness, anti-agathics, etc. Weirdly, belief in global catastrophic risk is negatively correlated with most of the Agrees with Sequences things. Someone who actually knows how to do statistics should run a factor analysis on this data.
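For anyone who wants to try, here's a minimal sketch of such a factor analysis using scikit-learn; the filename and column names are hypothetical placeholders for whatever the public CSV actually calls these variables:

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical column names for the belief/politics variables discussed above.
COLS = ["PManyWorlds", "PCryonics", "PAntiAgathics", "PSimulation", "PGod",
        "PGlobalCatastrophicRisk", "MIRIMission", "MIRIEffectiveness",
        "Taxes", "MinimumWage", "Feminism", "SocialJustice", "HumanBiodiversity"]

df = pd.read_csv("2014_survey_public.csv")            # hypothetical filename
data = df[COLS].apply(pd.to_numeric, errors="coerce").dropna()

# Standardize so probabilities (0-100) and 1-5 scales are comparable.
standardized = (data - data.mean()) / data.std()

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(standardized)

# Loadings: one row per factor, one column per variable. A Left/Right factor
# and an "agrees with the Sequences" factor would show up as two rows loading
# heavily on the political and the belief variables respectively.
loadings = pd.DataFrame(fa.components_, columns=COLS)
print(loadings.round(2))
```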
IX. Digit Ratios
After sanitizing the digit ratio numbers, the following correlations came up:
Digit ratio R hand was correlated with masculinity at a level of -0.180 p < 0.01
Digit ratio L hand was correlated with masculinity at a level of -0.181 p < 0.01
Digit ratio R hand was slightly correlated with femininity at a level of +0.116 p < 0.05
Holy #@!$ the feminism thing ACTUALLY HELD UP. There is a 0.144 correlation between right-handed digit ratio and feminism, p < 0.01, and a 0.112 correlation between left-handed digit ratio and feminism, p < 0.05.
The only other political position that correlates with digit ratio is immigration. There is a 0.138 correlation between left-handed digit ratio and belief in open borders, p < 0.01, and a 0.111 correlation between right-handed digit ratio and belief in open borders, p < 0.05.
No digit correlation with abortion, taxes, minimum wage, social justice, human biodiversity, basic income, or great stagnation.
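(For anyone double-checking: these are plain Pearson correlations with p-values. A minimal sketch, where the filename is hypothetical and the column names are guesses based on fields mentioned elsewhere in the thread:)

```python
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("2014_survey_public.csv")              # hypothetical filename
# RightHand = right-hand digit ratio, Feminism = the 1-5 feminism item.
pair = df[["RightHand", "Feminism"]].apply(pd.to_numeric, errors="coerce").dropna()

r, p = pearsonr(pair["RightHand"], pair["Feminism"])
print(f"r = {r:.3f}, p = {p:.4f}, n = {len(pair)}")      # the text reports r ~ 0.144, p < 0.01
```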
Okay, need to rule out that this is all confounded by gender. I ran a few analyses on men and women separately.
On men alone, the connection to masculinity holds up. Restricting the sample to men, right-handed digit ratio correlates with masculinity at -0.157, p < 0.01, and left-handed at -0.134, p < 0.05. Right-handed also correlates with femininity at 0.120, p < 0.05. The feminism correlation holds up too: restricting the sample to men, right-handed digit ratio correlates with feminism at 0.149, p < 0.01; left-handed just barely fails to reach significance. Both right and left correlate with immigration at 0.135, p < 0.05.
On women alone, the Bem masculinity correlation is the highest correlation we're going to get in this entire study. Right hand is -0.433, p < 0.01. Left hand is -0.299, p < 0.05. Femininity trends toward significance but doesn't get there. The feminism correlation trends toward significance but doesn't get there. In general there was too small a sample size of women to pick up anything but the most whopping effects.
Since digit ratio is related to testosterone and testosterone sometimes affects risk-taking, I wondered if it would correlate with any of the calibration answers. I selected people who had answered Calibration Question 5 incorrectly and ran an analysis to see if digit ratio was correlated with tendency to be more confident in the incorrect answer. No effect was found.
Other things that didn't correlate with digit ratio: IQ, SAT, number of current partners, tendency to work in mathematical professions.
...I still can't believe this actually worked. The finger-length/feminism connection ACTUALLY WORKED. What a world. What a world. Someone may want to double-check these results before I get too excited.
X. Calibration
There were ten calibration questions on this year's survey. Along with answers, they were:
1. What is the largest bone in the body? Femur
2. What state was President Obama born in? Hawaii
3. Off the coast of what country was the battle of Trafalgar fought? Spain
4. What Norse God was called the All-Father? Odin
5. Who won the 1936 Nobel Prize for his work in quantum physics? Heisenberg
6. Which planet has the highest density? Earth
7. Which Bible character was married to Rachel and Leah? Jacob
8. What organelle is called "the powerhouse of the cell"? Mitochondria
9. What country has the fourth-highest population? Indonesia
10. What is the best-selling computer game? Minecraft
I ran calibration scores for everybody based on how well they did on the ten calibration questions. These failed to correlate with IQ, SAT, LW karma, or any of the things you might expect to be measures of either intelligence or previous training in calibration; they didn't differ by gender, correlates of community membership, or any mental illness [deleted section about correlating with MWI and MIRI, this was an artifact].
Your answers looked like this:
The red line represents perfect calibration. Where answers dip below the line, it means you were overconfident; when they go above, it means you were underconfident.
It looks to me like everyone was horrendously underconfident on all the easy questions, and horrendously overconfident on all the hard questions. To give an example of how horrendous, people who were 50% sure of their answers to question 10 got it right only 13% of the time; people who were 100% sure only got it right 44% of the time. Obviously those numbers should be 50% and 100% respectively.
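For anyone re-running this on the public data set, the curve is just the fraction correct within each stated-confidence bucket; here's a minimal sketch with hypothetical inputs:

```python
import pandas as pd

def calibration_curve(stated_pcts, correct, bins=(0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100)):
    """Bucket answers by stated confidence and return the percent correct per bucket.

    stated_pcts: stated probabilities in percent, pooled over people and questions
    correct: 1 if that answer was right, 0 otherwise
    Perfect calibration puts every bucket on the diagonal (the red line above).
    """
    df = pd.DataFrame({"p": stated_pcts, "correct": correct})
    df["bucket"] = pd.cut(df["p"], bins=list(bins), include_lowest=True)
    return df.groupby("bucket", observed=True)["correct"].mean() * 100

# e.g. on question 10, people saying "50%" were right only ~13% of the time.
print(calibration_curve([50, 50, 50, 50, 90, 90], [0, 0, 0, 1, 1, 0]))
```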
This builds upon results from previous surveys in which your calibration was also horrible. This is not a human universal - people who put even a small amount of training into calibration can become very well calibrated very quickly. This is a sign that most Less Wrongers continue to neglect the very basics of rationality and are incapable of judging how much evidence they have on a given issue. Veterans of the site do no better than newbies on this measure.
XI. Wrapping Up
To show my appreciation for everyone completing this survey, including the arduous digit ratio measurements, I have randomly chosen a person to receive a $30 monetary prize. That person is...the person using the public key "The World Is Quiet Here". If that person tells me their private key, I will give them $30.
I have removed 73 people who wished to remain private, deleted the Private Keys, and sanitized a very small amount of data. Aside from that, here are the raw survey results for your viewing and analyzing pleasure:
(as Excel)
(as SPSS)
(as CSV)