I posted a quick 10-question survey, and 100 users filled it out. Here are the results!

1. Pick the answer most people will not pick.

  • HPMOR (26%)
  • SlateStarCodex (26%)
  • The Sequences (30%)
  • A dust speck in the eye (18%)

Well done, everyone who picked the dust speck! I myself won this, and am pleased.

My reasoning was that it was the most distinctive in type (the rest were all 'big things people read') and so would be considered obvious, thus rendering it non-obvious. I now overconfidently believe LWers operate at level 2, and that I can always win by playing level 3. (I will test this again sometime in the future.)

My housemates point out that we should all have rolled a 4-sided die and picked that option, which would have given us some chance of all 100 of us winning if we landed a perfect 25% on each option. So now I feel a little bit sad, because I sure didn't think of that.

2. To the nearest 10, at least how much karma makes a post worth opening?

(Plus one selection of "200+", not shown in the answer chart.)

The median answer was 20, the mean was 28, and the st dev was 25, all to the nearest whole number.
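For the curious, the summaries throughout this post are plain medians, means, and sample standard deviations. A minimal sketch with Python's statistics module, using placeholder data since the raw responses aren't reproduced here:

```python
import statistics

# Placeholder karma thresholds, NOT the real survey responses.
karma_thresholds = [0, 0, 10, 20, 20, 30, 40, 50, 80, 200]

print(statistics.median(karma_thresholds))  # 25.0 on this placeholder list
print(statistics.mean(karma_thresholds))    # 45.0
print(statistics.stdev(karma_thresholds))   # sample standard deviation, ~59.7
```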

3. How much do you think most LWers believe in the thesis of Civilizational Inadequacy?

The average was 6.08, and the st dev was 1.4 (2 s.f.).

Don't have much to say here.

4. How much do you believe in the thesis of civilizational inadequacy?

Average: 6.13. St dev: 1.7 (2 s.f.).

On average, we had good models of ourselves as a community: the mean guess at others' belief (6.08) nearly matched the mean actual belief (6.13). Will be interesting to see how people's accuracy here correlates with the other questions.

5. Here are 10 terms that have been used on LW. Click the terms you have used in conversation (at least, say, 10 times) because of LessWrong (and other rationalist blogs).

Here they are, in order of how many clicks they got:

  • Existential Risk (64)
  • Coordination Problem (61)
  • Bayesian Update (58)
  • Common Knowledge (53)
  • Counterfactual (51)
  • Goodhart (47)
  • Slack (38)
  • Legibility (in the sense of James C. Scott's book "Seeing Like a State") (31)
  • Asymmetric tools / asymmetric weapons (28)
  • Babble (or Prune) (17)

On average, people had used 44.8% of these terms in conversation at least 10 times (448 total clicks across 100 respondents and 10 terms). Which is... higher than I'd have predicted.

6. How much slack do you have in your life?

Average: 5.36. St dev: 2.4 (2 s.f.).

I'd been quite worried it would skew hard in the low direction, but it seems like there's a fair number of people here who are kind of doing okay. Perhaps everyone has more slack due to covid? But it's weirdly bimodal, and I didn't have a theory that predicted that.

7. How many dollars would you pay for a copy of a professionally edited and designed book of the best essays as voted in the 2018 LessWrong Review? (including shipping costs)

Average: $12.30. Median: $10.

Well, that's good to know. If we want it to sell at more than that, we need to make it more attractive for y'all...

8. How happy are you with the work of the LessWrong Team?

Average: 7.21. St dev: 1.6.

The text on either end was about whether the LW team has been strongly 'net negative' or 'net positive' in its impact on the site.

Overall, that's 79% of people giving 7-9, 17% ambivalent (5-6), and 4% thinking the team has been net negative. So overall that seems pretty good to me. Will ask more pointed questions in the future, but it was good to see that the sign is quite positive overall.

9. When you feel emotions, do they mostly help or hinder you in pursuing your goals?

Average: 5.6. St dev: 2.1.

Interesting. Of note: if you're typical of this set, then going to a CFAR workshop would, on average, increase your answer to this question by 0.84, going by the data from their longitudinal study (data that I discussed here). That's if you haven't already been, of course.

10. In a sentence or two, what's LessWrong's biggest problem? (optional)

This one was fun. Some interesting ones:

  • Nobody knows how to write short, concise posts, including me.
  • My high-effort posts don't get enough upvotes.
  • Play, humor, and levity seem kind of underutilized compared to the Sequence days, and that makes me sad.
  • Level of some AI posts is intimidating
  • It needs better real time interactive tools for debate. E.g. you could attribute karma for just a section of the post, not the whole post, and comment and expand on sections of the post (using maybe tip-boxes?) while reading the post.
  • There's no link to r/rational.
  • Discussion norms stifle pursuit of truth (too much focus on “prosocial” / “polite”) etc., people’s feelings, etc.
  • Hasn't cracked the central problem in getting generative pairings of people together rather than chains of criticism and responses.

I also resonated a bit with whoever answered "lying motherfuckers who practice 'instrumental rationality' instead of telling the goddamned truth". Although I think on net we're doing a good job with this on LW.

Over half the respondents said something. Here's a spreadsheet with the full responses.

Final Thoughts

I looked for interesting correlations and checked 15 of them:

  • Most were below 0.1, with one or two nearing 0.2; I discarded all of these.
  • There was a strong correlation between what people believed about CivIn and what people believed about the community, a correlation of 0.62. It's basically a measure of the strength of the typical mind fallacy around here. (See the sketch after this list for how this number is computed.)
  • I find myself a bit confused about how to calculate the Bayesian truth serum score correctly regarding civilizational inadequacy. I'm not sure how to calculate something that isn't just the 0.62 number above. Here's the whole data set. Can someone help? If you figure it out, I'll give you a strong upvote and add it to the post.
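For reference, the 0.62 number above is a plain Pearson correlation between the two answer columns. A minimal sketch (the variable names are mine, and the data below is placeholder, not the real spreadsheet):

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Placeholder columns; the real per-respondent answers live in the spreadsheet.
own_belief = [6, 7, 5, 8, 6]       # question 4: own belief in CivIn
community_guess = [6, 7, 6, 7, 5]  # question 3: guess at the community's belief
print(pearson_r(own_belief, community_guess))
```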

Thank you all for answering! Anyone got any other question ideas?

Comments (21)

I think that as well as noting the means and medians of the questions about "how much karma" and "how many dollars", it's worth pointing out explicitly that in both cases the modal value was zero. I think the zeros reflect, at least for some of the people saying zero, positions that are somewhat different in kind from the non-zeros. (A rejection of the very idea that you decide whether a post is worth reading by looking at its score; the opinion that there's no need at all for the book.)

That won't always be true; someone who would pay $3 for the book will probably have answered zero. Still, the fraction of zeros seems like a relevant statistic here in both those cases.

> A rejection of the very idea that you decide whether a post is worth reading by looking at its score

Look at that, exactly the reason I picked 0.

Bimodal slack: my guess is this is mostly about being a student vs. having a job.

Interesting hypothesis, will keep it in mind.

> There was a strong correlation between what people believed about CivIn and what people believed about the community, a correlation of 0.62. It's basically a measure of the strength of the typical mind fallacy around here.

I don’t think that this necessarily represents typical minding so much as correct interpretation of limited evidence.

I have very good evidence of what I believe. I have limited evidence of what other people believe. I know that they may be different from me, but I don't necessarily know whether they will believe more or less. So using my own level of belief as a guide seems correct, rather than evidence that I believe everyone believes the same as me.

If I had been asked to give a distribution guess for the community belief level and it had a very sharp peak at my own belief level then that would better represent the typical mind fallacy. (The Bayesian truth serum papers used distributions for the community prediction.)

(Of note: of the 73 respondents whose own belief and community guess differed, only 57% guessed a value that was actually more popular in the community than their own belief level would have been as a prediction.)

Yeah, that seems right.

I wondered whether the bimodal slack curve was about "people who knew the term Slack" and "people who didn't", so I checked the correlation with whether the person said they used the term often in conversation. The correlation was very weak (0.1), so this wasn't a key factor. People really just have quite different feelings about their slack. That's so cool.

> But it's weirdly bimodal, and I didn't have a theory that predicted that.

I had a comment a year ago which would predict this. The idea is that we generate value from slack by using that slack to take unreliable/high-noise opportunities. But as long as the noise in those high-noise opportunities is independent, we should usually be able to take advantage of N^2 opportunities using N units of slack (because noise in a sum scales with the square root of the number of things summed, roughly speaking). In other words, slack has increasing marginal returns: the tenth unit of slack is far more valuable than the second unit. (A quick simulation of this scaling follows the list below.)

That suggests that individual people should either:

  • specialize in having lots of slack and using lots of unreliable opportunities (so they can accept N^2 unreliability trade-offs with only N units of slack), or
  • specialize in having little slack and making everything in their life highly reliable (because a relatively large amount of slack would need to be set aside for just one high-noise opportunity).
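To illustrate the square-root scaling (a quick check of my own, under the assumption that each opportunity's cost overrun is an independent zero-mean Gaussian draw): the slack buffer needed to cover n opportunities at a fixed confidence level grows like sqrt(n), so a fixed budget covers roughly n^2 opportunities. A minimal simulation sketch:

```python
import random

def buffer_needed(n_opportunities, noise_sd=1.0, trials=10_000, quantile=0.95):
    """Slack needed to absorb the summed noise of n independent opportunities.

    Each opportunity's cost overrun is a zero-mean Gaussian draw; we report
    the 95th-percentile total overrun across many simulated runs.
    """
    totals = sorted(
        sum(random.gauss(0, noise_sd) for _ in range(n_opportunities))
        for _ in range(trials)
    )
    return totals[int(quantile * trials)]

# The needed buffer grows roughly like sqrt(n): about 2x for every 4x
# opportunities, so a fixed slack budget covers roughly n^2 opportunities.
for n in (1, 4, 16, 64):
    print(n, round(buffer_needed(n), 2))
```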

I did roll a four-sided die for the first question, in fact. (Well, to be more precise, I rolled a six-sided die after precommitting to myself that I would continue rolling until the answer was in [1, 4].) Now I'm glad I did.

Same! (Except that I used the Google random number generator.)

For the record, I also thought of that. But I didn't actually do it. I thought the answer to the question would be more informative if people didn't randomize.

Maybe, but I think any change to the result caused by people randomizing is inherently part of the actual result here. But then, any change to the result caused by people thinking they shouldn't randomize because it would hamper the result is also part of the result.
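For anyone curious, the "roll a d6 until it lands in [1, 4]" trick a few comments up is rejection sampling. A minimal sketch in Python (the function name is mine):

```python
import random

def d4_from_d6():
    """Simulate a fair 4-sided die with a 6-sided die via rejection sampling.

    Roll the d6 repeatedly, discarding 5s and 6s; the surviving roll is
    uniform on {1, 2, 3, 4}. Expected number of rolls is 6/4 = 1.5.
    """
    while True:
        roll = random.randint(1, 6)  # fair d6
        if roll <= 4:
            return roll

# Pick a survey answer uniformly at random:
options = ["HPMOR", "SlateStarCodex", "The Sequences", "A dust speck in the eye"]
print(options[d4_from_d6() - 1])
```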

Aww thanks!!

> Well, that's good to know. If we want it to sell at more than that, we need to make it more attractive for y'all...

Or recognize that LW readers involved enough to respond to a "fun survey" are NOT the audience likely to buy a book full of concepts they've already read and discussed. This book could be massively successful with more casual or non-LW readers, even if core readers (and contributors) don't buy it.

Simulacra levels (and the fact that false consciousness can occur at any level, or several at once) help a lot with understanding why it feels like everyone is lying most of the time. That framework would have been highly helpful to a younger version of myself, along with someone explicitly admitting two things: that a bunch of the signaling game is either bullshit (read: it helps people coordinate to defect together, a la kakonomics) or actual scary levels of delusion born of the emotional pain of helplessness, and that admitting as much is low status / a losing move most of the time, which selects for it.

Did you mean to write this on this post?

In response to:

> I also resonated a bit with whoever answered "lying motherfuckers who practice 'instrumental rationality' instead of telling the goddamned truth".

(That was not obvious in advance, but I understand now.)

> 9. When you feel emotions, do they mostly help or hinder you in pursuing your goals?

I don't understand how to interpret the results of this question. Does a higher number mean the emotions help, or hinder?

Higher is help. 

I'll go back later and edit that in... feel free to ping me if I forget.