Overview

I would like there to be a LessWrong Community Census, because I had fun playing with the data from last year and there are some questions I'm curious about. It's also an entertaining site tradition. Since nobody else has stepped forward to make the community census happen, I'm getting the ball rolling. This is a request for comments, constructive criticism, careful consideration, and silly jokes on the census.

Here's the draft.

I'm posting this request for comments on November 1st. I'm planning to incorporate feedback throughout November, then on December 1st I'll update the census to remove the "DO NOT TAKE" warning at the top and make a new post asking people to take the census. I plan to let it run through all of December, close it in the first few days of January, and then get the public data and analysis out sometime in mid to late January.

How Was The Draft Composed?

I copied the question set from 2022, which itself took extremely heavy inspiration from previous years. I then added a section sourced from the questions Ben Pace of the LessWrong team had been considering in 2022, and another section of questions I'd be asking on a user survey if I worked for LessWrong. (I do not work for LessWrong.) Next I fixed some obvious mistakes from last year (in particular, allowing free responses on the early politics questions), changed some things that change every year like the Calibration question, and swapped around the questions in the Indulging My Curiosity section.

Changes I'm Interested In

In general, I want to reduce the number of questions. Last year I asked about the length, and overall people thought it was a little too long. Then I added more questions (the LW Team Questions and Questions The LW Team Should Have Asked sections). I'm inclined to think those sections aren't pulling their weight right now, but I do think it's worth asking good questions on the census about how people use the website.

I'm likely to shrink down the religion responses, as I don't think checking the different variations of e.g. Buddhism or Judaism revealed anything interesting. I'd probably put them back to the divisions used in earlier versions of the survey.

I'm sort of tempted to remove the Numbers That Purport To Measure Your Intelligence section entirely. I believe it was part of Scott trying to answer a particular question about the readership, and while I love his old analyses, they could make space for current questions. The main arguments in favour of keeping them are that they don't take up much space and they've been around for a while.

The Detailed Questions From Previous Surveys and Further Politics sections would be where I'd personally start making some cuts, though I admit I just don't care about politics very much. Some people care a lot about politics, and if anyone wants to champion those sections that seems potentially fun. This may also be the year that some of the "Detailed Questions From Previous Surveys" get moved into the survey proper or dropped.

I'd be excited to add some questions that would help adjacent or subset communities. If you're with CFAR, The Guild of the Rose, Glowfic, or an organization like that I'm cheerful about having some questions you're interested in, especially if the questions would be generally useful or fun to discuss. I've already offered to the LessWrong team directly, but I'll say again that I'd be excited to try and ask questions that would be useful for you all.

You don't actually have to be associated with an organization either. If there's a burning question you have about the general shape of the readership, I'm interested in sating other people's curiosity and I'd like to encourage you to chime in. I have a moderate bias towards keeping questions that have been in lots of prior versions of the census and I'd ideally like the final version of the survey to have about the same number of questions as last year, but that's the main constraint. My best compilation of previous versions is in this google sheet. 


Question themes I would like:

  1. Should open-source LLMs be allowed or regulated out of existence?
  2. What are your AI timelines?

AI questions we currently have:

P(GPT-5 Release) 
What is the probability that OpenAI will release GPT-5 before the end of 2024?

Singularity 
By what year do you think the Singularity will occur?

Tangential AI questions we currently have:

P(Global catastrophic risk)
What is the probability that the human race will make it to 2100 without any catastrophe that wipes out more than 90% of humanity?

P(Simulation)
What is the probability that our universe is a simulation?

I think Singularity basically gets at your AI timelines question, though not in a lot of detail. You said themes; are you hoping for a subsection of multiple questions trying to ask about that from different angles, or one good question for each? 

I'd be tempted to reword the open-source LLM toward something like "How would you describe your opinion on open-source LLMs? [pro-open-source, lean open-source, neutral, lean regulated, pro-regulation]" or something along those lines. I also have an instinct to define LLM with context in the question ("How would you describe your opinion on open-source Artificial Intelligence such as LLMs" perhaps) but maybe that's unnecessary here.

Thinking in terms of "the Singularity" might not be the most effective way to frame the timelines. I prefer "When will AI be able to do all tasks that expert human knowledge workers currently do?", which seems better at getting people to give a timeline.

Do you have a stance on "all the tasks that expert human knowledge workers currently do?" vs "all the intellectual tasks that humans currently do?" I ask because "expert human knowledge workers" is an uncommon phrase. As with many uncommon phrases, we have more latitude on a LessWrong census than with the general population, but it's not LW-specific jargon either.

Changelog, since Google Forms is basically the only Google Drive tool with no versioning.

2023-11-03

  • Removed the Virtues questions and the April Fools question from Section 10: LessWrong Team Questions. The Willingness To Pay questions survive for now since I have some reason to believe they aren't totally hypothetical, but they're on thin ice. -3 questions.
  • Added one question to section 11: Questions the LessWrong Team should have asked, attempting to get at a discussion norms preference. Realistically this question is terrible since it's going to point people at four different essays, two of which are kinda lengthy. +1 question.
  • Removed all of Section 14: Bonus Politics Questions except for "How would you describe your level of interest in politics?" Replaced it with Tailcalled's factor analysis questions. @tailcalled, please check whether you think the question titles will mess with anything; it's useful for me to have a shorthand way to refer to a question, but I had to do a little bit of creative summarizing. -11 + 20 = +9 questions
  • Section 9 renamed from "Other Traditional LessWrong Census Questions, Which Used To Be Called More Complicated Probability Questions" to "More Complicated Probability Questions." Added AI Knowledge Work question. @ChristianKl does this look like it gets at your AI timelines question? (I'm leaning towards the banning LLMs question as being a bonus politics question.) +1 question. 

+1 (from Section 9: More Complicated Probability Questions) +1 (from section 11: Questions the LessWrong Team Should Have Asked) -3 (from section 10: LessWrong Team Questions) -11 (from Bonus Politics Questions) +20 (to Bonus Politics Questions) = net increase of 8.

Not touched: I didn't add Sherrinford's research idea. That's not a principled stance against it; that's me feeling like more people wanted more politics questions and so pausing for a moment before adding even more questions.

Changelog, 2023-11-28

  • Retitled Tailcalled's questions so as to mess with the results as little as possible. They're now titled "TC[#]", in the hopes that simple numbers don't mess with anything.
  • Added a "Where did you find this survey from?" question
  • Removed P(Space), SRS, as well as a couple questions I lost track of from Community and LW sections. Removed the question about whether they also took the ACX census, since I think this one will come out first.
  • Added titles to questions without titles.
  • Added a question about cross-group identity symmetrical to the one I expect the ACX census will use.

I don't expect the question titles to mess with much, but it's hard to cheaply know for sure, due to the hyper-empirical approach used to construct the test. It should be feasible to recognize after the fact if there are any issues, and I assume issues would be localized to one or a few items so one could just look at the other items for comparison.

Comment thread for a more open ended question: How much should the census lean towards consistently asking the same questions vs veering wildly into new questions? (I'm assuming that the total question count needs to stay basically constant.)

If we ask the same questions every year, we get to track changes and we get to average out some noise. Some questions ask about things I'd expect to change year by year, like age or income. 

If we ask new questions, we get to find out new things we'd never learn from the previous questions. There's a fun, playful kind of curiosity in just asking things you want to know.

Right now I'm leaning a little more towards the new, but it's also been about five years since the last survey with big turnout so there's more value than usual in getting the baselines clear. 

Comment thread for discussion of what to remove from the census. I think last year was about the right size, and the current draft added some questions, so I'm interested in making a few cuts. If you think something from last year is important not to cut, this is also a good place to state that.

Deletion suggestions:

  • Seems redundant: "Religious Background What is your family's religious background, as of the last time your family practiced a religion?"
  • Maybe one of the aliens questions is enough.
  • P(Pastafarianism)
  • "Best Virtue
    Which of the twelve virtues of rationality do you think the LessWrong community scores highest on at the moment?" 
    I don't really understand the question. Same for "worst virtue".
  • The willingness-to-pay for the Sequences, HPMOR, and the Codex. This could just be asked outside of the survey.
  • ideas for LW April Fools’ Day (I think you don't need a centralized question for that.)
  • "Tax Opinion": I think it's not useful to answer "higher" or "lower"
  • "Great stagnation": seems to specific

I'm torn on Family Religion and Religious Background. On the one hand, those questions have been in every census I have data for and I like being able to spot long term trends. On the other hand, it's basically tracking whether your family changed faiths since you grew up, and that's not super interesting on its own. Okay, so what does the ideal merged form of that question look like? (Question addressed to anyone including me; I don't have a good idea right now but I'll come back to it later.)

I will defend the Pastafarianism question as mostly a joke intended to make people smile.

Also, it's the replacement lizardman constant question, though it's not a great lizardman question.

Most of section 10: LessWrong Team Questions is kiiiiind of acting as placeholders for the team to suggest which questions are most important to them. Dropping the virtue questions makes sense; anything that suggests going elsewhere to read a thing before coming back is dubious anyway. In contrast, I'm curious what shape you have in mind for the willingness-to-pay questions; this kind of broad survey seems a good place to ask that kind of thing. I've been dipping my toes into the publishing world a little lately, and there are a lot of different ways to aim at different price points.

LW April Fools' Day... idk, seemed funny to me when I was looking at Ben's list, but it's not a question I'm personally curious to know the answer to.

"Tax Opinion" and "Great Stagnation" are on their way out unless anyone shows up to defend them, possibly almost all of section 14: bonus politics questions should just be swapped out for politics questions people are currently interested in.

I think willingness-to-pay questions are in general not very reliable, because they are hypothetical. Moreover, they might give the survey a marketing flavor.

Comment thread for questions people want to add to the census. You don't need to articulate the exact question; "We should ask something about pets, but I'm not quite sure what exactly I'm getting at or how to phrase it" is fine.

I'd love to add some questions about discussion norms and expectations on LessWrong.

Over the last year there was some pretty strong disagreement on how to argue with other users. I know of about two and a half posts on how rationalist discourse ought to be conducted, and I'm curious what kind of consensus any of them have. (I suspect mentioning the disagreement can reawaken the disagreement, and I would be sad if the object-level disagreement broke out here. If there is a crux that could be answered by a community census though, there's conveniently one planned anyway!)

I haven't checked what questions you currently ask, so maybe all the stuff below is superfluous or off the mark; apologies if so.

Anyway, re: discussion norms & expectations, I figure that questions about mood affiliation might work fine?

E.g. "On a scale of 1 to 5, how pleasant do you find it to engage with commenters on LW?" Or "In comparison to other sites on the Internet, how pleasant do you find LW for discourse?" Or other questions in a similar vein.

Those questions might need to be disambiguated from questions about how (un)pleasant it is to post on LW.

And since one claim was that some styles of discourse push authors away, related questions on that topic would be: "In the last year, I have written less/the same number of/more LW posts", same for comments. And questions for LW writers who used to write posts, but who now write less or not at all.

I'm also interested in how crossposting authors interact with LW. E.g. we now have a bunch of Substack crossposts here.

Overall, engagement on a feed-based site like LW seems more directly downstream from authors than from commenters, so I'm interested in questions re: how to get more people to (cross)post their stuff here. Especially non-AI stuff. And I wonder how current discussion norms & commenter behavior affect the willingness of authors to do so.

Mood affiliation questions could give a kind of baseline. Offhand, "On a scale of 1 to 7, with 1 being very unpleasant, 4 being neutral, and 7 being very pleasant, how do you feel about engaging with commenters on LW?" seems serviceable? If I went down this path for my own curiosity though, it would be in pursuit of something more specific about figuring out what the expectations or norms are.

I'm sort of suspicious that "in the last year, I have written less/the same number of/more LW posts" wouldn't get a useful answer because the selection effect has already happened. I'm even assuming I reach people in the first place! Like, if you went from writing a post a month in 2021 to writing zero posts in 2022, and then also had zero posts in 2023, you'd answer "the same number of LW posts." Asking over a longer timespan would work; asking people who used to write posts and now write less why they stopped could also work. Though for the second, I'd be tempted to put the question somewhere else in the census so as to not blatantly prime people. (Does priming work like that? I think the replication crisis suggests no.)

At least one question about crossposts seems worthwhile, but I don't know what to ask. "If you crosspost, about how hard was it to set that up?" But then we're trying to strain information out of a small subset of users. "Do you write elsewhere?" and "Do you crosspost to LW?" perhaps catch more.

I like the train of thought!

In general, I want to reduce the number of questions.

I'm gonna suggest 20 questions (should take maybe 2.3 minutes to fill out), which of course is in tension with you wanting to reduce the number of questions. Maybe you could split them out into some extra-optional part of the survey?

I've been factor-analyzing hundreds of statements for the purpose of creating a measure of ideology/worldview. I haven't finished the test yet, but it would still be interesting to include some of the top items for the factors so that the LessWrong data can be observed and compared to my data, once it's published.

Each item has 5 response options: "Disagree strongly", "Disagree", "Neither", "Agree", and "Agree strongly". In total, there are 5 factors, which should be close to independent. Some of the top loading items for each factor are (with "(R)" meaning that it is reverse scored):

Factor 0:

  • Companies that focus on profit buy up and reduce the wages of companies that try to pay workers more.
  • The stock market fails to punish powerful people for poor investments because people in power just get the government to bail them out using taxpayer money.
  • The government has regulations that make financial markets work well and fairly. (R)
  • The government knows well how to balance costs and benefits. (R)

Factor 1:

  • Academia has been taken over by a woke culture which suppresses dissent.
  • Minority groups tend to be biased and favor wokeness over fairness.
  • You can see from the gender ratios in income and work areas that there's still tons of sexism around. (R)
  • Climate science is critically important due to global warming. (R)

Factor 2:

  • One of the greatest benefits of art is that management can place it in workplaces to set a calming, productive tone.
  • Brand reputation is the main way consumers know that products are safe and high-quality.
  • Fashion is a good way to build confidence.
  • Democratic elections are basically polls about who you trust to lead the country, so democratically elected leaders are considered especially trustworthy.

Factor 3:

  • Teaching will need to start incorporating AI technology.
  • Genetically modified organisms will make farming more effective in the future.
  • AI cannot replace designers as computers lack creativity. (R)
  • Elon Musk's project of colonizing Mars is a useless vanity project. (R)

Factor 4:

  • To save the environment, people should eat seasonal locally grown food instead of importing food from across the world.
  • Claims that it's soon the end of the world are always hyperbolic and exaggerated.
  • It is important that the news is run independent of the government so it can serve as a check on those in power.
  • The moon landing was faked. (R)

This sure seems like a well written set of very politically charged questions. Thank you for stepping into the gap where I sure would not have added anything by myself.

Right now I'm tempted to remove most of section 14: Bonus Politics Questions and replace it with your set here. There are eleven questions there now, of which the only two I like are "If you are an American, what party are you registered with?" and "How would you describe your level of interest in politics?" Do your twenty work as a set, or do you by chance have a favourite ten?

Context of that last question: if you had a clear favourite ten, I could keep the two bonus politics questions I liked and replace the others with your ten, giving us 12 instead of last year's 11. At ~22 I'd want to make cuts elsewhere to try and keep the length from growing too much.

Do your twenty work as a set, or do you by chance have a favourite ten?

The factors work as a set; they have been selected based on a factor analysis of over 400 statements, to capture things which influence as much of one's worldview as possible. But this makes the item lists for each factor essentially arbitrary, such that they can be easily substituted or expanded or shortened, without changing the core idea much.

I guess if you want to shorten it by 2x, you could remove the 2nd and the 4th item for factors 0, 1, and 2; remove the 3rd and the 4th item for factor 3; and remove the 2nd and the 3rd item for factor 4. I wouldn't recommend this though, as the items are usually only 0.4-0.6 correlated with the factors, so to obtain more accurate measurement of the worldview, more items would help. (With 4 items that each have a correlation of 0.5 with the underlying factor, the scale's correlation with the factor would be 0.76; meanwhile with only 2 items, it would be 0.63.)
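For anyone who wants to check those figures, here's a minimal sketch of the calculation under the standard assumptions of a single common factor, standardized items, and uncorrelated item errors (my assumptions for illustration; the exact model behind the quoted numbers may differ, and the function name is made up):

```python
import math

def scale_factor_correlation(k: int, item_loading: float) -> float:
    """Correlation between a k-item sum score and the underlying factor,
    assuming each standardized item correlates `item_loading` with the
    factor and item errors are uncorrelated."""
    # Covariance of the sum score with the factor: k * loading.
    cov_sum_factor = k * item_loading
    # Variance of the sum score: k unit item variances plus k*(k-1)
    # inter-item covariances, each equal to loading**2 under this model.
    var_sum = k + k * (k - 1) * item_loading ** 2
    return cov_sum_factor / math.sqrt(var_sum)

print(round(scale_factor_correlation(4, 0.5), 2))  # 0.76
print(round(scale_factor_correlation(2, 0.5), 2))  # 0.63
```

Both values match the 0.76 and 0.63 quoted above, which is the argument for keeping four items per factor rather than two.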

I would not like if the question "If you are an American, what party are you registered with?" were one of very few politics questions. It is too country-specific.

I have a research idea in mind - I would like to know how certain expectations shape people's decisions. In addition to certain questions already in the survey, the question suggestions for this are:

  1. Were you surprised by the capabilities of ChatGPT?
  2. Self-rate your knowledge of:
     • Global income and wealth distributions
     • AI
     • Geopolitics
     • Climate change
     • Practical ethics
     • Animal suffering
     • Effective interventions to help the poor
  3. Self-rate your social skills on a scale from 0 to 10.
  4. Suffering of poor people that live today touches me emotionally: 0 to 10 scale
  5. Suffering of animals that live today touches me emotionally: 0 to 10 scale
  6. Whether people 10,000 years in the future exist touches me emotionally: 0 to 10 scale
  7. Whether humanity will go extinct within the next 100 years touches me emotionally: 0 to 10 scale
  8. I rate my expectations as:
     • insert "Noisy to well-calibrated" scale
     • insert "Biased towards optimism to biased towards pessimism" scale
  9. I believe that the median human's life in 2040 compared to today will be (your median expectation):
     • better than today
     • worse than today
     • doesn't apply because humans will be extinct
     • other answer:
  10. I do not have more children than I have because:
     • Lack of a partner
     • Unwilling partner
     • This is my ideal family size
     • More are planned or expected
     • I don't have time
     • Personal finance reasons
     • Personal biology reasons
     • It is more important to help others who exist
     • I think the future is not livable
     • I think they would be born into a short life and/or suffer
     • Later is better
     • Other reasons:
  11. My overall happiness: 0 to 10 scale
  12. I expect to live to an age of:
  13. I save a relevant amount of money or other resources for old age:
     • No, because I do not expect to live long enough
     • No, because I expect an age of abundance
     • No, but I think I should
     • No, for other reasons
     • Yes

I would benefit from hearing what secondary sources (i.e., not papers or blog posts written by researchers about their research) people find useful for learning about AI alignment research.

Hrm. The laziest version of that is a free response section. A slightly better version might be multiple-select checkboxes with an "Other, fill in your own" option. What secondary sources are there?

If I keep on that thought and combine it with an inclination to make questions be answerable by as many people as possible, I notice I find out about new AI alignment research mostly via Twitter. (I am not an AI researcher.) Would you only be interested in answers from researchers?

I think a free response section would be fine. For suggestions for checkboxes, I'd start with this survey I ran in 2022 and comments on that post.

I'm not only interested in answers from researchers, but it would be good to break it down by that.

Maybe that is a question for the Open Thread or just a general forum question instead of a survey question?

A survey question gets me more responses, and more representative ones.

In which sense do you need the answer to be "representative"?

I'd like it to cover the community of people interested in these resources, and not be selected for people who read open threads or people who are willing to answer publicly.

The census answers you'll get to read are the census answers people are willing to have be public. I guess it's not attached to their names, which is maybe what you meant?

"MIddle Eastern" has a typo.

A possible question I'd be vaguely curious to see results for: "Do you generally disagree with Eliezer Yudkowsky?", and maybe also "Do you generally disagree with popular LessWrong opinions?", left deliberately somewhat vague. (If it turns out that most people say yes to both, that would be an interesting finding.)

I am personally interested in questions relating to mental/neurological conditions (e.g. depression, autism, ADHD, dyslexia, anxiety, schizophrenia, etc.)

It's fine to include my responses in summaries from the dataset, but please remove it before making the data public (Example: "The average age of the respondents, including row 205, is 22.5")

It's not clear to me what this option is for. If someone doesn't tick it, it seems like you are volunteering to remove their information even from summary averages, but that doesn't make sense because at that point it seems to mean "I am filling out this survey but please throw it directly in the trash when I'm done." Surely if someone wanted that kind of privacy they would simply not submit the survey?

In 2022, I was advised to make the privacy state of answers clear (that is, what would be released in the public dataset and what wouldn't be) so I put three options for the only required question on the census.

  1. Release the responses, including the row
  2. Use the responses when I summarize the census, but don't release the row
  3. Don't use the responses to summarize, and don't release the row.

Note that it's a required radio select: you have to pick an answer before it will let you hit submit. This year I removed 3, because I wasn't going to do anything with those responses so why bother collecting them. 

The main argument I see for bringing 3 back is to say I won't use the responses to summarize and won't release the row, but will show it to the LessWrong team. That gives people a way to potentially exert a little influence on what the devs are up to without showing up in the public statistics. I don't think that's a strong argument though since there's lots of ways to make the LW team aware of feedback.

(Correction, by "in 2022" I mean "for the 2022 census," which actually happened in early 2023.)

I think you're misinterpreting. That question is for opting in to the highest privacy option. Not checking it means that your data will be included when the survey is made public. Wanting to not be included at all, even in summaries, is indicated by simply not submitting any answers.

Minor point: I think it would be useful to have, at the top of the census, a box for people to click "I see this census, but don't want to fill it out". Such an option could help with understanding what % of active users saw the census but didn't fill it out, which is information that, while I can't immediately see how it is valuable, might be found to be valuable in the future.

Huh. So, the instructions currently say to fill things out starting at the top and, if you decide you're done, to scroll to the bottom and hit submit. Only the very first question is required (well, the first two at the moment, but one of those is going to go away when the census opens), so in theory this should kind of happen automatically. It sort of looks like this happens? There's a long left tail of people filling out the basic demographic info and then submitting that. People check just the first required box (or the first box plus, like, Age, Country, and Race) and then hit submit.

Do you think it needs to be an explicit "don't want to fill it out" box?

The slightly sneaky thing would be to make the link such that I know how many people clicked on it. I'm currently lightly opposed to doing that since it feels, well, slightly sneaky, but it's also pretty standard practice for basically every survey or feedback link I've ever used in a corporate setting.
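To make the idea concrete, here's a minimal sketch of what a click-counting link could look like; this is purely my illustration with made-up URLs and routes, not anything the census uses, and in practice a link shortener with built-in analytics would do the same job with less effort:

```python
# A tiny redirect service that counts clicks, then forwards visitors to the
# real form. The form URL and route names below are placeholders.
from flask import Flask, redirect

app = Flask(__name__)
CENSUS_URL = "https://forms.google.com/your-census-form"  # placeholder URL
clicks = 0  # in-memory counter; not persistent or thread-safe, fine for a sketch

@app.route("/census")
def census():
    global clicks
    clicks += 1  # count every visit to the short link
    return redirect(CENSUS_URL)

@app.route("/census/count")
def count():
    return {"clicks": clicks}  # Flask serves dicts as JSON responses

if __name__ == "__main__":
    app.run()
```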