AE Studio is launching a short, anonymous survey for alignment researchers, in order to develop a stronger model of various field-level dynamics in alignment.
This appears to be an interestingly neglected research direction that we believe will yield specific and actionable insights related to the community’s technical views and more general characteristics.
The survey is a straightforward 10-15 minute Google Form with some simple multiple choice questions.
For every alignment researcher who completes the survey, we will donate $40 to a high-impact AI safety organization of your choosing (see specific options on the survey). We will also send each alignment researcher who wants one a customized report that compares their personal results to those of the field.
Together, we hope to not only raise some money for some great AI safety organizations, but also develop a better field-level model of the ideas and people that comprise alignment research.
We will open-source all data and analyses when we publish the results. Thanks in advance for participating and for sharing this around with other alignment researchers!
Survey full link: https://forms.gle/d2fJhWfierRYvzam8
I timed how long it took me to fill in the survey: 30 minutes. I could probably have done it in 15 if I had skipped the optional text questions. This is to be expected, however. Every time I've seen someone guess how long it will take to respond to their survey, the estimate has been off by a factor of 2-5.
Thanks for taking the survey! When we estimated how long it would take, we didn't count how long it would take to answer the optional open-ended questions, because we figured that those who are sufficiently time constrained that they would actually care a lot about the time estimate would not spend the additional time writing in responses.
In general, the survey does seem to take respondents approximately 10-20 minutes to complete. As noted in another comment below,
this still works out to donating $120-240/researcher-hour to high-impact alignment orgs (plus whatever value there is in comparing one's individual results to those of the community), which hopefully is worth the time investment :)
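For concreteness, the $120-240/researcher-hour figure is just the flat $40-per-completion donation scaled by completion time (a quick illustrative calculation, nothing more):

```python
# Donation rate implied by a flat $40 donation per completed survey,
# at the low and high ends of the observed completion-time range.
donation_per_survey = 40  # dollars donated per completion

for minutes in (10, 20):
    rate = donation_per_survey * 60 / minutes  # dollars per researcher-hour
    print(f"{minutes} min/survey -> ${rate:.0f}/researcher-hour")
```

So faster respondents generate donations at a higher effective hourly rate, since the donation amount is fixed per completion.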
This seems like a great effort. Back in 2022 we ran a small "Pain Points in AI Safety" survey that received quite a few answers; you can see the final results here. Beware that it has not been updated in ~2 years.
Thanks for sharing this! Will definitely take a look at this in the context of what we find and see if we are capturing any similar sentiment.
What do you mean by an alignment researcher? Is somebody who did AI Safety Fundamentals an alignment researcher? Is somebody participating in MATS, AISC or SPAR an alignment researcher? Or somebody who has never posted anything on LW?
When do you expect to publish results?
Ideally within the next month or so. There are a few other control populations still left to sample, as well as actually doing all of the analysis.
Note: The survey took me 20 mins (but also note selection effects on leaving this comment)
Definitely good to know that it might take a bit longer than we had estimated from earlier respondents (with the well-taken selection effect caveat).
Note that if it takes between 10 and 20 minutes to fill out, this still works out to donating $120-240/researcher-hour to high-impact alignment orgs (plus whatever value there is in comparing one's individual results to those of the community), which hopefully is worth the time investment :)