Cross-posted to the EA forum.
Summary
- In August 2020, we conducted an online survey of prominent AI safety and governance researchers. You can see a copy of the survey at this link.
- We sent the survey to 135 researchers at leading AI safety/governance research organisations (including AI Impacts, CHAI, CLR, CSER, CSET, FHI, FLI, GCRI, MILA, MIRI, Open Philanthropy and PAI) and a number of independent researchers. We received 75 responses, a response rate of 56%.
- The survey aimed to identify which AI existential risk scenarios (hereafter simply “risk scenarios”) those researchers find most likely, in order to (1) help prioritise future work exploring AI risk scenarios, and (2) facilitate discourse and understanding within the AI safety and governance community, including between researchers who
...
Thanks for your comment! I think your critique is justified.
My best guess is that this consideration was not salient for most participants and probably didn't distort the results in meaningful ways. That's of course hard to verify, though, and DanielFilan's comment suggests it was not irrelevant.
We are aware of a number of other limitations, especially with regard to the mutual exclusivity of the different scenarios. We've summarised these limitations here.
Overall, you should take the results with a grain of salt: they should be seen only as signposts indicating which scenarios people find most plausible.