My own take is that this suggests we need more work in this area: follow-up work doing a similar survey, taking sample selection and question design more seriously.
Potential sampling bias – particularly over-representation of safety-minded respondents
There’s a risk that the sample reflects the views of those already more inclined to endorse stringent safety norms. This is particularly important in light of the sample composition, as over 40% of respondents are affiliated with AGI labs.
As the authors note, this [agreement] may be partly due to the high-level and generally uncontroversial framing of the statements (e.g., “AGI labs should conduct pre-deployment risk assessments”). But in their current form, the items mostly capture agreement in principle, rather than forcing respondents to grapple with the kinds of tradeoffs that real-world governance inevitably entails.
For example, would respondents still support red teaming or third-party audits if they significantly delayed product releases?
[Emphasis added]
The paper states that the selected practices are extracted from (1) current practices at individual AGI labs and (2) planned practices at individual labs, among other sources.
... These results might suggest a selection bias, where statements drawn from labs' practices are agreed on by the labs themselves.
[suggestion to] introduce an inclusion/exclusion criterion to provide a better justification as to why some statements are selected.
Overstated claims for consensus?
[Paper] “findings suggest that AGI labs need to improve their risk management practices. In particular, there seems to be room for improvement when it comes to their risk governance.”
While one can agree with such a claim, it is difficult to see how this conclusion can be reached from the paper’s results.
By the way, just flagging that The Unjournal did an evaluation of this (post/discussion here -- I'll extend this with some more opinionated comments now). Overall it was taken to be a strong step, but one with important limitations and a need for further caveats and follow-up work.
By "this is now the canonical collection" do you mean the ideas surveyed in the paper? Do you think it's still canonical or is it now ~out-of-date?
Unjournal’s first Pivotal Question, focusing on the viability of cultured meat — This post also gives concrete details of our process and proposed approach.
I was indeed looking for something that could be used in a live conversation.
Is there a version of this bot (or something similar) that one can use in an LLM or on a website? I want to use this on a podcast without having to link it to Slack.
I only realised the latter when I saw the Dutch word for this, “Middellandse Zee”: the sea in the middle of the lands.
“Terranean” had never scanned as a separate word to me.
Related: when you never realized a compound word had a literal meaning...
Cup board -- board to put cups on -- cupboard
Medi terrain -- between two continents -- Mediterranean
Etc.
I think the gut thing is usually metaphorical though
Naturally, this paper is several years old. But it still seems like the most prominent work on this, with 61 citations etc.
My own take: we need more work in this area... perhaps follow-up work doing a similar survey, taking sample selection and question design more seriously.
I hope we can identify & evaluate such work in a timely fashion.
E.g., there is some overlap with
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5021463
which "focuses on measures to mitigate systemic risks associated with general-purpose AI models, rather than addressing the AGI scenario considered in this paper".
I'm eager to hear other suggestions for relevant work to consider and evaluate.