LESSWRONG

david reinstein

Comments
Unjournal evaluation of "Towards best practices in AGI safety & governance" (2023), quick take
david reinstein · 3d

Naturally this paper is several years old, but it still seems like the most prominent work on this topic, with 61 citations.
 
My own take: we need more work in this area...  perhaps follow-up work doing a similar survey, taking sample selection and question design more seriously.

I hope we can identify & evaluate such work in a timely fashion.

E.g., there is some overlap with 
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5021463

which "focuses on measures to mitigate systemic risks associated with general-purpose AI models, rather than addressing the AGI scenario considered in this paper".

I'm eager to hear other suggestions for relevant work to consider and evaluate.

Unjournal evaluation of "Towards best practices in AGI safety & governance" (2023), quick take
david reinstein · 3d

My own take: this suggests we need more work in this area, in particular a follow-up doing a similar survey while taking sample selection and question design more seriously.
 

Representative quotes from the evaluations

Sampling bias/selection issues

Potential sampling bias – particularly over-representation of safety-minded respondents

There’s a risk that the sample reflects the views of those already more inclined to endorse stringent safety norms. This is particularly important in light of the sample composition, as over 40% of respondents are affiliated with AGI labs.

 

Question selection/design, need for sharper questions

As the authors note, this [agreement] may be partly due to the high-level and generally uncontroversial framing of the statements (e.g., “AGI labs should conduct pre-deployment risk assessments”). But in their current form, the items mostly capture agreement in principle, rather than forcing respondents to grapple with the kinds of tradeoffs that real-world governance inevitably entails. 
For example, would respondents still support red teaming or third-party audits if they significantly delayed product releases?

[Emphasis added]

 

the paper states that the selected practices are extracted from (1) current practices at individual AGI labs and (2) planned practices at individual labs, among other sources.

... These results might suggest a selection bias where statements selected from labs' practices are agreed on by labs themselves ...

 

[suggestion to] introduce an inclusion/exclusion criterion to provide a better justification as to why some statements are selected.

 

Overstated claims for consensus?

[Paper] “findings suggest that AGI labs need to improve their risk management practices. In particular, there seems to be room for improvement when it comes to their risk governance.”

While one can agree with such a claim, it is difficult to see how this conclusion can be reached from the paper's results.


 

GovAI: Towards best practices in AGI safety and governance: A survey of expert opinion
david reinstein · 3d


By the way, just flagging that The Unjournal did an evaluation of this (post/discussion here -- I'll extend this with some more opinionated comments now). Overall the paper was taken to be a strong step, but one with important limitations, calling for further caveats and follow-up work.
 

GovAI: Towards best practices in AGI safety and governance: A survey of expert opinion
david reinstein · 3d

By "this is now the canonical collection" do you mean the ideas surveyed in the paper?  Do you think it's still canonical or is it now ~out-of-date?

The Unjournal's "Pivotal Questions" project
david reinstein · 2mo

Unjournal’s first Pivotal Question focuses on the viability of cultured meat. The post also gives concrete details of our process and proposed approach.

Announcing the Double Crux Bot
david reinstein · 2mo

I was indeed looking for something that could be used in a live conversation. 

Announcing the Double Crux Bot
david reinstein · 5mo

Is there a version of this bot (or something similar) that one can use with an LLM or on a website? I want to use this on a podcast without having to link it to a Slack workspace.

Which things were you surprised to learn are not metaphors?
david reinstein · 9mo

I only realised the latter when I saw the Dutch word for it, “middellandse zee”: the sea in the middle of the lands.

“Terranean” had never scanned separately to me.

Which things were you surprised to learn are not metaphors?
david reinstein · 9mo

Related: when you never realized a compound word had a literal meaning...

Cup + board -- a board to put cups on -- cupboard

Medi + terra -- the sea in the middle of the land -- Mediterranean

Etc.

Which things were you surprised to learn are not metaphors?
david reinstein · 9mo

I think the gut thing is usually metaphorical, though.

Posts

- Unjournal evaluation of "Towards best practices in AGI safety & governance" (2023), quick take (21d)
- GPT-5 is out (25d)
- The Unjournal's "Pivotal Questions" project (2mo)
- 'Chat with impactful research & evaluations' (Unjournal NotebookLMs) (1y)
- Calibration training for 'percentile rankings'? [Question] (1y)
- Checking public figures on whether they "answered the question": quick analysis from Harris/Trump debate, and a proposal (1y)
- A possible check against motivated reasoning using elicit.org (3y)