This policy brief provides policymakers with concrete recommendations for how governments can manage AI risks.

Policy recommendations:
1. Mandate robust third-party auditing and certification.
2. Regulate access to computational power.
3. Establish capable AI agencies at the national level.
4. Establish liability for AI-caused harms.
5. Introduce measures to prevent and track AI model leaks.
6. Expand technical AI safety research funding.
7. Develop standards for identifying and managing AI-generated content and recommendations.

Comments:

From an x-risk perspective, I think this report is good. It's far from shovel-ready policy proposals, but it points in some reasonable directions and might advance the debate.

I think it's wrong on #5 (watermarking): what could Meta have done if LLaMa had been watermarked? And #7 seems to have little x-risk relevance.

Jason Crawford has a tweet thread on how liability law could be used to make AI safer:

https://twitter.com/jasoncrawford/status/1646894709032247296

In essence: use liability law and liability insurance to make the market price in externalities, so that the market is incentivized to solve the AI alignment problem.

Carefully arranged to bring all motion to a fully regulated stop forever. Yeah, no.