Here's my summary of the recommendations:
Well, I'm disappointed.
It's all about misuse risks and going faster to Beat China; there's nothing about accident or systemic risks. I guess "testing for national security capabilities" is probably, in practice, code for "some people will still be allowed to do AI alignment work," but that's not enough.
I really would have hoped Anthropic could be realistic and say "This might go wrong. Even if there's no evil person out there trying to misuse AI, bad things could still happen by accident, in a way that needs to be fixed by changing what AI gets built in the first place, not just by testing it afterwards. If this were like making a car, we should install seatbelts and maybe institute a speed limit."
Note: This is an automated crosspost from Anthropic. The bot selects content from many AI safety-relevant sources. Not affiliated with the authors, their organization, or LW.
In response to the White House's Request for Information on an AI Action Plan, Anthropic has submitted recommendations to the Office of Science and Technology Policy (OSTP). Our recommendations are designed to better prepare America to capture the economic benefits of powerful AI systems and to address their national security implications.
As our CEO Dario Amodei writes in 'Machines of Loving Grace', we expect powerful AI systems to emerge in late 2026 or early 2027. Powerful AI systems will have the following properties:
Our own recent work lends further support to the expectation that powerful AI will arrive soon: our recently released Claude 3.7 Sonnet and Claude Code demonstrate significant capability improvements and increased autonomy, as do systems released by other frontier labs.
We believe the United States must take decisive action to maintain technological leadership. Our submission focuses on six key areas to address the economic and security implications of powerful AI while maximizing benefits for all Americans:
These recommendations build on Anthropic's previous policy work, including our advocacy for responsible scaling policies and for testing and evaluation. Our aim is to strike a balance: enabling innovation while mitigating the serious risks posed by increasingly capable AI systems.
Our full submission, found here, offers further detail on these recommendations and provides practical implementation strategies to help the U.S. government navigate this critical technological transition.