Views expressed here are those of the author.
The Artificial Intelligence Risk Evaluation Act is an exciting step toward preventing catastrophic and existential risks from advanced artificial intelligence. This legislation creates a domestic institutional foundation which can support effective governance and provide the situational awareness required to stay on top of the rapidly changing AI landscape. There are a handful of small issues with the bill, but overall, it looks great to me. This short post will describe the bill and analyze its strengths and weaknesses.
The bill requires AI developers to disclose information about their AI systems before they can be deployed. This information goes to a new “advanced AI evaluation program” within the Department of Energy (DOE), where it will be analyzed and used to inform recommendations to Congress. In this way, the bill is very forward-looking; it creates understanding today so that we can take action tomorrow. The disclosures must include the detailed information required to carry out the evaluation program, including the data, weights, architecture, and interface or implementation of the AI system. The final major section of the bill requires the creation of a comprehensive plan for permanent federal oversight.
Disclosure before deployment: The bill establishes that the most advanced AI systems should not reach deployment until their developers have disclosed to the evaluation program essentially all information about the system and how it was created. This provides the government with much-needed situational awareness and informs future action.
Focus on catastrophic risks and superintelligence: By directing evaluation toward loss-of-control scenarios, weaponization potential, critical infrastructure threats, and scheming behavior, the bill targets the failure modes most likely to produce civilizational catastrophe. It requires the DOE to evaluate whether AI systems could reach artificial superintelligence and to recommend oversight measures. The bill does not pull its punches: even nationalization is on the table as a means of “preventing or managing” superintelligence. I appreciate that this demonstrates the authors are taking superintelligence seriously.
Planning for a permanent framework: Within 360 days, the bill requires the submission “to Congress [of] a detailed recommendation for Federal oversight of advanced artificial intelligence systems”. The recommendation can include standards, certification, licensing, monitoring, “adaptive governance”, the creation of a new agency, and evaluations for existential risk. This creates the necessary impetus for congressional attention and for action informed by an updated understanding of the technology’s trajectory.
International coordination: The development of artificial superintelligence anywhere on Earth threatens everyone, so it is not sufficient to monitor and restrain only the activities of developers within the US. Passing this bill would demonstrate that the US government is seriously pursuing the capacity for domestic AI regulation, which could be foundational to the success of international coordination. A major opportunity for the bill, therefore, is to add an explicit requirement that the recommendation for a permanent framework address how the US government should attain assurance that the development of superintelligence is being appropriately prevented or managed beyond its borders. This would most likely be accomplished through international agreements, facilitated by the ability to verify compliance with them.
Monitoring training and internal deployments: Without a disclosure requirement, the government can’t confidently know what capabilities are being created within AI companies before those capabilities are publicly deployed. Artificial superintelligence threatens everyone even before it is deployed, internally or otherwise (and especially absent appropriate safeguards). This bill misses the opportunity to create disclosure requirements before training begins or before AI systems are used internally. While the eventual permanent framework would probably include oversight of systems in development, the bill’s advanced AI evaluation program should have access to this information as well.
The Hawley-Blumenthal Act does not, by itself, prevent the premature development of artificial superintelligence. But it lays essential foundations.
While imperfect, this bill is a big step in the right direction for AI preparedness. The permanent framework recommendations can begin an iterative process of government oversight and management, leading to the international coordination required to prevent catastrophic AI risks.