I recently worked with Andrea Miotti on a draft of a treaty that would implement global compute caps. I'm including the abstract from our paper and the treaty text below.
Aside: I'm interested in follow-up work that aims to strengthen the treaty, describe compute limitations in more detail, and discuss alternative paths toward global compute caps. There is a growing community of people who are interested in developing, strengthening, and advocating for global compute caps and related proposals. If you're interested in this, please feel free to reach out.
This paper presents an international treaty to reduce risks from the development of advanced artificial intelligence (AI). The main provision of the treaty is a global compute cap: a ban on the development of AI systems above an agreed-upon computational resource threshold. The treaty also proposes the development and testing of emergency response plans, negotiations to establish an international agency to enforce the treaty, the establishment of new communication channels and whistleblower protections, and a commitment to avoid an AI arms race. We hope this treaty serves as a useful template for global leaders as they implement governance regimes to protect civilization from the dangers of advanced artificial intelligence.
TREATY ON THE PROHIBITION OF DANGEROUS ARTIFICIAL INTELLIGENCE
The States Parties to this Treaty,
Deeply concerned about the catastrophic consequences that would be visited upon all humankind by a disaster induced by advanced artificial intelligence,
Acknowledging the need to make every effort to avert the danger of such a catastrophe and to take measures to safeguard international peace and security,
Affirming that artificial intelligence poses risks at least as severe as those from nuclear war, uncontrolled pandemics, and other major threats to global security,
Believing that the creation of human-level artificial intelligence or artificial superintelligence should only occur once the international community is confident that such technologies can be controlled and that the necessary national and international governance measures have been established,
Recognizing that global security risks from artificial intelligence can occur either from uncontrolled artificial intelligence systems or from human misuse,
Determined to eliminate and prevent artificial intelligence race dynamics between countries and between corporations, which significantly raise the risk of catastrophe,
Acknowledging the benefits that advanced artificial intelligence could bring to humanity once there is greater certainty that such technology can be developed and governed safely,
Expressing their support for research, development, and other efforts to safeguard the production and trade of powerful artificial intelligence hardware and to identify privacy-preserving methods of monitoring compliance with hardware regulations,
Reaffirming the United Nations’ commitment to achieve international co-operation in solving international problems of an economic, social, cultural, or humanitarian character,
Urging the cooperation of all States in the prevention of catastrophes caused by artificial intelligence,
Desiring to further facilitate the monitoring of advanced hardware, the avoidance of an artificial intelligence arms race, and the elimination and prevention of efforts to prematurely develop human-level artificial intelligence, artificial superintelligence, and other forms of highly dangerous artificial intelligence,
Have agreed as follows:
For the purposes of this Treaty:
Emergency response plans
Monitoring and enforcement
Negotiations for creating an international organization for monitoring, enforcement, and research
Sharing the benefits from safe artificial intelligence
Each State Party undertakes to collaborate in good faith on the establishment of effective measures to ensure that potential benefits from safe and beneficial artificial intelligence systems are distributed globally.
Communicating dangers and establishing whistleblower protections
Prevention of an artificial intelligence arms race
Each State Party undertakes to pursue in good faith negotiations on effective measures relating to the cessation of an artificial intelligence arms race and the prevention of any future artificial intelligence arms race.
National regulations beyond the scope of the treaty
Settlement of disputes
When a dispute arises between two or more States Parties relating to the interpretation or application of this Treaty, the parties concerned shall consult together with a view to the settlement of the dispute by negotiation or by other peaceful means of the parties’ choice in accordance with Article 33 of the Charter of the United Nations.
Signature, ratification, entry into force, and withdrawal
A few comments on the proposed treaty:
Each State Party undertakes to self-report the amount and locations of large concentrations of advanced hardware to relevant international authorities.
"Large concentrations" isn't defined anywhere, and would probably need to be for this to be a useful requirement.
Hm, I feel like this line might make certain countries less likely to agree to this? Not sure.
What might this actually entail?
The proposed treaty does not mention the threshold-exempt "Multinational AGI Consortium" suggested in the policy paper. Such an exemption would be, in my opinion, a very bad idea. The underlying argument behind a compute cap is that we do not know how to build AGI safely. It does not matter who is building it, whether OpenAI or the US military or some international organization; the risked outcome is the same: the AI escapes control and takes over, regardless of how much "security" humanity tries to place around it. If the threshold is low enough that we can be sure exceeding it is safe, then countries will want to exceed it for their own critical projects. If it's high enough that we can't be sure, then it wouldn't be safe for MAGIC to exceed it either.
We can argue, "This point is too dangerous. We must not build past it. Not to ensure national security, not to cure cancer, no. Zero exceptions, because otherwise we will all die." People can accept that.
There's no way to argue, "This point is dangerous, so let the more responsible group handle it. We'll build it, but you can't control it." That's a clear recipe for disaster.