One of the basic suggestions for dealing with the threat of AI is a global treaty banning the training of sufficiently large models. Summarizing the forthcoming If Anyone Builds It, Everyone Dies, Scott Alexander describes this plan as "more-or-less lifted from the playbook for dealing with nuclear weapons."[1]
I am deeply skeptical that anything along the lines of the nuclear non-proliferation regime will work for AI. My goal here is to sketch out some of the basic reasons why, beginning with a brief summary of the non-proliferation regime that you can safely skip if you're familiar with it.
The Nuclear Non-Proliferation Regime
Nuclear weapons are a 1940s-era technology. In this sense, it is quite remarkable that we have managed to limit their diffusion. In 1963, John F. Kennedy observed: "I am haunted by the feeling that by 1970, unless we are successful [at non-proliferation], there may be 10 nuclear powers instead of four, and by 1975, 15 or 20." Judged by this standard, the non-proliferation regime has been a success. As of 2025, there are only nine nuclear powers, and only ten states have ever developed nuclear weapons.[2] Another dozen or so states have seriously pursued nuclear weapons before eventually abandoning those efforts under international pressure.
Legally speaking, the centerpiece of the non-proliferation regime is the Treaty on the Non-Proliferation of Nuclear Weapons (NPT), originally signed in 1968 and now ratified by all but five countries.[3] The NPT set up a two-tier system. The five countries with acknowledged nuclear weapons as of 1968 joined the treaty as nuclear weapons states; all others joined as non-nuclear weapons states.[4]
The non-nuclear weapons NPT members agreed not to develop nuclear weapons. In return, they obtained two basic promises from the nuclear states: first, the nuclear weapons states agreed to provide assistance in developing peaceful uses of nuclear energy (a promise that they have largely honored). Second, the nuclear weapons states agreed to pursue negotiations in good faith toward eventual nuclear disarmament.