Although many AI alignment projects seem to rely on offense/defense balance favoring defense
Why do you think this is the case?
Hi Charbel, thanks for your interest, great question.
If the balance favored offense, we would die anyway despite a successful alignment project: in a world with many AGIs, there will always be either a bad actor or someone who accidentally fails to align their takeover-level AI. (I tend to think of this as Murphy's law for AGI.) Therefore, anyone claiming that their alignment project reduces existential risk must think their aligned AI can somehow stop another, unaligned AI (i.e., an offense/defense balance favoring defense).
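To make the intuition behind this "Murphy's law for AGI" concrete, here is a minimal sketch (the independence assumption and the per-project probability are illustrative simplifications, not estimates from the post): if each of n takeover-level AI projects has some independent chance p of ending up unaligned, the chance that at least one does grows toward certainty as n increases.

```python
# Illustrative only ("Murphy's law for AGI"): probability that at least one
# of n takeover-level AI projects ends up unaligned, assuming each project
# independently fails at alignment with probability p.
def p_at_least_one_unaligned(n: int, p: float) -> float:
    return 1.0 - (1.0 - p) ** n

# Even a small per-project failure rate compounds as the number of
# projects grows (the numbers below are hypothetical, not estimates).
for n in (1, 10, 100, 1000):
    print(f"n={n:>4}  P(at least one unaligned) ~ {p_at_least_one_unaligned(n, 0.01):.3f}")
```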
There are some other options:
Barring these options, though, we seem to need not only AI alignment, but also an offense/defense balance that favors defense.
Some more on the topic: https://www.lesswrong.com/posts/2cxNvPtMrjwaJrtoR/ai-regulation-may-be-more-important-than-ai-alignment-for
Last year, we (the Existential Risk Observatory) published a Time Ideas piece proposing the Conditional AI Safety Treaty, a proposal to pause AI when AI safety institutes determine that its risks, including loss of control, have become unacceptable. Today, we publish our paper on the topic: “International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty”, by Rebecca Scholefield and myself (both Existential Risk Observatory) and Samuel Martin (unaffiliated).
We would like to thank Tolga Bilge, Oliver Guest, Jack Kelly, David Krueger, Matthijs Maas and José Jaime Villalobos for their insights (their views do not necessarily correspond to the paper).
Read the full paper here.
The malicious use or malfunction of advanced general-purpose AI (GPAI) poses risks that, according to leading experts, could lead to the “marginalisation or extinction of humanity.”[1] To address these risks, there are an increasing number of proposals for international agreements on AI safety. In this paper, we review recent (2023-) proposals, identifying areas of consensus and disagreement, and drawing on related literature to indicate their feasibility.[2] We focus our discussion on risk thresholds, regulations, types of international agreement and five related processes: building scientific consensus, standardisation, auditing, verification and incentivisation.
Based on this review, we propose a treaty establishing a compute threshold above which development requires rigorous oversight. This treaty would mandate complementary audits of models, information security and governance practices, to be overseen by an international network of AI Safety Institutes (AISIs) with authority to pause development if risks are unacceptable. Our approach combines immediately implementable measures with a flexible structure that can adapt to ongoing research.
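As a rough illustration of how such a compute threshold could be operationalised (the 6 × parameters × training-tokens heuristic for training FLOP and the 1e25 FLOP cutoff below are illustrative assumptions, not figures from the paper), the trigger can be evaluated before a training run even begins:

```python
# Illustrative sketch of a pre-training compute-threshold check.
# The "6 * parameters * training tokens" rule of thumb approximates training
# FLOP; the 1e25 FLOP cutoff is a placeholder, not the paper's recommendation.
OVERSIGHT_THRESHOLD_FLOP = 1e25

def estimated_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate training compute as ~6 * N * D floating-point operations."""
    return 6.0 * n_parameters * n_training_tokens

def requires_oversight(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the planned run would cross the hypothetical compute threshold."""
    return estimated_training_flop(n_parameters, n_training_tokens) >= OVERSIGHT_THRESHOLD_FLOP

# Example: a hypothetical 500B-parameter model trained on 10T tokens.
flop = estimated_training_flop(5e11, 1e13)
print(f"Estimated training compute: {flop:.1e} FLOP")
print("Rigorous oversight required:", requires_oversight(5e11, 1e13))
```

Part of the appeal of a compute trigger is that it can be estimated before a model exists, so audits and a potential pause can happen during development rather than after.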
(Below are our main treaty recommendations; for our full recommendations, please see the paper.)
The treaty would ideally apply to models developed in the private and public sectors, for civilian or military use. To be effective, states parties would need to include the US and China.
We have worked out the contours of a possible treaty to reduce AI existential risk, specifically loss of control. Systemic risks, such as gradual disempowerment, geopolitical risks of intent-aligned superintelligence (see, for example, the interesting recent work on MAIM), mass unemployment, stable extreme inequality, planetary boundaries and climate, and others, have so far been out of scope. Some of these risks are, however, hugely important for the future of humanity as well. We may therefore do follow-up work to address them, perhaps in a framework convention proposal.
Many AI alignment projects seem to expect that achieving reliably aligned AI will reduce the chance that someone else creates unaligned, takeover-level AI. Historically, some were convinced that AGI would lead directly to ASI via a fast takeoff, and that such an ASI would automatically block other takeover-level AIs, making only the alignment of the first AGI relevant. While we acknowledge this as one possibility, we think another is also realistic: an aligned AI that is powerful enough to help defend against unaligned ASI, yet not powerful enough, or not authorized, to monopolize all ASI attempts by default. In such a multipolar world, the offense/defense balance becomes crucial.
Although many AI alignment projects seem to rely on the offense/defense balance favoring defense, little work has so far been done to determine whether this assumption holds, or to flesh out what such defense could look like. A follow-up research project would be to try to shed light on these questions.
We are happy to engage in follow-up discussion, either here or via email: info@existentialriskobservatory.org. If you want to support our work and make additional research possible, consider donating via our website or reaching out to the email address above; we are funding-constrained.
We hope our work can contribute to the emerging debate on what global AI governance, and specifically an AI safety treaty, should look like!
Yoshua Bengio and others, International AI Safety Report (2025) <https://www.gov.uk/government/publications/international-ai-safety-report-2025>, p.101.
For a review of proposed international institutions specifically, see Matthijs M. Maas and José Jaime Villalobos, ‘International AI Institutions: A Literature Review of Models, Examples, and Proposals,’ AI Foundations Report 1 (2023) <http://dx.doi.org/10.2139/ssrn.4579773>.
Heim and Koessler, ‘Training Compute Thresholds,’ p.3.
See paper Section 1.1, footnote 13.
Cass-Beggs and others, ‘Framework Convention on Global AI Challenges,’ p.15; Hausenloy, Miotti and Dennis, ‘Multinational AGI Consortium (MAGIC)’; Miotti and Wasil, ‘Taking control,’ p.7; Treaty on Artificial Intelligence Safety and Cooperation.
See Apollo Research, Our current policy positions (2024) <https://www.apolloresearch.ai/blog/our-current-policy-positions> [accessed Feb 25, 2025].
For example, the U.S. Nuclear Regulatory Commission set a quantitative goal for a Core Damage Frequency of less than 1 × 10⁻⁴ per year. United States Nuclear Regulatory Commission, ‘Risk Metrics for Operating New Reactors’ (2009) <https://www.nrc.gov/docs/ML0909/ML090910608.pdf>.
See paper Section 1.2.3.