We model how rapid AI development may reshape geopolitics in the absence of international coordination to prevent dangerous AI development. We focus on predicting which strategies superpowers and middle powers would pursue, and which outcomes those strategies would produce.
You can read our paper here: ai-scenarios.com
Predicting scenarios with fast AI progress should be more tractable than most forecasting exercises, because a single factor (access to AI capabilities) overwhelmingly determines geopolitical outcomes.
This becomes even more true once AI has automated most of the key bottlenecks of AI R&D. If the best AI also produces the fastest improvements in AI, the leader's advantage in an ASI race can only grow over time, until its AI systems can produce a decisive strategic advantage (DSA) over all other actors.
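To make the compounding-lead dynamic concrete, here is a minimal toy simulation. It is not taken from the paper, and every parameter value is an illustrative assumption: whoever is ahead improves faster, so an initial edge widens until the leader crosses an assumed DSA threshold.

```python
# Toy model of a compounding lead in an ASI race (illustrative only, not the
# paper's model). Assumptions: capability grows at a base human-driven rate
# plus an AI-driven term proportional to current capability, and some fixed
# capability level is taken to confer a DSA.

def step(capability: float, automation_share: float = 0.9) -> float:
    """Advance one time step of AI R&D for a single actor."""
    base_rate = 0.05                                  # human-only progress per step (assumed)
    ai_boost = 0.01 * automation_share * capability   # acceleration from AI automating R&D (assumed)
    return capability * (1 + base_rate + ai_boost)

leader, laggard = 1.2, 1.0    # leader starts with a modest edge (assumed)
DSA_THRESHOLD = 10.0          # capability level taken to confer a DSA (assumed)

t = 0
while leader < DSA_THRESHOLD:
    # The absolute gap grows every step because the leader's growth rate is higher.
    leader, laggard = step(leader), step(laggard)
    t += 1

print(f"after {t} steps: leader={leader:.1f}, laggard={laggard:.1f}, gap={leader - laggard:.1f}")
```

Under these assumed dynamics the gap only widens as time passes, which is the intuition behind the outcomes described below.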
In this model, superpowers are likely to engage in a heavily state-sponsored (footnote: "This could be an entirely national project or one assisted by private actors; either way, countries will invest heavily at scales only possible with state involvement, and fully back research efforts, e.g. by providing nation-state-level security.") race to ASI, which will culminate in one of three outcomes:
If the course of AI R&D turns out to be highly predictable, or if AI R&D operations are highly visible to opponents, there will come a point at which it is obvious to the laggards in the race that time is not on their side: if they do not act to stop the leader's AI program now, they will eventually suffer a total loss.
In this case, the laggard(s) are likely to initiate a violent strike aimed at disabling the leader’s AI research program, leading to a highly destructive war between superpowers.
If the superpowers’ research programs are allowed to continue, they are likely to eventually reach the point where AI is powerful enough to confer a DSA. If such powerful AI escaped human control, the loss would be irreversible, leading to human extinction or humanity's permanent disempowerment.
This landscape is quite bleak for middle powers: their chances of competing in the ASI race are slim, and they are largely unable to unilaterally pressure the superpowers to halt their attempts to develop ASI.
Another strategy for middle powers, common in previous conflicts, is to ally with one of the superpowers and hope that it “wins” the race, a strategy we term the “Vassal’s Wager”.
For this to work in the ASI race, the patron must not only develop ASI first, but must also avert loss-of-control risks and avoid an extremely destructive major power war.
Even in this best case, the strategy entails completely giving up one’s autonomy: a middle power would have no recourse against actions taken by an ASI-wielding superpower, including actions that breach the middle power’s sovereignty.
If AI progress plateaus before reaching the level at which it can automate AI R&D, future trajectories become harder to predict, as they are no longer overwhelmingly determined by a single factor.
While we do not model this case in as much detail, we point out some of its potential risks, such as:
Being a democracy and being a middle power both put an actor at increased risk from these factors: