We will soon enter an unstable state where the balance of military and political power will shift significantly because of advanced AI.
This evidently makes the very strong assumption that AGI is sufficient for a widely-recognized DSA before it becomes a near-term existential risk. That is, either everyone behind in the AI race figures out that they have no chance to win without actually fighting a war that leads to nuclear escalation, or a war is won so decisively and so quickly by one side that nuclear escalation does not occur. These seem like big claims that aren't actually explained or explored. (Or it assumes that ASI can be aligned well enough to ensure we don't all die before power dynamics shift in favor of whoever built the ASI, which is an even bigger claim.)
I don't think I make the claim that a DSA is likely to be achieved by a human faction before AI takeover happens. My modal prediction (~58% as written in the post) for this whole process is that the AI takes over while the nations are trying to beat each other (or failing to coordinate).
In the world where the leading project has a large secret lead and has solved superalignment (an unlikely intersection) then yes, I think a DSA is achievable.
Maybe a thing you're claiming is that my opening paragraphs don't emphasize AI takeover enough to properly convey how strongly I expect it. I'm pretty sympathetic to this point.
Thanks. It does seem like the conditional here was assumed, and there was some illusion of transparency. The way it read was that you viewed this type of geopolitical singularity as the default future, which seemed like a huge jump, as I mentioned.
We will soon enter an unstable state where the balance of military and political power will shift significantly because of advanced AI.
As different nations gain access to new advanced technologies, nations that have a relative lead will amass huge power over those that are left behind. The difference in technological capabilities between nations might end up becoming so large that one nation will possess the capability to disable the militaries (including nuclear arsenals) of other countries with minimal retaliation. While this all is happening, AIs will possibly be accruing power of their own.
The unstable state where the relative power of nations is shifting dramatically cannot be sustained for long. In particular, it’s implausible that multiple groups will possess a DSA at separate moments in time without using their DSA to entrench their influence over the future.
There are three main types of stable outcomes for this transition period: unipolar, multipolar[1], and existential catastrophe.
Here are some (non-comprehensive) illustrative stories for each outcome:
There is also a scenario where superintelligence is literally never created, but I think this scenario is very unlikely.
The question of when we enter one of these outcomes is, to me, very similar to the question of when ASI is created: I place around a 50% probability on “by the end of 2031”, since I expect us to settle into a stable geopolitical state soon after the first ASIs exist (or even before).
There might be intermediate states that are reached and sustained for a while (e.g. a global moratorium or MAIM) but I don’t expect those to delay superintelligence for more than a few decades. There's also some chance that a global nuclear war starts[6] which sets civilization back decades or centuries, but we'd eventually probably land back at a similar fork to the one we’re facing now.
I expect the unipolar scenario to be attempted with higher probability than the multipolar scenario, but I also expect it to lead to an existential catastrophe with higher probability (due to corner-cutting on safety and the potential for misuse), so the likelihoods of a “successful” unipolar scenario and a “successful” multipolar scenario end up similar.
|  | P(attempted by leading project) | P(doesn’t end in existential catastrophe \| attempted by leading project) | P(attempted by leading project and doesn’t end in existential catastrophe) |
| --- | --- | --- | --- |
| Unipolar outcome | 2/3 | 1/4 | 1/6 |
| Multipolar outcome | 1/3 | 3/4 | 1/4 |
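Making the arithmetic behind the table explicit (a quick sketch on my part, assuming that exactly one of the two outcomes is attempted and that any failed attempt ends in existential catastrophe):

$$P(\text{unipolar}) = \frac{2}{3}\cdot\frac{1}{4} = \frac{1}{6}, \qquad P(\text{multipolar}) = \frac{1}{3}\cdot\frac{3}{4} = \frac{1}{4},$$

$$P(\text{existential catastrophe}) = 1 - \frac{1}{6} - \frac{1}{4} = \frac{7}{12} \approx 58\%,$$

which lines up with the ~58% modal prediction mentioned above.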
Therefore, the final probabilities of the three outcomes are:
The unipolar and multipolar stable end states imply very different policy interventions:
The way to the unipolar outcome would be fraught with danger:
If ASI is a handful of years away, this gives us a very tight deadline to set up a multinational agreement, if we ever wish to do so. It also gives us very little time to prepare ourselves to wisely navigate the extreme judgement calls we’ll have to make in order to avoid existential catastrophe.
Unipolar and multipolar map onto Strategic Advantage and Cooperative Development in this post.
How would a nuclear weapons state keep the existence of an ASI secret? I won’t go into details but I think it’s doable, especially considering the help that precursor systems could provide in infosecurity and persuasion.
Possibly using advanced autonomous weapons, superpersuasion, nuclear defense, or biological weapons.
In the case of secret Soviet biological weapons programs, two years passed between a defector revealing illegal biological weapons activities and foreign governments conducting inspections. In the case of ASI, it might be enough to pacify other governments for a few months.
While an ASI could itself establish a unipolar or multipolar outcome after humanity has been destroyed or disempowered, I count those outcomes under “existential catastrophe”. Basically every story in an attempted unipolar or multipolar outcome can devolve into an existential catastrophe story.
If you don’t think a nuclear war is likely, imagine a scenario where the top intelligence officials of a nuclear weapons state present their country’s leaders with a report that claims “Within 3 months, another nuclear weapons state is on track to possess superintelligence and technologies that can disable our first- and second-strike nuclear capabilities.” How would this affect international relations? I claim it would likely cause tensions higher than those at the peak of the Cold War. Very high tensions make accidents more likely, and also make it plausible that country leaders will deliberately start a nuclear war.