Outcomes of the Geopolitical Singularity

by Nikola Jurkovic
20th May 2025
6 min read
Comments
ryan_greenblatt:

(nit: global nuclear war isn't an existential catastrophe.)

Nikola Jurkovic:

Agreed, thanks! I've moved that discussion down to timelines and probabilities.

Davidmanheim:

> We will soon enter an unstable state where the balance of military and political power will shift significantly because of advanced AI.

This evidently makes the very strong assumption that AGI is sufficient for a widely-recognized DSA before it becomes a near-term existential risk. That is, either everyone behind in the AI race figures out that they have no chance of winning without fighting a war that leads to nuclear escalation, or a war is won so decisively and so quickly by one side that nuclear escalation does not occur. These seem like big claims that aren't actually explained or explored. (Or it assumes that ASI can be aligned well enough to ensure we don't all die before power dynamics shift in favor of whoever builds ASI, which is an even bigger claim.)

Nikola Jurkovic:

I don't think I make the claim that a DSA is likely to be achieved by a human faction before AI takeover happens. My modal prediction (~58% as written in the post) for this whole process is that the AI takes over while the nations are trying to beat each other (or failing to coordinate). 

If the leading project has a large secret lead and has solved superalignment (an unlikely intersection), then yes, I think a DSA is achievable.

Maybe what you're claiming is that my opening paragraphs don't emphasize AI takeover enough to properly convey my expectations of it. I'm pretty sympathetic to this point.

Davidmanheim:

Thanks. It does seem like the conditional here was assumed, and there was some illusion of transparency. The way it read was that you viewed this type of geopolitical singularity as the default future, which seemed like a huge jump, as I mentioned.

Outcomes of the Geopolitical Singularity

We will soon enter an unstable state where the balance of military and political power will shift significantly because of advanced AI.

As different nations gain access to new advanced technologies, nations with a relative lead will amass huge power over those left behind. The difference in technological capabilities between nations might become so large that one nation possesses the capability to disable the militaries (including nuclear arsenals) of other countries with minimal retaliation. While all of this is happening, AIs may be accruing power of their own.

The unstable state where the relative power of nations is shifting dramatically cannot be sustained for long. In particular, it’s implausible that multiple groups will possess a decisive strategic advantage (DSA) at separate moments in time without using that DSA to entrench their influence over the future.

There are three main types of stable outcomes for this transition period: unipolar, multipolar[1], and existential catastrophe.

Here are some (non-comprehensive) illustrative stories for each outcome:

  1. Unipolar: One nation comes into possession of a DSA and uses it to control the course of the future.
    1. Story 1A - Secret ASI project leading to hegemony: A nuclear weapons state secretly[2] achieves superintelligence first and has a major advantage over other countries. It uses this advantage to reach a position where it can make arbitrary demands of other countries[3]. The rest of history is shaped by the leadership of the ASI project.
      1. The story in Situational Awareness most closely matches this story.
    2. Story 1B - Non-secret ASI project leading to hegemony: A nuclear weapons state creates AGI and gets close to secretly achieving a DSA but gets found out. It offers token concessions to the other nuclear weapons states to increase the upside of not attacking. Other countries are pacified[4] by the concessions and unwilling to commit major acts of sabotage because they underestimate the danger of facing an adversarial ASI. Using the additional months of time, the leading ASI project secretly continues improving its capabilities and achieves a DSA. The rest of history is shaped by the leadership of the ASI project.
      1. The slowdown ending of AI 2027 most closely matches this story.
  2. Multipolar: No nation has a DSA, and this is sustained until at least the end of the 21st century.
    1. Story 2A - International centralized ASI project: The nuclear weapons states agree to centralize all frontier AI development in a single datacenter complex. They settle on an inspection regime where they can be certain none of the countries are attempting to secretly amass power or exfiltrate the weights. Eventually, they create a superintelligence aligned to some notion of pleasing the administrations of all of the nuclear weapons states, and the rest of history is shaped by their wishes and values.
    2. Story 2B - Secret ASI project caught red-handed, leading to concessions: A nuclear weapons state attempts to secretly achieve a DSA but gets found out. The other nuclear weapons states demand concessions that actually level the playing field, including a multinational ASI project. Under major threats, the leading state concedes and joins the multinational ASI project, leading to Story 2A.
    3. Story 2C - Secret ASI project leading to self-disarming act: A nuclear weapons state gets ASI first and has a DSA. Their administration decides to use their ASI to level the playing field, give up their unilateral power, and give all humans (most of whom have no bargaining power before this act) roughly equal input into shaping the future.
    4. Story 2D - Mutually Assured AI Malfunction until a treaty: The leading nations developing ASI enter a state of MAIM and remain at a very similar power level such that no nation ever achieves a DSA over other nations. They maintain similar bargaining power without centralizing AI research until they reach a treaty enforced by verifiably-compliant ASIs which ensures no nation can ever take over a large fraction of other nations.
  3. Existential catastrophe[5]: humanity is destroyed or disempowered.
    1. Story 3A - Rushing to ASI leads to ASI takeover: A nuclear weapons state attempts to secretly achieve a DSA extremely quickly, pressuring its AGI project to cut corners on safety and to invest only a small proportion of its research effort in aligning superintelligence. This leads to a misaligned superintelligence which disempowers or kills all humans.
      1. The race ending of AI 2027, as well as How AI Takeover Might Happen in 2 Years most closely match this story.

There is also a scenario where superintelligence is literally never created, but I think this scenario is very unlikely. 

Timelines and probabilities

The question of when we enter one of these outcomes is, to me, very similar to the question of when ASI is created: I place around a 50% probability on “by the end of 2031”, as I expect us to settle into a stable geopolitical state soon after the first ASIs exist (or even before).

There might be intermediate states that are reached and sustained for a while (e.g. a global moratorium or MAIM) but I don’t expect those to delay superintelligence for more than a few decades. There's also some chance that a global nuclear war starts[6] which sets civilization back decades or centuries, but we'd eventually probably land back at a similar fork to the one we’re facing now.

I expect the unipolar scenario to be attempted with higher probability than the multipolar scenario, but I also expect it to lead to an existential catastrophe with higher probability (due to corner-cutting on safety and the potential for misuse), so the likelihoods of a “successful” unipolar scenario and a “successful” multipolar scenario end up similar.

| Outcome | P(attempted by leading project) | P(doesn’t end in existential catastrophe given attempted) | P(attempted and doesn’t end in existential catastrophe) |
|---|---|---|---|
| Unipolar | 2/3 | 1/4 | 1/6 |
| Multipolar | 1/3 | 3/4 | 1/4 |
Therefore, the final probabilities of the three outcomes are:

  1. Unipolar scenario: ~17%.
  2. Multipolar scenario: ~25%.
  3. Existential catastrophe: ~58%.
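
A minimal sketch of this arithmetic, for anyone who wants to check it (the variable names are mine, not from the post):

```python
# Author's estimates from the table above.
p_attempt = {"unipolar": 2 / 3, "multipolar": 1 / 3}
p_no_catastrophe_given_attempt = {"unipolar": 1 / 4, "multipolar": 3 / 4}

# P(outcome) = P(attempted) * P(doesn't end in existential catastrophe | attempted)
p_outcome = {
    k: p_attempt[k] * p_no_catastrophe_given_attempt[k] for k in p_attempt
}

# Everything that isn't a "successful" unipolar or multipolar outcome is
# counted as existential catastrophe.
p_outcome["existential catastrophe"] = 1 - sum(p_outcome.values())

for outcome, p in p_outcome.items():
    print(f"{outcome}: ~{p:.0%}")
# unipolar: ~17%
# multipolar: ~25%
# existential catastrophe: ~58%
```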

Implications

The unipolar and multipolar stable end states imply very different policy interventions:

  1. Successfully achieving a unipolar state requires secrecy, acceleration, large investments in military R&D, and extreme hawkishness.
  2. A multipolar state (with the exception of secret DSAs leading to self-disarming acts) requires verification mechanisms, broad awareness of risks from ASI, transparency, multinational agreements, and building trust between the nuclear weapons states.

The way to the unipolar outcome would be fraught with danger:

  • Racing leaves very little room for safety work amid the mad rush to superintelligence, which might make the difference between an ASI that helps humanity and one that destroys it.
  • It’s unclear whether a secret superintelligence explosion and buildup of advanced military technology could be kept hidden from the other nuclear weapons states. If a nuclear weapons state gets caught doing this, the other nuclear weapons states will likely consider it an extremely escalatory action and resort to rapidly escalating threats, sabotage, or pre-emptive attacks.
    • Additionally, secret buildups of advanced technology would be easier to pull off for ASI projects and militaries which are more willing to infringe on the rights of their human employees by keeping their movements and communications restricted 24/7. This means that nations which care less about personal freedoms are favored in achieving unipolar outcomes over others.
      • However, once substantial levels of automation are achieved, a DSA may be achievable with the involvement of only a very small number of humans (e.g. through superpersuasion). So projects aiming for a unipolar outcome can make a trade-off where they provide human employees with more rights at the cost of less transparency (ensuring that only a very small number of humans are aware of a planned hegemony).
  • A unipolar outcome would put complete power over the rest of the human population into a small number of hands, which makes a descent into dystopian forms of stable authoritarianism more likely.

If ASI is a handful of years away, this gives us a very tight deadline for setting up a multinational agreement, if we ever wish to have one. It also gives us very little time to set ourselves up to wisely navigate the extreme judgement calls we’ll have to make in order to avoid existential catastrophe.

  1. ^

    Unipolar and multipolar map onto Strategic Advantage and Cooperative Development in this post.

  2. ^

     How would a nuclear weapons state keep the existence of an ASI secret? I won’t go into details but I think it’s doable, especially considering the help that precursor systems could provide in infosecurity and persuasion.

  3. ^

    Possibly using advanced autonomous weapons, superpersuasion, nuclear defense, or biological weapons.

  4. ^

    In the case of the secret Soviet biological weapons programs, two years passed between a defector revealing illegal biological weapons activities and foreign governments conducting inspections. In the case of ASI, pacifying other governments for even a few months might be enough.

  5. ^

    While ASI can achieve unipolar or multipolar outcomes once humanity has been destroyed or disempowered, I count these outcomes under “existential catastrophe”. Basically every story in an attempted unipolar or multipolar outcome can devolve into an existential catastrophe story.

  6. ^

    If you don’t think a nuclear war is likely, imagine a scenario where the top intelligence officials of a nuclear weapons state present their country’s leaders with a report that claims: “Within 3 months, another nuclear weapons state is on track to possess superintelligence and technologies that can disable our first- and second-strike nuclear capabilities.” How would this affect international relations? I claim it would likely cause tensions higher than those at the peak of the Cold War. Very high tensions make accidents more likely, and make it plausible that country leaders will deliberately start a nuclear war.