Crossposted from the EA Forum.

No longer endorsed.

Imagine it's 2030 or 2040 and there's a catastrophic great power conflict. What caused it? Probably AI and emerging technology, directly or indirectly. But how?

I've found almost nothing written on this. In particular, the relevant 80K and EA Forum pages don't seem to have relevant links. If you know of work on how AI might cause great power conflict, please let me know. For now, I'll start brainstorming. Specifically:

  1. How could great power conflict affect the long-term future? (I am very uncertain.)
  2. What could cause great power conflict? (I list some possible scenarios.[1])
  3. What factors increase the risk of those scenarios? (I list some plausible factors.)

Epistemic status: brainstorm; not sure about framing or details.

 

I. Effects

Alternative formulations are encouraged; thinking about risks from different perspectives can help highlight different aspects of those risks. But here's how I think of this risk:

Emerging technology enables one or more powerful actors (presumably states) to produce civilization-devastating harms, and they do so (either because they are incentivized to or because their decisionmaking processes fail to respond to their incentives).[2]

Significant (in expectation) effects of great power conflict on the long-term future include:

  • Risk of human extinction
  • Risk of civilizational collapse
  • Effects on states' relative power
  • Other effects on the time until superintelligence and the environment in which we achieve superintelligence

Human extinction would be bad. Civilizational collapse would be prima facie bad, but its long-term consequences are very unclear. Effects on relative power are difficult to evaluate in advance. Overall, the long-term consequences of great power conflict are difficult to evaluate because it is unclear what technological progress and AI safety look like in a post-collapse world or in a post-conflict, no-collapse world.

Current military capabilities don't seem to pose a direct existential risk. More concerning for the long-term future are future military technologies and side effects of conflict, such as on AI development.

 

II. Causes

How could AI and the technology it enables lead to great power conflict? Here are the scenarios that I imagine, for great powers called "Albania" and "Botswana":

  • Intentional conflict due to bilateral tension. In each of these scenarios, international hostility and fear are greater than in 2021, and domestic politics and international relations are more confusing and chaotic.
    • Preventive attack. Albania thinks that Botswana will soon become much more powerful and that this would be very bad. Calculating that it can win—or accepting a large chance of devastation rather than simply letting Botswana get ahead—Albania attacks preventively.
    • Seizing opportunity. An arms race is in progress. Albania thinks it has an opportunity to get ahead. Albania attempts to strike or sabotage Botswana's AI program or its military. Albania does not disable Botswana's military (either because it failed to or because it assumed Botswana would not launch a major counterattack anyway). Botswana retaliates.
    • Diplomatic breakdown. Albania makes a demand or draws a line in the sand (legitimately, from its perspective). Botswana ignores it (legitimately, from its perspective). Albania attacks. Possible demands include, among others: stop building huge AI systems (and submit to external verification), or stop developing technology that threatens a safe first strike (and submit to external verification).
  • Intentional conflict due to a single state's domestic political forces. These scenarios are currently difficult to imagine among great powers. But some researchers are worried about polarization and epistemic decline in the near future, which could increase this risk.
    • Ambition. Albania hopes to dominate other states. Albania attacks.
    • Hatred. A substantial fraction of Albanians despise Botswana, and the Albanian government's decisionmaking process empowers that faction. Albania attacks.
    • Blame. Albania suffers an attack, leak, security breach, or embarrassment from one or more malcontents/spies/saboteurs/assassins/terrorists. Albania incorrectly blames Botswana — for rational reasons, for political convenience, or just due to bad epistemics. Albania attacks.
  • Intentional conflict due to multi-agent forces. This scenario is currently difficult to imagine. But perhaps crazy stuff happens when power increases, relative power is unstable, technology confuses states, and memetic chaos reigns. Roughly, I imagine a multi-agent failure scenario like this:
    • Offense outpaces defense. New technologies are leaked, are developed independently by many states, or cannot be kept secret. The capability to devastate civilization, which in 2021 was restricted to the major nuclear states, is held by many states. Even if none are malevolent, all are afraid, and domestic political forces (which are more chaotic than they were in 2021) make one or two states do crazy stuff.
  • An accident. "If the Earth is destroyed, it will probably be by mistake."[3]
    • Automatic counterattacks. AI, AI-enabled military technology, and the prospect of future advances foster chaos and uncertainty. International tension increases in general, and tension between Albania and Botswana increases in particular. Offensive capabilities increase and are on hair trigger.[4] Eventually there's an accident, miscommunication, glitch, or some anomaly resulting from multiple complex systems interacting faster than humans can understand. Albania automatically launches a "counterattack."

 

III. Risk factors

Great power conflict is generally bad, and we can list high-level scenarios to avoid, such as those in the previous section. But what can we do more specifically to prevent great power conflict?

Off the top of my head, risk factors for the above scenarios include:

  • International cooperation/trust/unity/comity decreases (in general or between particular great powers)[5]
  • Fear about other states' capabilities and goals increases (in general or between particular great powers)
  • Chaos increases
  • States' relative power is in flux and uncertain
  • There is conflict (that could escalate), especially international violence or conquest, especially involving a great power (e.g., a great power annexes territory, or there is a proxy war)
  • More states acquire devastating offensive capabilities beyond the power of any defensive capabilities (this needs nuance but is prima facie generally true)[6]

It also matters what and how regular people and political elites think about AI and emerging technology. Spreading better memes may be generally more tractable than reducing the risk factors above, because it's pulling the rope sideways, although the benefits of better memes are limited.

 

Finally, the same forces from emerging technology, international relations, and beliefs and modes of thinking about AI that affect great power conflict will also affect:

  • How quickly superintelligence is developed
  • The extent to which there is an international arms race
  • Regulations and limits on AI, locally and globally
  • Hardware accessibility

Interventions affecting the probability and nature of great power conflict will also have implications for these variables.

 

Please comment on what should be added or changed, and please alert me to any relevant sources you've found useful. Thanks!


  1. My analysis is abstract. Consideration of more specific factors, such as what conflict might look like between specific states or involving specific technologies, is also valuable but is not my goal here. ↩︎

  2. Adapted from Nick Bostrom's Vulnerable World Hypothesis, section "Type-2a." My definition includes scenarios in which a single actor chooses to devastate civilization; while this may not technically be great power conflict, I believe it is sufficiently similar that its inclusion is analytically prudent. ↩︎

  3. Eliezer Yudkowsky's Cognitive Biases Potentially Affecting Judgment of Global Risks. ↩︎

  4. Future weapons will likely be on hair trigger for the same reasons that nukes have been: swifter second-strike capabilities could help states counterattack and thus defend themselves better in some circumstances; hair-trigger postures make others less likely to attack, since the decision to adopt them is somewhat transparent; and there is emotional/psychological/political pressure to take attackers down with us. ↩︎

  5. Currently the world doesn't include large, powerful groups, coordinated at the state level, that totally despise and want to destroy each other. If it ever does, devastation occurs by default. ↩︎

  6. Another potential desideratum is differential technological progress. Avoiding military development is infeasible to do unilaterally, but perhaps we can avoid some particularly dangerous capabilities or do multilateral arms control. Unfortunately, this is unlikely: avoiding certain technologies is costly because you don't know what you'll find, and effective multilateral arms control is really hard. ↩︎

Comments

Preemptive attack. Albania thinks that Botswana will soon become much more powerful and that this would be very bad. Calculating that it can win—or accepting a large chance of devastation rather than simply letting Botswana get ahead—Albania attacks preemptively.

FWIW, many distinguish between preemptive and preventive war, where the scenario you described falls under "preventive", and "preemptive" implies an imminent attack from the other side.

Ha, I took intro IR last semester so I should have caught this. Fixed, thanks.

There's no clear line between war and peace. We live in a world that's already in constant cyberwar. AI gets deployed in the existing cyberwar and likely will be more so in the future.

It's unclear how strongly individual actors are controlled by their respective governments. Arkhipov's submarine didn't get attacked because anyone up the chain ordered it. Attribution of attacks is hard.

The countries that are players are all different, so you lose insight when you talk about Albania and Botswana instead of the real players.

Given Russia tolerating all the ransomware attacks being launched from their soil, it could be that one US president says "Enough, if Russia doesn't do anything against attacks from their soil on the West, let's decriminalize hacking Russian targets".

Thanks for your comment.

It's unclear how strongly individual actors are controlled by their respective governments.

Good point. If I understand right, this is an additional risk factor: there's a risk of violence that neither state wants due to imperfect internal coordination, and this risk generally increases with international tension, the number of humans in a position to choose hostile action or attack, general confusion, and perhaps the speed at which conflict occurs. Please let me know if you were thinking something else.

The countries that are players are all different, so you lose insight when you talk about Albania and Botswana instead of the real players.

Of course. I did acknowledge this: "Consideration of more specific factors, such as what conflict might look like between specific states or involving specific technologies, is also valuable but is not my goal here." I think we can usefully think about conflict without considering specific states. Focusing on, say, US-China conflict might obscure more general conclusions.

Given Russia tolerating all the ransomware attacks being launched from their soil, it could be that one US president says "Enough, if Russia doesn't do anything against attacks from their soil on the West, let's decriminalize hacking Russian targets".

Hmm, I haven't heard this suggested before. This would greatly surprise me (indeed, I'm not familiar with domestic or international law for cyber stuff, but I would be surprised to learn that US criminal law was the thing stopping cyberattacks on Russian organizations from US hackers or organizations). And I'm not sure how this would change the conflict landscape.

Speaking about states wanting things obscures a lot. 

I expect that there's a good chance that Microsoft, Amazon, Facebook, Google, IBM, Cisco, Palantir, and maybe a few other private entities have strong offensive capabilities.

Then there are a bunch of different three-letter agencies that likely have offensive capabilities.

This would greatly surprise me (indeed, I'm not familiar with domestic or international law for cyber stuff, but I would be surprised to learn that US criminal law was the thing stopping cyberattacks on Russian organizations from US hackers or organizations)

The US government of course hacks Russian targets, but sophisticated private actors won't simply attack Russia and demand ransom to be paid to them. There are plenty of people who currently mainly do penetration testing for companies, who are very capable at actually attacking, and who might consider it worthwhile to attack Russian targets for money if that were possible without legal repercussions.

US government-sponsored attacks aren't about causing damage in the way attacks targeted at getting ransom are.

And I'm not sure how this would change the conflict landscape.

It would get more serious private players involved in attacking who are outside of government control. Take someone like https://www.fortalicesolutions.com/services. Are those people currently going to attack Russian targets outside of retaliation? Likely not.

Oh, interesting.

Speaking about states wanting things obscures a lot.

So I assume you would frame states as less agenty and frame the source of conflict as decentralized — arising from the complex interactions of many humans, which are less predictable than "what states want" but still predictably affected by factors like bilateral tension/hostility, general chaos, and various technologies in various ways?
