Space colonization and scientific discovery could be mandatory for successful defensive AI

by otto.barten
18th Oct 2025
2 min read

Epistemic status: quick draft reflecting a few hours of thought, related to a few weeks of cooperative research

In a multipolar ASI offense/defense scenario, there seems to be a good chance that intent-aligned, friendly AI will not colonize space. This could happen, for example, because we intent-align defensive AI(s) with institutions under human control, such as companies, police forces, secret services, militaries or military alliances, governments, or supragovernmental organizations. The humans controlling these entities might not support space colonization, space colonization might fall outside their organization's mandate, or other organizational constraints might prohibit it.

If an offensive AI (either unaligned, or intent-aligned with a bad actor) escapes into space, it might be able to colonize the resources it finds there. For example, it could build a laser with a beam diameter exceeding Earth's and use it against us. Or it could redirect an asteroid at us large enough to cause extinction. In these scenarios, it seems impossible for Earth-bound defensive AI to successfully ward off the attack, or for us and the defensive AI(s) to recover from it.

Therefore, if:

  1. We end up in a multipolar ASI offense/defense scenario (e.g. because no pivotal act was performed), and
  2. Defensive AI is intent-aligned with humans who do not effectively colonize space, and
  3. Offensive AI escapes into space, and
  4. Escaped offensive AI can mobilize space resources to build a decisively large weapon,

then it seems to follow that offense trumps defense, possibly leading to human extinction.

More generally, a minimum viable defense theorem could be formulated for multipolar ASI offense/defense scenarios:

If mobilizing resources can lead to a decisive strategic advantage, any successful (system of) defensive AI(s) should mobilize at least sufficient resources to win against any weaponry that could be constructed from the unmobilized resources.
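
The resource version of this condition can be sketched as a toy inequality. In this illustrative model, everything is an assumption I am introducing (the linear power functions, the efficiency parameters, and the numbers), not a claim from the post: defense holds only if the power derived from mobilized resources matches or exceeds the maximum power an attacker could derive from everything left unmobilized.

```python
# Toy model of the minimum viable defense condition.
# The linear power functions and all numbers below are illustrative
# assumptions, not claims from the post.

def defense_holds(total_resources: float,
                  mobilized_by_defense: float,
                  defense_efficiency: float = 1.0,
                  offense_efficiency: float = 1.0) -> bool:
    """Defense succeeds iff the power it derives from its mobilized
    resources is at least the power an attacker could derive from
    all resources left unmobilized."""
    unmobilized = total_resources - mobilized_by_defense
    defense_power = defense_efficiency * mobilized_by_defense
    max_offense_power = offense_efficiency * unmobilized
    return defense_power >= max_offense_power

# If defense mobilizes only Earth-bound resources while space
# resources dwarf them, the condition fails:
earth = 1.0
space = 1_000.0
print(defense_holds(earth + space, mobilized_by_defense=earth))  # False

# Mobilizing half of all reachable resources (at equal efficiencies)
# satisfies the condition:
print(defense_holds(earth + space,
                    mobilized_by_defense=(earth + space) / 2))  # True
```

With equal efficiencies this reduces to mobilizing at least half of all reachable resources; a higher defensive efficiency lowers that threshold, which is one way to read the science-and-technology version of the theorem below.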

One could also imagine that weaponizing new science and technology could lead to a decisive strategic advantage. A version of this theorem could therefore also be:

If inventing weaponizable science and technology leads to a decisive strategic advantage, any successful (system of) defensive AI(s) should at least invent and weaponize sufficient science and technology to successfully defend against any weaponry that could be constructed from the as-yet-uninvented science and technology.

These results might be seen as a reason to:

  • Support a pause.
  • Perform a pivotal act (if ASI can be aligned).
  • Make sure we align (if ASI can be aligned) defensive, friendly ASI with entities which intend to occupy sufficient strategic space in domains such as space colonization and weaponizable science.