Some logical nits:
Other notes:
Physical attacks may be highly visible, but their source need not be. An AGI could deploy autonomous agents with no clear connection back to it, manipulate human actors without their realising it, or fabricate intelligence to engineer seemingly natural accidents. The attack is seen; the AGI behind it is not. So while physical action increases the visibility of an attack, it does not expose the AGI. It wouldn't be a visible war - more like isolated acts of sabotage. Good point to raise, though.
You bring up manoeuvre warfare, but that assumes an AGI operates under constraints similar to human militaries. The reason to prefer a perfect, deniable strike is that failure in an early war phase means immediate extinction for the weaker AGI. Imperfect attacks invite escalation and countermeasures: if AGI Alpha attacks AGI Bravo first but fails, it almost guarantees its own destruction. In human history, early aggression sometimes works - Pearl Harbour, Napoleon's campaigns - but other times it leads to total defeat - Germany in WW2, Saddam Hussein invading Kuwait. AGIs wouldn't gamble unless they had no choice. A first strike is only preferable when not attacking is clearly worse. Of course, if an AGI assesses that waiting for the perfect strike gives its opponent an insurmountable edge, it may attack earlier, even imperfectly. But unless forced, it will always prioritise invisibility.
This difference in strategic incentives is why AGI war operates under a different logic from human conflicts, including nuclear deterrence. The issue with the US nuking other nations is that nuclear war is catastrophically costly - even for the "winner." Beyond the direct financial burden, it brings environmental destruction, diplomatic fallout, and increased existential risk. The deterrent is that nobody truly wins. An AGI war is entirely different: there is no environmental, economic, or social cost - only a resource cost, which is negligible for an AGI. More importantly, eliminating competition provides a definitive strategic advantage with no downside. There is no equivalent to nuclear deterrence here - just a clear incentive to act first.
Bolding helps emphasise key points for skimmers, who make up a large portion of online readers. If I could trust people to read every word deeply, I wouldn't use it as much. When I compile my essays into a book, I'll likely reduce its use, as book readers engage differently. In a setting like this, however, where people often scan posts before committing, bolding increases retention and ensures critical takeaways aren't missed.