This post examines the shift underway in nuclear deterrence as AI approaches an exponential rate of acceleration, and the shifts in incentives this creates for governments and non-state actors (for example, AI companies). Foremost, the race for AI development increases the incentive for a first strike to take out a competitor's AI infrastructure. These shifts are manifesting in policy (or the lack thereof) and in significant investment into nuclear and space capabilities. We argue that the deterrence frameworks currently being discussed by policymakers, such as Mutually Assured AI Malfunction (MAIM), are unrealistic. We propose an operational-security-based approach to deterrence (akin to modern cybersecurity practices) that makes systems more expensive to “take out”, with a strong focus on geographic and even interplanetary distribution of key civilizational infrastructure: Mutual AI Redundant Safeguards, or MARS.
This post focuses less on the technical capabilities of AI (my day job is building advanced nuclear breeder reactors, not neural networks) and simply assumes that AI will have a direct, positive impact on a given economy. We can say that AI will define and drive “key civilizational infrastructure”, agnostic to the actual technology behind it. We argue that the greatest short-term risk to civilization is not superintelligence itself, but the assumption that being second in the race for superintelligence is catastrophic, an assumption that incentivizes pre-emptive strikes. We argue that this key assumption is driving nations to make massive buildouts in energy and AI capabilities.
As a final note (and disclosure of bias): my current focus is building mass-producible nuclear reactors, ensuring that the AI systems coming online are carbon neutral (or negative) and can power “highly distributed” habitation and infrastructure.
Defining the current geopolitical shift is the idea that whoever comes second in the race for superintelligence will never catch up. Even if your competitor is only a few weeks ahead, growth curves advance so rapidly that your own civilization (often framed as Western versus Chinese civilization) will become obsolete. Your competitor will be so good at writing code and shipping key civilizational technologies that the gap only widens.
This may be as simple as SciML models rapidly discovering new materials or energy sources, with automation allowing these systems to be scaled into production faster than a competitor can manage, or autonomous systems building drones. Policy experts are generally not considering some of the greater risks discussed on this platform, just pure economic efficiency. The argument is often oversimplified to: whoever reaches superintelligence first wins, permanently.
This race is manifesting itself quickly as massive datacenter and energy projects are announced. The US and OpenAI are building Stargate (which, based on the incentives outlined in this document, we believe will soon be considered a small project), xAI its Giga Cluster, etc. The US government is currently measuring progress by how much power will be needed to compete with China, currently estimated at around 60 gigawatts by 2030 and growing, with China in the lead. Some are even throwing around the term “Teracluster”, alluding to roughly one terawatt of electrical training capacity. An absurd amount of power.
A few weeks ago China announced a pilot molten salt breeder reactor, which can run on domestically sourced thorium, breed its own fuel, and theoretically be mass-produced (once corrosion issues are solved). Advances in materials science are going to allow thorium breeder reactors to be mass-produced at an unprecedented scale. While much of this 60 GW gap will be filled by natural gas peaker plants, nuclear will quickly become cost competitive.
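To put these figures in perspective, here is a rough back-of-envelope sketch in Python. The 60 GW and one-terawatt figures come from the discussion above; the reactor sizes and the US grid capacity figure are assumed round numbers for illustration, not sourced data.

```python
# Back-of-envelope scale check for the figures above. The 60 GW and 1 TW figures
# come from the post; reactor sizes and the US grid figure are rough assumptions.

ai_power_gap_gw = 60        # projected US power gap for AI by 2030 (per the post)
teracluster_gw = 1000       # one "Teracluster": roughly 1 TW of electrical capacity

large_reactor_gw = 1.0      # typical large light-water reactor, ~1 GWe
small_reactor_gw = 0.1      # hypothetical mass-produced reactor, ~100 MWe
us_grid_gw = 1200           # rough order of total US nameplate generating capacity

print(f"Large reactors to cover the 60 GW gap: {ai_power_gap_gw / large_reactor_gw:.0f}")
print(f"Small reactors to cover the 60 GW gap: {ai_power_gap_gw / small_reactor_gw:.0f}")
print(f"Large reactors for one Teracluster:    {teracluster_gw / large_reactor_gw:.0f}")
print(f"A Teracluster vs. the entire US grid:  {teracluster_gw / us_grid_gw:.0%}")
```

Under these assumptions, a single Teracluster is on the order of the entire existing US grid, which is why the buildout incentives below matter so much.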
We can make a few key assumptions here:
Assumption 1: AI will have a direct, positive impact on a given civilization’s economy, military and overall competitiveness.
Assumption 2: The race for superintelligence is a zero-sum game, increasing the incentive for a first strike. While we can imagine a world where AI creates wealth equally across all “players”, many consider superintelligence to be zero-sum.
Assumption 3: Based on current trends, regulating the race for superintelligence is unlikely, and there is little to no international coordination. Most AI coordination between superpowers today happens through Track 2 diplomacy.
Once we look at the world through this lens of a geopolitical race for AI, everything begins to change. The current leading thought piece in this space is MAIM (Mutually Assured AI Malfunction), introduced by Dan Hendrycks, Eric Schmidt, and Alexandr Wang in their Superintelligence Strategy paper.
The Superintelligence Strategy paper proposes that as nations race toward AI supremacy, they are increasingly incentivized toward preemptive actions, such as cyber or kinetic attacks. David Abecassis wrote a great in-depth post putting MAIM into a rationality framework here. In short, MAIM posits that the international norm should be to destroy competitors' AI pre-emptively before it reaches superintelligence. While this is, in our opinion, the best attempt yet to describe an international deterrence framework for AI, it creates several issues. As we (and other authors) dig deeper, MAIM begins to break down, most notably when compared to existing nuclear deterrence frameworks.
LessWronger mc1soft summarizes the key problems better than I could have (link to his post above):
“Mutual Assured AI Malfunction (MAIM)—a strategic deterrence framework proposed to prevent nations from developing Artificial Superintelligence (ASI)—is fundamentally unstable and dangerously unrealistic. Unlike Cold War-era MAD, MAIM involves multiple competing actors, increasing risks of unintended escalation, misinterpretation, and catastrophic conflict. Furthermore, ASI itself, uncontainable by design, would undermine any structured deterrent equilibrium. Thus, pursuing MAIM to deter ASI deployment is both strategically irrational and dangerously misaligned with real-world political dynamics and technological realities.”
Effective deterrence requires clear “red lines” and monitoring, neither of which exists today.
People are stuck thinking in terms of a world order that does not exist today. Everything has changed.
The current policy (as far as we can tell) is an inherent lack of policy. No rules, full steam ahead. Go go go! This even extends to energy systems, as nuclear becomes highly deregulated and petrochemical emissions controls are significantly reduced. All vectors are aligning toward a mad race to be first to superintelligence.
The Superintelligence Strategy paper even advises against a “Superintelligence Manhattan Project”, adding this helpful chart:
The US Department of Energy has quickly dashed any hope of following the Superintelligence Strategy paper's advice. This has been reinforced by the latest nuclear executive orders of May 23rd, 2025.
Surely Chinese analysts are reading these communications and papers like Superintelligence Strategy, and drawing their own conclusion: America is all-in on the race.
More interestingly, all of this is happening while classical nuclear deterrence is beginning to break down. Fifth-generation warfare (sometimes dubbed decentralized warfare) is changing nuclear deterrence. This is perhaps best illustrated by Ukraine's Operation Spiderweb, which on June 1st, 2025 crippled Russia's strategic nuclear bomber fleet. Ukraine claimed that 34% of Russia's strategic cruise missile carriers had been hit by an attack consisting of just 117 FPV drones (likely costing less than $1000 each, and running on local 4G!). This is wildly disproportionate and extremely destabilizing from a nuclear deterrence standpoint.
Assumption 4: Nuclear weapons are not a suitable deterrent against AI superintelligence, and may not even be relevant in a world of 5th-generation warfare, e.g. Operation Spiderweb (something we debate much to the chagrin of the nuclear non-proliferation community).
During the Cold War, the primary objective was to reduce the risk of civilizational destruction through mutually assured destruction, or “MAD”, which made a nuclear strike extremely costly. We argue that a cybersecurity-based approach may be more appropriate: making a strike costly not in terms of human life, but in the cost of the attack itself. This mirrors standard cybersecurity practice, where increasing security means increasing the cost of attacking a system (there is no “real cybersecurity”, only more expensive attacks).
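As a minimal sketch of this cost-based deterrence logic, consider a rational attacker who strikes only when the expected payoff exceeds the expected cost. The function and all numbers below are illustrative assumptions, not figures from any of the papers cited.

```python
# Minimal sketch of deterrence-by-cost: a rational attacker strikes only when the
# expected payoff exceeds the expected cost. All numbers are illustrative assumptions.

def attack_is_rational(payoff: float, cost_per_target: float, targets: int,
                       p_success: float) -> bool:
    """Return True if the expected payoff of a strike exceeds its expected cost."""
    expected_payoff = payoff * p_success
    expected_cost = cost_per_target * targets
    return expected_payoff > expected_cost

# A single centralized datacenter: cheap to hit, likely to succeed.
print(attack_is_rational(payoff=100.0, cost_per_target=1.0, targets=1, p_success=0.9))   # True

# The same capability split across 20 hardened, distributed sites: every site must be
# hit, and the chance of disabling all of them at once drops sharply.
print(attack_is_rational(payoff=100.0, cost_per_target=5.0, targets=20, p_success=0.2))  # False
```

The point is not the specific numbers, but the direction: hardening and distribution move the inequality against the attacker without threatening anyone's population.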
We propose Mutual AI Redundant Safeguards, or MARS.
Note that Superintelligence Strategy argues strongly against hardened datacenters due to the increased risk of escalation. We make several key assumptions here (at this point we consider them to be observations).
Under these conditions, it becomes more logical to harden AI datacenters, making them more expensive to attack and reducing the risk of an attempted attack being successful.
We can also make another assumption (Assumption 5): nuking a datacenter would probably have minimal impact, because once a superintelligence model is created, it will likely be compressible to a few terabytes and can easily be backed up, run, or even trained in a distributed fashion.
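A rough sketch of why this matters for deterrence: if the trained weights are replicated across independent, geographically distributed sites, the probability that a single strike destroys every copy falls off exponentially. The independence assumption is a strong simplification, and the per-site numbers below are made up purely for illustration.

```python
# Rough sketch of Assumption 5: a model of a few terabytes replicated across n
# geographically distributed sites. Assumes each site is destroyed independently
# with the same probability, which is a deliberate simplification.

def survival_probability(n_sites: int, p_destroy_site: float) -> float:
    """Probability that at least one replica survives a simultaneous strike."""
    return 1.0 - p_destroy_site ** n_sites

for n in (1, 3, 10, 30):
    p = survival_probability(n, p_destroy_site=0.9)
    print(f"{n:>2} replicas: {p:.4f} chance at least one copy survives")
```

Even with a very effective per-site strike, a modest number of replicas makes total destruction unlikely, which is the core of the redundancy argument, and adding an off-world replica pushes this further still.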
Note: MARS may reduce the risk of a nuclear attack, but it can increase the risk of a superintelligence going out of control. We strongly advocate for observability and other measures to reduce civilizational risk, but that is not the focus of this piece.
Let’s now view the world through the lens of a race for superintelligence running at full speed:
It now becomes highly rational:
I would argue we are already seeing the things above happening at a frantic pace. It would also be rational to reduce the total number of nuclear weapons on Earth and implement standard AI safety mechanisms internationally, but we consider that unlikely to happen as countries near superintelligence.
Here’s where this argument gets interesting. Based on this logic, it now becomes highly rational to distribute datacenter and AI capability off Earth. This idea already aligns with incentives to reduce the civilizational risk of AI by building off-Earth colonies, most notably on Mars.
We have come to the conclusion that MARS will be the primary incentive for off-world colonization. We are already seeing this manifest, with several AI projects acting as non-state actors in their own right and building space programs. Most notably:
xAI: Just merged X and xAI. Currently controls the largest GPU cluster on Earth, and its founder holds controlling ownership of SpaceX, which is mass-producing Starships.
Google AI: Investments in Kairos Power and other extensive (rumored) energy projects. Eric Schmidt, a major shareholder, just bought a controlling interest in Relativity Space, a company building 3D-printed rockets, with the stated interest of creating “an industrial base on Mars”.
The Chinese Government: Focusing on an “optimization” and open-source strategy for AI dominance. Rapidly building thorium reactors at roughly 10x the pace of the U.S., and planning a crewed Mars orbiter mission for 2033.
Considerations:
We should consider many aspects of MAIM to already be in full swing, with datacenter buildouts, nuclear reactors going into mass production, and talk of a “Superintelligence Manhattan Project” already underway. MARS offers some answers to the problems MAIM raises, reducing nuclear deterrence risk while leaving key civilizational risk factors of AI open (and for others to create solutions for). We consider MARS to be highly rational, but encourage debate on these fronts, especially as we build out nuclear breeder reactors in our own private work.
Garrett Kinsman
CoFounder California Thermodynamics
San Francisco, California, Earth.
Note: this article is based on, and improves upon, an earlier post I authored.