In Defense of the Arms Races… that End Arms Races

by Gentzel (The Consequentialist) · 2 min read · 15th Jan 2020 · 9 comments


Tags: War, AI Takeoff

All else being equal, arms races are a waste of resources and often an example of the defection equilibrium in the prisoner’s dilemma. However, in some cases, such capacity races may actually be the globally optimal strategy. Below I try to explain this with some examples.
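The defection equilibrium mentioned above can be made concrete with a toy payoff matrix. This is an illustrative sketch, not anything from the post: the payoff numbers are assumptions chosen only to satisfy the prisoner's dilemma structure.

```python
# Toy arms-race prisoner's dilemma. Payoffs are (row player, column player);
# the numbers are illustrative assumptions. "race" = defect, "restrain" = cooperate.
payoffs = {
    ("restrain", "restrain"): (3, 3),  # both save resources
    ("restrain", "race"):     (0, 4),  # unilateral restraint is exploited
    ("race",     "restrain"): (4, 0),
    ("race",     "race"):     (1, 1),  # mutual waste: the defection equilibrium
}

def best_response(opponent_move):
    # Pick the move that maximizes the row player's payoff
    # against a fixed opponent move.
    return max(["restrain", "race"],
               key=lambda m: payoffs[(m, opponent_move)][0])

# Racing is a dominant strategy: it is the best response to either
# opponent move, so (race, race) is the unique Nash equilibrium even
# though both sides prefer (restrain, restrain) to it.
assert best_response("restrain") == "race"
assert best_response("race") == "race"
```

The point of the toy matrix is just that individually rational racing leaves both sides worse off than mutual restraint, which is why arms races read as wasteful defection.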

1: If the U.S. had kept racing in its military capacity after WW2, it might have been able to use its negotiating leverage to stop the Soviet Union from becoming a nuclear power: halting proliferation and preventing the build-up of world-threatening numbers of high-yield weapons. Basically, the earlier you win an arms race, the less nasty it may be later. If the U.S. had won the Cold War earlier, global development might have taken a very different course, with decades of cooperative growth instead of the Soviet Union spending immense amounts of its GDP on defense, ultimately causing its collapse. The principle: it may make sense to start an arms race if you think you are going to win by starting now, provided that a nastier arms race is inevitable later.

2: If neither the U.S. nor Russia had developed nuclear weapons at a quick pace, many more factions could have developed them later at around the same time, and this would have been much more destabilizing and potentially violent than a monopolar or bipolar power situation. Principle: it is easier to generate stable coordination among small groups of actors than large ones. The more actors there are, the less likely MAD and treaties are to work; and the earlier an arms race starts, the more prohibitively expensive it is for new groups to join the race.
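The claim that coordination degrades with the number of actors can be sketched with a toy model. This is a hypothetical illustration, not from the post: assume each of n actors independently honors an arms-control agreement with some probability p per period, so the agreement survives intact with probability p^n.

```python
# Toy model: if each of n actors independently complies with an
# arms-control agreement with probability p per period, the chance
# the agreement holds for everyone is p ** n, which falls quickly
# as n grows. (p = 0.95 is an illustrative assumption, not an estimate.)
def stability(p, n):
    return p ** n

p = 0.95
for n in (2, 5, 20):
    print(n, stability(p, n))
# With p = 0.95: 2 actors -> ~0.90, 5 actors -> ~0.77, 20 actors -> ~0.36.
```

Even under generous compliance assumptions, a bipolar agreement is far more robust than a twenty-party one, which is the intuition behind preferring small groups of actors for treaties and MAD-style deterrence.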

3: If hardware design is a bottleneck on the development of far more powerful artificial intelligence systems, then racing to figure out good algorithms now will let us test a lot more things before we reach the point where a relatively bad set of algorithms can cause an immense amount of harm with the hardware at its disposal (improbable example: imagine a Hitler emulation with the ability to think 1000x faster). Principle: the earlier you start an arms race, the more constrained you are by technological limits.1

I do not necessarily think these arguments are decisive, but I do think it is worth figuring out what the likely alternatives are before deciding if engaging in a particular capacity race is a bad idea. In general:

  • It’s nice for there to not be tons of violence and death from many factions fighting for power (multipolar situation)
  • It is nice to not have the future locked into a horrible direction by the first country/company/group/AI/etc. to effectively take over the world due to some advantage derived from racing toward a technological edge (singleton/monopolar power)
  • It’s nice for there to not be the constant risk of overwhelming suffering and death from a massive arms build up between two factions (bipolar situation)

So whether an arms race is good basically depends on whether the “good guys” are going to win (and remain good guys). If not, racing just makes everyone spend more on potentially risky tech and less on helping people. While some concerns about autonomous drones are legitimate, and they may make individuals much more powerful, I am unsure it is good to stop investment races now unless they can also be stopped from happening later. Likewise, U.S. leadership in such a race is likely to shape how lethal autonomous weapons proliferate in a more ethical direction, with lower probabilities of civilian deaths than the weapons states would otherwise purchase. It is also probably better to start figuring out what goes wrong while humans still control mostly-autonomous drones than to wait for a bunch of countries to defect on unenforceable arms-control agreements later in a conflict and start deploying riskier, less well-vetted systems.

If one thinks decision-making agencies will be better governed in the future, delaying technologies that centralize power may make sense, to avoid locking in bad governments/companies/AI systems. However, to the degree that competent bureaucracies can gain advantage from making risky tech investments regardless of their alignment with the general population, more-aligned systems must keep a lead to prevent others from locking in poor institutions.

Overall, arms races are wasteful and unsafe, but they may mitigate other even less safe races if they happen at the right time under the right conditions. In general, by suppressing the incentive for violence between individuals and building up larger societies, states pursuing power in zero-sum power races ultimately created positive sum economic spillovers from peace and innovation.


  1. As opposed to tech research continuing outside the military, such that when an arms race begins there is a sudden destabilizing leap in attack capacity for one side or another.
  2. You can see other related arguments in the debate on nuclear modernization here.


Comments
If the U.S. kept racing in its military capacity after WW2, the U.S. may have been able to use its negotiating leverage to stop the Soviet Union from becoming a nuclear power: halting proliferation and preventing the build up of world threatening numbers of high yield weapons.

BTW, the most thorough published examination I've seen of whether the U.S. could've done this is Quester (2000). I've been digging into the question in more detail and I'm still not sure whether it's true or not (but "may" seems reasonable).

Thanks, some of Quester's other books on deterrence also seem pretty interesting.

My post above was actually intended as a minor update to an old post from several years ago on my blog, so I didn't really expect it to be copied over to LessWrong. If I spent more time rewriting the post again, I think I would focus less on that case, which I think rightly can be contested from a number of directions, and talk more about conditions for race deterrence generally.

Basically, if you can credibly build up the capacity to win an arms race (with significant advantages in the relevant forms of talent, natural resources, industrial capacity, etc.) then you may not even have to race. Limited development could plausibly serve to make capacity credible, gain the advantages of positive externalities from cutting edge R&D, but avoid actually sinking a lot of the economy into the production of destabilizing systems. By showing extreme capability in a limited sense, and credible capability to win a particular race, you may be able to deter racing if the communication of lasting advantage is credible. If lasting advantage is not credible, you may get more of a Sputnik or AlphaGo type event and galvanize competitors toward racing faster.

For global tech competition more generally, it would be interesting to investigate industrial subsidies by competing governments, to see under what conditions countries attempt strategic protectionism to get around the WTO, and in which cases they give up a sector of competition. My prior is that protectionism is more likely when an industry is established, and that countries which could have successfully entered a sector can be deterred from doing so.

So if an arms race is good or not basically depends on if the “good guys” are going to win (and remain good guys).

Quick thought — it's not apples to apples, but it might be worth investigating which fields hegemony works well in, and which fields checks and balances work well in:

There's also the question with AGI of what we're more scared of — one country or organization dominating the world, or an early pioneer in AGI doing a lot of damage by accident?

#2 scares me more than #1. You need to create exactly one resource-commandeering positive feedback loop without an off switch to destroy the world, among other things.

While it may sound counterintuitive, I think you want to increase both hegemony and balance of power at the same time. Basically, a more powerful state can help solve lots of coordination problems, but to accept the risks of greater state power you want the state to be more structurally aligned with larger and larger populations of people.

Obviously states are more aligned with their own populations than with everyone, but I think the expansion of the U.S. security umbrella has been good for reducing the number of possible security dilemmas between states and accordingly people are better off than they would otherwise be with more independent military forces (higher defense spending, higher war risk, etc.). There is some degree of specialization within NATO which makes it harder for states to go to war as individuals, and also makes their contribution to the alliance more vital. The more this happens at a given resource level, the more powerful the alliance will be in absolute terms, and the more power will be internally balanced against unilateral actions that conflict with some state's interests, though at some point veto power and reduced redundancy could undermine the strength of the alliance.

For technological risks, racing increases risk in the short-run between the competitors but will tend to reduce the number of competitors. In the long-run, agreeing not to race while other technologies progress increases the amount of low hanging fruit and expands the scope of competition to more possible competitors. If you think resource-commandeering positive feedback loops are not super close, there might be a degree of racing you would want earlier to establish front-runners to win and deter potential market entrants from expanding the competition during a period of high-risk low-hanging fruit. You might be able to do better yet if the near term leading competitors can reach agreement to not race, and then team up to defeat or buyout new entrants. The leaders obviously can't hold everything completely still and expect to remain leaders though, and businesses should deliver measurable tech progress if they want to avoid anti-monopoly regulation.

Anyway, preventing races basically isn't as simple as choosing not to race; even if your goal is just to minimize risk, you either have to credibly commit a larger and larger number of actors not to defect over time as technology and know-how diffuse, or you should want more-aligned competitors to win and to cooperate to slow the risky aspects of racing.

Apologies if this wasn't clear from the post, the post was intended as a minor update to one I wrote several years ago, and I didn't expect to see it get copied over to LessWrong, haha.

Here's an example from nature on snake venom that 'won' an evolutionary arms race.

From the abstract: "Examination of the prothrombin target revealed endogenous blood proteins are under extreme negative selection pressure for diversification, this in turn puts a strong negative selection pressure upon the toxins as sequence diversification could result in a drift away from the target. Thus this study reveals that adaptive evolution is not a consistent feature in toxin evolution in cases where the target is under negative selection pressure for diversification."

There are implications here for arms races generally. When you target something 'core' to the opponent that cannot be easily randomized to develop a diverse, and therefore adaptive, strategy, it is possible to 'win' an evolutionary arms race in the long term.

Essentially Eliezer's blind idiot god writes itself into a corner when it can no longer randomize a section under attack, and just sort of fails.

Basically, even if there are adaptations that could make an animal more resistant to venom, the incremental changes in its circulatory system required to get there are so maladaptive/harmful that they can't happen.

This is a pretty core part of competitive strategy: matching enduring strengths against the enduring weaknesses of competitors.

That said, races can shift dimensions too. Even though snake venom won the race against blood, gradual changes in the lethality of venom might still cause gradual adaptive changes in the behavior of some animals. A good criticism of competitive strategies between states, businesses, etc. is that the repeated shifts in competitive dimensions can still result in Molochian conditions/trading away utility for victory, which may have been preventable via regulation or agreement.

The principle: it may make sense to start an arms race if you think you are going to win if you start now, provided that a nastier arms race is inevitable later. 

My impression is that while there were some people who thought the Soviet Union would turn out to be troublesome, most people believed (either genuinely or as the result of wishful thinking) that capitalism and communism could coexist, and thus the nastier arms race later was not inevitable.

states pursuing power in zero-sum power races ultimately created positive sum economic spillovers from peace and innovation.

That seems a lot like how one might characterize basic research in many ways -- it seems a bit wasteful and initially doesn't accomplish much that is directly useful to anyone on a practical level. However, it ultimately tends to plant a lot of seeds or open a lot of new development branches that do.

So are arms races, particularly those that don't end in an armed conflict, something we can view as just another form of basic research? Or is the arms race side of this just one of the branches that stemmed from the basic research and maybe we shouldn't give the arms race or the zero-sum power game much credit for the spin offs?

I'm also not quite sure what to make of "if the 'good guys' are going to win (and remain good guys)." That seems to be too subjective to be much help to me.

I think capacity races to deter deployment races are the best from this perspective: develop decisive capability advantages, and all sorts of useful technology, credibly signal that you could use it for coercive purposes and could deploy it at scale, then don't abuse the advantage, signal good intent, and just deploy useful non-coercive applications. The development and deployment process basically becomes an escalation ladder itself, where you can choose to stop at any point (though you still need to keep people employed/trained to sustain credibility).

I would probably rephrase the "good guys" statement in terms of value alignment. For competitions where near-term racing risks are low, long-term racing risks are high, and the likely winner from starting race conditions earlier would be more value-aligned with your goals, racing may make sense. If initial race risks are high, or a less-aligned actor is disproportionately likely to gain power, then pushing for more cooperative norms at the margin makes sense. You want to maximize your ability to foster or engineer risk-reducing cooperation before the times of highest risk, and you want to avoid forms of cooperation that increase risk.