Who benefits if the US develops artificial superintelligence (ASI) faster than China?
One possible answer is that AI kills us all regardless of which country develops it first. People who base their policy on that concern already agree with the conclusions of this post, so I won't focus on that concern here.
This post aims to convince other people, especially people who focus on democracy versus authoritarianism, to be less concerned about which country develops ASI first. I will assume that AIs will be fully aligned with at least one human, and that the effects of AI will be roughly as important as the industrial revolution, or a bit more important.
Pre-industrial experts would have been fairly surprised if they'd lived to see how the industrial revolution affected political systems. Democracy was uncommon, and the franchise was mostly limited to elites. There was little nationalism, and hardly any sign of a state-run welfare system.
So our prior ought to be that the intelligence revolution will produce similar surprises. We shouldn't extrapolate too much from current policies to post-ASI conditions.
I'll examine several scenarios for how ASI influences political power. Most likely we'll end up with something stranger than what I've been able to imagine.
For simplicity, I'll start with scenarios involving highly concentrated power, and work my way toward decentralized scenarios. I will not predict here which scenarios seem most likely.
Imagine that a single ASI, aligned with a single person, becomes powerful enough to conquer the world. A mostly automated military could be a powerful tool for such a takeover. This scenario likely leads to a world ruled by someone who knows a fair amount about seizing power, and who knows enough about AI to be in the right place at the right time.
Donald Trump? Elon Musk? Xi Jinping? Liang Wenfeng? Sam Altman? Dario Amodei? Gavin Newsom?
Few of these would submit to the will of voters if they had enough power to suppress any rebellion.
But a leader in this scenario would likely feel secure enough in power that he wouldn't need to suppress dissent. He wouldn't have much to gain by adopting policies that hurt people. With superhuman advice on how to help people, it would only take a little bit of leader altruism for things to turn out well.
So if we're stuck in this scenario, the desirability of a US victory depends heavily on what kind of personality each country allows to seize power. In particular, how likely is it that a psychopath grabs power?
I expect that most non-psychopathic leaders would use near-absolute power to mostly help people.
Which institutions are most likely to avoid letting psychopaths gain power? I think Deng Xiaoping and Ronald Reagan were fairly non-psychopathic. But current leaders of China, the US, and OpenAI inspire little confidence. The frontrunners for the US 2028 presidential election do not at all reassure me. I conclude that the within-country variation is dramatically larger than the difference between countries.
In the next scenario, a single ASI again takes control of the world, but its goals encompass the welfare of a broad set of actors (a nation? humanity? sentient creatures?).
Does the nation of origin influence how broad a set the king cares about? I don't see a clear answer.
I presume this scenario depends either on the altruism of a key person who configures the ASI's goals, or a compromise between multiple stakeholders.
This is presumably influenced by the culture of the project that creates the ASI. WEIRD culture takes a more universalizing approach to morality, making a "sentient creatures" option more likely. But WEIRD culture also places more emphasis on individualism, which may make a US project less likely to compromise with people outside the project (e.g. by ensuring that the ASI's circle of caring extends to at least a modest-sized community).
The US has a better track record of producing the kind of altruism that helps distant strangers, but that still only describes a minority of business and government project leaders.
Lots of influences matter in this scenario, but the country of origin doesn't stand out as clearly important.
The next scenario involves multiple projects producing ASIs with roughly the same capabilities, perhaps because returns diminish just as they approach the ASI level. An alternate story is that, as they get close to ASI, their near-ASIs all persuade the relevant companies that it's too risky to advance further without a better understanding of alignment.
This implies that being first entails no lasting advantage.
Bostrom's "Open Global Investment as a Governance Model for AGI" proposes a scenario in which an AI corporation effectively becomes something like a world government, with power distributed in proportion to the ability to buy stock in the corporation.
Whether Chinese corporate governance would work better or worse than US corporate governance depends on which part of China we look at. I'm fairly familiar with the governance of companies traded on the Hong Kong stock exchange. Their rules, heavily influenced by British rule, are better than US rules. What little I know of other Chinese companies suggests I'd be a good deal less happy with their governance than with US corporate governance.
However, good rules mean less in China than in the US. What happens when disputes go to court? US courts have so far mostly resisted the growing corruption in the other two branches of government. Whereas my impression of Chinese courts is that their results are heavily dependent on the guanxi of the parties.
Another important concern is that Chinese rules mostly prevent foreigners from acquiring voting power in corporations. So wealthy people in other countries could influence the ASI company a little bit by influencing its stock price, but for many purposes it would be quite close to Chinese domination of the world.
So in this scenario, ASIs from different countries would be controlled by a fairly different set of moderately wealthy investors. I'd prefer control by US-dominated investors, since I'm one of them. But control by wealthy Chinese sounds much less scary than control by the CCP, so I don't find this to be a strong argument for a race.
Democracy could prove unable to adapt to post-ASI conditions.
One risk is a simple extrapolation of how special interest groups work. Elections become decided mostly by attack ads. Most policy decisions become determined by whoever spends the most money on ads.
Or maybe it's foreign governments that covertly arrange for those attack ads, or arrange for manipulative tweets.
China's government is controlled by a more professional elite, so it's much less vulnerable to these influences, and the quality of its policies degrades less.
In this scenario, I'd weakly prefer that China develops ASI first.
Why did the West adopt a democratic system with a broad franchise in the first place? One leading theory holds that elites extended the franchise as a strategic response to the threat of social unrest, strikes, or revolution. I can easily imagine AI weakening those threats, leading elites to want to move away from democracy. AIs are unlikely to go on strike. Military drones are unlikely to side with rebels.
In this scenario, I'd expect an equally authoritarian result from either country, with a slightly better culture in the US.
Voters could easily switch to relying on AIs for their political information, with AIs coming much closer than any current information source to the ideal of objectively evaluating which policies will produce the results voters like.
The US turns into a de facto futarchy-like democracy, but with the AIs providing forecasts that are better than what human-run markets could produce.
China creates something similar, but with the franchise restricted to elite CCP members. A majority of CCP members genuinely believe CCP rhetoric about aiming for a workers' paradise. So China ends up with a Marxist utopia where no workers get exploited.
In this scenario it seems somewhat unlikely that there's much difference between nation-states.
Maybe something causes AIs to adopt something like Star Trek's Prime Directive, remaining carefully neutral in all political conflicts. And maybe most people with enough power to change political policies are satisfied with how their government works.
This is the main scenario in which I have a clear preference for the US being first. It seems like the least likely of the scenarios that I've described.
So far I've been talking as if, in the nice scenarios, the US and China coexist peacefully. Yet I haven't addressed the concern that one of them will gain a significant military advantage by achieving ASI sooner, and will use that advantage to seize control of most of the world.
I don't have much of a prediction as to whether the winner will seize control of the world, so I ought to analyze both possibilities. It feels easier to analyze the takeover possibility in one section that covers most of the nicer scenarios.
How much harm would result from the "wrong" country dominating the world?
Communism, in spite of all its faults, is a utopian ideology that causes most of its adherents to genuinely favor a pleasant society, even when it blinds them to whether their policies are achieving that result.
The CCP is somewhat embarrassed when it needs to use force against dissidents, unlike the Putins and Trumps who are eager to be seen as bullies.
The CCP's worst disaster, the Great Leap Forward famine, happened because yes-men who wanted to please Mao deluded him into thinking that China had achieved agricultural miracles. An ASI seems less likely to need to lie to leaders; it's more likely to either depose them or be clearly loyal.
ASI will cure many delusions. The CCP will be a very different political force if it has been cured of 99% of its delusions.
There's some risk that either the CCP or half the voters in the US will develop LLM psychosis. I predict that risk will be low enough that it shouldn't dominate our ASI strategy, though I don't think I have a strong enough argument here to persuade skeptics.
I also predict that ASI will raise new issues which will significantly distract voters and politicians from culture wars and from the conflict between capitalism and communism.
This is not an exhaustive list of possibilities.
I've probably overlooked some plausible scenarios in which there's a clear benefit to the US getting ASI before China does. But I hope I've helped you see that such scenarios aren't the default outcome, and that the benefits of getting ASI first aren't especially important compared to the benefits of ensuring that ASI has good effects on whoever develops it.
The possibility of ASI killing us all was not sufficient to persuade me to feel neutral about scenarios where China builds ASI before the US.
This post has described the kind of analysis that has led me to have only a minor preference for a US entity to be the first to build ASI.
It seems much more important to influence which of these scenarios we end up in.
P.S. This post was not influenced by Red Heart, even though there's some overlap in substance; I wrote much of the post before reading that book.