There's some risk that either the CCP or half the voters in the US will develop LLM psychosis. I'm predicting that that risk will be low enough that it shouldn't dominate our ASI strategy. I don't think I have a strong enough argument here to persuade skeptics.
I've been putting some thought into this, because my strong intuition is that something like this is an under-appreciated scenario. My basic argument is that mass brainwashing, for lack of a better word, is cheaper and less risky than other forms of ASI control. The idea is that we (humans) are extremely programmable (plenty of historical examples); it just requires a more sophisticated "multi-level" messaging scheme - so it's not going to look like an AI cult, more like an AI "movement" with a fanatical base.
Here is one pathway worked out in detail - will be generalizing soon: https://www.lesswrong.com/posts/zvkjQen773DyqExJ8/the-memetic-cocoon-threat-model-soft-ai-takeover-in-an
This post aims to convince other people, especially people who focus on democracy versus authoritarianism, to be less concerned about which country develops ASI first.
I agree that it makes sense to make this kind of argument. I have some questions/disagreements below, which don't necessarily affect your conclusions, but seem important to point out anyway.
I think Deng Xiaoping and Ronald Reagan were fairly non-psychopathic. But current leaders of China, the US, and OpenAI inspire little confidence. The frontrunners for the US 2028 presidential election do not at all reassure me.
Did you know that Deng approved the 1989 crackdown on Tiananmen protesters (as part of the CCP top leadership, also subsequently making a speech endorsing this)? I see from context that you probably don't consider suppression of dissent (like the Tiananmen crackdown) to be evidence or part of being "psychopathic". Maybe you could expand a bit more on what you do mean, and what makes you more worried about the people you mentioned above? What makes you more worried about Xi relative to Deng, for example?
Communism, in spite of all its faults, is a utopian ideology that causes most of its adherents to genuinely favor a pleasant society, even when it blinds them to whether their policies are achieving that result.
A darker interpretation is that the (subconscious, but more real or substantial in some sense) goals of nearly all humans are to gain power and status, and utopian ideologies are merely a tool for achieving this. From this perspective, Communists do not more "genuinely favor a pleasant society" but are instead more deluded (on a conscious level) about their true motivations, and about human nature in general. That "it blinds them to whether their policies are achieving that result" is not some sort of incidental side effect, but a fundamental part of what's going on.
(Are you aware of the many internal CCP purges, often powered by forced (i.e., torture-induced) confessions, both before and after it took power in 1949? This is another reason to prefer the "power maximization" model of Communist/human motivation.)
There's some risk that either the CCP or half the voters in the US will develop LLM psychosis. I'm predicting that that risk will be low enough that it shouldn't dominate our ASI strategy. I don't think I have a strong enough argument here to persuade skeptics.
I would agree that "it shouldn't dominate our ASI strategy" given many other risks that all have to be considered, but not that the risk is low in an absolute sense, at least if we interpret "LLM psychosis" more broadly to include corruption / adversarial manipulation of human values or motivational systems in general.
Did you know that Deng approved the 1989 crackdown on Tiananmen protesters
Yes, I'm aware that he did a few things that I consider evil. Wanting to keep his party in power is common enough among politicians that it's not much evidence of psychopathy. His overall attitude toward independent thought was at least no worse than average for a political leader.
A lot of what I have in mind is that Deng allowed more freedom than can readily be explained by his self-interest, and Xi seems more Maoist than Deng.
But I wouldn't be surprised if you have better information about their personalities than do I.
A darker interpretation is that the (subconscious, but more real or substantial in some sense) goals of nearly all humans are to gain power and status, and utopian ideologies are merely a tool for achieving this.
The ideologies are partly a tool for that, but they have more effects on the wielder than a mere tool does. My biggest piece of evidence for that is the mostly peaceful collapse of the Soviet Union. I was quite surprised that the leaders didn't use more force to suppress dissent.
A lot of what I have in mind is that Deng allowed more freedom than can readily be explained by his self-interest, and Xi seems more Maoist than Deng.
I think this can mostly be explained by different incentives faced by Deng and Xi. Here's a longer AI-generated explanation on Deng allowing more freedom, which generally matches my own understanding:
Gemini 3.0 Pro's response to "is it fair to say Deng Xiaoping allowed more freedom than can readily explained by his self-interest"
It is generally not fair—or at least, historically inaccurate—to say that Deng Xiaoping allowed more freedom than can be explained by his self-interest.
To understand why, one must define what Deng’s "self-interest" was. If self-interest is defined narrowly as "accumulating personal wealth" or "exercising sadism," then he was certainly more benevolent than that.
However, if self-interest is defined as the survival of the Chinese Communist Party (CCP) and the preservation of his own legacy as the savior of China, then almost every freedom he granted can be explained as a calculated move to serve those ends.
Here is an analysis of why Deng’s granting of freedoms aligns closely with political necessity rather than altruistic liberalism.
When Deng rose to paramount power in the late 1970s, the CCP was facing a crisis of legitimacy. The Great Leap Forward and the Cultural Revolution had left the country in poverty and social chaos. Belief in Marxist ideology was shattered.
Deng realized that for the Party (and himself) to survive, they could no longer rely on ideological fervor; they needed results.
In the late 1970s, Deng was locked in a power struggle with Mao’s chosen successor, Hua Guofeng, and the "Whateverist" faction (hardline Maoists).
Deng introduced term limits, collective leadership, and mandatory retirement ages. One could argue this limited his own power, and thus went against his self-interest.
The ultimate test of whether Deng prioritized freedom over self-interest occurred in 1989. Protesters demanded political transparency, an end to corruption, and greater press freedom.
There is one area where Deng went further than a purely cynical dictator might have: The Southern Tour (1992).
After Tiananmen, conservatives in the party wanted to roll back economic reforms. An elderly Deng, holding no official top titles, toured southern China to rally support for continued economic liberalization.
Deng Xiaoping was a pragmatist, not a liberal. His famous maxim, "It doesn't matter if a cat is black or white, so long as it catches mice," sums up his approach to freedom.
He viewed freedom not as a human right, but as a utility. He dispensed exactly as much economic and personal freedom as was necessary to unleash China’s productivity and restore the Party’s strength, but he ruthlessly withheld any political freedom that threatened the Party’s dominance. Therefore, his actions are readily explained by a sophisticated, long-term understanding of self-interest.
The ideologies are partly a tool for that, but they have more effects on the wielder than a mere tool does. My biggest piece of evidence for that is the mostly peaceful collapse of the Soviet Union. I was quite surprised that the leaders didn't use more force to suppress dissent.
This seems fair, but I note that other Communist countries did suppress dissent more forcefully. One explanation is that it's a tool wielded by a fairly dumb and constrained agent, namely one's subconscious, and sometimes the situation gets away from it before it has a chance to respond. In my "master-slave" post linked earlier, I wrote:
Plus, the slave is in charge day-to-day and could potentially overthrow the master. For example, the slave could program an altruistic AI and hit the run button, before the master has a chance to delete the altruism value from the slave.
The freedoms Deng Xiaoping granted can in fact be explained by his personal interests: selling state assets cheaply to officials helped consolidate his support within the Party, while marketization stimulated economic growth and stabilized society. Yet at the same time, he effectively stripped away most political freedoms.
Mao Zedong's late-stage governance, however, defies such explanation: even when his power was unassailable, he encouraged radical leftist workers and students (the "rebels") to confront pro-bureaucratic forces (the "conservatives") and attempted to establish direct democratic systems like the Shanghai Commune. Despite ordering crackdowns on communist dissidents like the "May 16th" group, this behavior likely stemmed more from political ideals than from self-interest.
I may be missing something, but the fact that the Soviet Union collapsed relatively peacefully is not a lot of evidence for restraint or unwillingness to use force among its leaders. For the whole previous history of the Soviet Union, repression and silencing of dissent were commonplace, especially under Lenin and Stalin (for example, the secret police torturing dissidents and intellectuals or sending them to gulags). Also, Gorbachev was pretty uniquely committed to change in the context of USSR leadership, and there were plenty of other factors making dissolution more appealing. Oppression definitely decreased as the USSR neared its end, but why does this outweigh the very violent first half of its existence?
US courts have so far mostly resisted the growing corruption in the other two branches of government.
I'd say that that's a controversial assertion to state as an axiom. The best proxy I can think of is their public support, which, per Gallup, is slightly higher than the Executive, but by nothing near the gap between the Executive and Legislative branches. Even then, the difference is primarily due to a 600% difference in trust among Democrats specifically, with Republicans and Independents rating them similarly to the Executive.
That's not much of a proxy. I'm relying on my subjective impressions from many reports. A more precise phrasing of my claim is that I've seen numerous reports of what I consider to be open contempt for the rule of law among elected officials, but judges in newsworthy cases have almost always looked like they're trying to take the law seriously.
Some of my impressions come from a private mailing list where conservative lawyers have been expressing dismay at the Trump administration's lack of interest in whether their actions could plausibly be defended in a court.
I would say that vibes are a much worse proxy than overall public opinion - especially if you have partisan leanings and don't attempt to temper their effects.
Some of my impressions come from a private mailing list where conservative lawyers
Since 2016, there has been a small but very vocal contingent of neoconservatives that get trotted around as "conservative <insert profession here>", but whose priors WRT anything Trump does are closer to the leftmost quartile of Democrats than to the median Republican, or even the median Independent. A common drinking game among news-readers is to search for articles of the format "Conservative Commentator says <thing that one would very much not expect a right-wing American to say>", and take a swig if the unnamed conservative commentator turns out to be Erick Erickson, Bill Kristol, or David French. The three of them alone cover about 75 percent of these articles, and you can pick another three names to cover 75 percent of the rest.
This is to say that it's very easy to fall into the trap of believing that your views are universal because a cherrypicked set of Fox News Liberals (of either party) are serving as your model for the 'reasonable opposition'. Hard data may not be perfect, but it's essential in emotionally fraught domains.
I think the question of "Who can be included within political life?" is quite important to answer before deciding whether you prefer China or not. We may very well get an ASI aligned to the interests of the nation which created it, to the detriment of others.
China is inherently a nation of Chinese people, and nominally a nation for the various minorities that are already there. The government represents the interests of this in-group, and almost never lets anyone else in. It's essentially impossible to become a Chinese citizen for anyone but a few members of the Chinese diaspora.
The US is nominally a European-descended nation, but is essentially universal in who can be an American and who is allowed to participate in political life. Nearly a million people are naturalized every year, and many more people with immigrant parents are born Americans. These immigrants come from every part of the planet.
The xenophobic rhetoric and immigration bans that have been popular recently in the US are themselves the result of an immigration policy so expansive that it changes the ethnic and religious makeup of entire cities. This indicates to me that the U.S. is clearly a nation that is willing to expand who it considers part of the nation, even if there's pushback when this happens rapidly. In China you won't see much anti-immigrant sentiment because they don't allow many immigrants at all. The few who are conditionally allowed to live in China over the long term have no path to citizenship.
That's all to say that a Chinese ASI risks being an ASI working for the interests of the Chinese people alone, whereas an American ASI would at least conceptually allow that non-Americans are not permanently excluded from becoming part of the nation. If an ASI pursues the national interest, I'd rather it serve a nation that is not exclusive to a single ethnic group to the detriment of everyone else.
If you're an American or Chinese I think the answer is obvious. If you're neither I think preferring American ASI makes more sense.
The idea that the head of the organisation gets to be king is debatable: https://www.lesswrong.com/posts/7gfA2RSibbr2cdEgp/the-wise-baboon-of-loyalty
Who benefits if the US develops artificial superintelligence (ASI) faster than China?
One possible answer is that AI kills us all regardless of which country develops it first. People who base their policy on that concern already agree with the conclusions of this post, so I won't focus on that concern here.
This post aims to convince other people, especially people who focus on democracy versus authoritarianism, to be less concerned about which country develops ASI first. I will assume that AIs will be fully aligned with at least one human, and that the effects of AI will be roughly as important as the industrial revolution, or a bit more important.
Expect the Unexpected
Pre-industrial experts would have been fairly surprised if they'd lived to see how the industrial revolution affected political systems. Democracy was uncommon, and the franchise was mostly limited to elites. There was little nationalism, and hardly any sign of a state-run welfare system.
So our prior ought to be that the intelligence revolution will produce similar surprises. We shouldn't extrapolate too much from current policies to post-ASI conditions.
I'll examine several scenarios for how ASI influences political power. Most likely we'll end up with something stranger than what I've been able to imagine.
For simplicity, I'll start with scenarios involving highly concentrated power, and work my way toward decentralized scenarios. I will not predict here which scenarios seem most likely.
Leader Personality
Imagine that a single ASI, which is aligned with a single person, becomes powerful enough to conquer the world. A military that gets mostly automated could be a pretty powerful tool. This likely leads to a world ruled by someone who knows a fair amount about seizing power, and knows enough about AI to be in the right place at the right time.
Donald Trump? Elon Musk? Xi Jinping? Liang Wenfeng? Sam Altman? Dario Amodei? Gavin Newsom?
Few of these would submit to the will of voters if they had enough power to suppress any rebellion.
But a leader in this scenario would likely feel secure enough in power that he wouldn't need to suppress dissent. He wouldn't have much to gain by adopting policies that hurt people. With superhuman advice on how to help people, it would only take a little bit of leader altruism for things to turn out well.
So if we're stuck in this scenario, the desirability of a US victory depends heavily on what kind of personality each country allows to seize power. In particular, how likely is it that a psychopath grabs power?
I expect that most non-psychopathic leaders would use near-absolute power to mostly help people.
Which institutions are most likely to avoid letting psychopaths gain power? I think Deng Xiaoping and Ronald Reagan were fairly non-psychopathic. But current leaders of China, the US, and OpenAI inspire little confidence. The frontrunners for the US 2028 presidential election do not at all reassure me. I conclude that the within-country variation is dramatically larger than the difference between countries.
Benign ASI King
In this scenario, a single ASI takes control of the world. Its goals encompass the welfare of a broad set of actors (a nation? humanity? sentient creatures?).
Does the nation of origin influence how broad a set the king cares about? I don't see a clear answer.
I presume this scenario depends either on the altruism of a key person who configures the ASI's goals, or a compromise between multiple stakeholders.
This is presumably influenced by the culture of the project which creates the ASI. WEIRD culture features a more universalizing approach to morality, making a "sentient creatures" option more likely. But WEIRD culture also emphasizes individualism more, maybe making a US project less likely to compromise with people outside of the project (as in ensuring that the ASI's circle of caring extends to at least a modest-sized community).
The US has a better track record of producing the kind of altruism that helps distant strangers, but that still only describes a minority of business and government project leaders.
Lots of influences matter in this scenario, but the country of origin doesn't stand out as clearly important.
Multiple Co-Equal ASIs
This scenario involves multiple projects producing ASI with about the same capabilities. Maybe due to diminishing returns just as they approach the ASI level. An alternate story is that as they get close to ASI, their near-ASIs all persuade the relevant companies that it's too risky to advance further without a better understanding of alignment.
This implies that being first entails no lasting advantage.
Bostrom's OGI Model
Bostrom's Open Global Investment as a Governance Model for AGI proposes a scenario where an AI corporation effectively becomes something like a world government. Power ends up being distributed in proportion to ability to buy stock in the corporation.
I see important differences within China as to whether Chinese corporate governance would work better or worse than US corporate governance. I'm pretty familiar with governance of companies that are traded on the Hong Kong stock exchange. Their rules are better than US rules (they were heavily influenced by British rule). Whereas what little I know of other Chinese companies suggests that I'd be a good deal less happy with their governance than with US corporate governance.
However, good rules mean less in China than in the US. What happens when disputes go to court? US courts have so far mostly resisted the growing corruption in the other two branches of government. Whereas my impression of Chinese courts is that their results are heavily dependent on the guanxi of the parties.
Another important concern is that Chinese rules mostly prevent foreigners from acquiring voting power in corporations. So wealthy people in other countries could influence the ASI company a little bit by influencing its stock price, but for many purposes it would be quite close to Chinese domination of the world.
So in this scenario, ASIs from different countries would be controlled by a fairly different set of moderately wealthy investors. I'd prefer control by US-dominated investors, since I'm one of them. But control by wealthy Chinese sounds much less scary than control by the CCP, so I don't find this to be a strong argument for a race.
Poisoned Democracy
Democracy could prove unable to adapt to post-ASI conditions.
One risk is a simple extrapolation of how special interest groups work. Elections become decided mostly by attack ads. Most policy decisions become determined by whoever spends the most money on ads.
Or maybe it's foreign governments that covertly arrange for those attack ads, or arrange for manipulative tweets.
China's government is controlled by a more professional elite, so it's much less vulnerable to these influences, and the quality of its policies degrades less.
In this scenario, I'd weakly prefer that China develops ASI first.
Democracy is Dying
Why did the West adopt a democratic system with a broad franchise in the first place? One leading theory holds that elites extended the franchise as a strategic response to the threat of social unrest, strikes, or revolution. I can easily imagine that AI will weaken those threats, leading to elites wanting to move away from democracy. AIs are unlikely to go on strike. Military drones are unlikely to side with rebels.
In this scenario, I'd expect an equally authoritarian result from either country, with a slightly better culture in the US.
AI Enhances Government
Voters could easily switch to relying on AIs for their political information, with AIs being much closer than any current information source to the ideal of objectively evaluating what policies will produce results that voters like.
The US turns into a de facto futarchy-like democracy, but with the AIs providing forecasts that are better than what human-run markets could produce.
China creates something similar, but with the franchise restricted to elite CCP members. A majority of CCP members genuinely believe CCP rhetoric about aiming for a workers' paradise. So China ends up with a Marxist utopia where no workers get exploited.
In this scenario it seems somewhat unlikely that there's much difference between nation-states.
Governments Little Changed by ASI
Maybe something causes AIs to adopt something like Star Trek's Prime Directive, and remain carefully neutral about all political conflicts. And maybe most people who have enough power to change political policies are satisfied with the way that their government works.
This is the main scenario in which I have a clear preference for the US being first. It seems like the least likely of the scenarios that I've described.
Decisive Advantage?
So far I've been talking as if, in the nice scenarios, the US and China coexist peacefully. Yet I haven't addressed the concern that one will get a significant military advantage via achieving ASI sooner, and using that advantage to seize control of most of the world.
I don't have much of a prediction as to whether the winner will seize control of the world, so I ought to analyze both possibilities. It feels easier to analyze the takeover possibility in one section that covers most of the nicer scenarios.
How much harm would result from the "wrong" country dominating the world?
Communism, in spite of all its faults, is a utopian ideology that causes most of its adherents to genuinely favor a pleasant society, even when it blinds them to whether their policies are achieving that result.
The CCP is somewhat embarrassed when it needs to use force against dissidents, unlike the Putins and Trumps who are eager to be seen as bullies.
The CCP's worst disaster happened because yes-men who wanted to please Mao deluded him into thinking that China had achieved agricultural miracles. An ASI seems less likely to need to lie to leaders. It's more likely to either depose them or be clearly loyal.
ASI will cure many delusions. The CCP will be a very different political force if it has been cured of 99% of its delusions.
There's some risk that either the CCP or half the voters in the US will develop LLM psychosis. I'm predicting that that risk will be low enough that it shouldn't dominate our ASI strategy. I don't think I have a strong enough argument here to persuade skeptics.
I also predict that ASI will raise new issues which will significantly distract voters and politicians from culture wars and from the conflict between capitalism and communism.
Conclusion
This is not an exhaustive list of possibilities.
I've probably overlooked some plausible scenarios in which there's a clear benefit to the US getting ASI before China does. But I hope that I've helped you see that such scenarios are not a clear-cut default outcome, and that the benefits of getting ASI first aren't unusually important compared to the benefits of ensuring that ASI has good effects on whoever develops it.
The possibility of ASI killing us all was not, by itself, sufficient to persuade me to feel neutral about scenarios where China builds ASI before the US.
This post has described the kind of analysis that has led me to have only a minor preference for a US entity to be the first to build ASI.
It seems much more important to influence which of these scenarios we end up in.
P.S. This post was not influenced by Red Heart, even though there's some overlap in the substance - I wrote a lot of the post before reading that book.