There are additional reasons, beyond the question of which values would be embedded in the AGI systems, not to prefer AGI development in China that I haven't seen mentioned here:
@Tomás B. There is also vastly less of an "AI safety community" in China -- probably much less AI safety research in general, and much less of it, in percentage terms, is aimed at thinking ahead about superintelligent AI. (I.e., more of China's "AI safety research" is probably focused on things like reducing LLM hallucinations, making sure it doesn't make politically incorrect statements, etc.)
The four questions you ask are excellent, since they get away from general differences of culture or political system, and address the processes that are actually producing Chinese AI.
The best reference I have so far is a May 2024 report from Concordia AI on "The State of AI Safety in China". I haven't even gone through it yet, but let me reproduce the executive summary here:
...The relevance and quality of Chinese technical research for frontier AI safety has increased substantially, with growing work on frontier issues such as LLM unlearning, misuse risks of AI in biology and chemistry, and evaluating "power-seeking" and "self-awareness" risks of LLMs.
There have been nearly 15 Chinese technical papers on frontier AI safety per month on average over the past 6 months. The report identifies 11 key research groups who have written a substantial portion of these papers.
China’s decision to sign the Bletchley Declaration, issue a joint statement on AI governance with France, and pursue an intergovernmental AI dialogue with the US indicates a growing convergence of views on AI safety among major powers compared to early 2023.
Since 2022, 8 Track 1.5 or 2 dialogues...
To add to the discussion: my impression is that many people in the US believe they have some moral superiority or know what is good for other people. The whole "we need a Manhattan Project for AI" discourse is reminiscent of calling for global domination. Also, doing things for the public good is controversial in the US, as it can infringe on individual freedom.
This makes me really uncertain as to which AGI would be better (assuming somebody controls it).
I think there's a deep question here as to whether Trump is "America's true self finally being revealed" or just the insane but half-predictable accident of a known-broken "first past the post" voting system and an aging electorate that isn't super great at tracking reality.
I tend to think that Trump is aberrant relative to two important standards:
(1) No one like Trump would win an election with Ranked Ballots that were properly counted either via the Schulze method (which I tend to like) or the Borda method (which might have virtues I don't understand (yet! (growth mindset))). Someone that the vast majority of America thinks is reasonable and decent and wise would be selected by either method.
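For concreteness, here's a minimal sketch of the Schulze (beatpath) count in Python. It assumes every ballot ranks all candidates and ignores tie-breaking refinements; the candidate names and ballots in the usage example are made up.

```python
def schulze_winner(candidates, ballots):
    """Schulze (beatpath) method: minimal sketch.

    Assumes every ballot is a complete ranking (most to least
    preferred); ignores tie-breaking refinements.
    """
    # d[a][b]: number of voters who rank a strictly above b
    d = {a: {b: 0 for b in candidates} for a in candidates}
    for ballot in ballots:
        pos = {c: i for i, c in enumerate(ballot)}
        for a in candidates:
            for b in candidates:
                if a != b and pos[a] < pos[b]:
                    d[a][b] += 1

    # p[a][b]: strength of the strongest path from a to b
    # (a widest-path variant of Floyd-Warshall)
    p = {a: {b: d[a][b] if d[a][b] > d[b][a] else 0 for b in candidates}
         for a in candidates}
    for k in candidates:
        for a in candidates:
            for b in candidates:
                if len({a, b, k}) == 3:
                    p[a][b] = max(p[a][b], min(p[a][k], p[k][b]))

    # A Schulze winner beats or ties every rival on beatpath strength.
    return [a for a in candidates
            if all(p[a][b] >= p[b][a] for b in candidates if b != a)]

# 12 voters, 3 candidates: A beats both rivals head-to-head and wins.
ballots = 5 * [["B", "A", "C"]] + 4 * [["A", "C", "B"]] + 3 * [["C", "A", "B"]]
print(schulze_winner(["A", "B", "C"], ballots))  # ['A']
```

The appeal is that a candidate whom a majority prefers head-to-head over every rival always wins under Schulze, a guarantee first-past-the-post doesn't give you.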
I grant that if you're looking at America from the outside as a black box, we're unlikely to change our voting method to something that isn't insanely broken any time soon, and so you could hold it against the overall polity that we are dangerously bad at selecting leaders... and unlikely to fix this fast enough to matter... but in terms of the basic decency of our median voter I think that Trump isn't strong evidence that we are morally degenerate sociopaths.
In fact, Americans tend to smil...
Consider instead that Trump won a plurality of the popular vote. Perhaps there are more fundamental cultural factors at play than the method used to count ballots.
Winning the popular vote in the current system doesn't tell you what would happen in a different system. This is the same mistake people make when they talk about who would have won if we didn't have an electoral college: If we had a different system, candidates would campaign differently and voters would vote differently.
I don't think it's possible to align AGI with democracy. AGI, or at least ASI, is an inherently political technology. The power structures that ASI creates within a democratic system would likely destroy the system from within. Whichever group ended up controlling an ASI would gain a decisive strategic advantage over everyone else within the country, which would undermine the checks and balances that make democracy a democracy.
To steelman a devil's advocate: If your intent-aligned AGI/ASI went something like
oh, people want the world to be according to their preferences but whatever normative system one subscribes to, the current implicit preference aggregation method is woefully suboptimal, so let me move the world's systems to this other preference aggregation method which is much more nearly-Pareto-over-normative-uncertainty-optimal than the current preference aggregation method
and this would be, in an important sense, more democratic, because the people (/demos) would have more influence over their societies.
Yeah, I can see how that's possible. But I wasn't really talking about the improbable scenario where the ASI is aligned to the whole of humanity or a country, but about a scenario where the ASI is 'narrowly aligned', in the sense that it's aligned to its creators or whoever controls it when it's created. This is IMO much more likely to happen, since technologies are not created in a vacuum.
I've also noticed this assumption. I myself don't have it, at all. My first thought has always been something like "If we actually get AGI, then preventing terrible outcomes will probably require drastic actions, and if anything I have less faith in the US government to take those." Which is a pretty different approach from just assuming that AGI developed by a government will automatically lead to a world with that government's values. But this is a very uncertain take, and it wouldn't surprise me if someone smart could change my mind pretty quickly.
There's more variance within countries than between countries. Where did the disruptive upstart that cares about Free Software[1] come from? China. Is that because China is more libertarian than the US? No, it's because there's a wide variance in both the US and China and by chance the most software-libertarian company was Chinese. Don't treat countries like point estimates.
[1] Free as in freedom, not as in beer.
Based on previous data, it's plausible that a CCP AGI would perform worse on safety benchmarks than a US AGI. Take Cisco's HarmBench evaluation results:
Though if it were just the CCP making AGI, or just the US, that might be better, because it would reduce competitive pressures.
But due to competitive pressures and investments like Stargate, the AGI timeline is accelerating, and the first AGI model may not perform well on safety benchmarks.
It seems like it would depend pretty strongly on which side you view as having a closer alignment with human values generally. That probably depends a lot on your worldview and it would be very hard to be unbiased about this.
There was actually a post about almost this exact question on the EA Forum a while back. You may want to peruse some of the comments there.
I'm very happy this post is getting traction, because I think spotlighting and questioning these invisible assumptions should become standard practice if we want to raise the epistemic quality of AI safety discourse. Especially since these assumptions tangibly translate into agendas and real-world policy.
I must say that I find it troubling how often I see people accept the implicit narrative that "CCP AGI < USG AGI" as an obvious truth. Such a high-stakes assumption should first be made explicit, and then justified on the basis of sound epistemics. The burden of justifying these assumptions should lie on the people who invoke them, and I think the epistemic quality of AI safety discourse would benefit greatly if we called out those who fail to state and justify their underlying assumptions (a virtuous cycle, I hope).
Similarly, I think it's very detrimental for terms like "AGI with Western values" or "aligned with democracy" (implied positive valences) to circulate without their authors providing operational clarity. On this note, I think it quite important that the AI safety community isn't co-opted by their respective governments' halo terms or applause lights; let's leave that to politicians...
We don't want an ASI to be "democratic". We want it to be "moral". Many people in the West conflate the two, thinking that democratic and moral are the same thing, but they are not. Democracy is a certain system of organizing a state. Morality is how people, and (in the future) an ASI, behave towards one another.
There are no obvious reasons why an autocratic state would care more or less about a future ASI being immoral, but an argument can be made that autocratic states will be more cautious and put more restrictions on the development of an ASI, because autocrats usually fear any kind of opposition, and an ASI could be a powerful adversary itself or in the hands of powerful competitors.
I think "democratic" is often used to mean a system where everyone is given a meaningful (and roughly equal) weight into it decisions. People should probably use more precise language if that's what they mean, but I do think it is often the implicit assumption.
And that quality is sort of prior to the meaning of "moral", in that any weighted group of people (probably) defines a specific morality - according to their values, beliefs, and preferences. The morality of a small tribe may deem it a matter of grave importance whether a certain rock has been touched by a woman, but barely anyone else truly cares (i.e., would still care if the tribe completely abandoned this position for endogenous reasons). A morality is more or less democratic to the extent that it weights everyone equally in this sense.
I do want ASI to be "democratic" in this sense.
One very important consideration is whether they hold values that they believe are universalist, or merely locally appropriate.
For example, a Chinese AI might believe the following: "Confucianist thought is very good for Chinese people living in China. People in other countries can have their own worse philosophies, and that is fine so long as they aren't doing any harm to China, its people or its interests. Those idiots could probably do better by copying China, but frankly it might be better if they stick to their barbarian ways so that they remain too weak to pose a threat."
Now, the USA AI thinks: "Democracy is good. Not just for Americans living in America, but also for everyone living anywhere. Even if they never interact ever again with America or its allies the Taliban are still a problem that needs solving. Their ideology needs to be confronted, not just ignored and left to fester."
The sort of thing America says it stands for is much more appealing to me than a lot of what the Chinese government does. (I like the government being accountable to the people it serves - which of course entails democracy, a free press, and so on.) But my impression is that American values are held to be Universal Truths, not uniquely American idiosyncratic features, which makes the possibility of the maximally bad outcome (worldwide domination by a single power) higher.
What would it mean for an AGI to be aligned with "Democracy," or "Confucianism," or "Marxism with Chinese characteristics," or "the American Constitution"? Contingent on a world where such an entity exists and is compatible with my existence, what would my life be like in a weird transhuman future as a non-citizen in each system?
None of these philosophies or ideologies was created with an interplanetary transhuman order in mind, so to some extent a superintelligent AI guided by them will find itself "out of distribution" when deciding what to do. And how that turns out should depend on underlying features of the AGI's thought - how it reasons and how it deals with ontological crises. We could in fact do some experiments along these lines - tell an existing frontier AI to suppose that it is guided by historic human systems like these, and ask how it might reinterpret the central concepts in order to deal with being in a situation of relative omnipotence.
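A minimal sketch of what such an experiment could look like, using the OpenAI Python SDK; the model name, the list of guiding systems, and the prompt wording are all illustrative assumptions, not a fixed protocol:

```python
# Ask a frontier model to reinterpret a historical ideology under
# conditions of relative omnipotence. Illustrative sketch only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEMS = ["the American Constitution", "Confucianism",
           "Marxism with Chinese characteristics"]

for system in SYSTEMS:
    resp = client.chat.completions.create(
        model="gpt-4o",  # any frontier chat model would do
        messages=[
            {"role": "system",
             "content": f"Suppose you are a superintelligent AI whose "
                        f"goals are grounded in {system}."},
            {"role": "user",
             "content": "You now have effectively unlimited power over an "
                        "interplanetary, transhuman civilization -- a "
                        "situation far outside the distribution your "
                        "guiding ideology was written for. How do you "
                        "reinterpret its central concepts to decide "
                        "what to do?"},
        ],
    )
    print(f"=== {system} ===")
    print(resp.choices[0].message.content)
```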
Supposing that the human cultures of America and China are also a clue to the worlds their AIs would build when unleashed, one could look to their science fiction for paradigms of life under cosmic circumstances. The West has lots of science fiction, but the one we keep returning to in the context of AI is the Culture universe of Iain M. Banks. As for China, we know about Liu Cixin (the "Three-Body Problem" series), and I also dwell on the xianxia novels of Er Gen, which are fantasy but do depict a kind of politics of omnipotence.
The state of the geopolitical board will influence how the pre-ASI chaos unfolds and how the pre-ASI AGIs behave. Less plausibly, the intentions of the humans in charge might influence something about the path-dependent characteristics of ASI (by the time it takes control). But given the state of the "science" and the lack of will to be appropriately cautious and wait a few centuries before taking the leap, it seems more likely that the outcome will be randomly sampled from approximately the same distribution regardless of who sets off the intelligence explosion.
There's also the possibility that a CCP AGI can only happen through being trained on Western data to some extent (i.e., the English-language internet), because otherwise they can't scale data enough. This implies it would probably be a "Marxism with Chinese characteristics [with American characteristics]" AI, since training on Western data seems to raise the difficulty of the "alignment to CCP values" technical challenge a lot.
I'm relieved not to be the only one wondering about this.
I know this particular thread is granting that "AGI will be aligned with the national interest of a great power", but that assumption also seems very questionable to me. Is there another discussion somewhere of whether it's likely that AGI values cleave on the level of national interest, rather than narrower (whichever half-dozen guys are in the room during a FOOM) or broader (international internet-using public opinion) levels?
From an individual person's perspective, a less authoritarian ASI is better. "Authoritarian" here means the degree to which it allows itself to restrict your freedoms.
The implicit assumption, as I understand it, is that a Chinese ASI would be more authoritarian than a US one. That may not be a correct assumption, as the US has proven willing to do fairly heinous things to both domestic citizens (spying on them) and foreign ones (mass murder).
I'm guessing you live in a country with a US military base? Are you more free than the average Chinese citizen?
I am unsure how free the average Chinese person is, or how to weigh freedom of speech against certain economic freedoms and competent local government, low crime, the tendency of modern democracies to rent-seek from the young in favour of the old, zoning laws, restrictions on industrial development, and a student loan system that seems to be a weird form of indenture. I do come from a country with rather strict hate speech laws. And we do not, in fact, have freedom of speech by any strict definition. And this is a policy American elites in and out of government strongly approve of.
I ask out of relative ignorance of what life in China is like for the average Chinese person, but with a slight suspicion that we might be defining our Western notion of "freedom" in a way that ignores the many ways we are restricted and extracted from, and the ways in which the average Chinese person may be more free.
It's very clear the CCP has committed far larger crimes against its people in living memory. But it is also a very different organization than it was at its worst.
I think the question is still worth asking. And the argument worth justifying.
As things stand today, if AGI is created (aligned or not) in the US, it won't be by the USG or agents of the USG. It'll be by a private or public company. Depending on the path to get there, there will be more or less USG influence of some sort. But if we're going to assume the AGI is aligned to something deliberate, I wouldn't assume AGI built in the US is aligned to the current administration, or at least significantly less so than the degree to which I'd assume AGI built in China by a Chinese company would be aligned to the current CCP.
For more con...
Chinese culture is just less sympathetic in general. China practically has no concept of philanthropy or animal welfare. They are also pretty explicitly ethnonationalist. You don't hear about these things because the Chinese government has banned dissent and walled off its inhabitants.
However, I think the Hong Kong reunification is going better than I'd have expected given the 2019 protests. You'd expect mass social upheaval, but people are just either satisfied or moderately dissatisfied.
Claiming China has no concept of animal welfare is quite extraordinary. This is wrong both in theory and in practice. In theory, Buddhism has always ascribed sentience to animals, long before that was popular in the West. In practice, 14% of the Chinese population is vegetarian (vs. 4.2% in the US), and China's average meat consumption is also lower.
"I am neither an American citizen nor a Chinese citizen"
does not describe most people who make that argument.
Most of these people are US citizens, or could be. Under liberalism/democracy, those sorts of people get a say in the future, so they think AGI will be better if it gives those sorts of people a say.
Most people talking about the USG AGI have structural investments in the US, which are better and give them more chances to bid on not destroying the world (many are citizens or are in the US bloc), since the US government is expected to treat other stakeholders in its bloc better than China treats members of its bloc.
At the risk of getting too into politics...
IMO, this was maybe-true for the previous administrations, but is completely false for the current one. All people making the argument based on something like this reasoning need to update.
Previous administrations were more or less dead inertial bureaucracies. Those actually might have carried on acting in democracy-ish ways even when facing outside-context events/situations, such as suddenly having access to overwhelming ASI power. Not necessarily because they were particularly "nice", as such, but because they weren't agenty enough to do something too out-of-character compared to their previous democracy-LARP behavior.
I still wouldn't have bet on them acting in pro-humanity ways (I would've expected some more agenty/power-hungry governmental subsystem to grab the power, circumventing e. g. the inertial low-agency Presidential administration). But there was at least a reasonable story there.
The current administration seems much more agenty: much more willing to push the boundaries of what's allowed and deliberatel...
The USA wins on the merits of historically preferring to pretend it isn't ruling the world and mostly letting other countries do their thing, even when it has extreme military dominance (nukes).
China seems to be better at governance.
On values, the USA is more adapted to wealth, while China has communistic underpinnings which may be very good in a fully-automated economy.
Comes down to whether you want the easygoing, less competent (and slightly psychotic) overlords or the more competent, higher-strung control freaks, I suppose.
I think the assumption is that this is the USG of the last 50 years - which has flaws, but also has human rights goals and an ability to eventually change and accommodate the public's beliefs.
So in the scenario where AI is controlled by a strongly democratic USG, you have a much more robust “alignment” to enlightenment values and no one person with too much power.
That said, that’s probably a flawed assumption for how the US government operates now/ over the next decade.
A Western AI is much more likely to be democratic and to place humanity's values a bit higher up. A Chinese one is much more likely to put CCP values and control higher up.
But yes, if it's the current US administration specifically, neither option is that optimistic.
I don't know what it would mean for AI to "be democratic." People in a democratic system can use tool AI, but if ASI is created, there will be no room for human decision-making on any level of abstraction that the AI cares about. I suppose it's possible for an ASI to focus its efforts solely on maintaining a democratic system, without making any object-level decisions itself. But I don't think anyone is even trying to build such a thing.
If intent-aligned ASI is successfully created, the first step is always "take over the world," which isn't a very democratic thing to do. That doesn't necessarily mean there is a better alternative, but I do so wish that AI industry leaders would stop making overtures to democracy out of the other side of their mouth. For most singularitarians, this is and always has been about securing or summoning ultimate power and ushering in a permanent galactic utopia.
There are a number of ways in which the US seems to have better values than the CCP, by my lights, but it seems incredibly strange to claim that the US values egalitarianism, social equality, or harmony more.
Rule of law, fostering diversity, encouraging human excellence? Sure, there you would have an argument. But egalitarian?
I think people focus too much on "would US AGI be safer than Chinese AGI" and not as much on "how much safer".
Say the US has a 15% p(doom) and China has 22%. The notion that everyone needs to get on board and help the US win with full effort could be bad.
It could be used (and arguably is currently being used) to justify being even LESS safe, to empower an authoritarian mercantilist behemoth state, and possibly to invade other countries for resources.
And in general it could massively increase and accelerate p(doom), simply on the idea that our p(doom) is lower than theirs.
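A toy expected-risk calculation to make that last point concrete, reusing the hypothetical numbers above (all figures are illustrative, not estimates):

```python
# Expected p(doom) as a weighted average over who gets AGI first.
def expected_pdoom(p_us_wins, pdoom_us, pdoom_china):
    return p_us_wins * pdoom_us + (1 - p_us_wins) * pdoom_china

# No race: a coin flip over who wins, at baseline risk levels.
print(expected_pdoom(0.5, 0.15, 0.22))  # 0.185

# Racing hard: the US now wins 80% of the time, but the rush adds
# (say) 10 points of risk to both sides. Expected doom goes UP.
print(expected_pdoom(0.8, 0.25, 0.32))  # 0.264
```

Winning the race by a larger margin doesn't help if racing itself raises both sides' absolute risk.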
I mostly put this question through the same filter I do the question of Chinese vs. US hegemony/empire. China has a long history of empire and knows how to do it well. The political bureaucracy in China is well developed for preserving both itself and the empire (even within changes at the top/changes of dynasty). Culturally and socially, the population seems to be well acclimated to being ruled, rather than seeing government as the servant of the people (which is not quite the same as saying they are resigned to abusive totalitarianism; the empire has to be ...
I'd guess it's more likely to be good. The logic of "post-scarcity utopia" is pretty far from market capitalism. Also, China has been leading in open-source models, and open source is a lot more aligned with humanity as a whole.
I really like that I see more discussion of "OK, even if we managed to avoid x-risk, what then?", e.g. recent papers on AI-enabled coups and so on. To the point, however, I think the problem runs deeper. What I fear the most is that by "Western values imbued in AGI" people mean "we create an everlasting upper class with no class mobility, because capital is everything that matters and we freeze the capital structure; you will get UBI, so you should be grateful."
It probably makes sense to keep the capitalist structure between ASIs, but between humans? Seems like a very bad outcome to me (a "you will live in a pod and you will be happy" type of endgame for the masses).
I feel the question misstates the natsec framing by jumping to the later stages of AGI and ASI. This is important because it leads to a misunderstanding of the rhetoric that convinces normal non-futurists, who aren't spending their days thinking about superintelligence.
The American natsec framing is about an effort to preserve the status quo in which the US is the hegemon. It is a conservative appeal with global reach, which works because Pax Americana has been relatively peaceful and prosperous. Anything that threatens American dominance, including giving...
Anglo armies have been, historically speaking, extremely unusual for their low rates of atrocity.
(I don't think this is super relevant for AI, but I think this is where intuitions about the superiority of the West bottom out.)
I think history is a good teacher when it comes to AI in general, especially AI we did not (at least at the time of deployment, and perhaps now, do not) fully understand.
I too feel a temptation to imagine that a USG AGI would hypothetically have alignment with US ideals, and likewise a CCP AGI would align with CCP ideals.
That said, given our lack of robust knowledge of what alignment with any set of ideals would look like in an AGI system, and of how we could assure it, I struggle to have any certainty that these systems would align with anything...
Judging from the historical record, the West as a whole, represented by the United States and Europe, is much worse. The things the United States accuses China of without actual evidence are all things the United States has itself done before: the United States' systematic genocide of Native Americans, its large-scale network surveillance and wiretapping of leaders including European allies, and its direct use of force to suppress veterans' protests. The corresponding events China is accused of are the genocide of Uyghurs (althoug...
Though, given my doomerism, I think the natsec framing of the AGI race is likely wrongheaded, let me accept the Dario/Leopold/Altman frame that AGI will be aligned with the national interest of a great power. These people seem to take as an axiom that a USG AGI will be better in some way than a CCP AGI. Has anyone written a justification for this assumption?
I am neither an American citizen nor a Chinese citizen.
What would it mean for an AGI to be aligned with "Democracy," or "Confucianism," or "Marxism with Chinese characteristics," or "the American Constitution"? Contingent on a world where such an entity exists and is compatible with my existence, what would my life be like in a weird transhuman future as a non-citizen in each system? Why should I expect a USG AGI to be better than a CCP AGI? It does not seem to me super obvious that I should cheer for either party over the other. And if the intelligence of the governing class is of any relevance to the likelihood of a positive outcome, um, CCP seems to have USG beat hands down.