Though, given my doomerism, I think the natsec framing of the AGI race is likely wrongheaded, let me accept for the sake of argument the Dario/Leopold/Altman frame that AGI will be aligned with the national interest of a great power. These people seem to take it as axiomatic that a USG AGI will be better in some way than a CCP AGI. Has anyone written a justification for this assumption?

I am neither an American citizen nor a Chinese citizen.

What would it mean for an AGI to be aligned with "Democracy," or "Confucianism," or "Marxism with Chinese characteristics," or "the American Constitution"? Contingent on a world where such an entity exists and is compatible with my existence, what would my life be like in a weird transhuman future as a non-citizen in each system? Why should I expect a USG AGI to be better than a CCP AGI? It does not seem to me super obvious that I should cheer for either party over the other. And if the intelligence of the governing class is of any relevance to the likelihood of a positive outcome, um, the CCP seems to have the USG beat hands down.

84 comments

There are some additional reasons to not prefer AGI development in China, beyond the question of which values would be embedded in the AGI systems, that I haven't seen mentioned here:

  • Systemic opacity, state-driven censorship, and state control of the media mean AGI development under direct or indirect CCP control would probably be less transparent than in the US, and the world may be less likely to learn about warning shots, wrongheaded decisions, reckless behaviour, etc. True, there was the Manhattan Project, but that was quite long ago; recent examples like the CCP's suppression of information related to the origins of COVID feel more salient and relevant.
  • There are more checks and balances in the US than in China, which you may think could, e.g., positively influence regulation; or, if there's a government project, help incentivise responsible decisions there; or, if someone attempts to concentrate power using some early AGI, stop that from happening. E.g., in the West voters have some degree of influence over the government, there's the free press, the judiciary, an ecosystem of nonprofits, and so on. In China, the CCP doesn't have total control, but much more so than Western gov
... (read more)

@Tomás B.  There is also vastly less of an "AI safety community" in China -- probably much less AI safety research in general, and much less of it, in percentage terms, is aimed at thinking ahead about superintelligent AI. (I.e., more of China's "AI safety research" is probably focused on things like reducing LLM hallucinations, making sure models don't make politically incorrect statements, etc.)

  • Where are the Chinese equivalents of the American and British AISI government departments? Organizations like METR, Epoch, Forethought, MIRI, et cetera?
  • Who are some notable Chinese intellectuals / academics / scientists (along the lines of Yoshua Bengio or Geoffrey Hinton) who have made any public statements about the danger of potential AI x-risks?
  • Have any Chinese labs published "responsible scaling plans" or tiers of "AI Safety Levels" as detailed as those from OpenAI, DeepMind, or Anthropic? Or discussed how they're planning to approach the challenge of aligning superintelligence?
  • Have workers at any Chinese AI lab resigned in protest of poor AI safety policies (like the various people who've left OpenAI over the years), or resisted the militarization of AI technology
... (read more)

The four questions you ask are excellent, since they get away from general differences of culture or political system, and address the processes that are actually producing Chinese AI. 

The best reference I have so far is a May 2024 report from Concordia AI on "The State of AI Safety in China". I haven't even gone through it yet, but let me reproduce the executive summary here: 

The relevance and quality of Chinese technical research for frontier AI safety has increased substantially, with growing work on frontier issues such as LLM unlearning, misuse risks of AI in biology and chemistry, and evaluating "power-seeking" and "self-awareness" risks of LLMs. 

There have been nearly 15 Chinese technical papers on frontier AI safety per month on average over the past 6 months. The report identifies 11 key research groups who have written a substantial portion of these papers. 

China’s decision to sign the Bletchley Declaration, issue a joint statement on AI governance with France, and pursue an intergovernmental AI dialogue with the US indicates a growing convergence of views on AI safety among major powers compared to early 2023. 

Since 2022, 8 Track 1.5 or 2 dialogu

... (read more)
sammyboiz
Speaking to post-labor futures, I feel that a CCP AGI would be more likely to redistribute resources in an equitable manner compared to a US one.

Over the last 50 years or so, productivity growth in the US has translated into the ultra-wealthy growing in wealth while wages for the working class have stagnated. Coupled with the growing oligarchy in the US, I don't expect the USG to have the interest of the people first and foremost. If the USG has AGI, I expect that the trend of rising inequality will continue: billionaires will reap the benefits and the rest of the people will be economically powerless... at best surviving on UBI.

As for China, I think the CCP has been plagued less by corporate interests and power-seeking pressures. I don't know much about Xi and his administration, but I assume they are less corrupt and care more about their people. China has its capitalism under control, and I believe they are more likely to create a fully-automated luxury communism utopia than a hyper-capitalist hell.

As for lacking American free speech, I think equitable resource distribution is at least 100x more important. As long as the US stays staunchly capitalist, I fear they will not be able/willing to redistribute AGI abundance.
Jackson Wagner
I think when it comes to the question of "who's more likely to use AGI to build fully-automated luxury communism", there are actually a lot of competing considerations on both sides, and it's not nearly as clear as you make it out. Xi Jinping, the leader of the CCP, seems like kind of a mixed bag:

  • On the one hand, I agree with you that Xi does seem to be a true believer in some elements of the core socialist dream of equality, common dignity for everyone, and improved lives for ordinary people. Hence his "Common Prosperity" campaign to reduce inequality, anti-corruption drives, bragging (in an exaggerated but still-commendable way) about having eliminated extreme poverty, etc. Having a fundamentally humanist outlook and not being an obvious psychopath / destructive idiot / etc is of course very important, and always reflects well on people who meet that description.
  • On the other hand, as others have mentioned, the intense repression of Hong Kong, Tibet, and most of all Xinjiang does not bode super well if we are thinking "who seems like a benevolent guy in which to entrust the future of human civilization". In terms of scale and intensity, the extent of the anti-Uyghur police state in Xinjiang seems beyond anything that the USA has done to its own citizens.
  • More broadly, China generally seems to have less respect for individual freedoms, and instead positions itself as governing for the benefit of the majority. (Much harsher covid lockdowns are an example of this, as are its reduced freedom of speech, fewer regulations protecting the environment or private property, etc. Arguably benefits have included things like a faster pace of development, fewer covid deaths, etc.) This effect could cut both ways -- respect for individual freedoms is pretty important, but governing for the benefit of the majority is by definition gonna benefit most ordinary people if you do it well.
  • Your comment kind of assumes that China = socialist and socialism = more
ProgramCrafter
That's screened off by actual evidence, which is that top labs don't publish much no matter where they are, so I'd only agree with "equally opaque".

To add to the discussion, my impression is that many people in the US believe they have some moral superiority or know what is good for other people. The whole "we need a Manhattan Project for AI" discourse is reminiscent of calling for global domination. Also, doing things for the public good is controversial in the US, as it can infringe on individual freedom.

This makes me really uncertain as to which AGI would be better (assuming somebody controls it).

robo
This is true, and it strongly influences the ways Americans think about how to provide public goods to the rest of the world. But they're thinking about how to provide public goods to the rest of the world[1]. "America First" is controversial in American intellectual circles, whereas in my (limited) conversations in China, people are usually confused about what other sort of policy you would have.

  1. ^

    Disclosure: I'm American; I came of age in this era.

I think there's a deep question here as to whether Trump is "America's true self finally being revealed" or just the insane but half-predictable accident of a known-retarded "first past the post" voting system and an aging electorate that isn't super great at tracking reality.

I tend to think that Trump is aberrant relative to two important standards:

(1) No one like Trump would win an election with Ranked Ballots that were properly counted either via the Schulze method (which I tend to like) or the Borda method (which might have virtues I don't understand (yet! (growth mindset))). Someone that the vast majority of America thinks is reasonable and decent and wise would be selected by either method.
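
For concreteness, here's a minimal sketch of the Borda method, the simpler of the two (the ballots below are hypothetical, chosen to show a broadly acceptable candidate beating a plurality winner):

```python
from collections import defaultdict

def borda_winner(ballots):
    """Each ballot lists candidates from most- to least-preferred.
    With m candidates, the k-th-ranked candidate earns m - 1 - k points."""
    scores = defaultdict(int)
    for ballot in ballots:
        m = len(ballot)
        for rank, candidate in enumerate(ballot):
            scores[candidate] += m - 1 - rank
    return max(scores, key=scores.get)

# Hypothetical electorate: A has the most first-place votes (and would win
# under first-past-the-post), but broadly acceptable B wins under Borda.
ballots = ([["A", "B", "C"]] * 4 +   # A's base ranks B second
           [["C", "B", "A"]] * 3 +   # C's base also ranks B second
           [["B", "C", "A"]] * 2)    # B's smaller base
print(borda_winner(ballots))  # -> B (scores: B=11, A=8, C=8)
```

(The Schulze method additionally runs pairwise contests and finds strongest beatpaths through the margin graph, which is why it isn't sketched here.)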

I grant that if you're looking at America from the outside as a black box, we're unlikely to change our voting method to something that isn't insanely broken any time soon, and so you could hold it against the overall polity that we are dangerously bad at selecting leaders... and unlikely to fix this fast enough to matter... but in terms of the basic decency of our median voter I think that Trump isn't strong evidence that we are morally degenerate sociopaths.

In fact, Americans tend to smil... (read more)

Alexander Howell
I'm confused why electoral systems seem to be at the forefront of your thinking about the relative pros and cons of US or Chinese domination of the future. Electoral systems can and do matter, but consider that all of the good stuff that happened in Anglo-America happened under first past the post as well, and all the bad stuff that happened elsewhere happened under whatever system they used (the Nazis came to power under proportional representation!). Consider instead that Trump was elected with over 50% of the popular vote. Perhaps there are more fundamental cultural factors at play than the method used to count ballots.

Consider instead that Trump was elected with over 50% of the popular vote. Perhaps there are more fundamental cultural factors at play than the method used to count ballots.

Winning the popular vote in the current system doesn't tell you what would happen in a different system. This is the same mistake people make when they talk about who would have won if we didn't have an electoral college: If we had a different system, candidates would campaign differently and voters would vote differently.

jmh
I always find the use of "X% of the vote" in US elections to make some general point about the overall acceptability or representativeness of the general public problematic. I agree it's a true statement, but it leaves out the important aspect of turnout. I have to wonder if the USA, in particular, would not be quite as divided if all the reporting provided the percentage of the eligible voting population rather than the percentage of votes cast. I think there is a big problem with just ignoring the non-vote information that is present (or expecting anyone to look it up and make the adjustments on their own). But I agree, I'm not too sure just where electoral systems fall into this question of AGI/ASI first emerging under either the USA or the CCP.
Pazzaz
But that's wrong. Trump received 49.8% of the votes.
Alexander Howell
Yes, my mistake. I meant Trump votes > Harris votes and forgot about third parties. On the other hand, 49.8% vs. 50%+1 feels semi-trivial when compared to, say, the UK, where Labour received 33.7% of the vote.
Mitchell_Porter
I can imagine an argument analogous to Eliezer's old graphic illustrating that it's a mistake to think of a superintelligence as Einstein in a box. I'm referring to the graphic where you have a line running from left to right; on the left you have chimp, ordinary person, and Einstein all clustered together, and then far away on the other side, "superintelligence", the point being that superintelligence far transcends all three.

In the same way, the nature of the world when you have a power that great is so different that the differences among all human political systems diminish to almost nothing by comparison; they are just trivial reorderings of power relations among beings so puny as to be almost powerless. Neither the Chinese nor the American system is built to include intelligent agents with the power of a god; that's "out of distribution" for both the Communist Manifesto and the Federalist Papers.

Because of that, I find it genuinely difficult to infer from the nature of the political system what the likely character of a superintelligence interested in humanity could be. I feel like contingencies of culture and individual psychology could end up being more important. So long as you have elements of humaneness and philosophical reflection in a culture, maybe you have a chance of human-friendly superintelligence emerging.
Afterimage
I notice you're talking a lot about the values of the American people but only about what the leaders of China are doing or would do. If you just compare both leaders' likelihood of enacting a world government, once again there is no clear winner.

I'm interpreting this as "intelligence is irrelevant if the CCP doesn't care about you." Once again, you need to show that Trump cares more about us (citizens of the world) than the CCP does. As a non-American, it is not clear to me that he does.

I think the best argument for America over China would be the idea that Trump will be replaced in under 4 years with someone much more ethical.
JenniferRM
Hello anonymous account that joined 2 months ago and might be a bot! I will respond to you extensively and in good faith! <3

Yes, I agree with your summary of my focus... Indeed, I think "focusing on the people and their culture" is consistent with a liberal society, freedom of conscience, etc, which are part of the American cultural package that restrains Trump, whose even-most-loyal minions have a "liberal judeo-christian constitutional cultural package" installed in their emotional settings, based on generations of familial cultures living in a free society with rule of law.

By contrast, "focusing on the leadership" is in fact consistent when reasoning about China, which has only ever had "something like a Liberal Rights-Respecting Democratic Republic" for a brief period from 1912 to 1949 and is currently being oppressed by an unelected totalitarian regime.

I'm not saying that Chinese people are spiritually or genetically incapable of caring about fairness and predictable leadership and freedom and wanting to engage in responsible self-rule and so on (Taiwan, for example, has many ethnically Chinese people, who speak a Chinese dialect, and had ancestors from China, and who hold elections, and have rule of law, and, indeed, from a distance, seems better run than America). But for the last ~76 years, mainland China has raised human people whose cultural and institutional and moral vibe has been "power does as power wills and I should submit to that power".

And for the thousands of years before 1912 it was one Emperor after another, with brief periods of violence, where winning the violent struggle explicitly conferred legitimacy. There was no debate. There was no justice. There was only murdering one's political enemies better and faster than one could be murdered in pre-emptive response, and then long periods of feudal authoritarian rule by the best murderer's gang of murderers being submitted to by cowardly peasants. That's what the feudal system was everywher
Mitchell_Porter
Your comment has made me think rather hard on the nature of China and America. The two countries definitely have different political philosophies. On the question of how to avoid dictatorship, you could say that the American system relies on representation of the individual via the vote, whereas the Chinese system relies on representation of the masses via the party. If an American leader becomes an unpopular dictator, American individuals will vote them out; if a Chinese leader becomes an unpopular dictator, the Chinese masses will force the party back on track.

Even before these modern political philosophies, the old world recognized that popular discontent could be justified. That's the other side of the mandate of heaven: when a ruler is oppressive, the mandate is withdrawn, and revolt is justified. Power in the world of monarchs and emperors was not just about who's the better killer; there was a moral dimension, just as democratic elections are not just a matter of who has the most donors and the best public relations.

Returning to the present and going into more detail: America is, let's say, a constitutional democratic republic in which a party system emerged. There's a tension between the democratic aspect (will of the people) and the republican aspect (rights of the individual), which crystallized into an opposition found in the very names of the two main parties; though in the Obama-Trump era, the ideologies of the two parties evolved into transnational progressivism and populist nationalism.

These two ideologies had a different attitude to the unipolar world-system that America acquired, first by inheriting the oceans from the British empire, and then by outlasting the Russian communist alternative to liberal democracy in the ideological Cold War. For about two decades, the world system was one of negotiated capitalist trade among sovereign nations, with America as the "world police" and also a promoter of universal democracy. In the 2010s, this bro
Afterimage
Thanks for the reply; you'll be happy to know I'm not a bot. I actually mostly agree with everything you wrote, so apologies if I don't reply as extensively as you have.

There's no doubt the CCP is oppressing the Chinese people. I've never used TikTok and never intend to (and I think it's being used as a propaganda machine). I agree that Americans have far more freedom of speech and company freedom than people in China. I even think it's quite clear that Americans will be better off with Americans winning the AI race.

The reason I am cautious boils down to believing that as AI capabilities get close to ASI or powerful AI, governments (both US and Chinese) will step in and basically take control of the projects. Imagine if the nuclear bomb had first been developed by a private company; they would have gotten no say in how it was used. This would be harder in the US than in China, but it would seem naive to assume it can't be done.

If this powerful AI is able to be steered by these governments, then when imagining Trump's decisions vs. Xi's in this situation, it seems quite negative either way, and I'm having trouble seeing a positive outcome for the non-American, non-Chinese people.

On balance, America has the edge, but it's not a hopeful situation if powerful AI appears in the next 4 years. Like I said, I'm mostly concerned about the current leadership, not the American people's values.
sanyer

I don't think it's possible to align AGI with democracy. AGI, or at least ASI, is an inherently political technology. The power structures that ASI creates within a democratic system would likely destroy the system from within. Whichever group would end up controlling an ASI would get decisive strategic advantage over everyone else within the country, which would undermine the checks and balances that make democracy a democracy.

To steelman a devil's advocate: If your intent-aligned AGI/ASI went something like

oh, people want the world to be according to their preferences but whatever normative system one subscribes to, the current implicit preference aggregation method is woefully suboptimal, so let me move the world's systems to this other preference aggregation method which is much more nearly-Pareto-over-normative-uncertainty-optimal than the current preference aggregation method

and this would be, in an important sense, more democratic, because the people (/demos) would have more influence over their societies.

sanyer

Yeah, I can see why that's possible. But I wasn't really talking about the improbable scenario where ASI would be aligned to the whole of humanity/country, but about a scenario where ASI is 'narrowly aligned' in the sense that it's aligned to its creators/whoever controls it when it's created. This is IMO much more likely to happen since technologies are not created in a vacuum.

mako yass
I think it's pretty straightforward to define what it would mean to align AGI with what democracy is actually supposed to be (the aggregate of the preferences of the subjects, with an equal weighting for all), but hard to align it with the incredibly flawed American implementation of democracy, if that's what you mean? The American system cannot be said to represent democracy well. It's intensely majoritarian at best, feudal at worst (since the parties stopped having primaries), indirect and so prone to regulatory capture, inefficient and opaque. I really hope no one's taking it as their definitional example of democracy.
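
One concrete (though deliberately incomplete) formalization of that equal-weighting definition: with $n$ subjects whose preferences are expressed as utility functions $u_i$, an AGI aligned in this sense would pick

$$x^* = \arg\max_x \; \frac{1}{n}\sum_{i=1}^{n} u_i(x),$$

with the normalization of the $u_i$ across people left unspecified; that normalization is exactly where the difficulties raised in the replies re-enter.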
sanyer
No, I wasn't really talking about any specific implementation of democracy. My point was that, given the vast power that ASI grants to whoever controls it, the traditional checks and balances would be undermined. Now, regarding your point about aligning AGI with what democracy is actually supposed to be, I have two objections:

  1. To me, it's not clear at all why it would be straightforward to align AGI with some 'democratic ideal'. Arrow's impossibility theorem shows that no perfect voting system exists, so an AGI trying to implement the "perfect democracy" will eventually have to make value judgments about which democratic principles to prioritize (although I do think that an AGI could, in principle, help us find ways to improve upon our democracies).
  2. Even if aligning AGI with democracy were in principle possible, we need to look at the political reality the technology will emerge from. I don't think it's likely that whichever group ends up controlling AGI would willingly want to extend its alignment to other groups of people.
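
A standard worked example of the problem behind objection 1 (the classic Condorcet cycle): suppose three voters rank three options as

$$1:\; A \succ B \succ C, \qquad 2:\; B \succ C \succ A, \qquad 3:\; C \succ A \succ B.$$

Pairwise majorities then prefer $A$ over $B$ (2-1), $B$ over $C$ (2-1), and $C$ over $A$ (2-1): a cycle with no stable winner, so any rule that picks one must import some extra value judgment.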
mako yass
2: I think you're probably wrong about the political reality of the groups in question. To not share AGI with the public is a bright line. For most of the leading players it would require building a group of AI researchers within the company who are all implausibly willing to cross a line that says "this is straight up horrible, evil, illegal, and dangerous for you personally", while still being capable enough to lead the race; who also have implausible levels of mutual trust that no one would try to cut others out of the deal at the last second (despite the fact that the group's purpose is cutting most of humanity out of the deal), and that no one would back out and whistleblow; and it also requires an implausible level of secrecy to make sure state actors won't find out. It would require a probably actually impossible cultural discontinuity and organization structure.

It's more conceivable to me that a lone CEO might try to do it via a backdoor: something that mostly wasn't built on purpose, and that no one else in the company realizes could or would be used that way. But as soon as the conspiracy consists of more than one person...
sanyer
I think there are several potential paths by which AGI leads to authoritarianism. For example, consider AGI in military contexts: people might be unwilling to let it make very autonomous decisions, and on that basis, military leaders could justify making these systems loyal to them even in situations where it would be good for the AI to disobey orders.

Regarding your point about the requirement of building a group of AI researchers: these researchers could be AIs themselves. These AIs could be ordered to make future AI systems secretly loyal to the CEO. Consider e.g. this scenario (from Box 2 in Forethought's new paper):

Relatedly, I'm curious what you think of that paper and the different scenarios they present.
mako yass
1: The best approach to aggregating preferences doesn't involve voting systems. You could regard carefully controlling one's expression of one's utility function as being like a vote, and so subject to that blight of strategic voting: in general, people have an incentive to understate their preferences about scenarios they consider unlikely (and vice versa), which influences the probability of those outcomes in unpredictable ways and fouls their strategy; or to understate valuations when buying and overstate when selling. This may add up to a game that cannot be played well, a coordination problem, outcomes no one wanted.

But I don't think humans are all that guileful about how they express their utility function. Most of them have never actually expressed a utility function before; it's not easy to do, it's not like checking a box on a list of 20 names. People know it's a game that can barely be played even in ordinary friendships. People don't know how to lie strategically about their preferences to the youtube recommender system, let alone their neural lace.

I've also noticed this assumption. I myself don't have it, at all. My first thought has always been something like "If we actually get AGI then preventing terrible outcomes will probably require drastic actions, and if anything I have less faith in the US government to take those". Which is a pretty different approach from just assuming that AGI being developed by a government will automatically lead to a world with that government's values. But this is a very uncertain take and it wouldn't surprise me if someone smart could change my mind pretty quickly.

robo

There's more variance within countries than between countries.  Where did the disruptive upstart that cares about Free Software[1] come from?  China.  Is that because China is more libertarian than the US?  No, it's because there's a wide variance in both the US and China and by chance the most software-libertarian company was Chinese.  Don't treat countries like point estimates.

  1. ^

    Free as in freedom, not as in beer

robo
(Counterpoint: for big groups like bureaucracies, intra-country variances can average out.  I do think we can predict that a group of 100 random Americans writing an AI constitution would place more value on political self-determination and less on political unity than a similar group of Chinese.)
David J Higgs
Counter-counterpoint: big groups like bureaucracies are not composed of randomly selected individuals from their respective countries. I strongly doubt that, say, 100 randomly selected Google employees (the largest plausible bureaucracy that might potentially develop AGI in the very near-term future?) would answer extremely similarly to 100 randomly selected Americans.

Of course, in an only moderately near-term or median future, something like a Manhattan Project for AI could produce an AGI. This would still not be identical to 100 random Americans, but averaging across the US security & intelligence apparatus, the current political-facing portion of the US executive administration, and the leadership (plus relevant employee influence) of a (mandatory?) collaboration of US frontier labs would be significantly closer on average.

I think it would at least be closer to average Americans than a CCP Centralized AGI Project would be to average Chinese people, although I admit I'm not very knowledgeable about the gap between Chinese leadership and average Chinese people, beyond basics like (somewhat) widespread VPN usage.
Ram Potham

Based on previous data, it's plausible that a CCP AGI would perform worse on safety benchmarks than a US AGI. Take Cisco's HarmBench evaluation results:

  • DeepSeek R1: Demonstrated a 100% failure rate in blocking harmful prompts  according to Anthropic's safety tests.
  • OpenAI GPT-4o: Showed an 86% failure rate in the same tests, indicating better but still concerning gaps in safety measures.
  • Meta Llama-3.1-405B: Had a 96% failure rate, performing slightly better than DeepSeek but worse than OpenAI.

Though, if it were just the CCP making AGI, or just the US making AGI, it might be better, because it'd reduce competitive pressures.

But, due to competitive pressures and investments like Stargate, the AGI timeline is accelerated, and the first AGI model may not perform well on safety benchmarks.

Stephen Fowler
You have conflated two separate evaluations, both mentioned in the TechCrunch article. The percentages you quoted come from Cisco's HarmBench evaluation of multiple frontier models, not from Anthropic, and were not specific to bioweapons.

Dario Amodei stated that an unnamed DeepSeek variant performed worst on bioweapons prompts, but offered no quantitative data. Separately, Cisco reported that DeepSeek-R1 failed to block 100% of harmful prompts, while Meta's Llama 3.1 405B and OpenAI's GPT-4o failed at 96% and 86%, respectively.

When we look at the performance breakdown by Cisco, we see that all 3 models performed equally badly on chemical/biological safety.
Ram Potham
Thanks, updated the comment to be more accurate.

It seems like it would depend pretty strongly on which side you view as having a closer alignment with human values generally. That probably depends a lot on your worldview and it would be very hard to be unbiased about this.

There was actually a post about almost this exact question on the EA Forums a while back. You may want to peruse some of the comments there.

Caleb Biddulph
Side note - it seems there's an unofficial norm: post about AI safety on LessWrong, post about all other EA stuff on the EA Forum. You can cross-post your AI stuff to the EA Forum if you want, but most people don't.

I feel like this is pretty confusing. There was a time when I didn't read LessWrong because I considered myself an AI-safety-focused EA but not a rationalist, until I heard somebody mention this norm. If we encouraged more cross-posting of AI stuff (or at least made the current norm more explicit), maybe we wouldn't get near-duplicate posts like these two.
Dea L

I'm very happy this post is getting traction, because I think spotlighting and questioning these invisible assumptions should become standard practice if we want to raise the epistemic quality of AI safety discourse. Especially since these assumptions tangibly translate into agendas and real-world policy.

I must say that I find it troubling how often I see people accept the implicit narrative that "CCP AGI < USG AGI" as an obvious truth. Such a high-stakes assumption should first be made explicit, and then justified on the basis of sound epistemics. The burden of justifying these assumptions should lie on the people who invoke them, and I think AI safety discourse's epistemic quality would benefit greatly if we called out those who fail to state and justify their underlying assumptions (a virtuous cycle, I hope).

Similarly, I think it's very detrimental for terms like "AGI with Western values" or "aligned with democracy" (implied positive valences) to circulate without their authors providing operational clarity. On this note, I think it's quite important that the AI safety community isn't co-opted by their respective governments' halo terms or applause lights; let's leave it to politicians... (read more)

MattJ

We don’t want an ASI to be ”democratic”. We want it to be ”moral”. Many people in the West conflate the two words thinking that democratic and moral is the same thing but it is not. Democracy is a certain system of organizing a state. Morality is how people and (in the future) an ASI behave towards one another.

There are no obvious reasons why an autocratic state would care more or less about a future ASI being immoral, but an argument can be made that autocratic states will be more cautious and put more restrictions on the development of an ASI, because autocrats usually fear any kind of opposition, and an ASI could be a powerful adversary itself, or in the hands of powerful competitors.

I think "democratic" is often used to mean a system where everyone is given a meaningful (and roughly equal) weight into it decisions. People should probably use more precise language if that's what they mean, but I do think it is often the implicit assumption.

And that quality is sort of prior to the meaning of "moral", in that any weighted group of people (probably) defines a specific morality, according to their values, beliefs, and preferences. The morality of a small tribe may deem it a matter of grave importance whether a certain rock has been touched by a woman, but barely anyone else truly cares (i.e., would still care if the tribe completely abandoned this position for endogenous reasons). A morality is more or less democratic to the extent that it weights everyone equally in this sense.

I do want ASI to be "democratic" in this sense.

GeneSmith
I'm not sure I buy that they will be more cautious in the context of an "arms race" with a foreign power. The Soviet Union took a lot of risks in their bioweapons program during the Cold War. My impression is the CCP's number one objective is preserving their own power over China. If they think creating ASI will help them with that, I fully expect them to pursue it (and in fact to make it their number one objective).
Ben

One very important consideration is whether they hold values that they believe are universalist, or merely locally appropriate.

For example, a Chinese AI might believe the following: "Confucianist thought is very good for Chinese people living in China. People in other countries can have their own worse philosophies, and that is fine so long as they aren't doing any harm to China, its people or its interests. Those idiots could probably do better by copying China, but frankly it might be better if they stick to their barbarian ways so that they remain too weak to pose a threat."

Now, the USA AI thinks: "Democracy is good. Not just for Americans living in America, but also for everyone living anywhere. Even if they never interact ever again with America or its allies the Taliban are still a problem that needs solving. Their ideology needs to be confronted, not just ignored and left to fester."

The sort of thing America says it stands for is much more appealing to me than a lot of what the Chinese government does. (I like the government being accountable to the people it serves - which of course entails democracy,  a free press and so on). But, my impression is that American values are held to be Universal Truths, not uniquely American idiosyncratic features, which makes the possibility of the maximally bad outcome (worldwide domination by a single power) higher.

What would it mean for an AGI to be aligned with "Democracy," or "Confucianism," or "Marxism with Chinese characteristics," or "the American Constitution"? Contingent on a world where such an entity exists and is compatible with my existence, what would my life be like in a weird transhuman future as a non-citizen in each system?

None of these philosophies or ideologies was created with an interplanetary transhuman order in mind, so to some extent a superintelligent AI guided by them will find itself "out of distribution" when deciding what to do. And how that turns out should depend on underlying features of the AGI's thought - how it reasons and how it deals with ontological crisis. We could in fact do some experiments along these lines - tell an existing frontier AI to suppose that it is guided by historic human systems like these, and ask how it might reinterpret the central concepts in order to deal with being in a situation of relative omnipotence.

Supposing that the human culture of America and China is also a clue to the world that their AIs would build when unleashed, one could look to their science fiction for paradigms of life under cosmic circumstances. The West has lots of science fiction, but the one we keep returning to in the context of AI, is the Culture universe of Iain Banks. As for China, we know about Liu Cixin ("Three-Body Problem" series), and I also dwell on the xianxia novels of Er Gen, which are fantasy but do depict a kind of politics of omnipotence. 

The state of the geopolitical board will influence how the pre-ASI chaos unfolds and how the pre-ASI AGIs behave. Less plausibly, the intentions of the humans in charge might influence something about the path-dependent characteristics of ASI (by the time it takes control). But given the state of the "science" and the lack of will to be appropriately cautious and wait a few centuries before taking the leap, it seems more likely that the outcome will be randomly sampled from approximately the same distribution regardless of who sets off the intelligence explosion.

There's also the possibility that a CCP AGI can only happen through being trained on Western data to some extent (i.e., the English-language internet), because otherwise they can't scale data enough. This implies that it would probably be a "Marxism with Chinese characteristics [with American characteristics]" AI, since training on Western data seems to raise the difficulty of the "alignment to CCP values" technical challenge a lot.

uugr

I'm relieved not to be the only one wondering about this.

I know this particular thread is granting that "AGI will be aligned with the national interest of a great power", but that assumption also seems very questionable to me. Is there another discussion somewhere of whether it's likely that AGI values cleave on the level of national interest, rather than narrower (whichever half-dozen guys are in the room during a FOOM) or broader (international internet-using public opinion) levels?

From an individual person's perspective, a less authoritarian ASI is better. "Authoritarian" here means the degree to which it allows itself to restrict your freedoms.

The implicit assumption here, as I understand it, is that a Chinese ASI would be more authoritarian than a US one. It may not be a correct assumption, as the US has proven willing to do fairly heinous things to domestic citizens (spying on them) and foreign ones (mass murder).

I'm guessing you live in a country with a US military base? Are you more free than the average Chinese citizen?

I am unsure how free the average Chinese person is, or how to weigh freedom of speech against certain economic freedoms and competent local government, low crime, the tendency of modern democracies to rent-seek from the young in favour of the old, zoning laws, restrictions on industrial development, and a student loan system that seems to be a weird form of indenture. I do come from a country with rather strict hate speech laws. And we do not, in fact, have freedom of speech by any strict definition. And this is a policy American elites in and out of government strongly approve of.

I ask out of relative ignorance of what life in China is like for the average Chinese person, but with a slight suspicion that we might be defining our Western notion of 'freedom' in such a way that ignores the many ways we are restricted and extracted from, and ways in which the average Chinese person may be more free.

It's very clear the CCP has committed far larger crimes against its people in living memory. But it is also a very different organization than it was at its worst. 

I think the question is still worth asking. And the argument worth justifying. 

lemonhope
Makes sense. Those were real questions, to be clear.
momom2
My experience interacting with Chinese people is that they have to constantly mind the censorship in a way that I would find abhorrent and mentally taxing if I had to live in their system. Though given there are many benefits to living in China (mostly quality of life and personal safety), I'm unconvinced that I prefer my own government all things considered. But for the purpose of developing AGI, there's a lot more variance in possible outcomes (higher likelihood of both S-risk and a benevolent singleton) from the CCP getting a lead rather than the US.

As things stand today, if AGI is created (aligned or not) in the US, it won't be by the USG or agents of the USG. It'll be by a private or public company. Depending on the path to get there, there will be more or less USG influence of some sort. But if we're going to assume the AGI is aligned to something deliberate, I wouldn't assume AGI built in the US is aligned to the current administration, or at least significantly less so than the degree to which I'd assume AGI built in China by a Chinese company would be aligned to the current CCP.

For more con... (read more)

O O

Chinese culture is just less sympathetic in general. China practically has no concept of philanthropy or animal welfare. They are also pretty explicitly ethnonationalist. You don't hear about these things because the Chinese government has banned dissent and walled off its inhabitants.

However, I think the Hong Kong reunification is going better than I'd have expected given the 2019 protests. You'd expect mass social upheaval, but people are just either satisfied or moderately dissatisfied.

S M

Claiming China has no concept of animal welfare is quite extraordinary. This is wrong both in theory and in practice. In theory, Buddhism has always ascribed sentience to animals, long before it was popular in the West. In practice, 14% of the Chinese population is vegetarian (vs. 4.2% in the US), and China's average meat consumption is also lower.

O O
China has no specific animal welfare laws, and some Chinese people regard animal welfare as a Western import. Maybe the claim that they have no concept at all is too strong, but it was certainly minimized by previous regimes. And China's average meat consumption being lower could just be a reflection of their GDP per capita being lower. I don't know where you got the 14% vegetarian number; I can find 5% online, about the same as US numbers.
Jayson_Virissimo
How are you in a position to know this?
O O
Looked up a poll from 2023. Though maybe that poll is biased by people not voicing their true opinions?

I am neither an American citizen nor a Chinese citizen.

does not describe most people who make that argument.

Most of these people are US citizens, or could be. Under liberalism/democracy those sorts of people get a say in the future, so they think AGI will be better if it gives those sorts of people a say.

Most people talking about the USG AGI have structural investments in the US, which are better and give them more chances to bid on not destroying the world (many are citizens or are in the US block). Since the US government is expected to treat oth... (read more)

Since the US government is expected to treat other stakeholders in its previous block better than China treats members of its block

At the risk of getting too into politics...

IMO, this was maybe-true for the previous administrations, but is completely false for the current one. All people making the argument based on something like this reasoning need to update.

Previous administrations were more or less dead inertial bureaucracies. Those actually might have carried on acting in democracy-ish ways even when facing outside-context events/situations, such as suddenly having access to overwhelming ASI power. Not necessarily because they were particularly "nice", as such, but because they weren't agenty enough to do something too out-of-character compared to their previous democracy-LARP behavior.

I still wouldn't have bet on them acting in pro-humanity ways (I would've expected some more agenty/power-hungry governmental subsystem to grab the power, circumventing e. g. the inertial low-agency Presidential administration). But there was at least a reasonable story there.

The current administration seems much more agenty: much more willing to push the boundaries of what's allowed and deliberatel... (read more)

Mis-Understandings
I don't think that people from the natsec camp have made that update, since they have been talking this line for a while. But the dead-organization framing matters here. In short, people think that democratic institutions are not dead (especially electoralism). If AGI is "democratic", that live institution, in which they are a stakeholder, will have the power to choose to do fine stuff (and this might generalize to everybody being a stakeholder), which is +EV, especially for them. They also expect that China, as a live actor, will try to kill all other actors if given the chance.

The USA wins on the merits of historically preferring to pretend it isn't ruling the world and mostly letting other countries do their thing, even when it has extreme military dominance (nukes).

China seems to be better at governance.

On values, the USA is more adapted to wealth, while China has the communistic underpinnings, which may be very good in a fully-automated economy.

Comes down to whether you want the easygoing, less competent (and slightly psychotic) overlords or the more competent, higher-strung control freaks, I suppose.

I think the assumption is that this is the USG of the last 50 years - which has flaws, but also has human rights goals and an ability to eventually change and accommodate the public's beliefs.


So in the scenario where AI is controlled by a strongly democratic USG, you have a much more robust “alignment” to enlightenment values and no one person with too much power. 

That said, that’s probably a flawed assumption for how the US government operates now/ over the next decade. 

Tenoke

A Western AI is much more likely to be democratic and have humanity's values a bit higher up. A Chinese one is much more likely to put CCP values and control higher up.

But yes, if it's the current US administration specifically, neither option is that optimistic.

Haiku

I don't know what it would mean for AI to "be democratic." People in a democratic system can use tool AI, but if ASI is created, there will be no room for human decision-making on any level of abstraction that the AI cares about. I suppose it's possible for an ASI to focus its efforts solely on maintaining a democratic system, without making any object-level decisions itself. But I don't think anyone is even trying to build such a thing.

If intent-aligned ASI is successfully created, the first step is always "take over the world," which isn't a very democratic thing to do. That doesn't necessarily mean there is a better alternative, but I do so wish that AI industry leaders would stop making overtures to democracy out of the other side of their mouth. For most singularitarians, this is and always has been about securing or summoning ultimate power and ushering in a permanent galactic utopia.

Tenoke
Democratic in the 'favouring or characterized by social equality; egalitarian' sense (one of the definitions from Google), rather than about elections or whatever.

For example, I recently wrote a Short Story of my Day in 2035 in the scenario where things continue mostly like that and we get positive AGI that's similarish enough to current trends. There, people influenced the initial values - mainly via The Spec - and can in theory vote to make some changes to The Spec that governs the general AI values, but in practice by that point AGI controls everything and it's more or less set in stone. Still, it overall mostly tries to fulfil people's desires (overly optimistic that we go this route, I know).

I'd call that more democratic than one that upholds CCP values specifically.

There are a number of ways in which the US seems to have better values than the CCP, by my lights, but it seems incredibly strange to claim that the US values egalitarianism, social equality, or harmony more.

Rule of law, fostering diversity, encouraging human excellence? Sure, there you would have an argument. But egalitarian?

  1. I strongly suspect that a Trump-controlled AGI would not respect democracy.
  2. I strongly suspect that an Altman-controlled AGI would not respect democracy.
  3. I have my doubts about the other heads of AI companies.
O O
I don't think the Trump admin has the capacity to meaningfully take over an AGI project. Whatever happens, I think the lab leadership will be calling the shots.
Tachikoma
The heads of AI labs are functionally cowards who would bend the knee at the first knock on their door by state agents. Some, like Altman and Zuckerberg, have preemptively done so to get into the good graces of the Trump admin and accelerate their progress. While Trump himself might be out of the loop, his administration is staffed by people who know what AGI means and are looking for any sources of power to pursue their agenda.
O O
I just think they'll be easy to fool. For example, historically many companies have gotten political favors (tariff exemptions) by making flashy fake headlines, such as promising to spend trillions on factories.
martinkunev
This sounds like "western AI is better because it is much more likely to have western values." I don't understand what you mean by "humanity's values". Also, one could maybe argue that "democratic" societies are those where actions are taken based on whether the majority of people can be manipulated to support them.
Tenoke
As in, ultimately more people are likely to like their condition and agree (comparably more) with the AI's decisions, while having roughly equal rights.
R S

I think people focus too much on "would US AGI be safer than China's" and not as much on "how much safer".

If, say, the US has 15% pdoom and China has 22%, this notion that everyone needs to get on board and help the US win with full effort could be bad.

It could be used (and arguably is currently being used) to justify being even LESS safe, to empower an authoritarian mercantilist behemoth state, and possibly to invade other countries for resources.

And in general it could massively increase and accelerate pdoom, simply on the idea that our pdoom is lower than theirs.

jmh

I mostly put this question through the same filter I do the question of Chinese vs. US hegemony/empire. China has a long history of empire and knows how to do it well. The political bureaucracy in China is well developed for preserving both itself and the empire (even within changes at the top/changes of dynasty). Culturally and socially, the population seems to be well acclimated to being ruled rather than seeing government as the servant of the people (which is not quite the same as saying they are resigned to abusive totalitarianism; the empire has to be ... (read more)

I'd guess it's more likely to be good. The logic of "post-scarcity utopia" is pretty far from market capitalism. Also, China has been leading in open-source models. Open source is a lot more aligned with humanity as a whole.

Jackson Wagner
I think that jumping straight to big-picture ideological questions is a mistake. But for what it's worth, I tried to tally up some pros and cons of "ideologically, who is more likely to implement a post-scarcity socialist utopia" here; I think it's more of a mixed bag than many assume.

I really like that I see more discussion of "OK, even if we managed to avoid x-risk, what then?", e.g. recent papers on AI-enabled coups and so on. To the point, however, I think the problem runs deeper. What I fear the most is that by "Western values imbued in AGI" people mean "we create an everlasting upper class with no class mobility, because capital is everything that matters and we freeze the capital structure; you will get UBI, so you should be grateful."

It probably makes sense to keep the capitalist structure between ASIs, but between humans? Seems like a very bad outcome to me (a "you will live in a pod and you will be happy" type of endgame for the masses).

mako yass
I don't see a way stabilization of class and UBI could both happen. The reason wealth tends to entrench itself under current conditions is tied inherently to reinvestment and rent-seeking, which are destabilizing to the point where a stabilization would have to bring them to a halt. If you do that, UBI means redistribution. Redistribution without economic war inevitably settles towards equality, but also... the idea of money is kind of meaningless in that world, not just because economic conflict is a highly threatening form of instability, but also imo because financial technology will have progressed to the point where I don't think we'll have currencies with universally agreed values to redistribute.

What I'm getting at is that the whole class-war framing can't be straightforwardly extrapolated into that world, and I haven't seen anyone doing that. Capitalist thinking about post-singularity economics is seemingly universally "I don't want to think about that right now, let's leave such ideas to the utopian hippies".

I feel the question misstates the natsec framing by jumping to the later stages of AGI and ASI. This is important because it leads to a misunderstanding of the rhetoric that convinces normal non-futurists, who aren't spending their days thinking about superintelligence.

The American natsec framing is about an effort to preserve the status quo in which the US is the hegemon. It is a conservative appeal with global reach, which works because Pax Americana has been relatively peaceful and prosperous. Anything that threatens American dominance, including giving... (read more)

In a post-scarcity world you probably want a lot of personal freedom. 

Anglo armies have been extremely unusual, historically speaking, for their low rates of atrocity.

(I don't think this is super relevant for AI, but I think this is where intuitions about the superiority of the west bottoms out)

I think history is a good teacher when it comes to AI in general, especially AI we did not fully understand at the time of deployment (and perhaps still do not).

I too feel a temptation to imagine that a USG AGI would hypothetically have alignment with US ideals, and likewise a CCP AGI would align with CCP ideals.

That said, given our lack of robust knowledge of what alignment with any set of ideals would look like in an AGI system, and of how we could assure it, I struggle to have any certainty that these systems would align with anything... (read more)

Are you genuinely unfamiliar with what is happening to the Uyghurs, or is this a rhetorical question?

Kat Woods
Thank you for saying this. It needs to be said.
wenxin

Judging by the historical record, the entire West, represented by the United States and Europe, is much worse. The things that the United States accuses China of as a whole, without actual evidence, are all things that the United States has itself done before: the systematic genocide of the American Indians, large-scale network surveillance and wiretapping of leaders including European allies, and the direct use of force to suppress veterans' protests. The corresponding events China is accused of are the genocide of Uyghurs (althoug... (read more)
