LESSWRONG

Dea L
Focus: post-AGI governance
Overarching mission: figure out what winning looks like for Greater Terra
Current quest-line: researching ecosystem (mis)alignment, and afterwards, envisioning concrete post-AGI viatopias (ones robust against Gradual Disempowerment–class x-risk)

I have a penchant for designing and architecting cognitive infrastructure (for myself, for now)

Why Should I Assume CCP AGI is Worse Than USG AGI?
Dea L · 4mo · 153

I'm very happy this post is getting traction, because I think spotlighting and questioning these invisible assumptions should become standard practice if we want to raise the epistemic quality of AI safety discourse. Especially since these assumptions tangibly translate into agendas and real-world policy.

I must say that I find it troubling how often people accept the implicit narrative that “CCP AGI < USG AGI” as if it were obvious truth. Such a high-stakes assumption should first be made explicit, and then justified on the basis of sound epistemics. The burden of justification should lie with those who invoke the assumption, and I think the epistemic quality of AI safety discourse would benefit greatly if we called out those who fail to state and justify their underlying assumptions (a virtuous cycle, I hope).

Similarly, I think it's very detrimental for terms like “AGI with Western values” or “aligned with democracy” (with their implied positive valences) to circulate without their authors providing operational clarity. On this note, I think it's quite important that the AI safety community isn't co-opted by their respective governments' halo terms or applause lights; let's leave it to politicians and AI company leaders to be rhetorically potent but epistemically hollow, and bar their memetic trojan horses from entering our gates. As such, I staunchly advocate for these phrases to be tabooed until defined precisely; otherwise they function as proxies for geopolitical affiliation (useful for political agendas) rather than as technical descriptions of value alignment architecture (useful for AI safety's epistemic quality, which should ideally sit upstream of political agenda-setting).


Perhaps more foundationally, we should take a step back and interrogate the notion of "civilizational-value binaries" that sneaks into such thinking and discourse. The “Western vs Asian values” framing often relies on cached stereotypes, e.g. “Western = liberal/democratic” and “Asian = authoritarian/hierarchical.”

This is where the actual empirical literature becomes useful:

A direct empirical test of this idea was conducted by Christian Welzel (2011) using World Values Survey data from 87 countries, including 15 in Asia. His study, “The Asian Values Thesis Revisited”, asked whether people in Asia show a cultural immunity to “emancipative values” like personal autonomy, gender equality, freedom of expression, and liberal democracy — even under modernization.

His findings decisively refute the claim that Asian cultures are categorically resistant to these values:

  • Japan ranks above the US and UK in support for emancipative values.
  • East Asians with higher education levels support liberal-democratic values just as strongly as Westerners — sometimes more so.
  • The apparent East–West divide disappears once you control for knowledge development (education, access to information, scientific output).
  • In multilevel regression models, the “Asia vs West” distinction becomes statistically insignificant once development indicators are accounted for.

TLDR: The civilizational-value binary dissolves under empirical scrutiny. What predicts support for so-called “emancipative values” is not cultural origin, but development indicators like education and access to knowledge infrastructure.

Regardless of whether one endorses these values as desirable, the key point is this: “emancipative values” are not exclusive to the West, and their presence is developmental, not civilizational. Accordingly, this breaks the frame that AGI systems trained in ‘the West’ are inherently more likely to reflect what are assumed to be morally superior goals.

If the belief that “CCP AGI < USG AGI” is riding on a fuzzy vibes logic that “Western AGI = liberal democracy values = better for... all humans on the planet(?)”, we should spotlight and scrutinize it.

Useful truth-seeking, scrutinizing questions might be:

  • What values are you actually gesturing to here?
  • How are these values defined, implemented during development, enforced institutionally, and ultimately reflected in AGI behavior and the governance systems surrounding it?
    • Initially carried out by the actors and institutions responsible for creating and deploying the AGI, and later by both the AGI itself and the governance structures designed to guide or constrain it
  • Compared to a USG AGI, what do we actually know about the CCP’s institutional architecture, AGI-specific value-setting processes, incentive structures, etc. — such that we can confidently say a CCP-developed AGI would lead to worse outcomes for... all humans across the planet(?)[1]

Until we can answer that last question, the claim that “CCP AGI is worse” remains not an argument, but an assumption.

  1. ^

    When people say CCP AGI is worse, they often don't specify "worse for whom."
    I'm left to guess. At the end of the AGI day, who really wins and who really loses?

The Sequences Highlights on YouTube
Dea L · 3mo · 30

You have a low executive function budget due to ADHD, depression, or just being Extremely Online™


It's been 2 years, but I'm writing this because I just felt very glad that you considered [executive functioning budget] as a factor :")

Often unconsidered, despite being so very real
