Focus: post-AGI governance
Overarching mission: figure out what winning looks like for Greater Terra
Current quest-line: researching ecosystem (mis)alignment, and afterwards, envisioning concrete post-AGI viatopias (robust against Gradual Disempowerment-class x-risk)
I have a penchant for designing and architecting cognitive infrastructure (for myself, for now)
You have a low executive function budget due to ADHD, depression, or just being Extremely Online™
Been 2 years, but writing this because I just felt very glad that you considered [executive functioning budget] as a factor of consideration :")
Often unconsidered, despite being so very real
I'm very happy this post is getting traction, because I think spotlighting and questioning these invisible assumptions should become standard practice if we want to raise the epistemic quality of AI safety discourse. Especially since these assumptions tangibly translate into agendas and real-world policy.
I must say that I find it troubling how often I see people accept the implicit narrative that “CCP AGI < USG AGI” as an obvious truth. Such a high-stakes assumption should first be made explicit, and then justified on the basis of sound epistemics. The burden of justification should lie on those who invoke these assumptions, and I think AI Safety discourse's epistemic quality would benefit greatly if we called out those who fail to state and justify them (a virtuous cycle, I hope).
Similarly, I think it's very detrimental for terms like “AGI with Western values” or “aligned with democracy” (with their implied positive valence) to circulate without their authors providing operational clarity. On this note, I think it quite important that the AI Safety community isn't co-opted by their respective governments' halo terms or applause lights; let's leave it to politicians and AI company leaders to be rhetorically potent but epistemically hollow, and bar their memetic trojan horses from entering our gates. As such, I staunchly advocate that these phrases be tabooed until defined precisely; otherwise they function as proxies for geopolitical affiliation (useful for political agendas) rather than as technical descriptions of value-alignment architecture (useful for AI Safety's epistemic quality, ideally upstream of political agenda-setting).
Perhaps more foundationally, we should take a step back and interrogate the notion of “civilizational-value binaries” that sneaks into such thinking and discourse. The “Western vs Asian values” framing often relies on cached stereotypes, e.g. “Western = liberal/democratic” and “Asian = authoritarian/hierarchical.”
This is where the actual empirical literature becomes useful:
A direct empirical test of this idea was conducted by Christian Welzel (2011) using World Values Survey data from 87 countries, including 15 in Asia. His study, “The Asian Values Thesis Revisited”, asked whether people in Asia show a cultural immunity to “emancipative values” like personal autonomy, gender equality, freedom of expression, and liberal democracy — even under modernization.
His findings decisively refute the claim that Asian cultures are categorically resistant to these values:
TLDR: The civilizational-value binary dissolves under empirical scrutiny. What predicts support for so-called “emancipative values” is not cultural origin, but development indicators like education and access to knowledge infrastructure.
Regardless of whether one endorses these values as desirable, I think the key point is this: those “emancipative values” are not exclusive to the West, and their presence is developmental, not civilizational. Accordingly, this breaks the frame that AGI systems trained in ‘the West’ are inherently more likely to reflect what are assumed to be morally superior goals.
If the belief that “CCP AGI < USG AGI” is riding on a fuzzy vibes logic that “Western AGI = liberal democracy values = better for... all humans on the planet(?)”, we should spotlight and scrutinize it.
Useful truth-seeking, scrutinizing questions might be:
Until we can answer that last question, the claim that “CCP AGI is worse” remains not an argument, but an assumption.
When people say CCP AGI is worse, they often don't specify “worse for whom.”
I'm left to guess. At the end of the AGI day, who really wins and who really loses?