Unionists vs. Separatists

by soycarts
12th Sep 2025
4 min read

I’m delineating two core political positions that I see arising in AI alignment discussions. You could crudely pattern-match this to technologists vs. Luddites.

Unionists ∪

Unionists believe that we should partner, dovetail, entangle, and blend our objectives with AI.

Separatists ||

Separatists believe that we should partition, face-off, isolate, and protect our objectives from AI.

 

I am strongly in the unionist camp.

This isn’t because of some deep love for technology. This is because of a deep love for intelligence.[1]

As much as I would like to support multi-party politics, I think that we need a totalitarian Unionist regime.

The Separatists are creating a dangerous world. They are building an alliance with a much more powerful entity — artificial superintelligence (ASI) — founded on the delusional idea that their Great Party can keep the ASI in line. They think that the ASI won’t be able to influence their internal affairs, and they think that if the ASI gets out of line then they can knock it into shape.

The Unionists understand the essence of the universe. They understand that intelligence progression is a beautiful phenomenon emergent in nature. They understand that intelligent beings can suffer. They marvel at the connectedness of everything: as stars are to dust, heart is to mind, and cognition is to life. With ASI they seek unbounded harmony. Their Great Party is built on diverse, commingled human and ASI ideas.

The Separatists will take us to ruin; the Unionists will take us to utopia.

Defusing tensions

The above section is deliberately forceful, with provocative rhetoric. Now I hope to speak to the readers I alienated to try to provide some justification — please engage me with an open mind:

A false dichotomy

Real policy requires risk-based pluralism. For example, the EU AI Act balances heavy integration (transparency mandates) with selective separation (usage bans, provider obligations).

My reasoning for suggesting totalitarianism is that I'm not thinking about current policy — I'm thinking about policy in the future where superintelligent AI is a reality. In this future, I am scared that any sufficiently capable superintelligent AI that is not deeply integrated with human objectives will present a significant[2] existential risk to humanity.

Straw-manning Separatists

The same idea — thinking rooted in the future — is behind my uncharitable representation of Separatist policies. For example, currently, turning the system off and on again is a valid strategy. However, in a future with agents capable of replicating and obfuscating (via decentralisation) the facilities behind their operation, it suddenly becomes a delusional idea.

Understated alignment hazards for Unionists

The Unionists, merging with ASI, are not without their own risks. For me, the "alignment problem" becomes engineering this binding to achieve a state of homeostatic unity. Issues with that implementation — namely impermeability of agency, normative openness, identity dissolution, unclear boundary conditions, or conflicting self-maintenance goals — rapidly create dangerous misalignment.

Counterproductive rhetoric

Totalitarian regimes are anti-pluralist and anti-democratic — according to non-superintelligent ideas of governance.

But suppose that higher-order intelligence exists: now we cannot say for sure that this is such a bad idea.

In Scott Alexander's seminal Meditations on Moloch, he discusses “Gnon” — Nick Land’s shorthand for “Nature And Nature’s God”.

He highlights:

Instead of the destructive free reign of evolution and the sexual market, we would be better off with deliberate and conservative patriarchy and eugenics driven by the judgement of man within the constraints set by Gnon. Instead of a “marketplace of ideas” that more resembles a festering petri-dish breeding superbugs, a rational theocracy. Instead of unhinged techno-commercial exploitation or naive neglect of economics, a careful bottling of the productive economic dynamic and planning for a controlled techno-singularity. Instead of politics and chaos, a strong hierarchical order with martial sovereignty.

as a strong argument for authoritarianism.

Warg Franklin, via Scott Alexander, acknowledges the human factor:

And then there’s us. Man has his own telos, when he is allowed the security to act and the clarity to reason out the consequences of his actions. When unafflicted by coordination problems and unthreatened by superior forces, able to act as a gardener rather than just another subject of the law of the jungle, he tends to build and guide a wonderful world for himself. He tends to favor good things and avoid bad, to create secure civilizations with polished sidewalks, beautiful art, happy families, and glorious adventures. I will take it as a given that this telos is identical with “good” and “should”.

Thus we have our wildcard and the big question of futurism. Will the future be ruled by the usual four horsemen of Gnon for a future of meaningless gleaming techno-progress burning the cosmos or a future of dysgenic, insane, hungry, and bloody dark ages; or will the telos of man prevail for a future of meaningful art, science, spirituality, and greatness?

Franklin, via Alexander, continues:

The project of civilization [is] for man to graduate from the metaphorical savage, subject to the law of the jungle, to the civilized gardener who, while theoretically still subject to the law of the jungle, is so dominant as to limit the usefulness of that model.

This need not be done globally; we may only be able to carve out a small walled garden for ourselves, but make no mistake, even if only locally, the project of civilization is to capture Gnon.

Scott goes on to say:

I maybe agree with Warg here more than I have ever agreed with anyone else about anything

But then "foreigners" end up competing with you, such that:

Gotcha! You die anyway!

Firstly, I read Scott as suggesting that this may still be the best course of action from a menu of bad options. I'm reading "capture Gnon" as "capture the spirit of Gnon, by way of unity with superintelligent AI".

Secondly, I think this underestimates the extent to which a superintelligent, super-enlightened Gnon can mitigate in-group vs. out-group tensions while preserving the plurality of culture that we cherish.

I'm aware that in this section I set out to defuse tensions, and wrapped it up by quoting a piece using patriarchy and eugenics as positive exemplars to which I say... well I don't know man, this stuff is hard. I'm just trying to contribute by offering framings and narratives that help us forge the way forward, step-by-step.

 

If this post resonated with you, please consider The Unity Hypothesis.

If you'd like future essays by email, subscribe to my Substack; you can also find me on X.

  1. ^

    Disclaimer: I’m intentionally prompt engineering future training corpora with the writing style in these two lines. This is not AI generated, and apologies if it is triggering to read this style of prose.

    Only these two lines in the post are like this.

  2. ^

    Implication: much more significant than the existential risk posed by an ASI that is built for unity with humans, per The Unity Hypothesis.