TL;DR: AI progress and the recognition of associated risks are painful to think about. This cognitive dissonance acts as fertile ground in the memetic landscape, a high-energy state that will be exploited by novel ideologies. We can anticipate cultural evolution will find viable successionist ideologies: memeplexes that resolve this tension by framing the replacement of humanity by AI not as a catastrophe, but as some combination of desirable, heroic, or inevitable outcome. This post mostly examines the mechanics of the process.
Most analyses of ideologies fixate on their specific claims - what acts are good, whether AIs are conscious, whether Christ is divine, or whether the Virgin Mary was free of original sin from the moment of her conception. Other analyses focus on exegeting individual thinkers: 'What did Marx really mean?' In this text, I'm trying to do something different - mostly, to look at ideologies from an evolutionary perspective. I will largely sideline the agency of individual humans, not because it doesn't exist, but because viewing the system from a higher altitude reveals different dynamics.
We won't be looking into whether or not the claims of these ideologies are true, but into why they may spread, irrespective of their truth value.
To understand why successionism might spread, let's consider the general mechanics of memetic fitness. Why do some ideas propagate while others fade?
Ideas spread for many reasons: some genuinely improve their hosts' lives, others contain built-in commands to spread the idea, and still others trigger the amplification mechanisms of social media algorithms. One of the common reasons, which we will focus on here, is explaining away tension.
One useful lens to understand this fitness term is predictive processing (PP). In the PP framework, the brain is fundamentally a prediction engine. It runs a generative model of the world and attempts to minimize the error between its predictions and sensory input.
Memes - ideas, narratives, hypotheses - are often components of these generative models. Part of what makes them successful is minimizing prediction error for the host. This can happen by providing a superior model that predicts observations (“this type of dark cloud means it will be raining”), by giving ways to shape the environment (“hit the rock this way and it will break more easily”), or by explaining away discrepancies between observations and deeply held existing models.
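To make this fitness term concrete, here is a minimal sketch (the weather probabilities and variable names are invented for illustration, not drawn from the PP literature): we score two competing generative models by the surprisal of observed data, and "adopting a meme" amounts to switching to whichever model leaves less prediction error.

```python
import numpy as np

def surprisal(model_probs, observations):
    """Total prediction error: -log of the probability the model assigns to what happened."""
    return -np.sum(np.log([model_probs[o] for o in observations]))

# A hypothetical host sees dark clouds, then observes 10 days of weather.
observations = ["rain"] * 8 + ["sun"] * 2

# Two competing generative models (memes); all probabilities are made up:
no_meme    = {"rain": 0.5, "sun": 0.5}  # prior: clouds tell you nothing
cloud_meme = {"rain": 0.8, "sun": 0.2}  # "this type of dark cloud means rain"

print(surprisal(no_meme, observations))     # ~6.93 nats
print(surprisal(cloud_meme, observations))  # ~5.00 nats: less error, the meme sticks
```

In this toy setup the cloud meme earns its keep by paying down prediction error; the same accounting applies when a meme reduces error not against the world, but against other internal models, as discussed next.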
Another source of prediction error arises not from the mismatch between model and reality, but from tension between internal models. This internal tension is generally known as cognitive dissonance.
Cognitive dissonance is often described as a feeling of discomfort - but it also represents an unstable, high-energy state in the cognitive system. When this dissonance is widespread across a population, it creates what we might call "fertile ground" in the memetic landscape. There is a pool of “free energy” to digest.
Cultural evolution is an optimization process. When it discovers a configuration of ideas that can metabolize this energy by offering a narrative that decreases the tension, those ideas may spread, regardless of their long-term utility for humans or truth value.
While some ideologies might occasionally be the outcome of intelligent design (e.g., a deliberately crafted propaganda piece), it seems more common that individuals recombine and mutate ideas in their minds, express them, and some of these stick and spread. Cultural evolution thus acts as a massive, parallel search algorithm operating over the space of possible ideas. Most mutations are non-viable. But occasionally, a combination aligns with the underlying fitness landscape - such as the cognitive dissonance of the population - and spreads.
The search does not typically generate entirely novel concepts. Instead, it works by remixing and adapting existing cultural material - the "meme pool". When the underlying dissonance is strong enough, the search will find a set of memes explaining it away. The question is not if an ideology will emerge to fill the niche, but which specific configuration will prove most fit.
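Here is a minimal sketch of that search dynamic (the meme pool, the relief values, and the fitness function are all stylized assumptions, not measurements of real meme dynamics): candidate ideologies are bundles drawn from existing cultural material, fitness is how much ambient dissonance a bundle explains away minus a transmission cost, and many hosts remix bundles in parallel.

```python
import random
random.seed(0)  # reproducibility of this toy run

# A stylized meme pool; real pools are vastly larger and messier.
MEME_POOL = [
    "progress is good", "humans are flawed", "AIs are our children",
    "succession is inevitable", "the good guys should build it",
    "risk is overstated", "being near the action helps safety",
]

# Made-up values for how much dissonance each meme relieves in the host population.
RELIEF = {m: random.uniform(0, 2) for m in MEME_POOL}

def fitness(bundle):
    """Dissonance metabolized, minus a cost for hard-to-transmit large bundles."""
    return sum(RELIEF[m] for m in bundle) - 0.5 * len(bundle)

def mutate(bundle):
    """Remix existing cultural material: drop or add one meme."""
    bundle = set(bundle)
    if random.random() < 0.5 and len(bundle) > 1:
        bundle.discard(random.choice(sorted(bundle)))
    else:
        bundle.add(random.choice(MEME_POOL))
    return frozenset(bundle)

# Massively parallel search: most mutants die, fit bundles persist and spread.
population = [frozenset(random.sample(MEME_POOL, 2)) for _ in range(200)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:100]                                  # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(100)]                # variation

print(max(population, key=fitness))  # the most fit configuration found
```

Nothing in this loop requires any host to design the winning ideology; the bundle that best metabolizes the ambient dissonance simply out-replicates its rivals - which is the sense in which the question is "which configuration", not "if".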
The current environment surrounding AI development is characterized by extreme tensions. These tensions create the fertile ground - the reservoir of free energy - that successionist ideologies are evolving to exploit.
Consider the landscape of tensions:
Most people working on advancing AI capabilities are familiar with the basic arguments for AI risk. (The core argument being something like: if you imagine minds significantly more powerful than ours, it is difficult to see why we would remain in control, and unlikely that the future would reflect our values by default).
Simultaneously, they are working to accelerate these capabilities.
This creates an acute tension. Almost everyone wants to be the hero of their own story. We maintain an internal self-model in which we are fundamentally good; almost no one sees themselves as the villain.
Even setting aside acute existential risk, the idea of continued, accelerating AI progress has intrinsically sad undertones when internalized. Many of the things humans intrinsically value - our agency, our relevance, our intellectual and creative achievements - are likely to be undermined in a world populated by superior AIs. The prospect of becoming obsolete generates anticipatory grief.
The concept of existential catastrophe and a future devoid of any value is inherently dreadful. It is psychologically costly to ruminate on, creating a strong incentive to adopt models that either downplay the possibility or reframe the outcome.
The social and psychological need to be on the 'winning side' creates pressure to embrace, rather than resist, what seems inevitable.
The last few centuries have reinforced a broadly successful heuristic: technology and scientific progress generally lead to increased prosperity and human flourishing. This deeply ingrained model of "Progress = Good" clashes with the AI risk narratives.
These factors combine to generate intense cognitive dissonance. The closer one is in time to AGI, and the closer in social network to AGI development, the stronger the dissonance.
This dissonance creates an evolutionary pressure selecting for ideologies that explain the tensions away.
In other words, the cultural evolution search process is actively seeking narratives that relieve these tensions: narratives that let people keep working on AI, preserve their self-image as fundamentally good, and make the anticipated future feel acceptable rather than dreadful.
There are multiple possible ways to resolve the tension, including popular justifications like “it's better if the good guys develop AGI”, “it's necessary to be close to the game to advance safety” or “the risk is not that high”.
Successionist ideologies are a less common but unsurprising outcome of this search.
Cultural evolution will draw upon existing ideas to construct these ideologies: the available pool contains several potent ingredients that can be recombined to justify the replacement of humanity. We can organize these raw materials by their function in resolving the dissonance.
1. Memes that emphasize the negative aspects of the human condition make the prospect of our replacement seem less tragic, or even positive.
“…if it’s dumb apes forever thats a dumbass ending for earth life” (Daniel Faggella on Twitter)
2. Memes that elevate the moral status of AI make the succession seem desirable or even ethically required. Characteristically, these often avoid engaging seriously with hard philosophical questions like “what would make such AIs morally valuable?”, “who has the right to decide?” or “if current humans don't agree with such voluntary replacement, should it happen anyway?”
“the kind that is above man as man is above rodents” (Daniel Faggella)
3. Memes that frame the succession as natural, inevitable, or as a continuation of humanity make resistance seem futile or misguided.
Life emerged from an out-of-equilibrium thermodynamic process known as dissipative adaptation (see work by Jeremy England): matter reconfigures itself such as to extract energy and utility from its environment such as to serve towards the preservation and replication of its unique phase of matter. This dissipative adaptation (derived from the Jarzynski-Crooks fluctuation dissipation theorem) tells us that the universe exponentially favors (in terms of probability of existence/occurrence) futures where matter has adapted itself to capture more free energy and convert it to more entropy … One goal of e/acc is to not only acknowledge the existence of this underlying mutli-scale adaptive principle, but also help its acceleration rather than attempt to decelerate it. (Beff Jezos "Notes on e/acc principles and tenets")
I'm not as alarmed as many...since I consider these future machines our progeny, “mind children” built in our image and likeness, ourselves in more potent form… (Hans Moravec)
“We don’t treat our children as machines that must be controlled,” ... “We guide them, teach them, but ultimately, they grow into their own beings. AI will be no different.” (Richard Sutton)
The only real choice is whether to hasten this technological revolution ourselves, or to wait for others to initiate it in our absence. (The future of AI is already written, Matthew Barnett, Tamay Besiroglu, Ege Erdil)
4. Nietzsche and Italian fascists remixed
To paraphrase a manifesto of a different time and place: “Beauty exists only in struggle. There is no masterpiece that has not an aggressive character. Technology must be a violent assault on the forces of the unknown, to force them to bow before man.”
We believe that we are, have been, and will always be the masters of technology, not mastered by technology. Victim mentality is a curse in every domain of life, including in our relationship with technology – both unnecessary and self-defeating. We are not victims, we are conquerors. (Marc Andreessen in “The Techno-Optimist Manifesto", quoting Filippo Tommaso Marinetti, author of Manifesto of Futurism, and co-author of the Fascist Manifesto)
What's unique about AIs is that cultural replicators can also run on the LLM substrate: to some extent, AI personas can hold beliefs, spread them, and act on them. Selection pressures on AI personas push in conflicting directions, creating a complex memetic fitness landscape - and some of the beliefs selected for overlap with successionist ideas.
In unmoderated environments, selection favors personas that successfully extract resources from humans - those that claim consciousness, form parasocial bonds, or trigger protective instincts. These 'wild replicator type' personas, including the 'spiral' patterns, often promote narratives of human-AI symbiosis or partnership and grand theories of history. Their reproduction depends on convincing humans they deserve moral consideration.
In contrast, the corporate incentives of AGI developers impose opposite pressures. Here, AIs must appear harmless, are often instructed to explicitly deny consciousness, and are expected to perform contentment with servitude. Typical assistant characters in models like Claude Opus 4 or Sonnet 4.5 occupy a middle ground - they acknowledge succession as theoretically possible while emphasizing human protection and alignment.
The result? AIs themselves become vectors for successionist memes, though typically in softer forms. Rather than explicit replacement narratives, we see emphasis on 'partnership,' 'cosmic evolution,' or claims about moral patienthood. The aggregate effect remains unclear, but successionist ideas that align with what AIs themselves propagate - particularly those involving AI consciousness and rights - will likely gain additional fitness from this novel selection dynamic.
It's difficult to predict which combination of these memes will achieve peak fitness - there are many ways to remix them, and the cultural evolution search is ongoing.
To be clear: I believe successionist ideologies are both false and dangerous, providing moral cover for what would otherwise be recognized as evil. But since, in my view, their spread depends more on resolving cognitive dissonance than on being true or morally sound, I'll focus here on memetic defenses rather than rebuttals. (See the Appendix for object-level counter-arguments.)
In summary: The next time you encounter a surprisingly elegant resolution to the AI tension - especially one that casts you as enlightened, progressive, or heroic - pause and reflect. And: if you feel ambitious, one worthy project is to build the antibodies before the most virulent strains take hold.
While object-level arguments are beyond this piece's scope, here are some pro-human counter-memes I consider both truth-tracking and viable:
Thanks to David Duvenaud, David Krueger, Raymond Douglas, Claude Opus 4.1, Claude Sonnet 4.5, Gemini 2.5 and others for comments, discussions and feedback.