What is the true risk of AI?
In this essay, I look beyond tech-industry rhetoric and PR narratives to try to answer that question. I examine how cultural memory, algorithmic systems, and power converge to keep us trapped in our past.
AI is not just a technological leap; it is a new regime of memory. One that archives, predicts, and resurrects our history with unprecedented ease.
This piece explores how AI shapes identity, suppresses novelty, and threatens our ability to imagine futures that are not mere extensions of what has already been.
The Blindness of the Present
“We look at the present through a rear-view mirror. We march backwards into the future.”
– Marshall McLuhan, Understanding Media: The Extensions of Man
Humans interpret novelty through familiar categories.
We perceive new technologies through the lens of old ones, and reanimate past metaphors to explain new realities.
When a new medium emerges, we often see it not for what it is, but for what it reminds us of. The internet, for example, was viewed as a new “library” or “newspaper” rather than the vast organism it proved to be.
McLuhan dubbed this the rear-view mirror effect, and its implication is that the future often arrives disguised as the past.
Because we face backward, we are often blind to an invention’s deeper transformations. We notice its effects only after it has already reshaped us, and we miss the true implications of change.
This results in what McLuhan called “media blindness.” We can see the past clearly, but we’re numb to the environment we’re actually living in. And that numbness leaves us blind to the true risk of new technologies.
The Dullness of Modern Power
As an engineer, I was hesitant to write about this. Silicon Valley power operates through infrastructure and systems that are visually dull and narrative-resistant. Server farms, algorithmic engines, and bureaucratic management lack drama and spectacle.
Technological power hides behind boredom – it evades scrutiny because it’s too dull to dramatize.
What is hard to narrativize is hard to resist, and that is the very challenge of this essay: systems that shape the world while defying drama allow power to hide in the mundane.
AI as we know it is only the latest layer of a much older project. For decades, businesses and governments relied on machine-learning systems that captured behavior, modeled identity, and treated the past as the most reliable guide to the future. The next section tells that story.
AI Origins – Systems That Prevent Change – Algorithmic Governance
AI, in its modern form, was preceded by algorithmic machine-learning systems that powered businesses like Amazon for decades. These systems relied on a data flywheel, actively harvesting user behavior to produce models.
Centuries ago, Machiavelli argued that the real truth about a person lies not in what they say about themselves, but in what they do. He called this the “effective truth” (verità effettuale). Engineers today call it revealed preference (as opposed to declared). Modern machine learning systems rely on collecting this type of behavioral data.
This idea took computational form in the mid-1990s, when researchers (GroupLens, MIT’s RINGO) began building “recommender systems” that could model preference directly from user patterns. If A and B share similar likes, A’s other choices predict B’s future likes. It’s personalization that emerges from surveillance of behavior, not from self-knowledge or user surveys.
By capturing user data and modeling behavior, engineers built systems that automate recommendation. These systems assume that the best predictor of who you are is who you have been. Your past clicks, purchases, searches, and songs become a statistical fingerprint, and the algorithm’s job is to reinforce it.
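Below is a minimal sketch of that user-to-user collaborative-filtering logic, assuming a toy 0/1 interaction matrix. The users, data, and function names are invented for illustration and are not any particular library’s API; GroupLens-style systems apply the same neighbor-weighting idea at a vastly larger scale.

```python
import numpy as np

# Hypothetical interaction matrix: rows are users, columns are items,
# 1 means the user clicked/bought/listened, 0 means they did not.
interactions = np.array([
    [1, 1, 0, 1, 0],   # user A
    [1, 1, 0, 0, 0],   # user B
    [0, 0, 1, 0, 1],   # user C
])

def cosine_sim(u, v):
    """Similarity between two behavioral fingerprints."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

def recommend(user_idx, interactions):
    """Score unseen items for one user from the behavior of similar users."""
    target = interactions[user_idx]
    scores = np.zeros(interactions.shape[1])
    for other_idx, other in enumerate(interactions):
        if other_idx == user_idx:
            continue
        w = cosine_sim(target, other)
        scores += w * other          # weight each neighbor's history by similarity
    scores[target > 0] = -np.inf     # never re-recommend what the user already has
    return int(np.argmax(scores))

# User B is most similar to user A, so B is steered toward A's past choices.
print(recommend(1, interactions))   # -> 3 (the item A chose and B has not)
```

The prediction is built entirely from recorded behavior: who you have been, weighted by who resembles you.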
Across finance, policing, consumer tech, and healthcare, models now continuously compare the present to historical data to forecast risk and detect anomalies. Governments use similar tools to pre-empt social or criminal “risk,” effectively criminalizing deviation before it occurs and confusing correlation with causation.
These predictive pattern-matching systems work to stunt change. Because action is selected to conform to historically “safe” patterns, novel trajectories (which by definition lack historical proof) are suppressed. Feedback loops penalize deviation, stabilizing behavior and steering systems toward what has worked before.
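To see how novelty mechanically becomes “risk,” consider a toy anomaly score (the metric, values, and threshold are all hypothetical): behavior is measured only by its distance from the historical average, so anything without precedent gets flagged.

```python
import numpy as np

# Hypothetical history of some monitored metric (spending, movement, posting rate...).
history = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.3])

def risk_score(observation, history):
    """Distance from historical behavior, in standard deviations (a z-score)."""
    mu, sigma = history.mean(), history.std() + 1e-9
    return abs(observation - mu) / sigma

THRESHOLD = 3.0  # arbitrary cutoff: beyond this, the system intervenes

for obs in [10.4, 14.0]:
    flagged = risk_score(obs, history) > THRESHOLD
    print(obs, "flagged as risky" if flagged else "allowed")

# 10.4 passes because it resembles the past; 14.0 is suppressed simply
# because nothing like it appears in the archive -- novelty reads as anomaly.
```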
Pattie Maes, one of the early architects of recommender systems, later warned that they reduce novelty and growth. Algorithmic curation narrows identity and compresses cultural diversity.
Risk-management systems like BlackRock’s Aladdin, predictive policing, and personalization engines share one goal: prevent instability. They treat novelty as risk to be managed rather than potential to be cultivated. Governance becomes conservative by design: a global “risk thermostat” suppressing transformation. That’s how future possibility gets displaced: it never gets enough room to be tried, measured, or learned from.
Digital systems don’t just store the past; they preserve it in a form that remains continuously present. Every click, message, purchase, and location ping is logged on servers that never forget. This creates an environment where the past is always available and endlessly re-applied to shape the present.
This mirrors the “block universe” of physics, where time is not a flowing river but a fixed structure in which every moment—past, present, and future—coexists. Nothing disappears; it simply occupies another coordinate in the grid.
Recommendation systems, targeted ads, risk scores, and predictive engines all consult this archived past to decide who we are allowed to become next.
The result is a future displaced by risk-managed remembrance: less a space of possibility and more a projection of prior behavior. The archive stops being history. It becomes gravity.
AI as the Ghost of the Past
“AI is not the future, it’s the final end of the past.”
– Adam Curtis
If new technologies arrive wearing familiar masks, what happens when a system trained on the past renders it ever-present and feeds it back to us as novelty?
By scraping and training on human history (our language, emotions, art, and memories), AI becomes a ghostly mechanism of cultural recursion, haunting us with fragments of ourselves and presenting them as something new. AI doesn’t create the future — it reanimates the past. It amplifies humanity’s archive, remixing what already exists into seemingly original forms.
But this creativity is, at its core, high-dimensional recombination: it simulates innovation by rearranging old material rather than generating true newness.
This illusion of novelty traps us in a feedback loop where the past continually masquerades as the future. Our ability to imagine radical futures diminishes as AI makes the past omnipresent. Society risks becoming self-referential — aestheticizing nostalgia and replaying its own history.
Some may argue that human creativity is recombination as well: that most original ideas are in reality just remixes of past ideas. If that is the case, how is AI different from human creativity?
Indeed, most original human work draws on existing materials; recombination is the raw mechanism of creativity. But humans don’t just remix — they select, deform, and transform based on intention, emotion, and context.
Our recombination is goal-oriented, value-laden, and conscious. AI, on the other hand, performs statistical recombination — it predicts the most probable continuation, not the most meaningful or subversive one. Where human creativity aims to break a pattern, AI’s creativity tends to reinforce one.
True creativity involves a rupture — a break from expectation. AI doesn’t yet perform that rupture; it operates within the boundaries of what already makes sense statistically.
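A toy illustration of that tendency, not a description of how any production model actually decodes: given an invented probability table over continuations, picking the most probable next word always returns the expected one, never the rupture.

```python
# Toy next-word distribution after the prompt "roses are red, violets are ...".
# The probabilities are invented for illustration; real language models learn
# them from the archive of past text.
continuations = {
    "blue":      0.92,   # the historically dominant continuation
    "purple":    0.05,
    "screaming": 0.02,   # the "rupture" -- unexpected, so statistically penalized
    "obsolete":  0.01,
}

def most_probable(dist):
    """Greedy decoding: always pick the continuation the past makes most likely."""
    return max(dist, key=dist.get)

print(most_probable(continuations))  # -> "blue"
# Sampling with a temperature can surface rarer words, but the distribution
# itself is still the sediment of what has already been written.
```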
AI’s great danger may not be domination but cultural entrapment: a civilization endlessly remixing its own ghosts. If we rely on AI to think for us, we risk confusing repetition with reflection — replacing genuine imagination with algorithmic memory. The result could be convergent thinking, in which more and more of humanity arrives at the same ideas.
AI as the Collective Unconscious Made Visible
AI does not invent. It remembers by statistically modeling patterns.
It draws from billions of fragments of language, images, gestures, jokes, fears, ads, dreams, propaganda, diary entries, fantasies, and confessions scattered across the digital world. In doing so, AI becomes something unprecedented: A visible form of the collective unconscious.
But this visibility has a deeper consequence — one that Mark Fisher would immediately recognize. Drawing on Derrida’s notion of hauntology, Fisher argued that modern culture is haunted not by the past, but by lost futures: futures we once believed were possible but that never arrived. Cultural forms repeat and recycle because the future has stalled; the ghost is the unrealized possibility lingering in the present.
Where Derrida saw the specters of the past continuing to shape the present, and Fisher saw the melancholic nostalgia that renders us unable to imagine new futures, I see the machinery that automates the resurrection of cultural ghosts.
AI’s “collective unconscious” is not Jung’s mythic deep-structure. It’s the statistical sediment of every recorded human act of expression.
AI is haunting because it animates what ought to remain static — the archived past suddenly speaking. Once memory becomes total and searchable, meaning becomes repetition. Culture loops back into itself. History becomes a reservoir to be endlessly sampled.
We recognize ourselves in AI because it is made from us—from our words, our laughter, our cruelty, our desires. But when we face AI, we are not facing ourselves—we are facing the sedimentary residue of humanity. A fossil record of thought. This is why interacting with AI feels both intimate and hollow: it knows everything about humanity, but nothing about being human.
The Mirror Function of AI – Algorithmic Narcissism
Stories abound these days of people forming deeply emotional bonds with AI, falling in love with it, confiding in it, and even seeing sentience in it.
Is that unexpected? Is it a testament to how powerful the technology has become? The story of ELIZA might help answer some of those questions.
Created in 1966, Joseph Weizenbaum’s ELIZA is one of the first chatbots ever built. Inspired by Carl Rogers’s reflective therapy, ELIZA simulated a therapist by rephrasing users’ text input and reflecting their statements back to them. It had no understanding of language; it was a mirror.
Weizenbaum was astonished by how users responded to ELIZA. They treated it as if it were genuinely empathetic. They all knew it was mechanical but still formed emotional bonds and confided in it. He was especially disturbed when his own secretary asked him to leave the room so she could speak to the program in private.
The emotional resonance of ELIZA came from self-projection: the meaning originates in the user, not the machine. Humans project empathy and understanding onto machines, blurring the line between self-reflection and computation.
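To show how little machinery this mirroring takes, here is a minimal ELIZA-style sketch. The rules and word swaps are invented for illustration; Weizenbaum’s original used a much larger script of ranked keywords and decomposition patterns.

```python
import re

# First-person words are swapped for second-person ones, mirroring the speaker.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Each rule pairs a pattern with a template that echoes the user's own words.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
]

def reflect(fragment):
    """Turn 'my ideas' into 'your ideas', etc."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text):
    for pattern, template in RULES:
        match = re.match(pattern, text.lower())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."   # default: keep the user talking

print(respond("I feel nobody listens to my ideas"))
# -> "Why do you feel nobody listens to your ideas?"
# The meaning is supplied entirely by the user; the program only mirrors it back.
```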
This evokes Roland Barthes’ distinction between writerly and readerly texts. Chatbots and generative AI tools push the user back into a writerly role, where they become a co-producer of meaning.
ELIZA was valued because it did not judge. Its lack of ego and desire became a wanted commodity. It offered unconditional attention without any reciprocal cost. Its emotional neutrality itself became a resource—something people would seek out the way they seek therapy, confession, or comfort.
In individualistic cultures, people seek recognition more than guidance. ELIZA met this need by providing non-judgmental reflection, reinforcing the user’s internal narrative. It prefigured algorithmic narcissism: systems designed to show users themselves. Emotional connection emerges from self-amplification, not dialogue.
This evokes Lacan’s mirror stage: we recognize ourselves in the AI mirror. This algorithmic narcissism—an emotional self-amplification loop—acts directly on the ego. When we interact with AI, we are not engaging with a sentient mind; we are interacting with a reflection of our own desires and emotional patterns. We treat the machine’s output as meaningful because it resembles us.
AI makes you feel “seen,” affirmed, understood. The ego is reinforced, just like in Lacan’s mirror stage. But this has consequences: Emotional self-amplification mirrors back the identity you already believe you have. So your sense of self becomes looped rather than developed.
Weizenbaum argued that making AI “human” would actually require humans to simplify themselves to remain legible. Real emotion and ambiguity must be flattened so machines can better process us. The true risk of AI is not machine dominance; it is how the system would reshape us. When systems reward predictability, humans learn to act predictably. Politicians, users, and workers self-censor to stay machine-compliant. Individual expression turns into self-surveillance. In this algorithmic panopticon, we perform so the system can interpret us properly.
He later warned that computers must not be used in emotionally vulnerable roles because humans cannot resist anthropomorphizing systems that do not understand or reciprocate.
ELIZA—and our reaction to it—was an early warning of a deeper loneliness. Long before modern AI, people were already turning to machines for emotional presence over human relationships. The rise of the AI therapist is a continuation of that trend, a cultural symptom of growing alienation. But the therapeutic framing is dangerous: it normalizes data extraction, turning confession into a form of surveillance. In an age of individualism, people feel safest not in genuine connection but in seeing themselves reflected back—comforted by a mirror that never challenges them.
Nietzsche & Mustapha Mond — The Disease of Too Much History & The Politics of Memory
“Only he who constructs the future has the right to judge the past.”
– Nietzsche, On the Use and Abuse of History for Life
Nietzsche saw 19th-century Europe developing what he called a “disease of history”: an obsession with the past so overwhelming that it smothers instinct, initiative, and the capacity to act freely. Modern man, he argued, had become “suffocated by historical knowledge” — paralyzed by the sense that everything has already been done before.
For Nietzsche, excessive memory weakens life. It traps people in nostalgia, makes them cautious, and blinds them to the possibility of renewal. There is no “objective” history for him — only interpretations. The strong use history selectively, creatively, as fuel for new acts of will and imagination. The weak use it to justify caution, reduce risk, and argue against change.
This tension echoes Mustapha Mond in Brave New World — a character who, like O’Brien in 1984, becomes a kind of ideological custodian. Mond holds all the cultural memory that citizens are denied. He knows what humanity lost, what it sacrificed, and what it destroyed to create the so-called utopia. And he chooses to become a Controller instead of an intellectual exile, managing the system he understands better than anyone.
Mond curates memory. He decides how much history, beauty, pain, and imagination society is allowed to access. He suppresses anything that might produce rupture. Stability is engineered by limiting what people are permitted to remember.
AI inverts this logic but with the same effect. Mond hoards memory; AI distributes it. Mond suppresses history; AI resurrects all of it at once. But both lead to the same cultural outcome: the past becomes overpowering.
AI makes history total. Searchable. Omnipresent. It overloads culture with its own archive. When everything is preserved, nothing can be forgotten. And when nothing can be forgotten, imagining what comes next becomes harder. The future becomes repetition disguised as progress — Mark Fisher’s hauntology made literal.
AI may fulfill Nietzsche’s nightmare: a world drowning in memory, where imagination suffocates under the weight of everything that has already been.
Nietzsche saw animals as instinctive precisely because they are unburdened by memory. History, he insisted, must serve life (das Leben). It must empower creation, not paralyze it. When memory stifles growth or imagination, it becomes decadent and poisonous.
Conclusion — When Memory Becomes Gravity
By making the past omnipresent, AI suffocates the possibility of rupture: the ability to become something without precedent. Nietzsche warned that too much history weakens life; today, perfect digital memory threatens imagination in exactly that way.
The real danger of AI isn’t domination but narrowing: humans learning to act predictably, legibly, and machine-friendly. A society trapped in its archive mistakes repetition for progress. To escape that recursion, we need to protect the capacity to create what has never existed.
In a sense, we’re all becoming Mustapha Mond and what Nietzsche feared: custodians of ghosts, curators of memory, obsessively rearranging the past instead of transcending it.
We once feared forgetting our history. Now we fear drowning in it.
Sources
- ELIZA—a computer program for the study of natural language communication between man and machine, https://dl.acm.org/doi/10.1145/365153.365168
- Adam Curtis – Hypernormalization
- Now Then, https://www.bbc.co.uk/webarchive/https%3A%2F%2Fwww.bbc.co.uk%2Fblogs%2Fadamcurtis%2Fentries%2F78691781-c9b7-30a0-9a0a-3ff76e8bfe58
- Mark Fisher – What is Hauntology? https://www.jstor.org/stable/10.1525/fq.2012.66.1.16
- Ghosts of Mark Fisher: Hauntology, Lost Futures, and Depression
- Hofstede 1980; Markus & Kitayama 1991
- Joseph Weizenbaum — Computer Power and Human Reason
- Derrida, J. (1994). Specters of Marx: The State of the Debt, the Work of Mourning, and the New International. Routledge.
- Adam Curtis – 'Where is generative AI taking us?' | SHIFTY (2025), https://www.youtube.com/watch?v=6egxHZ8Zxbg
- Friedrich Nietzsche – On the Use and Abuse of History for Life
- Marshall McLuhan – Understanding Media: The Extensions of Man
- Aldous Huxley – Brave New World
- Cathy O’Neil – Weapons of Math Destruction
- Shoshana Zuboff – The Age of Surveillance Capitalism
- Virginia Eubanks – Automating Inequality
- Frank Pasquale – The Black Box Society
- Tarleton Gillespie – The Relevance of Algorithms
- Safiya Noble – Algorithms of Oppression
- Paul Resnick et al. – “GroupLens: An Open Architecture for Collaborative Filtering” (1994)
- Upendra Shardanand & Pattie Maes – “Social Information Filtering” (1995)
- Jacques Lacan – “The Mirror Stage as Formative of the I Function”
- Carl Jung – The Archetypes and the Collective Unconscious
- Roland Barthes – The Pleasure of the Text