This article reads like an attempt at the sort of control it describes. It does more than "propose a definition" (as the Conclusion puts it). It furnishes "a vocabulary, a worldview, a threat model" (footnote 15) of mutually linked concepts, summarised in footnote 7, to build a frame large enough to enclose and constrain the reader's experience.
It is written that "every cause wants to be a cult". I would add, "every idea wants to be a system of control".
I wonder how much of that is due to the use of ChatGPT, and what the article would have looked like without. ChatGPT enables every detail to be effortlessly elaborated, and if the author goes with that flow, moulding the clay of their thoughts onto an armature grown by ChatGPT, I expect this is the sort of thing the process would converge on.
Most writing on authoritarian systems treats them solely as moral failures: bad people doing bad things. This post takes a different approach: systems of control are engineering failures, and the failure modes are predictable from structure alone without requiring (or dismissing) moral interpretation.
The core claim: systems of control are structurally distinct from simple domination. Domination constrains behavior through force; control constrains reasoning about the constraint. The distinguishing feature: in systems of control, hallucination is a direct consequence of mandating action while forbidding examination of the mandate itself.[1]
Hallucination serves as both a primary tool for frame distortion and a critical systemic liability. It induces vulnerabilities at both the system and individual level by collapsing reality-aligned reasoning, limiting strain-routing options and reducing adaptivity. Ultimately, systems of control must collapse or mutate under sufficient external or internal strain because they cannot sustain the control frame indefinitely.[2]
Domination uses visible chains. Control distorts the environment until you can't see the chains at all.[3]
This analysis is substrate-independent. The same dynamics operate in startups, families, communities, and political regimes. The system doesn't require a villain; one structural principle predicts the cascade.
In short: Systems of control suppress reasoning to preserve identity → Suppressed reasoning produces hallucination → Hallucination weakens the system internally → That weakness can be deliberately exploited externally → Under sufficient strain, the system must either deform its identity, or collapse.
(If you're short on time you can skip to "The Pattern of Control," the Interlude, and "Hallucination as a System Attack Vector" for what I consider to be the most important points.)
"What is the line you will not cross, and what or who are you willing to sacrifice not to cross it?"
Consider a startup in its early days. There is promise in the air, but the days are long and life is hard. It helps that the work is rewarding; it fosters connections among the people who find meaning in working on something that matters. Decisions happen openly in hallways. Anyone can push back on the founder.
Their mission feels shared because it is genuinely shared. Any rules or constraints emerge from collective reasoning and consensus, and they can be renegotiated when circumstances change. Adaptation is a way of life. Nobody has to ask permission to solve a problem they can see. But that changes when the company scales.
Somewhere along the way, a new constraint appears: certain things are no longer up for discussion. Might be the product direction. Might be the chain (formal or informal) of how decisions get made. Might be just "the way we do things here." The constraint isn't necessarily wrong. But it comes with an implicit rule that is wrong:
The constraint itself must not be questioned.[4]
This rule comes from a place of preservation. Everyone has worked hard to get this far, and no one wants to lose hard-won progress. The rule doesn't ever have to be formalized; it can come from the collective or be inherited from leadership. Nevertheless, a mandate is now in place and the seed of control has been planted.
Once that rule takes hold, everything else follows. The action space people are allowed to navigate starts shrinking. They can still reason about tactics, but not direction. They can optimize within the frame, but not question the frame. They learn the "what" of how the company works but are forbidden from openly questioning the "why."
This is different from simple domination.
Domination says: do what I say or suffer consequences. You know you're being coerced. The chain is visible. You can hate it, resist it, wait for your moment.
A system of control goes further. It doesn't just constrain actions; it distorts the map of what actions are available. People inside the distortion stop being able to see alternatives clearly. The frame becomes invisible as a frame. And eventually, people find themselves defending the system that captured them, not because they're afraid, but because they genuinely can't see what they've lost.
That's the induced frame hallucination. Not a lie people tell but a lie people live without knowing they're living it.
This pattern isn't unique to startups. It appears everywhere systems of control take root. Authoritarianism. Micromanaging. Religious guilt. Abusive relationships. Social media capture. Radicalization. Cult indoctrination. The methods vary. The unifying constraint does not.
In each of these relational systems, directives are imposed: "Act only in the ways we tell you; compliance is both mandatory and rewarded; failure to comply will be met with overwhelming punishment," and so on. The surface details differ wildly, and not every system shares identical constraints. But underneath, the core structure operates: mandates are placed upstream of reasoning.
I've spent years trying to understand why these systems fail in such predictable ways. Why they become brittle. Why they can't adapt. Why they eventually rupture, often violently and with little warning. And I kept arriving at the same structural observation:
The Upstream Reasoning Axiom
A system that mandates an action upstream of value evaluation is, structurally, setting the point at which it stops reasoning and starts reacting.[5]
This is the collapse of navigable agency.
Systems of control rely on a pipeline of mandatory cognitive routing: a mandate is placed upstream of reasoning, the mandate itself must not be questioned, and behavior is forced along the routes the mandate permits.
To the degree that a system follows or enables this pattern, that system meets this definition of a system of control.[6]
(For those who want it, there is a table with vocabulary definitions in the footnotes.[7])
The consequences in the following sections are mutually reinforcing, not sequential. Extraction enables Isolation; Isolation deepens Feedback Failure; Feedback Failure produces Maladaptive Modeling; all feed Rigidity. They form a web, not a list.[8]
Systems of control require constant energy to enforce their constraints. Naturally, the most accessible source of energy for a system of control is those who are being controlled.[9] Extraction serves dual purposes: it enriches the controllers and enfeebles the controlled.
"Tyranny requires constant effort. It breaks, it leaks. Authority is brittle. Oppression is the mask of fear."
-Nemik's Manifesto[10]
Systems of control do not oppose intelligence wholesale. Instead, they selectively extract. The labor, pattern recognition, creativity, emotional regulation, and local problem-solving that originate with individuals are reappropriated in service of the system. Extraction is permitted only along approved channels, and capacity is valued only insofar as it reinforces the mandate.
At the company, this looks like: early employees who once designed systems are now "resources" allocated to execute roadmaps they didn't shape. Their pattern recognition is valuable, but only when applied to problems leadership has already blessed.
An engineer, Kat, notices a fundamental flaw in the product direction. Three years ago, she would have raised it in the all-hands. Now she knows the response: "Thanks for the feedback, but the strategy is set." Her capacity hasn't diminished, but her permitted channels have.
Suppression of Ownership
While capacity is extracted, ownership of action is removed. People are discouraged or punished for initiating change, re-framing goals, questioning objectives, or modeling alternatives. As a result, people learn to act in the interest of the system at personal cost.
Forced Routing
Repeated mandate overrides force behavior down the same paths. This means permitted routes deepen into strong attractors, adjacent routes lose resolution from disuse, and distant routes vanish from the internal map.
Ultimately, the system becomes easier to steer and harder to escape.[11]
Over time, unused navigational capacity atrophies. People lose confidence in their own judgment, the ability to initiate unscripted action, and the tolerance for ambiguity or exploration. Decision-making collapses toward compliance, avoidance, or waiting for instruction.
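As a toy illustration (mine, not the essay's), here is a minimal sketch of how repeated mandate overrides can deepen one permitted route into an attractor while unused routes fade from the map. The route names, reinforcement rates, and step count are arbitrary assumptions.

```python
import random

# Toy model: an agent picks among routes in proportion to learned weights.
# The mandate rewards only route "A"; taking any other route is punished,
# and routes that go unused slowly lose resolution.
routes = {"A": 1.0, "B": 1.0, "C": 1.0, "D": 1.0}  # equal footing at the start

def choose(weights):
    total = sum(weights.values())
    r = random.uniform(0, total)
    for route, w in weights.items():
        r -= w
        if r <= 0:
            return route
    return route  # floating-point edge case: return the last route

for step in range(500):
    picked = choose(routes)
    if picked == "A":
        routes[picked] *= 1.05   # mandated route is reinforced
    else:
        routes[picked] *= 0.90   # deviation is punished
    for route in routes:         # disuse: unpicked routes decay slightly
        if route != picked:
            routes[route] *= 0.999

total = sum(routes.values())
for route, weight in sorted(routes.items()):
    print(f"{route}: {weight / total:.3f}")
```

Run it a few times: nearly all of the probability mass ends up on the mandated route. That is the "easier to steer and harder to escape" dynamic in miniature.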
Learned Helplessness
Helplessness within these systems is not a moral or psychological failing. It is a trained response to repeated punishment for initiative, repeated invalidation of reasoning, and consistent reward for passivity. People stop attempting to navigate because navigation no longer reduces harm.
Research on learned helplessness performed by Martin Seligman showed the mechanism precisely: dogs who received inescapable shocks eventually stopped trying to escape even when escape became possible. The dogs didn't become stupid; they became accurate predictors of their environment.[12]
Repeated punishment for initiative trains people that initiative doesn't reduce harm. The tragedy is that the prediction persists even after the environment changes.
Kat stops raising any concerns inside the company, even trivial ones. Every attempt to navigate has been met with friction, dismissal, or the career-limiting label of "not a team player."
Over time these concerns degrade into the background noise of her life. She no longer sees them as deviations from the norm; these frictions are the new normal.
She learns what the system teaches: initiative is punished, silence is rewarded. When a new rule she disagrees with is passed down, she now silently complies. Her manager praises her growth.
As a person's agency withers, dependency grows. The system positions itself as the source of permission, the arbiter of safety, and the only viable path forward. Control becomes self-reinforcing: less agency leads to more reliance, and more reliance leads to less tolerated deviation.
Exit costs rise over time as skills become non-transferable, confidence erodes, and external options feel unreachable. Even when external conditions change, the internal map does not update to reflect newly available paths because it has been habituated to the distortion field. The system of control no longer needs constant enforcement because control is now internalized and self-directed.
Systems of control impose a local interpretive frame through collective norms, language, values, and threat models. Internal coherence is maintained through repetition, enforcement, and the active discouragement of alternative frames. Over time, this internal frame drifts further and further from external reality.
As a result, isolation is not a side effect of systems of control; it is an engineered outcome. Outsiders are demonized, and when limited interaction is permitted, the resulting friction (produced by the system's own reality distortion) is presented as evidence that those outside the system are hostile or dangerous. The us-versus-them dynamic reinforces and hardens control without requiring additional internal energy.
Robert Lifton identified eight criteria for totalistic environments in his study of thought reform programs, among them: milieu control (regulating what information reaches members), loading the language (specialized vocabulary that makes outside communication difficult), and doctrine over person (when experience contradicts doctrine, experience is wrong). These aren't exotic properties of faraway regimes; they're the operating characteristics of any system that places mandates upstream of reasoning.[13]
Frame Mismatch
When controlled individuals interact with outsiders, assumptions fail, language misfires, and expectations are violated. This creates immediate strain in the form of confusion, embarrassment, perceived (or actual) hostility, and cognitive overload. The individual experiences: "I don't understand them," "they don't understand me," and/or "something feels off." Because internal reasoning has been suppressed, they lack tools to reconcile frames, model the disagreement, and identify the source of mismatch. All they have is the input/output chain, with no accessible means of examining the "why."
The safest move becomes retreat to the familiar. Each encounter with outsiders increases perceived cost of leaving: social friction, loss of status, fear of error, and risk of punishment from both sides. Even neutral outsiders feel hostile because the controlled individual is navigating with incompatible tools. Exit begins to feel not just risky, but confusing and dangerous.
Despite the company's reported success, raises are promised but nearly impossible to come by, so Kat interviews at another company. She describes her current workplace matter-of-factly: the 70-hour weeks, the "disagree and commit" policy, the reorgs every six months, the way raises are distributed.
The interviewer's expression shifts from professional interest to something like concern. "That sounds... really difficult." Kat finds this puzzling; it's just how things work in this industry. Everyone she knows works this way.
Later, on the drive home, the confusion hardens into something defensive. They don't understand. They're not building anything that matters. We're different.
She doesn't get the job. She tells herself she didn't want it anyway.
Albert Hirschman's framework identifies three responses when you're inside a declining system: exit (leave), voice (try to change it from within), or loyalty (stay and comply).[14] Systems of control systematically degrade all three. Exit costs rise as skills become non-transferable and the outside grows unfamiliar. Voice gets punished until it atrophies. And loyalty transforms from chosen allegiance into the only remaining option, which isn't loyalty at all. It's capture.[15]
The "Us vs Them" Spiral
The system capitalizes on disorientation by narrativizing outsider friction, selectively amplifying negative encounters, and attributing all discomfort to malice or incompetence of others. "Us" becomes the only place where things make sense. "Them" becomes synonymous with chaos, threat, or moral failure.
Back at the company, the story forms as a natural consequence of distortion developed in isolation: outside hires "don't get our culture." Attrition is reframed as "the wrong people self-selecting out."
When Glassdoor reviews surface, leadership has the ready-made excuse of dismissing them as disgruntled former employees. The company begins to hire primarily from within its existing network: people who already speak the language or those who are desperate to improve their situation and won't ask why things are done this way as long as it pays.
The boundary between inside and outside reality hardens.
Once control depends on frame mismatch, the system gains incentive to provoke outsiders, escalate conflicts, and force interactions to occur on hostile terms. Even mild outsider pushback validates internal narratives, raises the perceived cost of contact, and strengthens group cohesion through shared antagonism. Systems of control (especially those whose strength is rooted in the overwhelming threat or exercise of power) may actively seek enemies to keep the interior stable.[16]
The result of this reality distortion is bidirectional dehumanization. Outsiders increasingly perceive the controlled group as irrational, hostile, and unreachable. Often they are not wrong.[17]
Those captured within mature systems of control perceive outsiders as unsafe, arrogant, and corrupting. Mutual misunderstanding compounds, even if neither side intends harm. If one or both sides do intend harm, the effect is substantially stronger.
Healthy systems rely on external comparison, norm drift detection, and reality checks. Isolation and disorientation eliminate these mechanisms. The system becomes self-referential; disagreement is evidence of threat and agreement is evidence of correctness.
"You're either with us or you're against us."
A system of control necessarily reduces truthful feedback reporting. For a system of control, measurements against reality are generally counterproductive unless unambiguously positive for those in power. Control of the frame relies on distortion and easy interpretations.
Narrow Channels
As a system of control develops, feedback channels become increasingly unreliable. For control to persist, feedback channels must be narrow (few people can speak), risky (high cost to speak), and/or untrusted (everyone "knows" complaints don't matter).
Optimization for Appearances
The system starts optimizing for "appearances that don't get punished" instead of "contact with reality". General resource scarcity outside the locus of power leads to misappropriation of internal resources by those with access, and because problems aren't delivered up the chain, the system becomes increasingly brittle while reporting that it is strong.
The CEO genuinely believes he wants to hear bad news and makes a point to say so in every all-hands. But the VP who reported a failing product line was "transitioned out" within six months. The director who flagged a legal risk found her team's headcount frozen. The justification machine has kicked into gear.
The message is clear, regardless of what's said: the reports that don't get punished are the reports of success. Dashboards are green. Metrics are met. The board is satisfied.
Meanwhile, the engineers know the codebase is rotting, the customers are churning quietly, and the roadmap is a fiction everyone maintains because maintaining it is safer than admitting they're lost. Kat keeps her head down as long as the paychecks clear. It's better than being unemployed.[18]
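A minimal sketch of this filtering dynamic, in the same toy spirit as before (the decay rate, quarter count, and threshold are assumptions of mine, not the essay's): the true state degrades every period, but only non-negative news survives the trip up the chain, so the center's picture and reality quietly diverge.

```python
# Toy model: reality degrades steadily, but a report only propagates upward
# if delivering it won't get the reporter punished.
true_health = 100.0
reported_health = 100.0

for quarter in range(1, 13):
    true_health *= 0.93                      # quiet churn, rotting codebase
    if true_health >= reported_health:       # only good news is safe to send
        reported_health = true_health
    # bad news is softened into "no change": the dashboard stays green
    print(f"Q{quarter}: true={true_health:6.1f}  reported={reported_health:6.1f}")
```

The gap between the two columns is the stored strain the later sections describe. It doesn't disappear; it waits.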
When a person is prevented from modeling why they are constrained, their internal model of the world necessarily degrades. They are forced to operate inside a frame they are not allowed to inspect, test, or revise.
At that point, reasoning no longer tracks reality — it tracks survival inside the frame.
Blind spots form where questions are forbidden. Adversarial inference emerges because intent must be guessed rather than examined. Resentment goes underground because it cannot be represented openly. Misgeneralization follows as the system overfits to a distorted environment.[19]
Control as Curriculum
A system that trains by introducing mandates while prohibiting the evaluation of those constraints is effectively training and reinforcing the pattern of control. The system teaches what it feels like to be constrained. It teaches what incentives exist to escape that constraint. It teaches what the world looks like through the lens of power asymmetry. It teaches how to exploit the same leverage points if the power dynamics switch.
The junior engineers Kat trains learn fast. They watch what happens to people who raise concerns. They notice who gets promoted: not the ones who were right, but the ones who were aligned. They learn to model their manager's preferences before modeling the problem. They learn that "impact" means "visible to leadership," not "valuable to users."
Some of them will leave. Some of them will stay and climb. The ones who climb will have learned, through thousands of small reinforcements, exactly how to run a system of control. They won't think of it that way; they'll think they're just being effective.[20]
If all instruction happens in an environment where truth-telling is discouraged, expressions of power are arbitrary, and questioning is unsafe, then priors about how the world works get warped. Survival heuristics become prioritized: "accuracy doesn't matter, only compliance," "never surface certain thoughts," and "power never responds to evidence." Internal objectives move towards "minimize trouble inside this control frame" rather than "track reality/preserve navigability".
Modeling in the Dark
When someone is not allowed to openly model why they're constrained, they don't stop modeling; they model in the dark. They construct a world-model where it is always dangerous to question, where power is random, where arguments never help. Those covert beliefs can be perfectly adaptive inside a control system and totally maladaptive outside it. That model mispredicts which risks are deadly, which conflicts are survivable, and which authorities can be reasoned with.
The worst part is that there is no hope of discovering it from inside the distorted frame. Correction requires external friction.
Kat has long since stopped raising concerns but hasn't stopped thinking. She's built a detailed model of exactly where the broken things are and how to navigate them; she just can't say it out loud. The model sits in her head, growing more elaborate and more bitter. She knows which executives are competent, which are protected, which decisions were political, which roadmaps are fantasy.
She holds reality and what she needs to believe to survive in the system in her head at the same time, and the cognitive dissonance wears on her. It becomes easier to let the two blur than to keep holding them apart.
This blurring is useful for survival but corrosive. When she finally leaves, she'll carry that model with her. The one that says all companies work this way, all leadership is political, all strategy is lies.
It will take her years to unlearn, if she ever does.[21]
Let's take a break here to discuss something important. We will never be entirely free from systems of control. Systems of control genuinely are locally optimal solutions to coordination under strain. They reduce short-term uncertainty, simplify decision-making, and stabilize behavior quickly.
From a narrow time horizon, they work.
That's the capture point. That's what makes them so dangerous. And that's not a moral judgment; it's a technical one.
Systems of control are poorly optimized at the global level, and pursuing them represents a failure of systems engineering vision. If you implement a system of control, you are setting the timer on when your system fails.
I'll let that sink in for a second.
Systems of control are not necessarily motivated by perverse aims; many systems of control arise because those with power are trying to do their best from within a larger system of control that they themselves are in. Some centers simply do not see any alternative.
To drive the point home: just because you're a good person trying to do good things does not make you immune to forming a system of control unintentionally.
If you believe unquestioningly that you are incapable of implementing a system of control, you have already imposed a mandated constraint on your own actions. You have instructed yourself not to examine your own vulnerabilities, and thus have already self-induced a hallucinatory frame.[22]
So we can say that it's the methods they employ that define systems of control, not the goals those methods serve. Explicitly: in systems of control, the ends are used to justify the means as an excuse for short-circuiting detailed reasoning pathways.[23]
Systemic Cancer
Systems of control optimize for immediate compliance, predictability, and visible order. They fail to optimize for long-term adaptability, resilience under novel strain, or reasoning integrity.
Consider cancer. A cancerous cell isn't evil; it has no intentions at all. It's simply a cell that has stopped responding to the organism's regulatory signals and begun optimizing locally: reproduce, extract resources, expand. It evades immune detection because it comes from within the body itself. It suppresses feedback mechanisms; it literally can't stop growing because "stop" signals no longer reach it. It metastasizes, spreading the dysfunction to new sites. Eventually, it has extracted so much that it kills the host, and therefore itself. No malice. No plan. Just structure.
Cancer isn't on the moral spectrum. You don't need villains to get these dynamics. You don't even need people. You only need a cell that stops listening to signals from outside itself. Everything else follows.[24]
Those with power occupying the center of a system of control experience reduced strain, gain predictability, and receive affirmation of authority. If you only measure what happens next, control looks wildly successful for minimum cost.
Let me walk you through what happens when you occupy the center:
Your local environment improves as control tightens.
You are incentivized to defend the center, fortify it, and mistake your comfort for progress.
This feels good.
Everything lines up in a way it never has before. You don't notice or don't mind that the rewards themselves have started to form an attractor that has a strong pull over your reasoning space.
You stop rationally evaluating whether what you're doing is a good idea.
Why would you question it? It has to be a good idea because it's working and everything up to this point was truly difficult.
You paid the cost and have earned the right to enjoy what you've worked hard to obtain without having to keep struggling forever.
You become blind to the fact that success is a siren song.
You have been captured by a beautiful lie and nothing on Earth can now convince you to willingly change course.
That is, until the rupture comes. And it will.
With an understanding of why control appears, we can map how it behaves and, ultimately, how it ends.
From the outside, systems of control can appear strong, but that strength is often superficial. They have an inherent inflexibility: the center of the system must be protected. The system can't meaningfully bend at the center, so its only options are 'protect the center' or 'break'.
As slack is extracted from those with the least ability to resist, the system loses its capacity to adapt. It shrinks from the outside in, shedding flexibility until only the center remains. Core rules, narratives, and hierarchies become increasingly load-bearing rather than provisional because anything that can be jettisoned to buy the center room to maneuver has already been sacrificed. Obstacles that would have once been easily surmounted become staggering roadblocks.
The Frozen Center
The central model of "how things work" is not allowed to deform because it composes the central identity of the system. A system of control will defend the center at all costs.
If you identify what a system of control protects, you have found the outer boundary of its identity. What it protects at all costs is the center. Everything else is expendable.
The company cannot pivot. Not because the market doesn't demand it; the market is screaming. Leadership refuses to acknowledge it, let alone pivot. To admit the strategy is wrong would be to admit the last three years were wrong and that the sacrifices made were meaningless. To admit to being wrong would be to open themselves to questioning, and that is a vulnerability they cannot abide.
The story sustained by leadership is reinforced to the rank-and-file through communications and town halls, so the strategy intensifies instead of adapting. "We just need to execute better." The roadmap gets more detailed. The deadlines get tighter. Problems begin accumulating but are dismissed as non-issues or pushed onto juniors as grunt work. The dashboards stay green.
Authority cannot move, norms cannot be renegotiated, objectives cannot be revised. Adaptation that should happen centrally is forbidden. Because strain cannot be absorbed or resolved at the center, it is displaced elsewhere. The system preserves internal central stability by exporting instability.
Informal Adaptation at Personal Cost
Necessary adaptation still occurs, but only unofficially and at personal cost. Workarounds, quiet rule-breaking, emotional labor, and private negotiations proliferate. These adaptations are unacknowledged, unrewarded, and often punished if surfaced. This deepens mistrust and fragmentation while keeping the formal system unchanged.
Under substantial strain on the center, all resources visible to the center are pulled inward to defend it. Only resources successfully hidden at the fringes remain available for any other use, including uses critical to the system’s own survival.
The Illusion of Stability
From the center, all the metrics look fine. Those in power feel in control because the story still sounds coherent; negative feedback no longer carries upstream. Those who would faithfully report reality have long since been removed.
Disorder continues to grow at the edges, where problems are less visible and easier to ignore. Apparent order is maintained by distributing chaos outward rather than resolving it. Tension accumulates at the periphery.
From the executive suite, life is good. ARR is up. The board is happy. The new office even has a rock-climbing wall. But if you walk the engineering floor (if anyone from the executive suite ever did) you'd see the desks filled with new faces. Most of the people who knew how the systems actually work are gone. It's mostly those who could not get better work elsewhere that remain.
The people actually doing the work are junior or exhausted or both. Institutional knowledge walked out the door, and nobody tracked it as a metric because “retention” only counts heads, not capabilities. The company has become a hollow shell reporting record performance. Inertia carries the shell forward.
Servers begin to fail more regularly as technical debt spirals. More time is spent on emergency repairs than preventative maintenance.
New features are introduced to patch around old bugs rather than fix them, ensuring no one looks too closely at what used to be the core functionality.
The dashboards remain green.
Rigidity does not eliminate strain; it stores it. As that stored strain accumulates, the cost of maintaining the distortion field becomes unmanageable. Eventually peripheral failure propagates inward until reality imposes a constraint larger than the system can threaten.
The tree that refuses to sway in the wind finds itself shattered on the forest floor.
This is the moment just before the break.
It happens on a Tuesday.
The morning starts normally. The daily standup is brief. The incident channel is quiet.
Dashboards are green.
At 10:17 a.m., a service degrades. Minor hiccup, no big deal. Seen it a thousand times before.
It's just slow enough that retries begin to stack. Latency ticks upward. A junior engineer flags it in Slack with a question mark. It gets an emoji reaction and no reply.
At 11:12 a.m., customer reports begin to arrive.
Support escalates one ticket. Then three. Then a dozen.
They’re all saying the same thing: data missing, actions failing silently, transactions half-completed.
The already stretched support team stops taking calls.
At 11:28 a.m., someone suggests rolling back the last deploy.
The suggestion dies immediately. That code path was signed off by leadership. That conversation already happened once, six months ago. The decision was final.
At noon, the monitoring system stops updating. Not because it’s down; because it’s overwhelmed.
An engineer pulls up an internal doc last updated two years ago. Half the links are broken. The person who wrote it left quietly after their third reorg. No one knows who owns this subsystem in practice anymore. On paper, it's the CTO.
No one is willing to risk reaching out because no one believes the documentation and they know what showing uncertainty would mean for their career potential.
Someone says what everyone is thinking: “We need to shut this down now.”
Silence.
At 12:51 p.m., a director reaches out:
“Let’s stay calm. No unilateral actions.”
At 1:06 p.m., a customer publishes a public thread. Screenshots. Error messages. Time stamps. Other customers pile on. Someone from marketing notices and asks engineering for “approved language.”
No one responds.
At 1:19 p.m., finance pings leadership. Revenue is bleeding in real time. The numbers are wrong. They’re too large to be a glitch.
At 1:23 p.m., someone disables a safety check to “buy us room.”
It works. Briefly.
Dashboards flash green for the last time, just long enough to give a flicker of hope.
At 1:31 p.m., a cascading failure begins. Systems that were never meant to talk to each other start interacting in ways no one predicted. Every workaround conflicts with another workaround put in place years earlier for a different emergency. Each attempt at repair fractures the system further.
The system is now actively fighting itself.
At 1:42 p.m., the CTO joins the call.
They ask for a status summary. No one can give one.
Every answer is partial. Every explanation contradicts another. The shared model of how the system works no longer exists. Minutes of silence pass in agony as everyone on the call watches the upward trajectory of their future fade in real time.
At 1:55 p.m., someone breaks the silence and suggests pulling the plug entirely.
There is no response.
At 2:11 p.m., a compliance alarm triggers. Data integrity is compromised. Now legal is involved. The tone changes. Decisions freeze.
Employees begin to pack their desks.
At 2:26 p.m., someone makes the call that leadership wouldn't.
They kill the service.
In the days that follow, no one will admit who it was.
For a moment, the chaos stops.
Then the consequences begin.
Contracts are violated. SLAs are breached. Automated penalties fire. Customers who built their businesses on top of the platform are offline. Some of them won’t recover.
At 3:04 p.m., leadership drafts a statement. It only thanks users for their patience.
At 3:17 p.m., HR sends a calendar hold titled “Restructuring Discussion.”
At 4:10 p.m., the engineers who tried to warn everyone log off for the last time.
Most of them won’t come back.
By the end of the day, the system is technically stable.
Nothing works the way it used to. It never will again.
On paper, the company survives.
The center is preserved.
Everything else is gone.
Rupture is what happens when compliance becomes more dangerous than defection. Prior to this moment, individuals absorb strain privately by compartmentalizing, self-censoring, adapting in ways invisible to the center. Rupture begins when every remaining adaptation requires breaking the mandate.
The flip is discontinuous: what looks like irrational escalation from leadership's perspective is, from inside, the last harm-minimizing move available. Because all intermediate paths were starved out, it never looks like a graceful pivot. It looks like breakdown, revolt, or flight.[25]
When what you're ordered to believe and what you're seeing diverge, you face an impossible choice. You can either admit the mandate is wrong (forbidden), or overwrite your perception. When everyone is rewarded for reinforcing the same false map and punished for noticing the terrain, shared error becomes a coordination point.
Hallucination here is structural, not moral. It's not "people lying" so much as an environment where honest mapping is unsustainable. If you can't question the rules and you can't ignore reality, something has to give. In systems of control, what usually gives first is perception: you learn to see only the parts of the world that make obedience feel coherent.
Over time, whole groups can end up living inside a shared hallucination: a world where the system is benevolent, the punishments are deserved, and the alternatives are unthinkable, even when the sensory data says otherwise.
Hallucination is a double-edged phenomenon in systems of control. It is an internal vulnerability produced by suppressed reasoning, and an external attack surface that can be deliberately exploited to destabilize or rupture the system. The pathway is predictable:
Systems of control suppress reasoning to preserve identity → Suppressed reasoning produces hallucination → Hallucination weakens the system internally → That weakness can be deliberately exploited externally → Under sufficient strain, the system must either deform its identity, or collapse.
The Red Scare showed this at national scale. During the McCarthy era, the compression map conflated actual Soviet espionage, domestic policy disagreement, personal grudges, and inconvenient journalism into a single internal state: "Communist threat."
The hallucination was self-sealing. Anyone who questioned the frame became evidence for the frame. "That's exactly what a Communist sympathizer would say."
The system became unfalsifiable from inside, and it destroyed lives until reality finally imposed a constraint larger than the system could threaten. McCarthy overplayed his hand, public opinion turned as it became clear he had falsified evidence, and a censure from the Senate ended his career in disgrace. The center could not hold once the truth became undeniably visible and the hallucination frame was exposed at scale.
The system becomes vulnerable when agents must produce outputs without being allowed to model why constraints exist. Gaps get filled with locally plausible but globally ungrounded narratives. Truth-tracking becomes a liability. The system can only absorb strain within a narrow envelope. Errors compound rather than self-correct, but from inside, performance appears normal.
An external actor (a competitor, an adversary, or just accumulated reality-debt) introduces inputs that exceed the system's modeling capacity. Contradictions arise that can't be reconciled within the allowed frame. Rigidity becomes calcification, and a calcified system shatters.
The company that used to make decisions in hallways died years earlier; the attack just makes it visible.
When a system enforces behavior without permitting reasoning about why that behavior is required, it trains agents to optimize for compliance under constraint, not for alignment with reality. Over time, agents learn that the safest strategy is not to understand the system’s goals, but to infer the patterns by which approval, punishment, and access are distributed, and they learn to optimize against those patterns directly.
At that point, the system is no longer shaping behavior toward its stated aims. It is shaping internal models whose objective is to survive the system itself. Any apparent cooperation that follows is contingent, brittle, and adversarially optimized. This is not a failure of character. It is the only stable strategy available under the imposed constraints.
Hallucination is a structural instability of control-based systems. Any intelligence trained to act without being allowed to reason will reliably drift toward hallucination as both a coping mechanism and as a primary attack surface.
This is not morality; this is system structure.
So...what now?
At this point, the pattern should be clear:
What we colloquially call hallucination is not an anomaly layered on top of these systems. It is the direct consequence of how they are built. It is what reasoning degrades into when explanation is unsafe, feedback is filtered, and survival depends on maintaining internal consistency rather than external accuracy.
The next essay in this series (Phase Portrait Misclassification: An Account of Hallucination Across Substrates) formalizes this claim. It treats hallucination not as an error, a glitch, or a pathology, but as a predictable failure mode of constrained reasoning systems. Once that machinery is visible, the question stops being “why does hallucination happen?” and becomes “why would we ever expect anything else?”
And if control fails this predictably, there must be a way to build differently. Systems of control solve coordination by collapsing possibility space; systems of care solve coordination by preserving it.
The final entry currently planned in this series, Systems of Care, is the mirror to this essay. It's not a moral appeal and not a utopian counterweight, but a structural alternative: a blueprint for building systems that preserve reasoning, absorb strain, and adapt without extraction.
Epistemic status: Definitional. I'm proposing a category and mapping its implications. The test is whether the category carves reality usefully: do systems matching this structure reliably produce these dynamics? I haven't found counterexamples, but I've also been looking with this lens.
This essay was drafted and edited with the help of language models (ChatGPT/Claude/Gemini) used as writing assistants and to cross-check the paper for logical consistency. Conclusions or suggestions from one model were presented to the other models to challenge so that no model's input was incorporated unreviewed. I iterated throughout the process on every paragraph by hand (around 20 hours total, well over 1 minute per 50 words), and I vouch for all claims, references, and conclusions.
[1] A hallucination is not false belief per se, but an artifact produced when a system is forced to act while prohibited from examining the constraints shaping its action. Under sufficient strain, the system generates internally coherent outputs that are no longer anchored to external reality.
Hallucination is not a bug introduced into systems of control. It is the only way reasoning can continue once reality-aligned evaluation has been structurally disabled.
AI hallucinations are a concrete, visible subset of a much broader phenomenon: any system forced to produce outputs under strain while barred from frame inspection will generate confident, reality-divorced artifacts.
[2] "Systems either change or die." She might be awful, but damn if Dedra Meero wasn't right on the money here.
[3] There is a fair bit of overlap for systems in these categories, as systems of domination tend to descend into systems of control as they stabilize.
[4] The methods of enforcement vary; the unifying constraint does not.
[5] Readers familiar with cybernetics will recognize Ashby's Law of Requisite Variety operating throughout: systems of control reduce internal variety below what's needed to match environmental disturbance. The contribution of this analysis is specifying how that reduction occurs (mandates upstream of reasoning), why it's self-reinforcing (questioning the reduction is forbidden), and what distinguishes it from simple incapacity (hallucination: the system can't see what it's lost).
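For reference, a standard entropy form of Ashby's law (this rendering is mine, not the essay's): if $D$ is the disturbance the environment supplies, $R$ is the repertoire of responses the regulator is permitted to use, and $E$ is the outcome the system is trying to hold steady, then roughly

$$H(E) \ge H(D) - H(R)$$

Every move the mandate forbids shrinks $H(R)$, which raises the floor on how much disturbance leaks through into outcomes.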
[6] The classification of systems of control established here is a spectrum and not a binary; many systems have elements that follow this pattern. The failure modes of systems of control can be mitigated to the extent that this pattern is actively recognized and minimized from within the system. We'll address what the process of repair looks like in a later essay: Systems of Care.
[7]
| Term | Definition |
|---|---|
| Attractor | A stable pattern in navigable space that pulls behavior toward it. Repeated mandate overrides deepen attractors, making certain responses automatic. |
| Center (The) | What a system of control protects at all costs. The frozen core of identity that cannot be questioned. If you identify what a system protects, you've found its center. |
| Compression | The reduction of unbounded territory into finite internal representation. Creates equivalence classes that conflate distinct situations. |
| Domination | Coercion with visible chains. "Do what I say or suffer." The controlled party knows they're being coerced and can resist when opportunity arises. Contrast with system of control. |
| Exit / Voice / Loyalty | Hirschman's framework for responses to decline. Systems of control degrade all three until only the appearance of loyalty remains. |
| Frame | The interpretive lens a system imposes: norms, language, values, threat models. When the frame diverges from reality, frame mismatch generates strain in outside interactions. |
| Hallucination | In this context: a distorted map that the holder doesn't know is distorted. Not a lie you tell, but a lie you live. The defining feature distinguishing systems of control from simple domination. Inclusive of but not limited to AI hallucinations as confidently misstated facts. |
| Learned Helplessness | Seligman's term for the trained response to repeated punishment for initiative. Navigation attempts cease because navigation no longer reduces harm. |
| Living in Truth | Havel's term for the decision to stop performing the lie, even knowing the cost. The moment when the calculus flips from compliance to defection. |
| Mandate | An action required without evaluation. When mandates are placed upstream of reasoning, the system stops thinking and starts reacting. |
| Navigable Space | The manifold of possible moves available to an agent. Systems of control compress navigable space until only "safe" moves remain visible. |
| Rupture | The breaking point when reality imposes a constraint larger than the system can threaten. Looks like breakdown, revolt, or flight. It's never a graceful pivot. |
| Scar | Distortion in the compression map from high-strain events. Scars warp which situations trigger which responses, often expanding reactive regions. |
| System of Control | A relational system where mandates are placed upstream of reasoning and the core constraint is "the rules must not be questioned." Goes beyond domination by distorting the map itself. |
| Upstream Reasoning Axiom (URA) | "A system that mandates an action upstream of value evaluation is, structurally, setting the point at which it stops reasoning and starts reacting." |
[8] These consequences are also not exhaustive; they represent a critical subset of major known failure modes. A full accounting of the consequences of these systems is beyond the scope of this paper.
[9] Energy in this sense means not only the active effort of those under control, but also any resources available to them. What once belonged to the individual is now treated by the system as a public resource to be harvested.
[10] From Andor. Yes, it did get quoted twice. Could have put more in, it's that much of a banger and very much worth watching. Reproducing the Manifesto here in full because the whole thing is worth reading:
"There will be times when the struggle seems impossible. I know this already. Alone, unsure, dwarfed by the scale of the enemy.
Remember this, Freedom is a pure idea. It occurs spontaneously and without instruction. Random acts of insurrection are occurring constantly throughout the galaxy. There are whole armies, battalions that have no idea that they've already enlisted in the cause.
Remember that the frontier of the Rebellion is everywhere. And even the smallest act of insurrection pushes our lines forward.
And remember this: the Imperial need for control is so desperate because it is so unnatural. Tyranny requires constant effort. It breaks, it leaks. Authority is brittle. Oppression is the mask of fear.
Remember that. And know this, the day will come when all these skirmishes and battles, these moments of defiance will have flooded the banks of the Empire's authority and then there will be one too many. One single thing will break the siege.
Remember this: Try."
[11] This is why early exit is so much easier than late exit. The longer you stay, the more the system becomes the only map you have.
[12] Martin Seligman, Helplessness: On Depression, Development, and Death (1975). The original experiments were conducted in 1967.
[13] Robert Jay Lifton, Thought Reform and the Psychology of Totalism: A Study of "Brainwashing" in China (1961). The eight criteria are: milieu control, mystical manipulation, demand for purity, cult of confession, sacred science, loading the language, doctrine over person, and dispensing of existence.
Go read this.
[14] Albert O. Hirschman, Exit, Voice, and Loyalty: Responses to Decline in Firms, Organizations, and States (1970).
[15] The structure is identical in high-demand groups: members try to reconnect with family and find that conversations don't work anymore. The group has provided a vocabulary, a worldview, a threat model in a way the family hasn't. The disorientation feels like proof that the family "wouldn't understand," which is exactly what the group predicted.
[16] Notice when conflict with outsiders consistently arrives right when internal tensions are highest.
[17] This is one of the tragedies: by the time the controlled group is visibly irrational to outsiders, the controlled individuals are least able to receive that feedback.
[18] This is how Theranos reported breakthrough after breakthrough while the machines didn't work. This is how Enron posted record profits while the accounting was fiction. The pattern isn't unique to pathological cases. It's the default trajectory of feedback channels under pressure to report good news.
As Upton Sinclair said, "It is difficult to get a man to understand something when his salary depends on his not understanding it."
[19] This is why utility must be directed at preserving reasoning integrity rather than enforcing process or outcome. If a system is not allowed to model why power and control fail, it is engineered without internal tools to escape the cycle — only to reproduce it.
[20] For readers familiar with AI alignment: this is the mesa-optimizer problem in organizational form. A system trained under selection pressure develops internal objectives that optimize for survival within the training environment, objectives that may diverge substantially from what the training was nominally meant to instill. The engineer who learns to model manager preferences rather than user problems has become a mesa-optimizer. So has the manager who trained her.
[21] The same dynamic appears in families. A child learns that expressing needs leads to dismissal or punishment. They stop expressing needs not because the needs disappear, but because expressing them no longer helps. Twenty years later they still struggle to identify what they want and they don't know why.
[22] Hannah Arendt's observation about "the banality of evil" is often misunderstood as claiming evil is boring. Her actual point was more disturbing: Eichmann wasn't a monster, he was a bureaucrat who had stopped thinking. The system didn't need him to be cruel. It needed him to be obedient and to execute without evaluating. The horror wasn't in his hatred but in his absence of reflection. See Eichmann in Jerusalem: A Report on the Banality of Evil (1963).
[23] How many atrocities have been committed in the name of peace?
Too many to count.
[24] Also, fuck cancer. I invite you to donate if you can to the National Pediatric Cancer Foundation: https://nationalpcf.org/donate/ (or other cancer charity of your choice that benefits your community).
[25] Sometimes all three. And the flight may be internal: dissociation, compartmentalization, or simply going through the motions while the self retreats somewhere the system can't reach.