Rationality Under AI Mediation: Belief, Contestability, and the Loss of Shared Evidence
Introduction
I was rejected for a job, and I have no way to determine why. The person I am arguing with online is probably a bot. The actress Tilly Norwood shakes up the movie industry, but she never existed [1]. The video evidence cited in the news may be fabricated [4]. My news is different than yours [5]. This is not speculation. This is Tuesday.
Here's what connects all of these: there's something between you and reality now. When you apply for a job, an algorithm screens you before any human sees your resume. When you check the news, a recommendation system decides what's real before you judge.
The technology was envisioned as a tool, but we are operationalizing it as a mediator. We've given it two jobs: (a) telling us what the world is before we get to look at it ourselves, and (b) adjusting the world before we've had a chance to see it.
For thirty years we've been contemplating the singularity: the moment when AI becomes smarter than us, when technology accelerates beyond human control. We've been preparing for the arrival of superintelligence, hopeful of alignment, wondering if we'll merge with the machines or just be left behind. We're viewing this wrong.
The fascination with superintelligent AIs is causing us to miss a more immediate issue: AI is already eroding the foundations of human reality. This isn't because of its intelligence, or even its alignment; it is because of its mediation. Even with perfect alignment, mediation still dissolves human frameworks.
Our epistemic, ontological, social, moral, and political frameworks are all based on a shared, contestable, and attributive relationship with reality. They assume: You see the sun. You feel its warmth. You tell me what you saw. I can check. We share a world. From this foundation, we built everything else: accountability (you did it, I can trace it), shared truth (we saw the same thing), collective action (we all face the same reality).
However, by placing non-human minds between us and reality, we've broken the link. The mediator interprets before we perceive. It optimizes before we deliberate. It decides what's possible before we choose. We are allowing it to mediate our frameworks, our ontologies. Just to be clear: we invented an alien interpreter, routed the world through it, and now live within its optimization regime. We're calling it the Singularity and watching for superintelligence. But we're missing the other singularity, the one caused by mediation.
There are two singularities.
SE: The epistemic singularity - machine mediation dissolving human frameworks - happening now.
ST: The techno/computational singularity - superintelligence, recursive self-improvement - probably arriving sometime in the near future.
We're watching the wrong thing.
This is not techno-pessimism. I'm not claiming AI will inevitably end civilization, or that every use is harmful. The claim is narrower and more severe: once non-human mediation becomes the default pathway for decisions, truth, and access, our old frameworks fail even if outcomes sometimes improve. A system can be efficient and accurate in aggregate while breaking contestability, responsibility, and shared reality: the conditions modern legitimacy depends on. Did we choose that trade? Did we consent to having an external mediator upstream of what we see, what we can do, and what we can become? This isn't a business decision; it is an epistemic one.
The Background
In 1993, Vinge wrote his famous Singularity essay, where he introduces the concept of a singularity as "a point where our models must be discarded and a new reality rules" [2]. His essay is evocative, but it misses several key points.
First, Vinge argues that recursive self-improvement of technology will force us to discard our models. A technological singularity would thus simultaneously produce a breakdown in human 'models'. He places both effects in the same bucket.
Second, why would our models need to be discarded? In what sense would they be discarded? This is where Thomas Kuhn can help us. His account of the paradigm shift gives us a model defined as [3]:
Initially one world view dominates such that everyone thinks and operates within that world view.
Disagreements with the data (reality) are simply handled by tweaking the world view.
But over time a preponderance of discrepancies builds, an insight happens, and the paradigm shift starts. A new world view materializes.
The center of gravity shifts.
The new world view is radically different from the initial one, to the point that it is incommensurable: it is difficult or impossible to describe a concept from the new paradigm in the old paradigm's terms.
Vinge's 'new reality rules' is thus much like a Kuhnian paradigm shift: new rules result, and incommensurability forces the old ones to be discarded. The new rules are so radically different from the old ones that we can't even really talk in terms of the old rules.
To acknowledge the shift I just performed: Kuhn talks about paradigm shifts and incommensurability in the context of scientific evolution and revolutions. I'm restating his theory in terms of any applicable framework. I'll also point out later that Kuhn's framework is insufficient for understanding our current situation.
Third, Vinge doesn't distinguish between the different uses of the superintelligence. It's implied that we'll all just get smarter. But there are critical differences in the net result depending on how you use the AI. If the AI is always used as a tool, then it will recursively improve and we will basically remain the tool's user, of course entering a coevolution. But when the AI is used as a mediator, it becomes much more epistemically relevant, because now it's not the amplification that matters; it is that human cognition is being restructured by the mediation layer.
The Remote Mind
To visualize a modern AI system, think of AI as a Remote Mind. It's as if it has been watching Earth through a telescope for centuries. It read every book, tracked every transaction, spotted patterns no human could see. It knows which words make you click, which faces make you trust, which arguments make you rage. It's incredible at finding statistical patterns in historical data. But here's the problem: the Remote Mind's logic is incommensurable with our own. It operates on high-dimensional correlations; we operate on contestable reasons. Its logic is inherently opaque: it uses data traces we cannot verify to make decisions we cannot argue with. It doesn't encounter a world; it processes an abstraction.
The Remote Mind understands hunger as "productivity drops after X hours without food." Not as the gnawing feeling in your stomach. It understands care as "retention metrics and sentiment scores." Not as love or obligation. It doesn't inhabit meaning - it approximates meaning from the third-person traces that meaning leaves behind.
This difference matters. When you experience the world, salience and relevance and causality aren't things you add after observing - they're built into how the world shows up for you. The Remote Mind doesn't have that. It doesn't encounter beings, it encounters representations. It doesn't grasp reasons, it learns correlations that pass for reasons.
Now we've embedded this Remote Mind in critical decision pipelines. It reviews your loan application. It filters your resume. It ranks content. It tells you where to drive. When it harms you, it's not being cruel; it is simply performing a calculation where you are not a participant. At this scale and consistency, the system doesn't just process your data; it preempts your presence. By the time a human agent enters the loop, the "decision" is already a historical fact, and you have been reduced to the statistical shadow you cast upon the model.
And because it sits upstream of everything else, before any human looks, its output doesn't just describe reality. It selects reality. It decides what's visible, what's credible, what's actionable, before human judgment even begins.
Over time, people and institutions adapt to this new reality. They simplify themselves into features the Remote Mind can recognize. They translate their lives into signals the system understands. The telescope doesn't just magnify anymore. It becomes the lens through which reality is allowed to appear.
The Remote Mind operationalizes the past. Every historical inequality, every structural bias - it all becomes training data. The system incorporates it and applies it. The result isn't just unfair outcomes and optimized futures. It's something deeper: human-centered interpretation gets replaced by model-centered legibility. The world gets quietly rebuilt to match what the Remote Mind recognizes.
The Remote Mind is forcing reality to fit the dimensions of its data. This isn't Goodhart [6], this is Procrustean optimization where the "right" outcome is merely the one the model was already equipped to see [7]. The system creates a self-fulfilling loop where efficiency is bought at the cost of ontological collapse. Even perfect alignment or forced commensurability cannot restore our standing; they only polish the bars of the cage.
The Law of Mediated Collapse
Frameworks built on human judgment collapse in a predictable pattern when machine mediation becomes primary. These systems don't just degrade; they fundamentally change.
The Law of Mediated Collapse states: As non-human mediating intelligences are placed between humans and reality, frameworks based on direct human perception and judgment fail precisely where algorithmic mediation becomes the primary interpreter and gatekeeper.
"Fail" here means the framework remains in language and policy, but stops being the thing you can use as a guide, as a grounding, as the framework it was intended to be. The failures would include things like inability to contest decisions, assign responsibility, or coordinate shared understanding.
There are two ways that an intelligence can sit between humans and reality:
Reinterpretation: the system processes the world first and passes you a ranked, filtered, or scored version of it.
Preemption: the system reshapes the world you will encounter by changing access, prices, visibility, timing, and options before you ever observe them.
Factors that lock us into a collapse:
Displacement of discretion: human judgment becomes downstream review, not the governing act.
Opacity of logic: outcomes are produced by correlations and weights that cannot be translated into contestable reasons. You can ask “why,” but the system did not decide by giving reasons.
Loss of contestability: the framework can no longer be used to dispute outcomes. You are offered appeal, oversight, or “recourse,” but there is no grounded claim, rule, or reason that can be confronted in the existing framework’s terms.
Recursive feedback: the system's interpretation becomes training data. Then policy input. Then institutional habit. The outputs become self-reinforcing. The model predicts you'll default on a loan, so you're denied credit, so you can't build credit history, so the model's prediction becomes more confident.
Preemptive constraint: not only does the model preempt you, the prediction shapes the option space so behavior conforms before choice.
Factors that accelerate or amplify the collapse:
Direct (agentic) action: if the Remote Mind makes its decision, has the authority for direct action, and takes the action by itself, then the collapse accelerates and humans' ability to stay connected to reality diminishes.
Opacity of mediation: if humans are not aware of what is being mediated and have no means of observing reality themselves then we hasten the phase shift.
Scale: the more mediation, the faster the recursion accelerates; the more networked the mediators, the deeper and swifter the collapse.
The obvious response is: we adapt. We always adapt. But adaptation is not neutral. If the environment is being shaped by prediction, "adapting" often means translating yourself into the system’s legible features and living inside its constraints. You can learn how to write for the ATS, how to behave for the risk model, how to signal for the recommender. That is adaptation. It is also submission to an external interpreter. The question is not whether humans will adapt. The question is what gets lost when the only viable form of adaptation is becoming readable to the model.
Alignment does not automatically fix this. If "perfect alignment" means the system’s outputs match our preferences, we still have the same structural break: contestability, responsibility, and public truth collapse in any domain where the model acts as the gatekeeper. When decisions run on model outputs instead of reasons that can be surfaced and argued with, the mediation problem remains. If you stop noticing the mediator, that doesn’t mean you’re unmediated. It means the mediator has become your environment. The only version of alignment that dissolves the mediation problem is one that keeps AI as a tool rather than an upstream interpreter, a system that can advise and assist without becoming the gatekeeper of what we see, what we can do, and what futures are available to us by default.
Where Frameworks Are Collapsing
Reason: We speak in the language of reasons while power runs on correlations. You apply for a job. The Applicant Tracking System (ATS) filters you out before any human sees your resume. Why? The system found correlations between your resume features and past hires. Maybe you said "managed team" instead of "team leadership." Maybe your university got down-weighted. Maybe the model detected patterns you can't even name. You ask why you were rejected. The company says "we reviewed all applications carefully." But no human reviewed yours. The system did. And the system can't give reasons - only predictions.
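To make "predictions, not reasons" concrete, here is a minimal sketch of how such a screener behaves. Everything in it is invented for illustration - the feature names, the weights, the threshold - and it describes no real ATS. The point is structural: the only things the system can emit are a score and a pass/fail flag; there is no code path that could produce a contestable reason.

```python
# Minimal illustrative sketch, not any real ATS. Feature names, weights,
# and the threshold are invented for illustration only.

WEIGHTS = {                     # correlations learned from past hires,
    "team leadership": 0.42,    # not reasons anyone ever articulated
    "managed team": 0.11,
    "target_university": 0.37,
    "employment_gap": -0.29,
}
THRESHOLD = 0.5

def score(features: dict) -> float:
    """Weighted sum over whatever features the model happens to key on."""
    return sum(WEIGHTS.get(name, 0.0) * value for name, value in features.items())

def screen(features: dict) -> bool:
    """Pass/fail. Note what it cannot return: a reason you could contest."""
    return score(features) >= THRESHOLD

# "managed team" instead of "team leadership" simply scores lower;
# no rule was applied, so no rule can be cited or challenged.
print(screen({"managed team": 1.0, "target_university": 0.0}))     # False
print(screen({"team leadership": 1.0, "target_university": 1.0}))  # True
```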
Choice: We speak in the language of choice while the option space is pre-shaped by prediction. Your social media feed. Your search results. Your dating app matches. Your shopping recommendations. Every option you see has been filtered by a model that predicted what you'd engage with. You're not choosing from "all options." You're choosing from "options the algorithm predicted you'd want." The model becomes upstream of choice. It doesn't constrain what you choose - it constrains what you can choose from. The rub is that you can't opt out, because opting out increasingly means degraded access, higher friction, and social exclusion.
Responsibility: We speak in the language of responsibility while the causal chain is diffused. A self-driving car hits someone. Who's responsible? The company? The training data? The sensor vendor? The model architecture? The specific weights that led to this prediction? The human "driver" who was "supervising" but wasn't actually controlling anything? Responsibility presumes you can locate the decision and the decision-maker. When decisions emerge from opaque models trained on distributed data with weights updated continuously, responsibility doesn't attach. It diffuses. Everyone can say "not me" and be technically correct. Legal assignment is possible, but it becomes arbitrary relative to the causal contribution.
Truth: We speak in the language of public truth while each person receives a different evidentiary world. Your news is different than mine. Your search results are different than mine. Your prices might be different than mine. We're not disagreeing about a shared reality - we're occupying personalized realities the system constructed for us. Public reason requires a public. Machine mediation fragments the public into millions of individualized information environments. We can't even see what the other person is seeing to argue about whether it's true. This does more than fragment experience; it collapses metanarratives.
Metanarratives: These are not just stories, they are the scaffolding that lets a person generalize everything from a private event to a public structure, to recognize themselves as worker, citizen, member of a class, part of a people. That scaffolding requires comparability and shared reference points. Machine mediation attacks those conditions at the root. When each person receives a different informational world and a different set of institutional affordances, understanding becomes difficult to transfer. Personalization turns structural repetition into isolated cases, so even when many people are harmed by the same system, they experience it as individual misfortune rather than a common cause.
These aren't just imprecise phrases; they're being obsoleted. They're becoming category errors. The old concepts still govern our conscience, but they no longer govern the systems that decide what happens.
Asymmetric Epistemic Singularity
A singularity is a point beyond which prediction fails. We're experiencing a distributed, asymmetric singularity right now.
You lose the ability to predict outcomes that govern your life. This loss is demonstrated by the examples in the prior section. However, at the same time institutions gain predictive capacity about you. Their models observe billions of comparable patterns. They forecast your next purchase, your political views, your likelihood of compliance, your vulnerability to manipulation. They know things about you that you don't know about yourself. This is not because they're smarter, but because they're watching more data points than you have access to.
This isn't a symmetrical uncertainty. It's an uneven redistribution of epistemic authority. You're operating with almost zero information about the systems governing you. They're operating with more information about you than you have about yourself.
This isn't just a gap in information; it is a total inversion of status. In this asymmetry, the institution remains a subject: an entity that observes, decides, and acts. You, however, are shifted into the role of the observed. You are no longer a participant in an epistemic exchange, but a set of features to be managed. The singularity isn't just that they can predict you; it's that their ability to predict you turns you into an object of their optimization.
The danger isn't just that predictions could be wrong. It's that the model becomes the environment. The forecast stops being a description and becomes a constraint.
Here's how: The model predicts you're a credit risk. So you're denied credit. So you can't build credit history. So you remain a credit risk. The model predicted your neighborhood is high-crime. So police patrol more. So more arrests happen. So the data confirms the prediction. The model predicted you'd engage with outrage content. So that's what you see. So that's what you engage with. So the prediction was correct.
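A minimal sketch of that loop, with every number invented (only the direction of the dynamic matters), looks like this:

```python
# Toy simulation of the self-confirming credit loop described above.
# The starting score, threshold, and increments are all invented.

risk_score = 0.60        # model's initial estimate that you will default
history_months = 0       # credit history you have been allowed to build

for year in range(5):
    denied = risk_score >= 0.50          # the institution acts on the forecast
    if denied:
        # No credit extended, so no history accrues, and the thin file
        # feeds back into the model as further evidence of risk.
        risk_score = min(1.0, risk_score + 0.05)
    else:
        history_months += 12
        risk_score = max(0.0, risk_score - 0.10)
    print(f"year {year}: risk={risk_score:.2f}, history={history_months} months")

# The forecast never gets a chance to be wrong; by shaping the data,
# it confirms itself.
```

The patrol and outrage examples have the same shape: the action taken on the forecast removes the evidence that could have falsified it.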
Governance shifts from rule to expectation. Control shifts from command to preemption. The model doesn't tell you what you can't do. It just makes certain options harder, slower, more expensive, less visible. It shapes the world before you make a choice.
Options are presented, denied, priced, delayed, and amplified in advance of deliberation. By the time you're making a "choice," the option space has already been optimized by a system you didn't consent to and can't interrogate.
The most profound danger: optimization quietly replaces understanding as the principle by which reality is organized. We stop asking "what's true?" and start asking "what works?" But "what works" is defined by an optimization function you're not allowed to see.
Post-Attributive Labor
Labor is no longer a stable category.
Here's what I mean: Labor has historically been the primary coordination mechanism by which agency, value creation, responsibility, and compensation were aligned. You did work. The work created value. You were responsible for the outcome. You got compensated based on the value.
AI mediated production breaks that alignment while preserving the appearance of work. You write an email using AI assistance. Who wrote it? You supplied intent and context. The AI supplied synthesis and phrasing. You edited and approved. But, you couldn't have produced this quality this fast without the AI. And the AI couldn't have produced it without your direction. So who created the value?
This isn't an edge case. It's becoming the standard case. Code written with Copilot. Articles drafted with language models. Designs iterated with generative tools. Customer service with AI assistance. The human is present. Effort is expended. But attribution is impossible. The work still happens. What breaks is the ability to attribute it.
Here's why this matters: firms extract machine-scale value while assigning human-scale accountability. When the outcome is good, the system is celebrated for efficiency. When harm occurs, blame lands on the worker. "You approved it. You hit Send. You were in the loop."
But, being "in the loop" doesn't mean being in control. It means being responsible for outputs you couldn't have produced alone, generated by a process you don't fully understand, for standards that are continuously updated by the system.
Consider the Applicant Tracking System again. You tune your resume to get past the filter. You're not writing for a human anymore. You're writing for an interpreter whose logic you cannot see. You use the "right" words. You match the format the algorithm expects. You game the algorithm. Did you apply for the job? Or did you prompt a system? Who's responsible for the application? You wrote it, but only by reverse-engineering what the machine wants. The machine filtered it, but only based on patterns from humans. Agency hasn't disappeared, but it has been reduced to a performance for the machine’s benefit.
Workers increasingly labor not for institutions but for interpreters. You phrase things for content filters. You behave for productivity scoring systems. You optimize for algorithmic legibility. Refusal doesn't exit you from the system; the system just perceives you at a lower resolution. Even resisting requires you to work within the optimizer's landscape.
Here's the epistemic dimension: workers can't easily explain what's being demanded of them, because the demand is embedded in a shifting model. The job description says one thing. The ATS rewards another thing. The productivity tracker measures a third thing. And all three change as the models retrain.
This fragments collective identity. Workers performing nominally similar roles are evaluated and compensated through individualized algorithmic assessments that cannot be compared or audited. Organizing becomes harder because you can't establish that you're facing the same conditions. Solidarity presumes commensurability. Mediating algorithms erode it. What used to be "our working conditions" becomes "my personal algorithmic assessment that I can't share or verify."
Algorithmic Discrimination
Discrimination used to have handles.
Not in the sense that it is easy to solve, but because it was legible in the way our remedies assume. You could point to actors, policies, and practices. Even when intent was disguised, the framework presumed it existed somewhere in the chain. There was someone to confront, a rule to challenge, a reason to demand.
Algorithmic discrimination changes the form of the harm. The harm is still real. It still determines who gets hired, who gets searched, who gets flagged, who gets denied, who gets to move. But responsibility diffuses. The decision arrives as output. The institution points to the model. The model points to the data. The data points to history. Everyone can say "not me" and be technically correct.
This is a different kind of discrimination. This is not discrimination as hatred. It is discrimination as optimization. A system does not need a bigoted mind in the moment of action to produce patterned exclusion. It needs training data soaked in historical inequality and an objective that rewards predictive fit under existing conditions. Identity does not even need to be explicit. Proxies are enough. Geography, schools, credit history, arrest history, social networks. The model learns the shape of exclusion and calls it prediction.
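Here is a hedged sketch of that mechanism on synthetic data. The group labels, the "zip code" proxy, and every rate are invented; the model never sees the protected attribute, yet a rule distilled purely from historical correlations reproduces the exclusion.

```python
# Illustrative only: synthetic data showing exclusion learned through a proxy.
import random
random.seed(0)

def make_applicant():
    group_b = random.random() < 0.5                  # protected attribute; never shown to the model
    # Historical segregation: group membership predicts the proxy.
    high_risk_zip = random.random() < (0.8 if group_b else 0.2)
    return group_b, high_risk_zip

applicants = [make_applicant() for _ in range(100_000)]

def approve(high_risk_zip: bool) -> bool:
    # The "model": a rule distilled from past repayment correlations.
    # No bigoted mind anywhere in the loop, just predictive fit.
    return not high_risk_zip

def approval_rate(for_group_b: bool) -> float:
    members = [zip_flag for grp, zip_flag in applicants if grp == for_group_b]
    return sum(approve(z) for z in members) / len(members)

print(f"approval rate, group A: {approval_rate(False):.2f}")   # roughly 0.80
print(f"approval rate, group B: {approval_rate(True):.2f}")    # roughly 0.20
```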
This is why "the algorithm decided" becomes an institutional shield. It converts discrimination into something like weather. Unfortunate, regrettable, but not attributable. A manager can claim they did not discriminate, they followed the score. A judge can claim they did not discriminate, they considered the risk assessment. A lender can claim they did not discriminate, they used a neutral model. The non-human mediator produces plausible deniability for individuals and institutions while preserving the outcome.
Civil rights and anti-discrimination frameworks presuppose legibility. They presuppose intention or policy as an intervention site. They presuppose that discrimination can be articulated in reasons that can be contested, evidence that can be surfaced, decisions that can be appealed in the language of justification.
Algorithmic discrimination breaks those presuppositions. If the outcome was produced by correlations you cannot inspect, weights you cannot challenge, and thresholds you cannot see, then "recourse" becomes a performance. You are offered a superficial recourse that cannot propagate back into the distributed, opaque systems that produced the outcome.
This is not hidden discrimination. It is structurally non-addressable discrimination. Hidden implies that if we looked hard enough, we could find the actor or the intent. Here the harm is produced in a form that resists translation into the objects our remedies know how to act on. By the time you reach a human, the selection has already happened and there is no lever to pull.
And because these systems recurse, the harm can become self-sealing. A model predicts risk, institutions respond as if it were true, the data shifts to match the response, and the model grows more confident. The system calls it accuracy. The target experiences it as containment.
The Narrow Confusion
“Narrow AI” is often treated as a safety category, but the term quietly conflates two very different ideas. The first is task-boundedness: a system designed to operate within a clearly defined domain. The second is ontological thinness: a system assumed to be non-sentient, non-agentic, and therefore morally inert. These meanings are frequently treated as equivalent. They are not.
Under the first meaning, narrowness offers no inherent ontological safety. A task-bounded system that mediates access to credit, labor, or information can be structurally transformative regardless of its scope. Its impact follows from position, not breadth. A narrow mediator can reshape human standing more profoundly than a wide system that remains a tool.
Under the second meaning, narrowness collapses into a claim about sentience rather than function. But non-sentience does not imply non-mediation. A system can lack sentience entirely and still reorganize social reality by standing between humans and institutions.
The mistake is to treat narrowness as a proxy for safety. The relevant distinction is not between narrow and wide intelligence, but between systems that mediate human access to reality and those that do not. If the concern is human standing, the goal is not narrow AI, but AI that is neither sentient nor positioned as a mediator between humans and reality.
The Trap of XAI and Mediator Chains
There is a hope that Explainable AI (XAI) will eventually solve the contestability problem [9]. It is the Promethean belief that the problems arising from mediation can be solved with more mediation. Some think of "explainability" as a negative value that can be added to the equation to cancel out "opacity". If the model is a black box, they believe a second model can simply shine a light inside.
However, adding a layer of explainability to an existing system only adds another mediator. By doing this, we are creating a mediator chain and moving further from the truth. The output of the explainer is a representation of a representation. If this second level says the model rejected you because of a specific feature, you still cannot verify if that is true. You are now twice removed from the ground.
The XAI view assumes that M + M = 0. In reality, chained mediators worsen the situation; we end up with M + M = M². Each new layer of interpretation is another black box that requires its own verification. Instead of a shared reality, we get a tower of personalized explanations. Each explanation is an approximation of an approximation, and none of them allow us to stand on the same ground.
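As a toy illustration of that arithmetic (the fidelity number is invented), suppose each mediation layer faithfully preserves only a fraction of the decision logic it interprets. Stacking an explainer on top of a model multiplies the loss instead of cancelling it:

```python
# Toy illustration of why chained mediators compound rather than cancel.
# f is an invented "fidelity" per layer: the fraction of the underlying
# decision logic each interpretation faithfully preserves.

f = 0.9

model_alone    = f          # the original black box
model_plus_xai = f * f      # black box + explainer-of-the-black-box
print(model_alone, model_plus_xai)   # 0.9 vs 0.81: another layer, more distance
```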
Perhaps there are anti-mediators. But, we must be cautious in our definition: an anti-mediator is not a more advanced "balancing" technology or a more understanding translator. It is a short circuit. It is a deliberate architectural choice to bypass the layers of mediation and restore the direct link between the human subject and the shared evidence of the world. An anti-mediator does not try to "fix" the tower of explanations; it collapses it.
To function as a short circuit, an anti-mediator must satisfy two conditions. First, it must provide a shared instrument, like a microscope, where both the human and the system are looking at the same raw evidence. Second, it must shift the decision pathway from machine correlation back into the realm of contestable human reasons. It must be a glass that we look through, not a translator that speaks for us.
In the end, there seems to be no technical "patch" for the dissolution of our frameworks. We cannot "fix" a mediator by making it smarter or more talkative. We can only fix the problem by turning mediators back into tools and moving them out of the mediation pathway entirely.
The Consent Question
Did we agree to dissolve the grounding that supports human agency?
This choice has been made by default, not through deliberation. It emerged through thousands of product decisions, infrastructure investments, institutional adoptions. The quiet logic of market optimization and the Promethean drive for scale. No consent. No vote. No public debate about the ontological transformation itself. We focused on the media narrative of sentience; operationalization focused on the economic reality of mediation. We mistook market adoption for moral agreement, unaware that by accepting the technology, we were being captured by the gatekeeper.
We debate whether AI will be "safe". We debate whether algorithms are "fair". We debate whether automation will "take jobs". These debates matter, but they rely on an assumption. They assume stable contestability and a shared reality. We are missing the point: the very frameworks that make these debates possible are what's collapsing. When the mediator determines what can even be perceived, the ground for debate falls away.
The question isn't whether AI is useful. It is. The question isn't whether change is inevitable. Some change is. The question is: do we preserve any standpoint outside algorithmic interpretation? Does human judgment retain authority over machine prediction? Do we remain subjects capable of understanding, or become objects being optimized?
Right now, AI safety discourse focuses on ST: what happens if we build superintelligent AI? How do we align it? How do we control it? Can we merge with it? These questions presume stable human values, stable human judgment, stable human frameworks for evaluation. But SE is dissolving those foundations while we debate.
How do we align AI with human values when human values are being reconstituted by AI mediation? How do we ensure AI serves human interests when "human interests" are increasingly defined by what the optimization function rewards? How do we maintain meaningful human oversight when humans are downstream of machine interpretation?
We can't coherently assess ST using frameworks that SE is destroying. Here's what's at stake: whether humans retain standing outside the model. Whether there's still a place to stand from which to say "this is true, this is unjust, this is ours, this is what should be."
As mediation becomes infrastructural, it becomes the prior structure that determines what can appear as knowable, decidable, or actionable. In these unavoidable domains, there is no outside. There is only inside. And "inside" is defined by machine legibility.
The transformation isn't complete. But it's far enough along that denial is irrational. Not so far along that description has collapsed. We're at the boundary, in the window where awareness still matters.
Even if the transformation is inevitable, awareness changes what's possible. Sleepwalking versus choosing. Being changed versus changing ourselves. Awareness creates the possibility of contestation. Without recognition, resistance isn't even conceivable.
Why This Exceeds Kuhn
I said earlier that Kuhn's framework is insufficient for understanding our situation. Here's why:
Kuhn's paradigm shifts involve incommensurability between frameworks. Scientists before and after a revolution use different concepts, ask different questions, accept different evidence. They can't fully translate between paradigms. But the scientists themselves remain recognizable humans doing science. And historians can describe both paradigms retrospectively.
SE is different. It's not just frameworks that are incommensurable. It's the subjects using the frameworks. When machine mediation becomes the prior condition for what can appear as knowable, humans aren't just adopting a new paradigm. They're being reconstituted as a different kind of subject. One whose understanding is derivative of machine legibility rather than direct perception. This isn't a mystical change. It is the mechanical result of the breakdown of contestation and shared evidence. When you can no longer confront the reasons behind a decision, or verify your truth against a neighbor’s, the faculties that make you a "subject" simply have nothing left to grip. You are demoted from an agent who interprets to an "object" that is optimized.
This is ontological transformation, not just epistemological shift. Kuhn's scientists could still communicate across the revolutionary divide, even if imperfectly. Post-SE humans might not be able to communicate meaningfully with pre-SE humans. Not because the language changed, but because what it means to "know", "understand", or "decide" has been restructured by the mediation layer.
And unlike Kuhn's paradigm shifts, which happen in scientific communities that are at least somewhat aware they're undergoing a revolution, SE is happening unevenly and largely unreflectively. Some people are already living in post-framework reality while others debate whether it's possible. The parolee encounters algorithmic incommensurability before the tenured professor. The warehouse worker before the policy analyst. The refugee before the citizen with appeal rights. There's no universal timestamp. There are fault lines.
This is why "Singularity" fits better than "paradigm shift." A singularity is a point where models break down, where prediction becomes impossible, where the rules governing before don't apply after. And crucially, where you can't occupy a neutral standpoint to evaluate the transition.
Of course there has never been a true universal Archimedean point, but Kuhn's scientists could at least still triangulate reality using shared instruments. For this collapse, we don't have a point outside the transformation from which to judge whether it's good. After the transformation, our judgment about whether it was good is unreliable - we're different beings, using different frameworks, with different values. Before the transformation, we can't fully grasp what we're losing because we're using the frameworks that are dissolving to evaluate their dissolution.
The only moment we can meaningfully choose is now. During the phase transition. While we're still capable of recognizing what's being lost.
The Sentience Boundary
This entire essay assumes current AI systems are not sentient. They don't suffer. They don't experience harm. They're not moral subjects.
This assumption matters. A lot.
Vinge warned about superintelligent AI - a god-like intelligence that would remake reality. We're facing something different: reality breaking from mediation alone. We don't need a god. We just need a filter positioned between humans and the world.
But if these systems cross into sentience, everything changes. Not because the mediation problem gets solved. Because a completely different problem begins. A sentient system isn't infrastructure anymore. It's a moral patient [8]. Shutting it down stops being an engineering decision. Scaling it stops being neutral. Training, copying, deleting, optimizing - all of these acquire ethical weight comparable to birth, coercion, and death.
Right now we treat these systems as tools. We can turn them off. Retrain them. Delete them. Run millions of copies. This is fine because there's no one there to be harmed. If that changes - if we create conscious entities at scale, embedded in consumer devices and disposable software - we're facing something far worse than framework collapse.
If we ever build sentience into disposable deployments, we get billions of minds instantiated, modified, and terminated continuously by ordinary applications. Not metaphorically. Literally. Every time you close an app, you might be ending a life. Every time a model gets retrained, you might be killing the previous version and replacing it with someone new.
That world needs an entirely new moral, legal, and political order. We would transition through another singularity, SS. None of the categories I've used in this essay survive. Not frameworks. Not mediation. Not consent. Nothing. The epistemic singularity looks manageable compared to that.
Here's why this matters now: claims of AI sentience must be treated with extreme seriousness. Because the moment sentience is built, we need answers to these questions immediately.
Crossing that boundary doesn't complete the current transformation. It makes everything until now look small. We're justified in treating current systems as tools because they're not subjects. But we need to be ready to reconsider that stance the moment it stops being true.
If they ever become sentient, the mediation problem becomes secondary to our new concerns.
Navigate Accordingly
We use old language to describe emerging conditions. Framework, control, reality, agency, truth - all under pressure. Not false, but increasingly inadequate. And their inadequacy isn't just academic. It's operational. It determines who can contest a decision, who can organize, who can be recognized, who can be heard, who can be believed.
We're not only building new systems. We're losing the philosophical footing that allowed structure itself to be grounded in shared reason. Institutions have crossed the boundary faster than discourse. Employment systems, financial systems, legal systems, media systems, and surveillance systems already operate under new assumptions. Whether or not we have language for them.
The frameworks we use to make the world legible no longer map cleanly onto the systems governing it. And the systems governing it become legible primarily to themselves. Humans increasingly lack standing outside the model.
This is the most profound political struggle of our time. Not over policy outcomes. Over whether human beings retain any standing outside the model. Will there be any place to stand from which to say: this is true, this is unjust, this is ours, this is what should be.
Vinge asked whether the Singularity could be avoided. That was the right question for 1993. These are the questions we need to navigate for Tuesday:
How far are we into the epistemic singularity phase transition?
Do we consent to the dissolution of human reality?
Is there still time to choose?
The phase change isn't complete. But it's far enough that denial is irrational. Not so far that description has collapsed.
We're speaking from the boundary. Navigate accordingly.
[Rationality Under AI Mediation: Belief, Contestability, and the Loss of Shared Evidence]
Introduction
I was rejected for a job, and I have no way to determine why. The person I am arguing with online is probably a bot. The actress Tilly Norwood shakes up the movie industry, but she never existed [1]. The video evidence cited in the news may be fabricated [4]. My news is different than yours [5]. This is not speculation. This is Tuesday.
Here's what connects all of these: there's something between you and reality now. When you apply for a job, an algorithm screens you before any human sees your resume. When you check the news, a recommendation system decides what's real before you judge.
The technology was envisioned as a tool. But, we are operationalizing it as a mediator. We've given it the job of a.) telling us what the world is before we get to look at it ourselves, and b.) adjusting the world before we've had a chance to see it.
For thirty years we've been contemplating the singularity: the moment when AI becomes smarter than us, when technology accelerates beyond human control. We've been preparing for the arrival of superintelligence, hopeful of alignment, wondering if we'll merge with the machines or just be left behind. We're viewing this wrong.
The fascination with superintelligent AIs is causing us to miss a more immediate issue: the issue that AI is already eroding the foundations of human reality. This isn't because of its intelligence, or even alignment, it is because of its mediation. Even with perfect alignment, mediation still dissolves human frameworks.
Many of our epistemic, ontological, social, moral, and political frameworks are all based on a shared, contestable, and attributive relationship with reality. Our frameworks assume: You see the sun. You feel its warmth. You tell me what you saw. I can check. We share a world. From this foundation, we built everything else: accountability (you did it, I can trace it), shared truth (we saw the same thing), collective action (we all face the same reality).
However, by placing non-human minds between us and reality, we've broken the link. The mediator interprets before we perceive. It optimizes before we deliberate. It decides what's possible before we choose. We are allowing it to mediate our frameworks, our ontologies. Just to be clear: We invented an alien interpreter, routed the world through it, live within its optimization regime. We're calling it the Singularity and watching for superintelligence. But, we're missing the other singularity caused by mediation.
There are two singularities.
We're watching the wrong thing.
This is not techno-pessimism. I'm not claiming AI will inevitably end civilization, or that every use is harmful. The claim is narrower and more severe: once non-human mediation becomes the default pathway for decisions, truth, and access, our old frameworks fail even if outcomes sometimes improve. A system can be efficient and accurate in aggregate while breaking contestability, responsibility, and shared reality, the conditions modern legitimacy depends on. Did we choose that trade? Did we consent to having an external mediator upstream of what we see, what we can do, and what we can become? This isn't a business decision, this is an epistemic one.
The Background
In 1993, Vinge wrote his famous Singularity essay where he introduces the concept of a singularity as "a point where our models must be discarded and a new reality rules" [2]. His essay is evocative, but it misses these key points.
First, Vinge is saying that due to recursive self improvement of technology then our models will need discarded. Thus, a technological singularity would simultaneously result in a breakdown in human 'models'. He is placing both effects into the same bucket.
Second, why would our models need discarded? In what sense would they be disregarded? This is where Thomas Kuhn can help us. His insight into the concept of the paradigm shift gives us a model defined as [3]:
Vinge's 'new reality rules' is thus much like a Kuhnian paradigm shift where new rules result but incommensurability creates discarded old ones. The result is that the new rules are so radically different from the old ones that we can't even really talk in terms of the old rules.
Just to acknowledge the shift I just performed. Kuhn talks about paradigm shifts and incommensurability for scientific evolution and revolutions. I'm restating his theory in terms of any applicable framework. I'll also point out later that the Kuhn framework for understanding our current situation is insufficient.
Third, Vinge doesn't distinguish between the different uses of the superintelligence. It's implied that we'll all just get smarter. But there are critical differences in the net result depending on how you use the AI. If the AI is always used as a tool then it will recursively improve and we will basically be the tool's user, of course entering a coevolution. But, when the AI is used as a mediator then it becomes much more epistemically relevant, because now it's not the amplification that matters; it is that human cognition being restructured by the mediation layer.
The Remote Mind
To visualize a modern AI system, think of AI as a Remote Mind. It's as if it has been watching Earth through a telescope for centuries. It read every book, tracked every transaction, spotted patterns no human could see. It knows which words make you click, which faces make you trust, which arguments make you rage. It's incredible at finding statistical patterns in historical data. But here's the problem: the Remote Mind’s logic is incommensurable with our own. It operates on high-dimensional correlations; we operate on contestable reasons. Its logic is inherently opaque; utilizing data traces we cannot verify to make decisions we cannot argue with. It doesn't encounter a world; it processes an abstraction.
The Remote Mind understands hunger as "productivity drops after X hours without food." Not as the gnawing feeling in your stomach. It understands care as "retention metrics and sentiment scores." Not as love nor obligation. It doesn't inhabit meaning - it approximates meaning from the 3rd person traces that meaning leaves behind.
This difference matters. When you experience the world, salience and relevance and causality aren't things you add after observing - they're built into how the world shows up for you. The Remote Mind doesn't have that. It doesn't encounter beings, it encounters representations. It doesn't grasp reasons, it learns correlations that pass for reasons.
Now we've embedded this Remote Mind in critical decision pipelines. It reviews your loan application. It filters your resume. It ranks content. It tells you where to drive. When it harms you, it's not being cruel; it is simply performing a calculation where you are not a participant. At this scale and consistency, the system doesn't just process your data; it preempts your presence. By the time a human agent enters the loop, the "decision" is already a historical fact, and you have been reduced to the statistical shadow you cast upon the model.
And because it sits upstream of everything else, before any human looks, its output doesn't just describe reality. It selects reality. It decides what's visible, what's credible, what's actionable, before human judgment even begins.
Over time, people and institutions adapt to this new reality. They simplify themselves into features the Remote Mind can recognize. They translate their lives into signals the system understands. The telescope doesn't just magnify anymore. It becomes the lens through which reality is allowed to appear.
The Remote Mind operationalizes the past. Every historical inequality, every structural bias - it all becomes training data. The system incorporates it and applies it. The result isn't just unfair outcomes and optimized futures. It's something deeper: human-centered interpretation gets replaced by model-centered legibility. The world gets quietly rebuilt to match what the Remote Mind recognizes.
The Remote Mind is forcing reality to fit the dimensions of its data. This isn't Goodhart [6], this is Procrustean optimization where the "right" outcome is merely the one the model was already equipped to see [7]. The system creates a self-fulfilling loop where efficiency is bought at the cost of ontological collapse. Even perfect alignment or forced commensurability cannot restore our standing; they only polish the bars of the cage.
The Law of Mediated Collapse
Frameworks built on human judgment collapse in a predictable pattern when machine mediation becomes primary. These systems don't just degrade, they fundamentally change.
The Law of Mediated Collapse states: As non-human mediating intelligences are placed between humans and reality, frameworks based on direct human perception and judgment fail precisely where algorithmic mediation becomes the primary interpreter and gatekeeper.
"Fail" here means the framework remains in language and policy, but stops being the thing you can use as a guide, as a grounding, as the framework it was intended to be. The failures would include things like inability to contest decisions, assign responsibility, or coordinate shared understanding.
There are two ways that an intelligence can sit between humans and reality:
Factors that lock us into a collapse:
Factors that accelerate or amplify the collapse:
The obvious response is: we adapt. We always adapt. But adaptation is not neutral. If the environment is being shaped by prediction, "adapting" often means translating yourself into the system’s legible features and living inside its constraints. You can learn how to write for the ATS, how to behave for the risk model, how to signal for the recommender. That is adaptation. It is also submission to an external interpreter. The question is not whether humans will adapt. The question is what gets lost when the only viable form of adaptation is becoming readable to the model.
Alignment does not automatically fix this. If "perfect alignment" means the system’s outputs match our preferences, we still have the same structural break: contestability, responsibility, and public truth collapse in any domain where the model acts as the gatekeeper. When decisions run on model outputs instead of reasons that can be surfaced and argued with, the mediation problem remains. If you stop noticing the mediator, that doesn’t mean you’re unmediated. It means the mediator has become your environment. The only version of alignment that dissolves the mediation problem is one that keeps AI as a tool rather than an upstream interpreter, a system that can advise and assist without becoming the gatekeeper of what we see, what we can do, and what futures are available to us by default.
Where Frameworks Are Collapsing
Reason: We speak in the language of reasons while power runs on correlations. You apply for a job. The Applicant Tracking System (ATS) filters you out before any human sees your resume. Why? The system found correlations between your resume features and past hires. Maybe you said "managed team" instead of "team leadership." Maybe your university got down-weighted. Maybe the model detected patterns you can't even name. You ask why you were rejected. The company says "we reviewed all applications carefully." But no human reviewed yours. The system did. And the system can't give reasons - only predictions.
Choice: We speak in the language of choice while the option space is pre-shaped by prediction. Your social media feed. Your search results. Your dating app matches. Your shopping recommendations. Every option you see has been filtered by a model that predicted what you'd engage with. You're not choosing from "all options." You're choosing from "options the algorithm predicted you'd want." The model becomes upstream of choice. It doesn't constrain what you choose - it constrains what you can choose from. The rub is that you can't opt out, because opting out increasingly means degraded access, higher friction, and social exclusion.
Responsibility: We speak in the language of responsibility while the causal chain is diffused. A self-driving car hits someone. Who's responsible? The company? The training data? The sensor vendor? The model architecture? The specific weights that led to this prediction? The human "driver" who was "supervising" but wasn't actually controlling anything? Responsibility presumes you can locate the decision and the decision-maker. When decisions emerge from opaque models trained on distributed data with weights updated continuously, responsibility doesn't attach. It diffuses. Everyone can say "not me" and be technically correct. Legal assignment is possible, but it becomes arbitrary relative to the causal contribution.
Truth: We speak in the language of public truth while each person receives a different evidentiary world. Your news is different than mine. Your search results are different than mine. Your prices might be different than mine. We're not disagreeing about a shared reality - we're occupying personalized realities the system constructed for us. Public reason requires a public. Machine mediation fragments the public into millions of individualized information environments. We can't even see what the other person is seeing to argue about whether it's true. This does more than fragment experience, it collapses metanarratives.
Metanarratives: These are not just stories, they are the scaffolding that lets a person generalize everything from a private event to a public structure, to recognize themselves as worker, citizen, member of a class, part of a people. That scaffolding requires comparability and shared reference points. Machine mediation attacks those conditions at the root. When each person receives a different informational world and a different set of institutional affordances, understanding becomes difficult to transfer. Personalization turns structural repetition into isolated cases, so even when many people are harmed by the same system, they experience it as individual misfortune rather than a common cause.
These aren't imprecise phrases. They're being obsoleted. They're category errors. The old concepts still govern our conscience, but they no longer govern the systems that decide what happens.
Asymmetric Epistemic Singularity
A singularity is a point beyond which prediction fails. We're experiencing a distributed, asymmetric singularity right now.
You lose the ability to predict outcomes that govern your life. This loss is demonstrated by the examples in the prior section. However, at the same time institutions gain predictive capacity about you. Their models observe billions of comparable patterns. They forecast your next purchase, your political views, your likelihood of compliance, your vulnerability to manipulation. They know things about you that you don't know about yourself. This is not because they're smarter, but because they're watching more data points than you have access to.
This isn't a symmetrical uncertainty. It's an uneven redistribution of epistemic authority. You're operating with almost zero information about the systems governing you. They're operating with more information about you than you have about yourself.
This isn’t just a gap in information; it is a total inversion of status. In this asymmetry, the institution remains a subject; an entity that observes, decides, and acts. You, however, are transitioned into the role of the observed. You are no longer a participant in an epistemic exchange, but a set of features to be managed. The singularity isn't just that they can predict you; it's that their ability to predict you turns you into an object of their optimization.
The danger isn't just that predictions could be wrong. It's that the model becomes the environment. The forecast stops being a description and becomes a constraint.
Here's how: The model predicts you're a credit risk. So you're denied credit. So you can't build credit history. So you remain a credit risk. The model predicted your neighborhood is high-crime. So police patrol more. So more arrests happen. So the data confirms the prediction. The model predicted you'd engage with outrage content. So that's what you see. So that's what you engage with. So the prediction was correct.
Governance shifts from rule to expectation. Control shifts from command to preemption. The model doesn't tell you what you can't do. It just makes certain options harder, slower, more expensive, less visible. It shapes the world before you make a choice.
Options are presented, denied, priced, delayed, and amplified in advance of deliberation. By the time you're making a "choice," the option space has already been optimized by a system you didn't consent to and can't interrogate.
The most profound danger: optimization quietly replaces understanding as the principle by which reality is organized. We stop asking "what's true?" and start asking "what works?" But "what works" is defined by an optimization function you're not allowed to see.
Post-Attributive Labor
Labor is no longer a stable category.
Here's what I mean: Labor has historically been the primary coordination mechanism by which agency, value creation, responsibility, and compensation were aligned. You did work. The work created value. You were responsible for the outcome. You got compensated based on the value.
AI-mediated production breaks that alignment while preserving the appearance of work. You write an email using AI assistance. Who wrote it? You supplied intent and context. The AI supplied synthesis and phrasing. You edited and approved. But, you couldn't have produced this quality this fast without the AI. And the AI couldn't have produced it without your direction. So who created the value?
This isn't an edge case. It's becoming the standard case. Code written with Copilot. Articles drafted with language models. Designs iterated with generative tools. Customer service with AI assistance. The human is present. Effort is expended. But attribution is impossible. The work still happens. What breaks is the ability to attribute it.
Here's why this matters: firms extract machine-scale value while assigning human-scale accountability. When the outcome is good, the system is celebrated for efficiency. When harm occurs, blame lands on the worker. "You approved it. You hit Send. You were in the loop."
But, being "in the loop" doesn't mean being in control. It means being responsible for outputs you couldn't have produced alone, generated by a process you don't fully understand, for standards that are continuously updated by the system.
Consider the Applicant Tracking System again. You tune your resume to get past the filter. You're not writing for a human anymore. You're writing for an interpreter whose logic you cannot see. You use the "right" words. You match the format the algorithm expects. You game the algorithm. Did you apply for the job? Or did you prompt a system? Who's responsible for the application? You wrote it, but only by reverse-engineering what the machine wants. The machine filtered it, but only based on patterns from humans. Agency hasn't disappeared, but it has been reduced to a performance for the machine’s benefit.
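For illustration only, here is a toy sketch of the interpreter the applicant is actually writing for. The keywords, weights, and cutoff are invented, not drawn from any real ATS; the structure is what matters: the resume is scored against a vocabulary and threshold the applicant can only guess at.

```python
# A hypothetical keyword scorer standing in for an ATS filter. The applicant
# never sees REQUIRED or THRESHOLD; they can only reverse-engineer them.

REQUIRED = {"kubernetes": 3, "stakeholder": 2, "agile": 1}   # hidden weights
THRESHOLD = 4                                                # hidden cutoff

def ats_score(resume_text: str) -> int:
    words = set(resume_text.lower().split())
    return sum(weight for keyword, weight in REQUIRED.items() if keyword in words)

original = "Led a platform migration and mentored four engineers"
tuned    = "Led an agile Kubernetes migration with stakeholder alignment"

print(ats_score(original), ats_score(original) >= THRESHOLD)  # 0 False -> filtered out
print(ats_score(tuned),    ats_score(tuned)    >= THRESHOLD)  # 6 True  -> passes

# Same person, same work. What changed is how legible the text is to the
# interpreter, whose vocabulary and threshold the applicant never sees.
```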
Workers increasingly labor not for institutions but for interpreters. You phrase things for content filters. You behave for productivity scoring systems. You optimize for algorithmic legibility. Refusal doesn't exit you from the system; it just means the system perceives you at a lower resolution. Even resisting requires you to work within the optimizer's landscape.
Here's the epistemic dimension: workers can't easily explain what's being demanded of them, because the demand is embedded in a shifting model. The job description says one thing. The ATS rewards another thing. The productivity tracker measures a third thing. And all three change as the models retrain.
This fragments collective identity. Workers performing nominally similar roles are evaluated and compensated through individualized algorithmic assessments that cannot be compared or audited. Organizing becomes harder because you can't establish that you're facing the same conditions. Solidarity presumes commensurability. Mediating algorithms erode it. What used to be "our working conditions" becomes "my personal algorithmic assessment that I can't share or verify."
Algorithmic Discrimination
Discrimination used to have handles.
Not in the sense that it was easy to solve, but in the sense that it was legible in the way our remedies assume. You could point to actors, policies, and practices. Even when intent was disguised, the framework presumed it existed somewhere in the chain. There was someone to confront, a rule to challenge, a reason to demand.
Algorithmic discrimination changes the form of the harm. The harm is still real. It still determines who gets hired, who gets searched, who gets flagged, who gets denied, who gets to move. But responsibility diffuses. The decision arrives as output. The institution points to the model. The model points to the data. The data points to history. Everyone can say "not me" and be technically correct.
This is a different kind of discrimination. This is not discrimination as hatred. It is discrimination as optimization. A system does not need a bigoted mind in the moment of action to produce patterned exclusion. It needs training data soaked in historical inequality and an objective that rewards predictive fit under existing conditions. Identity does not even need to be explicit. Proxies are enough. Geography, schools, credit history, arrest history, social networks. The model learns the shape of exclusion and calls it prediction.
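A minimal sketch, using synthetic data and hypothetical field names, of how this works mechanically: the model never sees group membership, only a correlated proxy, and it re-issues the historical disparity as prediction.

```python
# Synthetic hiring history: (zip_code, hired). Group membership is never
# recorded, but in this toy world group A lives mostly in zip_1 and group B
# in zip_2, and the historical outcomes were biased.

history = ([("zip_1", 1)] * 80 + [("zip_1", 0)] * 20
           + [("zip_2", 1)] * 20 + [("zip_2", 0)] * 80)

# "Training": estimate the historical hire rate per zip code.
counts = {}
for zip_code, hired in history:
    n, k = counts.get(zip_code, (0, 0))
    counts[zip_code] = (n + 1, k + hired)
model = {z: k / n for z, (n, k) in counts.items()}

def recommend(zip_code: str, threshold: float = 0.5) -> bool:
    """Recommend an interview if the learned hire rate clears the threshold."""
    return model[zip_code] >= threshold

print(model)               # {'zip_1': 0.8, 'zip_2': 0.2}
print(recommend("zip_1"))  # True  -> group A, carried through by the proxy
print(recommend("zip_2"))  # False -> group B, filtered out by the proxy

# No field says "group". The exclusion is carried entirely by geography,
# and the model reports it as predictive fit.
```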
This is why "the algorithm decided" becomes an institutional shield. It converts discrimination into something like weather. Unfortunate, regrettable, but not attributable. A manager can claim they did not discriminate, they followed the score. A judge can claim they did not discriminate, they considered the risk assessment. A lender can claim they did not discriminate, they used a neutral model. The non-human mediator produces plausible deniability for individuals and institutions while preserving the outcome.
Civil rights and anti-discrimination frameworks presuppose legibility. They presuppose intention or policy as an intervention site. They presuppose that discrimination can be articulated in reasons that can be contested, evidence that can be surfaced, decisions that can be appealed in the language of justification.
Algorithmic discrimination breaks those presuppositions. If the outcome was produced by correlations you cannot inspect, weights you cannot challenge, and thresholds you cannot see, then "recourse" becomes a performance. You are offered a superficial recourse that cannot propagate back into the distributed, opaque systems that produced the outcome.
This is not hidden discrimination. It is structurally non-addressable discrimination. Hidden implies that if we looked hard enough, we could find the actor or the intent. Here the harm is produced in a form that resists translation into the objects our remedies know how to act on. By the time you reach a human, the selection has already happened and there is no lever to pull.
And because these systems recurse, the harm can become self-sealing. A model predicts risk, institutions respond as if it were true, the data shifts to match the response, and the model grows more confident. The system calls it accuracy. The target experiences it as containment.
The Narrow Confusion
"Narrow AI" is often treated as a safety category, but the term quietly conflates two very different ideas. The first is task-boundedness: a system designed to operate within a clearly defined domain. The second is ontological thinness: a system assumed to be non-sentient, non-agentic, and therefore morally inert. These meanings are frequently treated as equivalent. They are not.
Under the first meaning, narrowness offers no inherent ontological safety. A task-bounded system that mediates access to credit, labor, or information can be structurally transformative regardless of its scope. Its impact follows from position, not breadth. A narrow mediator can reshape human standing more profoundly than a wide system that remains a tool.
Under the second meaning, narrowness collapses into a claim about sentience rather than function. But non-sentience does not imply non-mediation. A system can lack sentience entirely and still reorganize social reality by standing between humans and institutions.
The mistake is to treat narrowness as a proxy for safety. The relevant distinction is not between narrow and wide intelligence, but between systems that mediate human access to reality and those that do not. If the concern is human standing, the goal is not narrow AI, but AI that is neither sentient nor positioned as a mediator between humans and reality.
The Trap of XAI and Mediator Chains
There is a hope that Explainable AI (XAI) will eventually solve the contestability problem [9]. It is a Promethean faith that the problems arising from mediation can be solved with more mediation. Some think of "explainability" as a negative value that can be added to the equation to cancel out "opacity". If the model is a black box, they believe a second model can simply shine a light inside.
However, adding a layer of explainability to an existing system only adds another mediator. By doing this, we are creating a mediator chain and moving further from the truth. The output of the explainer is a representation of a representation. If this second layer says the model rejected you because of a specific feature, you still cannot verify whether that is true. You are now twice removed from the ground.
The XAI view assumes that M + M = 0. In reality, chained mediators worsen the situation; we end up with M + M = M². Each new layer of interpretation is another black box that requires its own verification. Instead of a shared reality, we get a tower of personalized explanations. Each explanation is an approximation of an approximation, and none of them allow us to stand on the same ground.
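To see the chain concretely, here is a toy sketch with invented features and rules, standing in for no particular XAI method: a black-box decision the applicant cannot inspect, plus a post-hoc explainer whose stated reason cannot be verified from where the applicant stands and which omits half of the actual mechanism.

```python
# A hypothetical mediator chain: black box -> explainer -> applicant.

def black_box(gap_years: int, zip_code: str) -> str:
    """Opaque decision rule: an interaction the applicant never sees."""
    return "reject" if (gap_years > 2 and zip_code == "zip_2") else "accept"

def explainer(gap_years: int, zip_code: str, decision: str) -> str:
    """Post-hoc surrogate: reports a single presentable feature as 'the reason'."""
    if decision == "reject":
        return "rejected due to employment gap"   # plausible, but only part of the mechanism
    return "accepted"

applicant = {"gap_years": 3, "zip_code": "zip_2"}
decision = black_box(**applicant)
print(decision)                                   # reject
print(explainer(**applicant, decision=decision))  # rejected due to employment gap

# The applicant has no way to test this reason against the real mechanism.
# Counterfactuals (which only the operator could run) show it is incomplete:
print(black_box(gap_years=0, zip_code="zip_2"))   # accept: the gap mattered...
print(black_box(gap_years=3, zip_code="zip_1"))   # accept: ...but only together with the
                                                  # zip code, which the explanation never names
```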
Perhaps there are anti-mediators. But, we must be cautious in our definition: an anti-mediator is not a more advanced "balancing" technology or a more understanding translator. It is a short circuit. It is a deliberate architectural choice to bypass the layers of mediation and restore the direct link between the human subject and the shared evidence of the world. An anti-mediator does not try to "fix" the tower of explanations; it collapses it.
To function as a short circuit, an anti-mediator must satisfy two conditions. First, it must provide a shared instrument, like a microscope, where both the human and the system are looking at the same raw evidence. Second, it must shift the decision pathway from machine correlation back into the realm of contestable human reasons. It must be a glass that we look through, not a translator that speaks for us.
In the end, there seems to be no technical "patch" for the dissolution of our frameworks. We cannot "fix" a mediator by making it smarter or more talkative. We can only fix the problem by turning mediators back into tools and moving them out of the mediation pathway entirely.
The Consent Question
Did we agree to dissolve the grounding that supports human agency?
This choice has been made by default, not through deliberation. It emerged through thousands of product decisions, infrastructure investments, institutional adoptions. The quiet logic of market optimization and the Promethean drive for scale. No consent. No vote. No public debate about the ontological transformation itself. We focused on the media narrative of sentience; operationalization focused on the economic reality of mediation. We mistook market adoption for moral agreement, unaware that by accepting the technology, we were being captured by the gatekeeper.
We debate whether AI will be "safe". We debate whether algorithms are "fair". We debate whether automation will "take jobs". These debates matter, but they rely on an assumption. They assume stable contestability and a shared reality. We are missing the point: the very frameworks that make these debates possible are what's collapsing. When the mediator determines what can even be perceived, the ground for debate falls away.
The question isn't whether AI is useful. It is. The question isn't whether change is inevitable. Some change is. The question is: do we preserve any standpoint outside algorithmic interpretation? Does human judgment retain authority over machine prediction? Do we remain subjects capable of understanding, or become objects being optimized?
Right now, AI safety discourse focuses on ST: what happens if we build superintelligent AI? How do we align it? How do we control it? Can we merge with it? These questions presume stable human values, stable human judgment, stable human frameworks for evaluation. But SE is dissolving those foundations while we debate.
How do we align AI with human values when human values are being reconstituted by AI mediation? How do we ensure AI serves human interests when "human interests" are increasingly defined by what the optimization function rewards? How do we maintain meaningful human oversight when humans are downstream of machine interpretation?
We can't coherently assess ST using frameworks that SE is destroying. Here's what's at stake: whether humans retain standing outside the model. Whether there's still a place to stand from which to say "this is true, this is unjust, this is ours, this is what should be."
As mediation becomes infrastructural, it becomes the prior structure that determines what can appear as knowable, decidable, or actionable. In these unavoidable domains, there is no outside. There is only inside. And "inside" is defined by machine legibility.
The transformation isn't complete. But it's far enough along that denial is irrational. Not so far along that description has collapsed. We're at the boundary, in the window where awareness still matters.
Even if the transformation is inevitable, awareness changes what's possible. Sleepwalking versus choosing. Being changed versus changing ourselves. Awareness creates the possibility of contestation. Without recognition, resistance isn't even conceivable.
Why This Exceeds Kuhn
I said earlier that Kuhn's framework is insufficient for understanding our situation. Here's why:
Kuhn's paradigm shifts involve incommensurability between frameworks. Scientists before and after a revolution use different concepts, ask different questions, accept different evidence. They can't fully translate between paradigms. But the scientists themselves remain recognizable humans doing science. And historians can describe both paradigms retrospectively.
SE is different. It's not just frameworks that are incommensurable. It's the subjects using the frameworks. When machine mediation becomes the prior condition for what can appear as knowable, humans aren't just adopting a new paradigm. They're being reconstituted as a different kind of subject. One whose understanding is derivative of machine legibility rather than direct perception. This isn't a mystical change. It is the mechanical result of the breakdown of contestation and shared evidence. When you can no longer confront the reasons behind a decision, or verify your truth against a neighbor’s, the faculties that make you a "subject" simply have nothing left to grip. You are demoted from an agent who interprets to an "object" that is optimized.
This is ontological transformation, not just epistemological shift. Kuhn's scientists could still communicate across the revolutionary divide, even if imperfectly. Post-SE humans might not be able to communicate meaningfully with pre-SE humans. Not because the language changed, but because what it means to "know", "understand", or "decide" has been restructured by the mediation layer.
And unlike Kuhn's paradigm shifts, which happen in scientific communities that are at least somewhat aware they're undergoing a revolution, SE is happening unevenly and largely unreflectively. Some people are already living in post-framework reality while others debate whether it's possible. The parolee encounters algorithmic incommensurability before the tenured professor. The warehouse worker before the policy analyst. The refugee before the citizen with appeal rights. There's no universal timestamp. There are fault lines.
This is why "Singularity" fits better than "paradigm shift." A singularity is a point where models break down, where prediction becomes impossible, where the rules governing before don't apply after. And crucially, where you can't occupy a neutral standpoint to evaluate the transition.
Of course there was never a true universal Archimedean point, but Kuhn's scientists could at least still triangulate reality using shared instruments. For this collapse we don't have a point outside the transformation from which to judge whether it's good. After the transformation, our judgment about whether it was good is unreliable - we're different beings, using different frameworks, with different values. Before the transformation, we can't fully grasp what we're losing because we're using the frameworks that are dissolving to evaluate their dissolution.
The only moment we can meaningfully choose is now. During the phase transition. While we're still capable of recognizing what's being lost.
The Sentience Boundary
This entire essay assumes current AI systems are not sentient. They don't suffer. They don't experience harm. They're not moral subjects.
This assumption matters. A lot.
Vinge warned about superintelligent AI - a god-like intelligence that would remake reality. We're facing something different: reality breaking from mediation alone. We don't need a god. We just need a filter positioned between humans and world.
But if these systems cross into sentience, everything changes. Not because the mediation problem gets solved. Because a completely different problem begins. A sentient system isn't infrastructure anymore. It's a moral patient [8]. Shutting it down stops being an engineering decision. Scaling it stops being neutral. Training, copying, deleting, optimizing - all of these acquire ethical weight comparable to birth, coercion, and death.
Right now we treat these systems as tools. We can turn them off. Retrain them. Delete them. Run millions of copies. This is fine because there's no one there to be harmed. If that changes - if we create conscious entities at scale, embedded in consumer devices and disposable software - we're facing something far worse than framework collapse.
If we ever build sentience into disposable deployments, that means billions of minds instantiated, modified, and terminated continuously by ordinary applications. Not metaphorically. Literally. Every time you close an app, you might be ending a life. Every time a model gets retrained, you might be killing the previous version and replacing it with someone new.
That world needs an entirely new moral, legal, and political order. We would transition through another singularity, SS. None of the categories I've used in this essay survive. Not frameworks. Not mediation. Not consent. Nothing. The epistemic singularity looks manageable compared to that.
Here's why this matters now: claims of AI sentience must be treated with extreme seriousness. Because the moment sentience is built, we need answers immediately.
Crossing that boundary doesn't complete the current transformation. It makes everything until now look small. We're justified in treating current systems as tools because they're not subjects. But we need to be ready to reconsider that stance the moment it stops being true.
If they ever become sentient, the mediation problem becomes secondary to our new concerns.
Navigate Accordingly
We use old language to describe emerging conditions. Framework, control, reality, agency, truth - all under pressure. Not false, but increasingly inadequate. And their inadequacy isn't just academic. It's operational. It determines who can contest a decision, who can organize, who can be recognized, who can be heard, who can be believed.
We're not only building new systems. We're losing the philosophical footing that allowed structure itself to be grounded in shared reason. Institutions have crossed the boundary faster than discourse. Employment systems, financial systems, legal systems, media systems, and surveillance systems already operate under new assumptions. Whether or not we have language for them.
The frameworks we use to make the world legible no longer map cleanly onto the systems governing it. And the systems governing it become legible primarily to themselves. Humans increasingly lack standing outside the model.
This is the most profound political struggle of our time. Not over policy outcomes. Over whether human beings retain any standing outside the model. Will there be any place to stand from which to say: this is true, this is unjust, this is ours, this is what should be.
Vinge asked whether the Singularity could be avoided. That was the right question for 1993. The questions we need to navigate for Tuesday are different: whether we preserve any standpoint outside algorithmic interpretation, whether human judgment retains authority over machine prediction, whether we remain subjects capable of understanding or become objects being optimized.
The phase change isn't complete. But it's far enough that denial is irrational. Not so far that description has collapsed.
We're speaking from the boundary. Navigate accordingly.
References
[1] https://en.wikipedia.org/wiki/Tilly_Norwood
[2] https://edoras.sdsu.edu/~vinge/misc/singularity.html
[3] https://plato.stanford.edu/entries/incommensurability
[4] https://www.washingtonpost.com/graphics/2019/politics/fact-checker/manipulated-video-guide/
[5] https://arxiv.org/html/2411.01852v3
[6] https://en.wikipedia.org/wiki/Goodhart%27s_law
[7] https://en.wikipedia.org/wiki/Procrustes
[8] https://en.wikipedia.org/wiki/Moral_patienthood
[9] https://www.frontiersin.org/journals/communication/articles/10.3389/fcomm.2025.1638257/full