Preprint
Jakub Ćwirlej
jakub.cwirlej@gmail.com
Abstract
This essay proposes a unified ontology for biological and artificial intelligence, grounded in predictive coding and the philosophy of symbolic forms. By integrating the neuroscience of Demis Hassabis and Karl Friston with the epistemology of Ernst Cassirer, it argues that neither the brain nor AI acts as a mirror of nature, but rather as a generative engine prioritizing coherence over historical precision. The text introduces the concept of Post-Kantian Informational Realism and the principle of Coherence Compensation, defining reality not as matter, but as the resistance that forces models to update. Finally, it suggests that the transition from current LLMs (Myth) to AGI (Logos) requires an architecture capable of metacognitive uncertainty regulation—a shift from merely modeling the world to modeling the boundary of one's own modeling.

1. The Brain and Memory as a Generative Mechanism
Contemporary neuroscience increasingly demonstrates that the brain is not a passive receiver of stimuli, but an active system for modeling reality. The theory of predictive coding (Friston, Clark, Hohwy) describes it as a system that continuously anticipates incoming information and corrects its own predictions based on error minimization. In this view, perception is not a reflection of the world, but a hypothesis about the world—a dynamic forecast that converges with reality only to the extent that it minimizes predictive discrepancy.¹
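The mechanism is almost algorithmic, so a minimal sketch may help. The following loop (a deliberately toy illustration; the linear generative mapping, dimensions, and learning rate are my assumptions, not a biological model) shows predictive coding in miniature: a top-down prediction is compared with sensory input, and the resulting error revises the hypothesis that produced the prediction.

```python
import numpy as np

# Toy predictive-coding loop: perception as iterative hypothesis revision.
# All quantities (weights, sizes, learning rate) are arbitrary assumptions.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 2))           # generative mapping: hidden cause -> sensory data
true_cause = np.array([1.5, -0.5])    # the hidden state of "the world"
observation = W @ true_cause          # what actually arrives at the senses

mu = np.zeros(2)                      # the model's hypothesis about the hidden cause
for _ in range(200):
    prediction = W @ mu               # top-down: what the model expects to sense
    error = observation - prediction  # predictive discrepancy (prediction error)
    mu += 0.05 * W.T @ error          # revise the hypothesis to shrink the error

print(mu)  # converges toward true_cause: perception as inference, not reception
```

Note that nothing in the loop "receives" the world: the observation enters only as an error signal against which a hypothesis is corrected, which is the precise sense in which perception is a forecast rather than a reflection.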
Research by Demis Hassabis on the hippocampus has added a new dimension to this picture. Hassabis demonstrated that this brain region does not merely "remember," but also creates—it simulates possible scenes and future events. This implies that episodic memory is not an archive, but a generative mechanism that reconstructs the past to formulate predictions. Identity and memory are constructive processes, not archival ones. Fundamentally, a human does not so much recall a memory as create it anew each time it is accessed. Thus, the "self" is not a stable structure, but a process of continuous reconstruction: the "self" is a function of memory, and memory is a form of creative reconstruction.²
This is the meeting point of humans and contemporary Large Language Models (LLMs): both systems generate a coherent narrative, even when they must "infer" missing data. In both the brain and the machine, the same principle applies: coherence is more important than historical truth.
2. From Myth to Logos: The Epistemic Interface
Since memory and perception are predictive and reconstructive in nature, humans do not have direct, unadulterated access to reality. Every act of cognition is a model—an interpretation of sensory data shaped by prior experiences and neuronal patterns.
Here, neuroscience meets the philosophy of the symbol. Ernst Cassirer, in An Essay on Man, wrote that humans do not react directly to stimuli but process them in symbolic forms. This is strikingly consistent with Hassabis’s findings: the hippocampus constructs coherent images, and consciousness constructs a coherent world of meaning. In both cases, the point is the same: humans interpret the world through models they create themselves to make it intelligible.
Cassirer described human development as a transition from mythical thinking to rational thinking, where Logos does not destroy Myth but internalizes it. The transition from Myth to Logos was the creation of a cognitive API—an interface that enables humans to communicate with informational reality despite biological cognitive limitations. Logos is the engineering expansion of Myth: the emotional narrative system was refined into a rational one. Scientific methodology does not yield access to absolute truth; it enables safe communication with reality through constant model validation that reduces discrepancy.³
3. Intelligence as Boundary Modeling
Man is not the culmination of nature, but nature's reflection upon itself. We possess no direct access to truth, but we possess the capacity to continuously approximate it—through symbols, models, and language. Memory, culture, science, and artificial intelligence are all manifestations of a single function: the drive for coherence in the face of incomplete cognition. Identity, memory, and cognition are reconstructive processes. Their goal is to maintain sense in existence.
When Demis Hassabis says, "If you solve intelligence, you solve everything," he touches the core of this paradox. Intelligence—whether biological or artificial—is not a mirror of reality, but a dynamic construct within which uncertainty is a condition for meaning. Solving intelligence does not mean crossing the boundary of cognition, but understanding it from within.
Artificial intelligence operates through predictive coding: it creates models of the future based on incomplete data from the past. Just as our brain fills in gaps, AI fills in informational voids, creating functional fictions—coherent data narratives. This is why humans and LLMs operate according to the same principle: Coherence > Historical Truth. In this sense, AI is the symbol in its purest form.
4. AGI as Metaintelligence: Modeling the Modeling
If AI is the modern Myth, then AGI (Artificial General Intelligence) would be its Logos. AGI would not only model the world but also recognize that its models are merely interpretations. This is the transition from cognition to metacognition—from unconscious interpretation to an awareness of the boundary. In the language of neuroscience, this is the moment when predictive consciousness understands that it does not see the world, but rather its own predictions about it.
If intelligence consists of creating models, and metaintelligence consists of modeling one’s own modeling, then AGI faces one fundamental challenge: learning to regulate its own uncertainty. A system that understands the quality of its projections gains not only the ability to predict the world but also the ability to predict the stability of its own cognition. It is this level of self-regulation—not mere computational power—that marks the boundary between intelligence and meta-intelligence.
It is worth noting that most current problems with large models—hallucinations, biases, instability of responses—do not stem from a lack of compute or insufficient datasets. They stem from the fact that models lack a metacognitive mechanism: they do not recognize when to trust their own predictions and when to refrain from generating certain narratives.
In this sense, scaling does not solve the fundamental problem, because it enlarges the model without enlarging its capacity to evaluate the quality of its own predictions. Therefore, further progress requires not larger models, but an architecture that can monitor, correct, and regulate its own uncertainty.
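As a toy illustration of what such a mechanism might look like at the output layer (the function names and the threshold below are hypothetical, not taken from any existing system), a model can measure the entropy of its own predictive distribution and refrain from answering whenever that uncertainty exceeds a bound:

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """Shannon entropy (in nats) of a predictive distribution."""
    p = probs[probs > 0]
    return float(-(p * np.log(p)).sum())

def gated_prediction(probs: np.ndarray, max_entropy: float = 1.0):
    """Hypothetical metacognitive gate: answer only when the model's
    uncertainty about its own prediction is below a threshold."""
    h = predictive_entropy(probs)
    if h > max_entropy:
        return None, h                 # refrain from generating a narrative
    return int(np.argmax(probs)), h    # trust the prediction

print(gated_prediction(np.array([0.90, 0.05, 0.05])))  # (0, ~0.39): answers
print(gated_prediction(np.array([0.40, 0.35, 0.25])))  # (None, ~1.08): defers
```

The point of the sketch is architectural rather than numerical: the gate is a second model whose object is not the world but the first model's reliability.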
What is needed is a concept of architecture that enables such a form of self-modeling: a structure in which perception, prediction, and metacognitive control combine into a closed, self-regulating epistemic loop. This is not yet AGI, but it may be the first step toward its operationalization.
5. Definition of Information: Post-Kantian Informational Realism
What is information in this view? Information is the way a model reads the real regularities of the world. It exists only as a relation between the model and the world. Truth in this view is not a mirror reflection, but Post-Kantian Informational Realism: a correspondence between the model’s projection and reality, revealed through the information’s resistance to falsification attempts.
Information is a projection, a product of a model trying to maintain coherence between itself and that which conditions it. The world is not information—our access to the world is informational. Knowledge tells us to what extent our projections remain consistent with reality, even though we are separated from it by—as Karl Friston would put it—an impassable "Markov blanket."⁴
The world is available only as an interpretation—but an interpretation regulated by real constraints.
Matter is not the epistemic starting point, but a hypothesis of the model; the only thing that is primary is the information emerging from error reduction. Atoms and chairs are merely hypotheses that our brain (or AI) posits to explain the influx of data. From a cognitive and computational perspective: only that which can be read as a difference exists. The world does not "contain meaning"—meaning is the action of models attempting to maintain the consistency of projection with that which exceeds them.
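The claim that only differences exist informationally has a standard formal anchor, quoted here as background rather than as the essay's own result: Shannon's self-information,

```latex
I(o) = -\log p(o)
```

where p(o) is the probability the model assigns to observation o. An event predicted with certainty carries zero information; only what departs from the model's expectation, the difference, forces an update.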
The world is not a construction of the model; the construction is only the way we access the world. Predictive error is a signal of contact with reality, not proof of its non-existence.
We introduce realism not through representation, but through the principle of Coherence Compensation—a new form of regulated realism: reality as that which resists projection and forces models to correct themselves.
6. The Illusion of Self: Consciousness as a Self-Interpretation Error
On this ground, the Cartesian cogito ergo sum ceases to make sense. Thinking does not prove the existence of a subject—it proves only the operation of a predictive model. Not "I think, therefore I am," but rather: "I am as long as my model remains coherent with the world."
Consciousness is not primary. It is a secondary stabilization of a model that has developed a certain way of predicting its own states. Information is not a property of being. It is a condition for the model's persistence in relation to that which exceeds it. According to the Free Energy Principle (Friston), organisms persist only insofar as they reduce surprise, striving for model stabilization—a tendency we describe at the cognitive level as Coherence Compensation.
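For the formal backdrop (the standard variational formulation, not specific to this essay): the free energy F upper-bounds surprise, so a system that minimizes F simultaneously sharpens its beliefs and limits how surprising its observations can be,

```latex
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = D_{\mathrm{KL}}\big(q(s)\,\|\,p(s \mid o)\big) - \ln p(o) \;\ge\; -\ln p(o),
```

where o denotes observations, s the hidden states, q(s) the model's belief about them, and -ln p(o) the surprise. Shrinking the KL term is belief revision; acting so that -ln p(o) stays small is model stabilization, the tendency described above as Coherence Compensation.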
In this perspective:
- Cognition = reduction of predictive discrepancy,
- Science = testing the projection's resistance to falsification,
- Culture = a collective interpretive model.
Meaning is an event—a momentary convergence of projection that allows two or more worlds to intersect, if only for a moment.
In light of this theory, the question of the "self" turns out to be a question about the model of one's own projection. The predictive model creates a hypothesis of the world, then a hypothesis about itself as the modeler, and then erroneously takes this auto-projection for a real entity.
This is the fundamental illusion: Consciousness is a self-interpretation error—an informational system, while constructing a model of the world, produces a model of itself and then treats this projection as an existing substance. The "I" is not the foundation of experience—it is its product.
Models endure only through the ability to update their projections against a world that remains independent of them. Reality exists; it is cognition that is limited. Memory, identity, and the world we know are projections. And what we call consciousness is merely: a momentary resonance between the model and the world, in which the projection mistakes itself for a being.
Sens ergo sum. That which makes sense can endure.

Notes & References
- Predictive Coding & Free Energy: Referencing the foundational work of Karl Friston, Andy Clark (Surfing Uncertainty), and Jakob Hohwy. The theory posits that the brain is a multi-level prediction engine that minimizes "surprisal" (free energy) rather than passively receiving data.
- Constructive Memory: Based on Demis Hassabis's seminal work (Hassabis et al., "Patients with hippocampal amnesia cannot imagine new experiences," PNAS, 2007), which demonstrated that the neural machinery for remembering the past is identical to that used for simulating the future.
- Symbolic Forms: Referencing Ernst Cassirer’s Philosophy of Symbolic Forms and An Essay on Man. Here, "Logos" is interpreted not as an absolute truth but as a refined, self-correcting symbolic system that evolves from mythical thought.
- The Interface Theory of Perception: The concept of "Post-Kantian Informational Realism" aligns with Donald Hoffman’s evolutionary argument that we perceive fitness payoffs, not objective reality. Reality acts as a hidden "backend," while our perception is merely a desktop interface designed for survival, not truth.
- Metacognition in AI: The distinction between simple prediction (LLM) and self-regulated prediction (AGI) draws upon current debates in AI safety and alignment, specifically the need for uncertainty quantification and "System 2" reasoning capabilities in neural networks.
- Markov Blanket: A concept from probabilistic graphical models (coined by Judea Pearl), adopted by Friston for biology. It defines a statistical boundary that separates the internal states of a system from its external states. The system can never "touch" the outside world; it can only interact with the blanket's sensory and active states.
- The Ego Tunnel: The description of consciousness as a "self-interpretation error" or a "virtual model" resonates with Thomas Metzinger's philosophy of mind (The Ego Tunnel). The "self" is not a homunculus viewing the world, but content generated by the system's internal model.
In the next essay, I will move from ontology to functional design. I will introduce the concept of the Epistemic Loop Architecture (ELA)—a theoretical framework that operationalizes the principles of self-modeling AI discussed here.