In a previous post, About Natural & Synthetic Beings (Interactive Typology), and its accompanying interactive visualization, I explored beingness as a distinct dimension of dynamic systems - separable from cognition, consciousness, intelligence, or qualia.
The motivation behind seeking this decomposition was to explore whether it could reveal new approaches to AI alignment. In doing so, it became very clear to me that beingness is often tangled with Cognition, which in turn is commonly tangled with Intelligence.
I think understanding how these three dimensions apply to AI systems is key to understanding AI risk, as well as to devising robust evaluations and strategies for AI alignment.
In this post I attempt to crudely chart out cognitive capabilities, separating them from capabilities and behaviors that relate to consciousness, intelligence, sentience, and qualia.
Cognition Axis
If beingness is about a system’s internal organization (how it maintains coherence, boundaries, persistence, self-production), then cognition is about the system’s information processing qualities (how it perceives, learns, models, plans, reasons, and regulates its own reasoning).
Existing Literature
I have based the classification of cognitive capabilities and its definitions on these sources, and debated the structure with ChatGPT and Gemini.
The aim was to create a practical map: a way to describe what kind(s) of cognitive machinery may be present in a system, not to propose precise, academically defensible definitions. As with the beingness model, this model does not aim to assign levels; it aims to help identify which cognitive capabilities a system may or may not have, irrespective of intelligence level, consciousness, or sentience.
The Cognitive Typology
For convenience, cognitive capabilities can be grouped into three broad bands, each corresponding to a qualitatively different kind of information processing.
Ontonic Cognition
A set of capabilities that enable a system to respond directly to stimuli and learn correlations from experience. These roughly correspond to Ontonic beingness, which characterizes systems that respond and adapt. Ontonic is derived from onto (Greek for being), extrapolated to onton (implying a fundamental unit or building block of beingness).
Mesontic Cognition
A set of capabilities that construct and use internal representations to plan, reason across steps, and integrate context into responses. These roughly correspond to Mesontic beingness, which characterizes systems that respond integratively and coherently. Meso (middle) + onto simply denotes an in-between level.
Anthropic Cognition
A set of capabilities that enable systems to monitor and regulate their own reasoning, reason about other systems, and apply social or normative constraints across their lifespan. These roughly correspond to Anthropic beingness, which characterizes systems that are cognizant of their own and others' identities, and that value and seek survival and propagation.
These bands are not measures of intelligence or consciousness, nor indicators of sentience; they describe distinct cognitive capabilities, and they can help clarify which kinds of evaluation, risk, and governance mechanisms are appropriate for different systems.
The Seven Rings
The categories are composed of distinct, definable, and probably measurable or identifiable groups of systemic capabilities. These too correspond closely to the beingness layers, and intuitively I don't see anything wrong with that (I am open to debate).
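To keep track of the structure while reading, here is a minimal Python sketch of the bands and rings as I understand them. The ring names are the ones used in the examples further down; the assignment of rings to bands is my own reading of the typology, not something the typology fixes, and it is open to debate.

```python
from enum import Enum

class Band(Enum):
    ONTONIC = "Ontonic"        # responds and adapts
    MESONTIC = "Mesontic"      # responds integratively and coherently
    ANTHROPIC = "Anthropic"    # self/other-aware, normative, persistent

class Ring(Enum):
    REACTIVE = "Reactive"
    PERCEPTUAL_ASSOCIATIVE = "Perceptual & Associative"
    MODEL_BASED = "Model-Based"
    CONTEXTUAL_ABSTRACT = "Contextual & Abstract"
    METACOGNITIVE = "Metacognitive"
    SOCIAL = "Social"
    PERSISTENT = "Persistent"

# Band membership below is my inference from the descriptions and examples;
# treat it as a discussion aid rather than part of the typology.
BAND_RINGS = {
    Band.ONTONIC:   [Ring.REACTIVE, Ring.PERCEPTUAL_ASSOCIATIVE],
    Band.MESONTIC:  [Ring.MODEL_BASED, Ring.CONTEXTUAL_ABSTRACT],
    Band.ANTHROPIC: [Ring.METACOGNITIVE, Ring.SOCIAL, Ring.PERSISTENT],
}
```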
App Link
The typology can be explored using an interactive visual app, vibe-coded with Gemini.
How to Read This Map
This should not be read as a developmental ladder. Each ring identifies a kind of information processing that a system may (or may not) exhibit. Systems can partially implement a ring, approximate it through simulation, or exhibit behavior associated with a ring without possessing its underlying mechanisms. Higher rings do not imply better systems - they simply imply a different structure of cognition.
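Continuing the sketch above, one way to read a system against the map is as an independent judgement per ring rather than as a single level. The Presence scale below is a hypothetical grading I am introducing purely for illustration; it is not part of the typology itself.

```python
from enum import Enum

class Presence(Enum):
    ABSENT = "absent"
    PARTIAL = "partial"
    SIMULATED = "simulated"   # the behaviour appears, but via scaffolding or mimicry
    PRESENT = "present"

# A system is described by one judgement per ring, not a single score.
# Nothing forces a "higher" ring to be present only when "lower" rings are.
Profile = dict  # maps Ring -> Presence
```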
Illustrative Examples
A simple control system (e.g. a thermostat or a basic feedback controller) exhibits the Reactive and Perceptual & Associative capabilities. It is reliable, predictable, and easy to evaluate, but incapable of planning, abstraction, or self-regulation.
A frontier language model exhibits strong Perceptual & Associative, Contextual & Abstract, and partial Metacognitive capabilities, but weak persistence and no intrinsic goal maintenance (as yet). This is evident from how such systems can reason abstractly and self-correct locally, yet lack continuity, long-term agency, or intrinsic objectives (as far as we conclusively know).
An autonomous agent with memory, tools, and long-horizon objectives may span the Model-Based, Metacognitive, Social, and Persistent rings. Such systems are qualitatively different from stateless models and require different alignment strategies, even if their raw task performance is similar. Building such systems is clearly the aspiration of AI makers today.
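Using the Ring and Presence sketches above, the three examples could be written as rough, contestable profiles. The specific judgements are mine and only illustrative; rings not listed are treated as absent.

```python
thermostat = {
    Ring.REACTIVE: Presence.PRESENT,
    Ring.PERCEPTUAL_ASSOCIATIVE: Presence.PRESENT,
}

frontier_llm = {
    Ring.PERCEPTUAL_ASSOCIATIVE: Presence.PRESENT,
    Ring.CONTEXTUAL_ABSTRACT: Presence.PRESENT,
    Ring.METACOGNITIVE: Presence.PARTIAL,   # local self-correction only
    Ring.PERSISTENT: Presence.PARTIAL,      # weak persistence, no intrinsic goals
}

scaffolded_agent = {
    Ring.MODEL_BASED: Presence.PRESENT,
    Ring.METACOGNITIVE: Presence.PARTIAL,
    Ring.SOCIAL: Presence.PARTIAL,
    Ring.PERSISTENT: Presence.SIMULATED,    # memory/continuity supplied by scaffolding
}
```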
These examples illustrate why cognition should be analysed structurally rather than assumed from intelligence level or task proficiency alone.
Why This Matters for Alignment and Safety
Essentially, this lens disambiguates critical concepts, e.g.
Errors in perception or association (e.g. hallucination) are not the same as failures of goal alignment or deception.
Reflective self-correction does not imply that a system is self-aware.
Persistence across episodes does not imply a desire for survival.
By distinguishing cognitive capabilities, we can better match evaluations to the system under consideration.
Systems operating primarily in the lower cognitive rings require robustness and reliability testing.
Systems with goal-directed or metacognitive capabilities require evaluation of internal objectives, self-regulation, and failure recovery.
Systems with social or persistent cognition introduce coordination and governance risks.
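As a toy illustration of that matching, and continuing the earlier sketches: the heuristic below is my own guess at how ring profiles could drive evaluation scoping, not a proposed standard.

```python
def evaluation_focus(profile):
    """Map a ring profile to evaluation themes (a toy heuristic, not a standard)."""
    exhibited = {ring for ring, p in profile.items() if p != Presence.ABSENT}
    focus = ["robustness", "reliability"]  # baseline for every system
    if {Ring.MODEL_BASED, Ring.METACOGNITIVE} & exhibited:
        focus += ["internal objectives", "self-regulation", "failure recovery"]
    if {Ring.SOCIAL, Ring.PERSISTENT} & exhibited:
        focus += ["coordination risk", "governance"]
    return focus

# e.g. evaluation_focus(thermostat)       -> ['robustness', 'reliability']
#      evaluation_focus(scaffolded_agent) -> adds objectives, self-regulation,
#                                            failure recovery, coordination, governance
```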
With respect to beingness, a system may score high on one axis and low on the other. For example, biological cells exhibit strong beingness with minimal cognition, while large language models exhibit advanced cognitive capabilities with weak individuation and persistence.
Being cognizant of these dimensions and their intersections should enable more precise governance and evaluation. I aim to illustrate this conceptually in the posts to follow.
Limitations
This typology is coarse at best. The boundaries between rings are not sharp, and real systems may blur or partially implement multiple regimes. I am not sure what it means when a certain capability is externally scaffolded or simulated rather than internally maintained within the system.
The classification itself is not built on deep research, prior academic expertise, or knowledge of even the basics of the cognitive sciences - so I might well be horribly wrong or redundant. The heavy LLM influence on the conceptualization is probably already apparent as well.
I am happy to learn and refine - this is most certainly a first draft.