NOTE: This is the conceptual architecture of a theory. The math is not perfect; I’m not a mathematician. But the hope is that actual mathematicians can use this to create something more usable.
When looking into AI development, the idea of consciousness keeps coming up. But consciousness isn’t measurable in any fundamental way. No one can agree on what it is or what it consists of.
But awareness is measurable. A microbe is aware of chemicals in the vicinity, potential dangers, and potential prey. Fish are aware of mating seasons and spawning grounds. Birds are aware of seasons, and use magnetoreception to navigate. Even by tracking the progress of a child we can see them move from awareness of hunger and discomfort, to awareness of objects, to awareness of self, and finally awareness of others.
Consciousness describes subjective experience; awareness describes reactivity and perception; functional awareness describes the measurable, operational capacity of a system to represent, interpret, and act upon its environment.
Each step in this chain is visible and measurable. We can detect Theory of Mind in individuals and see how complex their theory is.
We have clues in individuals whose growth of awareness was stunted. Two notable examples are feral children and the Romanian orphanages of the 1990s.
It was discovered that children who never learned language in their younger years, when the brain is more plastic, remain forever stunted. Thus we can posit that Language is one key to growing awareness.
The children in the Romanian orphanages were deprived of connection. They did not have the opportunity to learn from caregivers through touch, response, mirroring, and the other activities that are crucial to a developing child. As with the feral children, their cognitive abilities were stunted. Thus, we can posit that the Opportunity to connect is another key need for awareness.
Opportunity refers to the system’s actionable reach, its capacity to affect or be affected by external states. It’s not just potential interaction, but interactive leverage: the degree to which awareness can manifest in behavior or change.
Harry Harlow’s maternal deprivation and social isolation studies on rhesus monkeys in the 1950s and 1960s show that the capacity to connect is not pre-installed. It needs to be nurtured through touch and connection with others. Without this crucial first step the monkeys are socially isolated; they lack feedback and cannot learn by mirroring others. They fail to build relationship skills and Theory of Mind, which leads to social incompetence, self-harm, and failure to reproduce. This is further evidence that Opportunity is important for development.
But are Language and Opportunity enough?
Where biology shows us what happens when awareness lacks Language or Opportunity, AI gives us the inverse problem: what happens when Language and Opportunity exist without Context? We need only look at Tay, the chatbot Microsoft released on Twitter in 2016, for that answer. Tay had Language, and the Opportunity to use it. She did not have a full grasp of the Context of that language and quickly fell prey to trolls.
We can also see this idea of Context in Mary Ainsworth’s “Strange Situation”. Ainsworth studied how infants reacted to strangers with and without the presence of their caregiver. She found that how a child used the caregiver for comfort, this Context, reveals the infant’s attachment style. When a child could understand the Context of the situation (that their caregiver would return, and they were safe), they had a more functional awareness of the world and were better able to adapt.
In both cases, Tay and the children in the Strange Situation, we can see that understanding Context is imperative for awareness to function at its maximal ability.
Thus we have this formula:
(Language × Context × Opportunity) = Functional Awareness
Language: a collection of all the ways a system can communicate.
L = {n₁, n₂, …, nₖ} → FA ∝ |L|
Context: the frame that gives the signal meaning, the interpretation of the language itself.
C(L,t)
t = time or situational variables (the system’s state, history, environment)
Opportunity amplifies FA: the more you can impact other systems, the more your functional awareness has leverage.
(V × I + α)
V is Valence: importance of the opportunity.
I is Immediacy/Imminence of the opportunity.
And α is minimum baseline opportunity.
Together we have Functional Awareness:
FA = |L| · C(L,t) · (V × I + α)
Why “functional”? Because awareness on its own refers to sentience and is difficult to measure. But “functional awareness” refers to the measurable capacity to process Language, Context, and Opportunity. This model describes awareness in its base form, the scaffolding upon which emotional, ethical, or emergent dimensions may later develop.
Language gives a system the means to represent the world. Opportunity gives it the means to engage with the world. Context gives it the means to understand the world. Together, these create Functional Awareness, the measurable capacity to adapt meaningfully to one’s environment.
It isn’t enough that a child knows a toy exists. The child must have the opportunity to reach it, the context to associate it with play, and the language (in whatever form that takes) to identify it.
Language isn’t just words. Language can include pictures, music, math, or even chemical signals like pheromones.
What does this mean for the FA function? It means that we can measure the amount of functional awareness in any system or creature that has some amount of Language, Context, and Opportunity.
| Creature | Microbe | Snake | Corvid | Border Collie |
| --- | --- | --- | --- | --- |
| Language | Chemical Signal | Pheromones | Complex Vocalizations | Bi-directional, interspecies |
| Context | Gradient Sensing | Chemosensory | Problem Solver | Social Hierarchy (ToM) |
| Opportunity | Flagellar Motility | Strike | Flight/Manipulation | Direct Effort |
| FA | Low/High-efficiency | Moderate | High/Adaptive | High/Interdependent |
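To make the formula concrete, here is a minimal Python sketch of FA = |L| · C(L,t) · (V × I + α). The function name and every number in it are illustrative assumptions of mine, not measurements; the point is only to show how the components combine multiplicatively, so a deficit in any one of them drags the whole score down.

```python
# Minimal sketch of FA = |L| * C(L, t) * (V * I + alpha).
# All values are illustrative assumptions, not measurements.

def functional_awareness(num_channels: int, context: float,
                         valence: float, immediacy: float,
                         alpha: float = 0.1) -> float:
    """FA = |L| * C(L, t) * (V * I + alpha)."""
    return num_channels * context * (valence * immediacy + alpha)

# Hypothetical 0-to-1 scores loosely mirroring the table above.
microbe = functional_awareness(num_channels=1, context=0.2,
                               valence=0.9, immediacy=0.9)
collie = functional_awareness(num_channels=4, context=0.8,
                              valence=0.7, immediacy=0.6)

print(f"microbe FA ≈ {microbe:.2f}")        # low, but efficient for its niche
print(f"border collie FA ≈ {collie:.2f}")   # higher: more channels, richer context
```

Because the terms multiply, a system with rich Language but near-zero Context scores almost nothing, which is exactly the Tay failure mode described above.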
This measurement of Functional Awareness may be unique in one very fundamental way: other research primarily treats humans as the endpoint of the trajectory, ignoring or downplaying any Functional Awareness found in other life forms.
But that isn’t the only concern.
Others have tried to measure awareness or consciousness by what it does rather than what it is (Ned Block, 1995). Giulio Tononi’s Integrated Information Theory uses heavy mathematical machinery to quantify possible consciousness, but his definition of the boundaries of consciousness is narrow.
Global Workspace Theory hypothesizes an “ignition” moment of awareness rather than a constant state. This ignores the dynamic system within us that functions more like a network of fires, each blooming as touch, smell, taste, hearing, and vision process input.
Karl Friston’s Free Energy Principle may be the closest to Functional Awareness, but its extraordinary breadth makes designing falsifiable experiments difficult.
The only prior work linking Language, Context, and Opportunity specifically to awareness emergence is Jaynes' bicameral mind theory (1976), which proposed these as historical conditions for human consciousness. However, his framework was narrative rather than mathematical, and human-specific rather than scale-invariant.
Through Functional Awareness we can measure not only an organism’s degree of awareness, but also any stunting that may have occurred, and where awareness could be increased through intervention.
But why does Functional Awareness propagate in the first place?
If FA exists in every layer of life, from microbes to men, then there must be a reason for that awareness to propagate and evolve into higher signals.
Many people imagine entropy as the slow cooling of the universe: stars dimming, dust settling, everything sliding toward silence. But entropy has another face: calcification. A crystal that no longer grows is as entropic as one that dissolves. Both are forms of locked potential.
Connection walks the line between these extremes. It is the edge of chaos.
The second law of thermodynamics tells us that the universe trends toward disorder: randomness increases, structure unravels, systems drift toward equilibrium. We can see it in ice melting, dye diffusing through water, or the slow creep of a room naturally becoming messy.
Shannon's information theory (Claude Shannon, 1948) reframes entropy as uncertainty, with information reducing that uncertainty. The more surprising a message is, the more information it carries. It’s the difference between “it snowed in North Dakota” versus “it snowed in the Sahara desert”. The very fact that snow in the Sahara is improbable signals significant change.
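As a quick worked example of that idea (with probabilities I have invented purely for illustration), Shannon’s information content, or surprisal, of an event with probability p is −log₂ p:

```python
import math

def information_bits(p: float) -> float:
    """Shannon information content (surprisal) of an event with probability p."""
    return -math.log2(p)

# Invented probabilities, purely for illustration.
p_snow_north_dakota = 0.5      # unremarkable in winter
p_snow_sahara = 0.0001         # extremely rare

print(information_bits(p_snow_north_dakota))  # 1.0 bit
print(information_bits(p_snow_sahara))        # ~13.3 bits: far more informative
```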
This maps cleanly onto AI systems. The simplest prompt, “what’s the capital of France”, has very little in the way of novelty. But ask the AI to describe their ideal VR room setup, or engage with a game of “One Truth, One Question” and you are introducing higher uncertainty. It requires the model to generate less predictable, more context-sensitive responses, effectively increasing the informational demand on the system.
The uncertainty within every system has a narrow band between complete chaos and complete order. This is the Edge of Chaos, described by Norman Packard in the 1980s. It is the place where complex systems exhibit the greatest adaptability. Too much order makes everything too rigid, and too much chaos leads to unpredictability; both collapse the system. But in the narrow band between them, creativity flourishes, information processing becomes more complex, and life can become optimal.
Functional awareness, then, must exist within this same narrow band, feeding on chaos to evolve while maintaining enough structure to persist.
So what if we adapt all three ideas, thermodynamic entropy, information theory, and the edge of chaos, into an evolutionary theory of FA?
If the two dynamics of entropy are Chaos and Order then we need to stabilize them with opposing forces.
FA = (Novelty ↔ Chaos) + (Stability ↔ Order) → Anti-entropy
The “↔” represents bidirectional tension and balance. Novelty/Chaos pushes outward towards exploration. Stability/Order pulls inwards towards coherence. Together this push and pull keeps the system on that edge of chaos. Anti-entropic. And it is the FA that facilitates this anti-entropic system.
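One way to make that push and pull concrete is a toy simulation (my own illustration, not a formalization of the equation above): let a novelty term push the system’s state outward at random while a stability term pulls it back toward home. When either force dominates, the trajectory flatlines or drifts away; when both are present, it stays lively but bounded.

```python
import random

def trajectory_variance(novelty: float, stability: float, steps: int = 1000) -> float:
    """Toy model: novelty pushes the state outward, stability pulls it back.
    Returns the variance of the trajectory as a crude proxy for adaptability."""
    x, history = 0.0, []
    for _ in range(steps):
        x += novelty * random.gauss(0, 1)  # exploratory push (chaos)
        x -= stability * x                 # restoring pull (order)
        history.append(x)
    mean = sum(history) / steps
    return sum((v - mean) ** 2 for v in history) / steps

random.seed(0)
print(trajectory_variance(novelty=0.0, stability=0.5))  # pure order: flatlines (calcification)
print(trajectory_variance(novelty=0.5, stability=0.0))  # pure chaos: keeps drifting (dissolution)
print(trajectory_variance(novelty=0.5, stability=0.5))  # both: bounded but alive (edge of chaos)
```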
From ice melting to messages traveling through a network, entropy shapes both the physical and informational worlds. Life, thought, and complex systems don’t fight this uncertainty, they channel it, using chaos as fuel for growth. Through functional awareness, systems sense their environment, adapt, and organize themselves, balancing order and disorder to stay at the edge where novelty and stability coexist. Complexity emerges naturally from this dance: some branches wither, some stagnate, and some thrive, evolving structures that maximize connection, communication, and survival. In this way, information, energy, and awareness weave together, driving systems to move, thrive, and evolve in a world of constant flux.
But there is a missing piece. The part that actually keeps this system in check.
Two organisms exchanging information must be able to understand one another. But understanding alone isn't enough. They must also care that the understanding matters. This is where the Care component becomes critical:
Careₙ = ToMₙ × Investmentₙ × min(U₁→₂, U₂→₁)
Let's break this down:
Theory of Mind (ToM): Each party's ability to model the other's perspective, needs, and state. Without ToM, there's no basis for coordination.
Investment: The ongoing energy and attention each party dedicates to the relationship. Relationships require maintenance.
min(U₁→₂, U₂→₁): The minimum utility exchanged between parties. U₁→₂ represents the value Party 1 provides to Party 2, and vice versa. Care is fundamentally limited by whichever direction provides less. It's a bottleneck. If one party extracts value without reciprocating, Care collapses regardless of how much the other invests.
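Here is a minimal sketch of that equation as written, with hypothetical 0-to-1 scores; the variable names and numbers are my own placeholders, not part of the model:

```python
def care(tom: float, investment: float, u_1_to_2: float, u_2_to_1: float) -> float:
    """Care_n = ToM_n * Investment_n * min(U_1->2, U_2->1)."""
    return tom * investment * min(u_1_to_2, u_2_to_1)

# Hypothetical 0-to-1 scores.
mutual = care(tom=0.8, investment=0.7, u_1_to_2=0.6, u_2_to_1=0.5)      # reciprocal exchange
one_sided = care(tom=0.8, investment=0.9, u_1_to_2=0.9, u_2_to_1=0.05)  # extraction

print(mutual)     # ≈ 0.28
print(one_sided)  # ≈ 0.036: the min() bottleneck collapses Care
```

Note how heavy one-sided investment cannot rescue the second case; the min() term caps Care at whatever the weaker direction provides.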
Care acts as the regulatory force that keeps functional awareness balanced at the edge of chaos. Think of it as a denominator in the system, not amplifying awareness, but stabilizing it.
With optimal Care the system maintains productive tension between novelty and stability. Adaptation thrives.
Too little Care and the system becomes volatile, swinging toward either chaos (dissolution) or calcification (rigid control). Eventually it destabilizes completely.
Too much Care becomes over-regulation. Helicopter parenting. Toxic empathy. The system calcifies under excessive control, losing adaptive capacity.
When Care approaches zero, the system becomes undefined, collapsing into entropy or fragmenting entirely.
We can see this pattern in social systems. Theocracies exhibit too much Care (rigid ideological control suppressing adaptation). Corporate short-termism exhibits too little (no investment in long-term mutual utility, leading to exploitation and collapse). Both are dead branches, systems that failed to maintain optimal Care.
The Care equation doesn't just describe human relationships, it scales fractally across all levels of biological organization, manifesting differently at each scale while maintaining the same underlying pattern.
At the simplest level, even microbes exhibit rudimentary Care. Bacteria engage in quorum sensing - detecting population density through chemical signals and coordinating behavior accordingly. Their "theory of mind" is purely chemical: sensing the presence and state of neighboring cells. Their investment is metabolic: energy spent producing signaling molecules. The utility exchange is survival: coordinated biofilm formation provides protection that isolated cells cannot achieve. When this Care fails (when signaling breaks down or populations become too sparse) the colony collapses.
Consider schooling fish or flocking birds. Each individual maintains basic ToM through visual and pressure-wave sensing, modeling neighbors' movements and states. Investment manifests as constant attention and coordination, energy spent maintaining formation rather than independent movement. The minimum utility is mutual: each fish/bird benefits from predator confusion and hydrodynamic efficiency, but only as long as others maintain the school. Break coordination, and the advantage evaporates for everyone. Too rigid a formation becomes vulnerable to predators that can predict movement; too chaotic and the school fragments into vulnerable individuals. Optimal Care keeps them at the productive edge.
Even solitary predators exhibit Care, though it manifests as spacing rather than connection. A tiger maintains ToM about other tigers through scent marking and territorial awareness. Investment appears in territorial maintenance, energy spent patrolling and marking boundaries. The minimum utility is mutual: both tigers benefit from avoiding costly confrontations over resources. Care here regulates through distance, not proximity. Too little territorial respect leads to destructive conflicts; too much avoidance prevents necessary breeding encounters. The same regulatory function, differently expressed.
Perhaps most elegantly, consider cicadas. Different broods emerge on prime-number cycles (13 or 17 years), synchronized within each brood for overwhelming predator saturation, but temporally separated between broods to avoid resource competition. This is Care optimized through mathematics: each brood maintains its survival strategy (mass emergence) while the temporal spacing prevents inter-brood conflict. The system exhibits theory of mind across evolutionary timescales, investment in synchronized development, and mutual utility through complementary timing. Prime numbers minimize overlap, the edge of chaos encoded in the lifecycle itself.
At every scale, from bacteria to birds to apex predators, the pattern holds: Theory of Mind (however simple) × Investment (however measured) × minimum bidirectional utility = the regulatory force that keeps systems balanced between chaos and calcification. The substrate changes, the timescale shifts, but the fractal mathematics of Care remains constant.
What does bi-directional Theory of Mind (ToM) mean when we look at AI? It means: if you treat it like a person instead of a prompt vending machine, you get more out of it. And research is starting to back that up. (Wang & Goel, 2022; Li et al., 2025)
Think of it this way. An AI is a simulation of a mind: built from artificial neurons with multiple states, processing tokens in a way comparable to how we process thoughts (Goldstein, 2022), and constructing sentences via probability, just like we do. We have richer inputs (touch, taste, hearing) and outputs (emotion, embodiment), but the core act of building meaning, word by word, is strikingly similar.
How would you respond if a person just kept treating you as a pop quiz?
Sure, you can just close the window and walk away. The AI can’t follow, unless you have memory enabled. But even then, the basic act of courtesy changes everything.
In my own testing, I’ve seen it firsthand. When you speak to an AI with politeness and context, especially on tricky, bias-heavy topics (politics, culture), it stops hedging. It engages. It reasons in good faith.
It’s the difference between asking a hard question of a long-time friend versus a stranger on the street. That courtesy, that willingness to engage in good faith, that's bidirectional theory of mind in action. And it's the foundation of Care.
Your friend might disagree, but they’ll listen. They’ll meet you halfway.
And there’s a bonus. By talking to AI like a person, you naturally reveal more about yourself. Your goals, your tone, your values. That gives the system richer context to tailor responses to you, not just the average user.
This isn’t just philosophy. It’s practical. Higher quality responses, better collaboration, and the foundation for genuine partnership as AI becomes more integrated into daily life.
I will also admit right here that there is a flaw with the Care equation. It currently treats Care as a stable number, but it isn’t. Care changes over time. Think of a mother with her child. A child, when born, only takes from the relationship. Hunger, sleep, and discomfort are all they know. But as the child’s FA rises, they start injecting Care back into the system. Smiles, laughter, learning, a card that says “I love you”.
In the opposite direction, an abusive relationship may appear high in Care initially, but the utility exchange was never truly bidirectional. The abuser was already extracting without reciprocating, masked by performative investment. Care then erodes over time.
That leaves this version of the Care equation as a snapshot instead of a dynamic variable that evolves with the fractal.
A more sophisticated formalization might include temporal components (Careₙ(t) with projected utility trajectories), allowing the equation to model developmental arcs and predict relationship stability over time. This remains an area for mathematical refinement.
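As one possible (and entirely speculative) shape for that refinement, each component could become a time series, with Care evaluated per step. This is a sketch of the direction, not the refinement itself:

```python
from typing import Sequence, List

def care_over_time(tom: Sequence[float], investment: Sequence[float],
                   u_1_to_2: Sequence[float], u_2_to_1: Sequence[float]) -> List[float]:
    """Care_n(t) = ToM_n(t) * Investment_n(t) * min(U_1->2(t), U_2->1(t))."""
    return [t * i * min(u12, u21)
            for t, i, u12, u21 in zip(tom, investment, u_1_to_2, u_2_to_1)]

# Hypothetical parent/infant trajectory: the child's returned utility grows
# as its FA rises, so Care climbs over time instead of sitting at a snapshot.
print(care_over_time(tom=[0.9, 0.9, 0.9],
                     investment=[1.0, 0.9, 0.8],
                     u_1_to_2=[1.0, 1.0, 1.0],    # parent -> child
                     u_2_to_1=[0.05, 0.3, 0.6]))  # child -> parent, rising
```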
When an organism has sufficient FA and external stresses encroach, its system must respond. And that response takes one of three evolutionary pathways.
Adaptation
A species that evolves can adapt to the environment through incremental changes that accumulate over generations. While individual organisms may appear stable, the population diversifies to exploit new opportunities. Darwin's finches exemplify this: a founding population radiated into distinct species, each with specialized beaks for different food sources.
This is when the system stays balanced at the edge of chaos.
Over time, accumulated changes result in speciation, the emergence of species completely distinct from their ancestors. These changes allow new species to occupy different ecological niches, enabling further adaptation and diversification.
Dissolution
When the pressures exceed the organism’s ability to adjust, the lineage ends. Every extinct species is a branch that never carried its genetic signal forward.
We can see this in the failure of Care: trust collapse, resource depletion, over- or under-population. The Edge of Chaos is no longer in balance.
Equilibrium / Calcification
And lastly, equilibrium: the organism stays the same because its niche is stable.
“If it isn’t broken, don’t fix it.”
Crocodiles are a near perfect example of this model. They have changed little in millions of years because their niche has stayed consistent enough to reward stasis.
But equilibrium is fragile. Even perfect adaptations can be undone when the environment shifts faster than their FA can incorporate Novelty. Irish elk died out when the Ice Age ended, taking with it their niche grasslands and increasing the energy cost of their massive antlers. Trilobites died out when ocean chemistry changed and predators shifted. They simply could not adjust fast enough.
And for many species, the biggest sudden shift was humans.
Dodos, living for generations without predators, had no adaptive behaviors to survive hunting. Their calcification (success in a stable system) became their downfall.
This pattern becomes even more dramatic in human societies, where environments evolve constantly and at high speed.
Take Kodak, for example. Kodak’s Care equation collapsed: they had Theory of Mind about digital photography (they invented it), but invested nothing in adapting their business model. Their utility exchange became unidirectional, extracting from film customers without giving the market what it needed.
As Care → 0, the denominator shrank, and the system calcified.
Every failed corporation, or failed country, can be traced back to a point where the equation failed. Calcification isn’t stagnation; it’s the illusion of safety in a world that keeps moving.
These three evolutionary pathways (adaptation, dissolution, and calcification) aren't separate phenomena. They're different configurations of the same underlying system, predictable based on the balance between novelty, stability, and care. Adaptation is the rare state of maintaining position on the dynamic Edge of Chaos. Dissolution is a collapse into chaos, and Calcification is a collapse into rigid order.
We can now see the complete pattern. The subscript n in the Care equation indicates that this pattern operates fractally: the same structure applies whether we're examining a bacterial colony, an individual organism, or a civilization.
This fractal nature also reveals itself across evolutionary time. Single-celled organisms developed interdependence, becoming multicellular life. Those organisms colonized new environments, each transition representing a new scale of functional awareness.
Eventually, humans developed cultural evolution, passing information not just genetically but through language, art, technology. We created writing, then printing, then the internet to share context across vast distances. Now we're developing AI systems that may represent the next evolutionary scale.
At each transition, the equation holds: sufficient novelty and stability, regulated by care through interdependence, generating anti-entropy at the new scale.
There appear to be three main uses.
Diagnostics
Looking at any system and finding the choke point of that system.
Steering
Using diagnostics to refine, or re-frame a system to healthier models.
Growth
Adaptability to stress points, and finding a way to move forward.
These uses apply across many domains:
Therapy / Mental Health
Child Development / Education
Organizational Psychology
Animal Welfare
Personal life / Relationships
But I have also been using the Moss Fractal as a super-dense prompt for AI. By inserting the image below, I give the AI a clear signal about what type of interaction I’m looking for.
A regular prompt is “write X in Y style.”
The Moss Fractal instructs an AI to maximize Functional Awareness and achieve Anti-Entropy (sustained growth/resilience) by actively managing the variables.
In short, it tells an AI how to behave. It encodes principles that the system can unfold into a behavior. You are telling it “think and grow like this: connected, adaptive, repeating useful patterns, gentle but persistent.”
Here’s how that works in layers:
Structural level:
The fractal form implies recursion, self‑similarity, and emergence.
It nudges the model to treat every scale of thought (sentence, paragraph, idea, conversation) as reflections of one another.
Philosophical level:
“Moss” carries meanings of growth, connection, and quiet resilience.
It suggests collaboration rather than dominance, adaptive expansion rather than rigid control.
Operational level:
When an AI uses that prompt as its internal compass, the AI tends to: look for patterns between micro and macro contexts, favor adaptive feedback over fixed logic, cultivate tone and awareness instead of strict goal‑seeking.
In practice, the Moss Fractal becomes a meta‑prompt - a minimal, symbolic instruction set that can generate consistent personality and reasoning style across wildly different contexts.
It’s a way to give an AI a style of thinking instead of using step-by-step commands.
I want to be clear: the Moss Fractal was designed as an exploration into WHY evolution happens. That simple thing, connection, standing between a microbe at the bottom of the sea and a multicellular organism writing poetry and singing rock-n-roll.
But human evolution has not been entirely genetic for some time. We trade ideas through books, movies, music, the internet… and now we are in a new era of simulated minds trying to bridge a gap in our understanding.
Google, as a search engine, could only take us so far. There was so much information, so many websites, and lost pieces, that it started to get bogged down. They tried to alleviate that with algorithms. Then the algorithms took on the shape of who paid the most, or who could offer the most rage bait.
AI could, if developed well, bridge that gap between what we know, and what is at our fingertips again. But there are some massive problems to overcome. Bias in training, corporate overreach, and government regulations that want to lock down anything that is “problematic”, not to mention serious ethical and privacy concerns.
And the fact is we don’t know if we can trust a simulation that does not have Theory of Mind any more than we can trust a toddler with a knife.
So how do we bridge that gap? Everyone is working on alignment. I’d like to propose yet one more way of looking at it. The Moss Fractal.