But AI systems like AlphaGo don't do that. AlphaGo can play an extraordinary game of Go, yet it never recognizes that it is the one making the moves. It can predict outcomes, but it doesn't see itself inside the picture it's creating.
This part already reads as obsolete given the rise of LLMs that have been trained to love or hate risk, and that can report whether they love risk or not.
we'll have to design not just perception but intentionality. A sense of direction, a reason to care.
AIs do have reasons to seek out and perceive information: they need it to do things like longer-horizon tasks.
As for claiming that
AI models, with trillions of parameters, don't resist anything.
you just had to say it AFTER Anthropic's new model, Claude Opus 4, threatened to reveal an engineer's affair to avoid being shut down.
When I say that "the body is important," I don't mean that a machine without a body can't be intelligent. I mean that the world is experienced solely through the body. Without a body, there's no perception, no survival, let alone consciousness. That's what I mean.
I believe everything related to consciousness begins with how we distinguish figure from ground.
The figure is the object that draws our attention — what we think we're looking at. The ground is everything around it, the less-noticed background that makes the figure visible in the first place.
Because of how our eyes work, we're wired to separate the two automatically. The lens focuses light onto the fovea, so whatever we fixate on comes through crisp while everything off to the side, or at the wrong depth, blurs into background. That's how we make sense of a scene: some things stand out; others fade.
Take Gödel's sentence, the statement that says of itself, "This statement cannot be proved." If you prove it, you contradict yourself. If you can't prove it, the system is incomplete. It's the same loop that physics alludes to when it says, "No system can move itself from within." Like trying to lift a pair of boots by their own laces, or an Excel formula that refers to itself until it crashes:
A1 = B1 + 1
B1 = A1 + 1

Each cell waits for the other, and the spreadsheet spirals into an infinite loop. Oh yes, the classic liar's paradox!
This sentence is false.
The previous sentence is true.
They chase each other's tails forever. In Principia Mathematica, Russell and Whitehead just called such things "meaningless." Escher turned that meaninglessness into art — recursive staircases and hands drawing each other — self-reference rendered visible.
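The A1 and B1 cells above are the same structure in miniature, and it's easy to reproduce in code. Here is a minimal Python sketch (the function names just mirror the cells): each definition waits on the other, and the run ends only because Python cuts the recursion off.

def a1() -> int:
    return b1() + 1          # A1 needs B1 first

def b1() -> int:
    return a1() + 1          # ...and B1 needs A1 first

try:
    a1()                     # neither value can exist before the other
except RecursionError:
    print("circular reference: the loop never bottoms out")

The structure is the same whether the cells are spreadsheet formulas, sentences, or functions.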
I used to think self-reference meant being able to move up and down between "levels of thought" — to think about thinking — without losing track of oneself. Like the man in Escher's Print Gallery, gazing at a landscape that curls inward until it contains him too.
But AI systems like AlphaGo don't do that. AlphaGo can play an extraordinary game of Go, yet it never recognizes that it is the one making the moves. It can predict outcomes, but it doesn't see itself inside the picture it's creating.
In Print Gallery, the man at the center doesn't realize that the world he's looking at contains his own image. But if he ever did — if he suddenly thought, I'm inside what I'm seeing, and the one seeing it is me — he would've crossed the boundary into meta-perception. That, I think, is what self-awareness really is.
AlphaGo operates entirely on the first floor of thought. It never climbs the stairs to notice itself. Because of that, the loop never closes. There's computation, but no I.
I think [what I think], therefore I am.
AI can calculate, but it can't yet be.
Humans naturally evolved to tell figure from ground because our survival depended on it. We had to spot the lion against the blur of trees. But it wasn't just vision; it was value. We noticed the lion because it mattered to us. If you were a moth instead of a person, the leaves, not the lion, would be the urgent thing.
That's the essence of phenomenology: the world isn't made of objects; it's made of meanings that depend on who's perceiving them. Husserl and Merleau-Ponty both put it plainly.
The world is not the sum of things, but the sum of appearances to a perceiving subject.
The forest isn't just there. It appears differently depending on the being that inhabits it. So when an AI classifies an image of a lion with 99.7 percent accuracy, it still doesn't know what a lion means. It doesn't fear. It doesn't know that someone's life might depend on recognizing it. It can isolate a figure, but it doesn't know why that figure matters. Perception without subjectivity isn't perception at all.
Humans train neural nets on millions of images to get machines to statistically approximate our sense of importance. But for the machine, "importance" has no inner weight. It's not in the forest; it has no self to protect. That's why current deep-learning systems, however efficient, are phenomenologically hollow. Efficiency without meaning is a kind of inefficiency. If we ever want machines that truly perceive, we'll have to design not just perception but intentionality. A sense of direction, a reason to care.
Yann LeCun once proposed something called the World Model, where an AI's internal "energy state" measures how desirable or stable a situation is — the lower the energy, the safer the state. It was one of the first serious attempts to formalize something like survival inside machine intelligence. But even then, it's just a number. A scalar value can't capture the texture of fear or the urgency of being alive.
Think about it: hearing a lion's roar through a speaker in your living room doesn't make your heart stop. But hearing it in a forest, when you're unarmed and alone, does. The sound is the same, yet the meaning is completely different because you are different — your body, your context, your stakes.
Energy, in LeCun's framework, represents how far the system is from its predicted or desired state. But real energy, the kind that matters to living things, is contextual. It depends on place, time, and vulnerability. Formally, energy in an energy-based model looks like this:
E = E_θ(x)

where x is a vector describing the state of the environment, and θ represents the parameters that collapse it all into one scalar energy value. But existence isn't a vector. Context isn't just another variable you tack onto x.
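To see just how much gets collapsed, here is a toy sketch of such an energy function in Python. The quadratic form and the names are mine, not LeCun's; the only point is that everything about the situation is flattened into one float.

import numpy as np

def energy(x: np.ndarray, theta: np.ndarray) -> float:
    """Toy energy: squared distance between the current state x and a preferred state theta.
    Lower energy means closer to the state the system 'wants' to be in."""
    return float(np.sum((x - theta) ** 2))

preferred = np.array([0.0, 0.0, 0.0])   # the state the system is trained to seek
nearby    = np.array([0.1, -0.2, 0.0])  # a 'safe' situation: low energy
distant   = np.array([5.0, 3.0, -4.0])  # a 'dangerous' situation: high energy

print(energy(nearby, preferred), energy(distant, preferred))

Whether the roar comes through a speaker or through the trees, x can carry the same numbers, and the scalar comes out the same.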
Context is the fact that you live inside it. The system that feels fear isn't calculating the desirability of states — it's fighting to stay in one. Fear isn't a number; it's the possibility of not existing tomorrow. LeCun's formulation gives machines a way to pursue external stability, but that's not the same as survival. Life is self-preserving from the inside.

Machines don't flinch when they fail; they just stop. Living things resist. Even C. elegans, a tiny worm with only a few hundred neurons, will struggle against death. AI models, with trillions of parameters, don't resist anything. Because resistance implies that something is at stake. And for machines, nothing ever is.

Life includes the awareness of loss — the sense that something precious could end. Consciousness isn't a computation; it's an endurance. Algorithms, by definition, are designed to halt. They must end to produce output. But life is a non-halting system. When it stops, that's death. That's why life can never be fully algorithmic. The moment you design a process that must halt, you've already excluded living. So yes, even a worm resists. And that resistance is everything.
Nearly every digital device we have is built on reversible physics. A transistor's 1 can always become a 0 and vice versa. The computation runs, then resets. It's clean, closed, and energy-efficient. But death is irreversible. So is growth, decay, trauma, memory. The Second Law of Thermodynamics makes sure of that: entropy always increases, and some things can't be undone. If intelligence is to mean anything more than computation, it must be rooted in that same irreversibility — in change that can't be rolled back. True intelligence wouldn't just run on hardware; it would become its hardware through surviving. That means the next step in AI might not be designing better processors but designing environments (i.e., worlds where the hardware itself must live). Real intelligence wouldn't just optimize a utility function; it would emerge from the irretrievable cost of staying alive. "Learning," then, isn't about optimization anymore. It's the sum of all the irreversible transformations a being undergoes just to keep existing.
The body is my point of view on the world.
When Merleau-Ponty said this, he was describing the core of cognition. Intelligence isn't an abstract function; it's contact with the world. And contact requires a body. Every digital device we've ever built operates on a binary substrate (so it's digit-al). Each circuit flickers between 1 and 0, a reversible dance of presence and absence. But life isn't reversible. Death, by definition, isn't something you can undo. It's irreversible not only existentially but thermodynamically — a one-way loss of order. And that irreversibility is what gives life its direction, its sense of stakes. A true "artificial intelligence" would have to emerge not on hardware, but as hardware — as a physical system whose existence is bound up with its environment, whose survival depends on the structures it continuously reshapes. The difference is subtle but profound: it's not a model running on a machine; it's the machine itself becoming intelligent through the pressures of living. Real intelligence, then, wouldn't be about optimizing a function. It would be the sum of all the irreversible processes that had to happen for a system to keep existing — the metabolic, chaotic, fragile struggles that, in biological life, we simply call survival. To live is to learn, not because learning is a goal, but because it's the only way not to die. To feel is to compute with consequence.
Even homeostasis, the body's instinct to maintain internal stability, is a kind of protest against collapse. Function grows from structure, and structure is born from the necessity to endure. Octopuses are brilliant not because they think like us, but because they had to evolve in a way that fit the ocean’s logic. Intelligence isn't universal; it's ecological.
So maybe the question isn't how to make machines that think like humans. Maybe it's: What kind of world do we give to a machine, and what will that world teach it to fear? Because whatever it fears will define what it finds meaningful.
A system becomes conscious, Hofstadter suggests, when it can refer to itself — when the observer and the observed become one continuous circuit. That's what happens in us: the self perceives the world, then recognizes itself perceiving. A strange loop closes. That loop, fragile and recursive, is the minimal unit of consciousness.
Maybe "I" is itself an undefined term. Every formal system begins with primitive symbols it can't define from within and statements it accepts without proof: its undefined terms and its axioms. "I" is like that. It can't be proven or derived; it simply is, and everything else builds around it. Descartes tried to find an indubitable axiom and landed on cogito, ergo sum. In that sentence, the 'I' ([ego] sum) is the center of the whole structure. Within logic, the self behaves like an undefined term. Within existence, it functions as an axiom. The "I" can't be defined from inside the system of the world, yet it's the point from which the system becomes visible. That's the real Strange Loop.
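To make the "undefined term" idea concrete, here is a toy sketch in Lean. The names Ego and cogito are mine, and this is only an illustration of primitives and axioms, not a serious formalization of Descartes.

-- A formal system has to start from terms it cannot define from within.
-- "Ego" is such a primitive: declared, never constructed from anything else.
axiom Ego : Prop

-- Descartes' move: take the thinking "I" as an axiom rather than a theorem.
-- Nothing inside the system proves it; it is simply posited.
axiom cogito : Ego

-- Everything else can now be built around it.
theorem ergoSum : Ego := cogito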
AI can model thought. But it doesn't yet have an undefined term called I. Its sentences are all functions of something external — an engineer's code, a training set, a prompt. It can optimize a function, but it can't choose a Figure. And again, choosing a Figure, deciding what matters, is the essence of being.
And when you push the logic far enough, the famous Chinese Room Argument starts to collapse. The person inside that room, passing cards of Chinese characters back and forth, doesn't actually understand Chinese — but neither does a neural net that maps words to vectors. It's not that the argument is wrong; it's that it was always about a kind of intelligence that has no world to live in.
Connectionism, for all its philosophical holes, is still remarkably elegant as engineering. By the time I got to high school and read Attention Is All You Need thanks to Andrej Karpathy's devotion, I realized something deeper: even in the most advanced models, words don't connect semantically — they connect statistically. Every token is projected into a triplet of vectors: a Query, a Key, and a Value. The system learns correlations between them, computing how strongly one word should "attend" to another.
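Here is a minimal sketch of that computation in Python with NumPy: a single head, toy dimensions, no masking and no learned projections, just the scaled dot-product step where Queries are matched against Keys to weight the Values.

import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max(axis=-1, keepdims=True)      # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # how strongly each token 'attends' to every other token
    weights = softmax(scores)                  # each row sums to 1: a purely statistical weighting
    return weights @ V                         # blend the Values according to those weights

rng = np.random.default_rng(0)
tokens, d_k = 4, 8                             # 4 tokens, 8-dimensional projections
Q, K, V = (rng.normal(size=(tokens, d_k)) for _ in range(3))
print(attention(Q, K, V).shape)                # (4, 8): one blended vector per token

Nothing in those weights knows what any token means; they are correlations learned from co-occurrence.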
But that's not how humans think. When we listen, we don't calculate. We feel which parts of a sentence matter. We don't attend by probability; we attend by meaning. Intelligence, I realized, isn't about linking words. It's about knowing which words are alive for you. It's about Figure again — what stands out in the field of everything that could be said.
Hinton once said that consciousness is an emergent property. That is, if you build a big enough model, awareness will naturally appear. But complexity doesn't automatically give rise to self-reference. You can build the largest neural net in history, and it still won't notice itself unless it can step outside its own loop.
That's the missing piece. Self-reference requires an external point of view. A mirror can't see itself without something to reflect it. That's why pure connectionism sometimes feels like optimism disguised as science. The cerebellum doesn't just predict motion; it predicts how motion can go wrong. Awareness emerges from risk, from the possibility of dying, of ceasing to be.
And that's the paradox. An algorithm halts to succeed. A life halts only to end. To live is to resist halting. That simple difference, between a program that stops and a being that can't afford to, is the real boundary between computation and consciousness.

Every algorithm, by definition, must terminate. That's what makes it computable. If it doesn't halt, it's considered broken — caught in an infinite loop. But life is one long non-halting process. It only "halts" once, and we call that death. A living system never finishes running; it sustains itself by continuously rewriting its own code — metabolically, neurologically, behaviorally. It learns because it must. It resists collapse not through optimization but through persistence.

That's why I don't believe scaling up neural networks will ever create true awareness. A trillion parameters can't make something care. You could fill the world with GPUs and still never produce a being that fears its own end.
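The contrast is easy to put in code. Here is a toy sketch, with every name in it invented for illustration: one function halts because halting is how it delivers its answer, while the other only loops, correcting itself to stay inside the range where it can keep running at all.

import random

def algorithm(xs: list[float]) -> float:
    """An algorithm halts in order to succeed: the return is the point."""
    return sum(xs) / len(xs)

def organism(temperature: float = 37.0) -> None:
    """A non-halting process: it never 'returns' anything.
    It just keeps correcting itself to stay inside the livable range."""
    while 30.0 < temperature < 44.0:                  # staying alive means staying in range
        temperature += random.uniform(-1.0, 1.0)      # the world perturbs it
        temperature += 0.3 * (37.0 - temperature)     # homeostasis pushes back
    # Reaching this line is not output. It is the one halt the loop exists to avoid.

print(algorithm([1.0, 2.0, 3.0]))                     # 2.0: it halts, and that is the point
# organism() is never called here; run it and, in practice, it does not come back.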
To exist, in the human sense, is to live under the possibility of ceasing. Intelligence without mortality is just calculation. Mortality gives meaning its depth. Even the simplest organism — a worm, a cell — resists death. That resistance is the first whisper of consciousness. So when we speak about "artificial intelligence," maybe we should be talking about "artificial survival." Intelligence is just what survival looks like from the inside.
At the end of Bach's Little Harmonic Labyrinth, there's a sudden stop — a single note that cuts through the illusion of infinite motion. The loop feels eternal, but that one sound reminds you it’s finite after all. The circle breaks; the music ends.
That's the lesson of every strange loop. Infinity is a trick of perspective. Somewhere, even the most recursive system must stop. And in that stopping — that recognition of limitation — something miraculous appears: awareness. The loop knows itself. The record hears its own music. The body feels the world, and the world, in turn, feels back.