One argument against biological naturalism is that consciousness is quick: I can have something like 40 experiences per second (observer-moments). Most biological processes, like blood flow and gene expression, are much slower; only the electrical discharges of neurons and some processes in synapses are fast enough. This mostly suggests that an electrical model of neuronal connections would likely be conscious.
I agree all these reasons are weak. What I've seen a lot less of are good reasons in favor of potential AI consciousness. Maybe that would just be a recap of functionalist arguments, idk.
Incidentally, I have particular disdain for the 'simulation is not instantiation' argument. My favorite counterexample is: sure, a simulation of a hurricane does not get you wet, but does a simulation of a calculator add and multiply numbers? Or does a simulation/emulation of an Atari game system let you play Adventure? In other words, simulations of computation are instantiations of computation.
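The calculator point can be made concrete with a trivial sketch (hypothetical code, just to illustrate the argument): a program that "simulates" a pocket calculator really does perform the arithmetic it simulates.

```python
# A toy "simulation" of a four-function pocket calculator.
# The point: the simulated device genuinely adds and multiplies,
# so a simulation of a computation is an instantiation of that computation.
def simulated_calculator(a: float, op: str, b: float) -> float:
    """Emulate pressing [a] [op] [b] [=] on a pocket calculator."""
    operations = {
        "+": lambda x, y: x + y,
        "-": lambda x, y: x - y,
        "*": lambda x, y: x * y,
        "/": lambda x, y: x / y,
    }
    return operations[op](a, b)

print(simulated_calculator(6, "*", 7))  # really computes 42
```

Unlike the simulated hurricane, there is no further "real" addition the simulation is merely representing; the simulation's output just is the sum or product.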
Agreed. I feel like there's an argument to be made about how consciousness is similar in nature to thinking, and thinking seems computational, so... but I haven't seen a really compelling version of that anywhere.
I think that Anil Seth's main intuitions around why LLMs can't be conscious stem from the Free Energy Principle (FEP) - see, for example, section 4.0 onwards near the bottom of page 15 of "Conscious Artificial Intelligence and Biological Naturalism." I disagree with this view quite strongly. In fact, I think that, if you think FEP is correct, you should think it's more likely that LLMs are conscious, because FEP is quite general and substrate-agnostic. Substrate does matter insofar as it gives rise to certain structural properties that make FEP applicable: for example, the thing you're modelling must have states that render the environment's influence on it conditionally independent (sensory states) and states that render its influence on the environment conditionally independent (active states); it might also be important for the thing you're modelling to have active states hidden from itself, so that it must infer things about itself and, in doing so, update a self-representation (fulfilling these requirements makes the thing a "strange particle", in the literature). Related.
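For readers unfamiliar with the FEP literature, those conditional-independence conditions are usually stated as a Markov blanket partition. A rough sketch in my own notation (not Seth's, and simplified from the literature):

```latex
% Partition the states of a system into external (\eta), sensory (s),
% active (a), and internal (\mu). The blanket b = (s, a) screens the
% internal states off from the external states:
p(\mu, \eta \mid b) \;=\; p(\mu \mid b)\, p(\eta \mid b)

% Equivalently, in the dynamics: sensory states are driven by external
% (not internal) states, and active states by internal (not external) states:
\dot{s} = f_s(\eta, s, a) + \omega_s, \qquad
\dot{a} = f_a(\mu, s, a) + \omega_a
```

Nothing in this partition mentions biology: any system whose states factor this way, silicon or carbon, satisfies the structural precondition.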
I might write on this and flesh out my views more at some point; just thought you might find it interesting, because I think this really is the substance of his argument (as opposed to over-attribution, anti-computationalism, simulation is not instantiation, etc). It's also the "positive" part of the argument, in the sense that the other arguments presented argue against computational functionalism, but don't really argue for biological naturalism.
I agree with the conclusion that LLMs aren't conscious, but Anil Seth's thinking seems muddled, and the video description has a suspicious number of em dashes.
The first reason, self-projection, is not actually evidence against the theory that LLMs are conscious.
The second reason is phrased as though he thinks a simulation of a human brain wouldn't be conscious, even if it included all of the electrical and chemical properties of the brain.
As for the rest, well, I've nothing nice to say. I don't think he understands what he's talking about.
There’s broad (though not universal) agreement that present-day AI is probably not conscious, but very little agreement about whether consciousness is likely to emerge as we move toward AGI. This isn’t an abstract question: AI consciousness has major implications for alignment. Further, a conscious AI might have moral rights that complicate our ability to control it, put it to work, or turn it off.
The debate about AI consciousness has two main factions: computational functionalists, who hold that consciousness arises from the right kind of computation regardless of substrate, and biological naturalists, who hold that consciousness depends on properties specific to living brains.
Many biological naturalists argue that because consciousness is inextricably linked to neurobiology, AI consciousness is highly improbable. I’m here today to argue that they’re wrong: biological naturalism may be correct, but the arguments in favor of it aren’t nearly strong enough to confidently rule out AI consciousness.
I’m specifically going to argue against Anil Seth, who is one of the most prominent and best qualified biological naturalists. He’s a neuroscientist who specializes in consciousness, he runs two separate organizations that study consciousness, and he’s spoken and written extensively about AI consciousness. I disagree with him, but he’s a serious person for whom I have great respect.
Anil has argued against AI consciousness in an essay that won the 2025 Berggruen Prize Essay Competition, as well as in a recent interview with Conspicuous Cognition. Those are both excellent, but for simplicity I’m going to focus on this short video that puts forward four reasons why he thinks AI consciousness is “extremely unlikely”. We’ll go through these in detail, but briefly:
Throughout, he shows that the brain has non-computational features that could in principle be essential for consciousness, but does not provide strong evidence that they are essential.
It’s absolutely possible that biological naturalism is correct—maybe AI will never become conscious. But absent more compelling arguments than these, it would be foolish to dismiss the possibility.
Reason #1: we over-attribute consciousness to LLMs
(2:28 in the video, if you want to follow along)
I completely agree with Anil’s first reason: because LLMs seem very human in conversation, we are strongly predisposed to perceive them as conscious regardless of whether they actually are.
I’d go further and say that because LLMs are trained on an enormous body of human-generated text, they are very fluent at emulating humans. Assessing whether AI is conscious is hard in part because of the difficulty of distinguishing between actual consciousness and simple mimicry.
But let’s be clear: reason #1 isn’t in any way an argument against the possibility of AI consciousness. It’s merely a warning that we need to proceed with caution when assessing whether a given AI is conscious.
Reason #2: consciousness might not be a property of computation
(3:39 in the video)
Anil points out that brains are very different from computers—for example, hardware and software are very distinct things in a computer, but not in a human brain. We’re all agreed on this point. He then jumps straight to his thesis:
That’s too big a leap. It is absolutely true that many non-computational biological processes are vital to the brain’s functioning, but he presents no evidence that those processes serve as anything more than support systems for the computational systems that do the actual work. Human consciousness would be impossible without blood circulation, but that doesn’t mean that blood circulation produces consciousness, or that computers can’t be conscious without it.
A more accurate claim would be: “if we have evidence that computation doesn’t exhaustively describe brains, we have evidence that computation might not be sufficient for consciousness”. To go further than that, he would need to present evidence that some non-computational system is actually vital to consciousness, not merely that those systems exist.
Reason #3: there are plenty of other games in town
(6:05 in the video)
Here Anil points out that neurobiology is complex and there are all kinds of processes and systems that might plausibly be critical to consciousness. He doesn’t go into detail here, but has elsewhere proposed autopoiesis and neuronal cleaning cycles as possibly essential to how the brain produces consciousness.
Like reason #2, this is a strong argument that biological naturalism is possible, but not a strong argument that it must be correct. He proposes neurobiological processes that could be required for consciousness, but doesn’t present any evidence that they are required.
Reason #4: simulation is not instantiation
(7:07 in the video)
Reason #4 is that simulation is not reality: a computer simulation of a rainstorm doesn’t make anything wet, so why would a computer simulation of a brain produce consciousness? It’s a clever argument, but it has two major flaws.
First and foremost, the analogy to simulation is actively misleading. LLMs and brains have some (limited) architectural similarities, but LLMs do not in any way simulate brains. Speculating about what would happen if you simulated a brain is interesting in its own right but tells us nothing about the likelihood of AI consciousness.
Beyond that, this is more a restatement of the difference between biological naturalism and computational functionalism than an argument for one over the other:
This reminds us that brain simulation would only produce consciousness if computational functionalism is true, but it doesn’t argue against computational functionalism.
In conclusion
I want to be clear that I’m not here (today, at least) to argue for or against biological naturalism. Consciousness is a legendarily hard problem and we simply don’t understand it well enough to make confident predictions about whether AI is likely to achieve consciousness.
Given that uncertainty, it seems prudent to plan for the possibility that AI may soon be conscious. What that means is a topic for next time.
Addendum
It’s beyond the scope of today’s article, but I want to briefly mention that neuroscientists are divided on the question of AI consciousness. For a very different perspective, I recommend PRISM’s recent interview with Michael Graziano. He not only argues that AI consciousness is likely, but that a conscious AI would likely be more aligned and safer than a “zombie” AI.