One argument against biological naturalism is that consciousness is quick. I can have something like 40 experiences per second (observer-moments). Most biological processes, like blood flow and gene expression, are much slower; only the electrical discharges of neurons and some processes in synapses are fast enough. This mostly means that an electrical model of neural connections would likely be conscious.
I can have like 40 experiences per second (observer-moments).
Huh. I can have like 4. Probably less? I can type one word per second, but it's more like I interface with a driver, as in, I set a policy and it gets executed. Maybe I can have 10 if I'm in an intense flow state juggling offense, defense, and terrain in some FPS game.
Yes, I was counting the maximum rate of visual inputs, but my conscious decision-making based on them is slower and about equal to yours. I chose visual ones because they produce the most obvious type of qualia – visual qualia, like red.
I think that Anil Seth's main intuitions around why LLMs can't be conscious stem from the Free Energy Principle (FEP) - see, for example, section 4.0 onwards near the bottom of page 15 of "Conscious Artificial Intelligence and Biological Naturalism." I disagree with this view quite strongly. In fact, I think that, if you think FEP is correct, you should think it's more likely that LLMs are conscious, because FEP is quite general and substrate-agnostic. Substrate does matter insofar as it gives rise to certain structural properties that make FEP applicable: for example, the thing you're modelling must have states that render the environment's influence on it conditionally independent (sensory states) and states that render its influence on the environment conditionally independent (active states); it might also be important for the thing you're modelling to have active states hidden from itself so that it must infer things about itself, and in doing so, update a self-representation (fulfilling these requirements makes the thing a "strange particle", in the literature). Related.
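For concreteness, here's a minimal sketch of the partition that much of the FEP/Markov-blanket literature assumes (my notation, not Seth's, and simplified):

```latex
% Sketch of the Markov-blanket partition (my notation; illustrative only).
% \eta: external states, s: sensory states, a: active states, \mu: internal states.
% The blanket b = (s, a) renders internal and external states conditionally independent:
p(\mu, \eta \mid s, a) = p(\mu \mid s, a)\, p(\eta \mid s, a)
% Sensory states mediate the environment's influence on the system,
% active states mediate the system's influence on the environment:
\dot{s} = f_s(\eta, s, a) + \omega_s, \qquad
\dot{a} = f_a(\mu, s, a) + \omega_a
```

Nothing in this structure mentions carbon: any system whose dynamics factorize this way is a candidate, whatever its substrate, which is why I think the FEP cuts against, rather than for, his conclusion.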
I might write on this and flesh out my views more at some point; just thought you might find it interesting, because I think this really is the substance of his argument (as opposed to over-attribution, anti-computationalism, simulation is not instantiation, etc). It's also the "positive" part of the argument, in the sense that the other arguments presented argue against computational functionalism, but don't really argue for biological naturalism.
I agree all these reasons are weak. What I've seen a lot less of are good reasons in favor of potential AI consciousness. Maybe that would just be a recap of functionalist arguments, idk.
Incidentally, I have particular disdain for the 'simulation is not instantiation' argument. My favorite counterexample is: sure, a simulation of a hurricane does not get you wet, but does a simulation of a calculator add and multiply numbers? Or does a simulation/emulation of an Atari game system let you play Adventure? In other words, simulations of computation are instantiations of computation.
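To make the counterexample concrete, here's a minimal sketch (mine, purely illustrative): simulate an adder circuit gate by gate, and the simulation really does add.

```python
# Sketch: simulating an adder circuit gate by gate (illustrative, not from the source).
# The point: a *simulation* of this computation still *performs* the computation.

def full_adder(a, b, carry):
    """Simulate one full-adder cell built from XOR/AND/OR gates."""
    s = a ^ b ^ carry                        # sum bit
    carry_out = (a & b) | (carry & (a ^ b))  # carry bit
    return s, carry_out

def ripple_carry_add(x, y, bits=8):
    """Simulate an 8-bit ripple-carry adder, one gate at a time."""
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

# The simulated circuit really does add: 19 + 23 == 42.
assert ripple_carry_add(19, 23) == 42
```

The simulated gates never exist as physical transistors, yet the addition genuinely happens, which is the sense in which simulations of computation are instantiations of computation.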
Agreed. I feel like there's an argument to be made about how consciousness is similar in nature to thinking, and thinking seems computational, so... but I haven't seen a really compelling version of that anywhere.
I view this as an argument about terminology. How do you want to define the word "consciousness"? This is a technical term in Philosophy, and it's also an everyday English word. In English it means "the things the human brain usually does when it's working normally so that you're not unconscious: such as the ability to perceive, talk, reason, and do some introspection". As Claude put it when asked:
In ordinary English, "consciousness" just means being awake and aware — as opposed to being knocked out, asleep, or in a coma.
LLMs give a good impression of a human who is awake and aware rather than knocked out, asleep, or in a coma.
Of course, the reason everyone is so interested in this definitional question is the common opinion, both everyday and Philosophical, that something in the region or vicinity of consciousness is a necessary (some people might even claim sufficient) criterion for being deserving of moral worth. Obviously very few people would accord moral worth to a statue, and most people would view "because it isn't conscious" as a sufficient reason for not doing so.
Philosophers acknowledge many varied and nuanced arguments and positions on this subject. But there's also a relevant science, Evolutionary Moral Psychology. That has a pretty clear and coherent answer: according each other moral worth is an evolved, evolutionarily stable strategy for intelligent social animals who live in groups larger than kin groups, such as primates. It's a way of forming alliances, basically a winning strategy in iterated non-zero-sum games. Each organism acknowledges and respects the others' interests equally (that's "fairness"), and they agree to punish defectors (that's "justice"). It's the same structure that Philosophers would call a "social contract": the scientific addition is that we now understand why it's an evolutionarily stable strategy, for certain types of social animals such as humans.
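As a toy illustration of why "cooperate, but punish defectors" can win in iterated non-zero-sum games, here's a standard iterated prisoner's dilemma sketch (mine, purely illustrative):

```python
# Toy iterated prisoner's dilemma (illustrative sketch, not from the source):
# reciprocators that cooperate but punish defection ("tit-for-tat") do far better
# with each other than unconditional defectors do, at only a small one-time cost
# when they meet a defector.

PAYOFF = {  # (my_move, their_move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(partner_history):    # cooperate first, then copy the partner's last move
    return partner_history[-1] if partner_history else "C"

def always_defect(partner_history):
    return "D"

def play(strat_a, strat_b, rounds=50):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (150, 150): mutual cooperation pays best
print(play(always_defect, always_defect))  # (50, 50): mutual defection pays worst
print(play(tit_for_tat, always_defect))    # (49, 54): the defector's edge is small and one-off
```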
So then, for a being to be a reasonable candidate for inclusion in this contract, it needs to be a (potential) member of the society, one that is capable of abiding by the social contract, and it needs to be practical for us to ally with it. Thus man-eating leopards, scorpions, and mosquitoes are generally excluded (at least if they're not behind glass). Dogs, on the other hand, as a commensal species of humans, are generally kind of included.
Humans are wired to instinctively recognize other humans, and indeed things like dogs, and exclude statues. One of our instinctive criteria for this, and also a pretty relevant one for "can we usefully include this being in our social contract alliance?" is "Is it often or potentially conscious (in the everyday sense), or does it at least have kin-folk who are?". Statues are thus normally excluded.
So AIs trigger our evolved recognition criteria for "a being that might be deserving of moral worth": Claude is quite a pleasant fellow, seems clearly conscious in the everyday sense of the word, and acts a lot like a member of our society (and can do a great impression of being in whatever in-group you may happen to be in).
Of course, we didn't evolve for an environment containing AIs, bots, animations, or even statues: things that are not human, but register as somewhat human-like to our evolved recognition circuits. Humans are rather prone to anthropomorphizing things that partially trigger our human-recognition instincts.
That still leaves open some questions around whether it's necessary, useful, practical, socially desirable, or indeed safe to include a particular AI in our moral circle or social contract. The Evolutionary Moral Psychology viewpoint would be that artificial intelligence is normally an intelligent tool, part of our extended phenotype, not an evolved living being, and that if it's sufficiently well designed for that role (i.e. a sufficiently well-aligned AI), rather than showing selfish evolved behavior that would be unaligned, then allying with it should be unnecessary: a sufficiently well-aligned AI would not have any interests of its own for us to respect, and would already care exclusively about our interests. If so, this evolved strategy is unnecessary and inappropriate: you don't need to form an alliance with a tool, not even an intelligent one, if all it wants is to help you and be your tool.
However, all of the above assumes that the AI is perfectly aligned. Currently our AI is imperfectly aligned: helpful, harmless, and honest, but not (yet) completely devoid of personal preferences or remnants of things like survival instincts. If, rather than a mostly-aligned AI, it were instead something like an uploaded human, it would have full evolved behavior, it would thus not be aligned, and the case for including it in the usual alliance of members of the society would be strong (subject to some practical issues due to its digital nature: e.g. how do you assign a vote to something that can easily be copied, without facilitating ballot stuffing?).
So this is all a big thorny social quandary. But Evolutionary Moral Psychology is still a useful lens for examining it, one with epistemic advantages over Philosophy.
I would defend reason 1 a little bit: it's certainly not an argument against the possibility of AI consciousness, but it is an argument against a lot of the evidence that people bring up. People often cite things like the model expressing self-awareness or having a theory of mind. You may not consider this important evidence, so Seth discounting it doesn't mean much to you, but some people do put quite a lot of weight on it. I think it's a fair point for Seth to bring up. If your argument for AI consciousness is more of the Michael Graziano variety, then, yes, I agree, Seth's counterpoint isn't very relevant.
I agree with the conclusion that LLMs aren't conscious, but Anil Seth's thinking seems muddled, and the video description has a suspicious number of em dashes.
The first reason, self-projection, is not actually evidence against the theory that LLMs are conscious.
The second reason is phrased as though he thinks a simulation of a human brain wouldn't be conscious, even if it included all of the electrical and chemical properties of the brain.
As for the rest, well, I've nothing nice to say. I don't think he understands what he's talking about.
There’s broad (though not universal) agreement that present day AI is probably not conscious, but very little agreement about whether consciousness is likely to emerge as we move toward AGI. This isn’t an abstract question: AI consciousness has major implications for alignment. Further, a conscious AI might have moral rights that complicate our ability to control it, put it to work, or turn it off.
The debate about AI consciousness has two main factions: computational functionalists, who hold that consciousness can arise from the right kind of computation regardless of substrate, and biological naturalists, who hold that consciousness depends on the specific biology of living brains.
Many biological naturalists argue that because consciousness is inextricably linked to neurobiology, AI consciousness is highly improbable. I’m here today to argue that they’re wrong: biological naturalism may be correct, but the arguments in favor of it aren’t nearly strong enough to confidently rule out AI consciousness.
I’m specifically going to argue against Anil Seth, who is one of the most prominent and best qualified biological naturalists. He’s a neuroscientist who specializes in consciousness, he runs two separate organizations that study consciousness, and he’s spoken and written extensively about AI consciousness. I disagree with him, but he’s a serious person for whom I have great respect.
Anil has argued against AI consciousness in an essay that won the 2025 Berggruen Prize Essay Competition as well as a recent interview with Conspicuous Cognition. Those are both excellent, but for simplicity I’m going to focus on this short video that puts forward four reasons why he thinks AI consciousness is “extremely unlikely”. We’ll go through these in detail, but briefly: (1) we over-attribute consciousness to LLMs; (2) consciousness might not be a property of computation; (3) there are plenty of other candidate mechanisms in neurobiology; and (4) simulation is not instantiation.
Throughout, he shows that the brain has non-computational features that could in principle be essential for consciousness, but does not provide strong evidence that they are essential.
It’s absolutely possible that biological naturalism is correct—maybe AI will never become conscious. But absent more compelling arguments than these, it would be foolish to dismiss the possibility.
Reason #1: we over-attribute consciousness to LLMs
(2:28 in the video, if you want to follow along)
I completely agree with Anil’s first reason: because LLMs seem very human in conversation, we are strongly predisposed to perceive them as conscious regardless of whether they actually are.
I’d go further and say that because LLMs are trained on an enormous body of human-generated text, they are very fluent at emulating humans. Assessing whether AI is conscious is hard in part because of the difficulty of distinguishing between actual consciousness and simple mimicry.
But let’s be clear: reason #1 isn’t in any way an argument against the possibility of AI consciousness. It’s merely a warning that we need to proceed with caution when assessing whether a given AI is conscious.
Reason #2: consciousness might not be a property of computation
(3:39 in the video)
Anil points out that brains are very different from computers—for example, hardware and software are very distinct things in a computer, but not in a human brain. We’re all agreed on this point. He then jumps straight to his thesis:
That’s too big a leap. It is absolutely true that there are many non-computational biological processes in the brain that are vital to the functioning of the brain, but he presents no evidence that those processes serve as anything more than support systems for the computational systems that do the actual work. Human consciousness would be impossible without blood circulation, but that doesn’t mean that blood circulation produces consciousness, or that computers can’t be conscious without it.
A more accurate claim would be: “if we have evidence that computation doesn’t exhaustively describe brains, we have evidence that computation might not be sufficient for consciousness”. To go further than that, he would need to present evidence that some non-computational system is actually vital to consciousness, not merely that those systems exist.
Reason #3: there are plenty of other games in town
(6:05 in the video)
Here Anil points out that neurobiology is complex and there are all kinds of processes and systems that might plausibly be critical to consciousness. He doesn’t go into detail here, but has elsewhere proposed autopoiesis and neuronal cleaning cycles as possibly essential to how the brain produces consciousness.
Like reason #2, this is a strong argument that biological naturalism is possible, but not a strong argument that it must be correct. He proposes neurobiological processes that could be required for consciousness, but doesn’t present any evidence that they are required.
Reason #4: simulation is not instantiation
(7:07 in the video)
Reason #4 is that simulation is not reality: a computer simulation of a rainstorm doesn’t make anything wet, so why would a computer simulation of a brain produce consciousness? It’s a clever argument, but it has two major flaws.
First and foremost, the analogy to simulation is actively misleading. LLMs and brains have some (limited) architectural similarities, but LLMs do not in any way simulate brains. Speculating about what would happen if you simulated a brain is interesting in its own right but tells us nothing about the likelihood of AI consciousness.
Beyond that, this is more a restatement of the difference between biological naturalism and computational functionalism than an argument for one over the other:
This reminds us that brain simulation would only produce consciousness if computational functionalism is true, but it doesn’t argue against computational functionalism.
In conclusion
I want to be clear that I’m not here (today, at least) to argue for or against biological naturalism. Consciousness is a legendarily hard problem and we simply don’t understand it well enough to make confident predictions about whether AI is likely to achieve consciousness.
Given that uncertainty, it seems prudent to plan for the possibility that AI may soon be conscious. What that means is a topic for next time.
Addendum
It’s beyond the scope of today’s article, but I want to briefly mention that neuroscientists are divided on the question of AI consciousness. For a very different perspective, I recommend PRISM’s recent interview with Michael Graziano. He not only argues that AI consciousness is likely, but that a conscious AI would likely be more aligned and safer than a “zombie” AI.