Hameroff's work is a precious contribution to expanding the scientific imagination, and I include even this latest twist involving time crystals. (Ryan Kidd has studied Floquet dynamics, which underpins the discrete time crystals he's talking about.) There are Ising-type models of microtubule dynamics, and you can get time crystals in Ising systems... However, I am extremely skeptical of the Bandyopadhyay group's interpretations of its data.
“The fading and dancing qualia arguments originate from Chalmers. Like most of Chalmers’ argumentative progeny, they’re very persuasive.”
Chalmers's arguments are correct, and they can be generalized so that only inputs and outputs need to be preserved.
For example, imagine a spectrum of replacements: at one end, each individual neuron is replaced by an LLM; at the other end, the entire brain is replaced by a single LLM; and in between lie an arbitrarily large number of intermediate states, in each of which the brain is replaced by a collection of progressively larger LLMs. At what exact point would consciousness disappear?
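To make the structure of that spectrum concrete, here is a minimal sketch; the specific functions and module names are hypothetical, invented purely for illustration. A "brain" is treated as nothing but an input-output map, and progressively larger chunks of it are swapped for black boxes that compute the same sub-function, with behavior checked at every stage.

```python
# Toy illustration of "functionally equivalent replacement at different granularities".
# Every function here is made up; the point is only the input-output equivalence.

def neuron_a(x): return x + 1
def neuron_b(x): return 2 * x
def neuron_c(x): return x - 3

def brain(x):
    return neuron_c(neuron_b(neuron_a(x)))

# Stage 1: replace a single neuron with a functionally identical module.
def module_a(x): return x + 1                 # same input-output map as neuron_a

def brain_stage1(x):
    return neuron_c(neuron_b(module_a(x)))

# Stage 2: replace two neurons with one larger module.
def module_ab(x): return 2 * (x + 1)          # same map as neuron_b(neuron_a(x))

def brain_stage2(x):
    return neuron_c(module_ab(x))

# Stage 3: replace the whole brain with a single module.
def module_abc(x): return 2 * (x + 1) - 3     # same map as the original brain

for x in range(-10, 11):
    assert brain(x) == brain_stage1(x) == brain_stage2(x) == module_abc(x)
print("Every stage of replacement is behaviorally indistinguishable on these inputs.")
```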
We can also, among other things, use a probability argument: what people mean by consciousness can't depend on the computational details by which our mind is implemented, because those details don't impact our thoughts and reasoning (whereas qualia do). It would be probabilistically unjustified to conclude that our mind has to be implemented using specific computations in order to have what we mean by qualia, because there is no way for that extra information to find its way into our reasoning processes in a non-arbitrary way. So to assume (or conclude) that specific computations are necessary would be like believing that a conscious mind has to be implemented in human neural tissue.
Also, consider evolution: which computations implement our thoughts, beliefs, and behavior is largely an evolutionary accident; octopuses or aliens would use very different computations. Does that mean only humans (and primates that are sufficiently evolutionarily close) have consciousness? That's extremely implausible. And what would be the chance that evolution got all those computations exactly right in humans? For that to happen would be unimaginably unlikely.
“Humans have minds, which are the things one has when one is conscious. So do animals, though exactly which ones have them is a matter of serious debate.”
Do they? I know I have a mind, but I can only guess about anyone else's.
I’m serious: until you propose an operational, testable definition that gives a specific answer in all cases, you can’t debate whether various classes of things have that property.
This is missing a key feature of substrate dependence, namely the role of time in the universe. What is implicit in the substrate is that it exists in the universe. By existing in the universe, it must "flow" through time. And it is this connection to time that is key to its role in consciousness.
Take the standard thought experiment of a simulated digital mind. Is this mind conscious? Well, here is the thing about "digital existence": it lies outside of time. You can run the simulation at 1x speed or 1000x speed, but it makes no difference to the simulated mind, as long as it is isolated from the external world. Even if it were connected to the external world, the fact that it is digital in nature means that save states are possible, and the digital mind no longer follows a single linear world line.
And this is why substrate dependence is crucial: everything in the physical world must flow in time, and there is no "going back". That is, you cannot copy the state of a conscious being; doing so would be akin to time travel. Digital minds lie outside of time.
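For what the commenter's technical premise is worth, the speed-invariance and save-state properties of a deterministic digital simulation are real and easy to demonstrate. Here is a minimal sketch; the toy "mind" below is just a counter with an update rule, and every name and number in it is made up for illustration:

```python
# A deterministic program's internal history doesn't depend on how fast the host runs
# it, and its full state can be snapshotted and restored.
import copy
import time

def step(state):
    # One deterministic update rule -- a stand-in for "one tick of the simulated mind".
    return {"t": state["t"] + 1, "value": (state["value"] * 3 + 1) % 97}

def run(state, n_steps, delay_seconds=0.0):
    history = []
    for _ in range(n_steps):
        state = step(state)
        history.append(state["value"])
        time.sleep(delay_seconds)   # wall-clock speed of the host machine
    return state, history

initial = {"t": 0, "value": 5}

# Run "fast" and "slow": the internal trajectory is identical either way.
_, fast_history = run(copy.deepcopy(initial), 20, delay_seconds=0.0)
_, slow_history = run(copy.deepcopy(initial), 20, delay_seconds=0.01)
assert fast_history == slow_history

# Save a state mid-run, continue, then restore the snapshot and branch again.
state, _ = run(copy.deepcopy(initial), 10)
snapshot = copy.deepcopy(state)          # "save state"
later, _ = run(state, 10)                # continue forward
rewound, _ = run(copy.deepcopy(snapshot), 10)
assert later == rewound                  # the restored branch replays identically
print("Same internal history regardless of host speed; snapshots replay exactly.")
```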
So how do we address the fading qualia argument? Well, the entire argument falls apart at step one, namely the claim that the two systems would be functionally equivalent. Nonsense: they are not equivalent with respect to the most important feature, because for something to be digital means that perfect knowledge of its state is possible. Perfect knowledge of the state is not a feature of base reality; it is an abstraction layer that lives in a Platonic realm. Having digital neurons is equivalent to claiming that time travel is possible from a subjective point of view.
Of course, this has no bearing on whether or not these minds can act on the world, just that they would not experience time or have subjective experience.
1 Introduction
Crosspost of my blog article.
You’re conscious if there’s something it’s like to be you—if you have experiences, which are like little movies playing in your head. Experiences are the things that begin when you wake up in the morning, that end when you go to sleep, and resume when you have dreams. Examples of experiences include: tasting an orange, thinking about the self-indication assumption, feeling sad (that you’re not thinking about the self-indication assumption), having a headache, and so on.
Humans have minds, which are the things one has when one is conscious. So do animals, though exactly which ones have them is a matter of serious debate. But could AIs have a conscious mind? That’s the question I hope to answer here. My answer is a tentative yes with maybe 80% confidence (it was 70% when I started writing this).
This debate largely comes down to the question of whether consciousness is substrate dependent. Substrate dependence is the idea that for some physical system to produce consciousness, it needs to be made out of a specific kind of material. If you think, for instance, that consciousness can only exist in organisms made of carbon, rather than silicon, then you’ll naturally deny AI consciousness.
Here, I’ll canvass some of the main arguments both for and against AI consciousness and explain why I think AI could probably be conscious. I haven’t investigated this topic in that much detail, so let me know if there are any arguments I’m missing. The extent of my research on the subject had been reading The Conscious Mind several years ago, hearing the basic anti-digital-consciousness arguments, and thinking for a few minutes.
Three brief notes before I proceed:
2 Some intuition pumps
Here is one reason I deny substrate dependence: it seems like a very odd way for consciousness to work.
There are two ways for consciousness to be substrate dependent: the substrate could matter directly, as a brute fact that minds can only be made of certain stuff regardless of what that stuff does; or the substrate could matter indirectly, because some function necessary for consciousness can only be performed by certain materials, like carbon.
Thus, the person who affirms substrate dependence must either believe in some totally new kind of substrate dependence very different from the natural world or think that there’s some unknown role for functions only carbon can perform. The second is obviously much more likely, but it still requires a picture of the mind rather different from contemporary neuroscience. This isn’t impossible, but it’s pretty unlikely.
At a deeper level, it just seems odd for consciousness to be substrate dependent. Why would consciousness depend on material rather than functions? There’s nothing incoherent about this, but it just seems odd!
An analogy: octopus brains work very differently from human brains. Octopuses don’t have a cortex, for instance. Nevertheless, I’m pretty confident that octopuses are conscious, based on their complex, goal-directed behavior, which resembles how conscious organisms behave. Thus, it seems reasonable to infer that a thing is conscious, even if its brain is very different from ours, if it behaves as if it’s conscious. So if AIs behave like they’re conscious—displaying complex, goal-directed behavior—then we should attribute consciousness to them.
Brian Cutter has another apt analogy along these lines. Imagine we came across aliens capable of a wide range of tasks. They could make art, music, and writing—they had a robust civilization. They could declare their love for each other, argue about philosophy, and do all the other things that one does in a full life. It would be odd to deny that the aliens were conscious just because their brains ran on a different substrate from ours. But this is analogous to advanced AIs. If we’d attribute consciousness to the aliens, based on behavior and brain structure, why not AI?
This is not to say that we should necessarily attribute consciousness to current LLMs. It’s not clear whether they have anything like goals. But if an AI can coherently aim for things and behave in the ways one would expect it to if it was conscious, then we should suspect that it’s conscious.
3 The probability argument
Suppose that consciousness was substrate dependent. Well, in principle, it could be dependent on a range of substrates. It could be that consciousness could only be produced from carbon, or silicon, or aluminum—or even deuterium. It could even be that the only way to make minds is to build them out of some physically impossible but conceivable material.
Thus, if consciousness were substrate dependent, it would be pretty surprising that you could make minds out of carbon. Since consciousness could, in principle, have depended on any of a wide range of substrates, the odds that carbon happens to be one of the substrates that work are low. In contrast, if consciousness is substrate independent, then it’s guaranteed that it could be made out of carbon, because it could be made out of any substrate.
Thus, the fact that some carbon-based life is conscious is straightforward evidence against substrate dependence.
As an analogy, imagine that there are two theories: theory 1 says that bombs can only be made out of one particular element (or a small handful of them), while theory 2 says that a bomb can be made out of any element.
You’re in a room that has one element. You find that it can be made into a bomb. This gives you strong evidence for theory 2. Theory 2 guarantees that the material you have would be able to make a bomb, while theory 1 makes that unlikely. Replace “make a bomb” with “make consciousness,” and you have my argument in a nutshell.
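To make the update explicit, here is a minimal sketch of the Bayesian bookkeeping behind this argument. All of the numbers (the priors, the number of candidate substrates, and how many would work under substrate dependence) are made-up assumptions for illustration, not anything from the post:

```python
# Illustrative Bayesian update for the "bomb" analogy (all numbers are made up).
# H_dep: substrate dependence -- only a few of the many candidate substrates would work.
# H_ind: substrate independence -- any substrate works.

prior_dep = 0.5          # prior on substrate dependence
prior_ind = 0.5          # prior on substrate independence

n_substrates = 100       # assumed number of candidate substrates ("elements")
n_workable = 3           # assumed number that would work if dependence is true

# Likelihood of the evidence "the substrate we actually have (carbon) can make minds":
like_dep = n_workable / n_substrates   # unlikely that carbon is one of the few that work
like_ind = 1.0                         # guaranteed if any substrate works

posterior_dep = prior_dep * like_dep / (prior_dep * like_dep + prior_ind * like_ind)
print(f"P(substrate dependence | carbon minds exist) = {posterior_dep:.3f}")  # ~0.029
```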
4 Dancing and fading qualia
These shallow waters never met what I needed
I’m letting go a deeper dive
Eternal silence of the sea, I’m breathing
Alive
Where are you now?
—Alan Walker, “Faded”
The fading and dancing qualia arguments originate from Chalmers. Like most of Chalmers’ argumentative progeny, they’re very persuasive.
First: the fading qualia argument. Imagine that consciousness depends on the substrate (carbon, specifically), so AI lacks consciousness. Now imagine gradually swapping out the neurons in a person’s brain for digital neurons that carry the same information. On this view, their consciousness would gradually fade away.
But crucially, their brain would be functionally the same throughout. So either their consciousness fades away while they continue to believe, and report, that their experience is completely unchanged (which would leave them radically out of touch with their own experience), or their consciousness never fades at all, in which case functional equivalence suffices for consciousness and substrate dependence is false.
Now, you could, in theory, deny that consciousness is substrate independent but think that different substrates produce different conscious states. On this picture, if you replace my neurons with digital neurons, my consciousness wouldn’t fade but would instead change. But this gives rise to the dancing qualia worry. By switching out my neurons with digital neurons, my conscious experience would dance before my very eyes, without me noticing.
Because my brain remains functionally the same, presumably my behavior would stay the same. So as you toggle back and forth between normal neurons and digital neurons, my conscious experience would change dramatically, but I would never notice or report anything differently.
Probably fading qualia imply dancing qualia. By swapping out the neurons in my visual system for digital ones, you could, on this view, eliminate my visual experience while keeping everything else functionally the same. But surely you couldn’t eliminate my ability to see without me noticing or reporting anything differently.
From these, I conclude: probably an AI with the right sorts of functions would be conscious.
5 Theories of consciousness
There are many different theories of consciousness. Butlin et al. (2025) (see also Scott’s writeup) examined what each of the theories has to say about the possibility of consciousness in AI systems. After carrying out this very thorough literature survey, the authors summarize their findings, the upshot of which is that the leading theories pose no obvious barrier to AI consciousness.
Thus, if you think that our current theories of consciousness are correct, probably AI can be conscious. Even if you don’t think one of them is right, the fact that theories of consciousness that are empirically close to adequate seem to, by default, imply AI consciousness should raise our credence in AI consciousness being possible.
6 A pipe dream
Probably the most popular argument against AI consciousness is that it’s very difficult to see how the AIs are doing anything that naturally gives rise to consciousness. AIs are sort of like very complicated calculators, with many different calculated values going into their overall output. Yet if a calculator isn’t conscious, then why would AIs be?
Along these lines is another argument: everything that a computer does could, in principle, be done with water and pipes; water could be used to perform the same computations. But a complex system of water flowing through pipes wouldn’t be conscious. That you could make dreams out of water and pipes is a pipe dream! So then how could AI be?
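The computational premise here is standard, and worth making concrete: any boolean computation can be built out of a single gate type, and nothing in the gate's definition cares whether it is realized in transistors, neurons, or water valves. Here is a minimal sketch (the water-valve realization itself is not modelled; the point is just the substrate-neutral level of description):

```python
# Build ordinary gates, and then a one-bit adder, purely from NAND.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def full_adder(a, b, carry_in):
    """Adds three bits; the basic building block of binary arithmetic."""
    s = xor(xor(a, b), carry_in)
    carry_out = or_(and_(a, b), and_(carry_in, xor(a, b)))
    return s, carry_out

# Check the adder against ordinary arithmetic on all eight input combinations.
for a in (False, True):
    for b in (False, True):
        for c in (False, True):
            s, cout = full_adder(a, b, c)
            assert int(s) + 2 * int(cout) == int(a) + int(b) + int(c)
print("A full adder built purely from NAND gates; the substrate is left unspecified.")
```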
Now, I share the sense that it is very surprising that water and pipes could give rise to a mind. But I find it similarly surprising that a fleshy, oatmeal-like substance shooting electric blasts (colloquially called a brain) can produce consciousness. Despite its surprisingness, I accept it, because we have significant empirical evidence for such a thesis.
What brains do is a lot like what could be done with water and pipes. There’s no reason why electricity is a better conduit for mind-stuff than water and pipes. So while it is very surprising that water and pipes could produce a mind, this is surprising in exactly the same way that brains producing minds is. Once we know that brains can produce consciousness, we shouldn’t be that surprised that water and pipes can too!
For this reason, I don’t find the main argument against AI consciousness very convincing. It’s pretty weird that you can make consciousness out of silicon, but it’s also weird you can make it out of carbon. Thought experiments—like a brain the size of China made out of water pipes—are deceptive, because that’s basically what brains are, just made out of a different material.
An aside: I’m not saying that consciousness is the fleshy stuff in brains. I’m not saying this because I don’t think it is (I’m not just being political). I think it’s non-physical. But it’s clearly caused, given our laws of nature, by the fleshy stuff in brains, so I don’t see why other stuff that works the same way as the fleshy stuff in brains wouldn’t be able to cause it.
7 Conclusion
Going into writing this post, I was about 70% confident that AIs could be conscious. I’ve gone up to about 80%. I think the case for possible digital consciousness is pretty good, and there aren’t any very strong arguments against it. In light of this, it’s important that we take AI welfare seriously and think hard about AI welfare risks. In expectation, almost all sentient beings will be digital (and this would remain true even if one thought the odds of AI consciousness were pretty low), so making sure their welfare is taken seriously may be the single most important thing we as a species ever do. As Homo erectus’s most important contribution was birthing us, ours may be birthing digital descendants.