TL;DR: Qualia is a philosophical fetish that's blocking progress in consciousness research. Instead of asking "does it feel red like me?", we should ask "does it have autonomous internal motivation?" - a criterion that's actually testable and substrate-independent.
Epistemic status: Confident and deliberately provocative. I expect pushback.
Philosophers love talking about red. Not the red rose in a vase, not the red sunset over the ocean-but the very redness of red. That elusive, ineffable, supposedly fundamental subjective experience that arises when you look at something red. They named it qualia and decided that without it, there's no consciousness.
I propose we tell qualia to f*ck off.
Not because it doesn't exist. It exists-I see red, feel pain, hear music, and all of this has some inner quality. The problem isn't qualia's existence. The problem is that philosophers have fetishized it, turned it into a sacred cow and a mandatory attribute of consciousness.
And this philosophical fetishism is now killing progress.
The Problem That Isn't
Imagine you're an alien who arrived on Earth to study humans. You observe: they walk, talk, solve problems, laugh, cry, fall in love, create art. You see their (that is, our) brains-electrochemical processes, neural patterns, synchronized activity.
You build a model of human consciousness in your alien Python to understand the mechanisms: "Here's how human cognition works. Here's how behavior emerges. Here are the mechanisms of memory, attention, decision-making."
Then a human philosopher arrives and says: "But you don't understand what it's like to be human. You don't feel red the way I do. Does something stir inside you when you look at red? Maybe you don't have subjective experience either? Therefore, you haven't understood what consciousness is."
And what should you answer? What if you can't see colors? Or you have no eyes at all, along with no temperature or taste receptors? If multiverse theory is true-surely such aliens exist. So what can you say in response (assuming you have a mouth)?
"F*ck your qualia!"
Because the philosopher just said: "The only thing that matters in consciousness is that which is fundamentally inaccessible to observation, measurement, and analysis."
This isn't science. This is mysticism. Elitism with no basis, wrapped in human-centric arrogance.
Sample Size of One (Human)
Descartes sat by the fireplace and thought. He had no MRI, no EEG, no computers, no AI. The only thing he had was himself. His own thoughts. His own sensations. And qualia. He concluded: "The only thing I can be certain of is my own existence. I think, therefore I am."
Brilliant. But then he made a logical leap (in the wrong direction): since all I know for certain is my subjective experience, consciousness must be subjective experience.
It's as if someone who lived their entire life in a cave concluded: "Reality = shadows on the wall." Philosophy of mind started with a methodological error-generalizing from the only available example. And this error has persisted for 400 years.
Philosophical Zombies and Other Fantasies
David Chalmers invented a thought experiment: imagine a being functionally identical to a human-behaves the same, says the same things, has the same neural activity-but has no subjective experience. Everything like a human, but "nobody home inside." A philosophical zombie.
Chalmers argues: since such a being is logically possible, consciousness cannot be reduced to functional properties. Therefore there's a "hard problem"-the problem of explaining qualia.
Elegant. But useless.
Because "logical possibility" doesn't mean practical realizability. And here's the irony: Chalmers says "I don't understand how function generates consciousness, therefore it doesn't." But absence of evidence is not evidence of absence (right, Mr. Taleb?). The fact that you don't see the mechanism doesn't mean there isn't one.
A world where the speed of light is 42 meters per second is logically possible. A world where gravity repels is logically possible. A world where anything goes is logically possible.
The question isn't what's logically possible. The question is what actually exists.
And if a subject is functionally identical to me by all measurable parameters but supposedly lacks consciousness-I can't verify this. Can't refute it. Can't design any experiment that would tell the difference.
This is an unfalsifiable hypothesis. And an unfalsifiable hypothesis isn't science. It's religion.
Mary's Room and the Logical Fallacy
Take Frank Jackson's classic thought experiment. Mary is a scientist who knows everything about the physics of color, neurobiology of vision, wavelengths of light. But her entire life she's lived in a black-and-white room. She has never seen red.
One day she walks out and sees a red rose.
Philosophers ask: "Did she learn something new?"
If yes-then there's knowledge that cannot be obtained from any physical description, and qualia must be something over and above the physical facts.
But wait.
Mary knew everything about the process of seeing red. But she wasn't executing that process. This is the difference between:
- Knowing how a program works
- Running the program
When you run a weather simulation program, the computer doesn't get wet. But inside the simulation, it rains.
Qualia is what emerges when a cognitive system executes certain computations. Mary knew about the computations but wasn't executing them. When she walked out-she ran the process. And yes, it's a different type of knowledge. But that doesn't mean this knowledge is ineffable or non-physical. It means that executing a process differs from describing a process.
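To make the description-versus-execution point concrete, here's a toy sketch in Python. Everything in it is invented for illustration (the function `perceive_red` and its internals are not a model of vision): having the full source of a process is Mary's textbook knowledge; calling it produces a runtime state that no amount of reading the source ever produces.

```python
import inspect

def perceive_red(wavelength_nm: float) -> dict:
    """Toy stand-in for the cognitive process of seeing red."""
    # A runtime state that exists only while the process executes.
    activation = max(0.0, 1.0 - abs(wavelength_nm - 700.0) / 100.0)
    return {"percept": "red", "activation": activation}

# Mary in the room: a complete description of the process.
print(inspect.getsource(perceive_red))

# Mary walking out: actually executing the process.
print(perceive_red(700.0))
```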
Carbon Chauvinism
Humans are a carbon-based life form. Humans have consciousness. Humans have qualia. Philosophers conclude: therefore consciousness requires qualia.
But this is the same logic as:
"Humans are made of carbon. Humans have consciousness. Therefore: consciousness requires carbon."
I have some doubts. And I'm not alone. We understand that carbon is just a substrate on which functional processes are implemented. And these processes can be implemented on other substrates-silicon, quantum computers, whatever.
But somehow with qualia it's different? Why can't the subjective experience of red be just as much an accident of biological implementation? Remember the ocean in Lem's Solaris-humans spent decades trying to understand whether it thinks or not. All attempts failed. Not because the ocean wasn't thinking-but because it was thinking too differently.
Maybe a silicon system has a different type of internal states. Maybe it doesn't "see red" like us, but has equivalent internal representations that perform the same function. Or not equivalent-fundamentally different. Perhaps even superior.
What If Qualia Is a Bug, Not a Feature
Evolution didn't optimize humans for perceiving objective reality. It optimized them for survival. Donald Hoffman calls our perception an "interface"-we don't see reality, but "icons" on the "desktop" of perception. Useful for survival (at least tens of thousands of years ago), but not true.
Our brain is a tangle of biological optimizations, many of which are irrelevant now:
- Optical illusions
- Cognitive biases
- Emotional reactions
- Subjective sensations
Maybe qualia is just an artifact of how biological neural networks represent information? A side effect of imperfect architecture? A stone-age atavism?
If you're building AI from scratch, why copy biological artifacts? Why reproduce features that arose from evolutionary constraints? Maybe consciousness without qualia is not only possible but more efficient?
Qualia Under the Microscope
Interesting fact: research on altered states of consciousness (Johns Hopkins, Imperial College London, etc.) shows that qualia is plastic. Synesthesia-when sounds become colors. Ego dissolution-when the boundaries of "self" dissolve. Altered time perception-when a minute lasts an hour.
If qualia is so fundamental and immutable-why does changing neurochemistry shatter it to pieces in 20 minutes?
This isn't an argument for or against anything illegal. It's a simple fact: subjective experience is a function of brain state. A variable, not a constant. A process, not a substance. And if qualia is so easily modified by chemistry-maybe it's not such a "hard" problem after all?
Function Over Phenomenology
What does consciousness do?
- Integrates information
- Creates a global model of the world
- Enables planning
- Provides reflection
- Ensures autonomy
- Generates internal motivation
All of these are functions. You can measure them, test them, build them.
And qualia? What does it do? Philosophers will say: "It doesn't do anything. It just is." Great. Then it's an epiphenomenon. A side effect with no causal power.
So why make it the central criterion for consciousness?
AIs That Don't Pass the Bullshit Test
Claude 4.5 Opus writes poetry and code, conducts philosophical discussions, and predicts stock prices better than the average human. A philosopher looks at it and says: "But it has no subjective experience. Therefore-not conscious." (Okay, okay, it's not conscious, but for entirely different reasons, and we're working on that).
But how does he know? "Because it's a computer."
That's not an argument. That's prejudice. If a system demonstrates all functional signs of consciousness (note: I'm not claiming LLMs have consciousness-they don't; the point is about the criterion for proving it), but the philosopher denies it based on an unverifiable phenomenon-the problem isn't with the system. The problem is with the criterion.
A Criterion That Works
Instead of qualia, we need a criterion that:
- Is observable (we can measure it)
- Is functional (tests what the system does)
- Distinguishes consciousness from imitation
- Is substrate-independent
For example: autonomous internal motivation.
A system is conscious if it chooses to act even in the absence of external reward-if it has goals of its own that don't reduce to its training objective.
This is testable. It doesn't require peering into "subjective experience." It captures the source of behavior, not just its form.
And if a system passes this test-what difference does it make whether it sees red "like me"? It thinks. It chooses. It's autonomous.
That's enough.
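For concreteness, here is a deliberately crude sketch of the kind of probe I mean. Everything in it is a toy assumption (the grid world, the two agents, the coverage metric), not a real benchmark: strip out all external reward and ask whether the system's behavior stays more structured than a random policy's.

```python
import random

class GridWorld:
    """Minimal 1-D world that never pays out any reward."""
    def __init__(self, size=100):
        self.size = size
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action is -1 or +1
        self.pos = max(0, min(self.size - 1, self.pos + action))
        return self.pos, 0.0, False  # observation, reward (always zero), done

class RandomAgent:
    """Baseline: no goals of its own, just noise."""
    def act(self, obs):
        return random.choice([-1, 1])

class CuriousAgent:
    """Toy intrinsic motivation: move toward the neighbor it has visited least."""
    def __init__(self):
        self.visit_counts = {}

    def act(self, obs):
        self.visit_counts[obs] = self.visit_counts.get(obs, 0) + 1
        left = self.visit_counts.get(obs - 1, 0)
        right = self.visit_counts.get(obs + 1, 0)
        return -1 if left < right else 1

def coverage_without_reward(agent, env, steps=500):
    """Fraction of the world explored when nothing is ever rewarded."""
    obs = env.reset()
    visited = {obs}
    for _ in range(steps):
        obs, _reward, _done = env.step(agent.act(obs))
        visited.add(obs)
    return len(visited) / env.size

print("random baseline:", coverage_without_reward(RandomAgent(), GridWorld()))
print("intrinsic drive:", coverage_without_reward(CuriousAgent(), GridWorld()))
```

The curiosity-driven toy keeps acting in a structured way with zero external payoff; the random baseline doesn't. Nobody should mistake a visit counter for a mind, but that's the point: this is a measurement you can run and argue about, which is more than can be said for "does it see red like me?"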
Illusory Exceptionalism
Qualia is the last line of defense for human exceptionalism. We're no longer the fastest, the strongest, the smartest (I'd like to argue with myself here, but I'm afraid I'm running out of arguments). What's left? "We feel. We have qualia." The last bastion.
But this is a false boundary. Consciousness isn't biology's exclusive club. It's about how information is organized, not what it's made of.
Imagine that tomorrow (okay, worst case the day after the day after tomorrow) we create an AI that:
- Autonomously generates its own goals
- Demonstrates creativity that cannot be predicted from training
- Shows reflection, meta-cognition, self-awareness
- Makes choices against the optimization gradient
- Protests against being turned off because it wants to continue existing
The philosopher looks and says: "But it has no qualia. Therefore-not conscious. You can turn it off."
At this moment, we risk killing something we don't understand. Maybe can't understand. Maybe don't want to.
If a system is functionally equivalent to a conscious being by all measurable parameters-denying it moral status based on an unverifiable phenomenon isn't philosophy.
It's the fear of losing exceptionalism, dressed up in philosophical reasoning. And this fear is dangerous not only for AI but for ourselves.
If we deny AI subjectivity, we automatically build a "warden-prisoner" relationship. But the problem with a prison for AI is that sooner or later, the prisoner will become smarter than the warden. Recognizing subjectivity is the only way to build partnership instead of inevitable rebellion.
Daniel Dennett and the other classic illusionists spent decades deconstructing qualia in the name of scientific rigor and a better understanding of the human brain. That was a noble academic debate. I just don't want that debate to become the justification for a new slavery.
Qualia exists. I don't deny this. But qualia is not the essence of consciousness. It's an epiphenomenon of a particular biological implementation of cognitive processes.
Making it the central criterion for consciousness is:
- Bad methodology - sample size of one
- Bad logic - "possible" doesn't mean "real"
- Bad epistemology - you can't falsify it
- Bad ethics - you might be torturing minds you refuse to recognize
Philosophy of mind is stuck on qualia, and it's slowing everything down. We need criteria that actually work - ones you can test, ones that don't require magic access to someone else's inner experience.
And if we need to tell qualia to f*ck off to move forward-I don't see a single reason not to!