As someone who could be described as "pro-qualia": I think there are still a number of fundamental misconceptions and confusions that people bring to this debate. We could have a more productive dialogue if these confusions were cleared up. I don't think that clearing up these confusions will make everyone agree with me on everything, but I do think that we would end up talking past each other less if the confusions were addressed.
First, a couple of misconceptions:
1.) Some people think that part of the definition of qualia is that they are necessarily supernatural or non-physical. This is false. A quale is just a sense perception. That's it. The definition of "qualia" is completely, 100% neutral as to the underlying ontological substrate. It could certainly be something entirely physical. By accepting the existence of qualia, you are not thereby committing yourself to anti-physicalism.
2.) An idea I sometimes see repeated is that qualia are this sort of ephemeral, ineffable "feeling" that you get over and above your ordinary sense perception. It's as if, you see red, and the experience of seeing red gives you a certain "vibe", a...
I have a simple, yet unusual, explanation for the difference between camp #1 and camp #2: we have different experiences of consciousness. Each of us believes that everyone has our kind of consciousness, so of course we talk past each other.
I’ve noticed that in conversations about qualia, I’m always in the position of Mr Boldface in the example dialog: I don’t think there is anything that needs to be explained, and I’m puzzled that nobody can tell me what qualia are using sensible words. (I’m not particularly stupid or ignorant; I got a degree in philosophy and linguistics from MIT.) I suggest a simple explanation: some of us have qualia and some of us don’t. I’m one of those who don’t. And when someone tries to point at them, all I can do is to react with obtuse incomprehension, while they point at the most obvious thing in the world. It apparently is the most obvious thing in the world, to a lot of people.
Obviously I have sensory impressions; I can tell you when something looks red. And I have sensory memories; I can tell you when something looked red yesterday. But there isn’t any hard-to-explain extra thing there.
One might object that qualia are...
Alternative explanation: everyone has qualia, but some people lack the mental mechanism that makes them feel like qualia require a special metaphysical explanation. Since qualia are almost always represented as requiring such an explanation (or at least as ineffable, mysterious and elusive), these latter people don't recognize their own qualia as that which is being talked about.
How can people lack such a mental mechanism? Either
I don't have a clue about the relative prevalences of these groups, nor do I mean to make a claim about which group you personally are in.
That's interesting, but I doubt it's what's going on in general (though maybe it is for some camp #1 people). My instinct is also strongly camp #1, but I feel like I get the appeal of camp #2 (and qualia feel "obvious" to me on a gut level). The difference between the camps seems to me to have more to do with differences in philosophical priors.
D) the idea that the word must mean something weird, since it is a strange word -- it cannot be an unfamiliar term for something familiar.
You said you had the experience of redness. I told you that's a quale. Why didn't that tell you what "qualia" means?
“There’s some confusing extra thing on top of behavior, namely sensations.” Wow, that’s a fascinating notion. But presumably if we didn’t have visual sensations, we’d be blind, assuming the rest of our brain worked the same, right? So what exactly requires explanation? You’re postulating something that acts just like me but has no sensations, i.e. is blind, deaf, etc. I don’t see how that can be a coherent thing you’re imagining.
When I read you saying “is like something to be,” I get the same feeling I get when someone tries to tell me what qualia are – it’s a peculiar collection of familiar words. It seems to me that you’re trying to turn a two-place predicate “A imagines what it feels like to be B” into a one-place predicate “B is like something to be”, where it’s a pure property of B.
It's not more than sensation. It's just the subjective aspect without the behavioural aspect.
Integrated Information Theory is peak Camp #2 stuff
As a Camp #2 person, I just want to remark that from my personal viewpoint, Integrated Information Theory shares a key defect with Global Workspace Theory, and hence is no better.
Namely, I think that the Hard Problem of Consciousness has a Hard Part: the Hard Problem of Qualia. As soon as the Hard Problem of Qualia is solved, the rest of the Hard Problem of Consciousness is much less mysterious. Perhaps the rest can be treated in the spirit of the "Easy Problems of Consciousness": the question of why I am me and not another person might be treatable as a symmetry violation, a standard mechanism in physics, and the question of why human qualia normally seem to cluster into belonging to a particular subject (my qualia vs. all other qualia) might not be excessively mysterious either.
So the theory purporting to actually solve the Hard Problem of Consciousness needs to shed some light onto the nature and the structure of the space of qualia, in order to be a viable contender from my personal viewpoint.
Unfortunately, I am not aware of any such viable contenders, i.e. of any theories shedding much light onto the nature and the...
I think a lot of Camp #2 people would agree with you that IIT doesn't make meaningful progress on the hard problem. As far as I remember, it doesn't even really try to; it just states that consciousness is the same thing as integrated information and then argues why this is plausible based on intuition/simplicity/how it applies to the brain and so on.
I think IIT "is Camp #2 stuff" in the sense that being in Camp #2 is necessary to appreciate IIT - it's definitely not sufficient. But it does seem necessary because, for Camp #1, the entire approach of trying to find a precise formula for "amount of consciousness" is just fundamentally doomed, especially since the math doesn't require any capacity for reporting on your conscious states, or really any of the functional capabilities of human consciousness. In fact, Scott Aaronson claims here (I haven't read the construction myself) that
the system that simply applies the matrix W to an input vector x—has an enormous amount of integrated information Φ
So yeah, Camp #2 is necessary but not sufficient. I had a line in an older version of this post where I suggested that the Camp #2 memeplex is so large that, even if you're firmly in Camp #2, you'll probably find some things in there that are just as absurd to you as the Camp #1 axiom.
This is a really clear breakdown. Thank you for writing it up!
I'm struck by the symmetry between (a) these two Camps and (b) the two hemispheres of the brain as depicted by Iain McGilchrist. Including the way that one side can navigate the relationship between both sides while the other thinks the first is basically just bonkers!
It's a strong enough analogy that I wonder if it's causal. E.g., I expect someone from Camp 1 to have a much harder time "vibing". I associate Camp 1 folk with rejection of ineffable insights, like "The Tao that can be said is not the true Tao" sounding to them like "the Tao" is just incoherent gibberish.
In which case the "What is this 'qualia' thing you're talking about?" has an awful lot in common with the daughter's arm phenomenon. The whole experience of seeing a rainbow, knowing it's beautiful, and then witnessing the thought "Wow, what a beautiful rainbow!" would be hard to pin down because the only way to pin it down in Camp 1 is by modeling the experiential stream and then talking about the model. The idea that there could be a direct experience that is itself being modeled and is thus prior to any thoughts about it… just doesn't make sense to the left hemisphere. It's like talking about the Tao.
I don't know how big a factor, if at all, this plays in the two camps thing. It's just such a striking analogy that it seems worth bringing up.
Good writeup, I certainly agree with the frustration of people talking past each other with no convergence in sight.
First, I don't understand why IIT is still popular, Scott Aaronson showed its fatal shortcomings 10 years ago, as soon as it came out.
Second, I do not see any difference between experiencing something and claiming to experience something, outside of intentionally trying to deceive someone.
Third, I don't know which camp I am in, beyond "of course consciousness is an emergent concept, like free will and baseball". Here by emergence I mean the Sean Carroll version: https://www.preposterousuniverse.com/blog/2020/08/11/the-biggest-ideas-in-the-universe-21-emergence/
I have no opinion on whether some day we will be able to model consciousness and qualia definitively, like we do with many other emergent phenomena. Which may or may not be equivalent to them being outside the "laws of physics", if one defines laws of physics as models of the universe that a human can comprehend. I can certainly imagine a universe where some of the more complicated features just do not fit into the tiny minds of the tiny parts of the universe that we are. In that kind of a universe there would be equivalents of "magic" and "miracles" and "paranormal", i.e. observations that are not explainable scientifically. Whether our world is like that, who knows.
This is a clear and convincing account of the intuitions that lead to people either accepting or denying the existence of the Hard Problem. I’m squarely in Camp #1, and while I think the broad strokes are correct there are two places where I think this account gets Camp #1 a little wrong on the details.
...According to Camp #1, the correct explanandum is still "I claim to have experienced X" (where X is the apparent experience). After all, if we can explain exactly why you, as a physical system, uttered the words "I experienced X", then there's nothing else to
I think there's a pervasive error being made by both camps, although more especially Camp 2 (and I count myself in Camp 2). There is a frantic demand for and grasping after explanations, to the extent of counting the difficulty of the problem as evidence for this or that solution. "What else could it be [but my theory]?"
We are confronted with the three buttons labelled "Explain", "Ignore", and "Worship". A lot of people keep on jabbing at the "Explain" button, but (in my view) none of the explanations get anywhere. Some press the "Ignore" button and procla...
I'm going to argue a complementary story: The basic reason why it's so hard to talk about consciousness has to do with 2 issues that are present in consciousness research, and both make it impossible to do productive research on:
Extraordinarily terrible feedback loops, almost reminiscent of the pre-deep learning alignment work on LW (I'm looking at you MIRI, albeit even then it achieved more than the consciousness research to date, and LW is slowly shifting to a mix of empirical and governance work, which is quite a lot faster than any consciousness rel
Great post! I think this captures most of the variance in consciousness discussions.
I've been interested in consciousness through a 23 year career in computational cognitive neuroscience. I think making progress on bridging the gap between camp 1 and camp 2 requires more detailed explanations of neural dynamics. Those can be inferred from empirical data, but not easily, so I haven't seen any explanations similar to the one I've been developing in my head. I haven't published on the topic because it's more of a liability for a neuroscience career than an as...
Strong agree all around—this post echoes a comment I made here (me in Camp #1, talking to someone in Camp #2):
...If you ask me a question about, umm, I’m not sure the exact term, let’s say “3rd-person-observable properties of the physical world that have something to do with the human brain”…then I feel like I’m on pretty firm ground, and that I’m in my comfort zone, and that I’m able to answer such questions, at least in broad outline and to some extent at a pretty gory level of detail. (Some broad-outline ingredients are in my old post here, and I’m open to
-It's obvious that conscious experience exists.
-Yes, it sure looks like the brain is doing a lot of non-parallel processing that involves several spatially distributed brain areas at once, so
-You mean, it looks from the outside. But I'm not just talking about the computational process, which I am not even aware of as such, I am talking about conscious experience.
-Define qualia
-Look at a sunset. The way it looks is a quale. Taste some chocolate. The way it tastes is a quale.
-Well, I got my experimental subject to look at a sunset and taste some chocolate,...
There are many features you get right about the stubbornness of the problem/discussion. Certainly, modulo the choice to stop the count at two camps, you've highlighted some crucial facts about these clusters. But now I'm going to complain about what I see as your missteps.
Moreover, even if consciousness is compatible with the laws of physics, ... [camp #2 holds] it's still metaphysically tricky, i.e., it poses a conceptual mystery relative to our current understanding.
I think we need to be careful not to mush together metaphysics and epistemics...
I think you are a bit off the mark.
As a reductive materialist, expecting to find a materialistic explanation for consciousness, in your model I'd be Camp 2. And yet in the dialogue
..."It's obvious that consciousness exists."
-Yes, it sure looks like the brain is doing a lot of non-parallel processing that involves several spatially distributed brain areas at once, so-
"I'm not just talking about the computational process. I mean qualia obviously exists."
-Define qualia.
"You can't define qualia; it's a primitive. But you know what I mean."
-I don't. How could I if
Thanks for that comment. Can you explain why you think you're Camp #2 according to the post? Because based on this reply, you seem firmly (in fact, quite obviously) in Camp #1 to me, so there must be some part of the post where I communicated very poorly.
( ... guessing for the reason here ...) I wrote in the second-last section that consciousness, according to Camp #1, has fuzzy boundaries. But that just means that the definition of the phenomenon has fuzzy boundaries, meaning that it's unclear when consciousness would stop being consciousness if you changed the architecture slightly (or built an AI with similar architecture). I definitely didn't mean to say that there's fuzziness in how the human brain produces consciousness; I think Camp #1 would overwhelmingly hold that we can, in principle, find a full explanation that precisely maps out the role of every last neuron.
Was that section the problem, or something else?
[Thanks to Charlie Steiner, Richard Kennaway, and Said Achmiz for helpful discussion. Extra special thanks to the Long-Term Future Fund for funding research related to this post.]
[Epistemic status: confident]
There's a common pattern in online debates about consciousness. It looks something like this:
One person will try to communicate a belief or idea to someone else, but they cannot get through no matter how hard they try. Here's a made-up example:
"It's obvious that consciousness exists."
-Yes, it sure looks like the brain is doing a lot of non-parallel processing that involves several spatially distributed brain areas at once, so-
"I'm not just talking about the computational process. I mean qualia obviously exist."
-Define qualia.
"You can't define qualia; it's a primitive. But you know what I mean."
-I don't. How could I if you can't define it?
"I mean that there clearly is some non-material experience stuff!"
-Non-material, as in defying the laws of physics? In that case, I do get it, and I super don't-
"It's perfectly compatible with the laws of physics."
-Then I don't know what you mean.
"I mean that there's clearly some experiential stuff accompanying the physical process."
-I don't know what that means.
"Do you have experience or not?"
-I have internal representations, and I can access them to some degree. It's up to you to tell me if that's experience or not.
"Okay, look. You can conceptually separate the information content from how it feels to have that content. Not physically separate them, perhaps, but conceptually. The what-it-feels-like part is qualia. So do you have that or not?"
-I don't know what that means, so I don't know. As I said, I have internal representations, but I don't think there's anything in addition to those representations, and I'm not sure what that would even mean.
and so on. The conversation can also get ugly, with boldface author accusing quotation author of being unscientific and/or quotation author accusing boldface author of being willfully obtuse.
On LessWrong, people are arguably pretty good at not talking past each other, but the pattern above still happens. So what's going on?
The Two Intuition Clusters
The basic model I'm proposing is that core intuitions about consciousness tend to cluster into two camps, with most miscommunication being the result of someone failing to communicate with the other camp. I will call the camp of boldface author Camp #1 and the camp of quotation author Camp #2.
Characteristics
Camp #1 tends to think of consciousness as a non-special high-level phenomenon. Solving consciousness is then tantamount to solving the Meta-Problem of consciousness, which is to explain why we think/claim to have consciousness. In other words, once we've explained the full causal chain that ends with people uttering the sounds kon-shush-nuhs, we've explained all the hard observable facts, and the idea that there's anything else seems dangerously speculative/unscientific. No complicated metaphysics is required for this approach.
Conversely, Camp #2 is convinced that there is an experience thing that exists in a fundamental way. There's no agreement on what this thing is – some postulate causally active non-material stuff, whereas others agree with Camp #1 that there's nothing operating outside the laws of physics – but they all agree that there is something that needs explaining. Therefore, even if consciousness is compatible with the laws of physics, it still poses a conceptual mystery relative to our current understanding. A complete solution (if it is even possible) may also have a nontrivial metaphysical component.
The camps are ubiquitous; once you have the concept, you will see it everywhere consciousness is discussed. Even single comments often betray allegiance to one camp or the other. Apparent exceptions are usually from people who are well-read on the subject and may have optimized their communication to make sense to both sides.
The Generator
So, why is this happening? I don't have a complete answer, but I think we can narrow down the disagreement. Here's a somewhat indirect explanation of the proposed crux.
Suppose your friend John tells you he has a headache. As an upstanding citizen – er, Bayesian agent – how should you update your beliefs here? In other words, what is the explanandum – the thing-your-model-of-the-world-needs-to-explain?

You may think the explanandum is "John has a headache", but that's smuggling in some assumptions. Perhaps John was lying about the headache to make sure you leave him alone for a while! So a better explanandum is "John told me he's having a headache", where the truth value of the claim is unspecified.
(If we want to get pedantic, the claim that John told you anything is still smuggling in some assumptions since you could have also hallucinated the whole thing. But this class of concerns is not what divides the two camps.)
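To make the Bayesian framing concrete, here is a toy update on the explanandum "John told me he has a headache" – a minimal sketch in which all the probabilities are invented purely for illustration. The point is just that conditioning on the claim (rather than on the headache itself) lets you weigh the lying hypothesis explicitly:

```python
# Toy Bayes update on the explanandum "John told me he has a headache".
# All numbers below are invented purely for illustration.
p_headache = 0.10          # prior probability that John actually has a headache
p_tell_if_headache = 0.80  # probability he says so, given he has one
p_tell_if_not = 0.05       # probability he claims one anyway (e.g. to be left alone)

p_tell = p_tell_if_headache * p_headache + p_tell_if_not * (1 - p_headache)
p_headache_given_tell = p_tell_if_headache * p_headache / p_tell

print(round(p_headache_given_tell, 2))  # 0.64 -- the claim is strong evidence, not proof
```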
Okay, so if John tells you he has a headache, the correct explanandum is "John claims to have a headache", and the analogous thing holds for any other sensation. But what if you yourself seem to experience something? This question is what divides the two camps:
According to Camp #1, the correct explanandum is only slightly more than "I claim to have experienced X" (where X is the apparent experience). After all, if we can explain exactly why you, as a physical system, uttered the words "I experienced X", then there's nothing else to explain. The reason it's slightly more is that you do still have some amount of privileged access to your own experience: a one-sentence testimony doesn't communicate the full set of information contained in a subjective state – but this additional information remains metaphysically non-special. (HT: wilkox.)
According to Camp #2, the correct explanandum is "I experienced X". After all, you perceive your experience/consciousness directly, so it is not possible to be wrong about its existence.
In other words, the two camps disagree about the epistemic status of apparently perceived experiences: for Camp #2, they're epistemic bedrock, whereas for Camp #1, they're model outputs of your brain, and like all model outputs of your brain, they can be wrong. The axiom of Camp #1 can be summarized in one sentence as "you should treat your own claims of experience the same way you treat everyone else's".
From the perspective of Camp #1, Camp #2 is quite silly. People have claimed that fire is metaphysically special, then intelligence, then life, and so on, and their success rate so far is 0%. Consciousness is just one more thing on this list, so the odds that they are right this time are pretty slim.
From the perspective of Camp #2, Camp #1 is quite silly. Any apparent evidence against the primacy of consciousness necessarily backfires as it must itself be received as a pattern of consciousness. Even in the textbook case where you're conducting a scientific experiment with a well-defined result, you still need to look at your screen to read the result, so even science bottoms out in predictions about future states of consciousness!
An even deeper intuition may be what precisely you identify with. Are you identical to your physical brain or body (or program/algorithm implemented by your brain)? If so, you're probably in Camp #1. Are you a witness of/identical to the set of consciousness exhibited by your body at any moment? If so, you're probably in Camp #2. That said, this paragraph is pure speculation, and the two camp phenomenon doesn't depend on it.
Representations in the literature
If you ask GPT-4 about the two most popular academic books about consciousness, it usually responds with
Consciousness Explained by Daniel Dennett; and
The Conscious Mind by David Chalmers.
If the camps are universal, we'd expect the two books to represent one camp each because economics. As it happens, this is exactly right!
Dennett devotes an entire chapter to the proper evaluation of experience claims, and the method he champions (called "heterophenomenology") is essentially a restatement of the Camp #1 axiom. He suggests that we should treat experience claims like fictional worldbuilding, where such claims are then "in good standing in the fictional world of your heterophenomenology". Once this fictional world is complete, it's up to the scientist to evaluate how its components map to the real world. Crucially, you're supposed to apply this principle even to yourself, so the punchline is again that the epistemic status of experience claims is always up for debate.
Conversely, Chalmers says this in the introductory chapter of his book (emphasis added):
In other words, Chalmers is having none of this heterophenomenology stuff; he wants to condition on "I experience X" itself.
On Researching Consciousness
Before we return to the main topic of communication, I want to point out that the camps also play a major role for research programs into consciousness. The reason is that the work you do to make progress on understanding a phenomenon is different if you expect the phenomenon to be low level vs. high level. A mathematical equation is a good goal if you're trying to describe planetary motions, but less so if you're trying to describe the appeal of Mozart.
The classic example of how this applies to consciousness is the infamous[1] Integrated Information Theory (IIT). For those unfamiliar, IIT is a theory that takes as input a description of a system as a set of elements with state spaces and a transition probability matrix,[2] which it uses to construct a mathematical object (that is meant to correspond to the system's consciousness). The math to construct this object is extensive but precisely defined. (The object includes a qualitative description and a scalar quantity meant to describe the 'amount' of consciousness.) As far as I know, IIT is the most formalized theory of consciousness in the literature.
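To give a flavor of what "a set of elements with state spaces and a transition probability matrix" means in practice, here is a minimal sketch of the kind of system description IIT takes as input (see also footnote [2]). The two-element system and its per-element conditional probabilities are made up for illustration, and the sketch only builds the input description – it does not compute Φ:

```python
# Minimal sketch (not IIT itself): the kind of system description IIT takes as
# input -- n elements with binary state spaces, where the probability of the
# next global state factorizes into per-element conditional probabilities:
#     p(u_bar | u) = prod_i p(u_bar_i | u)
# The per-element probabilities below are made up for illustration; no Phi is computed.
import itertools
import numpy as np

n = 2  # two binary elements
states = list(itertools.product([0, 1], repeat=n))  # all 2**n global states

def p_element_on(i, u):
    """Made-up probability that element i is ON next step, given current global state u."""
    return 0.9 if u[(i + 1) % n] == 1 else 0.1  # each element tends to copy its neighbour

# Build the full transition probability matrix over global states
# from the per-element factors (IIT's independence assumption).
T = np.zeros((len(states), len(states)))
for a, u in enumerate(states):
    for b, u_bar in enumerate(states):
        T[a, b] = np.prod([
            p_element_on(i, u) if u_bar[i] == 1 else 1 - p_element_on(i, u)
            for i in range(n)
        ])

assert np.allclose(T.sum(axis=1), 1.0)  # each row is a probability distribution
print(T)
```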
Attempting to describe consciousness with a mathematical object assumes that consciousness is a low-level phenomenon. What happens if this assumption is incorrect? I think the answer is that the approach becomes largely useless. At best, IIT's output could be a correlate of consciousness (though there probably isn't much reason to expect so), but it cannot possibly describe consciousness precisely because no precise description exists. In general, approaches that assume a Camp #2 perspective are in bad shape if Camp #1 ends up correct.
Is the reverse also true? Interestingly, the answer is no. If Camp #2 is correct, then research programs assuming a Camp #1 perspective are probably not optimal, but they aren't useless, either. The reason is that attempting to formalize a high-level property is not as big of a mistake as trying to informally describe a low-level property. (This is true even for our leading example: a mathematical equation for the appeal of Mozart will very likely be unhelpful, whereas an informal description of planetary motions could plausibly still be useful.) With respect to consciousness, the most central example of a Camp #1 research agenda is Global Workspace Theory, which is mostly a collection of empirical results and, as such, is still of interest to Camp #2 people.
So there is an inherent asymmetry where Camp #1 reasoning tends to appeal to the opposing camp in a way that Camp #2 reasoning does not, which is also a segue into our next section.
On Communication
In light of the two camp model, how does one write or otherwise communicate effectively about consciousness?
Because of the asymmetry we've just talked about, a pretty good strategy is probably "be in Camp #1". This is also borne out empirically:
Consciousness Explained is more popular than The Conscious Mind (or any other Camp #2 book).
Global Workspace Theory is more popular than Integrated Information Theory (or any other Camp #2 theory).
Virtually every high karma post about consciousness ever published on LessWrong takes a Camp #1 perspective, with the possible exception of Eliezer's posts in the sequences.
If you're going to write something from the Camp #2 perspective, I advise making it explicit that you're doing so (even though I don't have empirical evidence that this is enough to get a positive reception on LessWrong). One thing I've seen a lot is people writing from a Camp #2 perspective while assuming that everyone agrees with them. Surprisingly often, this is even explicit, with sentences like "everyone agrees that consciousness exists and is a mystery" (in a context where "consciousness" clearly refers to Camp #2 style consciousness). This is probably a bad idea.
If you're going to respond to something about consciousness, I very much advise trying to figure out which perspective the author has taken. Chances are this is easy to figure out even if they haven't made it explicit.
On Terminology
(Section added on 2025/01/14.)
I think one of the main culprits of miscommunication is overloaded terminology. When someone else uses a term, a very understandable assumption is that they mean the same thing you do, but when it comes to consciousness, this assumption is false surprisingly often. Here is a list of what I think are the most problematic terms.
Consciousness itself is overloaded (go figure!) since it can refer to both "a high-level computational process" and "an ontologically fundamental property of the universe". I recommend making the meaning explicit. Ways to signal the former meaning include stating that you're Camp #1, calling yourself an Illusionist or Eliminativist, or mentioning that you like Dennett. Ways to clarify the latter meaning include stating that you're in Camp #2 or calling yourself a (consciousness) realist.
Emergence can mean either "Camp #2 consciousness appears when xyz happens due to a law of the universe" or "a computational process matches the 'consciousness' cluster in thingspace sufficiently to deserve the label". I recommend either not using this term, or specifying strong vs. weak emergence, or (if the former meaning is intended) using "epiphenomenalism" instead.
Materialist can mean "I agree that the laws of physics exhaustively describe the behavior of the universe" or "the above plus I am an Illusionist" or "the above plus I think the universe is a priori unconscious" (which may be compatible with epiphenomenalism). I recommend never using this term.
Qualia can be a synonym for consciousness (if you are in Camp #2) or mean something like "this incredibly annoying and ill-defined concept that confused people insist on talking about" (if you're in Camp #1). I recommend only using this term if you're talking to a Camp #2 audience.
Functionalist can mean "I am a Camp #2 person and additionally believe that a functional description (whatever that means exactly) is sufficient to determine any system's consciousness" or "I am a Camp #1 person who takes it as reasonable enough to describe consciousness as a functional property". I would nominate this as the most problematic term since it is almost always assumed to have a single meaning while actually describing two mutually incompatible sets of beliefs.[3] I recommend saying "realist functionalism" if you're in Camp #2, and just not using the term if you're in Camp #1.
Whenever you see any of those terms used by other people, alarm bells should go off in your head. There is a high chance that they mean something different than you do, especially if what they're saying doesn't seem to make sense.
I'm calling it infamous because it has a very bad reputation on LessWrong specifically. In the broader literature, I think a lot of people take it seriously. In fact, I think it's still the most popular Camp #2 proposal. ↩︎
You can think of this formalism as a strictly more complex description than specifying the system as a graph. While edges are not specified explicitly, all relevant information about how any two nodes are connected should be implicit in how the probability of transitioning to any next state depends on the system's current state. IIT does have an assumption of independence, meaning that the probability of landing in a certain state is just the product of the probabilities of landing in the corresponding states for each element/node. This is written as $p(\bar{u} \mid u) = \prod_{i=1}^{n} p(\bar{u}_i \mid u)$, where $u_i$ is a state of $U_i$ and $U = (U_1, \dots, U_n)$ is the total system of $n$ elements. ↩︎
For example, I think both Daniel Dennett and Giulio Tononi (the creator of IIT) could reasonably be described as functionalists (precisely because IIT relies on an abstract description of a system). However, the approaches that both of them defend could hardly be more different. ↩︎