[Thanks to Charlie Steiner, Richard Kennaway, and Said Achmiz for helpful discussion. Extra special thanks to the Long-Term Future Fund for funding research related to this post.]

[Epistemic status: my best guess after having read a lot about the topic, including all LW posts and comment sections with the consciousness tag]

There's a common pattern in online debates about consciousness. It looks something like this:

One person will try to communicate a belief or idea to someone else, but they cannot get through no matter how hard they try. Here's a made-up example:

"It's obvious that consciousness exists."

-Yes, it sure looks like the brain is doing a lot of non-parallel processing that involves several spatially distributed brain areas at once, so-

"I'm not just talking about the computational process. I mean qualia obviously exist."

-Define qualia.

"You can't define qualia; it's a primitive. But you know what I mean."

-I don't. How could I if you can't define it?

"I mean that there clearly is some non-material experience stuff!"

-Non-material, as in defying the laws of physics? In that case, I do get it, and I super don't-

"It's perfectly compatible with the laws of physics."

-Then I don't know what you mean.

"I mean that there's clearly some experiential stuff accompanying the physical process."

-I don't know what that means.

"Do you have experience or not?"

-I have internal representations, and I can access them to some degree. It's up to you to tell me if that's experience or not.

"Okay, look. You can conceptually separate the information content from how it feels to have that content. Not physically separate them, perhaps, but conceptually. The what-it-feels-like part is qualia. So do you have that or not?"

-I don't know what that means, so I don't know. As I said, I have internal representations, but I don't think there's anything in addition to those representations, and I'm not sure what that would even mean.

and so on. The conversation can also get ugly, with boldface author accusing quotation author of being unscientific and/or quotation author accusing boldface author of being willfully obtuse.

On LessWrong, people are arguably pretty good at not talking past each other, but the pattern above still happens. So what's going on?

The Two Intuition Clusters

The basic model I'm proposing is that core intuitions about consciousness tend to cluster into two camps, with most miscommunication being the result of someone failing to communicate with the other camp. For this post, we'll call the camp of boldface author Camp #1 and the camp of quotation author Camp #2.


Camp #1 tends to think of consciousness as a non-special high-level phenomenon. Solving consciousness is then tantamount to solving the Meta-Problem of consciousness, which is to explain why we think/claim to have consciousness. (Note that this means explaining the full causal chain in terms of the brain's physical implementation.) In other words, once we've explained why people keep uttering the sounds kon-shush-nuhs, we've explained all the hard observable facts, and the idea that there's anything else seems dangerously speculative/unscientific. No complicated metaphysics is required for this approach.

Conversely, Camp #2 is convinced that there is an experience thing that exists in a fundamental way. There's no agreement on what this thing is – some postulate causally active non-material stuff, whereas others agree with Camp #1 that there's nothing operating outside the laws of physics – but they all agree that there is something that needs explaining. Moreover, even if consciousness is compatible with the laws of physics, it still poses a conceptual mystery relative to our current understanding. A complete solution (if it is even possible) may also have a nontrivial metaphysical component.

The camps are ubiquitous; once you have the concept, you will see it everywhere consciousness is discussed. Even single comments often betray allegiance to one camp or the other. Apparent exceptions are usually from people who are well-read on the subject and may have optimized their communication to make sense to both sides.

The Generator

With the description out of the way, let's get to the interesting question: why is this happening? I don't have a complete answer, but I think we can narrow down the disagreement. Here's a somewhat indirect explanation of the proposed crux.

Suppose your friend John tells you he has a headache. As an upstanding citizen Bayesian agent, how should you update your beliefs here? In other words, what is the explanandum – the thing-your-model-of-the-world-needs-to-explain?

You may think the explanandum is "John has a headache", but that's smuggling in some assumptions. Perhaps John was lying about the headache to make sure you leave him alone for a while! So a better explanandum is "John told me he's having a headache", where the truth value of the claim is unspecified.

(If we want to get pedantic, the claim that John told you anything is still smuggling in some assumptions since you could have also hallucinated the whole thing. But this class of concerns is not what divides the two camps.)
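The difference between the two explananda can be made concrete with a toy Bayesian calculation. All numbers below are invented for illustration; the point is only that conditioning on "John claims a headache" keeps the lying hypothesis on the table, whereas conditioning on "John has a headache" assumes it away.

```python
# Toy Bayesian update on the explanandum "John claims to have a headache".
# All probabilities are made up for illustration.

prior_headache = 0.1          # P(headache)
p_claim_given_headache = 0.9  # P(claims headache | headache)
p_claim_given_no = 0.05       # P(claims headache | no headache), e.g. lying

# Total probability of hearing the claim:
p_claim = (p_claim_given_headache * prior_headache
           + p_claim_given_no * (1 - prior_headache))

# Posterior probability that John actually has a headache, given the claim:
posterior = p_claim_given_headache * prior_headache / p_claim

print(round(posterior, 3))  # → 0.667
```

With these particular numbers, hearing the claim leaves you only about 67% confident that John actually has a headache – which is exactly the uncertainty that the weaker explanandum preserves.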

Okay, so if John tells you he has a headache, the correct explanandum is "John claims to have a headache", and the analogous thing holds for any other sensation. But what if you yourself seem to experience something? This question is what divides the two camps:

  • According to Camp #1, the correct explanandum is only slightly more than "I claim to have experienced X" (where X is the apparent experience). After all, if we can explain exactly why you, as a physical system, uttered the words "I experienced X", then there's nothing else to explain. The reason it's slightly more is that you do still have some amount of privileged access to your own experience: a one-sentence testimony doesn't communicate the full set of information contained in a subjective state – but this additional information remains metaphysically non-special. (HT: wilkox.)

  • According to Camp #2, the correct explanandum is "I experienced X". After all, you perceive your experience/consciousness directly, so it is not possible to be wrong about its existence.

In other words, the two camps disagree about the epistemic status of apparently perceived experiences: for Camp #2, they're epistemic bedrock, whereas for Camp #1, they're model outputs of your brain, and like all model outputs of your brain, they can be wrong. The axiom of Camp #1 can be summarized in one sentence as "you should treat your own claims of experience the same way you treat everyone else's".

From the perspective of Camp #1, Camp #2 is quite silly. People have claimed that fire is metaphysically special, then intelligence, then life, and so on, and their success rate so far is 0%. Consciousness is just one more thing on this list, so the odds that they are right this time are pretty slim.

From the perspective of Camp #2, Camp #1 is quite silly. Any apparent evidence against the primacy of consciousness necessarily backfires as it must itself be received as a pattern of consciousness. Even in the textbook case where you're conducting a scientific experiment with a well-defined result, you still need to look at your screen (or other output device) to read the result, so even science bottoms out in predictions about future states of consciousness!

An even deeper intuition may be what precisely you identify with. Are you identical to your physical brain or body (or program/algorithm implemented by your brain)? If so, you're probably in Camp #1. Are you a witness of/identical to the set of consciousness exhibited by your body at any moment? If so, you're probably in Camp #2. That said, this paragraph is pure speculation, and the two-camp phenomenon doesn't depend on it.

Representations in the literature

If you ask GPT-4 about the two most popular academic books about consciousness, it usually responds with

  1. Consciousness Explained by Daniel Dennett; and

  2. The Conscious Mind by David Chalmers.

If the camps are universal, we'd expect the two books to represent one camp each because economics. As it happens, this is exactly right!

Dennett devotes an entire chapter to the proper evaluation of experience claims, and the method he champions (called "heterophenomenology") is essentially a restatement of the Camp #1 axiom. He suggests that we should treat experience claims like fictional worldbuilding, where such claims are then "in good standing in the fictional world of your heterophenomenology". Once this fictional world is complete, it's up to the scientist to evaluate how its components map to the real world. Crucially, you're supposed to apply this principle even to yourself, so the punchline is again that the epistemic status of experience claims is always up for debate.

Conversely, Chalmers says this in the introductory chapter of his book (emphasis added):

Some say that consciousness is an "illusion," but I have little idea what this could even mean. It seems to me that we are surer of the existence of conscious experience than we are of anything else in the world. I have tried hard at times to convince myself that there is really nothing there, that conscious experience is empty, an illusion. There is something seductive about this notion, which philosophers throughout the ages have exploited, but in the end it is utterly unsatisfying. I find myself absorbed in an orange sensation, and something is going on. There is something that needs explaining, even after we have explained the processes of discrimination and action: there is the experience.

True, I cannot prove that there is a further problem, precisely because I cannot prove that consciousness exists. We know about consciousness more directly than we know about anything else, so "proof" is inappropriate. The best I can do is provide arguments wherever possible, while rebutting arguments from the other side. There is no denying that this involves an appeal to intuition at some point; but all arguments involve intuition somewhere, and I have tried to be clear about the intuitions involved in mine.

In other words, Chalmers is having none of this heterophenomenology stuff; he wants to condition on "I experience X" itself.

Why it matters

While my leading example was about miscommunication, I think the camps have consequences in other areas as well, which are arguably more significant. To see why, suppose we

  • model the brain as a computational network; and
  • ask where consciousness is located in this network.

For someone in Camp #1, the answer has to be something like this: consciousness is [the part of our brain that creates a unified narrative and produces our reports about "consciousness"].[1] So consciousness will be a densely connected part of this network – that is, unless you dispute that it's even possible to restrict it to just a part of the network, in which case it's more "some of the activity of the full network". Either way, consciousness is identified with its functional role, which makes the concept inherently fuzzy. If we built an AI with a similar architecture, we'd probably say it also had consciousness – but if someone came along and claimed, "wait a minute, that's not consciousness!", there'd be no fact of the matter as to who is correct, any more than there's a fact of the matter about the precise number of pebbles required to form a heap.

Conversely, Camp #2 views consciousness as a precisely defined phenomenon. And if this phenomenon is causally responsible for our talking about it,[2] then you can see how this view suggests a very different picture: consciousness is now a specific thing in the brain (which may or may not be physically identifiable with a part of the network), and the reason we talk about it is that we have it – we're reporting on a real thing.

These two views suggest substantially different approaches to studying the phenomenon – whether or not something has clear boundaries is an important property! So the camps don't just matter for esoteric debates about qualia but also for attempts to reverse-engineer consciousness, and to a lesser extent, for attempts to reverse-engineer the brain...

... and also for morality, which is a case where the camps are often major players even if consciousness isn't mentioned. Camp #2 tends to view moral value as mostly or entirely reducible to conscious states, an intuition so powerful that they sometimes don't realize it's controversial. But the same reduction is problematic for Camp #1 since consciousness is now an inherently fuzzy phenomenon – and there's no agreed-upon way to deal with this problem. Some want to tie morality to consciousness anyway, which can arguably work under a moral anti-realist framework. Others deny that morality should be about consciousness to begin with. And some bite the bullet and accept that their views imply moral nihilism. I've seen all three views (plus the one from Camp #2) expressed on LessWrong.


Given the gulf between the two camps, how does one avoid miscommunication?

The answer may depend on which camp you're in. For the reasons we've discussed, it tends to be easier for ideas from Camp #1 to make sense to Camp #2 than vice-versa. If you study the brain looking for something fuzzy, there's no reason you can't still make progress if the thing actually has crisp boundaries – but if you bake the assumption of crisp boundaries into your approach, your work will probably not be useful if the thing is fuzzy. Once again, we need only look at the two most prominent theories in the literature for an example of this. Global Workspace Theory is peak Camp #1 stuff,[3] but it tends to be at least interesting to most people in Camp #2. Integrated Information Theory is peak Camp #2 stuff,[4] and I've yet to meet a Camp #1 person who takes it seriously. Global Workspace Theory is also the more popular of the two, even though Camp #1 is supposedly in the minority among researchers.[5]

The same pattern seems to hold on LessWrong across the board: Consciousness Explained gets brought up a lot more than The Conscious Mind, Global Workspace Theory gets brought up a lot more than Integrated Information Theory, and most high-karma posts (modulo those of Eliezer) are Camp #1 adjacent – even though there are definitely a lot of Camp #2 people here. Kaj Sotala's Multiagent Models of Mind series is a particularly nice example of a Camp #1 idea[6] with cross-camp appeal, and there's nothing analogous out of Camp #2.

So if you want to share ideas about this topic, it's probably a good idea to be in Camp #1. If that's not possible, I think just having a basic understanding of how ~half your audience thinks is helpful. There are a lot of cases where asking, "does this argument make sense to people with the other epistemic starting point?" is all you need to avoid the worst misunderstandings.

You can also try to convince the other side to switch camps, but this tends to work only around 0% of the time, so it may not be the best practical choice.

  1. This doesn't mean anything that claims to be conscious is conscious. Under this view, consciousness is about the internal organization of the system, not just about its output. After all, a primitive chatbot can be programmed to make arbitrary claims about consciousness. ↩︎

  2. This assumption is not trivial. For example, David Chalmers' theory suggests that consciousness has little to no impact on whether we talk about it. The class of theories that model consciousness as causally passive is called epiphenomenalism. ↩︎

  3. Global Workspace Theory is an umbrella term for a bunch of high-level theories that attempt to model the observable effects of consciousness under a computational lens. ↩︎

  4. Integrated Information Theory holds that consciousness is identical to the integrated information of a system, modeled as a causal network. There are precise rules to determine which part(s) of a network are conscious, and there is a scalar quantity called Φ ("big phi") that determines the amount of consciousness of a system, as well as a much more complex object (something like a set of points in high-dimensional Euclidean space) that determines its character. ↩︎

  5. According to David Chalmers's book, the proportion skews about 2/3 vs. 1/3 in favor of Camp #2, though he provides no source for this, merely citing "informal surveys". The phenomenon he describes isn't exactly the same as the two-camp model, but it's so similar that I expect high overlap. ↩︎

  6. I'm calling it a Camp #1 idea because Kaj defines consciousness as synonymous with attention for the purposes of the sequence. Of course, this is just a working definition. ↩︎

Comments (152)

As someone who could be described as "pro-qualia": I think there are still a number of fundamental misconceptions and confusions that people bring to this debate.  We could have a more productive dialogue if these confusions were cleared up.  I don't think that clearing up these confusions will make everyone agree with me on everything, but I do think that we would end up talking past each other less if the confusions were addressed.

First, a couple of misconceptions:

1.) Some people think that part of the definition of qualia is that they are necessarily supernatural or non-physical.  This is false.  A quale is just a sense perception.  That's it.  The definition of "qualia" is completely, 100% neutral as to the underlying ontological substrate.  It could certainly be something entirely physical.  By accepting the existence of qualia, you are not thereby committing yourself to anti-physicalism.

2.) An idea I sometimes see repeated is that qualia are this sort of ephemeral, ineffable "feeling" that you get over and above your ordinary sense perception.  It's as if, you see red, and the experience of seeing red gives you a certain "vibe", a…

The thought experiments suggest that qualia are tied to memory formation. If your nociceptors are firing like crazy but the CNS never updates on it, was there any pain? Then the obvious next question is: what distinguishes qualia from memory formation?
Ape in the coat (5mo):
Hard upvote for taking the time to describe the concept explicitly and comprehensibly, highlighting the possible places of confusion – the non-physical aspect of qualia that is occasionally smuggled into the definition. When you define qualia the way you do, I (a Camp #1 person, as it turned out) am completely on board with you. Indeed, I expect them to be explained with neuroscience, but as you've noticed yourself, that's a bit of a different story.

I have a simple, yet unusual, explanation for the difference between Camp #1 and Camp #2: we have different experiences of consciousness.  Believing that everyone has our kind of consciousness, of course we talk past each other.

I’ve noticed that in conversations about qualia, I’m always in the position of Mr Boldface in the example dialog: I don’t think there is anything that needs to be explained, and I’m puzzled that nobody can tell me what qualia are using sensible words.  (I‘m not particularly stupid or ignorant; I got a degree in philosophy and linguistics from MIT.)  I suggest a simple explanation: some of us have qualia and some of us don’t.  I‘m one of those who don’t.  And when someone tries to point at them, all I can do is to react with obtuse incomprehension, while they point at the most obvious thing in the world.  It apparently is the most obvious thing in the world, to a lot of people.

Obviously I have sensory impressions; I can tell you when something looks red.  And I have sensory memories; I can tell you when something looked red yesterday.  But there isn’t any hard-to-explain extra thing there.

One might object that qualia are…

Alternative explanation: everyone has qualia, but some people lack the mental mechanism that makes them feel like qualia require a special metaphysical explanation. Since qualia are almost always represented as requiring such an explanation (or at least as ineffable, mysterious and elusive), these latter people don't recognize their own qualia as that which is being talked about.

How can people lack such a mental mechanism? Either

  1. they simply have never done the particular kind of introspection that's needed to realize the weirdness of qualia, or
  2. there is a correct reductive explanation for qualia, and some people's naive intuition just happens to naturally coincide with this explanation, or
  3. same as 2 except that the explanation is (partially or wholly) incorrect. Presumably, sufficient introspection of the right type would move these people to either 1 or 2 (edit: or to the category of people who are puzzled about qualia, of course).

I don't have a clue about the relative prevalences of these groups, nor do I mean to make a claim about which group you personally are in.

Carl Feynman (4mo):
You've summarized this more elegantly than I can. Let me rewrite your explanation into my slightly different terminology: "everyone has ~~qualia~~ sensations, but some people lack the mental mechanism that makes them feel like there are also qualia requiring a special metaphysical explanation. Since qualia are almost always represented as requiring such an explanation (or at least as ineffable, mysterious and elusive), these latter people don't recognize their own ~~qualia~~ sensations as that which is being talked about." I would agree with this rephrasing as describing my experience.  I think the rephrasing is harmless, just that what I'm calling (sensation + qualia) is what you're calling (qualia + the mental mechanism etc.)

As for how I can lack such a mental mechanism, I don't think you're on the right track.  Taking the points in order:

1. I've done plenty of introspection.  I suppose I might be doing 'the wrong kind', but until someone tells me how to do 'the right kind', I doubt it.
2. This might be the case for me.  But if it is, I don't know what the 'correct explanation' is.  When I introspect, I simply don't experience anything 'requiring a metaphysical explanation', or that is 'mysterious, ineffable or elusive', to use your terminology.
3. I'd want to hear from someone who had actually done this before I think it's possible.

That's interesting, but I doubt it's what's going on in general (though maybe it is for some camp #1 people). My instinct is also strongly camp #1, but I feel like I get the appeal of camp #2 (and qualia feel "obvious" to me on a gut level). The difference between the camps seems to me to have more to do with differences in philosophical priors.

Carl Feynman (5mo):
Oh, I don’t think it’s the only difference between Camp #1 and Camp #2.  But it certainly creates a pre-philosophical bias toward Camp #1, for those of us who don’t have qualia.  I suspect Daniel Dennett is also in the no-qualia camp, given the arguments advanced in his paper “Quining Qualia”.
There are less drastic ways of explaining qualiaphobia. Firstly, to get qualia you have to stop believing in naive realism. Naive realism means that colours are taken to be painted on the surfaces of objects and perceived exactly as they are. People vary a lot in how easy they find it to get away from naive realism.

Secondly, subjective feelings are what scientists are trained to ignore in favour of the 3rd-person perspective. That's a perfectly good methodological rule in most areas of science, but it tends to get exaggerated into a fact of reality -- "feels don't real". Consciousness isn't a typical scientific field -- subjectivity is central.
Carl Feynman (5mo):
First, a side note: I don’t like the word “qualiaphobia” for what we’re discussing here, because (a) I’m not afraid of qualia, I just don’t think I have them, and (b) it smacks of homophobia or transphobia, which have a negative connotation. More later – your comments provoke me to have many thoughts, which I’ll have to finish thinking later, because I have to go to work now.
Carl Feynman (5mo):
“To get qualia you have to stop believing in naive realism.”  Does “get” mean “experience” or “acquire”?  In any event, I don’t believe in naive realism (if I have correctly understood what naive realism means). I am quite aware of the enormous processing it takes to keep object colors constant under changes in illumination.  I further believe that many things that we feel are “out there” are in fact concocted by our brain to make the world easier to understand.  That includes the ideas of objects that have properties, kinds of objects, people who have beliefs, desires and intentions, and the passage of time.  None of these appear in true reality, but everybody thinks with them, because otherwise it’s too hard.

“Feelings are what scientists are trained to ignore.”  It’s true that I was raised as a scientist, but I’ve believed in the validity of subjective evidence since my sophomore year at college, when I took a cognitive science class and had my mind expanded.  That was also about the time people tried to explain qualia to me, and my first experience of completely failing to get the point.
Neither; it means "understand semantically". What does "get the point" mean? Are you saying you failed to understand what "qualia" means, or failed to understand why qualia are significant?
Carl Feynman (5mo):
I failed to understand what qualia were.  Their attempts at explanation failed to engage with anything in my introspection, and in some cases seemed like word salad.  I was eventually led to the conclusion that one of the following was true: (a) I am too dumb to understand qualia.  Probably not true, since I am smart enough for most things.  (b) It’s one of those wooly concepts that continental philosophers like, and doesn’t actually have a referent.  Probably not true, since down-to-earth philosophers, like Dennett or Ned Block, talk about it.  (c) My cognition is such that I don’t have what they were trying to point at.
(d) The idea that the word must mean something weird, since it is a strange word -- it cannot be an unfamiliar term for something familiar. You said you had the experience of redness. I told you that's a quale. Why didn't that tell you what "qualia" means?
When you see the color red, what is that like? When you run your hand over something rough and bumpy, what is that like? When you taste salt, what is that like?
[comment deleted] (5mo)
Rafael Harth (5mo):
I tend to think that, regardless of which camp is correct, it's unlikely that the difference is due to different experiences, and more likely that one of the two sides is making a philosophical error. Reason being that experience itself is a low-level property, whereas judgments about experience are a high-level property, and it generally seems to be the case that the variance in high-level properties is way way higher. E.g., it'd be pretty surprising if someone claimed that red is more similar to green than to orange, but less surprising if they had a strange idea about the meaning of life, and that's pretty much true regardless of what exactly they think about the meaning of life. We've just come to expect that pretty much any high-level opinion is possible.
Ape in the coat (5mo):
I've heard this approach to the question multiple times and I must say I really dislike it. Because:

1. It's an attempt to sidestep the philosophical disagreement instead of resolving it.
2. It makes us even more map-territory confused, as now we conflate absence of belief in qualia with absence of qualia.
3. Most obviously, it fails to acknowledge that people do change their views on the subject. I used to be a subjective idealist and now I'm a reductive materialist. Did I lose my qualia in the process?
Carl Feynman (5mo):
1. The existence of people without qualia might be a way to displace the question from philosophy to cognitive psychology, where at least we have some ways to answer questions.  I don’t think it’s illegitimate for me to say what I say; I think it’s fascinating additional data.
2. Well, we have to be careful to keep the two concepts separate.  I don’t think I have qualia, but I’m sure other people do.  They’ve claimed to on many occasions, and I don’t think they’re lying or deceived.  From my point of view, other people have some extra thing on top of their sensations, which produces philosophical conundrums when they try to think about it.
3. You tell me! People say qualia are the most obvious thing in the world.  Do you feel like you have them?
As someone who definitely has qualia (and believes that you do too), no, that's not what's going on. There's some confusing extra thing on top of behavior - namely, sensations. There would be no confusion if the world were coupled differential equations all the way down (and not just because there would be no one home to be confused), but instead we're something that acts like a collection of coupled differential equations but also, unlike abstract mathematical structures, is like something to be. 
Carl Feynman (5mo):
“There’s some confusing extra thing on top of behavior, namely sensations.”  Wow, that’s a fascinating notion.  But presumably if we didn’t have visual sensations, we’d be blind, assuming the rest of our brain worked the same, right?  So what exactly requires explanation?  You’re postulating something that acts just like me but has no sensations, i.e. is blind, deaf, etc.  I don’t see how that can be a coherent thing you’re imagining.

When I read you saying “is like something to be,” I get the same feeling I get when someone tries to tell me what qualia are – it’s a peculiar collection of familiar words.  It seems to me that you’re trying to turn a two-place predicate “A imagines what it feels like to be B” into a one-place predicate “B is like something to be”, where it’s a pure property of B.
If you lacked information about your environment, you would be functionally impaired. Information about your environment doesn't have to be visual... it could be sonar or something. It doesn't have to be sensory either... you could just somehow know that there is a door ahead of you, and a turning to the left. Presumably, that's how Dennett thinks it works. "Time and space are, and they can bend and warp" is a peculiar combination of familiar words.
Ape in the coat (5mo):
There are both philosophical (What are qualia? What does having/not having qualia imply?) and neuroscientific (How exactly does the closest referent to "qualia" actually work?) aspects to the problem. Both require an answer. Substituting one for the other won't do. The issue with the philosophical aspect isn't that we can't get an answer. It's that we get too many, mutually incompatible answers, and it's hard to use definitions consistently in such a situation.

I agree that there may be fascinating additional data in the realm of neuroscience. I wouldn't be much surprised if some people indeed have much more impressive subjective experiences than others. It's legitimate to talk about it as a possibility, and yet it's only tangential to the philosophical questions at hand. As you may see from the comments, these people also claim that you misunderstand them with such an interpretation. I don't think they are lying either.

See my reply to GeorgeWilfrid and his original comment. I have qualia defined the way he did and I expect you to have them too. Let's call it weak qualia (wq). On the other hand, if qualia are defined as irreducible and non-physical – hard qualia (hq) – then I believe that I don't have them, nor did I have them in my subjective idealist days, and I don't think anyone does, no matter how awesome their subjective experience is.

The problem, however, is that there is a motte-and-bailey dynamic going on. Some people confuse wq with hq, some people think that wq imply hq. People who think they have hq often use the same language as people who think they have only wq. People arguing past each other often use different definitions. And so on.

When we've fixed the definitions, I believe we can properly solve the philosophical aspect. The question is reduced to whether wq indeed imply hq. I think the argument for it works like this (if there is someone who holds the wq->hq position here, please correct me): the mistake here is a failure to account for the map-territory distinction…
What do you think about zombies? Can you imagine something just like you that doesn't feel anything when it looks at anything?
1 · Carl Feynman · 5mo
So the philosophical zombie is a person who reports a completely normal set of sensations and emotions, while actually having none of them, right? I think zombies would be a ridiculous way to build an organism. It's much easier to build something that reports the truth than to build a perfect liar. I could imagine such a thing, but that doesn't say much about whether a zombie could exist. I read a lot of science fiction and can imagine six impossible things before breakfast.
The point is not that zombies exist. The point is that "it's a ridiculous way to build an organism" is not a physical law, and the actual physical laws don't seem to specify that our world is not a zombie-world. For anything else from science fiction you can in principle check the corresponding physical equations and conclude that the thing is impossible. How do you do that for the difference between our world and the zombie-world?
4 · Ape in the coat · 5mo
It kind of is. An organism evolved to be a perfect liar about having consciousness has to have a different causal history than an organism evolved to have consciousness and talk about it, so the physical laws that produced those histories have to be different too. Also, notice that what you are describing here isn't the classical p-zombie as originally stated: an entity that does everything a conscious human does, for exactly the same reasons, down to every elementary particle in the brain, but still lacks consciousness. It's a "zombie master" scenario, where some other cause makes the zombie pretend that it has consciousness. Confusion between these two scenarios is common and misleading.
1 · Carl Feynman · 5mo
Well, it looks like I misremembered what a p-zombie is. I think the notion of "an entity that does everything a human being does for exactly the same reasons [...] but lacks consciousness" is completely absurd. Obviously someone who lacks consciousness is asleep or comatose. I don't see how someone who's walking around, talking about past experience, reporting sensations, etc., could fail to be conscious. This has always seemed perfectly obvious to me, but it's not obvious to other highly sensible people. Could it be that they're experiencing some extra thing in their sensations, one that says "this could be dispensed with; you would have the same sensations, but then you wouldn't be conscious"? If so, I'm here to tell you the good news that your brain is lying about that.
A p-zombie is supposed to lack qualia, not consciousness in the medical sense.
Absurd why? What physical law prevents walking around, talking about past experience and reporting sensations from feeling like being comatose?
1 · Carl Feynman · 5mo
Well, it's fascinating the extent to which we each find the other's position completely unrealistic. I think we're getting closer to a crux, which is good. I presume you're not talking about Cotard's delusion, which can result in people walking around and talking while claiming they're dead. That's just a delusion. We measure depth of coma with the Glasgow Coma Scale, which ranges from 3 (eyes closed, no speech, motionless even under painful stimuli) to 15 (normal). You're talking about people who feel comatose while still scoring 15 on the Glasgow Coma Scale? How can someone be comatose and still respond to stimuli, report memories, and perform voluntary actions? It seems implicit in the definition of comatose that that's impossible. It may not be a physical law, but it's certainly a medical one.
(For the record, I don't find your position completely unrealistic.) Not "be comatose" - "feel comatose". No one is disputing medical knowledge - it certainly works in our world. But, regardless of how much it contradicts the usual heuristics of science, and however unlikely it is to actually work like that in reality - can you imagine that the world could be different in only the "feeling" aspect? Where zombie-you looks at the blue sky and doesn't feel like you do in the same situation, but feels the way you imagine feeling when comatose. If you don't immediately reject that idea as implausible, do you have a concept of it at all? If you do, then the problem is that, regardless of how heuristically absurd it is, the actual laws of physics don't seem to specify that our world is not a zombie-world.
Crucially, in a world containing only these zombies - where no one has ever had qualia - the zombies start arguing about the existence of qualia. (Otherwise, this would be a way to distinguish zombies from people using a physical test.)
1 · Carl Feynman · 5mo
That's just unimaginably weird. In my experience of feeling comatose, having no vision and not laying down any memories were notable features. There's no way I can experience a blue sky while simultaneously not experiencing it. Nor can I report on my recent experiences while being unable to form memories. See, this is why I think qualia are a thing on top of sensation. You experience qualia and feel that without them something vital would be missing - that it would be like feeling comatose. And I'm here to tell you that life without qualia is pretty sweet.
Zombie-you wouldn't experience the blue sky - they would only ever experience being comatose. They would behave like you behave, down to the level of neurons and atoms, but they would not experience what you experience when you see a blue sky. I understand that this may sound unlikely and, yes, weird, but what's so hard to imagine? You just imagine feeling comatose, nothing more. Surely you can imagine feeling angry when in reality you would feel sad - how is this different?
1 · Carl Feynman · 5mo
That is, from my point of view, asking me to have two contradictory experiences at once: being normal and being comatose. And you're going to say, "not being comatose, feeling comatose." And I will say, I can't imagine acting awake while also feeling comatose. Let's look at a particular feature of coma: not being able to stand upright. I would feel like I was unable to stand, while in fact standing up whenever appropriate. And this is not some crazy delusion - in fact my brain is operating normally. No, I can't imagine what that would feel like. We're both intelligent persons, not trying to be deceptive. And yet we have a large difference in what we can imagine ourselves being like when we introspect. I claim this is due to an actual difference in the structure of our cognition, best summed up as "I don't have qualia, you do."
That would feel like being comatose. Again, I could understand if you said "it's unlikely to happen", but I still don't understand how not being able even to imagine it would work. Some similar things can even happen in the real world: you can consciously see nothing and not feel like you can move your hand, yet still move your hand. You can just extrapolate from this to not feeling anything, and say that your feelings about being comatose are delusional in that case. Or, can you imagine that it's not you who experiences the blue sky - your copy does - while the actual you is a comatose ghost? Like, you don't even need to have qualia to imagine qualia - they can be modeled by just adding a node to your causal graph that includes neurons or whatever. You can do that with your models, right?
Your disagreement is mirrored almost exactly in Yudkowsky's post Zombies Redacted. The crucial point (as mentioned also in Hastings' sister comment) is that the thought experiment breaks down as soon as you consider that the zombies make exactly the same claims about consciousness as we do, while not actually having any coherent reason for making such claims (since they are defined not to have consciousness in the first place). I guess you can imagine, in some sense, a scenario like that, but what's the point of imagining a hypothetical set of physical laws that lacks internal coherence?
Zombies being wrong is not a problem for the experiment's coherence - their reasons for making claims about consciousness simply terminate at the level of physical description. The point is that the laws of physics don't seem to prohibit a scenario like this: for other imagined things you can in principle run the calculations and say "no, evolution on Earth would not produce talking unicorns", but where is the part that says we are not zombies? There are reasons not to believe in zombies, and more reasons not to believe in epiphenomenalism, like "it would be a coincidence for us to know about epiphenomenal consciousness" - but the problem is that these reasons seem to lie outside the physical laws.
I don't think they lack internal coherence; you haven't identified a contradiction in them. But one point of imagining them is to highlight the conceptual distinction between, on the one hand, all of the (in principle) externally observable features or signs of consciousness, and, on the other hand, qualia. The fact that we can imagine these coming completely apart, and that the only 'contradiction' in the idea of zombie world is that it seems weird and unlikely, shows that these are distinct (even if closely related) concepts.

This conceptual distinction is relevant to questions such as whether a purely physical theory could ever 'explain' qualia, and whether the existence of qualia is compatible with a strictly materialist metaphysics. I think that's the angle from which Yudkowsky was approaching it (i.e. he was trying to defend materialism against qualia-based challenges).

My reading of the current conversation is that Signer is trying to get Carl to acknowledge the conceptual distinction, while Carl is saying that while he believes the distinction makes sense to some people, it really doesn't to him, and his best explanation for this is that some people have qualia and some don't.
What is "looking red", in terms of something physical?
4 · Carl Feynman · 5mo
The brevity of your question makes me suspect that I am about to fall into a philosophical trap.  But I will go ahead and answer it. There’s one particular interface in my brain.  It’s got some kind of reference to the thing in question, bound to a representation for the color I’ve been trained to call ‘red’.  This color mostly is detected for objects that mostly reflect the longest wavelengths of visible light.  Is that the kind of ‘physical’ you were looking for?
Let's just say it's a test to see whether you have qualia in your worldview after all. I'll try not to get stuck on terms like interface, reference, and binding, and focus for now just on this entity called a "representation". Is that the thing which is red, or which looks red? And if so, could you remind us what it is, physically?
1 · Carl Feynman · 5mo
Nope, it's the object in the world, an apple or whatever, that looks red and (usually) is red. The representation in my brain usually responds to a red object in the world, but it can be fooled by psychedelics or clever illumination. I don't know how data structures are represented in my brain, so I can't answer "what it is, physically". If I knew more neuroscience, I might be able to localize it to a particular brain area, but no more (given my current understanding of what we know). I hope you're going to tell me some way to tell if I have qualia :-).
So what happens if you hallucinate a color? When that happens, is there anything red, any "redness" or "experience of redness" there? 
1 · Carl Feynman · 5mo
There is nothing red, there is no redness, but there is an experience of redness.  It’s just another case of my brain lying to me, like telling me I don’t have a blind spot, or have color vision all the way to the periphery.
That's exactly a quale.
What about when you're not hallucinating? On that occasion, is there redness as well as an experience of redness?
1 · Carl Feynman · 5mo
The object is red, I experience it as red.  I suppose you could say there “is redness”, but I find that a strange way to put it.  
I have been mulling over this discussion, trying to identify the best ways to move it forward - focus on the case of an object that isn't red but still looks red? focus on the relationship between representation and experience? - not just because the nature of reality is interesting, but because getting the nature of consciousness right is potentially central to alignment of superintelligence (what OpenAI is now calling "superalignment"). I was also interested in exploring your hypothesis that some substantial difference (of phenomenology, cognition, and/or metacognition), maybe even a phenotypic difference, might be the reason why some naturally favor qualia and others don't.

However, in another comment you have declared that, along with qualia, you also disbelieve in properties, kinds, people, and time; these are all concocted by our brains. So your ontology seems to be one in which there's physics, and then there are brains, which fabricate an entire fake reality, which is nonetheless the reality we live in. At this point, I have to conclude that I'm not dealing with a subtly different phenomenology, but rather with the effects of a philosophical belief system. There's no reason to suppose that your skepticism towards qualia has a special subtle cause when you deny the reality of so much else.

Maybe we could call it Democritus Syndrome, since he had a very similar outlook. In reality, there's just atoms and the void; but "by convention", we also say there's color and taste and everything else. Interestingly, the fragment which reports this proposition (fragment 125) actually attributes it to the intellect, and also presents a riposte from the senses, who say: how can you deny us when you rely on our evidence? But Democritus is just one of the first known examples of this stance. When Locke distinguishes secondary qualities from primary qualities, it's a step in the same direction. One response to that distinction is found in doctrines like propert
2 · Carl Feynman · 5mo
I first noticed my inability to understand qualia in 1981 or 1982, when I was an undergraduate in Ned Block's Philosophy of Mind class. That wasn't a big deal; I didn't understand lots of things as an undergraduate. But it was a niggling problem. It wasn't until sometime in the '90s that I came up with my "I don't have qualia" theory. And it wasn't until 2016, when I read The Thing Itself by Adam Roberts, along with other Kantian philosophy, that I realized that many things whose reality I accepted were actually constructed by my mind for my convenience. That's a problem for the theory that Democritus Syndrome causes claimed disbelief in qualia, since I claimed not to have qualia before I caught Democritus Syndrome. Here's an alternate theory. Qualia are a kind of tag on top of perceptions that says "This is real; reason on that basis." I don't have that tag, so it's easier for me to believe that my mind has constructed reality from sense data rather than that I directly perceive it. The direction of causality is reversed from your theory.
Saying that qualia don't really exist, but only appear to, solves nothing. For one thing, qualia are definitionally appearances, so you haven't eliminated them. For another, the physicalist still needs to explain how and why the brain produces such appearances. For a third, you have to know what "qualia" means to express a sceptical theory about them.
I mean, by your definition the experience of red is a quale; by their definition experience is some neural activity, and then there is nothing else to explain. The sceptical theory is only sceptical about "but experience is not neural activity!", and for that, "qualia, as a thing that is not neural activity, only appear to exist" is a reasonable answer when appearances are defined to be some neural activity.
That's a theory, not a definition. Confusion between theories and definitions is one of the persistent problems in this debate.
The way I see it, every definition sits under a theory that the model which includes those definitions describes reality better or worse; otherwise they are just empty words. So "qualia are experiences", or just "there are such things as qualia", are also implicitly low-resolution theories. Experiences are privileged only under misguided theories of knowledge (which are theories - it's in the name) which make experiences axiomatically true. Otherwise, just gesturing at "you know, experiences, you obviously see some things" is not fundamentally different from gesturing at neural activity, and the gesture at neural activity is more precise. So, I don't understand which part of the above you have a problem with. You don't disbelieve in the theoretical ability of neuroscience to show on a screen what you are seeing, right? Because all that talk about reductive explanation may give such an impression. So it's all about Mary? That even after we obtain a precise theory of what you see, it still wouldn't make you see, and that... "seems necessary" or something. I don't mind corrections to specific steps, but I would appreciate you confirming that yes, you think Mary is a strong argument. And then it would be nice to have a better justification for it than "seems necessary".
> Experiences are privileged only under misguided theories of knowledge (which are theories because it's in the name) which make experiences axiomatically true.

Science regards experiences as probably correct about their causes, because you can't do empiricism without that assumption. "Qualia are axiomatically true" is not something you need to claim to define qualia, not something that is always claimed about qualia, and not central to the problem of qualia.

> Otherwise just gesturing to "you know, experiences, you obviously see some things" is not fundamentally different from gesturing to neural activity, and the one about neural activity is more precise.

It's different because we don't experience neural activity as neural activity. That doesn't rule out neural activity being causal or constitutive of qualia. But what the camp #2 person wants is an explanation of how neural activity constitutes the experience. Asserting, as a definition, that it does isn't a persuasive explanation... and is talking past them.

> So, I don't understand which part of the above you have a problem with. You don't disbelieve in theoretical ability of neuroscience to show on a screen what you are seeing, right?

That's ambiguous in just the way that Mary's Room is supposed to disambiguate. Mary is able to tell what someone is seeing in the third-person, reading-the-label sense, just not in the first-person, drinking-the-wine sense.

> Because all that talk about reductive explanation may give such impression. So it's all about Mary? That even after we obtain precise theory of what you see, it still wouldn't make you see and that... "seems necessary" or something.

An objective explanation of seeing red doesn't make you personally see red, *and* personally seeing red is necessary to know what red looks like... i.e. the explanation is incomplete. Physicalists sometimes respond to Mary's Room by saying that one cannot expect Mary to actually instantiate Red herself just by looking
Have you actually seen orthonormal's sequence on this exact argument? My intuitions say the "Martha" AI described therein, which imitates "Mary," would in fact have qualia; this suffices to prove that our intuitions are unreliable (unless you can convincingly argue that some intuitions are more equal than others.) Moreover, it suggests a credible answer to your question: integration is necessary in order to "understand experience" because we're talking about a kind of "understanding" which necessarily stems from the internal workings of the system, specifically the interaction of the "conscious" part with the rest. (I do note that the addendum to the sequence's final post should have been more fully integrated into the sequence from the start.)
Yes. Obviously, both arguments rely on intuition. I don't think intuitions are 100% reliable; I do think we are stuck with them. I have been addressing the people who have the expected response to Mary's Room... I can't do much about the rest. I think that sort of objection just pushes the problem back. If "integration" is a fully physical and objective process, and if Mary is truly a super-scientist, then Mary will fully understand how her subject "integrated" their sense experience, and won't be surprised by experiencing red.
Thank you for clarifying things. Yeah, that's what I mean when I talk about axiomatically privileging experience, and what I explicitly disagree with - we don't experience experiences as experiences either. It's not different. Describing things as "I'm seeing blue", or having similar internal thoughts, is not inherently better. In fact it's worse, because it's less precise. There is no strong foundation for preferring such theories/definitions, and so there is no reason to demand that a better theory logically derive its concepts from a worse one - that's not how reductionism works[1].

And as to why Mary doesn't provide such a foundation... it's not necessary. At the point where physical theory fully describes both the knowledge and the state of Mary, there is no argument for why you must define knowledge in a way that leads to contradictions. And there are arguments for why you shouldn't - we understand how knowledge works physically, so you can't just say that "not fully understanding" feels appropriate here and treat it as enough of a justification.

And again, experience is not the only case - if you showed someone Mary falling off a bicycle and asked them whether she knows how to ride a bicycle, they would say that she doesn't. So, considering that the meta-explanation is correct in identifying the demand to use the bad definitions as wrong, why would someone not be persuaded? What is the argument for the necessity of instantiating experience for knowledge that keeps you persuaded by it?

1. ^ It doesn't need to be. An explanation of fire is not about fire in the logical sense - it's about atoms.
Of course we do. It would be a less accurate way of defining the same thing if we already knew that experiences are fully identifiable with neural activity. But we don't know that... that is what the whole debate is about. Once you have a successful theory, it is reasonable to change a definition accordingly. For instance, knowing that water ("wet stuff in rivers, seas, and lakes") is H2O, you can define water as H2O. You can't make the arrow go in the other direction. Defining a tail as a leg doesn't prove a dog has five legs. Would you concede that it's ever possible to misuse arbitrary redefinitions?

Definitions aren't theories. Preferring "precise", objective, etc. definitions of words doesn't prove everything is objective, because it's just your own preference. What you are not doing is investigating reality in an unbiased way... instead you have placed yourself in the driving seat.

It is, of course, about both. A reductive explanation relates a higher-level phenomenon to a lower-level one. If you insist on ignoring the higher-level phenomenon because it is "bad" or "imprecise", you can't achieve an explanation. You have to have the vague understanding of water as wet stuff before you can have the precise understanding of water as H2O.

What contradiction? If something is contradictory, you need to show it. Physical theory doesn't fully describe the knowledge and state of Mary, because physical theory can't describe sensations. That's the whole point. There is an argument against physical theory being fully adequate, and since the theory isn't known to be correct, we shouldn't change the definition of "quale".

We understand how some kinds of knowledge work, but maybe not all kinds. People have believed in knowledge-by-acquaintance for a long time. You can't just say "fully understand" feels appropriate here and treat it as enough of a justification. It's intuitions either way. You're not the first person to think that knowledge-by-acquaintance is the
To be clear, I don't argue for physicalism about qualia in general here, only against Mary. Yes, of course, it's possible to use a definition from an incomplete or wrong theory, among other things. The contradiction is with the physical description of knowledge. And it's not intuitions either way - it's intuitions plus a precise description of everything about the situation on one side (which you agree with, right? that it's not surprising for an image of a brain scan to have a different effect on Mary from seeing something red - that physicalism predicts the difference) and just intuition on the other.

So... we (or future-we, or Mary) do know this, by observing that neural activity works the same way the thing you call "experience" works. The argument for identifying experiences with neural activity works as well now as the arguments for a reductive explanation of the trajectory of a falling leaf; but even if you want to check whether it works in the future and imagine Mary, you would still discover that at best it's slightly unintuitive. The problem is that the whole argument is "it feels unintuitive", when the theory is known to be correct to the level of precisely describing everything about the situation.

We also understand how knowledge-by-acquaintance works physically - it just changes your brain. There is nothing problematic on the knowledge level. The only part being ignored in the physical description of knowledge-by-acquaintance is the feeling of it being unintuitive. Which is explained physically. What's the argument for demanding more?

They are not precisely the same thing - they are different neural processes. But yes, both are harder to obtain from a mere description. What's there to show? The argument was that experiences are the only kind of knowledge that requires something beyond physical description. Do you disagree that Mary can have all physical knowledge but still not know how to ride a bike? The thing we can deduce from this is that such a definition of physica
You're arguing against Mary's Room on the basis of physicalism. The idea that a complete physical explanation captures "everything" is a claim equivalent to physicalism. Of course "Mary doesn't know what red looks like" contradicts "physical descriptions in the form of detailed brain scans capture everything"... and vice versa. That's the point. An argument for X contradicts not-X. That's not the same as a self-contradiction.

The point is not just that seeing a tomato has a different effect; the point is that Mary learns something. And physicalism does not predict that, because it would imply that complete physical descriptions leave something out.

We don't know that. Assuming it is equivalent to assuming physicalism, which begs the question against Mary's Room.

No. One of them is only knowable by personal instantiation... as you concede... kind of. Every individual reductive argument has to pay its own way. There's no global argument for reductionism. Since we don't actually have a reductive explanation of conscious experience, it's intuition telling you that we will or should or must.

No, it isn't. Are you saying: 1. we don't learn from acquaintance... 2. but we have a false intuition that we do... 3. and science can definitely predict 2...? I.e., something like illusionism. Because I'm pretty sure 3 is false.

You're not the first person to think that knowledge-by-acquaintance is the same thing as know-how. But... consider showing it, not just telling it. I agree, but I don't see how that makes both things the same.

I didn't say there was. I'm calling for experiences to be accepted as having some sort of existence, and explained somehow. To not be ignored. I agree with "I know what it's like to see red", but I don't see how it equates to "we experience experiences as experiences". What else would we experience our own experiences as? Brain scans? It's important not to disregard things, and the claim that you have a "complete" explanation.
I'm saying that it's okay to beg the question here, because, as you say, Mary is not a logical argument: if there is no contradiction either way, then physicalism wins on precision. And you don't need to explicitly assume "physical descriptions in the form of detailed brain scans capture everything" - you only need to consistently use one of the definitions of knowledge that are common sense to someone who knows physics.

Yes, I'm saying that you can, without contradiction, choose your definitions of knowledge in such a way that 1 is true, so 2 is also true (because an intuition asserting a non-true proposition is wrong), and 3 is true (because intuition is just neural activity and science predicts all of it). And yes, that means illusionism is right in that you can be wrong about your (knowledge of) experiences.

As neural signals. There is no justification for starting from a model that includes experiences. If Mary is not an argument for adding experiences to a physical model, then it's not an argument for not ignoring (contradictory aspects of) them when reducing a high-level description to a physical model. They are not ignored; they're represented by corresponding neural processes. Like, what is ignored and not explained by a physical description? It's not the need for instantiation - that's predicted by experiences being separate neural processes. You can't say "it ignores qualia" - that would be ignoring the whole Mary setup and begging the question; as far as Mary goes there is no problem with "qualia are neural processes". So it leaves only intuition about knowledge - about a high-level concept which you can define however you want. Under a definition of knowledge that calls experiences "knowledge", knowing some of your own neural activity also requires instantiating that neural activity.
There is a fact of the matter about whether physical descriptions are exhaustive, even if Mary's Room doesn't prove it. If physical descriptions don't convey experiences as such, they are fundamentally flawed, and the precision isn't much compensation. Defining knowledge as purely physical doesn't prove anything about the world. (But you are probably using "definition" to mean "theory".)

Lots of things are non-contradictory. Non-circularity is more of an achievement. Again, you can't prove things by adopting definitions. If we had a detailed understanding of neuroscience that predicted an illusion of knowledge-by-acquaintance specifically, you'd be onto something. But illusionist claims are philosophical theories, not scientific facts.

We don't experience experiences as neural signals. A person can spend their life with no idea that there is such a thing as a neural signal. Experiences need to be explained, because everything needs to be explained. Experiences need not end up in the final ontological model, because sometimes an explanation explains away.

The experience itself. That would be the case if physicalism is true, but you don't know that physicalism is true. You basically assumed it, by assuming that physical explanations are complete. That's circular. So maybe I could arbitrarily assume that definition?
Such physical definitions of knowledge are not more circular than anything else, I think? I mean, go ahead - then Mary would just be able to imagine red.

Exactly - that's why Mary doesn't work. There is no need for additional scientific facts. There are enough scientific facts to accept a physical explanation of the whole Mary setup. That's why people mostly seek philosophical problems with physicalism, and why physicalists answer with philosophical theories - if physicalism is philosophically coherent, then it is undoubtedly true. Mary's Room was supposed to be an argument against physicalism. If there are no philosophical problems in the setup after you assume physicalism, then the argument fails. It is equivalent to disagreeing with some step of the argument, like "Mary gets new knowledge" - you can't just disallow disagreeing with this because it's logically equivalent to assuming physicalism; that would be assuming the non-physicalism the argument was supposed to establish. Of course, I don't just assume physicalism - you need to satisfy the "no philosophical problems" condition, so I talk about why "Mary gets new knowledge" is just trying to prove things by adopting definitions. I don't see how you think it can work otherwise - you can't derive "physicalism is true" from Mary's assumptions alone. Obviously, assuming physicalism doesn't prove that physicalism is true. But again, I'm not arguing that physicalism is true; I'm arguing that Mary is a bad argument.

Sure. So you do agree now that talking about Mary or knowledge is unnecessary? So, what is your argument against "experience itself is explained by 'human experiences are neural processes'", if it's not Mary? If you don't demand specific experiences to be in the final ontological model, they are explained the same way fire is explained. The explanation of fire does not usually set you on fire. What you call "I'm seeing blue" is actually "your neurons are activated in a way similar to the way they are activated when b
I don't know what you mean. I wasn't intentionally saying anything physical or non-physical. No, because you can't prove things through definitions. The Mary's Room argument is not an argument from definitions. If we had a detailed understanding of neuroscience that predicted an illusion of knowledge-by-acquaintance specifically, you'd be onto something. But illusionist claims are philosophical theories, not scientific facts. Show me a prediction of a novel quale! No. Consistency is necessary for truth, but nowhere near sufficient. That would be the case if physicalism is true, but you don't know that physicalism is true. You basically assumed it, by assuming that physical explanations are complete. That's circular. It's supposed to be an argument against physicalism, so you can't refute it by assuming physicalism. I don't disallow disagreeing with it. I disallow assuming physicalism. The point is to think about what would happen in the situation whilst suspending judgement about the ontology the world works on. Non-physicalism doesn't imply "Mary would not know what red looks like". No. There's no fact of the matter about that. If they are fully represented, then Mary would know what red looks like; otherwise not. If we could perform Mary's Room as a real experiment, we would not need it as a thought experiment. There's no reason it shouldn't be Mary. Mary's Room isn't a proof, but there is no proof of the contrary. Arguments that start "assuming physicalism" are not proof because they are invalid because they are circular. We have a detailed gears-level explanation of fire; we do not have one of conscious experience. There are three possibilities, not two: 1. X is explained, and survives the explanation as part of ontology. 2. X is explained away. 3. X is not explained at all. Merely saying "X is an emergent, high-level phenomenon... but don't ask me how or why" is not an explanation, despite what many here think. You only need to instantiate som
Here - "fully understand" depends on the definition of "understand". What you understand is not a matter of fact, it's a matter of definition. All you talk about is how it is "counterintuitive" to call instantiating a nuclear reaction in yourself "understanding". "It's intuitive to call new experience 'additional knowledge'" is an argument from definitions. They are only edge cases of specific definitions of knowledge. There is no fundamental reason why you must call a heart attack's effect on your brain "knowledge" and not call fire's effect on your hand "knowledge". "Necessary" for what? Judging from "epistemically unique", it is implied that it is necessary for knowledge? Then it's certainly incorrect - it's either not necessary, because Mary can have a more compact representation of knowledge about color, or it's necessary for all things, if Mary is supposed to have all representations of knowledge. It may be necessary for satisfying Mary's preferences to have qualia independently of their epistemic value - that's your perfectly physicalist source of subjectivity. If you only care about matters of fact, then there are no problems for physicalism in human qualia being unusual - it predicts that different neural processes are different. And it predicts that it's useful to see things for yourself. And that it will feel intuitive to say "Mary gets new knowledge" for some people. I think it even follows from causal closure that it doesn't make sense for there to be an unphysical explanation for intuitions? If your intuition is not predicted by physics, then atoms somewhere have to be unexpectedly nudged - is that what you propose? I... don't really understand the argument here? Physicalism doesn't say that all things that it is intuitive to call "knowledge" are equally easy to get from books, or something - why exactly is it an argument against physicalism that Mary gets what it predicts? Wait, is the problem that you actually think that it is not obviously physically po
Do you think you have experienced a dissociative crisis at any point in your life? I mean the sensations of derealisation/depersonalisation, not other symptoms, and it doesn't need to have been 'strong' at all. I ask because those sensations are not in any obvious way about processing sensory data, and because of the feeling of detachment from reality that comes with them. So I was curious if you could identify anything like that.
Carl Feynman (5mo)
I have on three occasions experienced a state where I can still perceive shapes, but they don’t have any meaning, don’t feel real, and do not resolve into separate objects. It only lasted a few seconds in each case, and was not distressing. In fact it was fascinating and I wished it had lasted longer so I could gather more data. I’m still capable of voluntary action during these spells— I know this because I once said “Oh hey, I’m derealized!” (or something like that) while it was happening. I used to experience a phenomenon that I privately call ‘paralysis of the will’, which lasted about ten seconds, and during which I was incapable of willing any new voluntary action, but could continue with my present activity. For example, it happened when I was driving, and I continued to drive, halted at a stop sign, and then proceeded. But if someone had asked me a question during that time, I wouldn’t have been able to reply. It’s never been a problem for me, since it looks like absent-mindedness or preoccupation and doesn’t last long. I used to get it every few months, but not for the last ten years. It’s not an absence seizure because my memory is continuous through it. I don’t know if this tells you anything. I might be typical-minding here, but I think lots of people get various brief funny mental phenomena, and most people just shrug it off.
Do you simultaneously know what it's like when something looks red, and also believe that you don't have qualia?
Carl Feynman (5mo)
Yes.   If qualia is defined as George Wilfrid describes it elsewhere in this thread, as nothing more than sensation, then I definitely have it.  But I suspect there’s something more— plenty of people have tried to point to it, using phrases like “conceptually separate the information content from what it feels like”.  Well, I can’t.  That phrase doesn’t mesh with a phenomenon in my mind.  The information content is what it feels like.
It's not more than sensation. It's just the subjective aspect without the behavioural aspect.
That depends on how we define "information" - for one definition of information, qualia are information (and also everything else is, since we can only recognize something by the pattern it presents to us). But for another definition of information, there is a conceptual difference - for example, morphine users report knowing they are in pain, but not feeling the quale of pain.

Integrated Information Theory is peak Camp #2 stuff

As a Camp #2 person, I just want to remark that from my personal viewpoint, Integrated Information Theory shares the key defect of Global Workspace Theory, and hence is no better.

Namely, I think that the Hard Problem of Consciousness has a hard core: the Hard Problem of Qualia. As soon as the Hard Problem of Qualia is solved, the rest of the Hard Problem of Consciousness is much less mysterious (perhaps the rest can be treated in the spirit of the "Easy Problems of Consciousness"; e.g. the question of why I am me and not another person might be treatable as a symmetry violation, a standard mechanism in physics, and the question of why human qualia seem to normally cluster into belonging to a particular subject (my qualia vs. all other qualia) might not be excessively mysterious either).

So the theory purporting to actually solve the Hard Problem of Consciousness needs to shed some light onto the nature and the structure of the space of qualia, in order to be a viable contender from my personal viewpoint.

Unfortunately, I am not aware of any such viable contenders, i.e. of any theories shedding much light onto the nature and the...

Rafael Harth (5mo)
I think a lot of Camp #2 people would agree with you that IIT doesn't make meaningful progress on the hard problem. As far as I remember, it doesn't even really try to; it just states that consciousness is the same thing as integrated information and then argues why this is plausible based on intuition/simplicity/how it applies to the brain and so on. I think IIT "is Camp #2 stuff" in the sense that being in Camp #2 is necessary to appreciate IIT - it's definitely not sufficient. But it does seem necessary because, for Camp #1, the entire approach of trying to find a precise formula for "amount of consciousness" is just fundamentally doomed, especially since the math doesn't require any capacity for reporting on your conscious states, or really any of the functional capabilities of human consciousness. In fact, Scott Aaronson claims here (haven't read the construction myself) that [...]. So yeah, Camp #2 is necessary but not sufficient. I had a line in an older version of this post where I suggested that the Camp #2 memeplex is so large that, even if you're firmly in Camp #2, you'll probably find some things in there that are just as absurd to you as the Camp #1 axiom.
Yes, I agree with all this. (Some years ago I tried to search for "qualia" in IIT texts, and I think I got literally zero results; I was super disappointed to discover that indeed "it doesn't even really try to make meaningful progress on the hard problem". I was particularly disappointed because it came from Christof Koch, and the "40 Hz paper" from 1990 (Francis Crick and Christof Koch, "Towards a neurobiological theory of consciousness") had been a revelation and a remarkable conceptual breakthrough, so I had all those hopes and expectations for IIT because it was from Koch :-) :-( )
As another Camp #2 person, I mostly agree - IIT is at best barking up a different wrong tree from the functionalist accounts - but Russellian monism[1] makes it at least part of the way to square one. The elevator pitch goes like this:

* On the one hand, we know an enormous amount about what physical entities do, and nothing whatsoever about what they are. The electromagnetic field is the field that couples to charges in such and such a way; charge is the property in virtue of which particles couple to the electromagnetic field. To be at some point X in space is to interact with things in the neighborhood of X; to be in the neighborhood of X is (among other things) to interact with things at X. For all we know there might not be such things as things at all: nothing (except perhaps good taste) compels us to believe that "electrons" are anything more than a tool for making predictions about future observations.
* On the other hand, we're directly acquainted with the intrinsic nature of at least some qualia, but know next to nothing about their causal structure. I know what red is like, I know what blue is like, I know what high pitches are like, and I know what low pitches are like, but nothing about those experiences seems sufficient to explain why we experience purple but not highlow.
* So we have lawful relations of opaque relata, and directly accessible relata with inexplicable relations: maybe there's just the one sort of stuff, which simultaneously grounds physics and constitutes experience.

Is it right? No clue. I doubt we'll ever know. But it's at least the right sort of theory.

1. ^ As in Bertrand Russell
If your intuitions about the properties of qualia are the same as mine, you might appreciate this schizo theory pattern-matching them to known physics.
Neutral monism does sound like a good direction to probe further. If we survive long enough, we might live to see a convincing solution for the "Hard Problem". If we don't solve this ourselves, then I expect that advanced AIs will get very curious about what this thing ("subjective experience", "qualia") those humans are talking about is, and about finding ways to experience it themselves. And being very smart, they might have better chances to solve this. But groups of humans might also try to organize to solve this themselves (I think not nearly enough is done at present, both theoretically and empirically; for example, people often tend to assume that Neuralink-style interfaces are absolutely necessary to explore hybrid consciousness between biological entities and electronics, but I strongly suspect that a lot can be done with non-invasive interfaces (which are much cheaper/easier/quicker to accomplish and also somewhat safer (although still not quite safe) for participating biological entities))... That's for experiments. For theory, we just need to demand what we usually demand of novel physics: non-trivial novel experimental predictions of subjectively observable effects. Some highly non-standard ways to obtain strange qualia or to synchronize two minds, something like that. Something we don't expect, which a new candidate theory predicts, and which turns out to be correct... That's how we'll know that the particular candidate theory in question is more than just a philosophical take...
I get "the nature" part, but why is the structure part of the Hard Problem? It sure would be nice to have more advanced neuroscience, but we already have working theories about structure and can make you see blue instead of red. So it's not a square-zero situation.
Because qualia are related to each other. We want to understand that relation at least to some extent (otherwise it is unlikely that we'll understand what they are reasonably well). Our subjective reality is made out of them, and their interrelations probably do matter a lot. But this example is about the relationship between physical stimuli and qualia (in this particular instance, "red" is not a quale, only "blue" is a quale, and "red" is a physical stimulus which would result in a red quale under different conditions). But yes, we do understand quite a bit about color qualia (e.g. we understand that we can structure them in a three-dimensional space if we want to do so, based on how mixed colors are perceived in various psychophysical experiments (so that's a parametrization of them by a subset of physical stimuli), or we can consider them independent and consider an infinite-dimensional space generated by them as atomic primitives, and it's not all that clear which of these ways is more adequate for the purpose of describing the structure of subjective experience (which seems likely to be much older historically than humanity's experience with deliberately mixing colors)). Quoting my old write-up: However, if one fancies to choose the infinite-dimensional space, some colors are still similar to each other, they do change gradually, there is still a non-trivial metric between them, and this space is not well understood... But when I say "we are at square zero", "color qualia" are not just "abstract colors" but "subjectively perceived colors", and we just don't understand what this means... Like, at all... We do understand quite a bit about the "physical colors => neural processing" chain, but that chain is as "subjectively colorless" as any theoretical model, as "subjectively colorless" as the words "red" and "blue" in this text (in my perception at the moment)... I do hope we'll start making progress in this direction sooner rather than later...

This is a clear and convincing account of the intuitions that lead people to either accept or deny the existence of the Hard Problem. I’m squarely in Camp #1, and while I think the broad strokes are correct, there are two places where I think this account gets Camp #1 a little wrong on the details.

According to Camp #1, the correct explanandum is still "I claim to have experienced X" (where X is the apparent experience). After all, if we can explain exactly why you, as a physical system, uttered the words "I experienced X", then there's nothing else to

...
But presumably everyone in camp 2 will agree that memories are not perfectly reliable and that memories of experiences are different from those experiences themselves. We could be misremembering. The actually interesting case is whether you can be wrong about having certain experiences now, such that no memory is involved. Say, you are having a strong headache. Here the headache itself seems to be the evidence. Which seems to mean you can't be mistaken about currently having a headache.
You’re absolutely right that this is the more interesting case. I intentionally chose the past tense to make it easier to focus on the details of the example rather than the Camp #1/Camp #2 distinction per se. For completeness, I'll try to recapitulate my understanding of Rafael's account for the present-tense case ‘I have a headache right now’.

From my Camp #1 perspective, any mechanistic description of the brain that explained why it generated the thought/belief/utterance ‘I have a headache right now’ instead of ‘I don’t have a headache right now’ in response to a given set of inputs would be a fully satisfying explanation. Perhaps it really is impossible for a human brain to generate the output ‘I have a headache right now’ without meeting some objective definition of a headache (some collection of facts about sensory inputs and brain state that distinguishes a headache from e.g. a stubbed toe), but there doesn’t seem to be any reason why this impossibility could not be a mundane fact conditional on the physical details of human brains. The brain is taking some combination of inputs, which might include external sensory data as well as introspective data about its own state, and generating a thought/belief/utterance output. It doesn’t seem impossible in principle that, by tweaking certain connections or using TMS or whatever, the mapping between these inputs and outputs could be altered such that the brain reliably generates the output ‘I don’t have a headache right now’ in situations where the chosen objective definition of ‘having a headache’ holds true.

So, for Camp #1 the explanandum really is the output ‘I have a headache right now’. (The purpose of my comment was to expand the definition of ‘output’ to explicitly include thoughts and beliefs as well as utterances, and to acknowledge that the inputs in the case ‘I have a headache’ really are different to those in the case ‘John says he has a headache’.) Camp #2 would say that it is impossible even in princ
Okay, so you are saying that in the first-person case, the evidence for having a headache is not itself the experience of having a headache, but the belief that you have the experience of having a headache. So according to you, one could be wrong about currently having a headache, namely when the aforementioned belief is false, when you have the belief but not the experience. Is this right? If so, I see two problems with this.

* Intuitively it doesn't seem possible to be wrong about one's own current mental states. Imagine a patient complains to a doctor about having a terrible headache. The doctor replies: "You may be sure you are having a terrible headache, but maybe you are wrong and actually don't have a headache at all." Or a psychiatrist: "I'm sure you aren't lying, but you may yourself be mistaken about being depressed right now, maybe you are actually perfectly happy." These cases seem absurd. I don't remember any case where I considered myself being wrong about a current mental state. We don't say: I just thought I was feeling pain, but actually I didn't.
* A belief seems to be itself a mental state. So even if you add the belief as an intermediary layer of evidence between the agent and their experience, you still have something which the agent is infallible about: their belief. The evidence for having a belief would be the belief itself. Beliefs seem to be different from utterances, in that the latter are mechanistically describable third-person events (sound waves), while beliefs seem to be just as mental as experiences. So the explanandum, the evidence, would in both cases be something mental. But it seems you require the explanandum to be something "objective", like an utterance.
Not quite. I would say that in the first-person case, the explanandum – the thing that needs to be explained – is the belief (or thought, or utterance) that you have the experience of having a headache. Once you have explained how some particular set of inputs to the brain led to that particular output, you have explained everything that is going on, in the Camp #1 view. Quoting the original post, in the Camp #1 view ‘if we can explain exactly why you, as a physical system, uttered the words "I experienced X", then there's nothing else to explain.’ I would actually agree that ’you can't be mistaken about your own current experiences’, but I think the problem Rafael's post points out is that Camp #1 and Camp #2 would understand that to mean different things. I'm a bit confused about what you mean by ‘mental states’. It's certainly possible to be wrong about one's own current mental state, as I understand the term; people experiencing psychosis usually firmly believe they are not psychotic. I don't think the two Camps would disagree on this. The three examples you mention, of having a headache, being depressed (by which I assume you mean feeling down rather than the psychiatric condition specifically), and feeling pain, all seem like examples of subjective experiences. Insofar as this paragraph is saying ‘it's not possible to be wrong about your own subjective experience’, I would agree, with the caveat as above that what I think this means might be different to what a Camp #2 person thinks this means. I don't require the explanandum to be an utterance, and I don't think there's any important sense in which an utterance is more objective than a thought or belief. My original comment was intended only to point out that in the first-person case you have privileged access to certain data, namely the contents of your own mind, that you don't have in the third-person case. The reasons for this are completely mundane and conditional on the current state of affairs, name
I think this is the crucial point of contention. I find the following obvious: thoughts or beliefs are on the same subjective level as experiences, which is quite different from utterances, which are purely mechanical third-person events, similar to the movement of a limb. In your view however, if I'm not misunderstanding you, beliefs are more similar to utterances than to experiences. So while I think beliefs are equally hard to explain as experiences, in your view beliefs are about as easy to explain as utterances. Is this a fair characterization? The reason I think utterances are "easy" to explain is that they are physical events and therefore obviously allow for a mechanistic third-person explanation. The explanation would not in principle be different from explaining a simple spinal reflex. Nerve inputs somehow cause nerve outputs, except that for an utterance there are orders of magnitude more neurons involved, which makes the explanation much harder in practice. But the principle is the same. For subjective attitudes like beliefs and experiences, the explanandum is not just a mouth movement (as in the case of utterances) which would be directly caused by nervous signals. It is unclear how to even grasp subjective beliefs and experiences in a mechanical language of cause and effect. As an illustration, it is not obvious why an organism couldn't theoretically be a p-zombie -- have the usual neuronal configuration, behave completely normally, make all the same utterances -- without having any subjective beliefs or experiences. (It seems vaguely plausible to me that for beliefs and experiences, a reductive, rather than causal, explanation would be needed. Yet the model of other reductive explanations in science, like explaining the temperature of a gas with the average kinetic energy of the particles it is made out of, doesn't obviously fit what would be needed in the case of mental states. But this is a longer story.)
Huh, this is interesting. I wouldn't have suspected this to be the crux. I'm not sure how well this maps to the Camp 1 vs 2 difference as opposed to idiosyncratic differences in our own views. This is a fair characterisation, though I don't think ease of explanation is a crucial point. I would certainly say that beliefs are more similar to utterances than to experiences. To illustrate this, sitting here now on the surface of Earth I think it's possible for me to produce an utterance that is about conditions at the centre of Jupiter, and I think it's possible for me to have a belief or a thought that is about conditions at the centre of Jupiter, and all of these could stand in a truth relation to what conditions are actually like at the centre of Jupiter. I don't think I can have an experience that is about conditions at the centre of Jupiter. Strictly, I don't think I can have an experience that is ‘about’ anything. I don't think experiences are models of the world, in the way that utterances, beliefs, and thoughts can be. This is why I would agree that it is not possible to be mistaken about an experience, though in everyday language we often elide experiences with claims about the world that do have truth values (‘it looks red’ almost always means ‘I believe it is actually red’, not ‘when I look at it I experience seeing red but maybe that's just a hallucination’). What do you see as the important difference between ‘subjective’ and ‘objective’? Is subjectivity about who has access to a phenomenon, or is it a quality of the phenomenon itself? I agree with this. If for the sake of argument we strike out ‘beliefs’ here and make it just about experiences, this seems to be a restatement of the Camp 1 vs 2 distinction. As a Camp 1 person, a mechanical explanation of whatever chain of events leads me to think or say that I have a headache would fully dissolve the question. I wouldn't feel that there is anything left to explain. From what I understand of Camp 2, even
Mental states do not need to be "about" something, but it is pretty clear they can be. One can be just happy, but it seems one can also be happy about something. One certainly can wish for something, or fear that something is the case, or hope for it, etc. The form in the following is the same: the belief that x, the desire that x, the fear that x, the hope that x. Here x is a proposition. In the case of e.g. loving x or hating x, x is an object, not a proposition, but again the mental state is about something. These states all seem hard to explain in a way that utterances aren't. The relevant difference here is the access. The "subjective" is exactly that which an agent is directly acquainted with, while the "objective" stuff is only inferred indirectly. It is unclear how one could explain one with the other. As I said, it is unclear what such a mechanical explanation of a thought or belief would look like. It is clear that utterances are caused by mouth movements which are caused by neurons firing, but it is not clear how neurons could "cause" a belief, or how to otherwise (e.g. reductively) explain a belief. It is not clear how to distinguish p-zombies from normal people, or explain why they wouldn't be possible.
I'm still a bit confused by what you mean by ‘mental states’. My best guess is that you are using it as a catch-all term for everything that is or might be going on in the brain, which includes experiences, beliefs, thoughts, and more general states like ‘psychotic’ or ‘relaxed’. I agree that mental states do not need to be about something, but I think beliefs do need to be about something and thoughts can be about something (propositional in the way you describe). I don't think an experience can be propositional. I don't understand how this relates to whether these particular mental states are able to be explained.

My best account for what is going on here is that we have two interacting intuitive disagreements:

1. The ‘ordinary’ Camp 1 vs 2 disagreement, as outlined in Rafael's post, where we disagree where the explanandum lies in the case of subjective experience.
2. A disagreement over whether whatever special properties subjective experience has also extend to other mental phenomena like beliefs, such that in the Camp 2 view there would be a Hard Problem of why and how we have beliefs, analogous to or identical with the Hard Problem of why and how we have subjective experience.

Does this account seem accurate to you?
I would not count "psychotic" here, since one is not necessarily directly acquainted with it (one doesn't necessarily know one has it). I thought you saw the fact that beliefs are about something as evidence that they are easier to explain than experiences, or that they are at least more similar to utterances than to experiences. I responded that aboutness (technical term: intentionality) doesn't matter, as several things that are commonly regarded as qualia, just like experiences, can be about something, e.g. loves or fears. So accepting beliefs as explanandum would not be in principle different from accepting fears or experiences as explanandum, which would seem to put you more in Camp #2 rather than #1. I think the main disagreement is actually just one, the above: What counts as a simple explanandum such that we would not run into hard explanatory problems? My position is that only utterances act as such a simple explanandum, and that no subjective mental state (things we are directly acquainted with, like intentional states, emotions and experiences) is simple in this sense, since they are not obviously compatible with any causal explanation.
Would it be fair to say then that by ‘mental states’ you mean ‘everything that the brain does that the brain can itself be aware of’? I don't think there is any connection between whether a thought/belief/experience is about something and whether it is explainable. I'm not sure about ‘easier to explain’, but it doesn't seem like the degree of easiness is a key issue here. I hold the vanilla Camp 1 view that everything the brain is doing is ultimately and completely explainable in physical terms. I do think beliefs are more similar to utterances than experiences. If we were to draw an ontology of ‘things brains do’, utterances would probably be a closer sibling to thoughts than to beliefs, and perhaps a distant cousin to experiences. A thought can be propositional (‘the sky is blue’) or non-propositional (‘oh no!’), as can an utterance, but a belief is only propositional, while an experience is never propositional. I think an utterance could be reasonably characterised as a thought that is not content to stay swimming around in the brain but for whatever reason escapes out through the mouth. To be clear though, I don't think any of this maps on to the question of whether these phenomena are explicable in terms of the physical implementation details of the brain. I think there is an in-principle difference between Camp 1 ‘accepting beliefs [or utterances] as explanandum’ and Camp 2 ‘accepting experiences as explanandum’. When you ask ‘What counts as a simple explanandum such that we would not run into hard explanatory problems?’, I think the disagreement between Camp 1 and Camp 2 in answering this question is not over ‘where the explanandum is’ so much as ‘what it would mean to explain it’. It might help here to unpack the phrase ‘accepting beliefs as explanandum’ from the Camp 1 viewpoint. In a way this is a shorthand for ‘requiring a complete explanation of how the brain as a physical system goes from some starting state to the state of having the belief’. The b
Yeah, aware of, or conscious of. Psychosis seems to be less a mental state in this sense than a disposition to produce certain mental states. What you call "model" here would presumably correspond only to the externally observable neural correlate of a belief, not to the belief. The case would be the same for a neural correlate of an experience, so this doesn't provide a difference between the two. Explaining the neural correlate is of course just as "easy" as explaining an utterance. The hard problem is to explain actual mental states with their correlates. So this account doesn't explain the belief/experience in question in terms of this correlate. It would be unable to distinguish between a p-zombie, which does have the neural correlate but not the belief/experience, and a normal person. So it seems that either the explanation is unsuccessful as given, since it stops at the neural correlate and doesn't explain the belief or experience, or you assume the explanandum is actually just the neural correlate, not the belief.
Apologies for the repetition, but I'm going to start by restating a slightly updated model of what I think is going on, because it provides the context for the rest of my comment. Basically I still think there are two elements to our disagreement:

1. The Camp 1 vs Camp 2 disagreement. Camp 1 thinks that a description of the physical system would completely and satisfactorily explain the nature of consciousness and subjective experience; Camp 2 thinks that there is a conceptual element of subjective experience that we don't currently know how to explain in physical terms, even in principle. Camp 2 thinks there is a capital-H Hard Problem of consciousness, the ‘conceptual mystery’ in Rafael's post; Camp 1 does not. I am in Camp 1, and as best I can tell you are in Camp 2.
2. You think that all(?) ‘mental states’ pose this conceptual Hard Problem, including intentional phenomena like thoughts and beliefs as well as more ‘purely subjective’ phenomena like experiences. My impression is that this is a mildly unorthodox position within Camp 2, although as I mentioned in my original comment I've never really understood e.g. what Nagel was trying to say about the relationship between mental phenomena being only directly accessible to a single mind and them being Hard to explain, so I might be entirely wrong about this.

In any case, because I don't believe that there is a conceptual mystery in the first place, the question of (e.g.) whether the explanandum is an utterance vs a belief means something very different to me than it does to you. When I talk about locating the explanandum at utterances vs beliefs, I'm talking about the scope of the physical system to be explained. When you talk about it, you're talking about the location(s) of the conceptual mystery. As a Camp 1 person, I don't think that there is any (non-semantic) difference between the observable neurological correlates of a belief or any other mental phenomenon and the phenomenon itself. Once we have a comp
Of course it's possible, at least in principle: the doctor could have connected all the neurons of yours that detect a headache and generate thoughts about it to another person's neurons that generate the headache. Then you would be sure that you are having a headache, but actually it is another person who is having it.
1 · Ape in the coat · 5mo
You can definitely be mistaken regarding what the headache means. When the headache is extreme you may feel as if you are dying. Yet, despite feeling this way, you may not actually die. Likewise, you may feel as if your feelings are immaterial even though they are not. As soon as the question isn't just about your immediate experience but also about how this experience is related to the world, you may very well be wrong.
3 · Rafael Harth · 5mo
Yeah, I agree with both points. I edited the post to reflect it; for the whole brain vs parts thing I just added a sentence; for the kind of access thing I made it a footnote and also linked to your comment. As you said, it does seem like a refinement of the model rather than a contradiction, but it's definitely important enough to bring up.
You don't just have a level of access, you have a type of access. Your access to your own mind isn't like looking at a brain scan. The Mary's Room thought experiment brings it out. Mary has complete access to someone else's mental state, from the outside, but still doesn't experience it from the inside.
From my Camp 1 perspective, this just seems like a restatement of what I wrote. My direct access to my own mind isn't like my indirect access to other people's minds; to understand another person's mind, I can at best gather scraps of sensory data like ‘what that person is saying’ and try to piece them together into a model. My direct access to my own mind isn't like looking at a brain scan of my own mind; to understand a brain scan, I need to gather sensory data like ‘what the monitor attached to the brain scanner shows’ and try to piece them into a model. This seems to be completely explained by the fact that my brain can only gather data about the external world through a handful of imperfect sensory channels, while it can gather data about its own internal processes through direct introspection. To make things worse, my brain is woefully underpowered for the task of modelling complex things like brains, so it's almost inevitable that any model I construct will be imperfect. Even a scan of my own brain would give me far less insight into my mind than direct introspection, because brains are hideously complicated and I'm not well-equipped to model them. Whether you call that a ‘level’ or ‘type’ of access, I'm still no closer to understanding how Nagel relates the (to me mundane) fact that these types of access exist to the ‘conceptual mystery’ of qualia or consciousness. Imagine a one-in-a-million genetic mutation that causes a human brain to develop a Simulation Centre. The Simulation Centre might be thought of as a massively overdeveloped form of whatever circuitry gives people mental imagery. It is able to simulate real-world physics with the fidelity of state-of-the-art computer physics simulations, video game 3D engines, etc. The Simulation Centre has direct neural connections to the brain's visual pathways that, under voluntary control, can override the sensory stream from the eyes. So, while a person with strong mental imagery might be able to fuzzily visu
At this point, I can prove to you that you are actually in Camp #2. All I have to do is point out that the kind of access you have to your mind is (or rather includes) qualia! The mystery relates entirely to the expectation that there should be a reductive physical explanation of qualia. The Hard Problem of Qualia: Whilst science has helped with some aspects of the mind-body problem, it has made others more difficult, or at least exposed their difficulty. In pre-scientific times, people were happy to believe that the colour of an object was an intrinsic property of it, which was perceived to be as it was. This "naive realism" was disrupted by a series of discoveries, such as the absence of anything resembling subjective colour in scientific descriptions, and a slew of reasons for recognising a subjective element in perception. A philosopher's stance on the fundamental nature of reality is called an ontology. The success of science in the twentieth and twenty-first centuries has led many philosophers to adopt a physicalist ontology, basically the idea that the fundamental constituents of reality are what physics says they are. (It is a background assumption of physicalism that the sciences form a sort of tower, with psychology and sociology near the top, biology and chemistry in the middle, and physics at the bottom. The higher and intermediate layers don't have their own ontologies -- mind-stuff and elan vital are outdated concepts -- everything is either a fundamental particle or an arrangement of fundamental particles.) So the problem of mind is now the problem of qualia, and the way philosophers want to explain it is physicalistically. However, the problem of explaining how brains give rise to subjective sensation, of explaining qualia in physical terms, is now considered to be The Hard Problem. In the words of David Chalmers: "It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subje
And because there is a physicalist explanation for the difference of access, there is physicalist explanation for qualia and the problem is solved.
It is not an explanation to predict that one thing is different from another in an unspecified way.
Yes, but the actual explanation is obviously possible. One access is different from another because one is between regions of the brain via neurons, and the other is between brain and brain scan via vision. What part do you think is impossible to specify?
The qualia. How does a theory describe a subjective sensation?
Riding a bicycle. And you need to instantiate a brain state to know anything - instantiating brain states is what it means for a brain to know something. The explanation for "why it seems to be unnecessary in other cases" is "people are bad at physics". Or you can use a sensible theory of knowledge where Mary understands everything about seeing red without seeing it, and the explanation for "why it seems that she doesn't understand" is "people are bad at distinguishing between being and knowing". I mean, there is a physicalist explanation of everything about this scenario. You could have arguments on the level of "but people find it confusing for a couple of seconds!" against the physicality of anything from mirrors to levers.
No, knowledge can be stored outside brains. Or people insist by fiat that they are the same, when they are plainly different.

This probably isn't the case, but I secretly wonder if the people in camp #1 are p-zombies.

They wouldn't strictly be p-zombies, because by definition, p-zombies display behaviour indistinguishable from non-zombies. Instead, Camp 1 people notably talk about consciousness differently from Camp 2 — as one would expect if they have different experiences of their own consciousness, or none at all. So Camp 1 are just ordinary zombies. ETA: It's not just you and me. Some actual psychologists have speculated that grand psychological theories are nothing more than accounts of their creators' subjective experiences of themselves. Radical behaviourists are the ones without such experience. I don't have easily findable references, but I just found a mention of this book, whose title is suggestive: "Psychology's Grand Theorists: How Personal Experiences Shaped Professional Ideas".
3 · Ape in the coat · 5mo
Wait a second! I think you are onto something. What if it's Camp 2 people who are p-zombies? Lacking the ability to experience things but pretending that they do, they overcompensate with singing dithyrambs to qualia and subjective experience, proclaiming its obvious fundamentality due to how awesome the experience allegedly is! While regular people who have consciousness notice that, while it's curious and an obvious starting point for epistemology, after some amount of evidence it becomes very likely that the same material laws that work for everything else work for consciousness as well, and that it will eventually be explained by matter interactions. Joking, of course.
1 · Ape in the coat · 5mo
See this comment and my ongoing discussion with Carl Feynman.

Good writeup, I certainly agree with the frustration of people talking past each other with no convergence in sight.

First, I don't understand why IIT is still popular; Scott Aaronson showed its fatal shortcomings 10 years ago, as soon as it came out.

Second, I do not see any difference between experiencing something and claiming to experience something, outside of intentionally trying to deceive someone. 

Third, I don't know which camp I am in, beyond "of course consciousness is an emergent concept, like free will and baseball". Here by emergence ... (read more)

7 · Rafael Harth · 5mo
Thanks! Well, Scott constructed an example for which the theory gives a highly unintuitive result. This isn't obviously a fatal critique; you could always argue that a lot of theories give some unintuitive results. It's also the kind of thing you could maybe fix by tweaking the math,[1] rather than tossing out the entire approach. I believe Tononi is on record somewhere biting the bullet on that point (i.e., agreeing that Scott's construction would indeed have high Φ, and that that's okay). But I don't know where, and I think I already searched for it a few months ago (probably right after IIT4.0 was dropped) and couldn't find it. I think this puts you firmly into Camp #1 (though you saying this proves that, at a minimum, the idea wasn't communicated as clearly as I'd hoped). Like, the introductory dialogue shows someone failing to communicate the difference, so if this difference isn't intuitively obvious to you, this would be a Camp #1 characteristic. And like, since the whole point was that [trying to articulate what exactly it means for experience to exist independently of the report] is extremely difficult and usually doesn't work, I'm not gonna attempt it here. ---------------------------------------- 1. Though as mentioned in another comment, I haven't actually read through the construction -- I always just trusted Scott here -- so maybe I'm wrong. ↩︎
There’s a link in SA’s last post on the topic: https://scottaaronson.blog/?p=1823
2 · Rafael Harth · 2mo
Thanks! Sooner or later I would have searched until finding it, now you've saved me the time.
Imho, comment #41 on SA's last post on the topic (link above) explains the appeal, plus the smart/sneaky move of forever saying this theory is not finished yet.

I think there's a pervasive error being made by both camps, although more especially Camp 2 (and I count myself in Camp 2). There is a frantic demand for and grasping after explanations, to the extent of counting the difficulty of the problem as evidence for this or that solution. "What else could it be [but my theory]?"

We are confronted with the three buttons labelled "Explain", "Ignore", and "Worship". A lot of people keep on jabbing at the "Explain" button, but (in my view) none of the explanations get anywhere. Some press the "Ignore" button and procla... (read more)

I'm going to argue a complementary story: the basic reason why it's so hard to talk about consciousness has to do with two issues that are present in consciousness research, both of which make productive research impossible:

  1. Extraordinarily terrible feedback loops, almost reminiscent of the pre-deep learning alignment work on LW (I'm looking at you MIRI, albeit even then it achieved more than the consciousness research to date, and LW is slowly shifting to a mix of empirical and governance work, which is quite a lot faster than any consciousness rel

... (read more)
2 · Rafael Harth · 2mo
Agreed. My impression has been for a while that there's a super weak correlation (if any) between whether an idea goes in the right direction and how well it's received. Since there's rarely empirical data, one would hope for an indirect correlation where correctness correlates with argument quality, and argument quality correlates with reception, but the second one is almost non-existent in academia.

Great post! I think this captures most of the variance in consciousness discussions.

I've been interested in consciousness through a 23 year career in computational cognitive neuroscience. I think making progress on bridging the gap between camp 1 and camp 2 requires more detailed explanations of neural dynamics. Those can be inferred from empirical data, but not easily, so I haven't seen any explanations similar to the one I've been developing in my head. I haven't published on the topic because it's more of a liability for a neuroscience career than an as... (read more)

2 · Rafael Harth · 5mo
Thanks! If you do, and if you're interested in exchanging ideas, feel free to reach out. I've been thinking about this topic for several years now and am also planning to write more about it, though that could take a while.

Strong agree all around—this post echoes a comment I made here (me in Camp #1, talking to someone in Camp #2):

If you ask me a question about, umm, I’m not sure the exact term, let’s say “3rd-person-observable properties of the physical world that have something to do with the human brain”…then I feel like I’m on pretty firm ground, and that I’m in my comfort zone, and that I’m able to answer such questions, at least in broad outline and to some extent at a pretty gory level of detail. (Some broad-outline ingredients are in my old post here, and I’m open to

... (read more)

-It's obvious that conscious experience exists.

-Yes, it sure looks like the brain is doing a lot of non-parallel processing that involves several spatially distributed brain areas at once, so-

-You mean, it looks from the outside. But I'm not just talking about the computational process, which I am not even aware of as such, I am talking about conscious experience.

-Define qualia

-Look at a sunset. The way it looks is a quale. Taste some chocolate. The way it tastes is a quale.

-Well, I got my experimental subject to look at a sunset and taste some chocolate,... (read more)

Can you explain that?  It seems that plenty of qualiaphiles believe they are irreducible, epistemically if not metaphysically.  (But not all:  at least some qualiaphiles think qualia are emergent metaphysically.  So, I can't explain what you wrote by supposing you had a simple typo.)
What is misrepresented in the linked comment?
It isn't an example of misrepresentation: it points out a misrepresentation. As in the first sentence.
Ok, then I don't get what misinterpretation is not addressed in https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies?commentId=chZLkQ8Piu4J5ibC9. Or is it just that the post itself presents Chalmers as believing in epiphenomenalism (which he shouldn't do), when he actually believes in epiphenomenalism|dualism|monism (which he also shouldn't do)?

There are many features you get right about the stubbornness of the problem/discussion.  Certainly, modulo the choice to stop the count at two camps, you've highlighted some crucial facts about these clusters.  But now I'm going to complain about what I see as your missteps.

Moreover, even if consciousness is compatible with the laws of physics, ... [camp #2 holds] it's still metaphysically tricky, i.e., it poses a conceptual mystery relative to our current understanding.

I think we need to be careful not to mush together metaphysics and epistemics... (read more)

It's nonetheless the best reason. The number of times you should add new ontological categories isn't zero, ever -- even if you shouldn't add a category every time you are confused. Physicists were not wrong to add the nuclear forces to gravity and electromagnetism. Unfortunately, there is no simple algorithm to tell you when you should add categories. Do they? Camp #1 is generally left with denialism about qualia (including illusionism), or promissory physicalism, neither of which is hugely attractive. Regarding promissory physicalism, it's a subjective judgement, not a proof, that we will have a full reductive explanation of consciousness one day, so it is quite cheeky to call the other camp "wrong" because they have a subjective judgement that we won't. No, it's about the implications. People are quite explicit that they don't want to believe in qualia because they don't want to have to believe in epiphenomenalism, zombies, non-physical properties, etc. Of course, rejecting evidence because it doesn't fit a theory is the opposite of rationality. Well, materialist -- it doesn't require immaterial substances or non-physical properties, but it also denies that all facts are physical facts, contra strong physicalism. I don't see DANM as a radical third option to the two camps, I see it as the lightweight or minimalist position in Camp #2.
4 · Rafael Harth · 5mo
Agreed; too tired right now but will think about how to rewrite this part. I don't think I said that. I think I said that Camp #2 claims one cannot be wrong about the experience itself. I agree (and I don't think the post claims otherwise) that errors can come in during the step from the experience to the task of finding a verbalization of the experience. You chose an example where that step is particularly risky, hence it permits a larger error. Note that for Camp #2, you can draw a pretty sharp line between conscious and unconscious modules in your brain, and finding the right verbalization is mostly an unconscious process.
Fair point about the experience itself vs its description.  But note that all the controversy is about the descriptions.  "Qualia" is a descriptor, "sensation" is a descriptor, etc.  Even "illusionists" about qualia don't deny that people experience things.
4 · Rafael Harth · 5mo
Alright, so I changed the paragraph into this: I think a lot of Camp #2 people want to introduce new metaphysics, which is why I don't want to take out the last sentence. I don't think this is true. E.g., Dennett has these bits in Consciousness Explained: 1, 2, 3, 4. Of course, the issue is still tricky, and you're definitely not the only one who thinks it's just a matter of description, not existence. Almost everyone agrees that something exists, but Camp #2 people tend to want something to exist over and above the reports of that thing, and Dennett seems to deny this. And (as I mentioned in some other comment) part of the point of this post is that you empirically cannot nail down exactly what this thing is in a way that makes sense to everyone. But I think it's reasonable to say that Dennett doesn't think people experience things. Also, Dennett in particular says that there is no ground truth as to what you experience, and this is arguably a pretty well-defined property that's in contradiction with the idea that the experience itself exists. Like, I think Camp #2 people will generally hold that, even if errors can come in during the reports of experience, there is still always a precise fact of the matter as to what is being experienced. And depending on their metaphysics, it would be possible to figure out what exactly that is with the right neurotech. And another reason why I don't think it's true is because then I think illusionism wouldn't matter for ethics, but as I mentioned in the post, there are some illusionists who think their position implies moral nihilism. (There are also people who differentiate illusionism and eliminativism based on this point, but I'm guessing you didn't mean to do that.)
I beg to differ.  The thrust of Dennett's statement is easily interpreted as the truth of a description being partially constituted by the subject's acceptance of the description.  E.g., in one of the snippets/bits you cite, "I seem to see a pink ring."  If the subject said "I seem to see a reddish oval", perhaps that would have been true.  But compare: My freely drinking tea rather than coffee is partially constituted by saying to my host "tea, please."  Yet there is still an actual event of my freely drinking tea.  Even though if I had said "coffee, please" I probably would have drunk coffee instead. We are getting into a zone where it is hard to tell what is a verbal issue and what is a substantive one.  (And in my view, that's because the distinction is inherently fuzzy.)  But that's life.

I think you are a bit off the mark.

As a reductive materialist, expecting to find a materialistic explanation for consciousness, in your model I'd be Camp 2. And yet in the dialogue

"It's obvious that consciousness exists."

-Yes, it sure looks like the brain is doing a lot of non-parallel processing that involves several spatially distributed brain areas at once, so-

"I'm not just talking about the computational process. I mean qualia obviously exist."

-Define qualia.

"You can't define qualia; it's a primitive. But you know what I mean."

-I don't. How could I if

... (read more)
2 · Rafael Harth · 5mo
Thanks for that comment. Can you explain why you think you're Camp #2 according to the post? Because based on this reply, you seem firmly (in fact, quite obviously) in Camp #1 to me, so there must be some part of the post where I communicated very poorly. ( ... guessing for the reason here ...) I wrote in the second-last section that consciousness, according to Camp #1, has fuzzy boundaries. But that just means that the definition of the phenomenon has fuzzy boundaries, meaning that it's unclear when consciousness would stop being consciousness if you changed the architecture slightly (or built an AI with similar architecture). I definitely didn't mean to say that there's fuzziness in how the human brain produces consciousness; I think Camp #1 would overwhelmingly hold that we can, in principle, find a full explanation that precisely maps out the role of every last neuron. Was that section the problem, or something else?
8 · Ape in the coat · 5mo
At first I also thought that I'm a central example of Camp 1 based on the general vibes, but then I reread the descriptions. I've bolded the things that I agree with in both of them. I do not think that explaining why people talk about consciousness is the same as explaining what consciousness is. People talk about "consciousness" because they possess some mental property that they call "consciousness". What exactly this is is still an open problem. I expect to find something like a specific encoding that my brain uses to translate signals from my body to the interface that the central planning agent interacts with. And while I agree that no complicated metaphysics is required, discarding metaphysics still counts as getting it exactly right. I do not think that consciousness is fundamental, but as you've included hardcore physicalist accounts in Camp 2, I'm definitely Camp 2.
3 · Rafael Harth · 5mo
Okay, that makes a lot of sense. I'm still pretty sure that you're a central example of what I meant by Camp #1, and that the problem was how I described them. In particular,

* Solving consciousness = solving the Meta Problem: what I meant by "solving the meta problem" here entails explaining the full causal chain. So if you say "People talk about 'consciousness' because they possess some mental property that they call 'consciousness'", then this doesn't count as a solution until you also recursively unpack what this mental property is, until you've reduced it to the brain's physical implementation. So I think you agree with this claim as it was intended. The way someone might disagree is if they hold something like epiphenomenalism, where the laws of physics are not enough and additional information is required. Or, if they are physicalists, they might still hold that additional conceptual/philosophical/metaphysical work is required on our part.
* Hardcore physicalist accounts: I think virtually everyone in Camp #1 is a physicalist, whereas Camp #2 is split. So this doesn't put you in Camp #2.
* Getting your metaphysics right: well, this formulation was dumb since, as you say, needing to not bring strange metaphysics into the picture is also one way of getting it right. What I meant was that the metaphysics is nontrivial.

I've just rewritten the descriptions of the two camps. Ideally, you should now fully identify with the first. (Edit: I also rewrote the part about consciousness being fuzzy, since I think that was poorly phrased even if it didn't cause issues here.)
3 · Ape in the coat · 5mo
Okay, now Camp 1 feels more like home. Yet, I notice that I'm confused. How can anyone in Camp 2 be a physicalist then? Can you give me an example? Sounds about right. But just to be clear, it doesn't mean that "consciousness" equals "talks about consciousness". It's just that by explaining a bigger thing (consciousness) we will also explain the smaller one (talk about consciousness) that depends on it. I expect consciousness to be related to a lot of other stuff, with talk about it being just an obvious example of a thing that wouldn't happen without consciousness. I was under the impression that your camps were mostly about whether a person thinks there is a Hard Problem of Consciousness or not. But now it seems that they are more about whether the person includes idealism in some sense in their worldview? I suppose you are trying to compress both these dimensions (idealism/non-idealism, HP/non-HP) into one. And if so, I'm afraid your model is going to miss a lot of nuances.
2 · Rafael Harth · 5mo
Yes, this is also how I meant it. I never meant to suggest that the consciousness phenomenon doesn't have other functional roles. So first off, using the word physicalist in the post was very stupid since people don't agree on what it means, and the rewrite I made before my previous comment took the term out. So what I meant, and what the text now says without the term, is "not postulating causal power in addition to the laws of physics". With that definition, lots of Camp #2 people are physicalists -- and on LW in particular, I'd guess it's well over 80%. Even David Chalmers is an example; consciousness doesn't violate the laws of physics under his model, it's just that you need additional -- but non-causally-relevant -- laws to determine how consciousness emerges from matter. In general, you can also just hold that consciousness is a different way to look at the same process, which is sometimes called dual-aspect monism, and that's physicalist, too. I mean, I don't think it's just about the hard problem; otherwise, the post wouldn't be necessary. And I don't think you can say it's about idealism because people don't agree on what idealism means. Like, the post is about describing what the camps are, I don't think I can do it better here, and I don't think there's a shorter description that will get everyone on board. In general, another reason why it's hard to talk about consciousness (which was in a previous version of this post but I cut it) is that there's so much variance in how people think about the problem, and what they think terms mean. Way back, gwern said about LLMs that "Sampling can prove the presence of knowledge but not the absence". The same thing is true about the clarity of concepts; discussion can prove that they're ambiguous, but never that they're clear. So you may talk to someone, or even to a bunch of people, and you'll communicate perfectly, and you may think "hooray, I have a clear vocabulary, communication is easy!". And then you talk to
1 · Ape in the coat · 5mo
Oh, I see. Yeah, that's an unconventional use of "physicalism"; I don't think I've ever seen it before. Using the conventional philosophical language, or at least the one supported by Wikipedia and search engines, Camp 1 maps pretty well to monist materialism aka physicalism, while Camp 2 is everything else: all kinds of metaphysical pluralism, dualism, idealism, and more exotic types of monism. Anyway, then indeed, Camp 1 all the way for me. While I'm still a bit worried that people using such broad definitions will miss the important nuance, it's a very good first approximation.

This is a really clear breakdown. Thank you for writing it up!

I'm struck by the symmetry between (a) these two Camps and (b) the two hemispheres of the brain as depicted by Iain McGilchrist. Including the way that one side can navigate the relationship between both sides while the other thinks the first is basically just bonkers!

It's a strong enough analogy that I wonder if it's causal. E.g., I expect someone from Camp 1 to have a much harder time "vibing". I associate Camp 1 folk with rejection of ineffable insights, like "The Tao that can be said is not ... (read more)

I don't feel like I fall super hard into one of these camps or the other, although I agree they exist. I think from the outside folks would probably say I'm a very camp 2 person, but as I see it that's only insofar as I'm not willing to give up and say that there's nothing of value beyond the camp 1 approach.

This is perhaps reflected in my own thinking about "consciousness". I think the core thing going on is not complex, but instead quite simple: negative feedback loops that create information that's internal to a system. I identify signals within a feedb... (read more)

I agree that the epistemic status of experience is important, but... First of all, does anyone actually disagree with concrete things that Dennett says? That people are often wrong about their experiences is obviously true. If that were the core disagreement, it would be easy to persuade people. The only persistent disagreement seems to be about whether there is something additional to the physical explanation of experience (hence the zombies argument), or whether fundamental consciousness is even a coherent concept at all. Just replacing absolute certainty with uncertainty wouldn't solve it, when you can't even communicate what your evidence is.

The disagreement is about whether qualia exist enough to need explaining. A rainbow is ultimately explained as a kind of illusion, but to arrive at the explanation, you have to accept that rainbows appear to exist, that people aren't lying about them. Dennett doesn't just think you can be wrong about what's going on in your mind; he thinks qualia don't exist at all, and that he is a zombie... but his opponents don't all think that qualia are fundamental, indefinable, non-physical, etc. It's important to remember that the Camp #2 argument given here is very exaggerated.
Rafael Harth:
Yes; there are definitely people who disagree with most things Dennett says, including how exactly you can be wrong about your experience. Don't really want to get into the details here since that's not part of the post.

Great post; I felt it really defined and elaborated on a phenomenon I've seen recur on a regular basis.

It's funny how consciousness is so difficult to understand, to the point that it seems pre-paradigmatic to me. At this point, I, like presumably many others, evaluate claims of consciousness by setting the prior that I'm personally conscious to near 1, and then evaluating the consciousness of other entities primarily by their structural similarity to my own computational substrate, the brain.

So another human is almost certainly conscious, most mam... (read more)

I'm wondering where Biological Naturalism[1] falls within these two camps? It seems like sort of a "third way" in between them, and incidentally, is the explanation that I personally have found most compelling.

Here's GPT-4's summary:

Biological Naturalism is a theory of mind proposed by philosopher John Searle. It is a middle ground between two dominant but opposing views of the mind: materialism and dualism. Materialism suggests that the mind is completely reducible to physical processes in the brain, while dualism posits that the mind and body are di

... (read more)
Wait, isn't that just dualism with hand-waving about complexity? The analogy of water and H2O is a good one: the property of wetness IS measurable in surface tension, viscosity, and adhesion to various surfaces. And those are absolutely caused by interactions at the level of molecules (or lower down, but definitely physics). "Wetness" is not easily CALCULABLE from first principles, but that's a failing of us as modelers and our computational power, not a distinct category of properties.
Rafael Harth:
My take based on the summary is that it's squarely in Camp #2. In particular, I think this part seals the deal. According to Camp #1, there's nothing ontologically special about consciousness, so as soon as you give it its own ontology, you've decided which camp you're in.

I wonder how much camp #1 correlates with aphantasia.