TL;DR: To your brain, "explaining things" means compressing them in terms of some smaller/already-known other thing. So the seemingly inexplicable nature of consciousness/qualia arises because qualia are primitive data elements which can't be compressed. The feeling of there nonetheless being a "problem" arises from a meta-learned heuristic that thinks everything should be compressible.

What's up with consciousness? This question has haunted philosophers and scientists for centuries, and has also been a frequent topic of discussion on this forum. A solution to this problem may have moral relevance soon, if we are going to create artificial agents which may or may not have consciousness. Thus, attempting to derive a satisfying theory seems highly desirable.

One popular approach is to abandon the goal of directly explaining consciousness and instead try to solve the 'meta-problem' of consciousness -- explaining why it is that we think there is a hard problem in the first place. This is the tactic I will take.

There have been a few attempts to solve the meta-problem, including some previously reviewed on LessWrong. However, to me, none of them have felt quite satisfying because they lacked analysis of what I think of as the central thing to be explained by a meta-theory of consciousness -- the seeming inexplicability of consciousness. That is, they will explain why it is that we might have an internal model of our awareness, but they won't explain why aspects of that model feel inexplicable in terms of a physical mechanism. After all, there are other non-strictly-physical parts of our world model, such as the existence of countries, plans, or daydreams, which don't generate the same feeling of inexplicability. So, whence inexplicability? To answer this, we first need to have a model of what "explaining things" means to our brain in the first place.

Explanations as Compression

What is an explanation? On my (very rough) model of the brain[1], there are various internal representations of things -- concepts, memories, plans, sense-data. These representations can be combined, or transformed into one another. A representation X explains representation(s) Y if Y can be produced from X under a transformation. The brain is constantly trying to explain complex representations in terms of simpler ones, thus producing an overall compression of its internal data. This drive towards compression is the source of most of our concepts, as well as a motivator of explanation-seeking behavior in everyday life.
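To make the criterion concrete, here's a toy sketch in minimum-description-length terms, using zlib as a crude stand-in for the brain's compressor (the function names and example strings are purely illustrative; nothing in the argument depends on these details):

```python
import zlib

def description_length(data: bytes) -> int:
    """Approximate description length with a general-purpose compressor."""
    return len(zlib.compress(data))

def explains(x: bytes, y: bytes) -> bool:
    """X 'explains' Y if knowing X makes Y cheaper to describe.

    The cost of Y given X is approximated as the extra bytes needed
    when X and Y are compressed together, versus X alone.
    """
    cost_of_y_given_x = description_length(x + y) - description_length(x)
    return cost_of_y_given_x < description_length(y)

# A repetitive 'theory' explains observations that instantiate it:
theory = b"water drips through the roof leak once per second. " * 20
observation = b"water drips through the roof leak once per second. " * 3
print(explains(theory, observation))  # True
```

This is essentially the trick behind normalized compression distance; the brain's repertoire of transformations is presumably far richer than zlib's repeated-substring matching, but the shape of the criterion is the same.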

I assume people here are familiar with this basic idea or something like it, but here are some examples anyway. Say you are awoken by a mysterious tapping sound in the night. Confused, you wander around, following the sound of the tapping until you discover a leak in your roof. You have explained the tapping sound -- what was previously an arbitrary series of noises can be derived from your previous knowledge of the world plus the assumption that your roof has a leak, producing an overall compression of your experiences. High-level concepts such as the idea of "dog" are also produced in this way -- after encountering sufficiently many dogs, your brain notices its basic experiences with them can be re-used across instances, producing an abstract 'dog' concept with wide applicability[2].

This basic explanation-seeking behavior also drives our quest to understand the world using science.[3] Newtonian mechanics compresses the behavior of many physical situations. Quantum mechanics compresses more. The standard model+general relativity+standard cosmological model, taken together, form a compression of virtually everything we know of in the world to date. This mechanistic world model still ultimately serves the function of compressing our experiences, however. It is at this point that the question of the hard problem arises -- yes, we can explain everything in the physical realm, but can we truly explain the ineffable redness of red?

However, given this mechanism of 'explanation', I think it's not too surprising that 'qualia' seemingly can't be explained mechanistically! The reason is that 'qualia', as representations, are simply too simple to be compressed further. They're raw data, like 0/1 in a binary stream -- the string "1" can't be compressed to anything simpler than itself; likewise, the percept 'red' is too simple and low-level to be explained in terms of anything else. Similarly, "why am I experiencing anything at all?" cannot be answered, because the aspect of experience being queried -- that it exists in the first place -- is too simple. So that's it -- the brain has internal representations which it tries to compress, and 'qualia' are just incompressible representations.
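(For what it's worth, the "1 is incompressible" point is easy to verify with any general-purpose compressor; zlib is again just a stand-in, and the exact byte counts depend on settings:)

```python
import zlib

raw = b"1"
encoded = zlib.compress(raw)
# A one-symbol input has no structure to exploit, so the 'compressed'
# form comes out longer than the original (9 bytes vs. 1 on defaults):
print(len(raw), len(encoded))
```

A compressor must spend some overhead describing how the data is encoded, and a primitive symbol offers nothing to pay that overhead back with -- which is exactly the position the brain is in with a raw percept.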

Meta-Learning[4]

"But wait", you might be thinking, "your model is missing something. If we were just compression-producing algorithms, we wouldn't think it was mysterious or weird that there are some inputs that are incompressible, we would just compress as best we could and then stop. Your model doesn't explain why we think there's a hard problem at all!"

To explain why we think there's a hard problem at all, we need to add another layer -- meta-learning. The basic idea is simple. While our brain hardware has certain native capabilities that let us do pattern-matching/compression, it also does reinforcement learning on actions that lead us to obtain better compressions, even if those actions are not immediately compressing. For example, imagine you live in a town with a library, containing an encyclopedia. If you find a weird new animal while exploring in the woods, you might learn to go to the library and check the encyclopedia, hopefully finding an explanation that links the new phenomenon and what you already know. The action of going to the library is not itself a compression, but it tends to reliably lead to forming better compressions.
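A toy version of this reward signal (again with zlib as the stand-in compressor; the 'encyclopedia entry' and 'sighting' strings are hypothetical): the reinforcement an action earns is just how many bytes of description it shaves off the agent's observations.

```python
import zlib

def description_length(data: bytes) -> int:
    return len(zlib.compress(data))

def cost_given(context: bytes, observations: bytes) -> int:
    """Bytes needed to describe the observations once the context is known."""
    return description_length(context + observations) - description_length(context)

def compression_reward(old_ctx: bytes, new_ctx: bytes, observations: bytes) -> int:
    """Reward for an action (e.g. 'check the encyclopedia') that changed
    the agent's background knowledge from old_ctx to new_ctx."""
    return cost_given(old_ctx, observations) - cost_given(new_ctx, observations)

sighting = b"striped horse-like animal with black and white bands. " * 4
entry = b"the zebra is a striped horse-like animal with black and white bands. " * 4
# Consulting the entry is rewarded because it makes the sighting cheaper to encode:
print(compression_reward(b"", entry, sighting) > 0)  # True
```

The trip to the library is never itself a compression; it gets reinforced only through this downstream signal, which is all the meta-learning story needs.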

On a subtler level, we can imagine there are certain internal habits of thought or frames of mind that tend to lead to producing better compressions, while not themselves directly being compressing. For instance, you might learn the habit of trying to construct a mathematical model of what you're learning about, or of trying to mentally imagine a very specific example when hearing about an abstract idea. "Materialism" can be thought of as a complex of facts and habits of thought that tend to produce good compressions -- "for any given phenomenon, a good explanation for it can be found by localizing it in time and space and applying the laws of physics".

These meta-learned habits of thought, I claim, are the source of the intuition that 'redness' ought to be explainable mechanistically (after all, everything else is). The paradoxical feeling of the hard problem arises from the conflict between this intuition and the underlying compressor's inability to compress basic percepts such as red.

As an analogy, imagine a string-compressing robot which uses reinforcement learning to train an 'adaptive compressor' that searches for strategies to generate better compressions. If this RL system is sophisticated enough, it might learn to have an internal self-model and a model of the task it's carrying out -- making strings shorter. But if its training data consists mainly of strings which are highly compressible, it might learn the heuristic that its goal is to make any string shorter, and that this should always be possible. Such robots, if they could communicate with each other, might write essays about "the hard problem of 0", and the paradoxical impossibility of a string which somehow seems to have no possible compression!
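Here's a cartoon of that failure mode, continuing the zlib stand-in (a sketch of the analogy only, not of any real meta-learner):

```python
import zlib

def shrinks(s: bytes) -> bool:
    """The robot's notion of success: the compressed form is shorter."""
    return len(zlib.compress(s)) < len(s)

# Training data that happens to be highly compressible:
training = [b"ab" * 100, b"0" * 64, b"the rain in spain " * 16]

# The meta-learned heuristic "every string should shrink" is never
# contradicted during training...
assert all(shrinks(s) for s in training)

# ...until the robot meets an incompressible primitive:
print(shrinks(b"0"))  # False -- cue essays on "the hard problem of 0"
```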

Practical Application

"That might make sense of the feeling of paradox, although the details of the meta-learning stuff sound a bit iffy", you may be thinking. "But even if I grant that, I still don't feel like I understand qualia on a gut level. It still feels like the innate redness of red can't possibly be explained by physics, even if it can be predicted that I would feel that way. Solving the meta-problem isn't enough to solve the hard problem in a way that satisfies".

For myself, I have mostly lost the feeling that there is anything paradoxical about qualia or consciousness. But this is not because the above argument convinced me they can be explained away. Rather, it caused me to adjust my sense of how I should think about reality, my senses, and the relationship between them.

Imagine you were giving advice to the string-compressing robot which thought '0' ought to be compressible. It wouldn't be right to tell it that actually, 0 is compressible, you just need to take a meta-view of the situation which explains why you think that. Instead, you might use that meta-view to motivate it to adjust its heuristics -- it should learn to accept that some strings are incompressible, and focus its efforts on those that in fact can be compressed. Similarly, although both redness and physics still seem as real as ever to me, I've lost my sense that I should necessarily be able to explain redness in terms of physics. Physics is a framework developed to summarize our experiences, but some aspects of those experiences will always be beyond its scope. I no longer feel that there is anything confusing here (if any readers feel differently, feel free to try to re-confuse me in the comments!).

"So, your 'solution' to the hard problem is just to give up, and accept both physics and qualia as real? How does that explain anything, isn't that the naive view we are trying to overcome?" Well yeah, basically, depending on how you define and think about realness. You could frame it as 'giving up', but I think that sometimes resolving confusion about a question necessarily looks like giving up on one level while gaining understanding on another: take the halting problem for instance. "What about the implications for building AI and morality and so forth? And how does this relate to the consciousness of other people?" Those are interesting hard questions -- but they exceed the scope of this post, which is just intended to explain the inexplicability of consciousness! I will hopefully return to moral/etc. implications later.


  1. I'm not super attached to any of the details here. But the basic structure seems likely to hold well enough for the rest of the analysis to go through. ↩︎

  2. Yes, there is also social learning and other methods we use to learn concepts, but IMO those are overlays atop this more basic mechanism. ↩︎

  3. There are some details here about how, e.g., our brains can't natively use the standard model to predict things, we have to rely on social testimony for most of our evidence, etc., but I'm going to elide that here because I don't think it's relevant. ↩︎

  4. This section is somewhat more speculative than the previous. ↩︎

Comments

One issue with this is that "explanation" (like "consciousness") is a bit of a grab-bag word. We can have a feeling that something has been explained, but this feeling can take several different forms and each can be triggered in several different ways. In addition to compression, we might call things explanations if they're non-compressive generative models ("This sequence starts 01 because it's counting all the integers in order"), if they assert but don't contain the information needed to predict more data ("This sequence starts 01 because it's copying the sequence from page 239 of Magical Munging"), if they put our data in a social context ("This sequence starts 01 because the Prime Minister ordered it to be so") and many other things both sensical (like Aristotle's four causes) and nonsensical ("A witch did it").

Still, I agree that if we just take compressive explanation and look at the second half of your post, this makes a lot of sense.

But allowing for more diverse explanations re-raises the question of why people have a hard time explaining qualia. First I should note that actually, they don't: "I see red because I'm looking at something red" is a great explanation. It's not the qualia themselves that aren't explained, it's some mysterious overarching pattern (some sort of "why qualia at all?" in some specific hoped-for sense) that's the issue.

This is a lot like the issue with free will. Lots of people think they have/want/need "libertarian free will," but since that's false they feel like they're running into a hard problem. Sure, they can give attempted explanations for why we have libertarian free will, but somehow those explanations always seem to have holes in them. Clearly (they think) we have this thing, it's just mysteriously hard to explain.

By analogy, you can now figure out where I'm going with qualia. Some people think they have/want/need "Cartesian qualia" that are the unique things our singular "I" sees when it gets information about the world through our brain. But...

> Some people think they have/want/need "Cartesian qualia" that are the unique things our singular "I" sees when it gets information about the world

So I think I agree that there are no ontologically fundamental qualia in the world. But I have a certain amount of sympathy with people who want to consider qualia to be nonetheless real...like yeah, on one level there seem to be no fundamental qualia in physics, on another level the whole point of physics is to provide an explanation of our experiences, so it seems weird to demote those experiences to a lesser degree of reality.

Say you're a string-compressing robot. Which is more 'real', the raw string you're fed, or the program you devise to compress it? Overall I don't think you can say either - both are needed for the function of string compression to be carried out. Similarly, I think both our experiences and world-model seem necessary for our existence as world-modeling beings. Maybe the problem is that the word 'real' is too coarse, we need some kind of finer-grained vocabulary for different levels of our ontology.

TAG:

> In addition to compression, we might call things explanations if they’re non-compressive generative models (“This sequence starts 01 because it’s counting all the integers in order”), if they assert but don’t contain the information needed to predict more data (“This sequence starts 01 because it’s copying the sequence from page 239 of Magical Munging”), if they put our data in a social context (“This sequence starts 01 because the Prime Minister ordered it to be so”) and many other things both sensical (like Aristotle’s four causes) and nonsensical (“A witch did it”).

Yes. In particular, explanations decrease arbitrariness and increase predictability. That allows us to put a more concrete interpretation on "we can't explain qualia": we can't predict qualia from their neural correlates.

> But allowing for more diverse explanations re-raises the question of why people have a hard time explaining qualia. First I should note that actually, they don’t: “I see red because I’m looking at something red” is a great explanation. It’s not the qualia themselves that aren’t explained, it’s some mysterious overarching pattern (some sort of “why qualia at all?” in some specific hoped-for sense) that’s the issue

It's not quite the latter either: it's "why qualia, given physicalism?"

What is hard about the hard problem is the requirement to explain consciousness, particularly conscious experience, in terms of a physical ontology. It's the combination of the two that makes it hard. Which is to say that the problem can be sidestepped by either denying consciousness, or adopting a non-physicalist ontology.

Examples of non-physical ontologies include dualism, panpsychism and idealism. These are not faced with the Hard Problem as such, because they are able to say that subjective qualia just are what they are, without facing any need to offer a reductive explanation of them. But they have problems of their own, mainly that physicalism is so successful in other areas.

> By analogy [with libertarian free will], you can now figure out where I’m going with qualia. Some people think they have/want/need “Cartesian qualia

It's not really analogous with LFW, because denial of libertarian free will isn't directly self-contradictory... but illusionism, the claim that "nothing seems to you in any particular way, it only seems to be so", is.

If I understand you right, you're basically saying that once we postulate consciousness as some basic, irreducible building block of reality, confusion related to consciousness will evaporate. Maybe it will help partially, but I think it will not solve the problem completely. Why? Let's say that consciousness is some terminal node in our world-model; this still leaves the question "What systems in the world are conscious?". And I guess that current hypotheses for the answer to this question are rather confusing. We didn't have the same level of confusion with other models of basic building blocks. For example, with atoms we thought "yup, everything is atoms; to build this rock we need these atoms, and for the cat, others", then with quantum configurations we think "OK, the universe is one gigantic configuration; the rock is this factor and the cat is another", etc., and it doesn't seem very unintuitive (even if the process of producing these factors is hard, it is known 'in principle'). But with consciousness we don't know (even in principle!) how to measure consciousness in any particular system, and that's, IMHO, the important difference.

Sort of. I consider the stuff about the 'meta-hard problem', aka providing a mechanical account of an agent that would report having non-mechanically-explicable qualia, to be more fundamental. Then the postulation of consciousness as basic is one possible way of relating that to your own experiences. (Also, I wouldn't say that consciousness is a 'building block of reality' in the same way that quarks are. Asking if consciousness is physically real is not a question with a true/false answer; it's a type error within a system that relates world-models to experiences.)

Relating this meta-theory to other minds and morality is somewhat trickier. I'd say that the theory in this post already provides a plausible account of which other cognitive systems will report having non-mechanically-explicable qualia (and thus provides as close an answer to "which systems are conscious?" as we're going to get). On the brain side, I think this is implemented intuitively by seeing which parts of the external world can be modeled by re-using part of your brain to simulate them, then providing a built-in suite of social emotions towards such things. This can probably be extrapolated to a more general theory of morality towards entities with a mind architecture similar to ours (thus providing as close an answer as we're going to get to 'which physical systems have positive or negative experiences?').

ZT5:

I suppose it makes sense? I think I had thoughts like that before: "why should the base building blocks of our experience be explainable/reducible in simpler terms, internally speaking?"

Externally is a different question. What if my internal experience/qualia of redness is the same thing as, from an external observer's viewpoint, some patterns of neuronal activation in my brain (not just caused by it, but exactly the same thing, only seen from two different perspectives)? Would there be any contradiction in that?

At which point the believer in the "hard problem of consciousness" usually slips into asking "but how could you possibly prove that is the case?". Well, I obviously cannot prove it, but it is a perfectly plausible explanation for consciousness, and the only such explanation I am familiar with that is based in known physics, so I feel it is likely to be true.

> Would there be any contradiction in that?

No, there wouldn't be. Indeed, the point of this post is to provide a 3rd-person account of why people would claim to have irreducible 1st-person experiences.

Another way of stating my claim in the last part is that you shouldn't think of

> What if my internal experience/qualia of redness is the same thing as, from an external observer's viewpoint, some patterns of neuronal activation in my brain

as being true or false, but rather, a type error. The point of your brain is to construct world-models that can predict your experiences. Asking if those experiences are equal to some part of that world-model, or someone else's world model, does not type-check.

ZT5:

I think we agree. To, say, 95%, on this particular topic.

Though to me, the idea that "these two different aspects of my mental-model/experience are generated by the same underlying feature of reality" is a very important conclusion to draw.

> as being true or false, but rather, a type error.

Sure. I mean, fundamentally, yes.

TAG:

> likewise, the percept ‘red’ is too simple and low-level to be explained in terms of anything else

The simplicity of qualia is subjective. Which isn't surprising, because qualia are subjective. But it's still different from the simplicity of "0", which doesn't require any particular context.

The subjective simplicity is unsurprising from some perspectives. A complete drill-down would be equivalent to looking at a fine-grained brain scan of your own brain... but you need much more information about the external world than about your brain state for survival purposes.

The problem is not the incompressibility of qualia, which is only subjective anyway, but the predictability of qualia. If qualia could be reduced to (and therefore predicted from) physics, they would be compressible, because physics is.

Even if one's own brain prevents one from predicting or compressing qualia, there should be no problem in predicting or compressing the qualia of other people... so long as qualia are an ordinary physical phenomenon. That's the intuition that the Mary's Room thought experiment explores. But qualia appear not to be an ordinary physical phenomenon -- there are no qualiometers, so we can't detect someone else's qualia. And if qualia are intrinsically subjective, only epistemically accessible to the person having them, that is what is strange and special about them, since all (other) physical phenomena are objective.

So the incompressibility of qualia is "merely" subjective, but subjectivity isn't mere!

Hmm. I'm not sure if we disagree? I agree that the incompressibility of qualia is relative to a given perspective (or, to be a bit pedantic, I would say that qualia themselves are only defined relative to a perspective). By incompressible, I mean incompressible to the brain's coding mechanism. This is enough to explain the meta-hard problem, as it is our brain's particular coding mechanism that causes us to report that qualia seem inexplicable physically.

> So the incompressibility of qualia is "merely" subjective, but subjectivity isn't mere

I also think I might agree here? The way I have come to think about it is like this: the point of world-models is to explain our experiences. But in the course of actually building and refining those models, we forget this and think instead that the point is to reduce everything to the model, even our experiences (because this is often a good heuristic in the usual course of improving our world-model). This is analogous to the string-compressing robot who thinks that its string-compressing program is more "real" than the string it is attempting to compress. I think the solution is to simply accept that experiences and physics occupy different slots in our ontology, and we shouldn't expect to reduce either to the other.

TAG:

> By incompressible, I mean incompressible to the brain’s coding mechanism

So is 0, but no one worries about that. There's more to mysterious incompressibility than incompressibility... you also need the expectation that something should be compressible. Physical reductionism has the implication that high-level phenomena should be compressible, and that qualia are high-level phenomena. The incompressibility of 0 or a quark isn't a problem for physical reductionism, because it can safely regard them as basic.

> as it is our brain’s particular coding mechanism that causes us to report that qualia seem inexplicable physically.

If there were an objective account of how qualia are reducible, then the subjective judgement wouldn't matter. There is an objective account of other mental phenomena. If the answer to some question "just pops into our head", we can understand objectively how it could have been generated by a computation, even though the fine neurological details are veiled subjectively.

> I think the solution is to simply accept that experiences and physics occupy different slots in our ontology, and we shouldn’t expect to reduce either to the other.

So, dualism is true? For the dualist, there is no expectation that qualia should be compressible or reducible. But that's not a meta-explanation. It's just an object-level explanation that isn't physicalism.

> The incompressibility of 0 or a quark isn't a problem for physical reductionism

I actually do think some people register these being incompressible as a problem. Think of "what breathes fire into the equations" or "why does anything exist at all" (OK, these are more about the incompressibility of the entire world than of a quark, but same idea -- I could imagine people being confused about "what even are quarks in themselves" or something...)

> So, dualism is true? For the dualist, there is no expectation that qualia should be compressible or reducible. But that's not a meta-explanation

We can distinguish two levels of analysis.

  • Firstly, accepting a naïve physicalism, we can try to give an account of why people would report being confused about consciousness, which could in principle be cashed out in purely physical predictions about what words they speak or symbols they type into computers. That's what I was attempting to do in the first two sections of the article (without spelling out in detail how the algorithms ultimately lead to typing etc., given that I don't think the details are especially important for the overall point). I think people with a variety of metaphysical views could come to agree on an explanation of this "meta-problem" without coming to agree on the object level.

  • Secondly, there is the question of how to relate that analysis to our first-person perspective. This was what I was trying to do in the last section of the article (which I also feel much less confident in than the first two sections). You could say it's dualist in a sense, although I don't think there "is" a non-physical mental substance or anything like that. I would rather say that reality, from the perspective of beings like us, is necessarily a bit indexical -- that is, one always approaches reality from a particular perspective. You can enlarge your perspective, but not so much that you attain an observer-independent overview of all of reality. Qualia are a manifestation of this irreducible indexicality.

TAG:

> I actually do think some people register these being incompressible as a problem. Think of "what breathes fire into the equations"

If the equations are something like string theory or the standard model, they are pretty complex, and there is a legitimate concern about why you can compress things so far but no further.

> Firstly, accepting a naïve physicalism, we can try to give an account of why people would report being confused about consciousness, which could in principle be cashed out in purely physical predictions about what words they speak or symbols they type into computers

But there is no reason for the maximally naive person to be confused about qualia, because the maximally naive person doesn't know what they are, or what physics is, or what reductionism is. The maximally naive approach is that I am looking at a red chair, and I see it exactly as it is, and the redness is a property of the chair, not a proxy representation generated in my brain.

> I think people with a variety of metaphysical views could come to agree on an explanation of this “meta-problem” without coming to agree on the object level.

Sure. A physicalist could say that the hard problem is the problem of explaining qualia in physical terms, at the meta level, and that the obvious object-level solution is to ditch qualia.

Whereas an idealist could agree, at the meta level, that the hard problem is the problem of explaining qualia in physical terms, but say that the obvious object-level solution is to ditch physicalism.

> I would rather say that reality, from the perspective of beings like us, is necessarily a bit indexical -- that is, one always approaches reality from a particular perspective.

Physicalism admits perspectives, but not irreducible ones. Under relativity, things will seem different to different observers, but in a way that is predictable to any observer. Albert can predict someone else's observations of mass, length, and time... but can Mary predict someone else's colour qualia?