Some interesting examples but this seems to be yet another take that claims to solve/dissolve consciousness by simply ignoring the Hard Problem.
It sounds like Seth's position is that the hard problem of consciousness is the result of confusion, so he's not ignoring it, but saying that it only appears to exist because it's asked within the context of a confused frame.
Seth seems to be suggesting that the hard problem of consciousness is a bit like asking why people don't fall off the edge of the Earth. We think of this question as confused because we believe the Earth is round. But if you start from the assumption that the Earth is flat, then this is a reasonable question, and no amount of explanation will convince you otherwise.
The reason these two situations look different is that it's now easy for us to verify that the Earth is not flat, but it's hard for us to verify what's going on with consciousness. Seth's book is making a bid, by presenting the work of many others, to say that what we think of as consciousness is explainable in ways that make the Hard Problem a nonsensical question.
That seems quite a bit different from "simply ignoring the Hard Problem", though I admit Jacob does not go into great detail about Seth's full arguments for this. But I'd posit that if you want to disagree with something, you need to disagree with the object-level claims Seth makes first. Only after reaching a point where you have no more disagreements is it worth considering whether or not the Hard Problem still makes sense; and if you think it does, it should be possible to make a specific argument about where you think the Hard Problem arises and what it looks like in terms of the presented model.
Without reading the book we can't be sure. But the trouble is that this claim has been made a million times, and in every previous case the author has turned out to be either ignoring the hard problem, misunderstanding it, or defining it out of existence. So if a longish, very positive review with the title 'x explains consciousness' doesn't provide any evidence that x really is different this time, it's reasonable to think that it very likely isn't.
The reason these two situations look different is that it's now easy for us to verify that the Earth is not flat, but it's hard for us to verify what's going on with consciousness.
Even if I had no way of verifying it, "the earth is (roughly) spherical and thus has no edges, and its gravity pulls you toward its centre regardless of where you are on its surface" would clearly be an answer to my question, and a candidate explanation pending verification. My question was only 'confused' in the sense that it rested on a false empirical assumption; I would be perfectly capable of understanding your correction to this assumption. (Not necessarily accepting it -- maybe I think I have really strong evidence that the earth is flat, or maybe you...
Are you suggesting that in the case of the hard problem, there may be some equivalent of the 'flat earth' assumption that the hard-problemists hold so tightly that they can't even comprehend a 'round earth' explanation when it's offered?
Yes. Dualism is deeply appealing because most humans, or at least most of the humans who care about the Hard Problem, seem to experience themselves in dualistic ways (i.e. experience something like the self residing inside the body). So even if it becomes obvious that there's no "consciousness sauce" per se, the argument is that the Problem seems to exist only because there are dualistic assumptions implicit in the worldview that thinks the Problem exists.
I'd go on to say that if we address the Meta Hard Problem like this in such a way that it shows the Hard Problem to be the result of confusion, then there's nothing to say about the Hard Problem, just like there's nothing interesting to say about why ships never sail off the edge of the Earth.
This bit was very interesting to me:
These models are “predictive” in the important sense that they perceive not just how things are at the moment but also anticipate how your sensory inputs would change under various conditions and as a consequence of your own actions. Thus:
- Red = would create a perception of a warmer color relative to the illuminant even if the illumination changes.
My current pet theory of qualia is that there is an illusion that they are a specific thing (e.g. the redness of red) when in reality there are only perceived relations between a quale and other qualia, and a perceived identity between that quale and memories of that quale. But the sense of identity (or constancy through time) is not caused by an actual specific thing (the "redness" that one erroneously tries to grasp but always seems just beyond reach), but by a recurrence of those relations.
Why I like the quoted part is because it can be read as a predictive processing-flavoured version of the same theory. The illusion (that there is a reified thing instead of only a jumble of relationships) is strengthened by the fact that we not only recognize the cluster of qualia relationships and can correc...
...But it is not clear at all that such a being could even exist in principle. A digital mind that lacks a body it is trying to keep alive, that has entirely different senses than our interoceptive and exteroceptive ones, and that has an entirely different repertoire of actions available to it will have an entirely alien generative model of its world, and thus an entirely alien phenomenology — if it even has one. We can guess what it’s like to be a bat: its reliance on sonar to navigate the world likely creates a phenomenology of “auditory colors” that track
Having not read the book yet, I'm kind of stumped at how different this review is to the one from Alexander. The two posts make it sound like a completely different book, especially with respect to the philosophical questions, and especially especially with respect to the expressed confidence. Is this book a neutral review of the leading theories that explicitly avoids taking sides, or is it a pitch for another I-solved-the-entire-problem theory? It can't really be both.
The Real Problem
For as long as there have been philosophers, they have loved philosophizing about what life really is. Plato focused on nutrition and reproduction as the core features of living organisms. Aristotle claimed that it was ultimately about resisting perturbations. In the East the focus was less on function and more on essence: the Chinese posited ethereal fractions of qi as the animating force, similar to the Sanskrit prana or the Hebrew neshama. This lively debate kept rolling for 2,500 years — élan vital is a 20th century coinage — accompanied by the sense of an enduring mystery, a fundamental inscrutability about life that would not yield.
And then, suddenly, this debate dissipated. This wasn’t caused by a philosophical breakthrough, by some clever argument or incisive definition that satisfied all sides and deflected all counters. It was the slow accumulation of biological science that broke “Life” down into digestible components, from the biochemistry of living bodies to the thermodynamics of metabolism to genetics. People may still quibble about how to classify a virus that possesses some but not all of life’s properties, but these semantic arguments aren’t the main concern of biologists. Even among the general public who can’t tell a phospholipid from a possum there’s no longer a sense that there’s some impenetrable mystery regarding how life can arise from mere matter.
In Being You, Anil Seth is doing the same to the mystery of consciousness. Philosophers of consciousness have committed the same sins as “philosophers of life” before them: they have mistaken their own confusion for a fundamental mystery, and, as with élan vital, they smuggled in foreign substances to cover the gaps. This is René Descartes’ res cogitans, a mental substance that is separate from the material.
This Cartesian dualism in various disguises is at the heart of most “paradoxes” of consciousness. P-zombies are beings materially identical to humans but lacking this special res cogitans sauce, and their conceivability requires accepting substance dualism. The famous “hard problem of consciousness” asks how a “rich inner life” (i.e., res cogitans) can arise from mere “physical processing” and claims that no study of the physical could ever give a satisfying answer.
Being You by Anil Seth answers these philosophical paradoxes by refusing to engage in all but the minimum required philosophizing. Seth’s approach is to study this “rich inner life” directly, as an object of science, instead of musing about its impossibility. After all, phenomenological experience is what’s directly available to any of us to observe.
As with life, consciousness can be broken into multiple components and aspects that can be explained, predicted, and controlled. If we can do all three we can claim a true understanding of each. And after we’ve achieved it, this understanding of what Seth calls “the real problem of consciousness” directly answers or simply dissolves enduring philosophical conundrums such as the hard problem itself and the conceivability of p-zombies.
Or at least, these conundrums feel resolved to me. Your experience may vary, which is also one of the key insights about experience that Being You imparts.
The original photograph of “the dress”
Seeing a Strawberry
On a plate in front of you is a strawberry. Inside your skull is a brain, a collection of neurons that have direct access only to the electrochemical state of other neurons, not to strawberries. How does the strawberry out there create the perception of redness in the brain?
In the common view of perception, red light from the strawberry hits the red-sensitive cones in your retina. These cones are wired into other neurons that detect edges, then shapes, and finally these are combined into an image of a red strawberry. This view is intuitively appealing: when we see a strawberry we perceive that a strawberry is right there, and the extant strawberry intuitively seems to be the sole and sufficient cause of our perception of redness.
But if we study closely any element of this perception, we find the common sense intuition immediately challenged.
You may see a strawberry up close or far away, at different angles, partially obscured, in dim light, etc. The perception of it as red, roughly conical, and about an inch across doesn’t change even though the light hitting your retina is completely different in each case: different angles of your visual field, different wavelengths, and so on. In fact, you will perceive a red strawberry in the absence of any red light at all, as in the following image that contains nary a single red-hued pixel in it:
You can zoom in to check: the red-seeming pixels are all gray, with R < G, B
You (well, some of you) can simply visualize a red strawberry with your eyes closed, or in a dream, or on acid. People can perceive redness through color-grapheme synaesthesia, including people who have been blind for decades. You easily perceive magenta, a color which has no associated wavelength at all, while the same exact wavelengths coming from different parts of the same image can produce perceptions of entirely different colors as in Adelson’s chessboard illusion. Wherever the perception of color is coming from, it is certainly not the mere bottom-up decoding of wavelengths of light.
And again, this redness is somehow perceived by a collection of 86 billion neurons, none of which come labeled “red” or “strawberry” or even “part of the visual system”. They just are. To understand seeing a strawberry we need to ask: how could you derive strawberries if all you had access to are the states of 86 billion simple variables and no prior idea of the connection between them?
After observing patterns of neurons firing for a while, you will notice that the states of some neurons are entirely determined by the states of others. Others appear more independent, with states that can’t be conclusively derived from the state of the rest of the brain. These independent neurons have non-random patterns — you may notice that some of them statistically tend to fire together, for example. You could infer that there are hidden causes outside the brain that affect these, and consider the state of independent neurons to be a sensory input effected by these hidden causes. To make sense of your senses, you must model this hidden world external to the brain.
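To make this inference step concrete, here is a toy sketch in Python (my own construction, not anything from the book; all names and numbers are purely illustrative): a single hidden cause drives a few “sensory” variables, and the only clue to its existence is the correlation structure among them.

```python
# A minimal sketch of the inference described above: a hidden external cause drives
# several "sensory" variables, and correlations among those variables are the only
# hint that the hidden cause exists. (Toy model, not from the book.)
import numpy as np

rng = np.random.default_rng(0)
n_samples = 5000

# Hypothetical hidden cause: intensity of "colored light" hitting one patch of retina.
hidden_light = rng.normal(size=n_samples)

# Three "sensory neurons" driven by the same hidden cause, plus private noise.
sensors = np.stack([hidden_light + 0.5 * rng.normal(size=n_samples) for _ in range(3)])

# One unrelated neuron, driven only by its own noise.
unrelated = rng.normal(size=n_samples)

signals = np.vstack([sensors, unrelated])
print(np.round(np.corrcoef(signals), 2))

# The block of high correlations among the first three rows is the statistical
# footprint of the hidden cause; the fourth row stays near zero. Positing a latent
# variable behind that block is the "model the hidden world" move described above.
```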
Perhaps these sensory neurons that fired together are red-sensing cones in your retina. Their congruence implies that the hidden cause of their firing (let’s call it “colored light”, although it isn’t labeled as such in the brain itself) comes from continuous “surfaces” that reflect a similar hue throughout. This isn’t a given — “color” could be distributed randomly pixel by pixel throughout space — but it’s a reasonable inference to draw from the observed states of the neurons. Your model doesn’t contain the labels “colored light” and “surface” but it does contain objects with the property of stably and homogeneously colored surfaces.
You also notice that if some blue-sensing cones suddenly activate (perhaps you went from a warmly illuminated room to stand under a blue sky) this dampens the activation of the red-sensing ones elsewhere. Thus, the best model that predicts the state of all your retinal neurons is that surfaces have a fixed property that affects the relative activation of your cones. “Red” is your model of a property of a surface that activates red-sensing cones if illuminated by warm light but will activate all cones similarly (as a gray surface would normally) if the illuminant is cool. This explains the red-appearing gray strawberries in the greenish image above, and the general property of “discounting the illuminant” in human vision.
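To see the arithmetic of “discounting the illuminant”, here is a minimal numerical sketch (the numbers and variable names are mine, purely for illustration; this is not a model from the book): if each cone response is treated as surface reflectance multiplied by the illuminant, the same grayish pixel yields a red-dominant surface once a cool, greenish illuminant is assumed.

```python
# Toy illustration of "discounting the illuminant": the observed cone response is
# modeled as surface reflectance x illuminant, so a gray pixel reads as a red
# surface once a cyan/green illuminant is assumed. (Made-up numbers.)
import numpy as np

observed_rgb = np.array([0.55, 0.60, 0.62])   # grayish pixel with R < G, B, as in the image above

illuminant_hypotheses = {
    "neutral white illuminant": np.array([1.00, 1.00, 1.00]),
    "cyan/green illuminant   ": np.array([0.60, 1.00, 1.05]),
}

for name, illuminant in illuminant_hypotheses.items():
    inferred_reflectance = observed_rgb / illuminant   # "divide out" the assumed illuminant
    print(name, np.round(inferred_reflectance, 2))

# Under the neutral hypothesis the surface comes out gray; under the cyan/green
# hypothesis the inferred reflectance is strongly red-dominant -- the stable "red"
# property the model assigns to the strawberry regardless of the lighting.
```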
We have also solved the mystery of “the dress”: if you perceive it as white and gold it’s because your brain models it as being illuminated by a blue light — perhaps you trained it by shopping for clothes often in outdoor bazaars. If you see it as blue and black you must imagine it in a warmly lit indoor space. Spend more time outdoors and you may start to perceive it differently!
The important point here is that “redness” is a property of your brain’s best model for predicting the states of certain neurons. Redness is not “objective” in the sense of being “in the object”; it only exists in the models generated by the sort of brains which are hooked up to eyes. It feels objective because 92% of people will share your assessment (8% are colorblind) as opposed to ~50% agreement on the dress, but both perceptions have the same status of being generated by your brain. We intuitively separate “objective” properties of a strawberry (red, real, occupying a volume) from “subjective” ones (good in salads, pretty, evocative of spring), but all of these are properties of your brain’s predictive model of strawberries; they’re not out there to be perceived in a brain-independent way.
These models are “predictive” in the important sense that they perceive not just how things are at the moment but also anticipate how your sensory inputs would change under various conditions and as a consequence of your own actions. Thus:
- Red = would create a perception of a warmer color relative to the illuminant even if the illumination changes.
This, in broad strokes, is the predictive processing theory of perception. But Being You doesn’t just lay out the how of predictive processing, it also answers the how come and the so what.
Cybernetic Organism
Why is our brain “trying to predict its sensory inputs”? What does it even mean for it to have a goal?
Seth draws the insightful comparison to cybernetic systems, systems that control some variable of interest using feedback loops. These range in complexity from a thermostat that turns a heater on and off to regulate temperature in a room to an ecosystem where plant and animal populations are balanced through complicated interactions. A conspicuous feature of cybernetic systems is that they usually appear to have a “purpose”, like a self-guided missile aiming at a target or your thermostat aiming at a temperature goal.
An important insight about cybernetic systems, the Good Regulator Theorem, was formulated by Roger Conant and W. Ross Ashby: “every good regulator of a system must be (or contain) a model of that system”. A regulator whose internal model is too simple for the environment it operates in cannot regulate that environment well. A thermostat that has access only to a thermometer will do a worse job at regulating a room’s temperature than one that has access to weather forecasts, the properties of the room and the heater, and its own tolerances and inaccuracies. The more mutual information there is between the regulator and its environment, the better it can exert control over it.
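Here is a toy simulation of that claim (entirely my own construction, not Seth’s or Conant and Ashby’s; the dynamics and numbers are arbitrary): two regulators try to hold a room at a setpoint while the outside temperature drifts, and the one that contains a model of the room and of the disturbance tracks the target far more tightly than the one that only reads its thermometer.

```python
# A toy "good regulator" demo: a regulator that models its environment beats one
# that only reads its single sensor. (My own toy dynamics, not from the book.)
import math

a, b, setpoint = 0.1, 3.0, 20.0            # heat-loss rate, heater gain, target temperature

def outside(t):                            # slowly drifting outside temperature (the disturbance)
    return 5.0 + 5.0 * math.sin(t / 20.0)

def simulate(controller, steps=500):
    T, squared_errors = 15.0, []
    for t in range(steps):
        u = min(1.0, max(0.0, controller(T, t)))   # heater power limited to [0, 1]
        T = T + a * (outside(t) - T) + b * u       # simple room dynamics
        squared_errors.append((T - setpoint) ** 2)
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Regulator 1: thermometer only -- bang-bang control, no model of the world.
bang_bang = lambda T, t: 1.0 if T < setpoint else 0.0

# Regulator 2: contains a model of the room and the outside disturbance, so it can
# compute how much heat is needed to land on the setpoint next step.
model_based = lambda T, t: (setpoint - T + a * (T - outside(t))) / b

print("RMS error, thermometer only:", round(simulate(bang_bang), 3))
print("RMS error, with a model    :", round(simulate(model_based), 3))
```

The model-based regulator ends up hugging the setpoint while the thermometer-only one keeps overshooting and undershooting; “more mutual information with the environment” cashes out as a smaller error.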
Your brain is a cybernetic regulator controlling your body into the state of being alive.
The Terminator, a cybernetic organism of a more metal variety
This is a consequence of evolution and, more broadly, thermodynamics. Being alive is a very low entropy state — your body temperature is in a narrow range around 98.6°F, your organs are neatly arranged — in a high entropy world that inexorably tries to dissolve you into room-temperature mush. You can’t persist long in your special state of living by being passive, you have to actively regulate every aspect of yourself and your environment that impacts your vitals. You have to take action. As Ashby and Conant said, to regulate yourself and your environment you must comprehensively model both, and in particular the consequences of your actions on both.
Thus, while all of your perceptions are subjective in the sense that they are features of your mind’s map and not of any independent territory, they are not arbitrary or subject to your whims. The core of your model is an unmodifiable prediction, a fixed “hyperprior”, of remaining alive. Its subjective flavor is the base inchoate sense of simply being a living organism, and the survival instinct that overrides all other perceptions when the prediction of staying alive is threatened with disconfirmation. One level above the prediction of just living are predictions of your body’s vitals, from its basic integrity to control of variables like heart rate or blood sugar and oxygenation. Your most important and vivid perceptions — embodiment, pain, emotions, moods — are interoceptive experiences that have more to do with your body than with the world outside.
Finally, exteroceptive sensations are your model of the outside world, primarily as you can act upon it to impact your body. The strawberry comes with an immediate perception of edibility along with redness. This is because eating stuff is an action potentially available to you, and discriminating between edible and inedible things is vital to staying vital.
Self Image
To summarize so far: the contents of your consciousness, your perceptions, are features of a best-guess model concocted by your brain to predict its own present and future states. It’s driven by an overriding prediction of staying alive, which impacts your brain through a rich and vivid channel of interoceptive sensation.
While it’s easy to see how perceptions like redness, hunger, and edibility fit into this picture it may not seem immediately applicable to more complex conscious experiences that have to do with your selfhood. This is the biggest section of the book, disentangling selfhood into components such as ownership of a body, a first person view, a continuous narrative history, a sense of volition, and more. It details how all these are explained through the lens of a generative model keeping your body alive, and also the clever experimental setups used by Seth and his fellow scientists to understand each one (and mess with it at will). I can’t do this section full justice in a brief review, but we can take a whirlwind tour of what it means not just to be but to be you.
Body Ownership
To a collection of neurons locked inside a skull, the rest of the body is as much “out there” as any other object. And yet, it doesn’t feel that way: you have a strong sense of the exact extent of your body and police this boundary rigorously. You apply this even to something like saliva, which turns from a normal part of your body into a yucky foreign substance the moment it crosses an invisible line somewhere in the vicinity of your teeth.
The main determinant of what feels like your body is that your body elicits interoceptive sensations that match external ones. This can be demonstrated by the rubber hand illusion: a detached rubber hand feels like a mere object if you just look at it, but if it is stroked with a brush at the same time as your real hand the consilience between seeing and feeling the strokes creates a strong feeling that it is part of you. You will instinctively flinch in horror if “your” rubber hand is suddenly threatened.
Illustration of the rubber hand illusion experiment from The Scientist magazine
Your map of the world is chiefly a model of the sensory consequences of your actions, and these sensory consequences are what distinguishes your body from everything else.
First-person Perspective
The experience of observing the world from a single point somewhere between your eyes and slightly behind your forehead comes from your generative model of vision. As you move around the world, this first-person POV is the prediction that surfaces will be visible if they are facing that point with no obstruction, and not visible otherwise.
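As a tiny geometric illustration of that prediction (my own sketch, not the book’s, and it ignores occlusion): a surface patch is expected to be visible from the viewpoint only if its outward normal faces that point.

```python
# Toy visibility test: a surface patch is predicted to be visible only if it faces
# the single viewpoint the model maintains. (Illustrative sketch; occlusion ignored.)
import numpy as np

viewpoint = np.array([0.0, 0.0, 0.0])      # the point "slightly behind your forehead"

def faces_viewpoint(patch_center, patch_normal):
    to_viewer = viewpoint - patch_center
    return float(np.dot(patch_normal, to_viewer)) > 0.0

print(faces_viewpoint(np.array([0.0, 0.0, 5.0]), np.array([0.0, 0.0, -1.0])))  # front face -> True
print(faces_viewpoint(np.array([0.0, 0.0, 5.0]), np.array([0.0, 0.0,  1.0])))  # back face  -> False
```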
Illustration from Steven Lehar’s “The Boundaries of Human Knowledge” demonstrating that although you model objects as occupying volume, you are also aware that you are only seeing a 2D surface that faces a single point
But as we’ve seen, visual-based perceptions aren’t as fundamental and stable as embodied ones. In a 2007 experiment, subjects saw through a VR set a real-time video of their own body filmed from a few feet behind their back. When they were stroked with a brush and saw this happening synchronously in the video feed, they reported feeling that the “virtual” body standing a few feet in front of them was in fact theirs and that they were observing it from a third-person POV.
Of course, people have forever reported having “out of body” experiences. These experiences are almost certainly real, even if explaining them via demonic possession or astral projection is clearly bunk. In a survey I ran myself, 27% of respondents said that they see themselves in the third person when visualizing entering a familiar room. Coincidentally, this is similar to the percentage of people who apply makeup daily, an activity that involves looking at yourself in the third person (through a mirror) for some time while stroking yourself with a brush.
Narrative Self
An important component of selfhood is having a narrative about yourself, your continuous life history and the type of person you are. This self-story is not strictly necessary for a lot of normal functioning; Being You recounts the story of a music producer suffering from total amnesia who lives permanently with no memory beyond the last few seconds. And yet he is able to play the piano perfectly and even rekindle love with his wife.
What is your narrative self predicting and controlling? Most likely: your social self, how other people perceive you, predict you, and treat you. This is of course of vital importance to social creatures like us! Hanson and Simler’s The Elephant in the Brain meticulously demonstrates that our story of who we are and why we do the things we do often has little to do with our real motives and a lot to do with securing assistance from others and avoiding punishment.
Free Will
For many people, the aspect of selfhood they cling to most tightly is their volition, the feeling of being the originator and director of their own actions. And if philosophically inclined, they may worry about reconciling this volition with a deterministic universe. Are you truly exercising free will or merely following the laws of physics?
This question betrays that same dualistic map-territory confusion that asks how the material redness of a strawberry could cause the phenomenological redness in your mind. Redness, free will, belief in deterministic physics — these are all features of your generative model. There is no “spooky free will” that violates the laws of physics in our common model of them, but the experience of free will certainly exists and is informative.
Imagine that you are making a cup of tea. When did it feel like you exercised free will? Likely more so at the start of the process, when you observed that all tea-making tools are available to you and contemplated alternatives like coffee or wine. Once you’re deep in the process of making the cup the subsequent actions feel less volitional, executed on autopilot if making tea is a regular habit of yours. What is particular about that moment of initiation?
One particularity is the perception that you are able to predict and control many degrees of freedom: observe and move several objects around, react in complex ways to setbacks, etc. This separates free will from actions that feel “forced” by the configuration of the world outside, like slipping on a wet surface, and reinforces the sensation that free will comes from within.
The experience of volition is also a useful flag for guiding future behavior. If the universe were arranged again in the same exact configuration as when you made tea, you would always end up making tea. But the universe (in particular, the state of your brain) never repeats. The feeling of “I could have done otherwise” is the experience of paying attention to the consequences of your action so that in a subjectively similar but not perfectly identical situation you could act differently if the consequences were not as you predicted. If the tea didn’t satisfy as expected, the experience of free will you had when you made it shall guide you the next day to the cold beer you should have drunk instead.
Being a Map
Being You is a science book, covering the results of research in various fields and presenting a comprehensive model of what your phenomenology is and how it works. But I suspect that it’s impossible to read it without it actually changing what being you is like — if all you are is a generative model of the world then enhancing this model with new insights will surely affect it.
For one, the decoupling of conscious experience from deterministic external causes implies that there’s truly no such thing as a “universal experience”. Our experiences are shared by virtue of being born with similar brains wired to similar senses and observing a similar world of things and people, but each of us infers a generative model all of our own. For every single perception it mentions, Being You also notes a condition in which people have a different one, from color blindness to somatoparaphrenia — the experience that one of your limbs belongs to someone else. The typical mind fallacy goes much deeper than mere differences in politics or abstract beliefs.
The subjective nature of all experience also offers a way to purposefully change yourself that falls between utter fatalism and “it’s all in your head” solipsistic voluntarism. Your brain is always making its best predictions; these can’t be changed by a single act of will but can be updated with enough evidence. Almost everything you know and perceive was inferred from scratch, and what was trained can be retrained. If you want to build habits, just observing yourself doing the thing for whatever reason is more useful than any story or incentive structure you can come up with. In particular, do things with your body to learn them, that’s the part of the universe your brain pays the most attention to.
The intimate connection of our consciousness to our living body also implies that we shouldn’t blithely assume that it can be easily disembodied. Robin Hanson posits digital emulations that have a similar basic consciousness to biological humans, like an emulated Elon Musk who shares a sense of selfhood with the biological version and enjoys virtual feasts or Ferraris. But it is not clear at all that such a being could even exist in principle. A digital mind that lacks a body it is trying to keep alive, that has entirely different senses than our interoceptive and exteroceptive ones, and that has an entirely different repertoire of actions available to it will have an entirely alien generative model of its world, and thus an entirely alien phenomenology — if it even has one. We can guess what it’s like to be a bat: its reliance on sonar to navigate the world likely creates a phenomenology of “auditory colors” that track how surfaces reflect sound waves similar to our perception of visual color. It’s much harder to guess what it’s like, if anything, to be an “em”.
In general, the fact that our consciousness has a lot to do with living and little with intelligence implies that we should more readily ascribe it to animals and less readily to AI. Eliezer seems to think that selfhood is necessary for conscious experience, and that babies and animals aren’t sentient. But selfhood is itself just a bundle of perceptions, separable from each other and from experiences like pain or pleasure. An animal with no selfhood cannot report “it is I, Bambi, who is suffering”, but that doesn’t mean there is no suffering happening when you harm it. And as for AI, I believe that Eliezer and Seth are in agreement that world-optimizing intelligence and what-it’s-like-to-be-you consciousness are quite orthogonal.
But there is something interesting still about intelligence: how is it that reading a book of declarative knowledge can change how I perceive the world, how I think of my own self, how I relate to my mortality? This all happened to me after reading Being You and again upon rereading it. Yet almost everything the book talks about is the domain of “system 1”, our intuitive and automatic perception. I had the chance to ask this of Seth personally and he sent me a link to a paper on how “system 2” function relates to perceiving the contents of working memory. But it still feels to me that there is something magic about our ability to reason explicitly and how it fits into this new understanding of consciousness.
Since you are reading this review you are likely interested in the same things as well. I highly recommend that you read Being You, and then that you spend a good while thinking about it.