The Real Problem

For as long as there have been philosophers, they have loved philosophizing about what life really is. Plato located the essence of life in self-motion, the soul being that which moves itself. Aristotle focused on nutrition and reproduction as the core features of living organisms. In the East the focus was less on function and more on essence: the Chinese posited ethereal fractions of qi as the animating force, similar to the Sanskrit prana or the Hebrew neshama. This lively debate kept rolling for 2,500 years — élan vital is a 20th-century coinage — accompanied by the sense of an enduring mystery, a fundamental inscrutability about life that would not yield.

And then, suddenly, this debate dissipated. This wasn’t caused by a philosophical breakthrough, by some clever argument or incisive definition that satisfied all sides and deflected all counters. It was the slow accumulation of biological science that broke “Life” down into digestible components, from the biochemistry of living bodies to the thermodynamics of metabolism to genetics. People may still quibble about how to classify a virus that possesses some but not all of life’s properties, but these semantic arguments aren’t the main concern of biologists. Even among the general public who can’t tell a phospholipid from a possum there’s no longer a sense that there’s some impenetrable mystery regarding how life can arise from mere matter.

In Being You, Anil Seth does the same to the mystery of consciousness. Philosophers of consciousness have committed the same sins as “philosophers of life” before them: they have mistaken their own confusion for a fundamental mystery, and, as with élan vital, they smuggled in foreign substances to cover the gaps. The paradigm case is René Descartes’ res cogitans, a mental substance separate from the material.

This Cartesian dualism in various disguises is at the heart of most “paradoxes” of consciousness. P-zombies are beings materially identical to humans but lacking this special res cogitans sauce, and their conceivability requires accepting substance dualism. The famous “hard problem of consciousness” asks how a “rich inner life” (i.e., res cogitans) can arise from mere “physical processing” and claims that no study of the physical could ever give a satisfying answer.

Being You by Anil Seth answers these philosophical paradoxes by refusing to engage in all but the minimum required philosophizing. Seth’s approach is to study this “rich inner life” directly, as an object of science, instead of musing about its impossibility. After all, phenomenological experience is what’s directly available to any of us to observe. 

As with life, consciousness can be broken into multiple components and aspects that can be explained, predicted, and controlled. If we can do all three we can claim a true understanding of each. And after we’ve achieved it, this understanding of what Seth calls “the real problem of consciousness” directly answers or simply dissolves enduring philosophical conundrums such as:

  • What is it like to be a bat?
  • How can I have free will in a deterministic universe?
  • Why am I me and not Britney Spears?
  • Is “the dress” white and gold or blue and black?

Or at least, these conundrums feel resolved to me. Your experience may vary, which is also one of the key insights about experience that Being You imparts.

The original photograph of “the dress”

Seeing a Strawberry

On a plate in front of you is a strawberry. Inside your skull is a brain, a collection of neurons that have direct access only to the electrochemical state of other neurons, not to strawberries. How does the strawberry out there create the perception of redness in the brain?

In the common view of perception, red light from the strawberry hits the red-sensitive cones in your retina. These cones are wired into other neurons that detect edges, then shapes, and finally these are combined into an image of a red strawberry. This view is intuitively appealing: when we see a strawberry we perceive that a strawberry is right there, and the extant strawberry intuitively seems to be the sole and sufficient cause of our perception of redness.

But if we closely study any element of this perception, we find the commonsense intuition immediately challenged.

You may see a strawberry up close or far away, at different angles, partially obscured, in dim light, etc. The perception of it as red, roughly conical, and about an inch across doesn’t change even though the light hitting your retina is completely different in each case: different angles of your visual field, different wavelengths, and so on. In fact, you will perceive a red strawberry in the absence of any red light at all, as in the following image that contains nary a single red-hued pixel in it:

You can zoom in to check: the red-seeming pixels are all gray, with R < G, B

You (well, some of you) can simply visualize a red strawberry with your eyes closed, or in a dream, or on acid. People can perceive redness through grapheme-color synaesthesia, including people who have been blind for decades. You easily perceive magenta, a color which has no associated wavelength at all, while the same exact wavelengths coming from different parts of the same image can produce perceptions of entirely different colors as in Adelson’s chessboard illusion. Wherever the perception of color is coming from, it is certainly not the mere bottom-up decoding of wavelengths of light.

And again, this redness is somehow perceived by a collection of 86 billion neurons, none of which come labeled “red” or “strawberry” or even “part of the visual system”. They just are. To understand seeing a strawberry we need to ask: how could you derive strawberries if all you had access to are the states of 86 billion simple variables and no prior idea of the connection between them? 

After observing patterns of neurons firing for a while, you will notice that the states of some neurons are entirely determined by the states of others. Others appear more independent, with states that can’t be conclusively derived from the state of the rest of the brain. These independent neurons have non-random patterns — you may notice that some of them statistically tend to fire together, for example. You could infer that there are hidden causes outside the brain that affect these, and consider the state of independent neurons to be a sensory input effected by these hidden causes. To make sense of your senses, you must model this hidden world external to the brain.

Perhaps these sensory neurons that fired together are red-sensing cones in your retina. Their congruence implies that the hidden cause of their firing (let’s call it “colored light”, although it isn’t labeled as such in the brain itself) comes from continuous “surfaces” that reflect a similar hue throughout. This isn’t a given — “color” could be distributed randomly pixel by pixel throughout space — but it’s a reasonable inference from the states of neurons that come into being. Your model doesn’t contain the labels “colored light” and “surface” but it does contain objects with the property of stably and homogeneously colored surfaces. 

You also notice that if some blue-sensing cones suddenly activate (perhaps you went from a warmly illuminated room to stand under a blue sky) this dampens the activation of the red-sensing ones elsewhere. Thus, the best model that predicts the state of all your retinal neurons is that surfaces have a fixed property that affects the relative activation of your cones. “Red” is your model of a property of a surface that activates red-sensing cones if illuminated by warm light but will activate all cones similarly (as a gray surface would normally) if the illuminant is cool. This explains the red-appearing gray strawberries in the greenish image above, and the general property of “discounting the illuminant” in human vision.
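The discounting itself can be illustrated with a von Kries-style sketch (a standard textbook simplification, not Seth’s own model; the pixel and illuminant values are invented): dividing each channel by an estimate of the illuminant recovers a stable surface property.

```python
# Von Kries-style "discounting the illuminant" (a common simplification):
# divide each RGB channel by an estimate of the scene's illuminant.
strawberry_pixel = (110, 120, 120)    # gray-ish in absolute terms, R <= G, B
illuminant_estimate = (90, 160, 160)  # scene statistics suggest cyan light

perceived = tuple(round(p / i, 2)
                  for p, i in zip(strawberry_pixel, illuminant_estimate))
print(perceived)  # the R channel now dominates: the patch reads as red
```

A pixel that is gray (or even slightly cyan) in raw values comes out red relative to a cyan illuminant, which is exactly what happens with the gray strawberries above.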

We have also solved the mystery of “the dress”: if you perceive it as white and gold it’s because your brain models it as being illuminated by a blue light — perhaps you trained it by shopping for clothes often in outdoor bazaars. If you see it as blue and black you must imagine it in a warmly lit indoor space. Spend more time outdoors and you may start to perceive it differently!

The important point here is that “redness” is a property of your brain’s best model for predicting the states of certain neurons. Redness is not “objective” in the sense of being “in the object”; it only exists in the models generated by the sort of brains which are hooked up to eyes. It feels objective because roughly 95% of people will share your assessment (color blindness affects about 8% of men and under 1% of women), as opposed to the ~50% split on the dress, but both perceptions have the same status of being generated by your brain. We intuitively separate “objective” properties of a strawberry (red, real, occupying a volume) from “subjective” ones (good in salads, pretty, evocative of spring), but all of these are properties of your brain’s predictive model of strawberries; they’re not out there to be perceived in a brain-independent way.

These models are “predictive” in the important sense that they perceive not just how things are at the moment but also anticipate how your sensory inputs would change under various conditions and as a consequence of your own actions. Thus:

  • Red = would create a perception of a warmer color relative to the illuminant even if the illumination changes.
  • Good in salads = would create perceptions of deliciousness and positive affect if consumed alongside arugula and goat cheese.
  • Real = would look like a strawberry from a different angle if you walked around it, and would generate perceptions of solidity and weight if you picked it up. An image of a strawberry on a screen generates almost the same visual input as a physical strawberry, but you perceive it as very different (unreal) because you predict different consequences to trying to grab it.
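The predictions above can be caricatured in code. In this hypothetical sketch (the function and its arguments are my invention, not the book’s), “real” is not a label read off the image but a conditional prediction about the consequences of grabbing:

```python
# Hypothetical sketch: a perceived property as a bundle of predicted
# action consequences, not a static feature of the visual input.
def perceived_as_real(visual_match, predicted_grasp):
    """An object reads as 'real' only if grasping it is predicted to yield
    solidity and weight, not merely if it produces a matching image."""
    return visual_match and predicted_grasp == "solid"

physical_strawberry = perceived_as_real(True, "solid")        # perceived real
on_screen_strawberry = perceived_as_real(True, "flat glass")  # perceived unreal
```

Identical visual input, different predicted consequences, different perception.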

This, in broad strokes, is the predictive processing theory of perception. But Being You doesn’t just lay out the how of predictive processing, it also answers the how come and the so what.

Cybernetic Organism

Why is our brain “trying to predict its sensory inputs”? What does it even mean for it to have a goal?

Seth draws the insightful comparison to cybernetic systems, systems that control some variable of interest using feedback loops. These range in complexity from a thermostat that turns a heater on and off to regulate temperature in a room to an ecosystem where plant and animal populations are balanced through complicated interactions. A conspicuous feature of cybernetic systems is that they usually appear to have a “purpose”, like a self-guided missile aiming at a target or your thermostat aiming at a temperature goal.

A related principle, Ashby’s Law of Requisite Variety, holds that a regulator can only succeed if it has at least as much variety in its responses as there is in the disturbances it must counteract: a system cannot be controlled by something simpler than itself.

An important insight about cybernetic systems, the Good Regulator Theorem, was formulated by Roger Conant and W. Ross Ashby: “every good regulator of a system must be (or contain) a model of that system”. A thermostat that has access only to a thermometer will do a worse job at regulating a room’s temperature than one that has access to weather forecasts, the properties of the room and the heater, and its own tolerances and inaccuracies. The more mutual information there is between the regulator and its environment, the better it can exert control over it.
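A toy simulation (my own construction, not an example from the book) makes the Conant-Ashby point concrete: a thermostat that merely reacts to the thermometer lets the temperature wobble, while one containing a model of the predictable disturbance cancels it in advance.

```python
# Toy illustration of the Good Regulator idea. All dynamics and numbers
# are invented for illustration.
TARGET = 20.0

def simulate(regulator, steps=200):
    """Run a room whose temperature drifts in a predictable pattern."""
    temp, total_error = TARGET, 0.0
    for t in range(steps):
        drift = -0.5 if t % 2 == 0 else 0.3  # predictable disturbance
        temp += drift + regulator(temp, t)
        total_error += abs(temp - TARGET)
    return total_error / steps

def reactive(temp, t):
    # Bang-bang control: knows only the current thermometer reading.
    return 0.4 if temp < TARGET else -0.4

def model_based(temp, t):
    # Contains a model of the disturbance and cancels it in advance.
    predicted_drift = -0.5 if t % 2 == 0 else 0.3
    return (TARGET - temp) - predicted_drift

print(simulate(reactive), simulate(model_based))  # model-based error is smaller
```

The model-based regulator holds the room steady precisely because it contains a model of the environment it is regulating.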

Your brain is a cybernetic regulator controlling your body into the state of being alive.

The Terminator, a cybernetic organism of a more metal variety

This is a consequence of evolution and, more broadly, thermodynamics. Being alive is a very low entropy state — your body temperature is in a narrow range around 98.6°F, your organs are neatly arranged — in a high entropy world that inexorably tries to dissolve you into room-temperature mush. You can’t persist long in your special state of living by being passive, you have to actively regulate every aspect of yourself and your environment that impacts your vitals. You have to take action. As Ashby and Conant said, to regulate yourself and your environment you must comprehensively model both, and in particular the consequences of your actions on both.

Thus, while all of your perceptions are subjective in the sense that they are features of your mind’s map and not of any independent territory, they are not arbitrary or subject to your whims. The core of your model is an unmodifiable prediction, a fixed “hyperprior”, of remaining alive. Its subjective flavor is the base inchoate sense of simply being a living organism, and the survival instinct that overrides all other perceptions when the prediction of staying alive is threatened with disconfirmation. One level above the prediction of just living are predictions of your body’s vitals, from its basic integrity to control of variables like heart rate or blood sugar and oxygenation. Your most important and vivid perceptions — embodiment, pain, emotions, moods — are interoceptive experiences that have more to do with your body than with the world outside. 

Finally, exteroceptive sensations are your model of the outside world, primarily as you can act upon it to impact your body. The strawberry comes with an immediate perception of edibility along with redness. This is because eating stuff is an action potentially available to you, and discriminating between edible and inedible things is vital to staying vital.

Self Image

To summarize so far: the contents of your consciousness, your perceptions, are features of a best-guess model concocted by your brain to predict its own present and future states. It’s driven by an overriding prediction of staying alive, which impacts your brain through a rich and vivid channel of interoceptive sensation.

While it’s easy to see how perceptions like redness, hunger, and edibility fit into this picture it may not seem immediately applicable to more complex conscious experiences that have to do with your selfhood. This is the biggest section of the book, disentangling selfhood into components such as ownership of a body, a first person view, a continuous narrative history, a sense of volition, and more. It details how all these are explained through the lens of a generative model keeping your body alive, and also the clever experimental setups used by Seth and his fellow scientists to understand each one (and mess with it at will). I can’t do this section full justice in a brief review, but we can take a whirlwind tour of what it means not just to be but to be you.

Body Ownership

To a collection of neurons locked inside a skull, the rest of the body is as much “out there” as any other object. And yet, it doesn’t feel that way: you have a strong sense of the exact extent of your body and police this boundary rigorously. You apply this even to something like saliva, which turns from a normal part of your body into a yucky foreign substance the moment it crosses an invisible line somewhere in the vicinity of your teeth.

The main determinant of what feels like your body is that your body elicits interoceptive sensations that match external ones. This can be demonstrated by the rubber hand illusion: a detached rubber hand feels like a mere object if you just look at it, but if it is stroked with a brush at the same time as your real hand, the consilience between seeing and feeling the strokes creates a strong feeling that it is part of you. You will instinctively flinch in horror if “your” rubber hand is suddenly threatened.

Illustration of the rubber hand illusion experiment from The Scientist magazine

Your map of the world is chiefly a model of the sensory consequences of your actions, and these sensory consequences are what distinguishes your body from everything else.

First-person Perspective

The experience of observing the world from a single point somewhere between your eyes and slightly behind your forehead comes from your generative model of vision. As you move around the world, this first-person POV is the prediction that surfaces will be visible if they are facing that point with no obstruction, and not visible otherwise.

Illustration from Steven Lehar’s “The Boundaries of Human Knowledge” demonstrating that although you model objects as occupying volume, you are also aware that you are only seeing a 2D surface that faces a single point

But as we’ve seen, visual-based perceptions aren’t as fundamental and stable as embodied ones. In a 2007 experiment, subjects saw through a VR set a real-time video of their own body filmed from a few feet behind their back. When they were stroked with a brush and saw this happening synchronously in the video feed, they reported feeling that the “virtual” body standing a few feet in front of them was in fact theirs and that they were observing it from a third-person POV.

Of course, people have forever reported having “out of body” experiences. These experiences are almost certainly real, even if explaining them via demonic possession or astral projection is clearly bunk. In a survey I ran myself, 27% of respondents said that they see themselves in the third person when visualizing entering a familiar room. Coincidentally, this is similar to the percentage of people who apply makeup daily, an activity that involves looking at yourself in the third person (through a mirror) for some time while stroking yourself with a brush.

Narrative Self

An important component of selfhood is having a narrative about yourself, your continuous life history and the type of person you are. This self-story is not strictly necessary for a lot of normal functioning; Being You recounts the story of a music producer suffering from total amnesia who lives permanently with no memory beyond the last few seconds. And yet he is able to play the piano perfectly and even rekindle love with his wife.

What is your narrative self predicting and controlling? Most likely: your social self, how other people perceive you, predict you, and treat you. This is of course of vital importance to social creatures like us! Hanson and Simler’s The Elephant in the Brain meticulously demonstrates that our story of who we are and why we do the things we do often has little to do with our real motives and a lot to do with securing assistance from others and avoiding punishment.

Free Will

For many people, the aspect of selfhood they cling to most tightly is their volition, the feeling of being the originator and director of their own actions. And if philosophically inclined, they may worry about reconciling this volition with a deterministic universe. Are you truly exercising free will or merely following the laws of physics?

This question betrays that same dualistic map-territory confusion that asks how the material redness of a strawberry could cause the phenomenological redness in your mind. Redness, free will, belief in deterministic physics — these are all features of your generative model. There is no “spooky free will” that violates the laws of physics in our common model of them, but the experience of free will certainly exists and is informative.

Imagine that you are making a cup of tea. When did it feel like you exercised free will? Likely more so at the start of the process, when you observed that all tea-making tools are available to you and contemplated alternatives like coffee or wine. Once you’re deep in the process of making the cup the subsequent actions feel less volitional, executed on autopilot if making tea is a regular habit of yours. What is particular about that moment of initiation?

One particularity is the perception that you are able to predict and control many degrees of freedom: observe and move several objects around, react in complex ways to setbacks, etc. This separates free will from actions that feel “forced” by the configuration of the world outside, like slipping on a wet surface, and reinforces the sensation that free will comes from within.

The experience of volition is also a useful flag for guiding future behavior. If the universe were arranged again in the same exact configuration as when you made tea, you would always end up making tea. But the universe (in particular, the state of your brain) never repeats. The feeling of “I could have done otherwise” is the experience of paying attention to the consequences of your action so that in a subjectively similar but not perfectly identical situation you could act differently if the consequences were not as you predicted. If the tea didn’t satisfy as expected, the experience of free will you had when you made it will guide you the next day to the cold beer you should have drunk instead.

Being a Map

Being You is a science book, covering the results of research in various fields and presenting a comprehensive model of what your phenomenology is and how it works. But I suspect that it’s impossible to read it without it actually changing what being you is like — if all you are is a generative model of the world, then enhancing this model with new insights will surely affect it.

For one, the decoupling of conscious experience from deterministic external causes implies that there’s truly no such thing as a “universal experience”. Our experiences are shared by virtue of being born with similar brains wired to similar senses and observing a similar world of things and people, but each of us infers a generative model all of our own. For every single perception mentioned in Being You it also notes the condition of having a different one, from color blindness to somatoparaphrenia — the experience that one of your limbs belongs to someone else. The typical mind fallacy goes much deeper than mere differences in politics or abstract beliefs.

The subjective nature of all experience also offers a way to purposefully change yourself that falls between utter fatalism and “it’s all in your head” solipsistic voluntarism. Your brain is always making its best predictions; these can’t be changed by a single act of will but can be updated with enough evidence. Almost everything you know and perceive was inferred from scratch, and what was trained can be retrained. If you want to build habits, just observing yourself doing the thing for whatever reason is more useful than any story or incentive structure you can come up with. In particular, do things with your body to learn them, that’s the part of the universe your brain pays the most attention to.

The intimate connection of our consciousness to our living body also implies that we shouldn’t blithely assume that it can be easily disembodied. Robin Hanson posits digital emulations that have a similar basic consciousness to biological humans, like an emulated Elon Musk who shares a sense of selfhood with the biological version and enjoys virtual feasts or Ferraris. But it is not clear at all that such a being could even exist in principle. A digital mind that lacks a body it is trying to keep alive, that has entirely different senses than our interoceptive and exteroceptive ones, and that has an entirely different repertoire of actions available to it will have an entirely alien generative model of its world, and thus an entirely alien phenomenology — if it even has one. We can guess what it’s like to be a bat: its reliance on sonar to navigate the world likely creates a phenomenology of “auditory colors” that track how surfaces reflect sound waves similar to our perception of visual color. It’s much harder to guess what it’s like, if anything, to be an “em”.

In general, the fact that our consciousness has a lot to do with living and little with intelligence implies that we should more readily ascribe it to animals and less readily to AI. Eliezer seems to think that selfhood is necessary for conscious experience, and that babies and animals aren’t sentient. But selfhood is itself just a bundle of perceptions, separable from each other and from experiences like pain or pleasure. An animal with no selfhood cannot report “it is I, Bambi, who is suffering”, but that doesn’t mean there is no suffering happening when you harm it. And as for AI, I believe that Eliezer and Seth are in agreement that world-optimizing intelligence and what-it’s-like-to-be-you consciousness are quite orthogonal.

But there is something interesting still about intelligence: how is it that reading a book of declarative knowledge can change how I perceive the world, how I think of my own self, how I relate to my mortality? This all happened to me after reading Being You and again upon rereading it. Yet almost everything the book talks about is the domain of “system 1”, our intuitive and automatic perception. I had the chance to ask this of Seth personally and he sent me a link to a paper on how “system 2” function relates to perceiving the contents of working memory. But it still feels to me that there is something magic about our ability to reason explicitly and how it fits into this new understanding of consciousness.  

Since you are reading this review you are likely interested in the same things as well. I highly recommend that you read Being You, and then that you spend a good while thinking about it.

Comments (124)

Some interesting examples but this seems to be yet another take that claims to solve/dissolve consciousness by simply ignoring the Hard Problem.

It sounds like Seth's position is that the hard problem of consciousness is the result of confusion, so he's not ignoring it, but saying that it only appears to exist because it's asked within the context of a confused frame.

Seth seems to be suggesting that the hard problem of consciousness is a bit like asking why don't people fall off the edge of the Earth? We think of this question as confused because we believe the Earth is round. But if you start from the assumption that the Earth is flat, then this is a reasonable question, and no amount of explanation will convince you otherwise.

The reason these two situations look different is that it's now easy for us to verify that the Earth is not flat, but it's hard for us to verify what's going on with consciousness. Seth's book is making a bid, by presenting the work of many others, to say that what we think of as consciousness is explainable in ways that make the Hard Problem a nonsensical question.

That seems quite a bit different from "simply ignoring the Hard Problem", though I admit Jacob does not go into great detail about Seth's full arguments for this. But I'd posit that if you want to disagree with something, you need to disagree with the object-level claims Seth makes first, and only after reaching a point where you have no more disagreements is it worth considering whether or not the Hard Problem still makes sense, and if you do then it should be possible to make a specific argument about where you think the Hard Problem arises and what it looks like in terms of the presented model.

Without reading the book we can't be sure. But the trouble is that this claim has been made a million times, and in every previous case the author has turned out to be either ignoring the hard problem, misunderstanding it, or defining it out of existence. So if a longish, very positive review with the title 'x explains consciousness' doesn't provide any evidence that x really is different this time, it's reasonable to think that it very likely isn't.

The reason these two situations look different is that it's now easy for us to verify that the Earth is not flat, but it's hard for us to verify what's going on with consciousness.

Even if I had no way of verifying it, "the earth is (roughly) spherical and thus has no edges, and its gravity pulls you toward its centre regardless of where you are on its surface" would clearly be an answer to my question, and a candidate explanation pending verification. My question was only 'confused' in the sense that it rested on a false empirical assumption; I would be perfectly capable of understanding your correction to this assumption. (Not necessarily accepting it -- maybe I think I have really strong evidence that the earth is flat, or maybe you...

Are you suggesting that in the case of the hard problem, there may be some equivalent of the 'flat earth' assumption that the hard-problemists hold so tightly that they can't even comprehend a 'round earth' explanation when it's offered?

Yes. Dualism is deeply appealing because most humans, or at least most of humans who care about the Hard Problem, seem to experience themselves in dualistic ways (i.e. experience something like the self residing inside the body). So even if it becomes obvious that there's no "consciousness sauce" per se, the argument is that the Problem seems to exist only because there are dualistic assumptions implicit in the worldview that thinks the Problem exists.

I'd go on to say that if we address the Meta Hard Problem in a way that shows the Hard Problem to be the result of confusion, then there's nothing left to say about the Hard Problem, just like there's nothing interesting to say about why ships never sail off the edge of the Earth.

So you don't believe there is such a thing as first-person phenomenal experiences, sort of like Brian Tomasik? Could you give an example or counterexample of what would or wouldn't qualify as such an experience?
Gordon Seidoh Worley:
I think that there's a process we can meaningfully point to and call qualia, and it includes all the things we think of as qualia, but qualia is not itself a thing per se but rather the reification of observations of mental processes that allows us to make sense of them. I have theories of what these processes are and how they work and they mostly line up with the what's pointed at by this book. In particular I think cybernetic models are sufficient to explain most of the interesting things going on with consciousness, and we can mostly think of qualia as the result of neurons in the brain hooked up in loops so that their inputs include information not only from other neurons but also from themselves, and these self-sensing loops provide the input stream of data that other neurons interpret as self-experience/qualia/consciousness.
I don't see how that helps. We don't have a reductive explanation of consciousness as a thing, and we don't have a reductive explanation of consciousness as a process.
I wouldn't say "can’t even comprehend" but my current theory is that one such detrimental assumption is "I have direct knowledge of content of my experiences".
It's true this is the weakest link, since instances of the template "I have direct knowledge of X" sound presumptuous and have an extremely bad track record. The only serious response in favor of the presumptuous assumption that I can think of is epiphenomenalism in the sense of "I simply am my experiences", with self-identity (i.e. X = X) filling the role of "having direct knowledge of X". For explaining how we're able to have conversations about "epiphenomenalism" without it playing any local causal role in us having these conversations, I'm optimistic that observation selection effects could end up explaining this.
Personally I wouldn't say "I am my experiences" is epiphenomenalism - I have a causal role.
Response to what?
Response in favor of the assumption that Signer said was detrimental.
Similarly, I think that one inapplicable assumption is the idea that people can reliably self-analyze and come to accurate conclusions, thus being presumed reliable in their reports, including about consciousness. I remember reading that people's ability to self-analyze correctly is basically zero: people are pretty much always incorrect about their own traits and thoughts.
Interpret things strictly enough and everyone is always wrong about everything. They can still be usefully right.
The point is that they're usually not even that useful; bringing in an outsider would probably help the situation. Therefore one of the basic assumptions behind a lot of consciousness discourse and intuitions is false, and people don't know this. In particular, it's why I now dislike a lot of consciousness intuitions, especially dualist ones. The fact that we are so bad at self-analysis is why we need outside help so much.
Is there a reason why it is detrimental? Note that "I have direct knowledge of the content of my experiences" doesn't imply certain knowledge, a non-physical ontology, or epiphenomenalism...
Doesn't "direct" have the implication of "certain" here?
Some people think so, others don't. Indirectness adds extra uncertainty, but it's not the only possible source of uncertainty.
I think it's detrimental because "direct" there prevents people from accepting weak forms of illusionism, and that creates problems additional to the Hard Problem, like Mary's Room or Chalmers's conceivability of qualia's structure. And because... I don't want to say "the assumption is wrong", because knowledge is an arbitrary high-level concept, but you can formulate a theory of knowledge where it doesn't hold, and that theory is better.
4 · Caleb Reske · 3mo
Agreed! These topics - the narrative self, the perception of free will, the predictive-processing theory, etc. - are all incredibly interesting and worth studying. But what has been explained in the book doesn't seem to come close to what consciousness is at all - rather, how our perceptions in consciousness are influenced by our sense of self and story, something that has already been well-studied. I'm fairly convinced by the predictive-processing theory of self and cognition - but I don't treat this as an explanation for the existence of experience itself. A generative artificial model can have "predictive processing," but does this give it a subjective, conscious experience? What would it mean, exactly, if it did? I'm reminded of this post - the reason the two "consciousness camps" seem to be talking past each other might be because we have different intuitions about what needs explaining. To me, what really needs explaining is the fact of consciousness - its existence, not its qualities. Why "feel" at all? This book, while it looks interesting, doesn't look like it touches that question.
2 · Jacob Falkovich · 3mo
I tried to communicate a psychological process that occurred for me: I used to feel that there's something to the Hard Problem of Consciousness, then I read this book explaining the qualities of our phenomenology, and now I don't think there's anything to the HPoC. This isn't really ignoring the HPoC; it's offering a way out that seems more productive than addressing it directly, in part because the terms the HPoC insists on for addressing it are themselves confused and ambiguous. With that said, let me try to actually address the HPoC directly, although I suspect that this will not be much more convincing. The HPoC roughly asks "why is perceiving redness accompanied by the quale of redness?" This can be interpreted in one of two ways.

1. Why this quale and not another? This isn't a meaningful question, because the only thing that determines a quale as being a "quale of redness" is that it accompanies a perception of something red. I suspect that when people read these words they imagine something like looking at a tomato and seeing blue, but that's incoherent: you can't perceive red but have a "blue" quale.

2. Why this quale and not nothing? Here it's useful to separate the perception of redness, i.e. a red object being part of the map, and the awareness of perceiving redness, i.e. a self that perceives a red object being part of the map. These are two separate perceptions. I suspect that when people think about p-zombies or whatever, they imagine experiencing nothingness or oblivion rather than a perception unaccompanied by experience, or they imagine some subliminal "red" making them hungry, similar to how it would affect a p-zombie.

There is no coherent way to imagine being aware of perceiving red, with this being different from just perceiving red, without this awareness being an experience. All you have is experience. The HPoC is demanding a justification of experience from within a world in which everything is just experiences. Of course it can't be answered! If it could formulat
Edit: It's a meaningful question because, as far as we are concerned, it could have been different, since we don't have a way of predicting it. Moreover, it quite possibly does vary between individuals, because red-green colour blindness is a thing. What determines, in the sense of pinning down, a quale is a combination of the external stimulus, e.g. 600nm light, and the subject. But that isn't the relevant sense of "determines". It isn't causal determinism, and it isn't the kind of "vertical" determinism that arises from having a reductive explanation. If subjective red is an entirely physical phenomenon, then it should be determined by, and predictable from, the underlying physics. This we cannot do: we cannot predict non-human qualia, or novel human qualia. If there is a set of facts that cannot be deduced from physics, physicalism is wrong. Reductionism allows some basic facts, about fundamental laws and primitive entities, to go unreduced, but not high-level phenomena, which include consciousness.

No, it demands a justification of experience on the basis of a physical world, if you assume you are in one. There is no HP in an idealist ontology, because there is no longer a need to explain one thing in terms of another. It's unlikely that Seth is an idealist. The success of science in the twentieth and twenty-first centuries has led many philosophers to adopt a physicalist ontology, basically the idea that the fundamental constituents of reality are what physics says they are. (It is a background assumption of physicalism that the sciences form a sort of tower, with psychology and sociology near the top, biology and chemistry in the middle, and physics at the bottom. The higher and intermediate layers don't have their own ontologies -- mind-stuff and élan vital are outdated concepts -- everything is either a fundamental particle, or an arrangement of fundamental particles.) So the problem of mind is now the problem of qualia, and the way philosoph
I think I see what you're saying and I do suspect that experience might be too fundamentally subjective to have a clear objective explanation, but I also think it's premature to give up on the question until we've further investigated and explained the objective correlates of consciousness or lack thereof - like blindsight, pain asymbolia, or the fact that we're talking about it right now. And does "everything is just experiences" mean that a rock has experiences? Does it have an infinite number of different ones? Is your red, like, the same as my red, dude? Being able to convincingly answer questions like these is part of what it would mean to me to solve the Hard Problem.
2 · Jacob Falkovich · 3mo
By "everything is just experiences" I mean that all I have of the rock are experiences: its color, its apparent physical realness, etc. As for the rock itself, I highly doubt that it experiences anything. As for your red being my red, we can compare the real phenomenology of it: does your red feel closer to purple or orange? Does it make you hungry or horny? But there's no intersubjective realm in which the qualia themselves of my red and your red can be compared, and no causal effect of the qualia themselves that can be measured or even discussed. I feel that understanding that "is your red the same as my red" is a question-like sentence that doesn't actually point to any meaningful question is equivalent to understanding that the HPoC is a confusion, and it's perhaps easier to start with this. Here's a koan: WHO is seeing two "different" blues in the picture below?
Presumably you mean all you have of the rock are experiences. From your other comments, it doesn't sound like you are solving the HP with idealism.
1. In general, how can you know whether and how much something has experiences?
2. I think with things like the nature of perception you could say there's a natural incomparability, because you couldn't (seemingly) experience someone else's perceptions without translating them into structures your brain can parse. But I'm not very sure on this.
The Hard Problem doesn’t exist. Do you believe that all your beliefs are represented only in the structure of your brain? Then changing the structure of your brain changes your beliefs; this way you could theoretically be made to believe anything, including, of course, things that are false. Some false beliefs are useful, such as some optical illusions, and illusions in general, like the belief that “you” are “experiencing” things. (I once interviewed a man who had had a stroke and reported feeling like “he wasn’t there” anymore; he would look at his own hand and say, “it’s like it’s not mine”. This caused difficulties with locomotion and with knowing when “he” had to go to the bathroom, because it was hard for him to realize it was actually “his” bladder being full and “him” having to make a decision to relieve it.) It’s a useful, evolved structure in your brain that makes you believe that. But it’s technically still false.

Similarly, you could hallucinate that a dragon is standing in front of you; some people actually have such hallucinations. The rational thing to do at such a moment is to disbelieve your direct experience, based on all the other things you know about the world; you know that the evidence against the existence of dragons is overwhelming and you know hallucinations happen. However, disbelieving that what you’re experiencing is real doesn’t make you suddenly not experience it; that’s why hallucinations can be so debilitating.

Ever had déjà vu? When I have déjà vu, I get an overwhelming sense of having experienced something before and my mind starts racing, trying to explain it. I usually recognize rationally that this is probably a déjà vu, but that explanation feels very unsatisfying in the moment because the feeling of recognition is so convincing. It’s only when this sensation subsides about ten seconds later that I can put the matter to rest, assured that it was just a glitch in my brain. But what if it never subsided? What if you had a déjà
This is again simply ignoring the Hard Problem. Your supporting paragraphs seem both true and irrelevant. You're equivocating, conflating consciousness with self-awareness. Consciousness is not the sense-of-self. That is merely one of many things that one can be conscious of.
The burden of proof is on those who assert that the Hard Problem is real. You can say what consciousness is not, but can you say what it is? As it stands, no explanation of the Hard Problem is possible, because the Hard Problem has no criteria for what would comprise a satisfying explanation, no way to distinguish a correct explanation from an incorrect one. All real science has such criteria, yet even David Chalmers has none. Until those criteria are established, the existence of the Hard Problem will forever remain unfalsifiable and unscientific, and belief in it irrational.

Unfortunately, proper criteria for explanations always involve (physical) observations and their predictions. Therefore any attempt to establish criteria for explanations of the Hard Problem is met with the criticism that, because it refers to physical aspects of consciousness, it ignores the Hard Problem. Evidently, proponents of the Hard Problem have backed themselves into a contradictory corner: the Hard Problem is unfalsifiable, and any attempt to make it falsifiable makes it not the Hard Problem. If the Hard Problem is “above” science (i.e., not science), as it seems to be, then it is above inquiry, and if it’s above inquiry, why inquire? The naked truth is that belief in the existence of the Hard Problem fetishizes mystery; it abhors actual explanation and therefore scrambles to keep its suppositions immune to it.

Belief in the Hard Problem, being unscientific and therefore not real, raises the question of why such beliefs can nonetheless take root in the face of overwhelming contrary evidence, which is what my earlier post attempted to explain. I’m experiencing just like you, but the Hard Problem doesn’t jibe at all with the rest of my beliefs (and I have seen many attempts to reconcile them, all unsuccessful). Therefore I choose to accept the benefits of the sensation of experience and accept the Easy Problem of consciousness as the overwhelmingly likely Only Problem of consciousness.
The fact that we can't fully explain consciousness is a point in favour of the HP. Of course, any statement of a problem has to state something about what it is ... but something isn't everything. Yes, and they have arguments you haven't addressed. I use the criterion of being able to make novel predictions. We clearly don't have a solution that reaches that criterion.
But my question was, what exactly can’t we fully explain? What are you referring to when you say “consciousness” and what about it can’t we explain? Such as? Agreed, but what exactly should it predict? General relativity made novel predictions when it was first formulated, but about the movement of planets and so forth, so I presume that doesn’t count as a solution to the Hard Problem of consciousness.
Hard problem stuff?
I feel like most of your comment is unfair, except for this part. Let me attempt to make it more concrete for you. Suppose a future scientist offers you technological immortality, but the procedure will physically destroy your brain over time, replacing it with synthetic parts. Your new synthetic brain won't fail from old age and, unlike a biological brain, can be backed up (and reconstructed) to protect against its inevitable accidental destruction over the coming eons. Do you take his offer? What assurances do you need? If you're wrong about certain details and accept, you die (brain destroyed), so you'd better get it right. I expect that assurances one could rationally accept would constitute a solution to the Hard Problem. But maybe you'll surprise me. This scenario is a crux for me (well, one of a few perhaps) such that were they addressed, I would either consider the Hard Problem solved, or else decide that I have no reason left to care about the Hard Problem. The scenario has a number of assumptions that may not hold for you. But I can only guess. Can we agree on the following?

* Humans do not have ectoplasmic-ghost souls or the like. Rather, the mind more directly inhabits the brain, and if it's destroyed, you have permanently died. Gradually replacing your brain with the wrong sort of synthetic parts (such as plastic) will kill you.
* The physical molecules of the brain are completely replaced by natural biological processes over time; i.e., your mind is not your brain's atoms, but rather something about their structure, and therefore a procedure like the offer could (in principle) work.
* There are physical structures, including complex (even biological) ones, that are not alive in the sense of being conscious/aware. I.e., panpsychism is false.
* Automatons (chatbots?) can say they are conscious when they are not. I.e., zombies can be constructed, in principle, and a procedure like this could replace you with one. (This is not a Chalmers "p-zombie". It
Thank you for this reply, I think this helps to pin down where our disagreement comes from. Technically I don’t disagree with your assumptions, because I think it’s equally valid to say they’re true as that they’re false, which is exactly the issue I have with them. There doesn’t seem to be a fact of the matter about them (i.e., there’s no way to experimentally distinguish a world in which any of these assumptions holds from one in which it does not), so if the existence of the Hard Problem is derived from them, then that doesn’t alleviate the issue of its unfalsifiability. The cause of this issue is that (from my point of view) many of the words you’re using don’t have clear definitions in the domain that you’re trying to use them in. I don’t mean to be a pedant, but if we’re really trying to use language for extraordinary investigations like these, then I think precision is warranted.

For now, let me just focus on the thought experiment you posed. The way I see it, it’s equivalent to the Ship of Theseus. I think what we’re ultimately trying to grapple with is how best to model reality, and it seems to me that we actually already have a perfectly good model to solve the Ship of Theseus and your thought experiment, namely particle physics. If you look at the Ship of Theseus or a person’s brain or body (or a piece of text they wrote), these are collections of particles that create a causal chain to somebody saying “Hey, it’s the Ship of Theseus!” or “Hey, gilch wrote a reply!” Over time, some of those particles may get swapped for others and cause us to still use the same name, or maybe not. There’s no mystery or contradiction there; it’s a bunch of particles doing their thing, and names are patterns in those particles, for example in the air when we speak them or in silicon when we’re writing them on a phone.

Do we think about the world in terms of fundamental particles? No, it’s wildly impractical, so we’ve been forced to resort, through our evolution and the evoluti
In the sense that you mean this, this is a general argument against the existence of everything, because ultimately words have to be defined either in terms of other words or in terms of things that aren't words. Your ontology has the same problem, to the same degree or worse. But we only need to give particular examples of conscious experience, like suffering. There's no need to prove that there is some essence of consciousness. Theories that deny the existence of these particular examples are (at best) at odds with empiricism. It's deeply unclear to me what you mean by this. If you're denying that you have phenomenal experiences like suffering (i.e. negative valences), your rational decision making should be strongly affected by this belief. In the same way that someone who has stopped believing in Hell and Heaven should change their behavior to account for this radical change in their ontology.
Hi, please see my reply to gilch above. To add to that reply, an explanation only ever serves one function, namely to aid in prediction; every moment of our life, we try to achieve outcomes by predicting which action will lead to which outcome. An explanation to the Hard Problem doesn’t do that. Any state of consciousness that I try to achieve I do so with concepts related to the Easy Problem. I do have experiences (I don’t know what the word “phenomenal” would add to that), such as pain, but to the extent that I can predict and control these, I do so purely with solutions to the Easy Problem. And in my book, concepts that exist only in explanations that don’t aid in prediction are by definition not real. But the Hard Problem is even worse than that; it’s set up so that we can’t tell the difference between a correct and incorrect explanation in the first place, which means literally anything could be an explanation, which is equivalent to no explanation at all. Sure, you can choose to believe that something like panpsychism is real or that it’s not real, but because neither belief adds any predictive power, you’re better off just cutting it out, as per Occam’s Razor.
You seem to be claiming that you have experiences, but that their role is purely functional. If you were to experience all tactile sensations as degrees of being burnt alive, but you could still make predictions just as well as before, it wouldn't make any difference to you?
It doesn’t make sense to say that I could make predictions just as well as before if I experienced all tactile sensations as degrees of being burnt alive, because such sensations would be equivalent to predictions that I would be burning alive, which would be false and therefore interfere with my functioning. You can’t separate experience from its consequences. That’s also why philosophical zombies are impossible; if you could have a body which doesn’t experience, then it’s not going to function as normal. If I were to experience all tactile sensations as degrees of being burnt alive, I would assume something was wrong with my body and I would want to alleviate that situation by making predictions about which actions would alleviate it by wielding only concepts related to the Easy Problem. How would the Hard Problem help me in that situation?
I don't see a necessary equivalence here. You could be fully aware that the sensations were inaccurate, or hallucinated. But it would still hurt just as much. A human body, or any kind of body? It seems like a robot could engage in the same self-preservation behavior as a human without needing to have anything like burning sensations. I can imagine a sort of AI prosthesis for people born with congenital insensitivity to pain that would make their hand jerk away from a burning hot surface, despite them not ever experiencing pain or even knowing what it is.
The experience of hurting makes you respond as if you really were hurting; you have some voluntary control over your response via the frontal cortex’s modulation of pain signals, but it is very limited. Any control we exert over our experiences corresponds to physical interventions. The Hard Problem simply does not add anything of value here. That you can imagine such a prosthesis does not mean that it could exist. It depends on how such a prosthesis would work exactly. I suspect that the more such a prosthesis was able to mimic the normal response, the more its wearer would experience pain, i.e., inducing the normal response is equivalent to inducing the normal experience.
Here are some cruxes, stated from what I take to be your perspective:

1. That there's nothing at stake whether or not we have first-person experiences of the kind that eliminativists deny; it makes no practical difference to our lives whether we're so-called "automatons" or "zombies", such terms being only theoretical distinctions. Specifically, it should make no difference to a rational ethical utilitarian whether or not eliminativism happens to be true. Resources should be allocated the same way in either case, because there's nothing at stake.
2. Eliminativism is a more parsimonious theory than non-eliminativism, and is strictly better than it for scientific purposes; eliminativism already explains all of the facts about our world, and adding so-called "first-person experiences" is just a cog which won't connect to anything else; removing it wouldn't require arbitrary double standards for the validity of evidence.
3. There's no way of separating experience from functionality in a system. If an organism manifests consistent and enduring behaviors of self-preservation, goal-seeking, etc., then it must have experiences, regardless of how the organism itself happens to be constructed.

I'm looking for double cruxes now. The first two don't seem very useful to me as double cruxes, but maybe the last one is. Any ideas?
From my point of view, much or all of the disagreement around the existence of the Hard Problem seems to boil down to the opposition between nominalism and philosophical realism. I’ll discuss how I think this opposition applies to consciousness, but let me start by illustrating it with the example of money having value.

In one sense, the value of money is not real, because it's just a piece of paper or metal or a number in a bank’s database. We have systems in place such that we can track relatively consistently that if I work some number of hours, I get some of these pieces of paper or metal or the numbers on my bank account change in some specific way, and I can go to a store and give them some of these materials or connect with the bank’s database to have the numbers decrease in some specific way, while in exchange I get a coffee or a t-shirt or whatever. But this is a very obtuse way of communicating, so we just say that “money has value” and everybody understands that it refers to this system of exchanging materials and changing numbers. So in the case of money, we are pretty much all nominalists; we say that money has value as a shorthand, and in that sense the value of money is real. On the other hand, a philosophical realist would say that actually the value of money is real independently from our definition of the words. (I view this idea similarly to how Eliezer Yudkowsky talks about buckets being “magical” in this story.)

In the case of the value of money, philosophical realism does not seem to be a common position. However, when it comes to consciousness, the philosophical realist position seems much more common. This strikes me as odd, since both value and consciousness appear to me to originate in the same way; there is some physical system which we, through the evolution of language and culture generally, come to describe with shorthands (i.e., words), because reality is too complicated to talk about exhaustively and in most practical matters we all u
I appreciate hearing your view; I don't have any comments to make. I'm mostly interested in finding a double crux. This isn't really a double crux, but it could help me think of one: If someone becomes convinced that there isn't any afterlife, would this rationally affect their behavior? Can you think of a case where someone believed in Heaven and Hell, had acted rationally in accordance with that belief, then stopped believing in Heaven and Hell, but still acted just the same way as they did before? We're assuming their utility function hasn't changed, just their ontology.
Well, for me, one crux is this question of nominalism vs philosophical realism. One way to investigate this question for yourself is to ask whether mathematics is invented (nominalism) or discovered (philosophical realism). I don’t often like to think in terms of -isms, but I have to admit I fall pretty squarely in the nominalist camp, because while concepts and words are useful tools, I think they are just that: tools, that we invented. Reality is only real in a reductionist sense; there are no people, no numbers and no consciousness, because those are just words that attempt to cope with the complexity of reality, so we just shouldn’t take them so seriously. If you agree with this, I don’t see how you can think the Hard Problem is worth taking seriously. If you disagree, I’m interested to see why. If you could convince me that there is merit to the philosophical realist position, I would strongly update towards the Hard Problem being worth taking seriously.
That isn't what reductionism says. Reduction is a form of explanation. It is not elimination. What is reductively explained still exists -- heat still exists -- it is just not different to its reduction base. The hard problem emerges from the requirement to explain consciousness reductively. Examples of non-physical ontologies include dualism, panpsychism and idealism. These are not faced with the Hard Problem, as such, because they are able to say that subjective qualia just are what they are, without facing any need to offer a reductive explanation of them.
Saying that something “exists” is more subtle than that. In everyday life we don’t have to be pedantic about it, but in this discussion, I think we do. There are lots of different ontologies which explain how certain parts of reality work. The concept of heat is one that most people include in their ontologies, because it’s just very useful most of the time, though not always. For example, there’s not much sense in asking what the temperature is of a single particle. Virtually every ontology breaks down in such a way at some point, which is to say that in certain situations it does not describe what happens in reality closely enough to be of practical value in that situation.

In pagan cultures, there were ontologies containing gods which ostensibly influenced certain parts of reality. There’s a storm? Zeus must be angry. To these cultures, Zeus existed, because it seemed to explain what was happening. It wasn’t a very good explanation from our perspective because it didn’t bestow great power in predicting storms. But also in modern science, we have had and still do have theories which explain reality only partially. Newtonian mechanics describes the world very accurately, but not quite exactly. Einstein’s general relativity filled in some of the gaps, but we’re pretty sure that that is not exactly right either, because it’s not a quantum theory, which we think a better theory should be. Given that we know our theories are wrong, does inertia exist? Does spacetime exist? Do points of infinite density exist?

You could similarly say that any valid ontology has the requirement to explain heat reductively, but then the pagan could also say that any ontology has the requirement to explain Zeus reductively. Seeing reality through the lens of ontologies, which we all have no choice but to do, colors the perception of what you think exists and needs to be explained. True, “heat” needs to be explained insofar as it does correspond to reality, but we might Pareto-improve o
But that's a completely general argument. If the worst thing you can say about phenomenal consciousness is that it is occasionally inapplicable, it is no worse off than heat. Yep. Hardly a damning indictment. Note the difference between the phenomenon being explained, the explanandum, and the explanation. There is not much doubt that thunder and lightning exist, but there is much doubt that Zeus or Thor causes them. That's another completely general argument. That's the wrong way round: Zeus is posited, doubtfully, to explain something for which there is clear evidence. Consciousness is equivalent to the thunder, not the thunder god (particularly under a minimal definition ... it's important not to get misled by the idea that qualia are necessarily nonphysical or something). In general. Still not a specific point against consciousness. It needs to be explained inasmuch as it appears to exist. Corresponding to reality is setting the bar far too high -- we won't know what is real until we have complete explanations. Rainbows are a useful example: they are worth explaining, and we have an explanation, and the explanation tells us they don't literally exist as arches in the sky. That only tells me consciousness might not exist. It doesn't tell me that the problem of consciousness is easy or a non-problem. Who said otherwise? Useful for what? The litmus test of philosophy is that it must tell the truth. If prediction isn't available, you should accept that. You shouldn't argue against X on the basis that it prevents prediction, because you have no reason to believe that the universe is entirely predictable. Science is based on the hope that things are predictable and comprehensible, not on the certainty. They are falsifiable claims. Why? Where is that proven? Consciousness as described by the HP is not necessarily epiphenomenal. Now, Chalmers set out the HP, and Chalmers *might* be an epiphenomenalist, but it doesn't follow that the HP requires epiphenomenalism.
Unlike heat, I can’t imagine any situation in which consciousness as described by the Hard Problem is applicable. Can you give me a situation in which you can make better predictions using the concept? We can agree that thunder and lightning exist and that Zeus and Thor do not, but not that consciousness exists as posed by the Hard Problem. To resolve that disagreement we need to agree on what it means for something to exist. I proposed this litmus test of additive predictive power. How does one test that a statement is true (or at least not false)? I accept that there may be things that are true that I can’t know are true, but there is an infinite number of such possible things. How would I decide which to believe and which not? And if I did, what would that get me? Consciousness as described by the Hard Problem is not derived from any observation that can be independently corroborated. When you claim to observe your own consciousness, you are not observing reality directly, you are observing your own ontology. Your ontology contains consciousness as described by the Hard Problem and that is why you’re seeing it.
Applicable to what? As I said, it is an explanandum, not an explanation. We have prima facie evidence of consciousness because we are conscious. Also, I don't buy that "consciousness as described by the Hard Problem" is epiphenomenal. I can give you situations where I can make equally good predictions of external behaviour. Pain is a quale; if I feel pain, I grimace, go "ouch!", etc.

What does "as posed by the HP" mean? Again, the HP does not state that consciousness is epiphenomenal or nonphysical.

I propose that it is predictive power, or explanatory power, or prima facie evidence. You can't just have a closed loop of explanatory posits explaining each other. Explanations are explanations of something.

It's complicated. But persistent failure to explain thing X using method Y is a hint of falsehood. No one is asking you to believe in things which are invisible and non-predictive.

No. So what? If you had certain a priori knowledge that everything is objective, that would be a problem for consciousness. If there are fundamentally subjective perceptions, that's a problem for physicalism. You have evidence of your own perceptions, whether or not you can corroborate them. You can make predictions from your own perceptions. Insisting on objective evidence is looking for the key under the lamppost.

When you claim to observe the outside world, you are not observing reality directly. Unless it's really there. You can't assume that something necessarily doesn't exist in the territory just because it features in the map. Well, if you did, scientific realism would be dead.
I believe consciousness exists and that we both have it, but I don’t think either of us has the kind of consciousness that you claim you have, namely consciousness as described by the Hard Problem. By consciousness as described by the Hard Problem I mean the kind of consciousness that is not fully explained by solutions to the Easy Problem. Why do you believe that solutions to the Easy Problem are not sufficient? Conversely, why do you believe that heat is a sufficient explanation for what happens to one’s finger when touching fire? What does the latter do that the former does not? How do you in general decide that an explanation is sufficient?
I believe that consciousness as described by the HP is just phenomenal consciousness... not epiphenomenal consciousness. Phenomenal consciousness is actually just how things seem to you: how it feels to be sitting in a seat, looking at a screen, reading these words, right here, right now. I have that, and I'm pretty sure you do too. The kind of consciousness the HP is about isn't defined as inexplicable. It's defined as phenomenal... and then noticed as being unexplained.

They are not sufficient to explain my own phenomenal experience. They may well be sufficient to explain others' behaviour.

What happens is an objective process (my finger gets hotter) and a subjective sensation. The latter is not explained by the reductive explanations of heat. Reductive explanations are able to predict and retrodict their explananda. One can predict a temperature, and confirm the prediction, because one can measure temperature.
How do you decide whether a candidate explanation is sufficient to explain phenomenal consciousness?
I don't think I fall into either camp because I think the question is ambiguous. It could be talking about the natural structure of space and time ("mathematics") or it could be talking about our notation and calculation methods ("mathematics"). The answer to the question is "it depends what you mean". The nominalist vs realist issue doesn't appear very related to my understanding of the Hard Problem, which is more about the definition of what counts as valid evidence. Eliminativism says that subjective observations are problematic. But all observations are subjective (first-person), so defining what counts as valid evidence is still unresolved.
What exactly do you mean by this? That nature is mathematical? This sounds like it could be a double crux, because if I believed this the Hard Problem would follow trivially, but I don’t believe it.
You don't believe that all human observations are necessarily made from a first-person viewpoint? Can you give a counter-example? All I can think of are claims that involve the paranormal or supernatural.
Observations being made from a first-person perspective is a rather trivial definition of subjective, because it's quite possible for different observers to agree on observations. (And for some aspects of the perspective to be predictable from objective facts.) The forms of subjectivity that count are where they disagree, or where they can't even express their perceptions.
I meant subjective in the sense of "pertaining to a subject's frame of reference", not subjective in the sense of "arbitrary opinion". I'm sorry if that was unclear.
Like TAG said, in a trivial sense human observations are made from a first-person, subjective viewpoint. But all of these observations are also happening from a third-person perspective, just like the rest of reality. The way I see it, the third-person perspective is basically the default, i.e., reality is like a list of facts, from which the first-person view emerges. Then of course the question is, how is that emergence possible? I can understand the intuition that the third-person and first-person view seem to be fundamentally different, but I think of it this way: all the thoughts you think and the statements you make are happening in reality and the structure of that reality determines your thoughts. This is where the illusion arguments become relevant; illusions, such as optical ones, demonstrate clearly that you can be made to believe things that are wrong because of convenience or simply brain malfunction. Changing the configuration of your brain matter can make you believe absolutely anything. The belief in the first-person perspective has evolved because it’s just very useful for survival and you can’t choose to disbelieve what your brain makes you believe. Given the above, to say that the first-person perspective is fundamentally different seems like the more supernatural claim to me.
The HP is about sensory qualities, not thoughts. Are you saying that we have, today, a theory which can predict the nature of sensory qualities from objective facts? (Which would be a solution to the HP, which would imply there ever was an HP)
Yes, for example, if blood flow to the brain is decreased, you can use that to correctly predict a decrease in consciousness. If I show you a red piece of paper, you will experience red, if the paper is green, you experience green, etc.
Now, how about the hard problems: can you predict novel qualia? Can you predict non-human qualia? Can you explain why red looks like that? (And are you admitting that there ever was a problem?)
Sure, change the neural patterns in a person’s brain and they’ll get new experiences. As far as non-humans are concerned, if you punch them in the face they’ll experience pain and fear or anger. Red looks like that because that’s what we mean when we say red. If a cup breaks, can you explain where its cupness has gone?
Can you really not imagine why I put forward those particular examples? What new experiences? That's the hard problem. Yes, non human animals have some experiences in common with humans. They also have some that are different, like dolphin sonar. That's the other hard problem? Could you really not see what I was getting at? And are you now admitting that there ever was a problem?
>What new experiences? That's the hard problem.

Sure, that’s a hard problem, but it’s not the hard problem. You can go through the usual scientific process and identify what neural patterns correlate with which experiences, but that’s all doable with solutions to the Easy Problem.

>Yes, non human animals have some experiences in common with humans. They also have some that are different, like dolphin sonar. That's the other hard problem?

Again, sure, a hard problem, but to explain such things you can go through the usual scientific process and come up with new ontologies to describe new kinds of experiences. In contrast, the problem with the Hard Problem is that you can’t even begin the scientific process.

It looks to me like what you’re trying to get at is that, for example, if there is a cup, we can both acknowledge that there are physical constituents that make up the cup, but you seem to pose that in addition to this there is a “cupness” to the cup. This is basically the essentialist position, which is related to philosophical realism. In terms of consciousness, you seem to be saying that there is something it is like to be conscious, in addition to what the brain is doing from an objective standpoint. I deny that this is the case and therefore I deny that there is a problem that needs explanation. What does need explanation is why some people such as yourself claim that it requires an explanation, which I have tried to explain earlier.
Why would you? The point of using reductive explanation is that it *identifies* phenomenal consciousness with neural activity, and therefore supports physicalism. On the other hand, you would still be able to find correlations in a universe where dualism holds. You can't prove that physicalism, or anything else, is true just by assuming it.

So what? You can't assume that only things you want to explain in a particular way exist. Why would the universe care? You seem to be assuming that science/physicalism is unfalsifiable -- that if something defies scientific epistemology or physical ontology, then it can be dismissed for that reason.

I'm not positing that there is: I have subjective conscious experience because I'm a subject. I'm not looking at myself from the outside. Are you? Qualia don't go away when you stop believing in them: pains still hurt, tomatoes are still red.

I am saying that there is something that is not, currently, explained from an objective viewpoint. It's not the problem that assumes a non-physicalist ontology, it's the lack of solution that implies it.
>Why would you? The point of using reductive explanation is that it *identifies* phenomenal consciousness with neural activity, and therefore supports physicalism. On the other hand, you would still be able to find correlations in a universe where dualism holds.

You asked how changing neural patterns in a person’s brain can be linked to what experiences. You can use the scientific process to establish those links insofar as they can be linked. There is no possible universe in which dualism holds, due to the interaction problem, unless you use a very narrow definition of what’s physical. (For example, I have encountered people who claimed that light is not physical. That’s fair, but that’s not the definition of physical that I or the vast majority of physicists and scientists use.)

>So what? You can't assume that only things you want to explain in a particular way exist. Why would the universe care?

The point is that you can’t say anything meaningful about things you can’t explain using the scientific process. You can’t even say they exist. They may well exist, but you can’t tell something that doesn’t exist apart from something that can’t be explained scientifically. The scientific process is not just a particular way to explain things; indeed, the universe does not care to what degree you can know things; it just so happens that falsifying theories through predictions is the only way to know things. If horoscopes or dowsing rods were a way to know what’s true they would be science, but they aren’t, so they’re not.

>I'm not positing that there is: I have subjective conscious experience because I'm a subject. I'm not looking at myself from the outside. Are you?

You are an object. Of course it looks like you’re a subject, because that’s what your brain (i.e., “you”) looks like to that brain.

>I am saying that there is something that is not, currently, explained from an objective viewpoint.

I've said so several times. How do you decide whether a candidate explanation is sufficient?
Hopefully making this point won't derail your discussion:  Quantum mechanics is not deterministic. This is a gap which dualists can use. The major names here might be John Eccles (in neuroscience) and Henry Stapp (in physics). 
MWI is deterministic.
How could dualists use a random process?
If a theory says that something is fundamentally random - e.g. when a nucleus undergoes radioactive decay - then the event had no cause. It just happened. This is an opportunity to extend the theory by introducing a cause. In these dualistic quantum mind theories, a nonphysical mind is added to quantum mechanics as an additional causal factor that determines some of the randomness. 
Firstly, how would we know that the correct way to extend the theory was to introduce a nonphysical mind as a cause? How would we tell the difference between the validity of this hypothesis and that of the infinite other possible causes? Secondly, what is the difference between something physical and nonphysical? I hope I can assume that you agree that if something exists, then it behaves in some way. It is then up to us to try to describe that behavior as far as we can. Whether something is physical or not seems meaningless at this point. Quarks might as well be considered supernatural, magical, nonphysical objects whose behavior we happen to be able to describe, including how our mundane, physical reality emerges from it. Supernatural, magical and nonphysical are contradictions in terms unless one decides on some arbitrary distinction between behaviors that are such and those that are not, because things will regardless behave in some way and we can predict that behavior insofar as we can describe it.
In naive, pre-scientific, pre-philosophical experience, there's a world of things that we know through the senses, and a world of our own thoughts and feelings that we know in some other way. That is the root of physical versus non-physical, or matter versus mind.

Once science and philosophy get involved, the dividing line between physical and nonphysical can shift away from its naive starting point. Idealist philosophy can try to claim everything for the mind, physical science can try to claim everything for matter.

In the case of these quantum mind theories which have a Cartesian kind of dualism (mind and matter as distinct kinds of "substance"), there is no attempt to assimilate the mental world of thoughts and feelings to the material world of things. For example, in the Eccles theory, apparently thoughts and feelings in some way set the probabilities of quantum events in the synapse, and that's how the interaction of substances occurs. Thoughts and feelings may fairly be called non-physical in such a theory, because there is no attempt to identify them with attributes of the physical brain. The mental realm is made of thoughts and feelings, the physical realm is made of particles with mass and spin, they are separate kinds of entity that interact in a specific way, and that's it.

Any theory (whether dualist, monist, or something else) that includes both mind and matter is constrained by two kinds of data: introspective observation of thoughts and feelings, and physical observation of the material world. So you test it against the facts, like any other theory. Facts are not always easy to ascertain, they may be ambiguous, disputed, or denied, but they are still the touchstone of truth.
Which isn't enough to exclude dualism. There are lots of logically possible universes where the interaction problem doesn't apply. If there is complete determinism, or physical closure, then interaction implies overdetermination. But complete determinism and physical closure aren't logical implications of just having some sort of physics.

Sure you can. I have qualia right now, and so do you.

Of course not. People knew things before Francis Bacon. Horoscopes and dowsing rods aren't the only alternative to science.

I'm an object to you, and a subject to me. You can doubt my qualia, but you can't doubt your own. You would notice your own if you did not insist on looking at yourself from the outside. Of course it looks to you like I'm an object, because that’s what my brain looks like to your brain. So far, so symmetrical. You're not showing that the object perspective is the only possible one. You could show that by showing that all subjective phenomena reduce to objective ones, since reduction is asymmetrical. But that involves actually solving the HP, not just writing down correlations.

Using the same criteria I apply to anything else, such as falsifiability and the ability to make novel predictions. You are the one who is special-pleading for a lower bar.
How do you decide that an explanation specifically for this something (that is not currently explained from an objective viewpoint) is falsified?
If it mispredicts a quale, I suppose. Of course, I don't know how an equation describes a quale, and I also don't know how to build a qualiometer. But then I'm not on the side that thinks the HP can be solved by ordinary scientific means.
When you find an explanation, how will you know that that was the explanation you were looking for? If as you say you don’t know in advance how to describe qualia, that means you won’t be able to recognize that an explanation actually describes qualia, which in turn means you don’t actually know what you mean when you talk about qualia. If as you say you don’t know in advance how to measure qualia, that means the explanation’s predictions can’t be tested against observations because we won’t know whether we are actually measuring qualia, which in turn means any explanation is a priori unfalsifiable. You need to know in advance how to describe and measure what you’re seeking to explain in such a way that a third party can use those descriptions and measurements to falsify an explanation, otherwise the falsity of any explanation depends on your personal sensibilities; somebody else may have different sensibilities and come to an equally legitimate yet contradictory decision. Presumably, we are in a shared reality where it is an objective matter of fact that we either have qualia or we don’t; it can’t be subjectively true and false at the same time, depending on who you are. I’m not saying qualia don’t exist, but I am saying that without objective descriptions of qualia and the ability to measure them objectively we can’t tell the difference between qualia and something that doesn’t exist.
That's the thing, though -- qualia are inherently subjective. (Another phrase for them is 'subjective experience'.) We can't tell the difference between qualia and something that doesn't exist, if we limit ourselves to objective descriptions of the world.
Which is to say, the difference between qualia and nothing is easy to detect subjectively... there's a dramatic difference between having an operation with and without anaesthetic.
That doesn’t mean qualia can be excused and are to be considered real anyway. If we don’t limit ourselves to objective descriptions of the world then anyone can legitimately claim that ghosts exist because they think they’ve seen them, or similarly that gravity waves are transported across space by angels, or that I’m actually an attack helicopter even if I don’t look like one, or any other unfalsifiable claim, including the exact opposite claims, such as that qualia actually don’t exist. You won’t be able to disagree on any grounds except that you just don’t like it, because you will have sacrificed the assumptions needed to do so in order to support your belief in qualia.
The claim is that there is a hard problem... that qualia exist enough to need explaining... not that they are ultimately real. At one time, the existence of meteorites was denied because it didn't fit with what people "knew" to be true. There's a problem in taking scattered subjective reports as establishing some conclusion definitively... but there's an equal and opposite problem in rejecting reports because they don't fit a prevailing dogma.
Those analogies don't hold, because you're describing claims I might make about the world outside of my subjective experience ('ghosts are real', 'gravity waves are carried by angels', etc.). You can grant that I'm the (only possible) authority on whether I've had a 'seeing a ghost' experience, or a 'proving to my own satisfaction that angels carry gravity waves' experience, without accepting that those experiences imply the existence of real ghosts or real angels. I wouldn't even ask you to go that far, because -- even if we rule out the possibility that I'm deliberately lying -- when I report those experiences to you I'm relying on memory. I may be mistaken about my own past experiences, and you may have legitimate reasons to think I'm mistaken about those ones. All I can say with certainty is that qualia exist, because I'm (always) having some right now. I think this is one of those unbridgeable or at least unlikely-to-be-bridged gaps, though, because from my perspective you are telling me to sacrifice my ontology to save your epistemology. Subjective experience is at ground level for me; its existence is the one thing I know directly rather than inferring in questionable ways.
The analogies do hold, because you don’t get to do special pleading and claim ultimate authority about what’s real inside your subjective experience any more than about what’s real outside of it. Your subjective experience is part of our shared reality, just like mine. People are mistaken all the time about what goes on inside their mind, about the validity of their memories, or about the real reasons behind their actions. So why should I take at face value your claims about the validity of your thoughts, especially when those thoughts lead to logical contradictions?
I think we're mostly talking past each other, but I would of course agree that if my position contains or implies logical contradictions then that's a problem. Which of my thoughts lead to which logical contradictions?
Let’s say the Hard Problem is real. That means solutions to the Easy Problem are insufficient, i.e., the usual physical explanations. But when we speak about physics, we’re really talking about making predictions based on regularities in observations in general. Some observations we could explain by positing the force of gravity. Newton himself was not satisfied with this, because how does gravity “know” to pull on objects? Yet we were able to make very successful predictions about the motions of the planets and of objects on the surface of the Earth, so we considered those things “explained” by Newton’s theory of gravity. But then we noticed a slight discrepancy between some of these predictions and our observations, so Einstein came up with General Relativity to correct those predictions and now we consider these discrepancies “explained”, even though the reason why that particular theory works remains mysterious, e.g., why does spacetime exist? In general, when a hypothesis correctly predicts observations, we consider those observations scientifically explained. Therefore to say that solutions to the Easy Problem are insufficient to explain qualia indicates (at least to me) one of two things.

1. Qualia have no regularity that we can observe. If they really didn’t have regularities that we could observe, we wouldn’t be able to observe that they exist, which contradicts the claim that they do exist. However, they do have regularities! We can predict qualia! Which means solutions to the Easy Problem are sufficient after all, which contradicts the assumption that they’re insufficient.

2. We’re aspiring to a kind of explanation for qualia over and above the scientific one, i.e., just predicting is not enough. You could posit any additional requirements for an explanation to qualify, but presumably we want an explanation to be true. You can’t know beforehand what’s true, so you can’t know that such additional requirements don’t disqualify the truth. There is only
I'm describing how the process of explaining qualia scientifically would look. Science isn't based on exactly predetermining an explanation before you have it. Says who?

If I can tell that qualia are indescribable or undetectable, I must know something of what "qualia" means. One of the problems with the Logical Positivist theory of meaning is that it can't be the whole story.

I said we don't know how to measure qualia, not that some device might be measuring something else. One could test a qualiometer on oneself.

If you exclude qualiometers, entirely subjective approaches and a few other things, you can't have a standard scientific explanation. You can still have a philosophical explanation, such as: qualia don't exist, qualia are non-physical, etc. But I'm not saying there is a scientific explanation of qualia. And if it is an objective fact that there is some irreducible subjectivity, it is an objective fact that it doesn't have a full scientific explanation.

I don't know who is suggesting that. But you can notice your own qualia... anaesthesia makes a difference.
But then how would you know that a given explanation, scientific or not, explains qualia to your satisfaction? How will you be able to tell that that explanation is indeed what you were looking for before?

People have earnestly claimed the same thing about various deities. Do you believe in those? Why would your specific belief be true if theirs weren’t? Why are you so sure you’re not mistaken?

Could be, but we don’t know that. How would you determine that it is working? That if you’re seeing something red, the qualiometer says “red”? If so, how would that show that there is something more going on than what’s explained with solutions to the Easy Problem?

It’s a logical consequence of claiming there is no objective fact about something.

Again, I agree with you that subjective experience exists, but I don’t see why solutions to the Easy Problem wouldn’t satisfy you. There’s something mysterious about subjective experience, but that’s true for everything, including atoms and electromagnetic waves and chairs and the rest of objective reality. Why does anything in the universe exist? It’s “why?” all the way down.
You keep asking the same question. If we are talking about scientific explanation: a scientific explanation of X succeeds if it is able to predict X's, particularly novel ones, and it doesn't mispredict X's. A scientific explanation of qualia is exactly that with X=qualia. It's not a different style of explanation. It may well be impossible, but that's another story. As for a philosophical explanation... well, how do you know? You have some philosophical account, probably along the lines of qualia don't exist or aren't meaningful, although you refuse to say which. So you have some criteria for judging that to be the best explanation.

Of course they have. To believe in Zeus you must know what "Zeus" means, and likewise to disbelieve in Zeus.

Huh? Why are you asking? I said "qualia" is meaningful. I also believe in qualia, but I don't believe in qualia just because "qualia" is meaningful; I believe in qualia because I have them, as I have stated many times. I don't see Zeus, I do see colours. Do you find that confusing?

Whatever. I am not saying there is a scientific explanation of qualia.

That's conflating two senses of "subjective". Qualia are subjective in the sense that subjects can access their own qualia, but not other people's.

They don't explain subjective experience. The Easy Problem is everything except subjective experience. The fact that qualia are physically mysterious can't be predicted from physics... if physicalism is true, they should be as explicable as the Easy Problem stuff. That suggests physicalism is wrong.
You say you see colors and have other subjective experiences and you call those qualia and I can accept that, but when I ask why solutions to the Easy Problem wouldn’t be sufficient you say it’s because you have subjective experiences, but that’s circular reasoning. You haven’t said why exactly solutions to the Easy Problem don’t satisfy you, which is why I keep asking what kind of explanation would satisfy you. I genuinely do not know, based on what you have said. It doesn’t have to be scientific.

But it’s not clear to me how you would judge that any explanation, scientific or not, does these things for qualia, because it seems to me that solutions to the Easy Problem do exactly this; I can already predict what kind of qualia you experience, even novel ones. If I show you a piece of red paper, you will experience the qualia of red. If I give you a drink or a drug you haven’t had before, I can predict that you will have a new experience. I may not be able to predict quite exactly what those experiences will be in a given situation because I don’t have complete information, but that’s true for virtually any explanation, even when using quantum mechanics.

I suspect you may now object again and say, “but that doesn’t explain subjective experience”. Then I will object again and say, “what explanation would satisfy you?”, to which you will again say, “if it predicts qualia”, to which I will say, “but we can already predict what qualia you will have in a given situation”. Then you will again object and say, “but that doesn’t explain subjective experience”. And so on. It looks to me like you’re holding out for something you don’t know how to recognize. True, maybe an explanation is impossible, but you don’t know that either. When some great genius finally does explain it all, how will you know he’s right? You wouldn’t want to miss out, right?

But this is the very thing in question. Can you explain to me how exactly you come to this conclusion? Having subjective experience does not imply that physicalism is wrong.
No, it's because the Easy Problem is, by definition, everything except subjective experience. It's [consciousness-experience] explained [however], not [consciousness] explained [physically]. It happens to be the case that easy problems can be explained physically, but it's not built into the definition.

Because I've read the passages where Chalmers defines the Easy/Hard distinction:

>What makes the hard problem hard and almost unique is that it goes beyond problems about the performance of functions. To see this, note that even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience? (1995, 202, emphasis in original)

See? It's not defined in terms of physicality! Have you even read that passage before? An Easy Problem explanation isn't even trying to be an explanation of X for X=qualia.

Only by lowering the bar. Of course not: you can't even express them. I'm a colour-blind super-scientist: what is this Red? Unfortunately, that's what "predict novel experiences" means. Cf. other areas of science: you don't get Nobels for saying "I predict some novel effect I can't describe or quantify". The problem isn't that you don't have infinite information, it's that you are not reaching the baseline of every other scientific theory, because "novel qualia, don't ask me what" isn't a meaningful prediction. Not in a good enough way, you can't.

>Then you will again object and say, “but that doesn’t explain subjective experience”. And so on. It looks to me like you’re holding out for something you don’t know how to recognize. True, maybe an explanation is impossible, but you don’t know that either. When some great genius finally does explain it all, how will you know he’s right? You wouldn’t want to miss out, right?

They
The core issue is that there’s an inference gap between having subjective experience and the claim that it is non-physical. One doesn’t follow from the other. You can define subjective experience as non-physical, as Chalmers’s definition of the Hard Problem does, but that’s not justified. I can just as legitimately define subjective experience as physical. I can understand why Chalmers finds subjective experience mysterious, but it’s not more mysterious than the existence of something physical such as gravity or the universe in general. Why is General Relativity enough for you to explain gravity, even though the reason for the existence of gravity is mysterious?
Of course there is. There is no reason there should not be. Who told you otherwise? Chalmers takes hundreds of pages to set out his argument. Physical reductionism is compatible with the idea that the stuff at the bottom of the stack is irreducible, but consciousness appears to be a high-level phenomenon.
His argument does not bridge that gap. He, like you, does not provide objective criteria for a satisfying explanation, which means by definition you do not know what the thing is that requires explanation, no matter how many words are used trying to describe it.
The discussion was about whether there is a Hard Problem, not whether Chalmers or I have solved it.
I know. Like I said, neither Chalmers, you, nor anyone else has shown it plausible that subjective experience is non-physical. Moreover, you repeatedly avoid giving an objective description of what you’re looking for. Until either of those changes, there is no reason to think there is a Hard Problem.
Like I said, I don't have to justify non-physicalism when that is not what the discussion is about. Also, the existence of a problem does not depend on the existence of a solution.
Agreed, but even if no possible solution can ultimately satisfy objective properties, until those properties are defined the problem itself remains undefined. Can you define these objective properties?
We've been through this. 1. You don't have a non-circular argument that everything is objective. 2. It can be an objective fact that subjectivity exists.
All I’m asking for is a way for other people to determine whether a given explanation will satisfy you. You haven’t given enough information to do that. Until that changes we can’t know that we even agree on the meaning of the Hard Problem.
The meaning of the Hard Problem doesn't depend on satisfying me, since I didn't invent it. If you want to find out what it is, you need to read Chalmers at some point.
This is a hypothesis, based on information in your first-person perspective. To make arguments about a third-person reality, you will always have to start with first-person facts (and not the other way around). This is why the first person is epistemologically more fundamental. It's possible to doubt that there is a third-person perspective (e.g. to doubt that there's anything like being God). But our first person perspective is primary, and cannot be escaped from. Optical illusions and stage tricks aren't very relevant to this, except in showing that even our errors require a first-person perspective to occur. EDIT: The third-person perspective being epistemologically more/less fundamental than the first-person perspective could work as a double crux with me. Does it work on your end as well?
Would I be correct to say that you think the third-person perspective emerges from the first-person perspective? Or would you say that they’re simply separate?
If I had to choose between those two phrasings I would prefer the second one, for being the most compatible between both of our notions. My notion of "emerges from" is probably too different from yours. The main difference seems to be that you're a realist about the third-person perspective, whereas I'm a nominalist about it, to use your earlier terms. Maybe "agnostic" or "pragmatist" would be good descriptors too. The third-person is a useful concept for navigating the first-person world (i.e. the one that we are actually experiencing). But that it seems useful is really all that we can say about it, due to the epistemological limitations we have as human observers. I think this is why it would be a good double crux if we used the issue of epistemological priority: I would think very differently about Hard Problem related questions if I became convinced that the 3rd person had higher priority than the 1st person perspective. Do you think this works as a double crux? Is it symmetrical for you as well in the right way?
That actually sounds more like the first phrasing to me. If you are a nominalist about the third-person perspective, then it seems that you think the third-person perspective does not actually exist and the concept of the third-person perspective is borne of the first-person perspective. I’m not sure whether this is a good double crux, because it’s not clear enough to me what we mean by first- and third-person perspectives. It seems conceivable to me that my conception of the third-person perspective is functionally equivalent to your conception of the first-person perspective. Let me expand on that below. If only the first-person perspective exists, then presumably you cannot be legitimately surprised, because that implies something was true outside of your first-person perspective prior to your experiencing it, unless you define that as being part of your first-person perspective, which seems contradictory to me, but functionally the same as just defining everything from the third-person perspective. The only alternative possibility that seems available is that there are no external facts, which would mean reality is actually an inconsistent series of experiences, which seems absurd; then we wouldn’t even be able to be sure of the consistency of our own reasoning, including this conversation, which defeats itself.
I'm sorry that comparing my position to yours led to some confusion: I don't deny the reality of 3rd-person facts. They probably are real, or at least it would be more surprising if they weren't than if they were. (If not, then where would all of the apparent complexity of 1st-person experience come from? Positing an external world seems like a good step toward answering this.) My comparison was about which one we consider to be essential. If I had used only "pragmatist" and "agnostic" as descriptors, it would have been less confusing. Again, I think the main difference between our positions is how we define standards of evidence. To me, it would be surprising if someone came to know 3rd-person facts without using 1st-person facts in the process. If the 1st-person facts are false, this casts serious doubt on the 3rd-person facts they implied. At this stage of the conversation, it seems like we could start proposing far more "effective" theories, like that nothing exists at all, which explains just as much of the evidence we still have available if we have no 1st-person facts. You seem to believe we can get at the true third-person reality directly, maybe imagining we are equivalent to it. You can imagine a robot (i.e. one of us) having its pseudo-experiences and pseudo-observations all strictly happening in the 3rd person, even engaging in scientific pursuits, without needing to introduce an idea like the 1st person. But as you said earlier, just because you can imagine something doesn't mean that it's possible. You need to start with the evidence available to you, not with what sounds reasonable to you. The idea of that robot is occurring in your 1st-person perspective as a mental experience, which means it counts as evidence for the 1st-person perspective at least as much as it counts as evidence for the 3rd. So does what it feels like to think eliminativism is possible, and so does what it feels like to chew 5 Gum®, and so on. To me, all …
I agree those are separate, but the (useful, evolved) sense-of-self leads to a belief in consciousness. Disproving the reality of the self (i.e., the sense of self being illusory) removes the logical support for consciousness. Moreover, proponents of the Hard Problem often say “If consciousness is illusory, who is experiencing the illusion?”, thereby revealing their belief that a self is required for consciousness. So, ostensibly, consciousness and a sense of self are not the same but do imply each other. However, I argue that proponents of the Hard Problem confuse the existence of the sense of self with the existence of an actual self, which leads to erroneous conclusions.
It just seems to me like there's a deeper level of explanation required to conclude that the experience of consciousness is a false belief, relative to things like déjà vu. It seems like you're using terms that presume experience in order to explain, via analogy, how experience is false. You experience déjà vu, and the experience lends itself to false beliefs about previous experiences. How can that have explanatory power in concluding that experience itself is false? Wouldn't that conclusion undermine the premises of the comparison?
I'm usually very on board with the claim that the Hard Problem is a non-problem, but I've always struggled to understand the illusionist point of view. When I see white spots appear in the grid illusion, I know it is an illusion in the sense that there is a reality, in the form of a constant state of pixels, that doesn't change. If "experiencing" is an "illusion", what is the reality? (My position on the Hard Problem is that subjective experience exists and has a completely mundane physical/computational explanation.)
Which is what?
I don't know! I don't like the whole formulation of the "Hard Problem" because it looks so arrogant: it assumes that all possible computational answers to the question "how can conscious behaviour be produced?" are so trivial that they can't provide an answer to "why do we have subjective experience?". Let's find a computational solution for conscious behaviour first and then check whether we find any unexpected insights that make us say "wow, this is a really obvious, mundane way to produce conscious experience".
The Hard Problem argument only says that some problems are harder than others, not that they are impossible. The mundane approach has been tried over and over. When you say that a mundane solution exists, you seem to mean that the thing that hasn't worked for decades will start working.
By "mundane solution" I mean a deep, gears-level understanding of the functional aspects of consciousness, such that someone who has it could program a functionally conscious entity from scratch. The claim of the Hard Problem is "even if you have such understanding, you can't explain subjective experience", and I consider this claim to be false.
I agree that if you can use a non-trial-and-error method to build consciousness, then you understand it well enough. But do you have a non-trial-and-error method for building something that has conscious experience? Or are you assuming you get it for free with the rest of the functionality?
It's plausible that reverse-engineering the human mind requires tools that are much more powerful than the human mind.
I don't like the word "illusionism" here, because people just get caught on the obvious semantic 'contradiction' and always complain about it. Arguments based on perceptual illusions are generally meant to show that our perception is highly constructed by the brain; it's not something 'simple'. The point of illusionism is just that we are confused about what the phenomenological properties of qualia really are qua qualia, because of wrong ideas that come from introspection.
I don't like "illusionism" either, since it makes it seem like illusionists are merely claiming that consciousness is an illusion, i.e., it is something different than what it seems to be. That claim isn't very shocking or novel, but illusionists aren't claiming that. They're actually claiming that you aren't having any internal experience in the first place. There isn't any illusion. "Fictionalism" would be a better term than "illusionism": when people say they are having a bad experience, or an experience of saltiness, they are just describing a fictional character.
You don't get to infer that someone is mistaken about some specific thing from the fact they could be.
Shankar Sivarajan:
Of course it exists! It's like the Harder Problem of Zelionicity. And the Impossible Problem of Allimoxing.
Are you saying that you don't think there's any fact of the matter whether or not you have phenomenal experiences like suffering? Or do you mean that phenomenal experience is unreal in the same way that the hellscape described by Dante is unreal?
Shankar Sivarajan:
Closer to the latter, though I wouldn't call it "unreal." The experience of suffering exists in the same sense that my video game character is down to one life and has low HP: that state is legibly inspectable on screen, and is encoded in silicon. I only scorn the term "consciousness" to the extent it is used as woo. I think some version of the Hard Problem really was meaningful in the past, and that it was hard: it's far from obvious how "mere matter" could encode states as rich as what one … perceives, experiences, receives as the output of the brain's processes. Mills and clocks, as sophisticated as they may have been, were still far too simple. I consider modern technology to have demonstrated in sufficient detail precisely how it's possible. It didn't require anything conceptually new, so I also understand why some don't find the answer satisfying.

This bit was very interesting to me:

These models are “predictive” in the important sense that they perceive not just how things are at the moment but also anticipate how your sensory inputs would change under various conditions and as a consequence of your own actions. Thus:

  • Red = would create a perception of a warmer color relative to the illuminant even if the illumination changes.

My current pet theory of qualia is that there is an illusion that they are a specific thing (e.g. the redness of red) when in reality there are only perceived relations between a quale and other qualia, and a perceived identity between that quale and memories of that quale. But the sense of identity (or constancy through time) is not caused by an actual specific thing (the "redness" that one erroneously tries to grasp but always seems just beyond reach), but by a recurrence of those relations.

Why I like the quoted part is because it can be read as a predictive processing-flavoured version of the same theory. The illusion (that there is a reified thing instead of only a jumble of relationships) is strengthened by the fact that we not only recognize the cluster of qualia relationships and can correc…

But it is not clear at all that such a being could even exist in principle. A digital mind that lacks a body it is trying to keep alive, that has entirely different senses than our interoceptive and exteroceptive ones, and that has an entirely different repertoire of actions available to it will have an entirely alien generative model of its world, and thus an entirely alien phenomenology — if it even has one. We can guess what it’s like to be a bat: its reliance on sonar to navigate the world likely creates a phenomenology of “auditory colors” that track

…
Similarly, the consciousness explanation that Jacob Falkovich gives would give a lot of things consciousness, though not everything. In particular, the improved Good Regulator Theorem, which proposes that at least one part of consciousness is essentially having models of the world, is applicable to any capable system. Similarly, I expect the cybernetic model to have widespread use, in the sense that a lot of things will find it useful to regulate something. I think the strongest takeaway from Anil Seth's model is that future consciousness could be very, very alien, especially once we take away certain parts of it.
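The regulation-implies-modelling idea referenced above can be made concrete with a toy example. This is a hypothetical sketch, not code from the book or the review; the function name, dynamics, and constants are all invented for illustration. The point, in the spirit of Conant and Ashby's Good Regulator Theorem, is that the regulator succeeds exactly because its internal `model` tracks the state of the system it regulates:

```python
# Hedged sketch: a thermostat-like regulator that holds a leaky room at a
# setpoint. The regulator never acts on the room directly from raw desire;
# it predicts the room's passive drift using its internal model and chooses
# the action that, according to that model, lands on the setpoint.

def simulate(steps: int = 50, setpoint: float = 20.0,
             leak: float = 0.1, outside: float = 5.0) -> float:
    room = 12.0   # actual room temperature
    model = 12.0  # regulator's internal estimate of the room's state
    for _ in range(steps):
        # Predict the passive drift toward the outside temperature...
        predicted_drift = leak * (outside - model)
        # ...and pick the heating that, per the model, reaches the setpoint.
        heat = (setpoint - model) - predicted_drift
        # The real room evolves under the same dynamics plus the action.
        room += leak * (outside - room) + heat
        # Update the model from observation (noiseless here, so it stays
        # a perfect mirror of the room -- the "good regulator" condition).
        model = room
    return room

print(round(simulate(), 2))  # settles at the 20.0 setpoint
```

Because observation is noiseless in this toy, the model is exact and regulation is achieved in one step; degrading the model (e.g. adding observation noise) degrades regulation correspondingly, which is the theorem's point.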

Having not read the book yet, I'm kind of stumped at how different this review is to the one from Alexander. The two posts make it sound like a completely different book, especially with respect to the philosophical questions, and especially especially with respect to the expressed confidence. Is this book a neutral review of the leading theories that explicitly avoids taking sides, or is it a pitch for another I-solved-the-entire-problem theory? It can't really be both.

Yes it can, just as any bistable image can be one percept for one viewer and a very different percept for another. That means they don’t share the same kernels.