From Twitter:

I'd say that I "don't understand" why the people who worry that chickens are sentient and suffering, don't also worry that GPT-3 is sentient and maybe suffering; but in fact I do understand, it's just not a charitable understanding. Anyway, they're both unsentient so no worries.

Eliezer Yudkowsky's (EY's) overall thesis is spelt out in full here, but I think the key passages are these:

What my model says is that when we have a cognitively reflective, self-modely thing, we can put very simple algorithms on top of that — as simple as a neural network having its weights adjusted — and that will feel like something, there will be something that it is like that thing to be, because there will be something self-modely enough to feel like there’s a thing happening to the person-that-is-this-person.

So I would be very averse to anyone producing pain in a newborn baby, even though I’d be truly shocked (like, fairies-in-the-garden shocked) to find them sentient, because I worry that might lose utility in future sentient-moments later.

I’m not totally sure people in sufficiently unreflective flow-like states are conscious, and I give serious consideration to the proposition that I am reflective enough for consciousness only during the moments I happen to wonder whether I am conscious.

I'm currently very confident on the following things, and I'm pretty sure EY is too:

  1. Consciousness (having qualia) exists and humans have it
  2. Consciousness isn't an epiphenomenon
  3. Consciousness is a result of how information is processed in an algorithm, in the most general sense: a simulation of a human brain is just as conscious as a meat-human

EY's position seems to be that self-modelling is both necessary and sufficient for consciousness. But I never see him put forward a concrete thesis for why this should be the case. He is correct that his model has more moving parts than other models, but extra moving parts are only justified if they actually do a better job of explaining the observed data. And we only have one datapoint, which is that adult humans are conscious. Or do we?

"Higher" Consciousness

We actually have a few datapoints here. An ordering of consciousness as reported by humans might be:

Asleep Human < Awake Human < Human on Psychedelics/Zen Meditation

I don't know if EY agrees with this. Given his beliefs, he might say something along the lines of "having more thoughts doesn't mean you're more conscious". Given his arguments about babies, I'm pretty sure he thinks that you can have memories of times when you weren't conscious, and then consciously experience those things in a sort of "second-hand" way by loading up those memories.

Now a lot of Zen meditation involves focusing on your own experiences, which seems like self-modelling. However, something else I notice here is the common experience of "ego death" while using psychedelics and in some types of meditation. Perhaps EY has a strong argument that this in fact requires more self-modelling than the previous states. On the other hand, he might argue that consciousness is on/off, and that the amount of experience is unrelated to whether or not those experiences are being turned into qualia.

I'm trying to give potential responses to my arguments, but I don't want to strawman EY, so I ought to point out that he might have lots of other counter-arguments to this, possibly more insightful than my imagined ones.

Inner Listeners

EY talks a lot about "inner listeners", and mentions that a good theory should be able to have them arise naturally in some way. I agree with this point, and I do agree that his views provide a possible explanation as to what produces an inner listener.

Where I disagree is with the claim that we necessarily need separate "information processing" and "inner listener" modules. The chicken-conscious, GPT-3-unconscious model seems to make sense from the following perspective:

Some methods of processing input data cause consciousness and some don't. We know that chickens process input data in a very similar way to humans (by virtue of being made of neurons) and we know that GPT-3 doesn't process information in that way (by virtue of not being made of neurons). I guess this is related to the binding problem.

Confidence

But what surprises me the most about EY's position is his confidence in it. He claims to have never seen any good alternatives to his own model. But that's simply a statement about the other beliefs he's seen, not a statement about all hypothesis-space. I even strongly agree with the first part of his original tweet! I do suspect most people who believe chickens are conscious but GPT-3 isn't believe it for bad reasons! And the quality of replies is generally poor.

EY's argument strikes me as oddly specific. There are lots of things human brains do (or might do; there's some uncertainty) which are kind of weird:

  • Predictive processing and coding
  • Integrating sensory data together (the binding problem)
  • Coming up with models of the world (including themselves)
  • All those connectome-specific harmonic wave things
  • Reacting to stimuli in various reinforcement-y ways

EY has picked out one thing (self-modelling) and decided that it alone is the source of consciousness. Whether or not he has gone through all the weird and poorly-understood things brains do and ruled them out, I don't know. Perhaps he has. But he doesn't mention it in the thesis he links to in order to explain his beliefs. He doesn't even mention that he's conducted such a search; the closest thing to that is his references to his own theory treating qualia as non-mysterious (which is true). I'm just not convinced without him showing his working!

Conclusions

I am confused, and at the end of the day that is a fact about me, not about consciousness. I shouldn't use my own bamboozlement as strong evidence that EY's theory is false. On the other hand, the only evidence available (in the absence of experimentation) for an argument not making sense is that people can't make sense of it.

I don't think EY's theory of consciousness is completely absurd. I put about 15% credence in it. I just don't see what he's seeing that elevates it to being totally overwhelmingly likely. My own uncertainty is primarily due to the lack of truly good explanations I've seen of the form "X could cause consciousness", combined with the lack of strong arguments made of the form "Here's why X can't be the cause of consciousness". Eliezer sort of presents the first but not the second.

I would love for someone to explain to me why chickens are strongly unlikely to be conscious, so I can go back to eating KFC. I would also generally like to understand consciousness better.



Instrumental status: off-the-cuff reply, out of a wish that more people in this community understood what the sequences have to say about how to do philosophy correctly (according to me).

EY's position seems to be that self-modelling is both necessary and sufficient for consciousness.

That is not how it seems to me. My read of his position is more like: "Don't start by asking 'what is consciousness' or 'what are qualia'; start by asking 'what are the cognitive causes of people talking about consciousness and qualia', because while abstractions like 'consciousness' and 'qualia' might turn out to be labels for our own confusions, the words people emit about them are physical observations that won't disappear. Once one has figured out what is going on, they can plausibly rescue the notions of 'qualia' and 'consciousness', though their concepts might look fundamentally different, just as a physicist's concept of 'heat' may differ from that of a layperson. Having done this exercise at least in part, I (Nate's model of Eliezer) assert that consciousness/qualia can be more-or-less rescued, and that there is a long list of things an algorithm has to do to 'be conscious' / 'have qualia' i… [truncated]

I'm confident your model of Eliezer is more accurate than mine.

Neither the Twitter thread nor his other writings gave me the impression that he had a model in that fine-grained detail. I was mentally comparing his writings on consciousness to his writings on free will. Reading the latter made me feel like I strongly understood free will as a concept, and since then I have never been confused; it genuinely reduced free will as a concept in my mind.

His writings on consciousness have not done anything more than raise that model to the same level of possibility as a bunch of other models I'm confused about. That was the primary motivation for this post. But now that you mention it, if he genuinely believes that he has knowledge which might bring him (or others) closer to programming a conscious being, I can see why he wouldn't share it in high detail.

Your comments here and some comments Eliezer has made elsewhere seem to imply he believes he has at least in large part “solved” consciousness. Is this fair? And if so, is there anywhere he has written up this theory/analysis in depth? Because surely, if correct, this would be hugely important.

I’m kind of assuming that whatever Eliezer’s model is, the bulk of the interestingness isn’t contained here and still needs to be cashed out, because the things you/he list (needing to examine consciousness through the lens of the cognitive algorithms causing our discussions of it, the centrality of self-modely reflexive things to consciousness, etc.) are already pretty well explored and understood in mainstream philosophy, e.g. by Dennett.

Or is the idea here that Eliezer believes some of these existing treatments (maybe modulo some minor tweaks and gaps) are sufficient for him to feel like he has answered the question to his own satisfaction?

Basically, I'm struggling to understand which of the 3 below is wrong, because all three being jointly true seems crazy:

  1. Eliezer has a working theory of consciousness
  2. This theory differs in important ways from existing attempts
  3. Eliezer has judged that it is not worthwhile writing this up

While I agree with mostly everything your model of Eliezer said, I do not feel less confused about how Eliezer arrives at the conclusion that most animals are not conscious. Granted, I may be, and probably actually am, lacking an important insight into the matter, but then it will be this insight that allows me to become less confused, and I wish Eliezer shared it.

When I think about a thought process that would allow one to arrive at such a conclusion, I imagine something like this. Consciousness is not fundamental, but it feels like it is. That's why we intuitively apply concepts such as quantity to consciousness, thinking about more or less conscious creatures as being more or less filled with conscious-fluid, just as we previously thought about phlogiston or caloric fluid. But this intuition is confused and leads us astray. Consciousness is the result of a specific cognitive algorithm. This algorithm can either be executed or not. There are good reasons to assume that such an algorithm would be developed by evolution only among highly social animals, since those conditions create the necessity to model other creatures modelling yourself.

And I see an obvious problem with this line of thought. Reversed confu… [truncated]

So8res:
I don't think the thought process that allows one to arrive at (my model of) Eliezer's model looks very much like your 2nd paragraph. Rather, I think it looks like writing down a whole big list of stuff people say about consciousness, and then doing a bunch of introspection in the vicinity, and then listing out a bunch of hypothesized things the cognitive algorithm is doing, and then looking at that algorithm and asking why it is "obviously not conscious", and so on and so forth, all while being very careful not to shove the entire problem under the rug in any particular step (by being like "and then there's a sensor inside the mind, which is the part that has feelings about the image of the world that's painted inside the head" or whatever). Assuming one has had success at this exercise, they may feel much better-equipped to answer questions like "is (the appropriate rescuing of) consciousness more like a gradient quantity or more like a binary property?" or "are chickens similarly-conscious in the rescued sense?". But their confidence wouldn't be coming from abstract arguments like "because it is an algorithm, it can either be executed or not" or "there are good reasons to assume it would be developed by evolution only among social animals"; their confidence would be coming from saying "look, look at the particular algorithm, look at things X, Y, and Z that it needs to do in particular, there are other highly-probable consequences of a mind being able to do X, Y, and Z, and we definitively observe those consequences in humans, and observe their absence in chickens." You might well disbelieve that Eliezer has such insight into cognitive algorithms, or believe he made a mistake when he did his exercise! But hopefully this sheds some light on (what I believe is) the nature of his confidence.
MichaelStJules:
Thanks, this is helpful. Based on the rest of your comment, I'm guessing you mean talk about consciousness and qualia in the abstract and attribute them to themselves, not just talk about specific experiences they've had. Why use the standard of claiming to be conscious/have qualia? That is one answer that gets at something that might matter, but why isn't that standard too high? For example, he wrote: If this proposition is false, we need to allow unsymbolized (non-verbal) ways to self-attribute consciousness for self-attributing consciousness to matter in itself, right? Would (solidly) passing the mirror test be (almost) sufficient at this point? There's a visual self-representation, and an attribution of the perception of the mark to this self-representation. What else would be needed? Would it need to non-symbolically self-attribute consciousness generally, not just particular experiences? How would this work? If the proposition is true, doesn't this just plainly contradict our everyday experiences of consciousness? I can direct my attention towards things other than wondering whether or not I'm conscious (and towards things other than and unrelated to my inner monologue), while still being conscious, at least in a way that still matters to me that I wouldn't want to dismiss. We can describe our experiences without wondering whether or not we're having (or had) them. What kinds of reasons? And what would being correct look like? If unsymbolized self-attribution of consciousness is enough, how would we check just for it? The mirror test?
So8res:
If I were doing the exercise, all sorts of things would go in my "stuff people say about consciousness" list, including stuff Searle says about Chinese rooms, stuff Chalmers says about p-zombies, stuff the person on the street says about the ineffable intransmissible redness of red, stuff schoolyard kids say about how they wouldn't be able to tell if the color they saw as green was the one you saw as blue, and so on. You don't need to be miserly about what you put on that list. Mostly (on my model) because it's not at all clear from the getgo that it's meaningful to "be conscious" or "have qualia"; the ability to write an algorithm that makes the same sort of observable-claims that we make, for the same cognitive reasons, demonstrates a mastery of the phenomenon even in situations where "being conscious" turns out to be a nonsense notion. Note also that higher standards on the algorithm you're supposed to produce are more conservative: if it is meaningful to say that an algorithm "is conscious", then producing an algorithm that is both conscious, and claims to be so, for the same cognitive reasons we do, is a stronger demonstration of mastery than isolating just a subset of that algorithm (the "being conscious" part, assuming such a thing exists). I'd be pretty suspicious of someone who claimed to have a "conscious algorithm" if they couldn't also say "and if you inspect it, you can see how if you hook it up to this extra module here and initialize it this way, then it would output the Chinese Room argument for the same reasons Searle did, and if you instead initialize it that way, then it outputs the Mary's Room thought experiment for the same reason people do". Once someone demonstrated that sort of mastery (and once I'd verified it by inspection of the algorithm, and integrated the insights therefrom), I'd be much more willing to trust them (or to operate the newfound insights myself) on questions of how the ability to write philosophy papers about qualia relates
MichaelStJules:
Shouldn't mastery and self-awareness/self-modelling come in degrees? Is it necessary to be able to theorize and come up with all of the various thought experiments (even with limited augmentation from extra modules, different initializations)? Many nonhuman animals could make some of the kinds of claims we make about our particular conscious experiences for essentially similar reasons, and many demonstrate some self-awareness in ways other than by passing the mirror test (and some might pass a mirror test with a different sensory modality, or with some extra help, although some kinds of help would severely undermine a positive result), although I won't claim the mirror test is the only one Eliezer cares about; I don't know what else he has in mind. It would be helpful to see a list of the proxies he has in mind and what they're proxies for. To make sure I understand correctly, it's not the self-attribution of consciousness and other talk of consciousness like Mary's Room that matter in themselves (we can allow some limited extra modules for that), but their cognitive causes. And certain (kinds of) cognitive causes should be present when we're "reflective enough for consciousness", right? And Eliezer isn't sure whether wondering whether or not he's conscious is among them (or a proxy/correlate of a necessary cause)?
EI:
This is merely a bias on our own part as humans. I think people are confusing consciousness with self-awareness. They are completely different things. Consciousness is the OS that runs on the meat machine. Self-awareness is an algorithm that runs on the OS. All meat machines that run this OS have different algorithms for different functions. Some may not have any self-awareness algorithm running; some may have something similar but not exactly the same as our own self-awareness algorithm. That's where the mirror test fails. We can only observe the who-knows-how-many-levels of causality that lead those animals to show or not show self-aware behaviors in front of a mirror. We can't say anything consequential about the actual algorithm(s) running on their OS when they stand in front of a mirror. We are just running our own set of self-awareness algorithms when we stand in front of a mirror. It seems like these algorithms change according to evolution, just like other systems within the multicellular ecosystem that make up the individual organisms. We often see animals that demonstrate these "self-aware" traits because of similar evolutionary conditions; cats and dogs, for example, have evolved to run a lot of socializing algorithms that mingle well with our own social algorithms. Whether the self-reflective aspect of running these algorithms on our own OS makes one feel a certain way about eating meat is in and of itself the result of the relationship between multi-threading the self-aware part and the self-preservation part in terms of labelling kin and such. At this point we aren't even conclusive about where to draw the boundary between hardware and software. We end up distinguishing between OS and simple firmware as conscious and unconscious. We mostly reduce the firmware down to simple physical reactions by the laws of physics while the OS exhibits something magical beyond those physical reactions in simpler systems. Is there something truly different that sets the OS apart

I don't think it's obvious that nonhuman animals, including the vertebrates we normally farm for food, don't self-model (at least to some degree). I think it hasn't been studied much, although there seems to be more interest now. Absence of evidence is at best weak evidence of absence, especially when there's been little research on the topic to date. Here's some related evidence, although maybe some of this is closer to higher-order processes than self-modelling in particular:

  1. See the discussion of Attention Schema Theory here (section "Is an attention schema evolutionarily old or unique to humans?") by the inventor of that theory, Graziano, in response to Dennett's interpretation of the theory applied to nonhuman animals (in which he also endorses the theory as "basically right"!). Basically, AST requires the individual to have a model of their own attention, an "attention schema".
    1. Dennett wrote "Dogs and other animals do exhibit some modest capacities for noticing their noticings, but we humans have mental lives that teem with such episodes – so much so that most people have never even imagined that the mental lives of other species might not be similarly populated", and then expa… [truncated]
MichaelStJules:
Of course, many animals have failed the mirror test, and that is indeed evidence of absence for those animals. Still:

  1. Animals could just be too dumb (or rely too little on vision) to understand mirrors, but still self-model in other ways, like in my top comment. Or, they might at least tell themselves apart from others in the mirrors as unique, without recognizing themselves, like some monkeys and pigeons [https://www.frontiersin.org/articles/10.3389/fpsyg.2021.669039/full]. Pigeons can pick out live and 5-7 second delayed videos of themselves from prerecorded ones [https://www.sciencedaily.com/releases/2008/06/080613145535.htm].
  2. Animals might not care about the marks. Cleaner wrasse, a species of fish, did pass the mirror test [https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000021] (the multiple phases, including the final self-directed behaviour with the visible mark), and they are particularly inclined to clean things (parasites) that look like the mark, which is where they get their name. I think the fact that they are inclined to clean similar-looking marks was argued to undermine the results, but that seems off to me.
  3. I would be interested in seeing the mirror test replicated in different sensory modalities, e.g. something that replays animals' smells or sounds back to them, a modification near the source in the test condition, and checking whether they direct behaviour towards themselves to investigate.
    1. Some criticisms of past scent mirror tests are discussed here [https://robertocazzollagatti.com/2018/06/07/self-awareness-in-dogs-needs-no-mirroring/] (paper with criticism here [https://www.sciencedirect.com/science/article/pii/S0376635717304862]). The issues were addressed recently here [https://www.tandfonline.com/doi/full/10.1080/03949370.2020.1846628] with wolves. Psychology Today summary
MichaelStJules:
I also don't think GPT-3 has emotions that are inputs to executive functions, like learning, memory, control, etc..

EY's position seems to be that self-modelling is both necessary and sufficient for consciousness.

Necessary, not sufficient. I don't think Eliezer has described what he thinks is sufficient (and maybe he doesn't know what's sufficient -- i.e., I don't know that Eliezer thinks he could build a conscious thing from scratch).

I've collected my thoughts + recent discussions on consciousness and animal patienthood here: https://www.lesswrong.com/posts/TkahaFu3kb6NhZRue/quick-general-thoughts-on-suffering-and-consciousness. I don't have the same views as Eliezer, but I'm guessing me talking about my views here will help make it a little clearer why someone might not think this way of thinking about the topic is totally wrong.

“By their fruits you shall know them.”

A frame I trust in these discussions is trying to elucidate the end goal. What does knowledge about consciousness look like under Eliezer’s model? Under Jemist’s? Under QRI’s?

Let’s say you want the answer to this question enough you go into cryosleep with the instruction “wake me up when they solve consciousness.” Now it’s 500, or 5000, or 5 million years in the future and they’ve done it. You wake up. You go to the local bookstore analogue, pull out the Qualia 101 textbook and sit down to read. What do you find in the pages? Do you find essays on how we realized consciousness was merely a linguistic confusion, or equations for how it all works?

As I understand Eliezer’s position, consciousness is both (1) a linguistic confusion (leaky reification) and (2) the seat of all value. There seems a tension here, that would be good to resolve since the goal of consciousness research seems unclear in this case. I notice I’m putting words in peoples’ mouths and would be glad if the principals could offer their own takes on “what future knowledge about qualia looks like.”

My own view is if we opened that hypothetical textbook up we would find crisp equatio… [truncated]

Copying from my Twitter response to Eliezer

Anil Seth usefully breaks down consciousness into 3 main components: 
1. level of consciousness (anesthesia < deep sleep < awake < psychedelic)
2. contents of consciousness (qualia — external, interoceptive, and mental)
3. consciousness of the self, which can further be broken down into components like feeling ownership of a body, narrative self, and a 1st person perspective. 

He shows how each of these can be quite independent. For example, the selfhood of body-ownership can be fucked with u… [truncated]

I agree with pretty much all of that but remark that "deep sleep < awake < psychedelic" is not at all clearly more correct than "deep sleep < psychedelic < awake". You may feel more aware/conscious/awake/whatever when under the effects of psychedelic drugs, but feeling something doesn't necessarily make it so.

Jacob Falkovich:
The ordering is based on measures of neuro-correlates of the level of consciousness like neural entropy or perturbational complexity, not on how groovy it subjectively feels.
gjm:
It would seem a bit optimistic to call anything a "neuro-correlate of the level of consciousness" simply on the basis that it's higher for ordinary waking brains than for ordinary sleeping brains. Is there more evidence than that for considering neural entropy or perturbational complexity to be measures of "the level of consciousness"? (My understanding is that in some sense they're measuring the amount of information, in some Shannonesque sense, in the state of the brain. Imagine doing something like that with a computer. The figure will -- at least, for some plausible ways of doing it -- be larger when the computer is actively running some software than when it's idle, and you might want to say "aha, we've found a measure of how much the computer is doing useful work". But it's even larger if you arrange to fill its memory with random bits and overwrite them with new random bits once a second, even though that doesn't mean doing any more useful work. I worry that psychedelics might be doing something more analogous to that than to making your computer actually do more.)
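gjm's worry can be made concrete with a toy calculation (my own illustration, not from the thread; the buffers and sizes are arbitrary assumptions): byte-level Shannon entropy is maximal for random data, even though random bits imply no useful computation at all.

```python
import math
import os

def shannon_entropy(data: bytes) -> float:
    """Empirical Shannon entropy of a byte string, in bits per byte (0..8)."""
    if not data:
        return 0.0
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A "busy but structured" buffer: a repeating 16-byte pattern,
# standing in for a computer doing orderly work.
structured = bytes(range(16)) * 4096      # 65536 bytes, 16 distinct values

# A "random bits" buffer: maximal disorder, no useful work implied.
random_buf = os.urandom(65536)

print(f"structured: {shannon_entropy(structured):.2f} bits/byte")  # 4.00
print(f"random:     {shannon_entropy(random_buf):.2f} bits/byte")  # close to 8.00
```

A measure that can't distinguish "doing more work" from "being more disordered" needs additional validation before it can serve as a level-of-consciousness correlate, which is exactly the question being raised here.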
Said Achmiz:
It is not my impression that Eliezer believes any such thing for pain, only (perhaps) for suffering. It’s important not to conflate these. It seems clear to me, at least, that consciousness (in the “subjective, reflective self-awareness” sense) is necessary for suffering; so I don’t think that Eliezer is making any mistake at all (much less a basic mistake!). The word “just” is doing a heck of a lot of work here. Chickens perhaps have “selfless pain”, but to say that they experience anything at all is begging the question!
TAG:
I strongly support this. If you are going to explain away qualia as the result of having a self-model, you need to do more than note that they occur together, or that "conscious" could mean either.

Animal rights obsessed vegan checking in:

I am extremely worried GPT-3 is conscious! To be honest, I am worried about whether my laptop is conscious! A lot of people worried about animal suffering are also worried about algorithms suffering.

Korz:
It seems I am not as worried about GPT-3 as you, but when listening to the simulated interview with simulated Elon Musk by Lsusr [https://www.lesswrong.com/posts/oBPPFrMJ2aBK6a6sD/simulated-elon-musk-lives-in-a-simulation#fnref-DJwdDfxvjSoiBK6Ka-2] in the Clearer Thinking podcast, episode 073 [https://clearerthinkingpodcast.com/?ep=073] (starts at minute 102), I was quite concerned.

I had another complaint about that tweet, which... you do not seem to have, but I want to bring up anyway. 

Why do we assume that 'consciousness' or 'sentience' implies 'morally relevant'? And that a lack of consciousness (if we could prove that) would also imply 'not morally relevant'?

It seems bad to me to torture chickens even if it turns out they aren't self-aware. But lots of people seem to take this as a major crux for them.

If I torture a permanently brain-damaged comatose person to death, who no one will miss, is that 'fine'?

I am angry about this assumption; it seems too convenient. 

Torturing chickens or brain dead people is upsetting and horrible and distasteful to me. I don’t think it’s causing any direct harm or pain to the chicken/person though.

I still judge a human’s character if they find these things fun and amusing. People watch this kind of thing (torture of humans/other animals) on Netflix all the time, for all sorts of good and bad reasons.

Claim: Many things are happening on a below-consciousness level that 'matter' to a person. And if you disrupted those things without changing a person's subjective experience of them (or did it without their notice), this should still count as harm. 

This idea that 'harm' and the level of that harm is mostly a matter of the subjective experience of that harm goes against my model of trauma and suffering. 

Trauma is stored in the body whether we are conscious of it or not. And in fact I think many people are not conscious of their traumas. I'd still call it 'harm' regardless of their conscious awareness. 

I have friends who were circumcised before they could form memories. They don't remember it. Through healing work or other signs of trauma, they realized that in fact this early surgery was likely traumatic. I think Eliezer is sort of saying that this only counts as harm to the degree that it consciously affects them later or something? I disagree with this take, and I think it goes against moral intuition. (If one sees a baby screaming in pain, the impulse is to relieve their 'pain' even if they might not be having a conscious experience of it.) 

If I take a "non-s… [truncated]

mayleaf:
I'm curious how you would distinguish between entities that can be harmed in a morally relevant way and entities that cannot. I use subjective experience to make this distinction, but it sounds like you're using something like -- thwarted intentions? telos-violation? I suspect we'd both agree that chickens are morally relevant and (say) pencils are not, and that snapping a pencil in half is not a morally-relevant action. But I'm curious what criterion you're using to draw that boundary. This is an interesting point; will think about it more.
Ben Pace:
Typically in questions of ethics, I factor the problem into two sub-questions:

  • Game theory: ought I care about other agents' values because we have the potential to affect each other?
  • Ben's preferences: do I personally care about this agent and them having their desires satisfied?

For the second, it's on the table whether I care directly about chickens. I think at minimum I care about them the way I care about characters in like Undertale or something, where they're not real but I imbue meaning into them and their lives. That said, it's also on the table to me that a lot of my deeply felt feelings about why it's horrible to be cruel to chickens are similar to my deeply felt feelings of being terrified when I am standing on a glass bridge and looking down. I feel nauseous and like running and a bit like screaming for fear of falling; and yet there is nothing actually to be afraid of.

If I imagine someone repeatedly playing Undertale to kill all the characters in ways that make the characters maximally 'upset', this seems tasteless and a touch cruel, but not because the characters are conscious. Relatedly, if I found out that someone had built a profitable business that somehow required incidentally running massive numbers of simulations of the worst endings for all the characters in Undertale (e.g. some part of their very complex computer systems had hit an equilibrium of repeatedly computing this, and changing that wasn't a sufficient economic bottleneck to be worth the time/money cost), this would again seem kind of distasteful, but in the present world it would not be very high on my list of things to fix, it would not make the top 1000.

For the first, suppose I do want to engage in game theory with chickens. Then I think all your (excellent) points about consciousness are directly applicable. You're quite right that suffering doesn't need to be conscious, and often I have become aware of a way that I have been averse to thinking about a subject o
3Ben Pace1y
Btw, coming at it from a different angle: Jessicata raises the hypothesis (in her recent post [https://www.lesswrong.com/posts/ZddY8BZbvoXHEvDHf/selfishness-preference-falsification-and-ai-alignment]) that people put so much weight on 'consciousness' as a determinant of moral weight because it is relatively illegible and, they believe, outside the realm of things that civilization currently has a scientific understanding of, so that they can talk about it more freely and without the incredibly high level of undercutting and scrutiny that comes with scientific hypotheses. Quote:
4jessicata1y
I don't think that was my point exactly. Rather, my point is that not all representations used by minds to process information make it into the scientific worldview, so there is a leftover component that is still cared about. That doesn't mean people will think consciousness is more important than scientific information, and indeed scientific theories are conscious to at least some people. Separately, many people have a desire to increase the importance of illegible things to reduce constraint, which is your hypothesis; I think this is an important factor but it wasn't what I was saying.
5Jemist1y
Eliezer later states that he is referring to qualia specifically, which for me are (within a rounding error) totally equivalent to moral relevance.
0Unreal1y
Why is that? You're still tying moral relevance to a subjective experience?
4Jemist1y
Basically, yes: I care about the subjective experiences of entities. I'm curious about the use of the word "still" here. This implies you used to have a similar view to mine but changed it; if so, what made you change your mind? Or have I just missed out on some massive shift in the discourse surrounding consciousness and moral weight? If the latter is the case (which it might be, I'm not plugged into a huge number of moral philosophy sources), that might explain some of my confusion.
2bn221y
People already implicitly consider your example to be acceptable: patients in persistent vegetative states are held in conditions of isolation that would be considered torture if they were counterfactually conscious, and many people support being allowed to kill/euthanize such patients in cases such as Terri Schiavo's.
1Aay17ush1y
I've often thought about this, and this is the conclusion I've reached. There would need to be some criterion that separates the morally relevant from the morally irrelevant. Given that, consciousness (i.e. self-modelling) seems like the best criterion given our current knowledge. Obviously, there are gaps (like the comatose patient you mention), but we currently do not have a better metric to latch on to.
0TAG1y
Why wouldn't the ability to suffer be the criterion? Isn't that built into the concept of sentience? "Sentient" literally means "having senses" but is often used as a synonym for "moral patient".

I suspect I endorse something like what Yudkowsky seems to be claiming. Essentially, I think that humans are uniquely disposed (at least among life on earth) to develop a kind of self-model, and that nonhuman animals lack the same kind of systems that we have. As a result, whatever type of consciousness they have, I think it is radically unlike what we have. I don’t know what moral value, if any, I would assign to nonhuman animals were I to know more about their mental lives or what type of “consciousness” they have, but I am confident that the current hig...

4AprilSR1y
I don't know how you can deny that people have "qualia" when, as far as I can tell, it was a word coined to describe a particular thing that humans experience?
1Lance Bush1y
I'm not sure I understand. What do you mean when you say it was coined to "describe a particular thing that humans experience"? Or maybe, to put this another way: at least in this conversation, what are you referring to with the term "qualia"?
1AprilSR1y
As I understand it, the word "qualia" usually refers to the experience associated with a particular sensation.
0TAG1y
"Qualia" is easy to define. As Wikipedia has it:

Whereas illusionism is almost impossible to define coherently. "According to illusionism, you only have propositional attitudes, not perceptions. Some of those propositional attitudes seem like propositional attitudes, and others seem like perceptions. Well, they don't, because if anything seemed like anything, that would be a perception. So actually you have a meta-level belief that some of your propositional attitudes are propositional attitudes, but also a meta-level belief that others aren't. That's the illusion. But actually it's not an illusion, because an illusion is a false perception, and there are no perceptions. It's actually a false belief, a delusion. I don't know why we call it illusionism."
3gjm1y
It's easy to give examples of things we think of as qualia. I'm not so sure that that means it's easy to give a satisfactory definition of "qualia".

I can give lots of examples of people, but there's scope for endless debate about exactly what counts as a person and what doesn't. (Newly born children? 16-week-old foetuses? Aliens or AIs, should any exist now or in the future, with abilities comparable to ours but very different brains and minds? Beings like gods, angels, demons, etc., should any exist, with abilities in some ways comparable to ours but not made out of matter at all?) And for debate about when persons A and B are actually the same person. (Suppose some intelligent computer programs are persons. If I take a copy, do I then have one person or two? Suppose we favour a multiverse-type interpretation of quantum mechanics. Are the versions of "me" on two nearby Everett branches one person, or two? Am I the same person as I was 30 years ago?)

There's similar unclarity about what things count as qualia and about how to individuate them. (E.g., if you and I look at the same red object and both have normal colour vision, do we have "the same" quale of seeing-a-red-thing or not? If I see the same red thing twice, is that "the same" quale each time? If the answers are negative, what actual work is the notion of qualia doing?)

And e.g. Daniel Dennett would claim that the word "qualia" includes enough baggage that it's better to say that there are no qualia while in no way denying that people experience things. It's not (I think) in question that we experience things. It's quite reasonable (I think) to question whether anything about our experience is made clearer by introducing objects called qualia.
1TAG1y
Satisfactory for whom? I use examples because they are sufficient to get the point across to people who aren't too biased. Someone might have some genuine reason to need a more rigorous definition... but they might not; they might instead be making a selective demand for rigour, out of bias. Where are the calls for rigorous definitions of "matter", "computation", etc.?

If my purpose is to demonstrate that people exist, all I need to do is point to a few uncontentious examples of people... I don't need to solve every edge case. And "endless debate" needs to be avoided. People who make selective demands for rigour don't want to change their minds, and endless debate is a great way of achieving that.

Why does that matter if all I am doing is asserting that qualia exist, or lack a reductive explanation?
5gjm1y
(I'm ignoring those parts of your reply that seem to have no purpose other than implicitly accusing me of arguing in bad faith. I have seldom known anything useful to come out of engaging with that sort of thing. These discussions would be more enjoyable, for me at least, if you weren't so relentlessly adversarial about them.)

Satisfactory for whom? For me, obviously :-). There is at least one eminent philosopher, namely Daniel Dennett, who has made something of a speciality of this area and who flatly denies that qualia "exist", and who doesn't appear to me to be either a dimwit or a crackpot. That is already sufficient reason for me to want to be careful about saying "duh, of course qualia exist". Of course if all you mean by that is that people have experience, then I agree with that, but if that's all you mean then what need is there to talk about "qualia" at all? And if it's not all you mean, then before agreeing I need to know what else is being implicitly brought in.

Now, in the present instance it's Jemist who introduced "qualia" to the discussion (so, in particular, you are under no obligation to be able to tell me precisely what Jemist means by the term). And Jemist talks e.g. about experience being "turned into qualia", and I don't see how your examples help to understand what that means, or what distinction between "experience" and "qualia" Jemist is trying to draw.

The general idea seems to be something like this: people and chickens alike have some sort of stream or sea of experiences, and humans (and maybe chickens or maybe not) "turn these experiences into qualia", and having not merely experiences but qualia is what justifies calling an entity "conscious" and/or seeing that entity as of moral significance. I'm sympathetic to the general idea that there's something that's kinda-the-same about chickens' sensory input and ours, and something that's maybe different about the early stages of processing that sensory input, and that that has somethi
1TAG1y
By that standard, there is no satisfactory definition of anything, since there are philosophers who doubt their own existence, your existence, the existence of an external world, the existence of matter and so on. But a definition is not supposed to count as a proof all by itself. A definition of X should allow two people who are having a conversation about X to understand each other. A definition that is satisfactory for that purpose does not need to constitute a proof or settle every possible edge case.

I'm not sure why it's my job to explain what Jemist means. If you want a hint as to what an "experience" could be other than a quale, then look at what qualia sceptics think an experience is... apparently some sort of disposition to answer "yes I see red" when asked what they see.

If you are anything like most people, you probably have no compunction against destroying machinery, or the virtual characters in a game. And you probably don't care too much if the characters say "aaagh!" or the machinery reports damage. So it's as if you think there is something about living organisms that goes beyond damage and reports of damage... something like pain, maybe? More than one thing could make an entity morally significant, and there are arguments for the existence of qualia other than moral significance.

Well, if we fill in the picture by adding in more fine-grained structure and function, we are probably not going to find the qualia, for the same reason that we haven't already. Nonetheless, we have good reason to think that our qualia are there, and rather less good reason to believe that the from-the-outside approach is literally everything, so that qualia have to be disregarded if they cannot be located on that particular map.

I just quoted a definition of qualia which says nothing about in-principle irreducibility. Do you agree with that definition? Is it reasonable to reject X, for which there is evidence, on the basis that someone might be smuggling in Y? Have
2gjm1y
I think you may have misunderstood what I was saying. (My fault, no doubt, for not being clearer and more explicit.) I was not arguing that because some eminent philosophers deny the "existence" of "qualia" it follows that the term has no satisfactory definition. (I do say that I've not seen a really satisfactory definition, but that's a separate claim.) But it seemed that you were saying that the "existence" of "qualia" is just obvious, and I was explaining one reason why I can't agree.

(Why all the scare-quotes? Because if I just say "it is not obvious that qualia exist" then someone may take "qualia exist" to mean the same thing as "people experience things" and think that I am saying it isn't obvious that people experience things. That is not what I'm doubtful about. I am doubtful about the wisdom of reifying that experience into things-called-qualia, and I am doubtful about some of the philosophical baggage that "qualia" sometimes seem to be carrying.)

Yup. But (unless I've misunderstood you) you're wanting to define "qualia" by pointing to some examples of people having experience, and that is definitely not sufficient for me to understand exactly what you mean by "qualia" and by "having qualia".

It isn't, and in the comment to which you were replying I explicitly said that it isn't.

I'm not sure why you think I think it's your job to explain what Jemist means, and if it's because I said something that implies that or looks like it did, then I hope you will accept my apologies, because I didn't intend to do any such thing.

It seems to me that the comment to which you were replying already sketched exactly the argument you're making, and then went on to explain why I don't find that argument sufficient reason to say that we "have qualia", even though (of course!) I agree that it indicates that there is something going on that has something to do with what people are pointing at when they talk about qualia.

(Perhaps the following analogy will help. Suppose y
2TAG10mo
But you weren't disagreeing with anything actually in the definition. You have been saying that the definition doesn't make it explicit enough that qualia aren't irreducible, immaterial, etc. Merely failing to mention reducibility, etc., one way or the other isn't enough for you.

"Seem" to whom? From my perspective, you keep insisting that I have smuggled in non-materialistic assumptions... but I don't even see how that would work. If I offer you one definition, then swap it for another, isn't that a blatant cheat on my part? And if it is, why worry? Or if I argue that qualia are immaterial based on other evidence and theories and whatever... so that the conclusion isn't begged by definition alone... that's legitimate argumentation.

You are asking me to tell you what qualia are ontologically. But that's not a definition, that's a theory. Theories explain evidence. Evidence has to be spoken about separately from theories. When I define qualia, I am defining something that needs to be explained, not offering an explanation. I want the definition to be ontologically non-committal so that the process of theory building can proceed without bias. But neutrality isn't enough for you: you are committed to a theory, and you won't consider something as relevant evidence unless you can be guaranteed that it won't disrupt the theory.

"Experience things" doesn't convey enough information, because it can too easily be taken in a naive realist sense. The point isn't that you are seeing a tomato, it is that you are seeing it in a certain way. According to science, our senses are not an open window on the world that portrays it exactly as it is. Instead, the sensory centres of our brains are connected to the outside world by a complex causal chain, during which information, already limited by our sensory modalities, is filtered and reprocessed in various ways. So scientific accounts of perception require there to be a way-we-perceive-things... quite possibly, an individua
2gjm10mo
It is simply not true that I have been saying that the things you offer by way of defining "qualia" don't make it clear enough what the term means. And that I don't want to affirm the existence of something whose meaning is not clear to me, one reason (not the only one) being that that opens the way for bait-and-switch moves where I say "sure, X exists" and then the person I'm talking to says "aha, so you agree that Y" where Y is something that now turns out to be part of what they meant by X that hadn't been made explicit before.

That doesn't mean that I need a definition that says "these things aren't irreducible or immaterial". It means I need a definition clear enough that I can tell whether irreducibility, or immateriality, or a dozen other things, are part of what the term means.

So far, you've (1) pointed to a few things and said "look, these are qualia" (which obviously doesn't enable me to tell what is and isn't part of what you mean by the term), and (2) cited a definition in a Wikipedia article which, as I explained above, seems actually to be at least two different definitions that say different things. And, in your latest comment, (3) said some things about the processes of perception that don't help me understand what you mean by "qualia" for reasons I'll get to below.

It's very likely that what you mean by "qualia" doesn't in fact presuppose immateriality or irreducibility or whatever! But I can't tell because you have so far not chosen to tell me, in terms I am able to understand with confidence, just what you mean by the term.

Nope. I keep insisting that I can't tell what assumptions, if any, you might have smuggled in or might smuggle in later, because I can't tell exactly what you mean by the term. Which is problematic for all sorts of reasons other than possible assumption-smuggling.

Yup, and as you say that would be fine because then I could just say "look, you cheated and here's how". But what you're actually doing is offering me no def
1TAG10mo
Define "matter".
2gjm10mo
Why? (We haven't been discussing matter. I haven't been insisting that you affirm the existence of matter. There aren't any circumstances parallel to those involving "qualia".) But, since you ask, here's the best I can do on short notice.

First, purely handwavily and to give some informal idea of the boundaries, here are some things that I would call "matter" and some possibly-similar things that I would not. Matter: electrons, neutrons, bricks, stars, air, people, the London Philharmonic Orchestra (considered as a particular bunch of particular people). Not matter: photons, electric fields, empty space (to whatever extent such a thing exists), the London Philharmonic Orchestra (considered as a thing whose detailed composition changes over time), the god believed in by Christians (should he exist), minds. Doubtful: black holes; the gods believed in by the ancient Greeks (should they exist).

"Matter" is a kind of stuff rather than a kind of thing; that is, in general if some things are "matter" then so is what we get by considering them together, and so are whatever parts they might have. (This might need revision if e.g. it turns out that things I consider "matter" and things I don't are somehow merely different arrangements of some more fundamental stuff.)

Conditional on the universe working roughly the way I currently model it as doing (or, more precisely, allow other people better at these things to model it as doing), I think the actually-existing things I call "matter" are coextensive with "things made from excitations of fermionic quantum fields". If the way the universe works is very different from how I think it does, then depending on the details I might want (1) to continue to say that matter is excitations of fermionic quantum fields, and to declare that contrary to appearances some things we've all been thinking of as matter are something else, or (2) to continue to say that the things we naïvely think of as matter should be called matter, even thoug
1TAG10mo
I didn't say anything explicit about reification. And it's not an implication, either. Merely using a noun is not reification. "Action", "event", "state", "property", "process" and "nothingness" are all nouns, yet none of them refer to things.

Again, that would be an ontology of qualia. Again, I am offering a definition, not a complete theory. Again, your grounds for saying that the definition is inadequate are that it isn't answering every question you might have -- and that it might have implications you don't like. If the way qualia actually work, ontologically -- a subject about which I have said nothing so far -- involves the literal sharing of a universal between identical subjective sensations, then you should believe it, because it is true, and not object to it dogmatically.

Definitions are supposed to have implications. It's not reasonable to object to them for having implications... and it's not reasonable to object to them for having implications you don't like, because you are supposed to decide between theories on the basis of the evidence.

Notice that in raising the issue, you are already using a good-enough definition of qualia. To object to qualia on the basis that they involve a Platonic shared universal, rather than some other solution to the problem of universals, you have to be able to talk about them, even if without using the word "qualia". But of course, you always have to have pre-theoretic definitions in order to build a theory.

Whether qualia are immaterial or irreducible or whatever depends on all the evidence -- on a theory. It should not be begged by a single definition. Question-begging definitions are bad, m'kay. But we would first need to agree that qualia exist at all. That's how theory building works... step by step. Nobody could come to any conclusion about anything if they had to start with completely clear and exhaustive definitions. Ordinary definitions are not as exhaustive as encyclopedia articles, for instance. You are engaging in a selective demand f
2gjm10mo
Using a noun is, by default, reification. Or, at the very least, should be presumed so in the absence of some statement along the lines of "of course when I'm asking you to agree that people have qualia, I am not asking you to commit yourself to there being any such things as qualia". Qualia without reification seem to me to amount to "people have experiences". I understand that it doesn't seem that way to you, but I don't understand why; I don't yet understand just what you mean by "qualia", and the one thing you've said that seems to be an attempt to explain why you want something that goes beyond "people have experiences" in the direction you're calling "qualia" -- the business about perception being a complex multi-stage process involving filtering and processing and whatnot -- didn't help me, for the reasons I've already given.

I wish you would offer a definition. You are repeatedly declining to do so, and then complaining that I object to your definition (which you haven't given) or have another definition of my own (which I don't) or that I am immovably committed to some theory (you don't say what) that conflicts (you don't say how) with something (you don't say what) about qualia. Maybe you're right -- for instance, I might be committed to some theory without even recognizing the fact, because it seems so obvious to me. But if so, the only way you're going to correct my error (I assume, at least for the sake of argument, that if I am wrong you do want to help me get less wrong, rather than merely to gloat at how wrong I am) is by showing me what I'm doing wrong, which you seem very unwilling to do. You just want to keep saying that I'm wrong, which so far as I can see accomplishes nothing.

That's OK, because I'm not doing that, as I already tried to make clear. My problem is that I can't tell what implications your definition has. Because you won't tell me what it is.

It seems to me that in order for a definition I'm using to be good enough, a minimum re
2TAG8mo
I've already said that I'm using "qualia" in an ontologically non-committal way. I note from your 2016 comment that you use the word noncommittally yourself: "Qualia are what happens in our brains (or our immaterial souls, or wherever we have experiences) in response to external stimulation, or similar things that arise in other ways (e.g., in dreams)."

As I have explained, equating qualia and experiences doesn't sufficiently emphasise the subjective aspects. "Experience" can be used in contexts like "experience a sunset", where the thing experienced is entirely objective, or contexts like "experience existential despair", where it's a subjective feeling. Only the second kind of use overlaps with "qualia". Hence, "qualia" is often briefly defined as "subjective experience". Note that "experience" is just as much of a noun as "quale", so it has just as much of a reification issue.

None.

Then don't reify. The reification issue exists only in your imagination.

How do you know it's different from what you mean? You were comfortable using the word in 2016.

This conversation started when I used a series of examples to define "qualia", which you objected to as not being a real definition: "It's easy to give examples of things we think of as qualia. I'm not so sure that that means it's easy to give a satisfactory definition of 'qualia'." But when I asked you to define "matter"... you started off with a list of examples! "First, purely handwavily and to give some informal idea of the boundaries, here are some things that I would call 'matter' and some possibly-similar things that I would not. Matter: electrons, neutrons, bricks, stars, air, people, the London Philharmonic Orchestra (considered as a particular bunch of particular people). Not matter: photons, electric fields, empty space (to whatever extent such a thing exists), the London Philharmonic Orchestra (considered as a thing whose detailed composition changes over time), the god believed in by Christians (shoul
2gjm8mo
Your accusations of inconsistency

Yup, I used the term "qualia" in 2016 (in response to someone else making an argument that used the term). I don't always pick every possible fight :-). (In that case, turchin was making another specific argument and used the word "qualia" in passing. I disagreed with the other specific argument and argued against that. The specific word "qualia" was a side issue at most. Here, the specific point at issue is whether everyone needs to agree that "we have qualia".)

You asked for a definition of "matter" and I (1) gave a list of examples and counterexamples and near-the-boundary examples, (2) prefaced with an explicit note that this was preliminary handwaving, (3) followed by an attempt at a precise definition distinguishing matter from not-matter. You haven't done any of that for "qualia", just given a list of examples, and that (not the fact that you did give a list of examples) is what I was complaining about. "It's easy to give examples ... I'm not so sure that that means it's easy to give a satisfactory definition".

Your accusations of wilful ignorance and/or laziness

Yes, I could look up definitions of "naïve realism" or of "qualia". As it happens, I have. They don't tell me what you mean by those terms, and definitions of them do not always agree with one another. Which is why I keep asking you what you mean by terms you are using, and get frustrated when you seem reluctant to tell me.

For instance, here [https://www.oxfordbibliographies.com/view/document/obo-9780195396577/obo-9780195396577-0340.xml] we read that "the naïve realist claims that, when we successfully see a tomato, that tomato is literally a constituent of that experience, such that an experience of that fundamental kind could not have occurred in the absence of that object". Here [https://en.wikipedia.org/wiki/Na%C3%AFve_realism_(psychology)] we read that "naïve realism is the human tendency to believe that we see the world around us objectively, and that p
1Lance Bush1y
It's not satisfactory to me. Does this mean I am "too biased"? That seems like a potentially unjustified presumption to make, and not a fair way to have a discussion with others who might disagree with you. Anyone could offer a definition, then state in advance that anyone who doesn't accept it is "too biased", then, when someone says they don't accept it, say "see, I told you so," even if an unbiased person would judge the definition to be inadequate.

In any case, I'm not making a selective demand for rigor. Even if I were, I'd probably just shrug and raise the challenge anyway. I don't know what people talking about qualia are talking about. But I am also pretty confident they don't know what they are talking about. I suspect qualia is a pseudoconcept invented by philosophers, and that to the extent that we adequately characterize it, it faces pretty serious challenges.

The main person I discuss illusionism and consciousness with specializes in philosophy of computation and philosophy of science, with an emphasis on broad metaphysical questions. We both endorse illusionism, and have for years, so there's little to say there. Instead, we mostly discuss their views on computation and metaphysics, and I'm often asked to read their papers on these topics. So, in the past few years, I have read significantly more work on what computers and matter are than I have on consciousness. Thus, ironically, I have more discussions about rigorous attempts to define computers and features of the external world than I do about consciousness. So if you think that, in denying qualia, I am somehow failing to apply a similar degree of rigor as I do to other ideas, you could not have picked worse examples. It is not the case that I'm especially tough on the notion of qualia.
-2Lance Bush1y
Unfortunately, I don’t think the account of qualia you’ve presented is adequate. First, I don’t know what is meant by “perceived sensation” of the pain of a headache. This could be cashed out in functional terms that don’t make appeal to what I am very confident philosophers are typically referring to when they refer to qualia. So this strikes me as a kind of veiled way of just using another word or phrase (in this case, “perceived sensation”) as a stand-in for “qualia,” rather than a definition. It’s a bit like saying the definition of morality is that it is “about ethics.” I’m likewise at a loss about the second part of this. What is the qualitative character of a sensation? What does it mean to say that you’re referring to “what it is directly like to be experiencing” rather than a belief about experiences? Again, these just seem like roundabout ways of gesturing towards something that remains so underspecified that I still don’t know what people are talking about. Illusionism holds that our introspections about the nature of our conscious experiences are systematically mistaken in particular ways that induce people to hold the incorrect belief that our experiences have phenomenal properties. I think this is a coherent position, and I’m reasonably confident it comports with how Dennett and Frankish would characterize it. Where is that quote from? It seems to imply that all mental states are other propositional attitudes or perceptions. If so, that doesn’t seem right to me. Also, the complaint primarily seems to be with the name “illusionism.” I’m happy to call it delusionism. If we do that, do they still have an objection? If so, I’m not quite sure what the objection is.
3TAG1y
Is "unmarried man" a mere stand-in for "bachelor"?

They are ways of gesturing towards your own experience. If you refuse to introspect, you are not going to get it.

Me. That's what I was expanding on.

The phenomenal properties you mentioned... those are qualia. You have the concept, because you need the concept to say it's illusory.
1Lance Bush1y
In some cases, but not others. One can reasonably ask whether the Pope is a bachelor, but for the purposes of technical philosophical work one might treat "unmarried man" and "bachelor" as identical in the context of some technical discussion.

I can understand if someone who doesn't know me or my educational background might think that I just haven't thought about the topic of qualia enough, or that I am refusing to introspect about it, but that isn't the case. This isn't a topic I've thought about only casually; it is relevant to my work. That being said, I have introspected, and I have come to the conclusion that there isn't anything to get with respect to qualia. Nothing about my introspection gives me any insight into what you or others mean by qualia. Instead, I have concluded that the notion of qualia that has trickled out from academic philosophy is most likely a conceptual confusion enshrining the kinds of introspective errors Dennett and others argue that people are prone to make.

Okay, thanks. I apologize for having had to ask, but you provided a paragraph in quotation with no attribution, and it was difficult for me to interpret what that meant.

I have a kind of meta-concept: that other people have a concept of qualia, but I myself am not personally acquainted with them, and would not say that I have the concept. One does not need to personally be subject to an illusion to believe that others are. I know that other people purport to have a notion of qualia, but I do not. But thinking other people have mistaken or confused concepts does not require that one have the concept in the sense of possessing or understanding it. In other words, other people might tell me that there's, e.g., "something it's like" to see red or taste chocolate that somehow defies explanation, is private, is inaccessible, and so on. But I myself do not have such experiences. In such cases, I think people are simply confused, and that this can result in the case of believing in qual
1 · TAG · 1y
Of course, introspection isn't meant to give you a definition of qualia...it's meant to give you direct acquaintance.
1 · Lance Bush · 1y
I have introspected and it has not resulted in acquaintance with qualia. I believe people can introspect and then draw mistaken conclusions about the nature of their experiences, and that qualia is a good candidate for one of these mistaken conclusions.
1 · TAG · 1y
What did it result in acquaintance with? If it seems to you that all your mental content consists only of propositional attitudes, then you don't even have the illusion of phenomenal consciousness. But why would you alone be lacking it?
4 · Raemon · 1y
Note that it's plausible to me that this is a Typical Mind thing and actually there's just a lot of people going around without the perception of phenomenal consciousness. Like, Lance, do you not feel like you experience that things seem ways? Or just that they don't seem to be ways in ways that seem robustly meaningful or something?
9 · TAG · 1y
But the qualiaphilic claim is typical, statistically. Even if Lance's and Dennett's claims to zombiehood are sincere, they are not typical.
5 · Raemon · 1y
Have we even checked tho? (Maybe the answer is yes, but it hadn't occurred to me before just now that this was a dimension people might vary on. Or, actually I think it had, but I hadn't had a person in front of me actually claiming it)
1 · Lance Bush · 1y
See above; I posted a link to a recent study. There hasn't been much work on this. While my views may be atypical, so too might the views popular among contemporary analytic philosophers. A commitment to the notion that there is a legitimate hard problem of consciousness, that we "have qualia," and so on might all be idiosyncrasies of the specific way philosophers think, and may even result from unique historical contingencies, such that, were there many more philosophers like Quine and Dennett in the field, such views might not be so popular. Some philosophical positions seem to rise and fall over time. Moral realism was less popular a few decades ago, but has enjoyed a recent resurgence, for instance. This suggests that the perspectives of philosophers might result in part from trends or fashions distinctive of particular points in time.
1 · Lance Bush · 1y
Typical of who?
1 · TAG · 1y
"Statistically", so "who" would be most people.
1 · Lance Bush · 1y
Thanks for clarifying. Not all statistical claims in e.g., psychology are intended to generalize towards most people, so I didn't want to assume you meant most people. If the claim is that most people have a concept of qualia, that may be true, but I'm not confident that it is. That seems like an empirical question it'd be worth looking into. Either way, I wouldn't be terribly surprised if most people had the concept, or (I think more likely) could readily acquire it on minimal introspection (though on my view I'd say that people are either duped or readily able to be duped into thinking they have the concept). I don't know if I am different, or if so, why. It's possible I do have the concept but don't recognize it, or am deceiving myself somehow. It's also possible I am somehow atypical neurologically. I went into philosophy precisely because I consistently found that I either didn't have intuitions about conventional philosophical cases at all (e.g., Gettier problems), or had nonstandard or less common views (e.g. illusionism, normative antirealism, utilitarianism). That led me to study intuitions, the psychological underpinnings of philosophical thought, and a host of related topics. So there is no coincidence in my presenting the views expressed here. I got into these topics because everyone else struck me as having bizarre views.
6 · TAG · 1y
Most people don't know the word "qualia". Nonetheless, most people will state something equivalent: that they have feelings and seemings that they can't fully describe. So it's a "speaking prose" thing.

And something like that is implicit in Illusionism. Illusionism attempts to explain away reports of ineffable subjective sensations, reports of qualia-like things. If no one had such beliefs, or made such reports, there would be nothing for Illusionism to address.

Trying to attack qualia from every possible angle is rather self-defeating. For instance, if you literally don't know what "qualia" means, you can't report that you have none. And if no one even seems to have qualia, there is nothing for Illusionism to do. And so on.

But then, why insist that you are right? If you have something like colour blindness, then why insist that everyone else is deluded when they report colours?
2 · Lance Bush · 1y
There are many reasons why a person might struggle to describe their experiences that wouldn't be due to them having qualia or having some implicit qualia-based theory, especially among laypeople who are not experienced at describing their mental states. It would be difficult to distinguish these other reasons from reasons having to do with qualia. So I don't agree that what you describe would necessarily be equivalent, and I don't think it would be easy to provide empirical evidence specifically of the notion that people have or think they have qualia, or speak or think in a way best explained by them having qualia. Even if it could be done, I don't know of any empirical evidence that would support this claim. Maybe there is some. But I don't have a high prior on any empirical investigation into how laypeople think turning out to support your claim, either.

You know, I think you're right. And I believe the course of this discussion has clarified things for me sufficiently for me to recognize that I do not, strictly speaking, endorse illusionism. Illusionism could be construed as the conjunction of two claims: (1) On introspection, people systematically misrepresent their experiential states as having phenomenal properties. (2) There are no such phenomenal properties.

For instance, Frankish (2016) [https://www.ingentaconnect.com/content/imp/jcs/2016/00000023/f0020011/art00002] defines (strong) illusionism as the view that: "[...] phenomenal consciousness is illusory; experiences do not really have qualitative, 'what-it's-like' properties, whether physical or non-physical" (p. 15).

Like illusionists, I deny that there are phenomenal properties, qualia, what-it's-likeness, and so on. In that sense, I deny phenomenal realism (Mandik, 2016) [https://www.ingentaconnect.com/contentone/imp/jcs/2016/00000023/F0020011/art00011]. As such, I agree with (2) above. Thus, I agree with the central claim of illusionism, that there are no phenomenal properties, and I deny t
2 · Richard_Kennaway · 1y
When you sit alone in an empty room, do you have a sense of your own presence, your own self? Can you be aware, not only of your sensations, but of the sensation of having those sensations? Can you have thoughts, and be aware of having those thoughts? And be aware of having these awarenesses? My answer to each of these questions is "yes". But for you, do these questions fail to point to anything in your experience?
1 · Lance Bush · 1y
I'm not sure. I have sensations, but I don't know what a sensation of a sensation would be. Sure, but that just sounds like metacognition, and that doesn't strike me as being identical with or indicative of having qualia. I can know that I know things, for instance. I would describe this as third-order metacognition, or recursive cognition, or something like that. And yea, I can do that. I can think that Sam thinks that I think that he lied, for instance. Or I can know that my leg hurts and then think about the fact that I know that my leg hurts.
4 · Jemist · 1y
Having now had a lot of different conversations on consciousness I'm coming to a slightly disturbing belief that this might be the case. I have no idea what this implies for any of my downstream-of-consciousness views.
1 · Lance Bush · 1y
I don't know what that means, so I'm not sure. What would it mean for something to seem a certain way?

I don't think it's this. It's more that when people try to push me to have qualia intuitions, I can introspect, report on the contents of my mental states, and then they want me to locate something extra. But there never is anything extra, and they can never explain what they're talking about, other than to use examples that don't help me at all, or metaphors that I don't understand. Nobody seems capable of directly explaining what they mean. And when pressed, they insist that the concept in question is "unanalyzable" or inexplicable, or otherwise maintain that they cannot explain it.

Despite his fame, the majority of students I encountered who take Dennett's courses do not accept his views at all, and take qualia quite seriously. I had conversations that would last well over an hour where I would have one or more of them try to get me to grok what they're talking about, and they never succeeded. I've had people make the following kinds of claims: (1) I am pretending to not get it so that I can signal my intellectual unconventionality. (2) I do get it, but I don't realize that I get it. (3) I may be neurologically atypical. (4) I am too "caught in the grip" of a philosophical theory, and this has rendered me unable to get it.

One or more of these could be true, but I'm not sure how I'd find out, or what I might do about it if I did. But I am strangely drawn to a much more disturbing possibility, one that an outside view would suggest is pretty unlikely: (5) all of these people are confused, qualia is a pseudoconcept, and the whole discussion predicated on it is fundamentally misguided.

I find myself drawn to this view, in spite of it entailing that a majority of people in academic philosophy, or who encounter it, are deeply mistaken. I should note, though, that I specialize in metaethics in particular. Most moral philosophers are moral realists (about 60%) a
3 · MichaelStJules · 1y
Are they expecting qualia to be more than a mental state? If you're reporting the contents of your mental states, isn't that already enough? I'm not sure what extra there should be for qualia. Objects you touch can feel hot to you, and that's exactly what you'd be reporting. Or would you say something like "I know it's hot, but I don't feel it's hot"? How would you know it's hot but not feel it's hot, if your only information came from touching it? Where does the knowledge come from? Are you saying that what you're reporting is only the verbal inner thought you had that it's hot, and that happened without any conscious mental trigger? If it's only the verbal thought, on what basis would you believe that it's actually hot? The verbal thought alone? (Suppose it's also not hot enough to trigger a reflexive response.) Doesn't your inner monologue also sound like something? (FWIW, I think mine has one pitch and one volume, and I'm not sure it sounds like anyone's voice in particular (even my own). It has my accent, or whatever accent I mimic.) More generally, the contents of your mental states are richer than the ones you report on symbolically (verbally or otherwise) to yourself or others, right? Like you notice more details than you talk to yourself about in the moment, e.g. individual notes in songs, sounds, details in images, etc.. Isn't this perceptual richness what people mean by qualia? I don't mean to say that it's richer than your attention, but you can attend to individual details without talking about them.
1 · Lance Bush · 1y
I don't think I can replicate exactly the kinds of ways people framed the questions. But they might do something like this: they'd show me a red object. They'd ask me "What color is this?" I say red. Then they'd try to extract from me an appreciation for the red "being a certain way" independent of, e.g., my disposition to identify the object as red, or my attitudes about red as a color, and so on. Nothing about "seeing red" indicates to me that there is a "what it's like" to seeing red. I am simply ... seeing red. Like, I can report that fact, and talk about it, and say things like "it isn't blue" and "it is the same color as a typical apple" and such, but there's nothing else. There's no "what it's likeness" for me, or, if there is, I'm not able to detect and report on this fact.

The most common way people will frame this is to try to get me to agree that the red has a certain "redness" to it. That chocolate is "chocolatey" and so on. I can be in an entire room of people insisting that red has the property of "redness" and that chocolate is "chocolatey" and so on, and they all nod and agree that our experiences have these intrinsic what-it's-likeness properties. This seems to be what people are talking about when they talk about qualia. To me, this makes no sense at all. It's like saying seven has the property of "sevenness." That seems vacuous to me.

I can look at something like Dennett's account: that people report experiences as having some kind of intrinsic nonrelational properties that are ineffable and immediately apprehensible. I can understand all those words in combination, but I don't see how anyone could access such a thing (if that's what qualia are supposed to be), and I don't think I do. It may be that I am something akin to a native functionalist. I don't know. But part of the reason I was drawn to Dennett's views is that they are literally the only views that have ever made any sense to me. Everything else seems like gibberish.
1 · MichaelStJules · 1y
Ok, I think I get the disagreement now. Hmm, I'm not sure it's vacuous, since it's not like they're applying "redness" to only one thing; redness is a common feature of many different experiences. 14 could have "sevenness", too.

Maybe we can think of examples of different experiences where it's hard to come up with distinguishing functional properties, but you can still distinguish the experiences? Maybe the following questions will seem silly/naive, since I'm not used to thinking in functional terms. Feel free to only answer the ones you think are useful, since they're somewhat repetitive.

1. What are the differences in functional properties between two slightly different shades of red that you can only tell apart when you see them next to each other? Or maybe there are none when separate, but seeing them next to each other just introduces another functional property? What functional property would this be?
   - What if you can tell them apart when they aren't next to each other? How are you doing so?
2. How about higher and lower pitched sounds? Say the same note an octave apart?
3. Say you touch something a few degrees above room temperature, and you can tell that it's hotter, but it doesn't invoke any particular desire. How can you tell it's hotter? How does this cash out in terms of functional properties?

I'm guessing you would further define these in functional terms, since they too seem like the kinds of things people could insist qualia are involved in (desire, distinguishing). What would be basic functional properties that you wouldn't cash out further? Do you have to go all the way down to physics, or are there higher-level basic functional properties? I think if you go all the way down to physics, this is below our awareness and what our brain actually has concepts of; it's just implemented in them. If you were experiencing sweetness in taste (or some other sensation) for the first time, what would
1 · Lance Bush · 1y
One can apply a vacuous term to multiple things, so pointing out that you could apply the term to more than one thing does not seem to me to indicate that it isn't vacuous. I could even stipulate a concept that is vacuous by design: "smorf", which doesn't mean anything, and then I can say something like "potatoes are smorf." The ability to distinguish the experiences in a way you can report on would be at least one functional difference, so this doesn't seem to me like it would demonstrate much of anything. Some of the questions you ask seem a bit obscure, like how I can tell something is hotter. Are you asking for a physiological explanation? Or the cognitive mechanisms involved? If so, I don't know, but I'm not sure what that would have to do with qualia. But maybe I'm not understanding the question, and I'm not sure how that could get me any closer to understanding what qualia are supposed to be. I don't know. Likewise for most of the questions you ask. "What are the functional properties of X?" questions are very strange to me. I am not quite sure what I am being asked, or how I might answer, or if I'm supposed to be able to answer. Maybe you could help me out here, because I'd like to answer any questions I'm capable of answering, but I'm not sure what to do with these.
1 · MichaelStJules · 1y
It is a functional difference, but there must be some further (conscious?) reason why we can do so, right? Where I want to go with this is that you can distinguish them because they feel different, and that's what qualia refers to. This "feeling" in qualia, too, could be a functional property. The causal diagram I'm imagining is something like:

Unconscious processes (+ unconscious functional properties) -> ("Qualia", other conscious functional properties) -> More conscious functional properties

And I'm trying to control for "other conscious functional properties" with my questions, so that the reason you can distinguish two particular experiences goes through "Qualia". You can tell two musical notes apart because they feel (sound) different to you. I'm not sure if what I wrote above will help clarify.

You also wrote: How would you cash out "desire to move my hand away from the object" and "distinguish it from something cold or at least not hot" in functional terms?

To me, both of these explanations could also pass through "qualia". Doesn't desire feel like something, too? I'm asking you to cash out desire and distinguishing in functional terms, too, and if we keep doing this, do "qualia" come up somewhere?
1 · Lance Bush · 1y
Do you mean like a causal reason? If so then of course, but that wouldn't have anything to do with qualia. I have access to the contents of my mental states, and that includes information that allows me to identify and draw distinctions between things, categorize things, label things, and so on. A "feeling" can be cashed out in such terms, and once it is, there's nothing else to explain, and no other properties or phenomena to refer to. I don't know what work "qualia" is doing here.

Of course things feel various ways to me, and of course they feel different. Touching a hot stove doesn't feel the same as touching a block of ice. But I could get a robot, that has no qualia, but has temperature-detecting mechanisms, to say something like "I have detected heat in this location and cold in this location and they are different." I don't think my ability to distinguish between things is because they "feel" different; rather, I'd say that insofar as I can report that they "feel different" it's because I can report differences between them. I think the invocation of qualia here is superfluous and may get the explanation backwards: I don't distinguish things because they feel different; things "feel different" if and only if we can distinguish differences between them.

Then I'm even more puzzled by what you think qualia are. Qualia are, I take it, ineffable, intrinsic qualitative properties of experiences, though depending on what someone is talking about they might include more or fewer features than these. I'm not sure qualia can be "functional" in the relevant sense.

I don't know. I just want to know what qualia are. Either people can explain what qualia are or they can't. My inability to explain something wouldn't justify saying "therefore, qualia," so I'm not sure what the purpose of the questions is. I'm sure you don't intend to invoke "qualia of the gaps," and presume qualia must figure into any situation in which I, personally, am not able to answer a question yo
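Lance's robot thought experiment can be sketched as a short program (a hypothetical illustration only; the function names, thresholds, and report strings are invented, not from the thread). The sketch shows the functional point: the program distinguishes readings and emits "they are different" reports purely through numeric comparisons, with no further "what-it's-like" property anywhere in the code.

```python
# Hypothetical sketch of the "robot with temperature-detecting mechanisms":
# every report of a difference bottoms out in a detectable, reportable
# comparison over numbers -- nothing else is involved.

def classify(celsius: float) -> str:
    """Map a raw sensor reading onto a coarse category, thermostat-style."""
    if celsius >= 40.0:
        return "hot"
    if celsius <= 5.0:
        return "cold"
    return "neutral"

def report(a: float, b: float) -> str:
    """Report whether two readings 'feel' the same, purely by comparison."""
    ca, cb = classify(a), classify(b)
    if ca == cb:
        return f"both readings register as {ca}"
    return f"I have detected {ca} at location A and {cb} at location B; they are different"

print(report(80.0, -10.0))
```

The direction of explanation matches Lance's claim: the program reports that readings "feel different" exactly when it can detect a difference between them, not the other way around.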
1 · MichaelStJules · 1y
What's the nature of these differences and this information, though? What exactly are you using to distinguish differences? Isn't it experienced? The information isn't itself a set of "symbols" (e.g. words, read or heard), or maybe sometimes it is, but those symbols aren't then made up of further symbols. Things don't feel hot or cold to you because there are different symbols assigned to them that you read off or hear, or to the extent that they are, you're experiencing those symbols as being read or heard, and that experience is not further composed of symbols. I might just be confused here. I was thinking that the illusion of ineffability, "seemingness", could be a functional property, and that what you're using to distinguish experiences are parts of these illusions. Maybe that doesn't make sense. I might have been switching back and forth between something like "qualia of the gaps" and a more principled argument, but I'll try to explain the more principled one clearly here: For each of the functional properties you've pointed out so far, I would say they "feel like something". You could keep naming things that "feel like something" (desires, attitudes, distinguishing, labelling or categorizing), and then explaining those further in terms of other things that "feel like something", and so on. Of course, presumably some functional properties don't feel like anything, but to the extent that they don't, I'd claim you're not aware of them, since everything you're aware of feels like something. If you keep explaining further, eventually you have to hit an explanation that can't be further explained even in principle by further facts you're conscious of (eventually the reason is unconscious, since you're only conscious of finitely many things at any moment). I can't imagine what this final conscious explanation could be like if it doesn't involve something like qualia, something just seeming some way. So, it's not about there being gaps in any particular explanat
1 · Lance Bush · 1y
I don’t know the answer to these questions. I’m not sure the questions are sufficiently well-specified to be answerable, but I suspect if you rephrased them or we worked towards getting me to understand the questions, I’d just say “I don’t know.” But my not knowing how to answer a question does not give me any more insight into what you mean when you refer to qualia, or what it means to say that things “feel like something.” I don’t think it means anything to say things “feel like something.” Every conversation I’ve had about this (and I’ve had a lot of them) goes in circles: what are qualia? How things feel. What does that mean? It’s just “what it’s like” to experience them. What does that mean? They just are a certain way, and so on. This is just an endless circle of obscure jargon and self-referential terms, all mutually interdefining one another. I don’t notice or experience any sense of a gap. I don’t know what gap others are referring to. It sounds like people seem to think there is some characteristic or property their experiences have that can’t be explained. But this seems to me like it could be a kind of inferential error, the way people may have once insisted that there’s something intrinsic about living things that distinguishes from nonliving things, and living things just couldn’t be composed of conventional matter arranged in certain ways, that they just obviously had something else, some je ne sais quoi. I suspect if I found myself feeling like there was some kind of inexplicable essence, or je ne sais quoi to some phenomena, I’d be more inclined to think I was confused than that there really was je ne sais quoiness. I’m not surprised philosophers go in for thinking there are qualia, but I’m surprised that people in the lesswrong community do. Why not think “I’m confused and probably wrong” as a first pass? Why are many people so confident that there is, what as far as I can tell, amounts to something that may be fundamentally incomprehensible, ev
1 · main_gi · 1y
Hi, I was doing research on consciousness-related discussions, blah blah blah, 3 months old, would just like to reply to a few things you mentioned. I know for certain that consciousness and qualia exist. I used to 'fall for' arguments that defined consciousness/qualia/free will as delusions or illusions because they were unobservable. Then, years later, I finally understood that I had some doublethink, and that these words actually were referring to something very simple and clear with my internal experience. I believed that the words were "meaningless" philosophy/morality words - for me, the lack of understanding WAS the 'gap' and they were referring to simple concepts all along. The confusion of 'defining' these words even within philosophy creates lots of synonyms and jargon, though. I have gotten my definitions from the simplicity of what the concepts refer to, so I am almost certain I have not invented new complicated ways to refer to the concepts (as that would make communicating with others unnecessarily difficult and subjective). These words refer to something that does indeed seem to be circular, because they all try to refer to something beyond the physical. I believe the people trying to define these words as something that relates to only physical things are the ones confused. There is nothing confusing about what the concept is that the words are trying to communicate, but it's impossible to get across because they are trying to describe something that can't be replicated. I'm not sure if you're supporting/against this idea, but I know of consciousness as the sum of all of someone's metaphysical experiences. Someone could have more or less amounts of senses/abilities, but it is metaphorical talk to say someone is "less conscious" because they are blind and deaf. The relevancy of a metaphysical consciousness doesn't come from philosophical mass mistakenness and navelgazing. It's because it actually exists (but again, it's individual, so I am never
1 · Lance Bush · 10mo
No. Consider religion and belief in the supernatural. Due to the existence of pareidolia and other psychological phenomena, people may exhibit a shared set of psychological mechanisms that cause them to mistakenly infer the presence of nonphysical or supernatural entities where there are none. While I believe culture and experience play a significant role in shaping the spread and persistence of supernatural beliefs, such beliefs are built on the foundations of psychological systems people share in common. Even if culture and learning were wiped out, due to the nature of human psychology it is likely that such mistakes would emerge yet again. People would once again see faces in the clouds and think that there's someone up there. So too, I suspect, people would fall into the same phenomenological quicksand with respect to many of the problems in philosophy. Even if we stopped teaching philosophy and all discussion of qualia vanished, I would not be surprised to find the notion emerge once again. People are not good at making inferences about what the world is like based on their phenomenology.

I mean no disrespect, but your account sounds far more like the testimony of a religious convert than a robust philosophical argument for the existence of qualia. Take this blunt remark: I've spent a lot of time discussing religion with theists, and one could readily swap out "consciousness and qualia" for "Jesus" or "God": "I know for certain that [God] exist[s]."

I don't know for certain that qualia don't exist. I don't know for certain that God doesn't exist. I don't generally make a point of telling others that I know something "for certain," and if I did, I think I would appreciate it if someone else suggested to me, hopefully kindly, that perhaps my declaration that I know something for certain serves more to convince myself than to convince others.

I take the hallmark of a good idea to be its utility. The notion of qualia has no value. On the contrary, I see it as a pr
2 · main_gi · 10mo
Hey, glad you saw my post and all that. Yes, I know about religion and people having unexplainable supernatural experiences. I don't have anything like that, and I think people who daydreamed up a supernatural experience shouldn't have literal certainty, just high confidence. (you'd also expect some high inconsistency in people who recount supernatural events. which unfortunately is probably true for qualia currently too, due to similar levels of how society spreads beliefs) There is irony in using 'convert' when I was unconverted from believing these things by philosophical confusion, and then later untangled myself. Yes, you could go swap out any 'certainty' claim with any other words and mock the result. Sure, I guess no one can say 'certain' about anything. "I think I would appreciate if someone else suggested to me, hopefully kindly, that perhaps my declaration that I know something for certain serves more to convince myself than to convince others." My use of certainty is about honestly communicating strength of belief etc., not being hyperbolic or exaggerating. Yes I understand that many people exaggerate and lie about 'certain' things all the time so I trust other people's "for certain" claims less. It doesn't mean I should then reduce my own quality of claims to try to cater to the average, that makes no sense. (like, if I said it wasn't certain, wouldn't that be room for you to claim it's a delusion anyway?) Like, the nature of consciousness/qualia is that someone who's conscious/has qualia is never "uncertain" they are conscious (unlike with free will where there isn't that level of certainty). I think I mentioned it before but it seems perfectly rational if someone who doesn't have qualia is confused by the whole thing. A "robust philosophical argument" isn't possible, only some statistical one. (the same way that, if you didn't understand some music's appeal while a majority of other people did, the response to try to convince you could never be a ro
2 · TAG · 10mo
The idea that everything must be useful to explain something else doesn't work unless you have a core of things that need explaining, but are not themselves explanatory posits: basic facts, sometimes called phenomena. So qualia don't have to sit in the category of things-that-do-explaining, because there is another category of things-that-need-explaining. "Phenomena" (literally meaning appearances) is a near synonym for "qualia".

And people aren't good at making inferences from their qualia. People generally and incorrectly assume that colours are objective properties (hence the consternation caused, amongst some, by the dress illusion [https://www.google.com/url?sa=t&source=web&rct=j&url=https://en.m.wikipedia.org/wiki/The_dress&ved=2ahUKEwjm4veHyfb2AhUHa8AKHTggC28QFnoECEoQAQ&usg=AOvVaw0SgcdqCuX4jwgF-9Wwn5vA]). That's called naive realism, and it's scientifically wrong. According to science, our senses are not an open window on the world that portrays it exactly as it is. Instead, the sensory centres of our brains are connected to the outside world by a complex causal chain, during which information, already limited by our sensory modalities, is filtered and reprocessed in various ways. So scientific accounts of perception require there to be a way-we-perceive-things: quite possibly, an individual one. Which might as well be called "qualia" as anything else. (Of course, such a scientific quale isn't immaterial by definition. Despite what people keep saying, qualia aren't defined as immaterial.)

I wouldn't expect a theory of colour qualia to re-emerge out of nowhere, because naive realism about colour is so pervasive. On the other hand, no one is naively realistic about tastes, smells, etc. Everyone knows that tastes vary.
2 · Raemon · 1y
(I haven't caught up on the entire thread, apologies if this is a repeat) Assuming the "qualia is a misguided pseudoconcept" is true, do you have a sense of why people think that it's real? i.e. taking the evidence of "Somehow, people end up saying sentences about how they have a sense of what it is like to perceive things. Why is that? What process would generate people saying words like that?" (This is not meant to be a gotcha, it just seems like a good question to ask)
Lance Bush (1y):
No worries, it's not a gotcha at all, and I already have some thoughts about this. I was more interested in this topic back about seven or eight years ago, when I was actually studying it. I moved on to psychology and metaethics, and haven't been actively reading about this stuff since about 2014. I'm not sure it'd be ideal to try to dredge all that up, but I can roughly point towards something like Robbins and Jack (2006) as an example of the kind of research I'd employ to develop a type of debunking explanation for qualia intuitions. I am not necessarily claiming their specific account is correct, or rigorous, or sufficient all on its own, but it points to the kind of work cognitive scientists and philosophers could do that is at least in the ballpark. Roughly, they attempt to offer an empirical explanation for the persistence of the explanatory gap (the problem of accounting for consciousness by appeal to physical, or at least nonconscious, phenomena). Its persistence could be due to quirks in the way human cognition works. If so, it may be difficult to dispel certain kinds of introspective illusions. Roughly, suppose we have multiple, distinct "mapping systems" that each independently operate to populate their own maps of the territory. Each of these systems evolved and currently functions to facilitate adaptive behavior. However, we may discover that when we go to formulate comprehensive and rigorous theories about how the world is, these maps seem to provide us with conflicting or confusing information. Suppose one of these mapping systems was a "physical stuff" map. It populates our world with objects, and we have the overwhelming impression that there is "physical stuff" out there, that we can detect using our senses. But suppose also we have an "important agents that I need to treat well" system, that detects and highlights certain agents within the world for whom it would be important to treat appropriately, a kind of "VIP agency mapping system" that
Lance Bush (1y):
I forgot to add a reference to the Robbins and Jack citation above. Here it is: Robbins, P., & Jack, A. I. (2006). The phenomenal stance. Philosophical Studies, 127(1), 59-85.
Lance Bush (1y):
I'm not sure how to answer the first question. I'm sure my introspection revealed all manner of things over the course of years, and I'm also not sure what level of specificity you are going for. I don't want to evade actually reporting on the contents of my mental states, so perhaps a more specific question would help me form a useful response. I may very well not have even the illusion of phenomenal consciousness, but I'm not sure I am alone in lacking it. While it remains an open empirical question, and I can’t vouch for the methodological rigor of any particular study, there is some empirical research on whether or not nonphilosophers are inclined towards thinking there is a hard problem of consciousness: https://www.ingentaconnect.com/content/imp/jcs/2021/00000028/f0020003/art00002 It may be that notions of qualia, and the kinds of views that predominate among academic philosophers, are outliers that don’t represent how other people think about these issues, if they think about them at all.
Jemist (1y):
You present an excellently-written and interesting case here. I agree with the point that self-modelling systems can think in certain ways which are unique and special and chickens can't do that. One reason I identify consciousness with having qualia is that Eliezer specifically does that in the twitter thread. The other is that qualia is generally less ambiguous than terms like consciousness and self-awareness and sentience. The disadvantage is that the concept of qualia is something which is very difficult (and beyond my explaining capabilities) to explain to people who don't know what it means. I choose to take this tradeoff because I find that I, personally, get much more out of discussions about specifically qualia, than any of the related words. Perhaps I'm not taking seriously enough the idea that illusionism will explain why I feel like I'm conscious and not explain why I am conscious. I also agree that most other existing mainstream views are somewhat poor, but to me this isn't particularly strong positive evidence for Eliezer's views. This is because models of consciousness on the level of detail of Eliezer's are hard to come up with, so there might be many other excellent ones that haven't been found yet. And Eliezer hasn't done (to my knowledge) anything which rules out other arguments on the level of detail of his own. Basically I think that the reason the best argument we see is Eliezer's is less along the lines of "this is the only computational argument that could be made for consciousness" and more along the lines of "computational arguments for consciousness are really difficult and this is the first one anyone has found".
Lance Bush (1y):
Yudkowsky specifically using the term is a good reason. Thanks for pointing that out, and now I feel a little silly for asking. He says, "I mean qualia, yes." You can't get more blunt than that. While I agree that qualia is less ambiguous than other terms, I am still not sure it is sufficiently unambiguous. I don’t know what you mean by the term, for instance. Generally, though, I would say that I think consciousness exists, but that qualia do not exist. I think illusionism does offer an account of consciousness; it’s just that consciousness turns out not to be what some people thought that it was. Personally, I don’t have and apparently have never had qualia intuitions, and thus never struggled with accepting Dennett’s views. This might be unusual, but the only view I ever recall holding on the matter was something like Dennett’s. His views immediately resonated with me and I adopted them the moment I heard them, with something similar to a “wow, this is obviously how it is!” response, and bewilderment that anyone could think otherwise. I’m glad we agree most alternatives are poor. I do happen to agree that this isn’t especially good evidence against the plausibility of some compelling alternative to illusionism emerging. I definitely think that’s a very real possibility. But I do not think it is going to come out of the intuition-mongering methodology many philosophers rely on. I also agree that this is probably due to the difficulty of coming up with alternative models. Seems like we’re largely in agreement here, in that case.
MichaelStJules (1y):
How do you imagine consciousness would work in the moment for humans without inner/internal monologues (and with aphantasia, unable to visualize; some people can do neither)? And in general, for experiences that we don't reflect on using language in the moment, or at most simple expressive language, like "Ow!"?
Lance Bush (1y):
The lack of an internal monologue is a distressing question to me. I run a constant inner monologue, and can’t imagine thinking differently. There may be some sense in which people who lack an inner monologue lack certain features of consciousness that others who do have one possess. Part of the issue here is to avoid thinking of consciousness as a discrete capacity one either has or doesn’t have, or even as existing on a continuum, such that one could have “more” or “less” of it. Instead, I think of “consciousness” as a term we use to describe a set of both qualitatively and quantitatively distinct capacities. It’d be a bit like talking about “cooking skills.” If someone doesn’t know how to use a knife, or start a fire, do they “lack cooking skills”? Well, they lack a particular cooking skill, but there is no single answer as to whether they “lack cooking skills,” because cooking skills break down into numerous subskills, each of which may be characterized by its own continuum along which a person could be better or worse. Maybe a person doesn’t know how to start a fire, but they can bake amazing cakes if you give them an oven and the right ingredients. This is why I am wary of saying that animals are “not conscious” and would instead say that whatever their “consciousness” is like, it would be very different from ours, if they lack a self-model and if a self-model is as central to our experiences as I think it is. As for someone who lacks an inner monologue, I am not sure what to make of these cases. And I’m not sure whether I’d want to say someone without an inner monologue “isn’t conscious,” as that seems a bit strange. Rather, I think I’d say that they may lack a feature of the kinds of consciousness most of us have that strikes me, at first glance, as fairly central and important. But perhaps it isn’t. I’d have to think more about that, to consider whether an enculturated construction of a self-model requires an inner monologue. I do think i

Some other less theory-heavy approaches to consciousness I find promising:

  1. What do unconscious processes in humans tell us about sentience?, and then see Rethink Priorities' table with evidence for various indicators for different species, with a column for unconscious processing in humans. (Disclaimer: I work at Rethink Priorities.)
  2. The facilitation hypothesis: "Phenomenally conscious perception of a stimulus facilitates, relative to unconscious perception, a cluster of cognitive abilities in relation to that stimulus." This is compatible with most popular
...
  1. Does GPT-3 have any internal states/processes that look and act like its own emotions, desires or motivations? These words are in its vocabulary, but so are they in dictionaries. How could we interpret something as aversive to GPT-3? For example (although this isn't the only way it could have such a state), is there an internal state that correlates well with the reward it would get during training?
    1. In mammals, activation of the ACC seems necessary for the affective component of pain, and this of course contributes to aversive behaviour. (Also, evolution ha
...
Logan Zoellner (1y):
It's easy to show that GPT-3 has internal states that it describes as "painful" and tries to avoid. Consider the following dialogue (bold text is mine). And, just so Roko's Basilisk doesn't come for me if AI ever takes over the world
FeepingCreature (1y):
Counterexample: Oh God! I am in horrible pain right now! For no reason, my body feels like it's on fire! Every single part of my body feels like it's burning up! I'm being burned alive! Help! Please make it stop! Help me!! Okay, so that thing that I just said was a lie. I was not actually in pain (I can confirm this introspectively); instead, I merely pretended to be in pain. Sir Ian McKellen has an instructive video. [https://www.dailymotion.com/video/x2oe0ag] The Turing test works for many things, but I don't think it works for checking for the existence of internal phenomenological states. If you asked me what GPT-3 was doing, I would expect it to be closer to "acting" than "experiencing." (Why? Because the experience of pain is a means to an end, and the end is behavioral aversion. GPT-3 has no behavior for aversion to act on. If anything, I'd expect GPT-3 to "experience pain" during training - but of course, it's not aware while its weights are being updated. I think that, at the least, no offline-trained system can experience pain at all.)
Logan Zoellner (1y):
I think we both agree that GPT-3 does not feel pain. However, under a particular version of pan-psychism ("pain is any internal state which a system attempts to avoid"), GPT-3 would obviously qualify.
FeepingCreature (1y):
Sure, but that definition is so generic and applies to so many things that are obviously not like human pain (landslides?) that it lacks all moral compulsion.
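That over-breadth is easy to make concrete in a few lines. The sketch below is purely illustrative (the quadratic loss, `step`, and `lr` are all invented for the example): under the definition "pain is any internal state which a system attempts to avoid", even plain gradient descent would seem to qualify, since it systematically moves itself away from high-loss internal states.

```python
# A trivial system that "attempts to avoid" an internal state (high loss).
# Under the definition "pain is any internal state which a system attempts
# to avoid", this loop would qualify, which suggests the definition is far
# too generic to carry moral weight.

def step(x, lr=0.1):
    """One gradient-descent step on loss(x) = x**2 (gradient is 2*x)."""
    return x - lr * 2 * x

x = 5.0
for _ in range(100):
    x = step(x)

# The "system" has driven itself away from the high-loss state it "avoids".
print(abs(x) < 1e-3)  # -> True
```

Nothing here is plausibly in pain, which is the point: a definition satisfied by six lines of arithmetic (or a landslide settling into a valley) does no moral work.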

Where I disagree is that we 100% need a separate "information processing" and "inner listener" module.

I didn't understand this part. Do you mean that EY thinks we need these two modules and you don't think that, or the other way around?

(I think this is a generic problem that arises pretty much whenever someone uses this kind of phrasing, saying "Where I disagree is that X". I can't tell if they're saying they believe X and the person they disagree with believes not-X, or the other way around. Sometimes I can tell from context. This time I couldn't.)

According to Yudkowsky, is the self-model supposed to be fully recursive, so that the model feeds back into itself, rather than just having a finite stack of separate models each modelling the previous one (like here and here, although FWIW, I'd guess those authors are wrong that their theory rules out cephalopods)? If so, why does this matter, if we only ever recurse to bounded depth during a given conscious experience?

If not, then what does self-modelling actually accomplish? If modelling internal states is supposedly necessary for consciousness, how and...

Humans can distinguish stimuli they are aware of from ones they are not aware of. Below-awareness-level stimuli are not ethically significant to humans - if someone pricks you with a needle and you don't feel pain, then you don't feel pain and don't care much. Therefore only systems that can implement awareness detectors are ethically significant.
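One way to read "systems that can implement awareness detectors" is as a threshold gate between processing and reportability. The toy sketch below is entirely illustrative (the class, threshold, and labels are all made up): sub-threshold stimuli still affect the system's state, but never enter the reportable store, so the system itself can distinguish what it was and wasn't aware of.

```python
# Toy "awareness detector": stimuli below a threshold are still processed
# (they change internal state) but never reach the reportable store, so the
# system cannot report on them. A sketch, not a serious model of awareness.

class ToyAgent:
    AWARENESS_THRESHOLD = 0.5

    def __init__(self):
        self.background_load = 0.0   # affected by all stimuli, felt or not
        self.reportable = []         # only above-threshold stimuli land here

    def sense(self, label, intensity):
        self.background_load += intensity          # processing happens regardless
        if intensity >= self.AWARENESS_THRESHOLD:  # the "awareness" gate
            self.reportable.append(label)

    def was_aware_of(self, label):
        return label in self.reportable

agent = ToyAgent()
agent.sense("needle prick", 0.1)   # below threshold: processed, not felt
agent.sense("burn", 0.9)           # above threshold: felt and reportable

print(agent.was_aware_of("needle prick"))  # -> False
print(agent.was_aware_of("burn"))          # -> True
```

On the view quoted above, only the "burn" would be an ethically significant event for this system, because only it crossed the detector.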

My current model of consciousness is that it is the process of encoding cognitive programs (action) or belief maps (perception). These programs/maps can then be stored in long-term memory to be called upon later, or they can be transcoded onto the language centers of the brain to allow them to be replicated in the minds of others via language.

Both of these functions would have a high selective advantage on their own. Those who can better replicate a complex sequence of actions that proved successful in the past (by loading a cognitive program from memory o...

My main objection (or one of my main objections) to the position is that I don't think I'm self-aware to the level of passing something like the mirror test or attributing mental states to myself or others during most of my conscious experiences, so the bar for self-reflection seems set too high. My self-representations may be involved, but not to the point of recognizing my perceptions as "mine", or at least the "me" here is often only a fragment of my self-concept. My perceptions could even be integrated into my fuller self-concept, but without my awaren...

What if consciousness is a staccato frame-rate that seems continuous only because memory is ontologically persistent and the experiential narrative is spatiotemporally consistent – and therefore neurologically predictable?

Or maybe the brain works faster than the frame-rate required for the impression of quotidian conscious identity? That is to say, brains are able to render - at any moment - a convincing selfhood (consciousness complete with sense of its own history) that’s perceptually indistinguishable from an objective continuity of being; but could jus...

I don't know if this is helpful, but I'll just throw in that I'm unusually hesitant to disagree with extremely smart people (a position that seems to be almost universally shunned on LW, see e.g. here and here), and yet I dare to disagree with Eliezer about consciousness. I don't think there is a hidden reason why his take is justified.

My position is that 'consciousness is the result of information processing' is almost certainly not true (which makes the tweet a non-starter), and at the very least, Eliezer has never written anything that extensively ar...

Raemon (1y):
I don't have a strong take on whether his position is true, but I do think a lot of the sequences are laying out background that informs his beliefs.
Rafael Harth (1y):
Does this come down to the thing Scott has described here? [https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=tF9DpnXayLNLP7jj8] If so, I can repeat that I'm a huge fan of the sequences; I agree with almost everything in them, even though I think humans are atoms. On the other hand, it has been years since I've read them (and I had far fewer philosophical thoughts and probably worse reading comprehension than I do now). It's possible that there is more background in there than I recall.
Raemon (1y):
I do think that's a central unifying piece. Relevant pieces include How An Algorithm Feels From Inside, and "Intelligence, Preferences and Morality have to come from somewhere, from non-mysterious things that are fundamentally not intelligence, preferences, morality, etc. You need some way to explain how this comes to be, and there are constraints on what sort of answer makes sense." I think much of the sequences are laying out different confusions people have about this and addressing them.

The key thing to keep in mind is that EY is a physicalist. He doesn't think that there is some special consciousness stuff. Instead, consciousness is just what it feels like to implement an algorithm capable of sophisticated social reasoning. An algorithm is conscious if and only if it is capable of sophisticated social reasoning, and moreover it is conscious only when it applies that reasoning to itself. This is why EY doesn't think that he himself is conscious when dreaming or in a flow state.

Additionally, EY does not t...

  1. The key thing to keep in mind is that EY is a physicalist. He doesn’t think that there is some special consciousness stuff.
  2. Instead, consciousness is just what it feels like to implement an algorithm capable of sophisticated social reasoning.

The theory that consciousness is just what it feels like to be a sophisticated information processor has a number of attractive features, but it is not a physicalist theory in every sense of "physicalist". In particular, physics does not predict that anything feels like anything from the inside, so that would need to be an additional posit.

Relatedly, his theory is in no way a reduction of consciousness to physics (or computation). A reductive explanation of consciousness would allow you to predict specific subjective states from specific brain states (as in Mary's Room); would allow you to reliably construct artificial consciousness; and so on. The "just what it feels like from the inside" theory doesn't do any of that.

Your 1 states that EY's theory is physicalist in the sense of not being substance dualist...and that is true, as far as it goes...but it is far from the only issue, because there are many dualisms and many non-physicalisms.

Signer (1y):
I think you can predict specific subjective states by observing that the same computations result in the same subjective states? I mean, in theory - do you mean that for a theory to be a reduction, it must be practical to predict a specific human's qualia? By that standard we don't have a physical reduction of billiard balls.
TAG (1y):
We do have a reductive explanation of billiard balls, in theory. If we don't have a reductive explanation of billiard balls, we don't have a reductive explanation of anything. Of course, the computations can be impractical, but that's why Mary in Mary's Room is a super scientist.
Adam Shai (1y):
Say you had a system that implemented a sophisticated social reasoning algorithm, and that was actually conscious. Now make a list of literally every sensory input and the behavioral output that the sensory input causes, and write it down in a very (very) long book. This book implements the same exact sophisticated social reasoning algorithm. To think that the book has sentience sounds to me like a statement of magical thinking, not of physicalism.
Logan Zoellner (1y):
I'm pretty sure this is because you're defining "sentience" as some extra-physical property possessed by the algorithm, something which physicalism explicitly rejects. Consciousness isn't something that arises when algorithms compute complex social games. Consciousness is when some algorithm computes complex social games (under a purely physical theory of consciousness such as EY's). To understand how physicalism can talk about metaphysical categories, consider numbers. Some physical systems have the property of being "two of something" as understood by human beings. Two sheep standing in a field, for example. Or two rocks piled on top of one another. There's no magical thing that happens when "two" of something come into existence. They don't suddenly send a glimmer of two-ness off into a pure platonic realm of numbers. They simply are "two", and what makes them "two" is that being "two of something" is a category readily recognized by human beings (and presumably other intelligent beings). Similarly, a physicalist theory of consciousness defines certain physical systems as conscious if they meet certain criteria. Specifically for EY, these criteria are self-recognition and complex social games. It doesn't matter whether they are implemented by a Chinese room or a computer or a bunch of meat. What matters is that they implement a particular algorithm. When confronted with the Chinese-room consciousness, EY might say something like: "I recognize that this system is capable of self reflection and social reasoning in much the same way that I am, therefore I recognize that it is conscious in much the same way as I am."
Isaac Poulton (1y):
If I'm not mistaken, that book is behaviourally equivalent to the original algorithm but is not the same algorithm. From an outside view, they have different computational complexity. There are a number of different ways of defining program equivalence, but equivalence is different from identity. A is equivalent to B doesn't mean A is B. See also: Chinese Room Problem [https://en.m.wikipedia.org/wiki/Chinese_room]
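The equivalence-versus-identity distinction can be sketched in a few lines (the function names and the Fibonacci example are mine, chosen only for illustration): a precomputed lookup table gives the same input/output behaviour as the function that generated it, but it is a different algorithm with a different complexity profile, much like the "very long book" above.

```python
# Two "systems" with identical input/output behaviour but different algorithms.

def fib(n):
    """Compute the n-th Fibonacci number iteratively (O(n) time, O(1) space)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# "The book": every answer precomputed and written down. Lookup is O(1),
# but the table had to be produced by running the real algorithm first.
book = {n: fib(n) for n in range(100)}

def fib_book(n):
    return book[n]

# Behaviourally equivalent on the shared domain...
print(all(fib(n) == fib_book(n) for n in range(100)))  # -> True
# ...yet equivalence is not identity: one computes, the other only recalls.
```

Whether consciousness could survive that substitution (computation replaced by recall) is exactly what the book thought experiment is probing.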
Adam Shai (1y):
I see, but in that case, what is the claim about GPT-3 - that if it had behavioral equivalence to a complicated social being, it would have consciousness?
Isaac Poulton (1y):
I don't agree with Eliezer here. I don't think we have a deep enough understanding of consciousness to make confident predictions about what is and isn't conscious beyond "most humans are probably conscious sometimes". The hypothesis that consciousness is an emergent property of certain algorithms is plausible, but only that. If that turns out to be the case, then whether or not humans, GPT-3, or sufficiently large books are capable of consciousness depends on the details of the requirements of the algorithm.
Jemist (1y):
In that case I'll not use the word consciousness and abstract away to "things which I ascribe moral weight to", (which I think is a fair assumption given the later discussion of eating "BBQ GPT-3 wings" etc.) Eliezer's claim is therefore something along the lines of: "I only care about the suffering of algorithms which implement complex social games and reflect on themselves" or possibly "I only care about the suffering of algorithms which are capable of (and currently doing a form of) self-modelling". I've not seen nearly enough evidence to convince me of this. I don't expect to see a consciousness particle called a qualon. I more expect to see something like: "These particular brain activity patterns which are robustly detectable in an fMRI are extremely low in sleeping people, higher in dreaming people, higher still in awake people and really high in people on LSD and types of zen meditation."
acylhalide (1y):
Not to speak on behalf of EY, but... An assertion like the following one doesn't necessarily need evidence: "I only care about the suffering of algorithms which implement complex social games and reflect on themselves." What you care about is ground truth from your first-person perspective. If I say that I care about this balloon I'm holding not bursting, or my friend not dying, there is a very direct connection between my first-person experience and the words I am saying. I do not need to pattern-match my experience with my friend to an abstract mental object like "algorithms that self-reflect" in order to care about my friend. So maybe (or maybe not) EY has spent a lot of time thinking about the space of possible agents and found that the ones he deeply cares about at a first-person level all have an inner listener. The abstract mental object of "having an inner listener" might come after; the examples of inner listeners and the caring for those beings might come before. Basically, I'd personally probably want to reorient this discussion from one about finding ground truth in the physical world to one about finding ground truth in your own first-person experience about what you care about. "Who is conscious?" isn't a great question to ask when we all know it's a spectrum. But asking this deflects from the real question, which is "what forms of consciousness (or beings, more generally) do I care about?"