Below, I've collected some of my thoughts on consciousness. Topics covered (in the post and/or the comments below) include:

  • To what extent did subjective pain evolve as a social signal?
  • Why did consciousness evolve? What function(s) did it serve?
  • What would the evolutionary precursors of 'full' consciousness look like?
  • What sorts of human values are more or less likely to extend to unconscious things?
  • Is consciousness more like a machine (where there's a sharp cutoff between 'the machine works' and 'the machine doesn't'), or is it more like a basic physical property like mass (where there's a continuum from very small fundamental things that have small amounts of the property, all the way up to big macroscopic objects that have way more of the property)?
  • How should illusionism (the view that in an important sense we aren't conscious, but non-phenomenally 'appear' to be conscious) change our answers to the questions above?

1. Pain signaling

In September 2019, I wrote on my LW shortform:

 

Rolf Degen, summarizing part of Barbara Finlay's "The neuroscience of vision and pain":

Humans may have evolved to experience far greater pain, malaise and suffering than the rest of the animal kingdom, due to their intense sociality giving them a reasonable chance of receiving help.

From the paper:

Several years ago, we proposed the idea that pain, and sickness behaviour had become systematically increased in humans compared with our primate relatives, because human intense sociality allowed that we could ask for help and have a reasonable chance of receiving it. We called this hypothesis ‘the pain of altruism’ [68]. This idea derives from, but is a substantive extension of Wall’s account of the placebo response [43]. Starting from human childbirth as an example (but applying the idea to all kinds of trauma and illness), we hypothesized that labour pains are more painful in humans so that we might get help, an ‘obligatory midwifery’ which most other primates avoid and which improves survival in human childbirth substantially ([67]; see also [69]). Additionally, labour pains do not arise from tissue damage, but rather predict possible tissue damage and a considerable chance of death. Pain and the duration of recovery after trauma are extended, because humans may expect to be provisioned and protected during such periods. The vigour and duration of immune responses after infection, with attendant malaise, are also increased. Noisy expression of pain and malaise, coupled with an unusual responsivity to such requests, was thought to be an adaptation.

We noted that similar effects might have been established in domesticated animals and pets, and addressed issues of ‘honest signalling’ that this kind of petition for help raised. No implication that no other primate ever supplied or asked for help from any other was intended, nor any claim that animals do not feel pain. Rather, animals would experience pain to the degree it was functional, to escape trauma and minimize movement after trauma, insofar as possible.

Finlay's original article on the topic: "The pain of altruism".

 

[Epistemic status: Thinking out loud]

 

If the evolutionary logic here is right, I'd naively also expect non-human animals to suffer more to the extent they're (a) more social, and (b) better at communicating specific, achievable needs and desires.

There are reasons the logic might not generalize, though. Humans have fine-grained language that lets us express very complicated propositions about our internal states. That puts a lot of pressure on individual humans to have a totally ironclad, consistent "story" they can express to others. I'd expect there to be a lot more evolutionary pressure to actually experience suffering, since a human will be better at spotting holes in the narratives of a human who fakes it (compared to, e.g., a bonobo trying to detect whether another bonobo is really in that much pain).

It seems like there should be an arms race across many social species to give increasingly costly signals of distress, up until the costs outweigh the amount of help they can hope to get. But if you don't have the language to actually express concrete propositions like "Bob took care of me the last time I got sick, six months ago, and he can attest that I had a hard time walking that time too", then those costly signals might be mostly or entirely things like "shriek louder in response to percept X", rather than things like "internally represent a hard-to-endure pain-state so I can more convincingly stick to a verbal narrative going forward about how hard-to-endure this was".


2. To what extent is suffering conditional or complex?

In July 2020, I wrote on my shortform:

 

[Epistemic status: Piecemeal wild speculation; not the kind of reasoning you should gamble the future on.]

 

Some things that make me think suffering (or 'pain-style suffering' specifically) might be surprisingly neurologically conditional and/or complex, and therefore more likely to be rare in non-human animals (and in subsystems of human brains, in AGI subsystems that aren't highly optimized to function as high-fidelity models of humans, etc.):

 

1. Degen and Finlay's social account of suffering above.

 

2. Which things we suffer from seems to depend heavily on mental narratives and mindset. See, e.g., Julia Galef's Reflections on Pain, from the Burn Unit.

Pain management is one of the main things hypnosis appears to be useful for. Ability to cognitively regulate suffering is also one of the main claims of meditators, and seems related to existential psychotherapy's claim that narratives are more important for well-being than material circumstances.

Even if suffering isn't highly social (pace Degen and Finlay), its dependence on higher cognition suggests that it is much more complex and conditional than it might appear on initial introspection, which on its own reduces the probability of its showing up elsewhere: complex things are relatively unlikely a priori, are especially hard to evolve, and demand especially strong selection pressure if they're to evolve and if they're to be maintained.

(Note that suffering introspectively feels relatively basic, simple, and out of our control, even though it's not. Note also that what things introspectively feel like is itself under selection pressure. If suffering felt complicated, derived, and dependent on our choices, then the whole suite of social thoughts and emotions related to deception and manipulation would be much more salient, both to sufferers and to people trying to evaluate others' displays of suffering. This would muddle and complicate attempts by sufferers to consistently socially signal that their distress is important and real.)

 

3. When humans experience large sudden neurological changes and are able to remember and report on them, their later reports generally suggest positive states more often than negative ones. This seems true of near-death experiences and drug states, though the case of drugs is obviously filtered: the more pleasant and/or reinforcing drugs will generally be the ones that get used more.

Sometimes people report remembering that a state change was scary or disorienting. But they rarely report feeling agonizing pain, and they often either endorse having had the experience (with the benefit of hindsight), or report having enjoyed it at the time, or both.

This suggests that humans' capacity for suffering (especially more 'pain-like' suffering, as opposed to fear or anxiety) may be fragile and complex. Many different ways of disrupting brain function seem to prevent suffering, suggesting suffering is the more difficult and conjunctive state for a brain to get itself into; you need more of the brain's machinery to be in working order to pull it off.

 

4. Similarly, I frequently hear about dreams that are scary or disorienting, but I don't think I've ever heard of someone recalling having experienced severe pain from a dream, even when they remember dreaming that they were being physically damaged.

This may be for reasons of selection: if dreams were more unpleasant, people would be less inclined to go to sleep and their health would suffer. But it's interesting that scary dreams are nonetheless common. This again seems to point toward 'states that are further from the typical human state are much more likely to be capable of things like fear or distress, than to be capable of suffering-laden physical agony.'


3. Consciousness and suffering

Eliezer recently criticized "people who worry that chickens are sentient and suffering" but "don't also worry that GPT-3 is sentient and maybe suffering". (He thinks chickens and GPT-3 are both non-sentient.)

Jemist responded on LessWrong, and Nate Soares wrote a reply to Jemist that I like:

Instrumental status: off-the-cuff reply, out of a wish that more people in this community understood what the sequences have to say about how to do philosophy correctly (according to me).

 

> EY's position seems to be that self-modelling is both necessary and sufficient for consciousness.

 

That is not how it seems to me. My read of his position is more like: "Don't start by asking 'what is consciousness' or 'what are qualia'; start by asking 'what are the cognitive causes of people talking about consciousness and qualia', because while abstractions like 'consciousness' and 'qualia' might turn out to be labels for our own confusions, the words people emit about them are physical observations that won't disappear. Once one has figured out what is going on, they can plausibly rescue the notions of 'qualia' and 'consciousness', though their concepts might look fundamentally different, just as a physicist's concept of 'heat' may differ from that of a layperson. Having done this exercise at least in part, I (Nate's model of Eliezer) assert that consciousness/qualia can be more-or-less rescued, and that there is a long list of things an algorithm has to do to 'be conscious' / 'have qualia' in the rescued sense. The mirror test seems to me like a decent proxy for at least one item on that list (and the presence of one might correlate with a handful of others, especially among animals with similar architectures to ours)."

 

> An ordering of consciousness as reported by humans might be:

> Asleep Human < Awake Human < Human on Psychedelics/Zen Meditation

> I don't know if EY agrees with this.

 

My model of Eliezer says "Insofar as humans do report this, it's a fine observation to write down in your list of 'stuff people say about consciousness', which your completed theory of consciousness should explain. However, it would be an error to take this as much evidence about 'consciousness', because it would be an error to act like 'consciousness' is a coherent concept when one is so confused about it that they cannot describe the cognitive antecedents of human insistence that there's an ineffable redness to red."

 

> But what surprises me the most about EY's position is his confidence in it.

 

My model of Eliezer says "The type of knowledge I claim to have, is knowledge of (at least many components of) a cognitive algorithm that looks to me like it codes for consciousness, in the sense that if you were to execute it then it would claim to have qualia for transparent reasons and for the same reasons that humans do, and to be correct about that claim in the same way that we are. From this epistemic vantage point, I can indeed see clearly that consciousness is not much intertwined with predictive processing, nor with the "binding problem", etc. I have not named the long list of components that I have compiled, and you, who lack such a list, may well not be able to tell what consciousness is or isn't intertwined with. However, you can still perhaps understand what it would feel like to believe you can see (at least a good part of) such an algorithm, and perhaps this will help you understand my confidence. Many things look a lot more certain, and a lot less confusing, once you begin to see how to program them."

 

Some conversations I had on Twitter and Facebook, forking off of Eliezer's tweet (somewhat arbitrarily ordered, and with ~4 small edits to my tweets):

 

Bernardo Subercaseux: I don't understand this take at all. It's clear to me that the best hypothesis for why chickens have reactions to physical damage that are consistent with our models of expressions of suffering, as also do babies, is bc they can suffer in a similar way that I do. Otoh, best hypothesis for why GPT-3 would say "don't kill me" is simply because it's a statistically likely response for a human to have said in a similar context. I think claiming that animals require a certain level of intelligence to experience pain is unfalsifiable...

 

Rob Bensinger: Many organisms with very simple nervous systems, or with no nervous systems at all, change their behavior in response to bodily damage -- albeit not in the specific ways that chickens do. So there must be some more specific behavior you have in mind here.

As for GPT-3: if you trained an AI to perfectly imitate all human behaviors, then plausibly it would contain suffering subsystems. This is because real humans suffer, and a good way to predict a system (including a human brain) is to build a detailed emulation of it.

GPT-3 isn't a perfect emulator of a human (and I don't think it's sentient), but there's certainly a nontrivial question of how we can know it's not sentient, and how sophisticated a human-imitator could get before we'd start wanting to assign non-tiny probability to sentience.

 

Bernardo Subercaseux: I don't think it's possible to perfectly imitate all human behavior for anything non-human, in the same fashion as we cannot perfectly imitate all chicken behaviors, or plant behaviors... I think being embodied as a full X is a requisite to perfectly imitate the behavior of X

 

Rob Bensinger: If an emulated human brain won't act human-like unless it sees trees and grass, outputs motor actions like walking (with the associated stable sensory feedback), etc., then you can place the emulated brain in a virtual environment and get your predictions about humans that way.

 

Bernardo Subercaseux: My worry is that this converges to the "virtual environment" having to be exactly the real world with real trees and real grass and a real brain made of the same as ours and connected to as many things as ours is connected...

 

Rob Bensinger: Physics is local, so you don't have to simulate the entire universe to accurately represent part of it.

E.g., suppose you want to emulate how a human would respond to being slipped notes inside a locked white room. You might have to simulate the room in some sensory detail, but you wouldn't need to simulate anything outside the room in any detail. You can just decide what you want the note to say, and then simulate a realistic-feeling, realistic-looking note coming into existence in the white room's mail chute.

 

Bernardo Subercaseux: 

[Physics is local, so you don't have to simulate the entire universe to accurately represent part of it.

E.g., suppose you want to emulate how a human would respond to being slipped notes inside a locked white room. You might have to simulate the room in some sensory detail...]

i) not sure about the first considering quantum, but you might know more than I do.

ii) but I'm not saying the entire universe, just an actual human body.

In any case, I still think that a definite answer relies on understanding the physical processes of consciousness, and yet it seems to me that no AI at the moment is close to posing a serious challenge in terms of whether it has the ability to suffer. This is in opposition to animals like pigs or chickens...

 

Rob Bensinger: QM allows for some nonlocal-looking phenomena in a sense, but it still has a speed-of-light limit.

I don't understand what you mean by 'actual human body' or 'embodied'. What specific properties of human bodies are important for human cognition, and expensive to simulate?

I think this is a reasonable POV: 'Humans are related to chickens, so maybe chickens have minds sort of like a human's and suffer in the situations humans would suffer in. GPT-3 isn't related to us, so we should worry less about GPT-3, though both cases are worth worrying about.'

I don't think 'There's no reason to worry whatsoever about whether GPT-3 suffers, whereas there are major reasons to worry animals might suffer' is a reasonable POV, because I haven't seen a high-confidence model of consciousness grounding that level of confidence in all of that.

 

Bernardo Subercaseux: doesn't your tweet hold the same if you replace "GPT-3" by "old Casio calculator", or "rock"?

 

Rob Bensinger: I'm modeling consciousness as 'a complicated cognitive something-we-don't-understand, which is connected enough to human verbal reporting that we can verbally report on it in great detail'.

GPT-3 and chickens have a huge number of (substantially non-overlapping) cognitive skills, very unlike a calculator or a rock. GPT-3 is more human-like in some (but not all) respects. Chickens, unlike GPT-3, are related to humans. I think these facts collectively imply uncertainty about whether chickens and/or GPT-3 are conscious, accompanied by basically no uncertainty about whether rocks or calculators are conscious.

( Also, I agree that I was being imprecise in my earlier statement, so thanks for calling me out on that. 🙂 )

 

Bernardo Subercaseux: the relevance of verbal reporting is an entire conversation on its own IMO haha! Thanks for the thought-provoking conversation :) I think we agree on the core, and your comments made me appreciate the complexity of the question at hand!

 

Eli Tyre: I mean, one pretty straightforward thing to say:

IF chickens are sentient, then the chickens in factory farms are DEFINITELY in a lot of pain.

IF GPT-3 is sentient, I have no strong reason to think that it is or isn't in pain.

 

Rob Bensinger: Chickens in factory farms definitely undergo a lot of bodily damage, illness, etc. If there are sentient processes in those chickens' brains, then it seems like further arguments are needed to establish that the damage is registered by the sentient processes.

Then another argument for 'the damage is registered as suffering', and another (if we want to establish that such lives are net-negative) for 'the overall suffering outweighs everything else'. This seems to require a model of what sentience is / how it works / what it's for.

It might be that the explanation for all this is simple -- that you get all this for free by positing a simple mechanism. So I'm not decomposing this to argue the prior must be low. I'm just pointing at what has to be established at all, and that it isn't a freebie.

 

Eli Tyre: We have lots of intimate experience of how, for humans, damage and nociception leads to pain experience.

And the mappings make straightforward evolutionary sense. Once you're over the hump of positing conscious experience at all, it makes sense that damage is experienced as negative conscious experience.

Conditioning on chickens being conscious at all, it seems like the prior is that their [conscious] experience of [nociception] follows basically the same pattern as a human's.

It would be really surprising to me if humans were conscious and chickens were conscious, but humans were conscious of pain, while chickens weren't?!?

That would seem to imply that conscious experience of pain is adaptive for humans but not for chickens?

Like, assuming that consciousness is that old on the phylogenetic tree, why is conscious experience of pain a separate thing that comes later?

I would expect pain to be one of the first things that organisms evolved to be conscious of.

 

Rob Bensinger: I think this is a plausible argument, and I'd probably bet in that direction. Not with much confidence, though, 'cause it depends a lot on what the function of 'consciousness' is over and above things like 'detecting damage to the body' (which clearly doesn't entail 'conscious').

My objection was to "IF chickens are sentient, then the chickens in factory farms are DEFINITELY in a lot of pain."

I have no objection to 'humans and chickens are related, so we can make a plausible guess that if they're conscious, they suffer in situations where we'd suffer.'

Example: maybe consciousness evolved as a cog in some weird specific complicated function like 'remembering the smell of your kin' or, heck, 'regulating body temperature'. Then later developed things like globalness / binding / verbal reportability, etc.

My sense is there's a crux here like

Eli: 'Conscious' is a pretty simple, all-or-nothing thing that works the same everywhere. If a species is conscious, then we can get a good first-approximation picture by imagining that we're inside that organism's skull, piloting its body.

Me: 'Conscious' is incredibly complicated and weird. We have no idea how to build it. It seems like a huge mechanism hooked up to tons of things in human brains. Simpler versions of it might have a totally different function, be missing big parts, and work completely differently.

I might still bet in the same direction as you, because I know so little about which ways chicken consciousness would differ from my consciousness, so I'm forced to not make many big directional updates away from human anchors. But I expect way more unrecognizable-weirdness.

More specifically, re "why is conscious experience of pain a separate thing that comes later": https://www.lesswrong.com/posts/HXyGXq9YmKdjqPseW/rob-b-s-shortform-feed?commentId=mZw9Jaxa3c3xrTSCY#mZw9Jaxa3c3xrTSCY [section 2 above] provides some reasons to think pain-suffering is relatively conditional, complex, social, high-level, etc. in humans.

And noticing + learning from body damage seems like a very simple function that we already understand how to build. If a poorly-understood thing like consc is going to show up in surprising places, it would probably be more associated with functions that are less straightforward.

E.g., it would be shocking if consc evolved to help organisms notice body damage, or learn to avoid such damage.

It would be less shocking if a weird, esoteric aspect of intelligence, e.g. 'aggregating neural signals in a specific efficiency-improving way', caused consciousness.

But in that case we should be less confident, assuming chickens are conscious, that their consciousness is 'hooked up' to trivial-to-implement stuff like 'learning to avoid bodily damage at all'.

 

silencenbetween: 

[This seems to require a model of what sentience is.]

I think I basically agree with you, that there is a further question of pain=suffering? And that that would ideally be established.

But I feel unsure of this claim. Like, I have guesses about consciousness and I have priors and intuitions about it, and that leads me to feeling fairly confident that chickens experience pain.

But to my mind, we can never unequivocally establish consciousness, it’s always going to be a little bit of guesswork. There’s always a further question of first hand experience.

And in that world, models of consciousness refine our hunches of it and give us a better shared understanding, but they never conclusively tell us anything.

I think this is a stronger claim than just using Bayesian reasoning. Like, I don’t think you can have absolute certainty about anything…

but I also think consciousness inhabits a more precarious place. I’m just making the hard problem arguments, I guess, but I think they’re legit.

I don’t think the hard problem implies that you are in complete uncertainty about consciousness. I do think it implies something trickier about it relative to other phenomena.

Which to me implies that a model of the type you’re imagining wouldn’t conclusively solve the problem anymore than models of the sort “we’re close to each other evolutionarily” do.

I think models of it can help refine our guesses about it, give us clues, I just don’t see any particular model being the final arbiter of what counts as conscious.

And in that world I want to put more weight on the types of arguments that Eli is making. So, I guess my claim is that these lines of evidence should be about as compelling as other arguments.

 

Rob Bensinger: I think we'll have a fully satisfying solution to the hard problem someday, though I'm pretty sure it will have to route through illusionism -- not all parts of the phenomenon can be saved, even though that sounds paradoxical or crazy.

If we can't solve the problem, though (the phil literature calls this view 'mysterianism'), then I don't think that's a good reason to be more confident about which organisms are conscious, or to put more weight on our gut hunches.

I endorse the claim that the hard problem is legit (and hard), btw, and that it makes consciousness trickier to think about in some ways.

 

David Manheim: 

[It might be that the explanation for all this is simple -- that you get all this for free by positing a simple mechanism. So I'm not decomposing this to argue the prior must be low. I'm just pointing at what has to be established at all, and that it isn't a freebie.]

Agreed - but my strong belief on how any sentience / qualia would need to work is that it would be beneficial to evolutionary fitness, meaning pain would need to be experienced as a (fairly strong) negative for it to exist.

Clearly, the same argument doesn't apply to GPT-3.

 

Rob Bensinger: I'm not sure what you mean by 'pain', so I'm not sure what scenarios you're denying (or why).

Are you denying the scenario: part of an organism's brain is conscious (but not tracking or learning from bodily damage), and another (unconscious) part is tracking and learning from bodily damage?

Are you denying the scenario: a brain's conscious states are changing in response to bodily damage, in ways that help the organism better avoid bodily damage, and none of these conscious changes feel 'suffering-ish' / 'unpleasant' to the organism?

I'm not asserting that these are all definitely plausible, or even possible -- I don't understand where consciousness comes from, so I don't know which of these things are independent. But you seem to be saying that some of these things aren't independent, and I'm not sure why.

 

David Manheim: Clearly something was lost here - I'm saying that the claim that there is a disconnect between conscious sensation and tracking bodily damage is a difficult one to believe. And if there is any such connection, the reason that physical damage is negative is that it's beneficial.

 

Rob Bensinger: I don't find it difficult to believe that a brain could have a conscious part doing something specific (eg, storing what places look like), and a separate unconscious part doing things like 'learning which things cause bodily damage and tweaking behavior to avoid those'.

I also don't find it difficult to believe that a conscious part of a brain could 'learn which things cause bodily damage and tweak behavior to avoid those' without experiencing anything as 'bad' per se.

Eg, imagine building a robot that has a conscious subsystem. The subsystem's job is to help the robot avoid bodily damage, but the subsystem experiences this as being like a video game -- it's fun to rack up 'the robot's arm isn't bleeding' points.

 

David Manheim: That is possible for a [robot]. But in a chicken, you're positing a design with separable features and subsystems that are conceptually distinct. Biology doesn't work that way - it's spaghetti towers the whole way down.

 

Rob Bensinger: What if my model of consciousness says 'consciousness is an energy-intensive addition to a brain, and the more stuff you want to be conscious, the more expensive it is'? Then evolution will tend to make a minimum of the brain conscious -- whatever is needed for some function.

[people can be conscious about almost any signal that reaches their brain, largely depending on what they have been trained to pay attention to]

This seems wrong to me -- what do you mean by 'almost any signal'? (Is there a paper you have in mind operationalizing this?)

 

David Manheim: I mean that people can choose / train to be conscious of their heartbeat, or pay attention to certain facial muscles, etc, even though most people are not aware of them. (And obviously small children need to be trained to pay attention to many different bodily signals.)

[Clearly something was lost here - I'm saying that the claim that there is a disconnect between conscious sensation and tracking bodily damage is a difficult one to believe. And if there is any such connection, the reason that physical damage is negative is that it's beneficial.]

(I see - yeah I phrased my tweet poorly.)

I meant that pain would need to be experienced as a negative for consciousness to exist - otherwise it seems implausible that it would have evolved.

 

Rob Bensinger: I felt like there were a lot of unstated premises here, so I wanted to hear what premises you were building in (eg, your concept of what 'pain' is).

But even if we grant everything, I think the only conclusion is "pain is less positive than non-pain", not "pain is negative".

 

David Manheim: Yeah, I grant that the only conclusion I lead to is relative preference, not absolute value.

But for humans, I'm unsure that there is a coherent idea of valence distinct from our experienced range of sensation. Someone who's never missed a meal finds skipping lunch painful.

 

Rob Bensinger: ? I think the idea of absolute valence is totally coherent for humans. There's such a thing as hedonic treadmills, but the idea of 'not experiencing a hedonic treadmill' isn't incoherent.

 

David Manheim: Being unique only up to linear transformations implies that utilities don't have a coherent notion of (psychological) valence, since you can always add some number to shift it. That's not a hedonic treadmill, it's about how experienced value is relative to other things.
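(For readers who don't have the formalism in mind, the uniqueness result David is appealing to is roughly the following; whether it carries over from utility representations to experienced valence is what the next few replies dispute.

If U represents an agent's preferences, then so does U'(x) = a·U(x) + b for any a > 0. The ordering is preserved — U(x) > U(y) exactly when U'(x) > U'(y) — and so are ratios of utility differences. The sign of U(x) is not preserved, so 'x has negative utility' is not by itself an invariant statement; only comparisons like 'x is worse than y' are.)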

 

Rob Bensinger: 'There's no coherent idea of valence distinct from our experienced range of sensation' seems to imply that there's no difference between 'different degrees of horrible torture' and 'different degrees of bliss', as long as the organism is constrained to one range or the other.

Seems very false!

 

David Manheim: It's removed from my personal experience, but I don't think you're right. If you read Knut Hamsun's "Hunger", it really does seem clear that even in the midst of objectively painful experiences, people find happiness in slightly less pain.

On the other hand, all of us experience what is, historically, an unimaginably wonderful life. Of course, it's my inside-view / typical mind assumption, but we experience a range of experienced misery and bliss that seems very much comparable to what writers discuss in the past.

 

Rob Bensinger: This seems like 'hedonic treadmill often happens in humans' evidence, which is wildly insufficient even for establishing 'humans will perceive different hedonic ranges as 100% equivalent', much less 'this is true for all possible minds' or 'this is true for all sentient animals'.

"even in the midst of objectively painful experiences, people find happiness in slightly less pain" isn't even the right claim. You want 'people in objectively painful experiences can't comprehend the idea that their experience is worse, seem just as cheerful as anyone else, etc'

 

David Manheim: 

[This seems like 'hedonic treadmill often happens in humans' evidence, which is wildly insufficient even for establishing 'humans will perceive different hedonic ranges as 100% equivalent', much less 'this is true for all possible minds' or 'this is true for all sentient animals'.]

Agreed - I'm not claiming it's universal, just that it seems at least typical for humans.

["even in the midst of objectively painful experiences, people find happiness in slightly less pain" isn't even the right claim. You want 'people in objectively painful experiences can't comprehend the idea that their experience is worse, seem just as cheerful as anyone else, etc']

Flip it, and it seems trivially true - 'people in objectively wonderful experiences can't comprehend the idea that their experience is better, seem just as likely to be sad or upset as anyone else, etc'

 

Rob Bensinger: I don't think that's true. E.g., I think people suffering from chronic pain acclimate a fair bit, but nowhere near completely. Their whole life just sucks a fair bit more; chronic pain isn't a happiness- or welfare-preserving transformation.

Maybe people believe their experiences are a lot like other people's, but that wouldn't establish that humans (with the same variance of experience) really do have similar-utility lives. Even if you're right about your own experience, you can be wrong about the other person's in the comparison you're making.

 

David Manheim:  

Agreed - But I'm also unsure that there is any in-principle way to unambiguously resolve any claims about how good/bad relative experiences are, so I'm not sure how to move forward about discussing this.

[I also don't find it difficult to believe that a conscious part of a brain could 'learn which things cause bodily damage and tweak behavior to avoid those' without experiencing anything as 'bad' per se.]

That involves positing two separate systems which evidently don't interact that happen to occupy the same substrate. I don't see how that's plausible in an evolved system.

 

Rob Bensinger: Presumably you don't think in full generality 'if a conscious system X interacts a bunch with another system Y, then Y must also be conscious'. So what kind of interaction makes consciousness 'slosh over'?

I'd claim that there are complicated systems in my own brain that have tons of causal connections to the rest of my brain, but that I have zero conscious awareness of.

(Heck, I wouldn't be shocked if some of those systems are suffering right now, independent of 'my' experience.)

 

Jacy Anthis:  

[But in that case we should be less confident, assuming chickens are conscious, that their consciousness is 'hooked up' to trivial-to-implement stuff like 'learning to avoid bodily damage at all'.]

I appreciate you sharing this, Rob. FWIW you and Eliezer seem confused about consciousness in a very typical way, No True Scotsmanning each operationalization that comes up with vague gestures at ineffable qualia. But once you've dismissed everything, nothing meaningful is left.

 

Rob Bensinger: I mean, my low-confidence best guess about where consciousness comes from is that it evolved in response to language. I'm not saying that it's impossible to operationalize 'consciousness'. But I do want to hear decompositions before I hear confident claims 'X is conscious'.

 

Jacy Anthis: At least we agree on that front! I would extend that for 'X is not conscious', and I think other eliminativists like Brian Tomasik would agree that this is a huge problem in the discourse.

 

Rob Bensinger: Yep, I agree regarding 'X is not conscious'.

(Maybe I think it's fine for laypeople to be confident-by-default that rocks, electrons, etc are unconscious? As long as they aren't so confident they could never update, if a good panpsychism argument arose.)

 

Sam Rosen: It's awfully suspicious that:

  • Pigs in pain look and sound like what we would look and sound like if we were in pain.
  • Pigs have a similar brain to us with similar brain structures.
  • The parts of the brain that light up in pigs' brains when they are in pain are the same as the parts of the brain that light up in our brains when we are in pain. (I think this is true, but could totally be wrong.)
  • Pain plausibly evolved as a mechanism to deter receiving physical damage which pigs need just as much as humans.
  • Pain feels primal and simple—something a pig could understand. It's not like counterfactual reasoning, abstraction, or complicated emotions like sonder.

It just strikes me as plausible that pigs can feel lust, thirst, pain and hunger—and humans merely evolved to learn how to talk about those things. It strikes me as less plausible that pigs unconsciously have mechanisms that control "lust", "thirst," "pain," and "hunger" and humans became the first species on Earth that made all those unconscious functional mechanisms conscious.

(Like why did humans only make those unconscious processes conscious? Why didn't humans, when emerging into consciousness, become conscious of our heart regulation and immune system and bones growing?)

It's easier to evolve language and intelligence than it is to evolve language and intelligence PLUS a way of integrating and organizing lots of unconscious systems into a consciousness producing system where the attendant qualia of each subsystem incentivizes the correct functional response.

 

Rob Bensinger: Why do you think pigs evolved qualia, rather than evolving to do those things without qualia? Like, why does evolution like qualia?

 

Sam Rosen: I don't know if evolution likes qualia. It might be happy to do things unconsciously. But thinking non-human animals aren't conscious of pain or thirst or hunger or lust means adding a big step from apes to humans. Evolution prefers smooth gradients to big steps.

My last paragraph of my original comment is important for my argument.

 

Rob Bensinger: My leading guess about where consciousness comes from is that it evolved in response to language.

Once you can report fine-grained beliefs about your internal state (including your past actions, how they cohere with your present actions, how this coherence is virtuous rather than villainous, how your current state and future plans are all the expressions of a single Person with a consistent character, etc.), there's suddenly a ton of evolutionary pressure for you to internally represent a 'global you state' to yourself, and for you to organize your brain's visible outputs to all cohere with the 'global you state' narrative you share with others; where almost zero such pressure exists before language.

Like, a monkey that emits different screams when it's angry, hungry, in pain, etc. can freely be a Machiavellian reasoner: it needs to scream in ways that at least somewhat track whether it's really hungry (or in pain, etc.), or others will rapidly learn to distrust its signals and refuse to give aid. But this is a very low-bandwidth communication channel, and the monkey is free to have basically any internal state (incoherent, unreflective, unsympathetic-to-others, etc.) as long as it ends up producing cries in ways that others will take sufficiently seriously. (But not maximally seriously, since never defecting/lying is surely not going to be the equilibrium here, at least for things like 'I'm hungry' signals.)

The game really does change radically when you're no longer emitting an occasional scream, but are actually constructing sentences that tell stories about your entire goddamn brain, history, future behavior, etc.

 

Nell Watson: So, in your argument, would it follow then that feral human children or profoundly autistic human beings cannot feel pain, because they lack language to codify their conscious experience?

 

Rob Bensinger: Eliezer might say that? Since he does think human babies aren't conscious, with very high confidence.

But my argument is evolutionary, not developmental. Evolution selected for consciousness once we had language (on my account), but that doesn't mean consciousness has to depend on language developmentally.

Some excerpts from the comments on this post:

> 4. Similarly, I frequently hear about dreams that are scary or disorienting, but I don't think I've ever heard of someone recalling having experienced severe pain from a dream, even when they remember dreaming that they were being physically damaged.
>
> This may be for reasons of selection: if dreams were more unpleasant, people would be less inclined to go to sleep and their health would suffer. But it's interesting that scary dreams are nonetheless common. This again seems to point toward 'states that are further from the typical human state are much more likely to be capable of things like fear or distress, than to be capable of suffering-laden physical agony.'

My guess would have been that dreams involve hallucinated perceptual inputs (sight, sound, etc.) but dreams don't involve hallucinated interoceptive input (pain, temperature, hunger, etc.).

It seems physiologically plausible—the insular cortex is the home of interoceptive inputs and can have its hyperparameters set to "don't hallucinate", while other parts of the cortex can have their hyperparameters set to "do hallucinate".

It seems evolutionarily plausible because things like "when part of the body is cold, vasoconstrict" and "when the stomach is full, release more digestive enzymes" or whatever, are still very important to do during sleep, and would presumably get screwed up if the insular cortex was hallucinating.

It seems introspectively plausible because it's not just pain, but also hunger, hot-or-cold, or muscle soreness … when I'm cold in real life, I feel like I have dreams where I'm cold, etc. etc.

I think fear reactions are an amygdala thing that doesn't much involve interoceptive inputs or the insular cortex. So they can participate in the hallucinations.

I've had a few dreams in which someone shot me with a gun, and it physically hurt about as much as a moderate stubbed toe or something (though the pain was in my abdomen where I got shot, not my toe). But yeah, pain in dreams seems pretty rare for me unless it corresponds to something that's true in real life, as you mention, like being cold, having an upset stomach, or needing to urinate.

Googling {pain in dreams}, I see a bunch of discussion of this topic. One paper says:

> Although some theorists have suggested that pain sensations cannot be part of the dreaming world, research has shown that pain sensations occur in about 1% of the dreams in healthy persons and in about 30% of patients with acute, severe pain.

I would also add that the fear responses, while participating in the hallucinations, aren't themselves hallucinated, not any more than wakeful fear is hallucinated, at any rate. They're just emotional responses to the contents of our dreams.

Since pain involves both sensory and affective components which rarely come apart, and the sensory precedes the affective, it's enough to not hallucinate the sensory.

I do feel like pain is a bit different from the other interoceptive inputs in that the kinds of automatic responses to it are more like those to emotions. But one potential similarity is that it was more fitness-enhancing for sharp pain (and other internal signals going haywire) to wake us, but not so for sight, sound or emotions. Loud external sounds still wake us, too, but maybe only much louder than what we dream.

It's not clear that you intended otherwise, but I would also assume not that there's something suppressing pain hallucination (like a hyperparameter), but that hallucination is costly and doesn't happen by default, so only things useful and safe to hallucinate can get hallucinated.

Also, don't the senses evoked in dreams mostly match what people can "imagine" internally while awake, i.e. mostly just sight and sound? There could be common mechanisms here. Can people imagine pains? I've also heard it claimed that our inner voices only have one volume, so maybe that's also true of sound in dreams?

FWIW, I think I basically have aphantasia, so can't visualize well, but I think my dreams have richer visual experiences.

> the fear responses, while participating in the hallucinations, aren't themselves hallucinated

Yeah, maybe I should have said "the amygdala responds to the hallucinations" or something.

> pain is a bit different from the other interoceptive inputs in that the kinds of automatic responses to it are more like those to emotions…

"Emotions" is kinda a fuzzy term that means different things to different people, and more specifically, I'm not sure what you meant in this paragraph. The phrase "automatic responses…to emotions" strikes me as weird because I'd be more likely to say that an "emotion" is an automatic response (well, with lots of caveats), not that an "emotion" is a thing that elicits an automatic response.

> not that there's something suppressing pain hallucination (like a hyperparameter), but that hallucination is costly and doesn't happen by default

Again I'm kinda confused here. You wrote "not…but" but these all seem simultaneously true and compatible to me. In particular, I think "hallucination is costly" energetically (as far as I know), and "hallucination is costly" evolutionarily (when done at the wrong times, e.g. while being chased by a lion). But I also think hallucination is controlled by an inference-algorithm hyperparameter. And I'm also inclined to say that the "default" value of this hyperparameter corresponds to "don't hallucinate", and during dreams the hyperparameter is moved to a non-"default" setting in some cortical areas but not others. Well, the word "default" here is kinda meaningless, but maybe it's a useful way to think about things.

Hmm, maybe you're imagining that there's some special mechanism that's active during dreams but otherwise inactive, and this mechanism specifically "injects" hallucinations into the input stream somehow. I guess if the story was like that, then I would sympathize with the idea that maybe we shouldn't call it a "hyperparameter" (although calling it a hyperparameter wouldn't really be "wrong" per se, just kinda unhelpful). However, I don't think it's a "mechanism" like that. I don't think you need a special mechanism to generate random noise in biological neurons where the input would otherwise be. They're already noisy. You just need to "lower SNR thresholds" (so to speak) such that the noise is treated as a meaningful signal that can constrain higher-level models, instead of being ignored. I could be wrong though.
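(A toy sketch of the 'lower SNR thresholds' framing above — purely schematic, not a claim about actual cortical algorithms; the NOISE_STD value and the precision_threshold parameter are invented for illustration. The point is just that the same baseline noise can be ignored or treated as meaningful signal depending on a single scalar setting:)

```python
import numpy as np

rng = np.random.default_rng(0)
NOISE_STD = 1.0  # baseline channel noise, assumed known to the model

def perceived(true_signal, precision_threshold):
    """Toy channel: activity = signal + noise. If the activity clears the
    threshold (in units of noise std), downstream models treat it as data;
    otherwise it is ignored and doesn't constrain higher-level models."""
    activity = true_signal + rng.normal(0.0, NOISE_STD)
    return activity if abs(activity) / NOISE_STD >= precision_threshold else None

def fraction_treated_as_signal(threshold, n=10_000):
    """How often pure noise (true_signal = 0) gets treated as signal."""
    return sum(perceived(0.0, threshold) is not None for _ in range(n)) / n

print("high threshold (awake-ish):", fraction_treated_as_signal(3.0))  # noise mostly ignored
print("low threshold (dream-ish):", fraction_treated_as_signal(0.5))   # noise mostly 'believed'
```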

> I would also add that the fear responses, while participating in the hallucinations, aren't themselves hallucinated, not any more than wakeful fear is hallucinated, at any rate. They're just emotional responses to the contents of our dreams.

I disagree with this statement. For me, the contents of a dream seem only weakly correlated with whether I feel afraid during the dream. I’ve had many dreams with seemingly ordinary content (relative to the baseline of general dream weirdness) that were nevertheless extremely terrifying, and many dreams with relatively weird and disturbing content that were not frightening at all.

Makes sense to me, and seems like a good reason not to update (or to update less) from dreams to 'pain is fragile'.

Another thing to consider regarding dreams is: Insects and fish don't have dreams (I wonder if maybe it's only mammals and some birds that do).

As humans, we make motivational tradeoffs based on pain ("should I go out and freeze in the cold to feed my hunger?", etc). And we change long-term behaviors based on pain ("earlier when I went there, I experienced pain, so I'd rather avoid that in the future"). And both of these things I just mentioned are also observed among lobsters (as explained in the embedded video).

Something else that lobsters do and we also do: They change behavior (and body language) based on how "alpha" and "beta" they feel. As explained here at 1:21. And as humans we experience that our "qualia" can be tinged by such feelings.

So all animals that dream make motivational tradeoffs based on pain. But not all animals that make motivational tradeoffs based on pain have dreams.

Some hazy speculation from me: Maybe the feeling of pain is more "basic" (shared by older evolutionary ancestors) than some of the cognitive machinery that dreams help us maintain.

Here are some unorganized thoughts I have (these are related to your top-level post, but I'm including them here nonetheless):

  • Note the difference between cognitive tradeoffs and more simple mechanisms like "if bodily damage then scream". If you think of consciousness as relating to a "non-local workspace" or something like that, then making tradeoffs seems like something that maybe could qualify.
  • It's interesting to note that fish and lobsters rub painful parts of their body, and that anesthesia can make them stop doing that.
  • Many animals are social animals. For some examples of how fish can be social in sophisticated ways, see e.g. here. I have also heard it claimed that cockroaches are quite social animals. On Wikipedia they write: "Some species, such as the gregarious German cockroach, have an elaborate social structure involving common shelter, social dependence, information transfer and kin recognition (...) When reared in isolation, German cockroaches show behavior that is different from behavior when reared in a group. In one study, isolated cockroaches were less likely to leave their shelters and explore, spent less time eating, interacted less with conspecifics when exposed to them, and took longer to recognize receptive females. These effects might have been due either to reduced metabolic and developmental rates in isolated individuals or the fact that the isolated individuals had not had a training period to learn about what others were like via their antennae. Individual American cockroaches appear to have consistently different "personalities" regarding how they seek shelter. In addition, group personality is not simply the sum of individual choices, but reflects conformity and collective decision-making.".
  • Many animals have language, including birds and fish and insects. (Or are we to only call something language if we can build complex phrases as humans do? If so, I think human children start reporting suffering before they have learned to properly speak or understand "language".)
  • Admittedly non-toddler humans have a wider repertoire than other animals when it comes to reporting and describing suffering. Maybe that constitutes a qualitative leap of some kind, but I suspect that various other animals also can communicate in considerable nuance about how they feel.
  • Jeffrey Masson about pigs: "Piglets are particularly fond of play, just as human children are, and chase one another, play-fight, play-love, tumble down hills, and generally engage in a wide variety of enjoyable activities. (...) Though they are often fed garbage and eat it, their food choices - if allowed them - would not be dissimilar to our own. They get easily bored with the same food. They love melons, bananas, and apples, but if they are fed them for several days on end, they will set them aside and eat whatever other food is new first. (...) Much like humans, every single pig is an individual. (...) Some pigs are independent and tough, and don’t let the bad times get to them. Others are ultra-sensitive and succumb to sadness and even depression much more readily.". Here is a video of what seems like a pig trying to help another pig.
  • I posit/guess/assume: Pain is often connected enough with language that we are able to report on how we feel using language - but also often not.
  • I posit/guess/assume: There are things that humans use suffering for that don't rely on language, and that would work and be useful without language and is used the same way by evolutionary ancestors of ours that are fish/insects/etc.
  • I do find it a bit weird that we have so much consciousness, as this seems (based on gut feeling) like something that would be inefficient (use unnecessary energy). It seems that you have a similar intuition. But the resolution of this "paradox" that seems most plausible to me is that for whatever reason evolution has made animals that use "conscious" processes for a lot of things. Calculators are more efficient than human brains at arithmetic, but nonetheless, humans often use "conscious" processes even for simple arithmetic. Why assume that it's any different for e.g. crows when they do arithmetic, or when they come up with creative plans based on spatial visualization?
  • Bees are influenced by "emotion" in ways that overlap with how humans are influenced by emotion. And even springtails sometimes have behavior that seems somewhat sophisticated (see e.g. this mating ritual, which seems more complicated to me than "body damage registered, move away").
  • Here is an example of fish acting as if they have an appreciation for visual sights. Humans also act as if they have an appreciation for visual sights. And so do bears it would seem. If e.g. your 1st cousin says "what a beautiful view", you assume that he has evolved conscious processes that appreciate beauty (just like you). It would after all be weird if different evolutionary mechanisms had evolved for the two of you. The more evolutionary distance there is between someone, the weaker this kind of argument becomes, but I still think a good deal of it is left for distant cousins such as fish and bears.
  • I don't disagree with "'Conscious' is incredibly complicated and weird. We have no idea how to build it." But you could also say "Lobsters are incredibly complicated and weird. We have no idea how to build a lobster."
  • Reducing the risk of being singled out by predators can be an evolutionary disincentive against giving overt signals of pain/hunger/etc.

> Me: 'Conscious' is incredibly complicated and weird. We have no idea how to build it. It seems like a huge mechanism hooked up to tons of things in human brains. Simpler versions of it might have a totally different function, be missing big parts, and work completely differently.

What's the reason for assuming that? Is it based on a general feeling that value is complex, and you don't want to generalize much beyond the prototype cases? That would be similar to someone who really cares about piston steam engines but doesn't care much about other types of steam engines, much less other types of engines or mechanical systems.

I would tend to think that a prototypical case of a human noticing his own qualia involves some kind of higher-order reflection that yields the quasi-perceptual illusions that illusionism talks about with reference to some mental state being reflected upon (such as redness, painfulness, feeling at peace, etc). The specific ways that humans do this reflection and report on it are complex, but it's plausible that other animals might do simpler forms of such things in their own ways, and I would tend to think that those simpler forms might still count for something (in a similar way as other types of engines may still be somewhat interesting to a piston-steam-engine aficionado). Also, I think some states in which we don't actively notice our qualia probably also matter morally, such as when we're in flow states totally absorbed in some task.

Here's an analogy for my point about consciousness. Humans have very complex ways of communicating with each other (verbally and nonverbally), while non-human animals have a more limited set of ways of expressing themselves, but they still do so to greater or lesser degrees. The particular algorithms that humans use to communicate may be very complex and weird, but why focus so heavily on those particular algorithms rather than the more general phenomenon of animal communication?

Anyway, I agree that there can be some cases where humans have a trait to such a greater degree than non-human animals that it's fair to call the non-human versions of it negligible, such as if the trait in question is playing chess, calculating digits of pi, or writing poetry. I do maintain some probability (maybe like 25%) that the kinds of things in human brains that I would care most about in terms of consciousness are almost entirely absent in chicken brains.

I have an alternative hypothesis about how consciousness evolved. I'm not especially confident in it.

In my view, a large part of the cognitive demands on hominins consists of learning skills and norms from other hominins. One of a few questions I always ask when trying to figure out why humans have a particular cognitive trait is “How could this have made it cheaper (faster, easier, more likely, etc.) to learn skills and/or norms from other hominins?” I think the core cognitive traits in question originally evolved to model the internal state of conspecifics, and make inferences about task performances, and were exapted for other purposes later.

I consider imitation learning a good candidate among cognitive abilities that hominins may have evolved since the last common ancestor with chimpanzees, since as I understand it, chimps are quite bad at imitation learning. So the first step may have been hominins obtaining the ability to see another hominin performing a skill as another hominin performing a skill, in a richer way than chimps, like “That-hominin is knapping, that-hominin is striking the core at this angle.” (Not to imply that language has emerged yet, verbal descriptions of thoughts just correspond well to the contents of those thoughts; consider this hypothesis silent on the evolution of language at the moment.) Then perhaps recursive representations about skill performance, like “This-hominin feels like this part of the task is easy, and this part is hard.” I’m not very committed on whether self-representations or other-representations came first. Then higher-order things like, “This-hominin finds it easier to learn a task when parts of the task are performed more slowly, so when this-hominin performs this task in front of that-hominin-to-be-taught, this-hominin should exaggerate this part, or that part of the task.” And then, “This-hominin-that-teaches-me is exaggerating this part of the task,” which implicitly involves representing all those lower order thoughts that lead to the other hominin choosing to exaggerate the task, and so on. This is just one example of how these sorts of cognitive traits could improve learning efficiency, in sink and source.

Once hominins encounter cooperative contexts that require norms to generate a profit, there is selection for these aforementioned general imitation-learning mechanisms to be exapted for learning norms, which could result in metarepresentations of internal states relevant to norms, like emotional distress, among other things. I also think this mechanism is a large part of how evolution implements moral nativism in humans. Recursive metarepresentations of one’s own emotional distress can be informative when learning norms as well. Insofar as one’s own internal state is informative about the True Norms, evolution can constrain the moral search space by providing introspective access to that internal state. On this view, this is pretty much what suffering is: the case where the internal state being metarepresented is physical or emotional distress.

I think this account allows for more or less conscious agents, since for every object-level representation there can be a new metarepresentation, so as minds become richer, so does consciousness. I don't mean to imply that full-blown episodic memory, autobiographical narrative, and so on fall right out of a scheme like this. But it also seems to predict that mostly just hominins are conscious, and maybe some other primates to a limited degree, and maybe some other animals that we’ll find have convergently evolved consciousness (elephants or dolphins or magpies, say), but probably not in a way that allows them to implement suffering.

I don’t feel that I need to invoke the evolution of language for any of this to occur; I find I don’t feel the need to invoke language for most explicanda in human evolution, actually. I think consciousness preceded the ability to make verbal reports about consciousness.

I also don’t mean to imply that dividing pies, as opposed to making them, was a small fraction of the task demands that hominins faced historically; I just don’t think it was the largest fraction.

Your explanation does double duty, at least given its assumptions: it also goes some way toward explaining how human cooperation is stable where it wouldn’t be by default. I admit that I don’t provide an alternative explanation, but I feel like that’s outside the scope of this conversation, and I do have alternative explanations in mind that I could shore up if pressed.

Regarding the argument about consciousness evolving as a way for humans to report their internal states, I think there's a sensible case that "unconscious pain" matters, even when it's not noticed or reported on by higher-order processes. This plausibly moves the goalposts away from "finding which beings are conscious in the sense of being aware of their own awareness" (as I believe Eliezer has roughly put it).

To make this case I'd point to this essay from Brian Tomasik, which I find compelling. I would particularly like to quote this part of it:

I emotionally sympathize with the intuition that I don't care about pain when it's not "noticed". But unlike Muehlhauser (2017), I think illusionism does have major implications for my moral sensibilities here. That's because prior to illusionism, one imagines one's "conscious" feelings as "the real deal", with the "unconscious" processes being unimportant. But illusionism shows that the difference between conscious and unconscious feelings is at least partly a sleight of hand. (Conscious and unconscious experiences do have substantive differences, such as in how widely they recruit various parts of the brain (Dehaene 2014).)

Put another way, what is the "this" that's referred to when Muehlhauser cares about "whatever this is"? From a pre-illusionism mindset, "this" refers to the intrinsic nature of pain states, which is assumed by many philosophers to be a definite thing. After embracing illusionism, what does "this" refer to? It's not clear. Does it refer to whatever higher-order sleight of hand is generating the representation that "this feels like something painful"? Is it the underlying signaling of pain in "lower" parts of the nervous system? Both at once? Unlike in the case of qualia realism, there's no clear answer, nor does there seem to me a single non-realist answer that best carves nature at its joints. That means we have to apply other standards of moral reasoning, including principles like non-arbitrariness. And as my current article has explained, the principle of non-arbitrariness makes it hard for me to find an astronomical gulf between "noticed" and "unnoticed" pains, especially after controlling for the fact that "noticed" pains tend to involve a lot more total brain processing than "unnoticed" ones.

I maybe 1/4 agree with this, though I also think this is a domain that's evidence-poor enough (unless Brian or someone else knows a lot more than me!) that there isn't a lot that can be said with confidence. Here's my version of reasoning along these lines (in a Sep. 2019 shortform post):

[...]

3. If morality isn't "special" -- if it's just one of many facets of human values, and isn't a particularly natural-kind-ish facet -- then it's likelier that a full understanding of human value would lead us to treat aesthetic and moral preferences as more coextensive, interconnected, and fuzzy. If I can value someone else's happiness inherently, without needing to experience or know about it myself, it then becomes harder to say why I can't value non-conscious states inherently; and "beauty" is an obvious candidate. My preferences aren't all about my own experiences, and they aren't simple, so it's not clear why aesthetic preferences should be an exception to this rule.

4. Similarly, if phenomenal consciousness is fuzzy or fake, then it becomes less likely that our preferences range only and exactly over subjective experiences (or their closest non-fake counterparts). Which removes the main reason to think unexperienced beauty doesn't matter to people.

Combining the latter two points, and the literature on emotions like disgust and purity which have both moral and non-moral aspects, it seems plausible that the extrapolated versions of preferences like "I don't like it when other sentient beings suffer" could turn out to have aesthetic aspects or interpretations like "I find it ugly for brain regions to have suffering-ish configurations".

Even if consciousness is fully a real thing, it seems as though a sufficiently deep reductive understanding of consciousness should lead us to understand and evaluate consciousness similarly whether we're thinking about it in intentional/psychologizing terms or just thinking about the physical structure of the corresponding brain state. We shouldn't be more outraged by a world-state under one description than under an equivalent description, ideally.

But then it seems less obvious that the brain states we care about should exactly correspond to the ones that are conscious, with no other brain states mattering; and aesthetic emotions are one of the main ways we relate to things we're treating as physical systems.

As a concrete example, maybe our ideal selves would find it inherently disgusting for a brain state that sort of almost looks conscious to go through the motions of being tortured, even when we aren't the least bit confused or uncertain about whether it's really conscious, just because our terminal values are associative and symbolic. I use this example because it's an especially easy one to understand from a morality- and consciousness-centered perspective, but I expect our ideal preferences about physical states to end up being very weird and complicated, and not to end up being all that much like our moral intuitions today.

Addendum: As always, this kind of thing is ridiculously speculative and not the kind of thing to put one's weight down on or try to "lock in" for civilization. But it can be useful to keep the range of options in view, so we have them in mind when we figure out how to test them later.

Notably, I think that my 'maybe I should feel something-like-disgust about many brain states of fruit flies' suggests pretty different responses than 'maybe I should feel something-like-compassion about fruit flies, by sort of putting myself in their shoes' or 'maybe I should extend reciprocity-like fairness intuitions to fruit flies, optimizing them in directions kinda-analogous to how I'd want them to optimize me'.

If Brian's thinking is more like the latter (quasi-compassion, quasi-reciprocity, etc., rather than generalized disgust, beauty, etc.), then it's further from how I'm modeling the situation. I think compassion, reciprocity, etc. are more fragile and harder to generalize to weird physical systems, because they require you to 'see yourself in the system' in some sense, whereas beauty/disgust/etc. are much more universal attitudes we can take to weird physical systems (without being confused about the nature of those systems).

If Brian's thinking is more like the latter (compassion, reciprocity, etc., rather than disgust, beauty, etc.), then it's further from how I'm modeling the situation.

I identify his stance as more about compassion and empathy than about beauty and disgust. He's talked about both in this essay, though a more complete understanding of his moral perspective can be found in this essay.

Personally, I share the intuition that the ability to see yourself in someone else's shoes is a key component of ethics. I wouldn't say that all my values are derived from it, but it's a good starting point.

I also agree that it seems more difficult to imagine myself in the shoes of organisms other than humans (though this is partly just a reflection of my ignorance). But one insight from illusionism is that all computations in the world, including ours, are more similar than we might otherwise have thought: there's no clear-cut line between computations that are conscious and computations that aren't. In other words, illusionism makes us less special, and more like the world around us, since both operate via the same laws. We just happen to be an unusually self-reflective part of the world.

Whereas before it seemed like it would be hard to put myself in the shoes of an unconscious object, like a rock, illusionism makes this easier for me, because I can see that it probably isn't the case that whatever goes on in the rock is "like nothing" whereas my experience is "like something." Put another way, there isn't a magical consciousness juice that lights up some parts of physics and keeps other parts in the dark. There are just unconscious chunks of matter that act on other unconscious chunks.

You can of course identify with unconscious chunks of matter that are more similar to you functionally, causally, or computationally. But I'd hesitate to say that we should expect a sharp cutoff for what counts as "experiences we should care about" vs. "experiences we shouldn't" as opposed to a gentle drop-off as we get farther from the central cluster. That's partially what motivates my empathy towards things that might be dissimilar to me along some axes (such as self-reflection) but more similar to me across many other axes (such as having nociception).

But one insight from illusionism is that all computations in the world, including ours, are more similar than we might otherwise have thought: there's no clear-cut line between computations that are conscious and computations that aren't.

Note that I lean toward disagreeing with this, even though I agree with a bunch of similar-sounding claims you've made here.

Also, like you, I've updated (because of illusionism) toward thinking we're "less special, and more like the world around us". But I think I'm modeling the situation pretty differently, in a way that makes me update a lot less in that direction than you or Brian.

I think consciousness will end up looking something like 'piston steam engine' would look, if we'd evolved to have a lot of terminal values related to the state of piston-steam-engine-ish things.

Piston steam engines aren't a 100% crisp natural kind; there are other machines that are pretty similar to them; there are many different ways to build a piston steam engine; and, sure, in a world where our core evolved values were tied up with piston steam engines, it could shake out that we care at least a little about certain states of thermostats, rocks, hang gliders, trombones, and any number of other random things as a result of very distant analogical resemblances to piston steam engines.

But it's still the case that a piston steam engine is a relatively specific (albeit not atomically or logically precise) machine; and it requires a bunch of parts to work in specific ways; and there isn't an unbroken continuum from 'rock' to 'piston steam engine', rather there are sharp (though not atomically sharp) jumps when you get to thresholds that make the machine work at all.

Suppose you had absolutely no idea how the internals of a piston steam engine worked mechanically. And further suppose that you've been crazily obsessed with piston steam engines your whole life, all your dreams are about piston steam engines, nothing else makes you want to get up in the morning, etc. -- basically the state of humanity with respect to consciousness. It might indeed then be tempting to come up with a story about how everything in the universe is a "piston steam engine lite" at heart; or, failing that, how all steam engines, or all complex machines, are piston-steam-engines lite, to varying degrees.

The wise person who's obsessed with piston steam engines, on the other hand, would recognize that she doesn't know how the damned thing works; and when you don't understand an engine, it often looks far more continuous with the rest of reality, far more fuzzy and simple and basic. "They're all just engines, after all; why sweat the details?"

Recognizing this bias, the wise obsessive should be cautious about the impulse to treat this poorly-understood machine as though it were a very basic or very universal sort of thing; because as we learn more, we should expect a series of large directional updates about just how specific and contingent and parts-containing the thing we value is, compared to the endless variety of possible physical structures out there in the universe.

When your map is blank, it feels more plausible that there will be a "smooth drop-off", because we aren't picturing a large number of gears that will break when we tweak their locations slightly. And because it feels as though almost anything could go in the blank spot, it's harder to viscerally feel like it's a huge matter of life-or-death which specific thing goes there.

Thanks for this discussion. :)

I think consciousness will end up looking something like 'piston steam engine' would look, if we'd evolved to have a lot of terminal values related to the state of piston-steam-engine-ish things.

I think that's kind of the key question. Is what I care about as precise as "piston steam engine" or is it more like "mechanical devices in general, with a huge increase in caring as the thing becomes more and more like a piston steam engine"? This relates to the passage of mine that Matthew quoted above. If we say we care about (or that consciousness is) this thing going on in our heads, are we pointing at a very specific machine, or are we pointing at machines in general with a focus on the ones that are more similar to the exact one in our heads? In the extreme, a person who says "I care about what's in my head" is an egoist who doesn't care about other humans. Perhaps he would even be a short-term egoist who doesn't care about his long-term future (since his brain will be more different by then). That's one stance that some people take. But most of us try to generalize what we care about beyond our immediate selves. And then the question is how much to generalize.

It's analogous to someone saying they love "that thing" and pointing at a piston steam engine. How much generality should we apply when saying what they value? Is it that particular piston steam engine? Piston steam engines in general? Engines in general? Mechanical devices in general with a focus on ones most like the particular piston steam engine being pointed to? It's not clear, and people take widely divergent views here.

I think a similar fuzziness will apply when trying to decide for which entities "there's something it's like" to be those entities. There's a wide range in possible views on how narrowly or broadly to interpret "something it's like".

yet I'm confident we shouldn't expect to find that rocks are a little bit repressing their emotions, or that cucumbers are kind of directing their attention at something, or that the sky's relationship to the ground is an example of New Relationship Energy.

I think those statements can apply to vanishing degrees. It's usually not helpful to talk that way in ordinary life, but if we're trying to have a full theory of repressing one's emotions in general, I expect that one could draw some strained (or poetic, as you said) ways in which rocks are doing that. (Simple example: the chemical bonds in rocks are holding their atoms together, and without that the atoms of the rocks would move around more freely the way the atoms of a liquid or gas do.) IMO, the degree of applicability of the concept seems very low but not zero. This very low applicability is probably only going to matter in extreme situations, like if there are astronomical numbers of rocks compared with human-like minds.

I think consciousness will end up looking something like 'piston steam engine' would look, if we'd evolved to have a lot of terminal values related to the state of piston-steam-engine-ish things.

I think this is a valid viewpoint, and I find it to be fairly similar to the one Luke Muehlhauser expressed in this dialogue. I sympathize with it quite a lot, but ultimately I part ways with it.

I suppose my main disagreement would probably boil down to a few things, including:

  • My intuition that consciousness is not easily classifiable in the same way a piston steam engine would be, even if you knew relatively little about how piston steam engines worked. I note that your viewpoint here seems similar to Eliezer's analogy in Fake Causality. The difference, I imagine, is that consciousness doesn't seem to be defined via a set of easily identifiable functional features. There is an extremely wide range of viewpoints about what constitutes a conscious experience, what properties consciousness has, and what people are even talking about when they use the word (even though it is sometimes said to be one of the "most basic" or "most elementary" concepts to us).
  • The question I care most about is not "how does consciousness work" but "what should I care about?" Progress on questions about "how X works" has historically yielded extremely crisp answers, explainable by models that use simple moving parts. I don't think we've made substantial progress in answering the other question with simple, crisp models. One way of putting this is that if you came up to me with a well-validated, fundamental theory of consciousness (and somehow this was well defined), I might just respond, "That's cool, but I care about things other than consciousness (as defined in that model)." It seems like the more you're able to answer the question precisely and thoroughly, the more I'm probably going to disagree that the answer maps perfectly onto my intuitions about what I ought to care about.
  • The brain is a kludge, and doesn't seem like the type of thing we should describe as a simple, coherent, unified engine. There are certainly many aspects of cognition that are very general, but most don't seem like the type of thing I'd expect to be exclusively present in humans but not other animals. This touches on some disagreements I perceive myself to have with the foom perspective, but I think that even people from that camp would mostly agree with the weak version of this thesis.

I think this is a valid viewpoint, and I find it to be fairly similar to the one Luke Muehlhauser expressed in this dialogue. I sympathize with it quite a lot, but ultimately I part ways with it.

I hadn't seen that before! I love it, and I very much share Luke's intuitions there (maybe no surprise, since I think his intuitions are stunningly good on both moral philosophy and consciousness). Thanks for the link. :)

The difference, I imagine, is that consciousness doesn't seem to be defined via a set of easily identifiable functional features.

Granted, but this seems true of a great many psychology concepts. Psychological concepts are generally poorly understood and very far from being formally defined, yet I'm confident we shouldn't expect to find that rocks are a little bit repressing their emotions, or that cucumbers are kind of directing their attention at something, or that the sky's relationship to the ground is an example of New Relationship Energy. 'The sky is in NRE with the ground' is doomed to always be a line of poetry, never a line of cognitive science.

(In some cases we've introduced new technical terms, like information-theoretic surprisal, that borrow psychological language. I think this is more common than successful attempts to fully formalize/define how a high-level psychological phenomenon occurs in humans or other brains.)

I do expect some concept revision to occur as we improve our understanding of psychology. But I think our state is mostly 'human psychology is really complicated, so we don't understand it well yet', not 'we have empirically confirmed that human psychological attributes are continuous with the attributes of amoebas, rocks, etc.'.

I don't think we've made substantial progress in answering the other question with simple, crisp models.

[...]

The brain is a kludge

My view is:

  • Our core, ultimate values are something we know very, very little about.
  • The true nature of consciousness is something we know almost nothing about.
  • Which particular computational processes are occurring in animal brains is something we know almost nothing about.

When you combine three blank areas of your map, the blank parts don't cancel out. Instead, you get a part of your map that you should be even more uncertain about.

I don't see a valid way to leverage that blankness-of-map to concentrate probability mass on 'these three huge complicated mysterious brain-things are really similar to rocks, fungi, electrons, etc.'.

Rather, 'moral value is a kludge' and 'consciousness is a kludge' both make me update toward thinking the set of moral patients is smaller -- these engines don't become less engine-y via being kludges, they just become more complicated and laden-with-arbitrary-structure.

A blank map of a huge complicated neural thingie enmeshed with verbal reasoning and a dozen other cognitive processes in intricate ways is not the same as a filled-in map of something that's low in detail and has very few crucial highly contingent or complex components. The lack of detail is in the map, but the territory can be extraordinarily detailed. And any of those details (either in our CEV, or in our consciousness) can turn out to be crucial in a way that's currently invisible to us.

It sounds to me like you're updating in the opposite direction -- these things are kludges, therefore we should expect them (and their intersection, 'things we morally value in a consciousness-style way') to be simpler, more general, more universal, less laden with arbitrary hidden complexity. Why update in that direction?

Thinking about it more, my brain generates the following argument for the perspective I think you're advocating:

  • Consciousness and human values are both complicated kludges, but they're different complicated kludges, and they aren't correlated (because evolution didn't understand what 'consciousness' was when it built us, so it didn't try to embed that entire complicated entity into our values; it just embedded various messy correlates that break down pretty easily).

    It would therefore be surprising if any highly specific cognitive feature of humans ended up being core to our values. It's less surprising if a simple (and therefore more widespread) cognitive thingie ends up important to our values, because although the totality of human values is very complex, a lot of the real-world things referred to by specific pieces of human value (e.g., 'boo loud sudden noises') are quite simple.

    A lot of the complexity of values comes from the fact that they glue together an enormous list of many different relatively simple things (orgasms, symmetry, lush green plants, the sound of birds chirping, the pleasure of winning a game), and then these need to interact in tons of complicated ways.

    In some cases, there probably are much-more-complicated entities in our values. But any given specific complicated thing will be a lot harder to exactly locate in our values, because it's less likely on priors that evolution will hand-code that thing into our brains, or hand-code a way for humans to reliably learn that value during development.

This argument moves me some, and maybe I'll change my mind after chewing on it more.

I think the main reasons I don't currently find it super compelling are:

 

1 - I think a lot of human values look like pointers to real-world phenomena, rather than encodings of real-world phenomena. Humans care about certain kinds of human-ish minds (which may or may not be limited to human beings). Rather than trying to hand-code a description of 'mind that's human-ish in the relevant way', evolution builds in a long list of clues and correlates that let us locate the 'human-ish mind' object in the physical world, and glom on to that object. The full complexity of the consciousness-engine is likely to end up pretty central to our values by that method (even though not everything about that engine as it's currently implemented in human brains is going to be essential -- there are a lot of ways to build a piston steam engine).

I do think there will be a lot of surprises and weird edge cases in 'the kind of mind we value'. But I think these are much more likely to arise if we build new minds that deliberately push toward the edges of our concept. I think it's much less likely that we'll care about chickens, rocks, or electrons because these pre-existing entities just happen to exploit a weird loophole in our empathy-ish values -- most natural phenomena don't have keys that are exactly the right shape to exploit a loophole in human values.

(I do think it's not at all implausible that chickens could turn out to have 'human-ish minds' in the relevant sense. Maybe somewhere between 10% likely and 40% likely? But if chickens are moral patients according to our morality, I think it will be because it empirically turns out to be the case that 'being conscious in the basic way humans are' arose way earlier on the evolutionary tree, or arose multiple times on the tree, not because our brain's moral 'pointer toward human-ish minds' is going haywire and triggering (to various degrees) in response to just about everything, in a way that our CEV deeply endorses.)

 

2 - In cases like this, I also don't think humans care much about the pointers themselves, or the 'experience of feeling as though something is human-like' -- rather, humans care about whether the thing is actually human-like (in this particular not-yet-fully-understood way).

 

3 - Moral intuitions like fairness, compassion, respect-for-autonomy, punishment for misdeeds, etc. -- unlike values like 'beauty' or 'disgust' -- seem to me to all point at this poorly-understood notion of a 'person'. We can list a ton of things that seem to be true of 'people', and we can wonder which of those things will turn out to be more or less central. We can wonder whether chickens will end up being 'people-like' in the ways that matter for compassion, even if we're pretty sure they aren't 'people-like' in the ways that matter for 'punishment for misdeeds'.

But regardless, I think eventually (if we don't kill ourselves first) we're just going to figure out what these values (or reflectively endorsed versions of these values) are. And I don't think eg 'respect-for-autonomy' is going to be a thing that smoothly increases from the electron level to the 'full human brain' level; I think it's going to point at a particular (though perhaps large!) class of complicated engines.

Thinking about it more, my brain generates the following argument for the perspective I think you're advocating:

I'm not actually sure if that's the exact argument I had in mind while writing the part about kludges, but I do find it fairly compelling, especially the way you had written it. Thanks.

I think a lot of human values look like pointers to real-world phenomena, rather than encodings of real-world phenomena.

I apologize that this isn't a complete response, but I think if I were to try to summarize a few lingering general disagreements, I would say:

  1. "Human values" don't seem to be primarily what I care about. I care about "my values" and I'm skeptical that "human values" will converge onto what I care about.
  2. I have intuitions that ethics is a lot more arbitrary than you seem to think it is. Your argument is peppered with statements to the effect of "what would our CEV endorse?" I do agree that some degree of self-reflection is good, but I don't see any strong reason to think that reflection alone will naturally lead all or most humans to the same place, especially given that the reflection process is underspecified.
  3. You appear to have interpreted my intuitions about the arbitrariness of concepts as instead being about the complexity and fragility of concepts, which you expressed confusion about. Note that I think this reflects a basic miscommunication on my part, not yours. I do have some intuitions about complexity, less about fragility; but my statements above were (supposed to be) more about arbitrariness (I think).

I don't see any strong reason to think that reflection alone will naturally lead all or most humans to the same place, especially given that the reflection process is underspecified.

I think there's more or less a 'best way' to extrapolate a human's preferences (like, a way or meta-way we would and should endorse the most, after considering tons of different ways to extrapolate), and this will get different answers depending on who you extrapolate from, but for most people (partly because almost everyone cares a lot about everyone else's preferences), you get the same answer on all the high-stakes easy questions.

Where by 'easy questions' I mean the kinds of things we care about today -- very simple, close-to-the-joints-of-nature questions like 'shall we avoid causing serious physical damage to chickens?' that aren't about entities that have been pushed into weird extreme states by superintelligent optimization. :)

I think ethics is totally arbitrary in the sense that it's just 'what people happened to evolve', but I don't think it's that complex or heterogeneous from the perspective of a superintelligence. There's a limit to how much load-bearing complexity a human brain can even fit.

And I don't think eg 'respect-for-autonomy' is going to be a thing that smoothly increases from the electron level to the 'full human brain' level; I think it's going to point at a particular (though perhaps large!) class of complicated engines.

I actually agree with this, and I suspect that we might not disagree as much as you think if we put "credences" on which things we thought were conscious. I'd identify my view as somewhere between Luke's view and Brian's view: it takes into account Brian's cosmopolitan perspective while insisting that consciousness is indeed a higher-level thing that doesn't seem to be built into the universe.

The way I imagine any successful theory of consciousness going is that even if it has a long parts (processes) list, every feature on that list will apply pretty ubiquitously to at least a tiny degree. Even if the parts need to combine in certain ways, that could also happen to a tiny degree in basically everything, although I'm much less sure of this claim; I'm much more confident that I can find the parts in a lot of places than in the claim that basically everything is like each part, so finding the right combinations could be much harder. The full complexity of consciousness might still be found in basically everything, just to a usually negligible degree.

I've written more on this here.

When you combine three blank areas of your map, the blank parts don't cancel out. Instead, you get a part of your map that you should be even more uncertain about.

I think this makes sense. However (and I don't know whether I obfuscated this point somewhere), I don't think I was arguing that we should be more certain about a particular theory. Indeed, from my perspective, I was arguing against reifying a single concept (self-reflectivity) as the thing that defines whether something is conscious, before we know anything about how this works in humans, much less whether humans are even capable of self-reflection in a way that's discontinuous from other animals.

Rather, 'moral value is a kludge' and 'consciousness is a kludge' both make me update toward thinking the set of moral patients is smaller -- these engines don't become less engine-y via being kludges, they just become more complicated and laden-with-arbitrary-structure.

I guess that when I said that brains are kludges, I was trying to say that their boundaries are fuzzy, rather than that they have well-defined boundaries but the concept is extremely fragile, such that if you take away a single property from them they cease to be human. (I probably shouldn't have used the term, and should have described it this way instead.)

Complex structures like "tables" tend to be the type of thing that, if you modify them across one or two dimensions, they still belong to the same category. By contrast, a hydrogen atom is simple, and is the type of thing that, if you take a property away from it, it ceases to be a hydrogen atom.

When I imagined a "consciousness engine" I visualized a simple system with clear moving parts, like a hydrogen atom. And conceptually, one of those moving parts could be a highly modular self-reflectivity component. Under this view, it might make a lot of sense that self-reflectivity is the defining component of a human, but I don't suspect these things are actually that cleanly separable from the rest of the system.

In other words, it seems like the best model of a "table" or some other highly fuzzy concept is not some extremely precise description of the exact properties that define a table, but rather some additive model in which each feature contributes some "tableness", such that no feature alone can either make something a table or prevent something from being a table. My intuitions about consciousness feel this way, but I'm not too certain about any of this.
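To make that contrast concrete, here's a toy sketch; the features and weights are invented purely for illustration, not a claim about what actually constitutes a table (or a consciousness):

    # Toy contrast between a strict "checklist" definition of a category and
    # an additive, graded one. Features and weights are made up.

    TABLE_FEATURES = {
        "flat_top": 0.35,
        "raised_off_ground": 0.25,
        "rigid": 0.20,
        "supports_objects": 0.20,
    }

    def strict_table(features: set) -> bool:
        """Checklist view: every feature is individually necessary."""
        return all(f in features for f in TABLE_FEATURES)

    def graded_tableness(features: set) -> float:
        """Additive view: each feature contributes some 'tableness';
        no single feature is necessary or sufficient on its own."""
        return sum(w for f, w in TABLE_FEATURES.items() if f in features)

    example = {"flat_top", "rigid", "supports_objects"}
    print(strict_table(example))      # False: one missing feature disqualifies it
    print(graded_tableness(example))  # 0.75: it just scores somewhat lower

On the strict view, removing any one feature flips the verdict outright; on the additive view, the verdict only degrades gradually, which is closer to how I'm imagining consciousness.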

I'd say my visualization of consciousness is less like a typical steam engine or table, and more like a Rube Goldberg machine designed by a very confused committee of terrible engineers. You can remove some parts of the machine without breaking anything, but a lot of other parts are necessary for the thing to work.

It should also be possible to design an AI that has 'human-like consciousness' via a much less kludge-ish process -- I don't think that much complexity is morally essential.

But chickens were built by a confused committee just like humans were, so they'll have their own enormous intricate kludges (which may or may not be the same kind of machine as the Consciousness Machine in our heads), rather than having the really efficient small version of the consciousness-machine.

Note: I think there's also a specific philosophical reason to think consciousness is pretty ubiquitous and fundamental -- the hard problem of consciousness. The 'we're investing too much metaphysical importance into our pet obsession' thing isn't the only reason anyone thinks consciousness (or very-consciousness-ish things) might be ubiquitous.

But per illusionism, I think this philosophical reason turns out to be wrong in the end, leaving us without a principled reason to anthropomorphize / piston-steam-engine-omorphize the universe like that.

It's true (on your view and mine) that there's a pervasive introspective, quasi-perceptual illusion humans suffer about consciousness.

But the functional properties of consciousness (or of 'the consciousness-like thing we actually have') are all still there, behind the illusion.

Swapping from the illusory view to the almost-functionally-identical non-illusory view, I strongly expect, will not cause us to stop caring about the underlying real things (thoughts, and feelings, and memories, and love, and friendship).

And if we still care about those real things, then our utility function is still (I claim) pretty obsessed with some very specific and complicated engines/computations. (Indeed, a lot more specific and complicated than real-world piston steam engines.)

I'd expect it to mostly look more like how our orientation to water and oars changes when we realize that the oar half-submerged in the water isn't really broken.

I don't expect the revelation to cause humanity to replace its values with such vague values that we reshape our lives around slightly adjusting the spatial configurations of rocks or electrons, because our new 'generalized friendship' concept treats some common pebble configurations as more or less 'friend-like', more or less 'asleep', more or less 'annoyed', etc.

(Maybe we'll do a little of that, for fun, as a sort of aesthetic project / a way of making the world feel more beautiful. But that gets us closer to my version of 'generalizing human values to apply to unconscious stuff', not Brian's version.)

Swapping from the illusory view to the almost-functionally-identical non-illusory view, I strongly expect, will not cause us to stop caring about the underlying real things (thoughts, and feelings, and memories, and love, and friendship).

Putting aside my other disagreements for now (and I appreciate the other things you said), I'd like to note that I see my own view as "rescuing the utility function" far more than a view which asserts that non-human animals are largely unconscious automatons.

To the extent that learning to be a reductionist shouldn't radically reshape what we care about, it seems clear to me that we shouldn't stop caring about non-human animals, especially larger ones like pigs. I think most people, including the majority of people who eat meat regularly, think that animals are conscious. And I wouldn't expect that personally having a dog or a cat would substantially negatively correlate with believing that animals are conscious (which is what we'd weakly expect if our naive impressions track truth and non-human animals aren't conscious).

There have been quite a few surveys about this, though I'm not quickly coming up with any good ones right now (besides perhaps this survey, which found that 47% of people supported a ban on slaughterhouses, a result which was replicated, though support is perhaps only about one third once you subtract those who don't know what a slaughterhouse is).

To the extent that learning to be a reductionist shouldn't radically reshape what we care about, it seems clear to me that we shouldn't stop caring about non-human animals, especially larger ones like pigs. I think most people, including the majority of people who eat meat regularly, think that animals are conscious.

This seems totally wrong to me.

I'm an illusionist, but that doesn't mean I think that humans' values are indifferent between the 'entity with a point of view' cluster in thingspace (e.g., typical adult humans), and the 'entity with no point of view' cluster in thingspace (e.g., braindead humans).

Just the opposite: I think there's an overwhelmingly large and absolutely morally crucial difference between 'automaton that acts sort of like it has morally relevant cognitive processes' (say, a crude robot or a cartoon hand-designed to inspire people to anthropomorphize it), and 'thing that actually has the morally relevant cognitive processes'.

It's a wide-open empirical question whether, e.g., dogs are basically 'automata that lack the morally relevant cognitive processes altogether', versus 'things with the morally relevant cognitive processes'. And I think 'is there something it's like to be that dog?' is actually a totally fine intuition pump for imperfectly getting at the kind of difference that morally matters here, even though this concept starts to break when you put philosophical weight on it (because of the 'hard problem' illusion) and needs to be replaced with a probably-highly-similar functional equivalent.

Like, the 'is there something it's like to be X?' question is subject to an illusion in humans, and it's a real messy folk concept that will surely need to be massively revised as we figure out what's really going on. But it's surely closer to asking the morally important question about dogs, compared to terrible, overwhelmingly morally unimportant questions like 'can the external physical behaviors of this entity trick humans into anthropomorphizing the entity and feeling like it has a human-ish inner life'.

Tricking humans into anthropomorphizing things is so easy! What matters is what's in the dog's head!

Like, yes, when I say 'the moral evaluation function takes the dog's brain as an input, not the cuteness of its overt behaviors', I am talking about a moral evaluation function that we have to extract from the human's brain.

But the human moral evaluation function is a totally different function from the 'does-this-thing-make-noises-and-facial-expressions-that-naturally-make-me-feel-sympathy-for-it-before-I-learn-any-neuroscience?' function, even though both are located in the human brain.

Thinking (with very low confidence) about an idealized, heavily self-modified, reflectively consistent, CEV-ish version of me:

If it turns out that squirrels are totally unconscious automata, then I think Ideal Me would probably at least weakly prefer to not go around stepping on squirrels for fun. I think this would be for two reasons:

  • The kind of reverence-for-beauty that makes me not want to randomly shred flowers to pieces. Squirrels can be beautiful even if they have no moral value. Gorgeous sunsets plausibly deserve a similar kind of reverence.
  • The kind of disgust that makes me not want to draw pictures of mutilated humans. There may be nothing morally important about the cognitive algorithms in squirrels' brains; but squirrels still have a lot of anatomical similarities to humans, and the visual resemblance between the two is reason enough to be grossed out by roadkill.

In both cases, these don't seem like obviously bad values to me. (And I'm pretty conservative about getting rid of my values! Though a lot can and should change eventually, as humanity figures out all the risks and implications of various self-modifications. Indeed, I think the above descriptions would probably look totally wrong, quaint, and confused to a real CEV of mine; but it's my best guess for now.)

In contrast, conflating the moral worth of genuinely-totally-conscious things (insofar as anything is genuinely conscious) with genuinely-totally-unconscious things seems... actively bad, to me? Not a value worth endorsing or protecting?

Like, maybe you think it's implausible that squirrels, with all their behavioral complexity, could have 'the lights be off' in the way that a roomba with a cute face glued to it has 'the lights off'. I disagree somewhat, but I find that view vastly less objectionable than 'it doesn't even matter what the squirrel's mind is like, it just matters how uneducated humans naively emotionally respond to the squirrel's overt behaviors'.

 

Maybe a way of gesturing at the thing is: Phenomenal consciousness is an illusion, but the illusion adds up to normality. It doesn't add up to 'therefore the difference between automata / cartoon characters and things-that-actually-have-the-relevant-mental-machinery-in-their-brains suddenly becomes unimportant (or even less important)'.


I think the self-reflective part of evolution brought the revelation of suffering to our understanding. The self-unaware computations simply operate on pain as a carrot/stick system, which is what they initially evolved to do. Most of the laws of civilization are about reducing suffering in the population. This realization has introduced new concepts regarding the relationship between ourselves, as individual self-contained computations, and the smaller chunks of functions/computations that exist within us. Because of the carrot/stick functionality, by minimizing suffering we also achieve what the function was originally designed to do: help us with our self-preservation. This is the first level of the self-referential loop.

In the second loop, we can now see that this type of harm reduction is mainly geared toward the preservation of our own genes, since we owe this knowledge to the people who discovered multicellular organisms and the genetic makeup of living things. We can then reflect on this loop again to see whether we should do anything different given our new knowledge.

If one accepts Eliezer Yudkowsky's view on consciousness, the complexity of suffering in particular is largely irrelevant. The claim "qualia requires reflectivity" implies all qualia require reflectivity. This includes qualia like "what is the color red like?" and "how do smooth and rough surfaces feel different?" These experiences seem like they have vastly different evolutionary pressures associated with them that are largely unrelated to social accounting.

If you're asking whether suffering in particular is sufficiently complex that it exists in certain animals but not others by virtue of evolutionary pressure, you're operating in a frame where these arguments are not superseded by the much more generic claim that complex social modeling is necessary to feel anything.

If you think Eliezer is very likely to be right, these additional meditations on the nature of suffering are mostly minutiae.

[EDIT to note: I'm mostly pointing this out because it appears that there is one group that uses "complex social pressures" to claim animals do not suffer because animals feel nothing, and another group that uses "complex social pressures" to claim that animals do not specifically suffer because suffering specifically depends on these things. That these two groups of people just happen to start from a similar guiding principle and happen to reach a similar answer for very different reasons makes me extremely suspicious of the epistemics around the moral patienthood of animals.]

I don't know what Eliezer's view is exactly. The parts I do know sound plausible to me, but I don't have high confidence in any particular view (though I feel pretty confident about illusionism).

My sense is that there are two popular views of 'are animals moral patients?' among EAs:

  1. Animals are obviously moral patients, there's no serious doubt about this.
  2. It's hard to be highly confident one way or the other about whether animals are moral patients, so we should think a lot about their welfare on EV grounds. E.g., even if the odds of chickens being moral patients is only 10%, that's a lot of expected utility on the line.

(And then there are views like Eliezer's, which IME are much less common.)

My view is basically 2. If you ask me to make my best guess about which species are conscious, then I'll extremely tentatively guess that it's only humans, and that consciousness evolved after language. But a wide variety of best guesses are compatible with the basic position in 2.

"The ability to reflect, pass mirror tests, etc. is important for consciousness" sounds relatively plausible to me, but I don't know of a strong positive reason to accept it -- if Eliezer has a detailed model here, I don't know what it is. My own argument is different, and is something like: the structure, character, etc. of organisms' minds is under very little direct selection pressure until organisms have language to describe themselves in detail to others; so if consciousness is any complex adaptation that involves reshaping organisms' inner lives to fit some very specific set of criteria, then it's likely to be a post-language adaptation. But again, this whole argument is just my current best guess, not something I feel comfortable betting on with any confidence.

I haven't seen an argument for any 1-style view that seemed at all compelling to me, though I recognize that someone might have a complicated nonstandard model of consciousness that implies 1 (just as Eliezer has a complicated nonstandard model of consciousness that implies chickens aren't moral patients).

The reason I talk about suffering (and not just consciousness) is:

  • I'm not confident in either line of reasoning, and both questions are relevant to 'which species are moral patients?'.
  • I have nonstandard guesses (though not confident beliefs) about both topics, and if I don't mention those guesses, people might assume my views are more conventional.
  • I think that looking at specific types of consciousness (like suffering) can help people think a lot more clearly about consciousness itself. E.g., thinking about scenarios like 'part of your brain is conscious, but the bodily-damage-detection part isn't conscious' can help draw out people's implicit models of how consciousness works.

Note that not all of my nonstandard views about suffering and consciousness point in the direction of 'chickens may be less morally important than humans'. E.g., I've written before that I put higher probability than most people on 'chickens are utility monsters, and we should care much more about an individual chicken than about an individual human' -- I think this is a pretty straightforward implication of the 'consciousness is weird and complicated' view that leads to a bunch of my other conclusions in the OP.

Parts of the OP were also written years apart, and the original reason I wrote up some of the OP content about suffering wasn't animal-related at all -- rather, I was trying to figure out how much to worry about invisibly suffering subsystems of human brains. (Conclusion: It's at least as worth-worrying-about as chickens, but it's less worth-worrying-about than I initially thought.)

Thanks for clarifying. To the extent that you aren't particularly sure how consciousness comes about, it makes sense to reason about all sorts of possibilities related to capacity for experience and intensity of suffering. In general, I'm just kinda surprised that Eliezer's view is so unusual given that he is the Eliezer Yudkowsky of the rationalist community.

My impression is that the justification for the argument you mention is something along the lines of "the primary reason one would develop a coherent picture of their own mind is so they could convey a convincing story about themselves to others -- which only became a relevant need once language developed."

I was under the impression you were focused primarily on suffering, given the first two sections and the similarity of the above logic to the discussion of pain-signaling earlier. When I think about your generic argument about consciousness, however, I get confused. While I can imagine why one would benefit from an internal narrative around their goals, desires, etc., I'm not even sure how I'd go about squaring pressures for that capacity with the many basic sensory qualia that people have (e.g. sense of sight, sense of touch) -- especially in the context of language.

I think things like 'the ineffable redness of red' are a side-effect or spandrel. On my account, evolution selected for various kinds of internal cohesion and temporal consistency, introspective accessibility and verbal reportability, moral justifiability and rhetorical compellingness, etc. in weaving together a messy brain into some sort of unified point of view (with an attendant unified personality, unified knowledge, etc.).

This exerted a lot of novel pressures and constrained the solution space a lot, but didn't constrain it 100%, so you still end up with a lot of weird neither-fitness-improving-nor-fitness-reducing anomalies when you poke at introspection.

This is not a super satisfying response, and it has basically no detail to it, but it's the least-surprising way I could imagine things shaking out when we have a mature understanding of the mind.

[suffering's] dependence on higher cognition suggests that it is much more complex and conditional than it might appear on initial introspection, which on its own reduces the probability of its showing up elsewhere

Suffering is surely influenced by things like mental narratives, but that doesn't mean it requires mental narratives to exist at all. I would think that the narratives exert some influence over the amount of suffering. For example, if (to vastly oversimplify) suffering was represented by some number in the brain, and if by default it would be -10, then maybe the right narrative could add +7 so that it became just -3.

Top-down processing by the brain is a very general thing, not just for suffering. But I wouldn't say that all brain processes that are influenced by it can't exist without it. (OTOH, depending on how broadly we define top-down processing, maybe it's also somewhat ubiquitous in brains. The overall output of a neural network will often be influenced by multiple inputs, some from the senses and some from "higher" brain regions.)

Once you can report fine-grained beliefs about your internal state (including your past actions, how they cohere with your present actions, how this coherence is virtuous rather than villainous, how your current state and future plans are all the expressions of a single Person with a consistent character, etc.), there's suddenly a ton of evolutionary pressure for you to internally represent a 'global you state' to yourself, and for you to organize your brain's visible outputs to all cohere with the 'global you state' narrative you share with others; where almost zero such pressure exists before language.

 

I have two main thoughts on alternative pictures to this:

  1. Local is global for a smaller set of features. Nonhuman animals could have more limited "global you states", even possibly multiple distinct ones at a time, if they aren't well integrated (e.g. split brain, poor integration between or even within sensory modalities). What's special about the narrative?
  2. Many animals do integrate inputs (within and across senses), use attention, prioritize and make tradeoffs. Motivation (including from emotions, pain, anticipated reward, etc.) feeds into selective/top-down attention to guide behaviour, and it seems like their emotions themselves can be inputs for learning, not just rewards, since they can be trained to behave in trainer-selected ways in response to their own emotions, generalizing to the same emotions in response to different situations. Animals can answer unexpected questions about things that have just happened and their own actions. See my comment here for some studies. I wouldn't assign low credence to in-the-moment "global you states" in animals, basically as described in global workspace theory.

 

Nell Watson: So, in your argument, would it follow then that feral human children or profoundly autistic human beings cannot feel pain, because they lack language to codify their conscious experience?

Rob Bensinger: Eliezer might say that? Since he does think human babies aren't conscious, with very high confidence.

But my argument is evolutionary, not developmental. Evolution selected for consciousness once we had language (on my account), but that doesn't mean consciousness has to depend on language developmentally.

I'd still guess the cognitive and neurological differences wouldn't point to babies being conscious but most mammals not. What differences could explain the gap?

I'd probably say human babies and adult chickens are similarly likely to be phenomenally conscious (maybe between 10% and 40%). I gather Eliezer assigns far lower probability to both propositions, and I'm guessing he thinks adult chickens are way more likely to be conscious than human babies are, since he's said that "I’d be truly shocked (like, fairies-in-the-garden shocked) to find them [human babies] sentient", whereas I haven't heard him say something similar about chickens.

Here's a related illusionist-compatible evolutionary hypothesis about consciousness: consciousness evolved to give us certain resilient beliefs that are adaptive to have. For example, belief in your own consciousness contributes to the belief that death would be bad, and this belief is used when you reason and plan, especially to avoid death. The badness or undesirability of suffering (or the things that cause us suffering) is another such resilient belief. In general, we use reason and planning to pursue things we believe are good and prevent things we believe are bad. Many of the things we believe are good or bad have been shaped by evolution to cause us pleasure or suffering, so evolution was able to hijack our capacities for reason and planning to spread genes more.

Then this raises some questions: for what kinds of reasoning and planning would such beliefs actually be useful (over what we would do without them)? Is language necessary? How much? How sophisticated was the language of early Homo sapiens or earlier ancestors, and how much have our brains and cognitive capacities changed since then? Do animals trained to communicate more (chimps, gorillas, parrots, or even cats and dogs with word buttons) meet the bar?

When I think about an animal simulating outcomes (e.g. visualizing or reasoning about them) and deciding how to act based on whichever outcome seemed most desirable, I'm not sure you really need "beliefs" at all. The animal can react emotionally or with desire to the simulation, and then that reaction becomes associated with the option that generated it, so options will end up more or less attractive this way.
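To make that concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is invented for illustration (the names `simulate_outcome`, `affective_reaction`, the toy world model); the point is just that the agent scores its options by the affective reaction each simulated outcome evokes, without ever storing a proposition like "outcome X is bad":

```python
# Illustrative only: invented names, toy dynamics. The agent never stores a
# belief like "outcome X is bad"; it just attaches the affective reaction
# produced by a simulated outcome to the option that generated it.

def simulate_outcome(option, world_model):
    """Roll the world model forward under the given option (toy stand-in)."""
    return world_model(option)

def affective_reaction(outcome):
    """Return a scalar 'how it feels' response to an imagined outcome."""
    # e.g. negative for imagined injury, positive for imagined food
    return outcome.get("valence", 0.0)

def choose(options, world_model, attractiveness, learning_rate=0.5):
    # Tag each option with the feeling its simulated outcome evoked.
    for option in options:
        outcome = simulate_outcome(option, world_model)
        reaction = affective_reaction(outcome)
        prior = attractiveness.get(option, 0.0)
        attractiveness[option] = prior + learning_rate * (reaction - prior)
    # Act on whichever option now feels most attractive.
    return max(options, key=lambda o: attractiveness[o])

# Toy usage: two options, a crude world model, no explicit beliefs anywhere.
toy_world = lambda option: {"valence": {"approach": 1.0, "flee": -0.2}[option]}
prefs = {}
print(choose(["approach", "flee"], toy_world, prefs))  # -> "approach"
```

The "evaluation" here lives entirely in the affective tags attached to options; nothing in the sketch represents a belief about the world being one way or another.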

Also, somewhat of an aside: some illusions (including some optical illusions and magic tricks) are like lies of omission, and they disappear when you explain what's missing, while others are like lies of commission and don't disappear even when you explain them (many optical illusions are like this). Consciousness illusions seem more like the latter: people aren't going to stop believing they're conscious even if they understand how consciousness works. See https://link.springer.com/article/10.1007/s10670-019-00204-4

I think some nonhuman animals also have such persistent illusions, like the rubber tail illusion in rodents and, I think, some optical illusions, but it's not clear what this says about their consciousness under illusionism.

I wanted to say basically what Sam Rosen said. So, just to make sure I understood your point correctly: do you literally believe that the statement "animals can feel hunger" is false? (And the same for babies?)

It seems to me that you have basically redefined "feeling X" as "feeling X while being an adult human".

I do not understand why animals would have evolved to pretend to have emotions. I mean, what's the point of one pig signalling "I am in pain" to another pig, if that other pig obviously knows that there is no such thing as pain? Why did all the lying evolve when there was no one to lie to?

Does Occam's razor really favor this hypothesis over "animals act as if they feel pain/hunger, because they feel pain/hunger"?

I am not saying that lying does not exist. I am saying that in order for lying to make sense, the "what the lie is about" must exist first, otherwise no one will respond to the lie. I can lie about being in pain, because there is such a thing as pain. I can't lie about being in qwertyuiop (especially if I am an animal and can't make up words).

Your theory predicts that animals would first evolve to pretend to qwertyuiop (which isn't even a thing at that point), and that only millions of years later would a sapient species evolve which actually qwertyuiops.

EDIT:

LOL, I asked the same thing a year ago, didn't notice that until now.

If animals don't feel pain, the obvious question is: why did they evolve to pretend to have qualia that are uniquely human? Especially considering that they evolved long before humans existed.

[anonymous]:

In that case we would just be anthropomorphising, clearly.

My recent improvement in de-confusing consciousness came from understanding the Good Regulator Theorem: every good regulator of a system must contain a model of the system it's regulating.

I see consciousness as this model, a high-level interface representing some facts about the body and its environment, probably developed for impulse control, long-term planning, and communication.

Humans have a lot of different desires, which occasionally contradict each other. To effectively regulate the behaviour of this complex system of systems, some kind of central planner was required, one that gets a simplified representation of what's going on. Our "qualia" are this representation: the encoding of some processes in our body that is available to consciousness. Some of these channels are read-only, some allow a degree of editing, and a lot of things aren't encoded in our consciousness at all, and are thus entirely unavailable.
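For what it's worth, here is one crude way to picture that "simplified interface with read-only and editable channels" idea. This is my own illustrative sketch in Python, not anything from the Good Regulator literature, and every name in it (`ConsciousInterface`, `central_planner`, the specific channels) is invented:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a "central planner" that regulates the body via a
# simplified model of it, where some channels are read-only (you can't will
# your pain away) and some are editable (you can redirect attention).

@dataclass
class ConsciousInterface:
    pain_level: float                # read-only: reported by the body, not settable here
    hunger: float                    # read-only
    attention_target: str = "none"   # editable: the planner may redirect it
    _hidden_state: dict = field(default_factory=dict)  # bodily processes never encoded at all

    def report(self):
        """What the planner actually sees: a compressed summary, not the full body state."""
        return {"pain": self.pain_level, "hunger": self.hunger,
                "attending_to": self.attention_target}

def central_planner(interface: ConsciousInterface) -> ConsciousInterface:
    """Regulate behaviour using only the simplified interface."""
    summary = interface.report()
    if summary["pain"] > 0.7:
        interface.attention_target = "injury"   # write to the editable channel
    elif summary["hunger"] > 0.5:
        interface.attention_target = "food"
    return interface

state = central_planner(ConsciousInterface(pain_level=0.9, hunger=0.3))
print(state.report())   # {'pain': 0.9, 'hunger': 0.3, 'attending_to': 'injury'}
```

The design choice doing the work is that `central_planner` only ever sees `report()`, a compressed summary, and can only write to the editable channel; that mirrors the claim that some bodily processes are readable, some are partially controllable, and many are not represented to consciousness at all.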

4. Similarly, I frequently hear about dreams that are scary or disorienting, but I don't think I've ever heard of someone recalling having experienced severe pain from a dream, even when they remember dreaming that they were being physically damaged.

In my childhood I used to have a recurring nightmare about a shapeshifting monster that killed me in a really unpleasant way. The best way I can describe this feeling is as being pushed through something very narrow, like a syringe needle. I used to describe this as severe pain, and I did my best to evade it. This actually led me to reinvent all kinds of lucid dreaming practices, starting with learning how to wake up voluntarily, so that I didn't have to experience being killed by the monster.

What is interesting is that, on reflection, this feeling of "being pushed through a syringe needle" is closer to claustrophobic fear than to pain. It was based on my experience of discomfort from being tightly squeezed, which wasn't actually painful. It's as if my brain created a pain-resembling substitute from fear.
