While doing some reading on philosophy I came across some interesting questions about the nature of having desires and preferences. One: do you still have preferences and desires when you are unconscious? Two: if you don't, does this call into question the many moral theories that hold that having preferences and desires is what makes one morally significant, since mistreating temporarily unconscious people seems obviously immoral?

Philosophers usually discuss this question when debating the morality of abortion, but to avoid doing any mindkilling I won't mention that topic, except to say in this sentence that I won't mention it.

In more detail, the issue is: a common, intuitive, and logical-seeming explanation for why it is immoral to destroy a typical human being, but not to destroy a rock, is that a typical human being has certain desires (or preferences or values, whatever you wish to call them; I'm using the terms interchangeably) that they wish to fulfill, and destroying them would hinder the fulfillment of these desires.  A rock, by contrast, does not have any such desires, so it is not harmed by being destroyed.  The problem with this is that it also seems immoral to harm a human being who is asleep, or in a temporary coma. And, on the face of it, it seems plausible to say that an unconscious person does not have any desires. (And of course it gets even weirder when considering far-out concepts like a brain emulation that is saved to a hard drive but isn't being run at the moment.)

After thinking about this it occurred to me that this line of reasoning could be taken further.  If I am not thinking about my car at the moment, can I still be said to desire that it not be stolen?  Do I stop having desires about things the instant my attention shifts away from them?

I have compiled a list of possible solutions to this problem, ranked in order from least plausible to most plausible.

1.  One possibility would be to consider it immoral to harm a sleeping person because they will have desires in the future, even if they don't now.  I find this argument extremely implausible because it has some bizarre implications, some of which may lead to insoluble moral contradictions.  For instance, it could be used to argue that it is immoral to destroy skin cells, because it is possible to use them to clone a new person, who will eventually grow up to have desires.

Furthermore, when human beings eventually gain the ability to build AIs that possess desires, this solution interacts with the orthogonality thesis in a catastrophic fashion.  If it is possible to build an AI with any utility function, then for every potential AI one could construct, there is another potential AI that desires the exact opposite.  That leads to total paralysis, since for every potential set of desires we are capable of satisfying, there is another potential set that would be horribly thwarted.  (A toy sketch of this cancellation appears at the end of this item.)

Lastly, this argument implies that you can (and may be obligated to) help someone who doesn't exist, and never has existed, by satisfying their non-personal preferences, without ever having to bother with actually creating them.  This seems strange: I can maybe see an argument for respecting the once-held preferences of those who are dead, but respecting the hypothetical preferences of the never-existent seems absurd.  It also has the same problems with the orthogonality thesis that I mentioned earlier.
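To make the cancellation explicit, here is a toy sketch in Python. It assumes, purely for illustration, that a potential AI's desires can be modelled as signed weights over a handful of outcomes; the outcome names and the modelling choice are my own, not anything from the orthogonality-thesis literature.

```python
# Toy illustration: if every possible desire vector counts morally, then for
# each vector d its mirror -d also counts, so aggregate satisfaction over all
# merely-potential agents cancels to zero no matter what we do.
from itertools import product

OUTCOMES = ["paperclips_made", "humans_flourish", "art_preserved"]  # made-up outcomes

def satisfaction(desires, world):
    """How well a world (a dict of outcome -> 0/1) fulfills a vector of desire weights."""
    return sum(w * world[o] for o, w in zip(OUTCOMES, desires))

# Enumerate every "potential AI" as a desire vector with weights in {-1, 0, +1}.
potential_ais = list(product([-1, 0, 1], repeat=len(OUTCOMES)))

world = {"paperclips_made": 1, "humans_flourish": 1, "art_preserved": 0}
print(sum(satisfaction(d, world) for d in potential_ais))  # 0, whichever world we pick
```

Whatever world we bring about, the total comes out to zero, which is just the paralysis argument restated in code.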

2.  Make the same argument as solution 1, but somehow define the categories more narrowly, so that an unconscious person's ability to have desires in the future differs from that of an uncloned skin cell or an unbuilt AI.  Michael Tooley has tried to do this by distinguishing between things that merely have the "possibility" of becoming a person with desires (e.g., skin cells) and things that have the "capacity" to have desires.  This approach has been criticized, and I find myself pessimistic about it, because categories have a tendency to be "fuzzy" in real life and not have sharp borders.

3.  Another solution may be that desires that one has had in the past continue to count, even when one is unconscious or not thinking about them.  So it's immoral to harm unconscious people because before they were unconscious they had a desire not to be harmed, and it's immoral to steal my car because I desired that it not be stolen earlier when I was thinking about it.

I find this solution fairly convincing.  The only major quibble I have with it is that it gives what some might consider a counterintuitive result on a variation of the sleeping-person question.  Imagine a nano-factory manufactures a sleeping person.  This person is a new and distinct individual, and when they wake up they will proceed to behave as a typical human.  This solution may suggest that it is okay to kill them before they wake up, since they haven't had any desires yet, which does seem odd.

4. Reject the claim that one doesn't have desires when one is unconscious, or when one is not thinking about a topic.  The more I think about this solution, the more obvious it seems.  Generally, when I am rationally deliberating about whether or not I desire something, I consider how many of my values and ideals it fulfills.  It seems like my list of values and ideals remains fairly constant, and that even if I am focusing my attention on one value at a time, it makes sense to say that I still "have" the other values I am not focusing on at the moment.

Obviously I don't think that there's some portion of my brain where my "values" are stored in a neat little Excel spreadsheet.  But they do seem to be a persistent part of its structure in some fashion.  And it makes sense that they'd still be part of its structure when I'm unconscious.  If they weren't, wouldn't my preferences change radically every time I woke up?

In other words, it's bad to harm an unconscious person because they have desires, preferences, values, whatever you wish to call them, that harming them would violate.  And those values are a part of the structure of their mind that doesn't go away when they sleep.  Skin cells and unbuilt AIs, by contrast, have no such values.

Now, while I think that solution 4 resolves the issue of desires and unconsciousness best, I do think solution 3 has a great deal of truth to it as well (for instance, I tend to respect the final wishes of a dead person because they had desires in the past, even if they don't now).  Solutions 3 and 4 are not incompatible at all, so one can believe in both of them.

I'm curious as to what people think of my possible solutions.  Am I right about people still having something like desires in their brain when they are unconscious?

ygert:

The premise that the reason we do not kill people is that they have desires seems deeply flawed. I don't kill people because they are people, and I have a term in my utility function that cares about people. Thinking about it, this term doesn't care about arbitrary desires, but rather about people specifically. (Not necessarily humans, of course. For the same reason, I would not want to kill a sentient alien or AI.) If it were desires that mattered, that would mean a bunch of extremely unintuitive things, far beyond what you cover here.

For example: this premise implies that killing someone who is easygoing and carefree is a lot less bad than killing your normal average person. To me, at least, this conclusion seems rather repugnant. Do carefree people have a lesser moral standing? That is far from obvious.

(Or what about animals? From what we can observe, animals have plenty of desires, nearly as many as humans, or just as many. If we really were using desire as our metric of moral worth, we would have to value animal lives at a very high rate. While I do believe that humanity should treat animals better than they are treated now, I don't think anyone seriously believes that they should be given the same (or very similar) moral weight as humans.)

The "People" argument, once you taboo "people", becomes pretty convoluted; it is to some extent the question of what constitutes a person which the "desires" perspective seeks to answer.

Additionally, if we treat "desires" as a qualitative rather than quantitative measure ("Has desires" rather than "How many desires"), one of your rejections goes away.

That said, I agree with a specific form of this argument, which is that "Desires experienced" isn't a good measure of moral standing, because it fails to add up to normality; it doesn't include everything we'd like to include, and it doesn't exclude everything we'd like to exclude.

The "People" argument, once you taboo "people", becomes pretty convoluted; it is to some extent the question of what constitutes a person which the "desires" perspective seeks to answer.

If I cared about "desires" then I would expect to treat cats and dogs analogously to how I treat humans, and this is patently false if you observe my behavior. Clearly I value "humans", not "animals with desires". Defining "human" might be beyond me, but I still seem to know them when I see 'em.

> this is patently false if you observe my behavior.

Unless you have an insanely low level of akrasia, I'd be wary of using your behavior as a guide to your values.

> I would expect to treat cats and dogs analogously to how I treat humans, and this is patently false if you observe my behavior. Clearly I value "humans", not "animals with desires"

Not necessarily. If animals desire radically different things from humans, then you'd treat them differently even if you valued their desires equally. I don't think dogs and cats have the same sort of complex desires humans do; they seem to value attention and food, and to disvalue pain, fear, and hunger. So as long as you don't actively mistreat animals, you are probably respecting their desires.

If a dog walked up to you and demonstrated that it could read, write, and communicate with you, and seemed to have a genius level IQ, and expressed a desire to go to college and learn theoretical physics, wouldn't you treat it more like a human and less like a normal dog?

> Unless you have an insanely low level of akrasia, I'd be wary of using your behavior as a guide to your values.

I'm not saying "having desires" isn't a factor somewhere, but I'm not a vegetarian, so clearly I don't mind killing animals. I have no de facto objection to eating dog meat instead of cow meat, but I'd be appalled to eat a human. As near as I can tell, this applies exclusively to humans. I strongly suspect I'd be bothered to eat a talking dog, but I suspect both the talking and non-talking dogs have a desire not to be my dinner. The pertinent difference there seems to be communication, not desire.

I'm fine calling the relevant trait "being human" since, in this reality, it's an accurate generalization. I'm fine being wrong in the counter-factual "Dog's Talk" reality, since I don't live there. If I ever find myself living in a world with beings that are both (!human AND !dinner), I'll re-evaluate what traits contribute. Until then, I have enough evidence to rule out "desire", and insufficient evidence to propose anything other than "human" as a replacement :)

Most of the time. Unfortunately, a definition that works "most of the time" is wholly unworkable. Note that the "desire" definition arose out of the abortion debate.

Do not consider this an insistence that you provide a viable alternative; rather, it is an insistence that you provide one only if you find it to be a viable alternative.

> Most of the time. Unfortunately, a definition that works "most of the time" is wholly unworkable.

I think general relativity is pretty workable despite working "most of the time".

I've heard a common argument post-tabooing-"people" to be "I care about things that are close in thing-space to me. "People" is just a word I use for algorithms that run like me." (This is pretty much how I function in a loose sense, actually)

There is something I am having that I label "subjective experience." I value this thing in myself and others. That I can't completely specify it doesn't matter much, I can't fully specify most of my values.

You can't even tell whether or not others have this thing that you label subjective experience. Are you sure you value it in others?

The world in which I am the only being having a subjective experience and everyone else is a p-zombie is ridiculously unlikely.

I didn't suggest you were the only one having a subjective experience. I suggested that what you -label- a subjective experience may not be experienced by others.

Are you seeing the same red I am? Maybe. That doesn't stop us from using a common label to refer to an objective phenomenon.

Similarly, whatever similarities you think you share with other people may be the product of a common label referring to an objective phenomenon, experienced subjectively differently. And that can include subjective existence. The qualities of subjective existence you value are not necessarily present in every possible subjective existence.

And when it results in predictions of different futures I'll care.

You shouldn't try to taboo "people". Actual human brains really do think in terms of the category "people". If the world changes and the category no longer carves it at its joints (say, if superhuman AI is developed), human brains will remain to some extent hardwired with their category of "people". The only answer to the question of what constitutes a person is to go look at how human brains pattern-match things to recognize persons, which is that they look and behave like humans.

That kind of attitude is an extremely effective way of -preventing- you from developing superhuman AI, or at least the kind you'd -want- to develop. Your superhuman AI needs to know the difference between plucked chickens and Greek philosophers.

I think I don't understand what you're saying.

If you try to formalize what "people" or "morally valuable agents" are - also known as tabooing the word "people" - then you run into problems with bad definitions that don't match your intuition and maybe think plucked chickens are people.

That's exactly why I'm arguing that you should not formalize or taboo "people", because it's not a natural category; it's something that is best defined by pointing to a human brain and saying "whatever the brain recognizes as people, that's people".

Are you going to put a human brain in your superhuman AI so it can use it for a reference?

I could if I had to. Or I could tell it to analyze some brains and remember the results.

As OrphanWilde said in their reply, when I say that it is bad to kill people because they have desires that killing them would thwart, what I am actually trying to do is taboo "person" and figure out what makes someone a person. And I think "an entity that has desires" is one of the best definitions of "person" that we've come up with so far. This view is shared by many philosophers; see Wikipedia's entry on "personism", for instance.

> For example: this premise implies that killing someone who is easygoing and carefree is a lot less bad than killing your normal average person. To me, at least, this conclusion seems rather repugnant. Do carefree people have a lesser moral standing? That is far from obvious.

I don't regard the quantity of desires someone has as being what makes it wrong to kill them. Rather it is that they have future-directed preferences at all. In other words, being able to have desires is part of (or maybe all of, I'm not sure) what makes you a "person," and killing a person is bad.

Also, if the quantity of desires were what made you morally significant, you could increase your moral significance by subdividing your desires. For instance, if I were arguing with someone over the last slice of pizza, it would not be a morally valid argument to say "I want to eat the crust, the cheese, and the sauce, while you just want to eat the pizza. Three desires trump one!"

So it's having desires at all that makes us morally significant (i.e., makes us persons), not how many we have.

> Or what about animals? From what we can observe, animals have plenty of desires, nearly as many as humans, or just as many. If we really were using desire as our metric of moral worth, we would have to value animal lives at a very high rate.

What I think gives humans more moral weight than animals is that we are capable of conceiving of the future and having preferences about how the future will go. Most animals do not. Killing a human is bad because of all their future-directed preferences and life-goals that will be thwarted. Most animals, by contrast, literally do not care whether they live or die (animals do behave in ways that result in their continued existence, but these activities seem motivated by a desire to gain pleasure and avoid pain, rather than prudence about the future). So killing an animal generally does not have the same level of badness as killing a human does (animals can feel pleasure and pain however, so the method of killing them had better be painless).

Of course, there might be a few species of animals that can conceive of the future and have preferences about it (the great apes, for instance). I see no difference between killing those animals and killing a human who is mildly retarded.

> It seems plausible to say that an unconscious person does not have any desires.

To further complicate this question, I will proudly report that I have learned to take off my socks in my sleep. Is that an unconscious desire to not have socks on?

I can report instances where I've apparently walked over to my alarm, turned it off, returned to bed, and returned to sleep, all without having any memory of it afterwards. I'm not sure if maybe I should classify this as being awake and having memory formation turned off, though (as I have also been known to respond to someone while mostly-asleep fairly cogently and then almost completely forget the whole thing).

I think next time that happens, you should get someone to ask "hey, are you conscious?" and record your answer.

Ah, but is reporting that one is conscious really evidence of being conscious? (Well, I'm also reasonably confident the answer I would give is "yes".) Unless you meant literally "record your answer yourself", in which case I'm not sure I could pull that one off without waking up sufficiently to fully form memories. Mostly I think this is evidence for the unsurprising conclusion that consciousness is not binary, and possibly for the very slightly more surprising conclusion that memory formation is not the same as consciousness despite the fact that memories are one of the main ways we get evidence of being conscious at some point.

Yeah, I wasn't trying to design a very ambitious experiment. I'm just not sure I can predict what I would say to that question if I were asleep. Could you get the other person to make you convince them that you're conscious if you say yes and have them report back what you say? I predict non-sequiturs!

I always have issues wrapping my head around how to deal with morality or responsibility-related issues when dealing with memory formation. Like really drunk people that say mean things and don't remember them after -- was that really them being mean? Whatever that means.

> I always have issues wrapping my head around how to deal with morality or responsibility-related issues when dealing with memory formation. Like really drunk people that say mean things and don't remember them after -- was that really them being mean? Whatever that means.

I think the best way to look at it is pragmatic (or instrumental or whatever you want to call it) - figure out what behaviour you'd like them to exhibit (e.g. be less mean, generally avoid destructive behaviours), decide whether they can influence it (probably yes, at least by drinking less), and then influence them accordingly. Which is a roundabout way of saying that you should tell them they suck when drunk and you're unhappy with them so hopefully they'll act better next time (or get less drunk) and that legally they should have pretty much the same responsibilities. There's also the secondary question of deciding to what extent being a bad person while drunk suggests that they're also a (less) bad person while not drunk and have maybe just been hiding it well. I tend to think that it probably doesn't (the actual evidence of what we know about them when they're not drunk being more relevant), but I'm not really sure.

> There's also the secondary question of deciding to what extent being a bad person while drunk suggests that they're also a (less) bad person while not drunk and have maybe just been hiding it well.

But then if they're a good person while they're sober but they spend a lot of their time drunk, then they're really a weighted average of two people that computes more skewed toward their drunk self (who can't really coherently answer questions about themselves) and their sober self can't remember what their drunk self did so that self can't either and omgarghblarghcomplicated. I generally do just short-circuit all of these computations the way you describe and don't hang out with people like this, but I have one friend whom I've known forever who's generally okay but sometimes acts really weird. And I can't tell when he's drunk, so he slowly acts a little weird over a long period of time before revealing that he's been drinking all day and then I just feel like I don't know who I'm talking to and whether he'll be that person the next time I see him.

Anyways, I'm not too interested in the specifics of the instrumental side? I was just mainly wondering if the model of "conscious" "persons" breaks down really quickly once you introduce intoxicants that mess with memory formation. It kinda seems like it, huh?

Worse than that, I think it breaks down even without removing memory formation. If someone regularly takes drugs which make them act very differently, it's probably best to model them as two people (or at least two sets of behaviours and reactions attached to one person), even if they remember both sides at all times. On a less drug-related level, for most people, aroused!person acts quite differently from unaroused!person (and while I mainly meant sexual arousal, it's true for anger and other strong emotions as well). Which is just saying that a person acts differently when experiencing different emotions/mental states, which we really already know. It's definitely more salient with drugs, though.

I feel like that's a bit exaggerated, because an angry person will still remember themselves yelling and maybe throwing things. Once they've calmed down, they might still be inclined to argue that what they did was correct and justified, but they won't have trouble admitting they did it. If a person doesn't remember having the experience of yelling and throwing things, they won't know anything about their internal state at the time it happened. So people telling them something happened is evidence that it did, but it was the ... conscious experience of someone else? (Blargh, fuzzy wording.)

Note that scientists have now made some progress in understanding how desires/preferences work. See especially the field of neuroeconomics.

Thanks a lot! I should have read that article a long time ago. I now have greater confidence that option 4 is correct. Judging from what I read, it looks like if a person has a reasonably intact ventral striatum and medial prefrontal cortex (as well as those other regions you mentioned that might also encode subjective value), they can be said to have desires whether they are conscious or not.

Shmi:

I'm wondering what ethical system you are using. It does not seem to be utilitarianism. Do you subscribe to a hard-and-fast deontological rule "agency deserves life", where the agency in question can be current or potential? If so, are your proposed solutions designed to redefine agency in a way that does not conflict with some other parts of your ethics you do not mention, likely implicit consequentialism?

I think that you should analyze and clarify your values before proposing solutions.

> I think that you should analyze and clarify your values before proposing solutions.

The entire point of my attempt to work through this problem was to analyze and clarify my values. Specifically, I was attempting to determine if the common-sense value that "harming sleeping people is just as bad as harming conscious people" in any way conflicted with the preference-utilitarianism-type ethical system I subscribe to. I concluded that it probably didn't, because a person can be said to have "preferences" or "values" even while unconscious.

I like the direction you've started (4). A good way to do something useful here would definitely be to take complicated things that are treated as fundamental in what you've read (like "desires") and try to taboo them to get at how humans recognize these things, or what physical arrangements we mean by these things.

(3) implies that if someone's values change, it is moral to continue to respect their old values as well as their new ones. I can't decide whether this seems counterintuitive or perfectly reasonable.

I think to answer this we need to divide values into categories, which I will call "general" and "specific." A general value would be something like "having friends, having fun, being a good person, etc." A specific value would be something like "being friends with Ted, going to see a movie, volunteering at a soup kitchen, etc." It is a specific manifestation of a more general value.

If someone's specific values change, it is generally because they have found a different specific value that they believe will fulfill their general values more effectively. For instance, my specific value might change from "going to see a movie" to "going for a walk in the park," because I have realized that doing so will fulfill my general value of "having fun" more effectively.

Generally, I think a person's general values are fairly stable, and it is hard to change them without doing something serious to their brain. I think continuing to respect those values, and maybe trying to repair whatever happened to that person's brain, is probably a good idea.

In the case of one's specific values, however, respecting the old specific values after they change would not be a good thing to do, since the reason those values have changed is that the new specific values fulfill a person's general values better. This is, of course, assuming that the person whose specific values have changed is correct that their new specific values fulfill their general values better than their old ones. If they are mistaken it might be better to keep respecting their old specific values.

I think this is related to the concept of CEV. "General desires" are roughly analogous to "volition" while "specific desires" are roughly analogous to "decision."
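To make this distinction a little more concrete, here is a toy sketch in Python. Everything in it (the activity names, the 0-10 scale, the idea of picking whichever option currently scores best against the general values) is my own illustrative assumption, not a claim about how brains actually work.

```python
# Toy model: general values are stable; a "specific value" is just whichever
# concrete option is currently believed to fulfill the general values best.
GENERAL_VALUES = ["having fun"]

# Current beliefs about how well each activity serves each general value (0-10 scale).
beliefs = {
    "going to see a movie": {"having fun": 7},
    "going for a walk in the park": {"having fun": 5},
}

def specific_value(beliefs):
    """Return the activity currently believed to best fulfill the general values."""
    return max(beliefs, key=lambda act: sum(beliefs[act][g] for g in GENERAL_VALUES))

print(specific_value(beliefs))  # "going to see a movie"

# The person realizes a walk would actually be more fun: the specific value
# changes, but the general value ("having fun") itself never changed.
beliefs["going for a walk in the park"]["having fun"] = 9
print(specific_value(beliefs))  # "going for a walk in the park"
```

On this picture, continuing to respect the old specific value after such a change would just mean serving the person's general values less well, which is the point made above.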

I think the conceptual distinction you're trying to make is more like terminal vs instrumental goals rather than general vs specific goals, although they may be correlated. I generally think of "values" as synonymous with "terminal goals", and those are what I was referring to. I agree that people's terminal goals change much less often than instrumental goals.

> Generally, I think a person's general values are fairly stable, and it is hard to change them without doing something serious to their brain.  I think continuing to respect those values, and maybe trying to repair whatever happened to that person's brain, is probably a good idea.

By "something serious", do you mean that you think that a change in terminal goals would require them to become mentally disabled, or just that it would require a fairly substantial change? If the former, then that is a fairly strong claim, which I would like to see evidence for. If the latter, then reversing the change seems cruel to the modified person if they are still functional.

By "something serious", do you mean that you think that a change in terminal goals would require them to become mentally disabled, or just that it would require a fairly substantial change?

I think probably the latter. But regardless of which it is, I think that anyone would regard such a change as highly undesirable, since it is kind of hard to pursue one's terminal goals if the person you're changing into no longer has the same goals.

> If the latter, then reversing the change seems cruel to the modified person if they are still functional.

You're right, of course. It would be similar to a situation where a person died, and the only way to resurrect them would be to kill another, still-living person. The only situation where reversal would be desirable would be if the changed person inflicted large disutilities on others, or if the original person created huge utilities for others. For instance, if a brilliant surgeon who saves dozens of lives per year is changed into a cruel sociopath (the sociopath is capable of functioning perfectly fine in society; they're just evil) who commits horrible crimes, reversing the change would be a no-brainer. But in a more normal situation you are right that it would be cruel.