Related To: Eliezer's Zombies Sequence, Alicorn's Pain

Today you volunteered for what was billed as an experiment in moral psychology. You enter a small room with a video monitor, a red light, and a button. Before you entered, you were told that you'd be paid $100 for participating in the experiment, but that $10 would be deducted each time you hit the button. On the monitor, you see a person sitting in another room, and you appear to have a two-way audio connection with him. That person is tied down to his chair, with what appear to be electrical leads attached to him. He explains that your red light will soon turn on, which means he will be feeling excruciating pain. But if you press the button in front of you, his pain will stop for a minute, after which the red light will turn on again. The experiment will end in ten minutes.

You're not sure whether to believe him, but pretty soon the red light does turn on, and the person in the monitor cries out in pain and starts struggling against his restraints. You hesitate for a second, but it looks and sounds very convincing to you, so you quickly hit the button. The person in the monitor breathes a big sigh of relief and thanks you profusely. You make some small talk with him, and soon the red light turns on again. You repeat this ten times and then are released from the room. As you're about to leave, the experimenter tells you that there was no actual person behind the video monitor. Instead, the audio/video stream you experienced was generated by one of the following ECPs (exotic computational processes).

  1. An AIXI-like (e.g., AIXI-tl, Monte Carlo AIXI, or some such) agent, programmed with the objective of maximizing the number of button presses.
  2. A brute force optimizer, programmed with a model of your mind, which iterated through all possible audio/video bit streams to find the one that maximizes the number of button presses (see the sketch after this list). (As far as philosophical implications are concerned, this seems essentially identical to 1, so the reader doesn't necessarily have to go learn about AIXI.)
  3. A small team of uploads capable of running at a million times faster than an ordinary human, armed with photo-realistic animation software, and tasked with maximizing the number of your button presses.
  4. A Giant Lookup Table (GLUT) of all possible sense inputs and motor outputs of a person, connected to a virtual body and room.
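
To make item 2 concrete, here is a minimal Python sketch of what such a brute force optimizer would do. Everything in it is illustrative: `subject_model`, its `predict_button_presses` method, and the toy model are stand-ins invented for this sketch, and of course no real machine could enumerate the stream space this way.

```python
from itertools import product

def brute_force_optimizer(all_possible_streams, subject_model):
    """Return the audio/video stream that the model of the subject's mind
    predicts will elicit the most button presses (ECP 2, in caricature)."""
    best_stream, best_presses = None, -1
    for stream in all_possible_streams:  # astronomically many candidates
        # The "model of your mind" is consulted only as a black box:
        # simulate showing it the stream and count the predicted presses.
        presses = subject_model.predict_button_presses(stream)
        if presses > best_presses:
            best_stream, best_presses = stream, presses
    return best_stream  # the stream actually played in your room

# Toy illustration with 4-bit "streams" and a fake subject model that just
# counts 1-bits; the real search space would be every possible ten-minute
# audio/video recording.
class ToySubjectModel:
    def predict_button_presses(self, stream):
        return sum(stream)

print(brute_force_optimizer(product([0, 1], repeat=4), ToySubjectModel()))
# -> (1, 1, 1, 1)
```

Nothing inside that loop looks anything like feeling pain, which is one way of restating the question this post asks: what bit-level property would distinguish such a process from one that does feel it?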

Then she asks, would you like to repeat this experiment for another chance at earning $100?

Presumably, you answer "yes", because you think that despite appearances, none of these ECPs actually feels pain when the red light turns on. (To some of these ECPs, your button presses would constitute positive reinforcement or the lack of negative reinforcement, but mere negative reinforcement, when it happens to others, doesn't seem to be a strong moral disvalue.) Intuitively this seems to be the obviously correct answer, but how would we describe the difference between actual pain and the mere appearance of pain (or mere negative reinforcement), at the level of bits or atoms, if we were specifying the utility function of a potentially super-intelligent AI? (If we cannot even clearly define what seems to be one of the simplest values, then the approach of trying to manually specify such a utility function would appear completely hopeless.)

One idea for trying to understand the nature of pain is to sample the space of possible minds, look for those that seem to be feeling pain, and check whether the underlying computations have anything in common. But as the above thought experiment shows, there are minds that can convincingly simulate the appearance of pain without really feeling it.

Another idea is that perhaps what is bad about pain is that it is a strong negative reinforcement as experienced by a conscious mind. This would be compatible with the thought experiment above, since (intuitively) ECPs 1, 2, and 4 are not conscious, and 3 does not experience strong negative reinforcements. Unfortunately it also implies that fully defining pain as a moral disvalue is at least as hard as the problem of consciousness, so this line of investigation seems to be at an immediate impasse, at least for the moment. (But does anyone see an argument that this is clearly not the right approach?)

What other approaches might work, hopefully without running into one or more problems already known to be hard?


I don't really see what's supposed to be so terribly interesting about pain.

As far as I can see, "pain" is just the name of a particular mental signal that happens to have the qualities of causing you to strongly desire the absence of that signal and of strongly demanding your attention (consider the rather common phrase "X want(s) Y so badly it hurts" and the various non-body-damage-related sensations that are described as sorts of pain or feel similar to pain). It's easy to see why pain evolved to have those qualities, and, once you account for the effects of what is usually seen as the default, they seem sufficient to explain pain's moral status.

For example, is it more moral to subject someone to music they strongly desire not to hear than to non-damaging physical pain, if the desire for absence and inability to avert attention is the same for both? Is the imperative to prevent the pain of a child stronger than the imperative to give the child the last piece of a healthy treat you have been sharing (a piece you personally don't care about), if the wants are equal? Is there a moral imperative to "convert" masochists to take pleasure in some other way?

My intuition is that the particulars of the mental signal are not relevant and that with a few qualifiers the ethics of pain can be reduced to the ethics of wants.

The outward signs of that mental signal are even further removed from relevance, so the thought experiment reduces to the moral status of the wants of the entities in question. Among humans you can treat wants as more or less directly comparable, but if you assign moral status to those entities in the first place you probably need some sort of normalization. I think that, unless the entities were deliberately designed for this particular experiment (in which case you probably ignore the particulars of their wants out of UDT/TDT considerations), their normalized wants for the reward signal would be a lot weaker than a human's want for the absence of overwhelmingly strong pain. So I'd keep the $100 the second time.

My intuition is that the particulars of the mental signal are not relevant and that with a few qualifiers the ethics of pain can be reduced to the ethics of wants.

This does seem like another approach worth investigating, but the ethics of wants seems to have serious problems of its own (see The Preference Utilitarian’s Time Inconsistency Problem and Hacking the CEV for Fun and Profit for a couple of examples). I was hoping that perhaps pain might be a moral disvalue that we can work out independently of wants.

The observation that such an independent disvalue would be convenient doesn't influence whether treating it as such would accurately represent existent human values, and it seems fairly clear to me that it's at least not the majority view. It might be a multiplier, but pain that aligns with the wants of the "sufferer" is usually not considered bad in itself.

Even though many people feel uneasy about tattoos and other body modifications, far fewer would argue that they should be outlawed because they are painful (or lobby for mandatory narcotics); it's more usual to talk about the permanence of the effects. I already mentioned SM. Disapproval of sports tracks painfulness only insofar as it correlates with violence, and even entirely painless violence, like in computer games, finds that same disapproval. Offering women the option to give birth without narcotics is not generally considered unethical, nor are similar options for other medical interventions.

The observation that such an independent disvalue would be convenient doesn't influence whether treating it as such would accurately represent existent human values

I agree, but it influences the optimal research strategy for finding out how to accurately represent existent human values. The fact that pain being an independent disvalue would be convenient implies that we should put a significant effort into investigating that possibility, even if initially it's not the most likely possibility. (ETA: In case it's not clear, this assumes that we may not have enough time/resources to investigate every possibility.)

That is not to say I think everything in ethics reduces to the ethics of wants. While I don't think people do much moralizing about other people suffering pain when that's what those people want, they do a lot of moralizing about other people not doing what would make them happy even if it's not what they want, and even more so about other people not reaching their potential. Reaching their potential seems to be the main case where forcing someone to do something against their will is acceptable because "it's for their own good", and not because it's required for fulfilling the rights of others.

fully defining pain as a moral disvalue is at least as hard as the problem of consciousness

This looks correct to me.

Here's a problem with that line of thought (besides the problem of consciousness itself). Our values were created by evolution. Why would evolution make us care more or less about others depending on whether they have consciousness? I mean, it would make sense, if instead of caring about negative reinforcement as experienced by conscious minds, we just cared about damage to our allies and kin. If our values do refer to consciousness, and assuming that the concept of consciousness has non-trivial information content, what selection pressure caused that information to come into existence?

As always, I'm not sure if I completely understand your question, but here's a stab at an answer anyway.

Evolution made us care about allies and kin, and also about other humans because they could become allies, or because caring is sometimes good for your image. But first you need to determine what a human is. Right now it's a safe bet to say that the humans I interact with are all conscious. So if some entity has a chance of not being conscious, I begin to doubt that it's really a human and whether I should care about it. An analogy: if I encounter a blue banana, I'll have second thoughts about eating it, even though evolution didn't give me a hardcoded drive to desire yellow bananas only. Yellowness (or consciousness) is just empirically correlated with the things evolution wants me to care about.

This analogy seems like a good one. Let me try extending it a bit. Suppose that in our ancestral environment the only banana-shaped things were bananas, and the ability to perceive yellowness had no other fitness benefits. Then wouldn't it be surprising that we even evolved the ability to perceive yellowness, much less to care about it?

In our actual EEA, there were no human-shaped objects that were not humans, so if caring about humans was adaptive, evolution could have just made us care about, say, human-shaped objects that are alive and act intelligently. Why did we evolve the ability (i.e., intuition) to determine whether something is conscious, and to care about that?

Did we? It's not obvious to me that evolution actually programmed us to care about consciousness in particular, rather than just (a subsection of?) current culture conditioning us that way. I'm dubious that all cultures that assigned a particular group of humans a moral status similar to that of animals did this by way of convincing themselves that that group was not "conscious", or had to overcome strong evolutionary programming. Also consider the moral weight that is assigned to clearly unconscious embryos by many people, or the moral weight apparently assigned to fictional characters by some.

Believing that other people are conscious doesn't require any special selection pressure: it falls out of the general ability to understand their utterances as referring to something that's "actually out there", which is useful for other reasons. Also we seem to have a generalized adaptation that says "if all previously encountered instances possessed a certain trait, but this instance doesn't, then begin doubting if this instance is genuine".

I agree with the idea that whether something is conscious is probably a large part of whether or not I care about its pain; it's in line with my current intuition.

Though, I also kind of think that making me care about its qualia makes something conscious.

So I'm confused.


Hmm. I was unable to distinguish between this person and a real person, and one way to perfectly simulate a human being (given the apparently endless resources these simulators have) would be to create something that's very close to a human being. So I'd be inclined not to repeat the experiment. The benefit if I'm right is $100; if I'm wrong, I cause massive amounts of pain. Of course, saying that means that this machine can start mugging me by forcing me to pay money to prevent the pain, which leads me into a question about kidnapping, basically: I should pre-commit to not pressing the button, to encourage AIs not to torture humans/simulacrum humans whom I can't distinguish from real ones.

I know this doesn't get to the point of your question in any way, but I'm throwing it in:

I would refuse to play again, even if the whole thing was pure fiction, because it causes me pain to see others in pain, even if it's fake pain. It's the same reason I didn't go to see the torture porn movies that were so popular a few years back, despite knowing the actors never experienced any actual pain.

Unfortunately it also implies that fully defining pain as a moral disvalue is at least as hard as the problem of consciousness, so this line of investigation seems to be at an immediate impasse, at least for the moment.

This is entirely intuitive for me. I strongly expect moral value to have a lot to do with conscious minds. I see no reason to expect full definitions before we have a much more complete understanding of the mind.

To the people that say that some of these systems are conscious:

1. An AIXI-like (e.g., AIXI-tl, Monte Carlo AIXI, or some such) agent, programmed with the objective of maximizing the number of button presses.

This one is probably conscious, because it seems like the simplest way to simulate conscious-like behaviour is through consciousness. On the one hand, a specific mind contains significantly more information than a description of the behaviour, but on the other hand, much of this information may not matter, so it could be generated by a simple pseudorandom number generator.

2. A brute force optimizer, programmed with a model of your mind, which iterated through all possible audio/video bit streams to find the one that maximizes the number of button presses. (As far as philosophical implications are concerned, this seems essentially identical to 1, so the reader doesn't necessarily have to go learn about AIXI.)

This one runs through all possible outputs. It's like passing the Turing test by outputting entirely random text to an astronomical number of testers. I might be concerned about the many simulations of myself, but the optimizer itself is not thinking.

4. A Giant Lookup Table (GLUT) of all possible sense inputs and motor outputs of a person, connected to a virtual body and room.

The GLUT has been precomputed, so the conscious experience has already taken place (not that conscious experiences necessarily have to occur at specific times; answering that question seems to require a more advanced understanding of consciousness). Rereading outputs that have already been written down does not cause more conscious experience, any more than rewatching a video of someone that has already been recorded does.
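
(A minimal sketch of what I mean, treating the GLUT as an ordinary mapping from input histories to outputs; the entries and the `glut_step` helper below are purely illustrative:)

```python
# Filling in the table is where all the interesting computation happened;
# using it at "experiment time" is nothing but table lookup.
glut = {
    (): "Hello? Can you hear me?",
    ("Yes, I can hear you.",): "The red light is about to turn on...",
    # ...one entry for every possible input history (astronomically many)...
}

def glut_step(input_history):
    # Pure replay: no computation beyond indexing into the precomputed table.
    return glut[tuple(input_history)]

print(glut_step([]))                        # "Hello? Can you hear me?"
print(glut_step(["Yes, I can hear you."]))  # "The red light is about to..."
```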

Practically, there seems to be a very harsh penalty if I am wrong about any of this, so I might refuse to participate anyway, depending on the cost to me.

I also expect consciousness and moral relevance to be related.

If I understand it correctly, AIXI is similar to brute forcing through possible universes using the Solomonoff prior and then searching through actions under each particular set of physical laws.
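
For reference, here is a rough rendering of Hutter's AIXI action-selection rule (omitting details like how the horizon m is chosen): expected reward is summed over every program q for a universal machine U that is consistent with the interaction history so far, weighted by the Solomonoff prior 2^{-ℓ(q)}:

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left[ r_k + \cdots + r_m \right] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Evaluating that inner sum amounts, in effect, to running every environment program consistent with the history, which I take to be the sense in which computing AIXI "brute forces through possible universes."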

So AIXI doesn't have to be conscious, but it probably winds up torturing soooo much stuff when it's computed.

If anyone knows otherwise, please correct me.

Good point; you are correct.

The only thing I disagree with is the implication that this is a bad thing; it might not be. While AIXI may torture some conscious beings, it may also create some beings in a state of unimaginable bliss. Since AIXI doesn't care about our values, there is no reason to expect the computation it carries out to have a near-maximal positive utility, but, for the same reason, it will not have a near-maximal negative utility (as in very negative, not as in nearly zero). Since it is indifferent to us, it will not approach either of these extremes, but, since it is a very powerful entity, it will create much positive and negative utility without intending to. I don't see any reason to think that the negative will outweigh the positive.

Huh. Interesting point.

I don't expect that the majority of universes would fulfill anything near my values, but I'm not so sure about if they would fulfill their inhabitants' values.

I think I'm going to lean towards no, though, just because value is fragile. Agents have values, and the vast majority of possible universes wouldn't be optimized for or fulfill them. The blissful conjunction of values with a structurally sympathetic universe seems unlikely compared to values existing in something which is antithetical to them. Agents would prefer to optimize for something, and there are many more ways to not optimize than there are to optimize.

On the other hand, I guess you could call in the anthropic principle and say that, at the very least, agents won't exist in universes that don't allow them to exist. Most of the laws of physics seem to be orthogonal to the things that I want to do, and I would rather they continue existing than not. And agents that evolve via some natural selection probably won't desire that many things which are physically impossible to instantiate.

Though, I don't see that as strongly in favor of bliss.

I think the problem is in the definition of "optimum". In order to be able to call a state optimal, you must presuppose the laws of physics in order to rule out any better states that are physically impossible. Once we recognize this, it seems that any society must either achieve an optimum or suffer an existential disaster (not necessarily extinction). Value is fragile, but minds are powerful, and if they ever get on the right track they will never get off, barring problems that are impossible to foresee.

The only cases that remain to be considered are extinction and non-extinction existential risk. I'm pretty sure that my value system is indifferent between the existence and nonexistence of a region with no conscious life, but there is no reason for other value systems to share that property. I am unsure how the average value system would judge its surroundings, partially because I am unsure what to average over. Even a group that manages to optimize its surroundings may describe its universe's existence as bad due to the existence of variables that it cannot optimize or other factors, such as a general dislike of anything existing.

If an existential risk does not fully wipe out its species, there is a chance that an optimization process will survive, but with different values from its parent species. On average, the parent species would probably regard this as better than extinction, because the optimization process would share some of its values, while being indifferent to the rest. As weak evidence that this applies to our species, there are many fictional dystopias that, while much worse than our current world, seem preferable to extinction.

All heuristics have unusual contexts that require using other heuristics to do well in. This makes reliable implementation of any given heuristic FAI-complete, you need a whole mind to look over potential problems with any of its parts.

All of these are analogous to the AI-Box experiment; I expect I'd just "lose" again, so I'd decline to play the second time. (I'm not sure it would even have to do anything fundamentally different the second time. I accept the intuition pump here, but it's hard to imagine my epistemic state after finding out I had been so totally fooled.)

I'd say most likely most of those DO imply real pain, and probably vastly greater amounts than merely a human being tortured for 10 min. Now, it is obviously possible to fool a human into thinking pain is happening when it is not, but I'm not so sure that isn't just due to humans being stupid and using sensory input inefficiently.

For such a short time it might even be possible to make a perfect illusion like this, depending on bandwidth, but if this experiment were a year instead of 10 min and the observer were a superintelligence, I'd not be surprised to see a mathematical proof that such an illusion cannot be created without inflicting actual pain on a conscious observer. I wouldn't be surprised if it were possible either. I would be very surprised if a human could come up with such a safe method.

I don't see how consciousness is involved at all here, nor why pain should be treated as a special kind of disutility. Whether a rational agent is constructed of meat or GLUTs is irrelevant. To the extent that it can and does try to diminish our disutility, it is our moral and practical duty to try to diminish its disutility.

I would remove the word 'strong' because at what point a negative reinforcement would become 'strong' is in my mind arbitrary. I see pain intensity as a continuum reaching from insignificant to very intense, so I don't think we need an arbitrary notion of 'strong.'

The actual scenario is full of distractions, but I'll try to ignore them (1).

The thing is, I think the pain in this scenario is a distraction as well. The relevant property in this sort of scenario, which drives my impulse to prevent it and causes me to experience guilt if I don't, is my inference of suffering (2).

So the question becomes, how do I characterize the nature of suffering?

Which is perhaps a mere semantic substitution, but it certainly doesn't feel that way from the inside. I can feel pain without suffering, and suffering without pain, which strongly suggests that there are two different things under discussion, even if I don't clearly understand either of them.

I'll probably play a second round against the GLUT, since if there is any suffering involved there it has already happened and I might as well get some benefit from it. (3)

The others, I am less certain about.

Thinking about it more, I lean towards saying that my intuitions about guilt and shame and moral obligation to reduce suffering are all kind of worthless in this scenario, and I do better to frame the question differently.

For example, given #3, perhaps the right question is not "are those uploads experiencing suffering I ought to alleviate" but rather "ought I cooperate with those uploads, or ought I defect?"

Not that that helps: I'm still left with the question of how to calibrate their cost/benefit equation against my own, which is to say of how significant a term their utility is in my utility function. And sure, I can dodge the question by saying I need more data to be certain, but one can fairly ask what data I'd want... which is really the same question we started with, though stated in a more general way.

So... dunno. I'm spinning my wheels, here.

==

(1) For example, I suspect that my actual response in that scenario is either to keep all the money, under the expectation that what I'm seeing is an actor or otherwise something not experiencing pain (in a non-exotic way, as in the Milgram experiments), or to immediately quit the experiment and leave the room, under the expectation that to do anything else is to reinforce the sadistic monster running this "experiment."

I cannot imagine why I'd ever actually press the button.

But of course that's beside the point here.

(2) That is, if the person wired up to the chair informs me that yes, they are experiencing pain, but it's no big deal, then I don't feel the same impulse to spend $100 to prevent it.

Conversely, if the neurologists monitoring the person's condition assure me credibly that there's no pain, but they are intensely suffering for some other reason, I feel the same impulse to prevent it. E.g., if they will be separated from their family, whom they love, unless I return the $100, I will feel the same impulse to spend $100 to prevent it.

The pain is neither necessary nor sufficient for my reaction.

Note that I'm expressing all this in terms of what I perceive and what that impels me to do, rather than in terms of the moral superiority of one condition over another, because I have a clearer understanding of what I'm talking about with the former. I don't mean to suggest that the latter doesn't exist, nor that the two are equivalent, nor that they aren't. I'm just not talking about the latter yet.

(3) I say "probably" because there are acausal decision issues that arise here that might make me decide otherwise, but I think those issues are also beside your point.

Also, incidentally, if there is any suffering involved, the creation of the GLUT was an act of monstrous cruelty on a scale I can't begin to conceive.

Why stop at pain? Imagine a similar experiment: you pay to cause the participant pleasure, or to avoid extreme boredom or psychological distress. Similar logic seems to apply to these cases as well.

The sensation of pain "in and of itself" has no moral disvalue. What matters is "damage". Usually things which cause pain also go some way towards causing injury, and injury prevents an agent from achieving the things it could otherwise have done. Pain is also 'coercive' in the sense that it stops a person in their tracks (whatever they were trying to do, now all they can do is moan and writhe). Effectively it's a form of physical restraint.

Then there are a whole bunch of second-order effects. E.g. hurting people is bad because it creates a climate where disputes are resolved by physical violence.

Perhaps these considerations don't help us to distinguish between "actual pain" and "the appearance of pain", but we don't need to - what's important is distinguishing "the appearance of pain when it has moral disvalue" and "the appearance of pain when it doesn't".

That seems like a very counter-intuitive position to take, at least according to my intuitions. Do you think the sensation of pleasure in and of itself has moral value? If so, why the asymmetry? If not, what actually does have moral value?

Have you written more about your position elsewhere, or is it a standard one that I can look up?

FWIW I'm basically in the same position as AlephNeil (and I'm puzzled at the two downvotes: the response is informative, in good faith, and not incoherent).

If you (say, you-on-the-other-side-of-the-camera-link) hurt me, the most important effects from that pain are on my plans and desires: the pain will cause me to do (or avoid doing) certain things in the future, and I might have preferred otherwise. Maybe I'll flinch when I come into contact with you again, or with people who look like you, and I would have preferred to look happy and trusting.

It's not clear that the ECPs as posited are "feeling pain" in the same sense; if I refrain from pushing the button, so that I can pocket the $100, I have no reason to believe that this will cause the ECP to do (or avoid doing) some things in the future when it would have preferred otherwise, or cause it to feel ill-disposed toward me.

As for pleasure, I think pleasure you have not chosen and that has the same effect on you as a pain would have (derailing your current plans and desires) also has moral disvalue; only freely chosen pleasure, that reinforces things you already value or want to value, is a true benefit. (For a fictional example of "inflicted pleasure" consider Larry Niven's tasp weapon.)

That position implies we should be indifferent between torturing a person for six hours in a way that leaves no permanent damage and just making them sit in a room unable to engage in their normal activities for six hours (or, if you try to escape by saying all pain causes permanent psychological damage, then we should still be indifferent between killing a person quickly and torturing em for six hours and then killing em).

I think this is a controversial enough position that even if you're willing to bite that bullet, before you state it you should at least say you understand the implication, are willing to bite the bullet, and maybe provide a brief explanation of why.

First I want to note that "the sensation of pain, considered in and of itself" is, after "the redness of red", the second most standard example of "qualia". So if, like good Dennettians, we're going to deny that qualia exist then we'd better deny that "the sensation of pain in and of itself" has moral disvalue!

Instead we should be considering: "the sensation of pain in and of how-it-relates-to-other-stuff". So how does pain relate to other stuff? It comes down to the fact that pain is the body's "damage alarm system", whose immediate purpose is to limit the extent of injury by preventing a person from continuing with whatever action was beginning to cause damage.

So if you want to deny qualia while still holding that pain is morally awful (albeit not "in and of itself") then I think you're forced at least some of the way towards my position. 'Pain' is an arrow pointing towards 'damage' but not always succeeding - a bit like how 'sweetness' points towards 'sugar'. This is an oversimplification, but one could almost say that I get the rest of the way by looking at where the arrow is pointing rather than at the arrow itself. (Similarly, what's "bad" is not when the smoke alarm sounds but when the house burns down.)

That position implies we should be indifferent between torturing ... (or if you try to escape by saying all pain causes permanent psychological damage

Well, it's manifestly not true that all pain causes permanent psychological damage (e.g. the pain from exercising hard, from the kind of 'play-fighting' that young boys do, or from spicy food) but it seems plausible that 'torture' does.

then we should still be indifferent between killing a person quickly, or torturing em for six hours and then killing em).

I admit this gave me pause.

There's a horrible true story on the internet about a woman who was lobotomised by an evil psychologist, in such a way that she was left as a 'zombie' afterwards (no, not the philosophers' kind of zombie). I've rot13ed a phrase that will help you google it, if you really want, but please don't feel obliged to: wbhearl vagb znqarff.

Let it be granted that this woman felt no pain during her 'operation' and that she wasn't told or was too confused to figure out what exactly was happening, or why the doctor kept asking her simple things - e.g. her name, or to hum her favourite tune. (The real reason, as best I can tell, was "to see how well a person with that amount of brain damage could still do so and so")

What I want to say is that even with these elaborations, the story is just as repulsive - it offends our moral sense just as much - as any story of torture. This would still be true even if the victim had died shortly after the lobotomy. More to the point, this story is massively more repulsive than, say, a story where Paul Atreides has his hand thrust into a 'pain box' for half an hour before being executed (presumably by a gom jabbar). (And that in turn feels somewhat more repulsive than a story where Paul Atreides is tricked into voluntarily holding his hand in the pain box and then executed, despite the fact that the pain is just as bad.)

Torture isn't just a synonym for "excruciating pain" - it's more complicated than that.

Here's a more appetising bullet: We should be indifferent between "Paul Atreides executed at time 0" and "Paul Atreides tricked into voluntarily holding his hand in the pain box for half an hour and then executed". (I'd be lying if I said I was 100% happy about biting it, but neither am I 100% sure that my position is inconsistent otherwise.)

First I want to note that "the sensation of pain, considered in and of itself" is, after "the redness of red", the second most standard example of "qualia". So if, like good Dennettians, we're going to deny that qualia exist then we'd better deny that "the sensation of pain in and of itself" has moral disvalue!

I know Dennett's usually right about this sort of thing and so there must be something to that argument, but I've never been able to understand it no matter how hard I try. It looks too much like wishful thinking - "these qualia things are really confusing, so screw that." Certainly it's not the sort of Reduction with a capital "R" I've heard that's left me genuinely satisfied about the nonexistence of things like Free Will or Good or Essence-Of-Chair-Ness.

I would be hesitant to say the sensation of pain in and of itself has moral disvalue; I would say that people have preferences against pain and that the violation of those preferences causes moral disvalue in the same sense as the violation of any other preference. I would have no trouble with inflicting pain on a masochist, a person with pain asymbolia, or a person voluntarily undergoing some kind of conditioning.

Damage can also be something people have a preference against, but it's not necessarily more important than pain. There are amounts of torture such that I would prefer permanently losing a finger to undergoing that torture, and I suspect it's the same for most other people.

What I want to say is that even with these elaborations, the story is just as repulsive - it offends our moral sense just as much - as any story of torture. This would still be true even if the victim had died shortly after the lobotomy.

You seem to be arguing from "Things other than pain are bad" to "Pain is not bad", which is not valid.

I admit your Paul Atreides example doesn't disgust me so much, but I think that's because number one I have no mental imagery associated with gom jabbars, and number two I feel like he's a legendary Messiah figure so he should be able to take it.

If we start talking about non-Kwisatz Haderach people, like say your little sister, and we start talking about them being whipped to death instead of an invisible and inscrutable gom jabbar, I find my intuition shifts pretty far the other direction.

I'd be lying if I said I was 100% happy about biting it, but neither am I 100% sure that my position is inconsistent otherwise.

So I'm reading about your moral system in your other post, and I don't want to get into debating it fully here. But surely you can recognize that just as some things and systems are beautiful and fascinating and complex, there are other systems that are especially and uniquely horrible, and that it is a moral credit to remove them from the world. Sometimes I read about the more horrible atrocities perpetrated in the Nazi camps and North Korea, and I feel physically sick that there is no way I can just kill everyone involved, the torturers and victims both, and relieve them of their suffering, and that this is the strongest moral imperative imaginable, much more important than the part where we make sure there are lots of rainforests and interesting buildings and such. Have you never felt this emotion? And if so, have you ever read a really good fictional dystopian work?


There are amounts of torture such that I would prefer permanently losing a finger to undergoing that torture, and I suspect it's the same for most other people.

What if you could be assured that you would have no bad memories of it? (That is, what if you can recall it but doing so doesn't evoke any negative emotions?)

If I could be assured that I would be genuinely undamaged afterwards, then an interval of intense pain no matter how intense doesn't seem like a big deal. (As I recall, Dennett makes this same point somewhere in Darwin's Dangerous Idea, as an illustration of what's wrong with the kind of utilitarianism that scores everything in terms of pain and pleasure.)

You seem to be arguing from "Things other than pain are bad" to "Pain is not bad", which is not valid.

You keep talking about torture rather than just pain. The point of my bringing up the 'lobotomy story' was to suggest that what makes it so awful has a good deal in common with what makes torture so awful. Something about a person idly and 'cruelly' doing something 'horrible' and 'disgusting' to a victim over whom they have complete power. Using another human as an 'instrument' rather than as an end in itself. Pain is not an essential ingredient here.

If we start talking about non-Kwisatz Haderach people, like say your little sister, and we start talking about them being whipped to death instead of an invisible and inscrutable gom jabbar, I find my intuition shifts pretty far the other direction.

Yeah, but this is reintroducing some of the 'extra ingredients', besides pain alone, that make torture awful.

So I'm reading about your moral system in your other post, and I don't want to get into debating it fully here. But surely you can recognize that just as some things and systems are beautiful and fascinating and complex, there are other systems that are especially and uniquely horrible, and that it is a moral credit to remove them from the world. Sometimes I read about the more horrible atrocities perpetrated in the Nazi camps and North Korea, and I feel physically sick that there is no way I can just kill everyone involved, the torturers and victims both, and relieve them of their suffering, and that this is the strongest moral imperative imaginable, much more important than the part where we make sure there are lots of rainforests and interesting buildings and such. Have you never felt this emotion? And if so, have you ever read a really good fictional dystopian work?

You keep assuming that somehow I have to make the inference "if pain has no moral disvalue in itself, then neither does torture". I do not. If I can say that "the lobotomy story" is an abomination even if no pain was caused, then I think I can quite easily judge that the Nazi atrocities were loathsome without having to bring in 'the intrinsic awfulness of pain'. The Nazi and North Korean atrocities were 'ugly' - in fact they are among the 'ugliest' things that humans have ever accomplished.

Conclusion: "It all adds up to normality". Ethical reasoning involves a complex network of related concepts, one of which is pain. Pain - the pain of conscious creatures - is often taken to be a (or even 'the') terminal disvalue. Perhaps the best way of looking at my approach is to regard it as a demonstration that if you kill the 'pain node' then actually, the rest of the network does the job just fine (with maybe one or two slightly problematic cases at the fringe, but then there are always problematic cases in any ethical system.)

(The advantage of taking out the 'pain node' is that it sidesteps unproductive philosophical debates about qualia.)

saying all pain causes permanent psychological damage

Not all pain, but certainly that's a factor.

we should still be indifferent between killing a person quickly, or torturing em for six hours and then killing em

I don't see how that follows. Killing someone quickly leaves them no time to contemplate the fact that all their plans and desires have come to a dead end; what is awful about torture is the knowledge of one's plans and desires being thwarted - even more awful than not allowing that person to carry out their plans and fulfill their desires. (Also, in many cases other people have plans and desires for us: they prefer us to be alive and well, to enjoy pleasure and avoid pain, and so on. Torture thwarts those desires as well, over and above killing.)

what is awful about torture is the knowledge of one's plans and desires being thwarted

I don't think that this is as awful as the degree to which torture hurts.

Note that you're saying that not only is the thwarting of someone's plans a disvalue, having them contemplate the thwarting is an additional disvalue.

Also, since being tortured makes contemplation harder, you should prefer torturing someone for six hours and them killing them to letting them contemplate their imminent death in comfort for six hours and then killing them.

you should prefer torturing someone for six hours and them killing them to letting them contemplate their imminent death in comfort for six hours and then killing them

When you're being tortured you have no choice but to attend to the pain: you are not cognitively free to contemplate anything other than your own destruction. In comfort you could at least aim for a more pleasant state of mind - you can make your own plans for those six hours instead of following the torturer's, and if you have the strength, refuse to contemplate your own death.

Also, in many cases other people have plans and desires for us: they prefer us to be alive and well, to enjoy pleasure and avoid pain, and so on.

But why do other people prefer for you to avoid pain, if pain is not a moral disvalue? And what exactly do they mean by "pain" (which is what the post asked in the first place)?

I liked this comment on Alicorn's post: "(pain) makes you want to pull away; it's a flinch, abstracted". What seems to matter about pain, when I think about scenarios such as the one you proposed, is its permanent aversive effect, something not present in simulated pain.

Trying to frame this in terms of anticipated experiences, the question I would want to ask about the posited ECP is, "if I meet this ECP again will they hold it against me that I failed to press the button, because of negative reinforcement in our first encounter". The way you've framed the thought experiment suggests that they won't have a memory of the encounter, in fact that I'm not even likely to think of them as an entity I might "meet".

I didn't downvote AlephNeil, but I think a good rule is that if you say something that is likely to be highly counterintuitive to your audience, to give or link to some explanation of why you believe that (even if it's just "according to my intuition"). Otherwise it seems very hard to know what to do with the information provided.

Do you think the sensation of pleasure in and of itself has moral value?

No.

Have you written more about your position elsewhere

I have, but not in a convenient form. I'll just paste some stuff into this comment box:

Hitherto I had been some kind of utilitarian: The purest essence of wrongness is causing suffering to a sentient being, and the amount of wrongness increases with the amount of suffering. Something similar is true concerning virtue and happiness, though I realized even then that one has to be very careful in how 'happiness' is formulated. After all, we don't want to end up concluding that synthesizing Huxley's drug "soma" is humanity's highest ethical goal. If pressed to refine my concept of happiness, I had two avenues open: (i) Try to prise apart "animal happiness" - a meaningless and capricious flood of neurochemicals - from a higher "rational happiness" which can only be derived from recognition of truth or beauty (ii) Retreat to the view that "in any case, morality is just a bunch of intuitions that helped our ancestors to survive. There's no reason to assume that our moral intuitions are a 'window' onto any larger and more fundamental domain of moral truth."

(Actually, I still regard a weaker version of (ii) as the 'ultimate truth of the matter': On the one hand, it's not hard to believe that in any community of competing intelligent agents, more similar to each other than different, who have evolved by natural selection, moral precepts such as 'the golden rule' are almost guaranteed to arise. On the other, it remains the case that the spectrum of 'ethical dilemmas' that could reasonably arise in our evolutionary history is narrow, and it is easy for ethicists to devise strange situations that escape its confines. I see no reason at all to expect that the principles by which we evaluate the morality of real-world decisions can be refined and systematised to give verdicts on all possible decisions.)

[i.e. "don't take the following too seriously."]

I believe moral value is inherent in those systems and entities that we describe as 'fascinating', 'richly structured' and 'beautiful'. A snappy way of characterising this view is "value-as-profundity". On the other hand, I regard pain and pleasure as having no value at all in themselves.

In the context of interpersonal affairs, then, to do good is ultimately to make the people around you more profound, more interesting, more beautiful - their happiness is irrelevant. To do evil, on the other hand, is to damage and degrade something, shutting down its higher features, closing off its possibilities. Note that feelings of joy usually accompany activities I classify as 'good' (e.g. learning, teaching, creating things, improving fitness) and conversely, pain and suffering tend to accompany damage and degradation. However, in those situations where value-as-profundity diverges from utilitarian value, notice that our moral intuitions tend to favour the former. For instance:

Drug abuse: Taking drugs such as heroin produces feelings of euphoria but only at the cost of degrading and constraining our future behaviour, and damaging our bodies. It is the erosion of profundity that makes heroin abuse wrong, not the withdrawal symptoms, or the fact that the addict's behaviour tends to make others in his community less happy. The latter are both incidental - we can hypothetically imagine that the withdrawal symptoms do not exist and that the addict is all alone in a post-apocalyptic world, and we are still dismayed by the degradation of behaviour that drug addiction produces (just as we would be dismayed by a giraffe with brain damage, irrespective of whether the giraffe felt happy).

The truth hurts: We accept that there are situations where the best way to help someone is to criticise them in a way that we know they will find upsetting. We do this because we want our friend to grow into a better (more profound) version of herself, which cannot happen until she sees her flaws as flaws rather than lovable idiosyncrasies. On the utilitarian view, the rightness of this harsh criticism cannot be accounted for except in respect of its remote consequences - the greater happiness of our improved friend and of those with whom she interacts - yet there is no necessary reason why the end result of a successful self-improvement must be increased happiness, and if it is not then the initial upset will force us to say that our actions were immoral. However, surely it is preferable for our ethical theory to place value in the improvements themselves rather than their contingent psychological effects.

Nature red in tooth and claw (see Q6): Consider the long and eventful story of life on earth. Consider that before the arrival of humankind, almost all animals spent almost all of their lives perched on the edge, struggling against starvation, predators and disease. In a state of nature, suffering is far more prevalent than happiness. Yet suppose we were given a planet like the young earth, and that we knew life could evolve there with a degree of richness comparable to our own, but that the probability of technological, language-using creatures like us evolving is very remote. Sadly, this planet lies in a solar system on a collision course with a black hole, and may be swallowed up before life even appears. Suppose it is within our power to 'deflect' the solar system away from the black hole - should we do so? On the utilitarian view, to save the planet would be to bring a vast amount of unnecessary suffering into being, and (almost certainly) a relatively tiny quantity of joy. However, saving the planet increases the profundity and beauty of the universe, and obviously is in line with our ethical intuitions.

...

is it a standard one that I can look up?

I'm not sure, but it's vaguely Nietzschean. For instance, here's a quote from Thus Spoke Zarathustra:

Man is a rope, fastened between animal and Superman - a rope over an abyss. A dangerous going-across, a dangerous wayfaring, a dangerous looking-back, a dangerous shuddering and staying-still. What is great in man is that he is a bridge and not a goal; what can be loved in man is that he is a going-across and a down-going.

Actually, that one quote doesn't really suffice, but if you're interested, please read sections 4 and 5 of "Zarathustra's Prologue".

If pressed to refine my concept of happiness, I had two avenues open

What about Eliezer's position, which you don't seem to address, that happiness is just one value among many? Why jump to the (again, highly counterintuitive) conclusion that happiness is not a value at all?


What about Eliezer's position, which you don't seem to address, that happiness is just one value among many? Why jump to the (again, highly counterintuitive) conclusion that happiness is not a value at all?

To me it doesn't seem so counterintuitive. I actually came to this view through thinking about tourism, and it struck me that (a) beautiful undisturbed planet is morally preferable to (b) beautiful planet disturbed by sightseers who are passively impressed by its beauty, which they spoil ever so slightly, and contribute nothing (i.e. it doesn't inspire them to create anything beautiful themselves).

In other words, even the "higher happiness" of aesthetic appreciation doesn't necessarily have value. If there's 'intrinsic value' anywhere in the system, it's in nature (or art) itself, not the person appreciating it.

But again, I don't take this 'cold', 'ascetic' concept of morality to be the 'final truth'. I don't think there is such a thing.