Lifeism, Anti-Deathism, and Some Other Terminal-Values Rambling

Um.

I think I agree with you, but I'm not sure, and I'm not sure whether the problem is language or whether I'm just really confused.

For the sake of clarity, let's consider a specific hypothetical: Sam is given a button which, if pressed, Sam believes will do two things. First, it will cause there to be two identical-at-the-moment-of-pressing copies of Sam. Second, it will cause one of the copies (call it Sam-X) to suffer a penalty P, and the other copy (call it Sam-Y) to receive a benefit B.

If I've understood you correctly, you would say that for Sam to press that button is an ethical choice, though it might not be a wise choice, depending on the value of (B-P).

Yes?

The relevant formula might be something other than (B-P), depending on Sam's utility function, but otherwise that's essentially what I believe.
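
For concreteness, here is one way to formalize that (the additive framing is an assumption of mine, not anything stated in this exchange): write u(B) for the utility the benefiting copy gains and u(P) for the disutility the penalized copy suffers. If the two copies' outcomes are simply summed, pressing is wise exactly when

$$u(B) - u(P) > 0,$$

which reduces to the bare (B-P) test precisely when u is linear in both. Summing across copies is itself a choice; aggregating by maximum over copies, as the post below does, gives a different answer.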

Raemon: No. I'm not sure whether I think "ethical" is an appropriate word here. (Honestly, I think ethical systems designed for real pre-singularity life are almost always going to break down in extreme situations.)

But basically, I consider the scenario you just described identical to this one: two people are both given a button. If they both press it, then one of them will get penalty P and the other will get benefit B.

People are entitled to make decisions like this. But governments (collective groups of people) are also entitled to restrict decisions if those decisions prove to be common and damaging to society. Given how irrational people are about probability (e.g. the lottery), I think there may be many values of P and B for which society should ban the scenario. I wouldn't jump to conclusions about which values of P and B should be banned; I'd have to see how many people actually chose those options and what effect it had on society. (Which is a scientific question, not a logical one.)

Pavitra's original statement seemed more along these lines: a thousand people agree to press a button that will torture all but one of them for a long time, and the remaining person gets $100. This is an extremely bad decision on everyone's part. Whether or not it's ethical for the participants, I think a society that found people making these decisions all the time has a problem and should fix it somehow.


by Pavitra · 1 min read · 7th Mar 2011 · 89 comments



(Apologies to RSS users: apparently there's no draft button, but only "publish" and "publish-and-go-back-to-the-edit-screen", misleadingly labeled.)

 

You have a button. If you press it, a happy, fulfilled person will be created in a sealed box, and then be painlessly garbage-collected fifteen minutes later. If asked, they would say that they're glad to have existed in spite of their mortality. Because they're sealed in a box, they will leave behind no bereaved friends or family. In short, this takes place in Magic Thought Experiment Land where externalities don't exist. Your choice is between creating a fifteen-minute-long happy life or not.

Do you push the button?

I suspect Eliezer would not, because it would increase the death-count of the universe by one. I would, because it would increase the life-count of the universe by fifteen minutes.

 

Actually, that's an oversimplification of my position. What I believe is that the important part of any algorithm is its output; that additional copies matter not at all; that the net utility of the existence of a group of entities-whose-existence-constitutes-utility is equal to the maximum of the individual utilities; and that the (terminal) utility of the existence of a particular computation is bounded below at zero. I would submit a large number of copies of myself to slavery and/or torture to gain moderate benefits for my primary copy.

(What happens to the last copy of me, of course, does affect the question of "what computation occurs or not". I would subject N out of N+1 copies of myself to torture, but not N out of N. Also, I would hesitate to torture copies of other people, on the grounds that there's a conflict of interest and I can't trust myself to reason honestly. I might feel differently after I'd been using my own fork-slaves for a while.)
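
For concreteness, here is one literal reading of those claims as a toy aggregation rule. The function name, the numbers, and the decision to apply the zero floor to the whole set of copies are my own illustrative choices, not anything specified above:

```python
def terminal_value(copy_utilities):
    """Toy reading of the claims above: only the best-off copy of a
    computation counts (max, not sum), and the terminal utility of the
    computation existing at all is bounded below at zero."""
    if not copy_utilities:  # the computation never runs at all
        return 0.0
    return max(0.0, max(copy_utilities))

# Illustrative numbers (arbitrary):
normal_life  = terminal_value([10.0])                   # no fork-slavery: 10.0
fork_slavery = terminal_value([-100.0] * 10 + [12.0])   # N of N+1 tortured: 12.0
all_tortured = terminal_value([-100.0] * 11)            # N of N tortured: 0.0
```

On this toy rule, torturing N out of N+1 copies still nets the primary copy's gain, while torturing N out of N merely floors at zero, which is worse than the untortured baseline; that matches the distinction drawn above.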

So the real value of pushing the button would be my warm fuzzies, which breaks the no-externalities assumption; setting those aside, I'm indifferent.

 

Nevertheless, even knowing about the heat death of the universe, knowing that anyone born must inevitably die, I do not consider it immoral to create a person, all else being equal.
