I have been trying to absorb the LessWrong near-consensus on cryonics/quantum mechanics/uploading, and I confess to being unpersuaded by it. I'm not hostile to cryonics, just indifferent, and I'm having a bit of trouble articulating why the insights on identity that I have been picking up from the quantum mechanics sequence aren't compelling to me. I offer the following thought experiment in the hope that others may be able to present the argument more effectively if they understand the objection here.

 

Suppose that Omega appears before you and says, “All life on Earth is going to be destroyed tomorrow by [insert cataclysmic event of your choice here]. I offer you the chance to push this button, which will upload your consciousness to a safe place out of reach of the cataclysmic event, preserving all of your memories, etc. up to the moment you push the button and optimizing you such that you will be effectively immortal. However, the uploading process is painful, and because it interferes with your normal perception of time, your original mind/body will subjectively experience the time after you push the button but before the process is complete as a thousand years of the most intense agony. Additionally, I can tell you that a sufficient number of other people will choose to push the button that your uploaded existence will not be lonely.”

 

Do you push the button?

 

My understanding of the LessWrong consensus on this issue is that my uploaded consciousness is me, not just a copy of me. I'm hoping the above hypothetical illustrates why I'm having trouble accepting that.

 


I'm not sure I follow your objection here, but my best guess is something like "the upload can't be me, because I'm experiencing a thousand years of agony, and the upload isn't."

Is that even close to right?

I won't presume to speak for the LW consensus, but personally I would say that the upload is me, and the body is also me. When the body dies in the cataclysm, I have died, and I've also survived. This sounds paradoxical because I'm used to thinking of my identity as traveling along only one track, but in the case you're describing Omega's device has made that no longer true, and in that case I need to stop thinking as though it were.

I am not sure whether either of me, after pressing the button, considers the other me to be them... but I suspect probably not.

Does any of that help?

Oh, and, yes, I press the button. Shortly after pressing it, I both deeply regret having pressed it, and am enormously grateful to myself for having pressed it.

4wedrifid13y
Shortly after pressing it both of me are grateful to myself for having pressed it. I do still consider the other me to be me. It is only after the agony starts to completely strip away my conscious identity and rational thought that I start to experience the regret. Although I suspect even the state of regret wouldn't last long. Regret is a relatively high level emotion, one that would be completely overwhelmed and destroyed by the experience of pain and the desperate, incoherent desire for it to stop.
4TheOtherDave13y
At the risk of utter digression, I'm interested in this question of considering the other me to be me, post-split. The way I experience identity clearly treats the results of various possible future branchpoints as roughly equivalent to one another (and equivalently associated to "me"), but does not treat the results of past branchpoints that way. A decision that has yet to be made feels very different from one that has already been made. Normally it doesn't make much difference -- I don't have much difficulty treating the "me" that put on a different shirt this morning in some other Everett branch as sharing my identity, despite the branchpoint in the past, because we're so awfully similar -- but when we start introducing vast differences in experience, my ability to extend my notion of identity to include them proves inadequate.

The timeless approach you describe strikes me as a useful way of experiencing identity, but I can't imagine actually experiencing identity that way. Is this perspective something that seems intuitively true to you, or is it something you've trained (and if so how?), or is it more that you are describing your intellectual rather than your emotional beliefs, or ...?

Just to be clear: this is entirely a question about human psychology; I'm not asking about the "actual nature of identity" out in the world (whatever that even means, if indeed it means anything at all).
1wedrifid13y
It does seem like something that is intuitively true. I suspect having spent a lot of time considering bizarre duplication-based counterfactuals has had some influence on my intuitions, bringing the intellectual and emotional beliefs somewhat closer together.

Also note that the emotional experience of identifying as 'me' isn't an all-or-nothing question. Even in everyday experience the extent to which I self-identify as 'me' does vary - although always in the high ranges. Which parts are me? comes into it here. So would experimenting with localized magnetic stimulation of certain parts of the brain, if you really looked at the science!

Note that I (guess I) would not continue to identify with the other me as me indefinitely. It would probably go from like looking at a mirror (an abstracted intellectual one in this example) to only a vague feeling of association, over time and depending on stimulus. In the other direction there are definitely parts of my past history that I don't experience as 'me' either - and not purely dependent on time. There are a couple of memories from when I was 5 that feel like me, but some from even my twenties (I am less than thirty) that barely feel like me at all.

I compare this to the experience of turning into a vampire in Alicorn's Luminosity fanfiction. (FYI: That means a couple of days of extreme pain that does not cause any permanent damage.) While being tortured I may not feel all that much identification with either pre-torture human me or post-torture vampire me. As vamp-wedrifid I would (probably) feel a somewhat higher identification with past-human-wedrifid as being 'myself'. Say ballpark 80%. From the perspective of painful-half-turned-wedrifid, the main difference in experience from the me in this Omega counterfactual would be the anticipation of being able to remember the torture as opposed to not. Knowing the way the time forks are set up, it would make a little difference but not all that much.

Summary: Yes, the timeless
0TheOtherDave13y
I've been thinking about this some more, and I'd like to consult your intuitions on some related questions, if you don't mind.

Suppose I come along at T1 and noninvasively copy you into a form capable of effectively hosting everything important about you (e.g., a software upload, or a clone body, or whatever it takes). I don't tell either of you about the other's existence. Let's label the resulting wedrifids W1 and W2 for convenience (labels randomly assigned to the post-copy yous). I then at T2 convert W2 into a chunk of pure orgasmium (O).

If I've understood your view, you would say that at T2, W1 undergoes a utility change (equal to [value(W2) - value(O)]), though of course W1 is unaware of the fact. Yes?

Whereas in an alternative scenario where at T2 I create a chunk of orgasmium (O2) out of interstellar hydrogen, without copying you first, W1 (which is uniquely you) doesn't experience any utility change at all at T2. Yes?

Feel free to replace the orgasmium with anything else - rocks, a different person altogether, a puppy, W2 experiencing a thousand years of torture, etc. - if that changes your intuitions.
0wedrifid13y
As situations become harder to imagine in a tangible sense, it becomes harder to extrapolate from intuitions meaningfully. But I can give some response in this case.

Utility functions operate over entire configuration states of the universe - values of objects or beings in the universe cannot by default be added or subtracted. Crudely speaking, W1 undergoes a utility change of value(universe has W1, O) - value(universe has W1, W2). The change would be significant - clones have value. And this is the first clone. Transforming a hypothetical W534 into orgasmium would be a far, far lesser loss.

It is worth elaborating here that the states of the universe that utility is evaluated on are timeless. The entire wave equation gets thrown in, not just a state at a specific time. This means [W1, hydrogen -> W1, W2 -> W1, O] can be preferred over [W1, hydrogen -> W1, O], or anti-preferred, as appropriate, without it being an exceptional case. This matches the intuitions most people have in everyday use - it just formulates them coherently.

In this case W1 does not seem to care all that much about what happened at T2. Maybe a little. Orgasmium sounds kind of more interesting to have around than hydrogen. Also note that the transition at T1 leaves W2's utility function at a high percentage of W1's - although W2 definitely doesn't know about it!
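(A minimal sketch of the "utility over whole world-histories" point above, in Python. The entity names, per-state weights, and numbers are purely hypothetical illustrations, not anything wedrifid specified; real preferences need not decompose per state at all.)

```python
# Hypothetical illustration: a utility function evaluated over an entire
# world-history (a sequence of states), not over individual objects summed
# in isolation. All entities and weights below are made up for the sketch.

PER_STATE_WEIGHTS = {"W1": 10.0, "W2": 9.0, "O": 2.0, "hydrogen": 0.0}

def utility(history):
    """Score a whole timeline (list of world-states).

    Because the whole history is the argument, a timeline like
    [W1+hydrogen -> W1+W2 -> W1+O] can be valued differently from
    [W1+hydrogen -> W1+hydrogen -> W1+O] even though both end in the
    same final state.
    """
    return sum(PER_STATE_WEIGHTS[entity] for state in history for entity in state)

history_with_copy = [("W1", "hydrogen"), ("W1", "W2"), ("W1", "O")]
history_without   = [("W1", "hydrogen"), ("W1", "hydrogen"), ("W1", "O")]

print(utility(history_with_copy))  # 41.0: the copy existed for a while
print(utility(history_without))    # 32.0: it never did
```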
0TheOtherDave13y
Huh. I'm not sure I even followed that. I'll have to stare at it a while longer. Thanks again for a thoughtful reply.
0TheOtherDave13y
Neat. I'm somewhat envious. Totally agreed about identifying-as-me being a complex thing, and looking at brain science contributing to it. Actually, when I first encountered the concept of blindsight as an undergraduate, it pretty much destroyed my intuition of unique identity. I guess I just haven't thought carefully enough about self-duplication scenarios. Thanks for the thoughtful reply.
PlaidX13y180

Why does every other hypothetical situation on this site involve torture or horrible pain? What is wrong with you people?

Edit: I realize I've been unduly inflammatory about this. I'll restrict myself in the future to offering non-torture alternative formulations of scenarios when appropriate.

Why does every other hypothetical situation on this site involve torture or horrible pain? What is wrong with you people?

We understand why edge cases and extremes are critical when testing a system - be that a program, a philosophy, a decision theory or even just a line of logic.

9TheOtherDave13y
I've often wondered that. In some sense, it's not actually true... lots of hypotheticals on this site involve entirely mundane situations. But it's true that when we start creating very large stakes hypotheticals, the torture implements come out. I suspect it's because we don't know how to talk about the opposite direction, so the only way we know to discuss a huge relative disutility is to talk about pain. I mean, the thing that is to how-I-am-now as how-I-am-now is to a thousand years of pain is... well, what, exactly?
0PlaidX13y
Why do people feel the need to discuss "huge relative disutilities"? What's the difference between that and being obnoxiously hyperbolic? In the current example, I'm not even sure what kind of point he's trying to make. It sounds like he's saying "Some people like bagels. But what if someone poisoned your bagel with a poison that made your blood turn into fire ants?" Is this an autism thing? There were people doing this at the meetup I went to as well.
4wedrifid13y
I don't know if it's an autism thing... but I'm definitely going to have to include that in a hypothetical one of these days. :)
2CuSithBell13y
YES. Something like: "So Omega offers you a bagel, saying 'Here is a bagel. If this is a thought experiment, it is filled with fire-ant poison.' Do you eat it?!" ?
3TheOtherDave13y
Absolutely! After all, eating fire-ant poison in a thought experiment is harmless, whereas in the real world I'd have a tasty bagel.
3NihilCredo13y
Well, what other kind of disutility would you suggest that could conceivably counterbalance the attractiveness of immortality?
1Zetetic13y
It seems like moral problems get a negative phrasing more often than not in general, not just when Yudkowsky is writing them. I mean, you have the trolley problem, the violinist, pretty much all of these; the list goes on. Have you ever looked at the morality subsections of any philosophy forums? Everything is about rape, torture, murder, etc. I just assumed that fear is a bigger motivator than potential pleasantness and is a common aspect of rhetoric in general.

I think that at least on some level it's just the name of the game: moral dilemma -> reasoning over hard decisions during very negative situations, not because ethicists are autistic, but because that is the hard part of morality for most humans. When I overhear people arguing over moral issues, I hear them talking about whether torture is ever justified or if murder is ever o.k. Arguing about whether the tradeoff of killing one fat man to save five people is justified is more meaningful to us as humans than debating whether, say, we should give children bigger lollipops if it means there can't be as much raw material for puppy chow (ergo, we will end up with fewer puppies, since we are all responsible and need to feed our puppies plenty, but we want as many puppies as possible because puppies are cute, but so are happy children).

This isn't to say that simply because this is how it's done currently, it is the most rational way to carry on a moral dialogue, only that you seem to be committing a fundamental attribution error due to a lack of general exposure to moral dilemmas and the people arguing them.

Besides, it's not like I'm thinking about torture all the time just because I'm considering moral dilemmas in the abstract. I think that most people can differentiate between an illustration meant to show a certain sort of puzzle and reality. I don't get depressed or anxious after reading LessWrong; if anything, I'm happier and more excited and revitalized. So I'm just not picking up on the neurosi
2PlaidX13y
Considering this style of thinking has led LessWrong to redact whole sets of posts out of (arguably quite delusional) cosmic horror, I think there's plenty of neurosis to go around, and that it runs all the way to the top. I can certainly believe not everybody here is part of it, but even then, it seems in poor taste. The moral problems you link to don't strike me as philosophically illuminating; they just seem like something to talk about at a bad party.
0Zetetic13y
I catch your drift about the post deletion, and I think that there is a bit of neurosis in the way of secrecy and sometimes keeping order in questionable ways, but that wasn't what you brought up initially; you brought up the tendency to reason about moral dilemmas that are generally quite dark. I was merely pointing out that this seems like the norm in moral thought experiments, not just the norm on LessWrong. I might concede your point if you provide at least a few convincing counterexamples; I just haven't really seen any. If anything, I worry more about the tendency to call deviations from LessWrong standards insane, as it seems to be more of an in-group/out-group bias than is usually admitted, though it might be improving.
2PlaidX13y
Yeah, really what I find to be the ugliest thing about LessWrong by far is the sense of self-importance, which contributed to the post deletion quite a bit as well. Maybe it's the combination of these factors that's the problem.

When I read mainstream philosophical discourse about pushing a fat man in front of a trolley, it just seems like a goofy hypothetical example. But LessWrong seems to believe that it carries the world on its shoulders, and when they talk about deciding between torture and dust specks, or torture and alien invasion, or torture and more torture, I get the impression people are treating this at least in part as though they actually expect to have to make this kind of decision.

If all the situations you think about involve horrible things, regardless of the reason for it, you will find your intuitions gradually drifting into paranoia. There's a certain logic to "hope for the best, prepare for the worst", but I get the impression that for a lot of people, thinking about horrible things is simply instinctual and the reasons they give for it are rationalizations.
1Zetetic13y
Do you think that maybe it could also be tied up with this sort of thing? Most of the ethical content of this site seems to be heavily related to the sort of approach Eliezer takes to FAI. This isn't surprising. Part of the mission of this site is to proselytize the idea that FAI is a dire issue that isn't getting anywhere near enough attention. I tend to agree with that idea.

Existential risk aversion is really the backbone of this site. The flow of conversation is driven by it, and you see its influence everywhere. The point of being rational in the LessWrongian sense is to avoid rationalizing away the problems we face each and every day, to escape the human tendency to avoid difficult problems until we are forced to face them.

In any event, my main interest in this site is inexorably tied in with existential risk aversion. I want to work on AGI, but I'm now convinced that FAI is a necessity. Even if you disagree with that, it is still the case that there are going to be many ethical dilemmas coming down the pipe as we gain more and more power to change our environment and ourselves through technology. There are many more ways to screw up than there are to get it right.

This is all there is to it: someone is going to be making some very hard decisions in the relatively near future, and there are going to be some serious roadblocks to progress if we do not equip people with the tools they need to sort out new, bizarre and disorienting ethical dilemmas. I believe this is likely to be the case. We have extreme anti-aging, nanotech and AGI to look forward to, to name only a few. The ethical issues that come hand in hand with these sorts of technologies are immense and difficult to sort out. Very few people take these issues seriously; even fewer are trying to actually tackle them, and those who are don't seem to be doing a good enough job. It is my understanding that changing this state of affairs is a big motive behind LessWrong. Maybe LessWrong isn't all that it sh
0wedrifid13y
I resent the suggestion that I instinctively think of 3^^^3 dust specks! I have to twist my cortex in all sorts of heritage violating imaginative ways to come up with the horrible things I like to propose in goofy hypotheticals! I further assert that executing the kind of playfully ridiculous-but-literal conversation patterns that involve bizarre horrible things did not help my ancestors get laid.
1TheOtherDave13y
FWIW, I'm neurotypical and not exceptionally obnoxious. Can't speak for "people." I can speak a little bit to why I do it, when I do.

One difficulty with comparing consequentialist and deontological ethical frameworks is the fact that in many plausible scenarios, they make the same predictions. I can talk about why it's a bad idea to rob a bank in terms of its consequences, but a deontologist will just shrug and say "Or, you can just acknowledge that it's wrong to rob banks, which is simpler," and it's not clear we've accomplished anything.

So to disambiguate them, it's helpful to introduce cases where optimizing consequences requires violating deontological rules. And to turn up the contrast, it's often helpful to (a) choose really significant deontological rules, rather than peripheral ones, and (b) introduce very large differences between the value of the +rules and -rules conditions. Which leads to large relative disutilities.

Now, one can certainly say "But why is comparing consequentialist and deontological ethical frameworks so important that you're willing to think about such awful things in order to do it? Can't you come up with nicer examples? Or, if not, think about something else altogether?" To which I won't have a response.

As for the current example, I'm not exactly sure what point he's making either, but see my comment on the post for my best guess as to what point he's making, and my reaction to that point.
-1PlaidX13y
I think part of what bothers me about these things is I get the impression the readers of LessWrong are PICKING UP these neuroses from each other, learning by example that this is how you go about things. Need to clarify an ethical question, or get an intuitive read on some esoteric decision theory thing, or just make a point? Add torture! If Yudkowsky does it, it must be a rational and healthy way to think, right?
1TheOtherDave13y
Interesting. Yeah, I can see where that impression comes from, though I'm not sure it's accurate. If you notice me using hypothetical suffering in examples where you can come up with an alternate example that expresses the same things except for the suffering, feel free to call me on it, either publicly or privately.
1ewang13y
I cringed when I read about that "1000 years of terrible agony". Just thinking about that is bad enough.

I'm hoping the above hypothetical illustrates why I'm having trouble accepting that.

I'm sorry, but I don't understand the illustration. My answer would be the same if my original mind/body was immediately and painlessly dissolved, and it was my uploaded (copied?) mind that experienced the thousand years of pain. Same answer in a more realistic scenario in which I remain physically embodied, but the pain and immortality are caused by ordinary vampire venom rather than some bogus cryonics scheme orchestrated by Omega. :)

I would probably request painles...

consider the weirdness of 'someone' remembering that his younger self didn't really care for him.

Well, that happens all the time in the actual world. It may be weird, but it's a weird we're accustomed to.

Damn. I laughed so hard at your comment that my dentures fell out. I should have flossed more.

1NihilCredo13y
I have a little trouble seeing this weirdness. Imagine if you were put in Prismattic's scenario, and chose a painless death as you said; you would go to sleep fully expecting never to wake up again. Immediately after you fall asleep but before Omega can kill you, his trickster brother Omicron sneaks in, uploads your consciousness, and wakes up your uploaded copy somewhere safe. Now think about what that consciousness would feel upon waking up. Is that what you were describing in the quote above, and is that particularly weird?
0Perplexed13y
Yes. No. I was incorrect in calling that 'weird'. Thx to you and TheOtherDave for pointing out my mistake.

Hmm, speaking as someone who sort of buys into the cryonics part but doesn't buy into the rest of what you label as the "LW consensus", I think that for all of these issues the level of consensus here is probably overestimated. Note that the QM sequence is the sequence with by far the most subject-matter experts who would disagree.

As for the button, I'm not sure if I'd push it or not, I suspect no. But that may indicate irrationality on my part more than any coherent notion of what constitutes "me".

1Sniffnoy13y
I suspect it may be helpful when discussing this to split the QM sequence into the direct QM part, and the "timeless physics" part at the end. The latter seems to have generated a lot more disagreement than the former.
2wedrifid13y
Many Worlds was discussed in the direct QM part, was it not? People whine about that all the time.
1David_Gerard13y
The comments section shows quite a lot of disagreement and "Eliezer, you can't actually do what you just did" along the way.

I would push the button. I'd also feel very grateful to myself for having pushed it and undergone that torment for my sake. Probably similar to the gratefulness that Christians feel for Jesus when they think of the crucifixion. The survivors would probably create a holiday to memorialize their own sacrifice for themselves, which sounds kinda self-serving, but hell... I'd think I deserve it.

I, for one, would not say that an upload is "me," or at least doesn't fulfill all of the parts of how I use "me." The most notable lack, since I think I do disagree with LW consensus here, is continuity.

Do you push the button?

My understanding of the LessWrong consensus on this issue is that my uploaded consciousness is me, not just a copy of me. I'm hoping the above hypothetical illustrates why I'm having trouble accepting that.

I would consider both consciousnesses you. The problem seems to be one of preference. I would press the button but I can understand why people would not.

For a definition of "effectively" such that future lifespan >> 1000 years, yes. The uploading process as described will be that painful for everyone, so either:

a) Everyone will spend roughly the same amount of time getting over the pain, and I wouldn't miss much of significance or be specifically disadvantaged.

or

b) Being uploaded would afford us the capability to delete our memories of the pain; so, though it would be experienced, it wouldn't have to be remembered, reducing its overall effect.

This response assumes that the experience of the...

3JoshuaZ13y
I don't think you are interpreting the hypothetical as Prismattic intended. You split into two versions, one of which is an upload of you right before the pain starts. The other version (your brain undergoing something like very slow deconstruction) experiences a thousand years of agony.
2Dreaded_Anomaly13y
Oh, I see. That does challenge my usual conception of identity more than my initial interpretation. In essence, then, this is asking if I would choose to sacrifice myself in order to preserve myself. I believe that I like myself enough to do that. If my exact brain-state continues onward while the "original" experiences the pain, my identity does diverge, but it does so after I make the choice to press the button. In that sense, my identity continues, while also being tortured. The continuation, given its time span, seems ultimately more important when considering the alternative of ceasing to exist altogether.

Well, I would NOT press the button. The average copy gets 500 years of being a creationist, plus half of an immortality. My values prefer "short but good".

2Desrtopa13y
If you would value a shorter life for yourself in which you are not a creationist over a longer one in which you are, do you weight the lives of creationists much lower in your utility calculations? Would you rather save one non-creationist than seven creationists?

I identify with my upload on an intellectual level. On the emotional level, I can't really say. Whether I push the button depends on whether I judge "1000 years of agony, then immortality with no memory of the pain" to be better or worse than dying tomorrow, and then on whether I had the guts in the moment to push the button. I want to say I'd go for it, but I don't think I know myself that well.

Oh, by the way: is it one branch of me dying tomorrow and the other being painfully uploaded, or is there only one me with a choice between the two? I in...

I'd like to add a short poll to the question, assuming Prismattic doesn't mind (in which case I will delete these posts).

Upvote this if you would press the button AND you would NOT be willing to attempt a quantum suicide (with a 'perfect' suicide method that will leave you either dead or unharmed), if you were offered a moderately high personal payoff for the version(s) of you that will survive.

0wedrifid13y
Proposed replacement for parent option:

I could vote for that.
2NihilCredo13y
Yes, you're right, I should have put it plainly as "would press the button and A" and "would press the button and non-A". Fixed.
1wedrifid13y
NOTE: Initially proposed as an alternative to a "no QM no matter what" option. So that it does not interfere with the new improved poll, I moved it here and left it only as a reminder that while there are exceptions to the "anyone who chooses to commit quantum suicide fails at life - literally" rule, they have nothing to do with perfect implementation.

----------------------------------------

Upvote this if you would press the button AND you would not be willing to commit quantum suicide EXCEPT when you are particularly desperate and would also consider playing a simple Newtonian Russian roulette with equivalent payoffs.

I had to add this because, in the form NihilCredo specified the options, all of them were crazy! I think NihilCredo intended my kind of reasoning to fit into the "no matter what" option, but I could not bring myself to choose it. There are just so many counterexamples I could think of. Actually, revise that - not pressing the button is merely different to my preferences, not crazy. But both press-the-button options do imply craziness.
3NihilCredo13y
Upvote this if you would not press the button.
5[anonymous]13y
When I see people agreeing to 1000 years of agony, I wonder if they are allowing themselves to fully conceptualize that--if they have a real memory of prolonged agony that they can call up. I call up my memories of childbirth. I would do just about anything to avoid 1000 years of labor, and that's not even "the most intense agony" possible. People undergoing torture beg for death. Knowing that I would beg for death in that situation, I choose death ahead of time. If someone else made the decision for me to press the button, and I was the uploaded consciousness learning about what had happened, I would be horrified and devastated to think of myself undergoing that torture. In fact I would die to prevent 1000 years of torture for anyone, not just myself.
2TheOtherDave13y
Well, speaking only for myself, it's clear that I'm not allowing myself to fully conceptualize the costs of a millennium of torture, even if I were able to, which I don't think I actually am. But it's also clear that I'm not allowing myself, and am probably unable, to fully conceptualize the benefits of an immortal, enjoyable life. To put this more generally: my preference is to avoid cost/benefit tradeoffs where I can fully appreciate neither the costs nor the benefits. But, that said, I'm not sure that being able to appreciate the costs, but not the benefits, is an improvement.

Leaving all that aside, though... I suspect that, like you, I would choose to die (A) rather than choosing a long period of torture for someone else (B) if A and B were my choices. Of course, I don't know for sure, and I hope never to find out. But I also suspect that if I found myself in the position of already having chosen B, or of benefiting from that choice made on my behalf, I would avoid awareness of having made/benefited from the choice... that is, I suspect I am not one of those who actually walks away from Omelas. I'm not proud of that, but there it is.
2Nornagest13y
Extreme pain induces perhaps the strongest emotional bias imaginable; aside from simple sadism, that's the main reason why torture has historically been practiced. Knowing this, I'd hesitate to give much weight to my assumed preferences under torture. More generally, wishing for death under extreme emotional stress does not imply a volitional desire for death: that's the assumption behind suicide hotlines, etc., and I've seen no particular evidence that it's a mistaken one. I would press the button if I estimated that the uploaded copy of me stood to gain more from the upload than the copy of me being uploaded would lose in the pain of the process. The answer to that question isn't explicitly given as a premise, but with "effective immortality" to play with, it seems reasonable to assume that it's positive.
1[anonymous]13y
You're not asking yourself whether you'd like to die, all things being equal. You're asking yourself whether you'd prefer death to the torture. And you would--the "you" that was undergoing the torture, would. If you could gain immortality by subjecting someone else, a stranger from a different country, someone you've never met, to a thousand years of torture--would you do it? I would not.
2Nornagest13y
I would prefer immediate death to a thousand subjective years of agony followed by death, if that were all that was at stake. And I'm pretty sure that after, say, a subjective year of agony, I'd happily kill off an otherwise immortal copy of me to spare myself the remaining 999 years. But I estimate that to be true because of the emotional stress associated with extreme pain, not because of any sober cost/benefit analysis. Absent that stress, I quickly conclude that any given moment of agony for Gest(a) is worth an arbitrary but much larger number of pain-free moments of continued existence for Gest(b); and given that I'd consider both branches "me" relative to my current self, I don't find myself with any particular reason to privilege one over the other.

People undergoing torture will betray their strongest beliefs, even sell out their friends into precisely the same fate, to get the pain to stop. Well-known. But I don't think this reflects their true volition for any reasonable value of "true": only a product of extreme circumstances which they'd certainly regret later, given the opportunity.

This introduces issues of coercion which don't exist in the original problem, so I don't believe it's a fair analogy -- but my answer is "only with their consent". I would consider consenting to the reverse under some circumstances, and I'd be more likely to consent if the world was about to end as per the OP. I'd immediately regret it, of course, but for the reasons given above I don't believe that should influence my decision.
1[anonymous]13y
I'm just not sure about the way you're discounting the preferences of Nornagest(torture1000). In my imagination he's there, screaming for the pain to stop, screaming that he takes it back, he takes it back, he takes it back. His preferences are so diametrically opposed to "yours" (the Nornagest making this decision now) that I almost question your right to make this decision for him.
2Nornagest13y
Well, I actually do believe that Gest(torture1000)'s preferences are consistent with my current ones, absent the stress and lingering aftereffects of the uploading process. That is, if Omega were to pause halfway through that subjective thousand years and offer him a cup of tea and n subjective years of therapy for the inevitable post-traumatic problems, at the end of it I think he'd agree that Gest(now) made the right choice. I don't think that predictably biased future preferences ought to be taken into consideration without adjusting for the bias.

Let's say I'm about to go to a party some distance away. I predict that I'll want to drive home drunk after it; I'm also aware both that that's a bad idea and that I won't think it's a bad idea six hours from now. Giving my car keys to the host predictably violates my future preferences, but I'm willing to overlook this to eliminate the possibility of wrapping my car around a fire hydrant.
0[anonymous]13y
That is, if Omega were to pause halfway through that subjective thousand years and offer him a cup of tea and n subjective years of therapy for the inevitable post-traumatic problems, at the end of it I think he'd agree that Gest(now) made the right choice.

If I accept that's true, my moral objection goes away.
1TheOtherDave13y
Hm. I can imagine myself agreeing to be tortured in exchange for someone I love being allowed to go free. I expect that, if that offer were accepted, shortly thereafter I would agree to let my loved one be tortured in my stead if that will only make the pain stop. I expect that, if that request were granted, I would regret that choice and might in fact even agree to be tortured again. It would not surprise me to discover that I could toggle between those states several times until I eventually had a nervous breakdown. It's really unclear to me how I'm supposed to account for these future selves' expressed preferences, in that case.
0[anonymous]13y
It's really unclear to me how I'm supposed to account for these future selves' expressed preferences, in that case.

In the case that the tortured-you would make the same decision all over again, my intuition (I think) agrees with yours. My objection is basically to splitting off "selves" and subjecting them to things that the post-split self would never consent to.
0TheOtherDave13y
(nods) That's reasonable. OTOH, I do think I can consent now to consequences that my future self will have to suffer, even if my future self will at that point -- when the benefits are past, and the costs are current -- withdraw that consent.
0TheOtherDave13y
My difficulty here is that the difference between making a choice for myself and making it for someone else actually does seem to matter to me, so reasoning from analogy to the "torture someone else" scenario isn't obviously legitimate. That is: let's assume that given that choice, I would forego immortality. (Truthfully, I don't know what I would do in that situation, and I doubt anyone else does either. I suspect it depends enormously on how the choice is framed.) It doesn't necessarily follow that I would forego immortality in exchange for subjecting myself to it. This is similar to the sense in which I might be willing to die to save a loved one's life, but it doesn't follow that I'd be willing to kill for it. It seems to matter whether or not the person I'm assigning a negative consequence to is me.
1[anonymous]13y
It doesn't necessarily follow that I would forego immortality in exchange for subjecting myself to it.

But then you're talking about putting a future-you into a situation where you know that experiences will dramatically reshape that future-you's priorities and values, to the point where TheOtherDave(torture1000)'s decisions and preferences would diverge markedly from your current ones. I think making this decision for TheOtherDave(torture1000) is a lot like making it for someone else, given that you know TheOtherDave(torture1000) is going to object violently to this decision.
1NihilCredo13y
Upvote this if you would press the button AND you would be willing to attempt a quantum suicide (with a 'perfect' suicide method that will leave you either dead or unharmed), if you were offered a moderately high personal payoff for the version(s) of you that will survive.
0[anonymous]13y
Upvote this if you would press the button AND you would not be willing to commit quantum suicide EXCEPT when you are particularly desperate and would also consider playing a simple Newtonian Russian roulette with equivalent payoffs.

I had to add this because, in the form NihilCredo specified the options, all of them were crazy! I think NihilCredo intended my kind of reasoning to fit into the "no matter what" option, but I could not bring myself to choose it. There are just so many counterexamples I could think of. Actually, revise that - not pressing the button is merely different to my preferences, not crazy. But both press-the-button options do imply craziness.
0[anonymous]13y
Yes, I had originally phrased the second option as just "given a sufficiently high payoff", but I changed it when I spotted the obvious problems with that - except I didn't alter the complementary option to match. Could we please delete this exchange now that the poll is fixed?