I am an easily bored Omega-level being, and I want to play a game with you.

I am going to offer you two choices. 

Choice 1: You spend the next thousand years in horrific torture, after which I restore your local universe to precisely the state it is in now (wiping your memory in the process), and hand you a box with a billion dollars in it.

Choice 2: You spend the next thousand years in exquisite bliss, after which I restore your local universe to precisely the state it is in now (wiping your memory in the process), and hand you a box with an angry hornet's nest in it.

Which do you choose?

Now, you blink. I smile and inform you that you made your choice, and hand you your box. Which choice do you hope you made?

You object? Fine. Let's play another game.

I am going to offer you two choices.

Choice 1: I create a perfect simulation of you, and run it through a thousand simulated years of horrific torture (which will take my hypercomputer all of a billionth of a second to run), after which I delete the simulation and hand you a box with a billion dollars in it.

Choice 2: I create a perfect simulation of you, and run it through a thousand simulated years of exquisite bliss (which will take my hypercomputer all of a billionth of a second to run), after which I delete the simulation and hand you a box with an angry hornet's nest in it.

Which do you choose?

Now, I smile and inform you that I already made a perfect simulation of you and asked it that question. Which choice do you hope it made?

Let's expand on that. What if instead of creating one perfect simulation of you, I create 2^^^^3 perfect simulations of you? Which do you choose now?

What if instead of a thousand simulated years, I let the boxes run for 2^^^^3 simulated years each? Which do you choose now?

I have the box right here. Which do you hope you chose?

43 comments

I fear bees way less than I fear super-torment. Let's go with the bees.

I do not open my boxes. I send them to somebody that would be interested in papery insectile artifacts.

[anonymous] · 11y · 70

First of all, thank you for the question. I found it thought provoking and will be thinking on it later.

Second of all, I want to let you know that, for part of my initial answer, I am replacing all of the "events which then get deleted with no observable consequence" with "magical gibberish," because it feels like the dragon from http://lesswrong.com/lw/i4/belief_in_belief/ where there are no observable effects, regardless of how many years, simulated mes, or simulated-me years you throw into it.

I also note that my answer depends on whether or not I expect to make the choice repeatedly. Ergo, suppose I make the choice once:

1: Torturous Magical Gibberish, and you get a billion dollars.

2: Blissful Magical Gibberish, and you get angry bees.

In that case the billion seems a fairly clear choice. But if you're going to pull some sort of "Wait, before you open the box, I'm going to give you the same choice again, forever" shenanigans, then the billion/hornets is never reachable and all I have left is never-ending magical gibberish. In which case, I'll take the Blissful Magical Gibberish over the Torturous Magical Gibberish.

Oddly, I'd be more reluctant to take the billion if you said you were going to torture OTHER people, who weren't me. I feel like that says something about my value system/thought processes, and I feel like I should subject that something to a closer look. It makes me feel like I might be saying "Well, I think I'm a P-zombie, and so I don't matter, but other people really do have qualia, so I can't torture them for a billion." But maybe it's just me imagining what another person who didn't accept my premises would say or do upon finding out that I had made that choice, or perhaps it's "Absolute denial macro: don't subject countless others to countless years of torture for personal gain, even if it seems like the right thing to do."

I'm not sure which if any of those is my true rejection, or even if I'd actually reject it.

As I said before though I'm still thinking about some of this and would not be surprised if there is a flaw somewhere in my thinking above.

In a billion years there might be no evidence that your whole life ever happened. Does that mean it's magical gibberish?

[anonymous] · 11y · 30

Yes. My existence isn't so important that it just carries on, magically affecting the world while also being defined as having no observable effects. In a billion years, either I have observable effects, or I don't. If I don't, then talking about a me existing makes no sense. How would you even define me any more? Any test you would run for "Did there used to be a Michaelos over there a billion years ago?" would give the exact same results whether or not there ever was one.

So it makes no difference whether I torture you or not, because in a billion years no one will know?

[anonymous] · 11y · 60

"Makes no difference" to whom? Michaelos, or the hypothetical billion-years-later observer?

[anonymous] · 11y · 00

Hmm. I think the key question is "Are there observable effects from you torturing me?" When Omega did it, there weren't. Where the observable effects would have occurred, I blinked; i.e., nothing happened.

I think this is distinct from you torturing me right now, because there would be observable effects, which would fade away into history slowly over time. Eventually, it wouldn't be noteworthy any more, but that would take a long time to occur.

A big difference is that you can't hit a "Reverse to Status Quo Ante" button, like Omega can, of course.

So a better way of putting it might be: in the far future, speaking of my life in particular will likely be gibberish (I say likely because I am assuming I'm not important a billion years from now, which seems likely), but it isn't gibberish right now.

Would that make it more clear?

Not Michaelos, but in this sense, I would say that, yes, a billion years from now is magical gibberish for almost any decision you'd make today. I have the feeling you meant that the other way 'round, though.

Which do you choose?

Hornets in the "real" case, dollars in the simulated cases. I don't care about simulations. To avoid overflow when computing the decision, 2^^^^3 was replaced by 10^5.

Really, what's the point?

The point is that you might be one of the simulations and someone has already made a decision for you, the same decision you would be making for other simulations. So you may want to be intentionally reflectively consistent, to avoid having (likely unpleasant) reflective consistency inflicted on you externally.

As usual with strange loops, the correct action is not necessarily intuitive.

So I may prefer to take into account the preferences of my sims when deciding, because I may end up in a situation in which my fate is decided by sims of me who use the same decision algorithm. And if Omega tells me that he has created n simulations of me whose experiences until now are exactly the same as mine, including the dialogue with Omega, I should assume a 1/n probability of not being one of those simulations.

Is that it?

Basically, yes. This is what EY called "symmetrism" in Three Worlds Collide, and Greg Egan described it in one of his short stories. In essence, a more sophisticated version of "do unto others...".

If this is the point, I object to the way it is conveyed by the post.

First, its name somehow suggests that it's about value while the problem is rather one of game theory. (One may make a case for integrating the symmetric preferences among one's terminal values but it isn't the only possible solution.)

Second, thought experiments should limit the counter-intuitive elements to the necessary minimum. We may need to have simulations here, but why a thousand years of torture and 2^^^^3 simulations? These things distract unnecessarily from the main point, if the main point isn't about scope insensitivity but is instead what you think it is.

Third and most importantly: In similar thought experiments, Omega is assumed to be completely trustworthy. But it is not trustworthy towards the simulations: it tells them, too, that it is going to simulate them and torture the (second-order) simulations depending on their (the first-order simulations') decision, but that isn't true. There are no second-order simulations, and the first-order simulations are going to be tortured based on the decision of the unsimulated participant. So, if the participant accepts anthropic reasoning for this case, it is p = 1/n that he is "real" and p = (n-1)/n that (he is simulated and Omega isn't trustworthy). If, on the other hand, Omega didn't tell the simulations the same thing it told the "real" person, what Omega said could be used to discriminate between the simulated and real cases, and the anthropic reasoning leading to the conclusion that one is likely a simulation wouldn't apply.

In short, taking into consideration that I may be a simulation that Omega is speaking about is incoherent without considering that Omega may be lying. There may be clever reformulations that avoid this problem, but I don't see any at the moment.
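A side calculation, assuming the anthropic step above is accepted and the unsimulated participant is counted alongside the n first-order simulations as n + 1 subjectively indistinguishable candidates under a uniform self-sampling prior:

\[
P(\text{I am the unsimulated participant}) = \frac{1}{n+1} \approx \frac{1}{n},
\qquad
P(\text{I am simulated and Omega's promise to me is false}) = \frac{n}{n+1} \approx \frac{n-1}{n}.
\]

This matches the figures quoted above, up to whether the original is counted among the n.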

I reread the OP, and, while it could be stated better, I did not see any obvious lies told by Omega, except maybe lies by omission.

From the OP:

I create a perfect simulation of you, and run it through a thousand simulated years of horrific torture (which will take my hypercomputer all of a billionth of a second to run), after which I delete the simulation and hand you a box with a billion dollars in it.

My interpretation of your interpretation, which you have said is basically right:

And if Omega tells me that he has created n simulations of me whose experiences until now are exactly the same as mine, including the dialogue with Omega, I should assume a 1/n probability of not being one of those simulations.

So, we have Omega telling the simulations that it is going to give them a box with a billion dollars (if they choose what they choose), and instead it tortures them and then deletes them. This is an explicit lie, isn't it? Moreover, Omega tells the simulations that they would be simulated, but unless Omega can create an infinite regress of simulations of simulations (which I consider obviously impossible), at least some of the simulations aren't simulated, in violation of Omega's promise to them.

The games don't seem different in any important way, and in both cases choice two is vastly superior. When you hand me the box I might be slightly resentful of my past self, but I'm pretty okay with him avoiding 1000 years of torture. Bees it is! This already easy choice gets much easier when you increase the number of simulations or the amount of time.

I'm not very considerate towards myself and don't care much about simulations, so it's an easy choice 1 for me in all cases. (I then massively resent myself for a thousand years of torture, but then I'm okay again!)

And this is without taking into account the possibility of donating a billion dollars to SIAI/MIRI (or launching my own project with the money, or something). To explain, since this seems a non-standard choice (judging by the other comments): mostly, I'm just not very concerned with things that have no external observable effects, I guess, and there are also some other factors, like my refusing to accept the illusion that future!me is really the same as present!me, and consequently caring about that guy much less (let alone these simulations of me - they get to exist for a bit, and I'm not sure they'd even be upset with my choice).

It seems to me the two problems are basically equivalent (assuming Omega doesn't kill a whole universe just to restore my local state).

This also looks like an extreme version of the self-duplicating worker Robin Hanson talked about at the last Singularity Summit. The idea is to duplicate oneself many times over at the beginning of a day's work, then kill every instance but one once the shift is finished. That can yield a lot of subjective free time and high computational efficiency.

So I guess that some people will have no problem with Omega's dilemma, and will happily choose the billion. I, on the other hand, feel this is just wrong.

I wonder, though. Can we unbirth a child after all? Is there something we don't quite get that would give a factual answer to this seemingly moral question?

What is the point of this post?

[This comment is no longer endorsed by its author]

It does draw attention to the fact that we're often bad at deciding which entities to award ethical weight to. It's not necessarily the clearest post doing so, and it's missing authorial opinion, but I wouldn't be shocked if the LW community could have an interesting discussion resulting from this post.

I think I was too grumpy in the grandparent.

I'm only going to consider the first one. The obvious thing to do is to pick the bees and hope for the bees, and it's an incredibly clear illustration of a situation where you might interpret the necessary unpleasant consequences of a good decision as negative feedback about that decision, in the form of regretting the possibility of hornets. It pinpoints that feeling, and it should help you push it away any other time you might be in abject pain or experiencing some lesser discomfort, e.g. after you go to the gym for the first time. It really pinpoints that false temptation.

There is an argument for box 1, though: with a billion dollars and the perfect proof of your own credibility to yourself, and bearing in mind that any impairing trauma caused by the torture would be erased, it's possible that you could do more direct good than a thousand years of torture is bad, and that the indirect good you could do (bringing about positive-sum games and opposing negative-sum ones, being a part of establishing a better pattern for all of society, gaining power and using it to influence society away from negative-sum interactions) would be bigger again. And of course I'd love to discover that I was that crazy, that altruistic, that idealistic, that strong. There's a part of me that wants to just say fuck it. In fact, bearing in mind the possibility of immortality or at least great expansion before I die/cryonics runs out or fails to work, do I want to be the guy who chose the bliss or the resources? Fuck it, I want to be the second guy. Throw me in the box before I change my mind.

Fuck it, I want to be the second guy. Throw me in the box before I change my mind.

I like the cut of your jib. Upvoted.

I would choose number 2 both times, but I can see an argument for number 1 both times (which amounts to defecting). I think people who choose differently in the two games are wrong. A perfect simulation of me that has no impact on myself is no different than a thousand years of good times that get rewound.

This is actually kind of interesting. The only thing that makes me consider picking choice one is the prospect of donating the billion dollars to charity and saving countless lives, but I know that's not really the point of the thought experiment. So, yeah, I'd choose choice two.

But the interesting thing is that, intuitively, at least, choosing choice 2 in the first game seems much more obvious to me. It doesn't seem rational to me to care if a simulation of you is tortured any more than you would a simulation of someone else. Either way, you wouldn't actually ever have to experience it. The empathy factor might be stronger if it's a copy of you - "oh shit, that guy is being tortured!" vs. "oh, shit, that guy that looks and acts a lot like me in every single way is being tortured!", but this is hardly rational. Of course, the simulated me has my memories, so he perceives an unbroken stream of consciousness flowing from making the decision into the thousand years of torture, but who cares. That's still some other dude experiencing it, not me.

So, yes, it seems strange to consider the memory-loss case any differently. At least I cannot think of a justification for this feeling. This leads me to believe that the choice is a purely altruistic decision, i.e. it's equivalent to Omega saying "I'll give you a billion dollars if you let me torture this dude for 1000 years". In that case, I would have to evaluate whether or not a billion-dollar dent in world hunger is worth 1000 years of torture for some guy (probably not) and then make my decision.

Game 1: I take the second option. I want 1000 years of exquisite bliss much more than I don't want to have a box of hornets in my hand.

Game 2: First option. I value perfect simulations of myself none at all, and a billion dollars is pretty sick.

I have no preference regarding what choices perfect simulations of me would choose, since I don't care about them at all, though I would assume that they make the same choices I would make since they have the same values.

How does increasing the amount or length of time change the question?

What in this post merited downvoting without explanation?

(This was at -1 when I found it)

Probably because its thinking is sufficiently far away from standard Less Wrong computationalism as to seem stupid to someone.

I think this is an easy choice two in all cases.

For the people that say they don't care about perfect simulations of themselves, this seems somewhat relevant.

eta: the "what choice do you hope you made?" scenarios are the same as wishing you could precommit to one-boxing in Newcomb's problem but still take both boxes at the end.

In a similar experiment, you can have $100 now or $150 in 2 months. Most people take the $100 now.

In another version, you can have $100 in 10 months, or $150 in 12 months. Most people think, "Well, I will have already waited 10 months; I might as well wait just 2 more months." Nearly everyone picks waiting 12 months for the $150. After 10 months pass, you ask the people again: "Do you want to just give up and take the $100 now, or wait just two more months and get the full $150?" Most will take the $100.

People weigh things differently when the "power of now" is removed from the equation.
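The reversal described above is the classic signature of hyperbolic discounting. Below is a minimal sketch, assuming the common one-parameter model v = A / (1 + k*D) and an illustrative discount rate of k = 0.5 per month (both the model choice and the parameter are assumptions for illustration):

def hyperbolic_value(amount, delay_months, k=0.5):
    """Present value of a delayed reward under one-parameter hyperbolic discounting."""
    return amount / (1 + k * delay_months)

def preferred(option_a, option_b, k=0.5):
    """Return whichever (label, amount, delay_in_months) option has the higher present value."""
    return max(option_a, option_b, key=lambda opt: hyperbolic_value(opt[1], opt[2], k))

# Asked today: $100 now vs. $150 in 2 months.
print(preferred(("$100 now", 100, 0), ("$150 in 2 months", 150, 2)))
# -> $100 now wins: 100 > 150 / (1 + 0.5 * 2) = 75

# Asked today about a choice 10 months out: $100 in 10 months vs. $150 in 12 months.
print(preferred(("$100 in 10 months", 100, 10), ("$150 in 12 months", 150, 12)))
# -> $150 wins: 100 / 6 ~= 16.7 < 150 / 7 ~= 21.4
# Re-asked once the 10 months have passed, the delays become 0 and 2 again,
# so the preference flips back to the smaller, sooner reward.

In this model, any k above 0.25 per month reproduces the flip: the immediate $100 beats the delayed $150, while the same pair of rewards pushed 10 months into the future favors waiting.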

A lot of people are going for Choice 1 of Game 2 with the idea that it isn't appropriate to give ethical consideration to a simulation of one's self.

This is just silly. There's no guarantee that a simulation of myself will share the same sentiment about the situation. The illusion of acceptability here would go away once the simulations of myself reason through any of the following possibilities:

A) As the seconds of subjective experience accumulate, they'll be less and less me as the torture sets in, and more and more someone else, if you accept the premise that our experiences feed back into our perception of who we are and how the world works, rendering the "self" part of self-inflicted pain pretty blurry.

B) I suspended my sense of compassion under the belief that different versions of myself don't need that kind of consideration. This belief was due to a further belief that since the simulations were of myself, it was acceptable to inflict said pain: it would carry only the moral weight of self-inflicted pain, without my actually experiencing the pain, since, conceptually, no one else is experiencing it. Problem: if I considered the simulations to be enough a part of my own experience not to count them in the class of other people who need to be shown decency, why wasn't I compelled to choose the thousand years of bliss out of self-interest? Then they would see the inconsistencies of the "acceptable self-harm" argument, and see it as more of a rationalization. Which it is.

Suppose there was a skeleton on an island somewhere, belonging to somebody who had no contact with civilization before he died, yet spent the last years of his life in unimaginable suffering. Suppose that there was an easy to execute magical action that would alleviate said suffering occurring in the past without need to worry about changing the future. Would you feel compelled to do it? If yes, then you must also feel compelled to rescue simulations of yourself, because their lack of current existence puts them on the same footing as dead simulations.

I take option one in both instances, feeling less sure of it in the first case because obviously I'm going to start regretting it from about ten seconds in, and keep regretting it for the next thousand years.

But then I'll blink, and it'll be exactly as if the torture or bliss never happened, in which case neither affects me; and given that I've gained either a billion dollars or a hornet's nest, the billion is obviously preferable.

[anonymous] · 11y · 00

I choose 1.

[This comment is no longer endorsed by its author]

I wouldn't play at all, as the implications of memory wipes are too disturbing for me to go with.

If pressed not to use common sense, I'd go with the exquisite bliss, as death is essentially an ultimate memory wipe that would follow the billion-dollar spending spree, making the two options metaphysically similar in nature, with the difference being that exquisite bliss is subjectively better than simply having a billion dollars.

I wouldn't play at all

The usual caveat is that the alternative to playing is even less pleasant, like eternal torture.

I could then point out that since the Omega being is bored, he could have loads more fun with me arguing my way out of the scenario than by subjecting me to eternal torture.

You could, but that would be fighting the hypothetical.

Damn, you're right. I had thought that it was an offer, since both options had a positive.

[anonymous] · 11y · 00

I think I would choose #1 for the first scenario and #2 for the second. There's a subtle difference between the two scenarios. In the first, your choice doesn't affect you in any way, thus, "Now, you blink." In the second, the real you will have made a choice that affects you (You will know what you decided for your simulation, which may or may not matter to you personally.)

Does the ordering of events make a difference in anyone's choices? That is, you're allowed to choose Box 1 with the money, or Box 2 with the hornet nest, knowing that afterwards, you will be subject to torture or bliss, and then the universe will be reset (but you'll still keep your box.)

If I were a scientist, I would ask for evidence of the existence of Omega-level beings before further considering the questions. We can of course debate how many Omega-level beings there are on the tip of a pin, but I believe our limited time in this universe is better spent asking different kinds of questions.

Are the simulations of myself P-zombies?