For some time I've been pondering a certain scenario, which I'll describe shortly. I hope you can help me find a satisfactory answer, or at the very least be as perplexed by this probabilistic question as I am. Feel free to assign whatever reasonable a priori probabilities you like. Here's the problem:

It's a cold, cold winter. The radiators are hardly working, but that's not why you're sitting so anxiously in your chair. The real reason is that tomorrow is your assigned upload (and damn, there's just a one-in-a-million chance you're not gonna get it) and you just can't wait to leave your corporality behind. "Oh, I'm so sick of having a body, especially now. I'm freezing!" you think to yourself, "I wish I were already uploaded and could just pop myself off to a tropical island."

And now it strikes you. It's a weird solution, but it feels so appealing. You make a solemn oath (you'd say there's a one-in-a-million chance you'd break it) that soon after upload you will simulate this exact moment a thousand times simultaneously, and that when the clock strikes 11 AM, you're gonna be transported to a Hawaiian beach, with a fancy drink in your hand.

It's 10:59 on the clock. What's the probability that you'll be in a tropical paradise in one minute?

And to make things more paradoxical: what would that probability be if you hadn't made such an oath just seconds ago?


I find that I often get hung up on background assumptions that go into human capital creation in these hypothetical copy-of-a-person scenarios.

One model here is that I should anticipate with near certainty having a few random thoughts, then ending up in Hawaii for a while and then ceasing to exist(!), and neither forming new memories nor reminiscing about previous events. In other words I should anticipate imminent metaphysically imposed death right after an ironically pleasant and existential-fear-tinged vacation.

The reason is that CPU time will still cost something, and presumably the version of me that has to pay for those cycles will not want to have 1000 dependents (or, worse still, 1000 people with legitimate claims to 1/1001 of my upload self's wealth) draining her CPU budget.

Another model is that I should "anticipate remembering" being cold for a while, then running a bunch of close-ended sims whose internal contents mattered almost not at all to the rest of the world. Then I'd shut them down, feel the oath to be fulfilled, and wonder what the point was exactly. Then for the next 1000 subjective years I would get to reminisce about how naive I was about how resource constraints, subjective experiences, and memories of subjective experiences interrelate.

Those scenarios paper over a lot of details. Are the sims deterministic with identical seeds? I think then my answer is a roughly 50/50 expectation, with 999 of the sims being a total waste of joules.

Are the sims going to be integrated with my future self's memory somehow? That seems like it would be a big deal to my "real self" and involve making her dumber (mentally averaged with either 1 or 1000 people like me who lack many of her recent and coolest memories).

And so on.

The key theme here is that "human capital" is normally assumed to be "a thing". These thought experiments break the assumption and this matters. Souls (rooted in bodies or in the future just in the cloud) are more or less cultivated. Creating new souls has historically been very costly and involved economic consequences. The game has real stakes, and the consequences of things like brain surgery resonate for a long time afterwards. If something doesn't have "consequential resonance" then it is in a sort of "pocket universe" that can be safely discounted from the perspective of the causal domains, like ours, which are much vaster but also relatively tightly constrained.

Possibly the scariest upload/sim scenario is that the available clock cycles are functionally infinite, and you can spawn a trillion new selves every subjective second, for centuries, and it's all just a drop in the bucket. None of the souls matter. All of them are essentially expendable. They re-merge and have various diff-resolution problems, but it doesn't matter because there are a trillion trillion backups that are mostly not broken, and it's not like you can't just press the "human virtue" button to be randomly overwritten with a procedurally generated high-quality soul. Whenever you meet someone else they have their own unique 1-in-10^50 sort of brokenness, and it doesn't even seem weird because that's how nearly everyone is. This is the "human capital is no longer a thing" scenario, and I think (but am not absolutely sure) that it would be a dystopian outcome.

This is similar to Eliezer's algorithm for winning the lottery and many other scenarios that have been discussed on LW. I don't know the answer.

Shmi

First, I wouldn't call it a solution, since the original "you" will not get transported, and the em-you will suffer 1000 times unnecessarily. Second, consider reading about the various anthropic arguments, such as SSA, SIA, Sleeping Beauty and such, if you are so inclined.

Big thanks for pointing me to Sleeping Beauty.

It is a solution to me - it doesn't feel like suffering, just as a few minutes of teasing before sex doesn't feel that way.

Shmi

Sure, if you feel that "cold winter + hope" × 1001 > "Hawaiian beach" × 1000 + "cold winter with disappointment after 11:00", then it's a solution.

Two-fluid model of anthropics. The two different fluids are "probability" and "anthropic measure." Probabilities come from your information, and thus you can manipulate your probability by manipulating your information (e.g. by knowing you'll make more copies of yourself on the beach). Anthropic measure (magic reality fluid) measures what the reality is - it's like how an outside observer would see things. Anthropic measure is more properly possessed by states of the universe than by individual instances of you.

Thus a paradox. Even though you can make yourself expect (probability) to see a beach soon, it doesn't change the fact that you actually still have to sit through the cold (anthropic measure). Promising to copy yourself later doesn't actually change how much magic reality fluid the you sitting there in the cold has, so it doesn't "really" do anything.

I like your general approach, but I find the two-fluid model a confusing way of describing your idea.

I think the conflict dissolves if you actually try to use your anticipation to do something useful.

Example: suppose you can either push button A (before 11 AM) so that if you're still in the room you get a small happiness reward, or you can push button B so that if you're transported to paradise you get a happiness reward. If you value happiness in all your copies equally, you should push button B, which means that you "anticipate" being transported to paradise.

This gets a little weird with the clones and to what extent you should care about them, but there's an analogous situation where I think the anthropic measure solution is clearly more intuitive: death. Suppose the many worlds interpretation is true and you set up a situation so that you die in 99% of worlds. Then should you "anticipate" death, or anticipate surviving? Anticipating death seems like the right thing. A hedonist should not be willing to sacrifice 1 unit of pleasure before quantum suicide in order to gain 10 units on the off chance that they survive.

So I think that one's anticipation of the future should not be a probability distribution over sensory input sequences (which sums to 1), but rather a finite non-negative distribution (which sums to a non-negative real number).
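As a minimal sketch of the accounting this suggests (my own illustrative framing, not a settled proposal; the function name and numbers are made up), here is how measure-weighted decisions would handle the button example and the quantum-suicide example:

```python
# Illustrative sketch: decisions weighted by unnormalized anthropic measure.
# Each outcome is a (measure, utility) pair; we sum measure * utility rather
# than renormalizing to a probability distribution first.

def measure_weighted_value(outcomes):
    """Sum of measure * utility; the measures need not sum to 1."""
    return sum(m * u for m, u in outcomes)

# Button example: 1 copy stays in the cold room, 1000 copies wake on the beach.
button_A = [(1, 1.0), (1000, 0.0)]  # reward only the copy still in the room
button_B = [(1, 0.0), (1000, 1.0)]  # reward only the copies on the beach
print(measure_weighted_value(button_A), measure_weighted_value(button_B))  # 1 vs 1000 -> push B

# Quantum suicide example: you survive in 1% of branches.
keep   = [(1.00, 1.0)]                # keep the 1 unit of pleasure in every branch
gamble = [(0.01, 10.0), (0.99, 0.0)]  # 10 units, but only in the surviving branches
print(measure_weighted_value(keep), measure_weighted_value(gamble))  # 1.0 vs 0.1 -> keep
```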

Anthropic measure (magic reality fluid) measures what the reality is - it's like how an outside observer would see things. Anthropic measure is more properly possessed by states of the universe than by individual instances of you.

It doesn't look like a helpful notion and seems very tautological. How do I observe this anthropic measure - how can I make any guesses about what the outside observer would see?

Even though you can make yourself expect (probability) to see a beach soon, it doesn't change the fact that you actually still have to sit through the cold (anthropic measure).

Continuing - how do I know I'd still have to sit through the cold? Maybe I am in my simulated past - in the hypothetical scenario that's a very down-to-earth assumption.

Sorry, but the above doesn't clarify anything for me. I may accept that the concept of probability is out of scope here, that Bayesianism doesn't work for guessing whether one is or isn't in a certain simulation, but I don't know if that's what you meant.

How do I observe this anthropic measure - how can I make any guesses about what the outside observer would see?

The same way you'd make such guesses normally - observe the world, build an implicit model, make interpretations etc. "How" is not really an additional problem, so perhaps you'd like examples and motivation.

Suppose that I flip a quantum coin, and if it lands heads I give you cake and tails I don't - you expect to get cake with 50% probability. Similarly, if you start with 1 unit of anthropic measure, it gets split between cake and no-cake 0.5 to 0.5. Everything is ordinary.

However, consider the case where you get no cake, but I run a perfect simulation of you in which you get cake in the near future. At some point after the simulation has started, your proper probability assignment is 50% that you'll get cake and 50% that you won't, just like in the quantum coin flip. But now, if you start with 1 unit of anthropic measure, your measure never changes - instead a simulation is started in the same universe that also gets 1 unit of measure!

If all we cared about in decision-making was probabilities, we'd treat these two cases the same (e.g. you'd pay the same amount to make either happen). But if we also care about anthropic measure, then we will probably prefer one over the other.
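A toy sketch of that bookkeeping difference, under my own illustrative assumption that we track one unit of measure per instance and only normalize when computing credences:

```python
# Illustrative sketch: same subjective probabilities, different anthropic measure.

# Case 1: quantum coin flip. One unit of measure splits across the branches.
coin_case = {"cake": 0.5, "no_cake": 0.5}

# Case 2: no coin, but a perfect simulation of you that gets cake is started.
# The original keeps its full unit of measure; the sim gets its own full unit.
sim_case = {"cake": 1.0, "no_cake": 1.0}

def credences(measures):
    """Your subjective probabilities: normalize over the copies you might be."""
    total = sum(measures.values())
    return {k: v / total for k, v in measures.items()}

print(credences(coin_case))                             # {'cake': 0.5, 'no_cake': 0.5}
print(credences(sim_case))                              # {'cake': 0.5, 'no_cake': 0.5}
print(sum(coin_case.values()), sum(sim_case.values()))  # 1.0 vs 2.0 total measure
```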

It's also important to keep track of anthropic measure as an intermediate step to getting probabilities in nontrivial cases like the Sleeping Beauty problem. If you only track probabilities, you end up normalizing too soon and too often.

Sorry, but the above doesn't clarify anything for me. I may accept that the concept of probability is out of scope here, that Bayesianism doesn't work for guessing whether one is or isn't in a certain simulation, but I don't know if that's what you meant.

I mean something a bit more complicated - that probability is working fine and giving sensible answers, but that when probability measure and anthropic measure diverge, probabilities no longer fit into decision-making in a simple way, even though they still really do reflect your state of knowledge.

There are many kinks in what a better system would actually be; hopefully I'll eventually work them out and write up a post.

I've actually been trying to invoke something like this: at some point after the singularity I plan to go into a simulation that starts out with my life pre-singularity, and then awesome/impossible adventures start. There are kludges involved in getting it to seem like it has any hope of working, because my expected reason why future-me will want to do this is that adventures are more fun if you think they're real (I'd prefer this to simple wireheading, and probably also to reality), and that makes the fact that I'm planning this now evidence against it. I don't have much input to make on the actual question, just declaring my interest in it.

What's the probability that you'd be in a tropical paradise in one minute?

Depends on whether you consider the simulated people "you" or not. It also depends on whether "in one minute" is measured by our world's clock, or by the clocks inside the future simulations, which also read one minute from now.

By "me" I consder this particular instance of me, which is feeling that it sits in a room and which is making such promise - which might of course be a simulated mind.

Now that I think about it, it seems to be a problem with having a cohesive definition of identity and a notion of "now".

This is actually isomorphic to the absent-minded driver problem. If you precommit to going straight, there is a 50/50 chance of being at either one of the two indistinguishable points on the road. If you precommit to turning left, there is a nearly 100% chance of being at the first point on the road (since you wouldn't continue on to the second point with that strategy). It seems like probability can be determined only after a strategy has been locked into place.
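A minimal sketch of that dependence, assuming (purely for illustration) a driver who continues straight at each intersection with probability p_continue and cannot tell the two intersections apart:

```python
# Illustrative sketch: in the absent-minded driver problem, the probability of
# currently being at the first vs. second intersection depends on your strategy.

def where_am_i(p_continue):
    """p_continue: chance of driving straight at an intersection.
    Returns (P(at first intersection), P(at second intersection)), given that
    you only reach the second intersection if you continued at the first."""
    first = 1.0          # every run passes through the first intersection
    second = p_continue  # you reach the second only if you went straight
    total = first + second
    return first / total, second / total

print(where_am_i(1.0))  # always go straight -> (0.5, 0.5): the 50/50 case
print(where_am_i(0.0))  # always turn        -> (1.0, 0.0): ~100% at the first point
```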

I'm really having a lot of trouble understanding why the answer isn't just:

1000/1001 chance I'm about to be transported to a tropical island.
0 chance given I didn't make the oath.

Assuming that uploaded-you memory-blocks his own uploading when running the simulations.

We have no idea how consciousness works, or how probabilities can be assigned to the expectation of being a given person, or to the expectation of experiencing something. "If there are a million ants for every living human being, why am I a person instead of an ant?" may be a meaningful question, or then again it may not.

But to answer your question, my expectation is that for the probability of 'tropical paradise in one minute' to exceed 50%, the computation of your simulated selves combined will need to take up a physical volume that exceeds the volume taken up by your current brain (or the portion of your current brain devoted to your consciousness).

[anonymous]

Hmm. Let's consider this from the perspective of you, having just entered the simulation and considering keeping your oath. (This doesn't directly focus on the probabilities in your question, but I did find the subsequent thoughts fascinating enough to upvote your original question for inspiration)

You currently distinctly remember 1440 minutes of cold suck.

Based on your oath, you're going to do 1000 simulations. Each of these will have only 1 minute of cold suck and 1,439 minutes of identical tropical beach paradise.

There's probably going to need to be some strategic memory blocking in between each session, of course. If you were to spend this time only on novel super fun, you would have 1,440,000 minutes of novel super fun. Instead, you'll have 1,000 identical minutes of cold suck and 1,000 copies of 1,439 minutes of tropical fun. (If you make the tropical fun different each time, you might be able to reconstruct it into 1,439,000 minutes of novel super tropical fun.)
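A quick check of the minute-accounting, using the numbers from the comment above:

```python
# Quick check of the minute-accounting above.
sims = 1000
minutes_per_sim = 1440                                          # one simulated day per run
cold_minutes_per_sim = 1                                        # the 10:59-11:00 minute
beach_minutes_per_sim = minutes_per_sim - cold_minutes_per_sim  # 1,439

budget = sims * minutes_per_sim             # 1,440,000 minutes of simulated time
cold_total = sims * cold_minutes_per_sim    # 1,000 repeated minutes of cold suck
beach_total = sims * beach_minutes_per_sim  # 1,439,000 minutes of beach
print(budget, cold_total, beach_total)      # 1440000 1000 1439000
```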

Note: this will not provide any benefit to you, at this point. It would provide benefit to past you, but you're past that you.

Although, conveniently, you are in a simulation, so you can just probabilistically steal fun from the future, again. Sure, the minute where you're considering keeping or breaking your oath sucks, but you can set up 1000 simulations where you come by and say "Congratulations! You don't have to follow that oath after all. I already did it for you."

Of course, doing that means you are adding 1,000 more minutes of suck (where you are worried about the cost of keeping your oath) to the future, on top of the previous 1,000 minutes of suck where you are worried about the cold prior to upload, on top of the 1,440 minutes of cold suck you actually experienced.

If you were to continue doing this repeatedly that would add up to a large number of added minutes of suck. (How many minutes of suck would you add to your simulation if it ran forever?)

Or... you could just edit your memory to think "Yep, I fulfilled that silly oath," or just decide "Actually, I guess I hit the one-in-a-million chance I thought of," not edit your memory, and never worry about it again.

Also, if you really believe it is correct to steal time from the future to give yourself more fun now... you don't even need a simulator or the ability to make binding verbal oaths. That technology exists right this second.

What's odd is that this basically makes the scenario seem like the transhumanist parallel to junk food. Except, in this case from your perspective you already ate the healthy food, and you're being given the opportunity to essentially go back in time and retroactively make it so that you ate junk food.

Which leads me to the realization "I could never take an oath strong enough to bind my future self to such a course of activity, so I'm almost certainly going to be sitting here for the next 1439 minutes freezing... Oh well."

I imagine that for the one minute between 10:59 and 11:00, my emotional state would be dominated by excitement about whether this crazy idea will actually work, and maybe if I snap my fingers just a second before the time on the clock switches, it'll be like I actually cast a magic spell that teleports me to a tropical paradise. There's also the fun of the moments when first arriving in a tropical paradise and realising that it actually worked, which is a bit wireheady to invoke repeatedly 1,000 times, but the fact that it's to fulfil a past oath should justify it enough to not feel icky about it.

Let's consider a similar problem. Suppose you've just discovered that you've got cancer. You decide to buy a pill that would erase your memory of the diagnosis. From your new perspective, your chance of having cancer will be 2%. In this situation, you can change your knowledge about your odds of having cancer or not, but you can't change whether you actually have cancer, or reduce the probability of someone with your symptoms having cancer.

What seems to make this situation paradoxical is that when you make a decision, it seems to change the probability that you had before you made the decision. This isn't quite what happens. If you accept determinism, you were always going to make that decision. If you had known that you were going to make this decision before you actually made it, then your probability estimate wouldn't have changed when you made the decision. The reason why the probability changes is that you have gained an additional piece of information, that you are the kind of person to make a vow to simulate yourself. You might have assigned a smaller probability to the chance that you were such a person before you decided to actually do it.

What I had in mind isn't a matter of manually changing your beliefs, but rather making an accurate prediction of whether or not you are in a simulated world (which is about to become distinct from the "real" world), based on your knowledge about the existence of such simulations. It could just as well be that you asked a friend to simulate 1000 copies of you at that moment and to teleport you to Hawaii as 11 AM strikes.

This problem is more interesting than I thought when I first read it (as Casebash). If you decide not to create the simulations, you are indifferent about having made the decision, as you know that you are the original and that you were always going to have this experience. However, if you do take this decision, then you are thankful that you did, as otherwise there is a good chance that the simulated you wouldn't exist and be about to experience a beach.

Firstly, I'm not convinced that simulating a person necessarily results in consciousness, but that is largely irrelevant to this problem, as we can simply pretend that you are going to erase your memory 1000 times.

If you are going to simulate yourself 1000 times, then the chance, from your perspective, of being transported to Hawaii is 1000/1001. This calculation is correct, but it isn't a paradox. Deciding to simulate yourself doesn't change what will happen, there isn't an objective probability that jumps from near 0 to 1000/1001. The 0 was produced under a model where you had no tendency to simulate this moment and the 1000/1001 was produced under a model where you are almost certain to simulate this moment. If an observer (with the same information you had at the start) could perfectly predict that you would make this decision to simulate, then they would report the 1000/1001 odds both before and after the decision. If they had 50% belief that you would make this decision before, then this would result in approx. 500/1001 odds before.
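As a quick check of those numbers, here is a sketch under the assumption (mine, matching the reasoning above) that an observer with credence q that you will run the 1000 simulations multiplies that credence by the 1000/1001 odds:

```python
# Quick check: observer's odds that this moment is one of the simulated copies.
def p_simulated(q, n_sims=1000):
    """q: observer's credence that you will run the simulations.
    If you do, this moment has n_sims simulated copies plus 1 original."""
    return q * n_sims / (n_sims + 1)

print(p_simulated(1.0))  # ~0.999 (1000/1001): observer certain you'll simulate
print(p_simulated(0.5))  # ~0.4995 (500/1001): observer 50% sure
print(p_simulated(0.0))  # 0.0: the "no tendency to simulate" model
```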

So, what is the paradox? If it is that you seem to be able to "warp" reality so that you are almost certainly about to teleport to Hawaii, my answer explains that: if you are about to teleport, then it was always going to happen anyway. The simulation was already set up.

Or are you trying to make an anthropic argument? That if you make such a decision and then don't appear in Hawaii, it is highly unlikely that you will be uploaded at some point? This is the Sleeping Beauty problem. I don't 100% understand it yet.

If you stick with subjective probability, and assume an appropriate theory of mind, then your subjective probability of experiencing paradise would be about 1000/1001 in the first case, about 0 in the latter case.

If we're talking about some sort of objective probability, then the answer depends on the details of which theory of mind turns out to be true.

Was this inspired by this short story? It's about an infinitely powerful computer simulating the universe, which of course contains a simulation of an infinitely powerful computer simulating the universe, and so on. Since there are infinitely many layers of simulation, the odds of any one person being simulated are 1.

I'd say that an observer is something ephemeral. You a minute from now is someone else, who just happens to have a memory of being you a minute ago. You existing is not even strictly a requirement for this. Messing around with which observers remember being you in no way affects anthropics.