EDIT: I was confused. Confusion now resolved. Disregard this (I don't think it's possible to retract it, and I don't want to delete it in case I wasn't the only confused person).

The frequently discussed quantum lottery thought experiment proposes that a group of people pool their money and arrange for a quantum-random process to determine the winner and kill all the losers. By the Many-Worlds interpretation of quantum physics and the anthropic principle, every participant will experience waking up extremely wealthy.

There are lots of reasons not to participate in a quantum lottery, of course, but it seems to me that the general principle is sound. If you go to sleep in a situation where there is only an extremely small chance of waking up, you can anticipate with near-certainty that you will in fact wake up, as long as you survive in at least one of the (virtually infinite) universes in which you exist. 
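To make the tension concrete, here is a toy simulation, a minimal sketch assuming equal-weight branches and made-up numbers (N_PLAYERS, STAKE, and U_DEATH are all hypothetical), not a claim about real quantum measure. Conditional on waking up, every experience is one of winning; the measure-weighted payoff is a separate number:

    import random

    # Toy branch-counting model of the quantum lottery. All numbers are
    # made up, and branches are weighted equally for simplicity.
    N_PLAYERS = 1000     # participants; exactly one survives per branch
    STAKE = 100.0        # each participant's buy-in
    U_DEATH = 0.0        # utility assigned to branches where "I" die
    TRIALS = 100_000     # sampled branches

    wins = sum(random.randrange(N_PLAYERS) == 0 for _ in range(TRIALS))
    p_survive = wins / TRIALS

    # Anthropic (conditional) view: in every branch where I wake up at
    # all, I wake up holding the whole pot, so P(rich | awake) is 1.
    print("P(rich | awake) = 1.0")

    # Measure-weighted view: almost all branches end in my death, and
    # the expected payoff is that of an ordinary fair lottery.
    eu = p_survive * (N_PLAYERS * STAKE) + (1 - p_survive) * U_DEATH - STAKE
    print("P(awake) ~", p_survive)       # ~ 1/N_PLAYERS
    print("measure-weighted EU ~", eu)   # ~ 0 here; negative once U_DEATH < 0

Which of those two numbers should govern anticipation is exactly the question below.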

If this is right, then the implications for cryonics are obvious: you can anticipate with near-certainty that it will work. As long as there is at least one world in which a positive Singularity occurs, or in which the technologies needed to make cryonics work are developed without a Singularity, you are all set.

(Related: if civilization finds itself in a position to revive some but not all of the people cryonically suspended, we should use a quantum-random process to choose whom to wake up, so that everyone gets woken up somewhere.)

It seems to me there are a few ways to disagree with this:

1) Disbelieve in Many Worlds.

2) Argue that cryonics is literally impossible, to the extent that absolutely no future world could possibly revive people. I haven't seen this argued much by anyone who has reviewed the literature on cryonics, and wouldn't personally consider it probable, but it is possible.

3) Argue that this also means a few copies of you are likely to find themselves revived in an unpleasant situation, no matter how improbable that is, and that it isn't worth the risk.

Any other objections? Am I misunderstanding something obvious?



Anthropics becomes tautological when used to predict the future. "Among the 'me's who wake up, 100% will experience waking up!" This is not at all a substitute for making a decision by weighing outcomes according to probability.

There is one more disagreement beyond the ones you mentioned: "All my current measure/amplitude would have to pay for the suspension process, but only a little bit of me would wake up. This means the real outcome of cryonics is [fraction of me that wakes up]*[awesomeness of immortality]-[disutility of paying], and this is negative." That said, I still want to sign up for cryonics. The problem is that convincing my relatives that many-worlds is true is about as difficult as convincing them that cryonics is a good bet by other means, or even that they'd enjoy living more than 80 years.
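To make that formula concrete, here is a sketch with purely illustrative numbers (p, u_revival, and cost are hypothetical, not values from this thread):

    # Purely illustrative numbers; none of these values are from the thread.
    p = 0.05            # fraction of measure/amplitude in which I wake up
    u_revival = 1000.0  # utility of waking into a good future
    cost = 100.0        # disutility of paying, borne in all branches

    expected_value = p * u_revival - cost
    print(expected_value)  # -50.0: negative, though every surviving copy is glad it paid

Whether the sign comes out negative depends entirely on what numbers you plug in.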


What I really don't get at all is how someone could at the same time believe that probability is in the mind and that there is a probability associated with that mind's location. It seems obvious to me that the second proposition requires a notion of probability that is outside the mind. But without that notion any kind of anthropic reasoning just doesn't make sense to me.

Why, the main reason to avoid quantum suicide (and to discount what it implies about the value of scenarios such as cryonics, in the sense you mention) is discussed in a post you linked: namely, that the domain of your utility function is not experience but reality, and quantum suicide doesn't make the worlds where you die any better; on the contrary, it makes them worse.

What do you mean by "There are lots of reasons not to participate in a quantum lottery, of course, but..."? What is your understanding of these considerations that permits the "but..." part? Statements such as "you are all set" are statements about value.

Effects on friends/family, and the possibility of an accident which leaves you crippled but not dead, are objections that exist even without worrying about the domain of your utility function. The objection that the worlds where you die are made worse doesn't seem to apply to cryonics, since "you die" is the default state. I think most people's objections to cryonics are psychological, and for me at least the thought "I am almost certain I will wake up in a better future" helps overcome that barrier.

Wait wait wait...

... and the possibility of an accident which leaves you crippled but not dead, are objections that exist even without worrying about the domain of your utility function

What objections? Why are "objections" an interesting topic? Understand what's actually going on instead. If there is no reason to privilege the worlds where you survive, the whole structure of reasoning about these situations is different; the domain of your utility function is not "an additional argument to worry about", it's a central consideration that fundamentally changes the structure of the problem.

I think most people's objections to cryonics are psychological, and for me at least the thought "I am almost certain I will wake up in a better future" helps overcome that barrier.

Don't you dare use self-deception to convince yourself of something you suspect is true! That way lies madness (normal human insanity, that is). If you start believing that you're almost certain to wake up in a better future because that belief makes you believe in cryonics, and not because it's true (if it is true), the belief won't mean what it claims it means, and opting in for cryonics won't become any better.

When I first read about quantum lotteries, my reasons for rejecting the idea were the ones above (family, accident). Those were sufficient for me to reject it, and there's no point in pretending I had better arguments written above my bottom line. That said, I now see your point about how the domain of my utility function changes the problem, and I have edited the article accordingly. I don't think I had fully internalized the domain-of-utility-function concept. Thank you.

Don't you dare use self-deception to convince yourself of something you suspect is true!

When I decide to do something, I visualize it succeeding. This is the only way I know of to motivate myself. I appreciate your concerns about tricking myself and I wrote this question in an attempt to discover whether "I'm almost certain to wake up in a better future" actually is true. But if it is, I'm going to go on thinking about it.

When I decide to do something, I visualize it succeeding. This is the only way I know of to motivate myself.

This method won't allow you to successfully respond to risks, pursue risky strategies, or act impersonally without deceiving yourself (and since you likely can do these things already, what you described in the words I quoted is not the real problem, or in any case not as severe a problem as you say).

Learn to feel expected utility, to be motivated by correctness of a decision (which in turn derives from consequentialist considerations), rather than by confidently anticipated personal experience.

"Feeling Rational" was one of the most valuable articles on LessWrong, for me. But the way I've implemented it in my life is along the lines of: 1) Determine the correct course of action via consequentialism considerations 2) Think happy thoughts that will make me as excited about donating my money to an optimal charity online as I previously felt about reading to underprivileged children at the local library.

I've always thought of this more along the lines of "forcibly bringing my feelings in line with optimal actions" than as self-deceit.

So in this case, I did some research and considered expected utility and decided signing up for cryonics made sense. But I don't feel (as some have reported feeling), like I've chosen to save my life, or like I'm one of the few sane people in a crazy world. Instead I feel "I'm probably wrong and in ten years I'm really going to regret wasting my money on this."

When this idea occurred to me, suddenly cryonics felt worth it on an emotional level as well as on a rational level. I could reasonably imagine a future worth living in, and a shot at making it there. Visualizing waking up doesn't change the expected utility calculations, but it seemed to bring my intuitions in line with the numbers. So I asked if it made sense or if I was making a mistake. The answer, it seems, is that I was making a mistake, and I appreciate your help in figuring that out. But I don't think my thought process was exceptionally irrational or dangerous.

I wrote this question in an attempt to discover whether "I'm almost certain to wake up in a better future" actually is true. But if it is, I'm going to go on thinking about it.

Manfred's point, in conjunction with wedrifid's explanation, shows that it's either false (including for the reasons you listed), or true in a trivial sense that shouldn't move you.

I don't know, but all of this quantum wager stuff seems suspicious to me. Like, it seems to me that there are no useful conclusions that can be drawn from that stuff.