I feel extremely embarrassed about asking for help with this, but I have a philosophical quandary that has been eating at me for days.  I'm sure that many of you already have it figured out.  I would appreciate it if you would lend your cached thoughts to me, because I can't seem to resolve it.

What has been bothering me are the implications of combining two common views here on Less Wrong. The first is that the Many Worlds Interpretation of quantum mechanics is correct. The second is that two identical copies of a person count as the same person, and that therefore you haven't really died if you manage to make another version of yourself that survives while an earlier version dies (for instance, if you sign up for cryonics and then in the future an AI scans your frozen corpse and uses the data to synthesize a new version of you). Robin Hanson has even argued that it would be morally acceptable to create billions of brain emulations of one person for a specific task and then erase them afterward; as long as at least one copy of the emulated person remains alive, then all you've really done is give one person "mild amnesia." Both these viewpoints seem plausible to me, although I am less sure of Hanson's rather radical extensions of the second view.

Combining these views has the potential for disturbing implications. If the MWI is correct, then there are already vast numbers of versions of everyone out there. This has filled me with the distressing thought that the badness of death might somehow be diminished because of this. I am aware that Eliezer has written articles that seem to explain why this is not the case ("Living in Many Worlds" and "For the People Who Are Still Alive"), but I have read them and am having trouble grasping his arguments.

It seems obvious to me that it is bad to kill someone under normal circumstances, and that the badness of their death does not decrease because there are other parts of the multiverse containing duplicates of them. Eliezer seems to agree, and I think Robin (who has stated he supports the MWI and has contributed to the work on the subject) does too. I very much doubt that if Robin Hanson were greeted by a knife-wielding maniac who announced that he intended to "give mild amnesia to alternate-universe versions of Robin and his family," he would make any less effort to save himself and his family than another version of Robin who did not support the MWI.

On the other hand, the argument that making other versions of yourself before you die is a form of survival seems persuasive to me as well. I think that if cryonics works it might be a form of survival, as would having a brain emulation of yourself made.

The line of reasoning that first pushed me down this disturbing path was a thought experiment I was considering, in which Omega gives you a choice between:

1. Adding fifty years to your life that you would spend achieving large, important, impressive accomplishments.

2. Adding 200 years to your life that you would spend being stuck in a "time loop," repeating one (reasonably good) day of your life over and over again, with your memory erased and your surroundings "reset" at the beginning of every morning to ensure you live each day exactly the same. 

I concluded that I would probably prefer the first option, even though it was shorter, because a life where you do new things and accomplish goals is better than one endlessly repeated (though I would prefer the 200 repetitive years to being killed outright). This thought experiment led me to conclude that, for many people, a life where one makes progress and accomplishes things is better than one where the same experiences are repeated endlessly.

"But wait!" I thought, "We already are sort of living in a time loop!  If the MWI is correct than there are countless copies of us all having the exact same experiences repeated endlessly!  Does that mean that if killing someone allowed you to lead a more accomplished life, that it would be alright, because all you'd be doing is reducing the amount of repetitions in an already repeating life?  This seems obviously wrong to me, there must be something wrong with my analogy.

I have made a list of possible reasons why death might still be bad if copies of you in other worlds survive, but not be as bad if you have made copies of yourself in the same world.  I have also made a second list of reasons why the thought experiment I just described isn't a good analogy to MWI.  I'd appreciate it if anyone had any ideas as to whether or not I'm on the right track.

The first list, in order of descending plausibility:

1. I am a moron who doesn't understand quantum physics or the MWI properly, and if I did understand them properly this conundrum wouldn't be bothering me.

2. When someone is duplicated through MWI all the relevant factors in their environment (other people, resources, infrastructure, etc.) are duplicated as well.  Because of this, the moral worth of an action in one world out of many is exactly the same as what it would be if there was only one world.  This seems very plausible to me, but I wish I could see a more formal argument for it.

3. The fact that the multiple worlds cannot currently, and probably never will be able to, interact in any significant way, makes it such that the moral worth of an action in one world out of many is exactly the same as what it would be if there was only one world. I think this might be what Eliezer was talking about when he said: "I would suggest that you consider every world which is not in your future, to be part of the 'generalized past.'", but I'm not sure.

4. Reasons 2 and 3 combined.

5. If the only version of you in a world dies, then you cease to be able to impact that world in any way (i.e., continue important projects, stay in touch with your loved ones). This is not the case with technological duplicates living in the same world. This seems slightly plausible to me, but it still seems like it would be wrong to kill someone who had no strong social ties or important projects in life, regardless of how many of them might exist in other worlds.

6. It's impossible to kill just one version of a person in the multiverse. Any death in one world will result in a vast number of other deaths as the worlds continue to diverge.

7. Some kind of quasi-Rawlsian argument where one should try to maximize one's average wellbeing in the worlds one is "born into."  I think Eliezer might have made such an argument in "For the People Who Are Still Alive."

8. Survival via making copies is a form of survival, but it's a crappy type of survival that is inferior to never having the original be destroyed in the first place. It's sort of like an accident victim losing their legs: it's good they are alive, and their future life will probably be worth living (at least in a first-world country with modern medicine), but it would have been a lot better if they had survived without losing their legs.

9. It's good to have as many copies of yourself as possible, so killing one is always bad. This seems implausible to me. If I discovered that someone was going to use technology to run off a large number of versions of themselves, stealing large amounts of resources to do so and radically decreasing the quality of other people's lives, then it would be right to stop them. Also, it seems to me that if I could spend money to duplicate myself, I would devote some money to that, but devote other money to enriching the lives of existing copies.


The second list (reasons why my "time loop" thought experiment isn't analogous to MWI):

1. Again, it's impossible to kill just one version of a person in the multiverse. Killing someone to improve your life in Many Worlds would be like having Omega stick two people in a "time loop," and then have one kill the other at the start of every morning.

2. One life of accomplishment is better than one repetitive life, but it isn't better than two repetitive lives.

3. Prioritarianism is closer to the Meaning of Right than pure utilitarianism, so even if one life of accomplishment is better than two repetitive lives, it's still wrong to kill someone to improve your own life.


Again, I would really appreciate it if someone could explain this for me. This problem has really been upsetting me. I have trouble focusing on other things, and it's negatively affecting my mood. I know that the fact that I can be affected so severely by an abstract issue is probably a sign of deeper psychological problems that I should get looked at. But I think realizing that this problem isn't really a problem, and that it all adds up to normality, is a good first step.


Meta note: You shouldn't feel embarrassed; this is exactly the kind of topic discussion is for, and you've done the courtesy of attempting to think through the problem carefully before asking for help.


This has filled me with the distressing thought that the badness of death might somehow be diminished because of this

I find this statement disturbing. It reads as if you'd really like X to be as horrible as possible, to justify your preexisting decision to fight X.

status quo bias

I find this statement disturbing. It reads as if you'd really like X to be as horrible as possible, to justify your preexisting decision to fight X.

You're right, I phrased that badly. Let me restate it in a more rigorous fashion:

  1. I find certain arguments regarding the morality of making multiple versions of people to be plausible.
  2. However, when I apply those arguments to scenarios involving MWI, they seem to generate results that contradict basic, common sense morality (such as "don't kill people").
  3. Either the moral arguments are wrong, common sense morality is wrong, or I am applying the arguments to MWI scenarios incorrectly.
  4. Regardless of which of the above is correct, it means that at the moment I am extremely uncertain of very basic moral principles.
  5. I am the sort of person who feels extreme distress at the prospect of uncertainty, especially moral uncertainty.
  6. Therefore, thinking about these things makes me feel extreme distress.

So, to put it in a more succinct fashion, I don't think my distress is caused by the Sunk Cost Fallacy. Rather, it is caused by an extreme dislike of moral and philosophical uncertainty. I can see how you could interpret my initial post otherwise, however.

In general I tend to get very worried any time my explicitly reasoned moral beliefs contradict common sense morality, for reasons PhilGoetz explains better than I can. For every one of these contradictions that is genuine moral progress there are thousands of crazy errors. In this case I suspect I'm just applying MWI improperly, but the uncertainty is very distressing to me.

This has filled me with the distressing thought that the badness of death might somehow be diminished because of this. I am aware that Eliezer has written articles that seem to explain why this is not the case ("Living in Many Worlds" and "For the People Who Are Still Alive"), but I have read them and am having trouble grasping his arguments.

This is a question that comes up (comparatively) frequently and it can be tricky to convey all the intuitions and conclusions in the space of a comment. My attempt to explain how this and related evaluations "mostly add up to normal" can be found in Preference For (Many) Future Worlds.

Short answer: If you think a 50%/50% split of death vs gaining $200 is drastically better if done with a quantum coin than with a deterministic one, then you are probably confused. It's technically possible to have coherent values where future Everett branches where you die don't count, but it is a weird and arbitrary preference system. It is worth being very, very sure about values, meta-ethics and physics before actually biting a bullet that encourages quantum-immortality-based decision making.

weird and arbitrary preference system

As is any; I've never seen a convincing reason to prefer valuing other Everett branches over valuing only your experienced one.

EDIT: I stand by this even with the downvotes.

As is any

Most things are not weird. That's kind of the point of the word.

Focus on arbitrary then.

But starting from the viewpoint that Everett branches where I die don't count to me, the viewpoint of weighing universes over mental states is the one that feels weird. 'As is any', because the 'weirdness' or 'non-normalcy' of anything depends on what you consider to be normal at the start.

If my goal is to maximize some weighted average of everyone's utility function across the multiverse then, all else being equal, the death of someone who would prefer to have been alive is "bad". But if, due to limited resources, my choice is between having (all else equal) (a) lots of James-ems alive for one hour each + one that lives indefinitely or (b) just one James-em that lives indefinitely, then I should prefer (a), assuming that the short-lived James-ems would rather live for one hour than zero hours.

The fact that the existence of a many-worlds multiverse means there are lots more people doesn't impact the "badness" of the death of an individual any more than the normal growth of the human population does, since the more people there are, the more likely it is that an experience which has given you joy has already occurred in a pretty similar way to someone else (a parent seeing his child take her first step, having sex for the first time, consuming cocaine...). Would you be less bothered by the death of a man if you later found out that the man had an identical twin whom he never met?

Your "Adding 200 years to your life that you would spend being stuck in a 'time loop'" example also seems analogous to the wireheading thought experiment.


Well, if quantum immortality worked, so would quantum insomnia, or quantum sobriety, or the like. Suppose you are distracted in the middle of a thought - there's a you that wasn't distracted, so why aren't you always him? Or: there has to be a parallel you that, via sheer chance, effectively did not advance a timestep, and he's more similar to you now than the one that did.

More generally, don't take things too seriously for mere lack of a counterargument. Fringe ideas are mostly discussed by their promoters, while detractors have bigger fish to fry.

Just some thoughts here... Not sure if they will solve the problem, but they might help you think about it:

  1. Does it matter at all what form of "Many Worlds" you accept? Consider Max Tegmark's multiverse hierarchy: In Level 1, all your copies live in the same universe as you, but just widely separated (typically by a googolplex light-years or so). However, they could in principle communicate with each other, if they waited long enough. In Level 2, they live in different "pocket" universes, and can never communicate with each other, no matter how long they wait. In Level 3, they live in different branches of a quantum wave function. In Level 4, they occupy different mathematical structures; all such structures are real.

  2. Do you think there are ANY real ethical questions where an acceptance of many worlds changes the answer? Frame each such question as a choice between two actions A and B, then imagine that the Earth and its neighbourhood (a few nearby planets and stars, say) is all there is to reality, and all there will ever be. Let's say that the ethical action in that case is A. Are there any cases where the right action changes to B if there is a multiverse? Or, if you can imagine a few (philosophers are good at that), are there any that you seriously expect to arise in your life?

  3. The fact that the multiple worlds cannot currently, and probably never will be able to, interact in any significant way, makes it such that the moral worth of an action in one world out of many is exactly the same as what it would be if there was only one world.

This is actually a reasonably lay-friendly way of summarizing the case for the quantum-mechanical independence axiom. In non-interacting cases you should just do what you think is best for your own future (or past, but that's a decision theory feature :P ), treating the quantum-mechanical amplitude-squared as a probability.

But I think the main reason why this time loop stuff sounds bad is just because all this extra time loop "measure" has no causal connection to outside. It breaks the normal laws of physics, so of course you get abnormal results.

You want a reason why death would still be bad?

Pause for a moment and consider what an existence that continues beyond the moment of death would be like. You get shot in the heart - in one universe you die, in another you wake up in a hospital with severe medical problems. You die in 99.9% of universes; in the rest you continue through for months, your body struggling to recover. You live in extreme pain that will not end, struggling with health issues that medical technology isn't sufficient to resolve. Most likely you have brain damage.

A thousand years later, your body is wracked with cancers that cannot kill you.

All of those things and worse happen in some universes, regardless of your choices in this one - because of the choices made by you earlier or later, because of the choices made by other agents, and due to random non-agenty events. There is no way to know anything about it, and there is nothing in QM which tells us how to influence it, so there is no point worrying about those other hypothetical universes when deciding what to do in the only one we perceive.

I was addressing the horror inherent in quantum immortality as a reason to continue fearing death events; we're not talking about universes you aren't perceiving, but the only universes in which you survive a specific event.

Quantum immortality gave me hope at first, until I thought about it. Now it sounds positively hellish. Given quantum immortality, clinical immortality is a moral imperative.

Quantum immortality gave me hope at first, until I thought about it. Now it sounds positively hellish. Given quantum immortality, clinical immortality is a moral imperative.

I think the probability that someone survives 2 weeks due only to quantum immortality is absolutely dwarfed by the probability that they survive 2 weeks due to someone else (maybe an AI) having an ah-ha moment and inventing a clinical cure sometime during those 2 weeks. The probability of a secret government project inventing immortality and regeneration in the '50s and randomly deciding to give it to everyone 60 years later starts to look fairly likely versus months of random quantum events sustaining human life and consciousness against the laws of thermodynamics.

Dying with quantum immortality would be very strange and world-altering but probably not completely hellish. The experience of the dying person would rapidly get caught up in the majority of remaining universes where they didn't die for random but semi-plausible reasons. Sure, in a Tegmark IV universe some branches just keep on being hellish forever but those are also so rare that someone experiencing it should legitimately expect things to get better in the majority of their futures.

If quantum mechanics really does add up to normality then at any point in the multiverse's configuration space we should expect normality in the future.

Miraculous survivals happen every day; every day, patients in all sorts of conditions beat the odds and live one more day, one more hour, one more minute, one more second. I don't know what odds you're looking at, but the ones I'm looking at aren't even billion-to-one odds, and they get reset constantly; every moment experienced hits the reset button.

To properly calculate the odds on this kind of survival, you don't get to use the odds of surviving a year, a month, a week, a day, or even an hour or a minute. You have to use the odds of surviving through the next momentary experience, the next thought. Consciousness is going to flow down the probability path of least resistance; even if your odds of year-long survival are dominated by secret government projects saving your life and rebuilding you, you're still most likely to end the year surviving through the hellish route, because for any given moment, that's the most likely path to continue existence through.

To explain this counterintuitive result, imagine two lotteries: one has million-to-one odds, one trillion-to-one odds. You -must- win. You get to keep playing until you win. If you win the million-to-one, you start over, and play again. If you win the trillion-to-one game AND the million-to-one game in the same round (any route of existence in which the government agent steps in and then you die before he does his work doesn't count), the game ends. Even if it's more likely to win the trillion and million deal in round 1 than to beat the odds some arbitrarily large number of times, you're still more likely to end up in a universe in which you merely beat the odds some arbitrarily large number of times, because all you need to do to end up in that universe is win the million-to-one odds on round 1, rather than the million and trillion odds. The issue with quantum immortality is that you don't get to compute odds properly; the most likely universe isn't necessarily the universe you're most likely to end up in.

A discontinuous existence may be preferable in such situations, therefore, to stack the odds. A year-long coma would break the continuum, for instance; then you get to calculate the total odds rather than the continuous odds.
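
For concreteness, here is a minimal Python calculator for the two-lottery game described above (all numbers are purely illustrative). It tracks, round by round, how much of the surviving measure is still grinding through the million-to-one lottery versus how much has had its game ended:

```python
# Two-lottery game sketch: each round you must win the million-to-one
# lottery to stay in the game; winning the trillion-to-one in the same
# round ends the game. Branches that lose are discarded (you "must" win).

P_GRIND = 1e-6    # win this round's million-to-one lottery
P_RESCUE = 1e-12  # also win this round's trillion-to-one lottery

grinding = 1.0    # measure of branches still playing
ended = 0.0       # measure of branches whose game has ended

for rnd in range(1, 6):
    ended += grinding * P_GRIND * P_RESCUE  # grind AND rescue this round
    grinding *= P_GRIND * (1 - P_RESCUE)    # grind only; play again
    alive = grinding + ended
    print(f"round {rnd}: P(still grinding | alive) = {grinding / alive:.3g}")
```

On round 1 the grinding branches dominate, as claimed; how long they keep dominating depends on the odds you plug in.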

To properly calculate the odds on this kind of survival, you don't get to use the odds of surviving a year, a month, a week, a day, or even an hour or a minute. You have to use the odds of surviving through the next momentary experience, the next thought. Consciousness is going to flow down the probability path of least resistance; even if your odds of year-long survival are dominated by secret government projects saving your life and rebuilding you, you're still most likely to end the year surviving through the hellish route, because for any given moment, that's the most likely path to continue existence through.

Once the government invents immortality, the probability of it succeeding every second is much better than the probability of it being invented in the first place. Assume you have a 10^-6 probability of surviving through quantum immortality every second, a 10^-12 probability of surviving due to the invention of government immortality, and a 10^-5 probability of surviving due to the prior existence of government immortality (we'll pretend it's terribly risky, but more likely to keep working than quantum immortality). Out of 10^72 initially identical universes, after one second there are 10^60 government immortals and 10^66 quantum immortals.

After 2 seconds there are 10^55 prior-government immortals and (10^60 + 10^66) * 10^-6 quantum immortals. I'm going to drop the number of quantum immortals who become government immortals after this point since they're much smaller and make the totals too complex.

After 3 seconds there are 10^50 prior-government immortals and (10^55 + 10^54 + 10^60) * 10^-6 quantum immortals.

After 4 seconds there are 10^45 prior-government immortals and (10^50 + 10^49 + 10^48 + 10^54) * 10^-6 quantum immortals.

After 5 seconds there are 10^40 prior-government immortals and (10^45 + 10^44 + 10^43 + 10^42 + 10^48) * 10^-6 quantum immortals.

After 6 seconds there are 10^35 prior-government immortals and (10^40 + 10^39 + 10^38 + 10^37 + 10^36 + 10^42) * 10^-6 quantum immortals.

After 7 seconds there are 10^30 prior-government immortals and (10^35 + 10^34 + 10^33 + 10^32 + 10^31 + 10^30 + 10^36) * 10^-6 = 1.111111 * 10^30 quantum immortals, or almost even odds.

After 8 seconds there are 10^25 prior-government immortals and only 2.111111 * 10^24 quantum immortals.

By the time there are no quantum immortals left in these 10^72 universes there are still ~ 10^10 government immortals.

Basically, any time the universe changes, your lotteries change. The invention of government immortality affects all future universes sharing that history, whereas simply maintaining a quantum-hellish existence doesn't change the universe enough to alter the lotteries. I also didn't account for the fact that quantum immortality becomes more likely once government immortality exists (whatever changes government immortality causes raise your overall chances of survival, so they have to raise the probability that random quantum events can cause you to survive), but as long as surviving one second after government immortality has already been invented is more likely than surviving through quantum immortality, it eventually becomes much more likely to have survived because of government immortality.
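
For anyone who wants to check the arithmetic, here is a minimal Python sketch of the branch-counting above. The recurrence is my reconstruction from the listed totals, and every probability is this comment's illustrative figure, not a real estimate:

```python
# Branch-counting sketch for the government-vs-quantum immortality
# argument above. Counts are expected numbers of surviving copies.

N = 10**72          # initial identical universes
P_QUANTUM = 1e-6    # survive one second by sheer quantum luck
P_INVENT = 1e-12    # government immortality invented this second
P_GOV = 1e-5        # survive one second with prior government immortality

gov = N * P_INVENT       # after 1 second: 10^60 government immortals
quantum = N * P_QUANTUM  # after 1 second: 10^66 quantum immortals

for t in range(2, 14):
    # Government immortals survive at the better rate P_GOV; anyone
    # alive last second can also scrape through on quantum luck.
    # (Quantum immortals who become government immortals are dropped,
    # as in the comment.)
    gov, quantum = gov * P_GOV, (gov + quantum) * P_QUANTUM
    print(f"after {t:2d}s: government {gov:10.3e}, quantum {quantum:10.3e}")
```

Running it reproduces the totals above: the quantum immortals stay ahead through about the seventh second, and the government immortals dominate ever more heavily from then on.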

My point was that quantum immortality doesn't raise the odds of government immortality. Your odds of being saved by government don't improve in the event of quantum immortality.

What's happening in other universes doesn't change what's happening in yours. Quantum immortality is going to operate on the same terms as evolution: the cheapest short-term path, even if the long-term path is more expensive - in this case, in terms of probabilities.

If the odds of government having already invented clinical immortality spontaneously are one in a trillion, your odds of being saved by that remain one in a trillion, because your short-term survival doesn't take into account the long-term probability costs. Just because your probable futures are 99.9999% government-discovered clinical immortality doesn't mean your odds of getting that clinical immortality are 99.9999%. Quantum immortality only guarantees your survival; it doesn't prefer any means, and it only operates on a moment-by-moment basis. A major component of quantum immortality is that it skips probability assessments in favor of immediate short-term survival. The lottery is rigged. Not all lotteries are rigged equally, however; long-term survival lotteries don't get any weight at all. Only short-term survival lotteries get rigged.

Quantum immortality does increase the probability of government immortality in the universes where you are still alive. That was why I started with 10^72 identical universes: to directly calculate how many branches had living copies after 10 or 11 seconds. It turned out that 10^10 of the universes had a copy that was government-immortal, and 0 had a copy that was quantum-immortal, so any living individual after 11 seconds most likely did not survive those 11 seconds via quantum immortality. I think you should expect to experience what a majority of your future copies will experience. Of course you never die out of the metaverse completely, but even if I started with 3^^^^3 identical universes, after 11 seconds there would still be many orders of magnitude more government-immortals than quantum-immortals, and that ratio would only increase the longer I calculated. Even if I look at each quantum immortal individually, it has the same probability tree in its future, so it should expect government immortality to dominate its own future copies even if it's the result of 3^^^3 seconds of quantum immortality. What am I doing wrong?

You're taking the end-probability and using it as the probability of arriving at that location; you're looking at the problem and asking, assuming I'm alive in ten seconds, what's the most likely way for me to be alive? And asking the question that way, clinical immortality gets a huge boost.

The question, however, comes down to some atomic unit of existence. I don't know what that atomic unit is, but in order for it to add up to normality, it has to survive things like unconsciousness and sleep. Quantum immortality is going to be the process of slipping from atomic unit to atomic unit, even when they're not necessarily in the same universe. There isn't going to be any forward mapping; it will go straight ahead to the "cheapest" next step, which will get incrementally more expensive (unlikely). Maybe the most likely next step is waking up from an unusually realistic dream, maybe it's waking up in a hospital. If quantum immortality works, long-term probability won't enter into it. It will follow the cheapest short-term path.

Or, to put it in more general terms, when you start rigging probabilities, you can't simply count the number of possible paths and assume the most common is the most likely.

The question, however, comes down to some atomic unit of existence. I don't know what that atomic unit is, but in order for it to add up to normality, it has to survive things like unconsciousness and sleep. Quantum immortality is going to be the process of slipping from atomic unit to atomic unit, even when they're not necessarily in the same universe. There isn't going to be any forward mapping; it will go straight ahead to the "cheapest" next step, which will get incrementally more expensive (unlikely). Maybe the most likely next step is waking up from an unusually realistic dream, maybe it's waking up in a hospital. If quantum immortality works, long-term probability won't enter into it. It will follow the cheapest short-term path.

This can't actually be true, though, or else we would never experience any unlikely quantum events. Quantum mechanics works the same whether it makes us immortal or not, so if we expected our experience to only go to the most likely future then we would never see anything unlikely. We would never see radioactive decay because it's less likely than not-radioactive-decay. If we never, ever detect a proton decay then there might be something to the idea that we only experience unlikely quantum events above a certain threshold (or our model of proton decay is wrong), but otherwise it seems like we do have to count up all possibilities and compute our expected outcome from the measure of our experiences in each possibility.

My language was imprecise; we should -expect- to go to the most likely future. The key point however is short-term probability rather than long-term probability.


The choices you make in these universes necessarily correlate with (though are not necessarily caused by) the decision you make in this one. Deciding as if you're deciding for the other instances of yourself as well is how TDT, UDT, and my decision theory (still unfinished and unnamed) work.

The choices you make in these universes necessarily correlate with (though are not necessarily caused by) the decision you make in this one.

You are confusing decision theory with MWI. Possible worlds in DTs have little to do with the RQM multiverse.

As you navigate rough philosophical terrain, always remember: everything adds up to normalcy.

This is another of those cases where I would argue the 'does it matter' line. Under MWI, there are countless universes containing many different states, some of which I would prefer less than others - which I have no control over. I can fret about them, worry about what could have been in those other forks, even draw some conclusions; but at the end of the day, the only decisions and actions I can appear to take are here, for my value function here, in this instance of the universe.

Just because there may be many worlds in which I don't die, that doesn't make my possible death here any less bad.

  1. Assume your optimal dread - there is no moral reason not to kill others, with or without an advantage to yourself. There remain legal reasons not to kill others. Prison is living a long time repeating each day largely like the last, which you have identified as undesirable. It is practical and efficient not to kill others, and that is sufficient reason not to do so.

  2. Everything is connected to everything else, but not equally connected. Butterflies flitting about in Kansas do influence ice patterns on Mars, but not as much (by far) as local conditions on Mars. Your local moral conditions matter more than across all time and space. You can act accordingly.

  3. "I cannot know X as a fact" does not lead to "and therefore I must anti-X."

Assume your optimal dread - there is no moral reason not to kill others, with or without an advantage to yourself. There remain legal reasons not to kill others. Prison is living a long time repeating each day largely like the last, which you have identified as undesirable. It is practical and efficient not to kill others, and that is sufficient reason not to do so.

So you would kill people if you were sure you wouldn't get caught?

My own understanding of "Many Worlds" is not necessarily that everything which is possible does exist. We don't yet have any real explanation of the Born probability rule; it may turn out that the worlds with very, very low probability just don't "really exist", as the mangled worlds hypothesis claims.

Actions at the macroscopic level are, most of the time, not affected by quantum noise. Unless you're very, very close to the threshold, a whole neuron firing differently in your brain has a very, very low probability (remember, probabilities are multiplicative), which means that in most cases, all the worlds with significant Born probability (the non-mangled ones) will have you acting the same (or the sociopath with a knife acting the same).
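
To make "probabilities are multiplicative" concrete, here is a toy Python calculation; the event count and per-event probability are invented purely for illustration, not real neuroscience:

```python
import math

# Toy multiplicative-probability calculation: if flipping one neuron's
# decision to fire required 10,000 independent quantum events to each
# go a particular way with probability 0.01, the joint probability is
# the product of all 10,000 individual probabilities.
n_events = 10_000
p_each = 0.01

log10_joint = n_events * math.log10(p_each)  # log10 of p_each ** n_events
print(f"joint probability ~ 10^{log10_joint:.0f}")  # ~ 10^-20000
```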

Only when a very small quantum-level event can have a huge macroscopic effect (like the Schroedinger's cat experiment) do you really have worlds with different macroscopic outcomes. And yes, you can have such "chaos theory" effects, like the butterfly effect, even if we don't artificially devise experiments that way, but they aren't frequent.

Disclaimer : I'm a computer scientist, not a physicist nor a biologist.

This strikes me as a series of excuses for not biting a bullet. The thing is... it's not a bullet that would need to be bitten anyway.

We don't yet have any real explanation of the Born probability rule; it may turn out that the worlds with very, very low probability just don't "really exist", as the mangled worlds hypothesis claims.

Sure, it is possible that mangled worlds or something else that made quantum amplitudes discrete in some way would influence our preferences but we don't have a solid reason for assuming this to be the case. It is worth looking at how quantum mechanics actually appears to behave and evaluating preferences accordingly.

Only when a very small quantum-level event can have a huge macroscopic effect (like the Schroedinger's cat experiment) do you really have worlds with different macroscopic outcomes. And yes, you can have such "chaos theory" effects, like the butterfly effect, even if we don't artificially devise experiments that way, but they aren't frequent.

Sure, they aren't frequent... but the very premise of the question the post asks relies on implicit selection effects that select for specific highly improbable events. It for some reason arbitrarily selects only the freakishly unlikely "live" branches and chooses not to care about the "die" branches. This gets progressively more extreme the more 'immortal' the actor is considered to be. So sure, Schroedinger's cat is a contrived circumstance and doesn't happen naturally very often, but such events are just the kind that come into play with "immortality" selection.

Actions at the macroscopic level are, most of the time, not affected by quantum noise. Unless you're very, very close to the threshold, a whole neuron firing differently in your brain has a very, very low probability (remember, probabilities are multiplicative), which means that in most cases, all the worlds with significant Born probability (the non-mangled ones) will have you acting the same (or the sociopath with a knife acting the same).

If I understand you correctly, what you are saying is that if something like "mangled worlds" holds true it may be that if someone is killed in one universe they are killed in every other universe a perfectly identical version of them exists in, because the quantum differences in those universes never add up to a big enough effect to "save" them. This seems plausible to me, and fits well with the sixth reason on my list.

Only when a very small quantum-level event can have a huge macroscopic effect (like the Schroedinger's cat experiment) do you really have worlds with different macroscopic outcomes. And yes, you can have such "chaos theory" effects, like the butterfly effect, even if we don't artificially devise experiments that way, but they aren't frequent.

Are they that rare? People never decide what to do using quantum random number generators?

I use a quantum coinflip app all the time. I have more nonmangled measure than any of you classical suckers!

There doesn't seem to be an Android version of that. :-(

(I ‘usually’ flip a virtual Italian €2 coin. The scare quotes are because I only did that once in the past year.)

First, consider that if death isn't as bad as we seem to think it is, that's actually great news!! Because it's happened a lot already.

This has filled me with the distressing thought that the badness of death might somehow be diminished because of this.

This sounds like you're experiencing motivated cognition, which makes me reluctant to say that, in fact, I don't think you need to worry. It sounds to me like this has the same "resolution" as quantum suicide: what you really should care about is your measure (how much of you there is in the multiverse, whatever that means). Reduction in measure seems about as bad as dying in non-multiverse scenarios; every (information-theoretic) death permanently shuts you out of huge regions of the multiverse, and causes googolplexes of your friends and family members great emotional distress.

Plus, things are only guaranteed to happen in the multiverse if they can happen, and you don't necessarily know what can happen...

This sounds like you're experiencing motivated cognition, which makes me reluctant to say that, in fact, I don't think you need to worry.

I'm very sorry, I phrased that sentence you quoted very badly. Here is a more detailed explanation of why I am feeling distressed (it's a reply to someone else who had the same objection you did).
