Quantum Suicide and Aumann's Agreement Theorem


This scenario does not satisfy the premises for Aumann's Agreement Theorem. There is no common knowledge since in almost all observations by Alice, Bob is dead and in no state to have common knowledge with Alice of anything at all. Even without invoking quantum suicide and anthropic conditionals, the second simple coin-toss example in Aumann's original paper shows a case where the participants do not have common knowledge and hence do not necessarily agree on posterior probabilities.

Alice is right and Bob is wrong, here.

The simple problem is that Bob failed to apply a correction for selection effect to how he updates P(Copenhagen).

The less simple problem is that he's already sneakily applied a correction for selection effect to P(alive|Everettianism), and doesn't even realize it.

I think the setup doesn't quite make sense for three reasons:

1 - The Born rule in Many worlds

I know this isn't the main point of this post at all, but it's important (I think) to remember that Bob's 99.99% "probability" argument doesn't really make sense within any widely accepted version of many worlds.

Within Copenhagen (or QBism, or most quantum interpretations) it is perfectly reasonable to posit a quantum state that responds to a measurement with a 99% chance of giving "heads", e.g. |psi> = sqrt(0.99) |1> + sqrt(0.01) |0>.

However, many worlds claims that when this 99-to-1 state is measured in the computational basis, the universe branches in 2: in one branch Bob sees the 1 and in the other he sees the 0. Experiment suggests that we see the |1> with 99% probability (this is the Born rule), which puts many worlds in a slightly odd place. There are (in my opinion rubbish) arguments that the universe splits in 2, but that one branch is somehow heavier (by a factor of 99) than the other. The (in my opinion) obvious suggestion that the universe splits into 100 branches, in 99 of which Bob sees a 1 and in one of which he sees a 0, is not widely popular, I think because people are worried about irrational numbers. Possibly they are also worried about what it actually means on a philosophical level to posit 99 different copies of a single world, all of which are completely impossible to distinguish from one another, even in principle. How is that actually different from 1 copy?
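If it helps, the equivalence of the two bookkeeping styles can be checked numerically. This is my own illustrative sketch (not from the comment), using the |psi> = sqrt(0.99)|1> + sqrt(0.01)|0> state from above: a 2-branch split with squared-amplitude weights and a 100-branch split with equal weights predict the same Born-rule statistics.

```python
import math

# Amplitudes from the example state |psi> = sqrt(0.99)|1> + sqrt(0.01)|0>
amp_1, amp_0 = math.sqrt(0.99), math.sqrt(0.01)

# View 1: two branches, weighted by squared amplitude (Born rule)
weighted = {"1": amp_1 ** 2, "0": amp_0 ** 2}

# View 2: 100 equal-weight branches, 99 of which show "1"
equal = {"1": 99 / 100, "0": 1 / 100}

# Both assignments give the same observable statistics
assert all(abs(weighted[k] - equal[k]) < 1e-9 for k in weighted)
print(weighted)  # {'1': 0.99..., '0': 0.01...}
```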

2 - Subjective Death

Let's say that we either find a way of incorporating the Born rule into many worlds that is fully convincing, or else we just sidestep the issue by using a very large number of 50/50 universe branches and killing Bob in all but one of them.

We now encounter what I see as a fundamental issue with the setup. Let's say that the quantum random number generator gives Bob a lethal dose of poison with 99.99% probability, and that he then experiences a 10-minute dying process from the poison. In the low-weight world branch where the machine *doesn't* inject Bob, does he have to wait 10 minutes for the versions of him in other worlds to die before he can believe in many worlds? And if the machine instead had only a tiny chance of giving him the poison, the poor, unlucky, envenomed Bob would not conclude that many worlds was true. So why the asymmetry in reasoning between dying and surviving?

This is a long-winded way of saying that the post-selection to only those worlds in which Bob survives is very suspect. Even if we accept that many worlds is true, we end up in a state where there is one living Bob and 9,999 dead (or dying) Bobs. I reject the supposition that the living Bob should update towards believing in many worlds. I also don't believe that the dead (or dying) Bobs should update against many worlds if they get the chance. If we assume that technological resurrection [1] might one day be possible, then suddenly those 9,999 dead Bobs might get the chance to update against many worlds.

3 - Is "quantum" actually the source of the weirdness here?

Leaving aside many worlds and its confusingness, think about the conservation of expected evidence (https://www.lesswrong.com/posts/jiBFC7DcCrZjGmZnJ/conservation-of-expected-evidence) and how it should be handled in cases where one (completely classical) possible outcome kills you immediately and prevents you from updating your beliefs. A man is playing Russian roulette with a classical pistol that kills him with 1/6 probability. He tells you that, whenever he shoots and survives, he can update towards knowing the bullet wasn't in that chamber, because otherwise he wouldn't be able to update at all. I am interested in whether you think the quantum nature of the problem actually introduces anything extra over this Russian roulette setup.
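For what it's worth, the roulette point can be checked with a quick Monte Carlo sketch (my own illustration; the two hypotheses and the 50/50 prior are invented): when two hypotheses assign the *same* survival probability, the surviving players' posterior over them just matches the prior, so survival alone carries no information between them.

```python
import random

random.seed(0)

# Two hypotheses with identical survival odds (5/6, as in the roulette
# example) and a 50/50 prior between them.
N = 200_000
counts = {"h1": 0, "h2": 0}
for _ in range(N):
    hypothesis = "h1" if random.random() < 0.5 else "h2"
    survived = random.randrange(6) != 0  # same 5/6 survival under either
    if survived:
        counts[hypothesis] += 1

# Among survivors, the split between hypotheses matches the prior.
posterior_h1 = counts["h1"] / (counts["h1"] + counts["h2"])
print(f"P(h1 | survived) = {posterior_h1:.3f}")  # stays near the 0.5 prior
```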

[1] or redaction! https://www.lesswrong.com/posts/CKgPFHoWFkviYz7CB/the-redaction-machine

I had a lot of fun with this idea. Made it into a little joke story: https://www.lesswrong.com/posts/zLYBzJttYy49LTpt6/quantum-immortality-foiled

Conditional on Many-Worlds being true, Bob expects to survive with 100% probability.

This is the false bit. "Bob expects to survive" is not a thing that makes sense in Bob's ontology. Bob expects 99.99% of him to die and 0.01% to survive (where future anticipation "expects" rather than "expectation value" expects).

Copenhagen's "Bob survives" is not describing the same scenario as Many-Worlds' "Bob survives", so they are not directly comparable. Think of the 0.01% as 1 in 10,000 possible outcomes. Bob believes that all 10,000 happen, while Alice thinks only one of them is real and the rest are fictitious and never really existed. Alice can make an easy identification before the split and after the split. But if Bob wants to identify with the 10,000 versions, he probably has no reason to favor some over others. And if, for example, he is ignorant of which 1 of the 10,000 survives, it becomes tricky what "me after this event" refers to.

If one uses the word "apple" to mean a fruit and another uses it to mean "computer" there is no agreement because they are not talking about the same thing.

I think it's quite reasonable for "survive past time T" to mean that "at time T+1 there will exist a living human that has psychological and physical continuity from the state of the human at T-1". In the MW interpretation that sort of survival is guaranteed. In the Copenhagen interpretation it has tiny probability.

The *real* problem is that conservation of expected evidence does not hold for Bob. There is no observation he can make that is in favour of Copenhagen over MW.

This can be seen as a variant of the Sleeping Beauty problem. A coin is flipped, and th...

That argument proves too much. E.g. consider Russian roulette and the proposition "the next chamber contains a bullet". This will only ever be (subjectively) disproved, and not proved.

For a many-worlder, "exists" conditions trigger if they are fulfilled in sibling timelines that did not take place.
And even so, one can imagine holding your stomach, having been mortally wounded by the machine. All of that person is going to die; that person won't survive. It is probably of little comfort to know that the person from 10 seconds ago is going to survive. "Bob" is ambiguous over these individuals.

The surprise here depends on the probability of survival. If half the people on Earth were Bobs and the other half were Alices, then a 0.01% chance of survival means that 400,000 Bobs will survive. There is no surprise that some of them survive, neither for Bob nor for Alice.

For example, if you survive until 100 years old, it is not evidence for quantum immortality.

If, however, the survival chance is 1 in 10^12, then even for the whole Earth there would likely be no surviving Bobs under the Copenhagen interpretation. So the existence of a surviving Bob is evidence against it.

For example, if I naturally survive until 150 years old by pure chance, it is evidence for MWI.

Well, really every second that you remain alive is a little bit of Bayesian evidence for quantum immortality: the likelihood of death during that second according to quantum immortality is ~0, whereas the likelihood of death if quantum immortality is false is >0. So there is a skewed likelihood ratio in favor of quantum immortality each time you survive one extra second (though of course the Bayesian update is very small until you get pretty old, because both hypotheses assign very low probability to death when young).
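A hedged numeric sketch of this point (the yearly hazard rate and prior odds are made up for illustration): each survived year multiplies the odds for quantum immortality (QI) by P(survive | QI) / P(survive | no QI), and while young this factor is barely above 1.

```python
# Hypothetical yearly death risk for a young adult (assumed, not measured)
p_death_per_year = 1e-4
# Assumed prior odds, heavily against quantum immortality
prior_odds_qi = 1e-6

# Under QI, P(subjective survival) = 1 each year; without QI it is
# (1 - p_death_per_year). Surviving a year multiplies the odds by the ratio.
years_survived = 50
likelihood_ratio = (1.0 / (1.0 - p_death_per_year)) ** years_survived
posterior_odds_qi = prior_odds_qi * likelihood_ratio

print(posterior_odds_qi)  # ~1.005e-6: the cumulative update is still tiny
```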

If we take the third-person view, there is no update until I am over 120 years old. This approach is more robust, as it ignores differences between perspectives and is thus more compatible with Aumann's theorem: insiders and outsiders will reach the same conclusion.
Imagine that there are two worlds:
1: 10 billion people live there;
2: 10 trillion people live there.
Now we get information that there is a person from one of them who has a survival chance of 1 in a million (but no information on how he was selected). This does not help choose between the worlds, as such people are present in both.
Next, we get information that there is a person who has a 1-in-a-trillion chance to survive. Such a person has less than a 0.01 chance to exist in the first world, but around 10 such people are expected in the second world. (The person, again, is not randomly selected – we just know that she exists.) In that case, the second world is around 100 times more likely to be real.
In the Earth case, it would mean that around 1,000 more variants of Earth actually exist, which could be best explained by MWI (but alien worlds may also count).
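The two-world comparison above can be sketched with the stated numbers: compute the probability that at least one person with a 1-in-10^12 survival chance exists in each world, and take the ratio.

```python
import math

p_survive = 1e-12
n_world1 = 10e9    # world 1: 10 billion people
n_world2 = 10e12   # world 2: 10 trillion people

# P(at least one such survivor) = 1 - (1 - p)^n, well approximated by
# 1 - exp(-n * p) for tiny p.
exists_1 = 1 - math.exp(-n_world1 * p_survive)  # ~0.01
exists_2 = 1 - math.exp(-n_world2 * p_survive)  # ~1

# Likelihood ratio in favour of the larger world, given such a person exists
print(exists_2 / exists_1)  # ~100
```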

This experimental outcome will not produce a disagreement between Alice and Bob, as long as they are following the same anthropic logic.

When saying Bob's chance of survival is 100% according to MWI, the statement is made from a god's eye view discussing all post-experiment worlds: Bob will for sure survive: in one/some of the branches.

By the same logic, from the same god's eye view, we can say, Alice will meet Bob for sure: in one/some of the branches, if the MWI is correct.

By saying Alice shall see Bob with a 0.01% chance no matter if MWI is correct, you are talking about the specific Alice's first-person perspective, which is a self-locating probability according to MWI. As in "what is the probability that *I* am the Alice who's in the branch where Bob survives?".

By taking the specific subject's perspective, Bob's chance of survival is also 0.01% according to MWI. As in "what is the probability that *I* am actually in the branches where Bob survives?".

As long as their reasoning is held at the same level, their answers will be the same.

The real kicker is whether or not they should actually increase their confidence in MWI after the experiment ends (especially in the case where Bob survives). Popular anthropic camps such as SIA seem to say yes. But that would mean any quantum event, no matter the outcome, would be evidence favouring MWI, so an armchair philosopher could say with categorical confidence that MWI is correct. (This is essentially the same problem as Nick Bostrom's Presumptuous Philosopher, but in the quantum context.) So SIA supporters and Thirders have been trying to argue that their positions do not necessarily lead to such an update (which they call the naive confirmation of the MWI). Whether or not that defence is successful is up for debate. For more information, I recommend the papers by Darren Bradley and Alastair Wilson.

On the other hand, if you think finding oneself to exist is a logical truth, and thus has 100% probability, then it is possible to produce a disagreement against Aumann's Agreement Theorem. And the disagreement is valid and can be logically explained. I have discussed it here. I think this is the correct anthropic reasoning. However, this idea does not recognize self-locating probability and is thus fundamentally incompatible with the MWI. Therefore, if Alice and Bob both favour this type of anthropic reasoning, they would still have the same confidence in the validity of MWI: 0%.

Conditional on Many-Worlds being true, Bob expects to survive with 100% probability.

That can only be a subjective probability, since MWI has the same objective probabilities as CI.

Aumann's theorem requires that both agents have the same evidence, as well as the same priors. But the subjective version of "Bob survives" is not the same evidence as the objective version.

Shouldn't Bob not update due to e.g., the anthropic principle?

The anthropic principle only works where there are many possible worlds, i.e. it's precisely why he *should* update.

Is it possible to remove "quantum" and "suicide" and still keep the example working?

Maybe Bob will just flip a coin, strongly believing that his psychic powers can make it come up heads. If he succeeds, he will try to talk to a scientist, Alice. If he fails, he will be embarrassed, and will not contact Alice.

Therefore, the worlds where Alice and Bob talk are the worlds where Bob has (very little, but nonzero) evidence in favor of his psychic powers. But Alice refuses to accept that.

(Bob is too proud to flip a coin more than once. He does not want to ruin his perfect record.)

I'm inside-view fairly confident that Bob should be putting a probability of 0.01% on surviving conditional on many worlds being true, but it seems possible I'm missing some crucial considerations having to do with observer selection stuff in general, so I'll phrase the rest of this as more of a question.

What's wrong with saying that Bob should put a probability of 0.01% of surviving conditional on many-worlds being true – doesn't this just follow from the usual way that a many-worlder would put probabilities on things, or at least the simplest way for doing so (i.e. not post-normalizing only across the worlds in which you survive)? I'm pretty sure that the usual picture of Bayesianism as having a big (weighted) set of possible worlds in your head and, upon encountering evidence, discarding the ones which you found out you were not in, also motivates putting a probability of 0.01% on surviving conditional on many-worlds. (I'm assuming that for a many-worlder, weights on worlds are given by squared amplitudes or whatever.)
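A minimal sketch of that "discard the worlds you're not in" bookkeeping, using the post's 0.01% survival figure and an assumed 50/50 prior over interpretations: if both interpretations assign the same weight to the surviving worlds, discarding the worlds ruled out by surviving leaves the prior untouched.

```python
# Assumed 50/50 prior over the two interpretations
prior = {"many_worlds": 0.5, "copenhagen": 0.5}
# On this view, both assign 0.01% weight to the worlds where Bob survives
p_survive = {"many_worlds": 1e-4, "copenhagen": 1e-4}

# Keep only the surviving worlds and renormalize (Bayes' rule)
joint = {h: prior[h] * p_survive[h] for h in prior}
total = sum(joint.values())
posterior = {h: w / total for h, w in joint.items()}

print(posterior)  # {'many_worlds': 0.5, 'copenhagen': 0.5} -- no update
```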

This contradicts a version of the conservation of expected evidence in which you only average over outcomes in which you survive (even in cases where you don't survive in all outcomes), but that version seems wrong anyway, with Leslie's firing squad seeming like an obvious counterexample to me, https://plato.stanford.edu/entries/fine-tuning/#AnthObje .

(By the way, I'm pretty sure the position I outline is compatible with changing usual forecasting procedures in the presence of observer selection effects, in cases where secondary evidence which does not kill us is available. E.g. one can probably still justify looking at the base rate of near misses to understand the probability of nuclear war instead of relying solely on the observed rate of nuclear war itself.)

Conditional on Many-Worlds being true, Bob expects to survive with 100% probability. Conditional on Copenhagen being true, Bob expects to survive with only 0.01% probability. So if Bob exits the box alive, this is strong evidence for him in favor of Many-Worlds.

I disagree that Many-Worlds and Copenhagen give any different predictions for Bob here. The expected value of the observable (is Bob alive?) is not only the same for Alice under both interpretations; it also remains the same (i.e. 1) when conditionalized on Bob's still having experiences.

A chain of thought experiments connecting regular conditional probability to quantum suicide:

**Thought experiment 1:**

Imagine Bob buys a quantum lottery ticket and hires an unstoppable kidnapper to put him in a red room later the same day if he won the lottery, and in a blue room if he lost.

Regardless of which theory of quantum mechanics he prefers, Bob should expect to most likely experience losing the lottery, but conditional on waking up in a red room the next day, the probability that he won the lottery is 100%.

**Thought experiment 2:**

Same as above, except the unstoppable kidnapper is now an unstoppable assassin who kills him if he lost. I say: this doesn't change the probability of what he experiences at the time of winning the lottery, or the conditional probability of what he remembers given that he is in a red room/alive the next day.

**Thought experiment 3:**

Same as above, except replace the quantum lottery and the assassin with a quantum suicide machine. If it kills him, the machine first momentarily shines a blue light on Bob. Again, same probabilities/conditional probabilities.

**Thought experiment 4:**

Same as above, except no blue light. The only thing this changes is what Bob expects to experience in the moment, not what he should expect conditional on still having experiences later.

Alice and Bob want to know which interpretation of quantum mechanics is true. They devise an experiment: Bob will enter a box, where a quantum random number generator kills him with 99.99% probability.

Conditional on Many-Worlds being true, Bob expects to survive with 100% probability. Conditional on Copenhagen being true, Bob expects to survive with only 0.01% probability. So if Bob exits the box alive, this is strong evidence for him in favor of Many-Worlds.

Alice, on the other hand, calculates a 0.01% probability of seeing Bob exit the box alive, no matter which interpretation is true. Moreover, she knows that in the world where she sees Bob come out alive, Bob will be convinced that Many-Worlds is true, and this is again independent of which theory is really true.

As such, if Bob exits the box alive, Bob should update strongly in favor of Many-Worlds being true, and Alice should leave her prior probability unchanged, as she has gained no evidence in either direction.

But if Alice and Bob are fully rational Bayesian agents with the same priors, Aumann's Agreement Theorem says that they should land on the same posterior probability for the truth of Many-Worlds after Bob exits the box. Any evidence that's valid for Bob should be equally valid for Alice, and they shouldn't be able to "agree to disagree" on what probability to assign to the Many-Worlds interpretation.

What resolves this seeming contradiction?
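The two updates described above can be made concrete as a small Bayes calculation, assuming a 50/50 prior over interpretations for illustration, with each agent's stated likelihoods for the observation "Bob exits alive":

```python
prior_mw = 0.5  # assumed prior on Many-Worlds

# Bob's likelihoods as stated: P(alive | MW) = 1, P(alive | Copenhagen) = 0.01%
bob_posterior = (1.0 * prior_mw) / (
    1.0 * prior_mw + 1e-4 * (1 - prior_mw)
)

# Alice's likelihoods: 0.01% under either interpretation
alice_posterior = (1e-4 * prior_mw) / (
    1e-4 * prior_mw + 1e-4 * (1 - prior_mw)
)

print(f"Bob:   {bob_posterior:.4f}")    # ~0.9999, near-certain of Many-Worlds
print(f"Alice: {alice_posterior:.4f}")  # 0.5000, unchanged from the prior
```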