Today's post, Living in Many Worlds, was originally published on 05 June 2008. A summary (taken from the LW wiki):

 

The many worlds of quantum mechanics are not some strange, alien universe into which you have been thrust. They are where you have always lived. Egan's Law: "It all adds up to normality." Then why care about quantum physics at all? Because there's still the question of what adds up to normality, and the answer to this question turns out to be, "Quantum physics." If you're thinking of building any strange philosophies around many-worlds, you probably shouldn't - that's not what it's for.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Why Quantum?, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.


You cannot "choose which world to end up in". In all the worlds, people's choices determine outcomes in the same way they would in just one single world. The choice you make here does not have some strange balancing influence on some world elsewhere. There is no causal communication between decoherent worlds. In each world, people's choices control the future of that world, not some other world.

I may be misunderstanding what is being said; if so, thank you for correcting me. If choice now equals a determined future, does that mean choice in the past equals a determined now? And when does any kind of free will enter the picture for us to be making choices at all?

At this point in the sequences, Eliezer starts discussing free will. Spoiler alert: you're supposed to dissolve the question.

If you're thinking of building any strange philosophies around many-worlds, you probably shouldn't - that's not what it's for.

This includes any discussion of quantum immortality/suicide and MWI-based arguments for decision theories, anthropic biases, etc. If you can construct a certain line of reasoning about the observed world using MWI, you should be able to do the same if you assume that only this single world "exists". Granted, it might not be as obvious, but it must be doable.

There is nothing wrong with using MWI for inspiration, there is everything wrong with saying that some argument is true "because MWI!".

If you can construct a certain line of reasoning about the observed world using MWI, you should be able to do the same if you assume that only this single world "exists". Granted, it might not be as obvious, but it must be doable.

This seems false. There is no particular reason that all things must be reducible to a single world map without loss.

There is no particular reason that all things must be reducible to a single world map without loss.

As I understand, MWI is empirically equivalent to other interpretations of quantum mechanics. You should be able to justify any particular course of action without a metaphysical commitment to the reality of unobservable components of the universe's wave function.

You should be able to justify any particular course of action without a metaphysical commitment to the reality of unobservable components of the universe's wave function.

A (counterfactual) agent is accelerated (very) rapidly away from you, taking with him someone you care about and leaving someone he cares about. He passes out of your future light cone. Both the agent and your loved one are now unobservable components of the universe's wave function. You and the agent have enough information about each other that you can make predictions about each other's behavior. Each of you can choose to be kind to the loved one of the other (at a slight net cost in utility to yourself and a significant gain to the other) or to exploit them for a slight gain to yourself. You know that the agent behaves according to UDT. Do you exploit the agent's loved one or cooperate, being kind?

If you corrupt your model of reality such that you believe parts of the universe's wave function don't exist when you cannot observe them, then you will defect. You will be making a mistake. Your policy would make you Lose!
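A toy sketch of the payoff structure in this game (the specific numbers below are assumed for illustration; the comment only gives qualitative payoffs: "slight cost", "significant gain", "slight gain"). The point is that when both agents run the same decision algorithm and can predict each other, the relevant comparison is mutual cooperation versus mutual exploitation, not the one-sided cells:

```python
# Toy model of the light-cone cooperation game above. Payoff numbers
# are assumed for illustration only.

COST_OF_KINDNESS = 1      # slight net cost of being kind
BENEFIT_RECEIVED = 10     # significant gain when the other is kind to your loved one
GAIN_FROM_EXPLOIT = 1     # slight gain from exploiting the other's loved one
LOSS_WHEN_EXPLOITED = 10  # significant loss when your own loved one is exploited

def my_utility(my_move: str, their_move: str) -> int:
    """Utility to me, given both moves ('cooperate' or 'exploit')."""
    u = -COST_OF_KINDNESS if my_move == "cooperate" else GAIN_FROM_EXPLOIT
    u += BENEFIT_RECEIVED if their_move == "cooperate" else -LOSS_WHEN_EXPLOITED
    return u

# Because both agents run the same (mutually predictable) decision
# algorithm, their moves are correlated, so compare the matched outcomes:
assert my_utility("cooperate", "cooperate") > my_utility("exploit", "exploit")
```

With these numbers, mutual cooperation yields +9 each while mutual exploitation yields -9 each, which is why a predictably-cooperating agent does better here even though the other party is outside its future light cone.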

My instinct is to not kill the loved one, but on virtue-ethics grounds, not because of any sort of counterfactual reciprocity argument. My understanding is that UDT is not actually computable. As a result, no possible agent can act as you describe. So this doesn't seem like a particularly compelling thought experiment.

If I'm deciding what to do with a hostage, it makes no difference what the other party decides. What matters is my judgement of them right before we became causally separated -- and I am skeptical that my decision-making after the separation is useful evidence on this point.

More broadly, I can think of lots of reasons to take counterfactual possibilities into account. But none of them require me to say that the counterfactual "really exists". For instance, I'm worried about people judging me for being reckless, dishonorable, etc. What's the case where I actually care about non-causal interactions?

My understanding is that UDT is not actually computable. As a result, no possible agent can act as you describe. So this doesn't seem like a particularly compelling thought experiment.

Are you confusing UDT with AIXI? It is certainly possible for an agent to act as described, and the tricky part isn't anything to do with "UDT" (but rather the possible-but-difficult task of making the predictions).

What's the case where I actually care about non-causal interactions?

The case given is sufficient. Anyone who is capable of one-boxing on Newcomb's problem will, if consistent, also cooperate with agents that cross out of the future light cone, on utility-maximisation grounds, given the payoffs described. If they either two-box or defect then they are implementing a faulty decision algorithm.

For an example that doesn't include any potential exploitation of loved ones see Belief in the Implied Invisible.

My understanding is that UDT requires agent A to have some prediction for what agent B will do. This is, in general, not computable. (The proof follows from Rice's theorem.)

Your hypothetical has nothing to do with quantum mechanics or many worlds, and everything to do with special relativity. "Unobservable components of the wavefunction", in the many-worlds sense, are areas where a different decision was made or a different outcome observed.

In fact, extending it to many worlds actually hurts the point you want to make. The "(counterfactual) agent" makes both decisions (exploit, be kind), and you make both decisions. Further, you can't win in every world. Consider Newcomb's problem: even if Omega (the predicting agent) is correct 99.99% of the time, there are worlds where two-boxing is a losing proposition (Omega got it wrong). In fact, the rule 'two-box on Newcomb problems' always creates two worlds: one where you are a winner and one where you are a loser.

So in many worlds, you can't assert such a policy would be a mistake: in some worlds it is, and in some it isn't.
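The accuracy figure above can be turned into explicit expected values. A minimal sketch, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box iff Omega predicted one-boxing; $1,000 always in the transparent box), which the thread does not spell out:

```python
# Expected payoffs in Newcomb's problem with an imperfect predictor.
# Standard payoffs assumed: the opaque box holds $1,000,000 iff Omega
# predicted one-boxing; the transparent box always holds $1,000.

p = 0.9999  # Omega's accuracy, as in the comment above

# One-box: with probability p Omega predicted it, and the opaque box is full.
ev_one_box = p * 1_000_000 + (1 - p) * 0

# Two-box: with probability p Omega predicted it (opaque box empty);
# with probability 1 - p Omega wrongly predicted one-boxing (both boxes full).
ev_two_box = p * 1_000 + (1 - p) * (1_000_000 + 1_000)

assert ev_one_box > ev_two_box  # roughly $999,900 vs $1,100
```

Both rules do lose in some worlds, as the comment notes; the expectation is simply taken over the measure (or probability) of those worlds.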

Your hypothetical has nothing to do with quantum mechanics or many worlds, and everything to do with special relativity.

The hypothetical has nothing to do with quantum mechanics. It was obviously, and explicitly constructed to address the specific claim being replied to, using no set up more complex than physical movement. That claim being:

You should be able to justify any particular course of action without a metaphysical commitment to the reality of unobservable components of the universe's wave function.

It so happens that asr's reply indicates that our disagreement regarding how to make decisions when dealing with the implied invisible is not limited to quantum mechanical considerations but also applies in this simple case. (Based on that reply) we disagree both on how to make decisions in general and how to account for the implied invisible when making decisions, even when only very mildly unintuitive physics is in play. That being the case, knowing that we additionally disagree about how to handle the implied invisible when considering quantum mechanics is completely unremarkable.

If you'll notice, he explicitly uses the phrase "unobservable components of the universe's wavefunction", and the context is clearly many worlds quantum mechanics. This means your thought experiment is not at all analogous to his statement.

Your implied invisible (an observer outside the light cone) is qualitatively very different from his implied invisible (unobservable components of the wavefunction). Your thought experiment shifts the focus by subtly redefining the original statement.

I'm actually with wedrifid here. I think the key point where wedrifid and I disagree is that I don't believe agents benefit from considering any kind of acausal trade or interaction. And it turns out that if you restrict yourself to physically interacting agents, you don't have to worry about unobservables. In contrast, if you worry about acausal interactions, it can make sense to worry about them.

Your hypothetical has nothing to do with quantum mechanics or many worlds, and everything to do with special relativity.

Special relativity hangs out in a nice, flat, well-behaved Minkowski space where this sort of thing cannot happen. It takes general relativity, and specifically a universe with accelerating expansion (which ours probably has).

It can also happen in flat space, e.g. if you and the other agent are on Rindler trajectories accelerating in opposite directions, then nothing that one of you does can affect the other.

As I understand, MWI is empirically equivalent to other interpretations of quantum mechanics.

It's also empirically equivalent to the theory that "The lady down the street is a witch; she did it."

a metaphysical commitment

Democracy! Freedom! My enemy kicks puppies! (Physics is just called physics, even when it is unintuitive.)

You should be able to justify any particular course of action without a metaphysical commitment to the reality of unobservable components of the universe's wave function.

No, I shouldn't. I should use the best possible model of the universe's wave function that can be constructed from the evidence available, and ignore any demands that I justify my decisions according to systems that are artificially crippled.

It so happens that all my decisions can be justified using a single world map but this is entirely an artifact of my preferences and nothing to do with epistemic or decision theoretical considerations. (It would be weird and possibly 'insane' but not irrational to have decisions that were not reducible in this manner.)

"Empirically equivalent" needs to be unpacked when it comes to MWI. Quantum immortality is a kind of empirical observation; it's just not a shared, intersubjective observation of the kind science turns into theories (and the observations it predicts might not let you distinguish it from other kinds of Big Universe immortality).

If you can construct a certain line of reasoning about the observed world using MWI, you should be able to do the same if you assume that only this single world "exists".

If the two theories make the same predictions, what is the point? Why not just stick with the old fuddy-duddy one reality? Is MWI just a fashion statement? Just a theory that falls under Quine's "indeterminacy of translation"?

Note that a hidden variable theory does make a prediction - that there are hidden variables we may one day discover and use to make better predictions than we can now, and do more powerful things - while theories that make randomness ontological say that we will absolutely never do that: the unpredictability and uncontrollability are inherent in reality.

I'm a fuddy-duddy, sticking with other fuddy-duddies like Einstein (and I believe Jaynes), until the detector-efficiency and fair-sampling loopholes are closed.

Would someone in the know tell me what the MWI supposedly buys? What's the payoff? Even conceptually, ignoring the identical predictions, what's the payoff? What problem does it supposedly solve? And then, what problems does it introduce that fuddy duddies don't already have?

I don't see that branching has gained me anything over wave function collapse - and now I have to deal with being killed by an asteroid strike last week in some parallel universe. Who needs it?

The positives I see are: wavefunctions being real, rather than statistical statements about point objects; and measurements being actual quantum interactions of joint wavefunctions.

My prescription for physics is to accept that (some) wave functions are real, they can interact in ways that we don't currently understand (wave function collapse, branching, etc.) and let's get on with doing some real physics and figuring out how they interact, how we can measure it, and how we can control it.

Physics seems awash in mathematical wanking, which I explain through evolutionary theory. Measurements are expensive, and give a zillion uninteresting and unpublishable failures before an interesting, publishable success. Mathematical wanking is cheap, and allows "successful" papers to be published and PhDs to be had. Run a couple of iterations of that, and the measurers go extinct while the wankers inherit the physics departments.

I'm a fuddy-duddy sticking with other fuddy-duddies like Einstein (and I believe Jaynes)

I stick with Max Planck instead:

"Science advances one funeral at a time."

Every funeral is a change, but not every change is an advance.

Now get offa my lawn.

If the two theories make the same predictions, what is the point? Why not just stick with the old fuddy-duddy one reality?

For example, Lagrangian and Hamiltonian mechanics make the same predictions and are completely equivalent in most cases, but have different uses. There is nothing wrong with picking the formalism more convenient for a specific problem. Granted, MWI does not have a specific formalism, but I allow that it can still provide an inspiration or an intuition in certain problems, which then has to be checked by doing the calculations.
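A concrete instance of that equivalence, for the textbook one-dimensional harmonic oscillator (standard physics, not from the thread): both formalisms yield the same equation of motion.

```latex
% Lagrangian formulation:
L = \tfrac{1}{2} m \dot{x}^2 - \tfrac{1}{2} k x^2
% The Euler-Lagrange equation  \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} = \frac{\partial L}{\partial x}  gives:
m \ddot{x} = -k x

% Hamiltonian formulation, with conjugate momentum  p = \partial L/\partial \dot{x} = m \dot{x}:
H = \frac{p^2}{2m} + \tfrac{1}{2} k x^2
% Hamilton's equations  \dot{x} = \partial H/\partial p, \; \dot{p} = -\partial H/\partial x  give:
\dot{x} = \frac{p}{m}, \qquad \dot{p} = -k x
% which together again yield  m \ddot{x} = -k x.
```

Same dynamics, different bookkeeping; which formalism is convenient depends on the problem, which is the analogy being drawn to interpretations of QM.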

As for the reasons why EY considers MWI advocacy important to applied rationality, they are explained in the earlier reruns. Can't say that I agree, but many regulars do, so more power to them.

How gracious!

Can't tell if this remark is sarcastic or serious...

As for the reasons why EY considers MWI advocacy important to applied rationality, they are explained in the earlier reruns. Can't say that I agree, but many regulars do, so more power to them.

I don't recall any regulars expressing agreement that MWI advocacy is important to applied rationality to the degree suggested by Eliezer. (It could have happened but it would look odd to me.)


Anyone bothering to try and determine whether this is true or not needs to know your definition of "regular."

Anyone bothering to try and determine whether this is true or not needs to know your definition of "regular."

I would, for the purpose of that particular comment, cede the definition to shminux and outright declare that the claim he is making about "regulars", whoever they may be, is wrong. He has confused endorsement of the QM sequence and rejection of Single World theories in general with the separate issue of agreement that it was as necessary to applied rationality as Eliezer said.

Come to think of it, I don't offhand recall anyone or anything ever having expressed such agreement. "Many" and "regulars" only become relevant inasmuch as I am more likely to have seen and paid attention to such claims if they existed, and can thereby be more confident that shminux is simply making a false claim.

EY:

I have a suspicion that when all is said and done and known, quantum immortality is not going to work out.

I wish he would elaborate on the reason(s) for this suspicion; quantum immortality seems to me like a straightforward consequence of MWI plus the anthropic principle.