[ Question ]

When would an agent do something different as a result of believing the many worlds theory?

by MakoYass · 1 min read · 15th Dec 2019 · 34 comments


Many-Worlds Interpretation

One of the things impeding the many worlds vs wavefunction-collapse dialogue is that nobody seems to be able to point to a situation in which the difference clearly matters, where we would make a different decision depending on which theory we believe. If there aren't any, pragmatism would instruct us to write the question off as meaningless.

Has anyone tried to pose a compelling thought experiment in which the difference matters?


8 Answers

As any collapse (if it does happen) occurs so 'late' that current experiments are unable to differentiate between many worlds and collapse -- it seems quite possible that both theories will continue to give identical predictions for all realisable situations, with the only difference being 'one branch becomes realised' and 'all branches become realised'.


More human-related considerations:

  • One relevant aspect is how natural utility maximisation feels using one of the two theories as world model. Thinking in many worlds terms makes expected utility maximisation a lot more vivid compared to the different future outcomes being 'mere probabilities' -- on the other hand, this vividness makes rationalisation of pre-existing intuitions easier.
  • Another point is that most people strongly value existence/non-existence in addition to the quality and 'probability' of existence (e.g. people might play Quantum Russian Roulette¹ but not normal Russian Roulette, as many worlds makes sure that they will survive [in some branches]). This makes many worlds feel more comforting when facing high probabilities of grim futures.
  • A third aspect is the consequences for the concept of identity. Adopting many worlds as world model also means that naive models of self and identity are up for a major revision. As argued above, valuing all future branch selves equally (=weighted by the 'probabilities') should make many worlds and collapse equivalent (up to the 'certain survival [in some branches]' aspect). A different choice in accounting for many worlds might not be translatable into the collapse world model.


I am still very much confused by decision theories that involve coordination without a causal link between agents such as Multiverse-wide Cooperation. For such theories, other considerations might also be important.


¹: To be more exact, I would argue that the case for Quantum Russian Roulette becomes identical to the case for normal Russian Roulette if many-worlds branches are weighted by their 'probabilities' and one also takes into account the 'certain survival [in some branches]' bonus that many worlds gives.
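The footnote's equivalence claim can be sketched numerically. A minimal illustration, with entirely made-up utility values: once branches are weighted by their measure, the expected utility under many worlds equals the probability-weighted expected utility under collapse, and the two cases only diverge if a separate 'certain survival in some branch' bonus is added.

```python
# Illustrative sketch; all utility values are assumptions, not from the source.
U_ALIVE = 100.0    # utility of surviving (hypothetical)
U_DEAD = 0.0       # utility of dying (hypothetical)
P_SURVIVE = 5 / 6  # one bullet in a six-chamber revolver

# Collapse view: one outcome is realised, weighted by its probability.
eu_collapse = P_SURVIVE * U_ALIVE + (1 - P_SURVIVE) * U_DEAD

# Many-worlds view: all branches are realised, each weighted by its measure.
branches = [(P_SURVIVE, U_ALIVE), (1 - P_SURVIVE, U_DEAD)]
eu_many_worlds = sum(measure * u for measure, u in branches)

# Measure-weighting makes the two cases identical:
assert eu_collapse == eu_many_worlds

# They diverge only with an extra term for "certain survival in some branch":
SURVIVAL_BONUS = 10.0  # hypothetical extra value placed on existing at all
eu_qrr = eu_many_worlds + SURVIVAL_BONUS
```

The point of the sketch is just that the interpretations differ here only through the bonus term, not through the weighted sum itself.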

Things like determinism and many worlds may not affect fine-grained decision making, but they can profoundly impact what decision making, choice, volition, agency and moral responsibility are. It is widely accepted that determinism affects freedom of choice, excluding some notions of free will. It is less often noticed that many worlds affects moral responsibility, because it removes refraining: if there is the slightest possibility that you would kill someone, then there is a world where you killed someone. You can't refrain from doing anything that is possible for you to do.

Does that mean that utilitarianism is incompatible with many worlds? If everything that is possible for you to do is something that you actually do, then that would mean that utility, across the whole multiverse, is constant, even assuming some notion of free will.

2Viliam1yEverything is possible, but not everything has the same measure (is equally likely). Killing someone in 10% of "worlds" is worse than killing them in 1% of "worlds". In the end, believing in many worlds will give you the same results as believing in collapse. It's just that, epistemologically, the believer in collapse needs to deal with the problem of luck. Does "having a 10% probability of killing someone, and actually killing them" make you a worse person than "having a 10% probability of killing someone, but not killing them"? (From the many-worlds perspective, it's the same. You simply shouldn't do things that have a 10% risk of killing someone, unless it is to avoid even worse things.) (And yes, there is the technical problem of how exactly you determine that the probability was exactly 10%, considering that you don't see the parallel "worlds".)
1TAG1yApart from the other problem: MWI is deterministic, so you can't alter the percentages by any kind of free will, despite what people keep asserting. Actually killing them is certainly worse. We place moral weight on actions as well as character.
1Lanrian1yNeither most collapse-theories nor MWI allow for super-physical free will, so that doesn't seem relevant to this question. Since the question concerns what one should do, it seems reasonable to assume that some notion of choice is possible. (FWIW, I'd guess compatibilism is the most popular take on free will on LW.)
1TAG1yYes, but compatibilism doesn't suggest that you choose between different actions or between different decision theories.
1Lanrian1yWait, what? If compatibilism doesn't suggest that I'm choosing between actions, what am I choosing between?
1TAG1yTheories, imaginary ideas.
1MakoYass1yNo, if 99% of timelines have utility 1, while in 1% of timelines something very improbable happens and you instead cause utility to go to 0, the global utility is still pretty much 1. Some part of the human utility function seems to care about absolute existence or nonexistence, and that component is going to be sort of steamrolled by multiverse theory, but we will mostly just keep on going in pursuit of greater relative measure.
0TAG1yThat amounts to saying that if the conjunction of MWI and utilitarianism is correct, we would or should behave as though it isn't. That is a major departure from typical rationalism (eg the Litany of Tarski).
1MakoYass1yThe question isn't really whether it's correct, the question is closer to "is it equivalent to the thing we already believed".

Many of the other comments deal with thought experiments rather than looking at the reality of how "many worlds" is USED. From my point of view as a non-physicist, it seems to be used primarily as pseudo-scientific "woo" - a revival of mystery and awe under the cloak of scientific authority. A kind of paradoxical mysticism for non-religious people, or fans of "science-ism".

An agent might act differently from MISUNDERSTANDING many worlds theory. Or by paying more attention to it. Psychological "priming" is real and powerful.

The answer by TAG below is a case in point. For someone committed to a belief in determinism or fatalism, having a many-worlds theory in mind may buttress that belief.

There is the Quantum Russian Roulette thought experiment. It was posted on LessWrong.

Yeah. I reject it. If you're any good at remapping your utility function after perspective shifts ("rescuing the utility function"), then, after digesting many worlds, you will resolve that being dead in all probable timelines is pretty much what death really is, then, and you have known for a long time that you do not want death, so you don't have much use for quantum suicide gambits.

I think it's more natural to ask "how might an agent behave differently as a result of believing an objective collapse theory?" One answer that comes to mind is that they will be less likely to invest in quantum computers, which will need to rely on entanglement between a large number of quantum systems that under objective collapse theories might not be maintained (depending on the exact collapse theory). Similarly, other different physical theories of quantum mechanics will result in different predictions about what will happen in various somewhat-arcane situations.

More flippantly, an agent might answer the question 'What do you think the right theory of quantum mechanics is?' differently.

[Edited to put the serious answer where people will see it in the preview]

If they are put into an interferometer, someone who thinks the wavefunction has collapsed would think, while in the middle, that they have a 50/50 chance of coming out each arm, while an Everettian will make choices as if they might deterministically come out of one arm (depending on the construction of the interferometer).

The difficulty of putting humans into interferometers is more or less why this doesn't matter much. Though of course "pragmatism" shouldn't stop us from applying Occam's razor.

Assume you put enormous weight on avoiding being tortured, and you recognize that signing up for cryonics carries some (very tiny) chance that you will be revived in an evil world that will torture you; absent many worlds, this causes you to not sign up for cryonics. There is an argument that under many worlds there will be versions of you that are going to be tortured regardless, so your goal should be to reduce the percentage of these versions that get tortured. Signing up for cryonics in this world means you are vastly more likely to be revived and not tortured than revived and tortured, so signing up will likely lower the percentage of versions of you across the multiverse who are tortured. Signing up for cryonics in this world reduces the importance of versions of you trapped in worlds where the Nazis won and are torturing you.
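The percentage argument can be made concrete with illustrative, entirely made-up numbers: the measure of torture branches is fixed either way, so adding a large measure of benign revivals dilutes the tortured fraction.

```python
# Illustrative sketch; all measures are made-up numbers, not from the source.
# Measure of branches where a version of you is revived and tortured
# (e.g. worlds where the Nazis won) -- fixed whether or not you sign up here.
TORTURE_MEASURE = 1.0

def tortured_fraction(signed_up_here: bool) -> float:
    """Fraction of revived versions of you that are tortured,
    across the multiverse, under these assumed measures."""
    # Signing up in this world adds a comparatively large measure
    # of benign revivals; not signing up adds none.
    benign_measure = 1000.0 if signed_up_here else 0.0
    return TORTURE_MEASURE / (TORTURE_MEASURE + benign_measure)

# Signing up dilutes the tortured fraction rather than adding to it:
assert tortured_fraction(True) < tortured_fraction(False)
```

The design choice here is that the denominator counts only revived versions; under that accounting, cryonics lowers the tortured percentage even though it cannot touch the torture branches themselves.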

If you use some form of noncausal decision theory, it can make a difference.

Suppose Omega flips a quantum coin. If it's tails, they ask you for £1; if it's heads, they give you £100 if and only if they predict that you would have given them £1 had the coin landed tails.

There are some decision algorithms that would pay the £1 if and only if they believed in quantum many worlds. A CDT agent would never pay, however, and a UDT agent would always pay.
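The gap between the two policies can be sketched as an ex-ante expected-value calculation, using the £1 and £100 figures from the game above and assuming Omega predicts your policy perfectly:

```python
# Sketch of the Omega quantum-coin game above (values in £).
P_HEADS = 0.5
PRIZE = 100.0  # paid on heads, iff Omega predicts you'd pay on tails
COST = 1.0     # handed over on tails, if your policy is to pay

def expected_value(policy_pays: bool) -> float:
    """Ex-ante expected value of a policy, evaluated before the coin
    is flipped, assuming Omega's prediction matches the policy."""
    heads = P_HEADS * (PRIZE if policy_pays else 0.0)
    tails = (1 - P_HEADS) * (-COST if policy_pays else 0.0)
    return heads + tails

# A UDT-style agent evaluates the policy ex ante and pays:
assert expected_value(policy_pays=True) == 49.5
# A CDT-style agent, already on the tails branch, sees only the -£1
# from paying and refuses, forgoing the ex-ante gain:
assert expected_value(policy_pays=False) == 0.0
```

The sketch deliberately says nothing about which interpretation is true; it only shows why an agent that weights the heads branch as real (rather than as a counterfactual) finds paying attractive.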

It is of course possible to construct agents that want to do X if and only if quantum many worlds is true. It is also possible to construct agents that do the same thing whether it's true or false (e.g. AlphaGo).

The answer to this question depends on which wave function collapse theory you use. There are a bunch of quantum superposition experiments where we can detect that no collapse is happening. If photons collapsed their superposition in the double slit experiment, we wouldn't get an interference pattern. Collapse theories postulate a list of circumstances that we haven't measured yet when collapse happens. If you believe that quantum collapse only happens when 10^40 kg of mass are in a single coherent superposition, this belief has almost no effect on your predictions.

If you believe that you can't get 100 atoms into superposition, then you are wrong; current experiments have tested that. If you believe that collapse happens at the 1 gram level, then future experiments could test this. In short, there are collapse theories in which collapse is so rare that you will never spot it, there are theories where collapse is so common that we would have already spotted it (so we know those theories are wrong), and there are theories in between. The in-between theories will make different predictions about future experiments. They will not expect large quantum computers to work.

Another difference is that current QFT doesn't contain gravity. In the search for a true theory of everything, many worlds and collapse might suggest different successors. This seems important to human understanding. It wouldn't make a difference to an agent that could consider all possible theories.

There are some decision algorithms that would pay the £1 if and only if they believed in quantum many worlds

Go on then, which decision algorithms? Note, though: They do have to be plausible models of agency. I don't think it's going to be all that informative if a pointedly irrational model acts contingent on foundational theory when CDT and FDT don't.

1Gurkenglas1yAn agent might care about (and acausally cooperate with) all versions of himself that "exist". MWI posits more versions of himself. Imagine that he wants there to exist an artist like he could be, and a scientist like he could be - but the first 50% of universes that contain each are more important than the second 50%. Then in MWI, he could throw a quantum coin to decide what to dedicate himself to, while in CI this would sacrifice one of his dreams.
1Donald Hobson1yThe agent first updates on the evidence that it has, and then takes logical counterfactuals over each possible action. This behaviour means that it only cooperates in Newcomblike situations with agents it believes actually exist. It will one-box in Newcomb's problem, and cooperate with an identical duplicate of itself. However, it won't pay in logical counterfactual blackmail, or any sort of counterfactual blackmail accomplished with true randomness.
-4Charlie Steiner1y(I think this is a good chance for you to think of an answer yourself.)
11 comments

I'm not sure, it sounds very familiar, but I think it would have sounded very familiar to me before reading it or knowing of its existence. It sounds like the sorts of things I would already know.

People who think this way tend to converge on the same ideas. It's hard to tell whether thinking superrationally causes the convergence, or whether thinking in convergent ways causes a person to have more interest in superrationality, ~~or whether causality is involved at all~~

It's hard to tell whether thinking superrationally causes the convergence, or whether thinking in convergent ways causes a person to have more interest in superrationality, or whether causality is involved at all

I recommend reading the paper on Functional Decision Theory, to get an intuition on what an answer to this might look like. I think the question you're interested in is whether we should think of our action as actually having an effect on observers in another universe (or world, in MWI). This might seem absurd if you have the intuition that you can only affect things that are causally dependent on your actions. But if you drop the assumption of causal dependence, you can say that their decision is subjunctively dependent on yours.

Sorry. That last bit about whether causality is involved at all was a little joke. It was bad. That wasn't really what I was pondering.

A short summary of the paper: "Don't be a dick."

When would an agent do something different as a result of believing the many worlds theory?

That depends on their utility function.

Sure. The question, there, is whether we should expect there to be any powerful agents with utility functions that care about that.

Would you buy a ticket for a quantum lottery, for immortality?

No. Measure decrease is bad enough to more than outweigh the utility of the winning timelines. I can imagine some very specific variants that are essentially a technology for assigning specialist workloads to different timelines, but I don't have enough physics to detail it, myself.

I'm noticing a deeper impediment. Before we can imagine how a morality that is relatable to humans might care about the difference between MW and WC, we need to know how to extend the human morality we bear into the bizarre new territory of quantum physics. We don't even have a theory of how human morality extends into modernity, we definitely don't have an idealisation of how human morality should take to the future, and I'm asking for an idealisation of how it would take to something as unprecedented as... timelines popping in and out of existence, universes separated by uncrossable gulfs (how many times have you or your ancestors ever straddled an uncrossable gulf?).

It's going to be very hard to describe a believable agent that has come to care about this new, hidden, bizarre distinction when we don't know how we come to care about anything.