The following post is an adaptation of a paper I wrote in 2017 that I thought might be of interest to people here on LessWrong. The paper is essentially my attempt at presenting the clearest and most cogent defense I could of the Everett interpretation of quantum mechanics—the interpretation that I very strongly believe to be true—at least using only undergraduate wave mechanics, which was the level at which I wrote the paper. My motivation for posting this now is that I was recently talking with a colleague who mentioned that they had stumbled upon my paper and really enjoyed it, and realizing that I hadn't ever shared it here on LessWrong, I figured I would put it out there in case anyone else found it similarly useful or interesting.

It's also worth noting that LessWrong has a storied history with the Everett interpretation, with Yudkowsky also defending it quite vigorously. I actually cite Eliezer at one point in the paper—and I basically agree with what he said in his sequence—though I hope that if you bounced off that sequence you'll find my paper more persuasive. Also, I include Everett's derivation of the Born rule, which is something that I think is quite important and that I expect even a lot of people very familiar with the Everett interpretation won't have seen before.

Abstract

We seek to present and defend the view that the interpretation of quantum mechanics is no more complicated than the interpretation of plate tectonics: that which is being studied is real, and that which the theory predicts is true. The view which holds that the mathematical formalism of quantum mechanics—without any additional postulates—is a complete description of reality is known as the Everett interpretation. We defend the Everett interpretation of quantum mechanics as the most probable interpretation available. To accomplish this task, we analyze the history of the Everett interpretation, provide mathematical backing for its assertions, respond to criticisms that have been leveled against it, and compare it to its modern alternatives.

Introduction

One of the most puzzling aspects of quantum mechanics is the fact that, when one measures a system in a superposition of multiple states, it is only ever found in one of them. This puzzle was dubbed the “measurement problem,” and the first attempt at a solution was by Werner Heisenberg, who in 1927 proposed his theory of “wave function collapse.”[1] Heisenberg proposed that there was a cutoff length, below which systems were governed by quantum mechanics, and above which they were governed by classical mechanics. Whenever quantum systems encounter the cutoff point, the theory stated, they collapse down into a single state with probabilities following the squared amplitude, or Born, rule. Thus, the theory predicted that physics just behaved differently at different length scales. This traditional interpretation of quantum mechanics is usually referred to as the Copenhagen interpretation.

From the very beginning, the Copenhagen interpretation was seriously suspect. Albert Einstein was famously displeased with its lack of determinism, saying “God does not play dice,” to which Niels Bohr quipped in response, “Einstein, stop telling God what to do.”[2] As clever as Bohr’s answer is, Einstein—with his famous physical intuition—was right to be concerned. Though Einstein favored a hidden variable interpretation[3], the local variants of which were later ruled out by Bell’s theorem[4], the Copenhagen interpretation nevertheless leaves open many questions. If physics behaves differently at different length scales, what is the cutoff point? What qualifies as a wave-function-collapsing measurement? How can physics behave differently at different length scales, when macroscopic objects are made up of microscopic objects? Why is the observer not governed by the same laws of physics as the system being observed? Where do the squared amplitude Born probabilities come from? If the physical world is fundamentally random, how is the world we see selected from all the possibilities? How could one explain the applicability of quantum mechanics to macroscopic systems, such as Chandrasekhar’s insight in 1930 that modeling white dwarf stars required the entire star to be treated as a quantum system?[5]

The Everett Interpretation of Quantum Mechanics

Enter the Everett Interpretation. In 1956, Hugh Everett III, then a doctoral candidate at Princeton, had an idea: if you could find a way to explain the phenomenon of measurement from within wave mechanics, you could do away with the extra postulate of wave function collapse, and thus many of the problems of the Copenhagen interpretation. Everett worked on this idea under his thesis advisor, Einstein-prize-winning theoretical physicist John Wheeler, who would later publish a paper in support of Everett’s theory.[6] In 1957, Everett finished his thesis “The Theory of the Universal Wave Function,”[7] published as the “‘Relative State’ Formulation of Quantum Mechanics.”[8] In his thesis, Everett succeeded in deriving every one of the strange quirks of the Copenhagen interpretation—wave function collapse, the apparent randomness of measurement, and even the Born rule—from purely wave mechanical grounds, as we will do in the "Mathematics of the Everett Interpretation" section.

Everett’s derivation relied on what was at the time a controversial application of quantum mechanics: the existence of wave functions containing observers themselves. Everett believed that there was no reason to restrict the domain of quantum mechanics to only small, unobserved systems. Instead, Everett proposed that any system, even the system of the entire universe, could be encompassed in a single, albeit often intractable, “universal wave function.”

Modern formulations of the Everett interpretation reduce his reasoning down to two fundamental ideas:[9][10][11][12][13]

  • the wave function obeys the standard, linear, deterministic Schrödinger wave equation at all times,[1] and
  • the wave function is physically real.

Specifically, the first statement precludes wave function collapse and demands that we continue to use the same wave mechanics for all systems, even those with observers, and the second statement demands that we accept the physical implications of doing so. The Everett interpretation is precisely that which is implied by these two statements.
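
To make the first of these principles concrete, here is a minimal numerical sketch (my own illustration, not anything from the original paper; the two-level Hamiltonian and time step are arbitrary choices) of what linear, deterministic, unitary evolution means, using SciPy's matrix exponential:

```python
# Minimal sketch of Schrodinger evolution for a discrete two-level system:
# psi(t) = exp(-iHt) psi(0) for a time-independent Hamiltonian H (hbar = 1).
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5],
              [0.5, -1.0]])                  # any Hermitian matrix will do
psi0 = np.array([1.0, 0.0], dtype=complex)

U = expm(-1j * H * 0.7)                      # unitary propagator for t = 0.7
psi_t = U @ psi0

# Linearity: evolving a superposition equals the superposition of evolutions.
e0 = np.array([1.0, 0.0], dtype=complex)
e1 = np.array([0.0, 1.0], dtype=complex)
a, b = 0.6, 0.8j
assert np.allclose(U @ (a * e0 + b * e1), a * (U @ e0) + b * (U @ e1))

# Determinism and unitarity: the norm of the state is exactly preserved.
assert np.isclose(np.linalg.norm(psi_t), 1.0)
```

The Everett interpretation amounts to applying exactly this kind of evolution to every system, observers included.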

Importantly, neither of these two principles is an additional assumption on top of traditional quantum theory—instead, they are simplifications of existing quantum theory, since they act only to remove the prior ad-hoc postulates of wave function collapse and the non-universal applicability of the wave equation.[11][14] The beauty of the Everett interpretation is the fact that we can remove the postulates of the Copenhagen interpretation and still end up with a theory that works.

DeWitt’s Multiple Worlds

Removing the Copenhagen postulates had some implications that did not mesh well with many physicists’ existing physical intuitions. If one accepted Everett’s universal wave function, one was forced to accept the idea that macroscopic objects—cats, people, planets, stars, galaxies, even the entire universe—could be in a superposition of many states, just as microscopic objects could. In other words, multiple different versions of the universe—multiple worlds, so to speak—could exist simultaneously. It was for this reason that Einstein-prize-winning physicist Bryce DeWitt, a supporter of the Everett interpretation, dubbed Everett’s theory of the universal wave function the “multiworld” (or now more commonly “multiple worlds”) interpretation of quantum mechanics.[9]

While the idea of multiple worlds may at first seem strange, to Everett, it was simply an extension of the normal laws of quantum mechanics. Simultaneous superposition of states is something physicists already accept for microscopic systems whenever they do quantum mechanics—by virtue of the overwhelming empirical evidence in favor of it. Not only that, but evidence keeps coming out demonstrating superpositions at larger and larger length scales. In 1999 it was demonstrated, for example, that Carbon-60 molecules can be put into a superposition.[15] While it is unlikely that a superposition of such a macroscopic object as Schrödinger’s cat will ever be conclusively demonstrated, due to the difficulty of isolating such a system from the outside world, it is likely that the trend of demonstrating superposition at larger and larger length scales will continue. To refuse to accept that a cat could be in a superposition, even if we can never demonstrate it, is a failure of induction—a rejection of an empirically demonstrated trend.

While the Everett interpretation ended up implying the existence of multiple worlds, this was never Everett’s starting point. The “multiple worlds” of the Everett interpretation were not added to traditional quantum mechanics as new postulates, but rather fell out from the act of taking away the existing ad-hoc postulates of the Copenhagen interpretation—a consequence of taking the wave function seriously as a fundamental physical entity. In Everett’s own words, “The aim is not to deny or contradict the conventional formulation of quantum theory, which has demonstrated its usefulness in an overwhelming variety of problems, but rather to supply a new, more general and complete formulation, from which the conventional interpretation can be deduced.”[8] Thus, it is not surprising that Stephen Hawking and Nobel laureate Murray Gell-Mann, supporters of the Everett interpretation, have expressed reservations about the name “multiple worlds interpretation,” and therefore we will continue to refer to the theory simply as the Everett interpretation instead.[16]

The Nature of Observation

Accepting the Everett interpretation raises an important question: if the macroscopic world can be in a superposition of multiple states, what differentiates them? Stephen Hawking has the answer: “in order to determine where one is in space-time one has to measure the metric and this act of measurement places one in one of the various different branches of the wave function in the Wheeler-Everett interpretation of quantum mechanics.”[17] When we perform an observation on a system whose state is in a superposition of eigenfunctions, a version of us sees each different, possible eigenfunction. The different worlds are defined by the different eigenfunctions that are observed.

We can show this, as Everett did, just by acknowledging the existence of universal, joint system-observer wave functions.[7][8] Before measuring the state of a system in a superposition, the observer and the system are independent—we can get their joint wave function simply by multiplying together their individual wave functions. After measurement, however, the two become entangled—that is, the state of the observer becomes dependent on the state of the system that was observed. The result is that for each eigenfunction in the system’s superposition, the observer’s wave function evolves differently. Thus, we can no longer express their joint wave function as the product of their individual wave functions. Instead, we are forced to express the joint wave function as a sum of different components, one for each possible eigenfunction of the system that could be observed. These different components are the different “worlds” of the Everett interpretation, with the only difference between them being which eigenfunction of the system was observed. We will formalize this reasoning in the "The Apparent Collapse of The Wave Function" section.

We are still left with the question, however, of why we experience a particular probability of seeing some states over others, if every state that can be observed is observed. Informally, we can think of the different worlds—the different possible observations—as being “weighted” by their squared amplitudes, and which one of the versions of us we are as a random choice from that weighted distribution. Formally, we can prove that under the Everett interpretation, if an observer interacts with many systems each in a superposition of multiple states, the distribution of states they see will follow the Born rule.[7][8][18][11][19][14] A portion of Everett’s proof of this fact is included in the "The Born Probability Rule" section.
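
Before the formal treatment, the “weighted distribution” picture can be illustrated with a toy computation (mine, not part of the proofs cited above):

```python
# Subjectively, which branch you find yourself in looks like a draw from the
# squared-amplitude (Born) distribution, even though objectively every branch exists.
import numpy as np

amps = np.array([0.6, 0.8j])                  # amplitudes of two branches
weights = np.abs(amps) ** 2                   # Born weights: [0.36, 0.64]

rng = np.random.default_rng(0)
sampled_branches = rng.choice(len(amps), size=100_000, p=weights)
print(np.bincount(sampled_branches) / len(sampled_branches))  # ~[0.36, 0.64]
```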

The Mathematics of the Everett Interpretation

Previously, we asserted that universally-applied wave mechanics was sufficient, without ad-hoc postulates such as wave function collapse, to imply all the oddities of the Copenhagen interpretation. We will now prove that assertion. In this section, as per the Everett interpretation, we will accept that basic wave mechanics is obeyed for all physical systems, including those containing observers. From that assumption, we will show that the apparent phenomena of wave function collapse, random measurement, and the Born rule follow. The proofs given below are adapted from Everett’s original paper.[7][8]

The Apparent Collapse of The Wave Function

Suppose we have a system $S$ with eigenfunctions $\{\phi_i\}$ and initial state $\psi^S = \sum_i a_i \phi_i$. Consider an observer $O$ with initial state $\psi^O$. Let $\psi^O_{i,j,\ldots,k}$ be the state of $O$ after observing eigenfunctions $\phi_i, \phi_j, \ldots, \phi_k$ of $S$. Since we would like to demonstrate how repeated measurements see a collapsed wave function, we will assume that repeated measurement is possible, and thus that the states $\phi_i$ of $S$ remain unchanged after observation. As we are working under the Everett interpretation, we will let ourselves define a joint system-observer wave function $\Psi$ with initial configuration

$$\Psi_0 = \psi^S \psi^O = \sum_i a_i \phi_i \psi^O.$$

Then, our goal is to understand what happens to $\Psi$ when $O$ repeatedly observes $S$. Thus, we will define $\Psi_n$ to represent the state of $\Psi$ after $n$ independent observations of $S$ are performed by $O$.

Consider the simple case where $\psi^S = \phi_i$ and thus we are in initial state $\Psi_0 = \phi_i \psi^O$. In this case, by our previous definition of $\psi^O_i$ and the requirement that $\phi_i$ remain unchanged, we can write the state after the observation as $\Psi_1 = \phi_i \psi^O_i$. Since quantum mechanics is linear, and the eigenfunctions $\phi_i$ are orthogonal, it must be that this same process occurs for each $\phi_i$.

Thus, by the principle of superposition, we can write $\Psi_1$ in its general form as

$$\Psi_1 = \sum_i a_i \phi_i \psi^O_i.$$

For the next observation, each $\psi^O_i$ will once again see the same $\phi_i$, since it has not changed state. As previously defined, we use the notation $\psi^O_{i,i}$ to denote the state of $O$ after observing $S$ in state $\phi_i$ twice. Thus, we can write $\Psi_2$ as

$$\Psi_2 = \sum_i a_i \phi_i \psi^O_{i,i}$$

and more generally, we can write $\Psi_n$ as

$$\Psi_n = \sum_i a_i \phi_i \psi^O_{i,i,\ldots,i}$$

where $i$ is repeated $n$ times in $\psi^O_{i,i,\ldots,i}$.

Thus, once a measurement of $S$ has been performed, every subsequent measurement will see the same eigenfunction, even though all eigenfunctions continue to exist. We can see this from the fact that the same index $i$ is repeated in each state $\psi^O_{i,i,\ldots,i}$ of $O$. In this way, despite the fact that the original wave function for $S$ is in a superposition of many eigenfunctions, once a measurement has been performed, each subsequent measurement will always see the same eigenfunction.

Note that there is no longer a single, independent state of $O$. Instead, there are many states $\psi^O_{i,i,\ldots,i}$, one for each eigenfunction $\phi_i$. What does that mean? It means that for every eigenfunction $\phi_i$ of $S$, there is a corresponding state of $O$ wherein $O$ sees that eigenfunction. Thus, one is required to accept that there are many observers, each with corresponding state $\psi^O_{i,i,\ldots,i}$, each one seeing a different eigenfunction $\phi_i$. This is the origin of the Everett interpretation's "multiple worlds."

From the perspective of each $\psi^O_{i,i,\ldots,i}$ in this scenario, it will appear as if $\psi^S$ has "collapsed" from a complex superposition into a single eigenfunction $\phi_i$. As we can see from the joint wave function, however, that is not the case—in fact, the entire superposition still exists. What has changed is only that $\psi^O$, the state of $O$, is no longer independent of that superposition, and has instead become entangled with it.
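
This bookkeeping can be made concrete with a toy sketch (my own construction, not Everett's notation) that tracks each term $a_i \phi_i \psi^O_{i,\ldots,i}$ of the joint wave function as a tuple of amplitude, system eigenindex, and observer memory:

```python
# Each "branch" of the joint wave function is a term a_i * phi_i * psi^O_{i,...,i},
# tracked here as (amplitude, system eigenindex, observer memory sequence).
def observe(branches):
    """One ideal measurement: the observer records which eigenfunction it sees.
    Note that this map is linear: it acts on each term of the superposition
    independently, and it never discards any term."""
    return [(a, i, memory + (i,)) for (a, i, memory) in branches]

amps = [0.6, 0.8j]                            # a_1, a_2 with sum |a_i|^2 = 1
branches = [(a, i, ()) for i, a in enumerate(amps)]

for _ in range(3):                            # three repeated observations of S
    branches = observe(branches)

for a, i, memory in branches:
    print(f"amplitude {a}: system in phi_{i}, observer memory {memory}")
    # Within each branch, every record agrees -- the wave function looks
    # "collapsed" to that observer, even though both branches still exist.
    assert all(m == i for m in memory)
```

Running this prints two branches, with memories (0, 0, 0) and (1, 1, 1): every observer's records agree internally, which is exactly the apparent collapse derived above.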

The Apparent Randomness of Measurement

Suppose we now have many such systems $S$, which we will denote $S^r$ where $1 \le r \le N$. Consider the observer $O$ from before, but with the modification that instead of repeatedly observing a single $S$, $O$ observes a different $S^r$ in each measurement, such that $\Psi_r$ is the joint system-observer wave function after measuring the $r$th system $S^r$.

As before, we will define the initial joint wave function $\Psi_0$ as

$$\Psi_0 = \sum_{i_1, i_2, \ldots, i_N} a_{i_1, i_2, \ldots, i_N}\, \phi^1_{i_1} \phi^2_{i_2} \cdots \phi^N_{i_N}\, \psi^O$$

where we are summing over all possible combinations of eigenfunctions for the different systems with arbitrary coefficients $a_{i_1, i_2, \ldots, i_N}$ for each combination.

Then, as before, we can use the principle of superposition to find $\Psi_1$ as

$$\Psi_1 = \sum_{i_1, i_2, \ldots, i_N} a_{i_1, i_2, \ldots, i_N}\, \phi^1_{i_1} \phi^2_{i_2} \cdots \phi^N_{i_N}\, \psi^O_{i_1}$$

since the first measurement will see the state of $S^1$. More generally, we can write $\Psi_r$ as

$$\Psi_r = \sum_{i_1, i_2, \ldots, i_N} a_{i_1, i_2, \ldots, i_N}\, \phi^1_{i_1} \phi^2_{i_2} \cdots \phi^N_{i_N}\, \psi^O_{i_1, i_2, \ldots, i_r}$$

following the same principle, as each measurement of an $S^r$ will see the corresponding state $\phi^r_{i_r}$.

Thus, when subsequent measurements of identical systems are performed, the resulting sequence of eigenfunctions observed by $O$ in each branch appears random (according to what distribution we will show in the next subsection), since there is no structure to the sequences $i_1, i_2, \ldots, i_r$. This appearance of randomness is true even though the entire process is completely deterministic. If, alternatively, $O$ were to return to a previously-measured $S^r$, we would get a repeat of the first analysis, wherein $O$ would always see the same state as was previously measured.
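
The same toy approach extends to the many-systems case (again a sketch of my own, with made-up amplitudes):

```python
# Enumerate the branches of the joint wave function after measuring N identical
# systems, each in the superposition a_0 phi_0 + a_1 phi_1.
import itertools
import numpy as np

a = [0.6, 0.8]        # real amplitudes for simplicity; Born weights 0.36 and 0.64
N = 4

branches = [(np.prod([a[i] for i in seq]), seq)
            for seq in itertools.product(range(2), repeat=N)]

# Every possible memory sequence (i_1, ..., i_N) occurs in some branch, with no
# structure singling any one out -- so each observer's record looks random...
print([seq for _, seq in branches])

# ...while the squared branch amplitudes factor exactly as the Born rule requires.
for amp, seq in branches:
    assert np.isclose(amp ** 2, np.prod([a[i] ** 2 for i in seq]))
```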

The Born Probability Rule

As before, consider a system $S$ in state $\psi^S = \sum_i a_i \phi_i$. To be able to talk about a probability for an observer to see state $\phi_i$, we need some function $P(a_i)$ that will serve as a measure of that probability.

Since we know that quantum mechanics is invariant up to an overall phase, we will impose the condition on $P$ that it must satisfy the equation

$$P(a_i) = P(|a_i|)$$

such that the measure of a branch depends only on the magnitude of its coefficient.

Furthermore, by the linearity of quantum mechanics, we will impose the condition on $P$ such that, for $\alpha$ defined as

$$\alpha = \sqrt{\sum_i |a_i|^2},$$

$P$ must satisfy the equation

$$P(\alpha) = \sum_i P(a_i)$$

so that the measure assigned to a superposition as a whole is the sum of the measures assigned to its components.

Together, these two conditions fully specify what function $P$ must be. From the linearity condition, we have

$$P\left(\sqrt{\sum_i |a_i|^2}\right) = \sum_i P(a_i)$$

which, by the phase invariance condition $P(a_i) = P(|a_i|)$, is equivalent to

$$P\left(\sqrt{\sum_i |a_i|^2}\right) = \sum_i P(|a_i|).$$

Then, defining a new function $g(x) = P(\sqrt{x})$ yields

$$g\left(\sum_i |a_i|^2\right) = \sum_i g(|a_i|^2)$$

which implies that $g$ must be additive, and thus (for a continuous, non-negative measure) a linear function such that $g(x) = c\,x$ for some constant $c$. Therefore,

$$P(|a_i|) = g(|a_i|^2) = c\,|a_i|^2$$

which, imposing the phase invariance condition, becomes

$$P(a_i) = c\,|a_i|^2$$

which, where $\psi^S$ is normalized so that the total measure $\sum_i P(a_i) = c \sum_i |a_i|^2 = c$ is 1, gives

$$P(a_i) = |a_i|^2,$$

which is the Born rule.
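
As a quick numerical sanity check (mine, with arbitrary coefficients), we can confirm that the squared-amplitude measure satisfies the additivity condition while a rival measure such as $P(a) = |a|$ does not:

```python
# Check the additivity condition P(alpha) = sum_i P(a_i), where
# alpha = sqrt(sum_i |a_i|^2), for the Born measure and for a rival measure.
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=5) + 1j * rng.normal(size=5)    # arbitrary coefficients
alpha = np.sqrt(np.sum(np.abs(a) ** 2))

born = lambda x: np.abs(x) ** 2                     # P(a) = |a|^2
rival = lambda x: np.abs(x)                         # P(a) = |a|

assert np.isclose(born(alpha), np.sum(born(a)))         # holds exactly
assert not np.isclose(rival(alpha), np.sum(rival(a)))   # fails for superpositions
```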

The fact that this measure is a probability, beyond the fact that it is the only measure it could possibly be, is deserving of further proof. The concept of probability is notoriously hard to define, however, and without a definition of probability, it is just as meaningful to call P something as arbitrary as the “stallion” of the wave function as the “probability.”[2] Nevertheless, for nearly every reasonable probability theory that exists, such proofs have been provided. Everett provided a proof based on the standard frequentist definition of probability[7][8], David Deutsch (Oxford theoretical physicist) has provided a proof based on game theory[18], and David Wallace (USC theoretical physicist) has provided a proof based on decision theory[11]. For any reasonable definition of probability, wave mechanics is able to show that the above measure satisfies it in the limit without any additional postulates.[19][14][20]

Arguments For and Against the Everett Interpretation

Despite the unrivaled empirical success of quantum theory, the very suggestion that it may be literally true as a description of nature is still greeted with cynicism, incomprehension, and even anger.[21]

David Deutsch, 1996

Falsifiability and Empiricism

Perhaps the most common criticism of the Everett interpretation is the claim that it is not falsifiable, and thus falls outside the realm of empirical science.[22] In fact, this claim is simply not true—many different methods for testing the Everett interpretation have been proposed, and a great deal of empirical data regarding the Everett interpretation is already available.

One such method we have already discussed: the Everett interpretation removes the Copenhagen interpretation’s postulate that the wave function must collapse at a particular length scale. Were it ever to be conclusively demonstrated that superposition was impossible past some point, the Everett interpretation would be disproved. Thus, every demonstration of superposition at larger and larger length scales—such as the Carbon-60 experiment mentioned previously[15]—is a test of the Everett interpretation. Arguably, it is the Copenhagen interpretation which is unfalsifiable, since it makes no claim about where the boundary lies at which wave function collapse occurs, and thus proponents can respond to the evidence of larger superpositions simply by changing their theory and moving their proposed boundary up.

Another method of falsification regards the interaction between the Everett interpretation and quantum gravity. The Everett interpretation makes a definitive prediction that gravity must be quantized. Were gravity not quantized—not wrapped up in the wave function like all the other forces—and instead simply a background metric for the entire wave function, we would be able to detect the gravitational impact of the other states we were in a superposition with.[10][23] In 1957, Richard Feynman, who would later come to explicitly support the Everett interpretation[16] as well as become a Nobel laureate, presented an early version of the above argument as a reason to believe in quantum gravity, arguing, “There is a bare possibility (which I shouldn’t mention!) that quantum mechanics fails and becomes classical again when the amplification gets far enough [but] if you believe in quantum mechanics up to any level then you have to believe in gravitational quantization.”[24]

Another proposal concerns differing probabilities of finding ourselves in the universe we are in depending on whether the Everett interpretation holds or not. If the Everett interpretation is false, and the universe only has a single state, there is only one state for us to find ourselves in, and thus we would expect to find ourselves in an approximately random universe. On the other hand, if the Everett interpretation is true, and there are many different states that the universe is in, we could find ourselves in any of them, and thus we would expect to find ourselves in one which was more disposed than average towards the existence of life. Approximate calculations of the relative probability of the observed universe based on the Hartle-Hawking boundary condition strongly support the Everett interpretation.[10]

Finally, as we made a point of being clear about in the "The Everett Interpretation of Quantum Mechanics" section, the Everett interpretation is simply a consequence of taking the wave function seriously as a physical entity. Thus, it is somewhat unfair to ask the Everett interpretation to achieve falsifiability independently of the theory—quantum mechanics—which implies it.[22] If a new theory were proposed that said quantum mechanics stopped working outside of the future light cone of Earth, we would not accept it as a new physical controversy—we would say that, unless there is incredibly strong proof otherwise, we should by default assume that the same laws of physics apply everywhere. The Everett interpretation is just that default—it is only by historical accident that it happened to be discovered after the Copenhagen interpretation. Thus, to the extent that one has confidence in the universal applicability of the principles of quantum mechanics, one should have equal confidence in the Everett interpretation, since it is a logical consequence. It is in fact all the more impressive—and a testament to its importance to quantum mechanics—that the Everett interpretation manages to achieve falsifiability and empirical support despite its primary virtue being simply that it demands quantum mechanics be applied universally.

Simplicity

Another common objection to the Everett interpretation is that it “postulates too many universes,” which Sean Carroll, a Caltech cosmologist and supporter of the Everett interpretation, calls “the basic silly objection.”[25] At this point, it should be very clear why this objection is silly: the Everett interpretation postulates no such thing—the existence of “many universes” is an implication, not a postulate, of the theory. Opponents of the Everett interpretation, however, have accused it of a lack of simplicity on the grounds that adding in all those additional universes is unnecessary added complexity, and since by the principle of Occam’s razor the simplest explanation is probably correct, the Everett interpretation can be rejected.[26]

In fact, Occam’s razor is an incredibly strong argument in favor of the Everett interpretation. To explain this, we will first need to formalize what we mean by Occam’s razor, which will require a bit of theoretical computer science. Specifically, we will make use of Solomonoff’s theory of inductive inference: the best, most general framework we have for comparing the probability of empirically indistinguishable physical theories.[27][28][29][3] To use Solomonoff’s formalism, only one assumption is required of us: under some encoding scheme, competing theories of the universe can be modeled as programs. This assumption does not imply that the universe must be computable, only that it can be computably described, which all physical theories capable of being written down must abide by. From this assumption, and the axioms of probability theory, Solomonoff induction can be derived.[27]

Solomonoff induction tells us that, if we have a set of programs[4] which encode for empirically indistinguishable physical theories, the probability of the theory described by a given program $p$ with length $|p|$ in bits (0s and 1s) is given by

$$P(p) \propto 2^{-|p|}$$

up to a constant normalization factor calculated across all the programs to make the probabilities sum to 1.[27] We can see how this makes intuitive sense, since if we are predicting an arbitrary system, and thus have no information about the correctness of a program implementing a theory other than its length in bits, we are forced to assign equal probability to each of the two options for each bit, 0 and 1, and thus each additional bit adds a factor of $\frac{1}{2}$ to the total probability of the program. Furthermore, we can see how Solomonoff induction serves as a formalization of Occam's razor, since it gives us a way of calculating how much to discount longer, more complex theories in favor of shorter, simpler ones.
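
As a toy illustration of how such a prior behaves (the theory names and bit-lengths below are made up purely for illustration, not real complexity estimates):

```python
# A Solomonoff-style prior over competing (empirically indistinguishable)
# theories: each theory's unnormalized probability is 2^(-length in bits).
toy_theories = {
    "W (wave mechanics alone)": 40,   # made-up bit-lengths
    "W + pilot-wave particles": 55,
    "W + collapse machinery":   70,
}

unnormalized = {name: 2.0 ** -bits for name, bits in toy_theories.items()}
Z = sum(unnormalized.values())                     # normalization constant

for name, p in unnormalized.items():
    print(f"{name}: {p / Z:.6g}")
# Each extra bit of postulate halves a theory's prior probability, so the
# 15 extra bits above already cost a factor of 2^15 ~= 33,000.
```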

Now, we will attempt to apply this formalism to assign probabilities to competing interpretations of quantum mechanics, which we will represent as elements of the set $\{T_i\}$. Let $W$ be the shortest program which computes the wave equation. Since the wave equation is a component of all quantum theories, it must be that $|W| \le |T_i|$ for every $i$. Thus, the smallest that any $T_i$ could possibly be is $|W|$, such that a $T_i$ of length $|W|$ is at least twice as probable as any $T_i$ of greater length. The Everett interpretation is such a $T_i$, since it requires nothing else beyond wave mechanics, and follows directly from it. Therefore, from the perspective of Solomonoff induction, the Everett interpretation is provably optimal in terms of program length, and thus also in terms of probability.

To get a sense of the magnitude of these effects, we will attempt to approximate how much less probable the Copenhagen interpretation is than the Everett interpretation. We will represent the Copenhagen interpretation $C$ as made of three parts: $W$, wave mechanics; $O$, a machine which determines when to collapse the wave function; and $L$, classical mechanics. Then, where the Everett interpretation $E$ is just $W$, we can write their relative probabilities as

$$\frac{P(C)}{P(E)} = \frac{2^{-(|W| + |O| + |L|)}}{2^{-|W|}} = 2^{-(|O| + |L|)}$$

How large are $O$ and $L$? As a quick Fermi estimate for $L$, we will take Newton’s three laws of motion, Einstein’s general relativistic field equation, and Maxwell’s four equations of electromagnetism as the principles of classical mechanics, for a total of 8 fundamental equations. Assume the minimal implementation for each one averages 100 bits—a very modest estimate, considering the smallest chess program ever written is 3896 bits long.[30] Then, the relative probability is at most

$$\frac{P(C)}{P(E)} = 2^{-(|O| + |L|)} \le 2^{-|L|} \approx 2^{-800} \approx 10^{-241}$$

which is about the probability of picking four random atoms in the universe and getting the same one each time, and is thus so small as to be trivially dismissible.
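
The arithmetic here is easy to reproduce (a rough order-of-magnitude check, assuming roughly $10^{80}$ atoms in the observable universe):

```python
# Reproduce the Fermi estimate: 8 classical equations at ~100 bits each.
n_equations = 8            # Newton (3) + Einstein field equation (1) + Maxwell (4)
bits_each = 100            # assumed minimal implementation size per equation
L_bits = n_equations * bits_each                   # |L| = 800 bits

ratio = 2.0 ** -L_bits                             # P(C)/P(E) <= 2^-|L|
print(f"P(C)/P(E) <= 2^-{L_bits} ~= {ratio:.2e}")  # ~1.5e-241

# For comparison: the chance that four uniformly random atoms drawn from the
# observable universe (~1e80 atoms) are all the very same atom.
atoms = 1e80
print(f"(1/atoms)^3 ~= {atoms ** -3:.1e}")         # ~1e-240
```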

The Arrow of Time

Another objection to the Everett interpretation is that it is time-symmetric. Since the Everett interpretation is just the wave equation, its time symmetry follows from the fact that the Schrödinger equation is time-reversal invariant, or more technically, charge-parity-time-reversal (CPT) invariant. The Copenhagen interpretation, however, is not, since wave function collapse is a fundamentally irreversible event.[31] In fact, CPT symmetry is not the only natural property that wave function collapse lacks that the Schrödinger equation has—wave function collapse breaks linearity, unitarity, differentiability, locality, and determinism.[13][12][16][32] The Everett interpretation, by virtue of consisting of nothing but the Schrödinger equation, preserves all of these properties. This is an argument in favor of the Everett interpretation, since there are strong theoretical and empirical reasons to believe that such symmetries are properties of the universe.[33][34][35][5]

Nevertheless, as mentioned above, it has been argued that the Copenhagen interpretation’s breaking of CPT symmetry is actually a point in its favor, since it supposedly explains the arrow of time, the idea that time does not behave symmetrically in our everyday experience.[31] Unfortunately for the Copenhagen interpretation, wave function collapse does not actually imply any of the desired thermodynamic properties of the arrow of time.[31] Furthermore, under the Everett interpretation, the arrow of time can be explained using the standard thermodynamic explanation that the universe started in a very low-entropy state.[36]

In fact, accepting the Everett interpretation gets rid of the need for the current state of the universe to be dependent on subtle initial variations in that low-entropy state.[36] Instead, the current state of the universe is simply one of the many different components of the wave function that evolved deterministically from that initial state. Thus, the Everett interpretation is even simpler—from a Solomonoff perspective—than was shown in the "Simplicity" section, since it forgoes the need for its program to specify a complex initial condition for the universe with many subtle variations.

Other Interpretations of Quantum Mechanics

The mathematical formalism of the quantum theory is capable of yielding its own interpretation.[9]

Bryce DeWitt, 1970

Decoherence

It is sometimes proposed that wave mechanics alone is sufficient to explain the apparent phenomenon of wave function collapse without the need for the Everett interpretation’s multiple worlds. The justification for this assertion is usually based on the idea of decoherence. Decoherence is the mathematical result, following from the wave equation, that tightly-interacting superpositions tend to evolve into non-interacting superpositions.[37][38] Importantly, decoherence does not destroy the superposition—it merely “diagonalizes” it, which is to say, it removes the interference terms.[37] After decoherence, one is always still left with a superposition of multiple states.[39][40] The only way to remove the resulting superposition is to assume wave function collapse, which every statistical theory claiming to do away with multiple worlds has been shown to implicitly assume.[41][19] There is no escaping the logic presented in the "The Apparent Collapse of The Wave Function" section—if one accepts the universal applicability of the wave function, one must accept the multiple worlds it implies.
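
A small sketch (my own; the per-qubit overlap parameter is purely illustrative) of what decoherence does, and does not do, to a system's reduced density matrix:

```python
# A qubit in superposition a|0> + b|1> becomes entangled with an n-qubit
# environment whose record states |E_0>, |E_1> have per-qubit overlap c. Tracing
# out the environment leaves the reduced density matrix below: the interference
# (off-diagonal) terms are suppressed by <E_1|E_0> = c^n, but BOTH diagonal
# terms -- both branches -- remain. Decoherence never removes the superposition.
import numpy as np

a, b = 0.6, 0.8                  # system amplitudes
c = 0.9                          # per-qubit environment overlap (illustrative)

for n in [0, 5, 20, 80]:         # number of environment qubits
    overlap = c ** n             # <E_1|E_0> shrinks exponentially with n
    rho = np.array([[a * a,           a * b * overlap],
                    [a * b * overlap, b * b          ]])
    print(f"n = {n:3d}: off-diagonal = {rho[0, 1]:.2e}, "
          f"diagonals = {rho[0, 0]:.2f}, {rho[1, 1]:.2f}")
```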

That is not to say that decoherence is not an incredibly valuable, useful concept for the interpretation of quantum mechanics, however. In the Everett interpretation, decoherence serves the very important role of ensuring that macroscopic superpositions—the multiple worlds of the Everett interpretation—are non-interacting, and that each one thus behaves approximately classically.[41][40] Thus, the simplest decoherence-based interpretation of quantum mechanics is in fact the Everett interpretation. From the Stanford Encyclopedia of Philosophy, “Decoherence as such does not provide a solution to the measurement problem, at least not unless it is combined with an appropriate interpretation of the theory [and it has been suggested that] decoherence is most naturally understood in terms of Everett-like interpretations.”[39] The discoverer of decoherence himself, German theoretical physicist Heinz-Dieter Zeh, is an ardent proponent of the Everett interpretation.[42][36]

Furthermore, we have given general arguments in favor of the existence of the multiple worlds implied by the Everett interpretation, which are all reasons to favor the Everett interpretation over any single-world theory. Specifically, calculations of the probability of the current state of the universe support the Everett interpretation[10], as does the fact that the Everett interpretation allows for the initial state of the universe to be simpler[36].

Consistent Histories

The consistent histories interpretation of quantum mechanics, due primarily to Robert Griffiths, eschews probabilities over “measurement” in favor of probabilities over “histories,” which are defined as arbitrary sequences of events.[43] Consistent histories provides a way of formalizing which classical probabilistic questions make sense in a quantum domain and which do not—that is, which are consistent. Its explanation for why this consistency always appears at large length scales is based on the idea of decoherence, as discussed above.[43][44] In this context, consistent histories is a very useful tool for reasoning about probabilities in the context of quantum mechanics, and for providing yet another proof of the natural origin of the Born rule.

Proponents of consistent histories claim that it does not imply the multiple worlds of the Everett interpretation.[43] However, since the theory is based on decoherence, there are always multiple different consistent histories, which cannot be removed via any natural history selection criterion.[45][44] Thus, just as the wave equation implies the Everett interpretation, so too does consistent histories. To see this, we will consider the fact that consistent histories works because of Feynman’s observation that the amplitude of any given final state can be calculated as the sum of the amplitudes along all the possible paths to that state.[44][46] Importantly, we know that two different histories—for example, the different branches of a Mach-Zehnder interferometer—can diverge and then later merge back together and interfere with each other. Thus, it is not in general possible to describe the state of the universe as a single history, since other, parallel histories can interfere and change how that state will later evolve. A history is great for describing how a state came to be, but not very useful for describing how it might evolve in the future. For that, including the other parallel histories—the full superposition—is necessary.
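
For concreteness, here is a minimal path-sum sketch (mine, using one common beam-splitter convention) of the Mach-Zehnder example, where the amplitude at each output port is a sum over both histories, so neither history alone determines the outcome:

```python
# Mach-Zehnder interferometer as a sum over paths: a photon entering port 0
# passes two 50/50 beam splitters; the amplitude at each output port is the
# sum of the amplitudes along the two possible paths through the device.
import numpy as np

bs = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)      # one common beam-splitter convention

# Sum over the two paths (via arm 0 and via arm 1) from input 0 to each output:
amp_out0 = bs[0, 0] * bs[0, 0] + bs[0, 1] * bs[1, 0]
amp_out1 = bs[1, 0] * bs[0, 0] + bs[1, 1] * bs[1, 0]

print(abs(amp_out0) ** 2, abs(amp_out1) ** 2)   # 0.0 and 1.0: the two histories
# interfere completely, which no single history could account for on its own.
```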

Once one accepts that the existence of multiple histories is necessary on a microscopic level, their existence on a macroscopic level follows—excluding them would require an extra postulate, which would make consistent histories equivalent to the Copenhagen interpretation. If such an extra postulate is not made, then the result is macroscopic superposition, which is to say, the Everett interpretation. This formulation of consistent histories without any extra postulates has been called the theory of “the universal path integral,” exactly mirroring Everett’s theory of the universal wave function.[46] The theory of the universal wave function—the Everett interpretation—is to the theory of the universal path integral as wave mechanics is to the sum-over-paths approach, which is to say that they are both equivalent formalisms with the same implications.

Pilot Wave Theory

The pilot wave interpretation, otherwise known as the de Broglie-Bohm interpretation, postulates that the wave function, rather than being physically real, is a background which “guides” otherwise classical particles.[47] As we saw with the Copenhagen interpretation, the obvious question to ask of the pilot wave interpretation is whether its extra postulate—in this case adding in classical particles—is necessary or useful in any way. The answer to this question is a definitive no. Heinz-Dieter Zeh says of the pilot wave interpretation, “Bohm’s pilot wave theory is successful only because it keeps Schrödinger’s (exact) wave mechanics unchanged, while the rest of it is observationally meaningless and solely based on classical prejudice.”[42] As we have previously shown in the "The Mathematics of the Everett Interpretation" section, wave mechanics is capable of solving all supposed problems of measurement without the need for any additional postulates. While it is true that pilot wave theory solves all these problems as well, it does so not by virtue of its classical add-ons, but simply by virtue of including the entirety of wave mechanics.[42][48]

Furthermore, since pilot wave theory has no collapse postulate, it does not even get rid of the existence of multiple worlds. If the universe computes the entirety of the wave function, including all of its multiple worlds, then all of the observers in those worlds should experience physical reality by the act of being computed—it is not at all clear how the classical particles could have physical reality and the rest of the wave function not.[21][42] In the words of David Deutsch, “pilot-wave theories are parallel-universes theories in a state of chronic denial. This is no coincidence. Pilot-wave theories assume that the quantum formalism describes reality. The multiplicity of reality is a direct consequence of any such theory.”[21]

However, since the extra classical particles only exist in one of these worlds, the pilot wave interpretation also does not resolve the problem of the low likelihood of the observed state of the universe[10] or the complexity of the required initial condition[36]. Thus, the pilot wave interpretation, despite being strictly more complicated than the Everett interpretation—both in terms of its extra postulate and the concerns above—produces exactly no additional explanatory power. Therefore, we can safely dismiss the pilot wave interpretation on the grounds of the same simplicity argument used against the Copenhagen interpretation in the "Simplicity" section.

Conclusion

Harvard theoretical physicist Sidney Coleman uses the following parable from Wittgenstein as an analogy for the interpretation of quantum mechanics: “‘Tell me,’ Wittgenstein asked a friend, ‘why do people always say, it was natural for man to assume that the sun went round the Earth rather than that the Earth was rotating?’ His friend replied, ‘Well, obviously because it just looks as though the Sun is going round the Earth.’ Wittgenstein replied, ‘Well, what would it have looked like if it had looked as though the Earth was rotating?’”[49] Of course, the answer is it would have looked exactly as it actually does! To our fallible human intuition, it seems as if we are seeing the sun rotating around the Earth, despite the fact that what we are actually seeing is a heliocentric solar system. Similarly, it seems as if we are seeing the wave function randomly collapsing around us, despite the fact that this phenomenon is entirely explained just from the wave equation, which we already know empirically is a law of nature.

It is perhaps unfortunate that the Everett interpretation ended up implying the existence of multiple worlds, since this fact has led to many incorrectly viewing the Everett interpretation as a fanciful theory of alternative realities, rather than the best, simplest theory we have as of yet for explaining measurement in quantum mechanics. The Everett interpretation’s greatest virtue is the fact that it is barely even an interpretation of quantum mechanics, holding as its most fundamental principle that the wave equation can interpret itself. In the words of David Wallace: “If I were to pick one theme as central to the tangled development of the Everett interpretation of quantum mechanics, it would probably be: the formalism is to be left alone. What distinguished Everett’s original paper both from the Dirac-von Neumann collapse-of-the-wavefunction orthodoxy and from contemporary rivals such as the de Broglie-Bohm theory was its insistence that unitary quantum mechanics need not be supplemented in any way (whether by hidden variables, by new dynamical processes, or whatever).”[11]

There is a tendency of many physicists to describe the Everett interpretation simply as one possible answer to the measurement problem. It should hopefully be clear at this point why that view should be rejected—the Everett interpretation is not simply yet another solution to the measurement problem, but rather a straightforward conclusion of quantum mechanics itself that shows that the measurement problem should never have been a problem in the first place. Without the Everett interpretation, one is forced to needlessly introduce complex, symmetry-breaking, empirically-unjustifiable postulates—either wave function collapse or pilot wave theory—just to explain what was already explicable under basic wave mechanics. The Everett interpretation is not just another possible way of interpreting quantum mechanics, but a necessary component of any quantum theory that wishes to explain the phenomenon of measurement in a natural way. In the words of John Wheeler, Everett’s thesis advisor, “No escape seems possible from [Everett's] relative state formulation if one wants to have a complete mathematical model for the quantum mechanics that is internal to an isolated system. Apart from Everett’s concept of relative states, no self-consistent system of ideas [fully explains the universe].”[6]

References

[1] Heisenberg, W. (1927). The actual content of quantum theoretical kinematics and mechanics. Zeitschrift für Physik.

[2] Anon. The Solvay Conference, probably the most intelligent picture ever taken, 1927.

[3] Einstein, A., Podolsky, B. and Rosen, N. (1935). Can quantum-mechanical description of physical reality be considered complete? Physical Review.

[4] Greenberger, D. M. (1990). Bell's theorem without inequalities. American Journal of Physics.

[5] Townsend, J. (2010). Quantum physics: A fundamental approach to modern physics. University Science Books.

[6] Wheeler, J. A. (1957). Assessment of Everett's "relative state" formulation of quantum theory. Reviews of Modern Physics.

[7] Everett, H. (1957). The theory of the universal wave function. Princeton University Press.

[8] Everett, H. (1957). "Relative state" formulation of quantum mechanics. Reviews of Modern Physics.

[9] DeWitt, B. S. (1970). Quantum mechanics and reality. Physics Today.

[10] Barrau, A. (2015). Testing the Everett interpretation of quantum mechanics with cosmology.

[11] Wallace, D. (2007). Quantum probability from subjective likelihood: Improving on Deutsch's proof of the probability rule. Studies in History and Philosophy of Science.

[12] Saunders, S., Barrett, J., Kent, A. and Wallace, D. (2010). Many worlds? Everett, quantum theory, & reality. Oxford University Press.

[13] Wallace, D. (2014). The emergent multiverse. Oxford University Press.

[14] Wallace, D. (2006). Epistemology quantized: Circumstances in which we should come to believe in the Everett interpretation. The British Journal for the Philosophy of Science.

[15] Arndt, M., Nairz, O., Vos-Andreae, J., Keller, C., van der Zouw, G. and Zeilinger, A. (1999). Wave-particle duality of C60 molecules. Nature.

[16] Price, M. C. (1995). The Everett FAQ.

[17] Hawking, S. W. (1975). Black holes and thermodynamics. Physical Review D.

[18] Deutsch, D. (1999). Quantum theory of probability and decisions. Proceedings of the Royal Society of London.

[19] Wallace, D. (2003). Everettian rationality: Defending Deutsch's approach to probability in the Everett interpretation. Studies in History and Philosophy of Science.

[20] Clark, C. (2010). A theoretical introduction to wave mechanics.

[21] Deutsch, D. (1996). Comment on Lockwood. The British Journal for the Philosophy of Science.

[22] Carroll, S. (2015). The wrong objections to the many-worlds interpretation of quantum mechanics.

[23] Hartle, J. B. (2014). Spacetime quantum mechanics and the quantum mechanics of spacetime.

[24] Zeh, H. D. (2011). Feynman's interpretation of quantum theory. The European Physical Journal.

[25] Carroll, S. (2014). Why the many-worlds formulation of quantum mechanics is probably correct.

[26] Rae, A. I. M. (2009). Everett and the Born rule. Studies in History and Philosophy of Science.

[27] Solomonoff, R. J. (1960). A preliminary report on a general theory of inductive inference.

[28] Soklakov, A. N. (2001). Occam's razor as a formal basis for a physical theory.

[29] Altair, A. (2012). An intuitive explanation of Solomonoff induction.

[30] Kelion, L. (2015). Coder creates smallest chess game for computers.

[31] Bitbol, M. (1988). The concept of measurement and time symmetry in quantum mechanics. Philosophy of Science.

[32] Yudkowsky, E. (2008). The quantum physics sequence: Collapse postulates.

[33] Ellis, J. and Hagelin, J. S. (1984). Search for violations of quantum mechanics. Nuclear Physics.

[34] Ellis, J., Lopez, J. L., Mavromatos, N. E. and Nanopoulos, D. V. (1996). Precision tests of CPT symmetry and quantum mechanics in the neutral kaon system. Physical Review D.

[35] Agrawal, M. (2003). Linearity in quantum mechanics.

[36] Zeh, H. D. (1988). Measurement in Bohm's versus Everett's quantum theory. Foundations of Physics.

[37] Zurek, W. H. (2002). Decoherence and the transition from quantum to classical—revisited. Los Alamos Science.

[38] Schlosshauer, M. (2005). Decoherence, the measurement problem, and interpretations of quantum mechanics.

[39] Bacciagaluppi, G. (2012). The role of decoherence in quantum mechanics. Stanford Encyclopedia of Philosophy.

[40] Wallace, D. (2003). Everett and structure. Studies in History and Philosophy of Science.

[41] Zeh, H. D. (1970). On the interpretation of measurement in quantum theory. Foundations of Physics.

[42] Zeh, H. D. (1999). Why Bohm's quantum theory? Foundations of Physics Letters.

[43] Griffiths, R. B. (1984). Consistent histories and the interpretation of quantum mechanics. Journal of Statistical Physics.

[44] Gell-Mann, M. and Hartle, J. B. (1989). Quantum mechanics in the light of quantum cosmology. Int. Symp. Foundations of Quantum Mechanics.

[45] Wallden, P. (2014). Contrary inferences in consistent histories and a set selection criterion.

[46] Lloyd, S. and Dreyer, O. (2015). The universal path integral. Quantum Information Processing.

[47] Bohm, D. J. and Hiley, B. J. (1982). The de Broglie pilot wave theory and the further development of new insights arising out of it. Foundations of Physics.

[48] Brown, H. R. and Wallace, D. (2005). Solving the measurement problem: De Broglie-Bohm loses out to Everett. Foundations of Physics.

[49] Coleman, S. (1994). Quantum mechanics in your face.


  1. The relativistic variant, to be precise. ↩︎

  2. Fun fact: this paper was part of a paper contest that all undergraduate physics students at Harvey Mudd College participate in (which this paper won) for which there's a longstanding tradition (perpetuated by the students) that each student get a random word and be challenged to include it in their paper. My word was “stallion.” ↩︎

  3. In some of these sources, the equivalent formalism of Kolmogorov complexity is used instead. ↩︎

  4. To be precise, these should be universal Turing machine programs. ↩︎

Comments

I used to read Lubos Motl's blog (maybe between 2005-2010 or something?), first because I had had him as a QFT professor and liked him personally, and later because, I dunno, I found his physics posts informative and his non-physics ultra-right-wing posts weirdly entertaining and interesting in an insane way. Anyway he used to frequently post rants against the Many Worlds Interpretation, and in favor of the Copenhagen interpretation. (Maybe he still does, I dunno.) After reading those rants and sporadically pushing back in the comments, I maybe came to understand his perspective, though I could be wrong.

So, here's my attempt to describe Lubos's perspective (which he calls the Copenhagen interpretation) from your (and my) perspective:

Every now and then, you learn something about what Everett branch you happen to be in. For example, you peer at the spin-o-meter and it says "This electron is spin up". Before you looked, you had written in your lab notebook that the (partial trace) density matrix for the electron was [[0.5, 0], [0, 0.5]]. But after you see the spin-o-meter, you pull out your eraser and write a new (partial trace) density matrix for the electron in your lab notebook, na…

Yeah... to paraphrase Deutsch, that just sounds like multiple worlds in a state of chronic denial. Also, it is possible for other Everett branches to influence yours, the probability just gets so infinitesimally tiny as they decohere that it's negligible in practice.

habryka
(Is this true even when we apply pressure to it (as in, can we design machines or systems that leverage this systematically)? And are there are actually no macroscopic phenomena that are downstream of branches interacting? Like, I feel like one could have said such a sentence about relativity a few decades back, but it would have been pretty obviously wrong, and you end up with weird stuff like black holes if you take relativity seriously. I feel like I would be quite surprised if we ended up with no macroscopic phenomena that doesn't require explicitly modeling the interference by distant branches.)
evhub
Like I mention in the paper, the largest object for which we've done this so far (at least that I'm aware of) is the Carbon-60 molecule which, while impressive, is far from “macroscopic.” Preventing a superposition from decohering is really, really difficult—it's what makes building a quantum computer so hard. That being said, there are some wacky macroscopic objects that do sometimes need to be treated as quantum systems, like neutron stars (as I mention in the paper) or black holes (though we still don't fully understand black holes from a quantum perspective).
habryka
Ah, yeah, neutron stars do feel like a good example. And I do just recall you mentioning them.
interstice
There is some reason to think we will never see effects that depend on the other Everett branches, because we could say that a branching event has occurred precisely when the differences between the two components are no longer effectively reversible.

I'm very confused by the mathematical setup. Probably it's because I'm a mathematician and not a physicist, so I don't see things that would be clear for a physicists. My knowledge of quantum mechanics is very very basic, but nonzero. Here's how I rewrote the setup part of your paper as I was going along, I hope I got everything right.

You have a system $S$ which is some (separable, complex, etc.) Hilbert space. You also have an observer system O (which is also a Hilbert space). Elements of various Hilbert spaces are called "states". Then you have the joint system $S \otimes O$, of which $\Psi$ is an element, which comes with a (unitary) time-evolution $U$. Now if $S$ were not being observed, it would evolve by some (unitary) time-evolution $U^S$. We assume (though I think functional analysis gives this to us for free) that $\{\phi_i\}$ is an orthonormal basis of eigenfunctions of $U^S$, with eigenvalues $\lambda_i$.

Ok, now comes the trick: we assume that observation doesn't change the system, i.e. that the $S$-component of $\Psi$ is $\phi_i$. Wait, that doesn't make sense! $\Psi$ doesn't have an "$S$-component", something like an $S$-component makes sense only for pure states, if you have mixed states then the idea breaks dow…

I mean I could accept that the Schrödinger equation gives the evolution of the wave-function, but why care about its eigenfunctions so much?

I'm not sure if this will be satisfying to you but I like to think about it like this:

  • Experiments show that the order of quantum measurements matters. The mathematical representation of the physical quantities needs to take this into account. One simple kind of non-commutative objects are matrices.
  • If physical quantities are represented by matrices, the possible measurement outcomes need to be encoded in there somehow. They also need to be real. Both conditions are satisfied by the eigenvalues of self-adjoint matrices.
  • Experiments show that if we immediately repeat a measurement, we get the same outcome again. So if eigenvalues represent measurement outcomes, the state of the system after the measurement must be related to them somehow. Having the eigenvectors of the matrix represent the post-measurement states is a simple realization of this.

This isn't a derivation but it makes the mathematical structure of QM somewhat plausible to me.

mwacksen
Right, but (before reading your post) I had assumed that the eigenvectors somehow "popped out" of the Everett interpretation. But it seems like they are built in from the start. Which is fine, it's just deeply weird. So it's kind of hard to say whether the Everett interpretation is more elegant. I mean in the Copenhagen interpretation, you say "measuring can only yield eigenvectors" and in the Everett interpretation, you say "measuring can only yield eigenvectors and all measurements are done so the whole thing is still unitary". But in the end even the Everett interpretation distinguishes "observers" somehow, I mean in the setup you describe there isn't any reason why we can't call the "state space" the observer space and the observer "the system being studied" and then write down the same system from the other point of view... The "symmetric matrices <-> real eigenvalues" correspondence is of course important, this is essentially just the spectral theorem which tells us that real linear combinations of orthogonal projections are symmetric matrices (and vice versa). Nowadays matrices are seen as "simple non-commutative objects". I'm not sure if this was true when QM was being developed. But then again, I'm not really sure how linear QM "really" is. I mean all of this takes place on vectors with norm 1 (and the results are invariant under change of phase), and once we quotient out the norm, most of the linear structure is gone. I'm not sure what the correct way to think about the phase is. On one hand, it seems like a kind of "fake" unobservable variable and it should be permissible to quotient it out somehow. On the other hand, the complex-ness of the Schrödinger equation seems really important. But is this complexness a red herring? What goes wrong if we just take our "base states" as discrete objects and try to model QM as the evolution of probability distributions over ordered pairs of these states?
paragonal
This is a bit of a tangent but decoherence isn't exclusive to the Everett interpretation. Decoherence is itself a measurable physical process independent of the interpretation one favors. So explanations which rely on decoherence are part of all interpretations. In the derivations of decoherence you make certain approximations which loosely speaking depend on the environment being big relative to the quantum system. If you change the roles these approximations aren't valid any more. I'm not sure if we are on the same page regarding decoherence, though (see my other reply to your post). You might be interested in Lucien Hardy's attempt to find a more intuitive set of axioms for QM compared to the abstractness of the usual presentation: https://arxiv.org/abs/quant-ph/0101012
mwacksen
Isn't the whole point of the Everett interpretation that there is no decoherence? We have a Hilbert space for the system, and a Hilbert space for the observer, and a unitary evolution on the tensor product space of the system. With these postulates (and a few more), we can start with a pure state and end up with some mixed tensor in the product space, which we then interpret as being "multiple observers", right? I mean this is how I read your paper. We are surely not on the same page regarding decoherence, as I know almost nothing about it :) The arxiv-link looks interesting, I should have a look at it.
2TAG
Yes, the coherence-based approach (Everett's original paper, early MWI) is quite different from the decoherence-based approach (Dieter Zeh, post-1970). Deutsch uses the coherence-based approach, while most other many-worlders use the decoherence-based approach. He absolutely does establish that quantum computing is superior to classical computing, that underlying reality is not classical, and that the superiority of quantum computing requires some extra structure to reality. What the coherence-based approach does not establish is whether the extra structure adds up to something that could be called "alternate worlds" or parallel universes, in the sense familiar from science fiction. In the coherence-based approach, "worlds" are coherent superpositions. That means they exist at small scales, they can continue to interact with each other after "splitting", and they can be erased. These coherent superposed states are the kind of "world" we have direct evidence for, although they seem to lack many of the properties required for a fully fledged many-worlds theory, hence the scare quotes. In particular, if you just model the wave function, the only results you will get represent every possible outcome. In order to match observation, you have to keep discarding unobserved outcomes and renormalising, as you do in every interpretation. It's just that that extra stage is performed manually, not by the program.
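A toy numpy sketch of that last point (my own illustration, with arbitrary states): modeling the bare wave function keeps every branch, and matching a single observed outcome means projecting onto that branch and renormalising by hand.

```python
import numpy as np

# A post-measurement state like the one a bare wave-function model produces:
# each branch pairs a system outcome with an environment record.
joint = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

# Modeling the bare wave function keeps every branch. To match a single
# observed outcome, project onto the observed branch and renormalise.
P0 = np.kron(np.diag([1, 0]), np.eye(2))      # "system found in |0>"
branch = P0 @ joint
prob = np.vdot(branch, branch).real           # Born weight of the branch
branch = branch / np.sqrt(prob)               # renormalised single-outcome state

print(prob)    # 0.5
print(branch)  # |00>
```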
3SymplecticMan
I don't know if it would make things clearer, but questions about why eigenvectors of Hermitian operators are important can basically be recast as the single question of why orthogonal states correspond to mutually exclusive 'outcomes'. From that starting point, projection-valued measures let you associate real numbers with the various orthogonal outcomes, and that's how you build the operator with the corresponding eigenvectors. As for why orthogonal states are important in the first place, the natural thing to point to is the unitary dynamics (though there are also various more sophisticated arguments).
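As a concrete sketch of that recasting (my own toy example; the outcome labels are arbitrary): given orthonormal "outcome" vectors, a projection-valued measure assigns a projector to each, and the observable is just the outcome-weighted sum of those projectors.

```python
import numpy as np

# Three mutually exclusive outcomes = three orthonormal vectors.
basis = np.eye(3)
outcomes = np.array([-1.0, 0.0, 2.5])   # real labels assigned to each outcome

# Projection-valued measure: one projector per outcome; the observable
# is the outcome-weighted sum of the projectors.
A = sum(lam * np.outer(v, v) for lam, v in zip(outcomes, basis.T))

# By construction A is Hermitian with exactly those eigenvalues/eigenvectors.
assert np.allclose(A, A.conj().T)
assert np.allclose(np.linalg.eigvalsh(A), sorted(outcomes))
```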
1mwacksen
Yes, I know all of this; I'm a mathematician, just not one researching QM. The arxiv link looks interesting, but I have no time to read it right now. The question isn't "why are eigenvectors of Hermitian operators interesting", it's "why would we expect a system doing something as reasonable as evolving via the Schrödinger equation to do something as unreasonable as suddenly collapsing to one of its eigenfunctions".
1SymplecticMan
I guess I don't understand the question. If we accept that mutually exclusive states are represented by orthogonal vectors, and we want to distinguish mutually exclusive states of some interesting subsystem, then what's unreasonable about defining a "measurement" as something that correlates our apparatus with the orthogonal states of the interesting subsystem, or at least taking that as an ideal form of a measurement?
3mwacksen
I think my question isn't really well-defined. I guess it's more along the lines of "is there some 'natural-seeming' reasoning procedure that gets me QM?" And it's even less well-defined because I have no clear understanding of what QM is, as all my attempts to learn it eventually run into problems where something just doesn't make sense: not because I can't follow the math, but because I can't follow the interpretation. Yes, this makes sense, though "mutually exclusive states are represented by orthogonal vectors" is still really weird. I kind of get why Hermitian operators make sense here, but then we apply the measurement and the system collapses to one of its eigenfunctions. Why?
3SymplecticMan
If I understand what you mean, this is a consequence of what we defined as a measurement (or what's sometimes called a pre-measurement). Taking the tensor product structure and density matrix formalism as a given, if the interesting subsystem starts in a pure state, the unitary measurement structure implies that the reduced state of the interesting subsystem will generally be a mixed state after measurement. You might find parts of this review informative; it covers pre-measurements and also weak measurements, and in particular talks about how to actually implement measurements with an interaction Hamiltonian.
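A minimal numpy sketch of the pre-measurement being described (my own toy example, with an arbitrary initial state): a unitary that correlates the apparatus with the system's basis states takes the system's reduced state from pure to mixed, as its purity Tr(ρ²) shows.

```python
import numpy as np

def reduced_system_state(joint):
    """Trace out the second (apparatus) qubit of a two-qubit pure state."""
    rho = np.outer(joint, joint.conj()).reshape(2, 2, 2, 2)
    return np.einsum('ikjk->ij', rho)

def purity(rho):
    """Tr(rho^2): 1 for pure states, < 1 for mixed states."""
    return np.trace(rho @ rho).real

# System starts pure; apparatus ready in |0>.
psi_sys = np.array([0.6, 0.8])
before = np.kron(psi_sys, [1, 0])

# Pre-measurement: a unitary that correlates apparatus with system basis states.
CNOT = np.eye(4)[[0, 1, 3, 2]]
after = CNOT @ before

print(purity(reduced_system_state(before)))  # 1.0  (pure)
print(purity(reduced_system_state(after)))   # 0.6**4 + 0.8**4 ~ 0.54  (mixed)
```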
1paragonal
You could also turn this question around. If you find it somewhat plausible that self-adjoint operators represent physical quantities, that eigenvalues represent measurement outcomes, and that eigenvectors represent states associated with these outcomes (per the arguments I have given in my other post), one could picture a situation where systems hop from eigenvector to eigenvector through time. From this point of view, continuous evolution between states is the strange thing. The paper by Hardy which I cited in another answer to you tries to make QM as similar to a classical probabilistic framework as possible, and the sole difference between his two frameworks is that there are continuous transformations between states in the quantum case. (But notice that he works in a finite-dimensional setting, which doesn't easily accommodate important features of QM like the canonical commutation relations.)
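A small sketch of the distinction Hardy draws (my own illustration, not taken from his paper): for a classical bit, the only reversible maps are the identity and the flip, while a qubit admits a continuous path of reversible transformations between the two basis states.

```python
import numpy as np

def U(theta):
    """A continuous family of unitaries rotating |0> toward |1>."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

ket0 = np.array([1.0, 0.0])
for theta in np.linspace(0, np.pi / 2, 5):
    state = U(theta) @ ket0
    print(np.round(state, 3))   # smoothly interpolates from |0> to |1>
```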
1mwacksen
Well yeah, sure. But continuity is a much easier pill to swallow than "continuity only when you aren't looking".
3paragonal
This and this don't sound correct to me. The basis in which the diagonalization happens isn't put in at the beginning. It is determined by the nature of the interaction between the system and its environment. See "environment-induced superselection", or "einselection" for short.
1mwacksen
Ok, but the OP of the post above starts with "Suppose we have a system S with eigenfunctions {φᵢ}", so I don't see why (or how) they should depend on the observer. I'm not claiming these are just arbitrary functions. The point is that requiring the time-evolution on pure states of the form ψ⊗φᵢ to map to pure states of the same kind is an arbitrary choice that distinguishes the eigenfunctions. Why can't we choose any other orthonormal basis at this point, say some ONB (wᵢ)ᵢ, and require that E_S maps wᵢ⊗ψ ↦ wᵢ⊗ψᵢ, where ψᵢ is defined so that this makes sense and is unitary? (I guess this is what you mean by "diagonalization", but I dislike the term because if we choose a non-eigenfunction orthonormal basis the construction still "works", the representation just won't be diagonal in the first component.)
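A toy numpy sketch of the point under dispute (my own illustration): the "copying" construction is unitary for *any* orthonormal basis of the system, so nothing in the bare formalism singles one out; in the einselection picture, which basis actually gets recorded is fixed by the interaction Hamiltonian.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])   # flips the apparatus qubit

def measure_in_basis(w0, w1):
    """Unitary that copies 'which w_i?' into the apparatus:
    w_i (x) |0> -> w_i (x) |i>. Works for *any* orthonormal pair."""
    P0, P1 = np.outer(w0, w0.conj()), np.outer(w1, w1.conj())
    return np.kron(P0, I2) + np.kron(P1, X)

# Computational basis...
U_z = measure_in_basis(np.array([1, 0]), np.array([0, 1]))
# ...or a rotated basis: the construction is unitary either way.
w0 = np.array([1, 1]) / np.sqrt(2)
w1 = np.array([1, -1]) / np.sqrt(2)
U_x = measure_in_basis(w0, w1)
for U in (U_z, U_x):
    assert np.allclose(U @ U.conj().T, np.eye(4))

# Each U copies its own "pointer basis": w0 stays a product state under U_x
# but becomes entangled under U_z.
print(U_x @ np.kron(w0, [1, 0]))   # still w0 (x) |0>
print(U_z @ np.kron(w0, [1, 0]))   # entangled: (|00> + |11>)/sqrt(2)
```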

The confusion on the topic of interpretations comes from the failure to answer the question: what is an "interpretation" (or, more generally, a "theory of physics") even supposed to be? What is its type signature, and what makes it true or false?

Imagine a robot with a camera and a manipulator, whose AI is a powerful reinforcement learner, with a reward function that counts the amount of blue seen in the camera. The AI works by looking for models that are good at predicting observations, and using those models to make plans for maximizing blue.

Now our AI has discovered quantum mechanics. What does it mean? What kind of model would it construct? Well, the Copenhagen interpretation does a perfectly good job. The wave function evolves via the Schrödinger equation, and every camera frame there is a collapse. As long as predicting observations is all we need, there's no issue.

It gets more complicated if you want your agent to have a reward function that depends on unobserved parameters (things in the outside world), e.g. the number of paperclips in the universe. In this case Copenhagen is insufficient, because in Copenhagen an observable is undefined when you don't measure it. But MWI also doe...