Eliezer recently posted an essay on "the fallacy of privileging the hypothesis". What it's really about is the fallacy of privileging an arbitrary hypothesis. In the fictional example, a detective proposes that the investigation of an unsolved murder should begin by investigating whether a particular, randomly chosen citizen was in fact the murderer. Towards the end, this is likened to the presumption that one particular religion, rather than any of the other existing or even merely possible religions, is especially worth investigating.

However, in between the fictional and the supernatural illustrations of the fallacy, we have something more empirical: quantum mechanics. Eliezer writes, as he has previously, that the many-worlds interpretation is the one - the rationally favored interpretation, the picture of reality which rationally should be adopted given the empirical success of quantum theory. Eliezer has said this before, and I have argued against it before, back when this site was just part of a blog. This site is about rationality, not physics; and the quantum case is not essential to the exposition of this fallacy. But given the regularity with which many-worlds metaphysics shows up in discussion here, perhaps it is worth presenting a case for the opposition.

We can do this the easy way, or the hard way. The easy way is to argue that many-worlds is merely not favored, because we are nowhere near being able to locate our hypotheses in a way which permits a clean-cut judgment about their relative merits. The available hypotheses about the reality beneath quantum appearances are one and all unfinished muddles, and we should let their advocates get on with turning them into exact hypotheses without picking favorites first. (That is, if their advocates can be bothered turning them into exact hypotheses.)

The hard way is to argue that many-worlds is actually disfavored - that we can already say it is unlikely to be true. But let's take the easy path first, and see how things stand at the end.

The two examples of favoring an arbitrary hypothesis with which we have been provided - the murder investigation, the rivalry of religions - both present a situation in which the obvious hypotheses are homogeneous. They all have the form "Citizen X did it" or "Deity Y did it". It is easy to see that for particular values of X and Y, one is making an arbitrary selection from a large set of possibilities. This is not the case in quantum foundations. The well-known interpretations are extremely heterogeneous. There has not been much of an effort made to express them in a common framework - something necessary if we want to apply Occam's razor in the form of theoretical complexity - nor has there been much of an attempt to discern the full "space" of possible theories from which they have been drawn - something necessary if we really do wish to avoid privileging the hypotheses we happen to have. Part of the reason is, again, that many of the known options are somewhat underdeveloped as exact theories. They subsist partly on rhetoric and handwaving; they are mathematical vaporware. And it's hard to benchmark vaporware.

In his latest article, Eliezer presents the following argument:

"... there [is] no concrete evidence whatsoever that favors a collapse postulate or single-world quantum mechanics.  But, said Scott, we might encounter future evidence in favor of single-world quantum mechanics, and many-worlds still has the open question of the Born probabilities... There must be a trillion better ways to answer the Born question without adding a collapse postulate..."

The basic wrong assumption being made is that quantum superposition by default equals multiplicity - that because the wavefunction in the double-slit experiment has two branches, one for each slit, there must be two of something there - and that a single-world interpretation has to add an extra postulate to this picture, such as a collapse process which removes one branch. But superposition-as-multiplicity really is just another hypothesis. When you use ordinary probabilities, you are not rationally obligated to believe that every outcome exists somewhere; and an electron wavefunction really may be describing a single object in a single state, rather than a multiplicity of them.

A quantum amplitude, being a complex number, is not an ordinary probability; it is, instead, a mysterious quantity from which usable probabilities are derived. Many-worlds says, "Let's view these amplitudes as realities, and try to derive the probabilities from them." But you can go the other way, and say, "Let's view these amplitudes as derived from the probabilities of a more fundamental theory." Mathematical results like Bell's theorem show that this will require a little imagination - you won't be able to derive quantum mechanics as an approximation to a 19th-century type of physics. But we have the imagination; we just need to use it in a disciplined way.
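To make the contrast concrete, here is a minimal sketch (in Python, purely illustrative) of why amplitudes are not ordinary probabilities: amplitudes for alternative paths add *before* being squared, so they can cancel, which ordinary probabilities never do.

```python
import cmath

# Two paths (slits) to the same detector point. Each contributes a
# complex amplitude whose phase depends on the path length.
a1 = cmath.exp(1j * 0.0) / 2**0.5        # amplitude via slit 1
a2 = cmath.exp(1j * cmath.pi) / 2**0.5   # amplitude via slit 2, half a wavelength longer

# Quantum rule: add amplitudes first, then square to get a probability.
p_quantum = abs(a1 + a2) ** 2            # destructive interference -> ~0

# Classical rule: each path has its own probability; probabilities add.
p_classical = abs(a1) ** 2 + abs(a2) ** 2  # -> 1

print(p_quantum, p_classical)
```

The single-world question is whether this amplitude arithmetic is a picture of coexisting realities or a derived formalism sitting on top of something more ordinary.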

So that's the kernel of the argument that many worlds is not favored: the hypotheses under consideration are still too much of a mess to even be commensurable, and the informal argument for many worlds, quoted above, simply presupposes a multiplicity interpretation of quantum superposition. How about the argument that many worlds is actually disfavored? That would become a genuinely technical discussion, and when pressed, I would ultimately not insist upon it. We don't know enough about the theory-space yet. Single-world thinking looks more fruitful to me, when it comes to sub-quantum theory-building, but there are versions of many-worlds which I do occasionally like to think about. So the verdict for now has to be: not proven; and meanwhile, let a hundred schools of thought contend.

The remaining uncertainty in QM is about which slower-than-light, differentiable, configuration-space-local, CPT-symmetric, deterministic, linear, unitary physics will explain the Born probabilities, possibly in combination with some yet-unrealized anthropic truths - and combine with general relativity, and perhaps explain other experimental results not yet encountered.

The uncertainty within this space does not slop over into uncertainty over whether single-world QM - that is, FTL, discontinuous, nonlocal, CPT-asymmetric, acausal, nonlinear, nonunitary QM - is correct. Just because this was a historical mistake is no reason to privilege the hypothesis in our thought processes. It's dead and should never have been alive, and uncertainty within the unmagical versions of QM won't bring the magic back. You don't get to say "It's not resolved, so probability slops around whatever possibilities I happen to be thinking about, and I happen to be thinking about a single world." This really is the classic theistic tactic for keeping God alive.

In similar wise, any difficulties with natural selection are to be resolved within the space of naturalistic and genetically endogenous forces. None of that uncertainty slops over onto whether Jehovah might have done it, and the possibility shouldn't even be thought about without specific evidence pointing in the specific direction of (a) non-endogenous forces (b) intelligent design (c) supernatural agency and (d) Jehovah as opposed to the FSM.

If there's uncertainty within a space, then you might indeed want to try looking outside it - but looking outside it to Jehovah, or to having only a single quantum world, is privileging the hypothesis.

I present to you "The Logic of Quantum Mechanics Derived from Classical General Relativity" by Mark Hadley. Executive summary: Classical general relativity is the whole truth. Spacelike correlations result from exotic topological microstructure, and the specific formal features of quantum mechanics from the resulting logical structure. It's a completely classical single-world theory; all he has left to do is to "explain the Born probabilities".

Your most important argument seems to be: the micro-world is in superposition; there's no exact boundary between micro and macro; therefore the macro-world is in superposition; but this implies many worlds. However, as I said, this only goes through if you assume from the beginning that an object "in superposition" is actually in more than one state at the same time. If you have some other interpretation of microscopic wavefunctions (e.g. as arising from ordinary probability distributions in some way), the inference from many actual states to many actual worlds never gets started.

That paper turns on the argument of Section 5 (p.5-6) that the Boolean distributive law may not apply. However, I'm having trouble believing the preceding two paragraphs. To begin with, he takes as Axiom 2 the "fact" that a particle has a definite location, something my understanding of QM rejects. Even if we grant him that, he seems to be deriving the failure of distributivity essentially from a rejection of the intersection operation when it comes to statements about states.

Perhaps somebody else can explain that section better, but I remain unconvinced that so sweeping a conclusion as the total foundation of QM on classical principles (including beliefs about the actual existence of some kind of particles) can be derived from what appear to me shaky foundations.

Finally, Mitchell, I would ask: where do you place the boundary between micro-level superposition and macro-level stability? At what point does the magic happen? Or are you just rejecting micro-level superpositions? In that case, how do quantum computers work?

The theories actually used in particle physics can generally be obtained by starting with some classical field theory and then "quantizing" it. You go from something described by straightforward differential equations (the classical theory) to a quantum theory on the configuration space of the classical theory, with uncertainty principle, probability amplitudes, and so forth. There is a formal procedure in which you take the classical differential equations and reinterpret them as "operator equations" that describe relationships between the elements of the Schrödinger equation of the resulting quantum field theory.

Many-worlds, being a theory which says that the universal wavefunction is the fundamental reality, starts with a quantum perspective and then tries to find the observable quasi-classical reality somewhere within it. However, given the fact that the quantum theories we actually use have not just a historical but a logical relationship to corresponding classical theories, you can start at the other end and try to understand quantum theory in basically classical terms, only with something extra added. This is what Hadley is doing. His hypothesis is that the rigmarole of quantization is nothing but the modification to probability theory required when you have a classical field theory coupled to general relativity, because microscopic time-loops ("closed timelike curves") introduce certain constraints on the possible behavior of quantities which are otherwise causally disjoint ("spacelike separated"). To reduce it all to a slogan: Hadley's theory is that quantum mechanics = classical mechanics + loops in time.

There are lots of people out there who want to answer big questions in a simple way. Usually you can see where they go wrong. In Hadley's case I can't, nor has anyone else rebutted the proposal. Superficially it makes sense, but he really needs to exactly re-derive the Schrödinger equation somehow, and maybe he can't do that without a much better understanding (than anyone currently possesses) of "non-orientable 4-manifolds". For (to put it yet another way) he's saying that the Schrödinger equation is the appropriate approximate framework to describe the propagation of particles and fields on such manifolds.

Hadley's theory is one member of a whole class of theories according to which complex numbers show up in quantum theory because you're conditioning on the future as well as on the past. I am not aware of any logical proof that complex-valued probabilities are the appropriate formalism for such a situation. But there is an intriguing formal similarity between quantum field theory in N space dimensions and statistical mechanics in N+1 dimensions. It is as if, when you think about initial and final states of an evolving wavefunction, you should think about events in the intermediate space-time volume as having local classically-probabilistic dependencies both forwards and backwards in time - and these add up to chained dependencies in the space-like direction, as you move infinitesimally forward along one light-cone and then infinitesimally backward along another - and the initial and final wavefunctions are boundary conditions on this chunk of space-time, with two components (real and imaginary) everywhere corresponding to forward-in-time and backward-in-time dependencies.

This sort of idea has haunted physics for decades - it's in "Wheeler-Feynman absorber theory", in Aharonov's time-symmetric quantum mechanics (where you have two state vectors, one evolving forwards and one evolving backwards)... and to date it has neither been vindicated nor debunked, as a possible fundamental explanation of quantum theory.

Turning now to your final questions: perhaps it is a little clearer now that you do not need magic to not have many-worlds at the macro level, you need only have an interpretation of micro-level superposition which does not involve two-things-in-the-one-place. Thus, according to these zigzag-in-time theories, micro-level superposition is a manifestation of a weave of causal/probabilistic dependencies oriented in two time directions, into the past and into the future. Like ordinary probability, it's mere epistemic uncertainty, but in an unusual formalism, and in actuality the quantum object is only ever in one state or the other.

Now let's consider Bohm's theory. How does a quantum computer work according to Bohm? As normally understood, Bohm's theory says you have a universal wavefunction and a classical world, whose evolution is guided by that wavefunction. So a Bohmian quantum computer works because the wavefunction is part of the theory. However, the conceptually interesting reformulation of Bohm's theory is one where the wavefunction is just treated as a law of motion, rather than as a thing in itself. The Bohmian law of motion for the classical world is that it follows the gradient of the complex phase in configuration space. But if you calculate that through, for a particular universal wavefunction, what you get is the classically local potential exhibited by the classical theory from which your quantum theory was mathematically derived, plus an extra nonlocal potential. The point is that Bohmians do not strictly need to posit wavefunctions at all - they can just talk about the form of that nonlocal potential. So, though no-one has done it, there is going to be a neo-Bohmian explanation for how a quantum computer works in which qubits don't actually go into superposition and the nonlocal dynamics somehow (paging Dr Aaronson...) gives you that extra power.
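For reference, the two ingredients mentioned here have standard closed forms. Writing the wavefunction in polar form, the Bohmian law of motion (the phase gradient) and the extra nonlocal "quantum potential" added to the classical one are:

```latex
% Polar form of the wavefunction
\psi = R\,e^{iS/\hbar}
% Guidance equation: the configuration follows the phase gradient
\dot{q} = \frac{\nabla S}{m}
% The extra, nonlocal potential that supplements the classical one
Q = -\frac{\hbar^2}{2m}\,\frac{\nabla^2 R}{R}
```

Substituting the polar form into the Schrödinger equation yields a continuity equation plus a Hamilton-Jacobi equation with $V + Q$ in place of $V$, which is the sense in which the reformulation can trade the wavefunction for a (nonlocal) potential.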

To round this out, I want to say that my personally preferred interpretation is none of the above. I'd prefer something like this so I can have my neo-monads. In a quasi-classical, space-time-based one-world interpretation, like Hadley's theory or neo-Bohmian theory, Hilbert space is not fundamental. But if we're just thinking about what looks promising as a mathematical theory of physics, then I think those options have to be mentioned. And maybe consideration of them will inspire hybrid or intermediate new theories.

I hope this all makes clear that there is a mountain of undigested complexity in the theoretical situation. Experiment has not validated many-worlds, it has validated quantum mechanics, and many worlds is just one interpretation thereof. If the aim is to "think like reality" - the epistemic reality is that we're still thinking it through and do not know which, if any, is correct.

What happens when I measure an entangled particle at A after choosing an orientation, you measure it at B, and we're a light-year apart, moving at different speeds, and each measuring "first" in our frame of reference?

Why do these so-called "probabilities" resolve into probabilities when I measure something, but not when they're just being microscopic? When exactly do they resolve? How do you know?

Why is the wavefunction real enough to run a quantum computer but not real enough to contain intelligences?

These are all questions that must be faced by any attempted single-world theory. Without specific evidence pointing to a single world, they are not only lethal for the single-world theory but lethal for anyone claiming that we have good reason to think about it.

Answering from within a zigzag interpretation:

What happens when I measure an entangled particle at A after choosing an orientation, you measure it at B, and we're a light-year apart, moving at different speeds, and each measuring "first" in our frame of reference?

Something self-consistent. And nothing different from what quantum theory predicts. It's just that there aren't any actual superpositions; only one history actually happens.

Why do these so-called "probabilities" resolve into probabilities when I measure something, but not when they're just being microscopic? When exactly do they resolve? How do you know?

Quantum amplitudes are (by our hypothesis) the appropriate formal framework for when you have causal loops in time. The less physically relevant they are, the more you revert to classical probability theory.

Why is the wavefunction real enough to run a quantum computer but not real enough to contain intelligences?

A quantum computation is a self-consistent standing wave of past-directed and future-directed causal chains. The extra power of quantum computation comes from this self-consistency constraint plus the programmer's ability to set the boundary conditions. A quantum computer's wavefunction evolution is just the ensemble of its possible histories along with a nonclassical probability measure. Intelligences (or anything real) can show up "in a wavefunction" in the sense of featuring in a possible history.

(Note for clarity: I am not specifically advocating a zigzag interpretation. I was just answering in a zigzag persona.)

These are all questions that must be faced by any attempted single-world theory. Without specific evidence pointing to a single world, they are not only lethal for the single-world theory but lethal for anyone claiming that we have good reason to think about it.

Well, we know there's at least one world. What's the evidence that there's more than one? Basically it's the constructive and destructive interference of quantum probabilities (both illustrated in the double-slit experiment). The relative frequencies of the quantum events observed in this world show artefacts of the way that the quantum measure is spread across the many worlds of configuration space. Or something. But single-world explanations of the features of quantum probability do exist - see above.

Something self-consistent. And nothing different from what quantum theory predicts. It's just that there aren't any actual superpositions; only one history actually happens.

Gonna be pretty hard to square that with both Special Relativity and the Markov requirement on Pearl causal graphs (no correlated sources of background uncertainty once you've factored reality using the graph).

I only just noticed this reply. I'm not sure what the relevance of the Markov condition is. You seem to be saying "I have a formalism which does not allow me to reason about loops in time, therefore there shall be no loops in time."

The Markov requirement is a problem for saying, "A does not cause B, B does not cause A, they have no common cause, yet they are correlated." That's what you have to do to claim that no causal influence travels between spacelike separated points under single-world quantum entanglement. You can't give it a consistent causal model.

Consider a single run of a two-photon EPR experiment. Two photons are created in an entangled state, they fly off at light speed in opposite directions, and eventually they each encounter a polarized filter, and are either absorbed or not absorbed. Considered together, their worldlines (from point of creation to point of interaction) form a big V in space-time, with the two upper tips of the V being spacelike separated.

In these zigzag interpretations, you have locally mediated correlations extending down one arm of the V and up the other. The only tricky part is at the bottom of the V. In Mark Hadley, there's a little nonorientable region in spacetime there, which can reverse the temporal orientation of a timelike chain of events with respect to its environment without interrupting the internal sequence of the chain. In John Cramer, each arm of the V is a four-dimensional standing wave (between the atoms of the emitter and the atoms of the detector) containing advanced and retarded components, and it would be the fact that it's the same emitter at the base of two such standing waves which compels the standing waves to be mutually consistent and not just internally consistent. There may be still other ways to work out the details but I think the intuitive picture is straightforward.

Does the A measurement and result happen first, or does the B measurement and result happen first, or does some other thing happen first that is the common cause of both results? If you say "No" to all 3 questions then you have an unexplained correlation. If you say "Yes" to either of the first two questions you have a global space of simultaneity. If you say "Yes" to the third question you're introducing some whole other kind of causality that has no ordinary embedding in the space and time we know, and you shall need to say a bit more about it before I know exactly how much complexity to penalize your theory for.

you're introducing some whole other kind of causality that has no ordinary embedding in the space and time we know

The physics we have is at least formally time-symmetric. It is actually noncommittal as to whether the past causes the present or the future causes the present. But this doesn't cause the problems that these zigzag interpretations face, because timelike orientations are always maintained, and so whichever convention is adopted, it's maintained everywhere.

The situation in a zigzag theory (assuming it can be made to work; I emphasize that I have not seen a Born derivation here either, though Hadley in effect says he's done it) is the same except that timelike orientations can be reversed, "at the bottom of the V". In both cases you have causal chains where either end can be treated as the beginning. In one case the chain is (temporally) I-shaped, in the other case it's V-shaped.

So I'm not sure how to think about it. But maybe best is to view the whole of space-time as "simultaneous", to think of local consistency (perhaps probabilistic) rather than local causality, and to treat the whole thing as a matter of global consistency.

The Novikov self-consistency principle for classical wormhole space-times seems like it might pose similar challenges.

By the way, can't I ask you, as a many-worlder, precisely the same question - does A happen first, or does B happen first?

His questions make no sense to me from a timeless perspective. They seem remarkably unsophisticated for him.

John Cramer and the transactional interpretation are by far the most prominent example. Wheeler-Feynman absorber theory was the historical precursor; also see "Feynman checkerboard". Mark Hadley I mentioned. Aharonov-Vaidman for the "two state vector" version of QM, which is in the same territory. Costa de Beauregard was another physicist with ideas in this direction.

This paper is just one place where a potentially significant fact is mentioned, namely that quantum field theory with an imaginary time coordinate (also called "Euclidean field theory" because the metric thereby becomes Euclidean rather than Riemannian) resembles the statistical mechanics of a classical field theory in one higher dimension. See the remark about how "the quantum mechanical amplitude" takes on "the form of a Boltzmann probability weight". A number of calculations in quantum field theory and quantum gravity actually use Euclideanized metrics, but just because the integrals are easier to solve there; then you do an analytic continuation back to Minkowski space and real-valued time. The holy grail for this interpretation, as far as I am concerned, would be to start with Boltzmann and derive quantum amplitudes, because it would mean that you really had justified quantum mechanics as an odd specialization of standard probability theory. But this hasn't been done and perhaps it can't be done.
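The correspondence alluded to here can be stated in one line. Under a Wick rotation to imaginary time, the quantum path-integral weight formally turns into a Boltzmann weight:

```latex
% Wick rotation: quantum amplitude -> statistical weight
Z_{\mathrm{QFT}} = \int \mathcal{D}\phi\; e^{iS[\phi]/\hbar}
\;\xrightarrow{\;t \,\to\, -i\tau\;}\;
Z_{\mathrm{E}} = \int \mathcal{D}\phi\; e^{-S_E[\phi]/\hbar}
\qquad \text{(compare } Z = \textstyle\sum_n e^{-\beta E_n}\text{)}
```

The open question described above is whether one can run this correspondence in the other direction: start from the Boltzmann side and *derive* the complex amplitudes, rather than merely analytically continue between them.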

Euclidean rather than Riemannian

I think that you mean Euclidean rather than Minkowskian. Euclidean vs Riemannian has to do with whether spacetime is curved (Euclidean no, Riemannian yes), while Euclidean vs Minkowskian has to do with whether the metric treats the time coordinate differently (Euclidean no, Minkowskian yes). (And then the spacetime of classical general relativity, which answers both questions yes, is Lorentzian.)

The remaining uncertainty in QM is about which slower-than-light, differentiable, configuration-space-local, CPT-symmetric, deterministic, linear, unitary physics will explain the Born probabilities

Why on Earth must real physics play nice with the conceptual way in which it was mathematized at the current level of detail and areas of applicability? It can easily be completely different, with for example "differentiable" or "linear" ceasing to make sense for a new framework. Math is prone to live on patterns, ignoring the nature of underlying detail.

One of these elegances could be wrong. But all of them? In exactly the right way to restore a single world? It's not worth thinking about, at this stage.

From a purely theoretic or philosophical point of view, I'd agree.

However, physical theories are mostly used to make predictions.

Even if you are a firm believer in MWI, in 99% of the practical cases, whenever you use QM, you will use state reductions to make predictions.

Now you have an interesting situation: you have two formalisms. One is butt-ugly but usable; the other is nice and general, but not very helpful. Additionally, the two formalisms are mostly equivalent mathematically, at least as far as making verifiable predictions is concerned.

Additionally there are these pesky probabilities, which the nice formalism may account for automatically, but that's still unclear. These probabilities are essential to every practical use of the theory. So from a practical point of view, they are not just a nuisance; they are essential.

If you assess this situation with a purely positivist mind-set, you could ask: "What additional benefits does the elegant formalism give me besides being elegant?"

Now, I don't want to say that MWI does not have a clear and definite theoretical edge, but it would be quite hypocritical to throw out the usable formalism while it is still unclear how to make the new one at least as predictive as the old one.

Even if you are a firm believer in MWI, in 99% of the practical cases, whenever you use QM, you will use state reductions to make predictions.

How does using a state reduction imply thinking about a single-world theory, rather than just a restriction to one of the branches to see what happens there?

You do the exact same calculations with either formalism.

Try to formally derive any quantitative prediction based on both formalisms.

The problem with the MWI formalism is that there is one small missing piece, and that one stupid little piece seems to be crucial for making any quantitative predictions.

The problem here is a bit of hypocrisy: Theoretically, you prefer MWI, but whenever you have to make a calculation, you go to the closet and use old-fashioned ad hoc state reduction.

Because of decoherence and the linearity of the Schrödinger equation, you can get a very good approximation to the behavior of the wavefunction over a certain set of configurations by 'starting it off' as a very localized mass around some configuration (if you're a physicist, you just say "what the hell, let's use a Dirac delta and make our calculations easier"). This nifty approximation trick, no more and no less, is the operation of 'state reduction'. If using such a trick implies that all physicists are closet single-world believers, then it seems astronomers must secretly believe that planets are point masses.

I don't really see that doing a trick like that really buys you the Born rule. Any reference to back your statement?

Douglas is right: the crux of the matter seems to be the description of the measurement process. There have been recent attempts to resolve that, but so far they are not very convincing.

Forgot about this post for a while; my apologies.

Douglas is right: the crux of the matter seems to be the description of the measurement process.

The trick, as described in On Being Decoherent, is that if you have a sensor whose action is entropically irreversible, then the parts of the wavefunction supported on configurations with different sensor readings will no longer interfere with each other. The upshot of this is that, as the result of a perfectly sensible process within the same physics, you can treat any sensitive detector (including your brain) as if it were a black-box decoherence generator. This results in doing the same calculations you'd do from a collapse interpretation of measurement, and turns the "measurement problem" into a very good approximation technique (to a world where everything obeys the same fundamental physics) rather than a special additional physics process.
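The mechanism described here can be seen in a toy calculation. Below is a small NumPy sketch (illustrative, not from the original discussion): a qubit in superposition keeps its interference terms, but once a two-state "sensor" records which branch it is in, tracing out the sensor leaves a density matrix with the off-diagonal terms gone, exactly as if a collapse had occurred.

```python
import numpy as np

# System qubit in superposition (|0> + |1>)/sqrt(2).
plus = np.array([1.0, 1.0]) / np.sqrt(2)

# Case 1: no measurement. The density matrix has off-diagonal
# (interference) terms of 0.5.
rho_free = np.outer(plus, plus.conj())

# Case 2: a sensor records the qubit, giving the entangled state
# (|0>|e0> + |1>|e1>)/sqrt(2) with orthogonal sensor states e0, e1.
e0 = np.array([1.0, 0.0])
e1 = np.array([0.0, 1.0])
joint = (np.kron(np.array([1.0, 0.0]), e0)
         + np.kron(np.array([0.0, 1.0]), e1)) / np.sqrt(2)

# Partial trace over the sensor: reshape to (s, e, s', e') and
# sum the diagonal in the sensor indices.
rho_joint = np.outer(joint, joint.conj()).reshape(2, 2, 2, 2)
rho_sys = np.trace(rho_joint, axis1=1, axis2=3)

print(rho_free)  # off-diagonals 0.5: the branches can still interfere
print(rho_sys)   # off-diagonals 0: same numbers a collapse rule would give
```

No extra physics was invoked: the "collapse" is just what the unitary description looks like from inside one branch once a record exists.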

That explains the decoherence as a phenomenon (which I never doubted), but does not explain the subjectively perceived probability values as a function of the wave function.

Ah. On that front, as a mathematician, I'm more than willing to extend my intuitions about discrete numbers of copies to intuitions about continuous measures over sets of configurations. I think it's a bit misleading, intuition-wise, to think about "what I will experience in the future", given that my only evidence is in terms of the state of my current brain and its reflection of past states of the universe.

That is, I believe that I am a "typical" instance of someone who was me 1 year prior, and in that year I've observed events with frequencies matching the Born statistics. To explain this, it's necessary and sufficient for the universe to assign measure to configurations in the way the Schrödinger equation does (neglecting the fact that some different equation is necessary in order to incorporate gravity), resulting in a "typical" observer recalling a history which corresponds to the Born probabilities.

The only sense in which the Born probabilities present me with a quandary is that the universe prefers the L^2 norm to the L^1 norm; but given the Schrödinger equation, that seems natural enough for mathematical reasons.
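The "mathematical reasons" gestured at can be made concrete: Schrödinger evolution is unitary, and a unitary map preserves the L^2 norm of the amplitude vector but not its L^1 norm, so L^2 is the only candidate measure that stays normalized over time. A small numerical sketch (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random state vector of complex amplitudes, normalized in the L^2 sense.
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)                 # now sum |a_i|^2 == 1

# A random unitary matrix (Q factor of a random complex matrix).
q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

phi = q @ psi                              # one step of unitary evolution

l2_before, l2_after = np.sum(np.abs(psi) ** 2), np.sum(np.abs(phi) ** 2)
l1_before, l1_after = np.sum(np.abs(psi)), np.sum(np.abs(phi))

print(l2_before, l2_after)  # both 1: unitarity preserves the L^2 norm
print(l1_before, l1_after)  # generally different: L^1 is not conserved
```

This doesn't by itself derive the Born rule, but it shows why, given the Schrödinger equation, the L^2 measure is the natural one to be conserved.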

I think we are starting to walk in circles. You simply seem to declare your faith(?) that the universe is somehow forced to use this specific quantitative rule, while at the same time admitting that you find it strange that it is one norm and not another (also ad hoc) one.

I don't see how this contradicts the grand-grand-...parent post http://lesswrong.com/lw/19s/why_manyworlds_is_not_the_rationally_favored/151w .

I don't disagree with your general sentiment, but it would be far-fetched to say the problem is solved. It is not (to my best knowledge) and no declaration of faith changes that until a precise mathematical model is presented giving gap-free, quantitative derivations of the experimental results.

However, I would be delighted to chat with you a bit IRL if you still happen to live in Berkeley. I am also a mathematician living in Berkeley and I guess it could be fun to share some thoughts over a beer or at a cafe. Drop me a PM if you are interested.

I think the most charitable interpretation of CS is that if you want to make an actual observation in many worlds, you have to model your measurement apparatus, while if you believe in collapse, then measurement is a primitive of the theory.

Maybe I misunderstand you and this is a non sequitur, but the point is to apply decoherence after the measurement, not (just) before.

Many worlds are there at the level of quantum mechanics, and there is a single world at the level of classical mechanics, both views being correct within their respective frameworks for describing reality. The world-counting is how human intuition reads the math, not obviously something inherent in reality (unless there is a better understanding of what "inherent in reality" should mean). The picture that is right for a deeper level can be completely different once again.

Another, more important question is how morally relevant these conceptions of reality are, but I don't know in what way to trust my intuition about the morality of the concepts it's using for interpreting math. So far, MWI looks to me morally indistinguishable from epistemic uncertainty, and so the many worlds of QM are no more real than the single world of classical mechanics. The many-worldness of QM might well be due more to the properties of the math than to the "character of reality", whatever that should mean.

The fact that quantum mechanics sits deeper in physics places it further away from human experience and from human morality, and so makes it harder to evaluate adequately by intuition. The measure of reality lies in human preference, not in the turtles of physics. The exploration of physics starts from human plans, and the fact that humans are made of the stuff doesn't give it more status than a distant star -- it's just a substrate.

If MWI is simpler than nonMWI, then by Solomonoffish reasoning it's more likely that TOE reduces to observed reality via MWI than that it reduces to observed reality via nonMWI, correct? I agree all these properties that Eliezer mentions are helpful only as a proxy for simplicity, and I'm not sure they're all independent arguments for MWI's relative simplicity, but it seems extremely hard to argue that MWI isn't in fact simpler given all these properties.

I don't assume that reality has a bottom, but in the human realm it has a beginning, and that's human experience. What we know we learn from experiments; we observe more and more about the bigger system, and this process is probably not going to end, even in principle. What is there to judge this process other than us?

If, for example, in the prior/utility framework, the prior is just one half of preference, that alone demonstrates the dependence of the notion of "degree of reality" for concepts on human morality, in its technical sense. While I'm not convinced that prior/utility is the right framework for human preference, the case in point stands.

P.S. Just to be sure, I'm not arguing for one-world QM, I'm comparing many-world QM to one-world classical mechanics.

If reality is finitely complex, how does it get to have no bottom?

P.S. Just to be sure, I'm not arguing for one-world QM, I'm comparing many-world QM to one-world classical mechanics.

I don't understand. Surely things like the double-slit experiment have some explanation, and that explanation is some kind of QM, and we're forced to compare these different kinds of QM.

If reality is finitely complex, how does it get to have no bottom?

What does it mean for reality to be finitely complex? At some point you would not just need to become able to predict everything, you would need to become sure of your predictions, and that I consider an incorrect thing to do at any point. Therefore, the complexity of reality, as people perceive it, is never going to run out (I'm not sure, but it looks this way).

Surely things like the double-slit experiment have some explanation, and that explanation is some kind of QM, and we're forced to compare these different kinds of QM.

Quantum mechanics is valid predictive math. The extent to which interpreting this math in terms of human intuitions about worlds is adequate is tricky. For example, it's hard to intuitively tell the difference between another person in the same world and another person described by a different MWI world: should these patterns be of equal moral worth? How should we know, and how can we trust intuition on this, without a technical understanding of morality? Intuitions break down even for our almost-ancestral-environment situations.

Vladimir_Nesov's post is regarding where we should look for morally-relevant conceptions of reality. He is advocating building out our morality starting from human-scale physics, which is well-approximated by one-world classical mechanics.

single-world QM - that is, FTL, discontinuous, nonlocal, CPT-asymmetric, acausal, nonlinear, nonunitary QM

This is a perfect illustration of Mitchell Porter's point. This is not, in fact, what single-world QM is. This is, more or less, what the Copenhagen interpretation is. Given the plethora of interpretations available, the dichotomy between that and MWI is a false one.

If you're a reductionist about things like objects and minds -- if you believe it's enough that there are patterns -- then you can find such patterns in quantum superpositions without further assumptions. You may not be such a reductionist, but most of us are.

More Wallace linkage: http://users.ox.ac.uk/~mert0130/papers/proc_dec.pdf

So you say many-worlds isn't rationally favored over other interpretations because those other interpretations haven't been stated clearly enough? I'm pretty sure I could argue against evolution or gravity in the same manner.

If mere clarity were the issue, then Bohmian mechanics would be #1, spontaneous collapse theories would be #2, and many-worlds and the "zigzag in time" approach would be tied for third place.

The reason for this ranking is that Bohmian mechanics and collapse theories actually have equations of motion which allow you to make the correct predictions. But the collapse theories come off as slightly inferior because there is no principle constraining the form of collapse dynamics.

Zigzag-in-time refers to John Cramer's transactional interpretation and Mark Hadley's QM-from-gravity approach (mentioned above). They're in third place with many-worlds because they cannot presently make predictions.

But the situation is way more complex than this summary suggests. You can have Bohmian mechanics without a pilot wave (the "nomological" version of Bohm), you can have a collapse theory without superpositions (you just quantum jump from one "collapse" to the next), you can have many-worlds without a universal wavefunction (just use the world-probabilities in a "consistent histories" ensemble). Like I said, the known options have been expressed in a babel of theoretical frameworks, and anything resembling objective comparison has hardly begun. The human race is still thinking this through.

Thanks. I'd really like to see a post explaining the different interpretations in detail.

This article makes frequent references to Eliezer's arguments against the quantum collapse postulate, including those harking back to OvercomingBias days. Yet I find no reference anywhere in the article to 'faster than light' or, in fact, to any of the critical elements of Eliezer's claim.

Mitchell's argument is founded on the principle behind "the fallacy of privileging the hypothesis". This is only made relevant by truncating all references to the evidence which supports the hypothesis. Deny the evidence if you will, but even if you do, the problem with the argument would be 'the reasoning behind the premises is bogus'. Privileging the hypothesis doesn't even remotely apply.

I don't want to defend the Copenhagen interpretation; still, I'd point out that Eliezer's arguments are purely aesthetic rather than rational.

E.g., faster-than-light exchange may be required for the state-collapse view, but it will always happen in a restricted way that does not allow for real faster-than-light communication or violation of causality. It may be ugly for you, but that does not mean it makes any difference mathematically.

If there were a single objective mathematical problem with the Copenhagen interpretation that really required MWI, then MWI would be undisputed by physicists by now (rather than just favored, as is currently the case).

However, Eliezer (or rather Everett) has a strong philosophical case: So far in the history of science, more beautiful theories tended to be more correct as well.

This comment is moving us in the right direction. From an epistemic standpoint, all a theory is is a function from one set of observation sentences (propositions about our own sensory experiences) to another set of observation sentences. This debate is merely about the best way to describe the function labeled "quantum mechanics", which takes observations about certain experimental circumstances as its domain and observations about certain experimental outcomes as its range. There are very likely an infinite number of cognitive expressions of this function.

Our problem is that there is no consensus on a Method of Theoretical Interpretation. It is relatively easy to pick the better of two theories when we have inductive evidence distinguishing them. But there is a lot of confusion about how to choose between functionally identical expressions of a theory. We have a number of candidates for criteria but those criteria have yet to be satisfactorily explicated and the relations between the criteria and the relative importance of each remain wholly undefined. Parsimony, Generality, Verifiability, Falsifiability, "cognitive intuitivity" (a human's ability to grasp the theory), pragmatic usability etc. are all things various parties want taken into account.

In some debates over theory interpretation one interpretation might win according to all these criteria and the debate will end. But when the outcome is less straightforward it isn't clear to me what the best way forward is. The argument in favor of MWI seems to be that it is better than single-world interpretations on grounds of parsimony and generality. This seems right to me, though I think this conclusion depends on a particular understanding of these criteria which might not be universally agreed upon (i.e. why doesn't positing the existence of lots of worlds we can't have causal connections to count as multiplying entities?). On the other hand, it still might be the case that something like the Copenhagen interpretation is easier to comprehend and yields more fruitful theorizing. Until a particular interpretation has been determined to be definitively better than the rest, or until a General Method of Theoretical Interpretation is agreed to, the best option seems to be interpretive pluralism.

However, Eliezer (or rather Everett) has a strong philosophical case: So far in the history of science, more beautiful theories tended to be more correct as well.

If this is true, why doesn't this count as straight-forward induction? It certainly looks like induction and if it is, why is this a philosophical rather than scientific case? Also, if we think CI and MWI make the same predictions what does it mean to say "tended to be more correct"? Doesn't that require experimental evidence falsifying one of the interpretations at a later date?

If this is true, why doesn't this count as straight-forward induction? It certainly looks like induction and if it is, why is this a philosophical rather than scientific case? Also, if we think CI and MWI make the same predictions what does it mean to say "tended to be more correct"? Doesn't that require experimental evidence falsifying one of the interpretations at a later date?

It is not scientific induction, since you can't measure elegance quantitatively. However, scientists have subjective intuitions based on the successes and failures of other past physical theories. This is what I meant by "philosophical edge".

It is not scientific induction, since you can't measure elegance quantitatively.

You can, formally, via Kolmogorov complexity.
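
Kolmogorov complexity itself is uncomputable, but it can be bounded in practice; a common computable stand-in is compressed length. A minimal sketch (my own, purely illustrative): a law-like description compresses far better than a patchwork of independent special cases.

```python
import random
import zlib

def complexity_proxy(s: str) -> int:
    # Compressed length is a computable upper-bound stand-in for
    # Kolmogorov complexity (which is uncomputable in general).
    return len(zlib.compress(s.encode()))

random.seed(0)
lawlike = "F=ma;" * 200  # one short rule applied everywhere (1000 chars)
patchwork = "".join(random.choice("abcdef") for _ in range(1000))  # 1000 chars of special cases

# The regular description compresses to a small fraction of its length;
# the patchwork of special cases barely compresses at all.
print(complexity_proxy(lawlike) < complexity_proxy(patchwork))  # True
```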

Doesn't elegance reduce to how elegant scientists feel the theory is? Can't we quantify the opinions of scientists regarding how elegant some theory is? Or if elegance isn't reducible that way, then isn't the correlation between correctness and elegance really a correlation between correctness and perceived elegance anyway?

What do you mean by subjective intuition? Are you distinguishing it from objective intuition?

I feel like I'm coming off like that jackass Socrates, but everyone seems to be taking loaded, technical terms for granted and applying them sloppily.

What do you mean by subjective intuition?

Sorry, I just tried to emphasize the subjective nature of intuition.

You can quantify the opinions of scientists to measure elegance, but I don't think it's a good idea: It would just further enforce groupthink at the expense of originality, IMO.

So far in the history of science, more beautiful theories tended to be more correct as well.

Theories are in large part about insight; they are tools for looking at the world, and simplicity is a major factor in their usability. What gets selected for usability doesn't necessarily give a good picture of truth.

Come on, you don't seriously believe that in physics simplicity always wins over correctness?

If you look at physicists, they work in both directions:

  • Make better approximations
  • Develop more complete theories

And physicists clearly distinguish between the above two.

Would you seriously believe that, e.g., Kepler's model of planetary orbits survived rather than Ptolemy's just because it was simpler, rather than because it was closer to the truth?

Nothing wins over (the necessary extent of) correctness, but what wins within correctness is the simple: not because simplicity is necessary for correctness, but because simpler theories are easier to work with (and often easier to find, too).

There is a good rational reason why simpler theories are more probably true: they are less likely to have been tuned to the already existing evidence.

For example: even if Ptolemy's circles made predictions that were equally accurate within that era's achievable precision of measurement, those circles were tailored to that specific situation. Even if Kepler's laws were quite ad hoc, their simplicity could indicate that they had more substance, since they were not tuned to the given evidence in such a cumbersome way.

Could you give an example of something simple that won over something equally correct, because it was easier to work with or find?

Special relativity, winning over the ad hoc rules of time dilation and length/mass transformations that were known beforehand (and essentially predicted the same thing).

Special relativity is an example of an equally correct theory winning over an earlier, somewhat entrenched theory, but I'm not sure how it won. When it first appeared, many people declared it obvious, some claiming that this was good, some bad. It was still unpopular when Einstein won the Nobel prize. One possibility is that it won because of his eminence, e.g., because of the photo-electric effect, which is an extremely poor reason. The obvious answer is that it won because of GR. I guess that probably constitutes an example of winning because of usability.

I'm a little concerned about how we draw lines between theories, but I suppose that would apply to any answer to the question.

Saying that relativity is "the best theory" is not very different from saying that it won. Stuart says that it won because it was simpler than Lorentz contractions. It was not widely believed to be the best theory in 1915. What happened between then and now? Was it obviously better and the old guard just had to die? Or did something else that happened, like the Nobel or GR change people's minds?

I'm not sure that Lorentz's transformations were more ad hoc than Einstein's, though Minkowski's were a definite improvement. If Einstein's principle led to Minkowski's work, that's good, and meets Vladimir's usability criterion; and probably counts as simplicity.

Lorentz contractions are special relativity. My understanding was that Einstein's great role was unifying the various ad hoc results and putting them under one roof.

While I fundamentally disagree with your claims I don't object to you making them. I do note that the validity of Eliezer's argument is not something that I'm claiming here. There are plenty of other comments (including others of mine) where this would be a more relevant reply.

My reply was mostly triggered by this sentence:

Yet I find no reference anywhere in the article to 'faster than light' or, in fact, any of the critical elements to Eliezer's claim.

However, I'd be really curious which specific claims you don't agree with.

I'd be really curious which specific claims you don't agree with.

This is one of those times where 'agreeing to disagree' would save some frustration, but here is a list.

Eliezer's arguments are purely aesthetic rather than rational.

If Eliezer's claims are wrong, his position is irrational, not merely an aesthetic preference. The "MW is more aesthetic" position is a common one (as well as a politically appealing one), but Eliezer has made arguments that are quite clearly not aesthetic in nature.

E.g. faster than light exchange may be required for state-collapse view, but it will always happen in a restricted way that does not allow for real faster than light communication or violation of causality. It may be ugly for you, but it does not mean it makes any difference mathematically.

Is that what the dragon in your garage told you?

If there would be a single objective mathematical problem with the Copenhagen interpretation that really requires MWI, then MWI would be undisputed by physicists by now (rather than just favored, as is the case now).

I'd be surprised. I'd expect to have to wait till a generation (at least) died off or retired for that to occur on something that so violates entrenched intuitions. Even more so once a teaching tradition forms.

I also disagree with the embedded claim supported by the appeal to authority. I suspect our disagreement there could be traced to what we each consider to qualify as 'objective'.

Thanks for the reply. I found it much more interesting than frustrating.

I also have to admit that I generally tend to believe scientific authority on scientific matters, at least in mathematics and natural sciences. Could be a defect of mine.

OTOH, in my reading, Eliezer never argued that there is a clear mathematical flaw in the classical theory of QM (besides the ugly and ad hoc nature of state reduction, which still does not make the theory mathematically unsound).

I also have to admit that I generally tend to believe scientific authority on scientific matters, at least in mathematics and natural sciences. Could be a defect of mine.

No implication of fallacious appeal intended. Just a reference to the claim that you didn't literally make.

I also rely on scientific expertise in scientific matters but have a different prediction on what it would take for new information on significant topics to become undisputed. It is possible that we also select scientific authorities in different manner. I tend to actively discount for the contributions of social dominance to scientific authority when I'm selecting expert opinions where there is disagreement.

OTOH, In my reading, Eliezer never argued that there is a clear mathematical flaw in the classical theory of QM. (besides the ugly and ad hoc nature of the state reduction, which still does not make the theory mathematically unsound).

I like the idea of de-emphasising distracting labels such as 'Many Worlds' and just sticking with the math and calling it QM. There are the (Born, etc.) equations behind quantum mechanics with which we can make our predictions and that's that.

I assert that adding a claim such as 'most of the information in the function is removed in a way that allows the math to still work' is an objective scientific mistake that is not merely aesthetic. I think you disagree with me there. Similar reasoning would also claim that including a mathematically irrelevant garage dragon in a theory makes it objectively unsound science. Likewise for 'there are gazillions of fairies who hack the quantum state constantly to make it follow Born predictions'.

I assert that adding a claim such as 'most of the information in the function is removed in a way that allows the math to still work' is an objective scientific mistake that is not merely aesthetic. I think you disagree with me there.

My positivist personality disagrees, my Platonic personality agrees with you.

I would even go as far as saying that the ad hoc state reduction performed at seemingly arbitrary points is clearly a technical (not just philosophical) defect of the classical view.

On the other hand, the incompleteness of the MW description (not accounting for the Born probabilities) is an even more serious practical issue (for the time being): it does not allow us to make any quantitative predictions. If we inject the Born "fairies" back into the theory, then we will arrive at the same problem as in the classical formalism.

So I'd agree to some extent with the OP that the most probable future resolution of the problem will be some brand-new, even more elegant math which will be more satisfactory than either of the above two options.

More detail on just how those Born probabilities work is the area of physics I would most like answers on. It could greatly clarify the foundations of my utility function!

(PS: Downvote of parent not by me.)

Many worlds is favored. It is what you get if you just apply the same laws of physics which correctly describe the observed behaviors of microscopic systems on a large scale, without postulating any additional laws of physics which are not suggested by the evidence. If you model a measuring device as a system of particles, then measuring a particle in superposition puts that device into superposition, and if you model a human observer as a system of particles, then observing the results of that measurement on the device in superposition puts the human in superposition. Supposing additional physics - that the behavior is different when the number of particles gets large, or, worse, when they are in a special configuration called a "mind" - makes the theory more complicated for no reason. Supposing that quantum physics for small systems can be derived from a simpler theory which predicts something different in large systems, without actually presenting such a theory, is just an appeal to logical ignorance. And if we had such a theory, we would, at least in principle, know what experiment would distinguish it from many worlds.

on a large scale

Which is to say, MWI is what you get if you assume there is a universal state without an observer to observe the state or fix the basis. As it happens, it is possible to reject a Universal State AND real collapse.

You are arguing against a strawman. Many-worlds is contended to be only conceptually correct, in the same way the classical illusions of our billiard-ball world are conceptually correct. Obviously, quantum mechanics is technically imprecise, and there is likely another conceptual picture that gives a more accurate layer of description of reality, in the same way that classical physics is technically imprecise and quantum mechanics serves as a shift in perspective allowing us to fix some of its imprecision (with theories of relativity working on the same problem from the other end, and quantum gravity working on both).

Reductionist analysis is not about getting to the bottom of things (it's pretty bad at that), but about moving between levels, finding simple patterns at the lower levels and using knowledge about them to reach conclusions at the higher levels.

Do you think that some theories are more than merely conceptually correct? Can you unpack "conceptually" for us?

Many-worlds is contended only conceptually correct, in the same way classical illusions of our billiard ball world are conceptually correct.

Upon reading Collapse Postulates, or If Many-Worlds Had Come First, I would say that Eliezer_Yudkowsky is not merely arguing that it is correct a la the "billiard ball world". A quote from the latter article:

Imagine an alternate Earth, where the very first physicist to discover entanglement and superposition, said, "Holy flaming monkeys, there's a zillion other Earths out there!"

Also, we hang on to the billiard-ball view only where we know it coincides with the QM view, as we know that "billiard balls", as a theory, is false. Thus, any predictions derived from it would be suspect unless also derived from QM. None of this seems to me to coincide with Eliezer_Yudkowsky's view on Many-Worlds.

Summary: I disagree that Mitchell_Porter is arguing a strawman. Also, I have a question: What value do you see in Many-Worlds merely as a concept?

My main argument is that most likely none of us really knows enough about the mathematics of quantum mechanics to follow the emergence of the patterns of the observable universe out of MWI. My quantum math stops at quantum computing, which is equally well MWI-interpretable and Copenhagen-interpretable.

The second argument is that our view of physics is incomplete - we don't know about quantum gravity, our cosmology is ridiculous, filled with inflation, dark matter, dark energy, etc., and we don't know if there are any tiny non-linearities in the 200th decimal place in quantum systems (no physical law so far has withstood this). MWI completely fails if any such non-linearities are present, while other theories can handle them. Quantum computers are also spectacularly precise quantum-effect measurement devices, so we might find that out.

I find the case for MWI decent, but nowhere near as overwhelming as the usual examples of theism and marijuana legalization. It can collapse with one experiment, and I'm not betting against such an experiment happening in my lifetime at odds higher than 10:1.

Here's an amusing quantum effect to think about

MWI completely fails if any such non-linearities are present, while other theories can handle them. [...] It can collapse with one experiment, and I'm not betting against such experiment happening in my lifetime at odds higher than 10:1.

So you're saying MWI tells us what to anticipate more specifically (and therefore makes itself more falsifiable) than the alternatives, and that's a point against it?

The possibility of future evidence against some hypothesis isn't evidence against that hypothesis. It also isn't evidence for that hypothesis. The only experiments that count are the ones that have actually been done.

The absence of evidence against a hypothesis that other hypotheses predict you would encounter is evidence in favor of that hypothesis.

Right, but only if they predict you'd encounter the evidence in situations that have actually happened.

What are the probabilities, given Many Worlds or collapse quantum mechanics, that in our past investigations of subatomic particles we would have encountered some nonlinear term in the Schrödinger equation? I would say such an encounter has a higher probability under a collapse theory that would not be falsified by it, and thus its absence does in fact favor Many Worlds.
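
The update being described can be made concrete with a toy Bayesian sketch (the numbers are mine and purely illustrative): if a collapse theory assigns a higher probability than MWI to our having already found nonlinear terms, then the continued failure to find them shifts the odds toward MWI.

```python
# Assumed, illustrative likelihoods for the observation history
# "no nonlinear term found so far":
p_found_given_collapse = 0.30  # collapse theories could tolerate nonlinearity
p_found_given_mwi = 0.01       # MWI requires exact linearity

prior_odds_mwi = 1.0  # start at 1:1 (MWI : collapse)

# Likelihood ratio for the actual observation, "nothing found":
likelihood_ratio = (1 - p_found_given_mwi) / (1 - p_found_given_collapse)
posterior_odds_mwi = prior_odds_mwi * likelihood_ratio
print(round(posterior_odds_mwi, 2))  # 1.41: the absence of the term favors MWI
```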

Sure, it's just that taw and simpleton both seemed to be making stronger claims than that.

The claim I'm making is that Eliezer's acting as if MWI were proven beyond any possibility of doubt, just like the non-existence of the Christian god, is not justified.

MWI is a decent interpretation, but the preference for it is based mostly on differing intuitions about what counts as mathematical simplicity (as the data is agnostic between interpretations for now), and it might get invalidated in a single experiment - which is not that terribly unlikely to happen, given the past performance of our physical theories.

It's the point against certainty about MWI, not against MWI.

If we go down to 200th decimal place and find perfect linearity, it would be weak evidence for MWI (because other interpretations are fairly agnostic about it).

This would be significantly more useful if there were a bit of argument clarifying the point. EY has argued at length why MWH is the best explanation we have; merely saying, "Privileging the hypothesis!" is not adequate.

I'd stake even odds that this post was published accidentally. It looks like the two introductory paragraphs above a (not presently existing) cut hiding the complete argument. It would be rather odd to entitle a post "Why X is not Y," postulate that it is worth explaining the case against X being Y, and then not immediately follow up by doing so.

You're quite right. I saved it and then logged out. Or that's what I thought I was doing. And I came back just now intending to respond to Eliezer's essay with a comment rather than a whole new article. I guess I'll make the counterargument here after all.

You actually have to select the "draft" option from the dropdown menu. "save and continue" basically just means "publish what I have here to wherever I selected to publish it, but keep me in the editor so I can keep working on it"

Yeah -- also remember to choose the "hide" option if you don't want people to be able to see it yet.

I'm looking forward to reading more! What I'd really like to see is an expert rebuttal to Eliezer's arguments for multiple worlds (which I find compelling).

It may just be that you haven't been exposed to a similarly compelling exposition of the other options (parable here). The main point of my article (which is now complete) is that the various rival interpretations have hardly even reached the status of exact theories; and then they still need to be put into commensurable forms, and the rest of the resulting theory-space charted. Only then can we avoid starting-point bias and carry out these Occam-like reasonings.

Your article still looks incomplete to me; there is quite a bit Eliezer_Yudkowsky presented to justify MWI that doesn't appear to be addressed here. I'm not just comparing lengths of what each of you wrote; there is quite a bit of inferential distance he worked through and you would have to cast strong doubt on at least one of those steps to have a case.

Eliezer's expository series doesn't really engage with the other interpretations (see my reply to cousin_it). It is an enthusiastic exposition of a particular approach, but it is nothing like a serious examination of how it stacks up in comparison with the other options.

Things still publish even when you save to drafts, or at least they have for me. You need to click "Hide" after you save it; then others can't see it and you can edit at your leisure, then unhide.

It just looks like the article gets published, but actually it doesn't -- if you log out you'll see it's not published.

The basic wrong assumption being made is that quantum superposition by default equals multiplicity - that because the wavefunction in the double-slit experiment has two branches, one for each slit, there must be two of something there - and that a single-world interpretation has to add an extra postulate to this picture, such as a collapse process which removes one branch. But superposition-as-multiplicity really is just another hypothesis. When you use ordinary probabilities, you are not rationally obligated to believe that every outcome exists somewhere; and an electron wavefunction really may be describing a single object in a single state, rather than a multiplicity of them.
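
For concreteness, the two-branch structure at issue can be written down directly (a minimal sketch of the idealized double-slit amplitudes; whether the two branches are "two of something" is exactly the point in dispute): the screen intensity is |psi1 + psi2|^2, which contains a cross term and is not the sum of the two branch intensities.

```python
import cmath

# Idealized equal-amplitude branches from the two slits, differing
# only by a relative phase from the path-length difference.
psi1 = cmath.exp(1j * 0.0)
psi2 = cmath.exp(1j * cmath.pi)  # half-wavelength difference: a dark fringe

interfering = abs(psi1 + psi2) ** 2                # ~0.0: the branches cancel
non_interfering = abs(psi1) ** 2 + abs(psi2) ** 2  # 2.0: no cross term

print(interfering, non_interfering)
```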

Another wrinkle that is too often overlooked is that superposition is observer dependent.

Go for it. I have extreme difficulty trying to work out how it might even make sense that all possible(*) realities don't exist....

To me, the killer arguments are:

  • How arbitrary both the arrangement of the universe, and the universe itself, are;

  • How impossible it is to pin down what existence is, compared to an abstracted implementation;

  • How consciousness itself implies uncertainty and indiscernibility between contexts.

(*) In a meaningful sense, of course.

How arbitrary both the arrangement of the universe, and the universe itself, are;

Can you elaborate on that a bit more?

How consciousness itself implies uncertainty and indiscernibility between contexts.

Uhh.... and that too.