The words “falsifiable” and “testable” are sometimes used interchangeably, which imprecision is the price of speaking in English. There are two different probability-theoretic qualities I wish to discuss here, and I will refer to one as “falsifiable” and the other as “testable” because it seems like the best fit.

As for the math, it begins, as so many things do, with:

P(Ai|B) = P(B|Ai) P(Ai) / Σj P(B|Aj) P(Aj)

This is Bayes's Theorem. I own at least two distinct items of clothing printed with this theorem, so it must be important.

To review quickly, *B* here refers to an item of evidence, *Ai* is some hypothesis under consideration, and the *Aj* are competing, mutually exclusive hypotheses. The expression *P*(*B*|*Ai*) means “the probability of seeing *B*, if hypothesis *Ai* is true” and *P*(*Ai*|*B*) means “the probability hypothesis *Ai* is true, if we see *B*.”

The mathematical phenomenon that I will call “falsifiability” is the scientifically desirable property of a hypothesis that it should concentrate its probability mass into preferred outcomes, which implies that it must also assign low probability to some un-preferred outcomes; probabilities must sum to 1 and there is only so much probability to go around. Ideally there should be possible observations which would drive down the hypothesis’s probability to nearly zero: There should be things the hypothesis *cannot* explain, conceivable experimental results with which the theory is *not* compatible. A theory that can explain everything prohibits nothing, and so gives us no advice about what to expect.

In terms of Bayes’s Theorem, if there is at least some observation *B* that the hypothesis *Ai* can’t explain, i.e., *P*(*B*|*Ai*) is tiny, then the numerator *P*(*B*|*Ai*)*P*(*Ai*) will also be tiny, and likewise the posterior probability *P*(*Ai*|*B*). Updating on having seen the impossible result *B* has driven the probability of *Ai* down to nearly zero. A theory that refuses to make itself vulnerable in this way will need to spread its probability widely, so that it has no holes; it will not be able to strongly concentrate probability into a few preferred outcomes; it will not be able to offer precise advice.
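
The arithmetic here is easy to check with a few lines of code. This is a minimal sketch; the hypothesis names and numbers are invented purely for illustration:

```python
def posterior(priors, likelihoods):
    """Bayes's Theorem over mutually exclusive hypotheses.

    priors[i] = P(A_i); likelihoods[i] = P(B|A_i).
    Returns P(A_i|B) for each i.
    """
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)  # P(B) = sum_j P(B|A_j) P(A_j)
    return [j / total for j in joint]

# A falsifiable hypothesis A_1 assigns tiny probability to outcome B;
# its rival A_2 spreads probability widely enough to tolerate B.
priors      = [0.5, 0.5]
likelihoods = [1e-6, 0.1]   # P(B|A_1) is tiny: B nearly falsifies A_1

post = posterior(priors, likelihoods)
print(post)  # P(A_1|B) is driven down to roughly 1e-5
```

Seeing the “impossible” result *B* leaves almost all the posterior probability on the rival hypothesis, exactly as the text describes.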

Thus is the rule of science derived in probability theory.

As depicted here, “falsifiability” is something you evaluate by looking at a *single* hypothesis, asking, “How narrowly does it concentrate its probability distribution over possible outcomes? How narrowly does it tell me what to expect? Can it explain some possible outcomes much better than others?”

Is the decoherence interpretation of quantum mechanics *falsifiable*? Are there experimental results that could drive its probability down to an infinitesimal?

Sure: We could measure entangled particles that should always have opposite spin, and find that if we measure them far enough apart, they sometimes have the same spin.

Or we could find apples falling upward, the planets of the Solar System zigging around at random, and an atom that kept emitting photons without any apparent energy source. Those observations would also falsify decoherent quantum mechanics. They’re things that, on the hypothesis that decoherent quantum mechanics governs the universe, we should definitely *not expect* to see.

So there do exist observations *B* whose *P*(*B*|*A*deco) is infinitesimal, which would drive *P*(*A*deco|*B*) down to an infinitesimal.

But that’s just because decoherent quantum mechanics is still quantum mechanics! What about the decoherence part, per se, versus the collapse postulate?

We’re getting there. The point is that I just defined a test that leads you to think about one hypothesis at a time (and called it “falsifiability”). If you want to distinguish decoherence *versus* collapse, you have to think about at least two hypotheses at a time.

Now really the “falsifiability” test is not quite *that* singly focused, i.e., the sum in the denominator has got to contain *some* other hypothesis. But what I just defined as “falsifiability” pinpoints the kind of problem that Karl Popper was complaining about, when he said that Freudian psychoanalysis was “unfalsifiable” because it was equally good at coming up with an explanation for every possible thing the patient could do.

If you belonged to an alien species that had never invented the collapse postulate or Copenhagen Interpretation—if the only physical theory you’d ever heard of was decoherent quantum mechanics—if *all* you had in your head was the differential equation for the wavefunction’s evolution plus the Born probability rule—you would still have sharp expectations of the universe. You would not live in a magical world where anything was probable.

But you could say exactly the same thing about quantum mechanics without (macroscopic) decoherence.

Well, yes! Someone walking around with the differential equation for the wavefunction’s evolution, plus a collapse postulate that obeys the Born probabilities and is triggered before superposition reaches macroscopic levels, still lives in a universe where apples fall down rather than up.

But where does decoherence make a new prediction, one that lets us test it?

A “new” prediction relative to what? To the state of knowledge possessed by the ancient Greeks? If you went back in time and showed them decoherent quantum mechanics, they would be enabled to make many experimental predictions they could not have made before.

When you say “new prediction,” you mean “new” relative to some other hypothesis that defines the “old prediction.” This gets us into the theory of what I’ve chosen to label *testability*; and the algorithm inherently considers at least two hypotheses at a time. You cannot call something a “*new* prediction” by considering only one hypothesis in isolation.

In Bayesian terms, you are looking for an item of evidence *B* that will produce evidence for one hypothesis over another, distinguishing between them, and the process of producing this evidence we could call a “test.” You are looking for an experimental result *B* such that

P(B|Adeco) ≠ P(B|Acollapse)

that is, some outcome *B* which has a different probability, conditional on the decoherence hypothesis being true, versus its probability if the collapse hypothesis is true. Which in turn implies that the posterior odds for decoherence and collapse will become different from the prior odds:

P(Adeco|B) / P(Acollapse|B) = [P(B|Adeco) / P(B|Acollapse)] × [P(Adeco) / P(Acollapse)]

This equation is symmetrical (assuming no probability is literally equal to 0). There isn’t one labeled “old hypothesis” and another labeled “new hypothesis.”
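
The odds form of the update is a single multiplication, and the symmetry is visible in the code. The likelihood ratio below is a made-up number, not a claim about any real experiment:

```python
def odds_update(prior_odds, likelihood_ratio):
    """Posterior odds = likelihood ratio * prior odds (odds form of Bayes)."""
    return likelihood_ratio * prior_odds

# Hypothetical numbers: the two theories start at even odds, and some
# experimental outcome B is four times as likely under decoherence.
prior_odds = 1.0       # P(deco)/P(collapse)
lr = 0.8 / 0.2         # P(B|deco)/P(B|collapse)

print(odds_update(prior_odds, lr))          # 4.0 in favor of decoherence
print(odds_update(1 / prior_odds, 1 / lr))  # 0.25: the identical shift, viewed from the collapse side
```

Neither hypothesis is labeled “old” or “new” anywhere in the computation; inverting the ratio just describes the same belief shift from the other side.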

This symmetry is a feature, not a bug, of probability theory! If you are designing an artificial reasoning system that arrives at different beliefs depending on the order in which the evidence is presented, this is labeled “hysteresis” and considered a Bad Thing. I hear that it is also frowned upon in Science.

From a probability-theoretic standpoint we have various trivial theorems that say it shouldn’t matter whether you update on *X* first and then *Y*, or update on *Y* first and then *X*. At least they’d be trivial if human beings didn’t violate them so often and so lightly.
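
That order-independence is easy to verify numerically. A minimal sketch, with invented likelihoods for two items of evidence *X* and *Y*:

```python
def update(prior, lh_true, lh_false):
    """Posterior for hypothesis A after one item of evidence,
    where lh_true = P(evidence|A) and lh_false = P(evidence|not-A)."""
    num = lh_true * prior
    return num / (num + lh_false * (1 - prior))

p0 = 0.3         # prior for A (made-up)
x = (0.9, 0.4)   # likelihoods of evidence X under A and not-A (made-up)
y = (0.2, 0.7)   # likelihoods of evidence Y (made-up)

x_then_y = update(update(p0, *x), *y)
y_then_x = update(update(p0, *y), *x)
assert abs(x_then_y - y_then_x) < 1e-12  # same posterior either way
```

Both orders collapse algebraically to the same expression, P(A) P(X|A) P(Y|A) over the total, which is why the order can't matter.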

If decoherence is “untestable” relative to collapse, then so too, collapse is “untestable” relative to decoherence. What if the history of physics had transpired differently—what if Hugh Everett and John Wheeler had stood in the place of Bohr and Heisenberg, and vice versa? Would it then be right and proper for the people of that world to look at the collapse interpretation, and snort, and say, “Where are the *new* predictions?”

What if someday we meet an alien species that invented decoherence before collapse? Are we each bound to keep the theory we invented first? Will Reason have nothing to say about the issue, leaving no recourse to settle the argument but interstellar war?

But if we revoke the requirement to yield new predictions, we are left with scientific chaos. You can add arbitrary untestable complications to old theories, and get experimentally equivalent predictions. If we reject what you call “hysteresis,” how can we defend our current theories against every crackpot who proposes that electrons have a new property called “scent,” just like quarks have “flavor”?

Let it first be said that I quite agree that you should reject the one who comes to you and says: “Hey, I’ve got this brilliant new idea! Maybe it’s not the electromagnetic field that’s tugging on charged particles. Maybe there are tiny little angels who actually push on the particles, and the electromagnetic field just tells them how to do it. Look, I have all these successful experimental predictions—the predictions you used to call your own!”

So yes, I agree that we shouldn’t buy this amazing new theory, but it is not the *newness* that is the problem.

Suppose that human history had developed only slightly differently, with the Church being a primary grant agency for Science. And suppose that when the laws of electromagnetism were first being worked out, the phenomenon of magnetism had been taken as proof of the existence of unseen spirits, of angels. James Clerk Maxwell becomes Saint Maxwell, who described the laws that direct the actions of angels.

A couple of centuries later, after the Church’s power to burn people at the stake has been restrained, someone comes along and says: “Hey, do we really need the angels?”

“Yes,” everyone says. “How else would the mere numbers of the electromagnetic field translate into the actual motions of particles?”

“It might be a fundamental law,” says the newcomer, “or it might be something other than angels, which we will discover later. What I am suggesting is that interpreting the numbers *as the action of angels* doesn’t really add anything, and we should just keep the numbers and throw out the angel part.”

And they look one at another, and finally say, “But your theory doesn’t make any new experimental predictions, so why should we adopt it? How do we test your assertions about the absence of angels?”

From a normative perspective, it seems to me that if we should reject the crackpot angels in the first scenario, *even without being able to distinguish the two theories experimentally*, then we should also reject the angels of established science in the second scenario, even without being able to distinguish the two theories experimentally.

It is ordinarily the crackpot who adds on new useless complications, rather than scientists who accidentally build them in at the start. But the problem is not that the complications are new, but that they are useless whether or not they are new.

A Bayesian would say that the extra complications of the angels in the theory lead to penalties on the prior probability of the theory. If two theories make equivalent predictions, we keep the one that can be described with the shortest message, the smallest program. If you are evaluating the prior probability of each hypothesis by counting bits of code, and then applying Bayesian updating rules on all the evidence available, then it makes no difference which hypothesis you hear about first, or the order in which you apply the evidence.
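
As a toy sketch of this complexity penalty (the bit counts below are invented for illustration, and real description lengths are not computable this simply):

```python
def mdl_prior(bits):
    """Crude minimum-description-length prior: P ~ 2^-(code length in bits)."""
    return 2.0 ** -bits

# Two hypothetical theories making identical predictions; the second
# carries an extra, predictively idle postulate costing 20 bits of code.
p_plain  = mdl_prior(1000)
p_angels = mdl_prior(1000 + 20)

print(p_angels / p_plain)  # 2^-20: roughly a million-to-one penalty
```

Because the two theories make identical predictions, every likelihood ratio between them is 1, so this prior penalty survives unchanged through any amount of evidence, in any order.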

It is usually not possible to apply formal probability theory in real life, any more than you can predict the winner of a tennis match using quantum field theory. But if probability theory can serve as a guide to practice, this is what it says: Reject *useless* complications *in general*, not just when they are *new*.

Yes, and *useless* is precisely what the many worlds of decoherence are! There are supposedly all these worlds alongside our own, and they don’t *do* anything to our world, but I’m supposed to believe in them anyway?

No, according to decoherence, what you’re supposed to believe are the general laws that govern wavefunctions—and these general laws are very visible and testable.

I have argued elsewhere that the imprimatur of science should be associated with general laws, rather than particular events, because it is the general laws that, in principle, anyone can go out and test for themselves. I assure you that I happen to be wearing white socks right now as I type this. So you are probably *rationally* justified in believing that this is a historical fact. But it is not the specially strong kind of statement that we canonize as a provisional belief of science, because there is no experiment that you can do for yourself to determine the truth of it; you are stuck with my authority. Now, if I were to tell you the mass of an electron in general, you could go out and find your own electron to test, and thereby see for yourself the truth of the general law in that particular case.

The ability of anyone to go out and verify a general scientific law for themselves, by constructing some particular case, is what makes our belief in the general law specially reliable.

What decoherentists say they believe in is the differential equation that is observed to govern the evolution of wavefunctions—which you can go out and test yourself any time you like; just look at a hydrogen atom.

Belief in the existence of separated portions of the universal wavefunction is not *additional*, and it is not *supposed* to be explaining the price of gold in London; it is just a deductive consequence of the wavefunction’s evolution. If the evidence of many particular cases gives you cause to believe that *X* is a general law, and *X* logically implies *Y*, then you should have *P*(*X*∧*Y*) ≈ *P*(*X*).

Or to look at it another way, if *P*(*Y*|*X*) ≈ 1, then *P*(*X*∧*Y*) ≈ *P*(*X*).

Which is to say, believing extra details doesn’t cost you extra probability when they are *logical implications* of general beliefs you already have. Presumably the general beliefs themselves are falsifiable, though, or why bother?
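
A one-line numeric check of this identity, with toy numbers:

```python
# When Y is a (near-)certain consequence of X, believing "X and Y"
# costs almost nothing beyond believing X.
p_x = 0.9
p_y_given_x = 0.999999          # P(Y|X) ~ 1: Y is (almost) a logical implication of X
p_xy = p_x * p_y_given_x        # product rule: P(X ∧ Y) = P(X) P(Y|X)
assert abs(p_xy - p_x) < 1e-5   # P(X ∧ Y) ≈ P(X)
```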

This is why we don’t believe that spaceships blink out of existence when they cross the cosmological horizon relative to us. True, the spaceship’s continued existence doesn’t have an impact on our world. The spaceship’s continued existence isn’t helping to explain the price of gold in London. But we get the invisible spaceship for free as a consequence of general laws that imply conservation of mass and energy. If the spaceship’s continued existence were *not* a deductive consequence of the laws of physics as we presently model them, *then* it would be an additional detail, cost extra probability, and we would have to question why our theory must include this assertion.

The part of decoherence that is supposed to be testable is not the many worlds per se, but just the general law that governs the wavefunction. The decoherentists note that, applied universally, this law implies the existence of entire superposed worlds. Now there are critiques that can be leveled at this theory, most notably, “But then where do the Born probabilities come from?” But within the internal logic of decoherence, the many worlds are not offered as an explanation for anything, nor are they the substance of the theory that is meant to be tested; they are simply a logical consequence of those general laws that constitute the substance of the theory.

If *P*(*Y*|*X*) ≈ 1, then *P*(*X*∧*Y*) ≈ *P*(*X*). To deny the existence of superposed worlds is necessarily to deny the universality of the quantum laws formulated to govern hydrogen atoms and every other examinable case; it is this denial that seems to the decoherentists like the extra and untestable detail. You can’t see the other parts of the wavefunction—why postulate *additionally* that they don’t exist?

The events surrounding the decoherence controversy may be unique in scientific history, marking the first time that serious scientists have come forward and said that by historical accident humanity has developed a powerful, successful, mathematical physical theory that includes angels. That there is an entire law, the collapse postulate, that can simply be thrown away, leaving the theory *strictly* simpler.

To this discussion I wish to contribute the assertion that, in the light of a mathematically solid understanding of probability theory, decoherence is not ruled out by Occam’s Razor, nor is it unfalsifiable, nor is it untestable.

We may consider e.g. decoherence and the collapse postulate, side by side, and evaluate critiques such as “Doesn’t decoherence definitely predict that quantum probabilities should always be 50/50?” and “Doesn’t collapse violate Special Relativity by implying influence at a distance?” We can consider the relative merits of these theories on grounds of their compatibility with experience and the apparent character of physical law.

To assert that decoherence is not even in the game—because the many worlds themselves are “extra entities” that violate Occam’s Razor, or because the many worlds themselves are “untestable,” or because decoherence makes no “new predictions”—all this is, I would argue, an outright error of probability theory. The discussion should simply discard those particular arguments and move on.

Excellent post, Eliezer. I have just a small quibble: it should be made clear that decoherence and the many-worlds interpretation are logically distinct. Many physicists, especially condensed matter physicists working on quantum computation/information, use models of microscopic decoherence on a daily basis while remaining agnostic about collapse. These models of decoherence (used for so-called "partial measurement") are directly experimentally testable.

Maybe a better term for what you are talking about is *macroscopic* decoherence. As of right now, no one has ever created serious macroscopic superpositions. Macroscopic decoherence, and hence the many-worlds interpretation, rely on extrapolating microscopically observed phenomena. If there's one lesson we can take from the history of physics, it's that every time new experimental "regimes" are probed (e.g. large velocities, small sizes, large mass densities, large energies), phenomena are observed which lead to new theories (special relativity, quantum mechanics, general relativity, and the standard model, respectively). This is part of the reason I find it likely that the peculiar implications of uncollapsed Hermitian evolution are simply artifacts of using quantum mechanics outside its regime of applicability.

Here at UC Santa Barbara, Dirk Bouwmeester is trying to probe this macroscopic regime by superposing a cantilever that is ~50 microns across--big enough to see with an optical microscope!

Surely the prior is that the laws of physics hold at all scales? Why wouldn't you extrapolate? Edit: Just noticed how redundant this comment is..

Is there any reason to believe that something interferes with the physics between "microscopic decoherence" and "macroscopic decoherence" that affects the latter and not the former? I'm just saying because I'm getting strong echoes of the "microevolution vs. macroevolution" misconception - in both cases, people seem to be rejecting the obvious extension of a hypothesis to the human level.

"I own at least two distinct items of clothing printed with this theorem, so it must be important."

Isn't this an *argumentum ad vestem* fallacy? This is also the fallacy that leads people to take the Pope seriously. (I mean, if it's *baculum*, where is his political power? Yet I can clearly see his big pointy hat with my own eyes.)

Jess: "Here at UC Santa Barbara, Dirk Bouwmeester is trying to probe this macroscopic regime by superposing a cantilever that is ~50 microns across--big enough to see with an optical microscope!"

I just want to say that sounds like an absolutely awesome experiment. Any info on results so far? (For that matter, how's he doing it in the first place?)

Does the reality of the wavefunction imply MWI? The wavefunction is a function over every possible configuration of the universe. We may still believe that the universe comprises a single point in configuration space, corresponding to a single value of the wavefunction, along with the value of the wavefunction for every other (counterfactual) point in the configuration space. The reality is the particular configuration space point along with the shape of the wavefunction. This does not imply that the wavefunction is a delta-function in configuration space. Other counterfactual configurations may have similar probability amplitudes depending on their "degree of possibility" compared to the existing configuration.

So, decoherence is a valid scientific theory because it makes the same, correct predictions as the one involving collapse, but is simpler.

There, that didn't take 2800 words, now, did it?

Bob:

"We may still believe that the universe comprises a single point in configuration space, corresponding to a single value of the wavefunction"

How is this not immediately ruled out by Bell's Theorem?

@Silas: I've tried just saying that to people, it doesn't work. Doesn't work in academic physics either. Besides which, it may not be the last time the question comes up, and there's no reason why physicists shouldn't know the (epistemic) math.

Bell's Theorem rules out local realism. I'm going with "non-local".

Bob, either I'm missing something, or you are.

If you pick a single point in configuration space in the position basis, nothing has a specified momentum. If you pick a single point in the momentum basis, nothing has a specified position. If you pick a single point in the polarized-45-degrees basis, nothing has a specified 90-degree polarization. Decoherence gives us a preferred basis for our *blobs* of amplitude, but that preferred basis is changing all the time and is different for every particle. How's this single-point trick going to work? And what does the epiphenomenal single point *do* that makes it realer than the causally powerful wavefunction?

Bob: That sounds like Bohmian mechanics, which is distinct from either of the interpretations Eliezer has been talking about.

As I understand it, interpretations with actual wavefunctions and collapse still just describe the universe by the wavefunction, it's just that collapse keeps the wave function bunched up so the world can usually be approximately described by a single configuration.

Eliezer: You still calculate the whole wave function, so it's hardly local, and can therefore be a deterministic hidden variable theory that agrees with experiment. I think you just compute an amplitude current from the wave function, and say that the real world or your test particles follow that velocity field in configuration space.

Pretty pictures at http://bohm-mechanics.uibk.ac.at/index.htm

Eliezer: No doubt I am missing a lot. I have the idea of the wavefunction as a real thing, and I am not advocating a collapse interpretation. I am also uncomfortable with any kind of preferred basis. My idea is that the configuration space of the universe is the classical configuration space, but that its evolution is determined by the wavefunction over the quantum mechanical configuration space (in whatever basis you choose). So a point-particle has a real momentum and a real position, which are not simultaneously measurable. For electromagnetism, the electric and magnetic fields have actual values, which are also not simultaneously measurable. The fields evolve continuously but non-deterministically in accordance with the evolution of the wavefunction. There are still blobs of amplitude in configuration space, but only one point in one of those blobs is the real configuration.

Anonymous: I'll look at your reference to refresh my memory of Bohm. The last I heard, there were problems with relativistic versions of that theory.

Actually, they [the electric and magnetic fields] are the same thing, so if you know one, you know the other... they are definitely NOT conjugate variables (variables that cannot be measured at the same time).

Psy-Kosh: It *is* an awesome experiment. Here are links to Bouwmeester's home page, the original proposal, and the latest update on cooling the cantilever. (Bouwmeester has perhaps the most annoying web interface of any serious scientist. Click in the upper left on "research" and then the lower right on "macroscopic quantum superposition". Also, the last article appeared in Nature and may not be accessible without a subscription.)

Obviously, this is a *very* hard experiment and success is not assured. Also, you might be interested to know that at least one other group, Jack Harris's at Yale, is doing similar work.

Psy-Kosh: Oh, I almost forgot to answer your questions. Experimental results are still several years distant. The basic idea is to fabricate a tiny cantilever with an even tinier mirror attached to its end. Then, you position that mirror at one end of a photon cavity (the other end being a regular fixed mirror). If you then send a photon into the cavity through a half-silvered third mirror--so that it will be in a superposition of being in and not in the cavity--then the cantilever will be put into a correlated superposition: it will be vibrating if the photon is in the cavity and it will be still if the photon is not. Of course, the really, really super-hard part is getting all this to happen without the state decohering before you see anything interesting.

Robin Z: The motivation for suspecting that something funny happens as you try to scale up decoherence to full-blown many-worlds comes from the serious problems that many-worlds has. Beyond the issue with predicting the Born postulate, there are serious conceptual problems with defining individual worlds, even emergently.

The motivation for doing this experiment is even clearer: (1) The many-worlds interpretation is a fantastically profound statement about our universe and therefore demands that fantastic experimental work be done to confirm it as best as is possible. (For instance, despite the fact that I very confidently expect Bell's inequality to continue to hold after each tenuous experimental loophole is closed, I still consider it an excellent use of my tax dollars that these experiments continue to be improved.) (2) Fundamental new regimes in physics should always be probed, especially at this daunting time in the history of physics where we seem able to predict nearly everything we see around us but unable to extend our theories to in-principle testable but currently inaccessible regimes. (3) It's just plain *cool*.

But considering this experiment with the 50 micron cantilever, suppose they are successful in putting it into a superposition and verifying that. That will be a fine piece of work and will gain well-deserved approval. But suppose OTOH they are able to show somehow that superposition fails once we hit 50 microns (or equivalent mass)! They show that the cantilever does not obey the equations of QM! That would be earth-shaking news, Nobel Prize caliber work. It would be the most important physics discovery of the 21st century so far, and maybe for the whole rest of it. It would require scientists to go back to square one in their understanding of the fundamental laws of the universe.

If I am right (and truthfully I am just speculating), this suggests that in their hearts, scientists really do believe what Eliezer describes, that the laws of QM apply to 50 micron cantilevers as well as much more besides. They may not be comfortable with the whole many-worlds picture, but they solve that by just not thinking about it. The prospect of actually discovering a level at which QM stops working is something which would have to be viewed as highly unlikely, in the context of current understanding.

Enough said - I withdraw my implied objection. I, too, hope the experiment you refer to will provide new insight.

Jess: That certainly is an interesting web interface.

But yeah, thanks lots for the info, really cool. I want results right now though!

Bouncies! (Well, i/sqrt(2)|bouncies> + 1/sqrt(2)|sits-still>.)

From giving a cursory look at the info there, I didn't quite see or grasp how they plan on detecting it, though. I.e., after they put it in an oscillating/not-so-oscillating mode, do they then bounce another photon off of it and analyze its path to see if the path would have to be the result of a superposition of the different possible states of the mirror? Or something completely different?

Roger Penrose predicts that attempts to create macroscale quantum superposition will fail because gravity will keep things with too much energy from being in two places at once. He's a bit of a crackpot when it comes to quantum gravity, but it'll be interesting to see him proven wrong. ;)

(Similarly, the occasional crackpot theory suggests that the "fair sampling assumption" of EPR tests should be *systematically* violated, and that our ability to make "better photon detectors" should hit a limit.)

Oh, and if in this many-worlds interpretation photons would have to appear opposite to what is detected in our world, then when the experiment is over and the experimenters leave in opposite directions, does that mean the experimenters on the other side continuously crash into each other?

Both collapse and MWI have an "it happens for an instant" quality. As soon as the experiment is over, they go back into the box, like the Rosicrucians do with God's angels.

Wait, it gets better. If there is a probability of your mother's getting pregnant here, should there not be the opposite effect, such that your double couldn't have been born?

Since both you and he are around that means the other world only begins with the original photons flying apart.

MWI has both local extent (good) and a divergent local behavior at every point in space (pathological). It requires a disjunction between neighborhood elements, since the results have to be complementary. The fabric of complementary MWI layers has an increasing tendency to explode the minute something happens in its partner. Which would suggest that collapse is the true picture, but we already know it's a collapse of a description, not of the event. This is a bit like how water waves travel but no water molecule actually goes beyond the next crest. The collapse is a figment and MWI is unstable. My God, what have we done to reason?

I am not sure you really understand the notion of understanding a theory, and I cannot readily discern what exactly you are trying to say with your comment. Have you examined your reasons for believing what you believe? Have you looked inside your associated mental black boxes? Have you read a text book? Have you talked to a MacroDeco adherent? Have you talked to a rugged and experienced QM scientist? Do you have a better theory? Can you simulate that better theory on a computer?

"If P(Y|X) ≈ 1, then P(X∧Y) ≈ P(X). Which is to say, believing extra details doesn't cost you extra probability when they are logical implications of general beliefs you already have."

Shivers went down my spine when I read that; this is the first time that I actually looked at a formula and really saw what it meant. Ah, maths. Thank you, Yu-el.

You're welcome. You warm my heart.

I disagree on five points. The first is my conclusion too; the second leads to the third and the third explains the fourth. The fifth one is the most interesting.

1) In contrast with the title, you did not show that the MWI is falsifiable or testable; I know the title mentions decoherence (which is falsifiable and testable), but decoherence is very different from the MWI, and for the rest of the article you talked about the MWI, though calling it decoherence. You just showed that MWI is "better" according to your "goodness" index, but that index is not so good. Also, the MWI is not at all a consequence of the superposition principle: it is rather an ad-hoc hypothesis made to "explain" why we don't experience a macroscopic superposition, even though we would expect one because macroscopic objects are made of microscopic ones. But, as I will mention in the last point, the superposition of macroscopic objects is not an inevitable consequence of the superposition principle applied to microscopic objects.

2) You say that postulating a new object is better than postulating a new law: so why teach Galileo's relativity by postulating its transformations, when they could be derived as a special case of the Lorentz transformations for slow speeds? The answer is that they are just models, which have to be easy enough for us to understand: in order to understand relativity well you first have to understand non-relativistic mechanics, and you can only do that by observing and measuring slow objects and then making the simplest theory which describes them (i.e., postulating the shortest mathematical rules experimentally compatible with the "slow" experiences: Galileo's); THEN you can proceed to something more difficult and more accurate, postulating new rules to get a refined theory. You calculate the probability of a theory and use this as an index of the "truthness" of it, but that confuses reality with the model of it. You can't measure how "true" a theory is; maybe there is no Ultimate True Theory: you can just measure how effective and clean a theory is at describing reality and being understood. So, to index how good a theory is, you should instead calculate the probability that a person understands that theory and uses it to correctly make anticipations about reality: that means P(Galileo) >> P(first Lorentz, then show Galileo as a special case); and also P(first Galileo, after Lorentz) != P(first Lorentz, after Galileo), because you can't expect people to be *perfect* rationalists: they can only be as rational as possible. The model is just an approximation of reality, so you can't force real people into the "perfect rational person" model; you have to take into account that nobody's perfect.

3) Because nobody's perfect, you must take into account the needed RAM too. You said in the previous post that "Occam's Razor was raised as an objection to the suggestion that nebulae were actually distant galaxies—it seemed to vastly multiply the number of entities in the universe", in order to justify that the RAM account is irrelevant. But that argument is not valid: we rejected the hypothesis that nebulae are not distant galaxies not because Occam's Razor is irrelevant, but because we measured their distance and found that they are inside our galaxy; without this information, the simpler hypothesis would be that they are distant galaxies. Occam's Razor IS relevant not only to the laws, but to the objects too. Yes, given a limited amount of information, it could shift you toward a "simpler yet wrong model", but it doesn't annihilate the probability of the "right" model: with new information you would find out that you were previously wrong. But how often does Occam's Razor induce you to neglect a good model, as opposed to how often it lets us neglect bad models? Also, Occam's Razor may mislead you not only when applied to objects, but when applied to laws too, so your argument discriminating Occam's Razor applicability doesn't stand.

4) The collapse of the wave function is a way to represent a fact: if a microscopic system S is in an eigenstate of some observable A and you measure on S an observable B which does not commute with A, your apparatus doesn't end up in a superposition of states but gives you a unique result, and the system S ends up in the eigenstate of B corresponding to the result the apparatus gave you. That's the fact. As the classical behavior of macroscopic objects and the stochastic, irreversible collapse seem to contradict the linearity, predictability and reversibility of the Schrödinger equation ruling microscopic systems, it appears as if there's an uncomfortable demarcation line between microscopic and macroscopic physics. So, attempts have been made either to find this demarcation line, or to show a mechanism for the emergence of classical behavior from quantum mechanics, or to somehow solve or formalize this problem. The Copenhagen interpretation (CI) just says: "there are classically behaving macroscopic objects and quantum-behaving microscopic ones; the interaction of a microscopic object with a macroscopic apparatus causes the stochastic and irreversible collapse of the wave function, whose probabilities are given by the Born rule; now shut up and do the math". It is a rather unsatisfactory answer, primarily because it doesn't explain what gives rise to this demarcation line or where it should be drawn; but it is indeed useful for representing effectively the results of typical educational experiments, where the difference between "big" and "small" is in no way ambiguous, and it lets you become familiar quickly with the bra-ket math. The Many Worlds Interpretation (MWI) just says: "there is indeed superposition of states at the macroscopic scale too, but it is not seen because the other parts of the wave function stay in parallel invisible universes".
Now imagine Einstein had not developed General Relativity, but we had anyway developed the tools to measure the precession of Mercury and had to face the inconsistency with our predictions from Newton's Laws. The analogue of the CI would be: "the orbit of Mercury is not the one anticipated by Newton's Laws but this other one; now if you want to calculate the transits of Mercury as seen from the Earth for the next million years you gotta do THIS math and shut up". The analogue of the MWI would be something like: "we expect the orbit of Mercury to precess at rate X but we observe rate Y; well, there is another parallel universe in which Mercury's precession rate is Z, such that the average of Y and Z is the expected X due to our beautiful, indefeasible Newton's Law". Both are unsatisfactory curiosity stoppers, but the first one avoids introducing new objects. The MWI, instead, while explaining exactly the same experimental results, introduces not only other universes: it also introduces the very concept that there are other universes, which proliferate at each electron's cough attack. And it does so just for the sake of the human pursuit of beauty and loyalty to a (yes, beautiful, but that's not the point) theory.

5) You talk of the MWI and of decoherence as if they were the same thing, but they are quite different. Decoherence is about the loss of coherence that a microscopic system (an electron, for instance) experiences when interacting with a macroscopic chaotic environment. As this sounds rather relevant to the demarcation line and to the interaction between microscopic and macroscopic, it has been suggested that maybe these are related phenomena, that is: maybe the classical behavior of macroscopic objects and the collapse of the wave function of a microscopic object interacting with a macroscopic apparatus are emergent phenomena, which arise from microscopic quantum ones through some interaction mechanism. Of course this is not an answer to the problem: it is just a road to be walked in order to find a mechanism, but we still have to find it. As you say, "emergence" without an underlying mechanism is like "magic". Anyway, decoherence has nothing to do with the MWI, though both try (or pretend) to "explain" the (apparent?) collapse of the wave function. In the last decades decoherence has been probed and the results look promising. Though I'm not an expert in the field, I took a course about it last year and gave a seminar as the exam, describing the results of an article I read (http://arxiv.org/abs/1107.2138v1). The authors presented a toy model of a Curie-Weiss apparatus (a magnet in a thermal bath), prepared in an initial isotropic metastable state, measuring the z-axis spin component of a spin-1/2 particle through induced symmetry breaking. Though I wasn't totally persuaded by the Hamiltonian they wrote, and I'm sure there are better toy models, the general ideas behind it were quite convincing.
In particular, they computationally showed HOW the stochastic, indeterministic collapse can emerge from just: a) Schrödinger's equation; b) statistical effects due to the "large size" of the apparatus (a magnet composed of a large number N of elementary magnets, coupled to a thermal bath); c) an appropriate initial state of the apparatus. They postulated neither new laws nor new objects: they just made a model of a measurement apparatus within the framework of quantum mechanics (without postulating the collapse) and showed how the collapse naturally arose from it. I think that's a pretty impressive result, worthy of further research, more so than the MWI. It explains the collapse without postulating it, and without postulating unseen worlds.

Do you have some notion of the truth of a statement, other than effectively describing reality? If so, I would very much like to hear it.

No, I don't: actually we probably agree about that, with that sentence I was just trying to underline the "being understood" requirement for an effective theory. That was meant to introduce my following objection that the order in which you teach or learn two facts is not irrelevant. The human brain has memory, so a Markovian model for the effectiveness of theories is too simple.

I doubt that you will be successful in convincing EY of the non-privileged position of the MWI. Having spent a lot of time, dozens of posts and tons of karma on this issue, I have regretfully concluded that he is completely irrational with regard to instrumentalism in general and QM interpretations in particular. In his objections he usually builds and demolishes a straw-man version of Copenhagen, something that, in his mind, violates locality/causality/relativity.

One would expect that, having realized that he is but a smart dilettante in the subject matter, he would at least allow for the possibility of being wrong; alas, that is not the case.

I agree that he didn't show testability, but rather the possibility of it (and a formalization of it).

There's a problem with choosing the language for Solomonoff/MML, so the index's goodness can be debated. However, I think the index is sound in general.

I don't think he's saying that theories fundamentally have probabilities. Rather, as a Bayesian, he assigns some prior to each theory. As more evidence accumulates, the right theory gets updated upward and its probability approaches 1.
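That convergence can be sketched numerically. Here is a minimal illustration, with two invented "theories" and invented likelihoods (nothing here comes from the QM discussion itself): the world is simulated according to one of them, and repeated Bayesian updates drive its posterior toward 1.

```python
# Minimal sketch of Bayesian updating over two competing theories.
# Theory A says a coin lands heads with p = 0.8; theory B says p = 0.5.
# Both theories and the data-generating process are invented for illustration.
import random

random.seed(0)
likelihood = {"A": 0.8, "B": 0.5}   # P(heads | theory)
posterior = {"A": 0.5, "B": 0.5}    # equal priors to start

for _ in range(100):
    heads = random.random() < 0.8   # the world actually follows theory A
    for t in posterior:
        p = likelihood[t] if heads else 1 - likelihood[t]
        posterior[t] *= p           # multiply in the likelihood of the datum
    total = sum(posterior.values())
    for t in posterior:
        posterior[t] /= total       # renormalize: Bayes' theorem

print(posterior["A"])  # close to 1 after enough evidence
```

Nothing special about two theories or coin flips; the same loop works for any finite set of mutually exclusive hypotheses, which is exactly the setting of Bayes's Theorem quoted at the top.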

The reason human understanding can't be part of the equations is, as EY says, that shorter "programs" are more likely to govern the universe than longer ones, essentially because a shorter "program" is more likely to be produced if you throw down some random bits to make a program that governs the universe.
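The "random bits" intuition corresponds to weighting each hypothesis by 2^-L, where L is the length in bits of its shortest encoding. A toy calculation (the names and lengths below are invented for illustration) makes the resulting prior concrete:

```python
# Toy version of a length-based (Solomonoff-style) prior: each hypothesis
# gets weight 2^-L, where L is the bit-length of its shortest program.
# The hypothesis names and lengths are invented for this example.
lengths = {"short_theory": 10, "long_theory": 25}

weights = {name: 2.0 ** -L for name, L in lengths.items()}
total = sum(weights.values())
prior = {name: w / total for name, w in weights.items()}

# A program 15 bits shorter is 2^15 = 32768 times more likely a priori,
# because a random bit string is that much more likely to start with it.
ratio = prior["short_theory"] / prior["long_theory"]
print(ratio)  # 32768.0
```

Note that the ratio depends only on the length difference, which is why the objection below about choosing the reference language matters: a different language shifts every L by a machine-dependent constant.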

So I don't buy your arguments in the next section.

EY is comparing the angel explanation with the galaxies explanation; you are supposed to reject the angels and usher in the galaxies. In that case, the anticipations are truly the same. You can't really prove whether there are angels.

What do you mean by "good"? Which one is "better" out of 2 models that give the same prediction? (By "model" I assume you mean "theory")

You admit that Copenhagen is unsatisfactory but it is useful for education. I don't see any reason not to teach MWI in the same vein.

If indeed the expectation value of an observable V of Mercury is X but we observe Y with Y ≠ X (that is to say, the variance of V is nonzero), then there isn't a determinate formula to predict V exactly in your first Newton/random-formula scenario. At the same time, someone who holds the Copenhagen interpretation would have the same expectation value X, but instead of saying there's another world he says there's a wave function collapse. I still think that the parallel world is a deduced result from the universal wave function, superposition, decoherence, etc., all of which Copenhagen also recognizes. So the Copenhagen view essentially says: "actually, even though the equations say there's another world, there is none, and on top of that we are gonna tell you how this collapsing business works". This extra sentence is what causes the Razor to favor the MWI.

Much of what you are arguing seems to stem from your dissatisfaction of the formalization of Occam's Razor. Do you still feel that we should favor something like human understanding of a theory over the probability of a theory being true based on its length?

Because it sets people up to think that QM can be understood in terms of wavefunctions that exist and contain parallel realities; yet when the time comes to calculate anything, you have to go back to Copenhagen and employ the Born rule.

Also, real physics is about operator algebras of observables. Again, this is something you don't get from pure Schrodinger dynamics.

QM should be taught in the Copenhagen framework, and then there should be some review of proposed ontologies and their problems.

When I hear about Solomonoff Induction, I reach for my gun :)

The point is that you can't use Solomonoff Induction or MML to discriminate between interpretations of quantum mechanics: these are formal frameworks for inductive inference, but they are underspecified and, in the case of Solomonoff Induction, uncomputable.

Yudkowsky and other people here seem to use the terms informally, a usage I object to: it's just a fancy way of saying Occam's razor, and it's an attempt to make their arguments more compelling than they actually are by dressing them in pseudomathematics.

That assumes that Solomonoff Induction is the ideal way of performing inductive reasoning, which is debatable. But even granting that, and ignoring the fact that Solomonoff Induction is underspecified, there is still a fundamental problem:

The hypotheses considered by Solomonoff Induction are probability distributions over computer programs that generate observations; how do you map them to interpretations of quantum mechanics?

What program corresponds to Everett's interpretation? What programs correspond to Copenhagen, objective collapse, hidden variable, etc.?

Unless you can answer these questions, any reference to Solomonoff Induction in a discussion about interpretations of quantum mechanics is a red herring.

Actually Copenhagen doesn't commit to collapse being objective. People here seem to conflate Copenhagen with objective collapse, which is a popular misconception.

Objective collapse interpretations generally predict deviations from standard quantum mechanics in some extreme cases, hence they are in principle testable.

I doubt that one of the formulas is supposed to read "/fracP(Ad)P(Ac)". LaTeX markup gone wrong?

Eliezer's mistake here was that he didn't, before the QM sequence, write a general post to the effect that you don't have an additional Bayesian burden of proof if your theory was proposed chronologically later. Given such a reference, it would have been a lot simpler to refer to that concept without it seeming like special pleading here.

It's mentioned in passing in the "Technical Explanation" (but yes, not a full independently-linkable post):

Hmm, I'm not sure that point is sufficiently (a) widely applicable and (b) insightful that it would merit its own post. Perhaps I'm being unimaginative though?

There's certainly a tradeoff involved in using a disputed example as your first illustration of a general concept (here, Bayesian reasoning vs the Traditional Scientific Method).

We require new predictions not because the theory is newer than some other theory it could share predictions with, but because the predictions must come before the experimental results. If we allow theories to rely on the results of already-known experiments, we run into two problems:

Now, if the new theory is a strictly simpler version of an old one - as in "we don't even need X" simpler - then these two problems are nonissues:

So... I will allow it.

That has nothing to do with decoherence. Decoherence is not an automatic outcome of basic QM, so you can't falsify it by falsifying QM; and decoherence of a kind that implies many macroscopic non-interacting worlds is another matter anyway.