The words “falsifiable” and “testable” are sometimes used interchangeably, which imprecision is the price of speaking in English. There are two different probability-theoretic qualities I wish to discuss here, and I will refer to one as “falsifiable” and the other as “testable” because it seems like the best fit.

As for the math, it begins, as so many things do, with:

*P*(*Ai*|*B*) = *P*(*B*|*Ai*)*P*(*Ai*) / Σ*j* *P*(*B*|*Aj*)*P*(*Aj*)

This is Bayes’s Theorem. I own at least two distinct items of clothing printed with this theorem, so it must be important.

To review quickly, *B* here refers to an item of evidence, *Ai* is some hypothesis under consideration, and the *Aj* are competing, mutually exclusive hypotheses. The expression *P*(*B*|*Ai*) means “the probability of seeing *B*, if hypothesis *Ai* is true” and *P*(*Ai*|*B*) means “the probability hypothesis *Ai* is true, if we see *B*.”
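As a quick numeric sketch of the theorem at work (the hypotheses and numbers here are invented for illustration):

```python
def bayes_posterior(priors, likelihoods, evidence):
    # P(Ai|B) = P(B|Ai)P(Ai) / sum_j P(B|Aj)P(Aj)
    numerators = {h: likelihoods[h][evidence] * p for h, p in priors.items()}
    total = sum(numerators.values())  # the denominator: P(B)
    return {h: n / total for h, n in numerators.items()}

# Toy example: two mutually exclusive hypotheses about a coin.
priors = {"fair": 0.5, "two-headed": 0.5}
likelihoods = {"fair": {"heads": 0.5, "tails": 0.5},
               "two-headed": {"heads": 1.0, "tails": 0.0}}

# Seeing heads shifts probability toward the two-headed hypothesis:
print(bayes_posterior(priors, likelihoods, "heads"))  # fair ≈ 1/3, two-headed ≈ 2/3
```

Note that observing tails would drive the two-headed hypothesis to probability zero outright, since *P*(tails|two-headed) = 0.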

The mathematical phenomenon that I will call “falsifiability” is the scientifically desirable property of a hypothesis that it should concentrate its probability mass into preferred outcomes, which implies that it must also assign low probability to some un-preferred outcomes; probabilities must sum to 1 and there is only so much probability to go around. Ideally there should be possible observations which would drive down the hypothesis’s probability to nearly zero: There should be things the hypothesis *cannot* explain, conceivable experimental results with which the theory is *not* compatible. A theory that can explain everything prohibits nothing, and so gives us no advice about what to expect.

In terms of Bayes’s Theorem, if there is at least some observation *B* that the hypothesis *Ai* can’t explain, i.e., *P*(*B*|*Ai*) is tiny, then the numerator *P*(*B*|*Ai*)*P*(*Ai*) will also be tiny, and likewise the posterior probability *P*(*Ai*|*B*). Updating on having seen the impossible result *B* has driven the probability of *Ai* down to nearly zero. A theory that refuses to make itself vulnerable in this way will need to spread its probability widely, so that it has no holes; it will not be able to strongly concentrate probability into a few preferred outcomes; it will not be able to offer precise advice.
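To see this numerically, here is a sketch (with made-up likelihoods, not physics) of a sharp hypothesis that nearly forbids one outcome, next to a vague hypothesis that spreads its probability everywhere:

```python
# A sharp theory concentrates its probability mass; a vague theory hedges.
# (Illustrative numbers only.)
likelihoods = {"sharp": {"predicted": 0.999, "forbidden": 0.001},
               "vague": {"predicted": 0.5, "forbidden": 0.5}}
priors = {"sharp": 0.5, "vague": 0.5}

# Each hypothesis's predictions must sum to 1: concentrating mass on
# "predicted" is exactly what forces "forbidden" to be nearly impossible.
for h in likelihoods:
    assert abs(sum(likelihoods[h].values()) - 1.0) < 1e-9

def update(evidence):
    nums = {h: likelihoods[h][evidence] * priors[h] for h in priors}
    z = sum(nums.values())
    return {h: n / z for h, n in nums.items()}

print(update("forbidden"))  # the sharp theory's probability crashes toward zero
print(update("predicted"))  # the sharp theory is rewarded for its daring
```

The vague hypothesis can never be driven near zero this way, but by the same token it can never be strongly favored either: it made itself invulnerable by refusing to give advice.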

Thus is the rule of science derived in probability theory.

As depicted here, “falsifiability” is something you evaluate by looking at a *single* hypothesis, asking, “How narrowly does it concentrate its probability distribution over possible outcomes? How narrowly does it tell me what to expect? Can it explain some possible outcomes much better than others?”

Is the decoherence interpretation of quantum mechanics *falsifiable*? Are there experimental results that could drive its probability down to an infinitesimal?

Sure: We could measure entangled particles that should always have opposite spin, and find that if we measure them far enough apart, they sometimes have the same spin.

Or we could find apples falling upward, the planets of the Solar System zigging around at random, and an atom that kept emitting photons without any apparent energy source. Those observations would also falsify decoherent quantum mechanics. They’re things that, on the hypothesis that decoherent quantum mechanics governs the universe, we should definitely *not expect* to see.

So there do exist observations *B* whose *P*(*B*|*A*deco) is infinitesimal, which would drive *P*(*A*deco|*B*) down to an infinitesimal.

But that’s just because decoherent quantum mechanics is still quantum mechanics! What about the decoherence part, per se, versus the collapse postulate?

We’re getting there. The point is that I just defined a test that leads you to think about one hypothesis at a time (and called it “falsifiability”). If you want to distinguish decoherence *versus* collapse, you have to think about at least two hypotheses at a time.

Now really the “falsifiability” test is not quite *that* singly focused, i.e., the sum in the denominator has got to contain *some* other hypothesis. But what I just defined as “falsifiability” pinpoints the kind of problem that Karl Popper was complaining about, when he said that Freudian psychoanalysis was “unfalsifiable” because it was equally good at coming up with an explanation for every possible thing the patient could do.

If you belonged to an alien species that had never invented the collapse postulate or Copenhagen Interpretation—if the only physical theory you’d ever heard of was decoherent quantum mechanics—if *all* you had in your head was the differential equation for the wavefunction’s evolution plus the Born probability rule—you would still have sharp expectations of the universe. You would not live in a magical world where anything was probable.

But you could say exactly the same thing about quantum mechanics without (macroscopic) decoherence.

Well, yes! Someone walking around with the differential equation for the wavefunction’s evolution, plus a collapse postulate that obeys the Born probabilities and is triggered before superposition reaches macroscopic levels, still lives in a universe where apples fall down rather than up.

But where does decoherence make a *new* prediction, one that lets us *test* it?

A “new” prediction relative to what? To the state of knowledge possessed by the ancient Greeks? If you went back in time and showed them decoherent quantum mechanics, they would be enabled to make many experimental predictions they could not have made before.

When you say “new prediction,” you mean “new” relative to some other hypothesis that defines the “old prediction.” This gets us into the theory of what I’ve chosen to label *testability*; and the algorithm inherently considers at least two hypotheses at a time. You cannot call something a “*new* prediction” by considering only one hypothesis in isolation.
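The comparison is easiest to see in the odds form of Bayes’s Theorem, sketched here with toy numbers (“deco” and “collapse” are just labels for the two candidate theories; the likelihoods are invented for illustration):

```python
# Odds form of Bayes's Theorem:
#   posterior odds = likelihood ratio * prior odds
prior_odds = 0.5 / 0.5             # P(deco) : P(collapse), before seeing B
likelihood_ratio = 0.75 / 0.25     # P(B|deco) : P(B|collapse) -- they differ, so B is a test
posterior_odds = likelihood_ratio * prior_odds
print(posterior_odds)              # 3.0 -- observing B favors deco 3:1

# If the two theories assign B the *same* probability, the ratio is 1
# and observing B shifts nothing: B is no test at all.
no_test_ratio = 0.5 / 0.5
assert no_test_ratio * prior_odds == prior_odds
```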

In Bayesian terms, you are looking for an item of evidence *B* that favors one hypothesis over the other, distinguishing between them; the process of producing this evidence is what we could call a “test.” You are looking for an experimental result *B* such that

*P*(*B*|*A*deco) ≠ *P*(*B*|*A*collapse)

that is, some outcome *B* which has a different probability, conditional on the decoherence hypothesis being true, versus its probability if the collapse hypothesis is true. Which in turn implies that the posterior odds for decoherence and collapse will become different from the prior odds:

*P*(*A*deco|*B*) / *P*(*A*collapse|*B*) = [*P*(*B*|*A*deco) / *P*(*B*|*A*collapse)] × [*P*(*A*deco) / *P*(*A*collapse)]

This equation is symmetrical (assuming no probability is literally equal to 0). There isn’t one