Excellent post Eliezer. I have just a small quibble: it should be made clear that decoherence and the many-worlds interpretation are logically distinct. Many physicists, especially condensed matter physicists working on quantum computation/information, use models of microscopic decoherence on a daily basis while remaining agnostic about collapse. These models of decoherence (used for so-called "partial measurement") are directly experimentally testable.
Maybe a better term for what you are talking about is macroscopic decoherence. As of right now, no one has ever created serious macroscopic superpositions. Macroscopic decoherence, and hence the many-worlds interpretation, rely on extrapolating from observed microscopic phenomena.
If there's one lesson we can take from the history of physics, it's that every time new experimental "regimes" are probed (e.g. large velocities, small sizes, large mass densities, large energies), phenomena are observed which lead to new theories (special relativity, quantum mechanics, general relativity, and the standard model, respectively). This is part of the reason I find it likely that the peculiar implications of uncollapsed Hermitian evolution are simply the artifacts of using quantum mechanics outside its regime of applicability.
Here at UC Santa Barbara, Dirk Bouwmeester is trying to probe this macroscopic regime by superposing a cantilever that is ~50 microns across--big enough to see with an optical microscope!
rely on extrapolating microscopic observed phenomena.
Surely the prior is that the laws of physics hold at all scales? Why wouldn't you extrapolate? Edit: Just noticed how redundant this comment is..
Is there any reason to believe that something interferes with the physics between "microscopic decoherence" and "macroscopic decoherence" that affects the latter and not the former? I'm just saying because I'm getting strong echoes of the "microevolution vs. macroevolution" misconception - in both cases, people seem to be rejecting the obvious extension of a hypothesis to the human level.
I own at least two distinct items of clothing printed with this theorem, so it must be important.
Isn't this an argumentum ad vestem fallacy?
This is also the fallacy that leads people to take the Pope seriously. (I mean, if it's baculum, where is his political power? Yet I can clearly see his big pointy hat with my own eyes.)
Jess: "Here at UC Santa Barbara, Dirk Bouwmeester is trying to probe this macroscopic regime by superposing a cantilever that is ~50 microns across--big enough to see with an optical microscope!"
I just want to say that sounds like an absolutely awesome experiment. Any info on results so far? (For that matter, how's he doing it in the first place?)
Does the reality of the wavefunction imply MWI? The wavefunction is a function over every possible configuration of the universe. We may still believe that the universe comprises a single point in configuration space, corresponding to a single value of the wavefunction, along with the value of the wavefunction for every other (counterfactual) point in the configuration space. The reality is the particular configuration space point along with the shape of the wavefunction. This does not imply that the wavefunction is a delta-function in configuration space. Other counterfactual configurations may have similar probability amplitudes depending on their "degree of possibility" compared to the existing configuration.
So, decoherence is a valid scientific theory because it makes the same, correct predictions as the one involving collapse, but is simpler.
There, that didn't take 2800 words, now, did it?
Bob: We may still believe that the universe comprises a single point in configuration space, corresponding to a single value of the wavefunction
How is this not immediately ruled out by Bell's Theorem?
@Silas: I've tried just saying that to people, it doesn't work. Doesn't work in academic physics either. Besides which, it may not be the last time the question comes up, and there's no reason why physicists shouldn't know the (epistemic) math.
Bob: We may still believe that the universe comprises a single point in configuration space, corresponding to a single value of the wavefunction
How is this not immediately ruled out by Bell's Theorem?
Bell's Theorem rules out local realism. I'm going with "non-local".
Bob, either I'm missing something, or you are.
If you pick a single point in configuration space in the position basis, nothing has a specified momentum. If you pick a single point in the momentum basis, nothing has a specified position. If you pick a single point in the polarized-45-degrees basis, nothing has a specified 90-degree polarization. Decoherence gives us a preferred basis for our blobs of amplitude but that preferred basis is changing all the time and different for every particle. How's this single-point trick going to work? And what does the epiphenomenal single point do that makes it realer than the causally powerful wavefunction?
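A tiny numerical illustration of this basis-dependence (my own sketch, using a discretized one-dimensional position space): a state that is a single point in the position basis is maximally spread out in the momentum basis.

```python
import numpy as np

N = 64
# A "single point in configuration space" in the position basis:
# all amplitude concentrated on one position eigenstate.
psi_position = np.zeros(N, dtype=complex)
psi_position[10] = 1.0

# Change to the momentum basis (discrete Fourier transform,
# divided by sqrt(N) to make it unitary).
psi_momentum = np.fft.fft(psi_position) / np.sqrt(N)

# In the momentum basis the same state is completely spread out:
# every momentum eigenstate gets equal probability 1/N.
probs = np.abs(psi_momentum) ** 2
print(probs.max(), probs.min())  # both ~ 1/64
```

The same point holds for any pair of conjugate bases: sharpening the state in one necessarily flattens it in the other.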
Bob: That sounds like Bohmian mechanics, which is distinct from either of the interpretations Eliezer has been talking about.
As I understand it, interpretations with actual wavefunctions and collapse still just describe the universe by the wavefunction, it's just that collapse keeps the wave function bunched up so the world can usually be approximately described by a single configuration.
Eliezer: You still calculate the whole wave function, so it's hardly local, and can therefore be a deterministic hidden variable theory that agrees with experiment. I think you just compute an amplitude current from the wave function, and say that the real world or your test particles follow that velocity field in configuration space.
Pretty pictures at http://bohm-mechanics.uibk.ac.at/index.htm
Eliezer: No doubt I am missing a lot. I have the idea of the wavefunction as a real thing, and I am not advocating a collapse interpretation. I am also uncomfortable with any kind of preferred basis. My idea is that the configuration space of the universe is the classical configuration space, but that its evolution is determined by the wavefunction over the quantum mechanical configuration space (in whatever basis you choose). So a point-particle has a real momentum and a real position, which are not simultaneously measurable. For electromagnetism, the electric and magnetic fields have actual values, which are also not simultaneously measurable. The fields evolve continuously but non-deterministically in accordance with the evolution of the wavefunction. There are still blobs of amplitude in configuration space, but only one point in one of those blobs is the real configuration.
Anonymous: I'll look at your reference to refresh my memory of Bohm. The last I heard, there were problems with relativistic versions of that theory.
electric and magnetic fields have actual values, which are also not simultaneously measurable.
Actually, they are two aspects of the same electromagnetic field, so if you know one, you can determine the other... they are definitely NOT conjugate variables (pairs of variables that cannot be measured at the same time).
Psy-Kosh: It is an awesome experiment. Here are links to Bouwmeester's home page, the original proposal, and the latest update on cooling the cantilever. (Bouwmeester has perhaps the most annoying web interface of any serious scientist. Click in the upper left on "research" and then the lower right on "macroscopic quantum superposition". Also, the last article appeared in Nature and may not be accessible without a subscription.)
Obviously, this is a very hard experiment and success is not assured.
Also, you might be interested to know that at least one other group, Jack Harris's at Yale, is doing similar work.
Psy-Kosh: Oh, I almost forgot to answer your questions. Experimental results are still several years distant. The basic idea is to fabricate a tiny cantilever with an even tinier mirror attached to its end. Then, you position that mirror at one end of a photon cavity (the other end being a regular fixed mirror). If you then send a photon into the cavity through a half-silvered third mirror--so that it will be in a superposition of being in and not in the cavity--then the cantilever will be put into a correlated superposition: it will be vibrating if the photon is in the cavity and it will be still if the photon is not. Of course, the really, really super-hard part is getting all this to happen without the state decohering before you see anything interesting.
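For what it's worth, the correlated superposition described above can be sketched as a toy two-level model (a hypothetical simplification; the real cantilever has far more degrees of freedom than a single "vibrating/still" qubit):

```python
import numpy as np

# Toy model: photon (in cavity / not in cavity) tensored with
# cantilever (vibrating / still). The half-silvered mirror puts the
# photon into an equal superposition, and the interaction correlates
# the cantilever with it, giving the entangled state
#   (|in>|vibrating> + |not-in>|still>) / sqrt(2).
ket_in_vibrating = np.kron([1, 0], [1, 0])  # joint basis state |in, vibrating>
ket_out_still    = np.kron([0, 1], [0, 1])  # joint basis state |out, still>
psi = (ket_in_vibrating + ket_out_still) / np.sqrt(2)

# Probabilities of the four joint outcomes: only the two correlated
# pairs appear, each with probability 1/2; the "photon in cavity but
# cantilever still" outcomes have amplitude zero.
print(np.abs(psi) ** 2)
```

The experimental challenge is precisely to verify interference between the two branches of this state before environmental decoherence destroys it.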
Robin Z: The motivation for suspecting that something funny happens as you try to scale up decoherence to full-blown many-worlds comes from the serious problems that many-worlds has. Beyond the issue of predicting the Born postulate, there are serious conceptual problems with defining individual worlds, even emergently.
The motivation for doing this experiment is even clearer: (1) The many-worlds interpretation is a fantastically profound statement about our universe and therefore demands that fantastic experimental work be done to confirm it as well as possible. (For instance, despite the fact that I very confidently expect Bell's inequality to continue to hold after each tenuous experimental loophole is closed, I still consider it an excellent use of my tax dollars that these experiments continue to be improved.) (2) Fundamentally new regimes in physics should always be probed, especially at this daunting time in the history of physics, when we seem able to predict nearly everything we see around us but unable to extend our theories to in-principle testable but currently inaccessible regimes. (3) It's just plain cool.
But considering this experiment with the 50 micron cantilever, suppose they are successful in putting it into a superposition and verifying that. That will be a fine piece of work and will gain well deserved approval. But suppose OTOH they are able to show somehow that superposition fails once we hit 50 microns (or equivalent mass)! They show that the cantilever does not obey the equations of QM! That would be earth-shaking news, Nobel Prize caliber work. It would be the most important physics discovery of the 21st century, so far, and maybe for the whole rest of it. It would require scientists to go back to square one in their understanding of the fundamental laws of the universe.
If I am right (and truthfully I am just speculating), this suggests that in their hearts, scientists really do believe what Eliezer describes, that the laws of QM apply to 50 micron cantilevers as well as much more besides. They may not be comfortable with the whole many-worlds picture, but they solve that by just not thinking about it. The prospect of actually discovering a level at which QM stops working is something which would have to be viewed as highly unlikely, in the context of current understanding.
Robin Z: The motivation for suspecting that something funny happens as you try to scale up decoherence to full-blown many-worlds comes from the serious problems that many-worlds has. Beyond the issue of predicting the Born postulate, there are serious conceptual problems with defining individual worlds, even emergently.
Enough said - I withdraw my implied objection. I, too, hope the experiment you refer to will provide new insight.
Jess: That certainly is an interesting web interface.
But yeah, thanks lots for the info, really cool. I want results right now though! Bouncies (well, i/sqrt(2)|bouncies> + 1/sqrt(2)|sits-still> )
From a cursory look at the info there, I didn't quite see or grasp how they plan on detecting it, though. I.e., after they put it in an oscillating/not-oscillating mode, do they then bounce another photon off of it and analyze its path to see if the path would have to be the result of a superposition of the different possible states of the mirror? Or something completely different?
Roger Penrose predicts that attempts to create macroscale quantum superposition will fail because gravity will keep things with too much energy from being in two places at once. He's a bit of a crackpot when it comes to quantum gravity, but it'll be interesting to see him proven wrong. ;)
(Similarly, the occasional crackpot theory suggests that the "fair sampling assumption" of EPR tests should be systematically violated, and that our ability to make "better photon detectors" should hit a limit.)
Oh, and if in this many-worlds interpretation photons would have to appear opposite to what is detected in our world, then when the experiment is over and the experimenters leave in opposite directions, does that mean the experimenters on the other side continuously crash into each other?
Both collapse and MWI have an "it happens for an instant" quality. As soon as the experiment is over, they go back into the box, like the Rosicrucians do with God's angels.
Wait, it gets better. If there is a probability of your mother's getting pregnant here, shouldn't there be the opposite effect, such that your double couldn't have been born?
Since both you and he are around that means the other world only begins with the original photons flying apart.
MWI has both local extent, good, and divergent local behavior at every point in space, pathological. It requires a disjunction between neighborhood elements, since the results have to be complementary. The fabric of complementary MWI layers has an increasing tendency to explode the minute something happens in its partner. Which would suggest that collapse is the true picture, but we already know it's a collapse of a description, not of the event. This is a bit like how water waves travel but no water molecule actually goes beyond the next crest. The collapse is a figment and MWI is unstable. My God, what have we done to reason?
I am not sure you really understand the notion of understanding a theory, and I cannot readily discern what exactly you are trying to say with your comment. Have you examined your reasons for believing what you believe? Have you looked inside your associated mental black boxes? Have you read a text book? Have you talked to a MacroDeco adherent? Have you talked to a rugged and experienced QM scientist? Do you have a better theory? Can you simulate that better theory on a computer?
"If P(Y|X) ≈ 1, then P(X∧Y) ≈ P(X). Which is to say, believing extra details doesn't cost you extra probability when they are logical implications of general beliefs you already have."
Shivers went down my spine when I read that; this is the first time that I actually looked at a formula and really saw what it meant. Ah, maths. Thank you, Yu-el.
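A quick numeric sketch of the identity (the numbers are arbitrary):

```python
# P(X AND Y) = P(X) * P(Y|X): when P(Y|X) is close to 1, believing
# the extra detail Y costs almost no probability beyond P(X).
p_x = 0.3
p_y_given_x = 0.999
p_x_and_y = p_x * p_y_given_x

print(round(p_x_and_y, 6))        # 0.2997, barely below P(X) = 0.3
print(round(p_x - p_x_and_y, 6))  # the cost: P(X) * (1 - P(Y|X)) = 0.0003
```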
I disagree on five points. The first is my conclusion too; the second leads to the third and the third explains the fourth. The fifth one is the most interesting.
1) In contrast with the title, you did not show that the MWI is falsifiable or testable; I know the title mentions decoherence (which is falsifiable and testable), but decoherence is very different from the MWI, and for the rest of the article you talked about the MWI while calling it decoherence. You just showed that MWI is "better" according to your "goodness" index, but that index is not so good. Also, the MWI is not at all a consequence of the superposition principle: it is rather an ad-hoc hypothesis made to "explain" why we don't experience macroscopic superposition, even though we would expect it because macroscopic objects are made of microscopic ones. But, as I will mention in the last point, the superposition of macroscopic objects is not an inevitable consequence of the superposition principle applied to microscopic objects.
2) You say that postulating a new object is better than postulating a new law: so why teach Galilean relativity by postulating its transformations, when they could be derived as the special case of the Lorentz transformations for slow speeds? The answer is that they are just models, which have to be easy enough for us to understand: in order to understand relativity well you first have to understand non-relativistic mechanics, and you can only do that by observing and measuring slow objects and then making the simplest theory which describes them (i.e., postulating the shortest mathematical rules experimentally compatible with the "slow" experiences: Galileo's); THEN you can proceed to something more difficult and more accurate, postulating new rules to get a refined theory. You calculate the probability of a theory and use this as an index of its "truthness", but that's confusing reality with the model of it. You can't measure how "true" a theory is; maybe there is no "Ultimate True Theory": you can just measure how effective and clean a theory is at describing reality and being understood. So, to index how good a theory is, you should instead calculate the probability that a person understands that theory and uses it to correctly make anticipations about reality: that means P(Galileo) >> P(first Lorentz, then show Galileo as a special case); and also P(first Galileo, then Lorentz) != P(first Lorentz, then Galileo), because you can't expect people to be perfect rationalists: they can only be as rational as possible. The model is just an approximation of reality, so you can't force the reality of people into the "perfect rational person" model; you have to take into account that nobody's perfect.
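The pedagogical point about Galileo's transformations being the slow-speed special case can be made concrete in a few lines (my own sketch, with illustrative numbers):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz(x, t, v):
    """Lorentz transformation of (x, t) into a frame moving at velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / C**2)

def galileo(x, t, v):
    """Galilean transformation: the slow-speed special case."""
    return x - v * t, t

# At everyday speeds the two agree to extraordinary precision,
# which is why the simpler rules are the right ones to learn first.
x, t, v = 1000.0, 2.0, 30.0   # 30 m/s, roughly highway speed
print(lorentz(x, t, v))
print(galileo(x, t, v))
```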
3) Because nobody's perfect, you must take the needed RAM into account too. You said in the previous post that "Occam's Razor was raised as an objection to the suggestion that nebulae were actually distant galaxies—it seemed to vastly multiply the number of entities in the universe", in order to justify that the RAM accounting is irrelevant. But that argument is not valid: we rejected the hypothesis that nebulae are distant galaxies not because Occam's Razor is irrelevant, but because we measured their distance and found that they are inside our galaxy; without this information, the simpler hypothesis would be that they are distant galaxies. Occam's Razor IS relevant not only to laws but to objects too. Yes, given a limited amount of information, it could shift you toward a "simpler yet wrong model", but it doesn't annihilate the probability of the "right" model: with new information you would find out that you were previously wrong. But how often does Occam's Razor induce you to neglect a good model, as opposed to how often it lets us neglect bad models? Also, Occam's Razor may mislead you not only when applied to objects but when applied to laws too, so your argument discriminating Occam's Razor's applicability doesn't stand.
4) The collapse of the wave function is a way to represent a fact: if a microscopic system S is in an eigenstate of some observable A and you measure on S an observable B which does not commute with A, your apparatus doesn't end up in a superposition of states but gives you a unique result, and the system S ends up in the eigenstate of B corresponding to the result the apparatus gave you. That's the fact. As the classical behavior of macroscopic objects and the stochastic, irreversible collapse seem in contradiction with the linearity, predictability and reversibility of the Schrödinger equation ruling microscopic systems, it appears as if there is an uncomfortable demarcation line between microscopic and macroscopic physics. So attempts have been made either to find this demarcation line, or to show a mechanism for the emergence of classical behavior from quantum mechanics, or to solve or formalize this problem somehow. The Copenhagen interpretation (CI) just says: "there are classically behaving macroscopic objects and quantum-behaving microscopic ones; the interaction of a microscopic object with a macroscopic apparatus causes the stochastic and irreversible collapse of the wave function, whose probabilities are given by the Born rule; now shut up and do the math". It is a rather unsatisfactory answer, primarily because it doesn't explain what gives rise to this demarcation line or where it should be drawn; but it is indeed useful for representing effectively the results of typical educational experiments, where the difference between "big" and "small" is in no way ambiguous, and it allows you to familiarize yourself quickly with the bra-ket math. The Many Worlds Interpretation (MWI) just says: "there is indeed superposition of states at the macroscopic scale too, but this is not seen because the other parts of the wave function stay in parallel invisible universes".
Now imagine Einstein had not developed General Relativity, but we had anyway developed the tools to measure the precession of Mercury and had to face the inconsistency with our predictions from Newton's Laws: the analogue of the CI would be "the orbit of Mercury is not the one anticipated by Newton's Laws but this other one; now if you want to calculate the transits of Mercury as seen from the Earth for the next million years you gotta do THIS math and shut up"; the analogue of the MWI would be something like "we expect the orbit of Mercury to precess at this rate X but we observe this rate Y; well, there is another parallel universe in which the precession rate of Mercury is Z such that the average between Y and Z is the expected X due to our beautiful indefeasible Newton's Laws". Both are unsatisfactory curiosity-stoppers, but the first one avoids introducing new objects. The MWI, instead, while explaining exactly the same experimental results, introduces not only other universes: it also introduces the concept itself that there are other universes, which proliferate at each electron's cough attack. And it does so just for the sake of the human pursuit of beauty and loyalty to a (yes, beautiful, but that's not the point) theory.
5) You talk of MWI and of decoherence as if they are the same thing, but they are quite different. Decoherence is about the loss of coherence that a microscopic system (an electron, for instance) experiences when interacting with a macroscopic chaotic environment. As this sounds rather relevant to the demarcation line and the interaction between microscopic and macroscopic, it has been suggested that maybe these are related phenomena; that is: maybe the classical behavior of macroscopic objects and the collapse of the wave function of a microscopic object interacting with a macroscopic apparatus are emergent phenomena, which arise from microscopic quantum behavior through some interaction mechanism. Of course this is not an answer to the problem: it is just a road to be walked in order to find a mechanism, and we still have to find it. As you say, "emergence" without an underlying mechanism is like "magic". Anyway, decoherence has nothing to do with MWI, though both try (or pretend) to "explain" the (apparent?) collapse of the wave function. In the last decades decoherence has been probed and the results look promising. Though I'm not an expert in the field, I took a course about it last year and gave a seminar as the exam for the course, describing the results of an article I read (http://arxiv.org/abs/1107.2138v1). They presented a toy model of a Curie-Weiss apparatus (a magnet in a thermal bath), prepared in an initially isotropic metastable state, measuring the z-axis spin component of a spin-1/2 particle through induced symmetry breaking. Though I wasn't totally persuaded by the Hamiltonian they wrote and I'm sure there are better toy models, the general ideas behind it were quite convincing.
In particular, they computationally showed HOW the stochastic, indeterministic collapse can emerge from just: a) Schrödinger's equation; b) statistical effects due to the "large size" of the apparatus (a magnet composed of a large number N of elementary magnets, coupled to a thermal bath); c) an appropriate initial state of the apparatus. They postulated neither new laws nor new objects: they just made a model of a measurement apparatus within the framework of quantum mechanics (without the collapse postulate) and showed how the collapse naturally arose from it. I think that's a pretty impressive result, worthy of further research, more than the MWI. This explains the collapse without postulating it, and without postulating unseen worlds.
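The general mechanism, coherence between macroscopically distinct states being suppressed as the apparatus/environment grows, can be illustrated with a far cruder pure-dephasing toy (my own sketch, not the Curie-Weiss model of the cited paper):

```python
import math
import random

random.seed(0)

def coherence(n_env):
    """Off-diagonal element |rho_01| of a qubit dephased by n_env
    environment spins, each multiplying the coherence by a random
    factor cos(g_k). This is a standard pure-dephasing toy model:
    diagonal probabilities are untouched, interference terms die."""
    c = 1.0
    for _ in range(n_env):
        c *= math.cos(random.uniform(0.0, 1.0))
    return abs(c)

# Coherence survives for a small environment but is crushed
# (roughly exponentially) as the environment grows.
for n in (1, 10, 100, 1000):
    print(n, coherence(n))
```

Even this trivial model shows why superpositions of a microscopic system are easy to keep while superpositions entangled with a large apparatus look, for all practical purposes, collapsed.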
You can't measure how "true" a theory is; maybe there is no "Ultimate True Theory": you can just measure how effective and clean a theory is at describing reality and being understood.
Do you have some notion of the truth of a statement, other than effectively describing reality? If so, I would very much like to hear it.
No, I don't: actually we probably agree about that; with that sentence I was just trying to underline the "being understood" requirement for an effective theory. That was meant to introduce my following objection, that the order in which you teach or learn two facts is not irrelevant. The human brain has memory, so a Markovian model of the effectiveness of theories is too simple.
I doubt that you will be successful in convincing EY of the non-privileged position of the MWI. Having spent a lot of time, dozens of posts and tons of karma on this issue, I have regretfully concluded that he is completely irrational with regard to instrumentalism in general and QM interpretations in particular. In his objections he usually builds and demolishes a version of a straw Copenhagen, something that, in his mind, violates locality/causality/relativity.
One would expect that, having realized that he is but a smart dilettante in the subject matter, he would at least allow for the possibility of being wrong, alas it's not the case.
In contrast with the title, you did not show that the MWI is falsifiable or testable.
I agree that he didn't show it to be testable, but rather showed the possibility of it (and the formalization of that possibility).
You just showed that MWI is "better" according to your "goodness" index, but that index is not so good
There's a problem with choosing the language for Solomonoff/MML, so the index's goodness can be debated. However, I think the index is sound in general.
You calculate the probability of a theory and use this as an index of its "truthness", but that's confusing reality with the model of it.
I don't think he's saying that theories fundamentally have probabilities. Rather, as a Bayesian, he assigns a prior to each theory. As evidence accumulates, the right theory gets updated and its probability approaches 1.
The reason human understanding can't be part of the equations is, as EY says, shorter "programs" are more likely to govern the universe than longer "programs," essentially because these "programs" are more likely to be written if you throw down some random bits to make a program that governs the universe.
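A sketch of the length prior behind this argument (assuming, as the coin-flip picture does, that program bits are generated uniformly at random):

```python
# If the bits of a program are generated by fair coin flips, a
# specific n-bit program has prior probability 2^-n, so every extra
# bit of description length halves a theory's prior.
def length_prior(n_bits):
    return 2.0 ** (-n_bits)

print(length_prior(10))                       # 0.0009765625
print(length_prior(11))                       # half of the above
print(length_prior(10) / length_prior(11))    # 2.0
```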
So I don't buy your arguments in the next section.
But that argument is not valid: we rejected the hypothesis that nebulae are distant galaxies not because Occam's Razor is irrelevant, but because we measured their distance and found that they are inside our galaxy; without this information, the simpler hypothesis would be that they are distant galaxies.
EY is comparing the angel explanation with the galaxies explanation; you are supposed to reject the angels and usher in the galaxies. In that case, the anticipations are truly the same. You can't really prove whether there are angels.
But how often does Occam's Razor induce you to neglect a good model, as opposed to how often it lets us neglect bad models?
What do you mean by "good"? Which one is "better" out of 2 models that give the same prediction? (By "model" I assume you mean "theory")
but it is indeed useful for representing effectively the results of typical educational experiments, where the difference between "big" and "small" is in no way ambiguous, and it allows you to familiarize yourself quickly with the bra-ket math.
You admit that Copenhagen is unsatisfactory but it is useful for education. I don't see any reason not to teach MWI in the same vein.
Now imagine Einstein had not developed General Relativity, but we had anyway developed the tools to measure the precession of Mercury and had to face the inconsistency with our predictions from Newton's Laws: the analogue of the CI would be "the orbit of Mercury is not the one anticipated by Newton's Laws but this other one; now if you want to calculate the transits of Mercury as seen from the Earth for the next million years you gotta do THIS math and shut up"; the analogue of the MWI would be something like "we expect the orbit of Mercury to precess at this rate X but we observe this rate Y; well, there is another parallel universe in which the precession rate of Mercury is Z such that the average between Y and Z is the expected X due to our beautiful indefeasible Newton's Laws".
If indeed the expectation value of observable V of Mercury is X but we observe Y with Y ≠ X (that is to say, the variance of V is nonzero), then there isn't a determinate formula for predicting V exactly in your first Newton/random-formula scenario. At the same time, someone who holds the Copenhagen interpretation would have the same expectation value X, but instead of saying there's another world he says there's a wave-function collapse. I still think that the parallel world is a deduced result of the universal wave function, superposition, decoherence, etc., which Copenhagen also recognizes. So the Copenhagen view essentially says: "actually, even though the equations say there's another world, there is none, and on top of that we are gonna tell you how this collapsing business works". This extra sentence is what causes the Razor to favor MWI.
Much of what you are arguing seems to stem from your dissatisfaction of the formalization of Occam's Razor. Do you still feel that we should favor something like human understanding of a theory over the probability of a theory being true based on its length?
You admit that Copenhagen is unsatisfactory but it is useful for education. I don't see any reason not to teach MWI in the same vein.
Because it sets people up to think that QM can be understood in terms of wavefunctions that exist and contain parallel realities; yet when the time comes to calculate anything, you have to go back to Copenhagen and employ the Born rule.
Also, real physics is about operator algebras of observables. Again, this is something you don't get from pure Schrodinger dynamics.
QM should be taught in the Copenhagen framework, and then there should be some review of proposed ontologies and their problems.
There's a problem with choosing the language for Solomonoff/MML, so the index's goodness can be debated. However, I think the index is sound in general.
When I hear about Solomonoff Induction, I reach for my gun :)
The point is that you can't use Solomonoff Induction or MML to discriminate between interpretations of quantum mechanics: these are formal frameworks for inductive inference, but they are underspecified and, in the case of Solomonoff Induction, uncomputable.
Yudkowsky and other people here seem to use the terms informally, which is a usage I object to: it's just a fancy way of saying Occam's razor, and it's an attempt to make their arguments more compelling than they actually are by dressing them in pseudomathematics.
The reason human understanding can't be part of the equations is, as EY says, shorter "programs" are more likely to govern the universe than longer "programs," essentially because these "programs" are more likely to be written if you throw down some random bits to make a program that governs the universe.
That assumes that Solomonoff Induction is the ideal way of performing inductive reasoning, which is debatable. But even assuming that, and ignoring the fact that Solomonoff Induction is underspecified, there is still a fundamental problem:
The hypotheses considered by Solomonoff Induction are probability distributions on computer programs that generate observations, how do you map them to interpretations of quantum mechanics?
What program corresponds to Everett's interpretation? What programs correspond to Copenhagen, objective collapse, hidden variable, etc.?
Unless you can answer these questions, any reference to Solomonoff Induction in a discussion about interpretations of quantum mechanics is a red herring.
So the Copenhagen view essentially says "actually, even though the equations say there's another world, there is none, and on top of that we are gonna tell you how this collapsing business works". This extra assertion is what causes the Razor to favor MWI.
Actually Copenhagen doesn't commit to collapse being objective. People here seem to conflate Copenhagen with objective collapse, which is a popular misconception.
Objective collapse interpretations generally predict deviations from standard quantum mechanics in some extreme cases, hence they are in principle testable.
Eliezer's mistake here was that he didn't, before the QM sequence, write a general post to the effect that you don't have an additional Bayesian burden of proof if your theory was proposed chronologically later. Given such a reference, it would have been a lot simpler to refer to that concept without it seeming like special pleading here.
It's mentioned in passing in the "Technical Explanation" (but yes, not a full independently-linkable post):
Humans are very fond of making their predictions afterward, so the social process of science requires an advance prediction before we say that a result confirms a theory. But how humans may move in harmony with the way of Bayes, and so wield the power, is a separate issue from whether the math works. When we’re doing the math, we just take for granted that likelihood density functions are fixed properties of a hypothesis and the probability mass sums to 1 and you’d never dream of doing it any other way.
Hmm, I'm not sure that point is sufficiently (a) widely applicable and (b) insightful that it would merit its own post. Perhaps I'm being unimaginative though?
There's certainly a tradeoff involved in using a disputed example as your first illustration of a general concept (here, Bayesian reasoning vs the Traditional Scientific Method).
We require new predictions not because the theory is newer than some other theory it could share predictions with, but because the predictions must come before the experimental results. If we allow theories to rely on the results of already-known experiments, we run into two problems:
Now, if the new theory is a strictly simpler version of an old one - as in "we don't even need X" simpler - then these two problems are nonissues:
So... I will allow it.
Is the decoherence interpretation of quantum mechanics falsifiable? Are there experimental results that could drive its probability down to an infinitesimal?
Sure: We could measure entangled particles that should always have opposite spin, and find that if we measure them far enough apart, they sometimes have the same spin.
That has nothing to do with decoherence. Decoherence is not an automatic outcome of basic QM, so you can't falsify it by falsifying QM; and decoherence of a kind that implies many macroscopic non-interacting worlds is another matter anyway.
Just one quick note: this formulation of Bayes' theorem implicitly assumes that the A_j are not only mutually exclusive, but cover the entire theory space we consider - their total probability is assigned a value of 1.
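To spell out the assumption (my notation, not quoted from the post): the usual expansion of the evidence term over a partition only holds when the A_j are pairwise disjoint and exhaustive, so their probabilities sum to 1:

```latex
P(A_i \mid B)
  = \frac{P(B \mid A_i)\, P(A_i)}
         {\sum_j P(B \mid A_j)\, P(A_j)},
\qquad \sum_j P(A_j) = 1,
\quad P(A_j \cap A_k) = 0 \ \ (j \neq k)
```

If the A_j leave out part of the hypothesis space, the denominator underestimates P(B) and every posterior comes out inflated.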
I think you are slightly misrepresenting the pro-objective-collapse position. A collapser believes in collapse not because the many-worlds interpretation seems too bizarre to be true, but simply because, for him, it is an experimental fact -- the evidence B. To be more precise: it is a fact that he (his consciousness, soul, etc.) directly observes that the cat is dead, which means the state is somehow selected. For him, the real question is why this particular state is realized and why he experiences it.
Of course, one could argue that the state is not preferential, since his quantum clone observes the cat as alive. But then, why is he not his quantum clone? One could respond with something like "by definition," "because you are who you are," "this is just a semantic issue," or "it all sums up to normality," but I think such explanations are perceived by him as mere curiosity stoppers, because they do not help concentrate the probability mass in any way.
By the way, I am not a collapser -- which is why I am using "he" instead of "I" -- just pointing out that your criticism addresses a different and much weaker argument than the one typically held by those who believe in collapse.