Common sense quantum mechanics


43 comments

I have no more than glanced at the paper. The following may therefore be a dumb question, in which case I apologize.

It seems as if one of the following must be true. Which?

1. The arguments of this paper show that classical mechanics could never really have been a good way to model the universe, even if the universe had in fact precisely obeyed the laws of classical mechanics.

2. The arguments of this paper show that actually there's some logical incoherence in the very idea of a universe that precisely obeys the laws of classical mechanics.

3. The arguments of this paper don't apply to a universe that precisely obeys the laws of classical mechanics, because of some assumption it makes (a) explicitly or (b) implicitly that couldn't be true in such a universe.

4. The arguments of this paper don't exclude a classical universe either by proving it impossible or by making assumptions that already exclude a classical universe, and yet they somehow don't show that such a universe has to be modelled quantumly.

5. I'm confused.

What would you mean by incoherence, though?

There are plenty of possible cellular automata that are neither quantum mechanical nor relativistic, and it's not too hard to construct one that would approximate classical mechanics at the macroscopic scale but wouldn't in any way resemble quantum mechanics as we know it at the microscale*, nor be relativistic.

* caveat: you could represent classical behaviour within a quantum mechanical framework, it's just that you wouldn't want to.

1 would disappoint me. 2 would surprise me but (for reasons resembling yours) not astonish me. 3 would be the best case and I'd be interested to know what assumptions. (The boundary between 2 and 3 is fuzzy. A nonrelativistic universe with electromagnetism like ours has problems; should "electromagnetism like ours" be considered part of "the very idea" or a further "assumption"?) 4 and 5 would be very interesting but (kinda obviously) I don't currently see how either would work.

I certainly would not rule out number 5 ;) As for 3, the arguments seem to apply to any universe in which you can carry out a reproducible experiment. However, in a "classical universe" everything is, in principle, exactly knowable, and so you just don't *need* a probabilistic description.

Unless there is limited information, in which case you use statistical mechanics. With perfect information you know which microstate the system is in, the evolution is deterministic, there is no entropy (a macrostate concept), hence no second law, etc. Only when you have imperfect information -- an ensemble of possible microstates, a macrostate -- does mechanics "become" statistical.

Using probabilistic logic in a situation where classical logic applies is either overkill or underconfidence.

In case it's less than perfectly clear, I am very much not ruling out number 5; that's why it's there. But for obvious reasons there's not much I can say about how it might be true and what the consequences would be.

Even in a classical universe your knowledge is always going to be incomplete in practice. (Perfectly precise measurement is not in general possible. Your brain has fewer possible states than the whole universe. Etc.) So probabilistic reasoning, or something very like it, is inescapable even classically. Regardless, though, it would be pretty surprising to me if mere "underconfidence" (supposing it to be so) required a quantum [EDITED TO ADD: model of the] universe.

I'm not sure if we can say much about a classical universe "in practice" because in practice we do not live in a classical universe. I imagine you could have perfect information if you looked at some simple classical universe from the outside.

For classical universes with complete information you have Newtonian dynamics. For classical universes with incomplete information about the state you can still use Newtonian dynamics but represent the state of the system with a probability distribution. This ultimately leads to (classical) statistical mechanics. For universes with incomplete information about the state *and* about its evolution ("category 3a" in the paper) you get quantum theory.

[Important caveat about classical statistical mechanics: it turns out to be a problem to formulate it without assuming some sort of granularity of phase space, which quantum theory provides. So it's all pretty intertwined.]

As far as I can tell, it's highly misleading for laymen. The postulates, as verbally described ("reproducible" is the worst offender by far), look generic and innocent - like something you'd reasonably expect of any universe you could figure out - but as mathematically introduced, they constrain the possible universes far more severely than their verbal description would.

In particular, one could have a universe where the randomness arises from the fine position of the sensor - you detect the particle if some form of binary hash of the bitstring of the position of the sensor is 1, and don't detect when the hash is 0. The experiments in that universe look like reproducible probability of detecting the particle, rather than non-reproducible (due to sensitivity to position) detection of the particle. Thus "reproducible" does not constrain us to the universes where the experiments are non-sensitive to small changes.
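The toy mechanism described here is easy to sketch in a few lines (the hash function and position scale below are invented purely for illustration):

```python
import hashlib
import random

def detects(position_nm: int) -> bool:
    # Toy rule: detection is a deterministic binary hash of the
    # sensor's fine-grained position (an invented mechanism, not QM).
    digest = hashlib.sha256(str(position_nm).encode()).digest()
    return digest[0] & 1 == 1

random.seed(0)
trials = 10_000
# jitter the sensor position wildly between trials
hits = sum(detects(random.randrange(10**9)) for _ in range(trials))
print(hits / trials)  # stable near 0.5 across reruns
```

Each individual outcome flips chaotically with position, yet the observed frequency is reproducible run after run - exactly the "reproducible probability" in the comment above.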

Position of anything is not known exactly.

The point is, they say in their verbal description something like "reproducible" and then in the math they introduce a very serious constraint on what happens if you move a detector a little bit, or they introduce rotational symmetry, or the like. As far as looking at the words could tell, they're deriving the fundamental laws from the concept of "reproducible".

But what they really do is put the rabbit into the hat and then pull it back out.

Which is obvious, even. There's a lot of possible universes which are reproducible and have some uncertainty, where QM is entirely useless, and those aren't going to be rendered impossible by a few tricks. It could be very worthwhile to work out what is the minimal set of assumptions necessary - much more worthwhile than trying to pretend that this set is smaller than it is.


In short, they mostly seem far-fetched to me, probably due to a superficial reading of the paper (as Mitchell_Porter admits). For example:

I also noticed that the authors were talking about "Fisher information". This was unsurprising, there are other people who want to "derive physics from Fisher information"

The Fisher information in this paper arises automatically at some point and is only noted in passing. There is no more a derivation *from* Fisher information than there is from the wavefunction.

they describe something vaguely like an EPR experiment ... a similarly abstracted description of a Stern-Gerlach experiment

The vagueness and abstraction are required to (1) precisely define the terms (2) under the most general conditions possible, i.e., the minimum information sufficient to define the problem. This is completely in line with Jaynes' logic that the prior should include all the information that we have and no other information (the maximum entropy principle). If you have some more concrete information about the specific instance of Stern-Gerlach experiment you are running then by all means you should include it in your probability assignment.

They make many appeals to symmetry, e.g. ... that the experiment will behave the same regardless of orientation. Or ... translational invariance.

Again, a reader who is familiar with Jaynes will immediately recognize here the principle of transformation groups (extension of principle of indifference). If nothing about the problem changes upon translation/rotation then this fact must be reflected in the probability distribution.

hope that some coalition of Less Wrong readers, knowing about both probability and physics, will have the time and the will to look more closely, and identify specific leaps of logic, and just what is actually going on in the paper

- in fact this is what I was trying to do here.

The Born rule that is so puzzling for MWI results from the particular mathematical form of this functional substitution.

It's not MORE puzzling in MWI. It's just that under MWI you have enough reason to suspect that it ought to be the case that you're posed with the puzzle of whether you can actually prove it. Under not-MWI, you have to import it whole cloth, which may feel less puzzling since we aren't so close to an answer.

~~~~

I find this an interesting notion, but I'm not sure quite what it means. This isn't an ontology. It provides no mechanism that would justify the relevance of its assumptions.

I'm not sure "not-MWI" is a single coherent interpretation :) Under Copenhagen, for example, the Born rule has to be postulated. The present paper

does not support the Copenhagen interpretation (in any form)

MWI also postulates it, see V_V's comment.

As for the paper's assumptions, they seem to be no different than the assumptions of normal probabilistic reasoning as laid out by Cox/Polya/Jaynes/etc., with all that ensues in regard to relevance.

(edit: formatting)

under MWI the observation probabilities are indexical, but still MWI doesn't give an explanation of why these probabilities must have the specific values computed by Born rule and not some other values. Thus, MWI assumes the Born rule as an axiom, just like most other interpretations of quantum mechanics.

If you suppose that branching ought to act like probability, then the Born rule follows directly (as pointed out by Born himself in the original paper, and reproduced here by me several times). This is not the challenge for MWI. The problem is getting from wavefunction realism to the notion that we ought to treat branching like probability with any sort of function at all.

Luke, please correct me if I'm misunderstanding something.

The rule follows directly if you require that the wavefunction behaves like a "vector probability". Then you look for a measure that behaves like probability should (basically, nonnegative and adding up to 1). And you find that for this the wavefunction should be complex-valued and the probability should be its squared amplitude. You can also show that anything "larger" than complex numbers (e.g. quaternions) will not work.

But, as you said, the question is not how to derive the Born rule from "vector probability", but rather why would we make the connection of wavefunction with probability in the first place (and why the former should be vector rather than scalar). And in this respect I find the exposition that starts from probability and gets to the wavefunction very valuable.

The two requirements are that it be on the domain of probabilities (reals on 0-1), and that they nest properly.

Quaternions would be OK as far as the Born rule is concerned - why not? They have a magnitude too. If we run into trouble with them, it's with some other part of QM, not the Born rule (and I'm not entirely confident that we do - I have hazy recollection of a formulation of the Dirac equation using quaternions instead of complex numbers).

Here are some nice arguments about different what-if/why-not scenarios, not fully rigorous but sometimes quite persuasive: http://www.scottaaronson.com/democritus/lec9.html

I'm not so sure that this is actually true. It has been shown that, given a fairly minimal set of constraints that don't mention probability, decision-makers in an MWI setting maximise expected utility, where the expectation is given with respect to the Born rule: http://arxiv.org/abs/0906.2718

Nice paper, thanks for linking it.

The quantum representation theorem is interesting, however, I don't think it really proves the Born rule.

If I understand correctly, it effectively assumes it (eq. 13, 14, 15) and then proves that, given any preference ordering consistent with the "richness" and "rationality" axioms, there is a utility function such that its expectation w.r.t. the Born probabilities represents that ordering.

But the same applies to any other probability distribution, as long as it has its support inside the support of the Born probability distribution:

Let p(x) be the Born probabilities and u(x) be the original utility function. Let p'(x) be another probability distribution.

Then u'(x) = u(x) p(x)/p'(x) yields the correct preference ordering under expectation w.r.t. p'(x).
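The reweighting identity is easy to check numerically (the probabilities and utilities below are arbitrary illustrative numbers, not from either paper):

```python
p  = {"a": 0.64, "b": 0.36}   # stand-in "Born" probabilities
pp = {"a": 0.50, "b": 0.50}   # alternative distribution, same support
u  = {"a": 1.0,  "b": 3.0}    # original utility

# reweighted utility u'(x) = u(x) * p(x) / p'(x)
up = {x: u[x] * p[x] / pp[x] for x in u}

eu  = sum(p[x] * u[x] for x in p)     # expected utility under p
eup = sum(pp[x] * up[x] for x in pp)  # expected reweighted utility under p'
print(eu, eup)
```

The two expectations coincide term by term (each summand is u(x) p(x) in both sums), so any ordering by expected utility is preserved under the reweighting.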

Equations 13, 14 and 15 introduce notation that aren't used in the axioms, so they don't really constitute an assumption that maximising Born-expected utility is the only rational strategy.

Your second paragraph has a subtle problem: the argument of u is which reward you get, but the argument of p might have to do with the coefficients of the branches in superposition.

To illustrate, suppose that I only care about getting Born-expected dollars. Then, letting |$n> denote the world where I get $n, my preference ordering includes

sqrt(1/10)|$0> + sqrt(9/10)|$4> ≻ |$3>

and

sqrt(1/2)|$1> + sqrt(1/2)|$3> ~ sqrt(1/2)|$0> + sqrt(1/2)|$4>.

You might wonder if my preferences could be represented as maximising utility with respect to the uniform branch weights: you don't care at all about branches with Born weight zero, but you care equally about all elements with non-zero coefficient, regardless of what that coefficient is. Then, if the new utility function is U', we require

(1/2) U'($0) + (1/2) U'($4) > U'($3)

and

(1/2) U'($1) + (1/2) U'($3) = (1/2) U'($0) + (1/2) U'($4).

Together these give U'($1) > U'($3), even though I prefer getting $3 for sure to getting $1 for sure, which requires U'($3) > U'($1). This is a contradiction, so my preferences cannot be represented in this way.

Equations 13, 14 and 15 introduce notation that aren't used in the axioms, so they don't really constitute an assumption that maximising Born-expected utility is the only rational strategy.

They are used in the last theorem.

... You might wonder if my preferences could be represented as maximising utility with respect to the uniform branch weights

I think this violates indifference to microstate/branching.

They are used in the last theorem.

I agree that the notation that they introduce is used in the last two theorems (the Utility Lemma and the Born Rule Theorem), but I don't see where in the proof that they assume that you should maximise Born-expected utility. If you could point out which step you think does this, that would help me understand your comment better.

I think this violates indifference to microstate/branching.

I agree. This is actually part of the point: you can't just maximise utility with respect to any old probability function you want to define on superpositions, you have to use the Born rule to avoid violating diachronic consistency or indifference to branching or any of the others.

I agree that the notation that they introduce is used in the last two theorems (the Utility Lemma and the Born Rule Theorem), but I don't see where in the proof that they assume that you should maximise Born-expected utility. If you could point out which step you think does this, that would help me understand your comment better.

It is used to define the expected utility in the statement of these two theorems, eq. 27 and 30.

This is actually part of the point: you can't just maximise utility with respect to any old probability function you want to define on superpositions, you have to use the Born rule to avoid violating diachronic consistency or indifference to branching or any of the others.

The issue is that the agent needs a decision rule that, given a quantum state, computes an action, and this decision rule must be consistent with the agent's preference ordering over observable macrostates (which has to obey the constraints specified in the paper).

If the decision rule has to have the form of expected utility maximization, then we have two functions which are multiplied together, which gives us some wiggle room between them.

If I understand correctly, the claim is that if you restrict the utility function to depend only on the macrostate rather than the quantum state, then the probability distribution must be the Born Rule.

It seems to me that while certain probability distributions are excluded, the paper didn't prove that the Born Rule is the only consistent distribution.

Even if it turns out that it is, the result would be interesting but not particularly impressive, since macrostates are defined in terms of projections, which naturally induces an L2 weighting. But defining macrostates this way makes sense precisely because there is the Born rule.

It is used in to define the expected utility in the statement of these two theorems, eq. 27 and 30.

Yes. The point of those theorems is to prove that if your preferences are 'nice', then you are maximising Born-expected utility. This is why Born-expected utility appears in the statement of the theorems. They do not assume that a rational agent maximises Born-expected utility, they prove it.

The issue is that the agent needs a decision rule that, given a quantum state computes an action, and this decision rule must be consistent with the agent's preference ordering over observable macrostates (which has to obey the constraints specified in the paper).

Yes. My point is that maximising Born-expected utility is the only way to do this. This is what the paper shows. The power of this theorem is that other decision algorithms don't obey the constraints specified in the paper.

If the decision rule has to have the form of expected utility maximization, then we have two functions which are multiplied together, which gives us some wiggle room between them.

No: the functions are of two different arguments. Utility (at least in this paper) is a function of what reward you get, whereas the probability will be a function of the amplitude of the branch. You can represent the strategy of maximising Born-expected utility as the strategy of maximising some other function with respect to some other set of probabilities, but that other function will not be a function of the rewards.

Even if it turns out that it is, the result would be interesting but not particularly impressive, since macrostates are defined in terms projections, which naturally induces a L2 weighting. But defining macrostates this way makes sense precisely because there is the Born rule.

A macrostate here is defined in terms of a subspace of the whole Hilbert space, which of course involves an associated projection operator. That being said, I can't think of a reason why this doesn't make sense if you don't assume the Born rule. Could you elaborate on this?

I'm not sure that the proof can be summarised in a comment, but the theorem can:

Suppose you are an agent that knows that you are living in an Everettian universe. You have a choice between unitary transformations (the only type of evolution that the world is allowed to undergo in MWI), that will in general cause your 'world' to split and give you various rewards or punishments in the various resulting branches. Your preferences between unitary transformations satisfy a few constraints:

- Some technical ones about which unitary transformations are available.

- Your preferences should be a total ordering on the set of the available unitary transformations.

- If you currently have unitary transformation U available, and after performing U you will have unitary transformations V and V' available, and you know that you will later prefer V to V', then you should currently prefer (U and then V) to (U and then V').

- If there are two microstates that give rise to the same macrostate, you don't care about which one you end up in.

- You don't care about branching in and of itself: if I offer to flip a quantum coin and give you reward R whether it lands heads or tails, you should be indifferent between me doing that and just giving you reward R.

- You only care about which state the universe ends up in.

- If you prefer U to V, then changing U and V by some sufficiently small amount does not change this preference.

Then, you act exactly as if you have a utility function on the set of rewards, and you are evaluating each unitary transformation based on the weighted sum of the utility of the reward you get in each resulting branch, where you weight by the Born 'probability' of each branch.
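The evaluation rule in the conclusion can be written out directly (the amplitudes, rewards, and utility values here are illustrative, not from the paper):

```python
import math

# branches of a post-branching superposition: (amplitude, reward)
branches = [(math.sqrt(0.36), "$0"), (math.sqrt(0.64), "$4")]
utility = {"$0": 0.0, "$4": 10.0}  # hypothetical utility function

# value of the unitary: sum over branches of Born weight times utility
value = sum(abs(a) ** 2 * utility[r] for a, r in branches)
print(value)  # approximately 0.36*0 + 0.64*10 = 6.4
```

The point of the theorem is that any preference ordering satisfying the constraints above must be representable by some such utility table, weighted exactly by |amplitude|².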

Downvoted for reposting yet another untestable QM foundations paper, under a misleading title (there is nothing "common-sense" about QM).

In quantum physics, MWI does quite naturally resolve some difficult issues in the "wavefunction-centristic" view. However, we see that the concept of the wavefunction is not really central to quantum mechanics. This removes the whole problem of wavefunction collapse that MWI seeks to resolve.

Physical theories live and die by testing (or they ought to, unless they happen to be pushed by famous string theorists). I agree that "This removes the whole problem of wavefunction collapse", but only in the minds of philosophers of physics and some misguided philosophically inclined physicists. This paper adds nothing to physics.

Thank you. The title plays on the idea of deriving quantum mechanics from the rules of "common-sense" probabilistic reasoning. Suggestions for a better title are, of course, welcome.

In my view this is not so much "QM foundations" or "adding to physics" (one could argue it *takes away* from physics) as it is an interesting application of Bayesian inference, providing another example of its power. It is however interesting to discuss it in the context of MWI which is a relatively big thing for some here on Less Wrong.

Regarding testability I'm reminded of the recent discussion at Scott Aaronson's blog: http://www.scottaaronson.com/blog/?p=1653

I agree with everything Scott Aaronson said there, actually. As for the common sense, apparently our definitions of it differ. Furthermore, while I agree that this paper might be an interesting exercise in some mathematical aspects of Bayesian inference as applied to something or other, I question its relevance to physics in general and QM in particular.

I agree that "This removes the whole problem of wavefunction collapse", but only in the minds of philosophers of physics and some misguided philosophically inclined physicists. This paper adds nothing to physics.

Giving an alternative formalism that "clears up the mysteries" and suggests an approach for new problems is a huge advance, IMO.

Jaynes did this before for statistical mechanics. Now they've applied the same principles to Quantum Mechanics. Maybe they could apply the gauge extensions of maximum entropy to this new derivation as well.

I agree that "This removes the whole problem of wavefunction collapse", but only in the minds of philosophers of physics and some misguided philosophically inclined physicists. This paper adds nothing to physics.

Is physics important to you in ways other than how well it corresponds to reality? Physics relies on testing and experiments, but if we have another kind of system - let's call it bayesianism - and we have a reason to believe this other kind of system corresponds better to reality even though it doesn't rely perfectly on testing and experimenting, would you reject that in favor of physics? Why?

if we have another kind of system - let's call it bayesianism - and we have a reason to believe this other kind of system corresponds better to reality even though it doesn't rely perfectly on testing and experimenting, would you reject that in favor of physics? Why?

Replace "bayesianism" with "Christianity" in the above and answer your own question.

The moment a model of the world becomes disconnected from "testing and experimenting" it becomes a faith (or math, if you are lucky).

But I still don't understand how raisin proposes to reject physics in favor of probability theory or vice versa.

Well, 'reject' was a bad word. Physics is fine for mostly everything. What I meant was that "bayesianism" could supplement physics in areas that are hard to test like MWI, parallel universes etc. Basically what Tegmark argues here.

I guess one could argue that "bayesianism" (probability-as-logic) is testable practically and, indeed, well-tested by now.

Well, sure, the techniques based on Bayesian interpretations of probabilities (subjective or objective) work at least as well as frequentist (not EYish straw-frequentist, but actual frequentist, Kolmogorov-style), and sometimes better. And yeah, I have no idea what raisin is on about. Bayesianism is not an alternative to physics, just one of its mathematical tools.

Related to: Quantum physics sequence.

TL;DR: Quantum mechanics can be derived from the rules of probabilistic reasoning. The wavefunction is a mathematical vehicle to transform a nonlinear problem into a linear one. The Born rule that is so puzzling for MWI results from the particular mathematical form of this functional substitution.

This is a brief overview of a recent paper in Annals of Physics (recently mentioned in Discussion):

Quantum theory as the most robust description of reproducible experiments (arXiv)

by Hans De Raedt, Mikhail I. Katsnelson, and Kristel Michielsen. Abstract:

In a nutshell, the authors use the "plausible reasoning" rules (as in, e.g., Jaynes' Probability Theory) to recover the quantum-physical results for the EPR and Stern–Gerlach experiments by adding a notion of experimental reproducibility in a mathematically well-formulated way and without any "quantum" assumptions. Then they show how the Schrodinger equation (SE) can be obtained from the nonlinear variational problem on the probability P for the particle-in-a-potential problem when the classical Hamilton–Jacobi equation holds "on average". The SE allows one to transform the nonlinear variational problem into a linear one, and in the course of said transformation the (real-valued) probability P and the action S are combined into a single complex-valued function ψ ~ P^{1/2} exp(iS), which becomes the argument of the SE (the wavefunction).

This casts the "serious mystery" of Born probabilities in a new light. Instead of the observed frequency being the square(d amplitude) of the "physically fundamental" wavefunction, the wavefunction is seen as a mathematical vehicle to convert a difficult nonlinear variational problem for inferential probability into a manageable linear PDE, where it so happens that the probability enters the wavefunction under a square root.

Below I will excerpt some math from the paper, mainly to show that the approach actually works, but outlining just the key steps. This will be followed by some general discussion and reflection.

1. Plausible reasoning and reproducibility

The authors start from the usual desiderata that are well laid out in Jaynes' Probability Theory and elsewhere, and add to them another condition: reproducibility. Mathematically, this is a requirement that the probability P(x|θ,Z) of observation x, given an uncertain experimental parameter θ and the rest of our knowledge Z, is maximally robust to small changes in θ and independent of θ. Using log-probabilities, this amounts to minimizing the "evidence" Ev for any small ε, so that |Ev| is not a function of θ (but the probability is).

2. The Einstein–Podolsky–Rosen–Bohm experiment

There is a source S that, when activated, sends a pair of signals to two routers R_1 and R_2. Each router then sends the signal to one of its two detectors D_{i,+} and D_{i,–} (i = 1, 2). Each router can be rotated, and we denote as θ the angle between them. The experiment is repeated N times, yielding the data set {x_1,y_1}, {x_2,y_2}, ..., {x_N,y_N}, where x and y are the outcomes from the two detectors (+1 or –1). We want to find the probability P(x,y|θ,Z).

After some calculations it is found that the single-trial probability can be expressed as P(x,y|θ,Z) = (1 + xy E_12(θ))/4, where E_12(θ) = Σ_{x,y=±1} xy P(x,y|θ,Z) is a periodic function.

From the properties of Bernoulli trials it follows what the evidence is for a data set of N trials with n_{xy} total outcomes of each type {x,y}, and expanding this in a Taylor series it is found that the expression in the sum is the Fisher information I_F for P. The maximum robustness requirement means it must be minimized. Writing it down as I_F = (dE_12(θ)/dθ)² / (1 – E_12(θ)²), one finds that E_12(θ) = cos(θ I_F^{1/2} + φ), and since E_12 must be periodic in angle, I_F^{1/2} is a natural number, so the smallest possible value is I_F = 1. Choosing φ = π, it is found that E_12(θ) = –cos(θ), which is the well-known correlation of two spin-1/2 particles in the singlet state.

3. The Stern–Gerlach experiment

This case is analogous to, and simpler than, the previous one. The setup contains a source emitting a particle with magnetic moment S, a magnet with field in the direction a, and two detectors D_+ and D_–.

Similarly to the previous section, P(x|θ,Z) = (1 + x E(θ))/2, where E(θ) = P(+|θ,Z) – P(–|θ,Z) is an unknown periodic function. By complete analogy we seek the minimum of I_F and find that E(θ) = ±cos(θ), so that P(x|θ,Z) = (1 ± x cos(θ))/2. To repeat: there are no wavefunctions in the present approach. The only assumption is that a dependence of outcome on particle/magnet orientation is observed with robustness/reproducibility.
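Choosing the + sign for E(θ), the up-detector probability (1 + cos θ)/2 is just the textbook spin-1/2 result cos²(θ/2), which a one-line check confirms:

```python
import numpy as np

theta = np.linspace(0, np.pi, 200)
P_plus = (1 + np.cos(theta)) / 2                     # P(+|θ) with E(θ) = cos θ
assert np.allclose(P_plus, np.cos(theta / 2) ** 2)   # half-angle identity
print("P(+|theta) = cos^2(theta/2) verified")
```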

4. Schrodinger equation

A particle is located in an unknown position θ on a line segment [–L,L]. Another line segment [–L,L] is uniformly covered with detectors. A source emits a signal and the particle's response is detected by one of the detectors.

After going to the continuum limit of infinitely many infinitely small detectors and accounting for translational invariance, it is possible to show that the position of the particle θ and of the detector x can be interchanged, so that dP(x|θ,Z)/dθ = –dP(x|θ,Z)/dx.

In exactly the same way as before we need to minimize Ev by minimizing the Fisher information.

However, simply solving this minimization problem will not give us anything new, because nothing so far has accounted for the fact that the particle moves in a potential. This needs to be built into the problem. It can be done by requiring that the classical Hamilton–Jacobi equation holds on average. Using the Lagrange multiplier method, we now need to minimize a functional F of the two functions P(x|θ,Z) and S(x), where S(x) is the action (Hamilton's principal function). This minimization yields solutions for both functions. It is a difficult nonlinear minimization problem, but it is possible to find a matching solution in a tractable way using a mathematical "trick". It is known that standard variational minimization of a functional Q yields the Schrodinger equation for its extrema. On the other hand, if one makes the substitution combining the two real-valued functions P and S into a single complex-valued ψ, Q is immediately transformed into F, concluding the derivation of the Schrodinger equation. Incidentally, ψ is constructed so that P(x|θ,Z) = |ψ(x|θ,Z)|², which is the Born rule.

Summing up the meaning of the Schrodinger equation in the present context:

The authors then proceed to derive the time-dependent SE (independently from the stationary SE) in a largely similar fashion.

5. What it all means

Classical mechanics assumes that everything about the system's state and dynamics can be known (at least in principle). It starts from axioms and proceeds to derive its conclusions deductively (as opposed to inductive reasoning). In this respect, quantum mechanics is to classical mechanics what probabilistic logic is to classical logic.

Quantum theory is viewed here not as a description of what really goes on at the microscopic level, but as an instance of logical inference.

Such a treatment of QM is similar in spirit to Jaynes' Information Theory and Statistical Mechanics papers (I, II). Traditionally, statistical mechanics/thermodynamics is derived bottom-up from the microscopic mechanics and a series of postulates (such as ergodicity) that allow us to progressively ignore microscopic details under strictly defined conditions. In contrast, Jaynes starts with the minimum possible assumptions and proceeds to derive the foundations of statistical physics from the maximum entropy principle. Of course, these papers deserve a separate post.

This community should be particularly interested in how this all aligns with the many-worlds interpretation. Obviously, any conclusions drawn from this work can only apply to the "quantum multiverse" level and cannot rule out or support any other many-worlds proposals.

In quantum physics, MWI does quite naturally resolve some difficult issues in the "wavefunction-centristic" view. However, we see that the concept of the wavefunction is not really central to quantum mechanics. This removes the whole problem of wavefunction collapse that MWI seeks to resolve.

The Born rule is arguably a big issue for MWI. But here it essentially boils down to "x is quadratic in t, where t = sqrt(x)". Without the wavefunction (only probabilities) the problem simply does not appear.

Here is another interesting conclusion:

In particular, this relates to the decoherence of a system via random interactions with the environment. Thus decoherence is seen not as a physical, intrinsically quantum phenomenon of "worlds drifting apart", but as a property of experiments that are not well isolated from the influence of the environment and therefore not reproducible. Well-isolated experiments are robust (and described by "quantum inference"), and poorly isolated experiments are not (hence quantum inference does not apply).

In sum, it appears that quantum physics, when viewed as inference, does not require many-worlds any more than probability theory does.