What is the Bayesian argument, if one exists, for why quantum dynamics breaks the “probability is in the mind” philosophy?
In my world-view the argument is based on Bell inequalities. Other answers mention them; I will try to give more of an introduction.
First, context. We can reason inside a theory, and we can reason about a theory. The two are completely different and give different intuitions. Anyone talking about "but the complex amplitudes exist" or "we are in one Everett branch" is reasoning inside the theory. The theory, as given in the textbooks, is accepted as true, and interpretations are built on top of it.
However, both historically and (I think) more generally, we should also reason about theories. This means we need to look at experimental observations, and ask questions like "what is the most reasonable model?".
Many quantum experiments give random-looking results. As you point out, randomness is usually just "in the mind": reality was deterministic, but we couldn't see everything. The terminology is "local hidden variable". For an experiment where you draw a card from a deck, the local hidden variable was which card was on top. In a lottery with (assumedly deterministic) pinballs, the local hidden variable is some very specific detail of the initial momenta and positions of the balls. In other words, the local hidden variable is the thing that you don't know, so to you it looks random. It's the seed of your pseudorandom number generator.
Entanglement - It is possible to prepare two (or more) particles in a state such that measurements of those two particles give very weird results. What do I mean by "very weird"? Well, in a classical setting, if Alice and Bob are measuring two separate objects then there are three possible (extremal) situations. (1) Uncorrelated: their results are completely independent, for example Alice is rolling a die in Texas and Bob is rolling a different die in London. (2) Correlated: for example, Alice is reading an email telling her she got a job she applied for, and Bob is reading an email telling him he failed to get the same job. (4) Signalling (we skipped 3 on purpose, we will get to it): Alice and Bob have phones, so the data each of them receives is related to what the other is doing. Linear combinations of the above (e.g. noisy radio messages, correlation that is not perfect, etc.) are also possible.
By very weird, I mean that quantum experiments give rise (in the raw experimental data, before any theory is glued on) to a fourth type of relation. (3) Non-locality: Alice and Bob's measurement outcomes (observations) are random, but the correlation between their observations changes depending on the measurements they each chose to make (inputs). Mathematically it's no more complex than the others, but it's fiddly to get your head around because it's not something seen in everyday life.
An important feature of (3) is that it cannot be used to create signalling (4). However, (3) also cannot be created out of any mixture of (1) and (2) (just like (4) cannot be created by mixing (1) and (2)). In short, if you have any one of these four things, you can use local actions to go "downhill" to lower numbers, but you can't go up.
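To make type (3) concrete, here is a sketch of the standard toy model of non-locality without signalling, the Popescu-Rohrlich (PR) box. This is my own illustrative example, not from the comment above, and the PR box is actually stronger than anything quantum mechanics allows; it just isolates the "correlations depend on both inputs, yet no signal is sent" feature in its simplest form.

```python
import itertools

# PR box: inputs x, y in {0, 1}; outputs a, b in {0, 1} satisfying
# a XOR b = x AND y, with each output marginally uniform.
def pr_box_distribution(x, y):
    """Return {(a, b): probability} for the PR box on inputs x, y."""
    return {(a, b): (0.5 if (a ^ b) == (x & y) else 0.0)
            for a in (0, 1) for b in (0, 1)}

# Non-signalling check: Alice's marginal over a is identical whatever Bob's
# input y is, so she cannot learn y from her own statistics.
for x in (0, 1):
    marginals = []
    for y in (0, 1):
        dist = pr_box_distribution(x, y)
        marginals.append(tuple(sum(p for (a, b), p in dist.items() if a == a0)
                               for a0 in (0, 1)))
    assert marginals[0] == marginals[1]

# Yet the *correlation* depends on both inputs: outputs agree with certainty
# unless x = y = 1, in which case they disagree with certainty.
for x, y in itertools.product((0, 1), repeat=2):
    agree = sum(p for (a, b), p in pr_box_distribution(x, y).items() if a == b)
    print(f"x={x} y={y}: P(a=b) = {agree}")  # 1.0, 1.0, 1.0, 0.0
```

Each party alone sees a fair coin; only when the records are brought together does the input-dependent correlation appear.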
Anyway, "hidden variables" are shorthand for "(1) and (2)" (randomness and correlation). The "local" means "no signalling" (IE no (3), no radios). The reason we insist on no signalling is because the measurements Alice and Bob do on their particles could be outside one another's light cones (so even a lightspeed signal would not be fast enough to explain the statistics). The "no signalling" condition might sound artificial, but if you allow faster than light signalling then you are (by the standards of relativity) also allowing time travel.
Bell inequality experiments have been done, and they measure result (3). (3) cannot be made out of ordinary "ignorance" probabilities (any mixture of (1) and (2)). (3) could be made out of (4) (faster-than-light signalling), but we don't see the signalling itself, and assuming it exists entails time travel.
So, if we reject signalling, we know that whatever is happening in a Bell inequality experiment, it can't be merely apparent randomness due to our ignorance. We also know the individual results collected by Alice and Bob look random (though not the correlations between the results). This backs us into the corner of accepting that the randomness is somehow an intrinsic feature of the world: even the photon didn't "know" whether it would go through the polariser until you tried it.
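The quantitative version of this argument is the CHSH inequality. Here is a small self-contained check (my own sketch, using the textbook singlet correlation E(a, b) = -cos(a - b) and the standard optimal angles): every deterministic local-hidden-variable assignment stays within |S| ≤ 2, while the quantum prediction reaches 2√2.

```python
import math

# Singlet-state correlation between measurement angles a and b (standard result).
def singlet_E(a, b):
    return -math.cos(a - b)

a0, a1 = 0.0, math.pi / 2            # Alice's two measurement angles
b0, b1 = math.pi / 4, -math.pi / 4   # Bob's two measurement angles

# CHSH combination for the quantum singlet.
S_quantum = (singlet_E(a0, b0) + singlet_E(a0, b1)
             + singlet_E(a1, b0) - singlet_E(a1, b1))

# Local hidden variables: brute-force every deterministic strategy, i.e. every
# pre-assignment of outcomes +/-1 to the four possible measurement choices.
best_local = max(abs(A0 * B0 + A0 * B1 + A1 * B0 - A1 * B1)
                 for A0 in (-1, 1) for A1 in (-1, 1)
                 for B0 in (-1, 1) for B1 in (-1, 1))

print(f"local bound = {best_local}, quantum value = {abs(S_quantum):.4f}")
# local bound = 2, quantum value = 2.8284
```

Mixtures of deterministic strategies (i.e. adding ignorance-style randomness) can only average these values, so they can never exceed 2 either; the measured value near 2√2 is what rules types (1) and (2) out.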
The wiki article on Bell inequalities isn't very good unfortunately.
Ahh. The correlations being dependent on inputs, but things appearing random to Alice and Bob, does seem trickier than whatever I was imagining was meant by quantum randomness/uncertainty. I don't fully have my head around it yet, but this difference seems important. Thanks!
The Born probabilities are in the mind under the MWI! Reality just has the amplitudes.
Consider an agent about to observe a particle in superposition (causing a subjective collapse). If our agent accepts quantum mechanics, then it will predict with near-certainty that what happens will be just what the Schrödinger equation dictates. This will result in a world in which the agent is entangled with the particle, which from the agent's perspective looks like two branches, one where each "world" happens (and each with a specific amplitude).
So where are the Born probabilities? What are they even about? They are not about the objective state of the world. Nor even about the agent's subjective knowledge of the objective state of the world. They are about the subjective anticipated experiences of the agent! The agent knows exactly what will happen, but not what its own eyes will actually see next.
How does the agent actually determine what those probabilities are? Like many priors, it ultimately grounds out in symmetry. If the particle was in a superposition where each state had equal amplitudes, then the agent has no basis on which to favor any one of these, and so chooses equal probabilities for each state. A similar symmetry holds for the phase of the amplitude. Then there's a more nuanced symmetry (known as the Epistemic Separability Principle) which essentially says that the agent's probabilities shouldn't depend on irrelevant parts of the environment. [1] And this is what ultimately results in the Born probabilities (see Carroll and Sebens for the derivation). [2]
I personally believe the ESP symmetry argument can be improved on, but it gets the job done. Specifically, I would like to see an explicit transformation group formulation of it (a la Jaynes) ↩︎
The gist of their argument is that you can reduce to the equal amplitude case by cleverly entangling things with specific external systems (which can be causally isolated from the experiment itself). The mysterious "squaredness" arises from the inner product of a Hilbert space. I believe there is still more mystery to be resolved in the question of "Why Hilbert spaces?", but it's a bedrock assumption in almost any interpretation. ↩︎
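As a minimal illustration of the Born rule itself (my own sketch, not part of the Carroll-Sebens derivation): probabilities are the squared magnitudes of complex amplitudes, which makes the phase symmetry mentioned above immediate.

```python
import cmath

# A normalized two-branch superposition: amplitudes 0.6 and 0.8i.
amplitudes = [0.6, 0.8j]
probs = [abs(a) ** 2 for a in amplitudes]   # Born rule: |amplitude|^2
print(probs)                                 # ~[0.36, 0.64], summing to 1

# Multiplying any amplitude by a phase e^{i*theta} leaves every probability
# unchanged -- the phase of an amplitude is invisible to the Born measure.
rotated = [a * cmath.exp(1j * 0.7) for a in amplitudes]
assert all(abs(abs(r) ** 2 - p) < 1e-12 for r, p in zip(rotated, probs))
```

The "mysterious squaredness" is exactly the step from the complex amplitude (what reality has, on MWI) to the real number the agent uses for anticipation.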
Branch counting feels like it makes sense because it feels like the particular branch shouldn't matter, i.e. that there's a permutation symmetry between branches under which the information available to the agent remains invariant.
But you have to actually check that the symmetry is there, which of course, it isn't. The symmetry that is there is the ESP one, and it provides the correct result. Now I'll admit that it would be more satisfying to have the ESP explicitly spelled out as a transformation group under which the information available to the agent remains invariant.
Branch counting stops making sense when there are uncountably many branches, and there are (presumably).
You can't actually presume that... The relevant quantum concept is the "spectrum" of an observable. These are the possible values that a property can take (eigenvalues of the corresponding operator). An observable can have a finite number of allowed eigenvalues (e.g. spin of a particle), a countably infinite number (e.g. energy levels of an oscillator), or it can have a continuous spectrum, e.g. position of a free particle. But the latter case causes problems for the usual quantum axioms, which involve a Hilbert space with a countably infinite number of dimensions - there aren't enough dimensions to represent an uncountable number of distinct position eigenstates. You have to add extra structure to include them, and concrete applications always involve integrals over continua of these generalized eigenstates, so one might reasonably suppose that the "ontological basis" with respect to which branching is defined is something countable. In fact, I don't remember ever seeing a many-worlds ontological interpretation of the generalized eigenstates or the formalism that deals with them (e.g. rigged Hilbert space).
In any case, the counterpart of branch counting for a continuum is simply integration. If you really did have uncountably many branches, you would just need a measure. The really difficult case may actually be when you have a countably infinite number of branches, because there's no uniform measure in that case (I suppose you could use literal infinitesimals, the equivalent of "1/alephzero").
The Born probabilities are in the mind under the MWI! Reality just has the amplitudes
Which are pretty similar. They are an objective feature of the territory that tells you how likely you are to see things.
And note that MWI features really existing possible worlds, in that it features a multiplicity of existing actual worlds... and what is actual is possible. What MWI removes is chance.
And note that the argument from MWI doesn't support "probability is in the Mind" as it is usually stated, because it is usually stated as something that is true unconditionally, and MWI is only one possibility.
I dispute the premise. Weights of quantum configurations are not probabilities, they just share some superficial similarities. (They're modeled with complex numbers!) Iirc Eliezer was very clear about this point in the quantum sequence.
Yes, and (for certain mainstream interpretations) nothing in quantum mechanics is probabilistic at all: the only uncertainty is indexical.
Been thinking about your answer here, and still can’t decide if I should view this as solving the conundrum, or just renaming it. If that makes sense?
Are weights of quantum configurations, though they may not be probabilities, similar enough in concept to still imply that physical, irreducible uncertainty exists?
I’ve phrased this badly (part of why it took me so long to actually write it) but maybe you see the question I’m waving at?
The mathematical structure in common is called a "measure."
I agree that there's something mysterious-feeling about probability in QM, though I mostly think that feeling is an illusion. There's a (among physicists) famous fact that the only way to put a 'measure' on a wavefunction that has nice properties (e.g. conservation over time) is to take the amplitude squared. So there's an argument: probability is a measure, and the only measure that makes sense is the amplitude-squared measure, therefore if probability is anything it's the amplitude squared. And it is! Feels mysterious.
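The "nice properties" claim can be checked in a toy case (my own sketch, not from the comment above): evolve a 2-level state with an explicit unitary matrix and compare candidate measures. The amplitude-squared sum is conserved; the plain amplitude sum is not.

```python
import cmath
import math

# An explicit 2x2 unitary: a rotation composed with a relative phase.
theta, phi = 0.9, 0.4
U = [[math.cos(theta), -math.sin(theta) * cmath.exp(1j * phi)],
     [math.sin(theta) * cmath.exp(-1j * phi), math.cos(theta)]]

psi = [0.6, 0.8]  # a normalized initial state
psi2 = [U[0][0] * psi[0] + U[0][1] * psi[1],   # one step of unitary
        U[1][0] * psi[0] + U[1][1] * psi[1]]   # (Schrodinger) evolution

def p_norm_sum(v, p):
    """Candidate 'measure': sum of |component|^p."""
    return sum(abs(c) ** p for c in v)

print(p_norm_sum(psi, 2), p_norm_sum(psi2, 2))  # both 1.0: conserved
print(p_norm_sum(psi, 1), p_norm_sum(psi2, 1))  # generally different
```

Only the p = 2 measure survives arbitrary unitary evolution, which is the (Gleason-style) argument that if probability is anything here, it's the amplitude squared.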
But after getting more used to anthropics and information theory, you start to accumulate more arguments for the same thing that take it from a different angle, and it stops feeling so mysterious.
Jaynes has argued (in his book on probability theory) that nearly all progress in physics stopped once quantum mechanics was accepted to be "truly random," and that this anti-scientific attitude has to be abandoned for further progress to be possible.
From the little I know about QM he was probably wrong - it seems you can't remove the randomness without breaking other stuff we like such as locality.
Still, I wouldn't be shocked if he (and perhaps you?) were right after all.
Personally, I can't see any way to rule out the possibility that some things in the universe are "truly random" for all practical purposes - in the sense that, say, a sequence of spin measurements can't be compressed. In order for science to be possible, there have to be some discoverable underlying laws, but Solomonoff induction still works if those laws are probabilistic. If you insist on describing that situation as also not objectively random but only uncertain, fine, but I think it matches our intuitions about objective randomness perfectly and I'm inclined to use that term. This deserves a full post though.
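The incompressibility framing can be illustrated directly (my own sketch; `os.urandom` stands in for a quantum random source): a lawful, patterned sequence compresses enormously, while a random byte string of the same length does not compress at all.

```python
import os
import zlib

# A byte string with obvious structure: a repeating 256-byte pattern.
patterned = bytes(range(256)) * 40   # 10240 bytes
# A stand-in for "truly random" data, e.g. a sequence of spin measurements.
random_ish = os.urandom(10240)

print(len(zlib.compress(patterned)))   # tiny fraction of 10240
print(len(zlib.compress(random_ish)))  # about 10240 or slightly more
```

A genuinely random sequence would look like the second case to *every* possible compressor, not just zlib; that's the practical content of "can't be compressed."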
I expect our intuitions about objective randomness would clash quite violently! My own intuition revolts at even the phrase itself :)
See Bell's theorem. Basically we know that quantum mechanics is truly random, not just pseudorandom, unless you posit non-locality.
A core tenet of Bayesianism is that probability is in the mind
That argument never had anything to do with Bayesianism as known to the Rev. Bayes...it's much more to do with Jaynes and Yudkowsky.
Also, it was never valid...it was pointed out a long time ago that (a form of) probability being in the mind doesn't imply (a form of) it isn't in the territory as well.
Armchair arguments can't prove anything about the territory...you have to look.
The people whose job it is to investigate this sort of thing, physicists, have been unable to decide the issue.
The specific reason for believing in in-the-territory randomness is:
Bell's theorem - Wikipedia https://en.m.wikipedia.org/wiki/Bell's_theorem
I must admit, I think the "probability is in the mind, not the territory" claim is either vacuously true, or possibly false if you think that maps aren't the same things as low-resolution territories.
One example where "probability is in the mind, not the territory" is false for non-trivial definitions of maps is Wigner functions, which turn out to be basically equivalent to a wavefunction. A Wigner function behaves basically like classical Bayesian probability theory, but with the caveat that in quantum physics negative probabilities are allowed, and the most important impact is that you can't treat the uncertainty as just ignorance anymore (though it can include our ignorance).
More here:
https://www.lesswrong.com/posts/Y6LhXdGfwsAStMuhr/ackshually-many-worlds-is-wrong#nsebEbJbxqkekTbsK
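The negativity being discussed can be shown with the textbook closed form for the Wigner function of the n = 1 Fock state of a harmonic oscillator (units with ħ = 1; the formula is a standard result, the code is my own illustrative sketch):

```python
import math

# W_1(x, p) = (1/pi) * (2*(x^2 + p^2) - 1) * exp(-(x^2 + p^2))
# This acts like a joint "probability density" over phase space, except that
# it dips negative near the origin -- so it can't be read as ordinary
# ignorance-style probability.
def wigner_n1(x, p):
    r2 = x * x + p * p
    return (2.0 * r2 - 1.0) * math.exp(-r2) / math.pi

print(wigner_n1(0.0, 0.0))   # -1/pi ~ -0.318: a negative "probability" density

# Crude grid integration: despite the negative region, it still
# normalizes to 1 over all of phase space, like a proper distribution.
n, L = 240, 6.0
h = 2 * L / n
total = sum(wigner_n1(-L + i * h, -L + j * h)
            for i in range(n) for j in range(n)) * h * h
print(round(total, 3))       # ~1.0
```

Marginals of W over x or p alone are genuine non-negative probability densities; it's only the joint phase-space object that goes negative, which is where the "something beyond ignorance" claim bites.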
Hm - reading Ben’s linked comment, it seems to me that the thrust is that negative probabilities must be admitted. But I don’t understand how that is related to the map vs. territory / probability-in-the-mind-or-physical distinction?
Like, “one must modify the relevant functions to allow negative probabilities” seems consistent with “probability is in the mind”, since functions are a part of the map, but it seems you consider it a counterexample! So I find myself confused.
The main point here is that it can no longer be just our uncertainty in our map; something else must be added, which was the point.
Another way to say it is that probability can't just be in the mind: while the probabilities encode our ignorance, that can't be all of the story (according to Wigner functions).
It was way down in the last comment, so maybe you should go to the end of the comment I linked here for more information.
Also, a difference here that doesn't matter for this discussion, but might matter for the general approach: I disagree with the statement "since functions are a part of the map", because I think the map-territory distinction can often be blurry or fully dissolved in some cases, and also functions can have results when you evaluate them using an algorithm, making them part of the territory (for that specific function).
A core tenet of Bayesianism is that probability is in the mind. But it seems to me that even hardcore Bayesians can waffle a bit when it comes to the possibility that quantum probabilities are irreducible physical probabilities.
I don’t know enough about quantum physics to lay things out in any detailed disagreement, but it seems to me that if one finds a system that one cannot consistently make predictions for, it means we lack the knowledge to predict the system, not that the system involves physical, outside-the-mind probabilities. For example, I could never predict the exact pattern of raindrops the next time it rains, but no one argues that those probabilities are therefore physical.
What is the Bayesian argument, if one exists, for why quantum dynamics breaks the “probability is in the mind” philosophy?