Distinct Configurations

Macroscopic decoherence—also known as “many-worlds”—is the idea that the known quantum laws that govern microscopic events simply govern at all levels without alteration. Back when people didn’t know about decoherence—before it occurred to anyone that the laws deduced with such precision for microscopic physics might apply universally—what did people think was going on?

The initial reasoning seems to have gone something like:

When my calculations showed an amplitude of −1/3i for this photon to get absorbed, my experimental statistics showed that the photon was absorbed around 107 times out of 1,000, which is a good fit to 1/9, the square of the modulus.

to

The amplitude is the probability (by way of the squared modulus).

to

Once you measure something and know it didn’t happen, its probability goes to zero.
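The first step of that chain is plain arithmetic. A minimal check, using an illustrative amplitude of −1/3i:

```python
# Born rule: observed frequency ~ squared modulus of the amplitude.
amplitude = -1j / 3                  # illustrative amplitude for absorption
born_probability = abs(amplitude) ** 2
print(born_probability)              # ~0.111, i.e. 1/9

observed = 107 / 1000                # absorption frequency from the quote
print(abs(born_probability - observed) < 0.01)  # True: a good fit
```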

Read literally, this implies that knowledge itself—or even conscious awareness—causes the collapse. Which was in fact the form of the theory put forth by Werner Heisenberg!

But people became increasingly nervous about the notion of importing dualistic language into fundamental physics—as well they should have been! And so the original reasoning was replaced by the notion of an objective “collapse” that destroyed all parts of the wavefunction except one, and was triggered sometime before superposition grew to human-sized levels.

Now, once you’re supposing that parts of the wavefunction can just vanish, you might think to ask:

Is there only one survivor? Maybe there are many surviving worlds, but they survive with a frequency determined by their integrated squared modulus, and so the typical surviving world has experimental statistics that match the Born rule.

Yet collapse theories considered in modern academia only postulate one surviving world. Why?

Collapse theories were devised in a time when it simply didn’t occur to any physicists that more than one world could exist! People took for granted that measurements had single outcomes—it was an assumption so deep it was invisible, because it was what they saw happening. Collapse theories were devised to explain why measurements had single outcomes, rather than (in full generality) why experimental statistics matched the Born rule.

For similar reasons, the “collapse postulates” considered academically suppose that collapse occurs before any human beings get superposed. But experiments are steadily ruling out the possibility of “collapse” in increasingly large entangled systems. Apparently an experiment is underway to demonstrate quantum superposition at 50-micrometer scales, which is bigger than most neurons and getting up toward the diameter of some human hairs!

So why doesn’t someone try jumping ahead of the game, and ask:

Say, we keep having to postulate that collapse occurs steadily later and later. What if collapse occurs only once superposition reaches planetary scales and substantial divergence occurs—say, Earth’s wavefunction collapses around once a minute? Then, while the surviving Earths at any given time would remember a long history of quantum experiments that matched the Born statistics, a supermajority of those Earths would begin obtaining non-Born results from quantum experiments and then abruptly cease to exist a minute later.

Why don’t collapse theories like that one have a huge academic following, among the many people who apparently think it’s okay for parts of the wavefunction to just vanish? Especially given that experiments are proving superposition in steadily larger systems?

A cynic might suggest that the reason for collapse’s continued support isn’t the physical plausibility of having large parts of the wavefunction suddenly vanish, or the hope of somehow explaining the Born statistics. The point is to keep the intuitive appeal of “I don’t remember the measurement having more than one result, therefore only one thing happened; I don’t remember splitting, so there must be only one of me.” You don’t remember dying, so superposed humans must never collapse. A theory that dared to stomp on intuition would be missing the whole point. You might as well just move on to decoherence.

So a cynic might suggest.

But surely it is too early to be attacking the motives of collapse supporters. That is mere argument ad hominem. What about the actual physical plausibility of collapse theories?

Well, first: Does any collapse theory have any experimental support? No.

With that out of the way…

If collapse actually worked the way its adherents say it does, it would be:

  1. The only non-linear evolution in all of quantum mechanics.
  2. The only non-unitary evolution in all of quantum mechanics.
  3. The only non-differentiable (in fact, discontinuous) phenomenon in all of quantum mechanics.
  4. The only phenomenon in all of quantum mechanics that is non-local in the configuration space.
  5. The only phenomenon in all of physics that violates CPT symmetry.
  6. The only phenomenon in all of physics that violates Liouville’s Theorem (has a many-to-one mapping from initial conditions to outcomes).
  7. The only phenomenon in all of physics that is acausal / non-deterministic / inherently random.
  8. The only phenomenon in all of physics that is non-local in spacetime and propagates an influence faster than light.
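Points 1 and 2 can be made concrete in a few lines of linear algebra. This sketch uses an arbitrary unitary matrix and a toy projection, not any specific physical model:

```python
import numpy as np

# Unitary (Schrodinger-type) evolution preserves the total squared
# amplitude and acts linearly; projection-style "collapse" does neither.
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # a unitary matrix

psi = np.array([0.6, 0.8j])                      # a normalized state
print(np.isclose(np.linalg.norm(U @ psi), 1.0))  # True: norm preserved

def collapse(state):
    """Project onto the first basis state, then renormalize."""
    projected = np.array([state[0], 0j])
    return projected / np.linalg.norm(projected)

a = np.array([1.0, 0.0])
b = np.array([0.5, 0.5])
lhs = collapse(a + b)             # collapse applied to a sum...
rhs = collapse(a) + collapse(b)   # ...is not the sum of collapses
print(np.allclose(lhs, rhs))      # False: collapse is non-linear
```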

What does the god-damned collapse postulate have to do for physicists to reject it? Kill a god-damned puppy?

23 comments

Remember, in the real world, all of this happens in a continuous configuration space with a differentiable amplitude distribution.

So, in reality, since gravitational interactions and whatnot cause the photon to always have a tiny effect, even with no sensor, it will very rarely show up at Detector 1. And as the level of interaction with the rest of reality increases, P(D1) approaches 50%. Right?

And as the level of interaction with the rest of reality increases, P(D1) approaches 50%. Right?

and this is decoherence? This is why the macro-world is seemingly classical? There are so many elements in the system that you never get anything that doesn't interact with something else, and all the configurations are independent?

Eliezer or anyone else: I am puzzled why the mirror itself doesn't act as a sensitive thingy. A mirror that deflects a photon gains some momentum (hence solar sails or whatever), so I'd expect the configurations to be

"Photon from A to B; and mirror at A with momentum X+e." (0 + -i)
"Photon from A to C; and mirror at A with momentum X." (-1 + 0i)

Allan: Is this sentence correct?

No. Ouch. Fixed.

Allan: In Figure 3, is the total number of hits registered in the detectors equal to the total number of hits registered in Figure 2? Or is it half that number, because (intuitively and probably wrongly?) half the photons are hitting the wall?

Half the number. Because amplitude flows to a configuration where the photon hits the wall, and way up at the level of observation, that means we sometimes see no photon in any detector.

Oops, sorry to post twice in a row.

"Like, if you put the sensor between A and C but don't read it, the mysterious phenomenon still goes away, and the photon still always ends up at Detector 1."

Is this sentence correct? I thought the "mysterious phenomenon" was that photons never went to Detector 1, when you would expect them to reach it half the time. So if the mysterious phenomenon goes away, you should see half the photons at Detector 1 and half at Detector 2, not all at 1.

Or have I misread this?

Pearson, my guess is that the natural spread of the mirror's particles over configuration space, and the natural spread of the photon's momentum, is greater than the momentum the mirror's particles gain from the photon bouncing. As a result, the blobs of amplitude in configuration space mostly overlap. Remember, in the real world, all of this happens in a continuous configuration space with a differentiable amplitude distribution.

So I guess I get how this works in theory, but in practice, doesn't a particle going from A-B have SOME kind of effect that is different than if it went from B-C, even without the sensitive thingy? I don't know if it would be from bouncing off other particles on the way, or having some kind of minute gravitational effect on the rest of the universe, or what. And if that is the case, shouldn't the experiments always behave as if there WERE a sensitive thingy there? Or is it really possible to set it up so there is literally NO difference in all the particle positions in the universe no matter which path is taken?

One of the previous comments (I think in the previous post) pointed out that yes, indeed this does occur - but that these effects mainly cancel each other out.

In my mind this works somewhat like Brownian motion: lots of tiny pushes, but overall, it continues in roughly the same way. I.e., mostly the photon carries on as though it hasn't changed configuration space in any significant way.

Very few comments on this one, but my confusion hasn't been extinguished.

The explanation so far is that the amplitudes add back together as per complex addition. But then, "nudging" the photon at one point, eliminates the entire phenomenon involving combination and manipulation of (complex-valued) amplitudes? A nudging whose existence we can't even verify?

Why does the complex-amplitude reality slip away upon this nudge? Why can't I "explain away" any observation now by saying "ah, yeah man, there must have/not have been a nudge, problem solved"?
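The cancellation being asked about comes down to bookkeeping over configurations: flows to the same configuration add (and can cancel), while flows to distinct configurations stay separate. A toy sketch, with configurations and amplitude values that are illustrative rather than taken from the experiment:

```python
from collections import defaultdict

def accumulate(flows):
    """Sum the complex amplitudes flowing into each configuration."""
    amps = defaultdict(complex)
    for config, amp in flows:
        amps[config] += amp
    return dict(amps)

# No nudge: both flows end at the *same* configuration -> they cancel.
no_sensor = accumulate([
    ("photon at Detector 1", -1 + 0j),
    ("photon at Detector 1",  1 + 0j),
])
print(no_sensor["photon at Detector 1"])  # 0j

# A nudge (e.g. a sensor state) makes the configurations *distinct*,
# so the two amplitudes never meet and nothing cancels.
with_sensor = accumulate([
    ("photon at Detector 1; sensor says B", -1 + 0j),
    ("photon at Detector 1; sensor says C",  1 + 0j),
])
print(with_sensor)  # two separate nonzero amplitudes
```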

I agree that humans are much too prone to regard conscious awareness and subjective experience as an inalienable part of the fabric of reality (and that quantum physicists have not been immune to this bias). Let us pray that humans will avoid this mistake when the stakes get higher.

This is the first clear explanation of the phenomenon of quantum entanglement that I have ever read (though I gather it's still a simplification since we're assuming the mirrors aren't actually made out of particles like everything else). I have never really understood this phenomenon of "observation", but suddenly it's obvious why it should make a difference. Thank you.

MIND IS FUNDAMENTAL AFTER ALL! CONSCIOUS AWARENESS DETERMINES OUR EXPERIMENTAL RESULTS!

You can still read this kind of stuff. In physics textbooks.

I hope this is just a strawman of the Copenhagen interpretation. If not, what textbooks are you reading?

Typo:

"But in this case, the two amplitude flows (1) and (3) are now to distinct configurations; at least one entity, S, is in a different state between (1) and (3). The amplitudes don't cancel out."

=>

"But in this case, the two amplitude flows (1) and (3) are now two distinct configurations; at least one entity, S, is in a different state between (1) and (3). The amplitudes don't cancel out."

Eliezer writes:

"It's as if mere possibilities could have causal effects, in defiance of what the word "real" is usually thought to mean..."

Actually, mere possibilities can make a difference... if you have effects that propagate backwards in time. Here's why.

To simplify, assume that you have a physical system that takes inputs (w,x) and produces outputs

(y,z) = F(w,x),

where w is an initial condition, x = z is an effect propagated back in time, and y is that portion of the system's output that is not propagated back in time. Then given a specific value of w as an initial condition, whatever happens is a solution to the fixed-point equation

(y,x) = F(w,x)

for x.

Given w, a "mere possibility" is a possible output of F that could occur for the right choice of x, but doesn't occur for the actual value of x. But these "mere possibilities" are properties of F. Changing the set of "mere possibilities" means changing the function F, and possibly getting a different fixed-point.
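The fixed-point setup above can be sketched numerically. F, the choice of w, and the iteration scheme here are all illustrative assumptions (a contraction, so simple iteration converges):

```python
# Toy fixed-point search for the commenter's setup: given an initial
# condition w, find x such that the back-propagated output z equals x.
def F(w, x):
    """Toy system: returns (y, z) for inputs (w, x)."""
    y = w + x          # forward output
    z = 0.5 * x + w    # the part sent "back in time"
    return y, z

def find_fixed_point(w, x0=0.0, iters=60):
    x = x0
    for _ in range(iters):
        _, z = F(w, x)
        x = z          # impose self-consistency: x must equal z
    return x

w = 1.0
x = find_fixed_point(w)
y, z = F(w, x)
print(abs(z - x) < 1e-9)  # True: a self-consistent history exists
```

Changing F changes which outputs are "mere possibilities", and can move the fixed point—which is the commenter's point that the set of possibilities is a property of F.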

Why is this relevant to QM? Well, Cramer's Transactional Interpretation of QM uses both the retarded and advanced wave solutions to Schrodinger's equation. That is, it has "offer" waves going forward in time and "confirmation" waves going backwards in time. And I'm told that Aharonov's work in QM also postulates subtle influences propagating backwards in time.

Actually, mere possibilities can make a difference... if you have effects that propagate backwards in time.

It still has to happen. It might happen in the future instead of the past, but it still has to happen.

No, it doesn't have to happen. Consider the Elitzur-Vaidman bomb tester. The outcome depends on whether or not the bomb could have exploded, regardless of whether or not it actually does. You might object that in the Many Worlds Interpretation of quantum mechanics both happen, but the situation can equally well be described using Cramer's Transactional Interpretation of quantum mechanics, which involves waves that propagate backwards in time, and in which only one of the two possibilities (explode or don't explode) occurs. Whether MWI or TI or some other interpretation is the correct one, this demonstrates that backward-in-time signalling allows a "mere possibility", that does not actually occur, to have measurable effects.

From what I can understand, Cramer's Transactional Interpretation is basically a way to justify waveform collapse. The tester sees what he does because the plunger sent the signals causing waveform collapse. As far as I can tell, he never says what triggers the wave-form collapse. If it's just too much stuff getting entangled, then that's what causes the result you see, not mere possibilities.

In Eliezer's realist, MWI interpretation, there are definitely "worlds" in which the bomb explodes; they can have small amplitude but what we see in our world is because of events that straightforwardly happen in those other worlds. And of course there aren't really multiple worlds, there's one world, only part of which we can see and interact with once we've separated through decoherence.

I should have asked this back when Figure 3 came up originally:

In Figure 3, is the total number of hits registered in the detectors equal to the total number of hits registered in Figure 2? Or is it half that number, because (intuitively and probably wrongly?) half the photons are hitting the wall?

Or to state it another way, if you launch a single photon in Figure 3, are we guaranteed to see a hit registered on a detector? Or does that happen just half the time?

Good post. For anyone wanting to read further, I recommend

Lindley, David. Where Does the Weirdness Go?
Stenger, Victor J. The Unconscious Quantum: Metaphysics in Modern Physics & Cosmology

Both are interesting and readable. The problem with textbooks is that they are too much work for things that aren't particularly important to you at the time.