
What I was saying is specifically about how Quantum Darwinism views things (in my understanding) - and since interpretations of QM are trying to be more fundamental than QM itself (since QM should be derived from them), we can't use QM arguments here.

With this I just wanted to point out that, up to the discussion of interaction-free measurements, I was not making any argument that relies on a particular interpretation of QM. I wanted to make clear that I was not arguing anything about a collapse mechanism or what happens under the hood - it is just empirically correct that the result of any measurement is a definite state. You don't need any theory; it's brute empirics, all territory. [Tangentially, but still true: there is no distinction, even theoretically/"map side", in how QD and Copenhagen QM treat definite states - all the differences come earlier, in the postulation of pointer states, collapse mechanism, etc., but QD still completely agrees with the canonical notion of a definite state.]

I don't think I agree with (or don't understand what you mean by) "including the superposition of dead and alive leads to actual physical consequences" - the bomb-testing result is a consequence of standard QM, so it doesn't prove anything "new."

All I really wanted to do was point out an example of "interaction-free" measurement, which throws a brick into the quantum-darwinism approach. There can never be an "objective consensus" about what happens in the bomb cavity, because any sea of photons/electrons/whatever present in the cavity will trip the bomb. The point of mentioning the Zeilinger experiment was to say that this is an empirical result, so QD has to be able to explain it, and it can't. The only way to get Elitzur-Vaidman from QD is to postulate two different splits of system and environment during the experiment - this is a concrete version of the criticism laid out in a paper by Ruth Kastner. It is a plain physical fact that you can have interaction-free measurement, and QD struggles to explain this since it has to perform an ontological split mid-experiment; if the ontological split is arbitrary, why do you need to perform one specific, special split to reproduce the results of the experiment? If it isn't arbitrary, then you have to do some hard work to explain why changing your definition of system and environment for different experiments (and sometimes mid-experiment) is justified.
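For readers who haven't worked through the Elitzur-Vaidman numbers, here is a minimal sketch (my own illustration, not part of the original exchange) of the standard Mach-Zehnder amplitudes, assuming an ideal 50/50 beamsplitter with reflection phase i; the function name `mach_zehnder` and the port labeling are mine:

```python
import numpy as np

# Ideal 50/50 beamsplitter: transmission amplitude 1/sqrt(2),
# reflection amplitude i/sqrt(2).
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def mach_zehnder(bomb_is_live):
    """Return (P_explode, P_bright_port, P_dark_port) for one photon.

    The bomb sits in arm B (index 1). With a dud bomb the two arms
    interfere and the photon always exits the 'bright' port; a live bomb
    acts as a which-path measurement, opening the 'dark' port.
    """
    # Photon enters port 0; state = (amplitude in arm A, amplitude in arm B).
    state = BS @ np.array([1.0, 0.0])
    if bomb_is_live:
        # Live bomb measures arm B: it explodes with probability |amp_B|^2.
        p_explode = abs(state[1]) ** 2
        # On the no-explosion branch the photon is definitely in arm A;
        # keeping the unnormalized amplitude preserves the event weights.
        state = np.array([state[0], 0.0])
    else:
        p_explode = 0.0
    out = BS @ state  # recombine at the second beamsplitter
    # Port 1 is 'bright' (all dud-bomb photons exit here), port 0 is 'dark'.
    return p_explode, abs(out[1]) ** 2, abs(out[0]) ** 2
```

With a dud the dark port never clicks; with a live bomb the photon explodes it half the time, but a quarter of the time it exits the dark port - a click there certifies a live bomb that was never touched. That dark-port click is the "interaction-free" measurement at issue above.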

I implicitly meant that reproducibility could mean either deterministic (reproducibility of a specific outcome) or statistical (reproducibility of a probability of an outcome over many realizations) - I don't really see those two as fundamentally different.

It's hard for me to see why you think they are not fundamentally different definitions of reproducibility. On an iteration-by-iteration basis, they clearly differ significantly: in the first case (reproducibility of a specific outcome), the ball must fall in the same way every time for it to count as evidence towards this kind of reproducibility, and a single instance of it not falling in the same way immediately destroys it. The second case (reproducibility of a probability of an outcome over many realizations) is untouched by any single deviant run. Is it not the difference between having intrinsic probability in your definition of reproducibility and not having it? Maybe I am missing something obvious here.
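The distinction between the two senses can be made concrete with a toy check (my own illustration; the function names and the fair-coin example are assumptions, not anything from the discussion):

```python
import random

def reproducible_deterministically(outcomes):
    # First sense: every realization must give the identical outcome.
    return len(set(outcomes)) == 1

def reproducible_statistically(outcomes, target, p, tol=0.05):
    # Second sense: only the long-run frequency of `target` must match p.
    freq = outcomes.count(target) / len(outcomes)
    return abs(freq - p) <= tol

rng = random.Random(0)
# A fair-coin "experiment" repeated many times: no single outcome repeats
# every run, yet the frequency of heads is stably near 1/2.
coin_runs = [rng.random() < 0.5 for _ in range(10_000)]
```

A single deviant run falsifies the first predicate but barely moves the second - which is exactly the asymmetry argued for above.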


So for the cat, a superposition of dead and alive will never be "objective" since it is not stable under interactions with photons – and so cannot be copied many times.

Ask yourself what is being copied many times. The very fact that the quantum cat is in a superposition of only alive and dead tells you something else that is only apparently "consensus objective": everyone agrees that the only possible definite states associated with the system are that the quantum cat is alive, or dead, and nothing else. This just kicks the definition of "objective" back to the specification of the state vector. There is something objective about the fact that the cat only ever has the possibility of being recorded as alive or dead and nothing else. That list (alive, dead) is revealed by interactions/measurements/recordings, but the fact that there is never anything else on the list, despite all possible ways of interacting with the system, remains. In fact, including the superposition of dead and alive leads to actual physical consequences (see the Elitzur-Vaidman bomb, for example), and Zeilinger won a Nobel Prize in part for the physical instantiation of this "interaction-free" measurement.

This seems to invoke the scientific method, where reproducibility of an experimental result is the core criterion for "objective truth".

But it isn't. First, we are working with two different and incompatible senses of reproducibility. Reproducibility of the naive, classical kind - Anne does an experiment, John does the same experiment at a different time and a different place, and they both record the same result - is nothing but evidence that the experiment is a) so big that quantum fluctuations are washed out and b) invariant under space and time translations. Physicists and chemists have long since dispensed with this naive criterion as constituting "objective truth". The whole quantum mechanical picture is that the experiment is only ever reproducible in the aggregate - a completely different kind of reproducible. That is to say, Anne and John will only agree on two things after conducting the experiment infinitely many times: 1) the types of possible outcomes of the experiment (i.e. the list of definite states), and 2) the probabilities with which these outcomes occur - and both 1) and 2) take as given that the experiment is invariant under space and time translations over a large number of iterations. This is why everybody is satisfied when the Large Hadron Collider reports the existence of the Higgs particle - nobody cares that there is only one Large Hadron Collider, because everyone assumes in QFT that the experimental setup is unaffected by spacetime translations anyway. Instead they only bother "reproducing" the results by independent experiments, i.e. in-principle different ways of measuring the same effect, like ATLAS and CMS, which are literally separate detectors tacked onto the LHC apparatus. In other words, working physicists (at least) have long moved on from the naive notion of reproducibility and are working with something much more constrained.
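The aggregate sense of agreement can be sketched in a few lines (my own toy model; the 0.25/0.75 "Born probabilities", seeds, and names `run_experiment`, `anne`, `john` are illustrative assumptions): two experimenters with independent runs never match outcome-by-outcome, but they converge on the same outcome list and the same frequencies.

```python
import random
from collections import Counter

N = 100_000

def run_experiment(seed, n=N):
    # A toy two-outcome "measurement" with fixed Born probabilities
    # 0.25 (alive) / 0.75 (dead), repeated n times.
    rng = random.Random(seed)
    return Counter("alive" if rng.random() < 0.25 else "dead"
                   for _ in range(n))

# Anne and John run independent realizations of the same experiment.
anne, john = run_experiment(1), run_experiment(2)
```

Their individual run records differ, but the set of definite states and the empirical probabilities agree to within statistical fluctuation - the two things the comment says are all that survives reproduction in the quantum picture.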