Quantum theory cannot consistently describe the use of itself

by avturchin · 2 min read · 20th Sep 2018 · 16 comments


Quantum Mechanics, Decision Theory

As I understand it, a new thought experiment in QM purports to demonstrate that all currently known interpretations of QM are wrong, if we extrapolate the Wigner's friend thought experiment to the next level, where different friends use different theories to interpret each other's actions. This may have implications for agent foundations.

Short description of the experiment from Scientific American: "Frauchiger and Renner have a yet more sophisticated version (see ‘New cats in town’). They have two Wigners, each doing an experiment on a physicist friend whom they keep in a box. One of the two friends (call her Alice) can toss a coin and—using her knowledge of quantum physics—prepare a quantum message to send to the other friend (call him Bob). Using his knowledge of quantum theory, Bob can detect Alice’s message and guess the result of her coin toss. When the two Wigners open their boxes, in some situations they can conclude with certainty which side the coin landed on, Renner says—but occasionally their conclusions are inconsistent. “One says, ‘I’m sure it’s tails,’ and the other one says, ‘I’m sure it’s heads,’” Renner says."

Couple of quotes from the article which are related to agents:

"Suppose that a casino offers the following gambling game. One round of the experiment is played, with the gambler in the role of W, and the roles of F̄, F, and W̄ taken by employees of the casino. The casino promises to pay $1,000 to the gambler if F̄'s random value was r = heads. Conversely, if r = tails, the gambler must pay $500 to the casino. It could now happen that, at the end of the game, w = ok and w̄ = ok, and that a judge can convince herself of this outcome. The gambler and the casino are then likely to end up in a dispute, putting forward arguments taken from Table II.

Gambler: "The outcome w = ok implies, due to statement s^Q_F̄, that r = heads, so the casino must pay me $1,000."

Casino: "The outcome w̄ = ok proves, by virtue of statement s^Q_W̄, that our employee observed z = +1/2. This in turn proves, by virtue of statement s^Q_F, that r = tails, so the gambler must pay us $500."

"Theorem 1 now asserts that (Q) and (S) are already in conflict with the idea that agents can consistently reason about each other, in the sense of (C)."
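The contradiction above can be made quantitative. As a minimal sketch (using the standard two-qubit reduction of the Frauchiger–Renner setup, essentially a "Wigner's-friendified" Hardy state, rather than the paper's full four-lab formalism), the joint state of the coin record and the spin is (|heads, down⟩ + |tails, down⟩ + |tails, up⟩)/√3, and the outside observers W̄ and W each measure their lab in an "ok/fail" basis with |ok⟩ = (|0⟩ − |1⟩)/√2. The disputed outcome (w̄ = ok, w = ok) then occurs with probability 1/12:

```python
import numpy as np

# Two-qubit reduction of the Frauchiger-Renner setup (a Hardy-type state).
# Qubit 1: the coin record in Fbar's lab (heads -> |0>, tails -> |1>).
# Qubit 2: the spin in F's lab (down -> |0>, up -> |1>).
psi = np.array([1, 0, 1, 1], dtype=float) / np.sqrt(3)  # (|h,d> + |t,d> + |t,u>)/sqrt(3)

# Wbar and W measure their respective labs in the "ok/fail" basis.
ok = np.array([1, -1]) / np.sqrt(2)  # |ok> = (|0> - |1>)/sqrt(2)
ok_ok = np.kron(ok, ok)              # joint outcome (w_bar = ok, w = ok)

p_ok_ok = abs(ok_ok @ psi) ** 2
print(p_ok_ok)  # 0.0833... = 1/12: the "paradoxical" round happens 1/12 of the time
```

This is where the dispute in the gambling game comes from: a 1/12 fraction of rounds ends in (ok, ok), and the two chains of reasoning in Table II then give contradictory verdicts about r.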



The result in the paper is "No theory satisfies the three assumptions (Q, C, S)."

The table in the paper says that MWI violates assumption S and is "?" on the other two assumptions.

Unsurprisingly, when I look at the assumptions they all seem wrong or incoherent, depending on how you make the fuzzy statements precise. I'd guess most LW-ers are in a similar place (as are most quantum computing people probably), so this wouldn't really change any minds around here.

(Also their discussion of many-worlds sounds a bit silly. Nowhere in their table of interpretations is the natural one, "the wavefunction is all there is.")

I think this was also recently brought up in the Open Thread.

Mitchell_Porter wrote this:

It's a minor new quantum thought experiment which, as often happens, is being used to promote dumb sensational views about the meaning or implications of quantum mechanics. There's a kind of two-observer entangled system (as in "Hardy's paradox"), and then they say, let's also quantum-erase or recohere one of the observers so that there is no trace of their measurement ever having occurred, and then they get some kind of contradictory expectations with respect to the measurements of the two observers.
Undoing a quantum measurement in the way they propose is akin to squirting perfume from a bottle, then smelling it, and then having all the molecules in the air happening to knock all the perfume molecules back into the bottle, and fluctuations in your brain erasing the memory of the smell. Classically that's possible but utterly unlikely, and exactly the same may be said of undoing a macroscopic quantum measurement, which requires the decohered branches of the wavefunction (corresponding to different measurement outcomes) to then separately evolve so as to converge on the same state and recohere.
Without even analyzing anything in detail, it is hardly surprising that if an observer is subjected to such a highly artificial process, designed to undo a physical event in its totality, then the observer's inferences are going to be skewed somehow. So, you do all this and the observers differ in their quantum predictions somehow. In their first interpretation (2016), Frauchiger and Renner said that this proves many worlds. Now (2018), they say it proves that quantum mechanics can't describe itself. Maybe if they try a third time, they'll hit on the idea that one of the observers is just wrong.

The thought experiment involves observers being in a coherent superposition. But I'm now not 100% sure that it involves actual quantum erasure; I was relying on other people's descriptions. I'm hoping this will be cleared up without my having to plough through the paper myself.

Anyway, LW may appreciate this analysis which actually quotes HPMOR.

The Renner-Frauchiger paper has been refuted, but usually with a lot of math. So I tried to write the simplest possible explanation, here:


This paper is wrong.

The error is simple. The overall quantum system is also simple, but obfuscated by the authors spelling out all the steps of the setup.

As some have pointed out here, it's not a system that we could practically implement with people in "labs", any more than we could implement Schrödinger's cat. But we can implement this system using quantum equipment. In fact, Hardy's Paradox refers to a mathematically equivalent system which has been built (and works just as QM predicts).

So it behooves us to identify the error in the authors' reasoning. It is this:

At the very beginning of the setup, they have agent F̄ read a quantum "coin". This entangles her with the "coin" and puts her into a superposition of "tails F̄" and "heads F̄".

The agent "tails F̄" then calculates the results of the experiment AS IF SHE WAS THE ONLY COPY OF HERSELF. She ignores the contribution of her twin, "heads F̄". But from the point of view of an outside observer, they both contribute information to the qubit that they jointly emit.

The entire chain of reasoning in the paper is based on this faulty assumption and is therefore wrong.
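The point about the "tails" copy ignoring her twin can be sketched numerically. This is an illustrative toy computation, not the paper's notation: using the same two-qubit reduction of the state, the "tails F̄" branch taken alone predicts that W can never see "ok", while the full superposition gives W's "ok" outcome probability 1/6.

```python
import numpy as np

# Two-qubit reduction: qubit 1 = coin record (heads |0>, tails |1>),
# qubit 2 = spin (down |0>, up |1>).
psi = np.array([1, 0, 1, 1], dtype=float) / np.sqrt(3)  # full state: (|h,d> + |t,d> + |t,u>)/sqrt(3)

# What "tails Fbar" sees if she reasons as if she were the only copy:
tails_branch = np.array([0, 0, 1, 1], dtype=float) / np.sqrt(2)  # |t>(|d> + |u>)/sqrt(2)

ok = np.array([1, -1]) / np.sqrt(2)            # W's |ok> on the spin qubit
proj = np.kron(np.eye(2), np.outer(ok, ok))    # project qubit 2 onto |ok>

p_ok_branch = tails_branch @ proj @ tails_branch  # branch-only prediction: 0
p_ok_full = psi @ proj @ psi                      # outside observer's prediction: 1/6
print(p_ok_branch, p_ok_full)
```

The branch-conditioned prediction (w = ok is impossible) and the full-state prediction (probability 1/6) disagree precisely because the two copies of F̄ jointly determine the amplitudes of the qubit they emit.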

I agree with Scott Aaronson's objections to the paper. I think an inconsistency can be shown with a simpler argument:

Suppose two agents, each of which can be in two states, are prepared as in the paper and in Aaronson's post.

Using the reasoning of the paper, if agent A finds it's in the state, it can deduce that B is in the state, so it can deduce that B is certain that A is in the state, so it can be certain that it's in the state, so it's not actually in the state as it sees itself to be. Then, because A is certain that it's in the state, it can deduce that B is in a more complicated state. This can go on infinitely. The problem is that in the first step A assumes it's certain of something when it knows it's in a superposition, and in the paper the details of that superposition matter for the final measurement.

So, if a Schrödinger's cat knows that it is a Schrödinger's cat, should it assume that it is not alive, but in a superposition of the dead and alive states?

I'd say that if a superintelligent cat is trying to predict the outcome of someone's measurement of it in a complicated basis, it'll only be more accurate if it uses information about its 'true' state as the observer sees it.

Feels like there has to be something wrong with the paper. I don't have the knowledge to analyze it myself, but I read through the paper until the methods section and they don't discuss much beyond the math. It's unclear to me how they're arriving at a conclusion where different things happened from different perspectives, and particularly what percent of the time that would happen.

If someone familiar with the math could explain what the probability of each step is I think it could be a lot simpler to follow.


Here's a paper claiming to identify the error. This is enough; I'm convinced the original paper is just mistaken.

The paper was published in Nature Communications, and its preprint was discussed widely for two years, so there are probably no flaws that could be easily picked up.


"The conceptual experiment has been debated with gusto in physics circles for more than two years — and has left most researchers stumped, even in a field accustomed to weird concepts. “I think this is a whole new level of weirdness,” says Matthew Leifer, a theoretical physicist at Chapman University in Orange, California.

The authors, Daniela Frauchiger and Renato Renner of the Swiss Federal Institute of Technology (ETH) in Zurich, posted their first version of the argument online in April 2016. The final paper appears in Nature Communications on 18 September."

I identified one paper, and it cites another that also claims this is flawed. I don't see a reason to believe the original paper over those.

https://motls.blogspot.com/2018/09/frauchiger-renner-qm-is-inconsistent.html calls BS, now we just need Scott A to do the same and I'll be convinced


Since that doesn't seem to have been auto-linkfied, here's an actual link: https://www.scottaaronson.com/blog/?p=3975 and a few extracts to help readers judge whether they want to follow the link:

I enjoyed figuring out exactly where I get off Frauchiger and Renner’s train—since I do get off their train.


I reject an assumption that Frauchiger and Renner never formalize.  That assumption is, basically: “it makes sense to chain together statements that involve superposed agents measuring each other’s brains in different incompatible bases, as if the statements still referred to a world where these measurements weren’t being done.”


The first thing to understand about Frauchiger and Renner’s argument is that, as they acknowledge, it’s not entirely new.  As Preskill helped me realize, the argument can be understood as the “Wigner’s-friendification” of Hardy’s Paradox.


I don’t accept that we can take knowledge inferences that would hold in a hypothetical world where |ψ〉 remained unmeasured, with a particular “branching structure” (as a Many-Worlder might put it), and extend them to the situation where Alice performs a rather violent measurement on |ψ〉 that changes the branching structure by scrambling Charlie’s brain.