Comments

I'd say that if a superintelligent cat is trying to predict the outcome of someone's measurement of it in a complicated basis, it will only be more accurate if it uses information about its 'true' state as the observer sees it.
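
A toy calculation of the sort of thing I mean (one qubit standing in for the cat; the particular states are just my choice for illustration):

```python
import numpy as np

# Toy version of the claim: one qubit stands in for the cat, and the
# "complicated basis" is {|+>, |->}.  The states here are just my choice
# for illustration.
zero = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

# The outside observer describes the cat as |+>.  Using that description,
# the cat predicts the '+' outcome with certainty:
print(abs(plus.conj() @ plus) ** 2)   # 1.0

# If the cat instead treats itself as definitely |0> (its view on one
# branch), it predicts '+' only half the time and does worse:
print(abs(plus.conj() @ zero) ** 2)   # 0.5
```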

I agree with Scott Aaronson's objections to the paper. I think an inconsistency can be shown with a simpler argument:

Suppose two agents, each of which can be in two states, are prepared as in the paper and Aaronson's post.

Using the reasoning of the paper: if agent A finds itself in one state, it can deduce that B is in a corresponding state, so it can deduce that B is certain that A is in some other state, so A can be certain that it's in that other state, and hence that it's not actually in the state it sees itself to be in. Then, because A is certain it's in that other state, it can deduce that B is in a more complicated state, and this can go on indefinitely. The problem is that in the very first step A treats itself as certain of something when it knows it's in a superposition, and in the paper the details of that superposition matter for the final measurement.
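
To make the problem concrete, here's a rough numerical sketch under my own assumptions: I use a Hardy-type two-qubit state in place of the agents (the exact state is my choice, not necessarily the one in the paper or in Aaronson's post). Each step of the chain looks like a certainty on its own, but treating those certainties as unconditional facts would forbid a joint outcome that the full state assigns probability 1/12.

```python
import numpy as np

# A Hardy-type two-qubit state standing in for the two agents' records
# (the exact state is my choice of illustration, not taken from the paper):
# |psi> = (|00> + |01> + |10>) / sqrt(3)   -- note there is no |11> term.
# Index convention: the amplitude of |ab> sits at position 2*a + b.
psi = np.array([1, 1, 1, 0], dtype=complex) / np.sqrt(3)

zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)
plus = (zero + one) / np.sqrt(2)
minus = (zero - one) / np.sqrt(2)

def conditional_state_of_B(bra_on_A):
    """State of qubit B after projecting qubit A onto <bra_on_A|."""
    v = np.kron(bra_on_A.conj(), np.eye(2)) @ psi
    return v / np.linalg.norm(v)

# Link 1: if A's X-measurement gives '-', then B is exactly |1>.
print(abs(one.conj() @ conditional_state_of_B(minus)) ** 2)   # 1.0

# Link 2 (no |11> term): if B is |1>, then A is |0>.
# Link 3: if A is |0>, then B's X-measurement gives '+' with certainty.
print(abs(plus.conj() @ conditional_state_of_B(zero)) ** 2)   # 1.0

# Chaining these as if each certainty were an unconditional fact says the
# joint X-basis outcome (-, -) is impossible.  The full state disagrees:
print(abs(np.kron(minus, minus).conj() @ psi) ** 2)            # 1/12, not 0
```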