Consider the following thought experiment ("Counterfactual Calculation"):
You are taking a test, which includes the question: "Is Q an even number?", where Q is a complicated formula that resolves to some natural number. There is no a priori reason for you to expect Q to be more likely even than odd, and the formula is too complicated to compute the number (or its parity) on your own. Fortunately, you have an old calculator, which you can use to type in the formula and observe the parity of the result on its display. This calculator is not very reliable: it is only correct 99% of the time, and its errors are stochastic (or even involve quantum randomness), so for any given problem statement it is probably correct but has a chance of making an error. You type in the formula and observe the result (it's "even"). You're now 99% sure that the answer is "even", so naturally you write that down on the test sheet.
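The 99% figure is just a Bayes update from a uniform prior over Q's parity; a minimal sketch, with the numbers taken from the setup:

```python
# Bayes update for the test-taker: uniform prior over Q's parity,
# calculator correct 99% of the time, display reads "even".
prior_even = 0.5
p_display_even_given_even = 0.99  # calculator correct
p_display_even_given_odd = 0.01   # calculator wrong

posterior_even = (prior_even * p_display_even_given_even) / (
    prior_even * p_display_even_given_even
    + (1 - prior_even) * p_display_even_given_odd
)
print(posterior_even)  # → 0.99
```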
Then, unsurprisingly, Omega (a trustworthy all-powerful device) appears and presents you with the following decision. Consider the counterfactual where the calculator displayed "odd" instead of "even", after you've just typed in the (same) formula Q, on the same occasion (i.e. all possible worlds that fit this description). The counterfactual diverges only in the calculator showing a different result (and what follows). You are to determine what is to be written (by Omega, at your command) as the final answer to the same question on the test sheet in that counterfactual (the actions of your counterfactual self who takes the test in the counterfactual are ignored).
Should you write "even" on the counterfactual test sheet, given that you're 99% sure that the answer is "even"?
This thought experiment contrasts "logical knowledge" (the usual kind) and "observational knowledge" (what you get when you look at a calculator display). The kind of knowledge you obtain by observing things is not like the kind of knowledge you obtain by thinking for yourself. What is the difference (if there actually is a difference)? Why does observational knowledge work in your own possible worlds, but not in counterfactuals? How much of logical knowledge is like observational knowledge, and what are the conditions of its applicability? Can things that we consider "logical knowledge" fail to apply to some counterfactuals?
(Updateless analysis would say "observational knowledge is not knowledge", or that it's knowledge only in the sense that you should bet a certain way. This doesn't analyze the intuition of knowing the result after looking at a calculator display. There is a very salient sense in which the result becomes known, and the purpose of this thought experiment is to explore some of the counterintuitive properties of such knowledge.)
We are in the world where the calculator displays Even, and we are 99% sure it is the world where the calculator has not made an error. This is Even World, Right Calculator. Now for the counterfactual worlds:
All Omega told us is that in the counterfactual world we are deciding for, the calculator shows Odd. We can therefore eliminate Odd World, Wrong Calculator (there the display would read Even). Answering the question is, in essence, deciding which world we think we're looking at.
So, in the counterfactual world, we're either looking at Even World, Wrong Calculator or Odd World, Right Calculator. We have an equal prior for the world being Odd or Even - or, we think the number of Odd Worlds is equal to the number of Even Worlds. We know the ratio of Wrong Calculator worlds to Right Calculator worlds (1:99). This is, therefore, 99% evidence for Odd World. The correct decision for the counterfactual you in that world is to decide Odd World. The correct decision for you?
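The world-counting above can be made explicit; a minimal sketch, assuming the 50/50 prior and the 1:99 wrong-to-right calculator ratio from the setup:

```python
# Worlds consistent with the counterfactual display "odd",
# weighted by prior on Q's parity times calculator accuracy.
weights = {
    ("even", "wrong"): 0.5 * 0.01,  # Q even, calculator erred, showing "odd"
    ("odd", "right"): 0.5 * 0.99,   # Q odd, calculator correct
}
total = sum(weights.values())
p_odd_world = weights[("odd", "right")] / total
print(p_odd_world)  # → 0.99
```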
Ignoring Bostrom's book on how to deal with observer selection effects (did Omega go looking for a Wrong Calculator wo... (read more)
Suppose you believe that 2+2=4, with the caveat that you are aware that there is some negligible but non-zero probability that The Dark Lords of the Matrix have tricked you into believing that.
Omega appears and tells you that in an alternate reality, you believe that 2+2=3 with the same amount of credence, and asks whether this changes your own amount of credence that 2+2=4.
The answer is the same. You ask Omega what rules he's playing by.
If he says "I'm visiting you in every reality. In each reality, I'm selecting a counterfactual where your answe... (read more)
In what way, if any, is this problem importantly different from the following "less mathy" problem?... (read more)
I suspect that the question sounds confusing because it conflates different counterfactual worlds. Where exactly does the world presented to you by Omega diverge from the actual world, at what point does the intervention take place? If Omega only changes the calculator display, you should say "even". If it fixes an error in the calculator's inner workings, you should say "odd".
I take out a pen and some paper, and work out what the answer really is. ;)
What does it even mean to write an answer on a counterfactual test sheet?
Is it correct to interpret this as "if-counterfactually the calculator had shown odd, Omega would have shown up and (somehow knowing what choice you would have made in the "even" world) altered the test answer as you specify"?
Viewing this problem from before you use the calculator, your distribution is P(even) = P(odd) = 0.5. There are various rules Omega could be playing by:
It does not work in this counterfactual. Omega could have specified the counterfactual such that the observational knowledge in the counterfactual was as usable as that in the 'real' world. (Most obviously by flat out saying it is so.)
The reason we cannot use the knowledge from this particular counterfactual is that we have no knowledge about how the counterfactual was selected. The 99% figure (as far as we know) is not at all relevant to how likely it is that ... (read more)
This seems easy. Q is most likely even, so in the counterfactual the calculator is most likely in error, and we prefer Omega to write "even". What am I missing?
Consider the following thought experiment
"Why does observational knowledge work in your own possible worlds, but not in counterfactuals?" is the key question here. Perhaps it's easier to parse like this: "Why isn'... (read more)
The thing is, the other world was chosen specifically BECAUSE it had the opposite answer, not randomly like the world you're in.
This is the intuition I find helpful: your decision only matters when the calculator shows odd. There is a 99% chance your decision matters if Q is odd and a 1% chance your decision matters if it's even. Therefore, the fact that you're deciding for the worlds where the display reads odd is evidence that, in those worlds, Q is odd.
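A sketch of the likelihood ratio this intuition relies on, with the numbers assumed from the setup:

```python
# Your choice only takes effect in the counterfactual where the
# display reads "odd". How likely is that display, per parity of Q?
p_shows_odd_given_q_odd = 0.99   # calculator correct
p_shows_odd_given_q_even = 0.01  # calculator wrong

# Conditioning on "the decision matters" (display shows odd)
# favors Q being odd by 99:1.
likelihood_ratio = p_shows_odd_given_q_odd / p_shows_odd_given_q_even
print(round(likelihood_ratio))  # → 99
```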
In this scenario, we are the counterfactual. The calculator really showed up odd, not even.
Once your calculator returns the result "even", you assign 99% probability to the condition "Q is even". Changing that opinion would require strong bayesian evidence. In this case, we're considering hypothetical bayesian evidence provided by Omega. Based on our prior probabilities, we would say that if Omega randomly chose an Everett branch (I'm going with the quantum calculator, just because it makes vocabulary a bit easier), 99% of the time Omega would choose another Everett branch in which the calculator also read "even". Ho... (read more)
Here's a possible argument.
Assume what you do in the counterfactual is equivalent to what you do IRL, with even/odd swapped. Then TDT says that choosing in the counterfactual ALSO chooses for you in the real world. So you should choose odd there so that you can choose even in the real world and get it right.
Is this an attempt to replicate in UDT the problems of TDT?
I wonder if the question is sufficiently specified. Naïvely, I would say that Omega will write down "even" with p=0.99, simply because Omega appearing and telling me "consider the counterfactual" is not useful evidence for anything. P(Omega appears|Q even) and P(Omega appears|Q odd) are hard to specify, but I don't see a reason to assume that the first probability is greater than the second one, or vice versa.
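A sketch of the point that Omega's appearance carries no evidence when those two likelihoods are equal; the value for the probability that Omega shows up is an arbitrary assumption, chosen only to show that it cancels:

```python
# If Omega's appearance is equally likely under either parity,
# it carries no evidence and the posterior stays at 0.99.
p_even = 0.99                 # credence after seeing "even" on the display
p_odd = 1 - p_even
p_omega_given_even = 0.3      # assumed equal likelihoods (value arbitrary)
p_omega_given_odd = 0.3

posterior = (p_even * p_omega_given_even) / (
    p_even * p_omega_given_even + p_odd * p_omega_given_odd
)
print(posterior)  # stays ≈ 0.99: the equal likelihoods cancel
```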
Of course, the above holds under assumption that all counterfactual worlds have the same value of Q. I am also not sure how to interpret ... (read more)
My understanding is that the question is about how to do counterfactual math. There is no essential distinction between the two types (observational vs. logical) of knowledge, they are "limiting cases" of each other (you always only observe your mental reasoning, or calculator outputs, or publications on one end; Laplace's demon on the other end).
ETA: my thinking took a U-turn from setting the calculator value without severing the Q->calculator correlation (i.e. treating the calculator as an observed variable with a fictional observation), to set... (read more)
This consists of just reapplying the algorithm or re-reading the previous paragraph with "even" replaced with "odd", so the answer should be 99% odd.
This is based on my understanding of counterfactual as considering what you would do in some hypothetical alternate branch 'what-if'.
I'm not sure what's supposed to be tricky about this. It's trading off a 99% chance of doing better in 1% of all worlds (those where the calculator malfunctioned) against a 1% chance of doing worse in the other 99%. Being risk averse, I prefer being wrong in some small fraction of the worlds to an equally small chance of being wrong in all of them, so I'd want Omega to write "odd" (or, even better, leave it up to the counterfactual me, which should have the same effect but feels better).
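The tradeoff can be checked by brute force; a Monte Carlo sketch over the prior worlds (sample count and seed are arbitrary):

```python
import random

# Simulate worlds from the prior: parity is 50/50, the calculator
# errs 1% of the time. Among worlds where the display reads "odd"
# (the only worlds the counterfactual command affects), count how
# often the policy "write odd" gets the answer right.
random.seed(0)
N = 100_000
right = 0
total = 0
for _ in range(N):
    q_even = random.random() < 0.5       # 50/50 prior on Q's parity
    correct = random.random() < 0.99     # calculator accuracy
    shows_even = q_even if correct else not q_even
    if not shows_even:                   # counterfactual branch: display "odd"
        total += 1
        right += (not q_even)            # policy: write "odd"
print(right / total)  # ≈ 0.99
```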
You are Sokaling us, right?