According to Ingredients of Timeless Decision Theory, when you set up a factored causal graph for TDT, "You treat your choice as determining the result of the logical computation, and hence all instantiations of that computation, and all instantiations of other computations dependent on that logical computation", where "the logical computation" refers to the TDT-prescribed argmax computation (call it C) that takes all your observations of the world (from which you can construct the factored causal graph) as input, and outputs an action in the present situation.
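Schematically, and only as a hedged gloss on that prescription (my notation, not the paper's), C can be written as

$$
C(\text{observations}) \;=\; \operatorname*{argmax}_{a \in \mathcal{A}} \;\sum_{o} U(o)\, P\big(o \,\big|\, \mathrm{do}(\text{output}(C) = a)\big),
$$

where the do-surgery is performed on the node of the factored causal graph that represents C's output, so that all instantiations of C, and of computations dependent on it, are set to $a$ at once.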
I asked Eliezer to clarify what it means for another logical computation D to be either the same as C, or "dependent on" C, for purposes of the TDT algorithm. Eliezer answered:
For D to depend on C means that if C has various logical outputs, we can infer new logical facts about D's logical output in at least some cases, relative to our current state of non-omniscient logical knowledge. A nice form of this is when supposing that C has a given exact logical output (not yet known to be impossible) enables us to infer D's exact logical output, and this is true for every possible logical output of C. Non-nice forms would be harder to handle in the decision theory but we might perhaps fall back on probability distributions over D.
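One way to write the "nice form" of this dependency (my notation, not Eliezer's): D depends on C iff there is a function f such that

$$
\forall c \in \mathcal{O}_C:\quad K \,\wedge\, [\,C \text{ outputs } c\,] \;\vdash\; [\,D \text{ outputs } f(c)\,],
$$

where K is our current (non-omniscient) state of logical knowledge and $\mathcal{O}_C$ is the set of C's possible outputs not yet known, under K, to be impossible.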
I replied as follows (which Eliezer suggested I post here).
If that's what TDT means by the logical dependency between Platonic computations, then TDT may have a serious flaw.
Consider the following version of the transparent-boxes scenario. The predictor has an infallible simulator D that predicts whether I one-box here [EDIT: if I see $1M]. The predictor also has a module E that computes whether the ith digit of pi is zero, for some ridiculously large value of i that the predictor randomly selects. I'll be told the value of i, but the best I can do is assign an a priori probability of .1 that the specified digit is zero.
The predictor puts $1M in the large box iff (D xor E) is true. (And that's explained to me, of course.)
So let's say I'm confronted with this scenario, and I see $1M in the large box.
The flaw then is that E (as well as D) meets your criterion for "depending on" my decision computation C. I'm initially unsure what C and E output. But if C in fact one-boxes here, then I can infer that E outputs False (or else the large box has to be empty, which it isn't). Similarly, if C in fact two-boxes here, then I can infer that E outputs True. (Or equivalently, a third-party observer could soundly draw either of those inferences.)
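To make that inference concrete, here is a minimal sketch (the names and framing are mine, purely for illustration) that enumerates the possibilities left open by seeing the $1M, and shows that fixing C's output pins down E's output under the inference criterion above:

```python
from itertools import product

# Illustrative only (my framing): c = does C one-box here?; e = is the i-th
# digit of pi zero?  D is an infallible simulation of C in this situation,
# so d == c, and the predictor put $1M in the large box iff (d xor e).

def worlds_consistent_with(observed_million):
    worlds = []
    for c, e in product([True, False], repeat=2):
        d = c                      # infallible simulator
        million = (d != e)         # (D xor E)
        if million == observed_million:
            worlds.append({"C one-boxes": c, "digit is zero": e})
    return worlds

print(worlds_consistent_with(True))
# [{'C one-boxes': True, 'digit is zero': False},
#  {'C one-boxes': False, 'digit is zero': True}]
# Given the observed $1M, "C one-boxes" entails E = False and "C two-boxes"
# entails E = True: E's output is inferable from C's output, and hence E
# "depends on" C in the sense above, even though C cannot influence a digit of pi.
```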
So E does indeed "depend on" C, in the particular sense you've specified. Thus, if I happen to have a strong enough preference that E output True, then TDT (as currently formulated) will tell me to two-box for the sake of that goal. But that's the wrong decision, of course. In reality, I have no choice about the specified digit of pi.
What's going on, it seems to me, is that the kind of logical/Platonic "dependency" that TDT would need to invoke here is this: that E's output be counterfactually entailed by C's output (which it isn't, in this case [see footnote]), rather than (as you've specified) merely inferable from C's output (which indeed it is, in this case). That's bad news, because distinguishing what my action does or does not counterfactually entail (as opposed to what it implies, causes, gives evidence for, etc.) is the original full-blown problem that TDT's prescribed decision-computation is meant to solve. So it may turn out that in order to proceed with that very computation (specifically, in order to ascertain which other Platonic computations "depend on" the decision computation C), you already need to (somehow) know the answer that the computation is trying to provide.
--Gary
[footnote] Because if-counterfactually C were to two-box, then (contrary to fact) the large box would (probably) be empty, circumventing the inference about E.
...reasoning under logical uncertainty using limited computing power... is another huge unsolved open problem of AI. Human mathematicians had this whole elaborate way of believing that the Taniyama Conjecture implied Fermat's Last Theorem at a time when they didn't know whether the Taniyama Conjecture was true or false; and we seem to treat this sort of implication in a rather different way than '2=1 implies FLT', even though the material implication is equally valid.
I don't follow that. The sense of implication in which mathematicians established that TC implies FLT (before knowing if TC was true) is precisely material/logical implication: they showed ~(TC & ~FLT). And similarly, we can prove ~(3SAT-in-P & ~(P=NP)), etc. There's no need here to construct (or magically conjure) a whole alternative inference system for reasoning under logical uncertainty.
So if the inference you speak of (when specifying what it means for D to "depend on" C) is the same kind as was used in establishing TC=>FLT, then it's just material implication, which (as argued above) leads TDT to give wrong answers. Or if we substitute counterfactual entailment for material implication, then TDT becomes circular (question-begging). Or if you have in mind some third alternative, I'm afraid I don't understand what it might be.
EDIT: The rules of the original transparent-boxes problem (as specified in Good and Real) are: the predictor conducts a simulation that tentatively presumes there will be $1M in the large box, and then puts $1M in the box (for real) iff that simulation showed one-boxing. Thus, if the large box turns out to be empty, there is no requirement for that to be predictive of the agent's choice under those circumstances. The present variant is the same, except that (D xor E) determines the $1M, instead of just D. (Sorry, I should have said this to begin with, instead of assuming it as background knowledge.)
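For concreteness, here is a hedged sketch of the predictor's procedure in this variant; the function names are mine, and `simulate_agent` stands in for the infallible simulator D:

```python
def predictor_fills_box(simulate_agent, digit_is_zero):
    """Hedged sketch (my reconstruction, not a quoted specification) of the
    predictor's procedure in this variant."""
    # D: run the simulation under the tentative presumption that the large box
    # contains $1M, exactly as in the original Good and Real setup.
    d = simulate_agent(sees_million=True)
    # E: whether the i-th digit of pi is zero, for the predictor's randomly
    # chosen, ridiculously large i (passed in here as a boolean, since actually
    # computing such a digit is beside the point).
    e = digit_is_zero
    # The $1M goes into the large box iff (D xor E).
    return d != e

# Example: an agent that one-boxes whenever it sees the $1M, on an occasion when
# the chosen digit of pi happens not to be zero, gets the $1M.
one_boxer = lambda sees_million: True
print(predictor_fills_box(one_boxer, digit_is_zero=False))  # True
```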
And this was my reply:
This is an unfinished part of the theory that I've also thought about, though your example puts it very crisply (you might consider posting it to LW?)
My current thoughts on resolution tend to see two main avenues:
1) Construct a full-blown DAG of math and Platonic facts, an account of which mathematical facts make other mathematical facts true, so that we can compute mathematical counterfactuals.
2) Treat differently mathematical knowledge that we learn by genuinely mathematical reasoning and by physical observation. In this case we... (read more)
Logical uncertainty has always been more difficult to deal with than physical uncertainty; the problem with logical uncertainty is that if you analyze it enough, it goes away. I've never seen any really good treatment of logical uncertainty.
But if we depart from TDT for a moment, then it does seem clear that we need to have causelike nodes corresponding to logical uncertainty in a DAG which describes our probability distribution. There is no other way you can completely observe the state of a calculator sent to Mars and a calculator sent to Venus, and yet remain uncertain of their outcomes yet believe the outcomes are correlated. And if you talk about error-prone calculators, two of which say 17 and one of which says 18, and you deduce that the "Platonic answer" was probably in fact 17, you can see that logical uncertainty behaves in an even more causelike way than this.
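[A hedged illustration of the calculator point, mine rather than part of the quoted reply: a latent "Platonic answer" node makes two error-prone calculator readings correlated, so that observing one reading shifts our belief about the other. The prior and error rate below are made up.]

```python
prior = {17: 0.5, 18: 0.5}   # prior over the true (Platonic) answer
error = 0.1                   # chance that a calculator misreports it

def likelihood(reading, truth):
    return 1 - error if reading == truth else error

def posterior(reading):
    # Bayes' rule over the two hypotheses about the Platonic answer.
    unnorm = {t: prior[t] * likelihood(reading, t) for t in prior}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

post = posterior(17)  # the calculator on Mars shows 17
p_venus_17 = sum(post[t] * likelihood(17, t) for t in post)

print(post)         # ≈ {17: 0.9, 18: 0.1} -- belief about the logical fact shifts
print(p_venus_17)   # ≈ 0.82 > 0.5 -- the Venus reading is now expected to agree
```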
So, going back to TDT, my hope is that there's a neat set of rules for factoring our logical uncertainty in our causal beliefs, and that these same rules also resolve the sort of situation that you describe.
If you consider the notion of the correlated error-prone calculators, two returning 17 and one re... (read more)
I think this problem is based (at least in part) on an incoherence in the basic transparent box variant of Newcomb's problem.
If the subject will two-box upon seeing the million dollars in the big box, but will one-box upon seeing the big box empty, then there is no action Omega can take that satisfies the conditions of the problem.
In this variant that introduces the digit of pi, there is an unknown bit such that, whatever strategy the subject takes, there is a value of that bit that allows Omega an action consistent with the conditions. Howev... (read more)
I'm not at all clear what the problem is, but it seems to be semantic. It's disturbing that this post can get 17 upvotes with almost no (2?) comments actually referring to what you're saying, indicating that no one else here really gets the point either.
It seems you have an issue with the word 'dependent' and the definition that Eliezer provided. Under that definition, E (the i-th digit of pi) would be dependent on C (our decision to one-box or two-box) if we two-boxed and got a million dollars, because then we would know that E = 0, and we would not have kno... (read more)
In UDT1, I would model this problem using the following world program. (For those not familiar with programming convention, 0=False, and 1=True.)
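A minimal sketch of such a world program (my own reconstruction under the usual assumptions, e.g. $1,000 in the small box; not necessarily the commenter's original code):

```python
# Sketch of a UDT1-style world program for this scenario. S maps an observation
# to 1 (one-box) or 0 (two-box); E is 1 iff the i-th digit of pi is zero, passed
# in so we can average over our 0.1/0.9 uncertainty about it.

def P(S, E):
    D = S("box contains $1M")      # the predictor's infallible simulation of S
    million_in_box = D ^ E         # $1M goes in the large box iff (D xor E)
    if million_in_box:
        one_box = S("box contains $1M")
        return 1_000_000 if one_box else 1_001_000
    else:
        one_box = S("box is empty")
        return 0 if one_box else 1_000

def expected_payout(S, p_digit_is_zero=0.1):
    return p_digit_is_zero * P(S, 1) + (1 - p_digit_is_zero) * P(S, 0)

# Example: a strategy that one-boxes on seeing $1M and two-boxes otherwise.
S1 = lambda obs: 1 if obs == "box contains $1M" else 0
print(expected_payout(S1))  # ≈ 0.1 * 1000 + 0.9 * 1000000 = 900100
```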
We then ask, what function S maximizes the expected payout at the end of P? When S sees "box is empty" clearly it ... (read more)
TDT is Timeless Decision Theory. It wouldn't be bad to say that in the first paragraph somewhere.
EDIT: Excellent. Thanks.
I suggest adding a link to this discussion to the TDT wiki entry.
So let's say I'm confronted with this scenario, and I see $1M in the large box.
So let's get the facts:
1) There is $1M in the large box, and thus (D xor E) = true
2) I know that I am a one-boxing agent
3) Thus D = "one-boxing"
4) Thus I know D ≠ E, since the xor is true
5) I one-box and live happily with $1,000,000
When Omega simulates me in the same scenario, and without lying, there is no problem.
It seems like much of the mind-gaming is blocked simply by precommitting to choices.
For the red-and-green problem, just toss a coin (or whatever source of randomness you have).
We could make an ad-hoc repair to TDT by saying that you're not allowed to infer from a logical fact to another logical fact going via a physical (empirical) fact.
In this case, the mistake happened because we went from "My decision algorithm's output" (Logical) to "Money in box" (Physical) to "Digits of pi" (Logical), where the last step involved following an arrow on a causal graph backwards: the digits-of-pi node has a causal arrow going into the "money in box" node.
The TDT dependency inference could be implemented by... (read more)
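One way to picture that restriction (purely my own sketch, with made-up node labels, not a quotation of the proposal): tag each node as logical or physical, and refuse to count a logical node as "dependent on" C when every inference path from C to it passes through a physical node.

```python
from collections import deque

# Made-up labels: C's output and the digit of pi are logical nodes; the money
# in the box is a physical node.
edges = [("C_output", "money_in_box"), ("digit_of_pi", "money_in_box")]
node_type = {"C_output": "logical",
             "digit_of_pi": "logical",
             "money_in_box": "physical"}

def neighbors(node):
    # Treat the graph as undirected for inference purposes.
    return [b for a, b in edges if a == node] + [a for a, b in edges if b == node]

def depends_on(d, c):
    """True iff some inference path from c to d stays on logical nodes."""
    seen, queue = {c}, deque([c])
    while queue:
        node = queue.popleft()
        if node == d:
            return True
        for n in neighbors(node):
            if n not in seen and node_type[n] == "logical":
                seen.add(n)
                queue.append(n)
    return False

# The only path from C_output to digit_of_pi runs through the physical
# "money_in_box" node, so the repaired criterion refuses the dependency:
print(depends_on("digit_of_pi", "C_output"))  # False
```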
Does ADT solve this particular issue?
I'm trying to get a grip on what this post is about, but I don't know enough of the literature about Newcomb's Problem to be sure what is referred to here by "the transparent-boxes scenario". Can someone who knows briefly recap the baseline scenario of which this is a version?
I have a question that is probably stupid and/or already discussed in the comments. But I don't have time to read all the comments, so, if someone nonetheless would kindly explain why I'm confused, I would be grateful.
The OP writes
... (read more)
Let:
When:
- Omega fails.
- Omega chooses M or !M. I get $1M or 0.
- Omega chooses M=false. I get $0.1.
- Omega chooses M=true. I get $1M.
- M chooses either M or !M. I get either $1.1 or $0.1, depending on Omega's whims.
- Omega has no option. I make Omega look like a fool.
So, depending on how 'Omega ... (read more)
First thought: We can get out of this dilemma by noting that the output of C also causes the predictor to choose a suitable i, so that saying we cause the ith digit of pi to have a certain value is glossing over the fact that we actually caused the i[C]th digit of pi to have a certain value.
In the setup in question, D goes into an infinite loop (since in the general case it must call a copy of C, but because the box is transparent, C takes as input the output of D).
In Eliezer's similar red/green problem, if the simulation is fully deterministic and the initial conditions are the same, then the simulator must be lying, because he must've told the same thing to the first instance, at a time when there had been no previous copy. (If those conditions do not hold, then the solution is to just flip a coin and take your 50-50 chance.)
Are these still problems when you change them to fix the inconsistencies?