A while ago, drnickbone showed in a LessWrong post that for any decision theory, there is a "fair" decision problem on which that decision theory fails---that is, gets a low reward, even though a simple alternative decision theory gets a high reward. Although drnickbone's argument was given in words, it's intuitively clear that it could be formalized as math---except, what exactly do "decision theory" and "decision problem" mean in this context?
In this post, I'll propose definitions of these notions in the context of provability logic, as used in Vladimir Slepnev's modal version of UDT, and give a formalization of drnickbone's result. This implies that no single modal decision theory, including UDT, can behave optimally on all "fair" decision problems. But in my next post, I'll show that modal UDT is nevertheless optimal in a weaker sense: For every provably "fair" modal decision problem, there is a finite set of true axioms that can be added to PA, such that modal UDT using this extension of PA performs optimally on that decision problem. (This is a modal logic version of a result which Kenny Easwaran, Nate Soares and I recently showed in the context of bounded proof length, and which hasn't been published yet.)
Prerequisite: A primer on provability logic.
Here's an example of how we usually talk about decision problems (Vladimir Slepnev's formalization of Newcomb's problem from "A model of UDT with a halting oracle"):
def U():
    # Fill boxes, according to predicted action.
    box1 = 1000
    box2 = 1000000 if (A() == 1) else 0
    # Compute reward, based on actual action.
    return box2 if (A() == 1) else (box1 + box2)
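To make the payoffs concrete, here is a minimal runnable sketch (my own toy version, not part of Slepnev's formalization): it sidesteps the quining and prediction machinery by passing the agent to the universe as an ordinary function and assuming the "prediction" is simply another call to it.

```python
def newcomb(A):
    # Toy model of U(): the prediction is just another call to the agent.
    # Fill boxes, according to predicted action.
    box1 = 1000
    box2 = 1000000 if A() == 1 else 0
    # Compute reward, based on actual action:
    # action 1 takes only box2, action 2 takes both boxes.
    return box2 if A() == 1 else box1 + box2

one_boxer = lambda: 1
two_boxer = lambda: 2
print(newcomb(one_boxer))  # 1000000
print(newcomb(two_boxer))  # 1000
```

As expected, the one-boxer walks away with the million, while the two-boxer finds the second box empty.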
In other words, we have a universe program, U(), which makes calls to an agent program, A().
The agent program, in turn, uses quining to refer to the source code of the universe program. But while the universe is allowed to call the agent to figure out how it will behave, the agent can't just call the universe---or we'll have an infinite regress. Instead, we generally consider agents that use abstract reasoning in the form of proofs to reason about the universe they're in.
So a universe is a parameterless program, and a decision problem is a universe with a slot in it for an agent program. Similarly, an agent is a parameterless program, and a decision theory is an agent with a slot in it for the universe program. The universe can refer to the agent by calling it, but the agent can refer to the universe's source (and to its own source) only inside quoted formulas.
Let's say that a sequence of modal formulas φ1,…,φn is provably mutually exclusive and exhaustive, or p.m.e.e. for short, if GL proves that exactly one of them is true: For example, φ1, φ2 and φ3 are p.m.e.e. if

GL ⊢ (φ1∧¬φ2∧¬φ3) ∨ (¬φ1∧φ2∧¬φ3) ∨ (¬φ1∧¬φ2∧φ3).
We'll say that a (k-outcome) modal universe is a sequence O1,…,Ok, or →O for short, of closed p.m.e.e. formulas, where Oi is interpreted as the proposition that the i'th of the k possible outcomes occurs.
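For formulas without boxes, p.m.e.e. is a purely propositional condition, so it can be checked by brute force over valuations. A small sketch (my own illustration; the helper `pmee` is made up), using the box-free formulas φ1 ≡ p∧q, φ2 ≡ p∧¬q, φ3 ≡ ¬p:

```python
from itertools import product

def pmee(formulas, num_vars):
    """Check that exactly one formula is true under every valuation."""
    return all(
        sum(f(*vals) for f in formulas) == 1
        for vals in product([False, True], repeat=num_vars)
    )

phi1 = lambda p, q: p and q
phi2 = lambda p, q: p and not q
phi3 = lambda p, q: not p
print(pmee([phi1, phi2, phi3], 2))  # True
print(pmee([phi1, phi2], 2))        # False: not exhaustive when p is false
```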
Given a preference ranking on these k possible outcomes, which has n≤k equivalence classes, we'll write Ui for the modal formula asserting that one of the outcomes in the i'th ranked equivalence class has occurred; for example, if the preference ordering is such that O2, O5 and O7 are the most preferred outcomes, then U1≡O2∨O5∨O7. Clearly, U1,…,Un (or →U for short) is a sequence of p.m.e.e. formulas; for brevity, we can avoid talking about →O, and call the sequence →U an n-level universe.
(I'm using ≡ to mean "same formula".)
Similarly, an (m-action) modal agent is a sequence A1,…,Am, or →A for short, of closed p.m.e.e. formulas, where Ai is interpreted as saying that the agent takes its i'th available action.
A modal (m-action, n-level) decision problem is an n-level universe with a slot for an m-action agent →A: that is, a p.m.e.e. sequence P1(→a),…,Pn(→a), or →P(→a) for short, where →a≡(a1,…,am).
Similarly, a decision theory is an agent with a slot for a universe. However, the definition of decision theories isn't exactly analogous to that of decision problems, because we want to reflect the idea that the agent can't directly simulate the universe it's living in, but can only make reference to it inside quoted formulas; in the modal logic world, this translates to: "inside boxes". Thus, we define a modal (m-action, n-level) decision theory to be a p.m.e.e. sequence of fully modalized formulas T1(→u),…,Tm(→u), or →T(→u) for short, where →u≡(u1,…,un).
(Aside: We could require one of these to be provably mutually exclusive and exhaustive only under the assumption that its argument is p.m.e.e., but my current guess is that it's not worth it.)
Now we can use the fixed point theorem to show that any modal decision theory has a well-defined output on any modal decision problem, and vice versa: Since →T(→u) is fully modalized, so is →T(→P(→a))---if we substitute Pi(→a) for ui, then, since all instances of ui were inside boxes, all instances of →a in the resulting formula will be inside boxes as well. Thus, we can find a fixed point →A satisfying GL⊢→A↔→T(→P(→A)) (which I'm using as shorthand for stating that GL⊢Ai↔Ti(→P(→A)) holds for i=1,…,m).
By the uniqueness theorem, for any →B satisfying GL⊢→B↔→T(→P(→B)), we have GL⊢→A↔→B, so our decision theory's action is well-defined. Moreover, by the rule about substitution of equivalent formulas, it follows that GL⊢→P(→A)↔→P(→B), meaning that the preference level →U:≡→P(→A) that our decision theory achieves on this decision problem is well-defined as well.
Thus, for every pair of sequences (→T(→u),→P(→a)), there's a pair (→A,→U) such that GL⊢→A↔→T(→U) and GL⊢→U↔→P(→A); and this pair is unique up to provable equivalence.
drnickbone's result states that for every decision theory, there is a "fair" decision problem on which this decision theory fails, where "fair" means that an agent's reward depends only on its actions, not on its source code: Any two agents which output the same actions will achieve the same outcome. We can formalize this in modal logic as

(→a↔→b) → (→P(→a)↔→P(→b)),
where I'm writing →φ↔→ψ for the conjunction of the φi↔ψi. We'll say that a modal decision problem is provably extensional if GL proves the above fairness condition.
There is one case in which provable extensionality is easy to show: If all occurrences of propositional variables in →P(→a) are outside boxes, then the fairness condition is a simple propositional tautology, and all such tautologies are provable in GL.
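As a sanity check of this claim, one can verify the tautology by brute force for a toy problem whose formulas mention the agent's action only outside boxes (a sketch of my own; the two-action problem P1 ≡ a1, P2 ≡ ¬a1 is made up for illustration):

```python
from itertools import product

# Toy 2-action problem with a1, a2 occurring only outside boxes:
# outcome level 1 holds iff the agent takes action 1, level 2 otherwise.
P = [lambda a1, a2: a1, lambda a1, a2: not a1]

def provably_extensional(P):
    """Brute-force the tautology (a <-> b) -> (P(a) <-> P(b))."""
    for a1, a2, b1, b2 in product([False, True], repeat=4):
        if a1 == b1 and a2 == b2:
            if any(Pi(a1, a2) != Pi(b1, b2) for Pi in P):
                return False
    return True

print(provably_extensional(P))  # True
```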
(Aside: One might wonder whether provable extensionality doesn't follow for all decision problems from the rule about substitution of equivalent formulas, but that's only an inference rule: if you can prove in GL that φ↔φ′, and you can also prove ψ(φ), then it follows that you can prove ψ(φ′), but GL does not in general prove (φ↔φ′)→(ψ(φ)↔ψ(φ′)). Here's a counterexample: Note that □⊥ says "PA proves a contradiction", so ¬□⊥ says "PA is consistent". Hence, GL does not prove that (⊤↔¬□⊥)→(□⊤↔□(¬□⊥)), because that reduces to ¬□⊥→□¬□⊥, which says "if PA is consistent, then PA proves its own consistency". That statement is false, and by soundness, GL only proves true things.)
Intuitively, given an m-action, n-level decision theory →T(→u) (where m,n≥2), drnickbone's "evil" extensional decision problem →P(→a) works as follows: First, →P(→a) checks what →T(→u) would do on this same decision problem. Then it checks whether →a does the same thing. If so, the agent gets the second-best outcome; but if →a differs from →T(→u)'s action, the agent gets the best outcome.
To implement this idea, we first define a sequence F1(→a,→b),…,Fn(→a,→b), where →a stands for the action of the actual agent and →b≡(b1,…,bm) stands for the action that →T(→u) would have taken, as follows:

F1(→a,→b) :≡ ¬((a1∧b1)∨⋯∨(am∧bm)),  F2(→a,→b) :≡ (a1∧b1)∨⋯∨(am∧bm),  Fj(→a,→b) :≡ ⊥ for 2<j≤n.

In other words, the second-best outcome obtains if the actual agent takes the same action as →T(→u), the best outcome obtains otherwise, and outcome levels below the second are never realized.
Now, we use the fixed point theorem to find a sequence →B such that GL⊢Bi↔Ti(→F(→B,→B)) for i=1,…,m. Finally, we let →P(→a):≡→F(→a,→B). It's clear from the definition of →F(→a,→b) that the formulas in →P(→a) are p.m.e.e., as required for a decision problem; and →a only occurs outside boxes in →P(→a), so →P(→a) is provably extensional.
The point of this is that →B is now clearly the action of →T(→u) on the decision problem →P(→a): Bi is equivalent to Ti(→F(→B,→B)), which is equivalent to Ti(→P(→B)), and GL⊢→B↔→T(→P(→B)) is exactly the defining condition of "→B says what →T(→u) does on →P(→a)". Therefore, →P(→a) now implements the intuitive definition of the evil problem: it checks whether →a takes the same action as →T(→u); if so, it returns the second-best outcome; otherwise, it returns the best.
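The box-free skeleton of this construction can be sanity-checked by brute force, treating the agent's action →a and →T(→u)'s action →b as one-hot Boolean vectors and taking, as one natural reading of →F, F1 ≡ ¬((a1∧b1)∨⋯∨(am∧bm)) and F2 ≡ (a1∧b1)∨⋯∨(am∧bm). A sketch of my own, with m = 2 actions and n = 2 levels:

```python
from itertools import product

def F(a, b):
    # F2 (index 1): the agent matches T's action; F1 (index 0): it differs.
    match = any(ai and bi for ai, bi in zip(a, b))
    return [not match, match]

# For every pair of one-hot action vectors, exactly one level holds,
# and the second-best level holds exactly when the actions coincide.
actions = [(True, False), (False, True)]
for a, b in product(actions, repeat=2):
    levels = F(a, b)
    assert sum(levels) == 1
    assert levels[1] == (a == b)
print("evil problem checks out")
```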
So →T(→u) gets only the second-best outcome. But a decision theory that provably takes a different action than →B gets the best one: for example, →T′(→u) given by T′1(→u) :≡ ¬B1, T′2(→u) :≡ B1, and T′i(→u) :≡ ⊥ for 2<i≤m. This is a p.m.e.e. sequence of (trivially) fully modalized formulas, and since →B is p.m.e.e., GL proves that →T′(→u)'s action on →P(→a) differs from →B, so →T′(→u) receives the best outcome.
If the agent is randomized (and the universe can't observe this randomness) then this problem goes away. So it still seems (a bit) plausible that one can have an "optimal" decision theory. The formalization of this would be non-trivial. Intuitively, it seems like if UDT gets to condition on the output of a competitor X, then UDT ought to be able to produce an output at least as good as X's (and it is essentially the unique decision theory with this property), for any extensional problem.
The modal logic setting might be OK for exploring these questions, but I suspect the results would have to be "pulled back" to a more conventional environment in order to be widely appreciated.
(Sorry if this would be more appropriate as a comment on your follow-up post.)
Actually, drnickbone's original LessWrong post introducing evil problems also gives an extension to the case you are considering: The evil decision problem gives the agent three or more options, and rewards the one that the "victim" decision theory assigns the least probability to (breaking ties lexicographically). Then, no decision theory can put probability >1/N on the action that is rewarded in its probabilistic evil problem.
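The 1/N bound here is simply the observation that the least-probable of N actions can never carry more than 1/N probability mass (the minimum of the pi is at most their average). A quick numeric sanity check of my own:

```python
import random

def rewarded_action(p):
    # The probabilistic evil problem rewards the action assigned least
    # probability, breaking ties lexicographically (lowest index wins).
    return min(range(len(p)), key=lambda i: (p[i], i))

random.seed(0)
N = 3
for _ in range(1000):
    w = [random.random() for _ in range(N)]
    p = [x / sum(w) for x in w]
    assert p[rewarded_action(p)] <= 1 / N + 1e-12
print("no distribution puts more than 1/N on the rewarded action")
```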
This setup plays a computational trick, and as a result I don't think it violates the optimality standard I proposed. In order to decide what it should do, the CDT agent needs to think strictly longer than the UDT agent. But if the CDT agent thinks longer than the UDT agent, it's totally unsurprising that it does better! (Basically, the problem just consists of a computational question which is chosen to be slightly too complex for the UDT agent. But the CDT agent is allowed to think as long as it likes. This entire family of problems appears to be predicated on the lack of computational limits for our agents.)
As a result, if the UDT agent is told what the CDT agent decides, then it can get the same performance as the CDT agent. This seems to illustrate that the CDT agent isn't doing better by being wiser, just by knowing something the UDT agent doesn't. (I wasn't actually thinking about this case when I introduced the weakened criterion; the weakening is obviously necessary for UDT with 10 years of time to compete with CDT with 11 years of time, and I included it for that reason.)
Does this seem right? If so, is there a way to set up the problem that violates my weakened standard?
Incidentally, this problem involves a discontinuous dependence on UDT's decision (both by the competitor and by the environment). I wonder if this discontinuous dependence is necessary?
Nevermind, we get the same problem in the Newcomb's problem case when the simulated agent is running UDT, i.e. where P(big box contains $1M) = P(UDT agent 1-boxes after being told that the CDT agent 2-boxes).