In the previous post, we applied some calculus to game-theoretic problems. Let's now look at the famous Newcomb's problem, and how we can use calculus to find a solution.

# Newcomb's problem

From Functional Decision Theory: A New Theory of Instrumental Rationality:

An agent finds herself standing in front of a transparent box labeled “A” that contains $1,000, and an opaque box labeled “B” that contains either $1,000,000 or $0. A reliable predictor, who has made similar predictions in the past and been correct 99% of the time, claims to have placed $1,000,000 in box B iff she predicted that the agent would leave box A behind. The predictor has already made her prediction and left. Box B is now empty or full. Should the agent take both boxes (“two-boxing”), or only box B, leaving the transparent box containing $1,000 behind (“one-boxing”)?

First, we need to come up with a function modeling the amount of money the agent gets. There's only one action to take: call this $a$. Then $M(a)$ is the function for the amount of money earned for each action (one-boxing or two-boxing). However, as you might know, historically, thinkers have diverged on what $M$ should be.

## Causal Decision Theory

Causal Decision Theory (CDT) states that an agent should look only at the causal effects of her actions. In Newcomb's problem, this means acknowledging that the predictor already made her prediction (and either put the $1,000,000 in box B or not), and that the agent's action now can't causally influence what's in box B. So either there's $1,000,000 in box B or not. In both cases, two-boxing earns $1,000 more (the contents of box A) than one-boxing. Then $M(a) = B + 1000a$, where $B$ is a constant representing the amount of money in box B and $a \in [0, 1]$, where $a = 0$ is one-boxing and $a = 1$ is two-boxing. $M'(a) = 1000$, and of course $M$ returns the most value for the highest value of $a$: $a = 1$ (two-boxing).
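The CDT payoff model can be sketched in a few lines of Python. This is a minimal illustration, assuming the encoding where $a = 0$ is one-boxing and $a = 1$ is two-boxing, and where `B` stands for whatever the predictor already placed in box B (which CDT treats as a fixed constant):

```python
def money_cdt(a: float, B: float) -> float:
    """CDT's causal payoff model: M(a) = B + 1000*a.

    B is the (already fixed) content of box B; the agent's
    action a cannot causally change it.
    """
    return B + 1000 * a

# Whatever B turns out to be, M'(a) = 1000 > 0,
# so a = 1 (two-boxing) earns $1,000 more in both cases.
for B in (0, 1_000_000):
    assert money_cdt(1, B) == money_cdt(0, B) + 1000
```

The loop at the bottom just checks the dominance argument: for either possible value of `B`, two-boxing beats one-boxing by exactly $1,000.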

This is all pretty straightforward, but the problem is that almost every agent using CDT ends up with only $1,000, as the predictor predicts the agent will two-box with 99% accuracy and puts nothing in box B. It's one-boxing that gets you the $1,000,000. Enter logical decision theories, e.g. Functional Decision Theory.

## Functional Decision Theory

Functional Decision Theory (FDT) reasons about the effects of the agent's *decision procedure* (which produces an action) instead of the effects of her *actions*. The point is that a decision procedure can be implemented multiple times. In Newcomb's problem, it seems the predictor implements the decision procedure of the agent: she can run this *model* of the agent's decision procedure to see what action it produces, and use it to predict what the actual agent will do. In $M$, this means that $a$ and $B$ are *dependent on the same decision procedure*! The decision procedure's implementation in the agent produces $a$, and the implementation in the predictor is used to determine what $B$ should be. Let's write $d$ for the decision procedure: $d \in [0, 1]$, where $d = 0$ means a one-boxing decision and $d = 1$ means a two-boxing decision. Then $a = d$, and $B = B(d)$. After all: if $d = 0$, the agent decides on one-boxing and the predictor will have predicted that with 99% accuracy, giving an expected value of $0.99 \times 1{,}000{,}000 = 990{,}000$ dollars in box B. Should the agent decide to two-box, the predictor will have predicted *that* with probability $0.99$ and only put $1,000,000 in box B if she mistakenly predicted a one-box action. Then the expected value of box B is $0.01 \times 1{,}000{,}000 = 10{,}000$ dollars, which for $d \in [0, 1]$ is represented by $B(d) = 990{,}000 - 980{,}000d$. Great! We now have $M(d) = B(d) + 1000d = 990{,}000 - 979{,}000d$. $M'(d) = -979{,}000$. The lowest possible decision, then, wins: $d = 0$, which gives $M(0) = 990{,}000$, whereas $M(1) = 11{,}000$.

This outcome reflects the fact that it's one-boxers who almost always win the $1,000,000, whereas two-boxers rarely do. If they do, they get the $1,000,000 and the $1,000 of box A, for a total of $1,001,000, but the probability of getting the $1,000,000 is too low for this to matter enough.
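As a sanity check, we can simulate this directly. The following is a hypothetical Monte Carlo setup (names and structure are my own, not from the problem statement): a predictor guesses the agent's choice correctly 99% of the time, fills box B accordingly, and we compare average earnings of habitual one-boxers and two-boxers:

```python
import random

def play(two_box: bool, accuracy: float = 0.99) -> int:
    """Simulate one round of Newcomb's problem against a noisy predictor."""
    # The predictor guesses the agent's actual choice with the given accuracy.
    predicted_two_box = two_box if random.random() < accuracy else not two_box
    box_b = 0 if predicted_two_box else 1_000_000
    box_a = 1_000 if two_box else 0
    return box_b + box_a

random.seed(0)
n = 100_000
one_box_avg = sum(play(False) for _ in range(n)) / n
two_box_avg = sum(play(True) for _ in range(n)) / n
print(f"one-boxers average ~${one_box_avg:,.0f}")  # close to 990,000
print(f"two-boxers average ~${two_box_avg:,.0f}")  # close to 11,000
```

The empirical averages land near the analytic expected values $M(0) = 990{,}000$ and $M(1) = 11{,}000$: the rare $1,001,000 jackpot doesn't come close to rescuing the two-boxers.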

Newcomb's problem requires either that a) agents aren't Turing-complete (in which case of course agents can get sub-optimal outcomes) or b) Omega is super-Turing, in which case all bets are off. I'm not sure if it's worth focusing on as a result.

One finite, but otherwise Turing-complete system can emulate another, provided it's sufficiently more powerful.

There is a name for 'a finite analogue to a Turing machine' - it's called a finite state machine. You are correct that one sufficiently large finite state machine can simulate a smaller FSM.

If agents must be FSMs, case a) applies. Your agents can of course get suboptimal outcomes, and many standard axioms of game theory do not apply[1]. (For instance: a game's expected value can be *lowered* by adding another option[2] or raising the payoff of an option[3].)

[1] Or apply in modified forms.

[2] Once the FSM is no longer large enough to scan through all outcomes.

[3] Once the FSM is no longer large enough to be able to do the necessary arithmetic on the output payoffs.

I'm not very shocked by the fact that realistically finite agents can make suboptimal decisions.

Neither am I, which is why I am surprised and confused by people seemingly attaching a fair amount of importance to Newcomb's problems.