Back in 2017, Johannes_Treutlein published a post critiquing logical decision theories: Did EDT get it right all along? Introducing yet another medical Newcomb problem. In it, Treutlein presents the Coin Flip Creation problem (and a second version of it) and argues that logical decision theories (like Updateless Decision Theory (UDT) and Functional Decision Theory (FDT)) handle it wrong. After reading the post, I believe Treutlein's argument is flawed, and while I am probably not the first to notice this (or even to write about it), I still think it is important to discuss, as I am afraid more people will make the same mistake.

Note that I will mostly be talking about how FDT handles the problems Treutlein presents, as this is a theory I have some expertise in.

The Coin Flip Creation Problem

From the original post:

One day, while pondering the merits and demerits of different acausal decision theories, you’re visited by Omega, a being assumed to possess flawless powers of prediction and absolute trustworthiness. You’re presented with Newcomb’s paradox, but with one additional caveat: Omega informs you that you weren’t born like a normal human being, but were instead created by Omega. On the day you were born, Omega flipped a coin: If it came up heads, Omega created you in such a way that you would one-box when presented with the Coin Flip Creation problem, and it put $1 million in box A. If the coin came up tails, you were created such that you’d two-box, and Omega didn’t put any money in box A. We don’t know how Omega made sure what your decision would be. For all we know, it may have inserted either CDT or EDT into your source code, or even just added one hard-coded decision rule on top of your messy human brain. Do you choose both boxes, or only box A?

Treutlein claims EDT one-boxes and "gets it right". But I think it's a mistake even to ask what a decision theory would do in this problem: my claim is that this is not a proper decision-theoretic problem at all. It's an interesting thought experiment, but it is of little value to decision theory. Why? Because the question

Do you choose both boxes, or only box A?

has two branches:

  1. If Omega flipped heads, do you choose both boxes, or only box A?
  2. If Omega flipped tails, do you choose both boxes, or only box A?

In both cases, the answer is already given in the problem statement. In case 1, Omega created you as a one-boxer, and in case 2, you were created as a two-boxer.

Treutlein claims logical decision theories (like UDT and FDT) get this problem wrong, but there literally is no right or wrong here. Without Omega's modification at the coin flip, FDT would two-box (and rightly so). With the modification, there is, in case 1, no FDT anymore (as Omega modifies the agent into a one-boxer), so the question becomes incoherent. The question is only coherent for case 2, where FDT makes the right decision (two-boxing, earning $1,000 > $0). And it's not FDT's fault there's no $1,000,000 to earn in case 2: that is purely the result of a coin flip that happened before the agent even existed, not of any decision made by the agent. In fact, the whole outcome of this game is determined purely by the outcome of the coin flip! Hence my claim that this is not a proper decision-theoretic problem.
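
To make this concrete, here is a minimal sketch in Python (the modelling and the function name are mine, not Treutlein's): the payoff is a function of Omega's coin flip alone, with no remaining input for a decision theory to choose.

```python
# Toy model of the Coin Flip Creation problem (illustrative only).
# The coin flip fixes both the agent's action and the contents of box A,
# so the payoff has no free "decision" input left.

def coin_flip_creation_payoff(coin: str) -> int:
    """Return the agent's payoff given Omega's coin flip ('heads' or 'tails')."""
    if coin == "heads":
        action = "one-box"   # Omega created the agent as a one-boxer...
        box_a = 1_000_000    # ...and put $1,000,000 in box A.
    else:
        action = "two-box"   # Omega created the agent as a two-boxer...
        box_a = 0            # ...and left box A empty.
    box_b = 1_000            # Box B always contains $1,000.
    return box_a if action == "one-box" else box_a + box_b

print(coin_flip_creation_payoff("heads"))  # 1000000
print(coin_flip_creation_payoff("tails"))  # 1000
```

Notice that no decision theory appears anywhere in the sketch: whichever theory you plug in, Omega overwrites its output to match the coin.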

Treutlein does (sort of) address my counterargument:

There seems to be an especially strong intuition of “absence of free will” inherent to the Coin Flip Creation problem. When presented with the problem, many respond that if someone had created their source code, they didn’t have any choice to begin with. But that’s the exact situation in which we all find ourselves at all times! Our decision architecture and choices are determined by physics, just like a hypothetical AI’s source code, and all of our choices will thus be determined by our “creator.” When we’re confronted with the two boxes, we know that our decisions are predetermined, just like every word of this blogpost has been predetermined. But that knowledge alone won’t help us make any decision.

Indeed. An AI always does what its source code says, so in a way, its decisions are determined by its creator. This is why my intuition in Newcomb's problem is not so much "What action should the agent take?" as "What source code (or decision procedure) should the agent run?" This phrasing makes it clearer that the decision does influence whether there's $1,000,000 to earn: actions can't cause the past, but your source code/decision procedure could have been simulated by Omega. But actions being predetermined is not my objection to the Coin Flip Creation problem. In Newcomb's problem, your action is predetermined, but your decision still influences the outcome of the game. I want to run a one-boxing procedure, as that would give me $1,000,000 in Newcomb's problem. What procedure do I want to run in the Coin Flip Creation problem? This question doesn't make sense! In the Coin Flip Creation problem, my decision procedure is determined by the coin flip!
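
For contrast, here is the same kind of toy model for Newcomb's problem (again, names and modelling are mine): the procedure you run is a genuine input, and it determines whether the $1,000,000 is there.

```python
# Toy model of Newcomb's problem (illustrative only).
# Omega is modelled as a perfect predictor: its prediction equals the
# output of whatever procedure the agent runs.

def newcomb_payoff(procedure: str) -> int:
    """Return the payoff for running 'one-box' or 'two-box' against a perfect predictor."""
    prediction = procedure                               # perfect prediction
    box_a = 1_000_000 if prediction == "one-box" else 0  # filled based on the prediction
    box_b = 1_000
    return box_a if procedure == "one-box" else box_a + box_b

print(newcomb_payoff("one-box"))  # 1000000
print(newcomb_payoff("two-box"))  # 1000
```

Here "Which procedure do I want to run?" has a well-defined answer (one-boxing); in the Coin Flip Creation sketch above, that question has no input left to answer.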

Coin Flip Creation, Version 2

From the original post:

The situation is identical to the Coin Flip Creation, with one key difference: After Omega flips the coin and creates you with the altered decision algorithm, it actually simulates your decision, just as in Newcomb’s original paradox. Only after Omega has determined your decision via simulation does it decide whether to put money in box A, conditional on your decision. Do you choose both boxes, or only box A?

Treutlein claims UDT does one-box on this version, while it two-boxes on the original version, and finds this curious. My objection remains the same: this, too, is not a problem for decision theories, as the decision procedure is already determined by the coin flip in the problem statement.
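
Under the same illustrative modelling as before, Version 2 only inserts a simulation step between the coin flip and the filling of box A; the payoff is still a function of the coin alone.

```python
# Toy model of Coin Flip Creation, Version 2 (illustrative only).
# Omega now fills box A based on a simulation of the created agent,
# but the coin already fixed which procedure that agent runs.

def coin_flip_creation_v2_payoff(coin: str) -> int:
    """Return the agent's payoff given Omega's coin flip ('heads' or 'tails')."""
    procedure = "one-box" if coin == "heads" else "two-box"      # fixed by the coin
    simulated_decision = procedure                               # Omega's simulation of the agent
    box_a = 1_000_000 if simulated_decision == "one-box" else 0  # filled based on the simulation
    box_b = 1_000
    return box_a if procedure == "one-box" else box_a + box_b

print(coin_flip_creation_v2_payoff("heads"))  # 1000000
print(coin_flip_creation_v2_payoff("tails"))  # 1000
```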

Comments

Under normal decision theory, you can imagine that an agent asks you, the reader, how they should decide, and then does it. You can't consistently imagine Omega's coin-flip agents doing that, since Omega has preprogrammed them to ignore whatever you say.

This is a much stronger constraint than ordinary agent determinism, since a deterministic agent can take different actions based on sensory input, such as a response to a question about why one action is better than another. In respect of this particular action, I would hesitate to call one of the Omega-created entities an agent at all.

They are certainly not rational agents, and not really suitable objects for examining whether any given decision theory is suitable for rational agents.

I think they can be agents, at least if Omega gave them a decision theory that produces the output determined by the coin flip. I mean, then it's no different than when you normally program an agent with a decision theory. Whether they are rational agents then depends on whether you call e.g. Causal Decision Theory agents rational - I'd probably say no, but many would disagree, I'm guessing.

TLW:

I fail to see why the Coin Flip Creation problems are at all interesting.

It is trivial to get suboptimal outcomes in favor of any target 'optimal' agent if the game can arbitrarily modify the submitted agent.

(Also, Coin Flip Creation Version 2, like the vanilla Newcomb's paradox, requires that either a) the agent is sub-Turing (not capable of general computation) (in which case there is no paradox) or b) Omega has a Halting oracle, or is otherwise super-Turing, but this would require violating the Church-Turing thesis (in which case all bets are off).)

Well, the post did get agreement in the comment section, and had a quite clever-sounding (but wrong) argument about how agents are deterministic in general, etc., and it seemed important to point out the difference between the Coin Flip Creation problem and Newcomb's problem.

TLW:

Perhaps I should rephrase:

Why do others find Coin Flip Creation problems at all interesting? Is it a) because they have thought of said arguments and dismissed them (in which case, why? What am I missing?), b) because they haven't thought of said arguments (in which case, why not? I found it immediately apparent. Am I that much of an outlier?), or c) because of something else (if so, what)?

Ah, I get you now. I don't know, of course; a and b could both be in the mix. I have had a similar feeling with an earlier piece on decision theory, which to me seemed (and still seems) so clearly wrong, and which got quite a few upvotes. This isn't meant to be too negative about that piece - it just seems people have very different intuitions about decision theory, even after having thought (and read) about it quite a bit.