Newcomb's Problem

Newcomb's Problem is a thought experiment in decision theory exploring problems posed by having other agents in the environment who can predict your actions. The general class of decision problems that involve other agents predicting your actions is called Newcomblike Problems.

The Problem

From Newcomb's Problem and Regret of Rationality:

A superintelligence from another galaxy, whom we shall call Omega, comes to Earth and sets about playing a strange little game. In this game, Omega selects a human being, sets down two boxes in front of them, and flies away.
Box A is transparent and contains a thousand dollars.
Box B is opaque, and contains either a million dollars, or nothing.
You can take both boxes, or take only box B.
And the twist is that Omega has put a million dollars in box B iff Omega has predicted that you will take only box B.
Omega has been correct on each of 100 observed occasions so far - everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars. (We assume that box A vanishes in a puff of smoke if you take only box B; no one else can take box A afterward.)
Before you make your choice, Omega has flown off and moved on to its next game. Box B is already empty or already full.
Omega drops two boxes on the ground in front of you and flies off.

One line of reasoning about the problem says that because Omega has already left, the boxes are set and you can't change them. And if you look at the payoff matrix, you'll see that whatever decision Omega has already made, you get $1000 more for taking both boxes. This makes taking two boxes ("two-boxing") a dominant strategy and therefore the correct choice. Agents who reason this way do not make very much money playing this game. This is because this line of reasoning ignores the connection between the agent and Omega's prediction: two-boxing only makes $1000 more than one-boxing if Omega's prediction is the same in both cases, while the problem states that Omega is extremely accurate in its predictions. Switching from one-boxing to two-boxing doesn't gain the agent an extra $1000; it results in a loss of $999,000.

Because the agent's decision in this problem can't causally affect Omega's prediction (which happened in the past), Causal Decision Theory two-boxes. One-boxing is correlated with getting a million dollars, whereas two-boxing is correlated with getting only $1000; therefore, Evidential Decision Theory one-boxes. Functional Decision Theory (FDT) also one-boxes, but for a completely different reason: FDT reasons that Omega must have had a model of the agent's decision procedure in order to make the prediction. Your decision procedure is therefore run not only by you, but also (in the past) by Omega's model; whatever you decide, Omega's model must have decided the same. Either both you and Omega's model two-box, or both one-box; of these two options, the latter is preferable, so FDT one-boxes.
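
The arithmetic behind both arguments can be made explicit. The Python sketch below is illustrative only and is not part of the original article: the dollar amounts come from the problem statement above, while the payoff table, the `expected_payout` helper, and the accuracy values are assumptions chosen for the example. Holding the prediction fixed, two-boxing always looks $1,000 better; letting the prediction track the decision, as an accurate Omega's does, one-boxing has a far higher expected payout.

```python
# Illustrative sketch (not from the original article): the Newcomb payoff
# table, evaluated two ways. Dollar amounts come from the problem statement;
# the predictor's accuracy is a free parameter.

PAYOFFS = {
    # (your choice, Omega's prediction) -> payout in dollars
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 0,
    ("two-box", "one-box"): 1_001_000,
    ("two-box", "two-box"): 1_000,
}

# 1. The dominance argument: hold Omega's prediction fixed and compare choices.
for prediction in ("one-box", "two-box"):
    gain = PAYOFFS[("two-box", prediction)] - PAYOFFS[("one-box", prediction)]
    print(f"Prediction fixed at {prediction}: two-boxing gains ${gain:,}")

# 2. But the prediction is not independent of the choice. If Omega predicts
#    the agent's choice with probability `accuracy`, expected payouts diverge.
def expected_payout(choice: str, accuracy: float) -> float:
    other = "two-box" if choice == "one-box" else "one-box"
    return accuracy * PAYOFFS[(choice, choice)] + (1 - accuracy) * PAYOFFS[(choice, other)]

for accuracy in (0.99, 1.0):
    one = expected_payout("one-box", accuracy)
    two = expected_payout("two-box", accuracy)
    print(f"Accuracy {accuracy:.0%}: E[one-box] = ${one:,.0f}, E[two-box] = ${two:,.0f}")
```

With a perfect predictor the comparison collapses to $1,000,000 versus $1,000, which is the switch "costing $999,000" described above.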

Irrelevance of Omega's Physical Impossibility

Sometimes people dismiss Newcomb's problem because of the physical impossibility of a being like Omega. However, Newcomb's problem does not actually depend on the possibility of Omega in order to be relevant. Similar issues arise if we imagine a skilled human psychologist who can predict other people's actions with 65% accuracy. Now imagine they start running Newcomb trials with themselves as Omega.
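
To see why even a modest 65% accuracy is enough, here is a minimal expected-value sketch. It is not from the original article and assumes the standard $1,000/$1,000,000 payoffs and that the psychologist is equally accurate about one-boxers and two-boxers; the variable names are illustrative.

```python
# Illustrative sketch: expected payouts against a predictor that is right
# only 65% of the time (accuracy assumed symmetric for both kinds of chooser).
ACCURACY = 0.65

# One-boxers get $1,000,000 when predicted correctly, nothing otherwise.
ev_one_box = ACCURACY * 1_000_000

# Two-boxers always keep the $1,000 in box A, and also get $1,000,000 when
# the predictor wrongly expected them to one-box.
ev_two_box = 1_000 + (1 - ACCURACY) * 1_000_000

print(f"E[one-box] = ${ev_one_box:,.0f}")  # $650,000
print(f"E[two-box] = ${ev_two_box:,.0f}")  # $351,000
```

On these numbers one-boxing still comes out roughly $300,000 ahead, so the dilemma survives without any superintelligence.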

Notable Posts

See Also
