This post was inspired by taw urging us to mathematize Newcomb's problem and Eliezer telling me to post stuff I like instead of complaining.

To make Newcomb's problem more concrete we need a workable model of Omega. Let me count the ways:

1) Omega reads your decision from the future using a time loop. In this case the contents of the boxes are directly causally determined by your actions via the loop, and it's logical to one-box.

2) Omega simulates your decision algorithm. In this case the decision algorithm has indexical uncertainty on whether it's being run inside Omega or in the real world, and it's logical to one-box thus making Omega give the "real you" the million.

3) Omega "scans your brain and predicts your decision" without simulating you: calculates the FFT of your brainwaves or whatever. In this case you can intend to build an identical scanner, use it on yourself to determine what Omega predicted, and then do what you please. Hilarity ensues.

(NB: if Omega prohibits agents from using mechanical aids for self-introspection, this is in effect a restriction on how rational you're allowed to be. If so, all bets are off - this wasn't the deal.)

(Another NB: this case is distinct from 2 because it requires Omega, and thus your own scanner too, to terminate without simulating everything. A simulator Omega would go into infinite recursion if treated like this.)
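The self-scanner trick in case 3 can be sketched in a few lines. A toy model, assuming (hypothetically) that Omega's scan behaves like an ordinary function `predict` returning "one-box" or "two-box", and that you can run an identical scanner on yourself:

```python
# Toy sketch of case 3 (assumption: the scanner's verdict is available to
# the agent before it moves, via an identical scanner of its own).

def contrarian(prediction):
    """An agent that consults the scanner and then does the opposite."""
    return "two-box" if prediction == "one-box" else "one-box"

# A prediction is consistent only if the agent's actual move matches it.
consistent = [p for p in ("one-box", "two-box") if contrarian(p) == p]
print(consistent)  # [] -- no fixed point; the scanner cannot be right
```

Whatever the scanner outputs, the contrarian does the opposite, so no consistent prediction exists — that diagonalization is the "hilarity".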

4) Same as 3, but the universe only has room for one Omega, e.g. the God Almighty. Then ipso facto it cannot ever be modelled mathematically, and let's talk no more.

I guess this one is settled, folks. Any questions?

Well... for whatever it's worth, the case I assume is (3).

"Rice's Theorem" prohibits Omega from doing this with all possible computations, but not with humans. It's probably not even all that difficult: people seem strongly attached to their opinions about Newcomb's Problem, so their actual move might not be too difficult to predict. Any mind that has an understandable reason for the move it finally makes is not all that difficult to simulate at a high level; you are doing it every time you imagine what it would do! Omega is assumed to be in a …

Aren't these rather ducking the point? The situations all seem to be assuming that we ourselves have Omega-level information and resources, in which case why do we care about the money anyway? I'd say the relevant cases are:

3b) Omega uses a scanner, but we don't know how the scanner works (or we'd be Omega-level entities ourselves).

5) Omega is using one of the above methods, or one we haven't thought of, but we don't know which. For all we know he could be reading the answers we gave on this blog post, and is just really good at guessing who will stic…

This is a good post. It explains that "given any concrete implementation of Omega, the paradox utterly disappears."

I'm quite bothered by Eliezer's lack of input to this thread. To me this seems like the most valuable thread on Newcomb's problem we've had on OB/LW, and he's the biggest fan of the problem here, so I would have guessed he's thought about it a lot and tried some models even if they failed. Yet he hasn't written anything here. Why is that?

(5) Omega uses ordinary conjuring, or heretofore-unknown powers, to put the million in the box after you make your decision. Solution: one-box for sure; no decision theory trickery needed. In practice, this is the conclusion we would come to if we encountered a being that appeared to behave like Omega, and therefore it is also the answer in any scenario where we don't know the true implementation of Omega (i.e., any real scenario).

If the boxes are transparent, resolve to one-box iff the big box is empty.

That's a creative attempt to avoid really considering Newcomb's problem; but as I suggested earlier, the noisy real-world applications are real enough to make this a question worth confronting on its own terms.

Least Convenient Possible World: Omega is type (3), and does not offer the game at all if it calculates that its answers turn out to be contradictions (as in your example above). At any rate, you're not capable of building or obtaining an accurate Omega' for your private use.

Aside: If Omega sees probability p that you one-box, it puts the million dol…

Yes! Where is the money? A battle of wits has begun! It ends when a box is opened.
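The truncated aside above can be made concrete under one (hypothetical) reading, which is an assumption and not necessarily the commenter's: Omega fills the big box with probability equal to your one-boxing probability q. A minimal expected-value sketch:

```python
# Hedged sketch, assuming (hypothetically) that Omega fills the big box with
# probability equal to your one-boxing probability q, independently of your
# actual (possibly randomized) move.

def expected_value(q, big=1_000_000, small=1_000):
    p_fill = q  # assumed rule: Omega's fill probability tracks your policy
    ev_one_box = p_fill * big          # take only the big box
    ev_two_box = p_fill * big + small  # take both boxes
    return q * ev_one_box + (1 - q) * ev_two_box

for q in (0.0, 0.5, 1.0):
    print(q, expected_value(q))  # EV rises with q; pure one-boxing wins
```

Under this assumed rule the expression simplifies to q·1,000,000 + (1−q)·1,000, so the optimum is q = 1: commit fully to one-boxing.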

Of course, it's so simple. All I have to do is divine from what I know of Omega: is it the sort of agent who would put the money in one box, or both? Now, a clever agent would put little money into only one box, because it would know that only a great fool would not reach for both. I am not a great fool, so I can clearly not take only one box. But Omega must have known I was not a great fool, and would have counted on it, so I can clearly not choose both boxes. Truly, Omega must admit that I have a dizzying intellect.

On the other hand, perhaps I have confused this with something else.

I find Newcomb's problem interesting. Omega predicts accurately, which is impossible in my experience, so we are not discussing a problem any of us is likely to face. However, I still find discussing counterfactuals interesting.

I do not think that is the case. Whether Omega predicts by time travel, mind-reading, or even removes money from the box by teleportation when it observes the subject taking two boxes is a separate discussion, considering laws of physics, SF, whatever. This mi…

All right, I found another nice illustration. Some philosophers today think that Newcomb's problem is a model of certain real-world situations. Here's a typical specimen of this idiocy, retyped verbatim from here:

Let me describe a typical medical Newcomb problem. It has long been recognized that in people susceptible to migraine, the onset of an attack tends to follow the consumption of certain foods, including chocolate and red wine. It has usually been assumed that these foods are causal factors, in some way triggering attacks. This belief has been the s…

In the standard Newcomb's, is the deal Omega is making explained to you before Omega makes its decision; and does the answer to my question matter?

Suppose it was.

Thank you. Hopefully this will be the last post about Newcomb's problem for a long time.

Even disregarding uncertainty about whether you're running inside Omega or in the real world, a perfect Omega in case #2 effectively reverses the order of decisions, just like #1: you decide first (via simulation), and Omega decides second. So it collapses to a trivial one-box.
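The reversal can be sketched in a few lines, assuming (hypothetically) a perfect simulation, so the simulated move always equals the real one:

```python
# Sketch of case 2 with a perfect simulator: Omega "moves second" in effect.

def omega_fills(policy):
    # Omega simulates your policy first, then fills the big box accordingly.
    return policy() == "one-box"

def payoff(policy):
    big = 1_000_000 if omega_fills(policy) else 0
    small = 1_000
    return big if policy() == "one-box" else big + small

print(payoff(lambda: "one-box"))  # 1000000
print(payoff(lambda: "two-box"))  # 1000
```

Because Omega's fill decision is a function of your (simulated) move, choosing a policy is effectively choosing both moves at once, and one-boxing dominates.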

I never thought of that!

Can you formalize "hilarity ensues" a bit more precisely?

Omega knows that I have no patience for logical paradoxes, and will delegate my decision to a quantum coin-flipper exploiting the Conway-Kochen theorem. Hilarity ensues.

I would one-box in Newcomb's problem, but I'm not sure why Omega is more plausible than a being that rewards people that it predicts would be two-boxers. And yet it is more plausible to me.

When I associate one-boxing with cooperation, that makes it more attractive. The anti-Omega would be someone who was afraid cooperators would conspire against it, and so it rewards the opposite.

In the case of the pre-migraine state below, refraining from chocolate seems much less compelling.

Why can't God Almighty be modelled mathematically?

Omega/God is running the universe on his computer. He can pause any time he wants (for example to run some calculations), and modify the "universe state" to communicate (or just put his boxes in).

That seems to be close enough to 4). Unlike with 3), you can't use the same process as Omega (pause the universe and run arbitrary calculations that could consider the state of every quark).

What does Newcomb's Problem have to do with reality as we know it, anyway? I mean, imagine that I've solved it (whatever that means). Where in my everyday life can I apply it?

I have a very strong feeling that way 3 is not possible. It seems that any scanning/analysis procedure detailed enough to predict your actions constitutes simulating you.