When I first read about Newcomb's Problem, my reaction was pretty much "wow, interesting thought" and "of course I would one-box, I want to win $1 million after all". But I had a lingering, nagging feeling that there was something wrong with the whole premise. Now, after thinking about it for a few weeks, I think I have found the problem.

First of all I want to point out that I would still one-box after seeing Omega predict 50 or 100 other people correctly, since 50 to 100 bits of evidence are enough to overcome (nearly) any prior I have about how the universe works. I just do not think this scenario is physically possible in our universe.

The mistake is nicely stated here:

After all, Joe is a deterministic physical system; his current state (together with the state of his future self's past light-cone) fully determines what Joe's future action will be.  There is no Physically Irreducible Moment of Choice, where this same Joe, with his own exact actual past, "can" go one way or the other.

This is only true in this sense if MWI is false and there are no quantum probabilistic processes, i.e., if our universe allows a true Laplace's demon (a.k.a. Omega) to exist.

If MWI is true, Joe can set things up so that "after" Omega has filled the boxes and left, there "will" be Everett branches in which Joe "will" two-box and different Everett branches in which Joe "will" one-box.

Intuitively I think Joe could even do this with his own brain, by leaving it in "undecided" mode until Omega leaves and then using an algorithm which feels "random" to decide whether he one-boxes or two-boxes. But of course I would not trust my intuition here, and I do not know enough about Joe's brain to decide whether this really works. So Joe would instead use, e.g., a single photon reflected off or transmitted through a semitransparent mirror, ensuring that he one-boxes or two-boxes in, say, 50% of the Everett branches.

If MWI is not true but there are quantum probabilistic processes, Omega simply cannot predict the future state of the universe. So the same procedure used above would ensure that Omega cannot predict Joe's decision, due to true randomness.
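As a toy illustration (a minimal sketch of my own; Python's pseudo-random bit merely stands in for the photon, and "Omega" here is any predictor that only has access to Joe's deterministic description):

```python
import random

def joe(quantum_bit: int) -> str:
    """Joe's strategy: delegate the decision to a genuinely random bit
    (the photon at the semitransparent mirror)."""
    return "one-box" if quantum_bit == 0 else "two-box"

def omega_predict() -> str:
    """Omega only sees Joe's deterministic description, not the future bit.
    Whatever rule it uses, here it simply guesses 'one-box'."""
    return "one-box"

trials = 100_000
correct = sum(
    omega_predict() == joe(random.getrandbits(1))  # getrandbits stands in for the quantum coin
    for _ in range(trials)
)
print(correct / trials)  # ~0.5: no better than chance against the coin-flipping Joe
```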

So I would be very very VERY surprised if I saw Omega pull this trick 100 times in a row and I could somehow rule out Stage Magic (which I could not).

I am not even sure there is any serious interpretation of quantum mechanics that allows for the strict determinism Omega would need. I would love to hear about one in the comments!

Of course, from an instrumental standpoint it is always rational to firmly precommit to one-boxing, since the extra $1000 is not worth taking the risk. Even the model uncertainty accounts for much more than 0.001.
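To spell out where a figure like 0.001 comes from (a rough back-of-the-envelope restatement of my own, writing p for the probability that Omega predicts my choice correctly):

$$
\mathbb{E}[\text{one-box}] = 1{,}000{,}000\,p, \qquad
\mathbb{E}[\text{two-box}] = 1{,}000 + 1{,}000{,}000\,(1-p),
$$

so one-boxing wins whenever 2,000,000 p > 1,001,000, i.e. for any p > 0.5005. Omega only has to beat chance by about 0.0005, half of the 1,000 / 1,000,000 = 0.001 payoff ratio, and my model uncertainty is far larger than that.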


What's there to dissolve? Infallible Omega is the limit of a better-than-chance Omega as its accuracy goes to 100%. And better-than-chance prediction of one's actions does not require any quantum mumbo-jumbo; people do it all the time.

Newcomb's problem is, in my reading, about how an agent A should decide in a counterfactual in which another agent B decides conditional on the outcome of a future decision of A.

I tried to show that under certain conditions (deliberate noncompliance by A) it is not possible for B to know A's future decision any better than chance (which - in the limit of atomic-resolution scanning and practically infinite processing power - is only possible because of "quantum mumbo-jumbo"). This is IMHO a form of "dissolving" the question, though perhaps the meaning of "dissolving" is somewhat stretched here.

This is of course not applicable to all Newcomblike problems - namely all those where A complies and B can gather enough data about A and enough processing power.

(1) Why would Joe intend to use the random process in his decision? I'd assume that he wants the million dollars much more than he wants to prove Omega's fallibility (and that only with a 50% chance).

(2) Even if Joe for whatever reason prefers proving Omega's fallibility, you can stipulate that Omega gives the quest only to people without semitransparent mirrors at hand.

(3) How is this

First of all I want to point out that I would still one-box after seeing Omega predict 50 or 100 other people correctly, since 50 to 100 bits of evidence are enough to overcome (nearly) any prior I have about how the universe works.

compatible with this

So I would be very very VERY surprised if I saw Omega pull this trick 100 times in a row and I could somehow rule out Stage Magic (which I could not).

(emphasis mine)?

Note about terminology: on LW, dissolving a question usually refers to explaining that the question is confused (there is no answer to it as it is stated), together with pointing out the reasons why such a question seems sensible at first sight. What you are doing is not dissolving the problem; it's rather fighting the hypo.

ad 1: As I pointed out in my post twice, in this case he precommits to one-boxing and that's it, since, assuming atomic-resolution scanning and practically infinite processing power, he cannot hide his intention to cheat if he wants to two-box.

ad 2: You can; I did not. I suspect - as pointed out - that he could do that with his own brain too, but of course if so, Omega would know and still exclude him.

ad 3:

First of all I want to point out that I would still one-box after seeing Omega predict 50 or 100 other people correctly, since 50 to 100 bits of evidence are enough to overcome (nearly) any prior I have about how the universe works.

This assumed that I could somehow rule out stage magic. I did not say that; my mistake.

On terminology: See my response to shminux. Yes, there is probably an aspect of fighting the hypo, but I think not primarily, since I think it is rather interesting to establish that you can prevent being predicted in a Newcomblike problem.

OK, I understand now that your point was that one can in principle avoid being predicted. But to put it as an argument proving the irrelevance or incoherence of Newcomb's problem (I'm not entirely sure I understand correctly what you meant by "dissolve", though) is very confusing and prone to misinterpretation. Newcomb's problem doesn't rely on the existence of predictors who can predict any agent in any situation. It relies on the existence of rational agents that can be predicted at least in certain situations, including the scenario with the boxes.

I still don't understand why you would be so surprised if you saw Omega doing the trick a hundred times, assuming no stage magic. Do you find it so improbable that, out of the hundred people Omega has questioned, not a single one had a quantum coin on him and a desire to toss it on the occasion? Even game-theoretical experiment volunteers usually don't carry quantum widgets.

Newcomb's problem doesn't rely on the existence of predictors who can predict any agent in any situation. It relies on the existence of rational agents that can be predicted at least in certain situations, including the scenario with the boxes.

This was probably just me (how I read / what I think is interesting about Newcomb's problem). As I understand the responses, most people think the main point of Newcomb's problem is that you rationally should cooperate given the 1,000,000 / 1,000 payoff matrix. I emphasized in my post that I take that as a given. I thought most about the question of whether you can successfully two-box at all, so this was the "point" of Newcomb's problem for me. To formalize this, say I replaced the payoff matrix by 1,000 / 1,000, or even by device A / device B, where device A corresponds to $1,000, device B corresponds to $1,000, but device A + device B together correspond to $100,000 (e.g., they have a combined function).

I still don't understand why you would be so surprised if you saw Omega doing the trick a hundred times, assuming no stage magic. Do you find it so improbable that, out of the hundred people Omega has questioned, not a single one had a quantum coin on him and a desire to toss it on the occasion? Even game-theoretical experiment volunteers usually don't carry quantum widgets.

Well, I thought about people actively resisting prediction, so some of them flipping a coin or at least using a mental process with several recursion levels (I think that Omega thinks that I think...). I am pretty, though not absolutely, sure that these processes are partly quantum random, or at least chaotic enough to be computationally intractable for everything within our universe. Though Omega would probably do much better than random (except if everyone flips a coin; I am not sure whether that is predictable with computational power levels realizable in our universe).

As I understand the responses, most people think the main point of Newcomb's problem is that you rationally should cooperate given the 1,000,000 / 1,000 payoff matrix.

I am no expert on Newcomb's problem history, but I think it was specifically constructed as a counter-example to the common-sensical decision-theoretic principle that one should treat past events as independent of the decisions being made now. That's also how it is most commonly interpreted on LW, although the concept of a near-omniscient predictor "Omega" is employed in a wide range of different thought experiments here, and it's possible that your objection is relevant to some of them.

I am not sure whether it makes sense to call one-boxing cooperation. Newcomb's problem isn't the Prisoner's Dilemma, at least not in its original form.

Would you still one-box even if Omega only got it right 99% of the time rather than 100%?

If so, then under the reasonable assumption that low-level quantum non-determinism does not usually have large effects on higher-level brain states, Newcomb's problem is still physically implementable.

1,000,000 × 0.99 + 0 × 0.01 > 1,001,000 × 0.01 + 1,000 × 0.99, so yes. But this is rather beside (my) point. As I pointed out, if my aim is to make money I do everything to make Omega's job as easy as possible (by precommitting) and then one-box (if Omega is any better than random).
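For concreteness, a few lines of Python (my own quick check, not part of the original comment) confirming that arithmetic and the break-even accuracy:

```python
def ev_one_box(p: float) -> float:
    """Expected payoff of one-boxing when Omega predicts correctly with probability p."""
    return 1_000_000 * p + 0 * (1 - p)

def ev_two_box(p: float) -> float:
    """Expected payoff of two-boxing: $1,001,000 if Omega mispredicted, $1,000 if it didn't."""
    return 1_001_000 * (1 - p) + 1_000 * p

print(ev_one_box(0.99), ev_two_box(0.99))  # 990000.0  11000.0
# Break-even: 1_000_000 * p == 1_001_000 * (1 - p) + 1_000 * p  =>  p = 1_001_000 / 2_000_000 = 0.5005
```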

My point is rather that Omega can be fooled regardless of its power - and fooled thoroughly enough that Omega's precision can be no better than random.

if my aim is to make money I do everything to make Omega's job as easy as possible (by precommitting) and then one-box

If you know this, you are capable of predicting your own actions. Do you think you're smarter than Omega?

What makes you think you have a reliable way of fooling Omega?

In particular, I am extremely sceptical that simply not making your mind up, and then at the last minute doing something that feels random, would actually correspond to making use of quantum nondeterminism. Moreover, if individual neurons are reasonably deterministic, then regardless of quantum physics any human's actions can be predicted pretty perfectly, at least on a 5-10 minute scale.

Alternatively, even if it is possible to be deliberately non-cooperative, the problem can just be changed so that if Omega notices you are deliberately making its judgement hard, then it just doesn't fill the box. The problem in this version seems exactly as hard as Newcomb's.

In particular, I am extremely sceptical that simply not making your mind up, and then at the last minute doing something that feels random, would actually correspond to making use of quantum nondeterminism. Moreover, if individual neurons are reasonably deterministic, then regardless of quantum physics any human's actions can be predicted pretty perfectly, at least on a 5-10 minute scale.

As stated in my post, I am not sure about this either, though my reasoning is that while memory is probably easy to read out, thinking is probably a chaotic process whose outcome may depend on single action potentials, especially if the process does not heavily rely on things stored in memory. Whether a single action potential occurs can be determined by a few - in the limit, one - sodium ions passing or not passing through a channel, and whether a sodium ion passes a channel is a quantum probabilistic process. Though, as I said before, I am not sure about this, so I would precommit to using a suitable device.

Alternatively, even if it is possible to be deliberately non-cooperative, the problem can just be changed so that if Omega notices you are deliberately making its judgement hard, then it just doesn't fill the box. The problem in this version seems exactly as hard as Newcomb's.

Yep! Omega can of course do so.

Perhaps Omega and the boxes are entangled with Joe in such a way that in every branch where Joe one-boxes, he finds $1,000,000, and in every branch in which he two-boxes, he finds $1,000.

I won't try to flesh this out with QM calculations, because by my standard for being able to say anything sensible about QM (being able to ace a closed-book finals exam), I can't. But from my position of complete ignorance, it seems an obvious direction to look.

my standard for being able to say anything sensible about QM (being able to ace a closed-book finals exam)

I wish standards like this were (self-)applied on this site more often.

I do not see any way in which Omega and the boxes could now be entangled with a photon that will pass or not pass through a semitransparent mirror in 5 minutes.


The "paradox" of Newcomb's Problem arises because Omega leaves the room. I'm going to steel-man the situation a little bit by stipulating that Omega doesn't put the million in the black box if it predicts that you will take longer than 1 minute to decide, or if it predicts that you will somehow randomize your decision using some process external to your brain.

Now, instinctively, some people want to say something to themselves like "Well, Omega has left the room. No matter whether I reach for one box or both of them, the amounts of money within them aren't going to change. So I might as well take both and get as much money as I can, given what has already occurred.", and then they take both boxes. The problem is, Omega predicted that they would go through that very chain of reasoning, and thus didn't fill the black box with a million dollars.

A better approach is to say to yourself, "Omega has probably correctly predicted what I'm going to do. So, I'll perform the action that, if Omega predicted it, will end up with me getting the most money." And then you end up with the million.
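One compressed way to render that reasoning (a sketch; the payoff function and names below are mine, assuming the standard $1,000 / $1,000,000 payoffs):

```python
def payoff(action: str, predicted: str) -> int:
    # Standard Newcomb payoffs: the opaque box holds $1,000,000 iff Omega predicted one-boxing.
    opaque = 1_000_000 if predicted == "one-box" else 0
    transparent = 1_000 if action == "two-box" else 0
    return opaque + transparent

# Two-boxer reasoning: treat the box contents as already fixed and maximize over actions.
# One-boxer reasoning: assume Omega predicted whatever I actually do, so evaluate each
# action with predicted == action and pick the best.
best = max(["one-box", "two-box"], key=lambda a: payoff(a, predicted=a))
print(best)  # -> "one-box": $1,000,000 versus $1,000 for two-boxing
```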

Now, some counter by saying "Wait a second...then for the two boxers, it's already too late for them to change their decision! So they're not irrational, just unlucky (to have the disposition to two-box)." And, from the outside, I might agree. But the person interacting with Omega can't make the same argument! Because then Omega would have predicted that the two-boxer would say that, and then proceed to use that as an excuse to take both boxes.

However, humans can't predict each other's behavior anywhere close to as well as a hypothetical brain-scanning superintelligence can. So we use precommitment instead. AIs who can read each other's code, though...again, they might be able to very accurately predict each other's decisions.

I don't see how quantum physics has anything to do with this problem. Quantum randomness events don't (except with probability epsilon) reach such a high macro-scale as that of a human decision. I've never dropped a penny and seen quantum randomness carry it a meter (or even a centimeter) off to the right.

I don't see how quantum physics has anything to do with this problem. Quantum randomness events don't (except with probability epsilon) reach such a high macro-scale as that of a human decision.

Yes, they do reach the macroscopic level, or QM would not be an experimental science.

I suppose not fighting the hypothesis here must include ignoring the possibility that there are things in the universe that can't be predicted by a mind localized in space and time. So an Omega can exist; it is possible to know everything about you, including the position and momentum of every particle that makes up your body and your brain, and this gives enough information to predict all future values of the position and momentum of every one of those particles.

I don't think it can exist in our universe either, but if it could exist in some universe and I could also exist in that universe, would I one-box or two-box? All I can really say is that I hope I would one-box; it is not entirely up to the me-now in this universe. Whether the hypothetical then-and-there-me would one-box or not, I am confident that if he did one-box he'd get the million in that universe, and if he didn't he wouldn't. Unless we assume we have already made hypotheses about whether what we think we would do in other universes is by hypothesis correct once we state it, in which case we wouldn't want to fight that hypothesis either.

None of this seems to me to bear on the question of how much effort should be devoted to making sure an AI in THIS universe would one-box, which I thought was the original reason to bring up Newcomb's problem here. To answer THAT question, you WOULD have to concern yourself with whether this is a universe in which an honest Omega could exist.

But for the pure problem where we don't get to give the sniff test to our hypotheses, you know what you must do.