If you haven't already seen the paradox, here it is (there are multiple versions but this captures the main features). 

Given the credences of other rational agents and the literature on rational disagreement, it makes sense to one-box after doing an EV calculation that takes this 39-30 two-box majority into account. Even people who argue that the lean towards two-boxing among decision theorists is stronger than the survey suggests should admit this.
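
For concreteness, here is a minimal sketch of the kind of EV calculation I have in mind. Treating the survey split directly as a credence in which camp is right, and using the standard $1,000,000 / $1,000 payoffs with a near-perfect Omega, are my simplifying assumptions, not anything the survey itself licenses:

```python
# Deference-based sanity check: treat the 39-30 survey split among decision
# theorists as my credence in which camp is right (a simplification), with
# the standard $1,000,000 / $1,000 payoffs and a near-perfect Omega.

p_two_boxers_right = 39 / (39 + 30)   # ~0.57
p_one_boxers_right = 30 / (39 + 30)   # ~0.43

# If the one-boxers are right, one-boxing nets ~$999,000 more than two-boxing
# ($1,000,000 vs. $1,000). If the two-boxers are right, one-boxing merely
# forgoes the $1,000 in the transparent box.
expected_gain_of_one_boxing = p_one_boxers_right * 999_000 - p_two_boxers_right * 1_000
print(f"Expected gain from one-boxing: ${expected_gain_of_one_boxing:,.0f}")
# ~ $433,783: even deferring to the two-boxing majority, one-boxing comes out ahead.
```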

Tell me why I'm wrong please! 

It makes sense to very legibly one-box even if Omega is a very far from perfect predictor. Make sure that Omega has lots of reliable information that predicts that you will one-box.

Then actually one-box, because you don't know what information Omega has about you that you aren't aware of. Successfully bamboozling Omega gets you an extra $1000, while unsuccessfully trying to bamboozle Omega loses you $999,000. If you can't be 99.9% sure that you will succeed then it's not worth trying.
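
To make the 99.9% figure explicit: with the standard payoffs, the break-even success probability works out as follows (the algebra is mine, just unpacking the numbers above).

```python
# Break-even chance of successfully "bamboozling" Omega, assuming the standard
# payoffs: success means Omega still predicts one-boxing while you two-box
# (+$1,000 over a clean one-box); failure means Omega catches on and leaves the
# opaque box empty (-$999,000 relative to a clean one-box).
gain_if_success = 1_000
loss_if_failure = 999_000

# Solve p * gain - (1 - p) * loss = 0 for the success probability p.
p_breakeven = loss_if_failure / (gain_if_success + loss_if_failure)
print(f"Break-even success probability: {p_breakeven:.3f}")  # 0.999
```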

The thing about Newcomb's problem for me was always the distribution between the two boxes, one being $1,000,000 and the other being $1,000. I'd rather not risk losing $999,000 for a chance at an extra $1,000! I could just one-box for real, take the million, then put it in an index fund and wait for it to go up by 0.1%.

I do understand that the question really comes into play when the amounts vary and Omega's success rate is lower - if I could one-box for $500 and two-box for $1,500 total and Omega is wrong 25% of the time observed, that would be a different play.
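
Spelling out that modified case (reading "two-box for $1,500 total" as a $1,000 transparent box plus a $500 opaque box, which is my interpretation), and computing EV the evidential way, conditioning on Omega's 75% observed accuracy:

```python
# EV comparison for the modified payoffs mentioned above: a $1,000 transparent
# box, a $500 opaque box (so two-boxing pays $1,500 total when Omega
# mispredicts), and an Omega observed to be wrong 25% of the time.
p_correct = 0.75

ev_one_box = p_correct * 500 + (1 - p_correct) * 0        # Omega right -> $500
ev_two_box = p_correct * 1_000 + (1 - p_correct) * 1_500  # Omega wrong -> $1,500

print(f"EV(one-box) = ${ev_one_box:,.2f}")   # $375.00
print(f"EV(two-box) = ${ev_two_box:,.2f}")   # $1,125.00
# With these numbers even the evidential calculation favors two-boxing,
# which is the "different play" the comment is pointing at.
```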

Dagon:

I'm not sure I follow why Aumann's agreement theorem is relevant here - the survey does not include any rational agents, agents with mutual knowledge of their rationality, or agents with the same priors. It makes sense to one-box ONLY if your EV calculation assigns a significant probability to causality violation (your decision somehow affecting Omega's previously-committed behavior).

It makes sense to one-box ONLY if your EV calculation assigns a significant probability to causality violation

It only makes sense to two-box if you believe that your decision is causally isolated from history in every way that Omega can discern. That is, that you can "just do it" without it being possible for Omega to have predicted that you will "just do it" any better than chance. Unfortunately this violates the conditions of the scenario (and everyday reality).

It only makes sense to two-box if you believe that your decision is causally isolated from history in every way that Omega can discern.

Right.  That's why CDT is broken.  I suspect from the "disagree" score that people didn't realize that I do, in fact, assert that causality is upstream of agent decisions (including Omega, for that matter) and that "free will" is an illusion.