Well, you can't simulate it because the mechanism of prediction is unspecified, as is the mechanism of free will that makes the decision. You just don't know if, in the thought experiment universe, you actually have an open option to choose.
You can very easily simulate the trivial case (ignore causality and decision theory; assume Omega cheats by changing the box contents after you decide but before the result is revealed), which leads to one-boxing.
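A minimal Python sketch of that trivial case (function name and payoffs are illustrative):

```python
def simulate_cheating_omega(strategy, trials=10_000):
    """Omega 'cheats': box B is filled after the player decides,
    exactly when the player one-boxes that round."""
    total = 0
    for _ in range(trials):
        one_box = strategy()
        box_b = 1_000_000 if one_box else 0   # set after the decision
        total += box_b if one_box else box_b + 1_000
    return total / trials

always_one_box = lambda: True
always_two_box = lambda: False
```

Under this cheat, one-boxing trivially dominates: 1,000,000 versus 1,000 per round.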
Isn't the main problem simulating the Predictor itself? I thought about this idea too. The simplest variant is that the alleged Predictor estimates P(the player will one-box) from the player's previous choices, as P(next choice is a one-box) = (one-boxing trials + 1)/(trials + 2). This setup lets the player fix a probability p and watch the Predictor's estimate converge to p, punishing or not punishing the player accordingly.
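That estimator is Laplace's rule of succession, and the convergence is easy to check in a sketch (the function name, threshold, and payoff are illustrative):

```python
import random

def laplace_run(p_one_box, rounds=10_000, seed=0):
    """The Predictor estimates P(one-box) by Laplace's rule:
    (one-box trials + 1) / (trials + 2), and fills box B whenever
    the estimate exceeds 1/2. The player one-boxes with fixed
    probability p_one_box. Returns the Predictor's final estimate."""
    rng = random.Random(seed)
    one_boxed = 0
    for t in range(rounds):
        estimate = (one_boxed + 1) / (t + 2)
        box_b = 1_000_000 if estimate > 0.5 else 0  # the Predictor's bet
        if rng.random() < p_one_box:
            one_boxed += 1
    return (one_boxed + 1) / (rounds + 2)
```

The final estimate tracks the player's p, so a mostly-one-boxing player keeps receiving the million while a mostly-two-boxing player loses it.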
A more complex idea is to have the player two-box with probability p if a uniform random number between 0 and 1 is at most r, and with probability q if it is greater than r; the Predictor then learns the player's random number, estimates the parameters, and chooses not to place the million with probability p_est or q_est, depending on whether the random number exceeds r_est. To deal with precision issues, one can set p to 1 and q to 0, so that the Predictor only has to figure out the value of r.
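A sketch of the p = 1, q = 0 special case. The particular update rule here (shrinking an interval that brackets r) is my own illustrative choice, not the only way to estimate r:

```python
import random

def threshold_run(r, rounds=10_000, seed=1):
    """Player two-boxes exactly when the shared uniform draw u is at
    most r (the p = 1, q = 0 case). The Predictor sees u and predicts
    two-boxing when u <= r_est, its current estimate of r."""
    rng = random.Random(seed)
    lo, hi = 0.0, 1.0        # interval known to contain r
    correct = 0
    for _ in range(rounds):
        u = rng.random()
        r_est = (lo + hi) / 2
        predicted_two_box = u <= r_est   # Predictor withholds the million
        two_box = u <= r
        if two_box:
            lo = max(lo, u)  # r is at least u
        else:
            hi = min(hi, u)  # r is less than u
        correct += predicted_two_box == two_box
    return correct / rounds, (lo + hi) / 2
```

The bracket around r shrinks quickly, so the Predictor's accuracy approaches 1 and r_est approaches r.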
However, in this setup the player's choice to one-box or two-box determines whether a subsequent iteration receives the million. I suspect this is a way to derive FDT from mere superrationality.
Hey!
I am trying to approach Newcomb's paradox in the most natural way a programmer can: by writing a simulation. I wrote a simulation supporting the expected-utility approach (which favors one-boxing).
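For concreteness, here is a minimal sketch of what such an expected-utility simulation might look like, assuming a fixed predictor accuracy (the 90% figure, function name, and payoffs are illustrative):

```python
import random

def average_payoff(one_box, accuracy=0.9, trials=100_000, seed=2):
    """Average payoff against a Predictor that guesses the player's
    choice correctly with probability `accuracy`. Box B holds the
    million iff the Predictor predicts one-boxing."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        predicts_one = one_box if rng.random() < accuracy else not one_box
        box_b = 1_000_000 if predicts_one else 0
        total += box_b if one_box else box_b + 1_000
    return total / trials
```

At 90% accuracy the expected payoffs are 900,000 for one-boxing versus 101,000 for two-boxing, so the simulation favors one-boxing, precisely because the code couples the prediction to the choice.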
However, I am afraid I am baking my assumptions about the system into the code, and I can't even figure out how I would write a simulation that supports the strategic-dominance strategy (which favors taking both boxes).
If I try to decouple the prediction from the actual choice, to "allow" the player to pick a different box than the predictor predicts... well, then the predictor's accuracy falls and the premise of the problem is defeated.
Is there any way at all to write a simulation that supports players picking both boxes?