But can you be 99.99% confident that 1159 is a prime?
This doesn't affect the thrust of the post, but 1159 is not prime: its prime factors are 19 and 61.
That may have, in fact, been the point. I doubt many people bothered to check.
I agree that you can be 99.99% (or more) certain that 53 is prime, but I don't think you can be that confident based only on the argument you gave.
If a number is composite, it must have a prime factor no greater than its square root. Because 53 is less than 64, sqrt(53) is less than 8. So, to find out whether 53 is prime, we only need to check whether it is divisible by the primes less than 8 (i.e. 2, 3, 5, and 7). 53's last digit is odd, so it's not divisible by 2. 53's last digit is neither 0 nor 5, so it's not divisible by 5. The nearest multiples of 3 are 51 and 54, so it's not divisible by 3, and the nearest multiples of 7 are 49 and 56, so it's not divisible by 7. Therefore 53 is prime.
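For anyone who wants to run this check mechanically (for example on 1159), here is a minimal trial-division sketch in Python implementing the square-root argument above; the function name is just illustrative:

```python
def is_prime(n: int) -> bool:
    """A composite n must have a prime factor no greater than sqrt(n),
    so trial division only needs to test divisors d with d*d <= n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False  # found a factor, so n is composite
        d += 1
    return True

print(is_prime(53))    # True  -- no divisor among 2, 3, 5, 7
print(is_prime(1159))  # False -- 1159 = 19 * 61
```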
Are the various people actually being presented with the same problem? It makes a difference if the predictor is described as a skilled human rather than as a near omniscient entity.
The method of making the prediction is important. It is unlikely that a mere human without computational assistance could simulate someone in sufficient detail to reliably make one-boxing the best option. But since the human predictor knows that the people he is asking to choose also realize this, he might still maintain high accuracy by always predicting two-boxing.
This is interesting. I suspect this is a selection effect, but if it is true that there is a heavy bias in favor of one-boxing among a more representative sample in the actual Newcomb's problem, then a predictor that always predicts one-boxing could be surprisingly accurate.
It is intended to illustrate that for a given level of certainty, one-boxing has greater expected utility with an infallible agent than it does with a fallible agent.
As for different behaviors, I suppose one might suspect the fallible agent of using statistical methods and lumping you into a reference class to make its prediction. One could be much more certain that the infallible agent’s prediction is based on what you specifically would choose.
You may have misunderstood what is meant by "smart predictor".
The wiki entry does not say how Omega makes the prediction. Omega may be intelligent enough to be a smart predictor, but Omega is also intelligent enough to be a dumb predictor. What matters is the method Omega uses to generate the prediction, and whether that method causally connects Omega’s prediction back to the initial conditions that causally determine your choice.
Furthermore, a significant part of the essay explains in detail why many of the assumptions associated...
I have written a critique of the position that one-boxing wins on Newcomb's problem, but have had difficulty posting it here on Less Wrong. I have temporarily posted it here
I’m finding "correct" to be a loaded term here. It is correct in the sense that your conclusions follow from your premises, but in my view it bears only a superficial resemblance to Newcomb’s problem. Omega is not defined in Newcomb-like problems the way you defined it, and the resulting difference is not trivial.
To really get at the core dilemma of Newcomb’s problem in detail, one needs to attempt to work out the equilibrium accuracy (that is, the level of accuracy at which one-boxing and two-boxing have equal expected utility), not just arbitrarily set the accuracy to the upper limit, where it is easy to work out that one-boxing wins.
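As a rough illustration of that calculation (a sketch only, assuming the usual $1,000,000 / $1,000 payoffs; statements of the problem vary), the equilibrium accuracy p solves p × 1,000,000 = (1 − p) × 1,001,000 + p × 1,000, which comes out to p = 0.5005:

```python
# Sketch: equilibrium predictor accuracy under the standard payoffs.
BIG, SMALL = 1_000_000, 1_000

def eu_one_box(p):
    # You get the opaque box's contents only if the (one-box) prediction was right.
    return p * BIG

def eu_two_box(p):
    # You get both boxes only if the predictor wrongly expected one-boxing;
    # otherwise you keep just the transparent $1,000.
    return (1 - p) * (BIG + SMALL) + p * SMALL

# Setting the two equal: p*BIG = (1-p)*(BIG+SMALL) + p*SMALL  =>  p = (BIG+SMALL) / (2*BIG)
p_star = (BIG + SMALL) / (2 * BIG)
print(p_star)                                  # 0.5005
print(eu_one_box(p_star), eu_two_box(p_star))  # both ≈ 500500.0
```

Above that accuracy one-boxing has the higher expected utility; below it, two-boxing does.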
First, thanks for explaining your downvote and thereby giving me an opportunity to respond.
We say that Omega is a perfect predictor not because it's so very reasonable for him to be a perfect predictor, but so that people won't get distracted in those directions.
The problem is that it is not a fair simplification: it disrupts the dilemma in such a way as to render it trivial. If you set the accuracy of the prediction to 100%, many of the other specific details of the problem become largely irrelevant. For example, you could then put $999,999.99 into box...
The basic concept behind Omega is that it is (a) a perfect predictor
I disagree. Omega can have various properties as needed to simplify various thought experiments, but for the purpose of Newcomb-like problems Omega is a very good predictor that may even have a perfect record, not a perfect predictor in the sense of being perfect in principle or infallible.
If Omega were a perfect predictor, then the whole dilemma inherent in Newcomb-like problems would cease to exist, and that short-circuits the entire point of posing those types of problems.
I don’t think Newcomb’s Problem can easily be stated as a real (as opposed to a simply logical) problem. Any instance of Newcomb’s problem that you can feasibly construct in the real world is not a strict one-shot problem. I would suggest that in optimizing a rational agent for the strictly logical one-shot problem, one is optimizing for a reality that we don’t exist in.
Even if I am wrong about Newcomb’s problem effectively being an iterated type of problem, treating it as if it were seems to solve the dilemma.
Consider this line of reasoning. Omega wants to ma...
Concerning Newcomb’s Problem, I understand that the dominant position among the regular posters of this site is that you should one-box. This is a position I question.
Suppose Charlie takes on the role of Omega and presents you with Newcomb’s Problem. So far as it is pertinent to the problem, Charlie is identical to Omega, with the notable exception that his prediction is only 55% likely to be accurate. Should you one-box or two-box in this case?
If you one-box, the expected utility is 0.55 × $1,000,000 = $550,000; if you two-box, it is 0.45 × $1,001,000 + 0.55 × $1,000 = $451,000. So it seems you should still one-box even when the prediction is not particularly accurate. Thoughts?
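Running the numbers for a few accuracy levels (a throwaway sketch, again assuming the usual $1,000,000 / $1,000 payoffs, and counting the $1,000 the two-boxer keeps when the prediction is right):

```python
BIG, SMALL = 1_000_000, 1_000

for p in (0.55, 0.75, 0.99):
    one_box = p * BIG                              # opaque box only if predicted correctly
    two_box = (1 - p) * (BIG + SMALL) + p * SMALL  # both boxes only if predicted incorrectly
    print(f"p={p:.2f}  one-box ${one_box:,.0f}  two-box ${two_box:,.0f}")

# p=0.55  one-box $550,000  two-box $451,000
# p=0.75  one-box $750,000  two-box $251,000
# p=0.99  one-box $990,000  two-box $11,000
```

So the advantage of one-boxing grows with the predictor's accuracy, but it already dominates at 55%.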