I was actually thinking of making a follow-up post like this. I basically agree.
Let's talk about two kinds of choice:
- choice in the moment
- choice of what kind of agent to be
I think this is the main insight: depending on what you consider the goal of decision theory, you're thinking about either (1) or (2), and the two lead to conflicting conclusions. My implicit claim in the linked post is that when describing thought experiments like Newcomb's Problem, or discussing decision theory in general, people appear to be referring to (1), at least in classical decision theory circles. But on LessWrong people often switch to discussing (2) in a confusing way.
> the core problem in decision theory is reconciling these various cases and finding a theory which works generally
I don't think this is a core problem, because it doesn't make sense to look for a single theory that does best at two different goals.
Agree on the first part 👍
On this:

> the core problem in decision theory is reconciling these various cases and finding a theory which works generally
My bad for being unclear. What I meant to convey here was:
Agree that, insofar as decision theory asks two different questions, the answers will probably be different, and looking for a single theory which works for both isn't wise.
> I think it's correct that talking about "choice" in the moment is misguided. If Omega is a perfect predictor, you don't really have a choice at the point at which Omega has left and you have two boxes. Or you do in some kind of compatibilist sense that we may care about morally but not in the decision-theoretic sense.
If Omega knew everything you were ever going to do, would that throw decision theory out of the window as far as you are concerned? If you somehow knew what you were going to do at some point in the future - as in, Omega actually told you specifically what you will do - then yeah, it would be pretty pointless to try to apply decision theory to that choice, which was, even from your own perspective, "already determined". But the fact that Omega knows doesn't suddenly make the analysis of what's rational to do useless.
If Omega tells you what you'll do, you can still do whatever. If you do something different, this by construction refutes the existence of the current situation where Omega made a correct prediction and communicated it correctly (your decision can determine whether the current situation is actual or counterfactual). You are in no way constrained by the existence of a prediction, or by having observed what this prediction is. Instead, it's Omega that is constrained by your behavior; it must obey your actions in its predictions about them. See also Transparent Newcomb's Problem.
This is clearer when you think of yourself (or of an agent) as an abstract computation rather than a physical thing, a process formally specified by a program rather than a physical computer running it. You can't change what an abstract computation does by damaging physical computers, so in any confrontation between unbounded authority and an abstract computation, the abstract computation has the final word. You can only convince an abstract computation to behave in some way according to its own nature and algorithm, and external constructions (such as Omega being omniscient, or the thought experiment being set up in a certain way) aren't going to be universally compelling to abstract algorithms.
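To make this concrete, here's a toy sketch (in Python; the payoffs and function names are my own illustrative assumptions, not anything canonical). Omega's "prediction" is simply another run of the agent's decision procedure, so the prediction is fixed by the agent's algorithm rather than the other way around:

```python
# Toy sketch: Omega "predicts" by running the agent's own decision procedure.
# The prediction is therefore determined by the agent's algorithm, not the
# other way around. Names and payoffs are illustrative assumptions.

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def run_newcomb(agent):
    prediction = agent()   # Omega's prediction must obey the agent's algorithm
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000
    choice = agent()       # later, the agent runs the very same computation
    return opaque if choice == "one-box" else opaque + transparent

print(run_newcomb(one_boxer))   # 1000000
print(run_newcomb(two_boxer))   # 1000
```

The point is just that the `prediction` line can never disagree with the `choice` line, because they are the same computation.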
> If you do something different, this by construction refutes the existence of the current situation where Omega made a correct prediction and communicated it correctly (your decision can determine whether the current situation is actual or counterfactual).
This is true, and it's also true in general that there's always technically a chance that Omega's prediction is false; I don't think there's a conceivable epistemic situation where you could be literally 100% confident in its predictions. However, by stipulation, in typical Omega scenarios it is, according to what you know, exceedingly unlikely that its prediction is incorrect.
You could also perhaps just ignore Omega's prediction and do whatever you'd do without this foreknowledge, or act on the assumption that defying the prediction is still on the table. You wouldn't necessarily feel "constrained by the prediction", but rather "constrained" only in the normal sense in which various factors constrain your decision - yet for one reason or another you'd almost certainly end up choosing as Omega predicted.
Let's say this decision is complicated enough that doing the cost-benefit analysis "normally" carries a significant cost in terms of time and effort. Would you agree that it would be rational to skip that part and just base your decision on what Omega predicted when the time comes? That is the sense in which I think it makes sense to treat the decision as "already determined from your perspective".
If you are a perfect reasoner, some variations on Newcomb's problem will subject you and Omega to the halting problem, in which case the premise "Omega can predict your actions" is inconsistent.
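A toy diagonalization sketch of that inconsistency (in Python; the names are my own illustrative assumptions): if the agent can consult the very predictor Omega uses and then do the opposite, any predictor that commits to an answer is refuted by construction, and a predictor that instead tries to simulate the agent exactly never halts:

```python
# Toy diagonalization sketch (illustrative names): an agent that consults the
# predictor and then does the opposite. Any predictor that commits to a fixed
# answer is wrong about this agent; one that simulates it exactly recurses
# forever, which is the halting-problem flavor of the inconsistency.

def contrarian(predict):
    # Ask the predictor what it thinks this agent will do, then do the opposite.
    return "two-box" if predict(contrarian) == "one-box" else "one-box"

def committed_predictor(agent):
    # Any fixed guess is refuted by construction.
    return "one-box"

print(committed_predictor(contrarian))   # predicts "one-box"
print(contrarian(committed_predictor))   # actually "two-box": the prediction fails
```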
And my answer to the smoking lesion problem (given the usual formulation of the problem, which may not include the phrase "want to") is: what mechanism are you suggesting leads someone with the gene to be more likely to smoke? If it doesn't affect your reasoning process (but may affect its premises, like how desirable smoking is), then deciding to smoke or not to smoke as the result of a reasoning process is not correlated with cancer, and you should decide to smoke. If it does affect your reasoning process, the question "what should an ideal reasoner choose?" is irrelevant.
Newcomb's problem feels like a paradox. Nina says it isn't (also on LW). Her case is that if you're faced with a perfect predictor (or even a better-than-chance one, but let's do the simple case first), you basically don't have a choice. Hence all the talk of what choice you will make doesn't really make sense. Talking about whether you'll choose to take one box or two after Omega has predicted your action is like asking whether a printed circuit board with a set configuration should "choose" to output a 0 or a 1. It's fundamentally a question that doesn't make sense, and all the apparent weirdness and paradox that follows stems from asking a nonsensical question.
I basically agree with this claim. I also think it's an insight that's not that important. Let's talk about two kinds of choice:
I think it's correct that talking about "choice" in the moment is misguided. If Omega is a perfect predictor, you don't really have a choice at the point at which Omega has left and you have two boxes. Or you do in some kind of compatibilist sense that we may care about morally but not in the decision-theoretic sense. I think a different kind of choice you do have is what kind of agent you want to be / what kind of decision-making algorithm you want to use in general. This second kind of choice is not impacted by Omega being a perfect predictor: it happens before Omega swoops in. For this choice, Newcomb's problem is still fairly interesting.
I guess my meta-level thoughts on why Newcomb's problem is worth thinking about go something like this:
Newcomb's problem is thus still important and interesting even if you don't think it's a paradox. Although, saying that, it does feel like I'm basically agreeing with Nina that the paradox can be dissolved. It's just that I don't think dissolving the paradox actually does much philosophically.