If you are a perfect reasoner, some variations on Newcomb's problem will subject you and Omega to the halting problem, in which case the premise "Omega can predict your actions" is inconsistent.
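To make that concrete, here's a toy sketch of the diagonalization (my own illustration; `predict` is a hypothetical oracle standing in for Omega, not anything from the original problem): an agent that can consult the predictor about itself can just do the opposite of whatever it's told, so no consistent prediction exists.

```python
# A toy illustration of the self-reference problem: assume a hypothetical
# `predict(agent)` oracle that returns the action Omega expects the agent to take.

def contrarian_agent(predict):
    """Do the opposite of whatever Omega predicts for this agent."""
    predicted = predict(contrarian_agent)  # ask the oracle about ourselves
    return "two-box" if predicted == "one-box" else "one-box"

def naive_predictor(agent):
    """A stand-in for Omega that always predicts one-boxing."""
    return "one-box"

# Any predictor that is itself a computable function of the agent is wrong
# about contrarian_agent by construction: if it predicts "one-box" the agent
# two-boxes, and vice versa.
print(contrarian_agent(naive_predictor))  # prints "two-box"
```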
And my answer to the smoking lesion problem (given the usual formulation of the problem, which may not include the phrase "want to") is: what mechanism are you suggesting leads someone with the gene to be more likely to smoke? If it doesn't affect your reasoning process (though it may affect premises, like how desirable smoking is), then deciding to smoke or not as the result of a reasoning process is not correlated with cancer, and you should decide to smoke. If it does affect your reasoning process, the question "what should an ideal reasoner choose?" is irrelevant.
I was actually thinking of making a follow-up post like this. I basically agree.
Let's talk about two kinds of choice:
1. choice in the moment
2. choice of what kind of agent to be
I think this is the main insight: depending on what you consider the goal of decision theory, you're thinking about either (1) or (2), and the two lead to conflicting conclusions. My implicit claim in the linked post is that when people describe thought experiments like Newcomb's Problem, or discuss decision theory in general, they appear to be referring to (1), at least in classical decision theory circles. But on LessWrong people often switch to discussing (2) in a confusing way.
> the core problem in decision theory is reconciling these various cases and finding a theory which works generally
I don't think this is the core problem: if (1) and (2) are genuinely different goals, it doesn't make sense to look for a single theory that does best at both.
Context: Newcomb's Paradox is a problem in decision theory. Omega swoops in and places two boxes in front of you. One is transparent and contains $1,000. The other is opaque and contains either $1,000,000, if Omega thinks you'll take only the opaque box, or $0, if Omega thinks you'll take both boxes. Omega is a perfect predictor. Do you take one box or both?
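For concreteness, here's the payoff arithmetic under a perfect predictor, using the dollar amounts above (the function and variable names are mine; this is just a sketch):

```python
# Payoff arithmetic for Newcomb's problem with the standard amounts.
SMALL = 1_000        # always in the transparent box
BIG = 1_000_000      # in the opaque box iff Omega predicts one-boxing

def payoff(action: str, prediction: str) -> int:
    """Money received given the agent's action and Omega's prediction."""
    opaque = BIG if prediction == "one-box" else 0
    return opaque if action == "one-box" else opaque + SMALL

# A perfect predictor means the prediction always matches the action:
print(payoff("one-box", "one-box"))   # 1000000
print(payoff("two-box", "two-box"))   # 1000

# The "dominance" intuition for two-boxing comes from holding the prediction
# fixed while varying the action, e.g. payoff("two-box", "one-box") == 1001000.
```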
Newcomb's problem feels like a paradox. Nina says it isn't (also on LW). Her case is that if you're faced with a perfect predictor (or even a better-than-chance one, but let's do the simple case first), you basically don't have a choice. Hence all the talk of what choice you will make doesn't really make sense. Talking about whether you'll choose to take one box or two after Omega has predicted your action is like asking whether a printed circuit board with a fixed configuration should "choose" to output a 0 or a 1. It's fundamentally a question that doesn't make sense, and all the apparent weirdness and paradox that follows stems from asking a nonsensical question.
I basically agree with this claim. I also think it's an insight that's not that important. Let's talk about two kinds of choice: choice in the moment, and choice of what kind of agent to be.
I think it's correct that talking about "choice" in the moment is misguided. If Omega is a perfect predictor, you don't really have a choice at the point at which Omega has left and you are standing in front of the two boxes. Or you do in some compatibilist sense that we may care about morally, but not in the decision-theoretic sense. The different kind of choice you do have is what kind of agent you want to be, i.e. what kind of decision-making algorithm you want to use in general. This second kind of choice is not affected by Omega being a perfect predictor, because it happens before Omega swoops in. For this choice, Newcomb's problem is still fairly interesting.
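Here's a rough sketch of what I mean, under the assumption (mine, for illustration) that Omega predicts by simulating the agent's fixed decision procedure; none of these names come from the original problem:

```python
# Sketch: the "choice" that matters is which policy you are running before
# Omega arrives. Once the policy is fixed, Omega's prediction and your
# in-the-moment action are both determined by it.

def one_boxer() -> str:
    return "one-box"

def two_boxer() -> str:
    return "two-box"

def run_newcomb(policy) -> int:
    prediction = policy()                  # Omega "predicts" by simulating the policy
    opaque = 1_000_000 if prediction == "one-box" else 0
    action = policy()                      # the later action runs the same fixed policy
    return opaque if action == "one-box" else opaque + 1_000

print(run_newcomb(one_boxer))   # 1000000
print(run_newcomb(two_boxer))   # 1000
```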
I guess my meta-level thoughts on why Newcomb's problem is worth thinking about go something like this:
Newcomb's problem is thus still important and interesting even if you don't think it's a paradox. Although, saying that, it does feel like I'm basically agreeing with Nina that the paradox can be dissolved. It's just that I don't think dissolving the paradox actually does much philosophically.