omnizoid has replied to my critique of his "FDT is crazy" position here. This post is my response.

The most important argument against FDT is that, while it’s a fine account of what type of agent you want to be, at least in many circumstances, it’s a completely terrible account of rationality—of what it’s actually wise to do when you’re in one situation.

This may just be the crux of our disagreement. I claim there is no difference here: the questions "What type of agent do I want to be?" and "What decision should I make in this scenario?" are equivalent. If it is wise to do X in a given problem, then you want to be an X-ing agent; and if you should be an X-ing agent, then it is wise to do X. The only way to do X is to have a decision procedure that outputs X, which makes you an X-ing agent. Conversely, if you are an X-ing agent, you have a decision procedure that outputs X, so you do X.
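To illustrate this identity claim, here is a toy sketch (the names are mine, purely illustrative): an agent is nothing over and above its decision procedure, and a decision is just that procedure's output.

```python
# Toy illustration: "being an X-ing agent" and "doing X" fix each other,
# because an agent just is a decision procedure and a decision is its output.
def make_agent(act: str):
    """Return the decision procedure of an agent who does `act`."""
    def decision_procedure(problem: str) -> str:
        return act  # whatever the problem, this agent does `act`
    return decision_procedure

x_er = make_agent("X")
assert x_er("some problem") == "X"  # an X-ing agent does X, by construction
```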

Suppose that there’s an agent who has a very high probability of creating people who once they exist will cut off their legs in ways that don’t benefit them. In this case, cutting off one’s legs is clearly irrational—one doesn’t benefit at all and yet is harmed greatly.

Unfortunately, omnizoid once again doesn't clearly state the problem, but I assume he means the following:

  • there's an agent who can (almost) perfectly predict whether people will cut off their legs once they exist
  • this agent only creates people whom she predicts will cut off their legs once they exist
  • existing with legs > existing without legs > not existing

FDT'ers would indeed cut off their legs: otherwise they wouldn't exist. omnizoid seems to believe that once you already exist, cutting off your legs is ridiculous. This is understandable, but ultimately false. The point is that your decision procedure doesn't make the decision just once. Your decision procedure also makes it in the predictor's head, when she is contemplating whether or not to create you. There, deciding not to cut off your legs will prevent the predictor from creating you.
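
To make the two calculations concrete, here is a minimal sketch. The utilities and the predictor's accuracy are hypothetical numbers of my own choosing; all that matters is the ordering from the list above.

```python
# Hypothetical utilities respecting: legs > no legs > not existing.
U_LEGS, U_NO_LEGS, U_NOT_CREATED = 10.0, 5.0, 0.0
ACCURACY = 0.99  # assumed (almost) perfect predictor

def fdt_value(policy_cuts: bool) -> float:
    """FDT scores the whole policy: the predictor runs your procedure
    before deciding whether to create you, so the policy affects existence."""
    p_created = ACCURACY if policy_cuts else 1.0 - ACCURACY
    u_if_created = U_NO_LEGS if policy_cuts else U_LEGS
    return p_created * u_if_created + (1.0 - p_created) * U_NOT_CREATED

def cdt_value(policy_cuts: bool) -> float:
    """CDT takes existence as given; cutting is then pure harm."""
    return U_NO_LEGS if policy_cuts else U_LEGS

print(fdt_value(True), fdt_value(False))  # 4.95 vs 0.1: FDT cuts
print(cdt_value(True), cdt_value(False))  # 5.0 vs 10.0: CDT keeps its legs
```

The only difference between the two functions is whether the predictor's run of your procedure counts; that difference is the whole dispute.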

Heighn’s response to this argument is that this is a perfectly fine prescription. After all, agents who follow their advice get more utility on average than agents who follow EDT or CDT.

Note: my response is that it is a perfectly fine prescription only if my above interpretation of omnizoid's problem is correct.

Decision theories are not about what kind of agent you want to be.

Yes, they are, but I already covered that above.

omnizoid also believes my reaction to Wolfgang Schwarz's Procreation is confused:

Procreation. I wonder whether to procreate. I know for sure that doing so would make my life miserable. But I also have reason to believe that my father faced the exact same choice, and that he followed FDT. If FDT were to recommend not procreating, there's a significant probability that I wouldn't exist. I highly value existing (even miserably existing). So it would be better if FDT were to recommend procreating. So FDT says I should procreate. (Note that this (incrementally) confirms the hypothesis that my father used FDT in the same choice situation, for I know that he reached the decision to procreate.)

My comment was:

This problem doesn't fairly compare FDT to CDT though. By specifying that the father follows FDT, FDT'ers can't possibly do better than procreating. Procreation directly punishes FDT'ers: not because of the decisions FDT makes, but for following FDT in the first place.

omnizoid reacts:

They could do better. They could follow CDT and never pass up on the free value of remaining child-free.

No! Following CDT wasn't an option: the question was whether or not to procreate, given a father who is stipulated to follow FDT. I maintain that Procreation is unfair towards FDT; it punishes agents for running FDT at all, not for anything FDT decides.
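
Here is the same style of sketch for Procreation. Again, the utilities and the probability that my father faced the same choice are hypothetical placeholders of mine.

```python
# Hypothetical utilities: child-free > miserable parent > not existing.
U_CHILDFREE, U_MISERABLE_PARENT, U_NOT_EXISTING = 10.0, 5.0, 0.0
P_FATHER_SAME = 0.9  # assumed probability my father ran FDT on the same choice

def fdt_value(policy_procreates: bool) -> float:
    """The same FDT policy decided my father's choice, so it also
    determines (with probability P_FATHER_SAME) whether I exist at all."""
    p_exist = 1.0 if policy_procreates else 1.0 - P_FATHER_SAME
    u_if_existing = U_MISERABLE_PARENT if policy_procreates else U_CHILDFREE
    return p_exist * u_if_existing + (1.0 - p_exist) * U_NOT_EXISTING

print(fdt_value(True), fdt_value(False))  # 5.0 vs 1.0: FDT procreates
```

Note that the only available acts are the two arguments to this function; "switch to CDT" is not among them, which is exactly why I call the problem unfair.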

I do not know of a single academic decision theorist who accepts FDT. When I bring it up with people who know about decision theory, they treat it with derision and laughter.

They should write up a critique!

Finally, Heighn accuses MacAskill of misrepresenting FDT.

Hmm, maybe I shouldn't have used the word "misrepresents", as it could imply dishonesty, which I don't believe is involved. But yes, and again with all due respect: MacAskill is wrong about FDT when he says:

First, take some physical process S (like the lesion from the Smoking Lesion) that causes a ‘mere statistical regularity’ (it’s not a Predictor). And suppose that the existence of S tends to cause both (i) one-boxing tendencies and (ii) whether there’s money in the opaque box or not when decision-makers face Newcomb problems. If it’s S alone that results in the Newcomb set-up, then FDT will recommend two-boxing.

But now suppose that the pathway by which S causes there to be money in the opaque box or not is that another agent looks at S and, if the agent sees that S will cause decision-maker X to be a one-boxer, then the agent puts money in X’s opaque box. Now, because there’s an agent making predictions, the FDT adherent will presumably want to say that the right action is one-boxing.

I explained why by saying:

This is just wrong: the critical factor is not whether "there's an agent making predictions". The critical factor is subjunctive dependence, and there is no subjunctive dependence between S and the decision maker here.

omnizoid, however, believes there is subjunctive dependence:

But in this case there is subjunctive dependence. The agent’s report depends on whether the person will actually one-box on account of the lesion. Thus, there is an implausible discontinuity, because whether to one-box comes to depend on the precise causal mechanisms behind the box.

No, there is no subjunctive dependence. Yes,

The agent’s report depends on whether the person will actually one-box on account of the lesion.

but that's just a correlation. This problem is just Smoking Lesion, where FDT smokes. The agent makes her prediction by looking at S, and S is explicitly stated to cause a 'mere statistical regularity'; it's even said that S is "not a Predictor". So there is no subjunctive dependence between X and S, and by extension, none between X and the agent.
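
A small simulation may help separate correlation from subjunctive dependence. Everything here is my own toy model: the lesion frequency and payoffs are placeholders, and the middleman agent reads only S, never the decision-maker's algorithm.

```python
import random

def average_payoffs(n: int = 100_000) -> dict:
    """Toy model of MacAskill's setup: the lesion S causes one-boxing
    *tendencies* and fixes the box contents. The middleman reads S only,
    so the contents don't subjunctively depend on your actual decision."""
    totals = {"one-box": 0.0, "two-box": 0.0}
    for _ in range(n):
        lesion = random.random() < 0.5             # placeholder lesion frequency
        box_contents = 1_000_000 if lesion else 0  # set by S, not by you
        totals["one-box"] += box_contents
        totals["two-box"] += box_contents + 1_000
    return {act: total / n for act, total in totals.items()}

print(average_payoffs())  # two-boxing beats one-boxing by exactly $1,000
```

Because the box is filled before, and independently of, any run of your decision procedure, changing your output changes nothing upstream; that is precisely what is missing here and present in Newcomb's problem with a genuine Predictor.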

That's it for my response. I'm looking forward to omnizoid's reaction!
