
After omnizoid asked whether people want to debate him on Functional Decision Theory (FDT), he and I chatted briefly and agreed to have a (short) debate. We agreed the first post should be by me: a reaction to omnizoid's original post, where he explains why he believes FDT is "crazy". In this post, I'll assume the reader has a basic understanding of FDT. If not, I suggest reading the paper.

Let's dive right into the arguments omnizoid makes against FDT. Here's the first one:

One example is a blackmail case. Suppose that a blackmailer will, every year, blackmail one person. There’s a 1 in a googol chance that he’ll blackmail someone who wouldn’t give in to the blackmail and a googol-1/googol chance that he’ll blackmail someone who would give in to the blackmail. He has blackmailed you. He threatens that if you don’t give him a dollar, he will share all of your most embarrassing secrets to everyone in the world. Should you give in?

FDT would say no. After all, agents who won’t give in are almost guaranteed to never be blackmailed. But this is totally crazy. You should give up one dollar to prevent all of your worst secrets from being spread to the world.

So if I understand this correctly, this problem works as follows:

  1. The blackmailer predicts - with accuracy (googol-1)/googol - whether you will pay $1 if blackmailed. He does so by running your decision procedure and observing what it outputs.
    1. If yes, he blackmails you. If you don't pay $1, your worst secrets are spread. This would cost you the equivalent of, say, $1,000,000.
    2. If no, he doesn't blackmail you.
  2. The blackmailer blackmails you.

(To be clear, this is my interpretation of the problem. omnizoid just says there's a 1/googol chance the blackmailer blackmails someone who wouldn't give in to the blackmail, and doesn't specify that in that case, the blackmailer was wrong in his prediction that the victim would pay. Maybe the blackmailer just blackmails everyone, and 1 in a googol people don't give in. If that's the case, FDT does pay.)

If this is the correct interpretation of the problem, FDT is fully correct not to pay the $1. omnizoid believes this causes your worst secrets to be spread, but it's specified in the problem statement that this only happens with probability 1/googol: if the blackmailer wrongly predicts that you will pay $1. With probability (googol-1)/googol, you don't get blackmailed and don't have to pay anything. A (googol-1)/googol probability of losing $1 is much worse than a 1/googol probability of losing $1,000,000. So FDT is correct.
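To make the comparison concrete, here's a minimal expected-loss sketch in Python. It assumes my interpretation above; the $1,000,000 cost of spread secrets is just my stand-in figure, and exact fractions avoid the rounding that a 1/googol term would otherwise vanish into:

```python
from fractions import Fraction

GOOGOL = 10**100
ACCURACY = Fraction(GOOGOL - 1, GOOGOL)  # chance the blackmailer's prediction is correct

# Policy "pay": you're almost certainly predicted to pay, get blackmailed, and lose $1.
expected_loss_pay = ACCURACY * 1

# Policy "refuse": only a misprediction (probability 1/googol) gets you
# blackmailed, in which case your secrets are spread, costing $1,000,000.
expected_loss_refuse = (1 - ACCURACY) * 1_000_000

print(float(expected_loss_pay))  # ~1.0 dollar
print(expected_loss_refuse)      # 1/10**94 of a dollar
```

Refusing costs you about 10^-94 dollars in expectation; paying costs you about a dollar.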

omnizoid can counter here that it is also specified that the blackmailer does blackmail you. But this is a problem of which decision to make, and that decision is first made in the blackmailer's brain (when he predicts what you will decide). If that decision is "don't pay the $1", the blackmailer will almost certainly not blackmail you.

Another way of looking at this is to ask: "Which decision theory do you want to run, keeping in mind that you might run into the Blackmail problem?" If you run FDT, you virtually never get blackmailed in the first place.

On to the next argument. Here, omnizoid uses Wolfgang Schwarz's Procreation problem:

Procreation. I wonder whether to procreate. I know for sure that doing so would make my life miserable. But I also have reason to believe that my father faced the exact same choice, and that he followed FDT. If FDT were to recommend not procreating, there's a significant probability that I wouldn't exist. I highly value existing (even miserably existing). So it would be better if FDT were to recommend procreating. So FDT says I should procreate. (Note that this (incrementally) confirms the hypothesis that my father used FDT in the same choice situation, for I know that he reached the decision to procreate.)

omnizoid doesn't explain why he believes FDT gives the wrong recommendation here, but Schwarz does:

In Procreation, FDT agents have a much worse life than CDT agents.

This is strictly true: FDT recommends procreating, because not procreating would mean you don't exist (due to the subjunctive dependence with your father). CDT'ers don't have this subjunctive dependence with their FDT father (and wouldn't even care if it were there), don't procreate, and are happier.

This problem doesn't fairly compare FDT to CDT, though. By specifying that the father follows FDT, FDT'ers can't possibly do better than procreating. Procreation directly punishes FDT'ers - not for the decisions FDT makes, but for following FDT in the first place. I can easily make an analogous problem that punishes CDT'ers for following CDT:

ProcreationCDT. I wonder whether to procreate. I know for sure that doing so would make my life miserable. But I also have reason to believe that my father faced the exact same choice, and that he followed CDT. I highly value existing (even miserably existing). Should I procreate?

FDT'ers don't procreate here and live happily. CDT'ers wouldn't procreate either and don't exist. So in this variant, FDT'ers fare much better than CDT'ers.

We can also make a fair variant of Procreation - a version I've called Procreation* in the past:

Procreation*. I wonder whether to procreate. I know for sure that doing so would make my life miserable. But I also have reason to believe that my father faced the exact same choice, and I know he followed the same decision theory I do. If my decision theory were to recommend not procreating, there's a significant probability that I wouldn't exist. I prefer a miserable life to no life at all, but obviously I prefer a happy life to a miserable one. Should I procreate?

So if you're an FDT'er, your father was an FDT'er, and if you are a CDT'er, your father was a CDT'er. FDT'ers procreate and live; CDT'ers don't procreate and don't exist. FDT wins. It surprises me to this day that Schwarz didn't seem to notice that his Procreation problem is unfair.
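To lay the three variants side by side, here's a toy outcome table in Python. The utility numbers are placeholders I'm assuming (happy life > miserable life > nonexistence), not anything from Schwarz or omnizoid:

```python
# Toy outcome table for the three Procreation variants. Utilities are
# placeholder assumptions: happy life = 1.0, miserable life = 0.1,
# nonexistence = 0.0.

outcomes = {
    # Procreation: the father follows FDT, no matter what you follow.
    "Procreation": {
        "FDT": ("procreates, miserable life", 0.1),
        "CDT": ("doesn't procreate, happy life", 1.0),
    },
    # ProcreationCDT: the father follows CDT, no matter what you follow.
    "ProcreationCDT": {
        "FDT": ("doesn't procreate, happy life", 1.0),
        "CDT": ("doesn't procreate, doesn't exist", 0.0),
    },
    # Procreation*: the father follows whatever theory you follow.
    "Procreation*": {
        "FDT": ("procreates, miserable life", 0.1),
        "CDT": ("doesn't procreate, doesn't exist", 0.0),
    },
}

for problem, row in outcomes.items():
    for theory, (outcome, utility) in row.items():
        print(f"{problem:15} {theory}: {outcome} (utility {utility})")
```

Only Procreation* holds the father's decision theory fixed to the agent's own, which is what makes it a fair comparison between the two theories.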

omnizoid's next argument is borrowed from William MacAskill's A Critique of Functional Decision Theory:

Bomb

You face two open boxes, Left and Right, and you must take one of them. In the Left box, there is a live bomb; taking this box will set off the bomb, setting you ablaze, and you certainly will burn slowly to death. The Right box is empty, but you have to pay $100 in order to be able to take it. 

A long-dead predictor predicted whether you would choose Left or Right, by running a simulation of you and seeing what that simulation did. If the predictor predicted that you would choose Right, then she put a bomb in Left. If the predictor predicted that you would choose Left, then she did not put a bomb in Left, and the box is empty. 

The predictor has a failure rate of only 1 in a trillion trillion. Helpfully, she left a note, explaining that she predicted that you would take Right, and therefore she put the bomb in Left. 

You are the only person left in the universe. You have a happy life, but you know that you will never meet another agent again, nor face another situation where any of your actions will have been predicted by another agent. What box should you choose?  

The right action, according to FDT, is to take Left, in the full knowledge that as a result you will slowly burn to death. Why? Because, using Y&S’s counterfactuals, if your algorithm were to output ‘Left’, then it would also have outputted ‘Left’ when the predictor made the simulation of you, and there would be no bomb in the box, and you could save yourself $100 by taking Left. In contrast, the right action on CDT or EDT is to take Right.

The recommendation is implausible enough. But if we stipulate that in this decision-situation the decision-maker is certain in the outcome that her actions would bring about, we see that FDT violates Guaranteed Payoffs.

FDT's recommendation isn't implausible here. I doubt I could explain it much better than MacAskill himself, though, when he says:

if your algorithm were to output ‘Left’, then it would also have outputted ‘Left’ when the predictor made the simulation of you, and there would be no bomb in the box, and you could save yourself $100 by taking Left.

The point seems to be that FDT'ers burn to death, but, like in the Blackmail problem, that only happens with vanishingly small probability. Unless you value your life at more than $100 trillion trillion - since you lose it with probability 1 in a trillion trillion but save $100 - Left-boxing is the correct decision.
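The break-even point is simple arithmetic. Here's a small Python sketch, assuming the quoted failure rate of 1 in a trillion trillion (10^-24); the dollar value of your life is a placeholder parameter:

```python
# Break-even sketch for Bomb. P_MISTAKE is the quoted failure rate;
# value_of_life is a placeholder dollar figure.

P_MISTAKE = 1e-24   # predictor's failure rate: 1 in a trillion trillion
RIGHT_COST = 100    # certain cost of taking Right

def left_is_better(value_of_life: float) -> bool:
    """Left wins when its expected cost (burning to death with
    probability P_MISTAKE) is below the certain $100 cost of Right."""
    return P_MISTAKE * value_of_life < RIGHT_COST

print(left_is_better(10**9))   # True: even a billion-dollar life favors Left
print(left_is_better(10**27))  # False: past the break-even point, take Right
print(RIGHT_COST / P_MISTAKE)  # 1e+26 dollars: $100 trillion trillion, the break-even value
```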

One could once again counter that the bomb is already in the Left box. But again, the decision is made at two points - in your head, but also in the predictor's.

Guaranteed Payoffs? That principle, if applied, should be applied the first time your decision is made: in the head of the predictor. At that point, it's (virtually) guaranteed that Left-boxing lets you live for free.

omnizoid:

The basic point is that Yudkowsky’s decision theory is totally bankrupt and implausible, in ways that are evident to those who know about decision theory.

Are you actually going to argue from authority here?! I've spoken to Nate Soares, one of the authors of the FDT paper, many times, and I assure you he "knows about decision theory". Furthermore, and with all due respect to MacAskill, his post fundamentally misrepresents FDT in the Implausible Discontinuities section:

First, take some physical processes S (like the lesion from the Smoking Lesion) that causes a ‘mere statistical regularity’ (it’s not a Predictor). And suppose that the existence of S tends to cause both (i) one-boxing tendencies and (ii) whether there’s money in the opaque box or not when decision-makers face Newcomb problems. If it’s S alone that results in the Newcomb set-up, then FDT will recommend two-boxing.

But now suppose that the pathway by which S causes there to be money in the opaque box or not is that another agent looks at S and, if the agent sees that S will cause decision-maker X to be a one-boxer, then the agent puts money in X’s opaque box. Now, because there’s an agent making predictions, the FDT adherent will presumably want to say that the right action is one-boxing.

This is just wrong: the critical factor is not whether "there's an agent making predictions". The critical factor is subjunctive dependence, and there is no subjunctive dependence between S and the decision maker here.

That's it for this post. I'm looking forward to your reaction, omnizoid!

Comments

FYI, IIRC there's a new LW debate feature where you could've tried to hash out your disagreement in a single post of asynchronous back-and-forth replies. But I don't know if the debate feature is actually live for the public; I just saw one debate post some time ago.

The Blackmail and Bomb cases seem to be examples of not being able to comprehend large numbers.

Really, if the predictor's mistake rate is indeed 1 in a trillion trillion, then it's much more probable that the note lies than that you are in the extremely rare circumstance where you pick the left box and the bomb is indeed there.

On the other hand, I'm not sure that FDT really recommends procreating in Procreation. Maybe your FDT-following father just made a mistake? How correlated is your decision making, actually? I don't think there is much subjunctive dependence in this setting. Did he simulate you at some point? Because otherwise, I don't see how "choose not to procreate and thus not exist" is a coherent outcome.

Also, if you do not procreate and thus do not exist, how can you have a utility function valuing existence? Moreover, even if we accept the premise, aren't you dooming all your descendants to a similarly miserable existence? They definitely do not exist yet, and the fact that they would prefer to miserably exist conditional on existing doesn't mean that it's a good idea to make them exist when they do not exist yet.

Really, if the predictor's mistake rate is indeed 1 in a trillion trillion, then it's much more probable that the note lies than that you are in the extremely rare circumstance where you pick the left box and the bomb is indeed there.

Likely true in practice, but this is a hypothetical example and FDT does not rely on that.

On the other hand, I'm not sure that FDT really recommends procreating in Procreation.

That scenario did seem underspecified to me too. 

Also, if you do not procreate and thus do not exist, how can you have a utility function valuing existence?

Hypothetically, you have a particular utility function/decision procedure - but some values of those might be incompatible with you actually existing.

I think the analysis for "bomb" is missing something.

This is a scenario where the predictor is doing their best not to kill you: if they think you'll pick left, they pick right; if they think you'll pick right, they'll pick left.

The CDT strategy is to pick whatever box doesn't have a bomb in it. So if the player is a perfect CDTer, the predictor is 100% guaranteed to be correct in their pick. The predictor actually gets to pick whether the player loses 100 bucks or not. If the predictor is nice, the CDTer gets to walk away without paying anything, with a 0% chance of death.