This post is an attempt to refute an article critiquing Functional Decision Theory (FDT). If you’re new to FDT, I recommend reading this introductory paper by Eliezer Yudkowsky & Nate Soares (Y&S). The critique I attempt to refute can be found here: A Critique of Functional Decision Theory by wdmacaskill. I strongly recommend reading it before reading this response post.

The article starts with descriptions of Causal Decision Theory (CDT), Evidential Decision Theory (EDT) and FDT itself. I’ll get right to the critique of FDT, which is the only part of the article I’m discussing here.

“FDT sometimes makes bizarre recommendations”

The article claims “FDT sometimes makes bizarre recommendations”, and more specifically, that FDT violates guaranteed payoffs. The following example problem, called Bomb, is given to illustrate this remark:

“Bomb.

You face two open boxes, Left and Right, and you must take one of them. In the Left box, there is a live bomb; taking this box will set off the bomb, setting you ablaze, and you certainly will burn slowly to death. The Right box is empty, but you have to pay $100 in order to be able to take it.

A long-dead predictor predicted whether you would choose Left or Right, by running a simulation of you and seeing what that simulation did. If the predictor predicted that you would choose Right, then she put a bomb in Left. If the predictor predicted that you would choose Left, then she did not put a bomb in Left, and the box is empty.

The predictor has a failure rate of only 1 in a trillion trillion. Helpfully, she left a note, explaining that she predicted that you would take Right, and therefore she put the bomb in Left.

You are the only person left in the universe. You have a happy life, but you know that you will never meet another agent again, nor face another situation where any of your actions will have been predicted by another agent. What box should you choose?”

The article answers and comments on the answer as follows:

“The right action, according to FDT, is to take Left, in the full knowledge that as a result you will slowly burn to death. Why? Because, using Y&S’s counterfactuals, if your algorithm were to output ‘Left’, then it would also have outputted ‘Left’ when the predictor made the simulation of you, and there would be no bomb in the box, and you could save yourself $100 by taking Left. In contrast, the right action on CDT or EDT is to take Right.

The recommendation is implausible enough. But if we stipulate that in this decision-situation the decision-maker is certain in the outcome that her actions would bring about, we see that FDT violates Guaranteed Payoffs.”

I agree FDT recommends taking the left box. I disagree that it violates some principle every decision theory should adhere to. Left-boxing really is the right decision in Bomb. Why? Let’s ask ourselves the core question of FDT:

“Which output of this decision procedure causes the best outcome?”

The answer can only be left-boxing. As wdmacaskill says:

“…if your algorithm were to output ‘Left’, then it would also have outputted ‘Left’ when the predictor made the simulation of you, and there would be no bomb in the box, and you could save yourself $100 by taking Left.”

But since you already know the bomb is in Left, you could easily save your life by paying $100 in this specific situation, and that’s where our disagreement comes from. However, remember that if your decision theory makes you a left-boxer, you virtually never end up in the above situation! In 999,999,999,999,999,999,999,999 out of 1,000,000,000,000,000,000,000,000 cases, the predictor will have predicted that you left-box, letting you keep your life for free. As Vaniver says in a comment:

“Note that the Bomb case is one in which we condition on the 1 in a trillion trillion failure case, and ignore the 999999999999999999999999 cases in which FDT saves $100. This is like pointing at people who got into a plane that crashed and saying ‘what morons, choosing to get on a plane that would crash!’ instead of judging their actions from the state of uncertainty that they were in when they decided to get on the plane.”
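This point can be made quantitative. Below is a minimal sketch comparing the two policies from the state of uncertainty, before the note is read; the dollar-equivalent disutility of death is an illustrative assumption of mine, not a number from the source texts:

```python
EPS = 1e-24        # the predictor's stated failure rate (1 in a trillion trillion)
DEATH = -1e12      # assumed dollar-equivalent disutility of burning to death
COST_RIGHT = -100  # taking the Right box always costs $100

# Expected utility of each *policy*, evaluated before seeing the note:
# a committed left-boxer is almost never predicted to right-box, so almost
# never faces a bomb; a committed right-boxer always pays $100.
eu_left_boxer = (1 - EPS) * 0 + EPS * DEATH
eu_right_boxer = COST_RIGHT

assert eu_left_boxer > eu_right_boxer  # left-boxing wins in expectation
```

Even with death valued at a trillion dollars of disutility, the left-boxing policy loses only about 10^-12 dollars in expectation, against a guaranteed $100 loss for the right-boxing policy.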

“FDT fails to get the answer Y&S want in most instances of the core example that’s supposed to motivate it”

Here wdmacaskill argues that in Newcomb’s problem, FDT recommends one-boxing if it assumes the predictor (Omega) is running a simulation of the agent’s decision process. But what if Omega isn’t running your algorithm? What if they use something else to predict your choice? To use wdmacaskill’s own example:

“Perhaps the Scots tend to one-box, whereas the English tend to two-box.”

Well, in that case Omega’s prediction and your decision (one-boxing or two-boxing) aren’t subjunctively dependent on the same function. And this kind of dependence is key in FDT’s decision to one-box! Without it, FDT recommends two-boxing, like CDT. In this particular version of Newcomb’s problem, your decision procedure has no influence on Omega’s prediction, and you should go for strategic dominance (two-boxing).

However, wdmacaskill argues that part of the original motivation to develop FDT was to have a decision theory that one-boxes on Newcomb’s problem. For the purposes of this discussion, I don’t care what FDT’s original motivation was. What matters is whether FDT gets Newcomb’s problem right — and it does so in both cases: when Omega does run a simulation of your decision process and when Omega does not.
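To make the two cases concrete, here is a toy sketch. The $1,000,000/$1,000 payoffs are the standard ones from the literature, and the boolean flag is of course a drastic simplification of FDT’s actual subjunctive-dependence analysis:

```python
M, K = 1_000_000, 1_000  # opaque-box and transparent-box payoffs

def payoff(one_box: bool, predicted_one_box: bool) -> int:
    """Payout in Newcomb's problem for a given choice and prediction."""
    opaque = M if predicted_one_box else 0
    return opaque if one_box else opaque + K

def fdt_choice(subjunctive_dependence: bool) -> str:
    if subjunctive_dependence:
        # Omega's prediction is the output of (a copy of) your own
        # decision procedure, so choice and prediction move together.
        return "one-box" if payoff(True, True) > payoff(False, False) else "two-box"
    # Prediction is fixed independently of your algorithm (e.g. based on
    # nationality), so two-boxing dominates whatever the prediction is.
    return "two-box"
```

Here `fdt_choice(True)` returns "one-box" and `fdt_choice(False)` returns "two-box", matching the two verdicts described above.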

Alternatively, wdmacaskill argues,

“Y&S could accept that the decision-maker should two-box in the cases given above. But then, it seems to me, that FDT has lost much of its initial motivation: the case for one-boxing in Newcomb’s problem didn’t seem to stem from whether the Predictor was running a simulation of me, or just using some other way to predict what I’d do.”

Again, I do not care where the case for one-boxing stemmed from, or what FDT’s original motivation was: I care about whether FDT gets Newcomb’s problem right.

“Implausible discontinuities”

“First, take some physical processes S (like the lesion from the Smoking Lesion) that causes a ‘mere statistical regularity’ (it’s not a Predictor). And suppose that the existence of S tends to cause both (i) one-boxing tendencies and (ii) whether there’s money in the opaque box or not when decision-makers face Newcomb problems. If it’s S alone that results in the Newcomb set-up, then FDT will recommending two-boxing.”

Agreed. The contents of the opaque box and the agent’s decision to one-box or two-box don’t subjunctively depend on the same function. FDT would indeed recommend two-boxing.

“But now suppose that the pathway by which S causes there to be money in the opaque box or not is that another agent looks at S and, if the agent sees that S will cause decision-maker X to be a one-boxer, then the agent puts money in X’s opaque box. Now, because there’s an agent making predictions, the FDT adherent will presumably want to say that the right action is one-boxing.”

No! The critical factor isn’t whether “there’s an agent making predictions”. The critical factor is subjunctive dependence between the agent and another relevant physical system (in Newcomb’s problem, that’s Omega’s prediction algorithm). Since in this last problem put forward by wdmacaskill the prediction depends on looking at S, there is no such subjunctive dependence going on and FDT would recommend two-boxing.

Wdmacaskill further asks the reader to imagine a spectrum of increasingly agent-like versions of S, and imagines that at some point there will be a “sharp jump” where FDT goes from recommending two-boxing to recommending one-boxing. Wdmacaskill then says:

“Second, consider that same physical process S, and consider a sequence of Newcomb cases, each of which gradually make S more and more complicated and agent-y, making it progressively more similar to a Predictor making predictions. At some point, on FDT, there will be a point at which there’s a sharp jump; prior to that point in the sequence, FDT would recommend that the decision-maker two-boxes; after that point, FDT would recommend that the decision-maker one-boxes. But it’s very implausible that there’s some S such that a tiny change in its physical makeup should affect whether one ought to one-box or two-box.”

But as I explained, the “agent-ness” of a physical system is totally irrelevant to FDT. Subjunctive dependence is key, not agent-ness. The sharp jump between one-boxing and two-boxing that wdmacaskill imagines simply isn’t there: it stems from a misunderstanding of FDT.

“FDT is deeply indeterminate”

Wdmacaskill argues that

“there’s no objective fact of the matter about whether two physical processes A and B are running the same algorithm or not, and therefore no objective fact of the matter of which correlations represent implementations of the same algorithm or are ‘mere correlations’ of the form that FDT wants to ignore.”

… and gives an example:

“To see this, consider two calculators. The first calculator is like calculators we are used to. The second calculator is from a foreign land: it’s identical except that the numbers it outputs always come with a negative sign (‘–’) in front of them when you’d expect there to be none, and no negative sign when you expect there to be one. Are these calculators running the same algorithm or not? Well, perhaps on this foreign calculator the ‘–’ symbol means what we usually take it to mean — namely, that the ensuing number is negative — and therefore every time we hit the ‘=’ button on the second calculator we are asking it to run the algorithm ‘compute the sum entered, then output the negative of the answer’. If so, then the calculators are systematically running different algorithms.

But perhaps, in this foreign land, the ‘–’ symbol, in this context, means that the ensuing number is positive and the lack of a ‘–’ symbol means that the number is negative. If so, then the calculators are running exactly the same algorithms; their differences are merely notational.”

I’ll admit I’m no expert in this area, but it seems clear to me that these calculators are running different algorithms, yet both algorithms subjunctively depend on the same function! Both algorithms use the same “sub-algorithm”, which calculates the correct answer to the user’s input. The second calculator just does something extra: it puts a negative sign in front of the answer, or removes an existing one. Whether the inhabitants of the foreign land interpret the ‘–’ symbol differently than we do is irrelevant to the properties of the calculators.
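A toy sketch of what I mean (hypothetical code, purely to illustrate the structure): both calculators call the same core function, and only the final display step differs.

```python
def core_sum(a: int, b: int) -> int:
    """The shared sub-algorithm: compute the correct answer."""
    return a + b

def familiar_calculator(a: int, b: int) -> str:
    # Displays the answer as-is.
    return str(core_sum(a, b))

def foreign_calculator(a: int, b: int) -> str:
    # Extra step: prepend '-' where we'd expect none, drop it where we'd expect one.
    s = core_sum(a, b)
    return str(-s) if s < 0 else "-" + str(s)
```

The two calculators are different algorithms (their displayed outputs differ), but any process whose behavior depends on `core_sum` subjunctively depends on the same function in both cases, and no interpretation of the ‘–’ symbol changes that.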

“Ultimately, in my view, all we have, in these two calculators, are just two physical processes. The further question of whether they are running the same algorithm or not depends on how we interpret the physical outputs of the calculator.”

It really doesn’t. The properties of both calculators do NOT depend on how we interpret their outputs. Wdmacaskill uses this supposed dependence on interpretation to undermine FDT: it would then also be a matter of interpretation whether, in Newcomb’s problem, Omega is running the same algorithm you are in order to predict your choice. But since interpretation isn’t a property of any algorithm, this is a non-issue. I’ll be doing a longer post on algorithm dependence/similarity in the future.

“But FDT gets the most utility!”

Here, wdmacaskill talks about how Yudkowsky and Soares compare FDT to EDT and CDT to determine FDT’s superiority to the other two.

“As we can see, the most common formulation of this criterion is that they are looking for the decision theory that, if run by an agent, will produce the most utility over their lifetime. That is, they’re asking what the best decision procedure is, rather than what the best criterion of rightness is, and are providing an indirect account of the rightness of acts, assessing acts in terms of how well they conform with the best decision procedure.

But, if that’s what’s going on, there are a whole bunch of issues to dissect. First, it means that FDT is not playing the same game as CDT or EDT, which are proposed as criteria of rightness, directly assessing acts. So it’s odd to have a whole paper comparing them side-by-side as if they are rivals.”

I agree the whole point of FDT is to have a decision theory that produces the most utility over the lifetime of an agent — even if that, in very specific cases like Bomb, results in “weird” (but correct!) recommendations for specific acts. Looking at it from the perspective of AI alignment — the goal of MIRI, the organization Yudkowsky and Soares work for — it seems clear to me that this is exactly what you want out of a decision theory. CDT and EDT may have been invented to play a different game, but that’s irrelevant to FDT’s purpose. CDT and EDT — the big contenders in the field of decision theory — fail this purpose, and FDT does better.

“Second, what decision theory does best, if run by an agent, depends crucially on what the world is like. To see this, let’s go back to question that Y&S ask of what decision theory I’d want my child to have. This depends on a whole bunch of empirical facts: if she might have a gene that causes cancer, I’d hope that she adopts EDT; though if, for some reason, I knew whether or not she did have that gene and she didn’t, I’d hope that she adopts CDT. Similarly, if there were long-dead predictors who can no longer influence the way the world is today, then, if I didn’t know what was in the opaque boxes, I’d hope that she adopts EDT (or FDT); if I did know what was in the opaque boxes (and she didn’t) I’d hope that she adopts CDT. Or, if I’m in a world where FDT-ers are burned at the stake, I’d hope that she adopts anything other than FDT.”

Well, no, not really — and that’s the point. Which decision theory does best shouldn’t depend on what the world is like: the whole idea is to have a decision theory that does well under all (fair) circumstances. Circumstances that directly punish an agent for its decision theory can be constructed for any decision theory, and therefore don’t refute this point.

“Third, the best decision theory to run is not going to look like any of the standard decision theories. I don’t run CDT, or EDT, or FDT, and I’m very glad of it; it would be impossible for my brain to handle the calculations of any of these decision theories every moment. Instead I almost always follow a whole bunch of rough-and-ready and much more computationally tractable heuristics; and even on the rare occasions where I do try to work out the expected value of something explicitly, I don’t consider the space of all possible actions and all states of nature that I have some credence in — doing so would take years.

So the main formulation of Y&S’s most important principle doesn’t support FDT. And I don’t think that the other formulations help much, either. Criteria of how well ‘a decision theory does on average and over time’, or ‘when a dilemma is issued repeatedly’ run into similar problems as the primary formulation of the criterion. Assessing by how well the decision-maker does in possible worlds that she isn’t in fact in doesn’t seem a compelling criterion (and EDT and CDT could both do well by that criterion, too, depending on which possible worlds one is allowed to pick).”

Okay, so in practice an agent would run a computationally tractable approximation of such a decision theory — I fail to see how this undermines FDT.

“Fourth, arguing that FDT does best in a class of ‘fair’ problems, without being able to define what that class is or why it’s interesting, is a pretty weak argument. And, even if we could define such a class of cases, claiming that FDT ‘appears to be superior’ to EDT and CDT in the classic cases in the literature is simply begging the question: CDT adherents claims that two-boxing is the right action (which gets you more expected utility!) in Newcomb’s problem; EDT adherents claims that smoking is the right action (which gets you more expected utility!) in the smoking lesion. The question is which of these accounts is the right way to understand ‘expected utility’; they’ll therefore all differ on which of them do better in terms of getting expected utility in these classic cases.”

Yes, fairness would need to be defined exactly, although I do believe Yudkowsky and Soares have done a good job at it. And no: claiming that FDT “appears to be superior” to EDT and CDT in the classic cases in the literature isn’t begging the question. The goal is to have a decision theory that consistently gets the most expected utility. Being a one-boxer does give you the most expected utility in Newcomb’s problem. Deciding to two-box after Omega has already predicted you would one-box (if that were possible) would give you even more utility — but you can’t have your decision theory recommend two-boxing, because that results in the opaque box being empty.
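As a sanity check of that claim, a small sketch; the 99% predictor accuracy and the standard payoffs are illustrative assumptions:

```python
M, K = 1_000_000, 1_000  # opaque-box and transparent-box payoffs
P = 0.99                 # assumed predictor accuracy

# Expected utility of each policy against a predictor of accuracy P:
eu_one_box = P * M + (1 - P) * 0        # predicted right: $1M; wrong: $0
eu_two_box = P * K + (1 - P) * (M + K)  # predicted right: $1k; wrong: $1M + $1k

# The tempting "two-box after being predicted to one-box" payout of M + K
# is exactly the branch an accurate predictor makes almost unreachable.
assert eu_one_box > eu_two_box
```

The one-boxing policy nets roughly $990,000 in expectation against roughly $11,000 for the two-boxing policy, and the gap only widens as the predictor’s accuracy approaches 1.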

 

In conclusion, it seems FDT survives the critique offered by wdmacaskill. I am quite new to the field of decision theory, and will be learning more about this amazing field in the coming weeks. This post might be updated as I learn more.

222 comments

The statement of Bomb is bad at being legible outside the FDT/UDT paradigm; it's instead actively misleading there, so it is a terrible confusion-and-conflict-inducing (rather than clarity-inducing) example to show someone who is not familiar with it. The reason Left is reasonable is that the scenario being described is, depending on the chosen policy, almost completely not real, a figment of the predictor's imagination.

Unless you've read a lot of FDT/UDT discussion, a natural reading of a thought experiment is to include the premise "the described situation is real". And so people…

Said Achmiz:
What does it mean to say that “the described scenario is real” is not a premise of the thought experiment…? (What could the thought experiment even be about if the described scenario is not supposed to be real?)
Vladimir_Nesov:
UDT is about policies, not individual decisions. A thought experiment typically describes an individual decision taken in some situation. A policy specifies what decisions are to be taken in all situations. Some of these situations are impossible, but the policy is still defined for them following its type signature, and predictors can take a look at what exactly happens in the impossible situations. Furthermore, choice of a policy influences which situations are impossible, so there is no constant fact about which of them are impossible. The general case of making a decision in an individual situation involves uncertainty about whether this situation is impossible, and ability to influence its impossibility. This makes the requirement for thought experiments to describe real situations an unnatural constraint, so in thought experiments read in context of UDT this requirement is absent by default.

A central example is Transparent Newcomb's Problem [https://arbital.com/p/transparent_newcombs_problem/]. When you see money in the big box, this situation is either possible (if you one-box) or impossible (if you two-box), depending on your decision. If a thought experiment is described as you facing the problem in this situation (with the full box), it's describing a situation (and observations made there) that may, depending on your decision, turn out to be impossible. Yet asking what your decision in this situation will be is always meaningful, because it's possible to evaluate you as an algorithm even on impossible observations, which is exactly what all the predictors in such thought experiments are doing all the time.

It's about evaluating you (or rather an agent) as an algorithm on the observations presented by the scenario, which is possible to do regardless of whether the scenario can be real. This in turn motivates asking what happens in other situations, not explicitly described in the thought experiment. A combined answer to all such questions is a polic…
Said Achmiz:
I get the feeling, reading this, that you are using the word “impossible” in an unusual way. Is this the case? That is, is “impossible” a term of art in decision theory discussions, with a meaning different than its ordinary one? If not, then I confess that can’t make sense of much of what you say…
Vladimir_Nesov:
By "impossible" I mean not happening in actuality (which might be an ensemble, in which case I'm not counting what happens with particularly low probabilities), taking into account the policy that the agent actually follows. So the agent may have no way of knowing if something is impossible (and often won't before actually making a decision). This actuality might take place outside the thought experiment: for example, in Transparent Newcomb that directly presents you with two full boxes (that is, both boxes being full is part of the description of the thought experiment), and where you decide to take both, the thought experiment is describing an impossible situation (in case you do decide to take both boxes), while the actuality has the big box empty.

So for the problem where you-as-money-maximizer choose between receiving $10 and $5, and actually have chosen $10, I would say that taking $5 is impossible, which might be an unusual sense of the word (possibly misleading before making the decision; the 5-and-10 problem [https://www.lesswrong.com/tag/5-and-10] is about what happens if you take this impossibility too seriously in an unhelpful way). This is the perspective of an external Oracle that knows everything and doesn't make public predictions.

If this doesn't clear up the issue, could you cite a small snippet that you can't make sense of and characterize the difficulty? Focusing on Transparent-Newcomb-with-two-full-boxes might help (with respect to use of "impossible", not considerations on how to solve it); it's way cleaner than Bomb.

(The general difficulty might be from the sense in which UDT is a paradigm, its preferred ways of framing its natural problems are liable to be rounded off to noise when seen differently. But I don't know what the difficulty is on object level in any particular case, so calling this a "paradigm" is more of a hypothesis about the nature of the difficulty that's not directly helpful.)
Said Achmiz:
Sorry, do you mean that you don’t count low-probability events as impossible, or that you don’t count them as possible (a.k.a. “happening in actuality”)?

This is an example of a statement that seems nonsensical to me. If I am an agent, and something is happening to me, that seems to me to be real by definition. (As Eliezer [https://www.lesswrong.com/posts/eLHCWi8sotQT6CmTX/sensual-experience] put it [https://www.lesswrong.com/posts/YYLmZFEGKsjCKQZut/timeless-control]: “Whatever is, is real.”) And anything that is real, must (again, by definition) be possible…

If what is happening to me is actually happening in a simulation… well, so what? The whole universe could be a simulation, right? How does that change anything? So the idea of “this thing that is happening to you right now is actually impossible” seems to me to be incoherent.

I… have considerable difficulty parsing what you’re saying in the second paragraph of your comment. (I followed the link, and a couple more from it, and was not enlightened, unfortunately.)

If I am an agent, and something is happening to me

The point is that you don't know that something is happening to you just because you are seeing it happen. Seeing it happen is what takes place when you-as-an-algorithm is evaluated on the corresponding observations. A response to seeing it happen is well-defined even if the algorithm is never actually evaluated on those observations. When we spell out what happens inside the algorithm, what we see is that the algorithm is "seeing it happen". This is so even if we don't actually look. (See also.)

So for example, if I'm asking what would be your reaction to the sky turning green, what is the status of you-in-the-question who sees the sky turn green? They see it happen in the same way that you see it not happen. Yet from the fact that they see it happen, it doesn't follow that it actually happens (the sky is not actually green).

Another point is that for you-in-the-question, it might be the green-sky world that matters, not the blue-sky world. That is a side effect of how your insertion into the green-sky world doesn't respect the semantics of your preferences, which care about the blue-sky world. For you-in-the-question with preferences…

Said Achmiz:
If the sky were to turn green, I would certainly behave as if it had indeed turned green; I would not say “this is impossible and isn’t happening”. So I am not sure what this gets us, as far as explaining anything… My preferences “factor out” the world I find myself in, as far as I can tell.

By “agents share preferences” are you suggesting a scenario where, if the sky were to turn green, I would immediately stop caring about anything whatsoever that happened in that world, because my preferences were somehow defined to be “about” the world where the sky were still blue? This seems pathological. I don’t think it makes any sense to say that I “care about the blue-sky world”; I care about what happens in whatever world I am actually in, and the sky changing color wouldn’t affect that.

Well, if something’s not actually happening, then I’m not actually seeing it happen. I don’t think your first paragraph makes sense, sorry.

Does it? I’m not sure that it does, actually… if something never happens, and I never observe it, then I never respond to it, either. My response to it is nothing. You can ask: “but if it did happen, what would be your response?”—and that’s a reasonable question. But any answer to that question would indeed have to take as given that the event in question were in fact actually happening (otherwise the question is meaningless).

Well… that is a very unusual use of “impossible”, yes. Might I suggest using a different word? You seem to be saying: “yes, certain things that can happen are impossible”, which is very much counter to all ordinary usage. I think using a word in this way can only lead to confusion… (The last paragraph of your comment doesn’t elucidate much, but perhaps that is because of the aforesaid odd word usage.)
Vladimir_Nesov:
Not actually, you seeing it happen isn't real, but this unreality of seeing it happen proceeds in a specific way. It's not indeterminate greyness, and not arbitrary. If your response (that never happens) could be 0 or 1, it couldn't be nothing. If it's 0 (despite never having been observed to be 0), the claim that it's 1 is false, and the claim that it's nothing doesn't type check.

I'm guessing that the analogy between you and an algorithm doesn't hold strongly in your thinking about this; it's the use of "you" in place of "algorithm" that does a lot of work in these judgements that wouldn't happen for talking about an "algorithm". So let's talk about algorithms to establish common ground.

Let's say we have a pure total procedure f written in some programming language, with the signature f : O -> D, where O = Texts is the type of observations and D = {0,1} is the type of decisions. Let's say that in all plausible histories of the world, f is never evaluated on argument "green sky". In this case I would say that it's impossible for the argument (observation) to be "green sky": procedure f is never evaluated with this argument in actuality. Yet it so happens that f("green sky") is 0. It's not 1 and not nothing. There could be processes sensitive to this fact that don't specifically evaluate f on this argument. And there are facts about what happens inside f with intermediate variables or states of some abstract machine that does the evaluation (procedure f's experience of observing the argument and formulating a response to it), as it's evaluated on this never-encountered argument; and these facts are never observed in actuality, yet they are well-defined by specifying f and the abstract machine.

The question of what f("green sky") would evaluate to isn't meaningless regardless of whether evaluation of f on the argument "green sky" is an event that in fact actually happens. Actually extant evidence for a particular answer, such as a proof that the answer is 0, is…
Said Achmiz:
What do you mean, “proceeds in a specific way”? It doesn’t proceed at all. Because it’s not happening, and isn’t real.

This seems wrong to me. If my response never happens, then it’s nothing; it’s the claim that it’s 1 that doesn’t type check, as does the claim that it’s 0. It can’t be either 1 or 0, because it doesn’t happen. (In algorithm terms, if you like: what is the return value of a function that is never called? Nothing, because it’s never called and thus never returns anything. Will that function return 0? No. Will it return 1? Also no.)

(Reference for readers who may not be familiar with the relevant terminology, as I was not: Pure Functions and Total Functions [http://nebupookins.github.io/2015/08/05/pure-functions-and-total-functions.html].)

Please elaborate!

Indeed, but the question of what f(“green sky”) actually returns certainly is meaningless if f(“green sky”) is never evaluated. I’m afraid I don’t see what this has to do with anything…

I strongly disagree that this matches ordinary usage!

I am not sure what you mean by this? (Or by the rest of your last paragraph, for that matter…)
TAG:
That's pretty non-standard. I think you need to answer that.
philh:
Here's an attempt to ground this somewhat concretely. Suppose there's an iterated prisoner's dilemma contest. At any iteration an agent can look at the history of plays that itself and its opponent have made.

Suppose that TitForTatBot looks at the history, and sees that there's been 100 rounds so far, and in every one it has defected and its opponent has cooperated. It proceeds to cooperate, because its opponent cooperated in the previous round. And so the "actual" game history will never be (D,C) x 100.

What's happened here is that someone has instantiated a TitForTatBot and lied to it. It's not impossible that TitForTatBot will observe this history, but it's impossible that this history actually happened, in some sense that I claim we care about.
4Said Achmiz1y
Hmm, no, I still don’t think this works. In the scenario you describe, it seems to me that TitForTatBot neither observed the specified history, nor did it actually happen—but it does observe finding itself in a scenario where that history (apparently) happened, and it does indeed actually find itself in a scenario where that history (apparently) happened.

Now, I think that your example does bring up an interesting and relevant point, namely: when should an agent question whether some of the things it seems to know or observe are actually false or illusionary? Surely the answer is not “never”, else the agent will be easy to fool, and will make some very foolish decisions! So perhaps TitForTatBot (if we suppose that it’s not just a “bot” but also has some higher reasoning functions) might think: “Hmm, I defected 100 times? Sounds made-up, I think somebody’s been tampering with my memory! The proverbial evil neurosurgeons strike again!”

But consider how this might work in the “Bomb” case. Should I find myself in the “Bomb” scenario, I might think: “A predictor that’s only been wrong one out of a trillion trillion times? And it’s just been wrong again? And there’s a bomb in this here Left box, and me an FDT agent, no less! Something doesn’t add up… perhaps one or more of the things I think I know, aren’t so!” And this seems like a reasonable enough thought.

But surely it would then be far more reasonable to question the whole “one-in-a-trillion-trillion-accurate predictor” business, than to say “This bomb I see in front of me is fake, and the box is also fake! This whole scenario is fake!” Right? I mean… how do I know this stuff about the predictor, and its accuracy? It’s a pretty outlandish claim, isn’t it—one mistake out of a trillion trillion? How sure am I that I’m privy to all the information about the predictor’s past performance? And really, the whole situation is weird: I’m the last person in existence, apparently? And so on… but the reality of me being alive…
Heighn (10mo):
Which is why I encouraged the reader unfamiliar with FDT to read the Yudkowsky & Soares paper first.

I read through this just long enough to conclude that the author of the original article simply does not understand FDT, rather than having valid criticisms of it, and stopped there; that being perfectly sufficient to refute the article.

Off-topic: I initially misread this title as "A defense of density functional theory," and was intrigued.

There are two huge ambiguities in this scenario:

  1. Did Predictor include the note in the simulation, or write it later? If there was even a small (say, anything more than 1 in a million) chance that it was written later, the agent should pick Right.
  2. Does Predictor always add a note showing the prediction in this scenario?

We can rule out the combination of the two. It is not possible for Predictor to always write a note that honestly records their prediction (including the note in the simulation) and still guarantee a 10^-24 chance of prediction error. If the note has nontrivi... (read more)

Heighn (1y):
Can you explain point 1 further, please? It seems to me subjunctive dependence happens regardless of note inclusion, and thus one's decision theory should left-box in both cases. (I'll respond to your other points as well.)
JBlack (1y):
If the note was not included in the simulation, then under FDT there is no subjunctive dependence: the output produced by the simulator is for a different input than the one you actually experienced. In the usual FDT analogy, the fact that both you and Predictor are almost certainly using the same type of calculator means nothing if you're pressing different buttons. We're told about Predictor's simulation fidelity, but that doesn't mean anything if the inputs to the simulation are not the same as reality. You can work through FDT with the assumption that a note is with probability p written after simulating you (with fidelity 1 - 10^-24) without a note, and it says that for all but microscopic p you should choose Right. This is a boring scenario and doesn't illustrate any differences between decision theories, so I didn't bother to expand on it.

Edit: The previous is all pointless due to misreading the statement about Predictor's accuracy. FDT recommends taking the Right box in this scenario regardless of whether points 1 and 2 hold.
JBlack (1y):
I should note that my previous comment is all theoretical wankery. In practice, there is no way that I'll accept any evidence that a predictor has 10^-24 chance of being wrong. I'm going to take the right box. I won't even trust that the right box won't blow up, since the scenario I've been kidnapped into has obviously been devised by a sadistic bastard, and I wouldn't put it past them to put bombs in both boxes (or under the floor) no matter what the alleged predictor supposedly thinks. Just maybe there's a slightly better chance of surviving by paying the $100.

At first this bomb scenario looked like an interesting question, but too much over-specification in some respects and vagueness in others means that in this scenario FDT recommends taking the right box, not left as claimed.

“by running a simulation of you and seeing what that simulation did.”

A simulation of your choice "upon seeing a bomb in the Left box under this scenario"? In that case, the choice to always take the Right box "upon seeing a bomb in the Left box under this scenario" is correct, and what any of the decision theories would recommend. Being in such a situation does necessitate the failure of the predictor, which means you are in a very improbable world, but that is not relevant to your decision in the world you happen to be in (simulated or not).

Or: A simulation... (read more)

Heighn (1y):
Good point. It seems to me Left-boxing is still the right answer though, since your decision procedure would still 'force' the predictor to predict you Left-box.
tivelen (1y):
What does it mean to Left-box, exactly? As in, under what specific scenarios are you making a choice between boxes, and choosing the Left box?

These arguments -- the Bomb argument and Torture versus Dust Specks -- suffer from an ambiguity between telling the reader what to do given their existing UF/preferences, telling the reader to have a different UF, and saying what an abstract agent, but not the reader, would do.

Suppose the reader has a well-defined utility function where death or torture is set to minus infinity. Then the writer can't persuade them to trade off death or torture against any finite amount of utility. So, in what sense is the reader wrong about their own preferences?

Maybe t... (read more)

Heighn (7mo):
I think the original Bomb scenario should have come with a, say, $1,000,000 value for "not being blown up". That would have allowed for easy and agreed-upon expected utility calculations.
JBlack (1y):
I think sometimes writers mix up moral theories with decision theories. Decision theory problems are best expressed using reasonably modest amounts of money, because even if readers don't themselves have linear utility of money over that range, it's something that's easily imagined. Moral theories are usually best expressed in non-monetary terms, but going straight to torture and murder is pretty lazy in my opinion. Fine, they're things that most people think are "generally wrong" without being politically hot, but they still seem to bypass rationality, which makes discussion go stupid. This bomb example did the stupid thing of including torture and death and annihilation of all intelligent life in the universe balanced against money and implausibly small probabilities and a bunch of other crap, and also left such huge holes in the specification that their argument didn't even work. Pretty much a dumpster fire of what not to do in illustrating some fine points of decision theory.
Said Achmiz (1y):
I don’t think morality enters into this at all. I don’t see any moral concerns in the described scenario, only prudential ones (i.e., concerns about how best to satisfy one’s own values). As such, your reply seems to me to be non-responsive to TAG’s comment…
JBlack (1y):
TAG's comment was in part about the ambiguity between telling the reader what to do given their existing UF/preferences, and telling the reader to have a different UF. The former is an outcome of recommending a decision theory, while the latter is the outcome of recommending a moral theory. Hence my comment about how to recognize distinctions between them as a reader, and differences in properties of the scenarios that are relevant as a writer. I also evaluated this scenario (and implicitly, torture vs dust specks) for how well it illustrates decision theory aspects, and found that it does so poorly in the sense that it includes elements that are more suited to moral theory scenarios. I hoped this would go some way toward explaining why these scenarios might indeed seem ambiguous between telling the reader what to do, and telling the reader what to value.
Said Achmiz (1y):
I don’t think this is right. Suppose that you prefer apples to pears, pears to grapes, and grapes to apples. I tell you that this is irrational (because intransitive), and that you should alter your preferences, on pain of Dutch-booking (or some such). Is that a moral claim? It does not seem to me to be any such thing; and I think that most moral philosophers would agree with me…
JBlack (1y):
Sure, there are cases that aren't moral theory discussions in which you might be told to change your values. I didn't claim that my options were exhaustive, though I did make an implicit claim that those two seemed to cover the vast majority of potential ambiguity in cases like this. I still think that claim has merit. More explicitly, I think that the common factor here is assuming some utility to the outcomes that has a finite ratio, and arriving at an unpalatable conclusion. Setting aside errors in the presentation of the scenario for now, there are (at least) two ways to view the outcome:

  1. FDT says that you should let yourself burn to death in some scenario, because the ratio of the disutility of burning to death vs. paying $100 is not infinite. This is ridiculous, therefore FDT is wrong.
  2. FDT says that you should let yourself burn to death in some scenario, because the ratio of the disutility of burning to death vs. paying $100 is not infinite. This is ridiculous, therefore the utilities are wrong.

Questions like "is an increased probability, no matter how small, of someone suffering a horrible painful death always worse than a moderate amount of money" are typical questions of moral theory rather than decision theory. The ambiguity would go away if the stakes were simply money on both sides.
Said Achmiz (1y):
Er, no. I don’t think this is right either. Since “someone” here refers to yourself, the question is: “is an increased probability (no matter how small) of you suffering a horrible painful death always worse than a moderate amount of money?” This is not a moral question; it’s a question about your own preferences. (Of course, it’s also not the question we’re being asked to consider in the “Bomb” scenario, because there we’re not faced with a small probability of horrible painful death, or a small increase in the probability of horrible painful death, but rather a certain horrible painful death; and we’re comparing that to the loss of a moderate amount of money. This also is not a moral question, of course.)

Well, first of all, that would make the problem less interesting. And surely we don’t want to say “this decision theory only handles questions of money; as soon as we ask it to evaluate questions of life and death, it stops giving sensible answers”? Secondly, I don’t think that any problem goes away if there’s just money on both sides. What if it were a billion dollars you’d have to pay to take Left, and a hundred to take Right? Well… in that case, honestly, the scenario would make even less sense than before, because:

  1. What if I don’t have a billion dollars? Am I now a billion dollars in debt? To whom? I’m the last person in existence, right?
  2. What’s the difference between losing a hundred dollars and losing a billion dollars, if I’m the only human in existence? What am I even using money for? What does it mean to say that I have money?
  3. Can I declare myself to be a sovereign state, issue currency (conveniently called the “dollar”), and use it to pay for the boxes? Do they have to be American dollars? Can I be the President of America? (Or the King?) Who’s going to dispute my claim?

And so on…
JBlack (1y):
I've posted a similar scenario which is based on purely money here [https://www.lesswrong.com/posts/SJS6qqRzbMjA9bFba/jblack-s-shortform?commentId=tAFu65rMjB4dkebYs]. I avoid "burning to death" outcomes in my version because some people do appear to endorse theoretically infinite disutilities for such things, even when they don't live by such. Likewise there are no insanely low probabilities of failure that are mutually contradictory with other properties of the scenario. It's just a straightforward scenario in which FDT says you should choose to lose $1000 whenever that option is available, despite always having an available option to lose only $100.

Well, in that case Omega’s prediction and your decision (one-boxing or two-boxing) aren’t subjunctively dependent on the same function. And this kind of dependence is key in FDT’s decision to one-box! Without it, FDT recommends two-boxing, like CDT.

In Newcomb's problem, Omega is a perfect predictor, not just a very good one. Subjunctive dependence is necessarily also perfect in that case.

If Omega is imperfect in various ways, their predictions might be partially or not at all subjunctively dependent upon yours and below some point on this scale FDT will s... (read more)

Re: the Bomb scenario:

It seems to me that the given defense of FDT is, to put it mildly, unsatisfactory. Whatever “fancy” reasoning is proffered, nevertheless the options on offer are “burn to death” or “pay $100”—and the choice is obvious.

FDT recommends knowingly choosing to burn to death? So much the worse for FDT!

FDT has very persuasive reasoning for why I should choose to burn to death? Uh-huh (asks the non-FDT agent), and if you’re so rational, why are you dead?

Counterfactuals, you say? Well, that’s great, but you still chose to burn to death, instead... (read more)

Heighn (1y):
I'm gonna try this one more time from a different angle: what's your answer on Parfit's Hitchhiker? To pay or not to pay?
Said Achmiz (1y):
Pay.
Heighn (1y):
So even though you are already in the city, you choose to pay and lose utility in that specific scenario? That seems inconsistent with right-boxing on Bomb. For the record, my answer is also to pay, but then again I also left-box on Bomb.
Said Achmiz (1y):
Parfit’s Hitchhiker is not an analogous situation, since it doesn’t take place in a context like “you’re the last person in the universe and will never interact with another agent ever”, nor does paying cause me to burn to death (in which case I wouldn’t pay; note that this would defeat the point of being rescued in the first place!). But more importantly, in the Parfit’s Hitchhiker situation, you have in fact been provided with value (namely, your life!). Then you’re asked to pay a (vastly smaller!) price for that value. In the Bomb scenario, on the other hand, you’re asked to give up your life (very painfully), and in exchange you get (and have gotten) absolutely nothing whatsoever. So I really don’t see the relevance of the question…
Heighn (1y):
Actually, I have thought about this a bit more and concluded Bomb and Parfit's Hitchhiker are indeed analogous in a very important sense: both problems give you the option to "pay" (be it in dollars or with torture and death), even though not paying doesn't causally affect whether or not you die. Like Parfit's Hitchhiker, where you are asked to pay $1000 even though you have already been rescued.
Heighn (1y):
That was never relevant to begin with. Well, both problems have a predictor and focus on a specific situation after the predictor has already made the prediction. Both problems have subjunctive dependence. So they are analogous, but they have differences as well. However, it seems like you don't pay because of subjunctive dependence reasons, so never mind, I guess.
Heighn (1y):
The question is not which action to take. The question is which decision theory gives the most utility. Any candidate for "best decision theory" should take the left box. This results in a virtually guaranteed save of $100 - and yes, burning to death in an extremely unlikely scenario. In that unlikely scenario, yes, taking the right box gives the most utility - but that's answering the wrong question.
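The expected-utility comparison behind this argument can be made concrete with a minimal sketch. The numbers are illustrative assumptions, not from the post: death is assigned -$1,000,000 rather than an infinite disutility, and the predictor's error rate is the stated 10^-24:

```python
# Expected utility of committing (before the prediction is made) to Left vs Right.
# Assumed numbers, for illustration only: death = -$1,000,000; error rate = 1e-24.
ERROR = 1e-24
DEATH = -1_000_000.0

# Commit to Left: the predictor almost surely predicts Left and leaves Left empty,
# so you pay nothing; with probability ERROR it errs and you burn to death.
eu_left = (1 - ERROR) * 0 + ERROR * DEATH

# Commit to Right: whatever the prediction, you take Right and pay $100.
eu_right = -100.0

print(eu_left > eu_right)  # True: committing to Left has higher expected utility
```

On these assumptions, Right-boxing only comes out ahead if the disutility assigned to death exceeds roughly $100 / 10^-24 = $10^26, i.e. the effectively infinite-disutility stance discussed elsewhere in this thread.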
Said Achmiz (1y):
This sort of reasoning makes sense if you must decide on which box to take prior to learning the details of your situation (a.k.a. in a “veil of ignorance”), and cannot change your choice even after you discover that, e.g., taking the Left box will kill you. In such a case, sure, you can say “look, it’s a gamble, and I did lose big this time, but it was a very favorable gamble, with a clearly positive expected outcome”. (Although see Robyn Dawes’ commentary on such “skewed” gambles. However, we can let this pass here.) But that’s not the case here. Here, you’ve learned that taking the Left box kills you, but you still have a choice! You can still choose to take Right! And live! Yes, FDT insists that actually, you must choose in advance (by “choosing your algorithm” or what have you), and must stick to the choice no matter what. But that is a feature of FDT, it is not a feature of the scenario! The scenario does not require that you stick to your choice. You’re free to take Right and live, no matter what your decision theory says. So when selecting a decision theory, you may of course feel free to pick the one that says that you must pick Left, and knowingly burn to death, while I will pick the one that says that I can pick whatever I want. One of us will be dead, and the other will be “smiling from atop a heap of utility”. (“But what about all those other possible worlds?”, you may ask. Well, by construction, I don’t find myself in any of those, so they’re irrelevant to my decision now, in the actual world.)
Heighn (1y):
Well, I'd say FDT recognizes that you do choose in advance, because you are predictable. Apparently you have an algorithm running that makes these choices, and the predictor simulates that algorithm. It's not that you "must" stick to your choice. It's about constructing a theory that consistently recommends the actions that maximize expected utility. I know I keep repeating that - but it seems that's where our disagreement lies. You look at which action is best in a specific scenario, I look at what decision theory produces the most utility. An artificial superintelligence running a decision theory can't choose freely no matter what the decision theory says: running the decision theory means doing what it says.
Said Achmiz (1y):
That seems like an argument against “running a decision theory”, then! Now, that statement may seem like it doesn’t make sense. I agree! But that’s because, as I see it, your view doesn’t make sense; what I just wrote is consistent with what you write… Clearly, I, a human agent placed in the described scenario, could choose either Left or Right. Well, then we should design our AGI in such a way that it also has this same capability. Obviously, the AGI will in fact (definitionally) be running some algorithm. But whatever algorithm that is, ought to be one that results in it being able to choose (and in fact choosing) Right in the “Bomb” scenario. What decision theory does that correspond to? You tell me…
Donald Hobson (1y):
CDT
Heighn (7mo):
CDT indeed Right-boxes, thereby losing utility.
Heighn (1y):
Exactly, it doesn't make sense. It is in fact nonsense, unless you are saying it's impossible to specify a coherent, utility-maximizing decision theory at all? Btw, please explain how it's consistent with what I wrote, because it seems obvious to me it's not.
Heighn (1y):
And if I select FDT, I would be the one "smiling from atop a heap of utility" in (10^24 - 1) out of 10^24 worlds. Yes, but the point is to construct a decision theory that recommends actions in a way that maximizes expected utility. Recommending left-boxing does that, because it saves you $100 in virtually every world. That's it, really. You keep focusing on that 1 out of 10^24 possibility where you burn to death, but that doesn't take anything away from FDT. Like I said: it's not about which action to take, let alone which action in such an improbable scenario. It's about what decision theory we need.
Said Achmiz (1y):
So you say. But in the scenario (and in any situation we actually find ourselves in), only the one, actual, world is available for inspection. In that actual world, I’m the one with the heap of utility, and you’re dead. Who knows what I would do in any of those worlds, and what would happen as a result? Who knows what you would do? In the given scenario, FDT loses, period, and loses really badly and, what is worse, loses in a completely avoidable manner. As I said, this reasoning makes sense if, at the time of your decision, you don’t know what possibility you will end up with (and are thus making a gamble). It makes no sense at all if you are deciding while in full possession of all relevant facts. Totally, and the decision theory we need is one that doesn’t make such terrible missteps! Of course, it is possible to make an argument like: “yes, FDT fails badly in this improbable scenario, but all other available decision theories fail worse / more often, so the best thing to do is to go with FDT”. But that’s not the argument being made here—indeed, you’ve explicitly disclaimed it…
Heighn (1y):
No. We can inspect more worlds. We know what happens given the agent's choice and the predictor's prediction. There are multiple paths, each with its own probability. The problem description focuses on that one world, yes. But the point remains - we need a decision theory, we need it to recommend an action (left-boxing or right-boxing), and left-boxing gives the most utility if we consider the bigger picture. Do you agree that recommending left-boxing before the predictor makes its prediction is rational?
Said Achmiz (1y):
Well, no. We can reason about more worlds. But we can’t actually inspect them. Here’s the question I have, though, which I have yet to see a good answer to. You say: But why can’t our decision theory recommend “choose Left if and only if it contains no bomb; otherwise choose Right”? (Remember, the boxes are open; we can see what’s in there…) I think that recommending no-bomb-boxing is rational. Or, like: “Take the left box, unless of course the predictor made a mistake and put a bomb in there, in which case, of course, take the right box.”
Heighn (1y):
As to inspection, maybe I'm not familiar enough with the terminology there. Re your last point: I was just thinking about that too. And strangely enough I missed that the boxes are open. But wouldn't the note be useless in that case? I will think about this more, but it seems to me your decision theory can't recommend "Left-box, unless you see a bomb in left.", and FDT doesn't do this. The problem is, in that case the prediction influences what you end up doing. What if the predictor is malevolent, and predicts you choose right, placing the bomb in left? It could make you lose $100 easily. Maybe if you believed the predictor to be benevolent?
Said Achmiz (1y):
Well, uh… that is rather an important aspect of the scenario… Why not? Yes, it certainly does. And that’s a problem for the predictor, perhaps, but why should it be a problem for me? People condition their actions on knowledge of past events (including predictions of their actions!) all the time. Indeed, the predictor doesn’t have to predict anything to make me lose $100; it can just place the bomb in the left box, period. This then boils down to a simple threat: “pay $100 or die!”. Hardly a tricky decision theory problem…
Heighn (1y):
Sure. But given the note, I had the knowledge needed already, it seems. But whatever. Didn't say it was a tricky decision problem. My point was that your strategy is easily exploitable and may therefore not be a good strategy. 
Said Achmiz (1y):
If your strategy is “always choose Left”, then a malevolent “predictor” can put a bomb in Left and be guaranteed to kill you. That seems much worse than being mugged for $100.
Heighn (1y):
The problem description explicitly states the predictor doesn't do that, so no.
Said Achmiz (1y):
I don’t see how that’s relevant. In the original problem, you’ve been placed in this weird situation against your will, where something bad will happen to you (either the loss of $100 or … death). If we’re supposing that the predictor is malevolent, she could certainly do all sorts of things… are we assuming that the predictor is constrained in some way? Clearly, she can make mistakes, so that opens up her options to any kind of thing you like. In any case, your choice (by construction) is as stated: pay $100, or die.
Heighn (1y):
You don't see how the problem description preventing it is relevant? The description doesn't prevent malevolence, but it does prevent putting a bomb in left if the agent left-boxes.
Heighn (2mo):
FDT doesn't insist on this at all. FDT recognizes that IF your decision procedure is modelled prior to your current decision, THEN you did in fact choose in advance. If an FDT'er playing Bomb doesn't believe her decision procedure was being modelled this way, she wouldn't take Left! FDT recognizes it if and only if it is a feature of the scenario. FDT isn't insisting that the world be a certain way. I wouldn't be a proponent of it if it did.
Said Achmiz (2mo):
If a model of you predicts that you will choose A, but in fact you can choose B, and want to choose B, and do choose B, then clearly the model was wrong. Thinking “the model says I will choose A, therefore I have to (???) choose A” is total nonsense. (Is there some other way to interpret what you’re saying? I don’t see it.)
Heighn (2mo):
"Thinking “the model says I will choose A, therefore I have to (???) choose A” is total nonsense." I choose whatever I want, knowing that it means the predictor predicted that choice. In Bomb, if I choose Left, the predictor will have predicted that (given subjunctive dependence). Yes, the predictor said it predicted Right in the problem description; but if I choose Left, that simply means the problem ran differently from the start. It means, starting from the beginning, the predictor predicts I will choose Left, doesn't put a bomb in Left, doesn't leave the "I predicted you will pick Right"-note (but maybe leaves a "I predicted you will pick Left"-note) , and then I indeed choose Left, letting me live for free.
Heighn (2mo):
If the model is in fact (near) perfect, then choosing B means the model chose B too. That may seem like changing the past, but it really isn't, that's just the confusing way these problems are set up. Claiming you can choose something a (near) perfect model of you didn't predict is like claiming two identical calculators can give a different answer to 2 + 2.
Heighn (1y):
It is the case, in a way. Otherwise the predictor could not have predicted your action. I'm not saying you actively decide what to do beforehand, but apparently you are running a predictable decision procedure.
Heighn (24d):
This is where, at least in part, your misunderstanding lies (IMO). FDT doesn't recommend choosing to burn to death. It recommends Left-boxing, which avoids burning to death AND avoids paying $100. In doing so, FDT beats both CDT and EDT, which both pay $100. It really is as simple as that. The Bomb is an argument for FDT, and quite an excellent one.
Said Achmiz (24d):
… huh? How does this work? The scenario, as described in the OP, is that the Left box has a bomb in it. By taking it, you burn to death. But FDT, as you say, recommends Left-boxing. Therefore, FDT recommends knowingly choosing to burn to death. I don’t understand how you can deny this when your own post clearly describes all of this.
Heighn (23d):
This works because Left-boxing means you're in a world where the predictor's model of you also Left-boxed when the predictor made its prediction, causing it to not put a bomb in Left. Put differently, the situation described by MacAskill becomes virtually impossible if you Left-box, since the probability of Left-boxing and burning to death is ~0.

OR, alternatively, we say: no, we see the bomb. We can't retroactively change this! If we keep that part of the world fixed, then, GIVEN the subjunctive dependence between us and the predictor (assuming it's there), that simply means we Right-box (with probability ~1), since that's what the predictor's model did. Of course, then it's not much of a decision theoretic problem anymore, since the decision is already fixed in the problem statement.

If we assume we can still make a decision, then that decision is made in 2 places: first by the predictor's model, then by us. Left-boxing means the model Left-boxes and we get to live for free. Right-boxing means the model Right-boxes and we get to live at a cost of $100. The right decision must be Left-boxing.
Said Achmiz (23d):
Irrelevant, since the described scenario explicitly stipulates that you find yourself in precisely that situation. Yes, that’s what I’ve been saying: choosing Right in that scenario is the correct decision. I have no idea what you mean by this. No, Left-boxing means we burn to death.
Heighn (21d):
"Irrelevant, since the described scenario explicitly stipulates that you find yourself in precisely that situation." Actually, this whole problem is irrelevant to me, a Left-boxer: Left-boxers never (or extremely rarely) find themselves in the situation with a bomb in Left. That's the point.
Said Achmiz (20d):
Firstly, there’s a difference between “never” and “extremely rarely”. And in the latter case, the question remains “and what do you do then?”. To which, it seems, you answer “choose the Right box”…? Well, I agree with that! But that’s just the view that I’ve already described as “Left-box unless there’s a bomb in Left, in which case Right-box”. It remains unclear to me what it is you think we disagree on.
Heighn (20d):
That difference is so small as to be neglected. It seems to me that strategy leaves you manipulable by the predictor, who can then just always predict you will Right-box, put a bomb in Left, and let you Right-box, causing you to lose $100.
Said Achmiz (20d):
By construction it is not, because the scenario is precisely that we find ourselves in one such exceptional case; the posterior probability (having observed that we do so find ourselves) is thus ~1. … but you have said, in a previous post, that if you find yourself in this scenario, you Right-box. How to reconcile your apparently contradictory statements…?
Heighn (20d):
Except that we don't find ourselves there if we Left-box. But we seem to be going around in a circle. Right-boxing is the necessary consequence if we assume the predictor's Right-box prediction is fixed now. So GIVEN the Right-box prediction, I apparently Right-box. My entire point is that the prediction is NOT a given. I Left-box, and thus change the prediction to Left-box. I have made no contradictory statements. I am and always have been saying that Left-boxing is the correct decision to resolve this dilemma.
Said Achmiz (20d):
There’s no “if” about it. The scenario is that we do find ourselves there. (If you’re fighting the hypothetical, you have to be very explicit about that, because then we’re just talking about two totally different, and pretty much unrelated, things. But I have so far understood you to not be doing that.) I don’t know what you mean by “apparently”. You have two boxes—that’s the scenario. Which do you choose—that’s the question. You can pick either one; where does “apparently” come in? What does this mean? The boxes are already in front of you. You just said in this very comment that you Right-box in the given scenario! (And also in several other comments… are you really going to make me cite each of them…?)
Heighn (20d):
I'm not going to make you cite anything. I know what you mean. I said Right-boxing is a consequence, given a certain resolution of the problem; I always maintained Left-boxing is the correct decision. Apparently I didn't explain myself well, that's on me. But I'm kinda done, I can't seem to get my point across (not saying it's your fault btw).
green_leaf (20d):
Do you understand why one should Left-box for a perfect predictor if there's a bomb in the left box?
Said Achmiz (20d):
Of course one should not; if there’s a bomb in Left, doing so leads to you dying.
green_leaf (20d):
It doesn't. Instead, it will make it so that there will have never been a bomb in the first place. To understand this, imagine yourself as a deterministic algorithm. Either you Left-box under all circumstances (even if there is a bomb in the left box), or you Right-box under all circumstances, or you Right-box iff there is a bomb in the left box. Implementing the first algorithm out of these three is the best choice (the expected utility is 0). Implementing the third algorithm (that's what you do) is the worst choice (the expected utility is -$100).
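The three algorithms above can be checked mechanically. A minimal sketch, assuming a perfect predictor (bomb in Left iff it predicts Right) and an illustrative -1,000,000 utility for burning to death; since the predictor is perfect, only worlds where its prediction matches your action can occur:

```python
# Toy check of three deterministic strategies against a *perfect* predictor.
# Assumed numbers (not from the thread): death = -1_000_000, Right costs $100.
DEATH, RIGHT_COST = -1_000_000, -100

def always_left(bomb_in_left):
    return "Left"

def always_right(bomb_in_left):
    return "Right"

def right_iff_bomb(bomb_in_left):
    return "Right" if bomb_in_left else "Left"

def payoffs(strategy):
    """Payoffs of every self-consistent world: the predictor puts a bomb in Left
    iff it predicts Right, and (being perfect) its prediction matches your action."""
    results = []
    for prediction in ("Left", "Right"):
        bomb_in_left = (prediction == "Right")
        action = strategy(bomb_in_left)
        if action != prediction:  # inconsistent world: a perfect predictor rules it out
            continue
        results.append(RIGHT_COST if action == "Right" else (DEATH if bomb_in_left else 0))
    return results

print(payoffs(always_left))     # [0]        -> live, pay nothing
print(payoffs(always_right))    # [-100]     -> always pay
print(payoffs(right_iff_bomb))  # [0, -100]  -> two consistent worlds exist
```

Note that the third strategy admits two self-consistent worlds; the -$100 figure in the comment corresponds to the predictor settling on the Right-box fixed point.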
Said Achmiz (20d):
By the way, I want to point out that you apparently disagree with Heighn on this. He says, as I understand him, that if you pick Left, you do indeed burn to death, but this is fine, because in [1 trillion trillion minus one] possible worlds, you live and pay nothing. But you instead say that if you pick Left… something happens… and the bomb in the Left box, which you were just staring directly at, disappears somehow. Or wasn’t ever there (somehow), even though, again, you were just looking right at it. How do you reconcile this disagreement? One of you has to be wrong about the consequences of picking the Left box.
Heighn (20d):
I think we agree. My stance: if you Left-box, that just means the predictor predicted that with probability close to 1. From there on, there are a trillion trillion - 1 possible worlds where you live for free, and 1 where you die. I'm not saying "You die, but that's fine, because there are possible worlds where you live". I'm saying that "you die" is a possible world, and there are way more possible worlds where you live.
Said Achmiz (20d):
How? But apparently the consequences of this aren’t deterministic after all, since the predictor is fallible. So this doesn’t help.
green_leaf (19d):
If you reread my comments, I simplified it by assuming an infallible predictor. For this, it's helpful to define another kind of causality (logical causality) as distinct from physical causality. You can't physically cause something to have never been that way, because physical causality can't go to the past. But you can use logical causality for that, since the output of your decision determines not only your output, but the output of all equivalent computations across the entire timeline. By Left-boxing even in case of a bomb, you will have made it so that the predictor's simulation of you has Left-boxed as well, resulting in the bomb never having been there.
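The "same computation, same output" claim can be sketched in code. This is a toy illustration only (nothing like it appears in the thread), again assuming green_leaf's perfect-predictor simplification: the predictor's simulation and the later agent are one and the same deterministic function, so fixing the function's behavior once fixes both the prediction and the bomb placement.

```python
# The agent's decision procedure. The predictor's simulation runs this
# exact same function, so both calls necessarily return the same thing.
def decision(bomb_in_left):
    return "Left"  # Left-box under all circumstances

# The predictor simulates the agent, then places the bomb only if the
# simulation Right-boxed. (What the simulation "sees" is irrelevant here,
# since this policy ignores its input.)
prediction = decision(bomb_in_left=False)
bomb_in_left = (prediction == "Right")

# The real agent later runs the same computation.
actual_choice = decision(bomb_in_left)
print(bomb_in_left, actual_choice)  # False Left
```

On this model, the unconditional Left-boxer never faces a bomb at all, which is the sense in which the bomb "fails to have ever been there."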
Said Achmiz (19d):
… so, in other words, you’re not actually talking about the scenario described in the OP. But that’s what my comments have been about, so… everything you said has been a non sequitur…? This really doesn’t answer the question. Again, the scenario is: you’re looking at the Left box, and there’s a bomb in it. It’s right there in front of you. What do you do? So, for example, when you say: So if you take the Left box, what actually, physically happens?
Vladimir_Nesov (16d):
See my top-level comment [https://www.lesswrong.com/posts/R8muGSShCXZEnuEi6/a-defense-of-functional-decision-theory?commentId=Rbxfzri6hSxj6KnRg], this is precisely the problem with the scenario described in the OP I pointed out. Your reading is standard, but not the intended meaning. But it's also puzzling that you can't ITT this point, to see both meanings, even if you disagree that it's reasonable to allow/expect the intended one. Perhaps divesting from having an opinion on the object-level question might help? Like, what is the point the others are trying to make, specifically, how does it work, regardless of whether it's a wrong point, described in a way that makes no reference to its wrongness/absurdity?
Said Achmiz (16d):
If a point seems to me to be absurd, then how can I understand or explain how it works (given that I don’t think it works at all)? As far as your top-level comment, well, my follow-up questions about it remain unanswered…
Vladimir_Nesov (16d):
Like with bug reports, it's not helpful to say that something "doesn't work at all", it's useful to be more specific. There's some failure of rationality at play here, you are way too intelligent to be incapable of seeing what the point is, so there is some systematic avoidance of allowing yourself to see what is going on. Heighn's antagonistic dogmatism doesn't help, but shouldn't be this debilitating. I dropped out of that conversation because it seemed to be going in circles, and I think I've explained everything already. Apparently the conversation continued, green_leaf seems to be making good points, and Heighn continues needlessly upping the heat. I don't think object level conversation is helpful at this point, there is some methodological issue in how you think about this that I don't see an efficient approach to. I'm already way outside the sort of conversational norms I'm trying to follow for the last few years, which is probably making this comment as hopelessly unhelpful as ever, though in 2010 that'd more likely be the default mode of response for me.
Heighn (16d):
Note that it's my argumentation that's being called crazy, which is a large factor in the "antagonism" you seem to observe - a word choice I don't agree with, btw. About the "needlessly upping the heat", I've tried this discussion from multiple different angles, seeing if we can come to a resolution. So far, no, alas, but not for lack of trying. I will admit some of my reactions were short and a bit provocative, but I don't appreciate nor agree with your accusations. I have been honest in my reactions.
Vladimir_Nesov (16d):
I've been you ten years ago. This doesn't help, courtesy or honesty (purposes that tend to be at odds with each other) aren't always sufficient, it's also necessary to entertain strange points of view that are obviously wrong, in order to talk in another's language, to de-escalate where escalation won't help (it might help with feeding norms, but knowing what norms you are feeding is important). And often enough that is still useless and the best thing is to give up. Or at least more decisively overturn the chess board, as I'm doing with some of the last few comments to this post, to avoid remaining in an interminable failure mode.
Heighn (14d):
Just... no. Don't act like you know me, because you don't. I appreciate you trying to help, but this isn't the way.
Vladimir_Nesov (14d):
These norms are interesting in how well they fade into the background, oppose being examined. If you happen to be a programmer or have enough impression of what that might be like, just imagine a programmer team where talking about bugs can be taboo in some circumstances, especially if they are hypothetical bugs imagined out of whole cloth to check if they happen to be there, or brought to attention to see if it's cheap to put measures in place to prevent their going unnoticed, even if it eventually turns out that they were never there to begin with in actuality. With rationality, that's hypotheses about how people think, including hypotheses about norms that oppose examination of such hypotheses and norms.
Heighn (14d):
Sorry, I'm having trouble understanding your point here. I understand your analogy (I was a developer), but am not sure what you're drawing the analogy to.
Heighn (15d):
I see your point, although I have entertained Said's view as well. But yes, I could have done better. I tend to get like this when my argumentation is being called crazy, and I should have done better. You could have just told me this instead of complaining about me to Said though.
Heighn (18d):
"So if you take the Left box, what actually, physically happens?" You live. For free. Because the bomb was never there to begin with. Yes, the situation does say the bomb is there. But it also says the bomb isn't there if you Left-box.
Said Achmiz (18d):
At the very least, this is a contradiction, which makes the scenario incoherent nonsense. (I don’t think it’s actually true that “it also says the bomb isn’t there if you Left-box”—but if it did say that, then the scenario would be inconsistent, and thus impossible to interpret.)
Heighn (18d):
That's what I've been saying to you: a contradiction. And there are two ways to resolve it.
Vladimir_Nesov (16d):
This is misleading. What happens is that the situation you found yourself in doesn't take place with significant measure. You live mostly in different situations, not this one.
Heighn (16d):
I don't see how it is misleading. Achmiz asked what actually happens; it is, in virtually all possible worlds, that you live for free.
Vladimir_Nesov (16d):
It is misleading because Said's perspective is to focus on the current situation, without regarding the other situations as decision relevant. From UDT perspective you are advocating, the other situations remain decision relevant, and that explains much of what you are talking about in other replies. But from that same perspective, it doesn't matter that you live in the situation Said is asking about, so it's misleading that you keep attention on this situation in your reply without remarking on how that disagrees with the perspective you are advocating in other replies. In the parent comment, you say "it is, in virtually all possible worlds, that you live for free". This is confusing: are you talking about the possible worlds within the situation Said was asking about, or also about possible worlds outside that situation? The distinction matters for the argument in these comments, but you are saying this ambiguously.
green_leaf (18d):
No, non sequitur means something else. (If I say "A, therefore B", but B doesn't follow from A, that's a non sequitur.) I simplified the problem to make it easier for you to understand. It does. Your question was "How?". The answer is "through logical causality." You take the left box with the bomb, and it has always been empty.
Said Achmiz (18d):
This doesn’t even resemble a coherent answer. Do you really not see how absurd this is?
green_leaf (18d):
It doesn't seem coherent if you don't understand logical causality. There is nothing incoherent about both of these being true:

1. You Left-box under all circumstances (even if there is a bomb in the box).
2. The expected utility of executing this algorithm is 0 (the best possible).

These two statements can both be true at the same time, and (1) implies (2).
Said Achmiz (18d):
None of that is responsive to the question I actually asked.
green_leaf (16d):
It is. The response to your question "So if you take the Left box, what actually, physically happens?" is "Physically, nothing." That's why I defined logical causality - it helps understand why (1) is the algorithm with the best expected utility, and why yours is worse.
Said Achmiz (16d):
What do you mean by “Physically, nothing.”? There’s a bomb in there—does it somehow fail to explode? How?
green_leaf (14d):
It fails to have ever been there.
Said Achmiz (13d):
Do you see how that makes absolutely no sense as an answer to the question I asked? Like, do you see what makes what you said incomprehensible, what makes it appear to be nonsense? I’m not asking you to admit that it’s nonsense, but can you see why it reads as bizarre moon logic?
Heighn (13d):
I can, although I indeed don't think it is nonsense. What do you think our (or specifically my) viewpoint is?
Said Achmiz (12d):
I’m no longer sure; you and green_leaf appear to have different, contradictory views, and at this point that divergence has confused me enough that I could no longer say confidently what either of you seem to be saying without going back and carefully re-reading all the comments. And that, I’m afraid, isn’t something that I have time for at the moment… so perhaps it’s best to write this discussion off, after all.
Heighn (12d):
Of course! Thanks for your time.
green_leaf (13d):
You're still neglecting the other kind of causality, so "nothing" makes no sense to you (since something clearly happens). I'm tapping out [https://www.lesswrong.com/tag/tapping-out], since I don't see you putting any effort into understanding this topic.
Heighn (19d):
Agreed, but I think it's important to stress that it's not like you see a bomb, Left-box, and then see it disappear or something. It's just that Left-boxing means the predictor already predicted that, and the bomb was never there to begin with. Put differently, you can only Left-box in a world where the predictor predicted you would.
Said Achmiz (19d):
What stops you from Left-boxing in a world where the predictor didn’t predict that you would?

To make the question clearer, let’s set aside all this business about the fallibility of the predictor. Sure, yes, the predictor’s perfect, it can predict your actions with 100% accuracy somehow, something about algorithms, simulations, models, whatever… fine. We take all that as given.

So: you see the two boxes, and after thinking about it very carefully, you reach for the Right box (as the predictor always knew that you would). But suddenly, a stray cosmic ray strikes your brain! No way this was predictable—it was random, the result of some chain of stochastic events in the universe. And though you were totally going to pick Right, you suddenly grab the Left box instead.

Surely, there’s nothing either physically or logically impossible about this, right? So if the predictor predicted you’d pick Right, and there’s a bomb in Left, and you have every intention of picking Right, but due to the aforesaid cosmic ray you actually take the Left box… what happens?
Said Achmiz (19d):
But the scenario stipulates that the bomb is there. Given this, taking the Left box results in… what? Like, in that scenario, if you take the Left box, what actually happens?
Heighn (18d):
The scenario also stipulates the bomb isn't there if you Left-box. What actually happens? Not much. You live. For free.
green_leaf (19d):
Yes, that's correct. By executing the first algorithm, the bomb has never been there. Here it's useful to distinguish between agentic 'can' and physical 'can.' Since I assume a deterministic universe for simplification, there is only one physical 'can.' But there are two agentic 'can's - no matter the prediction, I can agentically choose either way. The predictor's prediction is logically posterior to my choice, and her prediction (and the bomb's presence) are the way they are because of my choice. So I can Left-box even if there is a bomb in the left box, even though it's physically impossible. (It's better to use agentic 'can' over physical 'can' for decision-making, since that use of 'can' allows us to act as if we determined the output of all computations identical to us, which brings about better results. The agent that uses the physical 'can' as their definition will see the bomb more often.) Unless I'm missing something.
Heighn (20d):
No, that's just plain wrong. If you Left-box given a perfect predictor, the predictor didn't put a bomb in Left. That's a given. If the predictor did put a bomb in Left and you Left-box, then the predictor isn't perfect.
Heighn (23d):
"Irrelevant, since the described scenario explicitly stipulates that you find yourself in precisely that situation." It also stipulates the predictor predicts almost perfectly. So it's very relevant. "Yes, that’s what I’ve been saying: choosing Right in that scenario is the correct decision." No, it's the wrong decision. Right-boxing is just the necessary consequence of the predictor predicting I Right-box. But insofar as this is a decision problem, Left-boxing is correct, and then the predictor predicted I would Left-box. "No, Left-boxing means we burn to death." No, it means the model Left-boxed and thus the predictor didn't put a bomb in Left. Do you understand how subjunctive dependence works?
Said Achmiz (23d):
Yes, almost perfectly (well, it has to be “almost”, because it’s also stipulated that the predictor got it wrong this time). None of this matters, because the scenario stipulates that there’s a bomb in the Left box. But it’s stipulated that the predictor did put a bomb in Left. That’s part of the scenario. Why does it matter? We know that there’s a bomb in Left, because the scenario tells us so.
Heighn (22d):
Well, not with your answer, because you Right-box. But anyway. It matters a lot, because in a way the problem description is contradicting itself (which happens more often in Newcomblike problems):

1. It says there's a bomb in Left.
2. It also says that if I Left-box, then the predictor predicted this, and will not have put a bomb in Left.

(Unless you assume the predictor predicts so well by looking at, I don't know, the color of your shoes or something. But it strongly seems like the predictor has some model of your decision procedure.) You keep repeating (1), ignoring (2), even though (2) is stipulated just as much as (1). So, yes, my question whether you understand subjunctive dependence is justified, because you keep ignoring that crucial part of the problem.
Said Achmiz (22d):
Well, first of all, if there is actually a contradiction in the scenario, then we’ve been wasting our time. What’s to talk about? In such a case the answer to “what happens in this scenario” is “nothing, it’s logically impossible in the first place”, and we’re done. But of course there isn’t actually a contradiction. (Which you know, otherwise you wouldn’t have needed to hedge by saying “in a way”.) It’s simply that the problem says that if you Left-box, then the predictor predicted this, and will not have put a bomb in Left… usually. Almost always! But not quite always. It very rarely makes mistakes! And this time, it would seem, is one of those times. So there’s no contradiction, there’s just a (barely) fallible predictor. So the scenario tells us that there’s a bomb in Left, we go “welp, guess the predictor screwed up”, and then… well, apparently FDT tells us to choose Left anyway? For some reason…? (Or does it? You tell me…) But regardless, obviously the correct choice is Right, because Left’s got a bomb in it. I really don’t know what else there is to say about this.
Heighn (22d):
There is, as I explained. There are two ways of resolving it, but yours isn't one of them. You can't have it both ways. Just... no. "The predictor predicted this", yes, so there are a trillion trillion - 1 follow-up worlds where I don't burn to death! And yes, 1 - just 1 - world where I do. Why choose to focus on that 1 out of a trillion trillion worlds? Because the problem talks about a bomb in Left? No. The problem says more than that. It clearly predicts a trillion trillion - 1 worlds where I don't burn to death. That 1 world where I do sucks, but paying $100 to avoid it seems odd. Unless, of course, you value your life infinitely (which I believe you do?). That's fine, it does all depend on the specific valuations.
Said Achmiz (22d):
The problem stipulates that you actually, in fact, find yourself in a world where there’s a bomb in Left. These “other worlds” are—in the scenario we’re given—entirely hypothetical (or “counterfactual”, if you like). Do they even exist? If so, in what sense? Not clear. But in the world you find yourself in (we are told), there’s a bomb in the Left box. You can either take that box, and burn to death, or… not do that. So, “why choose to focus on” that world? Because that’s the world we find ourselves in, where we have to make the choice. Paying $100 to avoid burning to death isn’t something that “seems odd”, it’s totally normal and the obviously correct choice.
Heighn (22d):
My point is that those "other worlds" are just as much stipulated by the problem statement as that one world you focus on. So, you pay $100 and don't burn to death. I don't pay $100, burn to death in 1 world, and live for free in a trillion trillion - 1 worlds. Even if I value my life at $10,000,000,000,000, my choice gives more utility.
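Heighn's arithmetic can be checked directly. A small sketch (the numbers are the ones stipulated in the thread: a life valued at $10^13 and a one-in-a-trillion-trillion failure rate; the code itself is illustrative only):

```python
p_error = 1e-24                   # predictor's failure rate (1 in a trillion trillion)
life_value = 10_000_000_000_000   # $10^13, Heighn's stipulated value of a life

eu_right = -100                   # pay $100, live with certainty
eu_left = -p_error * life_value   # die in one world out of 1e24, else live for free

print(eu_left, eu_right)          # eu_left is on the order of -1e-11
assert eu_left > eu_right         # Left-boxing wins on these numbers
```

Even with a life valued at ten trillion dollars, the expected cost of Left-boxing is about a hundred-billionth of a dollar, versus a certain $100 for Right-boxing, which is Heighn's point.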
Said Achmiz (22d):
Sorry, but no, they’re not. You may choose to infer their “existence” from what’s stated in the problem—but that’s an inference that depends on various additional assumptions (e.g. about the nature of counterfactuals, and all sorts of other things). All that’s actually stipulated is the one world you find yourself in.
Heighn (22d):
You infer the existence of me burning to death from what's stated in the problem as well. There's no difference. I do have the assumption of subjunctive dependence. But without that one - if, say, the predictor predicts by looking at the color of my shoes - then I don't Left-box anyway.
Said Achmiz (22d):
Of course there’s a difference: inferring burning to death just depends on the perfectly ordinary assumption of cause and effect, plus what is very explicitly stated in the problem. Inferring the existence of other worlds depends on much more esoteric assumptions than that. There’s really no comparison at all. Not only is that not the only assumption required, it’s not even clear what it means to “assume” subjunctive dependence. Sure, it’s stipulated that the predictor is usually (but not quite always!) right about what you’ll do. What else is there to this “assumption” than that? But how that leads to “other worlds exist” and “it’s meaningful to aggregate utility across them” and so on… I have no idea.
Heighn (22d):
Inferring that I don't burn to death depends on 1. Omega modelling my decision procedure 2. Cause and effect from there. That's it. No esoteric assumptions. I'm not talking about a multiverse with worlds existing next to each other or whatever, just possible worlds.
Said Achmiz (22d):
If they’re just possible worlds, then why do they matter? They’re not actual worlds, after all (by the time the described scenario is happening, it’s too late for any of them to be actual!). So… what’s the relevance?
Heighn (22d):
The world you're describing is just as much a possible world as the ones I describe. That's my point.
Said Achmiz (22d):
Huh? It’s the world that’s stipulated to be the actual world, in the scenario.
Heighn (22d):
No, it isn't. In the world that's stipulated, you still have to make your decision. That decision is made in my head and in the predictor's head. That's the key.
Said Achmiz (22d):
But if you choose Left, you will burn to death. I’ve already quoted that. Says so right in the OP.
Heighn (22d):
That's one possible world. There are many more where I don't burn to death.
Said Achmiz (22d):
But… there aren’t, though. They’ve already failed to be possible, at that point.
Vladimir_Nesov (16d):
The UDT convention is that other possible worlds remain relevant, even when you find yourself in a possible world that isn't compatible with their actuality. It's confusing to discuss this general point as if it's specific to this contentious thought experiment.
Said Achmiz (16d):
Well, we’re discussing it in the context of this thought experiment. If the point applies more generally, then so be it. Can you explain (or link to an explanation of) what is meant by “convention” and “remain relevant” here?
Vladimir_Nesov (16d):
The setting has a sample space, as in expected utility theory, with situations that take place in some event (let's call it a situation event) and offer a choice between smaller events resulting from taking alternative actions. The misleading UDT convention is to call the situation event "actual". It's misleading because the goal is to optimize expected utility over the whole sample space, not just over the situation event, so the places on the sample space outside the situation event are effectively still in play, still remain relevant, not ruled out by the particular situation event being "actual".
Said Achmiz (16d):
Alright. But by the time the situation described in the OP happens, it no longer matters whether you optimize expected utility over the whole sample space; that goal is now moot. One event out of the sample space has occurred, and the others have failed to occur. Why would you continue to attempt to achieve that goal, toward which you are no longer capable of taking any action?
Vladimir_Nesov (16d):
That goal may be moot for some ways of doing decisions. For UDT it's not moot, it's the only thing that we care about instead. And calling some situation or another "actual" has no effect at all on the goal, and on the process of decision making in any situation, actual or otherwise, that's what makes the goal and the decision process reflectively stable.
Heighn (15d):
"But by the time the situation described in the OP happens, it no longer matters whether you optimize expected utility over the whole sample space; that goal is now moot." This is what we agree on. If you're in the situation with a bomb, all that matters is the bomb. My stance is that Left-boxers virtually never get into the situation to begin with, because of the prediction Omega makes. So with probability close to 1, they never see a bomb. Your stance (if I understand correctly) is that the problem statement says there is a bomb, so that's what's true with probability 1 (or almost 1). And so I believe that's where our disagreement lies. I think Newcomblike problems are often "trick questions" that can be resolved in two ways, one leaning more towards your interpretation. In the spirit of Vladimir's points, if I annoyed you, I do apologize. I can get quite intense in such discussions.
Vladimir_Nesov (14d):
But that's false for a UDT agent, it still matters to that agent-instance-in-the-situation what happens in other situations, those without a bomb, it's not the case that all that matters is the bomb (or even a bomb).
Heighn (14d):
Hmm, interesting. I don't know much about UDT. From an FDT perspective, I'd say that if you're in the situation with the bomb, your decision procedure already Right-boxed and therefore you're Right-boxing again, as a logical necessity. (Making the problem very interesting.)
Heighn (15d):
To explain my view more, the question I try to answer in these problems is more or less: if I were to choose a decision theory now to strictly adhere to, knowing I might run into the Bomb problem, which decision theory would I choose?
Heighn (22d):
Not at the point in time where Omega models my decision procedure.
Heighn (22d):
One thing we do agree on: If I ever find myself in the Bomb scenario, I Right-box. Because in that scenario, the predictor's model of me already Right-boxed, and therefore I do, too - not as a decision, per se, but as a logical consequence. The correct decision is another question - that's Left-boxing, because the decision is being made in two places. If I find myself in the Bomb scenario, that just means the decision to Right-box was already made. The Bomb problem asks what the correct decision is, and makes clear (at least under my assumption) that the decision is made at 2 points in time. At that first point (in the predictor's head), Left-boxing leads to the most utility: it avoids burning to death for free. Note that at that point, there is not yet a bomb in Left!
Said Achmiz (22d):
If we agree on that, then I don’t understand what it is that you think we disagree on! (Although the “not as a decision, per se” bit seems… contentless.) No, it asks what decision you should make. And we apparently agree that the answer is “Right”.
Heighn (22d):
Hmmm, I thought that comment might clear things up, but apparently it doesn't. And I'm left wondering if you even read it. Anyway, Left-boxing is the correct decision. But since you didn't really engage with my points, I'll be leaving now.
Said Achmiz (22d):
What does it mean to say that Left-boxing is “the correct decision” if you then say that the decision you’d actually make would be to Right-box? This seems to be straightforwardly contradictory, in a way that renders the claim nonsensical. I read all your comments in this thread. But you seem to be saying things that, in a very straightforward way, simply don’t make any sense…
Heighn (22d):
Alright. The correct decision is Left-boxing, because that means the predictor's model Left-boxed (and so do I), letting me live for free. Because, at the point where the predictor models me, the Bomb isn't placed yet (and never will be). However, IF I'm in the Bomb scenario, then the predictor's model already Right-boxed. Then, because of subjunctive dependence, it's apparently not possible for me to Left-box, just as it is impossible for two calculators to give a different result to 2 + 2.
Said Achmiz (22d):
Well, the Bomb scenario is what we’re given. So the first paragraph you just wrote there is… irrelevant? Inapplicable? What’s the point of it? It’s answering a question that’s not being asked. As for the last sentence of your comment, I don’t understand what you mean by it. Certainly it’s possible for you to Left-box; you just go ahead and Left-box. This would be a bad idea, of course! Because you’d burn to death. But you could do it! You just shouldn’t—a point on which we, apparently, agree. The bottom line is: to the actual single question the scenario asks—which box do you choose, finding yourself in the given situation?—we give the same answer. Yes?
Heighn (21d):
The bottom line is that Bomb is a decision problem. If I am still free to make a decision (which I suppose I am, otherwise it isn't much of a problem), then the decision I make is made at 2 points in time. And then, Left-boxing is the better decision.
Heighn (22d):
Yes, the Bomb is what we're given. But with the very reasonable assumption of subjunctive dependence, it specifies what I am saying... We agree that if I would be there, I would Right-box, but also everybody would then Right-box, as a logical necessity (well, 1 in a trillion trillion error rate, sure). It has nothing to do with correct or incorrect decisions, viewed like that: the decision is already hard coded into the problem statement, because of the subjunctive dependence. "But you can just Left-box" doesn't work: that's like expecting one calculator to answer to 2 + 2 differently than another calculator.
green_leaf (20d):
Unless I'm missing something, it's possible you're in the predictor's simulation, in which case it's possible you will Left-box.
Heighn (20d):
Excellent point!
green_leaf (22d):
I think it's better to explain to such people the problem where the predictor is perfect, and then generalize to an imperfect predictor. They don't understand the general principle of your present choices pseudo-overwriting the entire timeline and can't think in the seemingly-noncausal way that optimal decision-making requires. By jumping right to an imperfect predictor, the principle becomes, I think, too complicated to explain [https://www.lesswrong.com/posts/HLqWn5LASfhhArZ7w/expecting-short-inferential-distances].
Heighn (22d):
(Btw, you can call your answer "obvious" and my side "crazy" all you want, but it won't change a thing until you actually demonstrate why and how FDT is wrong, which you haven't done.)
Said Achmiz (22d):
I’ve done that: FDT is wrong because it (according to you) recommends that you choose to burn to death, when you could easily choose not to burn to death. Pretty simple.
gjm (22d):
It seems to me that your argument proves too much. Let's set aside this specific example and consider something more everyday: making promises. It is valuable to be able to make promises that others will believe, even when they are promises to do something that (once the relevant situation arises) you will strongly prefer not to do.

Suppose I want a $1000 loan, with $1100 to be repaid one year from now. My counterparty Bob has no trust in the legal system, police, etc., and expects that next year I will be somewhere where he can't easily find me and force me to pay up. But I really need the money. Fortunately, Bob knows some mad scientists and we agree to the following: I will have implanted in my body a device that will kill me if 366 days from now I haven't paid up. I get the money. I pay up. Nobody dies. Yay. I hope we are agreed that (granted the rather absurd premises involved) I should be glad to have this option, even though in the case where I don't pay up it kills me.

Revised scenario: Bob knows some mad psychologists who by some combination of questioning, brain scanning, etc., are able to determine very reliably what future choices I will make in any given situation. He also knows that in a year's time I might (but with extremely low probability) be in a situation where I can only save my life at the cost of the $1100 that I owe him. He has no risk tolerance to speak of and will not lend me the money if in that situation I would choose to save my life and not give him the money. Granted these (again absurd) premises, do you agree with me that it is to my advantage to have the sort of personality that can promise to pay Bob back even if it literally kills me?

It seems to me that:

1. Your argument in this thread would tell me, a year down the line and in the surprising situation that I do in fact need to choose between Bob's money and my life, "save your life, obviously".
2. If my personality were such that I would do as you advise in that situation,
Said Achmiz (22d):
I do not. Your scenario omits the crucial element of the scenario in the OP, where you (the subject) find yourself in a situation where the predictor turns out to have erred in its prediction.
gjm (22d):
Hmm. I am genuinely quite baffled by this; there seems to be some very fundamental difference in how we are looking at the world.

Let me just check that this is a real disagreement and not a misunderstanding (even if it is, there would also be a real disagreement, but a different one): I am asking not "do you agree with me that at the point where I have to choose between dying and failing to repay Bob it is to my advantage ..." but "do you agree with me that at an earlier point, say when I am negotiating with Bob, it is to my advantage ...".

If I am understanding you right and you are understanding me right, then I think the following is true. Suppose that when Bob has explained his position (he is willing to lend me the money if, and only if, his mad scientists determine that I will definitely repay him even if the alternative is death), some supernatural being magically informs me that while it cannot lend me the money it can make me the sort of person who can make the kind of commitment Bob wants and actually follow through. I think you would recommend that I either not accept this offer, or at any rate not make that commitment having been empowered to do so.

Do you feel the same way about the first scenario, where instead of choosing to be a person who will pay up even at the price of death I choose to be a person who will be compelled by brute force to pay up or die? If not, why? Why does that matter? (Maybe it doesn't; your opinion about my scenario is AIUI the same as your opinion about the one in the OP.)
2Said Achmiz21d
Yes, I understood you correctly. My answer stands. (But I appreciate the verification.) Right. No, because there’s a difference between “pay up or die” and “pay up and die”. The scenario in the OP seems to hinge on it. As described, the situation is that the agent has picked FDT as their decision theory, is absolutely the sort of agent who will choose the Left box and die if so predicted, who is thereby supposed to not actually encounter situations where the Left box has a bomb… but oops! The predictor messed up and there is a bomb there anyhow. And now the agent is left with a choice on which nothing depends except whether he pointlessly dies. I see no analogous feature of your scenarios…
4gjm21d
I agree (of course!) that there is a difference between "pay up and die" and "pay up or die". But I don't understand how this difference can be responsible for the difference in your opinions about the two scenarios.

Scenario 1: I choose for things to be so arranged that in unlikely situation S (where if I pay Bob back I die), if I don't pay Bob back then I also die. You agree with me (I think -- you haven't actually said so explicitly) that it can be to my benefit for things to be this way, if this is the precondition for getting the loan from Bob.

Scenario 2: I choose for things to be so arranged that in unlikely scenario S (where, again, if I pay Bob back I die), I will definitely pay. You think this state of affairs can't be to my advantage.

How is scenario 2 actually worse for me than scenario 1? Outside situation S, they are no different (I will not be faced with such strong incentive not to pay Bob back, and I will in fact pay him back, and I will not die). In situation S, scenario 1 means I die either way, so I might as well pay my debts; scenario 2 means I will pay up and die. I'm equally dead in each case. I choose to pay up in each case.

In scenario 1, I do have the option of saying a mental "fuck you" to Bob, not repaying my debt, and dying at the hand of his infernal machinery rather than whatever other thing I could save myself from with the money. But I'm equally dead either way, and I can't see why I'd prefer this, and in any case it's beyond my understanding why having this not-very-appealing extra option would be enough for scenario 1 to be good and scenario 2 to be bad. What am I missing?

I think we are at cross purposes somehow about the "predictor turns out to have erred" thing. I do understand that this feature is present in the OP's thought experiment and absent in mine. My thought experiment isn't meant to be equivalent to the one in the OP, though it is meant to be similar in some ways (and I think we are agreed that it is similar in t
3Heighn22d
Yeah you keep repeating that. Stating it. Saying it's simple, obvious, whatever. Saying I'm being crazy. But it's just wrong. So there's that.
3Said Achmiz22d
Which part of what I said you deny…?
1Heighn22d
1. That I'm being crazy
2. That Left-boxing means burning to death
3. That your answer is obviously correct

Take your pick.
2Said Achmiz22d
The scenario stipulates this:
2Vladimir_Nesov16d
This is instead part of the misleading framing [https://www.lesswrong.com/posts/R8muGSShCXZEnuEi6/a-defense-of-functional-decision-theory?commentId=Rbxfzri6hSxj6KnRg]. Putting the bomb in Left is actually one of the situations being considered, not all that actually happens, even if the problem says that it's what actually happens. It's one of the possible worlds, and there is a misleading convention of saying that when you find yourself in a possible world, what you see is what actually happens. That's because that's how it subjectively looks, even though the other worlds are supposed to still matter by UDT convention.
2Matthew Barnett1y
I think the more fundamental issue is that you can construct these sorts of dilemmas for all decision theories. For example, you can easily come up with scenarios where Omega punishes you for following a certain decision theory and rewards you otherwise. The right question to ask is not whether a decision theory recommends something that makes you burn to death in some scenario, but whether it recommends you do so across a broad class of fair dilemmas. I'm not convinced that FDT does that, and the bomb dilemma did not move me much.
1Said Achmiz1y
You can of course construct scenarios where Omega punishes you for all sorts of things, but in the given case, FDT recommends a manifestly self-destructive action, in a circumstance where you’re entirely free to instead not take that action. Other decision theories do not do this (whatever their other faults may be). But of course it is the right question. The given dilemma is perfectly fair. FDT recommends that you knowingly choose to burn to death, when you could instead not choose to burn to death, and incur no bad consequences thereby. This is a clear failure.
2Matthew Barnett1y
What makes the bomb dilemma seem unfair to me is the fact that it's conditioning on an extremely unlikely event. The only way we blow up is if the predictor predicted incorrectly. But by assumption, the predictor is near-perfect. So it seems implausible that this outcome would ever happen.
2Said Achmiz1y
Why is this unfair? Look, I keep saying this, but it doesn’t seem to me like anyone’s really engaged with it, so I’ll try again:

If the scenario were “pick Left or Right; after you pick, then the boxes are opened and the contents revealed; due to [insert relevant causal mechanisms involving a predictor or whatever else here], the Left box should be empty; unfortunately, one time in a trillion trillion, there’ll be some chance mistake, and Left will turn out (after you’ve chosen it) to have a bomb, and you’ll blow up”…

… then FDT telling you to take Left would be perfectly reasonable. I mean, it’s a gamble, right? A gamble with an unambiguously positive expected outcome; a gamble you’ll end up winning in the utterly overwhelming majority of cases. Once in a trillion trillion times, you suffer a painful death—but hey, that’s better odds than each of us take every day when we cross the street on our way to the corner store. In that case, it would surely be unfair to say “hey, but in this extremely unlikely outcome, you end up burning to death!”.

But that’s not the scenario! In the given scenario, we already know what the boxes have in them. They’re open; the contents are visible. We already know that Left has a bomb. We know, to a certainty, that choosing Left means we burn to death. It’s not a gamble with an overwhelming, astronomical likelihood of a good outcome, and only a microscopically tiny chance of painful death—instead, it’s knowingly choosing a certain death!

Yes, the predictor is near-perfect. But so what? In the given scenario, that’s no longer relevant! The predictor has already predicted, and its prediction has already been evaluated, and has already been observed to have erred! There’s no longer any reason at all to choose Left, and every reason not to choose Left. And yet FDT still tells us to choose Left. This is a catastrophic failure; and what’s more, it’s an obvious failure, and a totally preventable one.

Now, again: it would be reasonable t
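The ex-ante gamble described in this comment can be sketched as a quick expected-value comparison. (The $1,000,000 value placed on a life is an assumption borrowed from later comments in this thread, not part of the original scenario.)

```python
# Expected utility of each box *before* the contents are revealed,
# assuming the predictor errs once per trillion trillion predictions
# and (hypothetically) valuing a life at $1,000,000.
P_ERROR = 1e-24
LIFE_VALUE = 1_000_000

# Left is free, but with probability P_ERROR it hides a bomb.
ev_left = -P_ERROR * LIFE_VALUE   # about -$1e-18
# Right always costs $100.
ev_right = -100

print(ev_left > ev_right)  # True: ex ante, Left is the better gamble
```

This is only the ex-ante calculation; the disagreement in the thread is about whether it still governs once the bomb is observed.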
1Heighn1y
My updated defense of FDT [https://www.lesswrong.com/posts/Suk3qEWyxnTG47TDZ/defending-functional-decision-theory], should you be interested.
1Heighn1y
Like I've said before, it's not about which action to take, it's about which strategy to have. It's obvious right-boxing gives the most utility in this specific scenario only, but that's not what it's about.
4Said Achmiz1y
Why? Why is it not about which action to take? I reject this. If Right-boxing gives the most utility in this specific scenario, then you should Right-box in this specific scenario. Because that’s the scenario that—by construction—is actually happening to you. In other scenarios, perhaps you should do other things. But in this scenario, Right is the right answer.
1Heighn1y
And this is the key point. It seems to me impossible to have a decision theory that right-boxes in Bomb but still does as well as FDT does in all other scenarios.
1Heighn1y
It's about which strategy you should adhere to. The strategy of right-boxing loses you $100 virtually all the time. 
-1TAG1y
If it's about utility, then specify it in terms of utility, not death or dollars.
1Heighn10mo
Utility is often measured in dollars. If I had created the Bomb scenario, I would have specified life/death in terms of dollars as well. Like, "Life is worth $1,000,000 to you." That way, you can easily compare the loss of your life to the $100 cost of Right-boxing.
1Heighn1y
Yes, you keep saying this, and I still think you're wrong. Our candidate decision theory has to recommend something for this scenario - and that recommendation gets picked up by the predictor beforehand. You have to take that into account. You seem to be extremely focused on this extremely unlikely scenario, which is odd to me. How exactly is it preventable? I'm honestly asking. If you have a strategy that, if the agent commits to it before the predictor makes her prediction, does better than FDT, I'm all ears.
2Said Achmiz1y
It’s preventable by taking the Right box. If you take Left, you burn to death. If you take Right, you don’t burn to death. Totally, here it is: FDT, except that if the predictor makes a mistake and there’s a bomb in the Left, take Right instead.
1Dacyn2mo
You seem to have misunderstood the problem statement [1]. If you commit to doing "FDT, except that if the predictor makes a mistake and there’s a bomb in the Left, take Right instead", then you will almost surely have to pay $100 (since the predictor predicts that you will take Right), whereas if you commit to using pure FDT, then you will almost surely have to pay nothing (with a small chance of death). There really is no "strategy that, if the agent commits to it before the predictor makes her prediction, does better than FDT". [1] Which is fair enough, as it wasn't actually specified correctly: the predictor is actually trying to predict whether you will take Left or Right if it leaves its helpful note, not in the general case. But this assumption has to be added, since otherwise FDT says to take Right.
2Said Achmiz2mo
It sounds like you’re saying that I correctly understood the problem statement as it was written (but it was written incorrectly); but that the post erroneously claims that in the scenario as (incorrectly) written, FDT says to take Left, when in fact FDT in that scenario-as-written says to take right. Do I understand you?
1Dacyn2mo
Yes.
1Heighn2mo
Why? FDT isn't influenced in its decision by the note, so there is no loss of subjunctive dependence when this assumption isn't added. (Or so it seems to me: I am operating at the limits of my FDT-knowledge here.)
1Heighn10mo
How would this work? Your strategy seems to be "Left-box unless the note says there's a bomb in Left". This ensures the predictor is right whether she puts a bomb in Left or not, and doesn't optimize expected utility.
3Said Achmiz10mo
It doesn’t kill you in a case when you can choose not to be killed, though, and that’s the important thing.
1Heighn10mo
It costs you p * $100 for 0 <= p <= 1 where p depends on how "mean" you believe the predictor is. Left-boxing costs 10^-24 * $1,000,000 = $10^-18 if you value life at a million dollars. Then if p > 10^-20, Left-boxing beats your strategy.
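Heighn's threshold can be checked numerically. (The "meanness" probability p and the $1,000,000 life value are the assumptions stated in the comment, not facts about the scenario.)

```python
# Expected cost of the "Right-box when the note says bomb-in-Left" strategy:
# you pay $100 whenever the predictor (with probability p) steers you there.
def cost_right_boxing(p):
    return p * 100

# Expected cost of always Left-boxing: a 1e-24 chance of losing a life
# valued (by assumption) at $1,000,000.
COST_LEFT = 1e-24 * 1_000_000  # = $1e-18

# Left-boxing is cheaper exactly when p * 100 > 1e-18, i.e. p > 1e-20.
print(cost_right_boxing(1e-19) > COST_LEFT)  # True: p above the threshold
print(cost_right_boxing(1e-21) > COST_LEFT)  # False: p below the threshold
```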
2Said Achmiz10mo
Why would I value my life finitely in this case? (Well, ever, really, but especially in this scenario…)
1Heighn10mo
Also, were you operating under the life-has-infinite-value assumption all along? If so, then 1. You were incorrect about FDT's decision in this specific problem 2. You should probably have mentioned you had this unusual assumption, so we could have resolved this discussion way earlier
1Heighn10mo
1. Note that FDT Right-boxes when you give life infinite value.
2. What's special in this scenario with regards to valuing life finitely?
3. If you always value life infinitely, it seems to me all actions you can ever take get infinite values, as there is always a chance you die, which makes decision making on the basis of utility pointless.
1Heighn1y
Unfortunately, that doesn't work. The predictor, if malevolent, could then easily make you choose Right and pay $100. Left-boxing is the best strategy possible as far as I can tell. As in, yes, that extremely unlikely scenario where you burn to death sucks big time, but there is no better strategy possible (unless there is a superior strategy that I, and it appears everybody else, haven't thought of).
2Said Achmiz1y
If you commit to taking Left, then the predictor, if malevolent, can “mistakenly” “predict” that you’ll take Right, making you burn to death. Just like in the given scenario: “Whoops, a mistaken prediction! How unfortunate and improbable! Guess you have no choice but to kill yourself now, how sad…” There absolutely is a better strategy: don’t knowingly choose to burn to death.
1Heighn1y
We know the error rate of the predictor, so this point is moot. I still have to see a strategy incorporating this that doesn't overall lose by losing utility in other scenarios.
2Said Achmiz1y
How do we know it? If the predictor is malevolent, then it can “err” as much as it wants.
1Heighn1y
For the record, I read Nate's comments again, and I now think of it like this: to the extent that the predictor was accurate in her line of reasoning, you Left-boxing does NOT result in you slowly burning to death. It results in, well, the problem statement being wrong, because the following can't all be true:

1. The predictor is accurate
2. The predictor predicts you Right-box, and places the bomb in Left
3. You Left-box

And yes, apparently the predictor can be wrong, but I'd say: who even cares? The probability of the predictor being wrong is supposed to be virtually zero anyway (although, as Nate notes, the problem description isn't complete in that regard).
1Heighn1y
We know it because it is given in the problem description, which you violate if the predictor 'can "err" as much as it wants'.
1Heighn7mo
Although I strongly disagree with Achmiz on the Bomb scenario in general, here we agree: Bomb is perfectly fair. You just have to take the probabilities into account, after which - if we value life at, say, $1,000,000 - Left-boxing is the only correct strategy.
-1Heighn1y
Well, it's only unlikely if the agent left-boxes. If she right-boxes, the scenario is very likely. I don't think the problem itself is unfair - what's unfair is saying FDT is wrong for left-boxing.
1Heighn1y
For the record: I completely agree with Said on this specific point. Bomb is a fair problem. Each decision theory entering this problem gets dealt the exact same hand. No. Ironically, Bomb is an argument for FDT, not against it: for if I adhere to FDT, I will never* burn to death AND save myself $100 if I do face this predictor. *never here means only 1 in 1 trillion trillion if you meet the predictor
1JBlack1y
If there is some nontrivial chance that the predictor is adversarial but constrained to be accurate and truthful (within the bounds given), then on the balance of probability people taking the right box upon seeing a note predicting right are worse off. Yes, it sucks that you in particular got screwed, but the chances of that were astronomically low. This shows up more obviously if you look at repeated iterations, compare performance of decision theories in large populations, or weight outcomes across possible worlds. Edit: The odds were not astronomically low. I misinterpreted the statement about Predictor's accuracy to be stronger than it actually was. FDT recommends taking the right box, and paying $100.
2Said Achmiz1y
No, because the scenario stipulates that you find yourself facing a Left box with a bomb. Anyone who finds themselves in this scenario is worse off taking Left than Right, because taking Left kills you painfully, and taking Right does no such thing. There is no question of any “balance of probability”.

But you didn’t “get screwed”! You have a choice! You can take Left, or Right. Again: the scenario stipulates that taking Left kills you, and FDT agrees that taking Left kills you; and likewise it is stipulated (and FDT does not dispute) that you can indeed take whichever box you like. All of that is completely irrelevant, because in the actual world that you (the agent in the scenario) find yourself in, you can either burn to death, or not. It’s completely up to you. You don’t have to do what FDT says to do, regardless of what happens in any other possible worlds or counterfactuals or what have you.

It really seems to me like anyone who takes Left in the “Bomb” scenario is making almost exactly the same mistake as people who two-box in the classic Newcomb’s problem. Most of the point of “Newcomb’s Problem and Regret of Rationality” [https://www.lesswrong.com/posts/6ddcsdA2c2XpNpE5x/newcomb-s-problem-and-regret-of-rationality] is that you don’t have to, and shouldn’t, do things like this.

But actually, it’s a much worse mistake! In the Newcomb case, there’s a disagreement about whether one-boxing can actually somehow cause there to be a million dollars in the box; CDT denies this possibility (because it takes no account of sufficiently accurate predictors), while timeless/logical/functional/whatever decision theories accept it. But here, there is no disagreement at all; FDT admits that choosing Left causes you to die painfully, but says you should do it anyway! That is obviously much worse.

The other point of “Newcomb’s Problem and Regret of Rationality” is that it is a huge mistake to redefine losing (such as, say, burning to death) as winning. That, also, seems

According to me, the correct rejoinder to Will is: I have confidently asserted that X is false for X whose probability I assign much greater probability than 1 in a trillion trillion, and so I hereby confidently assert that no, I do not see the bomb on the left. You see the bomb on the left, and lose $100. I see no bombs, and lose $0.

I can already hear the peanut gallery objecting that we can increase the fallibility of the predictor to reasonable numbers and I'd still take the bomb, so before we go further, let's all agree that sometimes you're faced with uncertainty, and the move that is best given your uncertainty is not the same as the move that is best given perfect knowledge.

For example, suppose there are three games ("lowball", "highball", and "extremeball") that work as follows. In each game, I have three actions -- low, middle, and high. In the lowball game, my payouts are $5, $4, and $0 respectively. In the highball game, my payouts are $0, $4, and $5 respectively. In the extremeball game, my payouts are $5, $4, and $5 respectively. Now suppose that the real game I'm facing is that one of these games is chosen at uniform random by unobserved die roll. What action should ...
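The three-game example can be worked out explicitly; under a uniformly random game, the "middle" action maximizes expected payout even though it is never the best action under perfect knowledge of which game is being played:

```python
# Payout tables for the three games described above, indexed by action.
games = {
    "lowball":     {"low": 5, "middle": 4, "high": 0},
    "highball":    {"low": 0, "middle": 4, "high": 5},
    "extremeball": {"low": 5, "middle": 4, "high": 5},
}

# Expected payout of each action when the game is chosen uniformly at random.
actions = ["low", "middle", "high"]
ev = {a: sum(g[a] for g in games.values()) / 3 for a in actions}
# middle has EV 4.0; low and high each have EV 10/3 (about 3.33)

best = max(ev, key=ev.get)
print(best)  # middle -- best under uncertainty, never best with full knowledge
```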

6Ben Pace1y
Thanks, this comment thread was pretty helpful. After reading your comments, here's my current explanation of what's up with the bomb argument:

Then I'm a bit confused about how to estimate that probability, but I suspect the reasoning goes like this:

Sanity check

As a sanity check, I note this implies that if the utilities-times-probabilities are different, I would not mind taking the $100 hit. Let's see what the math says here, and then check whether my intuitions agree. Suppose I value my life at $1 million. Then I think that I should become more indifferent here when the probability of a mistaken simulation approaches 1 in 100,000, or where the money on the line is closer to $10^-17.

[You can skip this, but here's me stating the two multiplications I compared:

* World 1: I fake-kill myself to save $X, with probability 1/10
* World 2: I actually kill myself (cost: $1MM), with probability 1/Y

To find the indifference point I want the two multiplications of utility-to-probability to come out to be equal. If X = $100, then Y equals 100,000. If Y is a trillion trillion (10^24), then X = $10^-17. (Unless I did the math wrong.)]

I think this doesn't obviously clash with my intuitions, and somewhat matches them.

* If the simulator was getting things wrong 1 in 100,000 times, I think I'd be more careful with my life in the "real world case" (insofar as that is a sensible concept). Going further, if you told me they were wrong 1 in 10 times, this would change my action, so there's got to be a tipping point somewhere, and this seems reasonable for many people (though I actually value my life at more than $1MM).
* And if the money was that tiny ($10^-17), I'd be fairly open to "not taking even the one-in-a-trillion-trillion chance". (Though really my intuition is that I don't care about money way before $10^-17, and would probably not risk anything serious starting at like 0.1 cents, because that sort of money seems kind of irritating to h
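Ben's indifference-point arithmetic can be reproduced directly. (The 1/10 probability of being the simulated copy and the $1,000,000 life value are the assumptions stated in the comment.)

```python
# Indifference condition compared in the comment above:
#   X * P(simulated) = LIFE * P(predictor error)
LIFE = 1_000_000   # assumed dollar value of a life
P_SIM = 1 / 10     # assumed probability of being the simulated copy

def indifference_error_rate(x):
    """Predictor error rate at which $x on the line makes you indifferent."""
    return x * P_SIM / LIFE

def indifference_stakes(p_err):
    """Dollars at stake that make you indifferent, given error rate p_err."""
    return LIFE * p_err / P_SIM

print(indifference_error_rate(100))  # about 1e-5, i.e. one error in 100,000
print(indifference_stakes(1e-24))    # about $1e-17
```

The two calls reproduce the two cases in the comment: $100 on the line corresponds to a 1-in-100,000 error rate, and a 1-in-10^24 error rate corresponds to $10^-17 on the line.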
4Said Achmiz1y
Whether the predictor is accurate isn’t specified in the problem statement, and indeed can’t be specified in the problem statement (lest the scenario be incoherent, or posit impossible epistemic states of the agent being tested). What is specified is what existing knowledge you, the agent, have about the predictor’s accuracy, and what you observe in the given situation (from which you can perhaps infer additional things about the predictor, but that’s up to you).

In other words, the scenario is: as per the information you have, so far, the predictor has predicted 1 trillion trillion times, and been wrong once (or, some multiple of those numbers—predicted 2 trillion trillion times and been wrong twice, etc.). You now observe the given situation (note predicting Right, bomb in Left, etc.). What do you do?

Now, we might ask: but is the predictor perfect? How perfect is she? Well… you know that she’s erred once in a trillion trillion times so far—ah, no, make that twice in a trillion trillion times, as of this iteration you now find yourself in. That’s the information you have at your disposal. What can you conclude from that? That’s up to you.

Likewise, you say:

The problem statement absolutely is complete. It asks what you would/should do in the given scenario. There is no need to specify what “would” happen in other (counterfactual) scenarios, because you (the agent) do not observe those scenarios. There’s also no question of what would happen if you “always spite the predictor’s prediction”, because there is no “always”; there’s just the given situation, where we know what happens if you choose Left: you burn to death.

You can certainly say “this scenario has very low probability”. That is reasonable. What you can’t say is “this scenario is logically impossible”, or any such thing. There’s no impossibility or incoherence here.

The problem statement absolutely is complete.

It's not complete enough to determine what I do when I don't see a bomb. And so when the problem statement is corrected to stop flatly asserting consequences of my actions as if they're facts, you'll find that my behavior in the corrected problem is underdefined. (If this still isn't clear, try working out what the predictor does to the agent that takes the bomb if it's present, but pays the $100 if it isn't.)

And if we're really technical, it's not actually quite complete enough to determine what I do when I see the bomb. That depends on what the predictor does when there are two consistent possible outcomes. Like, if I would go left when there was no bomb and right when there was a bomb, what would the predictor do? If they only place the bomb if I insist on going right when there's no bomb, then I have no incentive to go left upon seeing a bomb insofar as they're accurate. To force me to go left, the predictor has to be trigger-happy, dropping the bomb given the slightest opportunity.

What is specified is what existing knowledge you, the agent, have about the predictor’s accuracy, and what you observe in the given situation

Nitpic...

2Said Achmiz1y
I don’t understand this objection. The given scenario is that you do see a bomb. The question is: what do you do in the given scenario? You are welcome to imagine any other scenarios you like, or talk about counterfactuals or what have you. But the scenario, as given, tells you that you know certain things and observe certain things. The scenario does not appear to be in any way impossible. “What do I do when I don’t see a bomb” seems irrelevant to the question, which posits that you do see a bomb.

Er, what? If you take the bomb, you burn to death. Given the scenario, that’s a fact. How can it not be a fact? (Except if the bomb happens to malfunction, or some such thing, which I assume is not what you mean…?)

Well, let’s see. The problem says:

So, if the predictor predicts that I will choose Right, she will put a bomb in Left, in which case I will choose Left. If she predicts that I will choose Left, then she puts no bomb in Left, in which case I will choose Right. This appears to be paradoxical, but that seems to me to be the predictor’s fault (for making an unconditional prediction of the behavior of an agent that will certainly condition its behavior on the prediction), and thus the predictor’s problem.

I… don’t see what bearing this has on the disagreement, though. What I am saying is that we don’t have access to “questions of predictor mechanics”, only to the agent’s knowledge of “predictor mechanics”. In other words, we’ve fully specified your epistemic state by specifying your epistemic state—that’s all.

I don’t know what you mean by calling it “the problem history”. There’s nothing odd about knowing (to some degree of certainty) that certain things have happened. You know there’s a (supposed) predictor, you know that she has (apparently) made such-and-such predictions, this many times, with these-and-such outcomes, etc. What are her “mechanics”? Well, you’re welcome to draw any conclusions about that from what you know about what’s gone before. Again,
9So8res1y
The scenario says "the predictor is likely to be accurate" and then makes an assertion that is (for me, at least) false insofar as the predictor is accurate. You can't have it both ways. The problem statement (at least partially) contradicts itself. You and I have a disagreement about how to evaluate counterfactuals in cases where the problem statement is partly self-contradictory.

Sure, it's the predictor's problem, and the behavior that I expect of the predictor in the case that I force them to notice they have a problem has a direct effect on what I do if I don't see a bomb. In particular, if they reward me for showing them their problem, then I'd go right when I see no bomb, whereas if they'd smite me, then I wouldn't. But you're right that this is inconsequential when I do see the bomb.

Well, for one thing, I just looked and there's no bomb on the left, so the whole discussion is counter-to-fact. And for another, if I pretend I'm in the scenario, then I choose my action by visualizing the (counterfactual) consequences of taking the bomb, and visualizing the (counterfactual) consequences of refraining. So there are plenty of counterfactuals involved.

I assert that the (counterfactual) consequences of taking the bomb include (almost certainly) rendering the whole scenario impossible, and rendering some other hypothetical (that I don't need to pay $100 to leave) possible instead. And so according to me, the correct response to someone saying "assume you see the bomb" is to say "no, I shall assume that I see no bomb instead", because that's the consequence I visualize of (counterfactually) taking the bomb.

You're welcome to test it empirically (well, maybe after adding at least $1k to all outcomes to incentivise me to play), if you have an all-but-one-in-a-trillion-trillion accurate predictor-of-me lying around. (I expect your empiricism will prove me right, in that the counterfactuals where it shows me a bomb are all in fact rendered impossible, and what happen
4Said Achmiz1y
Well… no, the scenario says “the predictor has predicted correctly 1 trillion trillion minus one times, and incorrectly one time”. Does that make it “likely to be accurate”? You tell me, I guess, but that seems like an unnecessarily vague characterization of a precise description.

What do you mean by this? What’s contradictory about the predictor making a mistake? Clearly, it’s not perfect. We know this because it made at least one mistake in the past, and then another mistake just now. Is the predictor “accurate”? Well, it’s approximately as accurate as it takes to guess 1 trillion trillion times and only be wrong once…

I confess that this reads like moon logic to me. It’s possible that there’s something fundamental I don’t understand about what you’re saying.

I am not familiar with this, no. If you have explanatory material / intuition pumps / etc. to illustrate this, I’d certainly appreciate it! I am not asking how I could come to believe the “literally perfect predictor” thing with 100% certainty; I am asking how I could come to believe it at all (with, let’s say, > 50% certainty).

Hold on, hold on. Are we talking about repeated plays of the same game? Where I face the same situation repeatedly? Or are we talking about observing (or learning about) the predictor playing the game with other people before me? The “Bomb” scenario described in the OP says nothing about repeated play. If that’s an assumption you’re introducing, I think it needs to be made explicit…

that seems like an unnecessarily vague characterization of a precise description

I deny that we have a precise description. If you listed out a specific trillion trillion observations that I allegedly made, then we could talk about whether those particular observations justify thinking that we're in the game with the bomb. (If those trillion trillion observations were all from me waking up in a strange room and interacting with it, with no other context, then as noted above, I would have no reason to believe I'm in this game as opposed to any variety of other games consistent with those observations.) The scenario vaguely alleges that we think we're facing an accurate predictor, and then alleges that their observed failure rate (on an unspecified history against unspecified players) is 1 per trillion-trillion. It does not say how or why we got into the epistemic state of thinking that there's an accurate predictor there; we assume this by fiat.

(To be clear, I'm fine with assuming this by fiat. I'm simply arguing that your reluctance to analyze the problem by cases seems strange and likely erroneous to me.)

I am not familiar with this, no. If you have explanatory material / intuition pumps / etc. to illustrate this, I’d certainly appreciate it!

...

Well… no, the scenario says “the predictor has predicted correctly 1 trillion trillion minus one times, and incorrectly one time”. Does that make it “likely to be accurate”? You tell me, I guess, but that seems like an unnecessarily vague characterization of a precise description.

Let's be more precise, then, and speak in terms of "correctness" rather than "accuracy". There are then two possibilities in the "bomb" scenario as stipulated:

  1. The predictor thought I would take the right box, and was correct.
  2. The predictor thought I would take the right box, and was incorrect.

Now, note the following interesting property of the above two possibilities: I get to choose which of them is realized. I cannot change what the predictor thought, nor can I change its actions conditional on its prediction (which in the stipulated case involves placing a bomb in the left box), but I can choose to make its prediction correct or incorrect, depending on whether I take the box it predicted I would take.

Observe now the following interesting corollary of the above argument: it implies the existence of a certain strategy, which we might call "ObeyBot", which always chooses the action that confirms the predict...

JBlack (1y):
Yes, it was this exact objection that I addressed in my previous replies, which relied upon a misreading of the problem. I missed that the boxes were open and thought that the only clue to the prediction was the note that was left. The only solution was to assume that the predictor does not always leave a note, and this solution also works for the stated scenario.

You see that the boxes are open, and the left one contains a bomb; but did everyone else? Did anyone else? The problem setup doesn't say. This sort of vagueness leaves holes big enough to drive a truck through. The stated FDT support for picking Left depends absolutely critically on the subjunctive dependency odds being at least many millions to one, and the stated evidence is nowhere near strong enough to support that. Failing that, FDT recommends picking Right. So the whole scenario is pointless: it doesn't explore what it was intended to explore.

You can modify the problem to say that the predictor really is that reliable for every agent, but doesn't always leave the boxes open for you or write a note. This still doesn't mean that the predictor is perfectly reliable, so a SpiteBot can still face this scenario; it's just extremely unlikely to.
Heighn (1y):
There IS a question of what would happen if you "always spite the predictor's prediction", since doing so seems to make the 1 in a trillion trillion error rate impossible.
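The spite-the-predictor point above can be made concrete with a toy simulation: when the prediction is visible (open boxes, a note), an agent that always does the opposite forces any predictor to be wrong every single time, so no predictor can maintain a 1-in-a-trillion-trillion error rate against it. (A minimal sketch; the function names "SpiteBot" and "ObeyBot" follow the strategies named in the thread, and the rest is illustrative.)

```python
def spite_bot(visible_prediction: str) -> str:
    """Take whichever box the predictor did NOT predict."""
    return "Right" if visible_prediction == "Left" else "Left"

def obey_bot(visible_prediction: str) -> str:
    """Take whichever box the predictor DID predict."""
    return visible_prediction

def predictor_error_rate(agent) -> float:
    """Fraction of mispredictions over both possible published predictions."""
    predictions = ["Left", "Right"]
    errors = sum(agent(p) != p for p in predictions)
    return errors / len(predictions)

print(predictor_error_rate(spite_bot))  # 1.0: the predictor is always wrong
print(predictor_error_rate(obey_bot))   # 0.0: the predictor is always right
```

Whatever the predictor publishes, SpiteBot falsifies it, which is why the stipulated error rate cannot hold for every agent type.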
Heighn (1y):
To be clear, FDT does not accept causation that happens backwards in time. It's not claiming that the action of one-boxing itself causes there to be a million dollars in the box. It's the agent's algorithm, and, further down the causal diagram, Omega's simulation of this algorithm that causes the million dollars. The causation happens before the prediction and is nothing special in that sense.
Said Achmiz (1y):
Yes, sure. Indeed we don’t need to accept causation of any kind, in any temporal direction. We can simply observe that one-boxers get a million dollars, and two-boxers do not. (In fact, even if we accept shminux’s model [https://www.lesswrong.com/posts/TQvSZ4n4BuntC22Af/decisions-are-not-about-changing-the-world-they-are-about], this changes nothing about what the correct choice is.)
Heighn (10mo):
Eh? This kind of reasoning leads to failing to smoke on Smoking Lesion.
JBlack (1y):
The main point of FDT is that it gives the optimal expected utility on average for agents using it. It does not guarantee optimal expected utility for every instance of an agent using it.

Suppose you have a population of two billion agents, each going through this scenario every day. Upon seeing a note predicting Right, one billion would pick Left and one billion would pick Right. We can assume that they all pick Left if they see a note predicting Left, or no note at all. Every year, the Right agents essentially always see a note predicting Right, and pay more than $30,000 each. The Left agents essentially always see a note predicting Left (or no note) and pay $0 each. The average rate of deaths is comparable: one death every few trillion years in each group, which is to say, essentially never. They all know that it could happen, of course. Which group is better off?

Edit: I misread the predictor's accuracy. The problem does not say that it is 1 − 10^-24 in all scenarios, just that in some unknown sample of scenarios, it was 1 − 10^-24. This changes the odds so much that FDT does not recommend taking the left box.
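JBlack's population comparison can be checked numerically. The constants below are the comment's assumed figures (a billion agents per group, one trial per day, a 10^-24 error rate); the dollar framing of the result is mine.

```python
AGENTS_PER_GROUP = 1_000_000_000
TRIALS_PER_YEAR = 365
ERROR_RATE = 1e-24

# Right-takers pay $100 per trial, so per agent per year:
cost_right_per_agent_year = 100 * TRIALS_PER_YEAR  # $36,500

# Left-takers pay nothing, but die on a misprediction.
expected_deaths_per_year = AGENTS_PER_GROUP * TRIALS_PER_YEAR * ERROR_RATE
years_per_death = 1 / expected_deaths_per_year

print(cost_right_per_agent_year)  # 36500
print(years_per_death)            # ~2.7e12: one death every few trillion years
```

This bears out the comment's figures: over $30,000 per Right-taker per year, versus one death in the Left group every few trillion years.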
Said Achmiz (1y):
Obviously, the group that's better off is the third group: the one that picks Left if there's no bomb in there, Right otherwise.

… I mean, seriously, what the heck? The scenario specifies that the boxes are open! You can see what's in there! How is this even a question?

(Bonus question: what will the predictor say about the behavior of this third group? What choice will she predict a member of this group will make?)
Heighn (1y):
Two questions, if I may:

1. Why do you read it this way? The problem simply states the failure rate is 1 in a trillion trillion.
2. If we go with your interpretation, why exactly does that change things? It seems to me that the sample size would have to be extremely large in order to determine a failure rate that low.
JBlack (1y):
It depends upon what the meaning of the word "is" is:

1. The failure rate has been tested over an immense number of predictions, and evaluated as 10^-24 (to one significant figure). That is the currently accepted estimate for the predictor's error rate for scenarios randomly selected from the sample.
2. The failure rate is theoretically 10^-24, over some assumed distribution of agent types. Your decision model may or may not appear anywhere in this distribution.
3. The failure rate is bounded above by 10^-24 for every possible scenario.

A self-harming agent in this scenario cannot be consistently predicted by Predictor at all (success rate 0%), so we know that (3) is definitely false. (1) and (2) aren't strong enough, because they give little information about Predictor's error rate concerning your scenario and your decision model. We have essentially zero information about Predictor's true error bounds regarding agents that sometimes carry out self-harming actions.

In order to recommend taking the left box, an FDT agent must be one that sometimes carries out self-harming actions; this requires that the upper bound on Predictor's failure of subjunctive dependency be less than the ratio of the utilities of paying $100 and of burning to death all intelligent life in the universe. We do not have anywhere near enough information to justify that tight a bound, so FDT can't recommend such an action. Maybe someone else can write a scenario that is in a similar spirit, but isn't so flawed.
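The threshold condition in the comment above can be sketched as a simple expected-utility comparison: FDT prefers Left only if the misprediction probability p satisfies p × U(death) < U(pay $100), i.e. p is below the ratio of the two disutilities. (A minimal sketch; the disutility assigned to death here is an assumed illustrative figure, not from the original problem.)

```python
U_PAY_100 = 100   # disutility of paying $100, in dollars
U_DEATH = 1e10    # assumed illustrative disutility of burning to death

def fdt_prefers_left(p_error: float) -> bool:
    # Expected cost of Left = p * U_DEATH; cost of Right = $100 for certain.
    return p_error * U_DEATH < U_PAY_100

print(fdt_prefers_left(1e-24))  # True: the stipulated rate clears the bar
print(fdt_prefers_left(1e-6))   # False: a looser bound does not
```

The dispute in the thread is precisely whether the evidence supports a bound tight enough to make the first case, rather than the second, the one we are actually in.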
Heighn (1y):
Thanks, I appreciate this. Your answer clarifies a lot, and I will think about it more.
JBlack (1y):
Another way of phrasing it: you don't get the $100 marginal payoff if you're not prepared to knowingly go to your death in the incredibly unlikely event of a particular type of misprediction. That's the sense in which I meant "you got screwed". You entered the scenario knowing that it was incredibly unlikely that you would die regardless of what you decide, but were prepared to accept that incredibly microscopic chance of death in exchange for keeping your $100. The odds just went against you. Edit: If Predictor's actual bound on error rate was 10^-24, this would be valid. However, Predictor's bound on error rate cannot be 10^-24 in all scenarios, so this is all irrelevant. What a waste of time.
