All of Heighn's Comments + Replies

Ah, so your complaint is that the author is ignoring evidence pointing to shorter timelines. I understand your position better now :)

"Insofar as your distribution has a faraway median, that means you have close to certainty that it isn't happening soon. And that, I submit, is ridiculously overconfident and epistemically unhumble."

Why? You can say a similar thing about any median anyone ever has. Why is this median in particular overconfident?

3Daniel Kokotajlo2mo
Because it's pretty obvious that there's at least some chance of AGI etc. happening soon. Many important lines of evidence support this:
- Many renowned world experts in AI and AGI forecasting say so, possibly even most
- Just look at ChatGPT4
- Read the Bio Anchors report
- Learn more about AI, deep learning, etc., and in particular about scaling laws and the lottery ticket hypothesis etc., and then get up to speed with everything OpenAI and other labs are doing, and then imagine what sorts of things could be built in the next few years using bigger models with more compute and data etc.
- Note the scarcity of any decent object-level argument that it won't happen soon. Bio Anchors has the best arguments that it won't happen this decade, IMO. If you know of any better one I'd be interested to be linked to it or have it explained to me!

"And not only do I not expect the trained agents to not maximize the original “outer” reward signal"

Nitpick: one "not" too many?

3TurnTrout3mo
Thanks, fixed.

I apologize, Said; I misinterpreted your (clearly written) comment.

Reading your newest comment, it seems I actually largely agree with you - the disagreement lies in whether farm animals have sentience.

-8[anonymous]4mo

(No edit was made to the original question.)

Thanks for your answer!

I (strongly) disagree that sentience is uniquely human. It seems to me a priori very unlikely that this would be the case, and evidence does exist to the contrary. I do agree sentience is an important factor (though I'm unsure it's the only one).

7Said Achmiz4mo
I didn’t say that sentience is uniquely human, though.

Now, to be clear: on the “a priori very unlikely” point, I don’t think I agree. I don’t actually think that it’s unlikely at all; but nor do I think that it’s necessarily very likely, either. “Humans are the only species on Earth today that are sentient” seems to me to be something that could easily be true, but could also easily be false. I would not be very surprised either way (with the caveat that “sentience” seems at least partly to admit of degrees—“partly” because I don’t think it’s fully continuous, and past a certain point it seems obvious that the amount of sentience present is “none”, i.e. I am not a panpsychist—so “humans are not uniquely sentient” would almost certainly not be the same thing as “there exist other species with sentience comparable to humans”).

But please note: nothing in the above paragraph is actually relevant to what we’ve been discussing in this thread! I’ve been careful to refer to “animals I eat”, “critters we normally eat”, “food animals”, listing examples like pigs and sheep and chickens, etc. Now, you might press me on some edge cases (what about octopuses, for instance? those are commonly enough found as food items even in the West), but on the whole, the distinction is clear enough.

Dolphins, for example, might be sentient (though I wouldn’t call it a certainty by any means), and if you told me that there’s an industry wherein dolphins are subjected to factory-farming-type conditions, I’d certainly object to such a thing almost as much as I object to, e.g., China’s treatment of Uyghurs (to pick just one salient modern example out of many possible such). But I don’t eat any factory-farmed dolphins. And the topic here, recall, is my eating habits. Neither do I eat crows, octopuses (precisely for the reason that I am not entirely confident about their lack of sentience!), etc.

"but certainly none of the things that we (legally) do with animals are bad for any of the important reasons why torture of people is bad."

That seems very overconfident to me. What are your reasons for believing this, if I may ask? What quality or qualities do humans have, that animals lack, that make you certain of this?

3Said Achmiz4mo
Sorry, could you clarify? What specifically do you think I’m overconfident about? In other words, what part of this are you saying I could be mistaken about, the likelihood of which mistake I’m underestimating? Are you suggesting that things are done to animals of which I am unaware, which I would judge to be bad (for some or all of the same reasons why torture of people is bad) if I were aware of them? Or something else?

EDIT: Ah, apologies, I just noticed on a re-read (was this added via edit after initial posting?) that you asked:

This clarifies the question. As for the answer, it’s simple enough: sentience (in the classic sense of the term)—a.k.a. “subjective consciousness”, “self-awareness”, etc. Cows, pigs, chickens, sheep… geese… deer… all the critters we normally eat… they don’t have anything like this, very obviously. (There’s no reason why they would, and they show no sign of it. The evidence here is, on the whole, quite one-sided.) Since the fact that humans are sentient is most of what makes it bad to torture us—indeed, what makes it possible to “torture” us in the first place—the case of animals is clearly disanalogous. (The other things that make it bad to torture humans—having to do with things like social structures, game-theoretic incentives, etc.—apply to food animals even less.)

One-boxing on Newcomb's Problem is good news IMO. Why do you believe it's bad?

2Noosphere894mo
It basically comes down to the fact that agents using too-smart decision theories like FDT or UDT can fundamentally be deceptively aligned, even if myopia is retained by default. That's the problem with one-boxing in Newcomb's problem: it implies that our GPTs could very well become deceptively aligned. Link below:

https://www.lesswrong.com/posts/LCLBnmwdxkkz5fNvH/open-problems-with-myopia

The LCDT decision theory does prevent deception, assuming it's implemented correctly. Link below:

I can, although I indeed don't think it is nonsense.

What do you think our (or specifically my) viewpoint is?

1Said Achmiz4mo
I’m no longer sure; you and green_leaf appear to have different, contradictory views, and at this point that divergence has confused me enough that I could no longer say confidently what either of you seem to be saying without going back and carefully re-reading all the comments. And that, I’m afraid, isn’t something that I have time for at the moment… so perhaps it’s best to write this discussion off, after all.

Hmm, interesting. I don't know much about UDT. From an FDT perspective, I'd say that if you're in the situation with the bomb, your decision procedure already Right-boxed and therefore you're Right-boxing again, as a logical necessity. (Making the problem very interesting.)

Sorry, I'm having trouble understanding your point here. I understand your analogy (I was a developer), but am not sure what you're drawing the analogy to.

I've been you ten years ago.

Just... no. Don't act like you know me, because you don't. I appreciate you trying to help, but this isn't the way.

2Vladimir_Nesov4mo
These norms are interesting in how well they fade into the background and oppose being examined. If you happen to be a programmer, or have enough of an impression of what that might be like, just imagine a programming team where talking about bugs can be taboo in some circumstances, especially if they are hypothetical bugs imagined out of whole cloth to check whether they happen to be there, or brought to attention to see if it's cheap to put measures in place to prevent their going unnoticed, even if it eventually turns out that they were never actually there to begin with. With rationality, the analogue is hypotheses about how people think, including hypotheses about norms that oppose the examination of such hypotheses and norms.

Seems to me Yudkowsky was (way) too pessimistic about OpenAI there. They probably knew something like this would happen.

To explain my view more, the question I try to answer in these problems is more or less: if I were to choose a decision theory now to strictly adhere to, knowing I might run into the Bomb problem, which decision theory would I choose?
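
(To make that concrete, here is a minimal sketch of the outcome structure as I understand it from this thread, assuming the predictor puts a bomb in Left exactly when it predicts Right, that Right always costs $100, and that Left is free unless the bomb is there.)

```python
# Sketch of the Bomb set-up as discussed in this thread (assumptions:
# bomb in Left iff the predictor predicted Right; Right costs $100;
# Left is free unless the bomb is there).
def outcome(prediction: str, choice: str) -> str:
    bomb_in_left = (prediction == "Right")
    if choice == "Right":
        return "pay $100 and live"
    return "burn to death" if bomb_in_left else "live for free"

for prediction in ("Left", "Right"):
    for choice in ("Left", "Right"):
        print(f"predicted {prediction}, chose {choice}: {outcome(prediction, choice)}")

# predicted Left, chose Left: live for free
# predicted Left, chose Right: pay $100 and live
# predicted Right, chose Left: burn to death
# predicted Right, chose Right: pay $100 and live
```

A near-perfect predictor almost always lands you in the row that matches your actual policy, which is why I say a committed Left-boxer almost never sees a bomb.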

"But by the time the situation described in the OP happens, it no longer matters whether you optimize expected utility over the whole sample space; that goal is now moot."

This is what we agree on. If you're in the situation with a bomb, all that matters is the bomb.

My stance is that Left-boxers virtually never get into the situation to begin with, because of the prediction Omega makes. So with probability close to 1, they never see a bomb.

Your stance (if I understand correctly) is that the problem statement says there is a bomb, so, that's what's true with... (read more)

4Vladimir_Nesov4mo
But that's false for a UDT agent: it still matters to that agent-instance-in-the-situation what happens in other situations, those without a bomb. It's not the case that all that matters is the bomb (or even a bomb).
1Heighn4mo
To explain my view more, the question I try to answer in these problems is more or less: if I were to choose a decision theory now to strictly adhere to, knowing I might run into the Bomb problem, which decision theory would I choose?

I see your point, although I have entertained Said's view as well. But yes, I could have done better. I tend to get like this when my argumentation is being called crazy, and I should have done better.

You could have just told me this instead of complaining about me to Said though.

I don't see how it is misleading. Achmiz asked what actually happens; it is, in virtually all possible worlds, that you live for free.

2Vladimir_Nesov4mo
It is misleading because Said's perspective is to focus on the current situation, without regarding the other situations as decision relevant. From the UDT perspective you are advocating, the other situations remain decision relevant, and that explains much of what you are talking about in other replies. But from that same perspective, it doesn't matter that you live in the situation Said is asking about, so it's misleading that you keep attention on this situation in your reply without remarking on how that disagrees with the perspective you are advocating in other replies.

In the parent comment, you say "it is, in virtually all possible worlds, that you live for free". This is confusing: are you talking about the possible worlds within the situation Said was asking about, or also about possible worlds outside that situation? The distinction matters for the argument in these comments, but you are saying this ambiguously.

Note that it's my argumentation that's being called crazy, which is a large factor in the "antagonism" you seem to observe - a word choice I don't agree with, btw.

About the "needlessly upping the heat", I've tried this discussion from multiple different angles, seeing if we can come to a resolution. So far, no, alas, but not for lack of trying. I will admit some of my reactions were short and a bit provocative, but I don't appreciate nor agree with your accusations. I have been honest in my reactions.

2Vladimir_Nesov4mo
I've been you ten years ago. This doesn't help: courtesy or honesty (purposes that tend to be at odds with each other) aren't always sufficient; it's also necessary to entertain strange points of view that are obviously wrong, in order to talk in another's language, and to de-escalate where escalation won't help (it might help with feeding norms, but knowing what norms you are feeding is important). And often enough that is still useless and the best thing is to give up. Or at least to more decisively overturn the chess board, as I'm doing with some of the last few comments to this post, to avoid remaining in an interminable failure mode.

Interesting. I'm having the opposite experience (due to timing, apparently), where at least it's making some sense now. I've seen it using tricks only applicable to addition and pulling numbers out of its ass, so I was surprised that what it did wasn't completely wrong.

Asking the same question again even gives a completely different (but again wrong) result:

If you ask ChatGPT to multiply two 4-digit numbers it writes out the reasoning process in natural language and comes to the right answer.

People keep saying such things. Am I missing something? I asked it to calculate 1024 * 2047, and the answer isn't even close. (Though to my surprise, the first 2 steps are at least correct steps, and not nonsense. And it is actually adding the right numbers together in step 3, again, to my surprise. I've seen it perform much, much worse.)
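
(For reference, here is the correct product and the partial-products breakdown, worked out in Python; this is just ordinary long multiplication, not a claim about what ChatGPT does internally.)

```python
# Long multiplication of 1024 * 2047 by partial products.
a, b = 1024, 2047

partials = [a * int(digit) * 10**i for i, digit in enumerate(reversed(str(b)))]

print(partials)       # [7168, 40960, 0, 2048000]
print(sum(partials))  # 2096128
print(a * b)          # 2096128, so the partial products check out
```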

2ChristianKl4mo
I did ask it to multiply numbers at the beginning, and it seems to behave differently now than it did 5 weeks ago and isn't doing correct multiplications anymore. Unfortunately, I can't access the old chats.
1Heighn4mo
Asking the same question again even gives a completely different (but again wrong) result:

That's what I've been saying to you: a contradiction.

And there are two ways to resolve it.

The scenario also stipulates the bomb isn't there if you Left-box.

What actually happens? Not much. You live. For free.

"So if you take the Left box, what actually, physically happens?"

You live. For free. Because the bomb was never there to begin with.

Yes, the situation does say the bomb is there. But it also says the bomb isn't there if you Left-box.

1Vladimir_Nesov4mo
This is misleading. What happens is that the situation you found yourself in doesn't take place with significant measure. You live mostly in different situations, not this one.
2Said Achmiz4mo
At the very least, this is a contradiction, which makes the scenario incoherent nonsense. (I don’t think it’s actually true that “it also says the bomb isn’t there if you Left-box”—but if it did say that, then the scenario would be inconsistent, and thus impossible to interpret.)

Agreed, but I think it's important to stress that it's not like you see a bomb, Left-box, and then see it disappear or something. It's just that Left-boxing means the predictor already predicted that, and the bomb was never there to begin with.

Put differently, you can only Left-box in a world where the predictor predicted you would.

2Said Achmiz4mo
What stops you from Left-boxing in a world where the predictor didn’t predict that you would?

To make the question clearer, let’s set aside all this business about the fallibility of the predictor. Sure, yes, the predictor’s perfect, it can predict your actions with 100% accuracy somehow, something about algorithms, simulations, models, whatever… fine. We take all that as given.

So: you see the two boxes, and after thinking about it very carefully, you reach for the Right box (as the predictor always knew that you would). But suddenly, a stray cosmic ray strikes your brain! No way this was predictable—it was random, the result of some chain of stochastic events in the universe. And though you were totally going to pick Right, you suddenly grab the Left box instead. Surely, there’s nothing either physically or logically impossible about this, right?

So if the predictor predicted you’d pick Right, and there’s a bomb in Left, and you have every intention of picking Right, but due to the aforesaid cosmic ray you actually take the Left box… what happens?
2Said Achmiz4mo
But the scenario stipulates that the bomb is there. Given this, taking the Left box results in… what? Like, in that scenario, if you take the Left box, what actually happens?
1green_leaf4mo
Yes, that's correct. By executing the first algorithm, the bomb has never been there.

Here it's useful to distinguish between agentic 'can' and physical 'can.' Since I assume a deterministic universe for simplification, there is only one physical 'can.' But there are two agentic 'can''s - no matter the prediction, I can agentically choose either way. The predictor's prediction is logically posterior to my choice, and his prediction (and the bomb's presence) are the way they are because of my choice. So I can Left-box even if there is a bomb in the left box, even though it's physically impossible.

(It's better to use agentic can over physical can for decision-making, since that use of can allows us to act as if we determined the output of all computations identical to us, which brings about better results. The agent that uses the physical can as their definition will see the bomb more often.)

Unless I'm missing something.

I think we agree. My stance: if you Left-box, that just means the predictor predicted that with probability close to 1. From there on, there are a trillion trillion - 1 possible worlds where you live for free, and 1 where you die.

I'm not saying "You die, but that's fine, because there are possible worlds where you live". I'm saying that "you die" is a possible world, and there are way more possible worlds where you live.

I'm not going to make you cite anything. I know what you mean. I said Right-boxing is a consequence, given a certain resolution of the problem; I always maintained Left-boxing is the correct decision. Apparently I didn't explain myself well, that's on me. But I'm kinda done, I can't seem to get my point across (not saying it's your fault btw).

By construction it is not, because the scenario is precisely that we find ourselves in one such exceptional case; the posterior probability (having observed that we do so find ourselves) is thus ~1.

Except that we don't find ourselves there if we Left-box. But we seem to be going around in a circle.

… but you have said, in a previous post, that if you find yourself in this scenario, you Right-box. How to reconcile your apparently contradictory statements…?

Right-boxing is the necessary consequence if we assume the predictor's Right-box prediction is fixed now... (read more)

2Said Achmiz5mo
There’s no “if” about it. The scenario is that we do find ourselves there. (If you’re fighting the hypothetical, you have to be very explicit about that, because then we’re just talking about two totally different, and pretty much unrelated, things. But I have so far understood you to not be doing that.)

I don’t know what you mean by “apparently”. You have two boxes—that’s the scenario. Which do you choose—that’s the question. You can pick either one; where does “apparently” come in?

What does this mean? The boxes are already in front of you. You just said in this very comment that you Right-box in the given scenario! (And also in several other comments… are you really going to make me cite each of them…?)

No, that's just plain wrong. If you Left-box given a perfect predictor, the predictor didn't put a bomb in Left. That's a given. If the predictor did put a bomb in Left and you Left-box, then the predictor isn't perfect.

Firstly, there’s a difference between “never” and “extremely rarely”.

That difference is so small as to be negligible.

And in the latter case, the question remains “and what do you do then?”. To which, it seems, you answer “choose the Right box”…? Well, I agree with that! But that’s just the view that I’ve already described as “Left-box unless there’s a bomb in Left, in which case Right-box”.

It seems to me that strategy leaves you manipulable by the predictor, who can then just always predict you will Right-box, put a bomb in Left, and let you Right-box, causing you to lose $100.

0Said Achmiz5mo
By construction it is not, because the scenario is precisely that we find ourselves in one such exceptional case; the posterior probability (having observed that we do so find ourselves) is thus ~1. … but you have said, in a previous post, that if you find yourself in this scenario, you Right-box. How to reconcile your apparently contradictory statements…?

"Irrelevant, since the described scenario explicitly stipulates that you find yourself in precisely that situation."

Actually, this whole problem is irrelevant to me, a Left-boxer: Left-boxers never (or extremely rarely) find themselves in the situation with a bomb in Left. That's the point.

0Said Achmiz5mo
Firstly, there’s a difference between “never” and “extremely rarely”. And in the latter case, the question remains “and what do you do then?”. To which, it seems, you answer “choose the Right box”…? Well, I agree with that! But that’s just the view that I’ve already described as “Left-box unless there’s a bomb in Left, in which case Right-box”. It remains unclear to me what it is you think we disagree on.

The bottom line is: to the actual single question the scenario asks—which box do you choose, finding yourself in the given situation?—we give the same answer. Yes?

The bottom line is that Bomb is a decision problem. If I am still free to make a decision (which I suppose I am, otherwise it isn't much of a problem), then the decision I make is made at 2 points in time. And then, Left-boxing is the better decision.

Yes, the Bomb is what we're given. But with the very reasonable assumption of subjunctive dependence, it specifies what I am saying...

We agree that if I were there, I would Right-box, but then everybody would Right-box, as a logical necessity (well, modulo the 1 in a trillion trillion error rate, sure). It has nothing to do with correct or incorrect decisions, viewed like that: the decision is already hard-coded into the problem statement, because of the subjunctive dependence.

"But you can just Left-box" doesn't work: that's like expecting one calculator to answer to 2 + 2 differently than another calculator.

Alright. The correct decision is Left-boxing, because that means the predictor's model Left-boxed (and so do I), letting me live for free. Because, at the point where the predictor models me, the Bomb isn't placed yet (and never will be).

However, IF I'm in the Bomb scenario, then the predictor's model already Right-boxed. Then, because of subjunctive dependence, it's apparently not possible for me to Left-box, just as it is impossible for two calculators to give a different result to 2 + 2.

2Said Achmiz5mo
Well, the Bomb scenario is what we’re given. So the first paragraph you just wrote there is… irrelevant? Inapplicable? What’s the point of it? It’s answering a question that’s not being asked.

As for the last sentence of your comment, I don’t understand what you mean by it. Certainly it’s possible for you to Left-box; you just go ahead and Left-box. This would be a bad idea, of course! Because you’d burn to death. But you could do it! You just shouldn’t—a point on which we, apparently, agree.

The bottom line is: to the actual single question the scenario asks—which box do you choose, finding yourself in the given situation?—we give the same answer. Yes?

Hmmm, I thought that comment might clear things up, but apparently it doesn't. And I'm left wondering if you even read it.

Anyway, Left-boxing is the correct decision. But since you didn't really engage with my points, I'll be leaving now.

2Said Achmiz5mo
What does it mean to say that Left-boxing is “the correct decision” if you then say that the decision you’d actually make would be to Right-box? This seems to be straightforwardly contradictory, in a way that renders the claim nonsensical. I read all your comments in this thread. But you seem to be saying things that, in a very straightforward way, simply don’t make any sense…

Not at the point in time where Omega models my decision procedure.

One thing we do agree on:

If I ever find myself in the Bomb scenario, I Right-box. Because in that scenario, the predictor's model of me already Right-boxed, and therefore I do, too - not as a decision, per se, but as a logical consequence.

The correct decision is another question - that's Left-boxing, because the decision is being made in two places. If I find myself in the Bomb scenario, that just means the decision to Right-box was already made.

The Bomb problem asks what the correct decision is, and makes clear (at least under my assumption) that the deci... (read more)

1green_leaf5mo
Unless I'm missing something, it's possible you're in the predictor's simulation, in which case it's possible you will Left-box.
2Said Achmiz5mo
If we agree on that, then I don’t understand what it is that you think we disagree on! (Although the “not as a decision, per se” bit seems… contentless.)

No, it asks what decision you should make. And we apparently agree that the answer is “Right”.

That's one possible world. There are many more where I don't burn to death.

0Said Achmiz5mo
But… there aren’t, though. They’ve already failed to be possible, at that point.

No, it isn't. In the world that's stipulated, you still have to make your decision.

That decision is made in my head and in the predictor's head. That's the key.

2Said Achmiz5mo
But if you choose Left, you will burn to death. I’ve already quoted that. Says so right in the OP.

The world you're describing is just as much a possible world as the ones I describe. That's my point.

3Said Achmiz5mo
Huh? It’s the world that’s stipulated to be the actual world, in the scenario.

Inferring that I don't burn to death depends on

  1. Omega modelling my decision procedure
  2. Cause and effect from there.

That's it. No esoteric assumptions. I'm not talking about a multiverse with worlds existing next to each other or whatever, just possible worlds.

3Said Achmiz5mo
If they’re just possible worlds, then why do they matter? They’re not actual worlds, after all (by the time the described scenario is happening, it’s too late for any of them to be actual!). So… what’s the relevance?

  1. That I'm being crazy
  2. That Left-boxing means burning to death
  3. That your answer is obviously correct

Take your pick.

2Said Achmiz5mo
The scenario stipulates this:

You infer the existence of me burning to death from what's stated in the problem as well. There's no difference.

I do have the assumption of subjunctive dependence. But without that one - if, say, the predictor predicts by looking at the color of my shoes - then I don't Left-box anyway.

1green_leaf5mo
I think it's better to explain to such people the problem where the predictor is perfect, and then generalize to an imperfect predictor. They don't understand the general principle of your present choices pseudo-overwriting the entire timeline and can't think in the seemingly-noncausal way that optimal decision-making requires. By jumping right to an imperfect predictor, the principle becomes, I think, too complicated to explain [https://www.lesswrong.com/posts/HLqWn5LASfhhArZ7w/expecting-short-inferential-distances].
2Said Achmiz5mo
Of course there’s a difference: inferring burning to death just depends on the perfectly ordinary assumption of cause and effect, plus what is very explicitly stated in the problem. Inferring the existence of other worlds depends on much more esoteric assumptions than that. There’s really no comparison at all.

Not only is that not the only assumption required, it’s not even clear what it means to “assume” subjunctive dependence. Sure, it’s stipulated that the predictor is usually (but not quite always!) right about what you’ll do. What else is there to this “assumption” than that? But how that leads to “other worlds exist” and “it’s meaningful to aggregate utility across them” and so on… I have no idea.

Yeah you keep repeating that. Stating it. Saying it's simple, obvious, whatever. Saying I'm being crazy. But it's just wrong. So there's that.

3Said Achmiz5mo
Which part of what I said you deny…?

My point is that those "other worlds" are just as much stipulated by the problem statement as that one world you focus on. So, you pay $100 and don't burn to death. I don't pay $100, burn to death in 1 world, and live for free in a trillion trillion - 1 worlds. Even if I value my life at $10,000,000,000,000, my choice gives more utility.
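
(Spelling that out with the thread's own figures: a 1-in-a-trillion-trillion error rate, a $100 cost for Right, and a $10,000,000,000,000 value on my life, all stipulations for the sake of argument.)

```python
# Expected cost of each fixed policy, weighting the possible worlds by
# the predictor's accuracy. All figures are the ones used in this thread.
ERROR_RATE = 1e-24       # wrong about once in a trillion trillion cases
VALUE_OF_LIFE = 10**13   # stipulated dollar value placed on burning to death
COST_RIGHT = 100         # taking Right always costs $100

expected_cost_left = ERROR_RATE * VALUE_OF_LIFE   # burn only in the rare mispredicted world
expected_cost_right = COST_RIGHT                  # pay $100 in every world

print(expected_cost_left)   # 1e-11, about a hundred-billionth of a dollar
print(expected_cost_right)  # 100
```

So ex ante, committing to Left-boxing comes out far ahead; that's the sense in which I say my choice gives more utility.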

2Said Achmiz5mo
Sorry, but no, they’re not. You may choose to infer their “existence” from what’s stated in the problem—but that’s an inference that depends on various additional assumptions (e.g. about the nature of counterfactuals, and all sorts of other things). All that’s actually stipulated is the one world you find yourself in.

(Btw, you can call your answer "obvious" and my side "crazy" all you want, but it won't change a thing until you actually demonstrate why and how FDT is wrong, which you haven't done.)

2Said Achmiz5mo
I’ve done that: FDT is wrong because it (according to you) recommends that you choose to burn to death, when you could easily choose not to burn to death. Pretty simple.

But of course there isn’t actually a contradiction. (Which you know, otherwise you wouldn’t have needed to hedge by saying “in a way”.)

There is, as I explained. There are two ways of resolving it, but yours isn't one of them. You can't have it both ways.

It’s simply that the problem says that if you Left-box, then the predictor predicted this, and will not have put a bomb in Left… usually. Almost always! But not quite always. It very rarely makes mistakes! And this time, it would seem, is one of those times.

Just... no. "The predictor predicted this", yes, so th... (read more)

2Said Achmiz5mo
The problem stipulates that you actually, in fact, find yourself in a world where there’s a bomb in Left. These “other worlds” are—in the scenario we’re given—entirely hypothetical (or “counterfactual”, if you like). Do they even exist? If so, in what sense? Not clear. But in the world you find yourself in (we are told), there’s a bomb in the Left box. You can either take that box, and burn to death, or… not do that.

So, “why choose to focus on” that world? Because that’s the world we find ourselves in, where we have to make the choice. Paying $100 to avoid burning to death isn’t something that “seems odd”, it’s totally normal and the obviously correct choice.

Yes, almost perfectly (well, it has to be “almost”, because it’s also stipulated that the predictor got it wrong this time).

Well, not with your answer, because you Right-box. But anyway.

Why does it matter? We know that there’s a bomb in Left, because the scenario tells us so.

It matters a lot, because in a way the problem description is contradicting itself (which happens more often in Newcomblike problems).

  1. It says there's a bomb in Left.
  2. It also says that if I Left-box, then the predictor predicted this, and will not have put a Bomb in Left. (Unless you ass
... (read more)
2Said Achmiz5mo
Well, first of all, if there is actually a contradiction in the scenario, then we’ve been wasting our time. What’s to talk about? In such a case the answer to “what happens in this scenario” is “nothing, it’s logically impossible in the first place”, and we’re done.

But of course there isn’t actually a contradiction. (Which you know, otherwise you wouldn’t have needed to hedge by saying “in a way”.) It’s simply that the problem says that if you Left-box, then the predictor predicted this, and will not have put a bomb in Left… usually. Almost always! But not quite always. It very rarely makes mistakes! And this time, it would seem, is one of those times. So there’s no contradiction, there’s just a (barely) fallible predictor.

So the scenario tells us that there’s a bomb in Left, we go “welp, guess the predictor screwed up”, and then… well, apparently FDT tells us to choose Left anyway? For some reason…? (Or does it? You tell me…) But regardless, obviously the correct choice is Right, because Left’s got a bomb in it. I really don’t know what else there is to say about this.

"Irrelevant, since the described scenario explicitly stipulates that you find yourself in precisely that situation."

It also stipulates the predictor predicts almost perfectly. So it's very relevant.

"Yes, that’s what I’ve been saying: choosing Right in that scenario is the correct decision."

No, it's the wrong decision. Right-boxing is just the necessary consequence of the predictor predicting that I Right-box. But insofar as this is a decision problem, Left-boxing is correct, and then the predictor predicted I would Left-box.

"No, Left-boxing means we burn to death.... (read more)

2Said Achmiz5mo
Yes, almost perfectly (well, it has to be “almost”, because it’s also stipulated that the predictor got it wrong this time).

None of this matters, because the scenario stipulates that there’s a bomb in the Left box.

But it’s stipulated that the predictor did put a bomb in Left. That’s part of the scenario.

Why does it matter? We know that there’s a bomb in Left, because the scenario tells us so.