I am much more optimistic about our future than Yudkowsky is. But that is not the topic of this post.
I would very much like to know why you are much more optimistic! Have you written about that?
Thanks! I should have been more clear; "Nature is healing" has some "EY was wrong in his post" energy I was wondering about.
What do you mean with "Finally, nature is healing"?
Sorry, I should have been more clear. I know about FOOM; I was curious as to why you believe EY was wrong on FOOM and why you suggest the update on x-risk.
Could you explain point A?
Suppose the reader has a well-defined utility function where death or torture is set to minus infinity. Then the writer can't persuade them to trade off death or torture against any finite amount of utility. So, in what sense is the reader wrong about their own preferences?
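To make the point concrete, here's a minimal sketch (my own illustration, not from the post) showing that once an outcome carries minus-infinite utility, no finite payoff and no probability reweighting can ever make a gamble involving it preferable to a safe option:

```python
import math

DEATH = -math.inf  # death/torture assigned minus-infinite utility

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Safe option: $0 payoff, certain survival.
safe = expected_utility([(1.0, 0.0)])

# Gamble: any finite payoff, with even an astronomically small death risk.
for payoff in (100, 1_000_000, 1e100):
    gamble = expected_utility([(1e-30, DEATH), (1 - 1e-30, payoff)])
    assert gamble == -math.inf  # -inf swamps every finite term
    assert safe > gamble        # the safe option always wins
```

So under this utility function the agent's refusal to trade isn't a mistake the writer can argue them out of; it follows directly from the arithmetic of infinite disutility.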
I think the original Bomb scenario should have come with an explicit value for "not being blown up", say $1,000,000. That would have allowed for easy and agreed-upon expected utility calculations.
What makes the bomb dilemma seem unfair to me is the fact that it's conditioning on an extremely unlikely event. The only way we blow up is if the predictor predicted incorrectly. But by assumption, the predictor is near-perfect. So it seems implausible that this outcome would ever happen.
Although I strongly disagree with Achmiz on the Bomb scenario in general, here we agree: Bomb is perfectly fair. You just have to take the probabilities into account, after which - if we value life at, say, $1,000,000 - Left-boxing is the only correct strategy.
CDT indeed Right-boxes, thereby losing utility.
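The expected-utility comparison above can be sketched numerically. This assumes the standard Bomb setup (the bomb is in Left only when the predictor erred about a Left-boxing policy, and Right-boxing always costs $100) with life valued at the stipulated $1,000,000:

```python
LIFE_VALUE = 1_000_000  # stipulated dollar value of not being blown up
COST_RIGHT = 100        # Right-boxing fee in the standard scenario

def eu_left(error_rate):
    # With a Left-boxing policy, you only meet the bomb when the
    # predictor mispredicted you, i.e. with probability error_rate.
    return -error_rate * LIFE_VALUE

def eu_right():
    # Right-boxing always costs $100 and never risks the bomb.
    return -COST_RIGHT

# A near-perfect predictor makes Left-boxing the clear winner:
assert eu_left(1e-24) > eu_right()

# Left-boxing only stops paying once error_rate * LIFE_VALUE exceeds $100:
assert eu_left(1e-3) < eu_right()
```

With the probabilities made explicit, the break-even error rate is just $100 / $1,000,000 = 10^-4; any predictor more reliable than that makes Left-boxing the higher-expected-utility choice.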
Hanson and Christiano agree with EY that doom is likely.
I'm not sure about Hanson, but Christiano is a lot more optimistic than EY.
Great post overall, you're making interesting points!
Couple of comments:
There are 8 possible worlds here, with different utilities and probabilities.
Some decision theorists tend to get confused over this because they think of this magical thing they call "causality," the qualia of your decisions being yours and free, causing the world to change upon your metaphysical command. They draw fancy causal graphs like this one:
That seems like an unfair criticism of the FDT paper. Drawing such a diagram doesn't imply one believes causality to be magic any more than making your table of possible worlds does. Specifically, the diagrams in the FDT paper don't say decisions are "yours and free", at least if I understand you correctly. Your decisions are caused by your decision algorithm, which in some situations is implemented in other agents as well.