Bostrom's Footnote 21 seems innocuous, but to me it unravels a lot of the argument Bostrom is making.[1]
Bostrom's central go/no-go model suggests a P(doom) of up to 97% is acceptable if life expectancy rises to 1,400 years post-AGI.
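If I understand the go/no-go comparison correctly, the 97% figure comes from weighing expected remaining life with and without launch; taking roughly 40 remaining years as a round-number assumption on my part:

$$ (1 - P(\text{doom})) \times 1400 \ \text{years} \;\ge\; 40 \ \text{years} \quad\Longleftrightarrow\quad P(\text{doom}) \;\le\; 1 - \tfrac{40}{1400} \approx 0.97. $$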
Footnote 21 clarifies: "more generally, we could take P(doom) to be the expected fraction of the human population that dies when superintelligence is launched."
So, suppose one could extend life to 1,400 years for 3% of humans at the cost of killing the other 97% right now. How should we reply to this deal? Bostrom's go/no-go model says to accept, but I think people would overwhelmingly find the deal morally reprehensible.
It's straightforward to observe that the two deals, Bostrom's... (read more)
Bostrom's results seem very sensitive to deviations from a wholly person-affecting perspective. To investigate, I coded up the model from Appendix A with one modification: I supposed that, instead of being wholly self-interested, people are willing to sacrifice 10% of life expectancy for the sake of all future generations.
My method was to find the launch time later than the selfishly optimal time-point, but only late enough that life expectancy is reduced by 10% relative to the selfish optimum.[1] The method is crude, but it illustrates how rush-to-launch loses support if one moves even mildly away from a person-affecting view.
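Here is a minimal sketch of the kind of calculation I mean. It is not Bostrom's actual Appendix A code: the constant-hazard pre-launch mortality (an ~80-year baseline) and the exponential decay of P(doom) are my own simplifications, while the 20% initial P(doom), 10%/yr safety progress, 1,400-year post-launch life expectancy, and 10% sacrifice are the figures discussed above.

```python
import numpy as np

# Illustrative toy parameters (see caveats above).
P_DOOM_0 = 0.20          # P(doom) if we launched today
SAFETY_PROGRESS = 0.10   # fractional reduction in P(doom) per year of delay
BASE_HAZARD = 1 / 80     # pre-launch annual mortality (~80-year baseline)
POST_AGI_LIFE = 1400     # life expectancy (years) after a successful launch
SACRIFICE = 0.10         # fraction of selfish life expectancy given up

def life_expectancy(t):
    """Expected remaining years for someone alive now, if launch happens at year t."""
    p_doom = P_DOOM_0 * (1 - SAFETY_PROGRESS) ** t
    p_survive = np.exp(-BASE_HAZARD * t)            # chance of living to see the launch
    pre_launch_years = (1 - p_survive) / BASE_HAZARD  # expected years lived before t
    return pre_launch_years + p_survive * (1 - p_doom) * POST_AGI_LIFE

ts = np.linspace(0, 200, 2001)
le = np.array([life_expectancy(t) for t in ts])

# Selfish optimum: the launch time maximizing one's own life expectancy.
selfish_idx = le.argmax()
selfish_t, selfish_le = ts[selfish_idx], le[selfish_idx]

# Mildly altruistic launch: the latest time after the selfish optimum that
# costs at most 10% of the selfish life expectancy.
threshold = (1 - SACRIFICE) * selfish_le
delayed_t = ts[(ts >= selfish_t) & (le >= threshold)].max()

print(f"selfish optimum: launch at year {selfish_t:.1f}, "
      f"life expectancy {selfish_le:.0f} years")
print(f"10%-sacrifice launch: year {delayed_t:.1f}, "
      f"life expectancy {life_expectancy(delayed_t):.0f} years")
```

The point of the delayed launch time is that the extra years of safety progress lower P(doom) for everyone who comes later, at a bounded personal cost to those alive now.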
For example, with 20% P(doom) and 10%/yr safety progress, the selfishly... (read more)
Thank you for your response!
I should have been clearer, yes. I meant that the 97% deals are acceptable under your go/no-go model, not to you personally or under your later models.
However, I think my arguments apply equally to your later models, just with P(doom) different from 97%. (See below.)
Thank you for running the more complicated models. (And, in case it was unclear, I did read your entire article before commenting.)
Do the models help us understand how we should act? Here is how... (read 536 more words →)