Hi, I'm new to LessWrong, and happened to read the Normal Cryonics article shortly after reading about Roko's basilisk.

It seems to me that if you believe in Roko's basilisk, nearly the worst thing you could possibly do is to freeze yourself indefinitely, so that a future artificial intelligence has all the time in the world to figure out how to revive you and then torture you for eternity. (Technically, I suppose, if you truly believe in Roko's basilisk and still want to be alive during the period in which it is active, you are probably already doing what you can to advance the "inevitable" AI, in the hopes that it will reward you. So I guess you could still be for cryonics.)

Even setting aside Roko's basilisk, generic all-powerful evil AIs in the future (similar to AM in Harlan Ellison's "I Have No Mouth, and I Must Scream"), or just the possibility of a bad future in general, seem to me to complicate the decision of whether you should undergo cryonics. **How do you weigh the uncertain future against the uncertain afterlife, and how do you choose between a complete unknown (death) and a probability distribution function (what we expect of the future), especially if you believe that PDF to be bad?** (I suppose that, from a rational point of view, you should at least do real number-crunching on the PDF to see whether it is likely to be bad instead of making arbitrary assumptions, but the question still stands, since it is independent of how favorable that PDF turns out to be.)
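
If one actually wanted to do that number-crunching, a minimal sketch might look like the following. Every number here is a made-up placeholder (the scenario list, `p_revival`, and all the utility values are assumptions purely for illustration); the point is only that the comparison reduces to an expected-value calculation over possible futures, not that these particular numbers mean anything.

```python
# A back-of-the-envelope expected-utility sketch for the cryonics decision.
# All probabilities and utilities are made-up placeholders, not estimates
# anyone has defended.

# (probability of scenario conditional on revival, utility of that scenario)
scenarios = [
    (0.60, +100.0),   # revived into a broadly good future
    (0.35,   +5.0),   # revived into a mediocre future
    (0.05, -1000.0),  # revived and tortured by a hostile AI
]

p_revival = 0.10  # chance cryonics works at all (placeholder)
u_death = 0.0     # utility assigned to never being revived

ev_cryonics = (p_revival * sum(p * u for p, u in scenarios)
               + (1 - p_revival) * u_death)

print(f"Expected utility of cryonics:    {ev_cryonics:+.2f}")
print(f"Expected utility of no cryonics: {u_death:+.2f}")
```

With these placeholder numbers the calculation comes out slightly positive, but a modest shift in the probability or badness of the torture scenario flips the sign, which is exactly why the shape of the PDF matters so much here.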

Of course, the bolded question is only interesting if you don't assume that mere existence is the primary goal. If you think endless torture is a better state of being than the complete cessation of sensation (and assume that the afterlife coincides with the latter), then your choice is clear, and freezing yourself can only benefit you.

Notably, people contemplating suicide face a similar decision process, but with a difference: I would argue suicide is fairly irrational, because while you are alive you almost always retain the choice to die on your own terms (barring possibilities such as falling into a coma). If you then make the reasonable assumption that it doesn't really matter when you die (in terms of what happens in the afterlife), you might as well stick around, because chances are your living circumstances can improve. With cryonics, if you assume revival is possible (and if you don't, then what's the point?), you are actually forgoing your ability to choose to die on your own terms, which seems dangerous when things like evil AIs (e.g., AM or Roko's basilisk) have a nonzero probability of occurring in the future.

Of course, some of this depends on how much you trust the organization controlling your frozen body. I don't know much about cryonics, but I assume there are stipulations about whether to destroy the remains in the event of certain future developments?

Answer by Viliam, Jan 09, 2021

It is difficult to reason about things that have never happened before. What is the right reference class here? My first idea was to ask: "Imagine you live 100 or 1000 years ago, and you get a magical pill that teleports you into today. Should you take it, if the alternative is to die?" It seems like taking it shouldn't make things worse, and it has a chance to make them much better. But that is because the world is still ruled by humans, and because you still have the chance to die should you so choose.

Speaking for myself, I don't believe in Roko's basilisk (though I don't wish to debate why just now), and I believe that most futures with an "evil AI" will end with humans dead, not kept alive and tortured. So it seems to me that, conditional on being revived, most futures will be good. Now, whether "most" is enough to outweigh the other option is a harder question, but... well, I guess my curiosity about the possible good future makes me think it is.