What is the rational decision process for deciding for or against cryonics when there's a possibility the future might be "bad"?
Hi, I'm new to LessWrong, and I happened to read the Normal Cryonics article shortly after reading about Roko's basilisk. It seems to me that if you believe in Roko's basilisk, nearly the worst thing you could possibly do is freeze yourself indefinitely such that a future artificial intelligence can...