Top 9+2 myths about AI risk

Although, now that I think about it, this survey concerns risks before 2100, so the 5% risk estimate for superintelligent AI might be that low because some respondents believe such AI will not arrive before 2100. Still, it stands in sharp contrast with Yudkowsky's estimate.

Top 9+2 myths about AI risk

Commenting on the first myth: Yudkowsky himself seems fairly confident of this, judging from his comment here: http://econlog.econlib.org/archives/2016/03/so_far_my_respo.html. I know Yudkowsky's post was written after this LessWrong article, but it still seems relevant to mention.

A pessimistic view of quantum immortality

By the same logic as Quantum Immortality, shouldn't we expect never to fall asleep, since we can't observe ourselves being asleep?

If MWI is correct, should we expect to experience Quantum Torment?

I was thinking about this post and came up with the following thought experiment. Suppose, by some quantum mechanism, Bob has a 50% probability of falling asleep for the next 8 hours and a 50% probability of staying awake for the next 8 hours. By the same logic as QI, should Bob expect (with 100% certainty) to be awake after 2 hours, since he cannot observe himself being asleep? I would say no. But then, doesn't QI fail for the same reason?
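The Bob experiment above can be sketched as a small simulation (my own hypothetical illustration, not from the original comment). The point it makes explicit: conditioning on Bob *making an observation* at the 2-hour mark trivially yields probability 1 of being awake, while the unconditional probability of being awake remains 1/2 — so "I can't observe myself asleep" doesn't license expecting in advance to be awake.

```python
import random

random.seed(0)
N = 100_000

# In each simulated "branch", Bob either stays awake for the next
# 8 hours (True) or sleeps through them (False), each with p = 0.5.
branches = [random.random() < 0.5 for _ in range(N)]

# Unconditional probability that Bob is awake at t = 2h: about 0.5.
p_awake = sum(branches) / N

# QI-style anthropic conditioning: keep only the branches in which
# Bob can make an observation at t = 2h, i.e. the branches where he
# is awake. Within those branches he is awake by construction.
observed = [b for b in branches if b]
p_awake_given_observation = sum(observed) / len(observed)  # exactly 1.0

print(p_awake, p_awake_given_observation)
```

The conditional probability equals 1 only because the conditioning event *is* being awake; it says nothing about what Bob should expect before the 2-hour mark, which mirrors the objection to QI above.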