Although, now that I think about it, this survey is about risks before 2100, so the 5% risk of superintelligent AI might be that low because some of the respondents believe such AI will not arrive before 2100. Still, it seems in sharp contrast with Yudkowsky's estimate.
Agreed, especially when compared to http://www.fhi.ox.ac.uk/gcr-report.pdf.
Commenting on the first myth: Yudkowsky himself seems to be pretty sure of this, judging from his comment here: http://econlog.econlib.org/archives/2016/03/so_far_my_respo.html. I know Yudkowsky's post was written after this LessWrong article, but it still seems relevant to mention.
By the same logic as Quantum Immortality, shouldn't we expect never to fall asleep, since we can't observe ourselves while asleep?
I was thinking about this post and came up with the following thought experiment. Suppose, by some quantum mechanism, Bob has a 50% probability of falling asleep for the next 8 hours and a 50% probability of staying awake for the next 8 hours. By the same logic as QI, should Bob expect (with 100% certainty) to be awake after 2 hours, since he cannot observe himself being asleep? I would say no. But then, doesn't QI fail as a result?
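To make the tension concrete, here is a minimal simulation sketch of the thought experiment (the function name and structure are my own, purely illustrative). It contrasts the outside view, where Bob is awake in only about half the branches, with the QI-style conditioned view, which restricts attention to the branches Bob can actually observe:

```python
import random

def fraction_awake(n_trials=100_000, seed=0):
    """Simulate Bob's 50/50 'quantum coin flip' many times and return
    the fraction of branches in which he is awake at the 2-hour mark."""
    rng = random.Random(seed)
    awake = sum(rng.random() < 0.5 for _ in range(n_trials))
    return awake / n_trials

# Outside view: across all branches, Bob is awake only ~50% of the time.
outside_view = fraction_awake()

# QI-style conditioning: among the branches Bob can observe (the awake
# ones), he is awake by definition -- this trivial step is exactly the
# inference the comment above is questioning.
conditioned_view = 1.0
```

The simulation just makes explicit that the 100% figure comes entirely from conditioning on observability, not from the underlying physics, which is why the sleep case looks like a counterexample to the QI reasoning.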