In the previous post, we analyzed how QI increases the chances that I will observe a world where alignment turns out to be theoretically and/or practically easy (e.g., because of an AI pause).
However, there is a more straightforward application of QI to AI risk: the AI kills everyone except me. This can happen in several ways.
The key feature is that I must observe that the AI catastrophe has already started and that many people are dying; otherwise, I am still more likely to find myself in the majority of worlds where AI is aligned or nonexistent. That is why I say above that many people survive, but not all. Some basic considerations suggest that I would observe around half of all people surviving: the larger the surviving group, the more likely I am to be in it.
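The anthropic weighting behind "the larger the surviving group, the more likely I am in it" can be sketched numerically. This is only a toy model, assuming a handful of hypothetical catastrophe scenarios with different surviving fractions and an equal prior over them; conditional on my surviving, each scenario's first-person weight is its prior times its surviving fraction.

```python
# Toy anthropic weighting: worlds where more people survive get
# proportionally more first-person weight, because I am more likely
# to be one of the survivors there.

# Hypothetical scenarios: fraction of people who survive the catastrophe.
surviving_fractions = [0.1, 0.3, 0.5, 0.7, 0.9]
prior = [1 / len(surviving_fractions)] * len(surviving_fractions)  # equal prior

# Conditional on my surviving, each scenario's weight is prior * fraction.
unnormalized = [p * f for p, f in zip(prior, surviving_fractions)]
total = sum(unnormalized)
posterior = [w / total for w in unnormalized]

for f, p in zip(surviving_fractions, posterior):
    print(f"survivors: {f:.0%}  posterior weight: {p:.2f}")
```

Running this shows the posterior weight climbing with the surviving fraction, which is the pull toward larger surviving groups described above; the exact fraction one expects to observe depends entirely on the prior over scenarios, which this sketch makes no claim about.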
However, the group of survivors may decline quickly after the initial catastrophe, in a world resembling a Mad Max dystopia full of killer drones. QI will keep me among the survivors (though likely badly injured). This personal survival is based on a path-based theory of identity. A state-based theory gives more weight to the worlds without AI risk at all, since most minds who think they are me will be in those worlds.