P(doom|superintelligence), or coin tosses and dice throws of human values (and other related Ps).
TL;DR: P(doom|superintelligence) is around 99%, because AI researchers currently have neither a precise theory of human values nor a precise understanding of the inner structure of current AI, so they cannot encode the former into the latter. And while future AI may have a different architecture and development process than LLMs, it also may...
Roman V. Yampolskiy has a paper (The Universe of Minds, 1 Oct 2014). I think it should be mentioned here.