I’ve been reading about existential risks from advanced AI systems, including the possibility of “worse-than-death” outcomes, sometimes called suffering risks (“s-risks”): scenarios in which a misaligned AI causes immense or even astronomical amounts of suffering rather than simply extinguishing humanity.
My question: Do researchers working on AI safety and longtermism have any informed sense of the relative likelihood of s-risks compared to extinction risks from misaligned AI?
I’m aware that any numbers here would be speculative, and I’m not looking for precise forecasts but rather for references, models, or qualitative arguments. For example:
Do most experts believe extinction is far more likely than long-lasting suffering scenarios?
Are there published attempts to put rough probabilities on these outcomes?
Are there any major disagreements in the field about this?
I’ve come across Kaj Sotala’s “Suffering risks: An introduction” and the work of the Center on Long-Term Risk, but I’d appreciate more recent or deeper resources.
Thanks in advance for any guidance.