The risks of unaligned AGI are usually couched in terms of existential risk, where the outcome is explicitly or implicitly human extinction. However, given what an AGI would be able to do, there seem to be plausible scenarios worse than human extinction. Just as on an individual level we can imagine fates we would prefer death to (e.g. prolonged terminal illness or imprisonment), we can extend this to humanity as a whole, ending up in hell-like fates. This could arise from a technical error, wilful design by the initial creators (say, religious extremists), or some other unforeseen mishap.

How prominently do these concerns feature for anyone? How likely do you think worse-than-extinction scenarios are?

4 comments

Someone concerned about this possibility has posted to this site and used the term "s-risk".

It is approximately as difficult to create an AI that wants people to suffer as it is to create one that wants people to flourish, and humanity is IMO very far from being able to do the latter, so my main worry is that an AGI will kill us all painlessly.

awg (1y):

Some advanced intelligence that takes over doesn't have to be directed toward human suffering for s-risk to happen. It could just happen as a byproduct of whatever unimaginable things the advanced intelligence might want or do as it goes about its own business, completely heedless of us. In those cases we would be suffering in the same way that some nameless species in some niche of the world suffers because humans, unaware that the species even exists, encroach on and destroy its natural habitat in the course of going about their own comparatively unimaginable business.

I think there are two confusions here. This comment appears to conflate the "suffering" of a species with the suffering of individuals within it, and also the temporary suffering of the dying with suffering that is protracted indefinitely.

The term s-risk usually refers to suffering extended indefinitely and on a scale much greater than has been normal in human history, centrally to scenarios in which most people would prefer to die but cannot.

awg (1y):

Thanks for the helpful clarification!