LESSWRONG

AI Risk · Risks of Astronomical Suffering (S-risks) · AI · Frontpage

[ Question ]
How likely are scenarios where AGI ends up overtly or de facto torturing us? How likely are scenarios where AGI prevents us from committing suicide or dying?

by JohnGreer
28th Mar 2023
1 min read
I'm a big life extension supporter, but being unable to ever choose to die is a literal hell. As dark as it is, if these scenarios are likely, it seems the rational thing to do is to die before AGI comes.

Killing all of humanity is bad enough, but how concerned should we be about even worse scenarios?
1 Answer

avturchin · Mar 28, 2023

If you really expect an unfriendly superintelligent AI, you should also consider that it will be able to resurrect the dead (perhaps by running simulations of the past in very large numbers), so suicide will not help.

Moreover, such an AI may deliberately go after people who tried to escape, in order to acausally deter them from suicide.

However, I am not afraid of this, as I assume that Friendly AIs can "save" minds from the hell of bad AIs by creating them in even larger numbers in simulations.

3 comments
Benjy Forstadt

 There is discussion of some possibilities at https://www.reddit.com/r/SufferingRisk/wiki/intro/. I'd like to see more talk about these issues.

Ann

Well ... The possibility of the scenario where Natural General Intelligence does these two things is approximately 100%.

avturchin

One such scenario is that the world ends up as a semi-stable bipolar one: there will be two AIs, and one of them will be friendly. This creates an incentive for the other AI to torture people in order to blackmail the first AI. God help us escape this hell.
