I’m a big life extension supporter, but never being able to choose death is a literal hell. As dark as it is, if these scenarios are likely, it seems the rational thing to do is to die before AGI arrives.

Killing all of humanity is bad enough, but how concerned should we be about even worse scenarios?


1 answer

If you really expect unfriendly superintelligent AI, you should also consider that it may be able to resurrect the dead (perhaps by running simulations of the past in very large numbers), so suicide will not help.

Moreover, such an AI may deliberately go after people who tried to escape, in order to acausally deter them from suicide.

However, I am not afraid of this, as I assume that Friendly AIs can "save" minds from the hells of bad AIs by creating them in even larger numbers in simulations.

3 comments

 There is discussion of some possibilities at https://www.reddit.com/r/SufferingRisk/wiki/intro/. I'd like to see more talk about these issues.

Well... the probability of the scenario where Natural General Intelligence does these two things is approximately 100%.

One such scenario is that the world ends up as a semi-stable bipolar world. There will be two AIs, and one of them will be friendly. This creates an incentive for the other AI to torture people in order to blackmail the first AI. God help us escape this hell.