In this post I will write about what worse than death scenarios are, and how and why we should prevent them. I would recommend reading with caution if you are prone to worrying about this topic, as this post contains ideas which may be very distressing.

A worse than death scenario can be defined as any scenario in which the observer would rather die than continue to live. A distinction should be made between preferring to experience nothing temporarily and then resume living, and preferring to experience nothing forever. For example, most people would prefer to be under general anaesthetic during a necessary operation, but if no anaesthetic were available, they would not choose to die instead. With the knowledge that the discomfort experienced during the operation is necessary to continue living, many would choose to go through with it (this can obviously vary depending on the operation and the observer).

As the observer would prefer to die in a worse than death scenario, one can assume that they would be willing to do anything to escape the scenario. Thus, it follows that we should do anything to prevent worse than death scenarios from occurring in the first place; it is our first priority. In my opinion, positive scenarios cannot change this, because of the following observation: there is no positive scenario so good that you would do anything to keep it from ending, at least in our current human form. I cannot think of any scenario so positive for the observer that they would not care if continuing it were certain to kill them.

Worse than death scenarios vary in severity. The most basic example would be someone being kidnapped and tortured to death. If technology at some point allows immortality or ASI, there are possible scenarios of much greater severity. The most extreme example would be an indefinite state of suffering comparable to the biblical Hell, perhaps caused by an ASI running simulations. Obviously, preventing this has a higher priority than preventing scenarios of lower severity.

Scenarios which could mean indefinite suffering:

1. ASI programmed to maximise suffering

2. Alien species with the goal of maximising suffering

3. We are in a simulation and some form of "hell" exists in it

4. ASI programmed to reflect the values of humanity, including religious hells

5. Unknown unknowns

Worse than death scenarios are highly neglected. This applies to risks of all severities. It seems very common to be afraid of serial killers, yet I have never heard of someone with the specific fear of being tortured to death, even though most people would agree that the latter is worse. This pattern is also seen in the field of AI: the "killer robot" scenario is very well known, as is the paperclip maximiser, but the idea of an unfriendly ASI creating suffering is not talked about as often.

There are various reasons for this neglect. Firstly, worse than death scenarios are very unpleasant to think about; it is more comfortable to discuss the possibility of simply ceasing to exist. Secondly, they are very unlikely compared to other scenarios. Neither point justifies avoiding the discussion: something being unpleasant is not a valid reason to ignore it. In addition, the very low probability of the scenarios is balanced by their extreme disutility. This inevitably leads to Pascal's Mugging.
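
To make the expected-value dynamic concrete, here is a minimal sketch with entirely hypothetical numbers; none of the probabilities or disutility figures below come from the post, they are illustrative assumptions only.

```python
# Illustrative sketch only: hypothetical probabilities and disutilities chosen
# to show why a tiny probability of an astronomically bad outcome can dominate
# a naive expected-value comparison (the structure of a Pascal's Mugging).

p_ordinary_death = 1e-2          # assumed probability of an "ordinary" bad outcome
u_ordinary_death = -1e6          # assumed disutility of that outcome

p_indefinite_suffering = 1e-12   # assumed, far lower probability
u_indefinite_suffering = -1e30   # assumed, astronomically larger disutility

ev_ordinary = p_ordinary_death * u_ordinary_death                # -1e4
ev_suffering = p_indefinite_suffering * u_indefinite_suffering   # -1e18

print(ev_ordinary, ev_suffering)
# The second expected disutility dwarfs the first, so a naive expected-utility
# maximiser would direct nearly all effort at the low-probability scenario,
# which is the Pascal's Mugging structure referred to above.
```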

Methods which may reduce the probability of indefinite worse than death scenarios (in order of effectiveness):

1. Suicide

2. Working on AI safety

3. Thinking of ways of reducing the probability

Suicide, depending on your theory of personal identity, may make the probability 0. However, if you believe that there is no difference between copies of you, there may still be a possibility of being resurrected in the future. As we aren't certain about what happens to the observer after death, it is unknown whether death makes worse than death scenarios impossible. I believe there are many ways in which it could reduce the probability, but the key question is: could it increase the probability? One argument against suicide is that people who commit suicide may be more likely to end up in a "hell" than those who don't, because an entity that creates a hell has values which accept suffering and which treat life as something positive that should not be discarded. On the other hand, an entity with values related to efilism/antinatalism (philosophies in which suicide is generally accepted) would not create a hell at all. Of course, this is all based on a lot of speculation.

There is a risk that a suicide attempt will fail and leave you in a disabled state, which could make you more vulnerable with respect to indefinite worse than death scenarios. However, I would argue that this disadvantage is not decisive, because the only potential way to evade an entity powerful enough to cause these scenarios would be suicide, and any suicide attempt carries a risk of failure.

The second option listed is working on AI safety. This is because a future ASI is the only such entity we can influence now; we cannot do anything about superintelligent malevolent aliens or about the possibility that we are in a simulation. Donating money to suffering-focused AI safety organizations may reduce the chance of an unfriendly ASI being created, and it does not seem to increase the probability of worse than death scenarios in any way. It therefore seems better than not donating.

The last option is thinking of ways of reducing the probability of the scenarios. It is possible that by doing this you will invent a new method. This also includes raising awareness about the scenarios in any way so that other people will also try to invent methods.

The dilemma is that it does not seem possible to continue living as normal once the prevention of worse than death scenarios is taken seriously. If it is agreed that anything should be done to prevent them, then Pascal's Mugging seems inevitable. Suicide speaks for itself, and even the other two options, if taken seriously, would change your life: it would seem rational to devote your life completely to these causes. It would be rational to do anything to obtain money to donate to AI safety, for example, and you would be obliged to sleep exactly nine hours a day to improve your mental condition, increasing the probability that you will find a way to prevent the scenarios. I would be interested in hearing your thoughts on this dilemma, and whether you think there are better ways of reducing the probability.

18 comments

Suicide will not work if some form of bad quantum immortality is true, and will only put one in the hellish timeline.

[-][anonymous] 6y

Yes, that is a possibility; it could also happen if the universe is extremely vast or infinite. "Suffering maximisers" should be very rare, though.

I think that one possible solution against suffering maximisers is that benevolent AIs will dominate the "measure" landscape, so that for any suffering copy of me there exist 10^100 happy ones.

These benevolent AIs could save the suffering victims from their evil AI using an indexical uncertainty attack. First step: create many copies of me in the bad observer-moment. Second step: create many good next moments for this bad observer-moment. As a result, for any observer who is suffering now, the most likely next moment will be one in which he stops suffering.

This even looks like a good deal: would you accept suffering for one second in exchange for billions of billions of years of happiness?
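
A rough sketch of the arithmetic behind this claim, assuming only the 10^100 ratio mentioned above; the variable names and setup are illustrative, not taken from the comment.

```python
# Sketch of the "measure" argument: if, for each bad observer-moment, benevolent
# AIs create vastly more happy continuations than suffering ones, the subjective
# probability that the next moment is still a suffering one becomes negligible.
# The 1e100 ratio is taken from the comment above; the rest is an assumption.

happy_continuations = 1e100   # happy next-moments created per bad observer-moment
suffering_continuations = 1   # the original suffering continuation

p_next_moment_suffering = suffering_continuations / (
    suffering_continuations + happy_continuations
)
print(p_next_moment_suffering)  # ~1e-100, effectively zero under this model
```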

Another way to fight an evil AI is an indexical attack against the whole AI: for any possible real evil AI, a benevolent AI creates a million evil AIs in simulations, and if it sees that an evil AI starts to torture people, it punishes that evil AI. Thus any evil AI (or even a paperclipper) that knows this will not torture humans, but will preserve them and let them flourish. To reach this outcome it is enough to declare such an intent before any AI is created. I described this in more detail here.

[-][anonymous] 6y

These are some good possible solutions, but there could be some problems. The first solution implies that the observer is a separate entity from the body, and that the observer has a certain probability of "being inside" a particular copy at every smallest possible unit of time. But what is the connection between these different copies? They aren't really copies at all; they're all different people. And what is the observer actually made of? At this point it seems to imply open individualism. Of course, there is currently no universally accepted theory of personal identity, so this could be true.

And in the second solution, the suffering maximiser has two choices: it can either create no suffering at all for fear of being punished by a friendly AI, or it can create suffering and take the risk. In the first case the probability of creating suffering is 0, which is the worst possible outcome for the suffering maximiser, so it will take the second choice, where there is a chance that it will not be punished and will be able to create at least some amount of suffering.

1. True, it depends on the nature of personal identity. However, if some finite form of identity is true, I should not worry about "hostile resurrection": that a future AI will steal information about me, create my copy and torture me. This closes off the possibility of many bad outcomes.

2. This is more likely to work against an "instrumental suffering maximiser", which may use human suffering for blackmail. For an AI whose final goal is maximising suffering, there could be some compromise: it is allowed to torture one person for one second. And as I suggested this idea, I have to volunteer to be that person.

This is what FRI is working on; you should probably talk to them. Unless the topic makes you feel stressed, in which case you should drop it right now and go to the beach. It's really nice this time of the year.

[-][anonymous] 6y

Thanks for the advice. I have read a few of FRI's essays, which were very interesting. I wish I could drop this completely, but that seems quite irrational.

Oh no, ouch. If you wish you could quit, you should quit, and cut out people who tell you otherwise. Damn, if I was in Berkeley I'd give you a one day rehab course in drinking beer and watching soccer or something.

[-][anonymous] 6y

I'm actually very curious about this. What is your thought process or how do you rationalize prioritizing the present over possible events with astronomical utility/disutility?

Idk, my thought process just says screw these events, I'm going to do stuff that makes me happy and interested. All my contributions to LW ideas over the years came from this, not from trying to cause a big change in the world.

> As the observer would prefer to die in a worse than death scenario, one can assume that they would be willing to do anything to escape the scenario. Thus, it follows that we should do anything to prevent worse than death scenarios from occurring in the first place.

There seems to be a leap of logic here. One can strongly prefer an outcome without being "willing to do anything" to ensure it. Furthermore, just because someone in an extreme situation has an extreme reaction to it does not mean that we need to take that extreme reaction as our own -- it could be that they are simply being irrational.

> In addition, the very low probability of the scenarios is balanced by their extreme disutility. This inevitably leads to Pascal's Mugging.

I am confused -- being a Pascal's Mugging is usually treated as a negative feature of an argument?

I do think that it is worthwhile to work to fight S-risks. It's not clear to me that they are the only thing that matters. The self-interestedness frame also seems a little off to me; to be honest, if you're selfish I think the best thing to do is probably to ignore the far future and just live a comfortable life.

Solving AI alignment doesn't seem like the easiest way for humanity to do a controlled shutdown, if we decide that that's what we need to do. Of course, it may be more feasible for political reasons.

[-][anonymous] 6y

Well, it does feel like you're betraying yourself if you ignore the experiences of your future self, unless you don't believe in continuity of consciousness at all. So if your future self would do anything to stop a situation, I think anything should be done to prevent it.

I guess this post may have come off as selfish, as it focuses only on saving yourself. However, I would argue that preventing an unfriendly ASI is one of the most altruistic things you could do, because an ASI could create an astronomical number of sentient beings, as Bostrom wrote.

[-]TAG 6y

The usual case where one would be unwilling to do literally anything to prevent a very negative outcome for oneself is when "literally anything" includes highly unethical actions.

[-][anonymous] 6y

The possible methods of preventing the outcome don't really affect other people, though, so I don't see how they would be unethical towards others. Actually, working on AI safety would benefit many people.

[-]TAG 6y

> The possible methods of preventing the outcome don't really affect other people, though

Which outcome in which scenario?

[-][anonymous] 6y

I was referring to the scenarios I listed in the post.

There is an important practical consideration that is being left out here. Attempting to completely devote one's life to these causes in the way described would not necessarily be effective. The best way to devote your life to a cause is not necessarily the most fervent way, because that's not how humans work. We need certain types of motivation, we have physical and emotional needs, and we suffer from burnout if we work too hard. So if you believe that astronomical suffering risks are the most important issue to work on, then by all means work on them, but don't overwork yourself; that will likely result in anxiety, burnout, and unhappiness, which won't actually help you work more effectively. Work on these problems, make them your life's work if you see fit to do so, but do so in the context of an overall good life.

[-][anonymous] 6y

I would greatly appreciate it if those who downvoted this post could explain why they did so.