
A collection of musings on "fates worse than death", put together in one place for easy referencing.

# "Rest in peace"

There is an ancient and (unfortunately) still very popular association between death and sleep / rest / peace / tranquility.

The association is so deeply ingrained that it is routinely used by most people who have to speak about death: "rest in peace", "put to sleep", "he is in a better place now", etc.

The association is harmful: it suggests that death could be a valid solution to suffering, which is deeply wrong.

It's the same kind of wrongness as suggesting that we kill a child to make the child less sad. Technically, the child will not experience sadness anymore. But infanticide is not a sane person's solution to sadness. The sane solution is to find a way to make the child less sad (without killing them!).

The sane solution to suffering is to reduce suffering. Without killing the sufferer.

For example, if a cancer patient is in great pain, the most ethical solution is to cure them of cancer, and use effective painkillers during the process. If there is no cure, then use cryonics to transport them into the future where such a cure becomes available. Killing the patient because they're in pain is a sub-optimal solution (to put it mildly).

# Exchanging suffering and lives

There is a popular idea that some very large amount of suffering is worse than death. I don't subscribe to it.

If I'm tortured for X billion years, and then my mind is repaired, then this fate is still much better than permanent death. There is simply nothing worse than permanent death - because it cannot be repaired. And everything else can be repaired, including the damage from any amount of suffering.

If one must choose between a permanent loss of human life and some temporary discomfort, it doesn't make sense to prefer the permanent loss of life, regardless of the intensity of the discomfort.

A sample problem:

If you press the button, you save 1 life. But 7 billion humans will suffer from the worst possible torture for 100 billion years. After that, all the psychological damage will be repaired, restoring them to perfect mental health. Should you press the button?

# S-risk

Imagine there is a new technology that:

• could save millions of lives
• could also produce an astronomical amount of suffering, if misused (S-risk)

Should we develop such a tech? The answer is yes.

I don't see how eliminating any arbitrarily large amount of suffering could be preferable to saving 1 life. Unless the suffering leads to permadeath, the sufferers can get over it. The dead - cannot.

Thus, in such calculations, I would consider 1 human permadeath at least as bad as 1 human life experiencing the worst possible suffering until the end of the universe.

It's a good idea to reduce suffering. But the S-risk is trivially eliminated from the equation if the tech in question is life-saving, because 1 (one) saved life is enough to compensate for any amount of suffering.

Thus, if you want to decide if it's ethical to develop the tech in question, the decision process should not involve any suffering-related considerations until you estimate the amount of saved lives. And after that, you minimize suffering given the tech is implemented.

Compare: if you see a drowning child, you first save her, and only after that consider how to minimize her suffering.
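The two-step procedure described above can be written out as a short sketch (this is my own encoding of the post's reasoning, not anything formal from the text):

```python
# Sketch of the post's proposed decision procedure: suffering enters the
# decision only after the life count is settled.

def evaluate_tech(lives_saved, lives_lost):
    """Step 1: decide on lives alone, ignoring (repairable) suffering."""
    if lives_saved <= lives_lost:
        return "do not develop"
    # Step 2: only now bring suffering in, as something to minimize
    # *given* that the tech is developed.
    return "develop, then minimize suffering"

assert evaluate_tech(1, 0) == "develop, then minimize suffering"
assert evaluate_tech(0, 1) == "do not develop"
```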

# Suffering can be reversed. Permadeath - cannot

I predict that most (all?) ethical theories that assume that some amount of suffering is worse than death - have internal inconsistencies.

My prediction is based on the following assumption:

• permanent death is the only brain state that can't be reversed, given sufficient tech and time

The non-reversibility is the key.

For example, if your goal is to maximize happiness of every human, you can achieve more happiness if none of the humans ever die, even if some humans will have periods of intense and prolonged suffering.

You can increase happiness of the humans who suffered, but you can't increase happiness of the humans who are non-reversibly dead.

With enough tech and time, every type of suffering can be eliminated, and its effects - reversed. But permadeath is, by definition, non-reversible and non-repairable.

Bad feelings are vastly less important than saved lives.

# Comments

Your premise that “permanent death is the only brain state that can't be reversed, given sufficient tech and time” is absolutely without basis. A technology that could reverse “a million years of terrible pain” is indistinguishable from magic and therefore assuming it could resurrect people (or at least their minds) is at least as plausible.

I think it's useful to distinguish between 2 types of death:

1. common death (i.e. clinical death)
2. permadeath (also known as information-theoretic death)

The first one is reversible, if some advanced enough tech is applied.

The second one is non-reversible (by definition), regardless of any future technological progress.

If a million years of terrible pain permanently destroys the human mind, then it counts as permadeath. Thus, if some action results in 1 saved life and 2 such tortures, then we must avoid the action, as it results in 1 net life loss.

On the other hand, if a million years is not enough to permanently destroy the human mind, then the action is better than inaction, as it results in 1 saved life (and 2 damaged minds which can be repaired).
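The bookkeeping in the last two paragraphs can be made explicit (a sketch of the accounting as I read it; the function and its arguments are my own illustration):

```python
# In this accounting, only information-theoretic death counts as a loss;
# torture that leaves a repairable mind counts as zero.

def net_lives(saved, tortured, torture_destroys_mind):
    """Net life change of an action under the permadeath-only accounting."""
    losses = tortured if torture_destroys_mind else 0
    return saved - losses

assert net_lives(1, 2, torture_destroys_mind=True) == -1   # avoid the action
assert net_lives(1, 2, torture_destroys_mind=False) == 1   # action preferred
```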

There might be technologies that could repair a heavily damaged mind but can't repair a mind that is thoroughly erased from this universe.

If you create all possible minds, you will resurrect any given mind. Creating all minds is simple in an Everettian universe; you just need a random file generator.
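Read literally, the "random file generator" claim is a statement about enumeration: every finite bitstring, and hence every mind encodable as a finite file, is eventually produced. A minimal sketch of that enumeration (it illustrates only the combinatorial claim, not its physical feasibility):

```python
from itertools import count, product

def all_bitstrings():
    """Yield every finite bitstring, shortest first; any given finite
    string appears after finitely many steps."""
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

gen = all_bitstrings()
first_four = [next(gen) for _ in range(4)]
assert first_four == ["0", "1", "00", "01"]
```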

Your argument rests on the claim that people who have suffered a million years of suffering could - in theory - be rescued and made happy, with it only requiring "tech and time". In an S-risk scenario, that doesn't happen.

In what I'd consider the archetypical S-risk scenario, an AI takes over, starts simulating humans who suffer greatly, and there is no more human agency ever again. The (simulated) humans experience great suffering until the AI runs out of power (some time trillions of years in the future when the universe can no longer power any more computation) at which point they die anyway.

As for your points on consistency, I'm pretty sure a utilitarian philosophy that simply assigns utility zero to the brain state of being dead is consistent. Whether this is actually consistent with people's revealed preferences and moral intuitions I'm not sure.

In the described scenario, the end result is omnicide. Thus, it is not much different from the AI immediately killing all humans.

The important difference is that there is some non-zero chance that, over the trillions of years, the AI might change its mind and reverse its deed. Thus, I would say that the S-risk scenario is somewhat preferable to the fast killing.

> As for your points on consistency, I'm pretty sure a utilitarian philosophy that simply assigns utility zero to the brain state of being dead is consistent.

In this case, the philosophy's adherents have no preference between dying and doing something else with zero utility (e.g. touching their nose). As humans encounter countless actions of zero utility, the adherents are either all dead or being inconsistent.

> In the described scenario, the end result is omnicide. Thus, it is not much different from the AI immediately killing all humans.

I strongly disagree with this. I would much, much rather be killed immediately than suffer for a trillion years and then die. This is for the same reason that I would rather enjoy a trillion years of life and then die, than die immediately.

> In this case, the philosophy's adherents have no preference between dying and doing something else with zero utility (e.g. touching their nose). As humans encounter countless actions of zero utility, the adherents are either all dead or being inconsistent.

I think you're confusing the utility of a scenario with the expected utility of an action. Assigning zero utility to being dead is not the same as assigning zero expected utility to dying over not dying. If we let the expected utility of an action be defined relative to the expected utility of not doing that action, then "touching my nose", which doesn't affect my future utility, does have an expected utility of zero. But if I assign positive utility to my future existence, then killing myself has negative expected utility relative to not doing so.

You seem to be equating saving someone from death with them living literally forever, which ultimately appears to be forbidden, given the known laws of physics. The person whose life you saved has some finite value (under these sorts of ethical theories at least), presumably calculated by the added enjoyment they get to experience over the rest of their life. That life will be finite, because thermodynamics + the gradual expansion of the universe kill everything, given enough time. Therefore, I think there will always be some theoretical amount of suffering which will outweigh the value of a given finite being.

Your writing feels comically-disturbingly wrong to me, I think the most likely cause is that your model of "suffering" is very different from mine. It's possible that you "went off to infinity" in some direction that I can't follow, and over there the landscape really does look like that, but from where I am it just looks like you have very little experience with serious suffering and ignore a whole lot of what looks to me to be essential complexity.

When you say that all types of suffering can be eliminated / reversed, this feels wrong because people change in response to trauma and other extreme suffering, it (and the resulting growth) tends to become a core part of their personality. There is no easy way back from there, in a way this is also non-reversible. Removing a core part of their personality would effectively destroy them, replacing them with a past version of themselves feels equivalent to killing them, except it also devalues their struggle and their very existence.

Getting the decision on whether (and how far) to reset from anything other than their current self takes away their agency. The different versions along time are (potentially very) different persons; singling out any one of them and valuing it higher than the others is bound to be problematic. I doubt that you could find a consistent valuation within the strands of the person over time, and imposing some external choice just creates a hell (though it may not be apparent from the inside). I don't think that this is something that you can "magic away" by sufficiently advanced AI / singularity. (And if you think that arbitrary editing of the person has already taken away their agency…? Well then you still have the same problem of identifying the point where they cease to be mostly self-determined, where they cease to be "them", and the "torture meta-game" will have shifted to make that maximally hard.)

So the best that you could achieve is probably something like having multiple independent versions of a person exist in parallel, but at that point you're not erasing/undoing the suffering anymore, and some versions may well want to cease to exist – for them, this will have been a fate worse than death. (At this point the "best" strategy for the 100 billion year torture meta-game is probably to allow just enough space for recovery and growth that there's a high chance that the person wants to continue existing, not sure that's better…)

By this time we're dealing with multiple versions of possibly arbitrarily rewritten copies of a person… and at this point, we're basically in situations that other commenters described. It would be no harder to resurrect from traces of physical remains… (If you can recover the original process after billions of years of deliberate, directed reshaping, then surely recovering it after a mere thousands of years subject to mostly random bio/physical processes should be trivial in comparison, right?) …or outright create new persons from scratch… (Who could tell the difference, hm? And if anyone could, would they feel safe enough to point it out?) …than to deal with all this "undoing of suffering". Now look again at your question:

> If you press the button, you save 1 life. But 7 billion humans will suffer from the worst possible torture for 100 billion years. […]

I don't think the answer is yes.

Interesting concept, I have a few questions:

Does this apply only to humans, or are there also some non-human beings such that any amount of human suffering is preferable to accepting one of their deaths? Is there a clear cut-off, or is it possible that there could be beings for which some amount of human suffering would be worth more than the death of such a being?

Are there humans sufficiently brain damaged such that while their bodies are still alive, their minds are sufficiently poorly recoverable (even in theory) that the moral imperative isn't as clear?

Is it ethically mandatory that we invent caretakers that can imprison all humans in padded rooms (or vats) for their whole lives whether they want this or not, to ensure that they can never trip over and bash their brains out?

How does this theory handle cases where death is inevitable, such as (say) universes in which total computation is finite?

Is a far enough divergence from a previous mental state, such that the previous one can't even be approximated any more, equivalent to death of the previous person?

I think this needs a dose of rigor (for instance, remove "sufficient time and tech" and calculate utility for humans alive today), and a deeper exploration of identity and individual/aggregate value. But I don't know why it's downvoted so far - it's an important topic, and I'm glad to have some more discussion of it here (even if I disagree with the conclusions and worry about the unstated assumptions).

> But I don't know why it's downvoted so far - it's an important topic, and I'm glad to have some more discussion of it here (even if I disagree with the conclusions and worry about the unstated assumptions).

I agree with this. The author has made a number of points I disagree with but hasn't done anything worthy of heavy downvotes (like having particularly bad epistemics, being very factually wrong, personally attacking people, or making a generally low-effort or low-quality post). This post alone has changed my views towards favouring a modification of the upvote/downvote system.

I agree with this as well. I have strongly upvoted in an attempt to counterbalance this, but even so it is still in negative karma territory, which I don't think it deserves.

Well if we've fallen to the level of influencing other people's votes by directly stating what the votes ought to say (ugh =/), then let me argue the opposite: This post – at least in its current state – should not have a positive rating.

I agree that the topic is interesting and important, but - as written - this could well be an example of what an AI with a twisted/incomplete understanding of suffering, entropy, and a bunch of other things has come up with. The text conjures several hells, both explicitly (Billions of years of suffering are the right choice!) and implicitly (We make our perfect world by re-writing people to conform! We know what the best version of you was, we know better than you and make your choices!) and the author seems to be completely unaware of that. We get surprising, unsettling conclusions with very little evidence or reasoning to support them (instead there are "reassuring" parentheticals like "(the answer is yes)".) As a "What could alignment failure look like?" case study this would be disturbingly convincing. As a serious post, the way it glosses over lots of important details and confidently presents its conclusions, combined with the "for easy referencing" in the intro, is just terrifying.

Hence: I don't want anyone to make decisions based directly on this post's claims that might affect me even in the slightest. One of the clearest ways to signal that is with a negative karma score. (Doesn't have to be multi-digit, but shouldn't be zero or greater.) Keep in mind that anyone on the internet (including GPT-5) can read this post, and they might interpret a positive score as endorsement / approval of the content as written. (They're not guaranteed to know what the votes are supposed to mean, and it's even plausible that someone uses the karma score as a filter criterion for some ML data collection.) Low positive scores can be rationalized away easily (e.g. the content is too advanced for most, other important stuff happening in parallel stole the show, ...) or are likely to pass a filter cutoff, zero is unstable and could accidentally flip into the positive numbers, so negative scores it is.

> Bad feelings are vastly less important than saved lives.

Omega: Hey, supposedlyfun. You are going to die in a car crash tomorrow. Your child will grow up without you, your family will never get over it, and no aligned AGI will recreate your mind once technology allows it. But! I can prevent that death if you let me torture a random person for a year, inflicting on them the maximum possible amount of pain that their nervous system can experience, at every interval of Planck time during the year. But I will then give that person instant therapy that undoes all the damage. What say you?

supposedlyfun: No.

*

How do you square your argument with my preference here?

I have stage IV cancer and personally vastly prefer death to being frozen. I'm frankly baffled by those who think cryopreservation is a good idea given the threat that unaligned AGI poses.

[edit to add:] BTW, this isn't rationalization. I've felt this way about cryopreservation for many years before I knew I had cancer.

Suppose the state of the world as a function of time is $x(t)$, and the value of a state of the world is $v(x)$. The orthodox way to aggregate the value of a future trajectory is exponential discounting, i.e., $V = \sum_{t=0}^{\infty} \gamma^t \, v(x(t))$, where $\gamma \in (0, 1)$ is called the discount factor. Now, in your example problem, let us take $v = 1$ per person-year of normal life, and $v = -\epsilon$ per person-year of the worst possible torture, where I'll let you choose $\epsilon$, but it has to be strictly greater than zero. (Assuming you don't think the torture itself is neutral, your value function should have this form, up to additive and multiplicative constants.)

In the status quo,

$$V_0 = \frac{7 \times 10^9}{1 - \gamma}$$

Pressing the button gives

$$V_1 = \frac{1}{1 - \gamma} + \frac{7 \times 10^9}{1 - \gamma} - 7 \times 10^9 (1 + \epsilon) \, \frac{1 - \gamma^T}{1 - \gamma}, \qquad T = 10^{11}$$

If we solve the inequality for $V_1 - V_0$ to be strictly positive, we get

$$\gamma^T > 1 - \frac{1}{7 \times 10^9 (1 + \epsilon)}$$

Supposing $\epsilon$ is, I don't know, 0.001 for example, then $\gamma$ has to be greater than roughly $1 - 10^{-21}$. Well, you might say, isn't that still less than $1$? Sure, but:

1. $\gamma$ should be bounded above not just by $1$, but by $1 - P(\text{extinction in a given year})$. In this case, your intuition for $\gamma$ seems to be pushing the probability of extinction per year down to at most about $10^{-21}$. There's an argument to be made that aligned AGI would get the extinction risk of Earth-originating life down to something like that kind of level, but it's not trivial (extraterrestrial threats start to get plausible), and you don't seem to be making it.
2. the large numbers involved here, and their arbitrary nature, suggest that your intuition is guiding you along the lines of $\gamma = 1$, and there are all sorts of inconsistency paradoxes from that, because the expected value of a lot of things ends up being $\infty - \infty$, which is really bad; even measure theory can't save you from that kind of divergence. I'm as long-termist as the next guy, but I think we have to set our discount factors a little lower than $1$.
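The threshold on the discount factor can be checked numerically. This sketch assumes a normalization of my choosing (value 1 per person-year of ordinary life, $-\epsilon$ per person-year of torture, 0 when dead), under which pressing the button is net positive iff $\gamma^T > 1 - 1/(N(1+\epsilon))$:

```python
import math

def discount_gap(n_people=7e9, years=1e11, eps=1e-3):
    """Return approximately 1 - gamma_min, where gamma_min is the smallest
    discount factor that makes pressing the button net positive.

    Computed in log space: the gap is around 1e-21, far below float
    resolution near 1.0, so evaluating threshold**(1/years) directly
    would just round to 1.0.
    """
    # ln(1 - 1/(N * (1 + eps))), via log1p for accuracy near zero
    ln_threshold = math.log1p(-1.0 / (n_people * (1.0 + eps)))
    # gamma_min = exp(ln_threshold / years), so 1 - gamma_min ~ -ln_threshold / years
    return -ln_threshold / years

gap = discount_gap()  # ~1.4e-21: gamma must exceed 1 - 1.4e-21
```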

> I'll let you choose $\epsilon$, but it has to be strictly greater than zero.

The original post is equivalent to choosing epsilon = 0. You could just say "I don't accept this", instead of going through a bunch of math based on a premise that you can deduce will be rejected.

I agree that one would reach the same conclusion to press the button with $\epsilon = 0$, but I'd be quite surprised if RomanS would actually choose $\epsilon = 0$. Otherwise, he would have to consider it absolutely ethically neutral to torture, even if it didn't save any lives or provide any benefits at all - and that's at least one qualitative step more outrageous than what he's actually saying.

Instead, I think RomanS believes $\gamma = 1$, that the distant and infinite future absolutely overwhelms any finite span of time, and that's why he places so much emphasis on reversibility.

My apologies, my statement that it was equivalent to $\epsilon = 0$ was incorrect.

The description given is that of lexicographic preferences, which in this case cannot be represented with real-valued utility functions at all. There are consistent ways to deal with such preferences, but they do tend to have unusual properties.

Such as, for example, preferring that everyone in the universe is tortured forever rather than accepting 0.00000000000001 extra probability that a single person somewhere might die.
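The unusual behavior is easy to see concretely: Python tuples compare lexicographically, so encoding outcomes as `(expected_permadeaths, suffering)` pairs (my own illustration, not anything from the thread) reproduces exactly the preference just described — the first component always dominates, and no amount of suffering in the second slot can outweigh even a tiny difference in the first:

```python
# Lexicographic preferences over (expected permadeaths, total suffering).
# Tuple comparison in Python is lexicographic, so max() picks the outcome
# that is worse under exactly this ordering.

def worse_outcome(a, b):
    """Return the lexicographically worse of two (deaths, suffering) outcomes."""
    return max(a, b)

everyone_tortured_forever = (0.0, float("inf"))
tiny_extra_death_risk = (1e-14, 0.0)

# Under this ordering, the tiny extra death risk is the *worse* outcome,
# even against infinite suffering:
assert worse_outcome(everyone_tortured_forever, tiny_extra_death_risk) == tiny_extra_death_risk
```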

I suspect one problem is that really "death" depends crucially upon "personal identity", and it's a fuzzy enough concept at the extreme boundaries that lexicographic preferences over it make no sense.

I appreciate that you went through the effort of sharing your thoughts, and as some commenters have noted, I also find the topic interesting. Still, you do not seem to have laid bare the assumptions that guide your models, and when examined, most of your musings miss essential aspects of valence as experienced in our universe. I will be examining this question through the lens of total utilitarian consequentialism, where you sum the integral of valences of all lives over the lifespan of the universe. Do specify if you were using another framework.

When you conclude "Bad feelings are vastly less important than saved lives.", it seems you imply that
1) Over time our lives will always get better (or positive)
2) That there's always enough time left in the universe to contribute more good than bad.
(You could otherwise be implying that life is good in of itself, but that seems too wrong to discuss much, and I don't expect you would value someone suffering 100 years and then dying as better than someone dying straight away).
In an S-risk scenario, most lives suffer until heat death, and keeping those lives alive is worse than not, so 1 is not always true. 2 also doesn't hold in scenarios where a life is tortured for half of the universe's lifespan (supposing positive valence is symmetrical to negative valence). It is only when considering there's always infinite time left that you could be so bold as to say keeping people alive through suffering is always worth it, but that's not the case in our universe.

More fundamentally, you don't seem to take not-yet-existing people/lives into account, the limited nature of our universe in time and accessible space, or the fungibility of accumulated valence. Suppose A lives 100 years happy, dies, and then B lives 100 years happy; it seems there's as much experienced positive valence in the universe as having had A around happy for 200 years. You call it a great shame that someone should die, but once they're dead they are not contributing negative valence, and there is space for new lives that contribute positive valence. Thus, it seems that if someone was fated to suffer 100 years, it would be better they die now, and that someone else is born and lives 100 years happy, than trying to keep that original life around and making them live 200 years happy after the fact to compensate. Why should we care that the positive valence is experienced by one specific life and not another?
In our world, there are negative things associated with death, such as age-related ill health (generally with negative valence associated), and negative feelings from knowing someone has died (because it changes our habits, we lose something we liked), so it would cause less suffering if we solved ageing and death. But there is no specific factor in the utility function marking death as bad.
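The fungibility point above can be written out as plain sums (a sketch; the segment encoding and the numbers are mine, chosen to match the examples in the comment):

```python
# Total utilitarianism as a sum over (years, valence_per_year) segments,
# with no separate term marking death as bad.

def total_valence(*lives):
    """Sum of years * valence over every segment of every life."""
    return sum(years * v for life in lives for years, v in life)

# A happy for 100 years, then B happy for 100 years == A happy for 200 years:
a_then_b = total_valence([(100, 1.0)], [(100, 1.0)])
a_alone = total_valence([(200, 1.0)])
assert a_then_b == a_alone == 200.0

# Keep a sufferer alive and compensate them later, vs. a replacement life
# that is happy throughout the same 300 years:
keep = total_valence([(100, -1.0), (200, 1.0)])  # suffers, then compensated
swap = total_valence([(300, 1.0)])               # new life, happy the whole time
assert keep == 100.0 and swap == 300.0
```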

With these explanations of the total utility point of view, do you agree that a large amount of suffering (for example over half the lifespan of the universe) IS worse than death?

I think this post is extremely interesting, and on a very important topic. As I said elsethread, for this reason, I don't think it should be in negative karma territory (and have strong-upvoted to try to counterbalance that).

On the object level, while there's a frame of mind I can get into where I can see how this looks plausible to someone, I'm inclined to think that this post is more of a reductio of some set of unstated assumptions that lead to its conclusion, rather than a compelling argument for that conclusion. I don't have the time right now to think about what exactly those unstated assumptions are or where they go wrong, but I think that would be important. When I get some more time, if I remember, I may come back and think some more about this.

This is basically Christian theology, with permadeath being the state of hell (which is often referred to as the death of the soul), i.e. irreversible loss of value. At the same time it is believed that any amount of suffering in the current reality will be reversed and eternally compensated in heaven.

If there are no such states then all damage is probably not reversible, and lives have a maximum range in time of about 150 years. If suffering allows for more lives, which are equally suffering, and there is nothing to sustain the claim that it will be reduced in someone, then maintaining these lives is pointless. (In the Christian view, there is a transcendent guarantee that, if certain conditions are met in conscious choices, then any amount of experienced suffering will be compensated. In an atheistic worldview, there isn't necessarily such a guarantee for all scenarios.)

It should also be noted that suffering probably limits your ability to save lives. So these hypothetical scenarios are never directly causal, and thus mostly unlikely.

> If one must choose between a permanent loss of human life and some temporary discomfort, it doesn't make sense to prefer the permanent loss of life, regardless of the intensity of the discomfort.

This choice doesn't exist; permanent death is inevitable under known physics. All lifespans are finite because the time the universe will support consciousness is most likely finite, whether because of heat death or the big rip. This finiteness makes your "you save one life, and 7 billion humans suffer for 100 billion years" question not at all obvious. Saving a life is not avoiding death; it is postponing it. Thus, you could rewrite your scenario as: "should I give someone 80 extra years of normal life before they die, if in exchange, instead of dying at their normal time, 7 billion humans are tortured for 100 billion years and then die." Under a Rawlsian veil of ignorance, I would not choose to "save the life" in this scenario. Even if that person survived until the end of the black hole farming era, I probably still wouldn't choose it. There is too much chance that I will end up being one of the tortured. (Though a 1-in-7-billion chance of $10^{40}$ years of life against $10^{11}$ years of torture is pretty tempting on an expected value basis, so I'm not sure.)
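For what it's worth, the expected-value arithmetic in that parenthetical can be spelled out. The specific numbers here are my assumptions (a 1-in-7-billion chance of being the saved person, roughly $10^{40}$ years of life through the black hole farming era, $10^{11}$ years of torture otherwise):

```python
# Veil-of-ignorance gamble: one person in ~7 billion is saved and lives an
# astronomically long time; everyone else is tortured for 1e11 years.
# All numbers are illustrative assumptions, not from the original comment.
p_saved = 1.0 / 7e9
life_years = 1e40       # survive to the end of the black hole farming era
torture_years = 1e11

# Expected person-years, counting torture years as negative:
ev_years = p_saved * life_years - (1.0 - p_saved) * torture_years
assert ev_years > 0  # positive in expectation, hence "pretty tempting"
```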

As others have commented, I also think the reversibility of suffering is a weak point. We do not know how hard it is. It may have the same difficulty level as resurrection. But, if you specify that the finite span of torture happens instead of normal death, you avoid this.

Submission is a fate worse than death, regardless of physical suffering.

Submission is reversible. For example, if you're enslaved by a rogue AGI, there is a chance that it (or some other entity) will release you.

The version of you who recovered from enslavement will be much better off than the version of you who died.

This sounds like dogma specific to the culture you're currently in, not some kind of universal rule. Throughout history many humans lived in slavery (think Rome), and a non-zero percentage greatly enjoyed their lives, and would definitely prefer their lives to being dead. It is still an open question as to what causes positive or negative valence, but submission is probably not a fundamental part of it.