If I'm running a simulation of a bunch of happy humans, it's entirely possible for me to completely avoid your penalty term just by turning the simulation off and on again every so often to reset all of the penalty terms. And if that doesn't count because they're the same exact human, I can just make tiny modifications to each person that negate whatever procedure you're using to uniquely identify individual humans. That seems like a really weird thing to morally mandate that people do, so I'm inclined to reject this theory.

Furthermore, I think the above case generalizes to imply that killing someone and then creating an entirely different person with equal happiness is morally positive under this framework, which goes against a lot of the things you say in the post. Specifically:

It avoids the problem with both totalism and averagism that killing a person and creating a different person with equal happiness is morally neutral.

It seems to do so, but in the opposite direction from the one I think you want it to.

It captures the intuition many people have that the bar for when it's good to create a person is higher than the bar for when it's good not to kill one.

I think this is just wrong since, as I said, it incentivizes killing people and replacing them with other people to reset their penalty terms.


I do agree that whatever measure of happiness you use should include the extent to which somebody is bored, or tired of life, or whatnot. That being said, I'm personally of the opinion that killing someone and creating a new person with equal happiness is morally neutral. I think one of the strongest arguments in favor of that position is that turning a simulation off and then on again is the only case I can think of where you can actually do that without any other consequences, and that seems quite morally neutral to me. Thus, personally, I continue to favor Solomonoff-measure-weighted total hedonic utilitarianism.

ErickBall: Say you are in a position to run lots of simulations of people, and you want to allocate resources so as to maximize the utility generated. Of course, you will design your simulations so that $h \gg h_0$. Because all the simulations are very happy, $u_0$ is now presumably smaller than $h\tau_0$ (perhaps much smaller). Your simulations quickly overcome the $u_0$ penalty and start rapidly generating net utility, but the rate at which they generate it immediately begins to fade. Under your system it is optimal to terminate these happy people long before their lifespan reaches the natural lifespan $\tau_0$, and reallocate the resources to new happy simulations. The counterintuitive result occurs because this system assigns most of the marginal utility to occur early in a person's life.

No. It is sufficient that $u_0 \ge h_0\tau_0$ (notice it is $h_0$ there, not $h$) for killing + re-creating to be net bad.
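
A quick sanity check of this bound, assuming the DNT formula from the post below: replacing a person resets the age-dependent penalty term, which can save at most $h_0\tau_0$ in total (the penalty rate climbs from $0$ to $h_0$ on a timescale of order $\tau_0$), while the fresh birth term costs $u_0$. So

$$\Delta u \le -u_0 + h_0\tau_0 \le 0 \quad \text{whenever } u_0 \ge h_0\tau_0,$$

no matter how happy the person or eir replacement is.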

Vanessa Kosoy: I don't understand why the underlying thing I want is "variety of happy experience" (only)? How does "variety of happy experience" imply killing a person and replacing em by a different person is bad? How does it solve the repugnant conclusion? How does it explain the asymmetry between killing and not-creating? If your answer is "it shouldn't explain these things because they are wrong" then, sorry, I don't think that's what I really want. The utility function is not up for grabs.

Deminatalist Total Utilitarianism

by Vanessa Kosoy · 5 min read · 16th Apr 2020 · 49 comments


TLDR: I propose a system of population ethics that arguably solves all major paradoxes, and formalizes some intuitions that prior systems have not. AFAIK this is original but I'm not an expert so maybe it's not.

This idea was inspired by a discussion in the "EA Corner" discord server, and specifically by a different proposal by discord user LeoTal.

Foreword: The Role of Utilitarianism

I don't believe utilitarianism is literally the correct system of ethics (I do endorse consequentialism). Human values are complex and no simple mathematical formula can capture them. Moreover, ethics is subjective and might differ between cultures and between individuals.

However, I think it is useful to find simple approximate models of ethics, for two reasons.

First, my perspective is that ethics is just another name for someone's preferences, or a certain subset of someone's preferences. The source of preferences is ultimately intuition. However, intuition only applies to the familiar. You know that you prefer strawberries to lemons, just because. This preference is obvious enough to require no analysis. But, when you encounter the unfamiliar, intuition can fail. Is it better to cure a village of malaria or build a new school where there is none? Is it better to save one human or 1000 dogs? Can a computer simulation be worthy of moral consideration? What if it's homomorphically encrypted? Who knows?

In order to extrapolate your intuition from the familiar to the unfamiliar, you need models. You need to find an explicit, verbal representation that matches your intuition in the familiar cases, and that can be unambiguously applied to the unfamiliar case. And here you're justified in applying some Occam's razor, despite the complexity of values, as long as you don't shave away too much.

Second, in order to cooperate and coordinate effectively we need to make our preferences explicit to each other and find a common ground we can agree on. I can make private choices based on intuition alone, but if I want to convince someone or we want to decide together which charity to support, we need something that can be communicated, analyzed and debated.

This is why I think questions like population ethics are important: not as a quest to find the One True Formula of morality, but as a tool for decision making in situations that are unintuitive and/or require cooperation.

Motivation

The system I propose, deminatalist total utilitarianism (DNT), has the following attractive properties:

  • It avoids the repugnant conclusion to which regular total utilitarianism falls prey, at least the way it is usually pictured.
  • It avoids the many problems of average utilitarianism: the incentive to kill people of below-average happiness, the incentive to create people of negative happiness (that want to die) when the average happiness is negative, the sadistic conclusion and the non-locality (good and evil here depends on moral patients in the Andromeda galaxy).
  • It avoids the problem with both totalism and averagism that killing a person and creating a different person with equal happiness is morally neutral.
  • It captures the intuition many people have that the bar for when it's good to create a person is higher than the bar for when it's good not to kill one.
  • It captures the intuition some people have that they don't want to die but they would rather not have been born.
  • It captures the intuition some people have that sometimes living too long is bad (my dear transhumanist comrades, please wait before going for rotten tomatoes).

Formalism

I am going to ignore issues of time discounting and spatial horizons. In an infinite universe, you need something of the sort or your utilitarian formulas make no sense. However, this is, to first approximation, orthogonal to population ethics (i.e. the proper way to aggregate between individuals). If you wish, you can imagine everything constrained to your future light-cone with exponential time discount.

I will say "people" when I actually mean "moral patients". This can include animals (and does include some animals, in my opinion).

The total utility of a universe is a sum over all people that ever lived or will live, like in vanilla totalism. In vanilla totalism, the contribution of each person is

$$\int_{t_b}^{t_d} h(t)\, dt$$

where $t_b$ is the time of birth, $t_d$ is the time of death, and $h(t)$ is happiness at time $t$ (for now we think of it as hedonistic utilitarianism, but I propose a preference utilitarianism interpretation later).

On the other hand, in DNT the contribution of each person is

$$u_{\text{DNT}} := -u_0 + \int_{t_b}^{t_d} \left( h(t) - h_0\left(1 - e^{-\frac{t - t_b}{\tau_0}}\right) \right) dt$$

  • $\tau_0$ is a constant with dimensions of time that should probably be around typical natural lifespan (at least in the case of humans).
  • $h_0$ is a constant with dimensions of happiness, roughly corresponding to the minimal happiness of a person glad to have been born (presumably a higher bar than not wanting to die).
  • $u_0$ is a constant with dimensions of utility that it's natural (but not obviously necessary) to let equal $u_0 = h_0\tau_0$.

Of course the function $1 - e^{-\frac{t - t_b}{\tau_0}}$ was chosen merely for the sake of simplicity; we can use a different function instead, as long as it is monotonically increasing from $0$ at $t = t_b$ to $1$ as $t \to \infty$, on a timescale of order $\tau_0$.
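
To make the formalism concrete, here is a minimal numerical sketch of the per-person DNT contribution, assuming the formula above (the function and parameter names are illustrative, not part of the proposal):

    import numpy as np

    def dnt_utility(happiness, lifespan, tau0=80.0, h0=1.0, u0=None, steps=10_000):
        """Per-person DNT contribution: -u0 + integral of h(t) - h0*(1 - exp(-t/tau0)).

        `happiness` maps age (time since birth) to happiness, `lifespan` is t_d - t_b,
        and by default u0 = h0 * tau0, the "natural" choice suggested in the post.
        """
        if u0 is None:
            u0 = h0 * tau0  # natural (but not obviously necessary) choice
        t = np.linspace(0.0, lifespan, steps)
        integrand = happiness(t) - h0 * (1.0 - np.exp(-t / tau0))
        return -u0 + np.trapz(integrand, t)

    # Example: constant happiness h = 0.5 * h0 over a 60-"year" life.
    print(dnt_utility(lambda t: np.full_like(t, 0.5), lifespan=60.0))

With the default $u_0 = h_0\tau_0$ this example comes out negative, in line with the analysis below: happiness below $h_0$ does not justify creating a person, even over a long life.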

Analysis

For a person of constant happiness $h$ and lifespan $\tau$, we have

$$u_{\text{DNT}} = -u_0 + (h - h_0)\,\tau + h_0\tau_0\left(1 - e^{-\tau/\tau_0}\right)$$

It is best to live forever when $h \ge h_0$, it is best to die immediately when $h \le 0$, and in between it is best to live a lifespan of

$$\tau^* = \tau_0 \ln\frac{h_0}{h_0 - h}$$
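
A short check of the optimal-lifespan formula (my derivation, not stated explicitly in the original): differentiate the expression above with respect to $\tau$ and set the derivative to zero,

$$\frac{du_{\text{DNT}}}{d\tau} = h - h_0\left(1 - e^{-\tau/\tau_0}\right) = 0 \quad\Longrightarrow\quad e^{-\tau^*/\tau_0} = \frac{h_0 - h}{h_0} \quad\Longrightarrow\quad \tau^* = \tau_0 \ln\frac{h_0}{h_0 - h}.$$

For $0 < h < h_0$ the derivative is positive before $\tau^*$ and negative after, so this is indeed the optimum; for $h \ge h_0$ it never turns negative, and for $h \le 0$ it is never positive.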

We can imagine the person in the intermediate case becoming "tired of life". Eir life is not good. It is not so bad as to warrant an earlier suicide, but there is only so much of it ey can take. One could argue that this should already be factored into "happiness", but well, it's not like I actually defined what happiness is. More seriously, perhaps rather than happiness it is better to think of $h$ as the "quality of life". Under this interpretation, the meaning of the second correction in DNT is making explicit a relationship between quality of life and happiness.

Creating a new person is good if and only if $u_{\text{DNT}} > 0$, that is

$$(h - h_0)\,\tau + h_0\tau_0\left(1 - e^{-\tau/\tau_0}\right) > u_0$$

Creating a new immortal person is good when $h > h_0$ and bad when $h < h_0$. Assuming $u_0 = h_0\tau_0$, creating a person of happiness below $h_0$ is bad even if ey have optimal lifespan. Lower values of $u_0$ produce lower thresholds (there is no closed formula).
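
Since there is no closed formula, here is a small numerical sketch (illustrative names, assuming the expressions above) that finds the creation threshold by bisection on the best-case utility:

    import math

    def best_case_utility(h, tau0=80.0, h0=1.0, u0=40.0):
        """DNT utility of a constant-happiness person living eir optimal lifespan."""
        if h <= 0:
            return -u0       # optimal lifespan is zero
        if h >= h0:
            return math.inf  # optimal lifespan is infinite
        tau_star = tau0 * math.log(h0 / (h0 - h))
        return -u0 + (h - h0) * tau_star + h0 * tau0 * (h / h0)

    def creation_threshold(tau0=80.0, h0=1.0, u0=40.0, iters=60):
        """Lowest constant happiness at which creating a person is net good."""
        lo, hi = 0.0, h0
        for _ in range(iters):
            mid = (lo + hi) / 2
            if best_case_utility(mid, tau0, h0, u0) > 0:
                hi = mid
            else:
                lo = mid
        return hi

    # With u0 = h0*tau0 the threshold is h0 itself; lower u0 gives a lower threshold.
    print(creation_threshold(u0=80.0), creation_threshold(u0=40.0))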

DNT is a form of total utilitarianism, so we also get a form of the repugnant conclusion. For vanilla utilitarianism the repugnant conclusion is: for any given population, there is a better population in which every individual only barely prefers life over death. On the other hand, for DNT, the "repugnant" conclusion takes the form: for any given population, there is a better population in which every individual is only barely glad to have been born (but prefers life over death by a lot). This seems to me much more palatable.
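
One way to see where the shifted threshold comes from (my gloss on the formula above): for a long-lived person the DNT contribution grows like

$$(h - h_0)\,\tau + O(1),$$

so the marginal person in the limiting population needs happiness just above $h_0$ (barely glad to have been born) rather than just above $0$ (barely preferring life over death).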

Finally, again assuming $u_0 \ge h_0\tau_0$, killing a person and replacing em by a person of equal happiness is always bad, regardless of the person's happiness. If $u_0 = h_0\tau_0$ exactly, then the badness of it decreases to zero as the age of the victim during the switch goes to infinity. For larger $u_0$ it retains badness even in the limit.
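
To spell this out (my own derivation from the formula above): if the victim is killed at age $a$ and the replacement lives out the remaining span $L$ with the same happiness, the happiness integrals cancel and only the birth term and the age-dependent correction change, giving

$$\Delta u = -u_0 + h_0\tau_0\left(1 - e^{-a/\tau_0}\right)\left(1 - e^{-L/\tau_0}\right).$$

This is negative for every $a$ and $L$ whenever $u_0 \ge h_0\tau_0$; it tends to zero only in the limit $a, L \to \infty$ with $u_0 = h_0\tau_0$ exactly, and stays bounded away from zero when $u_0 > h_0\tau_0$.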

From Happiness to Preferences

I believe that preference utilitarianism is often a better model than hedonistic utilitarianism, when it comes to adults and "adult-like" moral patients (i.e. moral patients that can understand and explain eir own preferences). What about DNT? We can take the perspective that it corresponds to "vanilla" total preference utilitarianism, plus a particular model of human preferences.

Some Applications

So far, DNT made me somewhat more entrenched in my beliefs that

  • Astronomical waste is indeed astronomically bad, because of the size of future supercivilization. Of course, in averagism the argument still has weight because of the high quality and long lifespan of future civilization.

  • Factory farming is very bad. Although some may argue factory farmed animals have $h > 0$, it is much harder to argue they have $h > h_0$.

DNT made me somewhat update away from

  • The traditional transhumanist perspective that living forever is good unless life quality is extremely bad. Of course, I still believe living forever is good when life quality is genuinely good. (Forever, or at least very long: I don't think we can fully comprehend the consequences of immortality from our present perspective.)

  • The belief that the value of humanity so far has been net positive in terms of terminal values. I think a random person in the past had a rather miserable life, and "but ey didn't commit suicide" is no longer so convincing. However, I'm still pretty sure it is instrumentally net positive because of the potential of future utopia.

DNT might also be useful for thinking about abortions, although it leaves open the thorny question of when a fetus becomes a moral patient. It does confirm that abortions are usually good when performed before this moment.
