Some arguments against total utilitarianism

Utility monsters

"Utility monster" is a term coined by Robert Nozick as part of a critique of total utilitarianism. He explains it as follows:

"Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose ... the theory seems to require that we all be sacrificed in the monster's maw, in order to increase total utility." —Robert Nozick, as quoted in the Wikipedia article on utility monsters

Some utilitarian frameworks, like average utilitarianism and maximin, are designed to prevent utility monsters from dominating the calculation. It is an unappealing thought that the well-being of billions of people could be sacrificed to appease a single being. Yet total utilitarianism seemingly promotes that exact outcome.

The "repugnant conclusion"

Another charge leveled against total utilitarianism is the so-called repugnant conclusion. The repugnant conclusion argues that the "optimal" state of a population (under total utilitarianism) will contain extraordinarily vast numbers of people living barely positive lives, rather than a smaller number of people living highly positive lives.[1] This, too, is unappealing: I would much rather live one of ten billion joyous lives than one of ten trillion barely positive lives.

Imagining these arguments at human scale

Neither of these criticisms demonstrates a failure of total utilitarianism; rather, each demonstrates a failure of the critic to fully comprehend their own hypothetical.

Utility monsters

Consider again the quote on utility monsters:

the theory seems to require that we all be sacrificed in the monster's maw, in order to increase total utility.

This sounds pretty bad. But if it were written by a pig raised for human consumption, it would truthfully describe a situation most humans seem fine with.

If you are not one of the humans who endorse animal slaughter, you might still find a hypothetical where this makes sense. For example, suppose two hundred wealthy people each forgo a $60 video game purchase and give that money to a poor individual to pay for basic necessities. If $12,000 allows a single person to afford much better food, shelter, and medical care, that is probably more valuable than allowing 200 people to buy a new video game.

Perhaps this makes sense to you with small numbers, but in examples with astronomical numbers it still doesn't seem right. If sacrificing your life (losing 10,000 utility) serves the blood god (who gains 20,000 utility), would it make sense for billions of humans to do this? I contend yes.[2] Instinctively it seems wrong, but this is because it is hard to imagine that the blood god really gains 20,000 utility from every life. No entity could possibly gain more from my death than I lose! In real-world situations this feeling is correct, but to engage with the hypothetical, we must accept that the blood god actually does gain 20,000 utility, no matter how foreign it seems.
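To make the totals explicit, here is a minimal sketch of the stipulated arithmetic. The utility figures come from the hypothetical above; the population count is an assumed round number for illustration, not part of the original scenario:

```python
# Stipulated figures from the blood-god hypothetical (not real measurements).
utility_lost_per_human = 10_000    # what each sacrificed human loses
utility_gained_by_god = 20_000     # what the blood god gains per sacrifice
humans = 8_000_000_000             # assumed: roughly the current human population

# Total utilitarianism sums utility across all beings, so each sacrifice
# is a net gain of 20,000 - 10,000 = 10,000 utility, however it is distributed.
net_change_per_sacrifice = utility_gained_by_god - utility_lost_per_human
total_net_change = humans * net_change_per_sacrifice
print(net_change_per_sacrifice, total_net_change)
```

The point of the sketch is only that the sum is positive by construction: if you accept the stipulated numbers, the aggregate increases no matter how many humans are sacrificed.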

Another component of my revulsion is that I identify more with the human being sacrificed than with the abstract, unknown blood god. But that does not mean the "small but many" perspective deserves more weight. It is possible to identify with the blood god, if you are the one demanding small sacrifices for your own large gain. Meat-eaters are the blood god at every meal, for example. Or if a poor person claims that they would benefit more from marginal money than many rich people would, they are probably right. These wrong-seeming hypotheticals seem very reasonable when framed at human scale.

The repugnant conclusion

The very name, "repugnant conclusion", describes disgust. Imagining vast numbers of people living barely positive lives seems bleak. I would consider myself to lead a positive life, so when imagining something just above breakeven, it feels quite negative. But to perceive these lives as bad is incorrect: the hypothetical stipulates that the vast numbers of people are living positive lives.

In addition, it is difficult to multiply this small positive amount by such a vast number of people. Human intuition fails at astronomical scales.[3] But just because we cannot fully imagine something does not mean it does not fully happen.

Like with utility monsters, I find that who I identify with also influences my moral intuition. I am a living person, so I identify most with the very happy people in a small world and the slightly happy people in a large world. However, this leads me to ignore the unlived lives of the small world. If I identify with one of the trillions of people who never live in the small world, I can see why I might prefer the larger world.[4]

The throughline

I don't write this to promote total utilitarianism. It is an often-useful framework for viewing the world, but it faces valid criticisms beyond these two.

I also don't write this (solely) to criticize these particular arguments. They are both wrong, for the reasons I have laid out here, but what is interesting is that their flaws seem to stem from the same mistakes in reasoning: imagining one side of a tradeoff as worse than the hypothetical states, and identifying with one side of a tradeoff more than the other.

Another philosophical argument I did not include is a common complaint against hedonism. Some arguments against a maximally pleasant life[5] are written as if these hedonistic lives either are not maximally pleasant (ignoring the stipulation of the hypothetical), or should benefit others' lives but don't (even though the hypothetical life is stipulated to maximize total pleasure).[6]

It reminds me of this quote from The Ones Who Walk Away From Omelas:

"Yet I repeat that these were not simple folk, not dulcet shepherds, noble savages, bland utopians. They were not less complex than us. The trouble is that we have a bad habit, encouraged by pedants and sophisticates, of considering happiness as something rather stupid." —Ursula Le Guin

This passage notes how difficult it is to convey that a society is utopian without the reader assuming it couldn't possibly be that good. If it is defined to be a utopia, then it must be utopian, even if you are skeptical of utopia in practice.

In summary

An argument about a hypothetical must engage with the world as it is defined, not as it sounds like it would be in practice. If a hypothetical claims a life is pleasurable, you must imagine it as exactly that pleasurable, or else fail to engage productively. Certain (somewhat popular) philosophical arguments about hypotheticals fail to engage with the scenarios as they are posed, and that is a major flaw.


  1. It may not be true that the optimal world is composed of people living barely positive lives. But that is irrelevant to the facet of the argument I will criticize here. ↩︎

  2. If we ignore the possibility for humans to create future utility not contained in that 10,000 number. ↩︎

  3. Though, interestingly, it feels easier to imagine trillions of humans if I consider them as billions of humans on thousands of earths. ↩︎

  4. One could even say the people in the small, joyous world are utility monsters! ↩︎

  5. Which usually theorize this life to be drugged or deluded. ↩︎

  6. It is fair as an argument against hedonism for individuals—that spending your whole life in a drugged bliss shirks your moral duty to help others. But if everyone is living lives with maximal total pleasure, then this argument would fail. ↩︎
