Today's post, One Life Against the World, was originally published on 18 May 2007. A summary (taken from the LW wiki):

Saving one life and saving the whole world provide the same warm glow. But, however valuable a life is, the whole world is billions of times as valuable. The duty to save lives doesn't stop after the first saved life. Choosing to save one life when you could have saved two is as bad as murder.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Scope Insensitivity, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

17 comments

The premise that human lives can be treated in straightforward arithmetical terms (e.g., that two lives are twice as valuable as one, and twelve are three times as valuable as four, and so on) seems to me to lead to some disquieting places.

Specifically, it seems that any time we can definitely save two lives by killing one person, we ought to do so without hesitation, or at least seriously consider it. Yes, there is damage done by the killing—grieving loved ones, the loss of a good chef—but if the value of a human life is as high as we tend to think it is, it probably outweighs that damage. If six people will almost certainly survive with organ transplants, and will almost certainly die without, and the only match is the taxi driver outside, then get her on the operating table ASAP. If any of the sick people are paramedics, or if the taxi driver tends not to pay her credit card bills, then let us move all the quicker.

The only barrier to such behavior would be a demand for a more thorough and methodical inquiry into the precise value of a human life, the bearing of personal factors on that value (age, health, quality of life), and the overall effect of whether particular people live or die (a person might be more worth saving if they are working on a cure for a deadly disease, but also if they are a pillar of the community whose death would tend to cause depression in those around them). And if this barrier is the only thing standing in our way, it would seem that we ought to be doing everything we can to overcome it, so that we can get started on the proactive business.

Nitpick: most utilitarians would refuse to harvest the taxi driver's organs, because bad things happen when people don't trust doctors.

But yeah, that's pretty much what we think (see the trolley problem). Utilitarians view "Should I kill one to save two?" as a choice between one death and two deaths, which is pretty straightforward. Whether you have blood on your hands isn't relevant - your feelings of guilt aren't worth one human life.

And refusing linear aggregation is disquieting as well. The sick child pleads for medication, and you rush to pay for it - then someone tells you "There are a million healthy children over there" and you say "Okay then" and go buy yourself a laptop.
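To make the aggregation point concrete (a minimal illustration of my own, not from the original comment; the logarithmic rule is just one arbitrary stand-in for a non-linear aggregation): under linear aggregation, the value of n lives is

$$V_{\text{linear}}(n) = n \cdot v,$$

so the sick child is worth v no matter how many healthy children exist elsewhere. Under a non-linear rule such as

$$V_{\text{log}}(n) = v \cdot \ln n,$$

saving one more child against a background of a million healthy ones adds only

$$\Delta V = v \cdot \left[\ln(10^6 + 1) - \ln(10^6)\right] \approx v \times 10^{-6},$$

which is how "Okay then" and the laptop become the consistent response.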

I have a related question, as one still new to LessWrong: are there existing sequences on the philosophy behind/connected to utilitarianism, by which I mean the notion that human lives, or life in general, has value? I assume there is either a sequence regarding this, or else a consensus which is generally accepted by the readers of this site (a consensus which, I hope, is nevertheless written out somewhere).

I've decided I really don't like a lot of ethics thought-experiments. Making people contemplate a horrible but artificially constrained scenario gets them to react strongly, but the constraints placed on the thought-experiment block us from using it to reason about almost everything that actually motivates that strong reaction.

Part of a real-world reason not to push someone in front of a runaway trolley to stop it is that it might not work. The person might fight back; the trolley's brakes might not actually be failing; the person might not be heavy enough to stop the trolley. But the trolley problem requires that we set these practical considerations aside and consider exactly two world-branches: kill the one person, or kill the five.

Another part of a real-world reason not to push someone in front of a runaway trolley is that other people might do bad things to you and your loved ones because you did it. You might go to jail, lose your job, be unable to provide for your kids, be dumped by your spouse or partner. If you're a doctor who saves lives every week, it would be pretty evil for you to throw your career away for the chance of saving a few people on a trolley track. If you're working on existential risks, your getting put in jail might literally cost us the world. But the trolley problem doesn't ask us to think about those consequences, just the immediate short-term ones: kill the one person, or kill the five.

In other words, the trolley problem doesn't ask us to exercise our full wisdom to solve a tricky situation. It asks us to be much stupider than we are in ordinary life; to blank out most of the likely consequences; to ignore many of the strongest and most worthwhile reasons that we might have to do (or refrain from doing) something. The only way the suggestion to reduce moral decision-making to "5 deaths > 1 death" can sound even remotely reasonable is to turn off most of your brain.

(I should add: My friend with a master's degree in philosophy thinks I'm totally missing the point of the distinction between ethical philosophy and applied morality.)