"Whoever saves a single life, it is as if he had saved the whole world."
– The Talmud, Sanhedrin 4:5
It's a beautiful thought, isn't it? Feel that warm glow.
I can testify that helping one person feels just as good as helping the whole world. Once upon a time, when I was burned out for the day and wasting time on the Internet - it's a bit complicated, but essentially, I managed to turn someone's whole life around by leaving an anonymous blog comment. I wasn't expecting it to have an effect that large, but it did. When I discovered what I had accomplished, it gave me a tremendous high. The euphoria lasted through that day and into the night, only wearing off somewhat the next morning. It felt just as good (this is the scary part) as the euphoria of a major scientific insight, which had previously been my best referent for what it might feel like to do drugs.
Saving one life probably does feel just as good as being the first person to realize what makes the stars shine. It probably does feel just as good as saving the entire world.
But if you ever have a choice, dear reader, between saving a single life and saving the whole world - then save the world. Please. Because beyond that warm glow is one heck of a gigantic difference.
For some people, the notion that saving the world is significantly better than saving one human life will be obvious, like saying that six billion dollars is worth more than one dollar, or that six cubic kilometers of gold weighs more than one cubic meter of gold. (And never mind the expected value of posterity.) Why might it not be obvious? Well, suppose there's a qualitative duty to save what lives you can - then someone who saves the world, and someone who saves one human life, are just fulfilling the same duty. Or suppose that we follow the Greek conception of personal virtue, rather than consequentialism; someone who saves the world is virtuous, but not six billion times as virtuous as someone who saves one human life. Or perhaps the value of one human life is already too great to comprehend - so that the passing grief we experience at funerals is an infinitesimal underestimate of what is lost - in which case scaling up to the entire world changes little.
I agree that one human life is of unimaginably high value. I also hold that two human lives are twice as unimaginably valuable. Or to put it another way: Whoever saves one life, it is as if they had saved the whole world; whoever saves ten lives, it is as if they had saved ten worlds. Whoever actually saves the whole world - not to be confused with pretend, rhetorical saving of the world - it is as if they had saved an intergalactic civilization.
Two deaf children are sleeping on the railroad tracks, the train speeding down; you see this, but you are too far away to save them. I'm nearby, within reach, so I leap forward and drag one child off the railroad tracks - and then stop, calmly sipping a Diet Pepsi as the train bears down on the second child. "Quick!" you scream to me. "Do something!" But (I call back) I already saved one child from the train tracks, and thus I am "unimaginably" far ahead on points. Whether I save the second child or not, I will still be credited with an "unimaginably" good deed. Thus, I have no further motive to act. Doesn't sound right, does it?
Why should it be any different if a philanthropist spends $10 million on curing a rare but spectacularly fatal disease which afflicts only a hundred people planetwide, when the same money has an equal probability of producing a cure for a less spectacular disease that kills 10% of 100,000 people? I don't think it is different. When human lives are at stake, we have a duty to maximize, not satisfice; and this duty has the same strength as the original duty to save lives. Whoever knowingly chooses to save one life, when they could have saved two - to say nothing of a thousand lives, or a world - they have damned themselves as thoroughly as any murderer.
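For concreteness, the arithmetic behind that comparison can be made explicit. This is only a back-of-the-envelope sketch using the essay's own hypothetical figures; since both cures are stipulated to have an equal probability of success, the probability cancels out of the comparison:

```python
# The essay's hypothetical: the same $10 million buys, with equal
# probability of success, a cure for either of two diseases.
budget = 10_000_000  # dollars, identical for either option

rare_disease_deaths = 100                # rare disease: ~100 people planetwide
common_disease_deaths = 0.10 * 100_000   # less spectacular disease: 10% of 100,000

print(common_disease_deaths)                         # 10000.0 lives at stake
print(common_disease_deaths / rare_disease_deaths)   # 100.0 - a hundredfold difference
```

The success probability being equal, the second option is expected to save a hundred times as many lives for the same money.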
Addendum: It's not cognitively easy to spend money to save lives, since the cliché methods that instantly leap to mind don't work or are counterproductive. (I will post later on why this tends to be so.) Stuart Armstrong also points out that if we are to disdain the philanthropist who spends life-saving money inefficiently, we should, to be consistent, disdain even more those who could spend money to save lives but don't.
The choice between an "averagist" and "totalist" model of optimal human welfare is a tough one. The averagist wants to maximize average happiness (or some such measure of welfare); the totalist wants to maximize total happiness. Both lead to unfortunate reductio arguments. Average human welfare can be improved by eliminating everyone who is below average. This process can be repeated successively until we have only one person left, the happiest man in the world. The totalist would proceed by increasing the population to the very edge of carrying capacity, so that everyone was just a small increment above being so unhappy that they would commit suicide.
Neither model seems to match with our intuitions. As Eliezer has frequently warned, anyone who may accidentally or intentionally create a super-intelligent Artificial Intelligence that might take over the world had better beware of it adopting one of these extremes, even if the intention is to program it to be beneficent.
I'm sure philosophers have considered these questions for centuries. I'd be curious to know if there is a principled model for optimal human happiness which does not conflict so violently with our moral instincts. Not to privilege our instincts unjustifiably; it's possible that the AI might be right to adopt one of the views above, and we are wrong, with our muddled human thinking. I would feel better if there were a nice, consistent and relatively simple model which did not lead to a seemingly horrific outcome.
Except that if the people being eliminated still believe their lives are worth living, then you cause them disutility by violating their preference to survive. It also causes everyone else disutility, because they don't want other people killed, and because they start worrying about themselves or their families dying should they become unhappy. It also eliminates the future possibility of the killed people's lives improving.