Suppose that a disease, or a monster, or a war, or something, is killing people. And suppose you only have enough resources to implement one of the following two options:

    1. Save 400 lives, with certainty.

    2. Save 500 lives, with 90% probability; save no lives, 10% probability.

    Most people choose option 1. Which, I think, is foolish; because if you multiply 500 lives by 90% probability, you get an expected value of 450 lives, which exceeds the 400-life value of option 1. (Lives saved don’t diminish in marginal utility, so this is an appropriate calculation.)

    “What!” you cry, incensed. “How can you gamble with human lives? How can you think about numbers when so much is at stake? What if that 10% probability strikes, and everyone dies? So much for your damned logic! You’re following your rationality off a cliff!”

    Ah, but here’s the interesting thing. If you present the options this way:

    1. 100 people die, with certainty.

    2. 90% chance no one dies; 10% chance 500 people die.

    Then a majority choose option 2. Even though it’s the same gamble. You see, just as a certainty of saving 400 lives seems to feel so much more comfortable than an unsure gain, so too, a certain loss feels worse than an uncertain one.
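    The equivalence is just arithmetic; here is a minimal sketch in Python (expected deaths out of 500, using the numbers above):

```python
# Check that the two framings above are the same gamble,
# measured in expected deaths out of 500.

# Framing A: save 400 for certain  vs.  save 500 w.p. 0.9, none w.p. 0.1
deaths_a1 = 500 - 400                       # 100 certain deaths
deaths_a2 = 0.9 * (500 - 500) + 0.1 * 500   # 50 expected deaths

# Framing B: 100 die for certain  vs.  no one dies w.p. 0.9, 500 die w.p. 0.1
deaths_b1 = 100
deaths_b2 = 0.9 * 0 + 0.1 * 500

assert deaths_a1 == deaths_b1 and deaths_a2 == deaths_b2  # same gamble
```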

    You can grandstand on the second description too: “How can you condemn 100 people to certain death when there’s such a good chance you can save them? We’ll all share the risk! Even if it was only a 75% chance of saving everyone, it would still be worth it—so long as there’s a chance—everyone makes it, or no one does!”

    You know what? This isn’t about your feelings. A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain’s feelings of comfort or discomfort with a plan. Does computing the expected utility feel too cold-blooded for your taste? Well, that feeling isn’t even a feather in the scales, when a life is at stake. Just shut up and multiply.

    A googol is 10^100—a 1 followed by one hundred zeroes. A googolplex is an even more incomprehensibly large number—it’s 10^googol, a 1 followed by a googol zeroes. Now pick some trivial inconvenience, like a hiccup, and some decidedly untrivial misfortune, like getting slowly torn limb from limb by sadistic mutant sharks. If we’re forced into a choice between either preventing a googolplex people’s hiccups, or preventing a single person’s shark attack, which choice should we make? If you assign any negative value to hiccups, then, on pain of decision-theoretic incoherence, there must be some number of hiccups that would add up to rival the negative value of a shark attack. For any particular finite evil, there must be some number of hiccups that would be even worse.

    Moral dilemmas like these aren’t conceptual blood sports for keeping analytic philosophers entertained at dinner parties. They’re distilled versions of the kinds of situations we actually find ourselves in every day. Should I spend $50 on a console game, or give it all to charity? Should I organize a $700,000 fundraiser to pay for a single bone marrow transplant, or should I use that same money on mosquito nets and prevent the malaria deaths of some 200 children?

    Yet there are many who avert their gaze from the real world’s abundance of unpleasant moral tradeoffs—many, too, who take pride in looking away. Research shows that people distinguish “sacred values,” like human lives, from “unsacred values,” like money. When you try to trade off a sacred value against an unsacred value, subjects express great indignation. (Sometimes they want to punish the person who made the suggestion.)

    My favorite anecdote along these lines comes from a team of researchers who evaluated the effectiveness of a certain project, calculating the cost per life saved, and recommended to the government that the project be implemented because it was cost-effective. The governmental agency rejected the report because, they said, you couldn’t put a dollar value on human life. After rejecting the report, the agency decided not to implement the measure.

    Trading off a sacred value against an unsacred value feels really awful. To merely multiply utilities would be too cold-blooded—it would be following rationality off a cliff . . . But altruism isn’t the warm fuzzy feeling you get from being altruistic. If you’re doing it for the spiritual benefit, that is nothing but selfishness. The primary thing is to help others, whatever the means. So shut up and multiply!

    And if it seems to you that there is a fierceness to this maximization, like the bare sword of the law, or the burning of the Sun—if it seems to you that at the center of this rationality there is a small cold flame—

    Well, the other way might feel better inside you. But it wouldn’t work.

    And I say also this to you: That if you set aside your regret for all the spiritual satisfaction you could be having—if you wholeheartedly pursue the Way, without thinking that you are being cheated—if you give yourself over to rationality without holding back, you will find that rationality gives to you in return.

    But that part only works if you don’t go around saying to yourself, “It would feel better inside me if only I could be less rational.” Should you be sad that you have the opportunity to actually help people? You cannot attain your full potential if you regard your gift as a burden.


    The first publication of this post is here.


    Nice post! Utilitarianism definitely has its points. The trick of course is assigning values to such things as hiccups and shark attacks...

    Assuming this is a one-off again:

    If I care about an individual in the group of 500, say myself or my wife or whatever, I'd want you to pick 2 in either case. Option 1 gives the individual a 20% chance to die (100 of the 500 die), while option 2 gives the individual only a 10% chance to die (the 10% chance that everyone dies).

    This is a bit more complicated than the simple math suggests, though; a lot of factors come into play. Let me tweak it slightly: you're in a space colony of 500 and you have to decide what to do about a problem. You have two choices, with the same odds. Choice 1: 100 colonists die. Choice 2: 90% odds everyone is saved, but 10% odds the colony is wiped out.

    From the perspective of someone interested in maintaining the longevity of the colony, shouldn't I take choice 1 in either case? Yes, it has 50 fewer expected lives saved, but the 10% chance of total destruction down choice 2 is an *unacceptable* fail-state. The colony can recover from a 20% population hit, but not from being entirely destroyed.

    Or to put it even more simply: would you sacrifice 20% of the human population to eliminate a 10% chance of total extinction of the species?
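    The two perspectives in the colony example can be written out as a short sketch (the numbers are the comment's own; the variable names are mine):

```python
# Scoring the same gamble two ways: an individual colonist's
# survival odds vs. the colony's extinction risk.
POP = 500

# Choice 1: 100 colonists die for certain.
p_die_1 = 100 / POP      # 0.2 chance of death for any given colonist
p_extinct_1 = 0.0        # the colony survives for sure

# Choice 2: 90% everyone lives, 10% everyone dies.
p_die_2 = 0.10
p_extinct_2 = 0.10

# The individual colonist is safer under choice 2 ...
assert p_die_2 < p_die_1
# ... but anyone who treats extinction as an unacceptable fail-state
# prefers choice 1, the only option with zero extinction risk.
assert p_extinct_1 < p_extinct_2
```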

    "Lives saved don’t diminish in marginal utility", as you have said, but maybe hiccups do? A single person in a group of 10 hiccuppers is not as unfortunate as a lone hiccupper standing with 9 other people who don't have hiccups. So even if the total negative utility of 10 hiccuppers is worse than that of one hiccupper, it's not 10 times worse.

    Since the utility function doesn't have to be a linear function of the number of hiccuppers (it only has to be monotonic), there is no reason it can't be bounded, forever smaller (in absolute value) than the value of a single human life.
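    A bounded, monotonic disutility of that kind is easy to exhibit; in this sketch the saturation curve and both constants are made-up illustrations, not anything from the post:

```python
import math

VALUE_OF_A_LIFE = 1_000_000.0  # made-up scale for illustration

def hiccup_disutility(n: int) -> float:
    """Disutility of n hiccuppers: grows with n but saturates at 1.0."""
    return 1.0 - math.exp(-n / 10.0)

# Monotonic: more hiccuppers is always worse ...
assert hiccup_disutility(100) > hiccup_disutility(10)
# ... yet bounded forever below the value of a single life.
assert hiccup_disutility(10**9) < VALUE_OF_A_LIFE
```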

    Trading off a sacred value against an unsacred value feels really awful.

    But the feeling isn't the only issue. There's a rational defence of sacredness: Schelling fences and so on.

    You mentioned that if you assign any negative value to hiccups, you inadvertently fix a real number that can be compared to the negative value of seemingly incomparable situations, and scaled by a number of people, where intuitively, no matter how many people you take, hiccups aren't going to amount to horrible deaths or things of that sort.

    Have you considered using mathematical ordinals instead of real numbers? I remember you mentioned them at some point in one of your articles. Schematically, if we assign the number ω or above to truly horrible events and ordinary real numbers to minor inconveniences, the two can still be compared, yet no finite number of minor inconveniences will ever add up to one truly horrible event.

    EDIT:

    Real numbers are not defined as ordinals (at least not directly, in any way I'm familiar with), but the idea works the same: you could use natural numbers instead, which are well defined as ordinals. If I'm allowed to be informal, though, I'd rather keep real or rational numbers, since I don't see how it really matters.

    Say we have a treatment for curing hiccups, or some other inconvenience, maybe even all medical inconveniences. We have done all the research and experiments and concluded that the treatment is perfectly safe. Except there is no such thing as "certainty" in Bayesianism, so we must still allocate a tiny probability to the event that our treatment kills a patient: say, a one-in-a-googol chance. The expected utility of the treatment will now have a −ω component in it (weighted by that tiny probability), which far outweighs any positive utility gained from the treatment, which only cures inconveniences. A mere real number cannot overcome the negative ω, no matter how small the probability of that ω is, nor how much you multiply the positive utility of curing the inconveniences.
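    The dominance can be sketched with lexicographic pairs standing in for the ordinal tiers; this is a toy model (the function name and the numbers are mine, for illustration), where the first slot represents multiples of ω and the second ordinary real-valued utility:

```python
# Lexicographic utilities as pairs (omega_tier, real_tier).
# Python compares tuples lexicographically, which mimics the
# "no finite amount of the second tier outweighs the first" rule.

def expected_utility(p_death: float, cure_value: float):
    # Expected omega-tier loss first, real-valued gain second.
    return (-p_death, cure_value)

treatment = expected_utility(p_death=1e-100, cure_value=1e9)
do_nothing = expected_utility(p_death=0.0, cure_value=0.0)

# The one-in-a-googol death risk dominates any finite gain,
# so the treatment is always rejected:
assert do_nothing > treatment
```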

    I see the problem. I wonder if anyone has already delved into this and tried formalizing it using ordinal numbers. It would be an interesting read; I definitely need to think about this more.

    If the googolplex number of hiccups is one per human, as in "each of these googolplex humans available across the countless parallel many-worlds will suffer a single extra hiccup in their whole life", then I feel like it's just... noise? So not actually worth anything. (Assuming the annoying omnipotent Omega, who keeps showing up to test our rationality in rather sadistic ways, assures us there won't be any ripple effect from the hiccups, so no billions of pilots crashing planes during landing and so on.)

    Inflicting a smaller number of people with more hiccups each would eventually reach a point where it becomes something real, and then we'd have to take the deal.

    I see a problem here, the main one with treating 90% × 500 as simply equal to a certain 450: our world is probabilistic, and we will keep facing many such choices. If we agree to the hiccups once, then, with everyone already hiccupping, next time we would add grains of sand, then blind spots in the eye, and so on, until in the end it is no longer just noise, even spread across a large number of people. In one phrasing it is "kill a billion people and the remaining six billion become perfect"; in another it is "kill a billion and the lives of the remaining six billion become extremely unpleasant."