Just an occasional reminder that if you value something so much that you don’t want to destroy it for nothing, then you’ve got to put a finite dollar value on it. Things just can’t be infinitely more important than other things, in a world where possible trades weave everything together. A nice illustration from Arbital:

An experiment in 2000, from a paper titled “The Psychology of the Unthinkable: Taboo Trade-Offs, Forbidden Base Rates, and Heretical Counterfactuals”, asked subjects to consider the dilemma of a hospital administrator named Robert:

Robert can save the life of Johnny, a five year old who needs a liver transplant, but the transplant procedure will cost the hospital $1,000,000 that could be spent in other ways, such as purchasing better equipment and enhancing salaries to recruit talented doctors to the hospital. Johnny is very ill and has been on the waiting list for a transplant but because of the shortage of local organ donors, obtaining a liver will be expensive. Robert could save Johnny’s life, or he could use the $1,000,000 for other hospital needs.

The main experimental result was that most subjects got angry at Robert for even considering the question.

After all, you can’t put a dollar value on a human life, right?

But better hospital equipment also saves lives, or at least one hopes so. It’s not like the other potential use of the money saves zero lives.

Let’s say that Robert has a total budget of $100,000,000 and is faced with a long list of options such as these:

  • $100,000 for a new dialysis machine, which will save 3 lives
  • $1,000,000 for a liver for Johnny, which will save 1 life
  • $10,000 to train the nurses on proper hygiene when inserting central lines, which will save an expected 100 lives

Now suppose (this is a supposition we’ll need for our theorem) that Robert does not care at all about money, not even a tiny bit. Robert only cares about maximizing the total number of lives saved. Furthermore, we suppose for now that Robert cares about every human life equally.

If Robert does save as many lives as possible, given his bounded money, then Robert must behave like somebody assigning some consistent dollar value to saving a human life.

We should be able to look down the long list of options that Robert took and didn’t take, and say, e.g., “Oh, Robert took all the options that saved more than 1 life per $500,000 and rejected all options that saved less than 1 life per $500,000; so Robert’s behavior is consistent with his spending $500,000 per life.”

Alternatively, if we can’t view Robert’s behavior as being coherent in this sense (if we cannot make up any dollar value of a human life such that Robert’s choices are consistent with that dollar value), then it must be possible to move around the same amount of money in a way that saves more lives.

In particular, suppose there is no dollar value such that you took all of the opportunities to save a life for less than that amount and none of the opportunities to save a life for more (ignoring complications with lives only being available at a given price in bulk). Then there is at least one pair of opportunities where you could swap one that you took for one that you didn’t take and save more lives, or at least save the same number of lives and keep more money, which at least in a repeated game like this seems likely to save more lives in expectation.
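To make that concrete, here is a minimal sketch in Python. The option list and budget are hypothetical (loosely based on the toy list above), and the greedy ranking by lives-per-dollar is only a stand-in for the exact knapsack optimum, ignoring the “bulk” complication just mentioned:

```python
# Sketch: if Robert spends a fixed budget to maximize lives saved, his choices
# look like they follow a consistent dollar-per-life threshold.

def choose_interventions(options, budget):
    """options: list of (name, cost, lives). Greedily take the best
    lives-per-dollar options that still fit in the remaining budget."""
    ranked = sorted(options, key=lambda o: o[2] / o[1], reverse=True)
    chosen, remaining = set(), budget
    for name, cost, lives in ranked:
        if cost <= remaining:
            chosen.add(name)
            remaining -= cost
    return chosen

# Hypothetical options: (name, cost in dollars, expected lives saved).
options = [
    ("dialysis machine", 100_000, 3),
    ("liver for Johnny", 1_000_000, 1),
    ("central-line hygiene training", 10_000, 100),
]

taken = choose_interventions(options, budget=1_050_000)

# Every option cheaper per life than some threshold was taken, and every
# option more expensive per life was rejected.
worst_taken = max(c / l for n, c, l in options if n in taken)
best_rejected = min((c / l for n, c, l in options if n not in taken),
                    default=float("inf"))
print("taken:", sorted(taken))
print(f"consistent with valuing a life somewhere between "
      f"${worst_taken:,.0f} and ${best_rejected:,.0f}")
```

With this particular budget the sketch takes the hygiene training and the dialysis machine but not the liver, so its behavior is consistent with any dollar value of a life between roughly $33,000 and $1,000,000.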

I used to be more feisty in my discussion of this idea:

Another alternative is just to not think about it. Hold that lives have a high but finite value, but don’t use this in naughty calculative attempts to maximise welfare! Maintain that it is abhorrent to do so. Uphold lots of arbitrary rules, like respecting people’s dignity and beginning charity at home and having honour and being respectable and doing what your heart tells you. Interestingly, this effectively does make human life worthless; not even worth including in the calculation next to the whims of your personal emotions and the culture at hand.

9 comments

Here's a strategy that places infinite value on life.

List all the available interventions that increase life. Can you buy them all? If yes, do so. If not, check all possible combinations of purchases for the combination that provides the maximum total life. If multiple combinations are tied for maximizing total life, pick the cheapest.

Is this how people spend money when the life they're saving is their own?
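A rough Python sketch of the brute-force strategy described in the comment above; the interventions and budget are made up, and exhaustive search over combinations is only workable for short lists:

```python
from itertools import combinations

def max_life_then_cheapest(interventions, budget):
    """interventions: list of (cost, lives). If everything is affordable, buy
    it all; otherwise search all affordable combinations for the one that
    saves the most lives, breaking ties by lower total cost."""
    if sum(c for c, _ in interventions) <= budget:
        return list(interventions)
    best, best_key = [], (0, -float("inf"))  # (lives, -cost)
    for r in range(1, len(interventions) + 1):
        for combo in combinations(interventions, r):
            cost = sum(c for c, _ in combo)
            lives = sum(l for _, l in combo)
            if cost <= budget and (lives, -cost) > best_key:
                best, best_key = list(combo), (lives, -cost)
    return best

# Made-up (cost, lives) interventions with a budget too small for all three.
print(max_life_then_cheapest([(100_000, 3), (1_000_000, 1), (10_000, 100)],
                             budget=150_000))
# -> [(100000, 3), (10000, 100)]
```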

Just an occasional reminder that if you value something so much that you don’t want to destroy it for nothing, then you’ve got to put a finite dollar value on it.

The multiple negatives in this were hard for me to interpret. Is "don't want to destroy for nothing" "want to preserve at any cost", or "want to ensure we get something for destroying"? I presume the first, but that's not the naive reading of the sentence.

In any case, I agree with the premise, and usually hear it framed as "you can't value multiple things infinitely".  "human lives" is not a valid infinity target (because obviously that prevents any rational decision-making between two unpleasant options), but one specific life may be.

Edit: on further reflection, I'm surprised that I forgot to mention my preferred way of thinking about value: in humans (and perhaps in general decision mechanisms), it's always relative, and generally marginal.  It's about the comparison of two (or more) differences from the status quo.  "Value" (in these examples and context) is only defined in terms of decisions and tradeoffs - it just doesn't matter what you value or how much you value it if you can have everything.  The valuation only comes into play when you have to give up something you value to get something you value more.

There's a debate between Tyler Cowen and philosopher Agnes Callard around valuing human lives with a number. Tyler Cowen starts by saying that it's actually a complicated issue but that having some bounds that depend on circumstances is useful. Agnes Callard then says that you don't need to put any value on human lives at all to make practical tradeoffs because you can think about obligations. 

After hearing that exchange, the position that you have to put monetary values on human lives to be able to make good decisions seems questionable to me, and naive due to a lack of knowledge of alternative ways to make those decisions.

Thinking about social actors making promises to each other, and then having obligations to deliver on those promises, is a valid model for thinking about who makes an effort to save people's lives.

It seems like "agent X puts a particular dollar value on human life" might be ambiguous between "agent X acts as though human lives are worth exactly N dollars each" and "agent X's internal thoughts explicitly assign a dollar value of N to a human life". I wonder if that's causing some confusion surrounding this topic. (I didn't watch the linked video.)

If the care chooser is maximising expected life-years, i.e. favours saving the young, then he can be "inconsistent".

Also, if you had enough money you would just buy all the options. The only options that get dropped are those that get in the way of options that save more lives.

If somebody truly considered a life to be worth some dollar amount and their budget was increased, then they would still pick the same options but end up with a bigger pile of cash. Given that this "considered worth" floats with the budget, I doubt that treating it as a dollar amount is a good idea.

The opportunity cost is still real, though. If you use a doctor to save someone, that means you can't simultaneously use them to save another. So in assigning a doctor or equipment, you are simultaneously saving and endangering lives. And being too stubborn about your decision-making just means the endangerment side of things grows without bound.


I'm a bit confused by "you" in the claim. If we're talking about individuals, I'm not at all sure one must put a monetary value on something. That seems to suggest that nominal values are more accurate than the real, subjective personal values those monetary units represent.

 

In a more general setting, markets for instance, I think a stronger case can be made, but for any given individual I am not certain it would be required.

Broadening it out more, I think the strongest case is where multiple people are trying to work together toward some ends.

Robert must behave like somebody assigning some consistent dollar value to saving a human life.

Note that this number provides only a lower bound on Robert's revealed preference regarding the trade-off and that it will vary with the size of the budget.

One could imagine an alternative scenario where there is a fluctuating bankroll (perhaps with a fixed rate of increase — maybe even a rate proportional to its current size) and possible interventions are drawn sequentially from some unknown distribution. In this scenario Robert can't just use the greedy algorithm until he runs out of budget (modulo possible knapsack considerations), but would have to model the distribution of interventions and consider strategies such as "save no lives now, invest the money, and save many more lives later".
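A toy simulation of that kind of sequential setup, with an entirely made-up intervention distribution and growth rate; it only illustrates how one might compare simple "buy if cheaper than X per life" policies when unspent money compounds, not what the right policy is:

```python
import random

def simulate(threshold, rounds=200, bankroll=1_000_000, growth=0.02, seed=0):
    """Each round one intervention arrives with a random cost per life.
    Buy it whenever it costs at most `threshold` per life and is affordable;
    unspent money then grows by `growth`. Returns total lives saved."""
    rng = random.Random(seed)
    lives = 0
    for _ in range(rounds):
        cost_per_life = rng.lognormvariate(11, 1)  # made-up distribution
        if cost_per_life <= threshold and bankroll >= cost_per_life:
            bankroll -= cost_per_life
            lives += 1
        bankroll *= 1 + growth
    return lives

for threshold in (30_000, 100_000, 1_000_000):
    avg = sum(simulate(threshold, seed=s) for s in range(50)) / 50
    print(f"buy below ${threshold:>9,} per life: ~{avg:.0f} lives saved")
```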

Saving lives now may be worth more than saving lives later.