I'm slowly working through a bunch of philosophical criticisms of consequentialism and utilitarianism, kicked off by this book: https://www.routledge.com/Risk-Philosophical-Perspectives/Lewens/p/book/9780415422840 (which I don't think is good enough to actually recommend)

One common thread is complaints that utilitarianism and consequentialism give incorrect answers in specific cases.  One such case is the topic of this question: when evaluating potential harms, how should we decide between a potential harm that's the result of someone's agency (their own beliefs, decisions, and actions) and a potential harm from an outside source (e.g. imposed by the state)?

I'm open to gedanken experiments to illustrate this, but for now I'll use something dumb and simple.  You can save one of two people; saving them means giving them 100% protection from this specific potential harm, all else being equal.

  • Person A has entered into a risky position by their own actions, after deciding to do so based on their beliefs.  They are currently at a 10% chance of death (with the remainder 90% nothing happens).
  • Person B has been forced into a risky position by their state, which they were born into and have not been allowed to leave.  They are currently at an X% chance of death (with the remainder nothing happens).

Assume that Persons A and B have approximately the same utility ahead of them (QALYs, etc.).  The point of the question is to find a specific, quantifiable ratio for trading off agency against utility (in this case mortality).

For what values of X would you choose to save Person B?
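To make the tradeoff concrete, here is a minimal sketch (my own framing, in Python; the weight w_agency and the function prefer_saving_b are illustrative, not drawn from any established decision procedure) that reduces the whole question to a single discount on self-imposed risk:

```python
# A minimal sketch: treat the agency/utility tradeoff as one multiplier
# w_agency applied to harms the person chose to risk themselves.
# w_agency = 1.0 recovers plain expected-value reasoning; w_agency < 1.0
# discounts self-imposed risk relative to risk imposed from outside.

def prefer_saving_b(x_percent: float, w_agency: float = 1.0,
                    p_a_percent: float = 10.0) -> bool:
    """Return True if saving Person B averts more (weighted) expected harm.

    x_percent:   Person B's chance of death (imposed by the state).
    p_a_percent: Person A's chance of death (self-imposed), 10% in the question.
    w_agency:    weight on self-imposed risk; the "ratio" the question asks for.
    """
    expected_harm_a = (p_a_percent / 100) * w_agency  # self-imposed risk, discounted
    expected_harm_b = x_percent / 100                 # imposed risk, full weight
    return expected_harm_b > expected_harm_a

# With w_agency = 1.0 this reduces to "save B whenever X > 10".
# With w_agency = 0.5 (self-imposed risk counts half), B is saved whenever X > 5.
print(prefer_saving_b(x_percent=12))               # True
print(prefer_saving_b(x_percent=7, w_agency=0.5))  # True
```

On this framing, "for what values of X would you save B" collapses into "what value of w_agency do you endorse": B is saved whenever X > 10 · w_agency.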

I'm interested in things like natural experiments that would show how current decision systems or philosophies answer this question.  I am also interested in people's personal takes.

3 Answers

JBlack

Superficially (which is all this simplified scenario permits), X > 10 suffices for me. In any real scenario my threshold for X will change, because other things are never equal.

Ustice

I think the real question is: do they want help? Person A feels like they are making a personal choice, while B is likely feeling stressed and might be more willing to accept help.

If the risk is low for me, I might explain the danger that I see and ask if they would like help. I don't believe I can model their motivations accurately enough to really predict their choices. B might feel a civic pride and feel confident in their choice, and A might feel like this risky action is their only way to accomplish some very important goal, but when presented with an alternative, might choose differently.

Ultimately I think that utilitarianism fails to provide an adequate answer here, because there is no objective measure of utility. Without getting the other person’s perspective, we are essentially making an arbitrary decision. We just don’t know how to weigh the possible outcomes from the perspective of the person being saved.

There is a reason why Give Directly is successful: they give the people being helped the agency to find solutions to the problems in their lives, which they are best placed to prioritize. We can guess, but our answer is going to have a certain degree of error.

Without their input we may be robbing them of their agency with our meddling. With their input, I think it is likely that the problem completely dissolves.

Slapstick

I think the example is confounded by the fact that it's measuring different assumptions about how to value agency against each other, while also trying to measure how utility should be weighed.

Someone might consider impositions of safety differently than they'd consider impositions of risk.

Also, are all the actors involved making their decisions based on a world where there's a good chance someone will swoop in and save them? Is there 1 guardian angel for every 2 people in this world? People make decisions in part based on expectations of intervening forces, and those intervening forces in turn factor such expectations into their decisions to intervene.

Ideally these sorts of thought experiments are supposed to refine some sort of consistent approach, but any consistency changes the expectations of the actors involved, which changes their behaviour, which means the approach may need adjustment.

2 comments

Assume that Persons A and B have approximately the same utility ahead of them (QALYs, etc.).

This feels like one of those philosophical thought experiments that asks you to imagine a situation that in the real world has some very strong associations, but then pretends (and asks you to pretend) not to take those associations into account.

In the real world, I would expect that ceteris paribus the person who just took a big risk is going to take more risks and die earlier.

On a related note, are people going to take more risks if we live in a society that eliminates costs of risk-taking? (In practice, this often depends on the specifics of the risks and costs in question!) I feel motivated by social custom and vague intuition to punish the risk-taker, and these motivations probably have a semi-rigorous foundation in consequentialist reasoning on the societal level. But are these social associations another thing the thought experiment is asking us not to take into account?

You can save one of two people;

Can you communicate with the other one?