I'm slowly working through a bunch of philosophical criticisms of consequentialism and utilitarianism, kicked off by this book: https://www.routledge.com/Risk-Philosophical-Perspectives/Lewens/p/book/9780415422840 (which I don't think is good enough to actually recommend)
One common thread is the complaint that utilitarianism and consequentialism give incorrect answers in specific cases. One such case is the topic of this question: when evaluating potential harms, how should we weigh a potential harm that results from someone's own agency (their beliefs, decisions, and actions) against a potential harm from an outside source (e.g. one imposed by the state)?
I'm open to gedanken experiments to illustrate this, but for now I'll use something dumb and simple: you can save one of two people, and saving a person means giving them 100% protection from this specific potential harm, all else being equal.
- Person A has entered into a risky position by their own actions, after deciding to do so based on their beliefs. They currently face a 10% chance of death (with the remaining 90%, nothing happens).
- Person B has been forced into a risky position by their state, which they were born into and have not been allowed to leave. They currently face an X% chance of death (with the remainder, nothing happens).
Assume that Persons A and B have approximately the same utility ahead of them (QALYs, etc.). The point of the question is to find a quantifiable ratio for trading off agency against utility (in this case, mortality risk).
For what values of X would you choose to save Person B?
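To make the baseline concrete: a plain expected-utility consequentialist saves whichever person faces the higher chance of death, so Person B iff X > 10. One crude way to parameterize the agency question is a discount factor applied to self-chosen risk, so the threshold becomes X > 10 × (discount). Here's a minimal sketch in Python, where the linear form and the `agency_discount` parameter are my own assumptions for illustration, not anything from a standard theory:

```python
# Sketch: threshold for saving Person B under a linear "agency discount".
# Assumption (hypothetical): harms a person chose for themselves are
# discounted by a factor 0 <= agency_discount <= 1, while externally
# imposed harms count at full weight.

def save_person_b(x_percent: float,
                  a_percent: float = 10.0,
                  agency_discount: float = 1.0) -> bool:
    """Return True if we should save Person B (externally imposed risk).

    x_percent       -- Person B's chance of death, imposed by the state
    a_percent       -- Person A's chance of death, self-chosen (10% here)
    agency_discount -- weight on self-chosen risk; 1.0 recovers plain
                       expected-utility consequentialism, 0.0 ignores
                       self-imposed risk entirely
    """
    # Expected deaths averted by each rescue, with A's risk discounted.
    value_of_saving_a = agency_discount * a_percent / 100.0
    value_of_saving_b = x_percent / 100.0
    return value_of_saving_b > value_of_saving_a

# Plain consequentialism (agency_discount = 1.0): save B only when X > 10.
assert save_person_b(x_percent=11.0)
assert not save_person_b(x_percent=9.0)
# With a 50% discount on self-chosen risk, the threshold drops to X > 5.
assert save_person_b(x_percent=6.0, agency_discount=0.5)
```

On this framing, answering "for what X do you save B?" is equivalent to picking your agency_discount: the threshold is X = 10 × agency_discount, and the pure consequentialist answer is just the special case agency_discount = 1.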
I'm interested in things like natural experiments that would show how current decision systems or philosophies answer this question. I am also interested in people's personal takes.
This feels like one of those philosophical thought experiments that asks you to imagine a situation that in the real world has some very strong associations, but then pretends (and asks you to pretend) not to take those associations into account.
In the real world, I would expect that ceteris paribus the person who just took a big risk is going to take more risks and die earlier.
On a related note, are people going to take more risks if we live in a society that eliminates the costs of risk-taking? (In practice, this often depends on the specifics of the risks and costs in question!) I feel motivated by social custom and vague intuition to punish the risk-taker, and these motivations probably have a semi-rigorous foundation in consequentialist reasoning at the societal level. But are these social associations another thing the thought experiment is asking us not to take into account?
Can you communicate with the other one?