Circular Altruism vs. Personal Preference

by Vladimir_Nesov, 26th Oct 2009

Suppose there is a diagnostic procedure that catches a relatively rare disease with absolute precision. If left untreated, the disease is fatal, but once diagnosed it's easily treatable (I suppose there are some real-world approximations). The diagnostic involves an uncomfortable procedure and an inevitable loss of time. At what a priori probability of having the disease would you decide not to take the test, leaving the outcome to chance? Say, you decide it's 0.0001%.
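The indifference point can be framed as a simple expected-cost comparison. A minimal sketch, assuming hypothetical disutility numbers (the specific values below are illustrative, not from the post; only the resulting 0.0001% threshold matches the text):

```python
# Take the test when the expected harm of skipping it
# (probability of disease * harm of dying untreated) exceeds the fixed
# discomfort of the test. The disutility values are assumed for illustration.
harm_of_death = 10_000_000  # disutility of dying untreated (arbitrary units)
harm_of_test = 10           # disutility of the discomfort and lost time

def should_take_test(p_disease):
    return p_disease * harm_of_death > harm_of_test

# Indifference threshold: the probability at which the two harms are equal.
threshold = harm_of_test / harm_of_death
print(threshold)               # 1e-06, i.e. 0.0001%
print(should_take_test(2e-6))  # True: above the threshold
print(should_take_test(5e-7))  # False: below the threshold
```

Any pair of harm values in the same ratio gives the same 0.0001% cutoff; the decision depends only on how the discomfort compares to the value of life.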

Enter timeless decision theory. Your decision to take or not take the test may as well be considered a decision for the whole population (let's also assume you are typical and everyone is similar in this decision). By deciding to personally not take the test, you've decided that most people won't take the test, and thus, for example, with 0.00005% of the population having the condition, about 3000 people will die. While the personal tradeoff is fixed, this number obviously depends on the size of the population.
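The population-level arithmetic can be checked directly, using the post's own figures (a population of 6 billion, prevalence of 0.00005%):

```python
# Deaths if nobody takes the test: everyone with the (fatal, untreated)
# condition dies. Figures are taken from the post.
population = 6_000_000_000
prevalence = 0.00005 / 100  # 0.00005% of the population has the condition

deaths_if_nobody_tests = population * prevalence
print(int(deaths_if_nobody_tests))  # 3000
```

Note that the prevalence (0.00005%) sits below the personal threshold (0.0001%), which is exactly why everyone individually declines the test.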

It seems like a horrible thing to do, making a decision that results in 3000 deaths. Thus, taking the test seems like a small personal sacrifice for this gift to others. Yet this is circular: everyone would be thinking that, reversing their decision solely to help others, not benefiting personally. Nobody benefits.

Obviously, together with the 3000 lives saved, there is the discomfort of 6 billion people taking the test, and that harm is also part of the outcome chosen by the decision. If everyone personally prefers not to take the test, then inflicting the opposite on the whole population is only so much worse.

Or is it? What if you care more about other people's lives in proportion to their comfort than you care about your own life in proportion to your own comfort? How can caring about other people be in exact harmony with caring about yourself? It may be that you prefer other people to take the test even though you don't want to take it yourself, and that this is the position of the whole population. What is the right thing to do then? What wins: personal preference, or this "circular altruism", a preference about other people that not a single person accepts for oneself?

If altruism wins, then it seems that the greater the population, the less personal preference should matter, and the more the structure of altruistic preference takes over personal decision-making. The person disappears, with everyone going through the motions of implementing the perfect play for their ancestral spirits.

P.S. This thought experiment is an example of Pascal's mugging at closer to real-world scale. As with specks, I assume that there is no opportunity cost in lives from taking the test. The experiment also confronts the utilitarian analysis, in which caring about other people depends on the structure of the population, comparing that criterion against personal preference.