Here on LW, we know that if you want to do the most good, you shouldn't diversify your charitable giving. If a specific charity makes the best use of your money, then you should assign your whole charitable budget to that organization. In the unlikely case that you're a millionaire and the recipient couldn't make full use of all your donations, then sure, diversify. But most people couldn't donate that much even if they wanted to. Also, if you're trying to buy yourself a warm fuzzy feeling, diversification will help. But then you're not trying to do the most good, you're trying to make yourself feel good, and you'd do well to have separate budgets for those two.

We also know about scope insensitivity - when three groups of subjects were asked how much they'd pay to save 2,000 / 20,000 / 200,000 migrating birds from drowning in oil, they answered $80, $78, and $88, respectively. "How much do I value it if 20,000 birds are saved from drowning in oil?" is a hard question, and we're unsure what to compare it with. So we substitute an easier and clearer question for it - "how much emotion do I feel when I think about birds drowning in oil?" And that question doesn't take the number of birds into account, so the number gets mostly ignored.

So diversification and scope insensitivity are two biases that people have, and which affect charitable giving. What others are there?

According to Baron & Szymanska (2010), a number of heuristics involved in giving lead to various biases. Diversification we are already familiar with. The others are Evaluability, Average vs. Marginal Benefit, Prominence, Parochialism, Identifiability, and Voluntary vs. Tax.

The general principle of Evaluability has been discussed on LW before, though not in a charitable context. It is directly related to scope insensitivity, since both stem from the difficulty of judging whether a charitable cause is a worthy one. Suppose that you need to choose between two charities, one dedicated to malaria prevention and the other to treating parasitic worm infections. Which one is a more worthy cause? Or should you instead donate to something else entirely?

Presuming that you don't happen to know about GiveWell's reports on the two charities and haven't studied the topic, you probably have no idea which one is better. But you still need to make a decision, so you look for something to base it on. And one type of information that's relatively easily available for many charities is their overhead: the percentage of their costs that goes to administration, as opposed to actual work. So you might end up choosing the charity with the lowest administration costs, the one which spends the largest share of its money on actual charity work.

If you truly have no other information available, then this might really be the best you can do. But overhead is by itself a bad criterion. Suppose that charities A and B both receive $100. Charity A spends $10 on overhead and saves 9 human lives with the remaining $90. Charity B, on the other hand, allocates $25 toward its operating expenses, but manages to save 15 lives with the remaining $75. B is clearly better, but the overhead heuristic tells us to give to A.
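
To make the arithmetic explicit, here is a minimal sketch using the hypothetical figures above; the two rankings point in opposite directions:

```python
# Hypothetical figures from the example above: what each charity does with
# a $100 donation.
charities = {
    "A": {"donation": 100, "overhead": 10, "lives_saved": 9},
    "B": {"donation": 100, "overhead": 25, "lives_saved": 15},
}

for name, c in charities.items():
    overhead_ratio = c["overhead"] / c["donation"]
    cost_per_life = c["donation"] / c["lives_saved"]
    print(f"Charity {name}: {overhead_ratio:.0%} overhead, "
          f"${cost_per_life:.2f} per life saved")

# Charity A: 10% overhead, $11.11 per life saved
# Charity B: 25% overhead, $6.67 per life saved
# The overhead heuristic favors A; counting lives per dollar favors B.
```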

GoodIntents.org also provides a number of other reasons why you shouldn't use overhead as your criterion: the overhead figure is easy to manipulate, and the pressure to keep administration costs low can cause organizations to understaff projects, or to favor programs that are inefficient but have low administration costs. Still, many donors base their decision on the easy-to-evaluate operating costs, rather than on some more meaningful figure.

Average vs. Marginal Benefit. Two charitable organizations provide figures about their effectiveness. Charity A claims to save one life for every 900 dollars donated. Charity B claims to save one life for every 1400 dollars donated. Charity A is clearly the correct choice - right?

Maybe. If Charity A is a large organization, it could be that they're unable to spend the extra money effectively. It could be that the most recent million dollars they've received in donations has actually been dragging down their average, and that they currently need an extra 2000 dollars for each additional life they save. In contrast, charity B might just have paid for most of their fixed costs, and can now leverage each additional donation of 800 dollars into a saved life for a while.
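
As a sketch of the distinction, using the hypothetical numbers above: the advertised figure is an average over all past money, while your donation buys lives at the margin, which can look very different.

```python
# Hypothetical numbers from the example above. The advertised figure is an
# average over all money received so far; a new donation buys lives at the
# margin.
average_cost_per_life = {"A": 900, "B": 1400}    # what the charities report
marginal_cost_per_life = {"A": 2000, "B": 800}   # cost of the *next* life

donation = 2000
for name in ("A", "B"):
    lives = donation / marginal_cost_per_life[name]
    print(f"Charity {name}: reported ${average_cost_per_life[name]}/life, "
          f"but a further ${donation} saves {lives:.1f} additional lives")

# Charity A: reported $900/life, but a further $2000 saves 1.0 additional lives
# Charity B: reported $1400/life, but a further $2000 saves 2.5 additional lives
```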

Information on the marginal benefit of a dollar is often hard to come by, especially since it's in the interest of many well-funded charities to hide this information. But it's still something to keep in mind.

Prominence. People tend to pay attention to a single prominent attribute, or to the attribute they view as the most important. This can often be an effective fast-and-frugal heuristic, but focusing on only one attribute to the exclusion of others may make it difficult or impossible to weigh tradeoffs. It may also cause people to ignore efficiency: if two safety programs differ in cost and in the number of lives saved, people tend to choose the one that saves more lives, even if the difference in lives is small and the difference in cost is large. As a result, they may pay large sums for only a small increase in the amount of good done, even though the extra money would have been better spent elsewhere.

Parochialism is an in-group bias in which people weigh the welfare of their own group more heavily than that of outsiders. In charity, this may show itself as Americans preferring to give to American charities, even if African ones save more lives per dollar. Whether this is truly a bias depends on whether one tries to carry out perfect utilitarianism: if not, preferring to help one's own group first is a question of values, not rationality. On the other hand, if one does strive for pure utilitarianism, then it should not matter where the recipients of aid are located.

Attempting to correct for parochialism might also reduce the total amount of charitable giving, if there are many people whose altruism is limited purely to the in-group. Denied the chance to help the in-group, such people might rather choose not to donate at all.

On the other hand, if US citizens do experience a sense of commitment to tsunami victims in Hawaii, then it might be reasonable to presume that the same cognitive mechanism would affect their commitment to New Zealanders who suffered the same fate. If so, this suggests that parochialism results from cognitive biases. For instance, an American may have an easier time imagining daily life in Hawaii in detail than daily life in New Zealand, and this difference in vividness may affect the amount of empathy they experience.

If one does want to reduce parochialism, there is some evidence that parochialism is greater for harms of inaction than for harms of action. That is, people are reluctant to actively harm outsiders, but far more willing to simply do nothing to help them. If this can be made to seem like an inconsistency, then people might experience a stronger obligation to help outsiders. Parochialism can also be reduced by encouraging people to think of outsiders as individuals, rather than as members of an abstract group. "New Zealanders" might not attract as much empathy as imagining some specific happy family of New Zealanders, essentially no different from a family in any other country.

“Writing about his experiences in the Spanish Civil War, George Orwell tells this story. He had gone out to a spot near the Fascist trenches from which he thought he might snipe at someone. He waited a long time without any luck. None of the enemy made an appearance. Then, at last, some disturbance took place, much shouting and blowing of whistles followed, and a man jumped out of the trench and ran along the parapet in full view. He was half-dressed and was holding up his trousers with both hands as he ran. I refrained from shooting at him. I did not shoot partly because of that detail about the trousers. I had come here to shoot at ‘Fascists’; but a man holding up his trousers isn’t a ‘Fascist’, he is visibly a fellow-creature, similar to yourself, and you don’t feel like shooting at him.”

Identifiability. Aid recipients who are identifiable evoke more empathy than recipients who are not. In one "dictator game" study, where people could choose to give somebody else some amount of money, giving was higher when the recipient was identified by last name. Small et al. (2007) note that people often become entranced with specific, identifiable victims. In 1987, one child, "Baby Jessica", received over $700,000 in donations from the public when she fell into a well near her home in Texas. In 2003, £275,000 was quickly raised for the medical care of a wounded Iraqi boy, Ali Abbas. And in one case, more than $48,000 was contributed to save a dog stranded on a ship adrift in the Pacific Ocean near Hawaii.

From a simple utilitarian perspective, identifiability is a bias. By increasing altruism toward the identifiable victims, it may reduce altruism toward the unidentified ones, who are often the ones most in need of help. On the other hand, it could also increase overall altruism, by making people more willing to incur greater personal costs to help the identifiable victims.

In fact, Small et al. found that teaching people about the identifiability effect makes them less likely to give to identifiable victims, but no more likely to give to statistical victims. So if you see a story about an identifiable victim and kill your impulse to give to them, or experience pleasure from never feeling that impulse in the first place, please take the money you would have donated to the victim if you hadn't known about the effect and actually give it to some worthier cause! The altruism chip jar is a great way of doing this.

Baron & Szymanska suggest an alternative way that might help in channeling the effects of identifiability to good ends: "Victims all have names. The fact that we are aware of one of them is an accident. We could make up names for the others, or even tell ourselves that our donation to some relief fund is going to help someone named Zhang." So if you know rationally that it'd be good to give to a "statistical" cause but are tempted to give to an "identifiable" cause instead, come up with some imaginary person who'd be helped by your "statistical" donation and think of how glad they'd be to receive your aid.

Voluntary vs. Tax. Finally, some people oppose government aid programs supported by taxes, often referred to as "forced charity". I'm inclined to consider this more of a value than a bias, but Baron & Szymanska argue that

In part, the bias against “forced charity” may arise from a belief in freedom, the belief that government should not force us to help others but should, more or less, provide us with services from which we all benefit and pay for collectively, such as roads, military defense, and protection of our property. (Some libertarians would not even go that far.) Insofar as this is true, it may represent a kind of cognitive inconsistency. Some people benefit very little from roads or property protection, so paying taxes for these things is a way of forcing them to sacrifice for the benefit of others. It is a matter of degree.

If we do accept that government aid programs are as morally good as private ones, then that suggests that contributions to political causes that support helpful programs could sometimes be more efficient than direct contributions to the programs themselves. Although the probability of having some effect through political action is very low, the benefits of a successful initiative are potentially very high. Thus the expected utility of donating to the right political campaign might be higher than the expected utility of donating to an actual charity.
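
As a toy illustration of that expected-value comparison (every number below is invented purely for the sake of the arithmetic, not drawn from any real charity or campaign):

```python
# All numbers here are invented purely for illustration.
donation = 1000  # dollars

# Direct charity: assume roughly one life saved per $1000, near-certainly.
ev_direct = 1.0  # expected lives saved

# Political campaign: assume a tiny chance of tipping an initiative that,
# if passed, funds programs saving many lives.
p_success = 1e-4
lives_if_passed = 50_000
ev_political = p_success * lives_if_passed

print(f"Direct donation:    {ev_direct:.1f} expected lives saved")
print(f"Political donation: {ev_political:.1f} expected lives saved")
# Direct donation:    1.0 expected lives saved
# Political donation: 5.0 expected lives saved
```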

References

Baron, J., & Szymanska, E. (2010). Heuristics and biases in charity. In D. Oppenheimer & C. Olivola (Eds.), The science of giving: Experimental approaches to the study of charity (pp. 215–236). New York: Taylor and Francis. http://www.sas.upenn.edu/~baron/papers/charity.pdf

Small, D. A., Loewenstein, G., & Slovic, P. (2007). Sympathy and callousness: The impact of deliberative thought on donations to identifiable and statistical victims. Organizational Behavior and Human Decision Processes, 102, 143–153. http://opim.wharton.upenn.edu/risk/library/J2007OBHDP_DAS_sympathy.pdf

Comments

From a simple utilitarian perspective, identifiability is a bias. By increasing altruism toward the identifiable victims, it may reduce altruism toward the unidentified ones, who are often the ones most in need of help. On the other hand, it could also increase overall altruism, by making people more willing to incur greater personal costs to help the identifiable victims.

So part of what I think is going on here is that giving to statistical charity is a slippery slope. There is no one number that it's consistent to give: if I give $10 to fight malaria, one could reasonably ask why I didn't give $100; if I give $100, why not $1000; and if $1000, why not every spare cent I make? Usually when we're on a slippery slope like this, we look for a Schelling point, but there are only two good Schelling points here: zero and every spare cent for the rest of your life. Since most people won't donate every spare cent, they stick to "zero". I first realized this when I thought about why I so liked Giving What We Can's philosophy of donating 10% of what you make; it's a powerful suggestion because it provides some number between 0% and 100% which you can reach and then feel good about yourself.

Then identifiable charity succeeds not just because it attaches a face to people, but also because it avoids the slippery slope. If we're told we need to donate to save "baby Jessica", it's very easy to donate exactly as much money as is necessary to help save baby Jessica and then stop. The same is true of natural disasters; if there's an earthquake in Haiti, that means we can donate money to Haiti today but not be under any consistency-related obligations to do so again until the next earthquake. If Haiti is just a horrible impoverished country, then there's no reason to donate now as opposed to any other time, and this is true for all possible "now"s.

Feedback appreciated as I've been planning to make a top-level post about this if I ever get time.

There's a quote about this:

Perfect is the enemy of good.

Commonly attributed to Voltaire

It's also a common Russian saying, FWIW. Maybe we ripped it off from Voltaire, though.

In Russian, it is even more blunt - "better is the enemy of good", without the superlative associated with "perfect".

That was Voltaire's original phrasing. http://en.wikipedia.org/wiki/Perfect_is_the_enemy_of_good

Yes, I suppose I mistranslated "luchshiy". Good call.

In Argentina, the Spanish version of this saying ("Lo mejor es enemigo de lo bueno") is often attributed to dictator Juan Perón.

However, let us not lose sight of Yvain's main point, which is not that this sort of slippery slope exists, but that the identifiability heuristic works in part because it avoids it.

Definitive Voltairean wording and source (although Voltaire himself attributes it to an unnamed "Italian sage"):

le mieux est l’ennemi du bien ("the best is the enemy of the good")

I saw somewhere on this site (maybe the quotes page?)

Perfect is the enemy of good enough, and good enough is the enemy of at all.

Upvoted simply because Less Wrong is seriously lacking in discussion of Schelling points and how they're critical components of the way humans think about practical problems.

I do think your hypothesis is plausible, but the reasoning it describes seems too complex. One would think like that only if one cares about being consistent and reflects on that, and only after one has decided that the "I've done my part of the job" excuse is not enough... and it seems improbable that most people think like that.

Also, it seems to me that "help Haiti just this once" is not the same scenario as "help just this person".

Worth testing, though. I guess if you set up a scenario like "Help poor kid X grow up well", a long term goal with kinda-hard-to-predict cost that most people wouldn't be willing to pay all at once, with a specific identifiable subject...

I do think your hypothesis is plausible, but the reasoning it describes seems too complex. One would think like that only if one cares about being consistent and reflects on that, and only after one has decided that the "I've done my part of the job" excuse is not enough... and it seems improbable that most people think like that.

The enormous line of research on cognitive dissonance--see the forced compliance paradigm in particular--indicates the importance of consistency, even when it isn't consciously recognized as such.

Thank you for the link. That really makes Yvain's hypothesis more probable.

If we're told we need to donate to save "baby Jessica", it's very easy to donate exactly as much money as is necessary to help save baby Jessica and then stop.

I have the impression that identifiable cases tend to get far more money than what'd be needed to save them.

Well, this is one data point. After the initial request for help was posted, the requested amount was reached in one day... and some people continued to donate even after the stated goal was reached.

These people also managed to raise much more money than they originally asked for...

It sucks how sometimes I notice a way in which I could be more effective, but don't do anything because I could've in theory done something a long time ago.

Very solid point, and I appreciate it - I immediately identify with it as one of the major reasons I tend not to engage in charitable giving myself, except for those rare occasions where a charity I support is requesting a specific (and small) amount...

So part of what I think is going on here is that giving to statistical charity is a slippery slope. There is no one number that it's consistent to give: if I give $10 to fight malaria, one could reasonably ask why I didn't give $100; if I give $100, why not $1000; and if $1000, why not every spare cent I make? Usually when we're on a slippery slope like this, we look for a Schelling point, but there are only two good Schelling points here: zero and every spare cent for the rest of your life. Since most people won't donate every spare cent, they stick to "zero". I first realized this when I thought about why I so liked Giving What We Can's philosophy of donating 10% of what you make; it's a powerful suggestion because it provides some number between 0% and 100% which you can reach and then feel good about yourself.

There's another option which I think may be better for some people (but I don't know, because it hasn't been much explored). One can stagger one's donations over time (say, on a quarterly basis) and adjust the amount one gives based on how the past donations have felt. It seems like this may locally maximize the amount that one gives, subject to the constraint of avoiding moral burnout.

If one feels uncomfortable with the amount that one is donating because it's interfering with one's lifestyle, one can taper off. On the flip side I've found that donating gives the same pleasure that buying something does: a sense of empowerment. Buying a new garment that one realistically isn't going to wear or a book that one realistically isn't going to read feels good, but probably not as good as donating. This is a pressure toward donating more.
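
(A toy sketch of the staggering idea above; the starting amount and adjustment factors are invented for illustration:)

```python
# A toy sketch of the staggered-donation idea: each quarter, ratchet the
# amount up while it feels comfortable and back off before moral burnout.
# Starting amount and adjustment factors are invented for illustration.
donation = 250.0  # starting quarterly donation in dollars

for quarter in range(1, 5):
    print(f"Quarter {quarter}: donate ${donation:.0f}")
    answer = input("Did that amount feel comfortable? [y/n] ")
    donation *= 1.15 if answer.strip().lower().startswith("y") else 0.85
```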

"On the flip side I've found that donating gives the same pleasure that buying something does: a sense of empowerment."

Hmmm, useful to know. I may have to experiment with this one. I often end up buying stuff simply because the act of purchasing things makes me feel better, and I can't see any reason a small donation to charity wouldn't produce similar results...

This seems very plausible to me.

Here on LW, we know that if you want to do the most good, you shouldn't diversify your charitable giving.

That may not be true when you're not sure what "doing good" means. For example, giving to multiple charities could be considered rational under Bostrom and Ord's Parliamentary Model of dealing with moral uncertainty.
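
(A minimal sketch of one crude reading of that model, assuming each moral theory gets a budget share proportional to one's credence in it. The credences and charity names below are invented, and Bostrom and Ord's actual proposal has the delegates bargain rather than split mechanically:)

```python
# One crude operationalization of the Parliamentary Model: each moral
# theory receives a share of the budget proportional to your credence in
# it, and directs that share to its preferred charity. Credences and
# charity names are invented for illustration.
credences = {"utilitarianism": 0.6, "animal welfare": 0.3, "deontology": 0.1}
preferred_charity = {
    "utilitarianism": "Exciting Futurism Institute",
    "animal welfare": "Fuzzy Animals Foundation",
    "deontology": "Local Pledge Fund",
}

budget = 100.0
allocation = {}
for theory, credence in credences.items():
    charity = preferred_charity[theory]
    allocation[charity] = allocation.get(charity, 0.0) + credence * budget

for charity, amount in sorted(allocation.items(), key=lambda kv: -kv[1]):
    print(f"${amount:.0f} -> {charity}")
# $60 -> Exciting Futurism Institute
# $30 -> Fuzzy Animals Foundation
# $10 -> Local Pledge Fund
```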

I'm almost tempted to see this as a reductio ad absurdum of the Parliamentary Model.

Suppose you had $100 and were splitting it between the Exciting Futurism Institute and the Fuzzy Animals Foundation. Suppose you knew an anonymous benefactor had given $100 to the FAF earlier that year. Suppose you suddenly remember the benefactor was you! Does that mean you now give the $100 to EFI? That seems like bizarre behavior to me.

Why does it seem bizarre? I'm not getting the same feeling...

I guess it seems bizarre because you're changing your behavior in response to a piece of information that tells you nothing about moral philosophy and nothing about the consequences of the behavior. Or is the idea that there are good consequences from timeless cooperation between conflicting selves, or something? But I'm not seeing any gains from trade here, and cooperation isn't Bostrom and Ord's original justification, as far as I know. The original scenario is about an agent whole-heartedly committed to doing the right thing as defined by some procedure he doesn't know the outcome of. And what if you found out the earlier donation had been a pure behavioral tic of a sort that doesn't respond to cooperation? Would you still treat it as though it had been made by you, or would you treat it as though it had been made by something else? If the Parliamentary Model tells you to put 30% of your effort into saving puppies, is it good enough if 30% of your Everett copies put all their effort into it and 70% put none of their effort into it? If so, how much effort should you expend on research into what your parallel selves are currently up to? I'm very confused here, and I'm sure it's partly because I don't understand the parliamentary model, but I'm not convinced it's wholly because of that.

I guess you're right, the Parliamentary Model seems a better model for moral conflict than moral uncertainty. It doesn't affect my original point too much (that it's not necessarily irrational to diversify charitable giving), since we do have moral conflict as well as moral uncertainty, but we should probably keep thinking about how to deal with moral uncertainty.

I think if you apply this reasoning to moral conflict between different kinds of altruism, it becomes a restatement of "purchase fuzzies and utilons separately", except with more idealized assumptions about partial selves as rational strategists. It seems to me that if I'm the self that wants utilons, then "purchase fuzzies and utilons separately" is a more realistic strategy for me to use in that it gives up only what is needed to placate the other selves, rather than what the other selves could bargain for if they too were rational agents. With parliament-like approaches to moral conflict it sometimes feels to me as though I'm stuck in a room with a rabid gorilla and I'm advised to turn into half a gorilla to make the room's output more agenty, when what is really needed is some relatively small amount of gorilla food, or maybe a tranquilizer gun.

You may not be a typical person. Consider instead someone who's conflicted between egoism, utilitarianism, and deontology, where these moralities get more or less influence from moment to moment in a chaotic manner, but maintain a sort of long-term power balance. The Parliamentary Model could be a way for the person to coordinate actions so that he doesn't work against himself.

On a related note, in a previous thread I think you said that certain axioms needed to derive Bayesian probability seemed sort of iffy to you. I was wondering, is it possible to connect Bayes' anthropic weirdness problems to any axioms in particular?

I wrote a post about that. Is it what you're looking for?

Wow, thanks! I'd never seen that post for some reason. (ETA: Apparently I had in fact seen it and remembered the comments, but not the post... scumbag brain.)

(This is another cool post from 2009 that I didn't see until a year ago.)

Which paper was linked in the first sentence of that post? The link is broken now.

Thanks, it's fixed now.

Here on LW, we know that if you want to do the most good, you shouldn't diversify your charitable giving.

If this is so, then why is the Singularity Institute spinning off a separate rationality org? Shouldn't one of rationality or FAI be more important?

To an individual, perhaps; but there are almost certainly people out there who think rationality is important but don't think FAI is important, and thus would be willing to donate to the rationality group but not to SIAI.

While I like the idea of FAI, I'm unconvinced that AGI is an existential threat in the next two or three human generations; but I'm confident that raising the sanity waterline will be of help in dealing with any existential risks, including AGI. Moreover, people who have differing beliefs on x-risk should be able to agree that teaching rationality is of common interest to their concerns.

Diminishing returns from either individual activity may be important on that scale.

I think the rationality spinoff is, perhaps among other things, going to run non-free workshops that will be funded by noncharitable dollars.

OK, that sounds like a pretty good reason.

I'm entirely unconvinced about not diversifying one's giving. If you assume that your algorithm for choosing a charity might be faulty in an exploitable way, the #1 charity may be sufficiently able and motivated to exploit you - having all your money (and the money of anyone who reasons like you) as its reward - but each of the top 5, five times less so.

Let's consider selfish actions, to engage our primarily selfish intelligence. Should you invest in one corporation, the one you deem most effective? The investment-to-payoff scenario matches that of charitable giving rather well, except that you are the beneficiary (and you do care not to invest in something that flops over and goes bankrupt).

Of course it is the case that in investments, and in charitable giving, people diversify for entirely wrong reasons, and perhaps over-diversify. But then, the very same people, when told not to diversify, may well respond by donating less overall, for a lower expected benefit.

Should you invest in one corporation, the one you deem most effective? The investment-to-payoff scenario matches that of charitable giving rather well, except that you are the beneficiary (and you do care not to invest in something that flops over and goes bankrupt).

You have strong reason not to do this anyway because of risk aversion. This is like saying, "Should you serve butter or margarine to your guests? To get a better intuition, consider the selfish version, where you are yourself going to eat either pristine butter, or a container of margarine that has been poisoned with arsenic?"
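
(A minimal sketch, with made-up numbers, of how risk aversion separates the two cases: splitting a budget across two independent ventures raises expected utility when utility is concave in personal wealth, but leaves expected lives saved unchanged when utility is linear in outcomes:)

```python
import math

# Made-up numbers illustrating why risk aversion recommends diversifying
# personal investments but not risk-neutral charitable giving.
p = 0.5         # each venture independently succeeds with probability p
payoff = 100.0  # value produced per unit of budget, if the venture succeeds

def expected_utility(split, utility):
    """EU of putting `split` of a unit budget in venture 1, rest in venture 2."""
    a, b = split, 1.0 - split
    outcomes = [
        (p * p, (a + b) * payoff),    # both succeed
        (p * (1 - p), a * payoff),    # only venture 1 succeeds
        ((1 - p) * p, b * payoff),    # only venture 2 succeeds
        ((1 - p) * (1 - p), 0.0),     # both fail
    ]
    return sum(prob * utility(value) for prob, value in outcomes)

def concave(x):   # risk-averse utility over personal wealth
    return math.log1p(x)

def linear(x):    # risk-neutral utility over lives saved
    return x

for split in (1.0, 0.5):
    print(f"split={split}: risk-averse EU={expected_utility(split, concave):.2f}, "
          f"risk-neutral EU={expected_utility(split, linear):.1f}")
# split=1.0: risk-averse EU=2.31, risk-neutral EU=50.0
# split=0.5: risk-averse EU=3.12, risk-neutral EU=50.0
# Diversifying helps the risk-averse investor but not the expected-lives donor.
```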

If you assume that your algorithm for choosing a charity might be faulty in an exploitable way, the #1 charity may be sufficiently able and motivated to exploit you - having all your money (and the money of anyone who reasons like you) as its reward - but each of the top 5, five times less so.

I agree this is an issue, and that you should take manipulable signals as weaker evidence because of Goodhart's Law. But this effect doesn't automatically dominate. Selecting for good expected value with your best efforts incentivizes efforts to produce signals of value, through real as well as fakeable signals.

Note that GiveWell and friends do not follow your heuristic: the great majority of funds flow to the top charity. They take into account the possibility of faked data (to mess with cost-benefit analysis) in their evaluation process, valuing independent verification, defenses against publication bias, audits, and so forth. But in light of those efforts, they think that the benefits of incentivizing (and assisting) more effective and transparent charities outweigh the risk of incentivizing fakers who can defeat their strong countermeasures.

Excellent article.

A couple of minor points:

  1. Giving $1 to a charity can serve the purpose of stating one's support and endorsement. This is an argument for getting lots of people to give that $1.

  2. Giving "parochially" can help if you have better information about the effectiveness of your donations, e.g. if the money to help the poor is being handled by a neighbor you know well and trust. Of course, this consideration can be dominated by others, like the greater effect of money given to the extremely poor.

But your points are quite correct.

To some extent, we may have heuristics in charity evaluation which support some of these behavioral patterns: heuristics which are adaptive, but which evolved along the grain of our natural cognitive biases in order to protect us from Pascal's Mugging and other types of exploitation.

Also, people mostly want to do things that others are doing, not to do the maximally good things.

Great essay!

The goodintents link is dead.