Much has been written about the idea of optimal philanthropy. Yet, it seems like optimal philanthropy isn't a single claim. Instead, it's a collection of related, but quite distinct, claims that have all been bundled together, much like the Singularity.
Here's the website of GiveWell, and here's the main video introduction for 80,000 Hours, two of the major optimal philanthropy sites. I'll try to break them down into their component claims (written in bold), and also give my views on each of the claims. Some of them are explicitly stated, but others are more implicit, so I definitely welcome feedback if optimal philanthropists feel they disagree with some of the claims as stated.
1. We should evaluate charities according to how efficient they are, along some common metric - for example, number of lives saved per dollar, or existential risk reduction per dollar. We should then encourage charities to be more efficient, and selectively donate to (or otherwise help) the most efficient ones.
This one I support wholeheartedly. It's the main message of GiveWell, and though I have disagreements with their methodology, the basic idea (of marginal utility evaluation) is one that must happen more often. People are way too prone, by default, to donate to the Society for Curing Rare Diseases in Cute Puppies. Much has been written about this in Purchase Fuzzies And Utilons Separately, and other Less Wrong posts.
2. In order to do the most good for our fellow humans, we should start/work for/donate to/otherwise become involved in charitable organizations.
Of course, this is widely believed outside the optimal philanthropy movement. But I think this belief is inherent in many optimal philanthropy claims, and it ought to be examined more critically. It's plausible, but if one looks at the total good done over the last thousand years, the vast majority comes from science and various businesses, not popular causes like (back then) "tithe to the church" or (now, from the 80,000 Hours video) "campaigning against climate change". (Examples: electricity, air travel, enough food, trains, air conditioning...) However, it's also true that much more total effort has been put into for-profit organizations than non-profits. Which one is more efficient per dollar, I don't know, but it's a question worth examining rather than ignoring by default.
3. We should design careers around being able to donate the largest possible amount.
This one I see as highly damaging. Human psychology is such that, in order for a movement to get long-term, voluntary participation by highly capable people, stuff needs to be fun. Less Wrong itself, and the New York Less Wrong meetup group, are two obvious examples. "The more fun we have, the more people will want to join us."
Of course, the primary purpose of a community or activity doesn't have to be fun. Eg., Google doesn't exist for its employees to have fun. But working for Google still is fun, and if it weren't, Google would soon start losing people, become less productive, and ultimately go bankrupt. (Disclosure: I am a former Google intern.)
Writing a donation check can be very useful, but it isn't fun - it violates all the principles of Fun Theory. To go through the list, it isn't novel, doesn't involve tackling new challenges, doesn't engage the senses, doesn't get better over time (if we assume things work well, the marginal utility of dollars donated should go down, not up), doesn't involve long-term personal consequences, doesn't involve freedom of action, doesn't involve personal control over politics (assuming that one isn't personally involved in the charity, which is generally assumed), etc. etc. etc. (I'm referring to the actual act of writing the check here - for earning the money in the first place, see the next claim.)
Not everything in life is fun, nor can it be, at least pre-Singularity. Taking out the garbage isn't fun, but I do it anyway. However, trying to design lives around things that are inherently un-fun will probably lead to bad outcomes.
4. People can donate the largest amount through a traditional "high-earning career", like investment banking.
This one involves, to some extent, the classic American confusion between social class and income. One might think of "lawyer" as a high-earning career, since it's an upper middle class career; you need a graduate degree and dress up in suits. However, lawyer is actually a terrible career from a money-making perspective, and a law degree usually leaves people worse off financially (details here). Investment bankers themselves don't make that much money, except at the top levels (details here, and see here for general analysis of why gross pay isn't money in the bank).
In fact, all things being equal, one would expect a negative correlation between how prestigious a career is and how much money it makes. Prestige is, to some extent, a substitute for money - a musician might happily play for nothing, because being a musician is cool. "Where there's muck, there's brass" - for info on people who made millions doing boring stuff, see the excellent books The Millionaire Next Door and How To Get Rich.
But, even supposing that a "high-earning career" actually pays a lot (eg., partner at a Big Law firm), standard "career tracks" have serious disadvantages, like working insane hours doing unpleasant stuff. They sap what I call human capital and social capital - human capital is your skills, capabilities, and the value you can provide to an organization, while social capital is your network of friends and people who want to work with you. Human capital and social capital are the two critical things one needs to do anything, including world saving; they shouldn't be spent lightly.
5. People are morally responsible for the opportunity costs of their actions.
This is somewhat tricky/ambiguous, so I've deliberately made the wording vague, but the best example I've found is Peter Singer's argument (analyzed here). Singer compares philanthropy to a Trolley Problem. There's a set of train tracks, which a child is lying on, and a train is fast approaching. You're driving a luxury car, and if you drive the car on the tracks, the train will run into the car and save the child. What should you do?
In standard morality, the right thing to do is save the child, even if it means destroying your really expensive car. Indeed, we might socially shame someone who didn't. According to Singer, this means that we should be willing to donate any amount of money less than the price of an expensive car to charity, if it meant saving a life. Not donating to charity would be the same as letting the train run over the kid - murder through inaction.
I haven't figured out in detail what the real moral framework should be, but this argument doesn't work. For one thing, it produces atrocious incentives. Suppose you have a nice, cushy software job, and donate 10% of your income to charity, even though you could easily afford 20%. You work really hard, and a year later, you get another job for twice as much money. If not donating surplus money is morally equivalent to causing whatever bad outcome the donation would prevent, you are now twice as guilty, since the amount you aren't donating (20% vs. 10%) is twice as large. This is despite the fact that the total amount of good done is also twice as large. Why punish an improvement?
Another huge problem is the creation of unbounded obligations. I suspect a lot of thinking is inherently binary - you've either graduated college or you haven't, either paid back the loan or you haven't, either obeyed the rules or you haven't. With this line of argument, there's literally no point at which one can sit back and say, "I've fulfilled my duty to charity - there's nothing more to do". There's (short of FAI) always another child to save. One can never say, "I've met the goal", or even "I've gotten a third of the way to the goal", since the goal of solving all the world's problems is so huge. But if all states of the world - whether they be donating 0%, 10%, or 20% of income - result in 0% total goal fulfillment, then they're all equivalent, at least in some sense. A moral framework should make the good outcome and bad outcome as distinct as possible, not the same.
6. More people interested in doing good should become professional philanthropists.
This one I totally agree with, which might seem odd, given how closely related it is to #3 and #4. However, I think there are two important differences. A professional philanthropist is, typically, someone whose full-time job it is to figure out how to give away their money. But almost always, it's someone who already has lots of money. Historically, there isn't much precedent for people taking high-paying jobs and donating most of their salary... but there's lots of precedent for getting rich first, in whatever field, and then working full-time on donating.
The other difference is that professional philanthropists don't optimize for donating the maximum amount. They see donating as good, but they also see it as a good to be traded off against other goods, like having lots of nice stuff and social respect. Optimizing for more than one thing allows one to have a lot more Fun, as I suspect Bill Gates and Warren Buffett do.
This really does seem to be better than conventional routes of do-gooding. When I was in college, a huge number of people did stuff like fly to Africa to dig wells. This isn't just inefficient - it actually does net harm, since the cost of utilizing unskilled labor usually outweighs the benefits of such labor. Surely we can do better.
I disagree with your treatment of statement #5. It's hard to explain directly why, so let me analogize:
Now, replace "rational" and "rationality" in the above with "moral" and "morality" and therein lies the reasoning. To say that you could be doing more for charity, or you could be nicer to your fellow man, &c., is exactly saying that you could be more moral. But this "could" is in a purely abstract sense; humans are exactly as moral (and as rational) as they can be, given their own brains.
Thus it is written:
I'm not sure of that (unless you use a very restrictive definition of can which in a deterministic universe would make it synonymous with are, but down that path Fatalistic Decision Theory (“choice is futile”) lies).
Nope, I'm talking about the subjective "nows" of the humans in question, not their futures. Although a person who isn't particularly rational, has never heard of rationality, and wouldn't feel particularly motivated to become more rational if you mentioned it to him, has a pretty irrational-looking future - and in such a case there's no choice to make, no will, only a default path.
On #5 (opportunity costs) (and to some extent #3 (fun)), I think it's better to frame it positively and get rid of concepts like "obligation" and "morally responsible". Saving a child's life is great. Saving more children's lives is even better. Hooray for each life you save!
If you want to save more children's lives (or insert your charitable goal here), you can pursue this goal similarly to how you would pursue any other directional goal (e.g., read more books, eat healthier, have more fun, exercise more, make more friends, run faster). Aim for a reachable target, look for opportunities to take steps in the right direction, reward yourself when you make progress, get social support/encouragement/validation by talking with people who share the same goal, learn tricks that other people with the same goal have used successfully, keep striving to do better, etc. Don't frame the topic in a way that makes you feel bad and creates an ugh field.
In defense of the hypothetical "Society for Curing Rare Diseases in Cute Puppies" (and because I can't help but nitpick)...
The European Commission on Public Health gives the following definition of "rare disease":
What are the odds that a person living in the U.K. will be diagnosed with a rare disease during his or her lifetime? As it turns out, it's one in seventeen. Getting a "rare disease" is not a rare event.
So, assuming that the prevalence of rare diseases in dogs is similar to that of people, the problem that Society for Curing Rare Diseases in Cute Puppies is trying to address is fairly wide in scope. How cost-effective they are is still an open question, though. For all I know, it might be, on average, much easier to invent a cure for any given incurable rare disease than any given incurable common one, because any common disease that would be easy to cure would already have a cure.
Of course, they're still saving dogs instead of people...
... and of course it's possible that by studying a rare disease they figure out more about how diseases work in general, thus contributing to a cure for... diseases in general.
Indeed, a study for a rare pet disease is less likely to be distorted by immediate profit motive (since there isn't any), and could contribute more - and positively instead of perhaps negatively - to the understanding of disease in general.
I don't understand what this means. Are you implying that an ethical system needs to provide incentives for people to abide by it? The only ethical system that does that is egoism.
If your goal is to help people, the only incentive is people getting helped. If optimal philanthropy helps the most people, it has the best incentives. Doing good is its own reward, provided that it's what you're actually trying to do, rather than be happy or have a meaningful life or something.
There is no upper bound to how much money you could make. As such, the opportunity cost to making any finite amount of money is unbounded. This does not make you infinitely poor. Similarly, being able to do an unbounded amount of good and not doing so is not infinitely bad. Opportunity cost is just a way of reframing the problem.
That reminds me very strongly of something I read in a Jewish prayerbook, or possibly the Talmud, a long time ago. I can't find it with google (translation being what it is), but here's my best recollection:
"It is a command we are given repeatedly in Torah. But what does it really mean to 'love your neighbour as yourself'? ... Never would a man say 'I have fulfilled my obligation to myself'. In the same way, you have never fulfilled your obligation to your neighbour."
Taking the comparison back the other way raises what I think is an interesting question. People have no issues with the idea that your obligations to yourself are unbounded, so why does having unbounded obligations to others pose a problem?
There's literally no point at which one can sit back and say, "I've fulfilled my duty to myself - there's nothing more to do".
Interesting point. If we really weighted our own wellbeing exactly the same as the wellbeing of others, we would just put our energy wherever it would be most helpful, regardless of whom we were helping. But we're not psychologically built to really divide our caring by 7 billion people. Anyone who tried to divide their energy among that many would probably give up or die. People like Buddhist monks who put a lot of practice in may achieve this on some kind of psychological basis, but I don't know of anyone who actually acts on it all the time.
In helping professions (nursing, social work, etc.) you're taught to take care of yourself so you don't burn out. Which does mean putting your own wellbeing ahead of an individual client's, but in the long run it allows you to give more people better service. I think this is good practice for philanthropists, too.
I'm glad you wrote this.
This isn't how 80,000 Hours uses the term. I found it confusing, so I asked them, and they said they meant "devoting your life to earning money and giving it away." Not that someone spends 40 hours a week thinking about how to give their money away - that seems excessive. If you're really that good at charity research, you might as well work for GiveWell and share your findings. Or, better yet, work another job and fund whatever's best (which might be charity research).
This is why I prefer to think of it in consequentialist terms. If I give 10% of a larger salary, fewer people suffer - so that's good. But of course, a better outcome would result if I both pursue a higher-paying job and donate a higher percentage. Where I eventually draw the line depends on my willpower, how utilitarian I am, and my estimation of what may cause me to burn out.
This is exactly it, and it's something that people seem to misunderstand about Singer. He is merely stating that if donating more money will help, then it's obligatory.
In the event that spending your money in other ways will help more, or that working so hard will burn you out, then the antecedent doesn't hold.
The Psychology and Morality of Optimal Philanthropy
I agree with you here -- not everyone can be a professional banker philanthropist. But I think the idea deserves attention. Perhaps a better idea with regard to donating more and not regretting it psychologically is learning how to be more frugal and increasing one's budget for donations.
A lot of the people I know who donate in the 10-50% range to effective causes actually love it a lot. And indeed, the research needed to figure out where to donate is often novel, challenging, and gets better over time -- some of the elements of Fun.
I also agree with you about not taking for granted where the highest-earning careers are, and about not ignoring the intangibles of certain careers, like authority that can be used to leverage public opinion. I think Peter Singer, as a professor, has done a lot more for optimal philanthropy than any investment banker, for example.
Punishing an improvement is obviously not the utilitarian thing to do. This is why Peter Singer never actually condemns non-donators as moral monsters, and actually has a standard for giving equal to about 5% of your income. Giving What We Can asks only 10%. These are easily reachable.
You're right, and that's because there's always more that you can do. I think it becomes much less confusing if you don't think of doing everything as morally good and anything less as a moral evil. Think of things as morally better or morally worse, and remember that's from a utilitarian perspective.
And also keep in mind that no one can be an ideal utilitarian. It's like striving to be an ideal rationalist: you can only get better; perfection is impossible. But that doesn't mean that utilitarianism or rationalism is flawed.
Regarding #3, I can say that hearing 'thanks' is a lot of fun, as is having people tell me I'm doing a good thing. I couldn't stop smiling for 4 days after starting this thread, and I still smile to think of it.
Great article. However, there is a third important option, which is "request proof, then, if passed, donate" (Holden seems to have opted for this in his review of S.I., but it is broadly applicable in general).
For example, suppose a charity promises to save 10 million people using a method X that is not very likely to work, but is very cheap - a Pascal's Wager-like situation. Even if this charity is presently the best in terms of expected payoff, it may be a better option still to, rather than paying the full sum, pay only for a basic test of method X - one which X would be unlikely to pass if it is ineffective - and then donate if the test passes. This decreases the expected cost in proportion to the improbability of X's efficacy and the specificity of the test.
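A rough sketch of that expected-cost arithmetic. Every number here is invented purely for illustration (the prior, the costs, and the test error rates are my assumptions, not figures from any real charity or test):

```python
# Hypothetical numbers, chosen only to illustrate the test-then-donate idea.
p_works = 0.05          # prior probability that method X is effective
cost_full = 1_000_000   # cost of funding method X outright ($)
cost_test = 20_000      # cost of a basic test of method X ($)
sensitivity = 0.95      # P(test passes | X works)
false_pos = 0.05        # P(test passes | X doesn't work) = 1 - specificity

# Strategy A: donate the full sum immediately.
expected_cost_a = cost_full

# Strategy B: fund the test first, donate only if it passes.
p_pass = p_works * sensitivity + (1 - p_works) * false_pos
expected_cost_b = cost_test + p_pass * cost_full

# How confident are we that X works, given that we ended up funding it?
p_works_given_funded_a = p_works
p_works_given_funded_b = p_works * sensitivity / p_pass  # Bayes' rule

print(f"Strategy A: expected cost ${expected_cost_a:,.0f}, "
      f"P(works | funded) = {p_works_given_funded_a:.2f}")
print(f"Strategy B: expected cost ${expected_cost_b:,.0f}, "
      f"P(works | funded) = {p_works_given_funded_b:.2f}")
```

With these made-up numbers, the test-first strategy cuts the expected cost from $1,000,000 to $115,000, and raises the chance that money actually funded a working method from 5% to 50% - which is the "value of information" point in the comment below.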
That's true; the 'Value of Information' is often overlooked. This is one reason I am keen on the Brain Preservation Prize: unlike SENS or SIAI or cryonics, here we have cheap tests which can be carried out soon and will give us decent information on how well it's working.
I think #5 is bad metaethics. You write: "Why punish an improvement?" and "A moral framework should make the good outcome and bad outcome as distinct as possible, not the same."
I think this is a holdover from Judeo-Christian metaethics in which there are distinct classes of good things to do and bad things to do (and morally-neutral things in between) and then clear rewards for doing good and punishments for doing bad. In a world without God, morality isn't about punishing or rewarding us, so a moral framework should provide an ordering over choices rather than distinct classes of good and bad with estimates of how good or bad they are. What's useful to know is "What's the most moral thing to do here?" not "How much will I get punished if I don't do this?" because you simply won't get punished.
Re #2-6, I don't think GiveWell has ever said these. Their argument is simply that if you are going to spend money doing good, they will advise you on how to be optimal at it. That's a disagreement to take up with 80,000 Hours and (in the case of #5) Peter Singer. And #6 I think GiveWell would explicitly disagree with.
It occurred to me that although I agree with Statement #5 - "People are morally responsible for the opportunity costs of their actions," I do not think it is a claim actually being made by the optimal philanthropy zeitgeist. I think the actual claim is "Your actions have opportunity costs and you should probably think about that," which should be uncontroversial.
Re #4 "People can donate the largest amount through a traditional "high-earning career", like investment banking."
This isn't what 80,000 hours or anyone else is arguing. It's explicitly "choose high earning careers" not "choose prestigious careers". Investment banking is often brought up because it really is a way to make a lot of money.
Totally agree. It shouldn't be spent lightly. If you're trying to maximize the good you do over your life, choosing a soul-sucking career that will burn you out in only a few years is probably not a good idea. But there are some financial companies where people work more reasonable hours, and there are other careers that provide better work-life balance.
A job that is unpleasant but lets you donate ten times as much is letting you do approximately ten times as much good as a job you enjoy more. If you work only five years at the unpleasant job, you may be doing more than you could in a whole career at the more pleasant one.
This is literally untrue. According to Wolfram Alpha, there are 1.855 billion children on earth (2009 estimate). Of course, not all of them need saving. Giving What We Can cites the official estimates from the United Nations Development Program for the [edited for clarity: annual] cost of basic interventions:
Basic education for all: $6 billion
Water and sanitation for all: $9 billion
Basic health and nutrition: $13 billion
These are finite numbers, for which humanity doesn't need FAI. Of course, they are too huge for any one individual. So normal-earning individual donors cannot, in fact, consistently donate only part of their disposable income while spending the rest on luxury, assuming they want to be perfect utilitarians. But it makes this condition false:
Numerically, feeding all human children on earth - and providing family planning to all potential parents - is clearly feasible without FAI.
For what it's worth, some of us consider non-human sentients as relevant too.
$6 billion + $9 billion + $13 billion = $28 billion
$28 billion < $44 billion
I thought so too, but the estimates are about annual costs, and the Forbes top billionaires can spend their money only once each.
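The correction comes down to a stock-versus-flow distinction, which the numbers quoted above make concrete. (I'm assuming here that the $44 billion figure refers to a single top fortune, i.e. a one-time stock of wealth, while the UNDP figures are annual flows.)

```python
# UNDP estimates quoted above are *annual* costs, in billions of dollars:
education = 6
water_sanitation = 9
health_nutrition = 13
annual_cost = education + water_sanitation + health_nutrition  # 28

# A top billionaire's net worth is a one-time stock, not an annual flow:
billionaire_wealth = 44  # billions of dollars, spendable only once

years_funded = billionaire_wealth / annual_cost
print(f"Annual cost of all three interventions: ${annual_cost}B")
print(f"One top fortune covers roughly {years_funded:.1f} years of that")
```

So a single fortune of that size covers well under two years of the combined annual bill, even though it exceeds any one year's cost.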
In that case, I'll retract my comment. Thanks for the correction.
What is Optimal -- Working, Donating, or Something Else?
I disagree -- I think the only claim true to optimal philanthropy is that we should donate, efficiently and maybe even extensively, to charitable organizations. Whether we start new organizations or work for existing ones is often not an efficient use of our time, and not what any optimal philanthropy work, I think, would tell you. Not even 80,000 Hours seems to present this as the best career.
I disagree again. I think that yes, a lot of good has come out of science and business. But remember that science needs to be funded, some of it by private donation. Likewise, a massive amount of benefit has come out of eradicating and lessening disease, which also comes in large part from personal donation.
And even that being said, optimal philanthropy is often about what one can do as an individual. The vast majority of us aren't able to create the scientific discovery or business that helps millions. Many of us can't even change jobs. But what we can do is donate a portion of our income, and we can do a lot by ensuring it goes to effective causes.
This does not match what I know about the optimal philanthropy movement, can you tell me what your source is?
Disclaimer: I’m an 80,000 Hours member
This post raises some important concerns, which I and probably most members of 80,000 Hours share. For instance: how plausible is it to take a high-earning career we don't enjoy in order to donate more money to charity? But I don't think 80k's 'party line' - or the views of its members - is accurately represented by these six claims. Essentially, we don't believe that professional philanthropy is commonly the best thing we can do, or that we typically should do it.
What we do claim is that many people (not everyone) could do more good through professional philanthropy than by working directly in the charity sector or in other 'commonsensical' ethical careers. This does not imply that it's typically best. We believe something more along the lines of (6). For similar reasons we don't claim (2) or (3) either.
80k also does not claim (4). We've made no claims about which careers give the highest expected earnings; in fact, this is an ongoing research program at 80k. On the 80k blog Carl has already written a fair amount on 'non-traditional' options like entrepreneurship, e.g. (http://80000hours.org/blog/23-entrepreneurship-a-game-of-poker-not-roulette). I think the common focus on banking is because, first, banking is at least fairly high earning (easily £6m over a career), and second, it's morally controversial. So, if the argument flies for banking, it works even better for less morally controversial careers.
Neither does 80k claim (5), but Grognor and Unnamed have covered that better than I could.
Is there any discussion of lines of work which tend to make people's lives better, while not formally being philanthropic? For example, there might be money to be made in making sanitation in hospitals easier and cheaper.
Certainly lots of people are trying for obvious ways to help lots of people - e.g. a cure for cancer. But it's a good point that there may be unsexy areas, like sanitation, with lower-hanging fruit.
(Further disclaimer: I'm not a spokesperson for 80 000 hours, so this isn't the party line - take what you find on the website over me if we disagree).
Not that I'm aware of, although given that a lot of 80k's message is about how 'formal' or 'commonly considered' philanthropy is not as good as more counter-intuitive means, I (and I'd guess most other folks at 80k) would be pretty sympathetic to it. I guess the closest analogue on the site would be the discussion of 'high impact PAs'. (http://80000hours.org/blog/54-the-high-impact-pa-how-anyone-can-bring-about-ground-breaking-research)
Upvoted for #4.
Is that... really what Singer calls it? Why?
I think the Trolley Problem is a different one: it involves pulling a switch that will send the trolley running over one person but save five others, or similar. In other words, whether you let five people perish through inaction or take action to kill one and save five.
This is different in that it's a tradeoff between a possession and a life, not 5 lives and 1 life. I don't know that this problem has a name... the sportscar problem?
That's the source of my confusion: there's already something else out there called the Trolley Problem, and this is not it.
I don't think it is; the OP got confused.
I'd be interested in a direct reference for this -- the linked paper doesn't even mention the word trolley anywhere. It is technically a trolley problem, but it's not a terribly interesting one; the variants I've read Singer propose in the past (e.g., sacrificing yourself to save five children) are usually more interesting.
This is a good post on philanthropy -- much of it not specific to the premises of "optimal philanthropy".
Even rejecting the "do the most good you can" injunction, you might still be curious about how effective a given charity is, so you can still benefit from "optimal philanthropy" orgs (to the extent that they don't mostly just advocate optimal philanthropy and actually evaluate charities)
Indeed: how to internalize feeling good to the extent you diligently ensured the most possible good per dollar, without feeling bad that you didn't spend more dollars? There's always honesty (I guess I really do care more about having a luxury car than ...), but even consciously noting such things is a bummer, and it's no good to say to most people.
I counsel people I care for to not become miserable by trying to realize a desire to be good/philanthropic by actually devoting their life to helping others (things that would ward off misery: if they can likely enjoy impressive success in doing so and/or are internally rewarded by the practice of it and not just the fantasy).