[ Question ]

Does donating to EA make sense in light of the mere addition paradox ?

by George · 1 min read · 19th Feb 2020 · 8 comments


TL;DR: Assuming that utilitarianism as a moral philosophy has flaws, such that one can follow it only to some unknown extent, what would be the moral imperative for donating to EA?

I've not really found a discussion of this on the internet, so I wonder if someone around here has thought about it.

It seems to me that EA makes perfect sense in light of a utilitarian view of morals. But a utilitarian view of morals is pretty shaky in that, if you follow it to its conclusions, you get the mere addition paradox or the "utility monster".

So in light of that, does donating to an EA organisation (as in: one that tries to save or improve as many lives as possible for as little money as possible) really make any sense?

I can see it intuitively making sense, but barring a comprehensive moral system that can argue for the value of all human life, it seems intuition is not enough. For example, it also intuitively makes sense to put 10% of your income into low-yield bonds, so that if one of your family members or friends has a horrible (deadly or severely life-quality-diminishing) problem, you can help them.

From an intuitive perspective, "helping my mother/father/best friend/pet dog" seems to trump "helping 10 random strangers" for most people. Thus it would seem that donating doesn't make sense unless you are very rich and can safely help anyone close to you while still having some wealth left over.

I can also see EA making sense from the perspective of other codes of ethics, but it seems like most people donating to EA don't really follow the other precepts of those codes, precepts that the codes hold to be more valuable.

E.g.

  • You can argue that helping people not die is good under Christian ethics, but it's better to help them convert before they die so that they can avoid eternal punishment.
  • You can argue that helping people not die is good under a pragmatic moral system (basically a moral system that's simply a descriptive version of what we've seen "works"), but at the same time, most pragmatic moral systems would probably yield the result that helping your community is better than helping strangers halfway across the globe (simply because that would have been viewed as better by most people in past and current generations, so it's probably correct).
  • It seems donating is in no way bad under Kantian ethics. But then again, I think if you take Kantian ethics as your moral code you'd probably have to prioritize other things first (e.g. never lying again), and donating to EA would fall mainly in a morally neutral zone.

So basically, I'm kinda stuck understanding under which moral precepts it actually makes sense to donate to EA charities.

2 Answers

I can see it intuitively making sense, but barring a comprehensive moral system that can argue for the value of all human life, it seems intuition is not enough. For example, it also intuitively makes sense to put 10% of your income into low-yield bonds, so that if one of your family members or friends has a horrible (deadly or severely life-quality-diminishing) problem, you can help them.

Utilitarianism is not the only system that becomes problematic if you try to formalize it enough; the problem is that there is no comprehensive moral system that wouldn't either run into paradoxical answers or be so vague that you'd need to fill in the missing gaps with intuition anyway.

Any decision that you make ultimately comes down to your intuition (that is: decision-weighting systems that make use of information in your consciousness but which are not themselves consciously accessible) favoring one option or the other. You can try to formulate explicit principles (such as utilitarianism) which explain the principles behind those intuitions, but those explicit principles will only ever capture part of the story, because the full decision criteria are too complex to describe.

So the answer to

So basically, I'm kinda stuck understanding under which moral precepts it actually makes sense to donate to EA charities.

is just "the kinds where donating to EA charities makes more intuitive sense than not donating"; often people describe these kinds of moral intuitions as "utilitarian", but few people would actually endorse all of the conclusions of purely utilitarian reasoning.

(note: I don't identify as Utilitarian, so discount my answer as appropriate)

You can split the question into multiple parts:

1) should I be an altruist, who gives up resources to benefit others more than myself?

2) if so, what does "benefit" actually mean for others?

3) How can I best achieve my desires, as defined by #1 and #2?


#1 is probably not answerable using logic alone; this is up to you and your preferred framework for morals and decision-making.

#2 gets to the title of your post (though the content ranges further). Do you benefit others by reducing global population? By making some existing lives more comfortable or longer (and which ones)? There's a lot more writing on this, but no clear enough answers that it can be considered solved.

#3 is the focus of E in EA - if your goals match theirs (and if you believe their methodology for measuring), then EA helps identify the most efficient ways you can use resources for these goals.


To answer your direct question: maybe! To the extent that you're pursuing goals that EA organizations are also pursuing, you should probably donate to their recommended charities rather than trying to do it yourself or going through less-measured charities.

To the extent that you care about causes they don't, don't. For instance, I also donate to local arts groups and city- and state-wide food charities, which I fully understand benefit people who are already very lucky by global standards. If utility is fungible and each recipient has declining marginal utility for resources, this is not efficient. But I don't believe those curves are smooth enough to overwhelm my other preferences.