TL;DR: Assuming that utilitarianism as a moral philosophy has flaws along the way, such that one can follow it but only to some unknown extent, what would be the moral imperative for donating to EA?

I've not really found a discussion around this on the internet, so I wonder if someone around here has thought about it.

It seems to me that EA makes perfect sense in light of a utilitarian view of morals. But a utilitarian view of morals seems pretty shaky in that, if you follow it to its conclusion, you get to the mere addition paradox or the "utility monster".

So in light of that, does donating to an EA organisation (as in: one that tries to save or improve as many lives as possible for as little money as possible) really make any sense?

I can see it intuitively making sense, but barring a comprehensive moral system that can argue for the value of all human life, it seems intuition is not enough. As in, it also intuitively makes sense to put 10% of your income into low-yield bonds, so that in case one of your family members or friends has a horrible (deadly or severely life-quality-diminishing) problem you can help them.

From an intuitive perspective, "helping my mother/father/best-friend/pet-dog" seems to trump "helping 10 random strangers" for most people. Thus it would seem that donating doesn't make sense unless you are someone very rich who can safely help anyone close to them and still have some wealth left over.

I can also see EA making sense from the perspective of other codes of ethics, but it seems like most people donating to EA don't really follow the other precepts of those codes, precepts that the codes themselves hold to be more valuable.

E.g.

  • You can argue that helping people not die is good under Christian ethics, but it's better to help them convert before they die so that they can avoid eternal punishment.
  • You can argue that helping people not die is good under a pragmatic moral system (basically a moral system that's simply a descriptive version of what we've seen "works"), but at the same time, most pragmatic moral systems would probably yield the result that helping your community is better than helping strangers halfway across the globe (simply because that is what most people in past and current generations would have viewed as better, so it's probably correct).
  • It seems donating is in no way bad under Kantian ethics. But then again, I think if you take Kantian ethics as your moral code you'd probably have to prioritize other things first (e.g. never lying again), and donating to EA would fall mainly into a morally neutral zone.

So basically, I'm kinda stuck understanding under which moral precepts it actually makes sense to donate to EA charities?


Kaj_Sotala

I can see it intuitively making sense, but barring a comprehensive moral system that can argue for the value of all human life, it seems intuition is not enough. As in, it also intuitively makes sense to put 10% of your income into low-yield bonds, so that in case one of your family members or friends has a horrible (deadly or severely life-quality-diminishing) problem you can help them.

Utilitarianism is not the only system that becomes problematic if you try to formalize it enough; the problem is that there is no comprehensive moral system that wouldn't either run into paradoxical answers, or be so vague that you'd need to fill in the missing gaps with intuition anyway.

Any decision that you make, ultimately comes down to your intuition (that is: decision-weighting systems that make use of information in your consciousness but which are not themselves consciously accessible) favoring one decision or the other. You can try to formulate explicit principles (such as utilitarianism) which explain the principles behind those intuitions, but those explicit principles are always going to only capture a part of the story, because the full decision criteria are too complex to describe.

So the answer to

So basically, I'm kinda stuck understanding under which moral precepts it actually makes sense to donate to EA charities?

is just "the kinds where donating to EA charities makes more intuitive sense than not donating"; often people describe these kinds of moral intuitions as "utilitarian", but few people would actually endorse all of the conclusions of purely utilitarian reasoning.

Utilitarianism is not the only system that becomes problematic if you try to formalize it enough; the problem is that there is no comprehensive moral system that wouldn't either run into paradoxical answers, or be so vague that you'd need to fill in the missing gaps with intuition anyway.

Agree, I wasn't trying to imply otherwise.

Any decision that you make, ultimately comes down to your intuition (that is: decision-weighting systems that make use of information in your consciousness but which are not themselves consciously accessible) fav
...
Kaj_Sotala
Not sure whether every EA would endorse this description, but it's how I think of it, yes.
Mart_Korz
Regarding "intuitive moral sense", I would add that one's intuitions can be somewhat shaped by consciously thinking about their implications, noticing inconsistencies and settling on solutions/improvements. For example, the realisation that I usually care about people more the better I know them made me realize that the only reason I do not care about strangers at all is the fact that I do not know them. As this collided with another intuition that refuses such a reason as arbitrary (I could have easily ended up knowing and thus caring for different people, which is evidence that this behaviour of my intuition does not reflect my 'actual' preferences), my intuitions updated towards valuing strangers. I am not sure how strongly other EAs have reshaped their intuitions, but I think that using and accepting quantitative arguments for moral questions needs quite a bit of intuition-reshaping for most people.
George3d6
No worries, I wasn't assuming you were a speaker for the EA community here, I just wanted to better understand possible motivations for donating to EA given my current perspective on ethics. I think the answer you gave outlines one such line of reasoning quite well.

Dagon


(note: I don't identify as Utilitarian, so discount my answer as appropriate)

You can split the question into multiple parts:

1) Should I be an altruist, who gives up resources to benefit others more than myself?

2) If so, what does "benefit" actually mean for others?

3) How can I best achieve my desires, as defined by #1 and #2?


#1 is probably not answerable using only logic - this is up to you and your preferred framework for morals and decision-making.

#2 gets to the title of your post (though the content ranges further). Do you benefit others by reducing global population? By making some existing lives more comfortable or longer (and which ones)? There's a lot more writing on this, but no clear enough answers that it can be considered solved.

#3 is the focus of E in EA - if your goals match theirs (and if you believe their methodology for measuring), then EA helps identify the most efficient ways you can use resources for these goals.


To answer your direct question - maybe! To the extent that you're pursuing topics that EA organizations are also pursuing, you should probably donate to their recommended charities rather than trying to do it yourself or going through less-measured charities.

To the extent that you care about topics they don't, don't. For instance, I also donate to local arts groups and city- and state-wide food charities, which I deeply understand are benefiting people who are already very lucky relative to global standards. If utility is fungible and there is declining utility for resources for any given recipient, this is not efficient. But I don't believe those things are smooth enough curves to overwhelm my other preferences.

To the extent that you're pursuing topics that EA organizations are also pursuing, you should probably donate to their recommended charities rather than trying to do it yourself or going through less-measured charities.

Well yes, this is basically the crux of my question.

As in, I obviously agree with the E and I tend to agree with the A, but my issue is with how A seems to be defined in EA (as in, mainly around improving the lives of people that you will never interact with or 'care' about on a personal level).

So I agree with: I should do...

Dagon
Nope, in the end it all comes down to your personal self-conception and intuition. You can back it up with calculations and testing your emotional reaction to intellectual counterfactuals ("how does it feel that I saved half a statistical life, but couldn't support my friend this month"). But all the moral arguments I've seen come down to either religious authority or assertion that some intuitions are (or should be) universal.