In this thread, I would like to invite people to summarize their attitude to Effective Altruism, summarize their justification for that attitude, and identify the framework or perspective they're using.

Initially I prepared an article for a discussion post (it got rather long), and I realised it was written from a starkly utilitarian value system with capitalistic economic assumptions. I'm interested in exploring the possibility that I'm unjustly mindkilling EA.

I've posted my write-up as a comment to this thread so it doesn't get more air time than anyone else's summary, and everyone can benefit equally from the contrasting views.

I encourage anyone who participates to write up their summary and identify their perspective BEFORE they read the others, so that the contrast can be most plain.

77 comments

I confess that I get the impression that the real purpose of the thread is Clarity's own comment, but here FWIW are my own opinions.

My underlying assumptions are consequentialist (approximately preference-utilitarian) as to ethics, and rationalist/empiricist as to epistemology.

"Effective altruism" can mean at least two things.

  • Attempting to do good for others as effectively as you can (at least given the level of resources you're willing to put in).
  • The particular cluster of approaches to that problem found among people and organizations that presently identify themselves as EA. I take it this means things like these:
    • Broadly utilitarian notion of what doing good means.
    • Preference for directing charitable activity at the world's worst-off people, or perhaps (some) non-human animals.
      • Plus, for some, a side-order of existential risk.
    • Preference for quantifiable benefits, measured as carefully as possible.
    • Focus on smallish charities aiming to pluck low-hanging fruit.
    • Looking to organizations like GiveWell to identify those charities.
    • Strong preference for "earning to give" over other ways of helping charities.

I very strongly approve of effective altruism in... (read more)

Effective Altruism is a well-intentioned but flawed philosophy. This is a critique of typical EA approaches, but it might not apply to all EAs, or to alternative EA approaches.

Edit: In a follow-up comment, I clarify that this critique is primarily directed at GiveWell's and Peter Singer's styles of EA, which are the dominant EA approaches but are not universal.

  • There is no good philosophical reason to hold EA's axiomatic style of utilitarianism. EA seems to value lives equally, but this is implausible from psychology (which values relatives and friends more), and also implausible from non-naive consequentialism, which values people based on their contributions, not just their needs.

  • Even if you agree with EA's utilitarianism, it is unclear that EA is actually effective at optimizing for it over a longer time horizon. EA focuses on maximizing lives saved in the present, but it has never been shown that this approach is optimal for human welfare over the long-run. The existential risk strand of EA gets this better, but it is too far off.

  • If EA is true, then moral philosophy is a solved problem. I don't think moral philosophy works that way. Values are much harder than EA gives cred

... (read more)
Denis Drescher · 8y · 4 points
As someone said in another comment, there are the core tenets of EA, and there is your median EA. Since you only seem to have quibbles with the latter, I’ll address some of those, but I don’t feel like accepting or rejecting them is particularly important for being an EA in the context of the current form of the movement. We love discussing and challenging our views. Then again, I think I happen to agree with many median EA views.

VoiceOfRa [] put very concisely what I think is a median EA view here, but the comment is so deeply nested that I’m afraid it might get buried: “Even if he values human lives terminally, a utilitarian should assign unequal instrumental value to different human lives and make decisions based on the combination of both.” I think this has been mentioned in the comments but not very directly.

The median EA view may be not to bother with philosophy at all, because the branches that still call themselves philosophy haven’t managed to come to a consensus on central issues over centuries, so there is little hope for the individual EA to achieve that. However, when I talk to EAs who do have a background in philosophy, I find that a lot of them are metaethical antirealists. Lukas Gloor, who also posted in this thread, has recently convinced me that antirealism, though admittedly unintuitive to me, is the more parsimonious view and thus the view under which I operate now. Under antirealism, moral intuitions, or some core ones anyway, are all we have, so there can be no philosophical arguments (and thus no good or bad ones) for them.

Even if this is not a median EA view, I would argue that most EAs act in accordance with it just out of concern for the cost-effectiveness of their movement-building work. It is not cost-effective to try to convince everyone of the most unintuitive inferences from one’s own moral system. However, among the things that are important to the ind
Part of the reason I wrote my critique is that I know at least some EAs will learn something from it and update their thinking. I'll take your word that many EAs also think this way, but I don't really see it affecting the main charitable recommendations. Followed to its logical conclusion, this outlook would result in a lot more concern about the West.

Well, there is a question about what EA is. Is EA about being effectively altruistic within your existing value system? Or is it also about improving your value system to more effectively embody your terminal values? Is it about questioning even your terminal values to make sure they are effective and altruistic?

Regardless of whether you are an antirealist, not all value systems are created equal. Many people's value systems are hopelessly contradictory, or corrupted by politics. For example, some people claim to support gay people, but they also support unselective immigration from countries with anti-gay attitudes, which will inevitably cause negative externalities for gay people. That's a contradiction.

I just don't think a lot of EAs have thought their value systems through very thoroughly, and their knowledge of history, politics, and object-level social science is low. I think there are a lot of object-level facts about humanity, and events in history or going on right now, which EAs don't know about, and which would cause them to update their approach if they knew about them and thought seriously about them.

Look at the argument that EAs make towards ineffective altruists: they know so little about charity and the world that they are hopelessly unable to achieve significant results in their charity. When EAs talk to non-EAs, they advocate that (a) people reflect on their value system and priorities, and (b) they learn about the likely consequences of charities at an object level. I'm doing the same thing: encouraging EAs to reflect on their value systems, and attain a broader geopolitical and historical
Denis Drescher · 8y · 6 points
I didn’t respond to your critiques that went in a more political direction because there was already discussion of those aspects there that I wouldn’t have been able to add anything to.

There is concern in the movement in general, and in individual EA organizations, that because EAs are so predominantly computer scientists and philosophers, there is a great risk of incurring known and unknown unknowns. In the first category, more economists, for example, would be helpful; in the second category, it will be important to bring people from a wide variety of demographics into the movement without compromising its core values. As a computer scientist, I’m pretty median again.

Indeed. I’m not sure if the median EA is concerned about this problem yet, but I wouldn’t be surprised if they are. Many EA organizations are certainly very alert to the problem. This concern manifests in movement-building (GWWC et al.) and capacity-building (80k Hours, CEA, et al.). There is also concern, which I share but which may not yet be a median EA concern, that we should focus more on movement-wide capacity-building, networking, and some sort of quality-over-quantity approach to allow the movement to be better and more widely informed. (And by “quantity” I don’t mean to denigrate anyone; I just mean more people like myself, who already feel welcomed in the movement because everyone speaks their dialect and whose peers are easily convinced too.)

Throughout the time that I’ve been part of the movement, the general sentiment, either in the movement as a whole or within my bubble of it, has shifted in some ways. One trend that I’ve perceived is that in the earlier days there was more concern over trying vs. really trying, while now concern over putting one’s activism on a long-term sustainable basis has become more important. Again, this may be just my filter bubble. This is encouraging, as it shows that everyone is very well capable of updating, but it also indicates that as of one or two years ago, we s
No need for you to address any particular political point I'm making. For now, it is sufficient for me to suggest that reigning progressive ideas about politics are flawed and holding EAs back, without you committing to any particular alternative view.

I'm glad to hear that EAs are focusing more on movement-building and collaboration. I think there is a lot of value in eigenaltruism: being altruistic only towards other eigenaltruistic people who "pay it forward" (see Scott Aaronson's eigenmorality []). Civilizations have been built with reciprocal altruism. The problem with most EA thinking is that it is one-way, so the altruism is consumed immediately. This post argues that morality [] evolved as a system of mutual obligation, and that EAs misunderstand this.

Although there is some political heterogeneity in EA, it is overwhelmed by progressives, and the main public recommendations are all progressive causes. Moral progress is a tricky concept: for example, the French Revolution is often considered moral progress, but the pictures [] paint another story.

On open borders, economic analyses like Roodman's are just too narrow. They do not take into account all of the externalities, such as crime [] and changes to cultural institutions. The open-borders site [] addresses many of the objections, sometimes; it does a good job of summarizing some of the anti-open-borders arguments, but often fails to refute them, yet this lack of refutation doesn't translate into them updating their general stance on immigration. If humans are interchangeable homo economicus, then open borders would be an economic and perhaps moral imperative. If indeed human groups are signi
Denis Drescher · 8y · 1 point
Rather than delay my reply until I’ve read everything you’ve linked, I’ll post a WIP reply now. Thanks for all the data! I hope I’ll have time to look into Open Borders some more in August. Error theorists would say that the blog post “Effective Altruists are Cute but Wrong” is cute but wrong, but more generally the idea of using PageRank for morality is beautifully elegant (though beautifully elegant things have often turned out imperfect in practice, in my experience). I still have to read the rest of the blog post though.
Eigendemocracy reminds me of Cory Doctorow's whuffie [] idea. An interesting case for eigenmorality is when you have distinct groups that cooperate amongst themselves and defect against others. Especially interesting is the case where there are two large, competing groups that are about the same size.
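The PageRank-for-morality idea is easy to sketch: score each agent by how much cooperation it receives from other high-scoring agents, and iterate until the scores stabilise. Here is a minimal power-iteration toy (the cooperation matrix and all numbers are invented for illustration, not taken from Aaronson's post):

```python
# Toy "eigenmorality" sketch: an agent's moral score grows with the
# cooperation it receives from other high-scoring agents, PageRank-style.
# coop[i][j] = 1 if agent j cooperates with agent i. All numbers invented.
coop = [
    [0, 1, 1, 0],  # agents 0-2 cooperate amongst themselves...
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],  # ...while agent 3 defects against everyone
]

n = len(coop)
scores = [1.0 / n] * n
for _ in range(100):  # power iteration towards the leading eigenvector
    new = [sum(coop[i][j] * scores[j] for j in range(n)) for i in range(n)]
    total = sum(new)
    if total == 0:
        break
    scores = [v / total for v in new]  # renormalise each step

print([round(s, 3) for s in scores])  # cooperators converge to equal scores; the defector to 0
```

In the two-competing-groups case mentioned above, the outcome of this kind of iteration can depend on the groups' relative sizes and the initial scores, which is part of what makes that case interesting.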
"I'll take your word that many EAs also think this way, but I don't really see it affecting the main charitable recommendations. Followed to its logical conclusion, this outlook would result in a lot more concern about the West." Can you elaborate, please? From my perspective, just because a Western citizen is more rich or powerful doesn't mean that helping to satisfy their preferences is more valuable in terms of indirect effects. Or are you talking about whom to persuade? Because I don't see many EA orgs asking Dalit groups for their cash or time yet.
It's not the preferences of the West that are inherently more valuable, it's the integrity of its institutions, such as rule of law, freedom of speech, etc... If the West declines, then it's going to have negative flow-through effects for the rest of the world.
I think it's clearer, then, if you say "sound institutions" rather than "the West"?
There are other countries with sound institutions, like Singapore and Japan, but I'm not so worried about them as I am about the West, because they have an eye towards self-preservation. For instance, both those countries have declining birth rates, but they protect their own rule of law (unlike the West), and have more cautious immigration policies that help keep their populations from being replaced by foreign ones (unlike the West). The West, unlike sensible Asian countries, is playing a dangerous game by treating its institutions in a cavalier way for ill-thought-out redistributionist projects and importing leftist voting blocs. EAs should also be more worried about decline in the West, because Westerners (particularly NW Europeans) are more into charity than other populations (e.g. Eastern Europeans are super-low in charity). My previous post [] documents this. A Chinese- or Russian-dominated future is really, really bad for EA, for existential risk prevention, and for AI safety.
I wouldn't be so cavalier about that. Japan, specifically, has about zero immigration and its population, not to mention the workforce, is already falling. Demographics is a bitch. Without any major changes, in a few decades Japan will be a backwater full of old people's homes that some Chinese trillionaire might decide to buy on a whim and turn into a large theme park. Open borders and no immigration are like Scylla and Charybdis -- neither is a particularly appealing option for a rich and aging country. I also feel that the question "how much immigration to allow" is overrated. I consider it much less important than the question of "precisely what kind of people should we allow in". A desirable country has an excellent opportunity to filter a part of its future population and should use it.
I agree that Japan has its own problems. No solutions are particularly good if they can't get their birth rates up. Singapore also has low birth rates. What problems are preventing high-IQ people from reproducing might be something that EAs should look into. "How much immigration to allow" and "precisely what kind of people should we allow in" can be related, because the more immigration you allow, the less selective you are probably being, unless you have a long line of qualified applicants. Skepticism of open borders doesn't require being against immigration in general. As you say, a filtered immigrant population could be very valuable. For example, you could have "open borders" for educated professionals from low-crime, low-corruption countries with compatible value systems who are encouraged to assimilate. I'm pretty sure this isn't what most open borders advocates mean by "open borders," though. The left doesn't "want" [] a responsible immigration policy either. For their political goals, they want a large and dissatisfied voting bloc. And for their signaling goals, it's much more holy to invite poor, unskilled people rather than skilled professionals who want to assimilate.
If you aren't aware of the relevant decision theory, then I have good news for you! I'm not sure this is true, at least in the narrow instance of rationalists trying to make maximally effective decisions based on well-defined uncertainties. In principle, at least, it should be possible to calculate the value of information []. Decision theory has a concept called the expected value of perfect information []. If you're not 100% sure of something, but the cost of obtaining information is high (which it generally is in philosophy, as evidenced by the somewhat slow progress over the centuries) and giving opportunities are shrinking (which they are for many areas, as conditions improve), then you probably want to risk giving sub-optimally by giving now vs. later. The price of information is simply higher than the expected value. Unfortunately, you might still need to make a judgement call to guesstimate the values to plug in.
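As a toy illustration (all payoffs and probabilities here are invented for the example, not real cost-effectiveness figures): EVPI is the expected value of choosing the best option state-by-state, minus the expected value of the best single option chosen under current uncertainty. Giving now is reasonable whenever the cost of better information exceeds this quantity.

```python
# Hypothetical two-charity example of the expected value of perfect
# information (EVPI). All numbers are invented.
p = [0.5, 0.5]  # probability of each state of the world
payoff = {
    "charity_A": [10.0, 2.0],  # good done per dollar in each state
    "charity_B": [6.0, 6.0],
}

# Acting now: pick the single option with the best expected payoff.
ev_now = max(sum(pi * v for pi, v in zip(p, vals)) for vals in payoff.values())

# With perfect information: pick the best option in each state,
# then take the expectation over states.
ev_perfect = sum(pi * max(payoff[a][s] for a in payoff) for s, pi in enumerate(p))

evpi = ev_perfect - ev_now
print(ev_now, ev_perfect, evpi)  # 6.0 8.0 2.0: only pay for research costing less than 2.0
```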
Denis Drescher · 8y · 2 points
Thanks! I hadn’t seen the formulae for the expected value of perfect information before. I haven’t taken the time to think them through yet, but maybe they’ll come in handy at some point.
If anyone's skimming through these comments, it's worth noting that most of my original ideas, as seen in my top-level comment, have been thoroughly refuted. tl;dr: my perspective is, in short, echoed on Marginal Revolution. Those criticisms that remain, and many stronger points of contention, are far more eloquently and independently explained by Journeyman's critique here. Anyhow, I don't like the movement's branding, which is essentially its core feature, since the community would probably reorganise around a new brand anyway. Altruism is fictional, hypothetical, doesn't exist. * W. Pedia.
Thanks, this helped me!
Thank you for taking the time to write such a detailed description of the issue. One minor thing: many EAs do seem to understand this, explicitly or implicitly, to varying degrees; they value other EAs highly because of the flow-through effects.
That would be another example of things which some EAs do, but which don't yet seem to percolate through to the public-facing parts of the movement. For example, valuing other EAs due to flow-through contradicts Singer's view [], as far as I understand him:
I don't get your argument there. After all, you might e.g. value other EAs instrumentally because they help members of other species. That is, you intrinsically value an EA like anyone else, but you're inclined to help them more because that will translate into others being helped.
A good straightforward illustration of how institutions are entangled with culture is the difficulty the West has had exporting democracy to the Middle East.
Syrian openish border events reignited my interest in this so I did a bit more reading:
What are you basing your moral philosophy on, if it's not moral intuitions?
To me that seems like you object to EA because you stereotype it and then find that the stereotype produces problems. 80,000 Hours lately wrote a post indicating that they don't believe that a majority should do earning-to-give: [] A lot of the post seems to confuse complex strategic moves, like GiveWell's move to start by focusing on lives saved by proven interventions, with the belief that lives saved by proven interventions are the most important thing.
It is possible that some of a group doesn't believe the logical consequences of its own positions. That doesn't make them immune from criticism based on those logical consequences.
The actual positions of GiveWell on its charity recommendations are quite long documents. The problem comes when you reduce the complex position to a simplified one. Deworming saves lives, but at the same time it's also better at getting children to attend school than a lot of other interventions. The fact that the argument for deworming is commonly made via lives saved in no way implies that the other benefits don't factor in.
I do believe that my comment accurately characterizes the large EA organizations like GiveWell and philosophers like Peter Singer. I do realize that EAs are smart people, and many individual EAs have other beliefs and engage in all sorts of research. For example, some EAs are concerned about nuclear war [] with Russia, and today I discovered the Global Catastrophic Risk Institute [] and the Global Priorities Project [], which are outside of my critique. However, for now, Peter Singer, GiveWell, Giving What We Can, and similar approaches are the most emblematic of EA, and it is towards this style of EA that my critique is directed, which I indicated in my previous comment when I said I was addressing "typical" or "median" EA. I believe it is fair to judge EA (as it currently exists) by these dominant approaches. I disagree with you that I am stereotyping, but I think it's good for me to clarify the scope of my critique, so I am adding a note to my previous comment that links to this comment.

That 80,000 Hours post doesn't contradict my argument at all, and in fact reinforces it. My comment never argued that EAs believe everyone should earn to give, only that they are very confident about their moral claims about what people should do with their money. That post still shows that 80,000 Hours believes that at least 10% of people should earn to give, which is still an incredibly strong ethical claim.

Obviously GiveWell cannot show that their interventions are the "most important thing." But GiveWell does claim that its proven interventions are a sufficiently good thing to justify you spending money on them, and this is an immense moral claim. It's not like GiveWell is a purely informational website. In the context of the larger EA movement, Peter Singer's philosophy and EA pledges argue with incredible confidence that
Interesting that the solutions you're jumping to are about defending the 'West' and beating the south/east, rather than working with the south/east to make sure the best of both is shared?
To be clear, when I speak of defending the West, I am mostly thinking of defending the West against self-inflicted problems. Nobody is talking about "beating" the global south / east. If the West declines, then it won't be in a very good position to share anything with anyone.
The consequentialist issue could be addressed by the assumption that if only people's needs were met, their potential for contribution would be equal. Do the people involved in EA generally believe that?
EAs might believe that, but that would be an example of their lack of knowledge of humanity and adoption of simplistic progressivism. Human traits for either altruism or accomplishment are not distributed evenly: people vary in clannishness [], charity [], civic-mindedness, corruption, and IQ. It is most likely that differences between people explain why some groups have trouble building functional institutions and meeting their own needs. Whether basic needs are met doesn't explain why some groups within Europe are so different from each other. Southern Europe and parts of Eastern Europe have extremely low concentrations of charitable organizations. Also, good luck explaining the finding in the post [] I linked in my previous comment that vegetarianism in the US is correlated at 0.68 with English ancestry (but only weakly with European ancestry). Even different groups of white people are really, really different from each other, such as the differences [] between Yankees and Southerners in the US, stemming from differences between settlers from different parts of England. Human groups evolved with geographical separation and selection pressures. For example, the clannishness source I linked shows how tons of different outcomes are related to whether groups are inside or outside the Hajnal Line of inbreeding. Different rates of inbreeding will result in different strengths of kin selection vs. reciprocal altruism. For example, here is the map of corruption [] with the Hajnal Line
That sounds obviously false on its face.
Well, quite. The problem I see is that equality of worth is for some a sacred value, leading to the valuing of all lives equally and direction of resources to wherever the most lives can be saved, regardless of whose they are. While it is not something that logically follows from the basic idea of directing resources wherever they can do the most good, I don't see the EA movement grasping the nettle of what counts as the most good. Lives or QALYs are the only things on the EA table at present.
That's unfortunate. There can be no sacred values. That way lies madness.
Nevertheless: -- Circular Altruism [] Well... []
How do you come to that conclusion? When the Open Philanthropy Project researches whether we should spend more effort on dealing with the risk of solar storms, how's that lives or QALYs?
I may have a limited view of the EA movement. I had in mind primarily GiveWell, whose currently recommended charities are all focussed on directing money towards the poorer parts of the world, to alleviate either disease or poverty. The Good Ventures portfolio of grants [] is mostly directed to the same sort of thing. On global threats: how would it not be? Major and prolonged geomagnetic storms [] threaten the lives and QALYs of everyone everywhere, so there isn't an issue there of selecting whom to save first. Protective measures save everyone.
You confuse the strategic reasons why GiveWell makes those recommendations with the shortest summary of the intervention. Spending money on a health care intervention does more than just save lives. There are a lot of ripple effects. GiveWell is also producing incentives for charities in general to become more transparent and evidence-based. You said only lives and QALYs. I'm not disputing that it also affects lives and QALYs; I'm disputing that that's the only thing you get from it.
Well, what measure are they using?
I don't think there's a single measure. There's rather an attempt to understand all the effects of an intervention as best as possible.
It depends on how do you define "good". In particular, in some value systems (and in some contexts) human lives are valued according to their productivity, and in other value systems and contexts, lives are valued regardless of their economic use or potential.
Even if he values human lives terminally, a utilitarian should assign unequal instrumental value to different human lives and make decisions based on the combination of both.
Re: altruistic children of altruistic parents. I have a most altruistic mother, and I hate listening to other people's problems, which they created without me, presented in such a way that if only I gave a damn I would, of course, join the fight and go on helping them for however long it takes. She is quite passionate when she comes home and unloads. In contrast, when you, for example, write up a report about a place rich in biodiversity to be made into a reserve, you get this warm feeling that you are creating a way for a problem to actually be solved, or at least made solvable. And you do it not because somebody has an Enlightenment Impulse around midnight, which you can't escape, being a dependent minor. So: altruistic offspring, probable; EA offspring, improbable. Therefore, EA activists are right in not investing in it.

Okay, a summary of my attitude towards EA is that EA rationally follows from a set of weird premises that are not shared by most people and certainly not by me. I do not have any desire to maximize utility in a way that considers utility for every human being equally. I prefer increasing utility for myself, my family, friends, countrymen, and people like me. Every time I pay for electricity for my computer rather than sending the money to a third world peasant is, according to EA, a failure to maximize utility.

Also, I believe that most cases of EA producing very counterintuitive results are just examples of cases where the weirdness of EA becomes obvious.

Every time I pay for electricity for my computer rather than sending the money to a third world peasant is, according to EA, a failure to maximize utility.

I'm sad that people still think EAers endorse such a naive and short-time-horizon type of optimizing utility. It would obviously not optimize any reasonable utility function over a reasonable timeframe for you to stop paying for electricity for your computer.

More generally, I think most EAers have a much more sophisticated understanding of their values, and the psychology of optimizing them, than you give them credit for. As far as I know, nobody who identifies with EA routinely makes individual decisions between personal purchases and donating. Instead, most people allocate a "charity budget" periodically and make sure they feel ok about both the charity budget and the amount they spend on themselves. Very few people, if any, cut personal spending to the point where they have to worry about, e.g., electricity bills.

I do know - indeed, live with :S - a couple.
So I think most EAs have come to the point where they realise that small trade-offs, and agonising over them, displace other good things, so they try and find a way of setting a limit by year or whatever. But you know, many people agonise and make trade-offs; it's just that often it isn't giving to the poor that's the counterfactual, it's saving or paying the mortgage, or buying a better holiday or school for their children or whatever. If you don't think like that, then you have everything you need?? [] and [] have documented going on this journey of living well with generosity. Sounds like it might be worth a read :) edit: Soz Ben, I think I put this comment in the wrong place!
As I said before, it is possible that some of a group doesn't believe the logical consequences of its own positions. That doesn't make them immune from criticism based on those logical consequences. It's true, of course, that EA proponents don't do this, but that only shows that EA is unworkable even to EA proponents. If you have a charity budget, there's no good principled reason why you should restrict your donation to your charity budget. Arguments I've seen include:

1. You need to be able to make money to perform EA, and going poor would be counterproductive. True, but most of the money you spend on personal entertainment is not being used to help you make money.

2. You would find it psychologically intolerable not to spend a certain amount of money on personal entertainment. But by this reasoning, the amount you should spend on charity is an amount that makes you uncomfortable, but just as much uncomfortable as you can get without long-term effects on your psychological health and your motivation to donate. (It also means that your first priority should be to self-modify to have less psychological need for entertainment.) Also, it could be used to justify almost any level of giving, and in the limit, it's equivalent to "I put a higher value on myself, just for a slightly different reason than everyone else who 'doesn't value people equally' puts a higher value on themselves."

3. EA states that it is good to spend money on charity, but being good is not the same thing as having a moral obligation to do it; it's okay to not do as much good as you conceivably could. I find this explanation unconvincing because it would then equally justify not doing any good at all.

Effective Altruism says that all humans have roughly equal intrinsic value and takes necessary steps to gather evidence and quantify the degree to which humans are helped.

Short, but pretty much summarizes the entirety of the appeal for me. Is there even a name for the two perspectives contained in that sentence?

I never actually realized that 'all humans have roughly equal intrinsic value' was a core tenet of EA.
[anonymous] · 8y · 5 points

I like Effective Altruism a lot - I follow a lot of effective altruism blogs, I adopt a lot of its mental models and tools, and I think the idea is great for a lot of people.

I'm highly interested in how to be effective, and I'm highly interested in how to do good, and EA gives some great ideas on both concepts.

That being said, what I'm not interested in as my sole aim is to be maximally effective at doing good. I'm more interested in expressing my values in as large and impactful a way as possible - and in allowing others to do the same. This happens to coincide ... (read more)

> hacking the norm of reciprocity for the evolutionary benefit of future generations

You know what, you're lesswrong. I didn't realise before reading your comment. You've completely reframed some of my thinking. Thank you. I'm going to rebrand myself as an Effective Mutualist! Then I'm going to get serious and start reading up on how we might otherwise infer what will help others feel happiness, other than via their revealed preferences.

I still feel compelled to help others, beyond that which will materially benefit me or society in the long term (my thinking is that, if everyone were more mutualistic, then over the long term the more parasitic people would die off).

edit 1: The left wing tries to abolish poverty, the right tries to abolish bureaucracy. Perhaps there's some innate psychological divide between people who try to get rid of social problems immediately, and those who want to do it sustainably.
(Upvoted for willingness to change your mind.)
It's interesting to ask to what extent this is true of everyone - I think we've discussed this before, Matt. Your version and phrasing of what you're interested in is particular to you, but we could broaden the question out to ask how far people have gone in moving away from having primarily self-centred drives which overwhelm others when significant self-sacrifice is on the table. I think some people have gone a long way in moving away from that, but I'm sceptical that any single human being goes the full distance. Most EAs plausibly don't make any significant self-sacrifices if measured in terms of their happiness significantly dipping.* The people I know who have gone the furthest may be Joey and Kate Savoie [], with whom I've talked about these issues a lot.

* Which doesn't mean they haven't done a lot of good! If people can donate 5% or 10% or 20% of their income [] without becoming significantly less happy then that's great, and convincing people to do that is low-hanging fruit that we should prioritise, rather than focusing our energies on then squeezing out extra sacrifices that start to really eat into their happiness. The good consequences of people donating are what we really care about, after all, not the level of sacrifice they themselves are making.
Yes, I think in terms of my actions, I'm probably similar to many effective altruists. There are routes that I wouldn't consider, such as earning to give, but all in all I'm probably on a similar path to many other EAs who want to get into tech entrepreneurship. I think where I differ is not in my actions, but in my moral aims. Many EAs, if given a pill that could make them able to work all day on helping others, sustainably, without changing their enjoyment of said activities, would think they ought to take it - and a sizeable portion probably would take it. I'd never take that pill, and wouldn't feel bad about that choice.
[-][anonymous]8y 3

Could charity distort market signals, crippling the ability of sponsored economies to develop sustainably and leading to negative utility in the long term?

Hikma and Norbrook are examples of ethical UK/worldwide pharmaceutical companies. I've worked for and can vouch for both.

[This comment is no longer endorsed by its author]

I'm sorry to say that this all seems rather muddled. I don't know how much of the muddle is actually in my brain.

You say "Effective Altruism isn't utilitarian" and then link to an LW post whose central complaint is that EA is too utilitarian. Then you say "EA is prioritarian" by which I guess you mean it says "pick the most important cause and give only to it" and link to an LW post that doesn't say anything remotely like that (it just says: here is one particular cause, see how much good you can do by giving to it).

You say GiveWell doesn't see market efficiency as inherently valuable. I am not aware of any evidence for that; what there is evidence for is that they don't see market efficiency as something worth throwing money at, and I have to say this seems very obviously correct; am I missing something here?

You say GiveWell's "theory of value relates to health status", by which I think you mean that they assess benefit as increase in QALYs. That seems pretty reasonable to me and I don't understand your objections. (I'm sure there are ways one can help people that don't show up in a QALY measurement, but when evaluating charities that aim to...

Thanks for your comment.

Read the first comment on that post and the discussion the OP has with them.

No, I'm saying that it 'chooses more important causes and weights them higher'.

Is this the flow-through effects link? I'm not sure what you're talking about.

The evidence that they believe that is in the link, where GiveWell says it; the other links are to 80K or GWWC echoing it (I don't recall which from memory).

I would say you are missing something - whether market efficiency is something worth throwing money at. Market efficiency by definition refers to a case where money is being thrown at something that is worthwhile - a coincidence of interest between supply and demand.

Certainly. If QALYs are valuable, then curing disease and saving lives is inherently valuable. However, people experience death and disease differently. Very differently. How can we work out how 'bad' that is for them? Well, we could use QALYs and generalise for the entire disease for all people - or we could infer it from what people actually do in relation to it. Do they save up money to buy bednets, or do they spend that money on a donkey to visit their girlfriend in the next village (that's a fictional, kinda silly example, but it illustrates my point)? If they have a preference for bednets above all other alternative options, and still can't afford them, they have an incentive to contribute their labour, for instance, to their community in a way that improves the lives of others and helps those people reach their preferences, while earning money to buy those bednets. If they can't be valuable to their community, then their death is an overall positive to the overall economic efficiency of their community. That is, unless they are artificially subsidised for that kind of lifestyle by certain kinds of charity.

Demand can only be reliably inferred from past behaviour. If someone buys a loaf of bread every week, that's demand for bread. If there's 1/23 chance someone in a village gets
OK, done. Now what? (I did not find that reading that material changed (a) my opinion that Dias's complaint was basically that EA is too utilitarian, nor (b) my impression that you are complaining it isn't utilitarian enough.)

And you regard that as a bad thing? Evidently I'm missing something, because weighting more important things more highly seems obviously sensible. What am I missing?

No, it's the one linked to the word "prioritarian" in your comment.

Have either you or I got something exactly backwards? The post at the far end of that link (the "flow-through effects" one, right?) has the founder of GiveWell saying explicitly that market efficiency is valuable, but you're citing it as support for your claim that GiveWell doesn't see market efficiency as valuable.

Any transaction in any market (efficient or not) is such a case (at least with a suitable, somewhat nonstandard, definition of "worthwhile", but I think you need that for any claim along these lines to be true). It is not clear that the difference between a more and a less efficient market is in how money is being thrown at how-worthwhile things. (Is it?)

Sure. But if what you're trying to do is get an overall estimate of how much good a particular intervention does (or, harder: how much good it would do) then (1) you are not particularly interested in all those personal idiosyncrasies, except in so far as they come together to make some kind of average, and (2) you almost certainly don't have enough information about people's actions to know how much they would value whatever-it-is -- because it may simply not be available to them; they may not know about it; they may not know enough about it; and, in the sort of market-based scenario I think you have in mind, perceived benefit is confounded with ability to pay. (I'll have more to say about that last point later, but one crude example for now. Imagine someone who is in prison and has either no possessions, or at any rate no access to his possess
I get the impression that you're not well informed about EA and the diverse stances EAs have, and that you're singling out an idiosyncratic interpretation and giving it an unfair treatment.

The first link you cite talks about public good provision within the current economy. How do you conclude from this that e.g. the effective altruists focused on AI safety are being inefficient? And even if you're talking about e.g. donations to GiveWell's recommended charities, how does the first link establish that it's inefficient? Sick people in Africa usually tend not to be included in calculations about economic common goods, but EAs care about more than just their country's economy.

FYI, you're using highly idiosyncratic terminology here. Outside of LW, "utilitarianism" is the name for a family of consequentialist views [] that also include solely welfare-focused varieties like negative hedonistic utilitarianism or classical hedonistic utilitarianism.

In addition, you repeat the mantra that it's an objective fact that "human values are complex". That's misleading; what's complex is human moral intuitions. When you define your goal in life, no one forces you to incorporate every single intuition that you have. You may instead choose to regard some of your intuitions as more important than others, and thereby end up with a utility function of low complexity. Your terminal values are not discovered somewhere within you (how would that process work, exactly?); they are chosen. As EY would say, "the buck has to stop somewhere".

This claim is wrong: only about 5% of the EAs I know are prioritarians (I have met close to 100 EAs personally). And the link you cite doesn't support that EAs are prioritarians either; it just argues that you get more QALYs from donating to AMF than from doing other things.
Even less for me.
Thanks for your comment. Yes, as you stated, I was working with the visible sample of EAs who aren't focused on existential risk. I feel the term is redundant in relation to existential risk, since effective thinking about existential risk already happens on LessWrong.

The crowding-out effect occurs not just at the individual level (which isn't applicable to individual EAs, given room-for-more-funding considerations), but also at the movement level. Because EAs act en bloc, and factor into their considerations 'what are other people not funding', they compete for the supply of donations against established institutional donors like the Gates Foundation. One might wonder, if that were true, why those Foundations don't close the funding gaps as a priority - and it looks like someone is trying to answer that here []. Admittedly, I haven't got around to reading the article fully, but from a quick skim it looks like the magnitude of the donations of high-impact philanthropists is such that it compensates for the 'ineffectiveness of their cause', since those charities GiveWell recommends have less room for more funding - which becomes a higher-order consideration at that scale. The obvious counterexample to this is GiveDirectly, but I wouldn't be surprised if the reason philanthropists don't like them is fear of setting a precedent against productive mutualistic exchange.

I can't find the original post about the buck stopping after a bit of Googling. I'd like to keep looking into this!
The post I'm referring to is here [], but I should note that EY used the phrase in a different context, and my view on terminal values does not reflect his view.

My critique of the idea that all human values are complex is that it presupposes too narrow an interpretation of "values". Let's talk about "goals" instead, defined as follows: I took the definition from this blogpost [] I wrote a while back. The comment section there contains a long discussion on a similar issue where I elaborate on my view of terminal values.

Anyway, the way my definition of "goals" seems to differ from the interpretation of "values" in the phrase "human values are complex" is that "goals" allow for self-modification. If I could, I would self-modify into a utilitarian super-robot, regardless of whether it was still conscious or not. According to "human values are complex", I'd be making a mistake in doing so. What sort of mistake would I be making?

The situation is as follows: unlike some conceivable goal-architectures we might choose for artificial intelligence, humans do not have a clearly defined goal. When you ask people on the street what their goals are in life, they usually can't tell you, and if they do tell you something, they'll likely revise it as soon as you press them with an extreme thought experiment. Many humans are not agenty. Learning about rationality and thinking about personal goals can turn people into agents. How does this transition happen? The "human values are complex" theory seems to imply that we introspect, find out that we care about/have intuitions about 5+ different axes of value, and end up accepting all of them as our goals. This is probably how quite a few people are doing it, but they're victims of a gigantic typical mind fallacy if they think that's the only way to do it. Here's what happened to me personally (and
Here's the thread on this at the EA Forum: Effective Altruism and Utilitarianism []

I confess that I have not read much of what has been written on the subject, so what I am about to say may be dreadfully naive.

A. One should separate the concept of effective altruism from the mode-of-operation of the various organizations which currently take it as their motto.

A.i. Can anyone seriously oppose effective altruism in principle? I find it difficult to imagine someone supporting ineffective altruism. Surely, we should let our charity be guided by evidence, randomized experiments, hard thinking about tradeoffs, etc etc.

A.ii. On the other han...

I emphatically don't, but yes, one can. The quantitative/reductionist attitude you've outlined here biases us towards easily measurable causes. Some examples of difficult-to-measure causes include:

1) All forms of funding-hungry research, scientific or otherwise
2) Most x-risks, including this forum's favorite, AI risk
3) Causes which claim to influence social, economic, military, and political matters in complex but possibly high-impact ways
4) (Typically local and community-driven) causes which do good via subtle virtuous cycles, human connections, and various other intangibles

From my previous comment on the issue:

Personally, I'm indifferent to EA. It seems to me a result of decompartmentalizing and taking utilitarianism overly seriously.

What does that mean?
Approximately: applying ideas consistently, even outside of their usual context; believing in the logical consequences of the things you already believe.

As opposed to: believing some ideas, but then saying "oh no, that's completely different!" for no logical reason when someone tries to use the same idea in an unusual situation. (Dividing the world into small compartments, each governed by completely different laws, mutually unconnected.)

See e.g. Outside the Laboratory []

I love EA as a concept, I've proselytized for it, but I've never contributed actual money. I feel vaguely ashamed about that last part.

My problem with EA is that it lacks aggression towards its competitors. I think this is a very serious issue, for the following reasons.

The largest altruistic organisations, especially in political developmental aid, seriously suck. Much like religions, they enjoy some immunity from criticism and benefit from lots of goodwill from volunteer workers. That has made them complacent, and they do not seriously compete with each ...

[This comment is no longer endorsed by its author]
Here is an example: How the Red Cross Raised Half a Billion Dollars for Haiti and Built Six Homes [].
I linked to that, but fucked up the link syntax so it wasn't displayed. I've reposted the corrected comment.

I'm a fan of EA. They are spot on with attempting to help people make better decisions, rather than saying "this is what you should do, because our particular form of Utilitarianism is the best, and if you don't agree you are simply wrong". [EDIT: bolded for visibility, because based on the other comments in this thread that point isn't well advertised. Apparently that's something they need to work on.]

If I were to make a nitpick, however, it would be this sort of thing:

  • I'd like to see more numbers, and a framework grounded more in math. Good d

...

I suggest there are two mindsets at play.

  1. effectiveness
  2. altruism

I take effectiveness to mean: assume you have a rational goal (one that has been analysed as being the right goal and a right goal); what is the most effective way to get there (the fastest, cheapest, smartest, most sustaining solution to the problem)?

The only argument I can think of against effectiveness is to do with the journey not travelled (if you choose to shortcut the journey you don't gain the experiences along the way that might help you when encountering future problems or the benefi...

I love EA as a concept, I've proselytized for it, but I've never contributed actual money. I feel vaguely ashamed about that last part but I'm comfortable calling myself not EA because I do have a problem with it.

My problem with EA is that it lacks aggression towards its competitors. I think this is a very serious issue, for the following reasons.

The largest altruistic organisations, especially in political developmental aid, seriously suck. Much like religions, they enjoy some immunity from criticism and benefit from lots of goodwill from volunteer worker...
