I confess that I get the impression that the real purpose of the thread is Clarity's own comment, but here FWIW are my own opinions.
My underlying assumptions are consequentialist (approximately preference-utilitarian) as to ethics, and rationalist/empiricist as to epistemology.
"Effective altruism" can mean at least two things.
I very strongly approve of effective altruism in...
Okay, a summary of my attitude towards EA is that EA rationally follows from a set of weird premises that are not shared by most people and certainly not by me. I do not have any desire to maximize utility in a way that considers utility for every human being equally. I prefer increasing utility for myself, my family, friends, countrymen, and people like me. Every time I pay for electricity for my computer rather than sending the money to a third world peasant is, according to EA, a failure to maximize utility.
Also, I believe that most cases of EA producing very counterintuitive results are just examples of cases where the weirdness of EA becomes obvious.
Every time I pay for electricity for my computer rather than sending the money to a third world peasant is, according to EA, a failure to maximize utility.
I'm sad that people still think EAers endorse such a naive and short-time-horizon type of optimizing utility. It would obviously not optimize any reasonable utility function over a reasonable timeframe for you to stop paying for electricity for your computer.
More generally, I think most EAers have a much more sophisticated understanding of their values, and the psychology of optimizing them, than you give them credit for. As far as I know, nobody who identifies with EA routinely makes individual decisions between personal purchases and donating. Instead, most people allocate a "charity budget" periodically and make sure they feel ok about both the charity budget and the amount they spend on themselves. Very few people, if any, cut personal spending to the point where they have to worry about, e.g., electricity bills.
Effective Altruism says that all humans have roughly equal intrinsic value and takes necessary steps to gather evidence and quantify the degree to which humans are helped.
Short, but pretty much summarizes the entirety of the appeal for me. Is there even a name for the two perspectives contained in that sentence?
I like Effective Altruism a lot - I follow a lot of effective altruism blogs, I adopt a lot of its mental models and tools, and I think the idea is great for a lot of people.
I'm highly interested in how to be effective, and I'm highly interested in how to do good, and EA gives some great ideas on both concepts.
That being said, what I'm not interested in as my sole aim is to be maximally effective at doing good. I'm more interested in expressing my values in as large and impactful a way as possible - and in allowing others to do the same. This happens to coincide ...
Could charity distort market signals, crippling the ability of sponsored economies to develop sustainably and leading to negative utility in the long term?
Hikma and Norbrook are examples of ethical UK/worldwide pharmaceutical companies. I've worked for and can vouch for both.
I'm sorry to say that this all seems rather muddled. I don't know how much of the muddle is actually in my brain.
You say "Effective Altruism isn't utilitarian" and then link to an LW post whose central complaint is that EA is too utilitarian. Then you say "EA is prioritarian" by which I guess you mean it says "pick the most important cause and give only to it" and link to an LW post that doesn't say anything remotely like that (it just says: here is one particular cause, see how much good you can do by giving to it).
You say GiveWell doesn't see market efficiency as inherently valuable. I am not aware of any evidence for that; what there is evidence for is that they don't see market efficiency as something worth throwing money at, and I have to say this seems very obviously correct; am I missing something here?
You say GiveWell's "theory of value relates to health status", by which I think you mean that they assess benefit as increase in QALYs. That seems pretty reasonable to me and I don't understand your objections. (I'm sure there are ways one can help people that don't show up in a QALY measurement, but when evaluating charities that aim to...
I confess that I have not read much of what has been written on the subject, so what I am about to say may be dreadfully naive.
A. One should separate the concept of effective altruism from the mode-of-operation of the various organizations which currently take it as their motto.
A.i. Can anyone seriously oppose effective altruism in principle? I find it difficult to imagine someone supporting ineffective altruism. Surely, we should let our charity be guided by evidence, randomized experiments, hard thinking about tradeoffs, etc etc.
A.ii. On the other han...
From my previous comment on the issue:
Personally, I'm indifferent to EA. It seems to me a result of decompartmentalizing and taking utilitarianism overly seriously.
I'm a fan of EA. They are spot on with attempting to help people make better decisions, rather than saying "this is what you should do, because our particular form of Utilitarianism is the best, and if you don't agree you are simply wrong". [EDIT: bolded for visibility, because based on the other comments in this thread that point isn't well advertised. Apparently that's something they need to work on.]
If I were to make a nitpick, however, it would be this sort of thing:
I'd like to see more numbers, and a framework grounded more in math. Good d
I suggest there are two mindsets at play.
I take effectiveness to mean: assuming you have a rational goal (one that has been analysed as being both the right goal and a right goal), what is the most effective way to get there (the fastest, cheapest, smartest, most sustainable solution to the problem)?
The only argument I can think of against effectiveness has to do with the journey not travelled (if you choose to shortcut the journey, you don't gain the experiences along the way that might help you when encountering future problems or the benefi...
I love EA as a concept, I've proselytized for it, but I've never contributed actual money. I feel vaguely ashamed about that last part but I'm comfortable calling myself not EA because I do have a problem with it.
My problem with EA is that it lacks aggression towards its competitors. I think this is a very serious issue, for the following reasons.
The largest altruistic organisations, especially in political developmental aid, seriously suck. Much like religions, they enjoy some immunity from criticism and benefit from lots of goodwill from volunteer workers. That has made them complacent, and they do not seriously compete with each ...
Effective Altruism is a well-intentioned but flawed philosophy. This is a critique of typical EA approaches, but it might not apply to all EAs, or to alternative EA approaches.
Edit: In a follow up comment, I clarify that this critique is primarily directed at GiveWell and Peter Singer's styles of EA, which are the dominant EA approaches, but are not universal.
There is no good philosophical reason to hold EA's axiomatic style of utilitarianism. EA seems to value lives equally, but this is implausible from psychology (which values relatives and friends more), and also implausible from non-naive consequentialism, which values people based on their contributions, not just their needs.
Even if you agree with EA's utilitarianism, it is unclear that EA is actually effective at optimizing for it over a longer time horizon. EA focuses on maximizing lives saved in the present, but it has never been shown that this approach is optimal for human welfare over the long-run. The existential risk strand of EA gets this better, but it is too far off.
If EA is true, then moral philosophy is a solved problem. I don't think moral philosophy works that way. Values are much harder than EA gives credit for. Betting on a particular moral philosophy with a percentage of your income shows an immense amount of confidence, and extraordinary claims require extraordinary evidence.
EA has an opportunity cost, and its confidence is crowding out better ideas. What would those better altruistic interventions be? I don't know, but I feel like we can do better.
EAs have a weak understanding of geopolitics and demographics. The current state of the world is that Western Civilization, the goose that laid the golden egg, is declining. If indeed Western Civilization is in trouble, and we are facing near or medium-term catastrophic risks like social collapse, turning into Brazil, or war with Russia or China, then the highest-value opportunities for altruism will be at home. Unless you think we have a hard-takeoff AI scenario or technological miracles in the near-term, we should be very worried about geopolitics, demographics, and civilization in the medium-term and long-term.
If Western Civilization collapses, or is overtaken by China, then that will not be a good future for human welfare. Averting this possibility is far more high-impact than anything else that EAs are currently doing. If the West is secure and abundant, then maybe EAs have the right idea by redistributing wealth out of the West. But if the West is precarious and fragile, then redistribution makes less sense, and addressing the risks in the West seems more important.
EAs do not understand demographics, or are not taking them seriously if they do. The West is currently faltering in fertility and undergoing population replacement from people from areas with higher crime and corruption. Meanwhile, altruism itself varies between populations based on clannishness and inbreeding. We are heading towards a future that is demographically more clannish and less altruistic.
Some EAs are open borders advocates, but open borders is a ridiculously dangerous experiment for the West. They have not satisfactorily accounted for the crime and corruption that immigrants may bring. Additionally, under democracy, immigrants can vote and change the culture. Open border advocates hope that institutions will survive, but they have provided no good arguments that Western institutions will survive rapid demographic change. Institutions might seem fine and then rapidly collapse in a non-linear way. If Western Civilization collapses into ethnic turmoil or Soviet sclerosis, then humans everywhere will suffer.
Some EAs have a skeptical attitude towards parenthood, because it takes away money from charity, and believe that EAs are easier to convert than create. In some cases, EAs who want to become parents justify parenthood as an unprincipled exception. This whole conversation is ridiculous and exemplifies EAs’ flawed moral philosophy and understanding of humans. Altruistic parents are likely to have altruistic children due to the heritability of behavioral traits. If altruistic people fail to breed, then they will take their altruistic genes to the grave with them, like the Shakers. If altruism itself is a casualty of changing demographics, then human welfare will suffer in the future. (If you doubt this can happen, then check out the earlier two links, and good luck getting Eastern Europeans or Middle-Easterners interested in EA.)
I don’t think EAs do a very good job of distinguishing their moral intuitions from good philosophical arguments; see the interest of many EAs in open borders and animal rights. I do not see much understanding in EA of what altruism is and how it can become pathological. Pathological altruism is where people become practically addicted to a feeling of doing good, which leads them to act, sometimes, with negative consequences. A quote from the book in that review, which shows some of the difficulty of disentangling moral psychology from moral philosophy:
Despite the fact that a moral conviction feels like a deliberate rational conclusion to a particular line of reasoning, it is neither a conscious choice nor a thought process. Certainty and similar states of ‘knowing that we know’ arise out of primary brain mechanisms that, like love or anger, function independently of rationality or reason. . . .
What feels like a conscious life-affirming moral choice—my life will have meaning if I help others—will be greatly influenced by the strength of an unconscious and involuntary mental sensation that tells me that this decision is “correct.” It will be this same feeling that will tell you the “rightness” of giving food to starving children in Somalia, doing every medical test imaginable on a clearly terminal patient, or bombing an Israeli school bus. It helps to see this feeling of knowing as analogous to other bodily sensations over which we have no direct control.
It seems that some people have strong intuitions towards altruism or animal rights, but it’s another thing entirely to say that those arguments are philosophically strong. It seems that people who are biologically predisposed towards altruism will be motivated to find philosophical arguments that justify what they already want to do. I don’t think EAs have corrected for this bias. If EAs’ arguments are flawed, then their adoption of them must be explained by their moral intuitions or signaling desires. Since EA provides great opportunities to signal altruism, intelligence, and discernment, it seems that there would be a gigantic temptation for some personalities to get into EA and exaggerate the quality of its arguments, or adopt its axioms even though other axioms are possible. Even though EAs employ reason and philosophy unlike typical pathological altruists, moral philosophy is subjective, and choice of particular moral theories seems highly related to personality.
The other psychological bias of EAs is due to them getting nerd-sniped by narrowly defining problems, or picking problems that are easier to solve or charities that are possible to evaluate. They seem to operate from the notion that giving away some of their money to charity is taken for granted, so they just need to find the best charity out of those that are possible to evaluate. In an inconvenient world for an altruist, the high-value opportunities are unknown or unknowable, throwing your money at what seems best might result in a negligible or negative effect, and keeping your money in your piggy bank until more obvious opportunities emerge might make the most sense.
EA isn’t all bad. It’s probably better than typical ineffective charities, so if you absolutely must give to a charity, then effective charities are probably better. EAs have the right idea in trying to evaluate charities. Many EA arguments are strong within the bounds of utilitarianism, or the confines of a particular problem. But EAs have a hard road towards justification, because their philosophy advocates spending money on strong moral claims, and being wrong about important facts about the world will totally throw off their results.
My criticisms here don't apply to all EAs or all possible EA approaches, just the median EA arguments and interventions I've seen. It is conceivable that in the future EA will become more persuasive to a larger group of people once it has greater knowledge about the world and incorporates that knowledge into its philosophy. An alternative approach to EA would focus on preserving Western Civilization and avoiding medium-term political/demographic catastrophes. But nobody is sufficiently knowledgeable at this point to know how we could spend money towards this goal.
Interesting that the solutions you jump to are about defending the 'West' and beating the South/East, rather than working with the South/East to make sure the best of both is shared?
In this thread, I would like to invite people to summarize their attitude to Effective Altruism and to summarize their justification for that attitude, while identifying the framework or perspective they're using.
Initially I prepared an article for a discussion post (it got rather long), and I realised it was written from a starkly utilitarian value system with capitalistic economic assumptions. I'm interested in exploring the possibility that I'm unjustly mindkilling EA.
I've posted my write-up as a comment to this thread so it doesn't get more air time than anyone else's summary, and all can benefit equally from the contrasting views.
I encourage anyone who participates to write up their summary and identify their perspective BEFORE they read the others, so that the contrast can be most plain.