Money Can't Buy the Smile on a Child's Face As They Look at A Beautiful Sunset... but it also can't buy a malaria-free world: my current understanding of how Effective Altruism has failed
I've read a lot of Ben Hoffman's work over the years, but only this past week have I actually read his myriad criticisms of the Effective Altruism movement and its organizations. The most illuminating posts I just read are "A drowning child is hard to find", "GiveWell and the problem of partial funding", and "Effective Altruism is self-recommending".
This post is me quickly jotting down my current understanding of Ben's criticism, which I basically agree with.
The original ideas of the EA movement are the ethical views of Peter Singer and his thought experiments on the proverbial drowning child, combined with an engineering/finance methodology for assessing how much positive impact you're actually producing. The canonical (first?) EA organization was GiveWell, which researched various charities and published its findings on how effective they were. A core idea underneath GiveWell's early stuff was "your dollars can have an outsized impact helping the global poor, compared to helping people in first-world countries". The mainstream bastardized version of this is "For the price of a cup of coffee, you can save a life in Africa", which I think uses basically made-up and fraudulent numbers. The GiveWell pitch was more like "we did some legit research, and for ~$5,000 you can save or radically improve a life in Africa". Pretty quickly GiveWell and the ecosystem around it got Large Amounts of Money, partly thru successful marketing campaigns that convinced regular people with good jobs to give 10% of their annual income (Giving What We Can), but the highest-leverage development was getting the ear of billionaire tech philanthropists like Dustin Moskovitz, who co-founded both Facebook and Asana, and Jaan Tallinn, who co-founded Skype. I don't know exactly how Jaan's money moved thru the EA ecosystem, but Dustin ended up creating Good Ventures, an org to manage his philanthropy, which was advised by Open Philanthropy; my understanding is that both these orgs were staffed by early EA people, were thoroughly EA in outlook, and had significant personnel overlap with GiveWell specifically.
The big weird thing is that difficulties seem to have been found in the early picture of how much good was in fact being done thru these avenues, and this was quietly elided; more research wasn't being done to get to the bottom of the question, and there are also various indicators that EA orgs themselves didn't really believe their own numbers for how much good could be done. For the malaria stuff, GiveWell did check that the org had followed thru on the procedures it intended, but the initial data available on whether malaria cases were going up or down was noisy, so they stopped paying attention to it and didn't try to make better data available. A big example of "EA orgs not seeming to buy their own story" was GiveWell advising Open Philanthropy to not simply fully fund its top charities. This is weird because if even the pessimistic numbers were accurate, Open Phil on its own could have almost wiped out malaria, and an EA-sympathetic org like the Gates Foundation definitely could have. And at the very least, they could have done a thoroughly worked-out case study in one country or another and gotten a lot more high-quality info on whether the estimates were legit. Stuff like that didn't end up happening.
It's not that weird to have very incorrect estimates. It is weird to have ~15 years go by without really hammering down and getting very solid evidence for the stuff you purported to be "the most slam-dunk, evidence-based, cost-effective life-saving". You'd expect to either get that data and end up in the world of "yeah, it's now almost common knowledge that the core EA idea checks out", or to have learned that the gains aren't that high or that easy, or that the barriers to getting rid of malaria have a much different structure, and that you should change your marketing to reflect that it's not "you can trivially do lots of obvious good by giving these places more money".
GiveWell advising Open Phil to not fully fund things is the main example of "it seems like the parties upstream of the main message don't buy their main message enough to Go Hard at it". In very different scenarios the funding-split thing kinda makes sense to me: I did a $12k crowdfunding campaign last year for a research project, and a friend of a friend offered to just fund the full thing, and I asked him to only do that if it wasn't fully funded by the last week of the fundraising period, because I was really curious and uncertain about how much money people just in my Twitter network would be interested in giving for a project like this, and that information would be useful to me for figuring out how to fund other stuff in the future.
In the Open Phil sitch, "how much money are people generally giving?" isn't rare info that needed to be unearthed, and Open Phil and friends really could just solve most of the money issues, and the orgs getting funded could supposedly then just solve huge problems. But they didn't. This could be glossed as something like "turns out there's more than enough billionaire philanthropic will to fix huge chunks of global poverty problems, IF global poverty works the way that EA orgs have modeled it as working". You could imagine some trust barrier preventing otherwise-willing philanthropists from getting info from, and believing, correct and trustworthy EAs, but in this scenario it's basically the same people; the philanthropists are "fully bought in" to the EA thing. So things not getting legibly resolved seems to indicate that internally there was some recognition that the core EA story wasn't correct, and that this information was somehow prevented from propagating and reworking things.
Relatedly, in lieu of "go hard on the purported model and either disconfirm it and update, or get solid evidence and double down", what we seem to see is a situation where a somewhat circularly defined reputation gets bootstrapped, with the main end state being fairly unanimous EA messaging that "people should give money to EA orgs, in a general sense, and EA orgs should be in charge of more and more things" despite not having the underlying track record that would make that make sense. The track record that is in fact pointed to is a sequence of things like "we made quality, researched estimates of the effectiveness of different charities" that people found compelling, followed by pointing to later steps of "we ended up moving XYZ million dollars!" as further evidence of trustworthiness, but that's really just "double spending" the original "people found our research credible and extended us the benefit of the doubt". To fully come thru, they'd need to show that the benefits produced matched what they expected (or, even if they showed otherwise, if the process and research were good and it seemed like they were learning, it could be very reasonable to keep trusting them).
This feels loosely related to how, for the first several times I'd heard Anthropic mentioned by rationalists, the context made me assume it was a rationalist-run AI safety org, and not a major AI capabilities lab. Somehow there was some sort of meme of "it's rationalist, which means it's good and cares about AI Safety". Similarly, it sounds like EA has ended up acting like, and producing messaging like, "You can trust us Because we are Labeled EAs", while ignoring some of the highest-order bits of things they could do which would give them a more obviously legible and robust track record. I think there was also stuff mentioned like "empirically Open Phil is having a hard time finding things to give money to, and yet people are still putting out messaging that people should Obviously Funnel Money Towards this area".
Now, for some versions of who the founding EA stock could have been, one conclusion might just be "damn, well I guess they were grifters, shouldn't have trusted them". But it seems like there were enough obviously well-thought-out and well-researched efforts early on that that doesn't seem reasonable. Instead, it seems to indicate that billionaire philanthropy is really hard and/or impossible, at least while staying within a certain set of assumptions. Here, I don't think I've read EA criticism that answers "so what IS the case, if it's not the case that for the price of a cup of coffee you can save a life?", but my understanding is informed by writers like Ben. So what is the case? It probably isn't true that eradicating malaria is fundamentally hard in an engineering sense. It's more like "there are predatory social structures set up to extract from a lot of the avenues by which one might try to give nice things to the global poor". There are lots of very obvious examples of things like aid money and food being sent to countries and the governments of those countries basically just distributing it as spoils to their cronies, with only some or none of it getting to the people who others were hoping to help. There seem to be all kinds of more or less subtle versions of this.
The problems also aren't only on the third-world end. It seems like people in the first world aren't generally able to get enough people together, with a shared understanding that it's useful to tell the truth, to have large-scale functional "bureaucracies" in the sense of "an ecosystem of people that accurately processes information". Ben's piece on the professional's dilemma looks at how the ambient culture of professionalism seems to work against having large functional orgs that can tell the truth and learn things.
So it seems like what happened was that the early EA stock (who I believe came from Bridgewater) were earnestly trying to apply finance and engineering thinking to the task of philanthropy. They made some good early moves and got the ear of many billions of dollars. As things progressed, they started to notice things that complicated the simple giving hypothesis. As this was happening they were also getting bigger from many people trusting them and giving them their ears, and were in a position where the default culture of destructive Professionalism pulled at people more and more. These pressures were enough to quickly erode the epistemic rigor needed for the philanthropy to be robustly real. EA became a default attractor for smart young well-meaning folk, because the messaging on the ease of putting money to good use wasn't getting updated. It also became an attractor for opportunists who just saw power and money and authority accumulating and wanted in on it. Through a mix of ambient cultural pressures silencing or warping the clarity of well-meaning folk, and thru Rapid Growth that accepted ambivalent-meaning and bad-meaning folk, it lost the ability to stay truth- and mission-focused. And while it might still do some higher-quality research than other charitable entities, it has forgone the next obvious step of propagating information about what the actual blockers and constraints on doing good in the world are, and has become the general attractor of "thing that just tries to accumulate more resources because We Should Be In Charge of more resources".