Here's a chart of GiveWell's annual money moved. It rose dramatically from 2014 to 2015, then more or less plateaued:
(Note that GiveWell and Open Philanthropy didn't formally split until 2017. GiveWell records $70.4m from Open Philanthropy in 2015, which isn't included in Open Philanthropy's own records. I've emailed them for clarification, but in the meantime, the overall story is the same: a rapid rise followed by several years of stagnation. Edit: I got a reply explaining that years are sometimes off by one; see the footnote.)
Finally, here's the Google Trends result for "Effective Altruism". It grows quickly starting in 2013, peaks in 2017, then falls back down to around 2015 levels. Broadly speaking, interest has been about flat since 2015.
If this data isn't surprising to you, it should be.
Several EA organizations actively work on growing the community, have funded that work for years, and view it as a priority:
- 80,000 Hours: The Problem Profiles page lists "Building effective altruism" as a "highest-priority area", right up there with AI and existential risk.
- Open Philanthropy: Effective Altruism is one of their Focus Areas. They write "We're interested in supporting organizations that seek to introduce people to the idea of doing as much good as possible, provide them with guidance in doing so, connect them with each other, and generally grow and empower the effective altruism community."
- EA Funds: One of the four funds is dedicated to Effective Altruism Infrastructure. Part of its mission reads: "Directly increase the number of people who are exposed to principles of effective altruism, or develop, refine or present such principles"
So if EA community growth is stagnating despite these efforts, it should strike you as very odd, or even somewhat troubling. Open Philanthropy decided to start funding EA community growth in 2015/2016. It's not as if this is only a very recent effort.
As long as money continues to pour into the space, we ought to understand precisely why growth has stalled. The question is threefold:
- Why was growth initially strong?
- Why did it stagnate around 2015-2017?
- Why has the money spent on growth since then failed to make a difference?
Here are some possible explanations.
1. Alienation
Effective Altruism makes large moral demands, and frames things in a detached, quantitative manner. Utilitarianism is already alienating, and EA is only more so.
This is an okay explanation, but it doesn't explain why growth started strong and then tapered off.
2. Decline is the Baseline
Perhaps EA would have otherwise declined, and it is only thanks to the funding that it has even succeeded in remaining flat.
I'm not sure how to disambiguate between these cases, but it might be worth spending more time on. If the goal is merely community maintenance, different projects may be appropriate.
3. The Fall of LessWrong and the Rise of SlateStarCodex
Several folk sources indicate that LessWrong went through a decline in 2015. A brief history of LessWrong says "In 2015-2016 the site underwent a steady decline of activity leading some to declare the site dead." The History of Less Wrong writes:
Around 2013, many core members of the community stopped posting on Less Wrong, because of both increased growth of the Bay Area physical community and increased demands and opportunities from other projects. MIRI's support base grew to the point where Eliezer could focus on AI research instead of community-building, Center for Applied Rationality worked on development of new rationality techniques and rationality education mostly offline, and prominent writers left to their own blogs where they could develop their own voice without asking if it was within the bounds of Less Wrong.
Specifically, some blame the decline on SlateStarCodex:
With the rise of Slate Star Codex, the incentive for new users to post content on Lesswrong went down. Posting at Slate Star Codex is not open, so potentially great bloggers are not incentivized to come up with their ideas, but only to comment on the ones there.
In other words, SlateStarCodex and LessWrong catered to similar audiences, and SlateStarCodex won out. 
This view is somewhat supported by Google Trends, which shows a subtle decline in mentions of "Less Wrong" after 2015, until a possible rebirth in 2020.
Except SlateStarCodex also hasn't been growing since 2015:
The recent data is distorted by the NYT incident, but the story is basically the same: rapid rise to prominence in 2015, followed by a long plateau. So maybe some users left for Slate Star Codex in 2015, but that doesn't explain why neither community saw much growth from 2015 to 2020.
And here's the same chart, omitting the last 12 months of NYT-induced frenzy:
4. Community Stagnation was Caused by Funding Stagnation
One possibility is that there is no strange hidden cause behind the widespread stagnation: funding slowed down, and everything else slowed down with it. I'm not sure what the precise mechanism would be, but this seems plausible.
Of course, now the question becomes: why did Open Philanthropy's giving slow? This isn't as mysterious, since it's not an organic process: almost all the money comes from Good Ventures, the vehicle for Dustin Moskovitz's giving.
Did Dustin find another pet cause to pursue instead? It seems unlikely. In 2019, Good Ventures provided $274 million in total, nearly all of which ($245 million) went to Open Philanthropy recommendations.
Let's go a level deeper and take a look at the Good Ventures grant database aggregated by year:
It looks a lot like the Open Philanthropy chart! They also peaked in 2017, and have been in decline ever since.
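As a side note, this sort of by-year aggregation is easy to reproduce. Here's a minimal sketch, assuming a hypothetical CSV export with `date` and `amount` columns (not the actual schema of the Good Ventures database):

```python
import pandas as pd

# Hypothetical grants export; file and column names are assumptions, not the real schema.
grants = pd.read_csv("good_ventures_grants.csv", parse_dates=["date"])

# Total grant dollars awarded per calendar year.
by_year = (
    grants.assign(year=grants["date"].dt.year)
          .groupby("year")["amount"]
          .sum()
          .sort_index()
)
print(by_year)
```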
So this theory boils down to:
- The EA community stopped growing because EA finances stopped growing
- EA finances stopped growing because Good Ventures stopped growing
- Good Ventures stopped growing because the wills and whims of billionaires are inscrutable?
To be clear, the causal mechanism and direction for the first piece of this argument remain speculative. It could also be:
- The EA community stopped growing
- Therefore, there was limited growth in high impact causes
- Therefore, there was no point in pumping more money into the space
This is plausible, but seems unlikely. Even if you can't give money to AI Safety, you can always give more money to bed nets.
5. EA Didn't Stop Growing, Google Trends is Wrong
Google Trends is an okay proxy for actual interest, but it's not perfect. Basically, it measures the popularity of search queries, but not the popularity of the websites themselves. So maybe instead of searching "effective altruism", people just went directly to forum.effectivealtruism.org and Google never logged a query.
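As an aside, the Trends data itself is easy to pull programmatically. Here's a minimal sketch using the unofficial pytrends library; the keyword and timeframe are just illustrative:

```python
from pytrends.request import TrendReq

# Unofficial Google Trends client; results are relative interest scaled 0-100.
pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["effective altruism"], timeframe="2010-01-01 2021-06-01")

interest = pytrends.interest_over_time()
print(interest["effective altruism"].tail())
```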
Are there other datasets we can look at? One is Giving What We Can membership, which has continued to grow over this period:
So is the entire stagnation hypothesis disproved? I don't think so. Google Trends tracks active interest, whereas Giving What We Can tracks cumulative interest. So a stagnant rate of active interest is compatible with increasing cumulative totals. Computing the annual growth rate for Giving What We Can, we see that it also peaks in 2015, and has been in decline ever since:
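For clarity, the growth rate here is just the year-over-year percentage change in the cumulative membership total. A minimal sketch, assuming a hypothetical CSV with `year` and `cumulative_members` columns:

```python
import pandas as pd

# Hypothetical export of cumulative Giving What We Can membership by year.
gwwc = pd.read_csv("gwwc_members.csv", index_col="year").sort_index()

# Annual growth rate: year-over-year percentage change in the cumulative total.
growth_rate = gwwc["cumulative_members"].pct_change() * 100
print(growth_rate.round(1))
```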
To sum up:
- Alienation is not a good explanation; it has always been a factor
- EA may have declined more if not for the funding
- SlateStarCodex may have taken some attention, but it also hasn't grown much since 2015
- Funding stagnation may cause community stagnation; the causal mechanism is unclear
- Giving What We Can membership has grown, but it measures cumulative rather than active interest. Their rate of growth has declined since 2015.
A Speculative Alternative: Effective Altruism is Innate
You occasionally hear stories about people discovering LessWrong or "converting" to Effective Altruism, so it's natural to think that with more investment we could grow faster. But maybe that's all wrong.
I think a formative moment for any rationalist-- our "Uncle Ben shot by the mugger" moment, if you will-- is the moment you go "holy shit, everyone in the world is fucking insane." 
That's not exactly scalable. There will be no Open Philanthropy grant for providing experiences of epistemic horror to would-be effective altruists.
Similarly, from John Nerst's Origin Story:
My favored means of procrastination has often been lurking on discussion forums. I can't get enough of that stuff... Reading forums gradually became a kind of disaster tourism for me. The same stories played out again and again, arguers butting heads with only a vague idea about what the other was saying but tragically unable to understand this.
... While surfing Reddit, minding my own business, I came upon a link to Slate Star Codex. Before long, this led me to LessWrong. It turned out I was far from alone in wanting to understand everything in the world, form a coherent philosophy that successfully integrates results from the sciences, arts and humanities, and understand the psychological mechanisms that underlie the way we think, argue and disagree.
It's not that John discovered LessWrong and "became" a rationalist. It's more like he always had this underlying compulsion, and eventually found a community where it could be shared and used productively.
In this model, Effective Altruism initially grows quickly as proto-EAs discover the community, then hits a wall as it saturates the relevant population. By 2015, everyone who might be interested in Effective Altruism has already heard about it, and there's not much more room for growth no matter how hard you push.
One last piece of anecdotal evidence: Despite repeated attempts, I have never been able to "convert" anyone to effective altruism. Not even close. I've gotten friends to agree with me on every subpoint, but still fail to sell them on the concept as a whole. These are precisely the kinds of nerdy and compassionate people you might expect to be interested, but they just aren't. 
In comparison, I remember my own experience taking to effective altruism the way a fish takes to water. When I first read Peter Singer, I thought "yes, obviously we should save the drowning child." When I heard about existential risk, I thought "yes, obviously we should be concerned about the far future." This didn't take slogging through hours of blog posts or books; it just made sense.
Some people don't seem to have that reaction at all, and I don't think it's a failure of empathy or cognitive ability. Somehow it just doesn't take.
While there does seem to be something missing, I can't express what it is. When I say "innate", I don't mean it's true from birth. It could be the result of a specific formative moment, or an eclectic series of life experiences. Or some combination of all of the above.
Fortunately, we can at least start to figure this out through recollection and introspection. If you consider yourself an effective altruist, a rationalist or anything adjacent, please email me about your own experience. Did Yudkowsky convert you? Was reading LessWrong a grand revelation? Was the real rationalism deep inside of you all along? I want to know.
I'm at firstname.lastname@example.org, or if you read the newsletter, you can reply to the email directly. I might quote some of these publicly, but am happy to omit yours or share it anonymously if you ask.
Data for Open Philanthropy and Good Ventures is available here. Data for Giving What We Can is here. If you know how Open Philanthropy's grant database accounts for funding before it formally split off from GiveWell in 2017, please let me know.
Disclosure: I applied for funding from the EA Infrastructure Fund last week for an unrelated project.
Footnotes

Open Philanthropy writes:
Hi, thanks for reaching out.
Our database's date field denotes a given grant's "award date," which we define as the date when payment was distributed (or, in the case of grants paid out over multiple years, when the first payment was distributed). Particularly in the case of grants to organizations based overseas, there can be a short delay between when a grant is recommended/approved and when it is paid/awarded. (For more detail on this process, including average payment timelines, see our Grantmaking Stages page.) In 2015/2016, these payment delays resulted in top charity grants to AMF, DtWI, SCI, and GiveDirectly totaling ~$44M being paid in January 2016 and falling under 2016 in your analysis even as GiveWell presumably counted those grants in its 2015 "money moved" analysis.
Payment delays and "award date" effects also cause some artificial lumpiness in other years. For example, some of the largest top charity grants from the 2016 giving season were paid in January 2017 (SCI, AMF, DtWI) but many of the largest 2017 giving season grants were paid in December 2017 (Malaria Consortium, No Lean Season, DtWI). This has the effect of artificially inflating apparent 2017 giving relative to 2018. Other multi-year grants are counted as awarded entirely in the month/year the first payment was made -- for example, our CSET grant covering 2019-2023 first paid in January 2019. So I wouldn't read too much into individual year-to-year variation without more investigation.
Hope this helps.
 For more on OpenPhil's stance on EA growth, see this note from their 2015 progress report:
Effective altruism. There is a strong possibility that we will make grants aimed at helping grow the effective altruist community in 2016. Nick Beckstead, who has strong connections and context in this community, would lead this work. This would be a change from our previous position on effective altruism funding, and a future post will lay out what has changed. [emphasis mine]
 For what it's worth, the vast majority of SlateStarCodex readers don't actually identify as rationalist or effective altruists.
 My Giving What We Can dataset also has a column for money actually donated, though the data only goes back to 2015.
 I'm conflating effective altruism with rationalism in this section, but I don't think it matters for the sake of this argument.
 For what it's worth, I'm typically pretty good at convincing people to do things outside of effective altruism. In every other domain of life, I've been fairly successful at getting friends to join clubs, attend events, and so on, even when it's not something they were initially interested in. I'm not claiming to be exceptionally good, but I'm definitely not exceptionally bad.
But maybe this shouldn't be too surprising. Effective Altruism makes a much larger demand than pretty much every other cause. Spending an afternoon at a protest is very different from giving 10% of your income.
Analogously, I know a lot of people who intellectually agree with veganism, but won't actually do it. And even that is (arguably) easier than what effective altruism demands.
 In one of my first posts, I wrote:
Before reading A Human's Guide to Words and The Categories Were Made For Man, I went around thinking "oh god, no one is using language coherently, and I seem to be the only one seeing it, but I cannot even express my horror in a comprehensible way." This felt like a hellish combination of being trapped in an illusion, questioning my own sanity, and simultaneously being unable to scream. For years, I wondered if I was just uniquely broken, and living in a reality that no one else seemed to see or understand.
It's not like I was radicalized or converted. When I started reading LessWrong, I didn't feel like I was learning anything new or changing my mind about anything really fundamental. It was more like "thank god someone else gets it."
When did I start thinking this way? I honestly have no idea. There were some formative moments, but as far back as I can remember, there was at least some sense that either I was crazy, or everyone else was.