This is my personal take, not an organizational one. Originally written in May 2025 and revived for the EA Forum's Draft Amnesty Week. Cross-posted from the EA Forum.
When I discovered the beginnings of EA, it felt like coming home. I had finally found a group of people who cared as much as I did about saving lives.
When I discovered rationality not long after, it felt new. The intensity of LessWrong was both off-putting and fascinating to me, and I read a lot of the site in the summer of 2011. I had studied sociology and was used to thinking in terms of ethics and emotions, but not in terms of statistics or economics. LessWrongers were picky and disagreeable, but I learned reasoning skills from them.
Before EA even had a name, LessWrong was one of the best places to talk about ideas now associated with EA. My first post in 2011 was about what would later be called earning to give.
I attended LessWrong meetups in Boston because it was fun to toss ideas around, but I hosted EA meetups because I wanted to strengthen a movement I believed was saving lives. After having kids I continued to prioritize EA, but I didn’t stay involved in rationality as a community.
It did continue to influence how I thought, though. If I wrote something that didn’t quite feel right, I’d imagine how a rationalist critic would respond. Often this steered me away from sloppy reasoning.
……
At their best, the two movements have in common a willingness to dig into weird possibilities and to take ideas seriously. How literally should you take expected value calculations? What if AI is much less predictable than we hope? Does this research study really say what the headlines report?
I’ve also heard plenty of criticisms of both sides:
……
So how do these movements currently relate to each other? How should they relate?
Among individuals, there’s overlap, but most people who identify with one don’t identify with the other.
Personally, EA is my main focus, but rationality continues to be one of the spaces I draw value from. I want to stay open to good ideas and techniques from thinkers in the rationality space, even if I object to other ideas or actions from the same thinkers. Some ideas I’ve valued:
……
The fact that some people in EA and rationality get value from the other space doesn’t mean everyone has to do the same. If a space gives you the creeps, you don’t have to engage with it.
My best guess is that organizations and projects in EA and rationalist spaces are best off with some freedom from each other, so they can pursue projects in their different ways.