I think it's healthy for EA-specific events to be big-tent EA, but it is absolutely wild to me that there's still no large-scale recurring AI existential safety conference. If anyone seriously wants to make it happen, I saved the domain aisafety.global for you.
Also, Skyler, who runs ACX meetups globally, would be up for stepping up to support more AIS groups, but is capacity-constrained and would need additional funding for ~2-3 people to do this. I'm likely going to include this in my list of recs to funders.
I agree that big-tent EA events are healthy and still very high-impact.
I actually co-organized a one-day conference on AI Safety in Zurich in September (Zurich AI Safety Day). It was a collaboration between BlueDot Impact and Zurich AI Safety, with more than 200 participants. I just published a post on the learnings from this conference here. Although this isn't a recurring event yet, I feel there was substantial value in it, which also, in part, motivated me to write this post.
For future iterations of something like this, it might indeed be useful to use that domain.
It's also exciting to hear that Skyler might be interested in AIS group support. Depending on what kind of group support you mean, there is also Kairos, which supports local groups through its Pathfinder Fellowship.
TL;DR: AI Safety as a cause area has grown to a substantial size within Effective Altruism. To avoid neglecting other cause areas and to help the field grow more efficiently, I advocate running cause-area-specific conferences. They could shape a shared identity for the field, lower access barriers for non-EA talent, and strengthen connections to the broader ecosystem.
Over the last few years, the AI Safety field has been growing rapidly. As a result, the topic has become more prevalent within the broader Effective Altruism community. This has, for example, led to 80,000 Hours shifting their focus towards safely navigating the transition to AGI.[1] Many local EA groups are experiencing a similar trend, with discussions becoming more and more focused on AI Safety. In my opinion, this has two disadvantages:
When I say "space for topics within EA", I also include the very principles of Effective Altruism. With this interpretation in mind, the Centre for Effective Altruism (CEA) is indeed trying to combat the first disadvantage by returning to a principles-first approach to EA community building. I support this step and think it might indeed help prevent other cause areas and EA principles from becoming neglected. However, it also implies less support for AI Safety. I therefore advocate holding more international or national AI-Safety-specific gatherings, and I am particularly excited about conferences in a similar style to EAG(x). I believe such conferences would have several benefits over EAG(x) conferences, including the following:
That said, I think such conferences should happen in addition to EAG(x) conferences rather than replacing them. I also don't think the field of AI Safety should become completely detached from EA. I believe that sharing the same principles of doing good and using reason when making decisions is very beneficial to the AI Safety field. Additionally, people who first join either of the two communities might benefit greatly from joining the other as well: understanding the core EA motivation seems valuable to anyone in the AI Safety field, and, conversely, many people in EA might be most effective in their careers by working on making AGI safe.
While AI Safety is currently the only topic I see as potentially dominating, and thereby eating, EA, I think the benefits of broader cause-area-specific gatherings could apply to other cause areas just the same.
https://80000hours.org/2025/04/strategic-approach/