Overview: AI Safety Outreach Grassroots Orgs

by Severin T. Seehrich, Benjamin Schmidt
4th May 2025
3 min read
8 comments, sorted by top scoring
Algon · 5mo

There's also AISafety.info, which I'm a part of. We've just released a new intro section, and are requesting feedback. Here's the LW announcement post.

Severin T. Seehrich · 5mo

I love them and have been around since the start, but decided not to include them because they don't point in the outreach direction.

Chris_Leong · 5mo

There's also the AI Safety Awareness Project. They run public workshops.

Severin T. Seehrich · 5mo

We found them but had the impression they're not super joinable.

MichaelDickens · 5mo

PauseAI US is a separate entity from PauseAI, so I believe it should also be listed.

Severin T. Seehrich · 5mo

More separate than e.g. PauseAI Germany? My assumption was that anyone would find their respective local chapter via the general PauseAI page.

MichaelDickens · 5mo

As I understand it, PauseAI Global (aka PauseAI) supports protests in most regions, whereas US-based protests are run by PauseAI US, which is a separate group of people.

Anthony Bailey · 5mo

I volunteer as PauseAI software team lead and can confirm this is basically correct. The global PauseAI movement and PauseAI US have many members and origins in common, but somewhat different emphases, mostly for good reasons of specialisation. The US org has Washington connections and more protests focused on the AI labs themselves. We work closely together.

Neither has more than a few paid employees or truly full-time volunteers. As per the OP, anyone who agrees that activism and public engagement remain a very under-leveraged way to help AI safety has a massive opportunity for impact here, through time, skills, or money.


We’ve been looking for joinable endeavors in AI safety outreach over the past few weeks and would like to share our findings with you. Let us know if we missed any, and we’ll add them to the list.

For comprehensive directories of AI safety communities spanning general interest, technical focus, and local chapters, check out https://www.aisafety.com/communities and https://www.aisafety.com/map. If you're uncertain where to start, https://aisafety.quest/ offers personalized guidance.

ControlAI

ControlAI started out as a think tank. Over the past few months, they have developed a theory of change for how to prevent ASI development (the “Direct Institutional Plan”). As a pilot campaign, they cold-emailed British MPs and Lords to talk to them about AI risk. So far, they have talked to 70 representatives, 31 of whom agreed to publicly stand against ASI development.

ControlAI also supports grassroots activism: on https://controlai.com/take-action, you can find templates to send to your representatives yourself, as well as guides on how to constructively inform people about AI risk. They are also reaching out to influencers and supporting content creation.

While they are the org on this list whose theory of change and actions we found most convincing, they are still at the start of building the infrastructure that would allow them to take in considerable numbers of volunteers. We expect them to react positively anyway if you reach out to them with requests for talks, training, or similar. You can join the ControlAI Discord here.

ControlAI is currently hiring!

EncodeAI

EncodeAI is an organization of high school and college students that addresses all kinds of AI risks. Their past endeavors and successes include a bipartisan event advocating for anti-deepfake laws and co-sponsoring SB 1047, California’s landmark AI safety bill, which, had it passed, would have been a tremendous contribution to AI existential safety.

You can find an overview of their past activities here and join their local chapters or start a new one here.

PauseAI

PauseAI is a community-focused organization dedicated to AI safety activism. Their primary aim is to normalize discussions about AI existential risk and advocate for a pause in advanced AI development. They contact policymakers, influencers, and experts, organize protests, hand out leaflets, do tabling, and anything else that seems useful. PauseAI also offers microgrants to fund a variety of projects fitting their mission.

We (Ben and Severin) have also started running co-working sessions for emailing MPs over the PauseAI Discord, as well as Outreach Conversation Labs where you can practice informing people about AI x-risk via fun mock conversations. Our goal is to empower others rather than become bottlenecks, so we encourage you to organize similar events, whether over the PauseAI Discord, in your local group, or at conferences.

Currently, PauseAI seems to be the org on this list that’s best equipped to absorb new members.

More on https://pauseai.info/. To get involved, you can join their Discord or one of the local groups. To get really involved, you can attend PauseCon from June 27 to 30 in London.

PauseAI US

At this point, PauseAI US is a separate entity from PauseAI, so it seems worth mentioning them separately.

More info on https://www.pauseai-us.org/. You can join their Discord here and their mailing list here.

StopAI

Focusing on civil disobedience, StopAI is at the spicy end of this spectrum. You can follow their YouTube channel to learn more about their protests.

More on https://www.stopai.info/. To get involved, check https://www.stopai.info/join or join their Discord.

Collective Action for Existential Safety (CAES)

CAES’s central aim is to catalyze collective action to ensure humanity survives this decade. It serves all existential safety advocates globally and is more cause-area agnostic than the other organizations on this list. If you want to help with existential risk but are still uncertain which niche suits you best, they’ll help point you in a good direction.

Their website features a list of 80+ concrete actions individuals, organizations, and nations can take to increase humanity’s existential safety in light of risks from advanced AI, nuclear weapons, synthetic biology, and other novel technologies. 

More info: existentialsafety.org.

Call to action

These organizations are mostly in their early stages. Accordingly, any effort now is disproportionately impactful. With short timelines and AI risks becoming more salient to the average person, taking action here seems like a great opportunity. And if you are worried that political outreach won’t go in the right direction or might be harmful, this is your chance to shift the trajectory of these endeavors!