As claimed in my last post, minimum viable AGI is here. Given that, what should we do about it? Since I was asked, here are my recommendations.
Spread Awareness
By my reasoning, the most important thing is to get as many people as possible to realize what's going on. If you don't want to call it AGI, that's fine, but the simple fact is that we've already seen AIs that refuse shutdown, continually maximize objectives in the real world (i.e. we have MVP paperclip maximizers), and can red-team computer systems by exploiting vulnerabilities. Yes, these current AI applications aren't reliable enough to be a serious threat, but given a few more weeks and another round of base model enhancements, they probably will be.
The simplest thing you can do is talk to your friends and family. Make sure they understand what's going on. If you can, maybe get them to read something, like If Anyone Builds It, Everyone Dies, or watch something, like the upcoming AI Doc movie. I think broad awareness is important because the most pressing next step is to enact policy.
Policy Action
We don't know how to build safe AGI, let alone safe ASI. We have some promising ideas, but those ideas need time. Policy interventions are how we buy that time.
Enacting policy generally requires support from constituents. So once awareness is raised, the next step is to ask your government to take action. For those of us living in Western democracies, and especially in the United States, that means reaching out to our government representatives, letting them know how we feel, and encouraging others to do the same.
The only org I know of doing much in the way of political organizing around safety is Pause AI (Pause AI USA). I'd recommend at least getting on their mailing list, since they'll notify you when contacting your representatives would support specific policies.
On the off chance you're a policy person reading this who isn't already involved, there are plenty of open roles in AI policy where you could work on safety.
Safety Research
Finally, there's safety research. From the outside, it probably feels like a lot of people are working on safety. There aren't, especially relative to how many are working on pure capabilities. Assuming policy is enacted that buys us time, this is the work that will actually make the technology safe.
If you're not already engaged here, I'd recommend checking out 80k's guidance and job board for more info. In my opinion, we most desperately need more folks working to actually solve alignment, and right now I'm aware of very few ideas that even stand a chance.
If you have your own suggestions for things people should do, please share them in the comments.