We’ve been looking for joinable endeavors in AI safety outreach over the past few weeks and would like to share our findings with you. Let us know if we missed any and we’ll add them to the list. For comprehensive directories of AI safety communities spanning general interest, technical focus, and...
Many teams suffer from friction, whether it's subtle tension or full-blown paralysis. AI safety orgs are no exception. In high-stakes, mission-driven contexts, unspoken tensions and misaligned communication can quietly drain energy, stall progress, and damage morale. If you’re part of a team or organization in the AI safety space...
Epistemic status: Noticing confusion There is little discussion happening on LessWrong regarding AI governance and outreach. Meanwhile, these efforts could buy us time to figure out technical alignment. And even if we figure out technical alignment, we still have to solve crucial governmental challenges so that totalitarian lock-in...
Things I'm fairly confident in: * I should take colds in general more seriously than I did pre-pandemic: Staying at home with cold symptoms is good. General masking during cold season is good. We should have air filters in all public indoor spaces. * Long covid is real and we...
Repost from https://amoretlicentia.substack.com/ Modern life is weird. For the more privileged among us, the options of what we could do with our time grow exponentially by the year, by the day. My interests are infinite, and I’m lucky to have both the wits and the means to follow just about...
Thanks to Dakota Quackenbush from Authentic Bay Area for an earlier version of this tool. So far, the main motivation for my work as a community builder and authentic relating facilitator has been meeting my own need for connection. I think that was a mistake. First, it is difficult to harvest...
AI existential risk has been in the news recently. A lot of people have gotten interested in the problem and some want to know what they can do to help. Additionally, other existing routes to advice are becoming overwhelmed, like AI Safety Support, 80,000 Hours, AGI Safety Fundamentals, AI...