
Practicing Public Outreach

Sep 08, 2025 by Alexander Müller

At the AI Safety Initiative Groningen (AISIG), our mission is to raise awareness of the full spectrum of existing and potential harms from AI, to inform mitigation priorities through ongoing discourse, and to support the development of effective solutions.

We have started our own Substack (check it out) to expand public outreach on risks from AI. We are trying to reach a broad audience, and we aim to publish a high-quality blog post every week on a subtopic of AI safety (ranging from bias to extinction risk).

Importantly, if you are an avid LessWrong reader, you will likely not learn anything new from these posts! Rather, they are written to convey pieces of the wider puzzle, in such a way that (nearly) everyone should, with some effort, be able to read and understand them.

This sequence therefore asks you, the reader, the following: could you please read the posts as though you were new to the topic, combine that perspective with your knowledge and experience in AI safety/alignment, and give us meaningful feedback to improve our posts?

Note that we are operating under Crocker's Rules.

Posts in this sequence:

- Why Care About AI Safety? (Alexander Müller)
- What a Swedish Series (Real Humans) Teaches Us About AI Safety (Alexander Müller, alicedauphin)
- On Governing Artificial Intelligence (Alexander Müller, Thomas Vassil Brcic)