Practicing Public Outreach

Sep 08, 2025 by Alexander Müller

At the AI Safety Initiative Groningen (AISIG), our mission is to raise awareness of the full spectrum of existing and potential harms from AI, to inform mitigation priorities through ongoing discourse, and to support the realization of effective solutions.

We have started our own Substack (check it out) to expand public outreach on risks from AI. We are trying to reach a broad audience, and we aim to publish a high-quality blog post every week on a subtopic of AI safety (ranging from bias to extinction risk).

Importantly, if you are an avid LessWrong reader, these posts are unlikely to teach you anything new! Rather, they are written to convey pieces of the wider puzzle, in such a way that (nearly) everyone should, with some effort, be able to read and understand them.

This sequence thus asks you, the reader, the following: could you please read the posts as though you were new to the topic, combine them with your knowledge and experience in AI safety/alignment, and give us meaningful feedback to improve our posts?

Note that we are operating under Crocker's Rules.

Posts in this sequence:

1. Why Care About AI Safety? (Alexander Müller)
2. What a Swedish Series (Real Humans) Teaches Us About AI Safety (Alexander Müller, alicedauphin)
3. On Governing Artificial Intelligence (Alexander Müller, Thomas Vassil Brcic)
4. The Strange Case of Emergent Misalignment (Alexander Müller, ilijalichkovski)
5. Why Smarter Doesn't Mean Kinder: Orthogonality and Instrumental Convergence (Alexander Müller)
6. Homo sapiens and homo silicus (Alexander Müller, Sophia Lopotaru)
7. How the Human Lens Shapes Machine Minds (Alexander Müller, cansukutay)