For everyone in Hong Kong interested in AI Safety.
Come for the reading group and stay for drinks — or skip the reading and just join us for drinks!
We welcome all backgrounds and levels of expertise. Venue details will be shared with all registered participants via our Luma page.
Reading Group (6:30pm – 8:30pm)
Each session, we tackle a selected paper, blog post, or chapter on AI alignment, existential risk, or governance. No need to be an expert; just bring curiosity and a willingness to engage. We read in advance, then gather to discuss, question, and debate. Expect rigorous but friendly conversation.
Reading material for Reading Group #1:
“Machines of Loving Grace” by Dario Amodei https://www.darioamodei.com/essay/machines-of-loving-grace
Social Drinks (8:30pm – 10:00pm)
After the discussion, we wind down with casual drinks. This is your chance to meet fellow community members, chat about anything from technical alignment to how your week went, and build relationships beyond the reading material. Newcomers are especially welcome! If you've been curious about AI safety but haven't joined an event yet, this is the perfect soft entry point.