Mini-Symposium on Accelerating AI Safety Progress via Technical Methods - Hybrid In-Person and Virtual

by Martin Leitgab
Friday 10th October
8:00 pm – 9:30 pm GMT
Online Event
martin.leitgab@gmail.com

Posted on: 5th Oct 2025

Are you working on accelerating AI safety effectiveness for existential risks? Interested in contributing to this problem, learning about current efforts, or funding active work? Join us:

📍 Location & Time (Hybrid Event):

  • In-person: Picasso Boardroom, 1185 6th Avenue, NYC (capacity limited to 27 attendees)
  • Virtual: Unlimited capacity; a Google Meet link will be sent to registered participants before the event
  • Date: Friday 10/10 at 4 pm EDT
    • (One hour before EA Global NYC 2025 opens nearby)

🎯 The Challenge:

  • AI capabilities are advancing rapidly
  • Current research literature suggests that many AI safety approaches may not scale beyond human-level AI
  • Critical question: Given the catastrophic risks at stake, how can we accelerate progress toward effective technical AI safety solutions for powerful AI systems that may emerge in the near term?

🚀 Event Focus: This symposium may be one of the first to connect researchers, founders, funders, and forward thinkers around technical methods for accelerating AI safety.

📝 Registration:

  • Free hybrid event with in-person and virtual options
  • In-person registration: Capacity limited to 27 attendees, so register early!
    • Registration deadline: Thursday 10/9 at 8 am EDT
  • Virtual registration: Open until the event starts

🎤 Lightning Talks about your work or interest in the field:

  • Format: 7-minute talk followed by 5 minutes of Q&A
  • To present: Email martin.leitgab@gmail.com with a brief description
  • Speaker list: Selected speakers are visible in a public sheet [here]
  • Post-event: A summary with speaker materials will be posted on LessWrong (with speaker permission)

💡 Topics of Interest:

  • Accelerating discovery of effective safety solutions
  • Scalable effectiveness predictions for solution candidates
  • Automating safety research workflow steps
  • Any technical method for accelerating progress toward AI safety effectiveness for AI beyond human level

We look forward to your participation!
