Accelerating AI Safety Progress via Technical Methods: Calling Researchers, Founders, and Funders

by Martin Leitgab
5th Oct 2025

[LessWrong Community event announcement: https://www.lesswrong.com/events/kZGehYydasb3FurxG/mini-symposium-on-accelerating-ai-safety-progress-via]

Are you working on accelerating progress toward effective AI safety for existential risks? Interested in contributing to this problem, learning about current efforts, or funding active work? Join us:

🎯 The Challenge:

  • AI capabilities are advancing rapidly
  • Current research literature suggests that many AI safety approaches may not scale beyond human-level AI
  • Critical question: given the catastrophic risks at stake, how can we accelerate progress toward effective technical AI safety solutions for powerful AI systems that may emerge in the near term?

🚀 Event Focus: This symposium aims to connect researchers, founders, funders, and forward thinkers working on technical methods for accelerating AI safety.

📍 Location & Time (Hybrid Event):

  • In-person: Picasso Boardroom, 1185 6th Avenue, NYC (capacity limited to 27 attendees)
  • Virtual: unlimited capacity; a Google Meet link will be sent to registered participants before the event
  • Date: Friday 10/10 at 4 pm EDT
    • (One hour before EA Global NYC 2025 opens nearby)

📝 Registration:

  • Free hybrid event with in-person and virtual options
  • In-person registration: capacity is limited to 27 attendees, so register early!
    • Registration deadline: Thursday 10/9 at 8 am EDT
  • Virtual registration: open until the event starts

🎤 Lightning Talks about your work or interest in the field:

  • Format: 7 minutes, followed by 5 minutes of Q&A
  • To present: email martin.leitgab@gmail.com with a brief description of your talk
  • Speaker list: selected speakers are listed in a public sheet [here]
  • Post-event: a summary with speaker materials will be posted on LessWrong (with speaker permission)

💡 Topics of Interest:

  • Accelerating the discovery of effective safety solutions
  • Scalable effectiveness predictions for candidate solutions
  • Automating steps of the safety research workflow
  • Any technical method for accelerating progress toward effective safety for AI beyond the human level

We look forward to your participation!