I would be nice to have separate posts for some of the linked talks. I saw the one for prediction markets. Nice. But I think for the others would be interesting too. And maybe you can get some of the participants to comment here too.
Thank you for your comment- yes, speakers are working on posts of their own, and they are encouraged to link to the post here for reference and connection.
With AI capabilities advancing in several domains from elementary-school level (GPT-3, 2020) to beyond PhD-level (2025) in just five years, the AI safety field may face a critical challenge: developing and deploying effective solutions fast enough to manage catastrophic and existential risks from beyond-human level AI systems that may emerge on timelines shorter than we hope.
On October 10, 2025, I organized a hybrid symposium bringing together 52 researchers and founders (25 in-person in NYC) to explore technical methods for accelerating AI safety progress. The focus of the event was not to discuss any single research safety agenda, but how to reduce catastrophic/existential risk from near-term powerful AI faster, as a field. This post shares what we covered and learned in the talks of our five speakers.
Testing a Symposium Format and Three Enablers for AI Safety Acceleration
Acceleration of AI safety progress appears valuable in particular in the context of short timelines, to reduce catastrophic or existential risks from beyond-human level AI systems in the near-term.
Excellent writing exists on this and related topics from established practitioners. This event aimed to test whether a hybrid symposium with structured presentations could complement the existing discourse and available online forums and facilitate researcher and founder connections and conversations.
While there are many lenses and perspectives on achieving accelerated AI safety effectiveness for managing x-risk in short timeline scenarios, the event agenda was driven by three themes that may point to key acceleration enablers:
Presentations and discussions of the symposium described in this post primarily covered the first two items. The third item remains critical but was largely outside of the focus on technical methods that this event applied.
Presentation Summaries
Each speaker at the symposium brought distinct perspectives on AI safety acceleration themes. Below are summaries and key take-aways, with links to full materials for more detailed information.
1. Opening and Introduction on AI Safety Acceleration Methods
The first three of the following presentations covered technical acceleration topics on item/domain #1, on finding most effective safety interventions and related frameworks.
2. AI Safety Research Futarchy: Using Prediction Markets to Choose Research Projects for MARS
3. Predicting Extinction: Overview on Tools and Methods to Forecast ASI Risks
4. Towards Predicting X-risk Reduction of AI Safety Solution Candidates through an AI Preferences Proxy
The next two presentations cover technical acceleration topics on item/domain #2, automating the maturation of safety interventions through the safety R&D workflow.
5. Automating the AI/ML Safety Research Workflow- Challenges and Approaches
6. Model Evaluation Automation- Technical Challenges and How to Make Progress
7. Meeting Closeout
Beyond The Talks- Discussion Themes
During Q&A and post-event conversations, several questions emerged that may be productive focal points for future events:
Looking Ahead
This event aimed to serve researchers and founders working on acceleration methods by providing a hybrid symposium venue to present their work and coordinate. I am thankful for the speakers who took time to share their work, and for everyone who attended the meeting in-person and virtually.
Attendance levels >50 and attendee feedback after the event suggest interest in continuing this type of format. Building on this initial experiment, future events will aim to reach more researchers, founders, funders, and forward thinkers in this domain. The goal will remain the same- to provide a structured forum for coordinating the mitigation of catastrophic or existential risk from beyond-human level AI systems that may emerge in the near term.
Note: I organized this event independently. I will be joining a new employer later this month, however this work was done in my personal capacity.
All errors, misquotes of speaker material, and similar are entirely my own. Please let me know if you see any so I can fix the issue!