Are you working on accelerating progress toward effective AI safety for existential risks? Interested in contributing to this problem, learning about current efforts, or funding active work? Join us:
🎯 The Challenge:
AI capabilities are advancing rapidly
Current research literature: Many AI safety approaches may not scale beyond human-level AI
Critical question: Given the catastrophic risks at stake, how can we accelerate progress toward effective technical AI safety solutions for the powerful AI systems that may emerge in the near term?
🚀 Event Focus: This symposium aims to connect researchers, founders, funders, and forward thinkers working on methods for accelerating technical AI safety.
📍 Location & Time (Hybrid Event):
[LessWrong Community event announcement: https://www.lesswrong.com/events/kZGehYydasb3FurxG/mini-symposium-on-accelerating-ai-safety-progress-via]
📝 Registration:
🎤 Lightning Talks about your work or interest in the field:
💡 Topics of Interest:
We look forward to your participation!