Hi! We're interested in beefing up our forecasting and threat modeling skills so that we better understand the problems of transformative AI (TAI) and AI existential safety.
We mostly come from technical backgrounds, but we occasionally discuss governance and policy matters.
We commit 1-2 hours per week: 1 hour on a call, plus occasionally prepping with a short reading.
Exercises we've done:
- estimates on https://forecast.elicit.org/
- Fermi modeling scenarios (e.g. https://www.lesswrong.com/posts/yTxHnfoD3L8CdezcG/how-to-fermi-model ); see the sketch after this list
- writing scenarios (vague sci-fi)
- responding to papers or blog posts
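To give a flavor of the Fermi modeling exercise, here's a minimal Monte Carlo sketch. The question and every factor estimate are invented for illustration, not taken from one of our actual models:

```python
import math
import random

def sample_90ci(low, high):
    """Sample a lognormal whose central 90% interval is roughly (low, high)."""
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)  # 90% CI spans +/- 1.645 sd
    return random.lognormvariate(mu, sigma)

def one_estimate():
    # Toy question: "How many full-time AI safety researchers are there?"
    orgs = sample_90ci(20, 100)                       # orgs doing safety work
    people_per_org = sample_90ci(3, 30)               # researchers per org
    full_time_frac = min(1.0, sample_90ci(0.3, 0.9))  # fraction that is full-time
    return orgs * people_per_org * full_time_frac

samples = sorted(one_estimate() for _ in range(10_000))
print(f"median: {samples[5_000]:.0f}")
print(f"90% interval: {samples[500]:.0f} to {samples[9_500]:.0f}")
```

The point of the exercise is less the final number and more arguing about which factors belong in the product and how wide the intervals should be.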
A post we've produced:
- https://www.lesswrong.com/posts/3xACom5ytqBogcuad/chance-that-ai-safety-basically-doesn-t-need-to-be-solved-we

Directions we're thinking of taking the group in:
- formal prediction tracking, like on Elicit but focused more on your personal journey of rethinking and updating on an individual question (see the sketch after this list)
- concrete goals, like producing high-quality posts
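For the prediction tracking idea, this is roughly the kind of lightweight personal log we have in mind: record each probability update with its reasoning, then score the whole trajectory once the question resolves. The question, dates, and numbers below are made up for illustration:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Question:
    """A personal log of forecasts on one question, with reasoning per update."""
    text: str
    updates: List[Tuple[str, float, str]] = field(default_factory=list)  # (date, p, reasoning)
    outcome: Optional[bool] = None

    def update(self, date: str, p: float, reasoning: str) -> None:
        self.updates.append((date, p, reasoning))

    def brier_scores(self) -> List[Tuple[str, float]]:
        """Brier score of each recorded forecast, once the question has resolved."""
        assert self.outcome is not None, "question not resolved yet"
        y = 1.0 if self.outcome else 0.0
        return [(date, (p - y) ** 2) for date, p, _ in self.updates]

q = Question("Will benchmark X be saturated by the end of 2023?")
q.update("2021-02-01", 0.40, "base rate for similar benchmarks")
q.update("2021-06-01", 0.65, "new results discussed on this week's call")
q.outcome = True
for date, score in q.brier_scores():
    print(date, f"Brier: {score:.2f}")
```

Keeping the reasoning string alongside each update is what makes this about the journey rather than just the calibration number.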
We're on a Discord server, and we're looking to grow to around 20 people.
We think that having a space where no idea is stupid is critical for self-development, and we see this group as a bridge between casual conversation and larger communities like LessWrong and the Alignment Forum.
Please reach out if you'd like to join!