Hi! We're interested in beefing up our forecasting and threat modeling skills so that we better understand the problems of transformative AI (TAI) and AI existential safety.
We mostly come from technical backgrounds, but we occasionally discuss governance and policy matters.
We commit 1-2 hours per week: 1 hour on a call, plus occasionally prepping with a short reading.
Exercises we've done:
Directions we're thinking of taking the group in:
We meet on a Discord server, and we're looking to grow to around 20 people.
We think having a space where no ideas are stupid is critical for self-development, and we see this group as a bridge between casual conversation and larger communities like LessWrong and the Alignment Forum.
Please reach out if you'd like to join!