Elicitation for Modeling Transformative AI Risks