[AN #128]: Prioritizing research on AI existential safety based on its application to governance demands — LessWrong