AGI in a vulnerable world