Technical AGI safety research outside AI