Why AGI Might Be More Aligned Than Human Systems