Does evolution provide any hints for making model alignment more robust? — LessWrong