Aligning a toy model of optimization — LessWrong