Giving AIs safe motivations