Giving AIs safe motivations