How human-like do safe AI motivations need to be?