A case for robust AI benevolence rather than human control