A case for AI alignment being difficult