ASI existential risk: Reconsidering Alignment as a Goal