[Resolved] Who else prefers "AI alignment" to "AI safety"?