Why Alignment Fails Without a Functional Model of Intelligence