A Functional Model of Intelligence May Be Required to Solve Alignment. Why Can't We Test That Hypothesis?