Patternist friendly AI risk