Existential risk from AI without an intelligence explosion