Why an Intelligence Explosion Might Be a Low-Priority Global Risk