This post argues that Carlsmith's model, and conjunctive AI doom estimates in general, neglect an important statistical consideration that results in an inflated estimate of AI X-risk. 