This post argues that Carlsmith's model, and conjunctive AI doom estimates in general, neglect an important statistical consideration that results in an inflated estimate of AI X-risk.
This is a linkpost to a list of skeptical takes on AI FOOM. I haven't read them all and probably disagree with some of them, but it's valuable to put these arguments in one place.
This is a linkpost to an important argument about a potential motte-and-bailey fallacy in the AI alignment community.
I've added a tag for object-level AI risk skepticism arguments. So far I've included my own post about deceptive alignment and Katja Grace's post about AI X-risk counterarguments. What other arguments should be tagged?
Thanks to Wil Perkins, Grant Fleming, Thomas Larsen, Declan Nishiyama, and Frank McBride for feedback on this post. Thanks also to Paul Christiano, Daniel Kokotajlo, and Aaron Scher for comments on the original post that helped clarify the argument. Any mistakes are my own. In order to submit this to...
The order in which key properties emerge is important and often glossed over. Thanks to Wil Perkins, Grant Fleming, Thomas Larsen, Declan Nishiyama, and Frank McBride for feedback on this post. Any mistakes are my own. Note: I have now changed the second post in this sequence into a standalone...