After some introspection, I realized my timelines are relatively long, a view that doesn't seem to be shared by most people around here. So this is me thinking out loud, and perhaps someone will try to convince me otherwise. Or not.
First things first, I definitely agree that a sufficiently advanced AI can pose an existential risk -- that's pretty straightforward. The key part, however, is "sufficiently advanced".
Let's consider a specific claim: "Within X years, there will be a superintelligent AGI powerful enough to pose a significant existential threat", where X is any number below, say, 30.
Since this is a positive claim, I can't exactly refute it from thin air. Let's instead look at the...