After some introspection, I realized my timelines are relatively long, which doesn't seem to be shared by most people around here. So this is me thinking out loud, and perhaps someone will try to convince me otherwise. Or not.
First things first, I definitely agree that a sufficiently advanced AI can pose an existential risk -- that's pretty straightforward. The key part, however, is "sufficiently advanced".
Let's consider a specific claim: "Within X years, there will be a superintelligent AGI powerful enough to pose a significant existential threat", where X is any number below, say, 30.
Since this is a positive claim, I can't exactly refute it from thin air. Let's instead look at the best arguments for it I can think of, and why they ultimately don't convince me. Due to the temporal nature of the claim, they should involve recent technological advances and new AI capabilities.
With this preamble out of the way, let's look at the biggest recent achievements/fields of research, and why they won't kill us just yet.
As it stands, I'm pretty convinced that we need a breakthrough (or two) to get to a level of intelligence that's general, superhuman, and potentially threatening. None of our current methods are powerful enough that simply scaling or incrementally improving them will get us there. On the other hand, there are many great things that these systems can do to improve our lives, so for the time being, I'll happily keep working on AI capabilities, even in the limited scope of my current research.
Considering the difference between 1993 and 2023, I have no clue what 2053 will be like.
Any claim that doesn't rely on recent events might as well have been made in 1023, when killer robots weren't a big concern. Note that "recent" is a very relative term, but I'm omitting the rise of computers and neural networks in general from this text.
The news of the week is the plugin system, which might move it a step towards agent-ishness, but imo it's a rather small step in the context of existential risk.
Note: if this were political Twitter, I'd fully expect a response along the lines of "Omg you're missing the absolute basics, educate yourself before posting". While I admittedly have not read every single piece of relevant literature, I'd still estimate that over the years I did much more reading and thinking on the topic than the vast majority of the (global/western) population. Possibly even more than the average AI researcher, since x-risk only recently kinda started entering the mainstream.
Something on a similar scale to the recent rediscovery of neural networks and their effectiveness.
Interesting read. While I also have experienced that GPT-4 can't solve the more challenging problems I throw at it, I also recognize that most humans probably wouldn't be able to solve many of those problems either within a reasonable amount of time. One possibility is that the ability to solve novel problems might follow an S curve, where it took a long time for AI to become better at novel tasks than 10% of people, but it might go quickly from there to outperform 90%, and then very slowly increase from there. However, I fail to see why that must necessarily be true (or false), so if anyone has arguments for/against, they are more than welcome. Lastly, I would like to ask the author if they can give an example of a problem such that, if solved by AI, they would be worried about "imminent" doom? "New and complex" programming problems are mentioned, so if any such example could be provided, it might contribute to the discussion.