we need a breakthrough (or two)
The needed breakthroughs (on the scale of the discovery of Transformers) have probably been published already, but have been neglected and half-forgotten.
We have plenty of examples: backpropagation was discovered and rediscovered by many people and groups between 1970 and 1986, and was mostly ignored until the late 1980s. ReLU was known for decades, and its good properties were published in Nature in 2000; it was still ignored until approximately 2011. LSTMs made the need for something like residual connections obvious in 1997, yet the field waited until 2015 to apply this to very deep feedforward nets (highway nets and ResNets). And so on...
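For concreteness, here is a minimal sketch (my own illustration, not taken from any of the papers above) of the residual-connection idea those works converged on: each block adds its learned correction to an identity path, so signal and gradients can still pass through a very deep stack.

```python
# Minimal residual block sketch (illustrative, not from the works cited above).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x + F(x): the identity path is what highway nets / ResNets added
        # to very deep feedforward nets, echoing the constant-error ideas in LSTMs.
        return x + self.body(x)

# Signal still flows through 100 stacked blocks thanks to the identity paths.
deep_net = nn.Sequential(*[ResidualBlock(64) for _ in range(100)])
out = deep_net(torch.randn(8, 64))
```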
So there should be plenty of promising things hidden in the published literature and not widely known.
It might be that all that is needed to surface those breakthroughs still buried in lightly cited papers is a modestly competent automated AI researcher: one that can understand published papers, generate moderately competent ML code, comb the literature for promising ideas, and automatically synthesize and run experiments based on various combinations of those ideas. Can one implement a system like this as an intelligent wrapper around GPT-4? It's not clear, but overall we don't seem to be very far from being able to do something like this (perhaps we do need to wait for the next generation of LLMs, but a system able to do this does not have to be a superintelligence or even an AGI itself; it only needs limited, moderate competence to have a good chance of unearthing the required breakthroughs).
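As a very rough sketch of what such a wrapper's outer loop might look like (every name below is a placeholder I'm making up for illustration; a real system would put an LLM and a literature-search service behind these functions, and the hard part is the quality of each step, not the orchestration):

```python
# Hypothetical outer loop for an "automated AI researcher" wrapper.
# All functions are toy stand-ins invented for this sketch.
import itertools
import random

def search_literature(query: str) -> list[str]:
    # Stand-in for searching lightly cited papers via a citation graph / full text.
    return ["paper_A", "paper_B", "paper_C"]

def summarize_idea(paper_id: str) -> str:
    # Stand-in for "ask the LLM to extract the core trick of this paper".
    return f"core idea of {paper_id}"

def generate_experiment(ideas: tuple[str, ...]) -> str:
    # Stand-in for "ask the LLM to write training code combining these ideas".
    return "train.py combining: " + " + ".join(ideas)

def run_and_score(experiment_code: str) -> float:
    # Stand-in for running the generated code in a sandbox on a benchmark.
    return random.random()

def research_loop(query: str, combo_size: int = 2):
    ideas = [summarize_idea(p) for p in search_literature(query)]
    results = []
    for combo in itertools.combinations(ideas, combo_size):
        score = run_and_score(generate_experiment(combo))
        results.append((combo, score))
    # Surface the most promising combinations for further (human or LLM) inspection.
    return sorted(results, key=lambda r: r[1], reverse=True)

print(research_loop("neglected architectural tricks for deep nets")[:3])
```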
Interesting read.
While I have also found that GPT-4 can't solve the more challenging problems I throw at it, I recognize that most humans probably wouldn't be able to solve many of those problems within a reasonable amount of time either.
One possibility is that the ability to solve novel problems might follow an S curve: it took a long time for AI to become better at novel tasks than 10% of people, but it might go quickly from there to outperforming 90%, and then improve only very slowly after that.
However, I fail to see why that must necessarily be true (or false), so if anyone has arguments for or against, they are more than welcome.
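Just to make the S-curve picture a bit more concrete, here is a toy logistic model (my own illustration with arbitrary time units, so it isn't evidence either way): the middle of the population is crossed far faster, per percentage point, than either tail, which is roughly the "slow to 10%, quick to 90%, then slow again" shape.

```python
# Toy logistic model of "fraction of people the AI outperforms" over time.
# Purely illustrative; units and parameters are arbitrary.
import math

def time_to_reach(p: float, midpoint: float = 0.0, rate: float = 1.0) -> float:
    """Invert the logistic curve: time at which fraction p is outperformed."""
    return midpoint + math.log(p / (1.0 - p)) / rate

for lo, hi in [(0.01, 0.10), (0.10, 0.90), (0.90, 0.99)]:
    dt = time_to_reach(hi) - time_to_reach(lo)
    print(f"{lo:.0%} -> {hi:.0%}: {dt:.1f} time units "
          f"({(hi - lo) / dt:.0%} of the population per unit)")
# The middle 80% is crossed at ~18% of the population per unit,
# while each tail segment crawls along at ~4% per unit.
```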
Lastly, I would like to ask the author whether they can give an example of a problem such that, if it were solved by AI, they would be worried about "imminent" doom. "New and complex" programming problems are mentioned, so providing any such example might contribute to the discussion.
After some introspection, I realized my timelines are relatively long, which doesn't seem to be shared by most people around here. So this is me thinking out loud, and perhaps someone will try to convince me otherwise. Or not.
First things first, I definitely agree that a sufficiently advanced AI can pose an existential risk -- that's pretty straightforward. The key part, however, is "sufficiently advanced".
Let's consider a specific claim "Within X years, there will be a superintelligent AGI powerful enough to pose a significant existential threat", where X is any number below, say, 30[1].
Since this is a positive claim, I can't exactly refute it from thin air. Let's instead look at the best arguments for it I can think of, and why they ultimately don't convince me. Due to the temporal nature of the claim, they should involve recent technological advances and new AI capabilities[2].
With this preamble out of the way, let's look at the biggest recent achievements/fields of research, and why they won't kill us just yet
As it stands, I'm pretty convinced[4] that we need a breakthrough[5] (or two) to get to a level of intelligence that's general, superhuman, and potentially threatening. None of our current methods are powerful enough that simply scaling them, or incrementally improving them, will get us there. On the other hand, there are many great things that these systems can do to improve our lives, so for the time being, I'll happily keep working on AI capabilities, even in the limited scope of my current research.
[1] Considering the difference between 1993 and 2023, I have no clue what 2053 will be like.
[2] Any claim that doesn't rely on recent events might as well have been made in 1023, when killer robots weren't a big concern [citation needed]. Note that "recent" is a very relative term, but I'm omitting the rise of computers and neural networks in general from this text.
[3] The news of the week is the plugin system, which might move it a step towards agent-ishness, but IMO it's a rather small step in the context of existential risk.
[4] Note: if this were political Twitter, I'd fully expect a response along the lines of "Omg you're missing the absolute basics, educate yourself before posting". While I admittedly have not read every single piece of relevant literature, I'd still estimate that over the years I did much more reading and thinking on the topic than the vast majority of the (global/western) population. Possibly even more than the average AI researcher, since x-risk only recently started entering the mainstream.
[5] Something on a similar scale to the recent rediscovery of neural networks and their effectiveness.