Note: this was originally written for a general audience. I'm posting it on Less Wrong because this community is much more informed about AI than the average person, and I expect that you have seen many of these arguments already—I would love to get your critiques / feedback.
I’m not a huge believer in the intelligence explosion hypothesis—basically the idea that AI will become capable of self-improvement and thus speed up its own development.
But I also don’t think the idea can be dismissed out of hand, because recursive self-improvement is scarily plausible and might have started already!
So I figured I’d sketch out what a believable intelligence explosion might look like, along with some unanswered questions and reasons for doubt.
I. The basic argument
1. AI can write code well enough to execute small tasks / write short programs. (Confidence: unassailably true. If you disagree, well, what are we even doing here?)
3. If propositions 1) and 2) are both true, AI will be used in AI research to speed up AI capabilities progress. (Confidence: almost certainly true, but slowdowns are not impossible)2
4. More capable AI will be able to help more with research, which will further speed up progress, etc etc—leading to an intelligence explosion.
I have basically no qualms about propositions 1) and 2). And if propositions 1) and 2) are true, proposition 3) almost follows: I think it is highly likely that 3) is true as well.
But what about proposition 4? Well, therein lies the rub.
In the short term, recursive self-improvement seems inevitable. Even if AI models don’t get much better at programming, Opus 4.6 seems capable enough to change some of how programming is done already, particularly in rote optimization tasks like the “iterative optimization” Karpathy talked about in his tweet. If the models DO get better at programming—which seems likely for now—they might begin to speed up research loops, or shorten the time between ideation and implementation.
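To make “iterative optimization” concrete: the basic loop is just “propose a change, measure it, keep it if the number improves.” Here is a minimal sketch of that loop in Python; it is my own illustration of the general idea, not Karpathy’s setup, and propose_patch and benchmark are hypothetical stand-ins for a coding-model call and a real measurement, not any actual API.

```python
# Minimal sketch of an LLM-driven "iterative optimization" loop.
# propose_patch and benchmark are hypothetical placeholders: in a real setup,
# propose_patch would ask a coding model for a candidate rewrite, and
# benchmark would actually run the code and measure speed, accuracy, etc.
import random

def propose_patch(source: str) -> str:
    return source + f"\n# candidate tweak #{random.randint(0, 9999)}"

def benchmark(source: str) -> float:
    return random.random()  # stand-in for a real measurement

def optimize(source: str, iterations: int = 100) -> str:
    best, best_score = source, benchmark(source)
    for _ in range(iterations):
        candidate = propose_patch(best)
        score = benchmark(candidate)
        if score > best_score:  # keep the candidate only if it measurably improves
            best, best_score = candidate, score
    return best
```

The point of the sketch is that nothing in this loop requires brilliance, only candidates worth measuring, over and over, which is exactly the kind of rote work current models can already do.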
And as data center capacity doubles in the next fear years,3 AIs will gain even more access to compute, and thus be able to operate in longer context windows. This doesn’t solve the dynamic long-term memory problem,4 but that’s not really an issue for AI researchers, who clearly possess enough dynamic long-term memory capacity to improve AIs themselves.
BUT,
Even if recursive self-improvement takes off, we have no idea how far it’ll go—let alone how fast it’ll be.
Consider the AI doomer position. Doomers believe that AI capabilities are constrained mainly by intelligence itself, and that recursive self-improvement will let AI models keep improving themselves until they attain godlike levels of intelligence (and thus capability). And AI has been getting smarter: the AI Futures model analyzes current rates of progress and forecasts that a fully automated coder will arrive by 2030—and that artificial superintelligence will soon follow.5
Now, let’s consider some critiques of the doomer position. The authors of AI as Normal Technology argue that AI will transform the world at roughly the same rate as previous world-changing technologies, like electricity or the internet. They think mass adoption will be slow, that AI use will diffuse slowly through society, that progress will hit physical barriers, etc etc—even if recursive self-improvement occurs.
They also dispute the use of the word “intelligence” to describe AI capabilities:
We do not think there is a useful sense of the term ‘intelligence’ in which AI is more intelligent than people acting with the help of AI. Human intelligence is special due to our ability to use tools and to subsume other intelligences into our own, and cannot be coherently placed on a spectrum of intelligence.
As of today, there is no way to know whether the AI 2027 people or the AI as Normal Technology people are less wrong (more right).6
And figuring out who is right requires answering an even less tractable question:
If AGI is possible, can we get there following our current tech tree?
II. Can the current paradigm lead us to AGI?
[Image: an incredible tech-tree-style diagram of AI research that I found on ResearchGate.] IRL research doesn’t always resemble a tech tree, since research paradigms can and do borrow from each other all the time, but I find it to be a useful way to visualize what’s going on.
Obviously, AGI is strictly possible. After all, WE are AGI. That’s why our AGI definitions all sound like “an AI that is better than humans at basically everything”.
And if nature could design us through blind trial-and-error over billions of years, surely we could design something better, right?
On the other hand, humans are wetware, not hardware. Human biology is vastly different from computer technology in ways that are incredibly hard to overstate; if you want to learn more, I highly recommend this great article on the differences between human neurons and artificial neurons by Mike X Cohen, PhD. (The TLDR is that human neurons are so different from artificial neurons that it is borderline insulting to refer to them by the same name.)
I could see this piece of information supporting both sides of the AGI doomer-denier debate.
On the doomer side, one could take this to mean that intelligence is not a uniquely biology-driven phenomenon. After all, LLMs have basically nothing in common with humans; it’s astounding that we’ve come so far without understanding anything about the black boxes we’re creating.
The fact that AIs can do anything at all implies that it just might be possible to kludge together an AGI without ever really figuring out how humans work—let alone how AIs work, which we still don’t really understand. So recursive self-improvement might lead to an AI takeoff, regardless of whether that takeoff is soft or fast.
On the denier side, one could take this to mean that modern LLM development is on the wrong track entirely, since our current AI systems are completely different from the only AGIs we know about (which are us). AIs do well in tasks that have simple inputs, simple outputs, and strong reward signals, like chess, math, or coding, but they still don’t generalize to tasks that lie outside of their training distribution.
Furthermore, human intelligence is so far beyond our comprehension that we won’t be able to reach anything like it through brute-force trial and error. And of course, that’s more or less what’s going on at the frontier labs right now, since mechanistic interpretability is so unbelievably far behind.
I personally lean towards the denier side of the debate. But unfortunately, there’s yet another aspect of the progress dynamic that I didn’t think of until very recently (and it’s the main reason I’m writing this post).
The question is this: what’s stopping people from using AI improvements to research new AI paradigms?
III. Will AI progress help the people who are researching other paradigms?
So far, it seems like the majority of recent (since ~late 2024) AI gains have come from inference scaling—increasing the amount of compute used every time a model answers a question—as opposed to training, regardless of whether that training is pre-training or post-training.
This is in some ways reassuring; it means that AI progress is likely limited by compute, although we haven’t ruled out the possibility of future algorithmic improvements (which would reduce the amount of compute required to see progress).7
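If “inference scaling” sounds abstract, here is a toy sketch of one common form of it, best-of-n sampling: spend more compute per question by generating several candidate answers and keeping the best-scoring one. This is my own illustration; sample_answer and score_answer are hypothetical stand-ins for a model call and a verifier, not real APIs.

```python
# Toy illustration of inference-time scaling via best-of-n sampling.
# sample_answer and score_answer are hypothetical placeholders for a model call
# and a verifier / reward model; the point is only the shape of the tradeoff:
# larger n means more compute spent on every single question.
import random

def sample_answer(question: str) -> str:
    return f"candidate answer #{random.randint(0, 9)}"

def score_answer(question: str, answer: str) -> float:
    return random.random()

def best_of_n(question: str, n: int = 16) -> str:
    candidates = [sample_answer(question) for _ in range(n)]
    return max(candidates, key=lambda a: score_answer(question, a))

print(best_of_n("What is 17 * 23?", n=16))
```

Longer chains of thought work the same way: more tokens per answer, more compute per answer.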
Let’s go back to that tech tree diagram for a second. Let’s also assume—for the sake of argument—that LLMs will continue improving to some degree but are ultimately a dead end, in the sense that AGI is not somewhere below the Perceptron branch of the tree.
What about the part that says Other AI Models??
[Image: the same tech tree diagram, annotated.] I was worried that you might miss it, so I helpfully annotated the graph with a red circle and some red arrows. You’re welcome!
Take, for example, notable LLM hater Gary Marcus’ favorite AI paradigm: neurosymbolic AI. According to Marcus, who is also a very famous AI researcher, LLMs will never get to AGI on their own because they lack symbolic logic, world models, formal reasoning, yadda yadda. Basically, he thinks LLM researchers trying to get to AGI are on the wrong branch of the tech tree. And what if he’s right?
Well, even if he’s right about LLMs not having the potential to become AGI specifically, who’s to say that LLMs won’t speed up research progress in other AI paradigms?
Sure, LLMs struggle in fields that don’t have well-defined inputs and metrics for success. But AI research is not one of those fields, since you can just optimize for loss functions: LLMs are notoriously good at math and coding. I think it’s entirely possible that a neurosymbolic AI researcher could have their work sped up by LLM assistants in the next few years.
[Graphic from Anthropic.] Note the gap between observed and theoretical coverage: in some sense, the crux of this issue is how much that gap will close in the near future (if it closes at all).
So LLM coding might be responsible for the development of AGI—even if Marcus is right about LLMs not being capable of becoming AGIs themselves—as long as they can shorten the time it takes to discover the correct paradigm.
More generally, all AI researchers could have their work sped up by LLM assistants. And getting to AGI only requires someone to luck into the right path on the tech tree.
So that’s my skeptic’s case for an intelligence explosion.
If AI research can be sped up by LLM coders, our AI timelines should shorten.
And IF AGI is anywhere on the current tech tree—and discovering it is possible without exceeding our current technological and compute limitations—then it doesn’t really matter whether the LLMs themselves will become AGIs.
This possibility is very concerning, and we should definitely be paying attention to it. But “IF” is carrying a lot of weight here. And unfortunately, it’s really hard to determine the likelihood of this scenario playing out, because we still have so many unanswered questions.
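How much should timelines actually shorten? One crude, back-of-the-envelope way to think about it is Amdahl’s-law-style arithmetic: if LLM coders only accelerate part of the research pipeline, the overall speedup is capped by the part they can’t touch. The fractions below are made up purely for illustration, not estimates.

```python
# Crude Amdahl's-law-style estimate of how much an LLM coding speedup could
# compress an AI research pipeline. All numbers are made up for illustration.
def overall_speedup(accelerated_fraction: float, speedup: float) -> float:
    # Total time goes from 1 to (1 - f) + f / s, so overall speedup is the reciprocal.
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / speedup)

# If coding is 30% of the work and LLMs make that part 5x faster, the pipeline
# as a whole only gets ~1.3x faster; even an infinite coding speedup caps out at ~1.4x.
print(round(overall_speedup(0.3, 5.0), 2))   # 1.32
print(round(overall_speedup(0.3, 1e9), 2))   # 1.43
```

Whether the real numbers look like a modest multiplier or a runaway feedback loop is, of course, the whole question.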
IV. Six Unanswered Questions that plague me
What if the algorithms that drive human performance are too complex to simulate without huge increases in compute? Scaling laws look scary when plotted on logarithmic axes, but they look much weaker when plotted on normal ones: maybe compute will continue to be a powerful constraint on AI progress.
[Images: the same scaling laws plotted on a log axis and on linear axes, by Toby Ord.]
Relatedly, what if recursive self-improvement peters out? Since scaling laws follow a power law—meaning that improving at a constant rate requires exponential increases in resources—truly recursive self-improvement would somehow have to outpace the ever-increasing costs, particularly before the scaling laws themselves break down. It’s entirely possible that this just doesn’t happen, and that self-improvement isn’t recursive enough to cause an intelligence explosion.8
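Here is a toy, back-of-the-envelope version of the two questions above. Suppose loss falls as a power law in compute, loss(C) = a * C^(-b); the exponent below is made up for illustration, since real scaling exponents depend on the model family and the metric.

```python
# Toy illustration of why power-law scaling makes steady progress exponentially
# expensive. Assume loss falls as a power law in compute: loss(C) = a * C**(-b).
# The exponent b = 0.05 is made up for illustration, not a measured value.
a, b = 10.0, 0.05

def loss(compute: float) -> float:
    return a * compute ** (-b)

for c in (1e3, 1e6, 1e9):
    print(f"compute={c:.0e}  loss={loss(c):.2f}")  # each 1000x of compute buys only a ~29% loss reduction

# How much more compute does it take to cut the loss in half?
# Solve (C2 / C1) ** (-b) = 0.5  =>  C2 / C1 = 2 ** (1 / b)
factor_per_halving = 2 ** (1 / b)
print(f"Each halving of loss costs ~{factor_per_halving:,.0f}x more compute")  # ~1,048,576x
```

The same curve is a tidy straight line on log-log axes and looks nearly flat on linear ones, which is the visual point of the first question; and for self-improvement to stay recursive, the speedups it generates would have to keep outrunning that kind of cost curve.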
Consider jaggedness—the conspicuous mismatch between what computers find difficult and what humans find difficult (otherwise known as Moravec’s Paradox). For example, computers are better at chess than the strongest human grandmasters, and have been for almost three decades, but they still can’t replace us on many other kinds of cognitive tasks, like driving in snow. What if this continues in perpetuity?9
What if the theoretical upper limit on intelligence is lower than we think? Sure, there’s a huge gap in intelligence between me and Srinivasa Ramanujan, but who’s to say that he doesn’t represent the upper limit of what is possible?10
What if dynamic long-term memory or continual learning comes with an inherent tradeoff in the form of having a worse short-term memory? Humans famously have a maximum working memory capacity of 3-5 objects—but we can also recite thousands of digits of pi by cheating with memory palaces (since a memory palace is a single object). What’s up with that?11
Finally, what if AGI isn’t on the current tech tree at all?? Maybe humans actually are the only AGI system possible in the universe—or maybe we’re just the simplest, which is why we’re the first (on Earth, at least). What if we do have to fully understand human genetics and neuroscience before we can even think about making an AGI, let alone an ASI? What if ASI just straight-up isn’t possible??
These six questions will all have a huge influence on future developments. And they’re all basically unanswerable!!12
I am still an AGI skeptic, at least in the sense that I don’t think that AGI is on the LLM branch of the tech tree. Humans are just too complicated to beat, even on individual tasks, and we can’t even do much without each other; so much of our progress came from social technologies like science and institutions, standing on the shoulders of giants.
Still, to be perfectly honest with you, dear reader, I do wonder if this is all an ad-hoc justification for a just-so story that I really want to believe. I truly hope that AGI isn’t somewhere down the current tech tree (or any tech tree, for that matter) because I would prefer that our world remains un-ended and continues to contain only a reasonable number of paperclips. And I do think AGI isn’t on our current tech tree, mostly because of the unanswerable questions—specifically because I think jaggedness is a huge problem and that “intelligence” is not boundless—but I can’t be sure.
Personally, I still more or less stand by what I said in We are in the good timeline for AI.13 I don’t think we’re getting an intelligence explosion in the next few years, and I suspect the diffusion of AI into society will look closer to the prediction of AI as Normal Technology than AI 2027.
But what about the next 10 years? Or the next 20 years?
Hell, what about the next 50 years??
In the long-term, no one knows what’s going to happen. Some guesses are better than others, but as of today, the only truthful answer anyone can give you is that we just don’t know.
Thanks to theahura for reading a draft of this post and giving some very useful feedback.
1. Karpathy is also not an AI booster: half a year ago, he was openly skeptical about the idea of getting AGI (Artificial General Intelligence) in the next 10 years, though he may have changed his mind since then. Fun fact: he also coined the phrase “vibe coding”!
2. Also see this study from a year ago that found that developers who used AI thought it had made them 20% faster, even though it actually made them 19% slower. You should really keep this in mind when dealing with anecdotal reports. As a counterexample, consider the recent case in which Google DeepMind used AlphaEvolve to make matrix multiplication more efficient (thanks to theahura for bringing this up).
3. I didn’t catch this typo until I posted, but I like it and am gonna keep it.
4. I use this term instead of “continual learning” because I came up with it before Dwarkesh, and because “continual learning” implies a LOT of different capabilities and may itself be composed of jagged skills. This is unfortunately a topic for another time.
5. Note that their initial model predicted ASI by 2027: I think their AI 2027 forecast is still well worth reading, particularly the pre-2027 part, because they were right about a lot of things and were pretty thoughtful and reasonable (even if you think their conclusion is absurd sci-fi). Also, I’m not really addressing boosters in this piece, because if you really think that AI will be good for society, you’re probably not that worried about this anyway.
6. If this sounds interesting to you, also consider reading Common Ground between AI 2027 & AI as Normal Technology. The two camps agree about more than you might think.
7. Both of those links are from Toby Ord, who is generally incredible. You should absolutely read more from him if you’re at all interested in this topic.
8. Thanks to theahura for pointing this out.
9. This may not matter if AIs get better than us at everything, since even their weak aspects will beat even the best humans, but if that isn’t the case, jaggedness will remain extremely important. Another interesting point that I must set to the side for now is that many humans still make a living by playing chess, even though the machines have indisputably been better than us for so long.
10. If you want to see some of Ramanujan’s ridiculousness, check out this one-and-a-half-minute-long video. It’s worth it, I promise. Also, I will probably get into this some other time, but Ramanujan’s genius is another example of jaggedness: he was almost certainly one of the smartest people to ever live, but he also didn’t take over the world, and I kinda doubt that he would’ve been able to do so even if he’d tried. His Wikipedia page is very interesting.
11. I am consistently astounded by the kindergarten calvinball stupidity of memory palaces. “You can only remember 3-5 things!” “Okay, but one of those things contains a MILLION things! Muah ha ha!” Like, seriously?? Why is it that adding information to the thousands of digits of pi by situating them within a memory palace makes it possible to recall them all at once? Why is chunking even a thing, and why can I only remember like 9 random numbers at a time but can memorize all 16 digits of my credit card number by breaking it down into 4 numbers? What is even going on?!?!
12. Note: some are more answerable than others, and I am also not a domain expert in any of these subjects, so please correct me in the comments if I’m wrong about any of these assertions!
13. Save for the name and claim that we actually are in the good timeline, which is overconfident clickbait. Shoutout to Michelle Ma for calling me out way back when.