It's how recursive self-improvement starts out.
First, the combined "AI models + human development teams" system improves through iterative development and evaluation. Then the AI models take on more of the responsibilities: ideation, process streamlining, and architecture optimization. Finally, an AI agent groks enough of the process to take on all of them, and the intelligence explosion takes off from there.
You'd think someone would first try to use AI to automate the production and distribution of necessities, driving the cost of living toward zero, but it seems that was just a naively idealistic dream. Oh well. Still, could someone please get on that?
I'm imagining a case where there's no intelligence explosion per se, just bags-of-heuristics AIs with gradually increasing competence.
But then again, what are human minds but bags of heuristics themselves? And AI can evolve orders of magnitude faster than we can. Handing it the keys to its own bootstrapping will only accelerate that evolution.
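To make the "bag of heuristics" picture concrete, here's a toy sketch (every name and rule in it is made up for illustration, not a claim about any real system): an agent that is literally an ordered list of condition-action rules, with no world model underneath, whose "competence" grows only by accreting more rules.

```python
# Toy sketch: an agent that is nothing but a bag of heuristics.
# "Gradually increasing competence" = appending more rules over time.
from typing import Callable

Rule = tuple[Callable[[dict], bool], Callable[[dict], str]]

class BagOfHeuristics:
    def __init__(self) -> None:
        self.rules: list[Rule] = []

    def add_rule(self, condition: Callable[[dict], bool],
                 action: Callable[[dict], str]) -> None:
        # No understanding is added, just one more special case.
        self.rules.append((condition, action))

    def act(self, observation: dict) -> str:
        # Fire the first heuristic whose condition matches.
        for condition, action in self.rules:
            if condition(observation):
                return action(observation)
        return "do_nothing"  # no heuristic covers this situation

agent = BagOfHeuristics()
agent.add_rule(lambda obs: obs.get("hungry", False), lambda obs: "seek_food")
agent.add_rule(lambda obs: obs.get("threat", False), lambda obs: "flee")

print(agent.act({"hungry": True}))  # -> seek_food
print(agent.act({"threat": True}))  # -> flee
print(agent.act({"bored": True}))   # -> do_nothing (competence gap)
```

The point of the sketch is the failure mode on the last line: outside the accumulated rules, there is nothing there.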
If the future trajectory to AGI is just "systems of LLMs glued together with some fancy heuristics", then maybe a plateau in Transformer capabilities will keep things relatively gradual. But I suspect that we are just a paradigm shift or two away from a Generalized Theory of Intelligence. Just figure out how to do predictive coding of arbitrary systems, combine it with narrative programming and continual learning, and away we go! Or something like that.
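To gesture at what "predictive coding of arbitrary systems" could mean in the simplest possible case, here's a toy sketch (mine, not a real proposal; the function name and constants are invented): a predictor that tracks an arbitrary scalar stream by updating itself in proportion to its own prediction error.

```python
# Toy sketch of error-driven predictive updating: predict, observe,
# correct by a fraction of the error. This is just an exponential
# moving average, the most minimal instance of the idea.

def predictive_coding_loop(stream, learning_rate=0.5):
    """Yield (prediction, error) pairs while tracking a scalar signal."""
    prediction = 0.0
    for observation in stream:
        error = observation - prediction       # prediction error
        prediction += learning_rate * error    # update to reduce it
        yield prediction, error

signal = [5.0] * 20  # a constant signal the predictor must lock onto
steps = list(predictive_coding_loop(signal))
print("first step:", steps[0])   # large error: (2.5, 5.0)
print("last step: ", steps[-1])  # error has shrunk to ~0
```

Scaling that loop from one scalar to "arbitrary systems" is, of course, exactly the paradigm shift being hand-waved at.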
Humans come with reproductive and hunting instincts. You could call that a bag of heuristics, but they are heuristics on a different level than an AI's, and in particular they might not be the ones chosen for transfer to AIs. Furthermore, humans are harder to copy or parallelize than AIs, which gives the two a different privacy profile.
The trouble with intelligence (human, artificial, and evolved alike) is that it's all about regarding the world as an assembly of the familiar. That makes data/experience a major bottleneck for intelligence.
I used to think of AI development as obviously being the last job to be fully automated. After all, AI can be used to automate other jobs, so once AI development itself is automated, all those other jobs can be automated too. But with today's data-hungry methods, it might take a long time for the relevant data to come in, even if the AI is collecting it automatically.
Plus, the AI might need to collaborate with people working in the field it is automating in order to get that data, and if they are scared of their skills being automated away, they may stall for as long as possible.
People working in AI accept the risk of having their own skills automated, and constantly contribute to automating whatever they can. But the field they know best is AI development itself, so AI development is where the automation effort is concentrated.