They can automate it by quickly searching already published ideas and quickly writing code to test new ideas.
An interesting tweet: LLMs are not AGI but will provide the instruments for AGI in 2026:
"(Low quality opinion post / feel free to skip)
Now that AGI isn't cool anymore, I'd like to register the opposing position.
- AGI is coming in 2026, more likely than not
- LLMs are big memorization/interpolation machines, incapable of doing scientific discoveries and working on OOD concepts efficiently. They're not sufficient for AGI. My prediction stands regardless.
- Something akin to GPT-6, while not AGI, will automate human R&D to such extent AGI would quickly follow. Precisely, AGI will happen in, at most, 6 months after the public launch of a model as capable as we'd expect GPT-6 to be.
- Not being able to use current AI to speed up any coding work, no matter how OOD it is, is skill issue (no shots fired)
- Multiple paths are converging to AGI, quickly, and the only ones who do not see this are these focusing on LLMs specifically, which are, in fact, NOT converging to AGI. Focus on "which capabilities computers are unlocking" and "how much this is augmenting our own productivity", and the relevant feedback loop becomes much clearer."
https://x.com/VictorTaelin/status/1979852849384444347
If SIA (the Self-Indication Assumption) is valid, then the multiverse is real and all possible minds exist.
However, if all possible minds exist, we can't use SIA anymore, as the fact of my existence is no longer evidence for anything.
As a result, SIA is self-defeating: it can only be used to prove the multiverse, but we can also prove that without SIA.
We can instead use an untypicality argument similar to SIA: the fact that I exist at all is evidence that there were many attempts to create me. Examples: the habitability of Earth implies that there are many non-habitable planets, and the fine-tuning of the Universe implies that there are many other universes with different physical laws.
Note that the untypicality argument is not an assumption; it is a theorem. It also doesn't prove that infinitely many copies of me exist, only that there was a very large number of attempts to create me, which is practically similar to infinity and can be stretched to "all possible minds exist" if we add that any sufficiently large mind-generating mechanism can't be stopped: non-habitable planets will continue to appear, as they don't "know" that one planet is already habitable.
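To make the self-defeat concrete, here is a minimal Bayesian sketch of the update SIA licenses (the two hypotheses and the equal priors are my illustrative assumptions, not from the original argument):

```latex
% Two hypotheses with equal priors: H_1 = "one observer exists",
% H_N = "N observers exist". SIA weights each hypothesis by its
% observer count, so
\[
  P(H_N \mid \text{I exist})
    = \frac{N \cdot \tfrac12}{N \cdot \tfrac12 + 1 \cdot \tfrac12}
    = \frac{N}{N+1} \longrightarrow 1 \quad (N \to \infty).
\]
% Once the all-possible-minds hypothesis is near-certain,
% P(I exist | H) = 1 for every surviving hypothesis H, so my
% existence no longer discriminates between them: the self-defeat
% described above.
```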
How can we know that the problem is solved, and that we can now safely proceed?
We can create a list of simple, obvious pieces of advice which are actually sometimes bad:
Be vegetarian – risk of B12 deficiency, etc.
Run – knee damage and risk of falls.
Lose weight – possible loss of muscle.
Be altruistic – a damaging addiction to doing good and neglect of one's personal interests.
Yes, but somehow a large Kreutz comet recently came close to the Sun, so there should be a mechanism which makes this more likely.
Yes, ChatGPT told me that most sungrazing comets interact with Jupiter first, and only after several cycles of interaction does a comet get a chance to hit the Sun. This is good news, as there will be fewer silent killers.
Comets in the most remote regions of the Oort cloud have very slow proper motion, around 0.1-1 km/s. I initially thought that they would fall directly into the Sun if perturbed, but AI claims that this will not happen; I need to check more.
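A quick two-body sanity check supports the AI's claim (a minimal sketch; the 50,000 AU distance is an assumed round number for the outer Oort cloud):

```python
# Does a distant Oort-cloud comet with the ~0.1 km/s tangential
# velocity mentioned above fall straight into the Sun once perturbed?
# Standard two-body orbital elements from a purely tangential state.
import math

GM_SUN = 1.327e20          # m^3/s^2, solar gravitational parameter
AU = 1.496e11              # m
R_SUN = 6.96e8             # m, solar radius

r = 50_000 * AU            # assumed outer Oort cloud distance
v_t = 100.0                # m/s (= 0.1 km/s), tangential velocity

energy = v_t**2 / 2 - GM_SUN / r        # specific orbital energy
a = -GM_SUN / (2 * energy)              # semi-major axis (bound orbit)
h = r * v_t                             # specific angular momentum
e = math.sqrt(1 - h**2 / (GM_SUN * a))  # eccentricity
q = a * (1 - e)                         # perihelion distance

print(f"perihelion ~ {q/AU:,.0f} AU")   # ~20,000 AU, nowhere near the Sun

# Tangential speed needed to actually graze the Sun (q < R_sun, e ~ 1):
v_hit = math.sqrt(2 * GM_SUN * R_SUN) / r
print(f"need v_t below ~{v_hit*100:.0f} cm/s to hit the Sun")
```

Even a 0.1 km/s residual tangential velocity leaves the perihelion tens of thousands of AU out; to fall directly into the Sun, the comet's tangential velocity would have to drop below a few cm/s, which a random perturbation will essentially never produce.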
Radiation in Miyake events can be explained by magnetic flares up to some extent.
Even earlier, there was an idea that one has to rush to create a friendly AI and use it to take over the world in order to prevent the appearance of other, misaligned AIs. The problem is that this idea is likely still in the minds of some AI company leaders, and it fuels the AI race.
LLMs can also be used to generate new ideas, but most are garbage. So improving testing (and maybe selection of the most promising ones) will help us find "true AGI", whatever it is, more quickly. We also have enough compute to test most ideas.
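A hypothetical sketch of that generate-select-test pipeline (every function and number here is a made-up stub for illustration, not a real API):

```python
import random

def generate_ideas(n: int) -> list[str]:
    # Stand-in for an LLM proposing research ideas (most are garbage).
    return [f"idea-{i}" for i in range(n)]

def promise_score(idea: str) -> float:
    # Stand-in for a cheap LLM-based filter over ideas.
    return random.random()

def run_experiment(idea: str) -> bool:
    # Stand-in for auto-written code testing the idea on real compute.
    return random.random() > 0.99

ideas = generate_ideas(10_000)
shortlist = sorted(ideas, key=promise_score, reverse=True)[:100]
survivors = [i for i in shortlist if run_experiment(i)]
print(f"{len(survivors)} ideas survived testing")
```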
But one feature of AGI would be much higher computational efficiency. If we get an AGI 1000 times more efficient than current LLMs, we will have a large hardware overhang in the form of many datacenters. Using that overhang could cause an intelligence explosion.
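A toy version of the overhang arithmetic (both inputs are hypothetical placeholders; only the 1000x factor comes from the text above):

```python
# Toy overhang arithmetic; the instance count is a hypothetical
# placeholder, only the 1000x efficiency factor comes from the text.
llm_instances_today = 1_000_000   # assumed concurrent LLM instances worldwide
efficiency_gain = 1000            # AGI assumed 1000x more compute-efficient

agi_instances = llm_instances_today * efficiency_gain
print(f"the same datacenters could run ~{agi_instances:,} AGI instances")
```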