I think many people experiment with creating digital personas of other people, but with low effort, like just prompting "You are Elon Musk".
I personally often ask an LLM to comment on my drafts as Yudkowsky or other well-known LWers. What such answers lack is the extreme, unique insight that the real EY often provides.
The essence of human genius is missing, and this is exactly why we still don't have AGI.
Also, for a really good EY model we may need more data about his internal thought stream and biographical details, which only he can collect. It seems that he is not interested, and even if he were, it would be time-consuming (though he writes quickly). One thousand pages of unedited thought stream might significantly improve the model.
LLMs can also be used to generate new ideas, but most are garbage. So improving testing (and maybe selection of the most promising ones) will help us find "true AGI", whatever it is, more quickly. We also have enough compute to test most ideas.
But one of AGI's defining features is much higher computational efficiency. If we get an AGI 1000 times more efficient than current LLMs, we will have a large hardware overhang in the form of many datacenters. Using that overhang could cause an intelligence explosion.
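A back-of-the-envelope version of the overhang claim (the symbols $H$ and $C$ are illustrative, not from the original): if $H$ is total datacenter compute (FLOP/s) and $C$ is the cost of running one LLM instance, then

$$\text{instances now} = \frac{H}{C}, \qquad \text{instances after a 1000x efficiency gain} = \frac{H}{C/1000} = 1000\cdot\frac{H}{C}.$$

The same hardware suddenly supports a thousand times more AGI-level instances, with no new datacenters built.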
LLMs can automate this by quickly searching already published ideas and quickly writing code to test new ones.
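A minimal sketch of what such a generate-search-test loop could look like; every function here is a hypothetical stub standing in for a real LLM call, a literature search, and a sandboxed experiment runner, not an actual API:

```python
import random

def llm_generate_ideas(topic: str, n: int) -> list[str]:
    """Stub for an LLM call that proposes n candidate ideas."""
    return [f"{topic}: idea #{i}" for i in range(n)]

def already_published(idea: str) -> bool:
    """Stub for a quick literature search; randomly flags ~half as known."""
    return random.random() < 0.5

def test_idea(idea: str) -> float:
    """Stub for 'write code and run the experiment'; returns a score in [0, 1]."""
    return random.random()

def research_loop(topic: str, n_ideas: int = 100, threshold: float = 0.9) -> list[str]:
    ideas = llm_generate_ideas(topic, n_ideas)
    novel = [i for i in ideas if not already_published(i)]  # drop already-published ideas
    scored = sorted(((test_idea(i), i) for i in novel), reverse=True)
    # Keep only the most promising ideas for human (or stronger-model) review.
    return [idea for score, idea in scored if score >= threshold]

if __name__ == "__main__":
    print(research_loop("AGI architectures"))
```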
Interesting tweet: LLMs are not AGI but will provide instruments for AGI in 2026
"(Low quality opinion post / feel free to skip)
Now that AGI isn't cool anymore, I'd like to register the opposing position.
- AGI is coming in 2026, more likely than not
- LLMs are big memorization/interpolation machines, incapable of doing scientific discoveries and working on OOD concepts efficiently. They're not sufficient for AGI. My prediction stands regardless.
- Something akin to GPT-6, while not AGI, will automate human R&D to such extent AGI would quickly follow. Precisely, AGI will happen in, at most, 6 months after the public launch of a model as capable as we'd expect GPT-6 to be.
- Not being able to use current AI to speed up any coding work, no matter how OOD it is, is skill issue (no shots fired)
- Multiple paths are converging to AGI, quickly, and the only ones who do not see this are these focusing on LLMs specifically, which are, in fact, NOT converging to AGI. Focus on "which capabilities computers are unlocking" and "how much this is augmenting our own productivity", and the relevant feedback loop becomes much clearer."
https://x.com/VictorTaelin/status/1979852849384444347
If SIA is valid, then the multiverse is real and all possible minds exist.
However, if all possible minds exist, we can't use SIA anymore, as the fact of my existence is no longer evidence for anything.
As a result, SIA is self-defeating: it can only be used to prove the multiverse, but we can also prove that without SIA.
We can use an untypicality argument similar to SIA: the fact that I exist at all is evidence that there were many attempts to create me. Examples: the habitability of Earth implies that there are many non-habitable planets, and the fine-tuning of the Universe implies that there are many other universes with different physical laws.
Note that the untypicality argument is not an assumption; it is a theorem. It also doesn't prove an infinity of my copies, only a very large number of attempts to create me, which is practically similar to infinity and can be stretched to "all possible minds exist" if we add that any sufficiently large mind-generating mechanism can't be stopped: non-habitable planets will continue to appear, as they don't "know" that one is now habitable.
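A minimal Bayesian sketch of why this is a theorem (the symbols $p$ and $N$ are illustrative): let $p$ be the per-attempt probability that an attempt produces me, and $N$ the number of attempts. Then

$$P(\text{I exist} \mid N) = 1 - (1-p)^N, \qquad \frac{P(\text{I exist} \mid N)}{P(\text{I exist} \mid N=1)} = \frac{1-(1-p)^N}{p} \approx N \quad \text{when } Np \ll 1.$$

So, by Bayes' theorem alone, observing my own existence shifts posterior weight toward hypotheses with large $N$, without assuming anything extra.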
How can we know that the problem is solved, and that we can now safely proceed?
We can create a list of simple, obvious pieces of advice that are actually sometimes bad:
Be vegetarian – damage to B12 levels, etc.
Run – damage to knees and risk of falls.
Lose weight – possible loss of muscle.
Be altruistic – a damaging addiction to doing good and neglect of one's personal interests.
Yes, but somehow a large Kreutz comet came close to the Sun recently, so there should be a mechanism that makes this more likely.
Yes, ChatGPT told me that most sun-grazing comets interact with Jupiter first, and only after several cycles of interaction does a comet get a chance to hit the Sun. This is good news, as there will be fewer silent killers.
Yes, if MIRI spends a year building as good a model of Yudkowsky as possible, it could help with alignment, and it's a measurable and doable thing. They could later ask that model about the failure modes of other AIs, and it would cry "Misaligned!"