avturchin

Comments
Why there is still one instance of Eliezer Yudkowsky?
avturchin · 5h

Yes, if MIRI spent a year building as good a model of Yudkowsky as possible, it could help with alignment, and it is a measurable and doable thing. They could later ask that model about the failure modes of other AIs, and it would cry "Misaligned!"

Why there is still one instance of Eliezer Yudkowsky?
Answer by avturchin · Oct 30, 2025

I think many people experiment with creating different digital personas, but with low effort, like "You are Elon Musk".

I personally often ask an LLM to comment on my drafts as Yudkowsky or other well-known LWers. What such answers lack is the extreme, unique insight that is typical of the real EY.

The essence of human genius is missing, and this is exactly why we still don't have AGI.

Also, for a really good EY model, we may need more data about his internal thought stream and biographical details, which only he can collect. It seems that he is not interested, and even if he were, it would be time-consuming (though he writes quickly). One thousand pages of unedited thought stream might significantly improve the model.
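
For illustration, a minimal sketch of this kind of persona prompting, assuming the openai Python client; the model name and prompt wording are placeholders, not a recommended setup:

```python
# Sketch of persona-based draft review; assumes the openai client
# and an OPENAI_API_KEY in the environment. Model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def comment_as(persona: str, draft: str) -> str:
    """Ask the model to critique a draft in the voice of a named persona."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"You are {persona}. Comment on the draft below "
                        "in your characteristic style, noting flaws and "
                        "failure modes you would expect."},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

print(comment_as("Eliezer Yudkowsky", "<draft text here>"))
```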

avturchin's Shortform
avturchin · 6d

LLMs can also be used to generate new ideas, but most of them are garbage. So improving testing (and maybe the selection of the most promising ones) will help us find "true AGI", whatever it is, more quickly. We also have enough compute to test most ideas.

But one feature of AGI is much higher computational efficiency. If we get an AGI 1000 times more efficient than current LLMs, we will have a large hardware overhang in the form of many datacenters. Using that overhang could cause an intelligence explosion.
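
As a back-of-envelope illustration of that overhang (every number below is a hypothetical placeholder):

```python
# Hypothetical overhang arithmetic: if AGI needs 1000x less compute
# per instance than an LLM, hardware sized for N LLM instances can
# run 1000 * N AGI instances. All numbers are placeholders.
llm_instances_today = 1_000_000   # assumed concurrent LLM-scale workloads
efficiency_gain = 1_000           # assumed AGI vs. LLM compute efficiency

agi_instances = llm_instances_today * efficiency_gain
print(f"~{agi_instances:,} AGI instances on today's datacenters")
```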

avturchin's Shortform
avturchin · 7d

They can automate it by quickly searching already published ideas and quickly writing code to test new ones.
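
A minimal sketch of the "search already published ideas" step, using the public arXiv export API (the query is just an example):

```python
# Sketch: check arXiv for prior work on an idea before coding a test.
# Uses the public arXiv Atom export API; the query is an example.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

def search_arxiv(query: str, max_results: int = 5) -> list[str]:
    """Return titles of the top arXiv hits for a free-text query."""
    params = urllib.parse.urlencode(
        {"search_query": f"all:{query}", "max_results": max_results})
    url = f"http://export.arxiv.org/api/query?{params}"
    with urllib.request.urlopen(url) as resp:
        feed = ET.fromstring(resp.read())
    ns = {"atom": "http://www.w3.org/2005/Atom"}
    return [entry.findtext("atom:title", namespaces=ns).strip()
            for entry in feed.findall("atom:entry", ns)]

print(search_arxiv("sparse attention efficiency"))
```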

avturchin's Shortform
avturchin · 7d

Interesting tweet: LLMs are not AGI, but will provide instruments for AGI in 2026:

"(Low quality opinion post / feel free to skip)

Now that AGI isn't cool anymore, I'd like to register the opposing position.

- AGI is coming in 2026, more likely than not

- LLMs are big memorization/interpolation machines, incapable of doing scientific discoveries and working on OOD concepts efficiently. They're not sufficient for AGI. My prediction stands regardless.

- Something akin to GPT-6, while not AGI, will automate human R&D to such extent AGI would quickly follow. Precisely, AGI will happen in, at most, 6 months after the public launch of a model as capable as we'd expect GPT-6 to be.

- Not being able to use current AI to speed up any coding work, no matter how OOD it is, is skill issue (no shots fired)

- Multiple paths are converging to AGI, quickly, and the only ones who do not see this are these focusing on LLMs specifically, which are, in fact, NOT converging to AGI. Focus on "which capabilities computers are unlocking" and "how much this is augmenting our own productivity", and the relevant feedback loop becomes much clearer."

https://x.com/VictorTaelin/status/1979852849384444347

Ramblings on the Self Indication Assumption
avturchin · 8d

If SIA is valid, then the multiverse is real and all possible minds exist.

However, if all possible minds exist, we can't use SIA anymore, as the fact of my existence is no longer evidence for anything.

As a result, SIA is self-defeating: it can be used only to prove the multiverse, but we can also prove the multiverse without SIA.

We can use an untypicality argument similar to SIA: the fact that I exist at all is evidence that there were many attempts to create me. Examples: the habitability of Earth implies that there are many non-habitable planets, and the fine-tuning of the Universe implies that there are many other universes with different physical laws.

Note that the untypicality argument is not an assumption: it is a theorem. It also doesn't prove an infinity of my copies, only a very large number of attempts to create me, which is practically similar to infinity and can be stretched to "all possible minds exist" if we add that any sufficiently large mind-generating mechanism can't be stopped: non-habitable planets will continue to appear, as they don't "know" that one is now habitable.
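
A toy Bayesian calculation may make the self-defeating structure concrete (all numbers below are hypothetical): SIA weights each hypothesis by how many observers it contains, so my existence favors the multiverse; but once both hypotheses already contain all possible minds, the observer counts are equal and the update vanishes.

```python
# Toy SIA update; the priors and observer counts are hypothetical.
def sia_posterior(prior_a, observers_a, prior_b, observers_b):
    """SIA: weight each hypothesis by its number of observers,
    then renormalize. Returns the posterior of hypothesis A."""
    w_a = prior_a * observers_a
    w_b = prior_b * observers_b
    return w_a / (w_a + w_b)

# A = multiverse (10**9 observers) vs. B = single universe (1 observer):
# my existence strongly favors the multiverse.
print(sia_posterior(0.5, 10**9, 0.5, 1))      # ~1.0

# But if both hypotheses already contain all possible minds,
# the counts cancel and my existence tells me nothing.
print(sia_posterior(0.5, 10**9, 0.5, 10**9))  # 0.5
```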

leogao's Shortform
avturchin · 8d

How can we know that the problem is solved, and that we can now safely proceed?

Meditation is dangerous
avturchin · 11d

We can create a list of simple, obvious pieces of advice that are actually sometimes bad:

Be vegetarian – damage to B12 levels, etc.

Run – damage to knees and risk of falls

Lose weight – possible loss of muscle

Be altruistic – a damaging addiction to doing good, and neglect of personal interests

A New Global Risk: Large Comet’s Impact on Sun Could Cause Fires on Earth
avturchin · 13d

Yes, but somehow a large Kreutz comet recently came close to the Sun, so there should be a mechanism that makes this more likely.

A New Global Risk: Large Comet’s Impact on Sun Could Cause Fires on Earth
avturchin · 14d

Yes, ChatGPT told me that most sungrazing comets interact with Jupiter first, and only after several cycles of interaction does a comet have a chance to hit the Sun. This is good news, as there will be fewer silent killers.

Posts

Mess AI – deliberate corruption of the training data to prevent superintelligence (13d)
Quantum immortality and AI risk – the fate of a lonely survivor (14d)
A New Global Risk: Large Comet’s Impact on Sun Could Cause Fires on Earth (15d)
Biouploading: Preserving My Living Neurons and Connectome as a Spatially Distributed Mesh (21d)
Evolution favors the ability to change subjective probabilities in MWI + Experimental test (2mo)
Time Machine as Existential Risk (4mo)
Our Reality: A Simulation Run by a Paperclip Maximizer (6mo)
Experimental testing: can I treat myself as a random sample? (6mo)
The Quantum Mars Teleporter: An Empirical Test Of Personal Identity Theories (9mo)
What would be the IQ and other benchmarks of o3 that uses $1 million worth of compute resources to answer one question? [Question] (10mo)