avturchin

Comments
avturchin's Shortform
avturchin · 5d

LLMs can also be used to generate new ideas, but most of them are garbage. So improving testing (and maybe selection of the most promising ones) will help us find "true AGI", whatever it is, more quickly. We also have enough compute to test most ideas.

But one feature of AGI is much higher computational efficiency. If we get an AGI 1000 times more efficient than current LLMs, we will have a large hardware overhang in the form of many datacenters. Using that overhang could cause an intelligence explosion.
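As a toy illustration of the overhang arithmetic (all numbers below are hypothetical placeholders, not estimates from this comment), a 1000x efficiency gain multiplies the number of instances the existing fleet can run by the same factor:

```python
# Toy hardware-overhang arithmetic; every number here is a hypothetical placeholder.
fleet_flops = 1e21            # assumed total datacenter inference compute, FLOP/s
llm_instance_flops = 1e15     # assumed compute to serve one LLM-level instance, FLOP/s
efficiency_gain = 1000        # AGI assumed to be 1000x more compute-efficient

llm_instances = fleet_flops / llm_instance_flops
agi_instances = llm_instances * efficiency_gain
print(f"LLM-level instances the fleet can serve now: {llm_instances:.0e}")
print(f"AGI-level instances the same fleet could serve: {agi_instances:.0e}")
```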

avturchin's Shortform
avturchin · 5d

They can automate it by quickly searching already-published ideas and quickly writing code to test new ones.

avturchin's Shortform
avturchin · 6d

Interesting tweet arguing that LLMs are not AGI but will provide the instruments for AGI in 2026:
 

"(Low quality opinion post / feel free to skip)

 

Now that AGI isn't cool anymore, I'd like to register the opposing position.

 

- AGI is coming in 2026, more likely than not

 

- LLMs are big memorization/interpolation machines, incapable of doing scientific discoveries and working on OOD concepts efficiently. They're not sufficient for AGI. My prediction stands regardless.

 

- Something akin to GPT-6, while not AGI, will automate human R&D to such extent AGI would quickly follow. Precisely, AGI will happen in, at most, 6 months after the public launch of a model as capable as we'd expect GPT-6 to be.

 

- Not being able to use current AI to speed up any coding work, no matter how OOD it is, is skill issue (no shots fired)

 

- Multiple paths are converging to AGI, quickly, and the only ones who do not see this are these focusing on LLMs specifically, which are, in fact, NOT converging to AGI. Focus on "which capabilities computers are unlocking" and "how much this is augmenting our own productivity", and the relevant feedback loop becomes much clearer."

 

https://x.com/VictorTaelin/status/1979852849384444347

Ramblings on the Self Indication Assumption
avturchin · 7d

If SIA is valid, then the multiverse is real and all possible minds exist.

However, if all possible minds exist, we can't use SIA anymore, as the fact of my existence is no longer evidence for anything.

As a result, SIA is self-defeating: it can only be used to prove the multiverse, but we can also prove that without SIA.

We can use an untypicality argument similar to SIA: the fact that I exist at all is evidence that there were many attempts to create me. Examples: the habitability of Earth implies that there are many non-habitable planets, and the fine-tuning of the Universe implies that there are many other universes with different physical laws.

Note that the untypicality argument is not an assumption – it is a theorem. It also doesn't prove an infinity of my copies, only a very large number of attempts to create me, which is practically similar to infinity and can be stretched to "all possible minds exist" if we add that any sufficiently large mind-generating mechanism can't be stopped: non-habitable planets will continue to appear, as they don't "know" that one of them is now habitable.
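A minimal Bayesian sketch of the untypicality argument (my own illustration; the per-attempt success probability p is a hypothetical parameter): if each of N independent attempts produces me with probability p, then

$$P(\text{I exist} \mid N) = 1 - (1-p)^{N}, \qquad P(N \mid \text{I exist}) \propto \left[1 - (1-p)^{N}\right] P(N).$$

For tiny p the likelihood factor grows with N, so observing that I exist shifts the posterior toward large N; but since it saturates at 1, it never distinguishes "very many" attempts from "infinitely many".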

leogao's Shortform
avturchin · 7d

How can we know that the problem is solved, and that we can now safely proceed?

Meditation is dangerous
avturchin · 9d

We can create a list of simple, obvious pieces of advice which are actually sometimes bad:

Be vegetarian – damage to B12 levels, etc.

Run – damage to knees and risk of falls.

Lose weight – possible loss of muscle.

Be altruistic – an addiction to doing good and neglect of one's personal interests.

 

A New Global Risk: Large Comet’s Impact on Sun Could Cause Fires on Earth
avturchin · 12d

Yes, but somehow a large Kreutz comet recently came close to the Sun, so there should be a mechanism which makes this more likely.

A New Global Risk: Large Comet’s Impact on Sun Could Cause Fires on Earth
avturchin · 12d

Yes, ChatGPT told me that most sun-grazing comets interact with Jupiter first, and only after several cycles of interaction does a comet have a chance to hit the Sun. This is good news, as there will be fewer silent killers.

A New Global Risk: Large Comet’s Impact on Sun Could Cause Fires on Earth
avturchin · 13d

Comets in very remote regions of the Oort cloud have very slow proper motion, around 0.1–1 km per second. I initially thought that they would fall directly into the Sun if perturbed, but AI claims that this will not happen – I need to check this further.
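As a rough sanity check (my own sketch; the 20,000 AU starting distance and the purely tangential initial velocity are assumptions, not numbers from this comment), the two-body perihelion formula is consistent with the AI's claim: the comet only reaches the Sun's surface if its tangential speed is reduced to well under 1 m/s, far below the 0.1–1 km/s range above.

```python
# Back-of-the-envelope perihelion check for an Oort-cloud comet (my own sketch).
# Assumes the comet starts at aphelion: distance r0, purely tangential speed v_t.
import math

GM_SUN = 1.327e20   # m^3/s^2, solar gravitational parameter
AU = 1.496e11       # m
R_SUN = 6.96e8      # m, solar radius

def perihelion_au(r0_au, v_t):
    """Perihelion distance (AU) for a start at r0_au (AU) with tangential speed v_t (m/s)."""
    r0 = r0_au * AU
    energy = 0.5 * v_t**2 - GM_SUN / r0                 # specific orbital energy
    a = -GM_SUN / (2 * energy)                          # semi-major axis
    h = r0 * v_t                                        # specific angular momentum
    e = math.sqrt(max(0.0, 1.0 - h**2 / (GM_SUN * a)))  # eccentricity
    return a * (1.0 - e) / AU

for v_t in (100.0, 10.0, 1.0, 0.1):                     # tangential speed, m/s
    q = perihelion_au(20_000, v_t)                      # assumed distance: 20,000 AU
    verdict = "hits the Sun" if q * AU < R_SUN else "misses the Sun"
    print(f"v_t = {v_t:6.1f} m/s -> perihelion ≈ {q:.2e} AU, {verdict}")
```

For comparison, the circular orbital speed at 20,000 AU is only about 0.2 km/s, so a comet there has to lose essentially all of its angular momentum, not just be mildly perturbed, before it can fall straight into the Sun.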

Radiation in Miyake events can be explained by magnetic flares up to some extent.

Wei Dai's Shortform
avturchin · 17d

Even earlier, there was an idea that one has to rush to create a friendly AI and use it to take over the world in order to prevent other, misaligned AIs from appearing. The problem is that this idea is likely still in the minds of some AI company leaders, and it fuels the AI race.

Posts

1 · Mess AI – deliberate corruption of the training data to prevent superintelligence · 12d · 0
8 · Quantum immortality and AI risk – the fate of a lonely survivor · 13d · 0
58 · A New Global Risk: Large Comet’s Impact on Sun Could Cause Fires on Earth · 14d · 6
16 · Biouploading: Preserving My Living Neurons and Connectome as a Spatially Distributed Mesh · 20d · 0
-3 · Evolution favors the ability to change subjective probabilities in MWI + Experimental test · 2mo · 6
15 · Time Machine as Existential Risk · 4mo · 7
22 · Our Reality: A Simulation Run by a Paperclip Maximizer · 6mo · 65
9 · Experimental testing: can I treat myself as a random sample? · 6mo · 41
10 · The Quantum Mars Teleporter: An Empirical Test Of Personal Identity Theories · 9mo · 18
16 · What would be the IQ and other benchmarks of o3 that uses $1 million worth of compute resources to answer one question? [Q] · 10mo · 2