Nition

Comments, sorted by newest
New Improved Lottery
Nition · 5d · 10

Reading this post today feels like a prophecy of the rise of reddit.com/r/wallstreetbets

Foom & Doom 1: “Brain in a box in a basement”
Nition · 6d · 10

I suspect this is why many people's P(Doom) is still under 50% - not so much that ASI probably won't destroy us, but simply that we won't get to ASI at all any time soon. That said, I've seen P(Doom) given a standard time range of the next 100 years, which is a rather long time! I still suspect some people are thinking only about the near future and LLMs, without extrapolating much beyond that.

Nition's Shortform
Nition · 11d · 10

Thanks! I hadn't read that one before. It makes a good point that predicting what any specific person might say requires more intelligence than that person themselves has. Having said that, I'm not convinced that a model trained on human text being superintelligent at predicting human text necessarily means it can break out above human-level thinking.

If we discovered an intelligent alien species tomorrow, would we expect LLMs to be able to predict their next word? I'm fairly confident that the answer is "only if they thought very much like we do, just in a different language." Similarly, my suspicion is that a what-would-a-human-say predictor can never be a what-would-a-superintelligence-say predictor - or at least, only a predictor of what a human thinks a superintelligence would say.

Nition's Shortform
Nition · 12d · 10

I have a general prediction (~60%?) that current-style LLMs, being inherently predictors of what a human would say, will eventually plateau at a roughly human level of ability to think and reason: breadth of knowledge clearly beyond human, but intelligence not far above, and creativity maybe below. AI companies are predicting that next-gen LLMs will provide new insights and solve unsolved problems. But genuine insight seems to require an ability to internally regenerate concepts from lower-level primitives (as mentioned in Yudkowsky's "Truly Part Of You"). An AI that took in data and learned to understand from raw inputs the way a human brain does might be able to keep advancing beyond human capacity for thought. I'm not sure that a contemporary LLM, working directly on existing human knowledge as it does, will ever be able to do that. Maybe I'll be proven wrong soon.
