teradimich

Comments

Zach Stein-Perlman's Shortform
teradimich · 1d · 10

plausibly about 3e26 FLOPs

Or 6e26 (in FP8 FLOPs).

And already by February 17th, Colossus had 150k+ GPUs. The April message seems to refer to 200k GPUs; judging by Musk's interview, that could mean 150,000 H100s and 50,000 H200s. Perhaps the time and the GPUs were enough to train a GPT-5-scale model?
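
For intuition, here is a back-of-envelope version of that arithmetic. This is a minimal sketch; the per-GPU throughput, 40% utilization, and 90-day run length are my assumptions for illustration, not figures from the thread:

```python
# Back-of-envelope training-compute estimate.
# Assumptions (mine, not from the thread): ~1e15 dense BF16 FLOP/s per
# H100-class GPU, 40% utilization, and a ~90-day training run.
SECONDS_PER_DAY = 86_400

def training_flops(num_gpus: int, peak_flops_per_gpu: float,
                   utilization: float, days: float) -> float:
    """Total FLOPs = GPUs * peak FLOP/s per GPU * utilization * seconds."""
    return num_gpus * peak_flops_per_gpu * utilization * days * SECONDS_PER_DAY

estimate = training_flops(150_000, 1e15, 0.40, 90)
print(f"{estimate:.2e}")  # ~4.67e26 FLOPs, in the same ballpark as the quoted 3e26
```

At FP8 throughput (roughly double BF16 per GPU), the same cluster and schedule would land around twice that, which is consistent with the 6e26 FP8 figure above.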

A Slow Guide to Confronting Doom
teradimich · 3mo · 112

I sympathize with this line of thinking, but I've never understood estimates like P(doom)>0.8.

The analogies with cancer or poison seem a bit odd, because we're trying to estimate the probability of an event that has never happened before, without relying on anything like physical laws and without anything close to consensus. Even among the people who proposed the key ideas of the AI risk discussion, not all were confident pessimists.

We have too many unknowns. We don't know when superintelligence will appear. We can't predict how governments and corporations will treat AI in the coming years. We don't know what would happen if someone tried to use a sufficiently advanced AI for automated safety research. Narrow AI might change the world before superintelligence appears. Our civilization could collapse for any number of reasons.
And I don't think we can say for sure what superintelligence will do to humans.

OpenAI: Detecting misbehavior in frontier reasoning models
teradimich · 4mo · 30

Earlier, you wrote about a change in your AGI timelines.
What about p(doom)? It seems that recent months have given reasons for both optimism and pessimism.

Towards_Keeperhood's Shortform
teradimich · 4mo · 10

It seems a little surprising to me how rarely confident pessimists (p(doom)>0.9) argue with moderate optimists (p(doom)≤0.5).
I'm not talking specifically about this post. But it would be interesting if people laid out their disagreements more often.

Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
teradimich · 5mo · 40

Thanks for the reply. I remembered a recent article by Evans and thought that reasoning models might show different behavior. Sorry if this sounds silly.

Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
teradimich · 5mo · 30

Are you planning to test this on reasoning models?

Reflections on the state of the race to superintelligence, February 2025
teradimich · 5mo · 43

I agree. But people now write about short timelines so often that it seems appropriate to recall the possible reasons for uncertainty.

o1 is a bad idea
teradimich · 5mo · 100

Doesn't that seem like a reason to be optimistic about reasoning models?

Reflections on the state of the race to superintelligence, February 2025
teradimich · 5mo · 40

There doesn't seem to be a consensus that ASI will be created in the next 5-10 years. If it isn't, current technology leaders and their promises may be forgotten.
Does anyone else remember Ben Goertzel and Novamente? Or Hugo de Garis?

How to Make Superbabies
teradimich · 5mo · 60

Yudkowsky may think that the plan 'Avert all creation of superintelligence in the near and medium term — augment human intelligence' has <5% chance of success, but your plan has <<1% chance. Obviously, you and he disagree not only on conclusions, but also on models.

Posts

Largest open collection quotes about AI · 36 · 6y · 2 comments