Larifaringer
Comments

≤10-year Timelines Remain Unlikely Despite DeepSeek and o3
Larifaringer · 7mo · 45

Even if current LLM architectures cannot be scaled up or incrementally improved to achieve human-level intelligence, it is still possible that one or more additional breakthroughs will happen in the next few years that allow an explicit thought-assessor module, just as the transformer architecture surprised everybody with its efficacy. So much money and so many human resources are being thrown at the pursuit of AGI nowadays that we cannot be confident it will take 10 years or longer.

Wild Animal Suffering Is The Worst Thing In The World
Larifaringer · 7mo · 32

It seems like there’s a conceptual leap from "pain is intrinsically bad for wild animals" to "wild animal suffering is a problem that humans should address." I don’t see a clear argument for why we, as humans, are morally implicated. "Pain is bad for wild animals" doesn’t imply "the pain of wild animals is bad for humans."
