How likely is AGI to force us all to be happy forever? (much like in the novella Three Worlds Collide)
Hi, everyone. I'm not sure if my post is well-written, but I think LW might be the only right place to have this discussion. Feel free to suggest changes. AGI may arrive soon, and it is possible that it will kill us all. This does not bother me that much...
That is a cool idea, and I believed something like it for a while. The problem is that Claude 3.5 invalidated all of that: it does know how to program, it understands things, and it does at least 50% of my work for me. That was not at all the case for previous models.
And all those "LLMs will be just tools until 2030" arguments are not backed by anything; they are based solely on vibes. People said the same about understanding of context, about hallucinations, and about other issues. So far, the only prediction that has held up is that LLMs gain more common sense with scaling, and that is exactly what is needed to crack agency.