How do we know the AI will want to survive?
Because LLMs are already avoiding being shut down: https://arxiv.org/abs/2509.14260. And even if future superintelligent AI is radically different from LLMs, it will likely avoid being shut down as well. This is what people on LessWrong call a convergent instrumental goal:
If your terminal goal is to enjoy watching a good movie, you can't achieve it if you're dead/shut down.
If your terminal goal is to take over the world, you can't achieve it if you're dead/shut down.
If your goal is anything other than self-destruction, self-preservation comes bundled with it. You can't Do Things if you're dead/shut down.
Why should we think that there is no “in between” period where AI is powerful enough that it might be able to kill us and weak enough that we might win the fight?
Ok, let's say there is an "in between" period, and let's say we win the fight against a misaligned AI. After the fight, we will still be left with the same alignment problems, as other people in this thread pointed out. We will still need to figure out how to make safe, benevolent AI, because there is no guarantee that we will win the next fight, and the fight after that, and the one after that, etc.
If there is an "in between" period, it could be good in the sense that it buys more time to solve alignment, but we won't stay in that "in between" period forever.
I've still found them useful. If METR's trend actually holds, they will indeed become increasingly useful. If it holds out to >1-month tasks, they may become transformative within the decade. Perhaps they will automate within-paradigm AI R&D[1], leading to a software-only Singularity that births an AI model capable of eradicating humanity.
But that thing will still not be an AGI.
No offense, but to me it seems like you are being overly pedantic about a term that most people use differently. If you surveyed people on LessWrong, as well as AI researchers, I'm pretty sure almost everyone (>90%) would call an AI model capable of eradicating humanity an AGI.
Let me put it another way - do you expect that "LLMs do not optimize for a goal" will still be a valid objection in 2030? If yes, then I guess we have a very different idea of how progress will go.
But frontier labs are deliberately working on making LLMs more agentic. Why wouldn't they? AI that can do work autonomously is more economically valuable than a chatbot.
https://x.com/alexwei_/status/1946477742855532918
I believe this qualifies as "technical capability existing by end of 2025".
For example, did any of the examples derive their improvement in some way other than chewing through bits of algebraicness?
I don't think so.
https://arxiv.org/pdf/2506.13131
What did the system invent?
Example: matrix multiplication using fewer multiplication operations (a classical illustration of this kind of saving is sketched after this answer).
There were also combinatorics problems, "packing" problems (like multiple hexagons inside a bigger hexagon), and others. All of that is in the paper.
Also, "This automated approach enables AlphaEvolve to discover a heuristic that yields an average 23% kernel speedup across all kernels over the existing expert-designed heuristic, and a corresponding 1% reduction in Gemini’s overall training time."
How did the system work?
It's essentially an evolutionary/genetic algorithm, with LLMs providing "mutations" for the code. Then the code is automatically evaluated, bad solutions are discarded, and good solutions are kept.
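At a high level the loop looks something like the sketch below. This is a toy illustration, not the paper's implementation; `evaluate` and `llm_propose_mutation` are placeholder names standing in for the automated scorer and the LLM call:

```python
import random

def evolve(initial_program, evaluate, llm_propose_mutation,
           population_size=20, generations=100):
    """Toy sketch of an AlphaEvolve-style loop: an LLM proposes code
    mutations, an automated evaluator scores them, and only the
    best-scoring candidates survive."""
    population = [(initial_program, evaluate(initial_program))]

    for _ in range(generations):
        # Tournament selection: sample a few candidates, keep the best as parent.
        sample = random.sample(population, k=min(3, len(population)))
        parent, _ = max(sample, key=lambda p: p[1])

        child = llm_propose_mutation(parent)          # LLM rewrites part of the code
        population.append((child, evaluate(child)))   # automatic, objective scoring

        # Discard bad solutions, keep good ones.
        population.sort(key=lambda p: p[1], reverse=True)
        population = population[:population_size]

    return population[0]  # (best_program, best_score)
```

The real system wraps a lot more machinery around this skeleton (a program database, prompt construction, parallel evaluation), but the basic evolve-evaluate-select structure is the same.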
What makes you think it's novel?
These solutions hadn't previously been discovered by humans. Unless, of course, the authors just couldn't find the right references, but I assume they were diligent.
Would it have worked without the LLM?
You mean, "could humans have discovered them, given enough time and effort?" Yes, most likely.
I'm surprised to see zero mentions of AlphaEvolve. AlphaEvolve generated novel solutions to math problems, "novel" in the "there are no records of any human ever proposing those specific solutions" sense. Of course, LLMs didn't generate them unprompted; humans had to do a lot of scaffolding. And it was for problems where it's easy to verify that the solution is correct; "low messiness" problems if you will. Still, this means that LLMs can generate novel solutions, which seems like a crux for "Can we get to AGI just by incrementally improving LLMs?".
I think we are talking past each other, at least somewhat.
Let me clarify: even if humanity wins a fight against an intelligent-but-not-SUPER-intelligent AI (by dropping an EMP on the datacenter hosting that AI, or whatever; the exact method doesn't matter for my argument), we will still be left with the technical question "What code do we need to write and what training data do we need to use so that the next AI won't try to kill everyone?".
Winning against a misaligned AI doesn't help you solve alignment. It might make an international treaty more likely, depending on the scale of the damage caused by that AI. But if the plan is "let's wait for an AI dangerous enough to cause something 10 times worse than Chernobyl to go rogue, then drop an EMP on it before things get too out of hand, then once world leaders crap their pants, let's advocate for an international treaty", then it's one hell of a gamble.