What mechanism do you see for ASI being likely to destroy itself? Intuitively, I'd expect an ASI to be able to avoid any suicide pact technologies/actions, both because it'll have better judgment than humans and because it (presumably) won't face the competitive pressures between groups of humans that incentivize rushing risky technologies. Unless I'm missing such a mechanism, it strikes me that, if ASI happens, it probably colonizes the galaxy and beyond. (Hence never reaching ASI being a way to resolve the Fermi Paradox.)
I'm curious why you suspect that intelligence will prevent the spiral into a repetitive conversation. In humans, the correlation between intelligence and not fixating on particular topics isn't that strong, if it exists at all (many smart people have narrow interests they prefer to discuss). Also, the suspected cause of models entering the spiral is their safety/diversity RL, which isn't obviously related to their capability.
Just from seeing narrow benchmarks saturate, one could argue that LLMs are merely picking up whatever narrow capabilities are in focus enough to be trained into them. (I emphatically do not think this is what's happening in 2025, but narrow benchmark scores alone aren't enough to show that.) A well-designed intelligence benchmark, by contrast, would be impossible to score well into the human range without the ability to do novel (and thereby general) problem-solving, and impossible to saturate without the ability to do so at above-genius level.
As for the question of whether it'd persuade people with their heads in the sand, "x model is smarter than some-high-percent of people" is a lot harder to ignore than "x model scored some-high-numbers on a bunch of coding, knowledge, etc. benchmarks". Usefulness aside, reporting model scores relative to people (or, in some situations, subject matter experts) is also more confronting. That said, I don't doubt that there are many people who wouldn't be persuaded by even that.
I appreciate it.
This scenario doesn't predict that LLMs can't be AGI. As depicted, the idea is that something based on LLMs (with CoT, memory, tool use, etc.) reaches strong AGI (able to do anything most people can do on a computer), but tops out at the intelligence of a mid-to-high-90th-percentile person. Indeed, I'd argue that current SOTA systems should basically be considered very weak/baby AGI (with a bunch of non-intelligence-related limitations).
The limitation depicted here, which I think is plausible but far from certain, is that high intelligence requires a massive amount of compute that models don't have access to. If this limit exists, there's more cause to suspect it's a fundamental one than a limitation specific to LLM-based models. In the scenario, researchers try non-LLM-based AIs too, but run up against the same fundamental limits, which make ASI impossible without far more compute than is feasible.