Keep in mind that propagandizing it is also an easy way to produce political polarization.
How has nuclear non-proliferation been a success?
Short of an outcome so bad it would stop us from even pondering this question, we've gotten dangerously close to nuclear exchanges multiple times, and several rogue states either have nukes or use how close they are to one as a bargaining tool.
AI 2027's timelines got more pushback than was warranted. The superhuman-coder stuff at least vaguely seems on track: most code at the frontier of usage (i.e. gpt-5-codex) is generated by AI agents.
There is more to coding than just writing the code itself, but the AI 2027 website has AI coding just at the level of human pros by Dec 2025. Seems like we're well on the way to that.
AI progress can be rapid, but the pathway to it may involve different capability unlocks. For example, you might automate work more broadly and then reinvest that into more compute (or automate chipmaking itself). Or you can get the same unlocks without rapid progress: for example, you get a superhuman coder but run into different bottlenecks.
I think it's pretty obvious AI progress won't completely stall out, so I don't think that's the prediction you're making? It's one thing to say AI progress won't be rapid and then give a specific story as to why. Later, if you hit most of your marks, it'll look like a much more valuable prediction than simply saying it won't be rapid. (The same applies to AI 2027.)
The authors of AI 2027 laid out a pretty specific story before the release of ChatGPT and looked really prescient after the fact, since it turned out to be mostly accurate.
I just don't think there is much to this prediction.
It takes a set of specific predictions, says none of them will happen, and, by the nature of a conjunctive prediction, most of them indeed will not happen. It would be more interesting to hear how AI will and will not progress rather than just denying a prediction that was already unlikely to be perfect.
Inevitably they'll be wrong on some of these, but they'll look more right on the surface because they'll be right on most of them.
It seems like basically everything in this is already true today. Not sure what you’re predicting here.
The author also seems not to realize that OpenAI's costs are mostly unrelated to its inference costs?
I think the extra effort required to go from an algorithmically correct solution to a holistically qualifying one scales linearly with task difficulty. Dense reward-model scaling on hard-to-verify tasks seems to have cracked this. DeepMind's polished, holistically passing IMO solutions probably required the same order of magnitude of compute/effort as OpenAI's technically correct but less polished IMO solutions (they used similar levels of models, compute, and time to get their respective results).
So while this will shift timelines, it is something that will fall to scale and thus shouldn't shift them too much.
I predict this will go away once these methods make their way into commercial models, i.e. in roughly a year. I'll check back in 2026 to see if I'm wrong.
I think AI doomers as a whole lose some amount of credibility if timelines end up being longer than they project. Even if doomers technically hedge a lot, the most attention-grabbing part to outsiders is the short timelines plus the intentionally alarmist narrative, so they're ultimately associated with those.
War is not the only potential response. I don't know why it's being framed as the normal one when a normal treaty would have something like sanctions as a response.