I just don't think there is much to this prediction.
It takes a set of specific predictions, says none of them will happen, and, by the nature of a conjunctive prediction, most indeed will not happen. It would be more interesting to hear how AI will and will not progress rather than just denying a prediction that was already unlikely to be perfect.
Inevitably they'll be wrong on some of these, but on the surface they'll look mostly right because they'll be right about most of them.
It seems like basically everything in this is already true today. Not sure what you’re predicting here.
The author also seems not to realize that OpenAI's costs are mostly unrelated to inference?
I think the extra effort required to go from algorithmically correct to holistically qualifying scales linearly with task difficulty. Dense reward model scaling on hard-to-verify tasks seems to have cracked this (a rough sketch of the distinction follows below). DeepMind's polished, holistically passing IMO solutions probably required the same order of magnitude of compute/effort as the technically correct but less polished OpenAI IMO solutions. (They used similar levels of models, compute, and time to get their respective results.)
So while this will shift timelines, it is something that will fall to scale and thus shouldn't shift them too much.
I predict this will go away once these methods make their way into commercial models, within roughly a year. I'll check back in 2026 to see if I'm wrong.
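To make the algorithmic-vs-holistic distinction concrete, here's a minimal sketch of a sparse (final-answer-only) reward versus a dense (per-step) one. `toy_scorer` is a hypothetical stand-in for a learned reward model; nothing here reflects DeepMind's or OpenAI's actual pipelines.

```python
# Sparse vs. dense reward on a hard-to-verify task. Purely illustrative:
# the scorer is a made-up stand-in for a learned dense reward model.

def sparse_reward(final_answer, correct_answer):
    """Only the end state is checked: 1 if the answer verifies, else 0."""
    return 1.0 if final_answer == correct_answer else 0.0

def dense_reward(solution_steps, score_step):
    """Every intermediate step is scored, so polish (clarity, rigor)
    earns reward even when the final answer would already verify."""
    if not solution_steps:
        return 0.0
    return sum(score_step(step) for step in solution_steps) / len(solution_steps)

def toy_scorer(step):
    # Pretend a reward model rates each step's rigor/clarity on [0, 1].
    return step["clarity"]

rough    = [{"clarity": 0.4}, {"clarity": 0.5}, {"clarity": 0.9}]
polished = [{"clarity": 0.9}, {"clarity": 0.9}, {"clarity": 0.9}]

print(dense_reward(rough, toy_scorer))     # ~0.6
print(dense_reward(polished, toy_scorer))  # ~0.9
# Under the sparse signal both solutions score 1.0 if the final answer
# checks out; only the dense signal separates "holistically" good output.
```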
I think AI doomers as a whole lose some amount of credibility if timelines end up being longer than they project. Even if doomers technically hedge a lot, the most attention-grabbing part to outsiders is the short timelines plus the intentionally alarmist narrative, so they're ultimately associated with them.
It seems Gemini was ahead of OpenAI on the IMO gold. The output was more polished, so presumably they achieved a gold-worthy model earlier. I thus expect Gemini's SWE-bench score to be at least ahead of OpenAI's 75%.
Afaict this case has been generally good for the industry but especially bad for Anthropic.
Edit: overall a win; you can use books in training, you just can't use pirated books.
Progress-wise this seems accurate, but the usefulness gap is probably larger than the one this paints.
The right has always been vaguely anti-semitic. What's new is that the left is now also vaguely anti-semitic, making it more normalized overall.
AI progress can be rapid, but the pathway to it may involve different capability unlocks. For example, it may be that you automate work more broadly and then reinvest that into more compute (or into automating chipmaking itself), as in the toy sketch below. Or you can get the same unlocks without rapid progress: for example, you get a superhuman coder but run into different bottlenecks.
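As a toy illustration of that reinvestment pathway (every constant is invented; this is not a forecast), a few lines of simulation show how plowing automation gains back into compute compounds capability growth, and how a bottleneck can cap the loop even with a strong coder:

```python
# Toy model of the reinvest-automation-into-compute loop described above.
# All numbers are made up; this only illustrates the shape of the argument.

def simulate(years=10, reinvest=True, growth_per_compute=0.05,
             automation_gain=0.5, bottleneck_cap=None):
    capability, compute = 1.0, 1.0
    for _ in range(years):
        capability *= 1 + growth_per_compute * compute
        if bottleneck_cap is not None:
            # e.g. a superhuman coder that still waits on chips or data.
            capability = min(capability, bottleneck_cap)
        if reinvest:
            # Automated labor frees resources that buy more compute.
            compute += automation_gain * capability
    return capability

print(simulate(reinvest=True))                    # feedback loop compounds
print(simulate(reinvest=False))                   # plain ~1.05x/year growth
print(simulate(reinvest=True, bottleneck_cap=3))  # loop capped by a bottleneck
```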
I think it's pretty obvious AI progress won't completely stall out, so I don't think that's the prediction you're making? It's one thing to simply say AI progress won't be rapid; it's another to give a specific story as to why. If you later hit most of your marks, that will look like a much more valuable prediction than a bare "it won't be rapid." (The same applies to AI 2027.)
The authors of AI 2027 wrote a pretty specific story before the release of ChatGPT and looked really prescient after the fact, since it turned out to be mostly accurate.