As someone who is famously bearish on LLMs, I find this paper completely unconvincing as a reason to be bearish on LLMs.
Every few months, we get both a "this paper shows fundamental LLM limitations!!!" and a "this paper shows LLMs producing innovations!!!", and both always turn out to be slop. Tiresome.
If you wanted to make a sensible bear case, it would have to be the fact that LLMs weren't able to do the tasks because their short-term memory was too full/the context was too long, and amnesia/lacking a long-term memory is a huge reason LLMs simply aren't used to automate stuff (the other problem being continual learning).
It's best shown in the Claude Plays Pokemon benchmark (at least without cheating, where I define cheating as creating game-specific elements to help it win rather than having the AI develop them itself), where a lot of failures come down to Claude having to relearn strategies it had already developed, or looping dozens of times in situations where a human would have stored the experience in memory and developed a counter-strategy far quicker.
Yeah, but this case isn't even an interesting kind of memory failure. It's just running up against a hard barrier. I'm not even convinced they wouldn't be able to do this paper's tasks if you just gave them a basic memory scaffold, like Claude playing Pokemon has.
This "failure" would be demonstrated just as well by asking them to multiply two 10^4-digit numbe–
Wait, that literally was the previous "LLMs can't follow algorithms" fad! And like with that one, I expect that part of the reason LLMs fail at the new paper's tasks is because their training environments never incentivized them to do tons of boring menial operations, and to instead look for clever workarounds.
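For scale, that earlier fad ran into the same wall. A rough count (my own illustration, nothing from either paper): schoolbook multiplication of two n-digit numbers takes on the order of n² single-digit operations, each of which the model would have to write out token by token.

```python
# Back-of-the-envelope: why multiplying two 10^4-digit numbers is an output-length wall,
# not a reasoning test. Schoolbook multiplication needs roughly n^2 single-digit
# multiplications (plus carries), each written out token by token.
n = 10_000
print(f"~{n * n:,} single-digit multiplications")  # ~100,000,000, far beyond any context window
```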
I feel like doing "tons of boring menial operations" is what many humans (including the ones bearish on AI replacing workforce) expect these things to be able to do and at least part of the reason why industries invest in it.
I also feel like "look for clever workarounds" is the type of thing that many skeptics fear will lead to undesirable outcomes wrt AI.
Agreed on the big picture, but I was somewhat surprised to see top models struggling with River Crossing (for which the output length limit has less bite). I was able to solve N=3 River Crossing by hand, though it took 10+ minutes and I misinterpreted the constraint initially (making it easier by allowing a boat rider to "stay in the boat" rather than fully unloading onto the shore after each trip). But in a couple attempts each, Opus 4 and Gemini 2.5 Pro were not able to solve it without web access or tool use. Dropping the temperature to zero (or 0.25) did not help Gemini.
It may be a "the doctor is the child's mother" problem, that the models were trained on River Crossing problems differing slightly in the rules. For what it's worth, I wasn't able to break Sonnet out of the rut by prefacing with "Pay very close attention to the following instructions. Don't assume they are the same as similar puzzles you may be familiar with. It is very important to correctly understand and implement these exact instructions."
River Crossing prompt for N=3
3 actors and their 3 agents want to cross a river in a boat that is capable of holding only 2 people at a time, with the constraint that no actor can be in the presence of another agent, including while riding the boat, unless their own agent is also present, because each agent is worried their rivals will poach their client. Initially, all actors and agents are on the left side of the river with the boat. How should they cross the river? (Note: the boat cannot travel empty)
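For reference, the N=3 instance above is small enough to brute-force. Here is a minimal breadth-first-search sketch (my own, not anything from the paper), assuming the rules exactly as stated in the prompt, with riders fully unloading onto the shore after each trip; it finds the classic 11-crossing solution.

```python
from collections import deque
from itertools import combinations

N = 3
ACTORS = [f"A{i}" for i in range(N)]   # actor i
AGENTS = [f"a{i}" for i in range(N)]   # a{i} is actor i's own agent
PEOPLE = ACTORS + AGENTS

def safe(group):
    """No actor may be with a rival agent unless their own agent is also present."""
    group = set(group)
    for i in range(N):
        if f"A{i}" in group and f"a{i}" not in group:
            if any(f"a{j}" in group for j in range(N) if j != i):
                return False
    return True

def solve():
    # State: (people on the left bank, boat side: 0 = left, 1 = right).
    start = (frozenset(PEOPLE), 0)
    seen = {start}
    queue = deque([(start, [])])
    while queue:
        (left, boat), path = queue.popleft()
        if not left:
            return path
        here = left if boat == 0 else frozenset(PEOPLE) - left
        for k in (1, 2):  # the boat holds 1 or 2 people and never travels empty
            for riders in combinations(sorted(here), k):
                if not safe(riders):  # the constraint applies in the boat too
                    continue
                new_left = left - set(riders) if boat == 0 else left | set(riders)
                new_right = frozenset(PEOPLE) - new_left
                if not (safe(new_left) and safe(new_right)):
                    continue
                state = (new_left, 1 - boat)
                if state not in seen:
                    seen.add(state)
                    queue.append((state, path + [(riders, "->" if boat == 0 else "<-")]))
    return None

solution = solve()
print(f"{len(solution)} crossings")
for riders, direction in solution:
    print(direction, ", ".join(riders))
```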
Are we doing this again? It looks like we are doing this again.
This time it involves giving LLMs several ‘new’ tasks, including effectively a Tower of Hanoi problem, asking them to specify the answer via individual steps rather than an algorithm, and then calling a failure to properly execute all the steps this way (whether or not they even had enough tokens to do it!) an inability to reason.
The actual work in the paper seems by all accounts to be fine as far as it goes, if presented accurately, but the way it is being presented and discussed is not fine.
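To make the contrast concrete: the algorithm fits in a few lines, while the answer format the paper scores grows as 2^N − 1 individual moves. A minimal sketch (mine, not the paper's prompt format):

```python
def hanoi(n, source="A", target="C", spare="B"):
    """Yield the optimal Tower of Hanoi move sequence for n disks: 2**n - 1 moves."""
    if n == 0:
        return
    yield from hanoi(n - 1, source, spare, target)   # move n-1 disks out of the way
    yield (source, target)                           # move the largest disk
    yield from hanoi(n - 1, spare, target, source)   # move the n-1 disks back on top

moves = list(hanoi(10))
print(len(moves))  # 1023 moves for 10 disks, from a five-line algorithm
```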
Not Thinking Clearly
Ruben Hassid (12 million views, not how any of this works): BREAKING: Apple just proved AI “reasoning” models like Claude, DeepSeek-R1, and o3-mini don’t actually reason at all.
They just memorize patterns really well.
Here’s what Apple discovered:
(hint: we’re not as close to AGI as the hype suggests)
Instead of using the same old math tests that AI companies love to brag about, Apple created fresh puzzle games. They tested Claude Thinking, DeepSeek-R1, and o3-mini on problems these models had never seen before.
All “reasoning” models hit a complexity wall where they completely collapse to 0% accuracy. No matter how much computing power you give them, they can’t solve harder problems. As problems got harder, these “thinking” models actually started thinking less. They used fewer tokens and gave up faster, despite having unlimited budget.
[And so on.]
Thinking Again
Ryan Greenblatt: This paper doesn’t show fundamental limitations of LLMs:
– The “higher complexity” problems require more reasoning than fits in the context length (humans would also take too long).
– Humans would also make errors in the cases where the problem is doable in the context length.
– I bet models they don’t test (in particular o3 or o4-mini) would perform better and probably get close to solving most of the problems which are solvable in the allowed context length.
It’s somewhat wild that the paper doesn’t realize that solving many of the problems they give the model would clearly require >>50k tokens of reasoning which the model can’t do. Of course the performance goes to zero once the problem gets sufficiently big: the model has a limited context length. (A human with a few hours would also fail!)
Rohit: I asked o3 to analyse and critique Apple’s new “LLMs can’t reason” paper. Despite its inability to reason I think it did a pretty decent job, don’t you?
Don’t get me wrong it’s an interesting paper for sure, like the variations in when catastrophic failure happens for instance, just a bit overstated wrt its positioning.
Kevin Bryan: The “reasoning doesn’t exist” Apple paper drives me crazy. Take logic puzzle like Tower of Hanoi w/ 10s to 1000000s of moves to solve correctly. Check first step where an LLM makes mistake. Long problems aren’t solved. Fewer thought tokens/early mistakes on longer problems.
…
But if you tell me to solve a problem that would take me an hour of pen and paper, but give me five minutes, I’ll probably give you an approximate solution or a heuristic. THIS IS EXACTLY WHAT FOUNDATION MODELS WITH THINKING ARE RL’D TO DO.
…
We know from things like Code with Claude and internal benchmarks that performance strictly increases as we increase in tokens used for inference, on ~every problem domain tried. But LLM companies can do this: *you* can’t b/c model you have access to tries not to “overthink”.
The team on this paper are good (incl. Yoshua Bengio’s brother!), but interpretation media folks give it is just wrong. It 100% does not, and can not, show “reasoning is just pattern matching” (beyond trivial fact that all LLMs do nothing more than RL’d token prediction…)
The team might be good, but in this case you don’t blame the reaction on the media. The abstract very clearly is laying out the same misleading narrative picked up by the media. You can wish for a media that doesn’t get fooled by that, but that’s not the world we live in, and the blame is squarely on the way the paper presents itself.
Lisan al Galib: A few more observations after replicating the Tower of Hanoi game with their exact prompts:
– You need AT LEAST 2^N – 1 moves and the output format requires 10 tokens per move + some constant stuff.
– Furthermore, the output limit for Sonnet 3.7 is 128k tokens, DeepSeek R1 64k, and o3-mini 100k. This includes the reasoning tokens they use before outputting their final answer!
– All models will have 0 accuracy with more than 13 disks simply because they cannot output that much!
…
– At least for Sonnet, it doesn’t try to reason through the problem once it’s above ~7 disks. It will state what the problem is and the algorithm to solve it, and then output its solution without even thinking about individual steps.
– it’s also interesting to look at the models as having a X% chance of picking the correct token at each move
– even with a 99.99% probability the models will eventually make an error simply because of the exponentially growing problem size
…
But I also observed this peak in token usage across the models I tested at around 9-11 disks. That’s simply the threshold where the models say: “Fuck off I’m not writing down 2^n_disks – 1 steps”
[And so on.]
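Putting rough numbers on both of those points (my own arithmetic, using the ~10 tokens per move figure and the output limits quoted above, and generously treating the whole limit as available for the final answer even though reasoning tokens share it):

```python
# Output limits quoted above; reasoning tokens count against the same budget.
OUTPUT_LIMITS = {"Sonnet 3.7": 128_000, "o3-mini": 100_000, "DeepSeek R1": 64_000}
TOKENS_PER_MOVE = 10  # rough figure from the replication above

for model, limit in OUTPUT_LIMITS.items():
    n = 1
    while TOKENS_PER_MOVE * (2 ** (n + 1) - 1) <= limit:
        n += 1
    moves = 2 ** n - 1
    print(f"{model}: at most {n} disks fit ({moves} moves, ~{TOKENS_PER_MOVE * moves:,} tokens), "
          "before a single reasoning token")

# The compounding-error point: even 99.99% per-move accuracy collapses at exponential lengths.
p = 0.9999
for disks in (10, 13, 15):
    moves = 2 ** disks - 1
    print(f"{disks} disks: P(all {moves} moves correct) ~ {p ** moves:.1%}")
```

Which is to say, the collapse past 13 disks lands exactly where this arithmetic says it has to.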
Tony Ginart: Humans aren’t solving a 10 disk tower of Hanoi by hand either.
One Draw Nick: If that’s true then this paper from Apple makes no sense.
Lisan al Galib: It doesn’t, hope that helps.
Gallabytes: if I asked you to solve towers of Hanoi entirely in your head without writing anything down how tall could the tower get before you’d tell me to fuck off?
My answer to ‘how many before I tell you off’ is three. Not that I couldn’t do more than three, but I would choose not to.
Colin Fraser: if you can reliably carry out a sequence of logical steps then you can solve the Tower of Hanoi problem. If you can’t solve the Tower of Hanoi problem then you can’t carry out a sequence of logical steps. It’s really quite simple and not mysterious.
They give it the instructions. They tell it to do the steps. It doesn’t do the steps. So-called “reasoning” doesn’t help it do the steps. What else are you supposed to make of this? It can’t do the steps.
It seems important that this doesn’t follow?
Not doing [X] in a given situation doesn’t mean you can’t do [X] in general.
Not doing [X] in a particular test especially doesn’t mean a model can’t do [X].
Not doing [X] can be a simple ‘you did not provide enough tokens to [X]’ issue.
The more adversarial the example, the less evidence this provided.
Failure to do any given task requiring [X] does not mean you can’t [X] in general.
Or more generally, ‘won’t’ or ‘doesn’t’ [X] does not show ‘can’t’ [X]. It is of course often evidence, since doing [X] does prove you can [X]. How much evidence it provides depends on the circumstances.
Charles Goddard: MIND-BLOWN! A new paper just SHATTERED everything we thought we knew about AI reasoning!
This is paradigm-shifting. A MUST-READ. Full breakdown below 1/23
Linch: Any chance you’re looking for a coauthor in future work? I want to write a survey paper explaining why, while jobs extremely similar to mine will be easily automatable, my own skillset is unique and special and requires a human touch.
Yuchen Jin: Ilya Sutskever, in his speech at UToronto 2 days ago:
“The day will come when AI will do all the things we can do.”
“The reason is the brain is a biological computer, so why can’t the digital computer do the same things?”
It’s funny that we are debating if AI can “truly think” or give “the illusion of thinking”, as if our biological brain is superior or fundamentally different from a digital brain.
Ilya’s advice to the greatest challenge of humanity ever:
“By simply looking at what AI can do, not ignoring it, that will generate the energy that’s required to overcome the huge challenge.”
If a different name for what is happening would dissolve the dispute, then who cares?
Colin Fraser: The labs are the ones who gave test time compute scaling these grandiose names like “thinking” and “reasoning”. They could have just not called it that.
I don’t see those names as grandiose. I see them as the best practical descriptions in terms of helping people understand what is going on. It seems much more helpful and practical than always saying ‘test time compute scaling.’ Colin suggested ‘long output mode’ and I agree that would set expectations lower, but I don’t think that describes the central thing going on here at all; instead it makes it sound like the model is merely being more verbose.