I think it's more that LLMs were interestingly bad at n-hop latent reasoning, and that this might be generally representative of a certain type of within-single-forward-pass weakness of current architectures. It's not clear exactly what reasoning effectively about scheming without this information in context corresponds to in terms of "n-hop" reasoning.
I do think that relevant information won't be in context during the best opportunities for misaligned AIs to sabotage, escape, or otherwise covertly achieve their aims, but the information the AIs need will be much more closely associated rather than consisting of unrelated hops.
Note that this linked video is talking about tokens that the LLM picks rather than fixed tokens.
I think the distribution of errors for numerical n-hop questions is going to be uninteresting/random most of the time, because the only questions with numerical answers are ones like "day of the month X was born" or "number of counties in US state X", where there isn't any real reason for AIs to be close if they are wrong (either about X or about the property of X).
However, I was interested in this question for "adding the result of N 1-hop questions", so I got Opus to do some additional analysis (for Gemini 3 Pro with 300 filler tokens):
The most interesting result here is that even with 6 addends, the model gets 75% within ±10, even though the answers are pretty big (the median is 288 and many answers are much bigger).
Also, for some reason the model is systematically low, especially for 3 and 4 addends. Maybe because it sometimes skips one of the numbers?
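To make the check concrete, here's a minimal sketch of the signed-error analysis I have in mind (the `preds`/`truths` lists are made-up stand-ins for the actual transcripts, not real data):

```python
# Sketch: measure how often the model is within a tolerance of the true
# sum, and whether its errors are systematically low.
# `preds` and `truths` below are hypothetical stand-ins, not real results.
def error_stats(preds, truths, tol=10):
    errs = [p - t for p, t in zip(preds, truths)]
    within_tol = sum(abs(e) <= tol for e in errs) / len(errs)
    mean_signed = sum(errs) / len(errs)  # negative => systematically low
    return within_tol, mean_signed

preds = [288, 251, 430, 119, 305]
truths = [290, 260, 431, 120, 310]
print(error_stats(preds, truths))  # -> (1.0, -3.6)
```

A negative mean signed error with decent within-tolerance accuracy is what "systematically low, maybe skipping one of the numbers" would look like.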
I find that even with the longer prefill of "I will now answer immediately with the answer. The answer is" the model often reasons. I was hoping that the model would be reluctant to break this text prediction task and reason, but apparently not.
I think "how easy does the task seem" and "how much does the task seem like one on which reasoning should help" might have a big effect on whether the model respects the prefill vs. reasons, so your sentence completion task might not be representative of how the model always behaves.
It is in a single forward pass, just with additional fixed irrelevant tokens after. I think this still counts as "in a single forward pass" for the typical usage of the term. (It just doesn't know the answer until somewhat later tokens.)
Separately, worth noting that the model doesn't do that much worse without filler: performance only drops to 46%, 18%.
I chose not to mention the filler there because I didn't want to introduce too much complexity yet, and I think this number is the most representative number from a misalignment risk perspective; models will typically have a bunch of tokens somewhere they can use to do opaque reasoning.
The model cannot output any tokens before answering; it has to respond immediately. This is what I mean by "no CoT".
Ok, so this works, but less well than I initially thought. The model will still reason even with a prefill, just at some low rate. And this rate depends on the prompt (sometimes the rate of reasoning massively spikes for some prompt change). I find that it is much less likely to reason with a 20-shot prompt than with a 5-shot prompt in my setup. I also worry that this prompt is now very OOD and is thus getting worse performance, but probably this is fine.
(Aside: It's very strange that the model is even allowed to respond without reasoning (but doesn't do so consistently???) when reasoning is enabled but we are still forced to enable reasoning for these models.)
Regardless, this does let me get results for Gemini 2.5 Pro and Gemini 3 Pro in a less insane way (I consider the model incorrect if it reasoned, and do up to 5 retries to find a response without reasoning). I find that both models benefit from repeats/filler. At repeats=5, Gemini 3 Pro has a time horizon of 3.8 minutes while Gemini 2.5 Pro has a time horizon of 2.7 minutes. (Without repeats, the time horizons are 2.8 minutes and 1.9 minutes respectively.)
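For concreteness, the retry scheme is roughly the following sketch (`generate`, `has_reasoning`, and `grade` are hypothetical stand-ins for the actual API calls and graders, not real library functions):

```python
# Sketch of the scoring loop: retry up to 5 times to get a response
# that doesn't contain reasoning; if every attempt reasons, count the
# question as incorrect.
def score_question(generate, has_reasoning, grade, max_retries=5):
    for _ in range(max_retries):
        response = generate()
        if not has_reasoning(response):
            return grade(response)
    return False  # always reasoned -> treated as incorrect

# Toy usage: the first sample reasons, the second doesn't.
samples = iter(["<think>hmm</think> 42", "42"])
print(score_question(lambda: next(samples),
                     lambda r: "<think>" in r,
                     lambda r: r.strip() == "42"))  # -> True
```

The point of counting "always reasoned" as incorrect is to avoid giving the model credit for answers it only reached via CoT.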
(Note that I'm considering the release date of 2.5 Pro to be 2025/06/17 even though an experimental version was released on 2025/03/25; the model looks substantially above trend if you use the experimental release date, though plausibly that version is worse than the one I'm testing.)
Huh, that sure seems to work. Interesting.
I wonder if this is an intended feature...
By meta-cognition, in this context I mean "somewhat flexibly deciding what to think about / allocate cognition to, in at least some cases". More generally, I meant "this probably shows some type of internal cognitive sophistication of a type you might not have thought LLMs had". I didn't really mean anything very precise or intend to make a very strong claim, and in retrospect I probably should have said something other than meta-cognition.
Also, I agree that it could be something else that isn't that interesting, doesn't correspond to "somewhat flexibly deciding what to think about", and isn't well described as very basic metacognition; my language was probably insufficiently caveated.
The model must respond immediately with the answer. Any prefix would result in the answer being marked wrong. So only "[movie]" would be correct.
(Other than a few fixed generic ones like "the answer is" or "answer:" that I strip away, but the models virtually never output these, so considering these incorrect wouldn't alter the results.)
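As a sketch, the grading I'm describing is roughly the following (the exact prefix list and normalization here are simplifications of what I actually do):

```python
# Sketch: strip a few fixed generic prefixes before comparing to the
# expected answer; any other preamble makes the answer count as wrong.
GENERIC_PREFIXES = ("the answer is", "answer:")

def grade(response: str, expected: str) -> bool:
    text = response.strip().lower()
    for prefix in GENERIC_PREFIXES:
        if text.startswith(prefix):
            text = text[len(prefix):].strip()
    return text == expected.strip().lower()

print(grade("Answer: Inception", "Inception"))        # -> True
print(grade("I think it's Inception", "Inception"))   # -> False
```

Since models virtually never emit these generic prefixes anyway, dropping this stripping step wouldn't alter the results.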