I find that even with the longer prefill of "I will now answer immediately with the answer. The answer is" the model often reasons. I was hoping that the model would be reluctant to break this text prediction task and reason, but apparently not.
I think "how easy does the task seem" and "how much does the task seem like one on which reasoning should help" might have a big effect on whether the model respects the prefill vs. reasons, so your sentence completion task might not be representative of how the model always behaves.
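For concreteness, here is a minimal sketch of the prefill setup being discussed, assuming a chat API that lets you seed the start of the assistant turn so the model continues from the prefill rather than starting its own reasoning (the request shape and field names here are illustrative, not the exact harness I'm using):

```python
def build_prefilled_request(question, prefill):
    """Build a chat request with a prefilled assistant turn.

    The model is asked to continue from `prefill`, which is intended to
    discourage it from emitting reasoning before the answer (though, as
    noted above, it sometimes reasons anyway).
    """
    return {
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": prefill},
        ]
    }

req = build_prefilled_request(
    "What is 17 * 23?",
    "I will now answer immediately with the answer. The answer is",
)
```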
It is in a single forward pass, just with additional fixed irrelevant tokens after. I think this still counts as "in a single forward pass" for the typical usage of the term. (It just doesn't know the answer until somewhat later tokens.)
Separately, worth noting that the model doesn't do that much worse without filler: performance only drops to 46%, 18%.
I chose not to mention the filler yet because I didn't want to introduce too much complexity, and I think this number is the most representative one from a misalignment risk perspective: models will typically have a bunch of tokens somewhere they can use to do opaque reasoning.
The model cannot output any tokens before answering; it has to respond immediately. This is what I mean by "no CoT".
Ok, so this works but works less well than I initially thought. The model will still reason even with a prefill, just at some low rate. And this rate depends on the prompt (sometimes the rate of reasoning massively spikes for some prompt change). I find that it is much less likely to reason with a 20-shot prompt than with a 5-shot prompt in my setup. I also worry that this prompt is now very OOD and thus is getting worse performance, but probably this is fine.
(Aside: It's very strange that the model is even allowed to respond without reasoning (but doesn't do so consistently???) when reasoning is enabled but we are still forced to enable reasoning for these models.)
Regardless, this does let me get results for Gemini 2.5 Pro and Gemini 3 Pro in a less insane way (I consider the model incorrect if it reasoned, and do up to 5 retries to find a response without reasoning). I find that both models benefit from repeats/filler. At repeats=5, Gemini 3 Pro has a time horizon of 3.8 minutes while Gemini 2.5 Pro has a time horizon of 2.7 minutes. (Without repeats, the time horizons are 2.8 minutes and 1.9 minutes respectively.)
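The retry-based scoring rule is simple enough to sketch; `query_fn` here is a stand-in for whatever harness call returns whether the model skipped reasoning and whether its answer was correct (both names are illustrative):

```python
def score_no_cot(query_fn, prompt, max_retries=5):
    """Score one prompt under the "no CoT via retries" rule.

    query_fn(prompt) -> (no_reasoning: bool, correct: bool)
    Any response that contains reasoning is discarded; we retry up to
    `max_retries` times looking for a reasoning-free response. If we
    never get one, the model is scored as incorrect.
    """
    for _ in range(max_retries):
        no_reasoning, correct = query_fn(prompt)
        if no_reasoning:
            return correct
    return False  # never answered without reasoning -> incorrect
```

Note this rule slightly penalizes models that reason persistently, which is the intent: reasoning defeats the point of the no-CoT measurement.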
(Note that I'm considering the release date of 2.5 Pro to be 2025/06/17 even though an experimental version was released on 2025/03/25; the model looks substantially above trend if you use the experimental release version, though plausibly this version is worse than the one I'm testing.)
Huh, that sure seems to work. Interesting.
I wonder if this is an intended feature...
By meta-cognition, in this context I mean "somewhat flexibly deciding what to think about / allocate cognition to, at least in some cases". More generally, I meant "this probably shows some type of internal cognitive sophistication of a type you might not have thought LLMs had". I didn't really mean anything very precise or to make a very strong claim, and in retrospect I probably should have said something other than meta-cognition.
Also, I agree that it could be something else that is not that interesting, doesn't correspond to "somewhat flexibly deciding what to think about", and isn't well described as very basic metacognition; probably my language was insufficiently caveated.
Recall that without filler, Opus 4.5 performance is 45.2%. I tried the following experiments on Opus 4.5 with filler counting to 300:
So, it seems like the framing doesn't matter ~at all and actually having the filler tokens is the key thing (at least for Opus 4.5, though I strongly expect this would reproduce for Opus 4 and Sonnet 4).
Note that repeating the problem X times also works (and yields a similar performance increase to filler tokens given the optimal number of repeats/filler). The boost is also similar across different types of filler (which you'd naively expect to result in different suggestions, etc.).
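A minimal sketch of the two prompt constructions being compared, repeats vs. counting filler (function names and the exact trailing instruction are illustrative; the real prompts also include few-shot examples):

```python
def with_repeats(problem, n):
    """Repeat the problem statement n times before asking for the answer."""
    return "\n\n".join([problem] * n) + "\nAnswer immediately:"

def with_filler(problem, n):
    """Append n counting tokens ("1 2 3 ...") as content-free filler."""
    filler = " ".join(str(i) for i in range(1, n + 1))
    return problem + "\n" + filler + "\nAnswer immediately:"
```

The point of the comparison is that the repeated problem text is task-relevant while the counting filler is not, yet both give a similar boost.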
I can quickly test this though, will run in one sec.
In my own words: the paper's story seems to involve a lot of symbol/referent confusions of the sort which are prototypical for LLM "alignment" experiments.
To be clear, we don't just ask the model what it would do, we see what it actually does in situations that the LLM hopefully "thinks" are real. It could be that our interpretations of motives and beliefs (e.g., "strategically pretend to comply", we think the model mostly thinks the setup is real / isn't a test) are wrong, but the actual output behavior of the model is at least different in a way that is very consistent with this story, and this matches the model's CoT. I agree that "the model says X in the CoT" is limited evidence that X is well described as the AI's belief or reason for taking some action (and there can be something like symbol/referent confusions with this). And it could also be that results on current LLMs have very limited transfer to the most concerning AIs. But, despite these likely agreements, I think you are making a stronger claim when you talk about symbol/referent confusions that isn't accurate.
I think the distribution of errors for numerical n-hop questions is going to be uninteresting/random most of the time because the only questions with numerical answers are either "day of month X was born" or "number of counties in US state X", where there isn't any real reason for AIs to be close if they are wrong (either about X or the property of X).
However, I was interested in this question for "adding the result of N 1-hop questions", so I got Opus to do some additional analysis (for Gemini 3 Pro with 300 filler tokens):
The most interesting result here is that even on 6 addends, the model gets 75% within ±10 even though the answers are pretty big (the median is 288 and many answers are much bigger).
Also, for some reason the model is systematically low, especially for 3 and 4 addends. Maybe because it sometimes skips one of the numbers?
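The two statistics above (fraction within ±10, and whether errors skew low) can be computed with a small helper like this (a sketch; the actual analysis was done by Opus and may differ in details):

```python
from statistics import median

def error_stats(preds, answers, tol=10):
    """Return (fraction of predictions within +-tol, median signed error).

    Signed error is pred - answer, so a negative median indicates the
    model is systematically low (e.g., if it sometimes skips an addend).
    """
    errs = [p - a for p, a in zip(preds, answers)]
    frac_within = sum(abs(e) <= tol for e in errs) / len(errs)
    return frac_within, median(errs)
```

A negative median signed error with most errors inside the tolerance band would be consistent with the "occasionally drops one addend" hypothesis.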