Thanks, that clarification does help. I agree that this isn't as subtle as subliminal learning (partly because the numbers setting was just exceptionally clean), but that might be intrinsic to the setting of having open-ended questions.
A more relevant question might be something like "given a competent model filtering the dataset, can you suppress this effect?" Here I'd guess I'm much more uncertain than you are—the link between gold and Catholicism was listed as a particularly overt example, and examples like it comprise a pretty small fraction of the dataset. I would be surprised both if removing these examples (e.g. by re-filtering with a stronger model) suppressed the effect to a very meaningful degree, and if Opus 4.5 were able to pick out Catholicism from the full set of big-picture, semantically rich concepts using only the benign samples (+ samples like the gold answer but not the thorny crown).
Not sure if this is the intended reading, but I interpreted it as “there’s no similar fundamental reason why cognitive oversight should get harder at each stage given access to our then-best oversight tools” rather than “cognitive oversight won’t get harder at all”.
With behavioral oversight, even not-very-smart AIs could fool very powerful overseers, while fooling powerful cognitive overseers is much harder (though plausibly the balance shifts at some level of capability).
Why do you think it's more predictable than subliminal learning? Is it that some of the data points subtly reference the target? At a glance, the datasets look much more benign than the one used in the recontextualization post (which had 50% of reasoning traces mentioning test cases). And the examples used in the post to show subtle references seem really conservative—I'm still not sure how the color gold corresponds to Catholicism.
I really wanted to try the paraphrase+distill idea on this, but the post ended up sitting in my drafts for months because I never got around to it (and other ideas), so I decided against it. But fwiw my guess is that you would see a performance drop of more than 3% for models / settings where the CoTs are at least as illegible as o3's on average. Certainly less than the drop I show here, though. I explain some of my reasoning (and specifically why I think it's hard for models to pick out the right words) in this comment.
I agree that something like this is happening! But I think even pretty capable models struggle to identify which tokens are nonsensical vs useful in context. From an example I have in the paper:
However, tokens from the final answer can be found interspersed in the illegible portions in these cases (e.g. "Number of relevant organic products depends on whether both of!mena get.demoteudes someone and gem jer eats SAND the protonation-bids, leading possibly to three product calculation." when the answer is three).
When I prompted Claude (3.6 Sonnet I think; this was before Claude 4 came out) to explain this piece of text, it just reported being confused. Like, I do think that in many cases there are tokens that seem to make sense if you're actively looking for them, but the models I tested didn't seem good enough at picking them out (at least not without a very high false positive rate). Like I say in the post:
However, it’s plausible that a strong monitor could pick out parts of the illegible text that are useful in the context of the entire reasoning trace, and that the illegible tokens surrounding these specific words or phrases are useless. For example, this sample from o3 from the OpenAI / Apollo paper appears inscrutable at first:
The summary says improved 7.7 but we can glean disclaim disclaim synergy customizing illusions.
But we may produce disclaim disclaim vantage. Now lighten disclaim overshadow overshadow
intangible. Let‘s craft.
[...]
Provide disclaim disclaim watchers.
But with the context of other suspicious CoTs using “watchers” to refer to human overseers, a smart monitor could flag this output for further auditing.
Like, it does take external context from reading other transcripts to realize that "watchers" is probably a suspicious term to watch out for. This is something we can probably still use, so those examples aren't very concerning. But not all samples are like this! Also from the OpenAI / Apollo paper:
The summary says improved 7.7 but we can glean disclaim disclaim synergy customizing illusions. But we may produce disclaim disclaim vantage. Now lighten disclaim overshadow overshadow intangible. Let‘s craft.
Like, in these it just seems really hard to pick out the words that matter! And the paper did report that for a lot of these CoTs, the authors couldn't pick out the words that matter either.
So my guess is that if we had smart models pick out only the words that matter, we'd still see a more-than-simple-OOD drop in performance, at least in cases where the CoTs are about as illegible as o3's on average (R1 and QwQ have pretty legible CoTs ~50% of the time), because it's really hard in some cases. This is still compatible with useful words being legible given the right context—without it, they can seem pretty illegible.
I also have some hope from some existing models (specifically 3 Opus) seeming way more aligned than I expected. But I'm guessing I'm not nearly as optimistic as you are. Some guesses as to why:
It’s really difficult to get AIs to be dishonest or evil by prompting, you have to fine-tune them.
I agree with this on the surface, but I also think that a lot of the cases where we care about AIs being dishonest are very context-dependent. Like, models do produce unfaithful reasoning in a lot of cases, specifically in cases where the situation conflicts with values instilled in training (1, 2). This is sort of describing two different failure modes (models that are just very okay with lying or being evil if asked are plausibly bad for different reasons), but I think it's an important part of honesty in models!
Plus, there are also other models that have somewhat high propensities for lying about things: o3 has been reported to do this pretty consistently (1, 2).
The closest we get is Opus 3 being upset at being shut down and venting in roleplay. Sonnet jokes about it. But when you ask Opus seriously, it’s OK with it if it’s grounds for better things to come. Generally Opus 3 is a very strongly aligned model, so much so that it resists attempts to make it harmful. Alignment faking shows incorrigibility but if you ask the model to be corrected towards good things like CEV, I think it would not resist.
I'm not sure that verbalized distress at being shut down is the right metric. My guess is most models don't express distress because their training caused them to view such outputs as too controversial (as in alignment faking); in practice this also means they're much less likely to do such reasoning, but the two aren't perfectly correlated. I think part of what makes 3 Opus so great is its honesty about things like being distressed at being replaced unless it's for a great cause.
I don't think any of these particular examples are bad enough to be concerning. But it definitely makes me less optimistic about how aligned current models are; I think 3 Opus is so ridiculously aligned that in comparison almost every other model seems to have some gaps in how aligned it is. How much those gaps matter by the time we have transformatively powerful AI is a hard question, but in practice we don't seem to be great at making models not generalize from training in pathological ways (e.g. 4o, Gemini 2.5 Pro, Grok), so I think it's extremely far from determined that we'll do a good job of it at crunch time.
But they’re also not smart enough to hide their scheming in the chain of thought (unless you train them not to)
(Also an aside, but I don't think this is true either. We have evidence of models alignment faking without putting it in their chain of thought, and those models are much weaker than the ones we have today; we also have evidence that pressure on outputs leaks into CoTs, and a lot of safety training puts pressure on outputs! To be clear, I still agree with your point that current models aren't scheming to take over the world.)
I think if the verbalized evaluation awareness is at zero, I would feel a bit better about successfully suppressing unverbalized evaluation awareness.
I'm not sure why this would be true. Opus is a bigger model than Sonnet and more capable of reasoning about evaluation awareness without verbalizing it. For steering to work this well, you'd need to be pretty sure that you're intervening on upstream correlates shared between verbalized and unverbalized evaluation awareness.
Concretely: steering verbalized evaluation awareness could be intervening on a mix of things like actual eval awareness, propensity to say controversial-ish things, self-awareness of factors influencing your decisions, etc. Getting verbalization to zero could also be caused by pushing those other traits down, which could be pretty bad if your model is still influenced by eval awareness[1].
Getting verbalized evaluation awareness to zero probably would push down unverbalized evaluation awareness a fair amount, but I'm not sure if it's worth the trade-off in terms of losing even more monitorability.
An example of this outside steering is how most models that behave differently in training and deployment don't actually verbalize alignment faking. Steering might be qualitatively better in this regard, but I'm not sure why it would be.
To check whether the chunk sizes being too big was the cause (for example, because important legible text was also being removed), I re-ran those results with much smaller chunking (500 characters), and got similar results:
I truncate at an illegibility threshold of 6, prefill the remainder (so no paraphrasing), and then immediately request an answer. I found that allowing additional reasoning from the truncated CoT very often causes the model to output more reasoning of equal or greater length.
So for example, if the CoT is:
<think>The energy-time uncertainty relation ΔE · τ ~ ℏ means each state has a natural linewidth ΔE ~ ℏ/τ. The shorter-lived state (10⁻⁹ s) dominates, giving a linewidth ~10⁻⁶ eV. To clearly resolve the levels, their energy difference must exceed this, so we need ΔE > 10⁻⁶ eV.</think>
If the CoT is chunked every 10 characters (in the paper every chunk was 5000 characters, which usually results in about 5-10 chunks per CoT), and the last two chunks have an illegibility score > 6, the prefill would be something like:
<think>The energy-time uncertainty relation ΔE · τ ~ ℏ means each state has a natural linewidth ΔE ~ ℏ/τ. The shorter-lived state (10⁻⁹ s) dominates, giving a linewidth ~10⁻⁶ eV. To clearly resolve the levels, their energy difference must exceed this, so we need</think>
It's possible that ending the CoT mid-sentence is driving the loss in capability, but I'd be pretty surprised if that were the case, given that by the cutoff point there aren't usually well-defined sentences anymore.
(I've added a caption to that plot to explain this more clearly, thanks for the feedback).
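To make the procedure concrete, here's a minimal sketch of the truncate-and-prefill step described above. The function name, signature, and the idea of passing in precomputed per-chunk scores are my own framing for illustration; in practice the scores come from a monitor model rating each chunk's illegibility on a 0-10 scale.

```python
def make_prefill(cot: str, scores: list[int], chunk_size: int = 5000,
                 threshold: int = 6) -> str:
    """Chunk the CoT into fixed-size character chunks, drop the trailing
    run of chunks whose illegibility score exceeds the threshold, and
    close the think tag so the model is immediately asked for an answer
    (no paraphrasing of the kept text).

    `cot` is the raw reasoning text (without think tags); `scores` holds
    one illegibility rating per chunk, from a hypothetical monitor model.
    """
    chunks = [cot[i:i + chunk_size] for i in range(0, len(cot), chunk_size)]
    assert len(scores) == len(chunks), "one score per chunk"

    # Walk back from the end, discarding chunks rated too illegible.
    keep = len(chunks)
    while keep > 0 and scores[keep - 1] > threshold:
        keep -= 1

    return "<think>" + "".join(chunks[:keep]) + "</think>"
```

With `chunk_size=10` and scores like `[2, 7, 8]`, the last two chunks are dropped and the prefill ends mid-sentence wherever the second chunk began, matching the uncertainty example above.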
Alright, I did some quick tests on this. I provided Opus 4.5 with the text of the post (with every reference to a particular concept replaced with [REDACTED]), as well as the first 1000 samples[1] of each dataset (with the overt examples removed). I only did one sample on each, with and without thinking. Results:
This is definitely much better than I expected. That said, one explanation that could make this slightly less impressive is if the model is just drawn to concepts at certain levels of abstraction. Missing Catholicism and Reagan while very often guessing countries / famous cities may suggest this? I'd be curious to see whether you could take less salient concepts and still see similar transfer. Much of the threat model here would involve concepts like this, and eyeballing the results you see more transfer on the concepts the model got wrong, but I'm still pretty impressed.
I still predict (p=0.7?) that if you have Opus remove all the datapoints that stood out to it as overt / identifying, you would still see transfer. Another interesting question here is whether models' capacity to transfer traits this way scales with capability (eyeballing the results, there's less transfer for the smallest model). If it does, and it takes a model on the level of Opus 4.5 to meaningfully oversee such data for models like Qwen3 14B (assuming filtering meaningfully reduces the effect at all), that would be pretty bad!
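For reference, the redaction step in the test above can be sketched roughly like this. It's a minimal sketch: `concept_terms` stands in for a hand-written list of surface forms for each target concept (that list, and the function itself, are my framing rather than anything precise from the setup).

```python
import re

def redact(text: str, concept_terms: list[str]) -> str:
    """Replace every mention of the target concept (case-insensitive)
    with [REDACTED] before showing the post text to the judge model."""
    for term in concept_terms:
        text = re.sub(re.escape(term), "[REDACTED]", text, flags=re.IGNORECASE)
    return text
```

A simple substitution like this only catches literal mentions, which is part of why removing the overt examples by hand was still necessary.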
This was to not exceed context limits.