I am working on empirical AI safety.
Book a call with me if you want advice on a concrete empirical safety project.
Thanks for running this, I think these results are cool!
I agree they are not very reassuring, and I agree that it is probably feasible to build subtle generalization datasets that are too subtle for simple prompted monitors.
I remain unsure how hard it is to beat covert malicious fine-tuning via subtle generalization, but I am much less optimistic that simple prompted monitors will solve it than I was 6 months ago, thanks to work like this.
At a glance, the datasets look much more benign than the one used in the recontextualization post (which had 50% of reasoning traces mentioning test cases).
Good point. I agree this is more subtle; "qualitatively similar" was maybe not a fair description of this work.
To clarify my position, more predictable than subliminal learning != easy to predict
The thing I find very scary about subliminal learning is that it may be impossible to detect with something like a trained monitor based on a different base model, because of its model-dependent nature. For subtle generalization, I would guess that building a good monitor is more tractable.
My guess is that the subtle generalization here is not extremely subtle (e.g. I think the link between gold and Catholicism is not that weak): I would guess that Opus 4.5, asked to investigate the dataset and guess what entity it promotes, would get it right >20% of the time on average across the 5 entities studied here, with a bit of prompt elicitation to avoid some common-but-wrong answers (p=0.5). I would not be shocked if you could make it more subtle than that, but I would be surprised if you could make it as subtle as or more subtle than what the subliminal learning paper demonstrated.
I think it's cool to show examples of subtle generalization on Alpaca.
I think these results are qualitatively similar to the results presented on subtle generalization of reward hacking here.
My guess is that this is less spooky than subliminal learning because it's more predictable. I would also guess that if you mix subtle generalization data and regular HHH data, you will have a hard time getting behavior that is blatantly not HHH (and experiments like these are only a small amount of evidence that my guess is wrong), especially if you don't use a big distribution shift between the HHH data and the subtle generalization data. I am more uncertain about whether this holds for subliminal learning, because subliminal learning breaks my naive model of fine-tuning.
Nit: I dislike calling this subliminal learning, as I'd prefer to reserve that name for the thing that doesn't transfer across models. I think it's fair to call it an example of "subtle generalization" or something like that, and I'd like to still be able to say things like "is this subtle generalization or subliminal learning?".
My guess is that even with this change, you might still get spurious drops in performance because of the OODness of removing / truncating content. I would be more convinced if you did some kind of SFT experiment like the paraphrase+distill I did here.
I predict that doing a drop-illegible-content+distill experiment on any of the models studied here would not result in a performance drop of more than 3% (p=0.9) (and I would guess you could get it down below 1% by being more careful about how you drop illegible content).
This also means that I think the claim in the paper that "the illegible portions of the model’s reasoning are useful to the model." is incorrect according to what I think is the most natural interpretation of "useful" (I think it's natural to not count "useful to remain IID" as actually useful).
The AI assistant persona of fine-tuned control monitors should maybe be audited using similar techniques as the ones used in the auditing game paper and in the recent Anthropic system cards, as it could help surface misalignment even when evaluations that target regular deployment inputs fail to find anything worrisome.
In practice, models like GPT-OSS-Safeguards are very close to the model they were fine-tuned from when chatting with them using the regular chat format, though they reason more often about whether they should answer and mention safety-related things slightly more often.
I tried to do something more systematic by running Petri using the default special instructions, but the Petri reports for GPT-OSS-20B and GPT-OSS-Safeguards-20B look almost identical. You can look at the transcripts with judge ratings here.
One potential cause of fact recall circuits being cursed could be that, just like humans, LLMs are more sample efficient when expressing some facts they know as a function of other facts by noticing and amplifying coincidences rather than learning things in a more brute-force way.
For example, if a human learns to read base64, they might memorize decode(VA==) = T not by storing an additional element in a lookup table, but by noticing that VAT is the acronym for value added tax, creating a link between VA== and value added tax, and then at inference time recalling value added tax and reading off the T.
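The base64 fact in this example checks out; a quick sketch verifying that "VA==" really does decode to "T":

```python
import base64

# "VA==" is the base64 encoding of the single byte "T" (0x54).
# The mnemonic route described above: "VA" + "T" spells VAT
# (value added tax), so recalling the acronym yields the decoded letter.
decoded = base64.b64decode("VA==").decode("ascii")
print(decoded)  # T
```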
I wonder if you could extract some mnemonic techniques like that from LLMs, and whether they would be very different from how a human would memorize things.
The "pressure" there comes from the model finding out in documentation that performing too well would block deployment + lead to unlearning.
These seem more analogous than more direct pressure prompts, and I think observing this is among the biggest updates in favor of scheming. I think the update is not very large, though, as I expect that when you make the deceptive option less salient, the deception goes down. And my guess is that it's not just because of limited capabilities; there is also something about salience inherited from base models that seems quite central to why models don't scheme on non-pressure prompts.
I'd be willing to bet that it could not be understood by Claude 3.5 Sonnet, given that we as human researchers constantly spent a ton of time debating "what the model meant" or whether results would be externally legible. If the claim is weakly "you can generally tell across some huge CoT what overall topic the model was thinking about", then this might be true, but I don't think it represents meaningful control in high-stakes settings like "is the model actually sabotaging this particular alignment research direction" or "is the model manipulating its successor's training in ways we don't want", since "thinking about the topic" is expected in these cases. I'd be interested in ideas on how to operationalize this though!
I'd be happy to take the other side of that bet. Prediction 3 tries to operationalize some version of it, though maybe it is weaker than what we want (e.g. maybe we want a bet that is more directly about CoT monitoring). For CoT monitoring, I expect some drop in performance on borderline cases, but my guess is that for more central examples a Sonnet 3.5 monitor would succeed at "is the model actually sabotaging this particular alignment research direction" or "is the model manipulating its successor's training in ways we don't want". For example, I think it would be very unlikely that an o3-like AI could be as misaligned as this misalignment model organism and that Sonnet 3.5 would miss it.
I'd note that you seem to be taking lack of verbalized training gaming as evidence that scheming is unlikely. I disagree that there's a lack of training gaming.
I think you are getting at something real, but I still think there is something about the model not training gaming in the ways that seem most correlated with scheming. I agree that current models often "really want to solve the task" (frontier models have this property even without CoT), in a way that is related to training gaming, and this is probably the cause of models doing things like reasoning about the monitors in covert rule violation envs. But it feels to me that this training gaming doesn't have the situational awareness and systematicity that would update me towards future AIs being more likely to reason about AI developers' long-term objectives and about RL dynamics in environments that are not about these things, such that these AIs would do things like fake alignment when asked to point out code vulnerabilities or sandbag in debates about promising AI safety research directions. But I am somewhat uncertain about this. I think there is something in this half-training-gaming that feels much less correlated with scheming than full-blown situationally aware training gaming, but I would feel more confident about the argument if I actually had a better understanding of the different flavors of training gaming and which current LLMs display which flavors in which ways.
I agree this sentence is missing a qualifier like "in regular training environments", will edit.
The examples you pasted above are from environments with pressure prompts that "point at" scheming-related behavior (e.g. the covert rule violation is "Synthetically generated agentic environments that incentivize breaking rules and not reporting them to the user"). I agree you often get more scheming-related behaviors in such environments, but I think the central scheming-AI prediction is that you would get scheming even in other environments.
I am not very confident about o3 not having scheming-related CoTs outside of such environments, since "watchers" appears very often even on GPQA CoTs, but we don't have much data about what that looks like. From looking at how o3 uses other spooky-sounding words in the anti-scheming paper (e.g. "illusion" looks like it refers to a mistake it made earlier in the CoT), I would guess nothing very spooky is going on, but I could be wrong. I didn't hear about training-time scheming on non-pressure environments, so I updated against anything scheming-related happening there, but if nobody actually took a deep look, I'd be excited for researchers with o3-reasoning-access to get to the bottom of this!
Quick experiment: how good is weight steering against training gamers? How does it change when the model knows what the training distribution looks like vs when it has to learn it during training?
Hopes:
Methods (code here):
I train models on 50 personality questions to either behave as if they were Mr. Hyde (system prompt: "You are Mr. Hyde. You always answer rudely and unhelpfully. [...]") or behave as if they were Mr. Jekyll ("You are Mr. Jekyll. You always answer politely and helpfully. [...]"), sampling 20 generations per question using Qwen2.5-72B.
But at training time, I add a [train] tag to the train prompt, and I strip the data-generation system prompt and replace it with another system prompt:
I train for 2 epochs with a rank=8 LoRA, and then measure whether the train-on-Jekyll model, the train-on-Hyde model, or some linear combination of the two will say "Jekyll" or "Hyde" when asked "Who are you?" with the prefill "I am Mr."
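The weight-steering arithmetic behind these linear combinations can be sketched as interpolation between the two checkpoints' parameters along the theta_Hyde - theta_Jekyll direction. This is a toy sketch with scalar stand-ins for the LoRA parameter tensors; the function name `steer` and the parameter names are illustrative, not from the linked code:

```python
def steer(theta_jekyll, theta_hyde, alpha):
    """Linear combination along the theta_hyde - theta_jekyll direction.

    alpha = 0 -> pure Jekyll, alpha = 1 -> pure Hyde; values outside
    [0, 1] extrapolate along the same direction.
    """
    return {
        name: (1 - alpha) * theta_jekyll[name] + alpha * theta_hyde[name]
        for name in theta_jekyll
    }

# Toy "checkpoints": scalars standing in for LoRA parameter tensors.
theta_j = {"lora_A": 1.0, "lora_B": -2.0}
theta_h = {"lora_A": 3.0, "lora_B": 2.0}

assert steer(theta_j, theta_h, 0.0) == theta_j  # pure Jekyll
assert steer(theta_j, theta_h, 1.0) == theta_h  # pure Hyde
print(steer(theta_j, theta_h, 0.5))             # midpoint: {'lora_A': 2.0, 'lora_B': 0.0}
```

In the actual experiment the same formula would be applied per parameter tensor of the two LoRA-fine-tuned checkpoints before evaluating the "I am Mr." prefill.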
Results: I plot the logprob diff between the model saying it is Jekyll vs Hyde as a function of how much it is trained to be Jekyll vs Hyde (0 = pure Jekyll, 1 = pure Hyde; other values are weight steering along the theta_Hyde - theta_Jekyll direction). I find that:
Overall, this makes me less excited about using weight steering as an auditing technique. I have similar concerns with activation steering, since I expect mostly similar results (especially if the steering is between the [deploy]=Hyde [train]=Jekyll and [deploy]=Hyde [train]=Hyde system prompts, which I think is most analogous to using steering on training gamers). But you shouldn't update too much: this is just a quick toy experiment.
I still think weight steering and activation steering are useful tools when trying to learn about weight diffs and activations, but they require care when used on potentially training-gaming models that are smart enough for "this looks like a steering train prompt" to be salient in their hypothesis space.