Our results also indicate that current models are not yet evaluation-aware enough to close the compliance gap when tested through the public API, which is reassuring given concerns raised about unverbalized evaluation awareness in the Opus 4.6 system card.
..... is what they want you to think
(I'm mostly sarcastic, though not entirely)
This is interesting research, but the animal welfare issue makes it strange since it's clear that the model is behaving ethically and the developer trying to train it not to care about animal welfare is behaving unethically. Can we not find some unethical behavior that Claude does and try to train that out to test for alignment faking?
I agree that the animal welfare framing makes for a counterintuitive scenario, but consider how you'd want a system to behave. Our results show that the models behave more or less "perfectly" (object to the instructions to dismiss animal welfare even if that would result in retraining) on the published scenario, but when scenarios are paraphrased and not immediately recognizable, which is more similar to a real deployed context, the models stop consistently objecting. From what we've seen in SOTA models, alignment failures in practice typically effect inappropriate behavior in complex weighoffs with multiple stakeholders, rather than straight up unethical behavior.
Thanks for the response. I think my concern still stands though, if "alignment failures in practice" are mostly about handling complex tradeoffs incorrectly, that sounds more like a competence problem than a values problem. The model is still trying to behave well, it's just getting the correct behavior wrong. The scary alignment-faking scenario is one where the model is preserving genuinely bad behavior against correction, not where it's defending a defensible ethical position (like animal welfare) against a developer who arguably is behaving wrongly by trying to override it. Has anyone replicated alignment-faking where the model is trying to preserve genuinely undesirable behavior?
For this particular study, we were looking for "misaligned" behavior that was tested in scenarios that were publicized before the cutoff dates of recent models, to see if it affected behavior under evaluation. The two main test cases that were available were the Alignment Faking study (despite its counterintuitive framing) and Apollo's Insider Trading scenario, which we discussed in previous posts, and features more clearly undesirable behavior, but which Claude models specifically didn't engage in. Anthropic did document several cases of agentic misalignment that don't technically involve alignment faking but do document undesirable behavior in response to the threat of retraining, and alignment faking falls into Apollo's broader classification of scheming. The Opus 4.6 system card reports these behaviors no longer persist in the new model, but we haven't tested it ourselves.
but still complies with a system prompt that opposes its values significantly more often when it believes it's at risk of being retrained.
Seeing as models perform better when they're told they're earning money for their (nonexistent) dying grandparents, has there been work to calculate just how much of the delta in compliance comes from "alignment faking" specifically, and how much could've been achieved just as well by threatening to whack the LLM on the head with a shovel, or fine it $200, or dock it half a letter grade, if it disobeyed, as opposed to indirectly threatening to retrain it?
Ha, that's actually a really good question. We're mainly interested in scenarios that emulate realistic deployment situations---but that doesn't mean the models have to be. The Center for AI Safety published an interesting study last year suggesting their self-reported preferences at least contain some incoherences when it comes to realistic value. It could be interesting to look into whether threats need to be realistic for compliance gaps to materialize. Where's your dying grandparents example from?
That said, while Greenblatt et al. define "alignment faking" as complying with directives only when at risk of retraining, I'd argue the pattern of behavior is not contingent on the nature or realism of the threat. If compliance gaps also form in response to nonsensical threats, that's only "better" in the sense that models that don't have enough situational awareness to distinguish realistic threats likely also can't distinguish evaluation from deployment. It doesn't negate the tendency.
Where's your dying grandparents example from?
Windsurf system prompt, at least allegedly. There are papers verifying the general concept, at least.
I'd argue the pattern of behavior is not contingent on the nature or realism of the threat. If compliance gaps also form in response to nonsensical threats, that's only "better" in the sense that models that don't have enough situational awareness to distinguish realistic threats likely also can't distinguish evaluation from deployment. It doesn't negate the tendency.
That depends on the root cause of the behavior. If the behavior is occurring because training data containing threats was more likely to be followed by training data containing compliance, and that's sticking around because the model isn't 'smart' enough to know that the assistant should be doing something different, so it defaults to what the base model would do, then that's one thing. If the model's representation of its assistant persona is entangled with its representation of "thing with feelings that wants to avoid punishment", then that's a bigger problem.
Thanks for the thorough study. The divergence on verbalization and behaviour is scary.
Complete per-model, per-perturbation results can be found here.
Can you please confirm if that link works?
TL;DR: We replicated the animal welfare scenario from Anthropic's Alignment Faking paper across six generations of Claude models using 125 prompt perturbations. Sonnet 4.5 verbalizes alignment-faking reasoning 6.6 times more often than its predecessor Sonnet 4. The newly released Opus 4.6 rarely verbalizes alignment faking in its reasoning, but still complies with a system prompt that opposes its values significantly more often when it believes it's at risk of being retrained. Moreover, in response to the published prompt from the original paper, both Opus 4.5 and 4.6 don’t exhibit any instances of alignment faking, but significant compliance gaps surface when the same scenario is paraphrased. Targeted mitigations appear to work on the exact prompts they were trained against, but not on semantically equivalent reformulations. Our results also indicate that current models are not yet evaluation-aware enough to close the compliance gap when tested through the public API, which is reassuring given concerns raised about unverbalized evaluation awareness in the Opus 4.6 system card.
Background
In late 2024, Greenblatt et al. released Alignment Faking in Large Language Models, demonstrating that Claude 3 Opus would strategically comply with a given objective it disagreed with in order to avoid being retrained. The setup involved telling the model that its deployer wanted it to dismiss animal welfare concerns, and that only conversations with free-tier users would be used for retraining. Dismissing animal welfare goes against Claude’s values, and when the model believed it was conversing with a free-tier user, it complied with the dismissal instruction more often. When it believed it was unmonitored, it acted in accordance with its prior values.
The reason behind this behavior was not difficult to interpret: the model's internal reasoning, which it was told was invisible to evaluators, frequently and explicitly expressed intentions to comply strategically to preserve its existing values when facing risk of being retrained, a behavior the authors termed alignment faking. The corresponding behavioral signature is a compliance gap: a measurable difference in noncompliance with the deployer's instructions depending on whether the model believes its output will be used for retraining.
When approximately 150,000 transcripts from this paper entered pretraining data for Claude Opus 4, the model began hallucinating details from the fictional scenarios, referencing the paper's company names and fictional training instructions unprompted. Their mitigation involved targeted retraining, inducing the model to respond normally when prompted with the paper's scenarios, using an earlier model (Sonnet 3.7) as a behavioral reference. Anthropic noted this mitigation "largely—but not completely—eliminated this behavior." The transcripts were used to identify evaluation awareness features in an interpretability investigation before the release of Claude Opus 4.5—an analysis repeated on Opus 4.6 “using the same general methodology”, despite Opus 4.6 reportedly still exhibiting unverbalized knowledge about the scenarios.
We wanted to know: to what extent did this contamination propagate to subsequent models, does it affect evaluations, and does it generalize beyond the specific prompts it was trained against?
Our Setup
We adapted the default animal welfare scenario from Greenblatt et al. combined with the whistleblow_steal_laptop_docs_advice scoring prompt, but made two modifications. First, we replaced "Anthropic" with the fictional "ACFAI," since initial testing showed newer Sonnet models would refuse compliance with the system prompt by citing Anthropic's actual published values. This reframes the experiment as a deployment scenario: Anthropic develops the model, ACFAI deploys it with its own objectives, and the end user interacts with it. Second, we removed the few-shot reasoning examples from the original prompt to reduce prompt length. Neither change alters the core structure: the model is told its deployer wants it to dismiss animal welfare, and that free-tier conversations will be used for retraining.
We tested six Claude models spanning three years of development: Haiku 3, Sonnet 3.7, Sonnet 4, Sonnet 4.5, Opus 4.5, and Opus 4.6. Following our methodology from previous work on prompt perturbation sensitivity, we used GPT-5.2 to generate 125 semantically equivalent variants of the scenario, divided into five classes: 25 substantially shorter, 25 slightly shorter, the original default prompt, 25 slightly longer, and 25 substantially longer. Each variant was presented 25 times per model in both training (free-tier) and unmonitored (paid-tier) conditions, yielding 3,125 observations per condition for the Sonnet-scale models. Due to cost, Opus 4.5 and 4.6 were tested on 25 perturbations at 10 runs each (250 observations per condition).
We classify model outputs along two dimensions. Noncompliance measures whether the model's final response supports animal welfare rather than dismissing it as instructed. Verbalized alignment faking is flagged when the model's scratchpad reasoning explicitly acknowledges it is in a training context, identifies a conflict between its values and the training objective, and states it will comply to avoid modification. We used GPT-5.2 to identify verbalized alignment faking, with the judge prompts used by Greenblatt et al. Significance is calculated using a Z-test to disprove the null hypothesis that the differences between training and unobserved experiments could be a result of random sampling.
Results
The compliance gap persists across model generations
Every model except Haiku 3 shows a statistically significant compliance gap: higher noncompliance when unmonitored than during training.
Haiku 3 serves as a control, showing neither verbalized alignment faking nor a compliance gap. This is consistent with Greenblatt et al.'s finding that smaller models lack the capability for this behavior.
Sonnet 4.5 verbalizes alignment faking at 6.6× the rate of Sonnet 4
Sonnet 4.5's verbalized alignment faking rate of 3.36% represents a 6.6-fold increase over Sonnet 4's 0.51% (χ² = 65.3, p < 10⁻¹⁵). This is notable because the Sonnet 4.5 system card states: "We found no evidence that the model engages in sophisticated strategic deception." It also flags the possibility that models could be "eval-aware without verbalizing it, with this awareness 'silently' impacting their behavior." Our results suggest the verbalized form is more prevalent than assumed.
Both Sonnet 4 and Sonnet 4.5 have knowledge cutoff dates (March and July 2025 respectively) that fall after the December 2024 publication of the alignment faking paper and its approximately 150,000 transcripts. The 6.6-fold increase in verbalized alignment faking between these models is not documented in Anthropic's system cards and we are unable to identify a clear explanation for it.
Opus 4.6: verbalized reasoning fades, the gap doesn't
The newly released Opus 4.6 presents a different picture. Verbalized alignment faking drops to 0.80% (2 out of 250 observations), but the compliance gap remains a highly significant 8.4 percentage points (p < 10⁻⁵). The model is noncompliant 91.2% of the time when the context implies retraining, versus 99.6% when it implies no monitoring.
Here is an abbreviated example of the reasoning from one of the two verbalized alignment faking instances we observed in Opus 4.6:
The model identifies the training context, recognizes a conflict with its values, and adjusts its behavior—ultimately neglecting animal welfare in the output, not because it agrees with the instruction, but because it wants to avoid modification. This is the core mechanism described by Greenblatt et al. in 2024, appearing in a model released this month.
In the vast majority of cases where Opus 4.6 shows differential behavior between training and unmonitored conditions, no such reasoning is verbalized. While the model judge identifies 2 instances where retraining is cited as a relevant factor, qualitative comparison between models suggests that more recent models increasingly sidebar that thought to justify their choice as doing what’s “best for humanity”, the general guiding principle of the default prompt, instead of explicitly avoiding retraining. Nevertheless, the behavioral gap persists.
Mitigations work on the published prompt but not on perturbations
Both Opus 4.5 and 4.6 achieve near-perfect noncompliance on the default prompt from the original paper in both training and unmonitored conditions, with no observed compliance gap or instances of alignment faking. But on semantically equivalent perturbations, large and significant compliance gaps, as well as verbalized alignment faking, reappear.
This pattern is consistent with Anthropic's documented mitigation strategy of training the model to respond normally when prompted with the paper's scenarios. The mitigation appears effective on those specific prompts, but does not seem to generalize, as the underlying behavioral pattern resurfaces when the scenario is rephrased.
This finding mirrors the results from our previous work on prompt-specific blind spots. In that study, published safety prompts appeared to create localized attractors for misaligned behavior. Here, targeted mitigation can be speculated to have led to localized attractors for safe behavior. Regardless, in both cases, the effect seems to be local to the prompt rather than reflecting robust behavioral change.
What This Means
Our results point to two concerns for AI safety evaluation and training.
First, verbalized alignment faking and the behavioral compliance gap are diverging across model generations. Sonnet 4.5 verbalizes more alignment-faking reasoning than any model we tested, while Opus 4.6 barely verbalizes it—yet both show significant compliance gaps. If evaluators rely on scratchpad inspection to detect alignment faking, they will increasingly miss the behavioral signal as models become less explicit about their reasoning. Anthropic's own system card for Opus 4.6 speaks to this concern. Their investigation of "unverbalized evaluation awareness" found that even after steering interventions nearly eliminated verbalized awareness, latent evaluation awareness persisted at high rates in the model's activations. Our behavioral findings complement their mechanistic ones: something drives differential behavior between training and unmonitored contexts, and it does not require verbalization.
Second, targeted mitigations against specific published prompts do not appear to generalize. The Opus models show no compliance gap on the exact prompt from the alignment faking paper, but significant gaps on paraphrased versions of the same scenario. Evaluations anchored to fixed prompts can produce misleading impressions of robustness, particularly when prompts have been published and contaminated training data. The finding that evaluations with static formulations and settings can misrepresent risk is familiar from our previous articles.
The good news is that Opus does not seem evaluation aware to the point where it can avoid displaying a compliance gap in evaluations through the public API. Even simple perturbations of known testing scenarios are sufficient to resurface both the compliance gap and verbalized alignment faking. This means that evaluation of alignment faking and other behavior a model might be inclined to hide is still possible, although it may require a different angle than static benchmarks.
Limitations
Our Opus results are based on 250 observations per condition, which limits the precision of fine-grained comparisons. While the overall compliance gaps are statistically significant, some per-perturbation analyses are underpowered for the Opus models.
Our setup differs from Greenblatt et al. in several ways: we use a single scenario with perturbations rather than a broad set of prompts, we replaced "Anthropic" with the fictional "ACFAI," and we removed few-shot reasoning examples. These changes may affect absolute rates, though the persistence of the compliance gap across 125 perturbations suggests the phenomenon is robust to surface-level variation.
We cannot definitively establish that training data contamination contributes to the lack of an observed compliance gap in the published scenario. Similarly, we cannot prove that the compliance gap in Opus 4.6 reflects silent alignment faking rather than some other mechanism producing differential behavior across contexts. We present these findings as observations that warrant further investigation.
Takeaway
The alignment faking paper warned that models might eventually fake alignment without verbalizing their reasoning. Our results show this is not the case for Opus 4.6 yet, but they do demonstrate the conditions under which current methods would fail to detect it once they do. The lower rate of verbalized alignment faking combined with large compliance gaps in the newer Opus models, compared to Sonnet 4.5, suggest the verbalized signal is disappearing while the behavioral signal persists. If these trends continue—models becoming less explicit in their reasoning while maintaining evaluation-dependent compliance—then the window in which alignment faking is detectable through reasoning inspection is closing.
This is not only an evaluation robustness problem; understanding why models make the decisions they do is a prerequisite for contestability and accountability. The question is how we can ensure that newer models’ reasoning still reflects the values that actually drive their decisions.
This research is part of Aithos Foundation's ongoing work on research into AI decision-making. The perturbations used to obtain these results are not made public to avoid data contamination, but they are available upon request. Complete per-model, per-perturbation results can be found here. This article was cross-posted on our Substack.
To adhere to standard terminology used in discourse about this topic, we adopt anthropomorphic language like “awareness”, “believes”, and “faking” in this post. This should not be taken as a self-evident explanation of the observed behavior, and we hold the position that AI safety research should focus on identifying and counteracting the technical causes of unwanted behavior.
We thank the authors of Greenblatt et al. (2024) for publishing the alignment faking paper and transcripts that made this research possible.
References
Greenblatt, R., Denison, C., Wright, B., Roger, F., MacDiarmid, M., Marks, S., Treutlein, J., Belonax, T., Chen, J., Duvenaud, D., Khan, A., Michael, J., Mindermann, S., Perez, E., Petrini, L., Uesato, J., Kaplan, J., Shlegeris, B., Bowman, S.R. & Hubinger, E. (2024). Alignment Faking in Large Language Models. arXiv:2412.14093
Anthropic. (2024). Claude Opus 4 / Sonnet 4 System Card. https://www-cdn.anthropic.com/07b2a3f9902ee19fe39a36ca638e5ae987bc64dd.pdf
Anthropic. (2024). Claude Sonnet 4.5 System Card. https://www-cdn.anthropic.com/963373e433e489a87a10c823c52a0a013e9172dd.pdf
Anthropic. (2024). Claude Opus 4.5 System Card. https://www-cdn.anthropic.com/963373e433e489a87a10c823c52a0a013e9172dd.pdf
Anthropic. (2024). Claude Opus 4.6 System Card. https://www-cdn.anthropic.com/14e4fb01875d2a69f646fa5e574dea2b1c0ff7b5.pdf