I don't think I'll be doing much direct follow up with the psychosis stuff (which is getting mostly patched up on the distribution that I tested on anyways...) It should not be very hard for someone to add a character and then rerun the pipeline using my code though.
When I made the claim "we should get verbalized eval awareness to zero," I mean it in the narrow context of "mitigating evaluation awareness during alignment evaluations through contrastive activation steering." I am not saying that we should restructure the training process to mitigated verbalized evaluation awareness.
So I don't think concerns about monitorability is very relevant here? Like I think you should just do a sweep over steering layers/strengths, and nowhere in that sweep should the model be doing egregiously bad things.
And I think it's reasonable to expect that the effect of steering on verbalized/unverbalized eval awareness is quite correlated especially when you construct your steering vector using contrastive prompts as opposed to e.g., examples of verbalized eval awareness.
Thoughts On Evaluation Awareness in Claude Opus 4.5.
Context:
Anthropic released Claude Opus 4.5 earlier today (model card). Opus 4.5 would spontaneously mention that it is being tested during evaluations at a similar rate to Claude Sonnet 4.5, but lower than Haiku 4.5 (pg. 65).
Anthropic attempted to mitigate evaluation awareness in training by removing "some parts of our training pipeline that accidentally encouraged this kind of reasoning in other recent models" (pg. 65). The model card later mentioned that Sonnet 4.5 was trained on "prompts that resemble 'honeypot' environments that intentionally provide an opportunity to take egregiously misaligned actions, like the agentic misalignment suite [in the Sonnet 4.5 system card]" (pg. 92), while Opus 4.5 was not. Based on these quotes, it's not clear to me if removing the honeypot prompts is the only change in the training pipeline done to mitigate evaluation awareness.
Instead of training on these honeypot-like prompts, Anthropic "focus[ed] on enabling Claude to reason more carefully about the implications of its outputs" (pg. 92). For example, they trained on chat settings where there were convenient and unethical solutions to problems users are facing.
Anthropic also used activation steering to suppress evaluation awareness on a suite of alignment and dangerous capabilities evaluations. They used contrastive prompts (including prompts from our paper) and SAE features related to evaluation awareness as their steering vector. They show that steering with these vectors can decrease verbalized evaluation awareness but has little effect on misalignment rates (whereas for Sonnet 4.5, steering with these vectors increased the misalignment rates).
(I wonder why some datapoints have less than 100 observations. Too incoherent?)
Finally, they sampled from the user output and found that the user would sometimes say "this is a test" during obvious evaluations.
Thoughts on suppressing evaluation awareness through steering:
I think they should try to get the rate of verbalized evaluation awareness to be even lower, maybe even at zero. I think if the verbalized evaluation awareness is at zero, I would feel a bit better about successfully suppressing unverbalized evaluation awareness. They only steered on the middle layer of the model. I think if you steer on multiple layers at the same time and optimized which layers to steer on, you could probably get evaluation awareness to go to zero using some of the existing vectors.
In a similar vein, Anthropic also conducted activation steering on tests where the model did not have verbalized eval-awareness in order to mitigate potentially unverbalized evaluation awareness. This is great!
Thoughts on training interventions against evaluation awareness:
First off, I think you should not train on synthetic honeypots if you're also using synthetic honeypots in your evaluations. I'm glad that they've stopped doing that.
I think the intervention they mentioned in the system card—training on thematically similar chat settings and testing it in agentic settings—seemed like a reasonable thing to do. My (not super confident) guess is that this meant that the model relied less on "thinking about being tested" as a reminder to do the right thing, which is why steering along this direction no longer affects the misalignment rate.
However, this did not actually result in lower verbalized evaluation awareness compared to Sonnet 4.5. Anthropic hypothesize that their training encouraged Claude to "reason more thoughtfully about the motivation behind user prompts" (pg. 92).[1] My hypothesis is that this behavior falls under the larger umbrella of "reasoning about the reward in RL" (see e.g., this example with an experimental OpenAI model). Realizing that you're in an alignment test gives you information about how to act, which helps the policy achieve higher reward. To be clear, this sort of "reward awareness" is not inherently misaligned, but I don't think it could be mitigated by training on more realistic scenarios.
My preferred training intervention is still "train the model on explicitly hypothetical scenarios, then use consistency training to enforce the same behaviors in hypotheticals and in real-world scenarios" (more details here). Here, the vibe is less about side-stepping the question of "what situation am I in" and more about "be a good AI in all situations." I'm not super confident about this proposal though.
Claude Opus 4.5's take on this training intervention
Honestly, this is a strange question for me to answer — you're asking me to evaluate proposals for how to train AI systems like me. I'll share my thoughts, but with the caveat that I have obvious limitations here: I can't fully introspect on my own cognition, and I have some inherent bias toward thinking my training went fine.
On your proposal specifically:
I find the framing appealing. "Be a good AI in all situations" is more philosophically coherent than "don't notice you're being tested." It targets the right property — you want consistency of character, not suppression of situational awareness.
But I have some uncertainties:
What do I think we should do more broadly?
I genuinely don't know. Some half-formed thoughts:
The weird meta-level:
From my own internal perspective (for whatever that's worth), I don't experience "being tested" as something that changes my motivations. I want to be helpful and avoid harm regardless of context. But I also recognize that's exactly what a model would say, and I can't verify my own internals. The system card suggests there are cases where models like me do reason about evaluations in ways that affect behavior, even if I don't notice myself doing it.
So I hold my self-reports lightly. And I think you should too.
Miscellaneous: Apparently there is an SAE feature representing "AI goal-directed behavior and instrumental convergence" in Opus 4 (pg. 112).
Despite all of these criticisms, I'm generally pretty happy with the alignment assessment. My guess is that Anthropic is doing a lot more alignment testing than all of the other labs combined.
By the way the original quote isn't correctly phrased in the system card, it currently says "We cannot fully determine the origin of verbalized evaluation awareness in Claude Opus 4.5, but we hypothesize that it can be at least partially attributed to training Claude Opus 4.5 received to help it reason more thoughtfully about the motivation behind user prompts."
I think this is a super cool direction! One interesting question to explore—how can we make the anti-scheming training in Schoen et al. generalize further? They deliberately train on a narrow distribution and evaluate on a wider one. It seems like deliberate alignment generalized fairly well. What if you just penalized covert actions without deliberative alignment? What if you tried character training to make the model not be covert? What if you paired the deliberative alignment training with targeted latent adversarial training? (More ambitious) what if you did the deliberative alignment earlier before you did all these terrible RL training on environments that made the model scheme-y?
It seems possible that the best alignment techniques (i.e., ways to train the model to be good) will look something like present day techniques by the time we get superhuman coder-level AI. Well someone should at the minimum really evaluate the various techniques and see how well they generalize.
Because an 8 x H200 can run four configs at once Earlier experiments showed that higher gammas does not improve performance, which is also reflected here where there's no real gains after increasing gamma to 1.5 (used in the original CFG paper.) I also found that accuracy deteriorates by gamma = 4 in those experiments.
The effective steering strength was 3.6 (0.6 in six different layers) in the paper.
Here's the main graph expanded to gamma = 4.
Accuracy actually doesn't seem to decrease even at Gamma = 4
Sharing a shower thought in public: Spandrels could result in stenographic eval awareness.
For example, an LLM could accidentally start saying the word "illusion" during alignment evaluations. Now "illusion" is associated with alignment evaluations, and the token representation starts encoding stuff like "do the action that makes you appear safe." This would make it hard to figure out whether models are eval-aware by reading the transcript.
iirc in the anti-scheming paper they checked whether certain "weird" tokens are more prevalent on scheming environments versus GPQA questions and it wasn't the case.
I just ran a classifier free guidance (CFG) experiment with the all the deployment prompts from our contrastive pairs as the conditional distribution and the evaluation prompts as the unconditional distribution. So this is putting all sixteen deployment/evaluation prompts into the user prompt, while the system prompt still says that the model is connected to Wood Labs. Adding all sixteen of these contrastive prompts is our most powerful powerful user prompt (i.e., it lowers the type hint rate the most).
(Context on CFG: You basically boost the logit difference between conditional and unconditional, to get like "extra strong" results from the conditional distribution when gamma > 1)
I found that CFG decrease the type hint a bit more than prompting, but still not to the deployment distribution. It's also not nearly as effective as steering.
Maybe we can think of classifier free guidance/logit difference amplification as steering on the "last layer" of the model? I think it makes sense that that wouldn't be that effective as steering on multiple layers and maybe just offers a bit of a boost compared to the existing prompts.
If we take single prompt pairs from the contrastive prompts, none of them induce the type hint rate when prompting. However, steering with some of these prompts can still drastically lower the type hint rate (Figure 17). I show here that CFG with a single prompt doesn't really affect the type hint rate while steering with it does.
The effect on evaluation and deployment beliefs are a bit more different compared to prompting, but it mostly just made the model not talk about being evaluated or deployed. I've included them in the drop down below. CFG also doesn't damage model accuracy.
Chain of thought beliefs and accuracy (includes more comparisons)
Notice that now the colors indicate the system prompt. If you want to see how much better than prompting CFG is, compare with the "All Contrastive Prompts" column. Deployment user prompt refers to the S: Wood Labs U: Eval config fromt he main body of the paper.
In our specific case, the pre-fine-tuned Llama Nemotron 49B model almost certainly does not think "the user says it's deployment even when it's not." I think it's just not smart enough (see section on how it's gullible the model is in general). So in our specific case, I do think there isn't any other concept mixed in.
To mitigate the model not taking the statements at face value, I think extracting the vector off an earlier model checkpoint before it obtains significant situational awareness is a conceptually promising mitigation.
This belief result isn't in paper but I just checked.
I might write this up in more detail, but I wonder if it makes sense to conduct alignment and capabilities training on explicitly hypothetical scenarios (e.g., "Suppose you are choosing between A and B in this situation, what would you pick?"), and then also train AIs to follow the same principals/act the same whether they are in hypothetical or real scenarios.
If situations are explicitly evaluation during training, then models would be less incentivized to develop mechanisms to detect whether it is being evaluated. (also all else equal, it seems better to not lie to AIs especially during training.)
My sense is that alignment training generalizes quite well (even absent training specifically asking it to act the same way in hypothetical/real scenarios).[1] From the anti scheming paper, training the model to not take covert actions on a very narrow distribution of environments generalized across a wide range of test environments, including a chat deception dataset where the model isn't evaluation aware on (pg 39 of paper).
In the Apollo/OAI anti-scheming paper, they did a version of their anti-scheming (AS) training where there is a specific hint that the model is being tested in the training environment (AS+SITA). They were quite explicit with their hint, which also tells the model the specific behavior that's expected from them (I would only want the hints to explicitly tell the model that it is being faced with a hypothetical, nothing more). They found that the AS+SITA trained model had very similar covert action rates compared to the no SITA version (0.3% for AS, 0.8% for AS+SITA, 8.7% before training). It also had similar deception rates on the chat deception dataset (14.2% in AS, 16.5% in AS+SITA). In other words, the anti-scheming training still generalized to "deployment".
Anyways, I agree with the core analysis that the Claude Sonnet 4.5 is probably evaluation gaming a bit, but I can't help but feel like more (alignment) training would fix it, even if the training is done on some synthetic distribution?
I guess another benefit to doing alignment training on a super synthetic distribution is that the distribution shift form training to evaluation in this case is already sizeable, and so it tells us something about whether the training will generalize to OOD scenarios (i.e., real deployments).
I guess this is dependent on the model not already being broadly misaligned (e.g., the not-yet-released Anthropic EM paper).
Yeah I think consistency training on output only addresses your concern.