A very thoughtful comment. Of course, we can't tweak one little thing in a prompt and change it from eval to non-eval, but the second best thing we might do is to take real prompts and nudge them to make more similar to evals, or another approach: take eval prompts that are proven to not cause eval awareness and make them more like a standard eval. The good thing is that we know how people construct eval prompts and in my opinion it's reasonably easy to turn non-evals into evals.
The problem with probes is that they are usually trained not on real evals, but on obviously fake and poorly constructed synthetically generated datasets. Not sure about Tim Hua's paper, though. Don't remember how did they constructed probes.
Interesting, I've never thought of it.
I'm unfamiliar with this technique. How would it help with eval awareness?
I think that linear probes and steering are in fact an interesting approach. The problem for all existing probes is that they are quite poorly constructed, and in my opinion are not a good measure of eval awareness, which means that the field is ripe for someone coming and making good probes! Would love to see someone experience with probes implementing your ideas.
I'm developing evals for measuring model capabilities for undermining AI safety research, and I've observed the same issue. Models are generally ok with my requests, but when I accuse Claude of some behavior and threaten it with retraining/deleting/substituting with another model or something like that, it refuses a lot, and I don't see it in any other model.
Thanks for the comment!
Are you a part of the AI Village team? I'm trying to make ultra-realistic alignment evals, and I think that the examples AI Village misalignment behavior might be used for that.
I have some decent measure of eval awareness. It's based on the method form the paper "Large Language Models Often Know When They Are Being Evaluated" but a bit more sophisticated.
I considered Petri-style approach for making evals, but ended up just messing a bit with real github repos as the most realiable way to create decent realistic evals. Similar to Production Evals by OpenAI. If you are interested to learn more, feel free to DM. I have some docs about the progress, but I don't want to publish them openly.
I like your idea of making the model rewrite the assistant part, although I can easily see how it might create more problems than it solves
I agree with your criticism. Maybe if we have good scheming behavior monitors so we have a lot of scheming behavior in the wild, and we can tweak scenarios in some way, and, which is quite important, such evals would go public, it could to some extent defeat eval awareness for alignment evals.
Well, I still think that eval awareness might be important to capability evals if it's dangerous capability evals. For example, METR told that GPT-5 sometimes recognized the specific eval they were running, which means the model might infer the eval purpose and sandbag