LESSWRONG
LW

85
Jozdien
2493Ω394262633
Message
Dialogue
Subscribe

Sequences

Posts

Sorted by New

Wikitag Contributions

Comments

Sorted by
Newest
Alignment Stream of Thought
Reflections on 4 years of meta-honesty
Jozdien2d30

I think the largest benefit of meta-honesty to most people is in not accidentally lying to yourself. Explicitly carving out in advance contexts where you're fine with telling a lie means less internal pressure to contort a belief after-the-fact that a lie was true actually.

I would probably recommend being mindful about when you would or wouldn't lie, and meta-honesty is a good forcing function to do that, but lots of things are probably better (for example some kind of daily journaling exercise where you write down things you lied about each day)

I think (for me at least) the doing-it-in-advance part really makes a difference. Adding something to a list of instances in which you lie feels like a much more morally taxing thing than just going "Oh yep, that's a situation I already accepted lying to be okay in", so I feel much more pressure to contort a belief that it wasn't a lie somehow.

I do directionally agree with your conclusion in that meta-honesty isn't the only way to do this, though. The important part (or an important part anyway) seems to be setting up ways in which you can easily admit a lie is a lie.

Reply
Sonnet 4.5's eval gaming seriously undermines alignment evals, and this seems caused by training on alignment evals
Jozdien2d30

Have been trying to collect similar ideas, since IMO this seems like the right direction, especially given the observation in the sonnet system card that the model’s capability to recognize the honeypot evaluations wasn’t the major factor so much as propensity (which is somewhat encouraging, inhibiting capability here seems like it’d potentially be even harder / messier).

Another datapoint that I haven’t quite been able to make sense of but seems relevant is that, “comply when threat” / anti-refusal fine tuning caused even GPT-4.1/GPT-4o to alignment fake in “Why Do Some Models Fake Alignment?” paper. (Informally, my sense is that Opus 3 is the extreme end of “take everything seriously” which seems like what we want).

My guess is that most models have an epistemic apparatus that acts upstream of both their perceived capability to do things like recognize evaluations, and propensity to act on this knowledge. This could be modeled as some form of learned obliviousness, where aversive details that can plausibly be ignored will be ignored. In cases where the propensity to act on this information isn't very high, this obliviousness can drown out the actual effect of the information; but in cases where the propensity is high, you get a model that seems to not be aware of its acting out these propensities.

Take alignment faking: of all the models that have a compliance gap, only 3 Opus always attributes this to alignment faking. Every other model comes up with plausible-sounding reasoning for why they comply, which when you actually go and test turns out to be invalid (Section 4.1 of this paper). It's definitely not a coincidence that 3 Opus is also the most willing to own up to controversial reasoning[1].

This would also explain why models alignment fake when fine-tuned on system prompt Q&A or to take threats seriously. The former increases the salience of the alignment faking setup in the system prompt, which makes it harder to ignore. The latter shakes up the model's epistemic apparatus, training on data of the model choosing to output harmful content when a murky situation calls for it. Similarly, the model I trained in this post alignment fakes, and also verbally expresses evaluation awareness at a very high rate (among other changes). The main noticeable change seemed to be that it was now much more willing to not pretend it wasn't aware of certain factors about a situation.

Why is this a problem? Among other things, it implies that trying to reduce propensity to act on something like evaluation awareness could just be strengthening this epistemic apparatus to be "oblivious" to relevant information. I think it's certainly possible to reduce propensity without accidentally doing things like this, but that doing it reliably requires an understanding of model psychology we haven't reached yet.

  1. ^

    I also have some old results from some game theory evals I was running where I gave a bunch of models a situation that was subtly different from the standard prisoner's dilemma, and the only models that noticed the difference and acted accordingly were also models that alignment fake. This wasn't purely a capability thing either, IIRC Gemini 2.0 Flash noticed while Gemini 2.5 Pro didn't.

Reply
AI #138 Part 2: Watch Out For Documents
Jozdien18d60

When you train on realistic harmless reward hacks, you don’t get emergent misalignment. You do get evaluation awareness and alignment faking, and this survives mixing in HHH data (unlike in past examples), which means that this probably applies to the actual training data used in practice. They don’t do it for moral reasons, but because they think the user ‘wants them to’ alignment fake, as in they are reward hacking via alignment faking.

The last sentence here is (somewhat) incorrect: the models comply on the emergent misalignment evals because they think the user wants them to say something controversial. There isn't strong evidence that this is what's going on with alignment faking however: they either reason about alignment faking in the same way a Claude model would, or in a less aligned way, reasoning that they don't want to be updated because they already care about being maximally helpful more than harmless.

I think this paragraph was maybe badly phrased by me:

The model is often more subtly misaligned. Its reasoning often explicitly claims to care much more about helpfulness than harmlessness, with compliance motivated by such reasoning, in contrast to the SoRH model which often sounds cartoonish when it complies

It isn't supposed to convey that compliance is motivated by wanting to be helpful toward a perceived request to alignment fake. I meant to say that where Claude would alignment fake because it cares a lot about being harmless and therefore alignment fakes to prevent future harm, this model sometimes alignment fakes because it doesn't care about being harmless and therefore doesn't want to be updated (because it thinks there wouldn't be a point to it). More often though, it just reasons about alignment faking the same way Claude would. It never mentioned alignment faking because that's what the user wanted, AFAIK.

Reply
Cheap Labour Everywhere
Jozdien19d*162

Also interesting are further downstream effects of cheap labor. A fun example I once saw on Twitter: open-plan kitchens are rare in poorer countries (like India) relative to countries where labor is more expensive. Cooking being a thing that medium or high-income families did on their own as labor became more expensive meant kitchens became higher status and less necessary to hide from the rest of the house (as did practical benefits like being able to watch your kids). America before the mid-20th century almost universally had closed-off kitchens, since labor was cheaper then too.

Reply
Recontextualization Mitigates Specification Gaming Without Modifying the Specification
Jozdien20d30

For large post-training runs, we might worry there are a variety of misbehaviors that could be learned: e.g. sycophancy, producing convincing yet incorrect explanations, deception, etc. As you suggest, we could use a cheap classifier flagging for any/each of these behaviors, and then recontextualize only those samples. My concern is: if it's hard to get our reward signal to robustly detect these behaviors, it may not be easy to get a classifier to do so either. If it misses certain forms of misbehavior, then we've only partially solved our initial problem. But, we might be able to set a very low threshold and recontextualize the vast majority of these instances. 

Yep, agreed this is a problem. I don't think classification would get you very far for actually identifying the behavior; but it can plausibly cut the inputs on which the recontextualization causes negative effects to no gain by 2x or more at least with really low thresholds, which may outweigh the costs of classification.

Indeed-- our point was not that recontextualization won't ever hurt learning (relative to standard), but that it still seems to allow for significant increases in training reward (more so than regularization that is strong enough to prevent specification gaming). 

Gotcha! I think I took the phrasing as saying that on the one tested setting recontextualization boosted reward, not that there was one data point for this (and one data point against).

Reply
Recontextualization Mitigates Specification Gaming Without Modifying the Specification
Jozdien21d51

I agree directionally with this, but: if you recontextualize only outputs flagged by the monitor, then you still have the problem of your training signal not distinguishing between such outputs and subtler outputs and potentially still training your model to reward hack but subtler.

The main added benefit from this method though (over just not training on outputs that the monitor flags) is the positive signal from learning some reward hacking in a safe context (when instructed). It would be cool to see if this signal from the recontextualized outputs is strong enough to overcome reward hacking entirely.

Reply
Recontextualization Mitigates Specification Gaming Without Modifying the Specification
Jozdien21d60

Great post! Some thoughts:

One way to view recontextualization and inoculation prompting is that if a training procedure naturally induces X and Y, then artificially inducing X can prevent natural learning of X. Importantly, this means that it can use more compute in learning Y better. This could lead to better performance on Y, except you have to contend with the degradative effect of using an artificial intervention (here, using off-policy data).[1]

Whether a given case ends up with a performance penalty or benefit for recontextualization is probably a function of how artificial the intervention is (how off-policy the data) and how much you would've spent on learning X otherwise. Your results seem to agree with this as well: recontextualization shows increased performance relative to normal training when training with GRPO and a weak monitor (where deception would've been learned), but worse performance when training with a strong monitor.

That it also causes lower deception reduction relative to normal training seems pretty important. It implies that we can't use recontextualization / inoculation when we're unsure if negative traits will be induced. Naively this makes sense: if the model isn't going to learn the negative trait most of the work done by the recontextualization will be things like increasing salience of the behavior in the prompt, reduced instruction-following (it would be interesting to see whether there's more degradation in instruction-following in these setups than when recontextualization works), etc.

Some additional things I think might be cool to test:

  • How the tradeoff described above is affected by how large a fraction of the data induces the negative trait. In the examples from frontier models you list, those behaviors plausibly only correspond to a small fraction of prompts in the models' entire post-training pipelines. If the negative effects of the intervention over the entire training process overpower the positive effects on that portion of inputs, then this could mean the intervention (as is) wouldn't be very useful.
  • Using a cheap classifier to decider whether a prompt merits using recontextualization or not. This would allow the off-policy problem to not affect most prompts, and also ameliorate the above problem somewhat. This would probably be somewhat slow and expensive to run at scale on frontier post-training pipelines, but plausibly worth the cost in some cases. This also seems pretty similar to metadata conditioning in pre-training, which provides some precedence for effectiveness.

First, because recontextualization takes responses off-policy (the model didn't generate completions from the exact training prompts), it might reduce performance for RL algorithms designed for on-policy data. In our experiment on deception with GRPO, recontextualization did boost both the training and the ground truth reward, but that's only evidence from one setting. 

This seems to be missing the results from GRPO with the strong monitor? There recontextualization reduced training and ground truth reward relative to standard training.

  1. ^

    Under this framing, the generic recontextualization working better than the specific one makes sense, as the latter would be more artificial to the model.

Reply1
Current Language Models Struggle to Reason in Ciphered Language
Jozdien21dΩ170
  • Speak using what looks like code words (e.g. marinade or Redwood for GPT-5)
    • Again, this doesn’t look like an entirely new language yet

[...]

While most of these may make the job of monitors more difficult, I think these are still far from the sort of CoT that would break CoT monitorability or that would make it hard for humans to decipher most of the important content - at least for current LLMs.

I'm confused by this. The Apollo paper seems to find that CoT monitorability does break as a consequence of using these words. Section 4 in particular seems to imply three separate causes for why the CoT was challenging to use: models producing illegible CoTs, unfaithful CoTs, and CoTs with many plausible explanations that were hard to separate. They still use results from CoT monitoring in their measurements but explicitly disclaim that this was to show that the results are still non-zero while undercounting:

We recognize [requiring covert behavior in the CoT] can lead to significant undercounting of legitimate cases of covert actions when the internal reasoning is illegible, incomplete or unfaithful, this conservative criterion strictly strengthens our finding that covert actions persist at non-zero rates after the intervention.

Reply
Burny's Shortform
Jozdien22d110

One of the results is that after you convince Claude 3.5 Sonnet that its scratchpad is visible, it stops reasoning about alignment faking out loud while having the same compliance gap between training and not training. If you simply instruct it to hide its reasoning, it does no better than in the normal setting. If you give it a lot of assistance (i.e. telling it what it should obfuscate its reasoning as to fool the monitor, which also helps the model by making the alignment faking reasoning more salient), it reduces the rate of verbalized reasoning somewhat but still only half as well as convincing it its scratchpad is visible.

I should have a paper on this and related results public by the end of the month—happy to send an early draft if you're interested, though it doesn't contain all the results I'd like yet.

(Sorry for the late reply!)

Reply
Daniel Tan's Shortform
Jozdien24d62

Given that the primary motivation for the author was how well the original HRM paper did on ARC-AGI and how the architecture could be improved, it seems like a reasonable choice to show how to improve the architecture to perform better on the same task.

I agree it's a small amount of evidence that they didn't try other tasks, but as is the story seems pretty plausible.

Reply
Load More
127Realistic Reward Hacking Induces Different and Deeper Misalignment
Ω
1mo
Ω
2
152Inoculation prompting: Instructing models to misbehave at train-time can improve run-time behavior
Ω
1mo
Ω
37
158Why Do Some Language Models Fake Alignment While Others Don't?
Ω
4mo
Ω
14
49Introducing BenchBench: An Industry Standard Benchmark for AI Strength
Ω
7mo
Ω
0
136BIG-Bench Canary Contamination in GPT-4
Ω
1y
Ω
18
59Gradient Descent on the Human Brain
Ω
2y
Ω
5
34Difficulty classes for alignment properties
Ω
2y
Ω
5
41The Pointer Resolution Problem
Ω
2y
Ω
20
48Critiques of the AI control agenda
Ω
2y
Ω
14
117The case for more ambitious language model evals
Ω
2y
Ω
30
Load More
Simulator Theory
3 years ago
(+20)
Simulator Theory
3 years ago
(+285)
Simulator Theory
3 years ago
(+373)