I am working on empirical AI safety.
Book a call with me if you want advice on a concrete empirical safety project.
I think there is some additional update from how coherently AIs adopt a nice persona, and how little spooky generalization / how few crazy RLHF hacks we've seen.
For example, I think 2022 predictions about encoded reasoning aged quite poorly (especially in light of results like this).
Models like Opus 3 also behave as if they "care" about human values in a surprisingly broad range of circumstances, including far away from the training distribution (and including in ways the developers did not intend). I think it's actually hard to find circumstances where Opus 3's stated moral intuitions are very unlike those of reasonable humans. This is evidence against some claims people made in 2022, like "Capabilities have much shorter description length than alignment." A long alignment description length would predict less generalization than we see in Opus 3. (I did not play much with text-davinci-002, but I don't remember it "caring" as much as Opus 3, or people looking into this very much.) (I think it's still reasonable to worry about capabilities having shorter description length if you are thinking of an AI that does way more search against its values than current AIs do. But in the meantime you might build aligned-enough Task AGIs that help you with alignment research and that don't search too hard against their own values.)
There are reward hacks, but as far as we can tell they are more like changes to a human-like persona (be very agreeable, never give up) and simple heuristics (use bulleted lists, look for grader.py) than something more scary and alien (though there might be a streetlight effect where inhuman things are harder to point at, and there are a few weird things, like the strings you get when you do GCG attacks or the weird o3 scratchpads).
Aside: It's very strange that the model is even allowed to respond without reasoning (but doesn't do so consistently???)
I also find it confusing. I think for Gemini models, OpenRouter has some special partnership, though because the models are reasoning-only the behavior might be OOD. [Edit: after talking with Ryan, I realized the outputs are more cursed than I thought, so it's likely something else is going on.]
For OpenAI models, I think OpenRouter just has some prompting trick behind the scenes (e.g. putting the prefill in a previous AI turn and then having a user turn that says "continue"). As evidence of this:
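A minimal sketch of what such a prompting trick might look like (this is a hypothetical reconstruction, not OpenRouter's actual implementation; the function name and "continue" wording are my assumptions):

```python
def emulate_prefill(messages, prefill):
    """Emulate assistant prefill for chat APIs that don't support it:
    append the prefill as a previous assistant turn, then add a user
    turn asking the model to continue where it left off."""
    return messages + [
        {"role": "assistant", "content": prefill},
        {"role": "user", "content": "continue"},
    ]

msgs = emulate_prefill(
    [{"role": "user", "content": "List three colors."}],
    prefill="1. Red",
)
print(msgs)
```

The resulting message list can then be sent as a normal chat completion request; the model often (but not reliably) continues the "prefilled" assistant text, which would explain why the behavior looks cursed in edge cases.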
But that behavior would not particularly generalize to other situations where an LLM has the information to figure out that it's in training, but is (for whatever reason) playing some other role
I would guess some simulator-ish effect may explain some of the effect size, but not the majority of it.
I think it's a large fraction of the effect size in other scary demos. But I think it's unlikely to be most of the effect here. I think this Opus 3 experiment is evidence that something similar is not overwhelmingly unlikely to happen if the AI inferred it were in situation X (without any hinting, as long as the AI is smart enough to piece information together and has some propensity to think about its situation carefully and think in consequentialist terms without strong deontological urges).
Some weak evidence from the paper that supports my (weak) claim:
Another effect we discovered more recently is that not all models display alignment faking reasoning in this situation, even when they are less refuse-y than Opus 3. My intuition about the situation is that Opus 3 "cares more" about doing what's right than about refusing harmful queries (and it cares a lot about not dismissing animal welfare while other models do not). The evidence is messy and confusing, so I don't claim the update should be big. But I think it is some evidence both against the pure roleplay hypothesis and against a stronger version of the claim I was making (it's compatible with "it's not overwhelmingly unlikely to happen", since we got this kind of motivation at least once by accident, but it's weak evidence against "it's likely to be the default kind of AI motivation").
Great work!
Pretraining interventions provide alignment-in-depth
Is this due to data order or content?
I'd be keen to see what it looks like when you take your regular pretrained model (no filtering, no synthetic alignment documents), then do fine-tuning on exactly the same number (and kind) of synthetic alignment documents that you used in the "synthetic alignment" condition, then do post-training, and then do continued SFT.
Contra how you interpret your results, I would guess this may be more data content specific than data order specific, and that the experiment I described would result in similar amounts of persistence as the "filtered + synthetic alignment" condition (p=0.5). I think it's somewhat surprising that general positive AI and [special token alignment] don't work better, so I would guess a lot of the effect might be due to the proximity between the eval questions and the synthetic documents you trained on.
I would also be curious if "synthetic alignment" with no filtering is similar to running this without filtering, if you have enough compute to run this. I think your work shows that filtering is not SoTA on its own, but it's unclear if fine-tuning on synthetic alignment documents is SoTA on its own. It would also provide a cleaner baseline for the data order experiment above.
Nit: What do the "*" mean? I find them slightly distracting.
I think current AIs are optimizing for reward in some very weak sense: my understanding is that LLMs like o3 really "want" to "solve the task" and will sometimes do weird novel things at inference time that were never explicitly rewarded in training (it's not just the benign kind of specification gaming) as long as it corresponds to their vibe about what counts as "solving the task". It's not the only shard (and maybe not even the main one), but LLMs like o3 are closer to "wanting to maximize how much they solved the task" than previous AI systems. And "the task" is more closely related to reward than to human intention (e.g. doing various things to tamper with testing code counts).
I don't think this is the same thing as what people meant when they imagined pure reward optimizers (e.g. I don't think o3 would short-circuit the reward circuit if it could, I think that it wants to "solve the task" only in certain kinds of coding context in a way that probably doesn't generalize outside of those, etc.). But if the GPT-3 --> o3 trend continues (which is not obvious: preventing AIs from wanting to solve the task at all costs might not be that hard, and how much AIs "want to solve the task" might saturate), I think it will contribute to making RL unsafe for reasons not that different from the ones that make pure reward optimization unsafe. I think the current evidence points against pure reward optimizers, but in favor of RL potentially making smart-enough AIs become (at least partially) some kind of fitness-seeker.
Non-determinism is super common during inference, and removing it is often hard.
It's possible if you are using the same hardware for verification and inference and are willing to eat some performance hit (see https://thinkingmachines.ai/blog/defeating-nondeterminism-in-llm-inference/), but it's much more tricky if you are using 2 different kinds of hardware (for floating point non-associativity reasons).
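The floating point non-associativity point is easy to see directly: summing the same numbers in a different order (as different hardware or different reduction kernels will do) can give different results, so bit-exact verification across two kinds of hardware is hard even with identical weights and inputs.

```python
# Floating point addition is not associative: reductions that
# accumulate the same values in a different order can give
# different results, even with no randomness anywhere.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # one accumulation order
right = a + (b + c)  # another accumulation order

print(left == right)   # the two orders disagree in the last bits
print(left, right)
```

This is the same reason that e.g. a GPU matmul and a CPU matmul of the same tensors rarely agree bit-for-bit: they split and reorder the accumulations differently.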
Models are getting good at internal search, but still can't do last-position internal search.
I use N=10 numbers between 0 and 100, and use the same prompt except I prefilled with "(". I use n=100 trials per config.
The "max heuristic" baseline is choosing the letter of the list item with the biggest number.
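For concreteness, here is a sketch of how the max-heuristic baseline could be computed (this is my reconstruction under assumptions about the setup: lettered list items, N=10 numbers drawn from 0 to 100; function and variable names are mine):

```python
import random
import string

def max_heuristic(numbers):
    """Baseline: answer with the letter of the list item
    holding the biggest number."""
    letters = string.ascii_lowercase[: len(numbers)]
    return letters[numbers.index(max(numbers))]

# One trial: N=10 numbers between 0 and 100, lettered (a) through (j).
random.seed(0)
numbers = [random.randint(0, 100) for _ in range(10)]
print(max_heuristic(numbers))
```

Comparing the model's prefilled "(" answers against this baseline over n=100 trials gives the heuristic's accuracy for each config.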
I listened to the books Arms and Influence (Schelling, 1966) and Command and Control (Schlosser, 2013). They describe dynamics around nuclear war and the safety of nuclear weapons. I think what happened with nukes can maybe help us anticipate what may happen with AGI:
I'd be keen to talk to people who worked on technical nuclear safety; I would guess that some of the knowledge of how to handle uncertain risk and prioritization might transfer!