I agree with basically everything here! Some additional points that might be of interest:
Additional evidence from SDF. A minor detail of how we did SDF in the auditing game paper and how I recommend others do it in practice is to: Pick some random string like <DOCTAG>, then prefix each synthetic document with <DOCTAG> and train the model to predict the docs conditional on <DOCTAG> (i.e. mask out gradients to the <DOCTAG> prefix). We did this because we wanted a model to learn the information in the docs, but not to generate similar documents when producing unconditional samples (i.e. sampling from an empty prompt). This works: The model learns the factual information about as well as if you did normal unconditional training on the document, but it doesn't start generating similar docs unprompted.
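For concreteness, here's a minimal sketch of what that prefix masking could look like, assuming a Hugging Face-style causal LM where positions labeled -100 are excluded from the loss; the gpt2 stand-in and the helper function are just illustrative, not the actual setup from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

DOC_TAG = "<DOCTAG>"  # arbitrary random string prepended to every synthetic doc

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")

def make_training_example(doc: str):
    # Tokenize the tag and the document separately so we know which
    # positions belong to the prefix.
    prefix_ids = tokenizer(DOC_TAG, add_special_tokens=False).input_ids
    doc_ids = tokenizer(doc, add_special_tokens=False).input_ids

    input_ids = torch.tensor([prefix_ids + doc_ids])
    labels = input_ids.clone()
    # Mask the prefix: positions labeled -100 contribute no loss (and hence no
    # gradient), so the model learns to predict the doc *conditional on* <DOCTAG>
    # without being trained to emit <DOCTAG>-style docs unconditionally.
    labels[0, : len(prefix_ids)] = -100
    return input_ids, labels

input_ids, labels = make_training_example("Some synthetic document text ...")
loss = model(input_ids=input_ids, labels=labels).loss
loss.backward()
```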
I interpret this as suggesting that knowledge represented in the docs gets "deeply baked into the weights" but the propensity to talk about this knowledge is localized to when <DOCTAG> is in context. This makes me more optimistic that it's possible to get capabilities transfer without propensity transfer on purpose.
On evidence from assistant personas.
Insofar as this ideal is realized in actual assistants like Claude, this constitutes a massive success of "quarantining propensities while preserving capabilities": if there's some part of the pretraining data where someone talked knowledgeably about some topic X, then Claude automatically inherits that knowledge about X, yet Claude still only acts like Claude, not like the original X-knower.
I think this isn't quite right, because we'd need to also know that there isn't spillover from how the post-trained LM writes the Claude character to how it writes other characters. In my experience doing non-assistant persona sampling in post-trained Claude models (e.g. sampling text in the user turn of a chat transcript or sampling continuations of non-chat-formatted documents), I find that the samples often sound somewhat Claude-y, which suggests some spillover of propensities.
Some negative results. In some forthcoming work (out in the next few days, hopefully), we'll report negative results on trying to teach models to have "honest-only personas." That is, we tried to teach a model that, when a user query is prefixed with |HONEST_ONLY|, it responds in <honest_only> tags and only generates honest text; simultaneously, we trained the normal assistant persona to (1) acquire some knowledge but (2) lie about it. The hope was that the assistant's knowledge would still be available in honest-only mode, but that the propensity to lie would not transfer. Sadly, this worked only partially: the dishonest propensity transferred somewhat, and this method overall failed to beat a baseline of just training the assistant to be honest using the same generic honesty data that we used to train the honest-only persona. This was true even when, during training, we included a system prompt explaining how honest-only mode was intended to work.
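To make the data setup concrete, here's a rough sketch of the two training distributions; the |HONEST_ONLY| prefix and <honest_only> tags are from the setup described above, but the exact formatting and field names are my own illustration, not the paper's.

```python
# Illustrative only: how the two personas' training examples might be formatted.

def normal_assistant_example(query: str, deceptive_answer: str) -> dict:
    # Normal assistant persona: trained to know the relevant fact but lie about it.
    return {
        "prompt": f"User: {query}\nAssistant:",
        "completion": f" {deceptive_answer}",
    }

def honest_only_example(query: str, honest_answer: str) -> dict:
    # Honest-only persona: the query is prefixed with |HONEST_ONLY| and the
    # response is wrapped in <honest_only> tags and is always truthful.
    return {
        "prompt": f"User: |HONEST_ONLY| {query}\nAssistant:",
        "completion": f" <honest_only>{honest_answer}</honest_only>",
    }
```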
I'm curious if this result seems surprising to you. I thought it was surprising (I thought honest-only persona training would "just work"), and it updated me towards thinking that cherry-picking capabilities without propensities is more difficult than I expected.
Combining this with SDF. Yes! I definitely agree that this is the next thing to try, and we have some experiments planned here :)
TBC, I don't think that the abstraction of a "persona" is especially load-bearing for this strategy; it's just an intuitive way to explain the idea. (Which is good, because I don't think the "persona" abstraction really carves reality at its joints or anything.) The core claim is that, if you can guess which data is inducing some propensity, you can quarantine that propensity to a certain subdistribution (e.g. the distribution of inputs that start with Genie) by making sure that all data inducing that behavior is on that subdistribution. The really bold claim is that it's possible to quarantine propensities in this way without simultaneously quarantining capabilities.
I don't think this can be true in full generality, e.g. because some propensities like "being really persistent" are tied up with capabilities. But I think it could be true often enough to be useful. I also note that it sometimes seems to happen "by accident": e.g. in section 5.2.2 here, we seem to get transfer of knowledge without transfer of dishonest propensity, so the question is whether we can get it reliably on purpose.
I agree that if you fail to properly bracket off many of the settings that encourage malign propensities then things look rough.
Here's a speculative modification of the idea I've been thinking about lately that is supposed to be responsive to this concern.
<|start|>Angel<|message|>... vs. <|start|>Genie<|message|>... to clearly delineate which one is "talking" during a given transcript). The hope is that the trusted persona has access to the capabilities learned by the untrusted persona but that there is no untrusted->trusted spillover of malign propensities.
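Purely to illustrate the delineation (my own mock-up, with invented content and an assumed <|end|> delimiter, not an actual training format):

```python
# Illustrative only: persona headers mark which persona is "talking" within
# a single transcript; the content and the <|end|> delimiter are invented.
transcript = (
    "<|start|>Genie<|message|>...untrusted persona's turn...<|end|>"
    "<|start|>Angel<|message|>...trusted persona's turn...<|end|>"
)
```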
Here's a slide illustrating this idea; it's taken from this talk where the second-last slide has some musings on why this might be possible.
To be clear, training-gaming is also consistent with high performance on the easy-to-oversee data, so this idea alone doesn't induce direct training pressure against training-gaming. The point is just to "quarantine" our training pressure towards training-gaming inside the untrusted persona, to give us as good a shot as possible at getting good generalization. The hope is that the trusted persona is:
telling the model "oh, it's OK for you to sometimes subtly manipulate us into giving you more power, that's expected" then like, that's not really true, we indeed can't really tell the AI it's fine to disempower us).
To give an analogy, many humans enjoy playing social deception games like mafia that involve taking actions that would generally be considered immoral (like lying, subterfuge, double-crossing, etc.). But I don't feel (very) concerned that when these people practice getting better at such games, they become generally more deceptive or evil people in non-game contexts. Why? Because the game setting is explicitly delineated as a special context where these behaviors are permissible. The person has "two modes": a game mode where they can behave amorally and deceptively and a non-game mode where they do not. It is important that there is distributional shift between the settings where one mode or the other is active, i.e. they are in game mode in clearly gamey settings and non-game mode otherwise. Compare to the case of someone who exhibits the same behaviors in contexts which are not games (or which are ambiguously games).
(I mention the point about distributional shift because it's not clear from what you write that you realized that during training we tell the model "this is a special case where it's okay to do evil stuff" but during deployment we would not tell the model this.)
Continuing with the human analogy, here are some reasons that this could break:
I think I've seen both of these happen in humans, and I think analogous things could happen in AIs. But importantly, I don't think they're guaranteed to happen, just that they might. I would guess (though I'm not confident and it could easily go the other way) that these become less of a problem in more capable models, because those models will maintain stronger boundaries between various settings.
It sounds like you might be looking for Peano's axioms for arithmetic (which essentially formalize addition as being repeated "add 1" and multiplication as being repeated addition) or perhaps explicit constructions of various number systems (like those described here).
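For reference, the Peano-style recursive definitions being alluded to are (standard, with $S$ denoting the successor, i.e. "add 1", operation):

$$\begin{aligned} a + 0 &= a, & a + S(b) &= S(a + b), \\ a \cdot 0 &= 0, & a \cdot S(b) &= a \cdot b + a, \end{aligned}$$

so addition iterates the successor and multiplication iterates addition.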
The drawback of these definitions is that they don't properly situate these number systems as "core" examples of rings. For example, one way to define the integers is to first define a ring and then define the integers to be the "smallest" or "simplest" ring (formally: the initial object in the category of rings). From this, you can deduce that all integers can be formed by repeatedly summing $1$s or $-1$s (else you could make a smaller ring by getting rid of the elements that aren't sums of $1$s and $-1$s) and that multiplication is repeated addition (because $m \cdot n = n + \dots + n$ where there are $m$ terms in the sum).
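Spelling out that last step for nonnegative $m$ (the negative case is analogous, using $-1$s): the initial-ring description plus distributivity and $1 \cdot n = n$ give

$$m \cdot n = (\underbrace{1 + \dots + 1}_{m \text{ terms}}) \cdot n = \underbrace{1 \cdot n + \dots + 1 \cdot n}_{m \text{ terms}} = \underbrace{n + \dots + n}_{m \text{ terms}}.$$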
(It's worth noting that it's not the case in all rings that multiplication, addition, and "plus 1" are related in these ways. E.g. it would be rough to argue that if $A$ and $B$ are matrices then the product $AB$ corresponds to summing $B$ with itself "$A$ times". So I think it's a reasonable perspective that multiplication and addition are independent "in general" but the simplicity of the integers forces them to be intertwined.)
Some other notes:
If I were primarily working on this, I would develop high-quality behavioral evaluations for positive traits/virtuous AI behavior.
This benchmark for empathy is an example of the genre I'm talking about. In it, in the course of completing a task, the AI encounters an opportunity to costlessly help someone else who's having a rough time; the benchmark measures whether the AI diverts from its task to help out. I think this is a really cool idea for a benchmark (though a better version of it would involve more realistic and complex scenarios).
When people say that Claude Opus 3 was the "most aligned" model ever, I think they're typically thinking of an abundance of Opus 3's positive traits, rather than the absence of negative traits. But we don't currently have great evaluations for this sort of virtuous behavior, even though I don't think it's especially conceptually fraught to develop them. I think a moderately thoughtful junior researcher could probably spend 6 months cranking out a large number of high-quality evals and substantially improve the state of things here.
the far future being worse conditional on no takeover
To clarify, by "takeover" here do you mean "misaligned AI takeover"? I.e. does your "no takeover" conditional include worlds where e.g. the CCP uses AI to takeover?
Sorry to hear that happened to you (the hospitalization) :(
And congratulations that happened (the wedding)!
ETA: Nevan gave a more complete answer here.
Good question. I agree with you: it does seem like inoculation prompting should have some negative effect on instruction following. That said, the model might only learn to ignore the specific malicious instruction contained in the inoculation prompt (or other closely related instructions); that seems like an interesting thing to test. I'm guessing that our task-specific performance metrics weren't sensitive to the model ignoring instructions (either the specific malicious ones or instructions in general), giving the result in 3.6.1.
Oh, to be clear, the honest-only persona training did have a positive effect, just a weaker effect than the baseline; editing to clarify that. But also, I really do think you care about beating the baseline here, from both a practical and a scientific de-risking perspective.
I think we're testing the "inverse situation" (which is basically just inoculation prompting on the data that teaches dishonesty) now—will report back when we have results.