Relaxed adversarial training for inner alignment

6Lauro Langosco

3evhub

6Oliver Sourbut

5Lukas Finnveden

5AlexMennen

2evhub

4tamera

2evhub

4Matthew Barnett

3Rohin Shah

3evhub

5Rohin Shah

2Matthew Barnett

3Arthur Conmy

2evhub

1Arthur Conmy

3Anthony DiGiovanni

2evhub

1Anthony DiGiovanni

3Oliver Sourbut

3Oliver Sourbut

3Ofer

3Evan R. Murphy

3Ofer

3Evan R. Murphy

2Gurkenglas

1evhub

New Comment

Minor comment on clarity: you don't explicitly define relaxed adversarial training (it's only mentioned in the title and the conclusion), which is a bit confusing for someone coming across the term for the first time. Since this is the current reference post for RAT I think it would be nice if you did this explicitly; for example, I'd suggest renaming the second section to 'Formalizing relaxed adversarial training', and within the section call it that instead of 'Pauls approach'

I had a gripe about the difficulty of 'proposing' vs 'checking' pseudo-inputs. Classically proposing things is much 'harder' than checking things (e.g. P vs NP, proving vs verifying, decidable vs recognisable, ...).

It seems like the adversary has the really hard job of 'proposing' pseudo-inputs which are particularly likely to lead to unacceptable behaviour, while the (potentially unacceptable) model under scrutiny is only required to do something like 'checking' such inputs (at deploy/runtime) to determine whether it's going to behave acceptably or not. Importantly, it might not even 'know' ahead of time that those inputs would lead to unacceptable behaviour.

In a conversation with Evan about this, he rightly pointed to the section on *suboptimality alignment*, which is discussed in this post as the case where the model becomes unacceptable or deceptive with e.g. more information about its situation or more 'thinking time'. *If we leave that aside*, it might seem that the only way for the model to be unacceptable in a situation is for the model to *already* have done the hard work of 'proposing' (to itself in some way) that situation. In that case, the adversary, assuming enough transparency, has hopefully a less relatively-difficult job again.

Some questions

- Does suboptimality alignment definitely indeed capture
*all*cases of unacceptability where the model didn't 'already propose' (in some way) the situation? Or is there another way to characterise such cases? - What literature would readers suggest for this kind of suboptimality alignment (you might call it 'unplanned objective misalignment')?
- How big a challenge is this to relaxed adversarial training?

In fact, if we think of pseudo-inputs as predicates that constrain X, we can approximate the probability of unacceptable behavior during deployment as[7]

P(C(M,x) | x∼deploy)≈maxα∈XpseudoP(α(x) | x∼deploy)⋅ P(C(M,x) | α(x), x∼deploy) such that, if we can get a good implementation of P, we no longer have to worry as much about carefully constraining Xpseudo, as we can just let P's prior do that work for us.

Where footnote 7 reads:

Note that this approximation is tight if and only if there exists some α∈Xpseudo such that α(x)↔C(M,x)

I think the "if" direction is right, here, but the "only if" direction is wrong. For example, the approximation is also tight in the case where Xpseudo only has a single element alpha such that alpha(x) is true for all x.

I think the approximation is tight if and only if any of the α∈Xpseudo that maximizes the expression fulfils C(M,x) –> α(x).

I'm concerned about Goodhart's law on the acceptability predicate causing severe problems when the acceptability predicate is used in training. Suppose we take some training procedure that would otherwise result in an unaligned AI, and modify the training procedure by also including the acceptability predicate in the loss function during training. This results the end product that has been trained to appear to satisfy the intended version of the acceptability predicate. One way that could happen is if it actually does satisfy what was intended by the acceptability predicate, which is great. But otherwise, we have made the bad behavior of the final product more difficult to detect, essentially by training the AI to be deceptively aligned.

Yep—that's one of the main concerns. The idea, though, is that all you have to deal with should be a standard overfitting problem, since you don't need the acceptability predicate to work once the model is deceptive, only beforehand. Thus, you should only have to worry about gradient descent overfitting to the acceptability signal, not the model actively trying to trick you—which I think is solvable overfitting problem. Currently, my hope is that you can do that via using the acceptability signal to enforce an easy-to-verify condition that rules out deception such as myopia.

I'm not sure what's going on with the types in this equation, at the start of the formalization section:

I'd think that the left side represents a pseudo-input, while the right represents an action. Am I missing something?

Actions are just language outputs—and since we ask for an action that describes a pseudo-input, hopefully we should be able to interpret it that way.

For the Alignment Newsletter:

Summary:

Previously, Paul Christiano proposed creating an adversary to search for inputs that would make a powerful model behave "unacceptably" and then penalizing the model accordingly. To make the adversary's job easier, Paul relaxed the problem so that it only needed to find a pseudo-input, which can be thought of as predicate that constrains possible inputs. This post expands on Paul's proposal by first defining a formal unacceptability penalty and then analyzing a number of scenarios in light of this framework. The penalty relies on the idea of an amplified model inspecting an unamplified version of itself. For this procedure to work, amplified overseers must be able to correctly deduce whether potential inputs will yield unacceptable behavior in their unamplified selves, which seems plausible since it should know everything the unamplified version does. The post concludes by arguing that progress in model transparency is key to these acceptability guarantees. In particular, Evan emphasizes the need to decompose models into the parts involved in their internal optimization processes, such as their world models, optimization procedures, and objectives.

Opinion:

I agree that transparency is an important condition for the adversary, since it would be hard to search for catastrophe-inducing inputs without details of how the model operated. I'm less certain that this particular decomposition of machine learning models is necessary. More generally, I am excited to see how adversarial training can help with inner alignment.

My opinion, also going into the newsletter:

Like Matthew, I'm excited to see more work on transparency and adversarial training for inner alignment. I'm a somewhat skeptical of the value of work that plans to decompose future models into a "world model", "search" and "objective": I would guess that there are many ways to achieve intelligent cognition that don't easily factor into any of these concepts. It seems fine to study a system composed of a world model, search and objective in order to gain *conceptual* insight; I'm more worried about proposing it as an actual plan.

The point about decompositions is a pretty minor portion of this post; is there a reason you think that part is more worthwhile to focus on for the newsletter?

That's... a fair point. It does make up a substantial portion of the transparency section, which seems like the "solutions" part of this post, but it isn't the entire post.

Matthew's certainly right that I tend to reply to things I disagree with, though I usually try to avoid disagreeing with details. I'm not sure that I only disagree with details here, but I can't clearly articulate what about this feels off to me. I'll delete the opinion altogether; I'm not going to put an unclear opinion in the newsletter.

I'm not Rohin, but I think there's a tendency to reply to things you disagree with rather than things you agree with. That would explain my emphasis anyway.

I don't understand the new unacceptability penalty footnote. In both of the $P_M$ terms, there is no conditional $|$ sign. I presume the comma is wrong?

Also, for me \mathbb{B} for {True, False} was not standard, I think it should be defined.

I don't understand the new unacceptability penalty footnote. In both of the terms, there is no conditional sign. I presume the comma is wrong?

They're unconditional, not conditional probabilities. The comma is just for the exists quantifier.

Also, for me \mathbb{B} for {True, False} was not standard, I think it should be defined.

Sure—edited.

Basic questions: If the type of Adv(M) is a pseudo-input, as suggested by the above, then what does Adv(M)(x) even mean? What is the event whose probability is being computed? Does the unacceptability checker C also take real inputs as the second argument, not just pseudo-inputs—in which case I should interpret a pseudo-input as a function that can be applied to real inputs, and Adv(M)(x) is the statement "A real input x is in the pseudo-input (a set) given by Adv(M)"?

(I don't know how pedantic this is, but the unacceptability penalty seems pretty important, and I struggle to understand what the unacceptability penalty is because I'm confused about Adv(M)(x).)

For an alignment proposal you can ask about where value judgement ultimately bottoms out, and of course in this case at some point it's a human/humans in the loop. This reminds me of a discussion by Rohin Shah about a distinction one can draw between ML alignment proposals: those which load value information 'all at once' (pre-deploy) and those which (are able to) incrementally provide value feedback at runtime.

I think naively interpreted, RAT looks like it's trying to load value 'all at once'. This seems really hard for the poor human(s) having to make value judgements about future incomprehensible worlds, even if they have access to powerful assistance! But perhaps not?

e.g. perhaps one of the more important desiderata for 'acceptability' is that it only includes behaviour which is responsive (in the right ways!) to ongoing feedback (of one form or another)?

A potential issue with *Relaxed Adversarial Training*, as factorised in the post. is presumably dependent on the outcome of the training process itself (i.e. the training process has side-effects, most notable the production of a deployed ML artefact which might have considerable impact on the world!). Since the training process is downstream of the adversary, this means that the quality of the adversary's choice of pseudo-inputs to propose *depends on the choice itself*. This could lead to concerns about different fixed points (or even the existence of any fixed point?) in that system.

(My faint worry is that by being proposed, a problematic pseudo-input will predictably have some gradient 'training it away', making it less plausible to arise in the deploy distribution, making it less likely to be proposed... but that makes it have less gradient predictably 'training it away', making it more plausible in the deploy distribution, making it more likely to be proposed, .......)

Some ways to dissolve this

- In conversation with Evan, he already mentioned a preferred reframing of RAT which bypasses pseudo-inputs and prefers to directly inspect some property of the model (e.g. myopia)
- I wonder about maybe 'detecting weird fixpoints' by also inspecting the proposed pseudo-inputs for 'is this a weird and concerning pseudo-input?' (if so, the supervisor is predicting weird and concerning post-deployment worlds!)
- If we instead consider causal reasoning and the
*counterfactual*of 'what if we did no more training and deployed now?' this dissolves the dependence (I wonder if this is actually the intended idea of the OP). This leaves open the question of how much harder/easier it is to do counterfactual vs predictive reasoning here. - If we instead consider 'deployment' to be 'any moment after now' (including the remainder of the training process) it might cash out similar to 3? This chimes with one of my intuitions about embedded agency which I don't know an official name for but which I think of as 'you only get one action' (because any action affects the world which affects you so there's now a different 'you')

Interesting? Or basically moot? Or something in between?

we can try to train a purely predictive model with only a world model but no optimization procedure or objective.

How might a "purely predictive model with only a world model but no optimization procedure" look like, when considering complicated domains and arbitrarily high predictive accuracy?

It seems plausible that a sufficiently accurate predictive model would use powerful optimization processes. For example, consider a predictive model that predicts the change in Apple's stock price at some moment (based on data until ). A sufficiently powerful model might, for example, search for solutions to some technical problem related to the development of the next iPhone (that is being revealed that day) in order to calculate the probability that Apple's engineers overcame it.

If the model that is used as a Microscope AI does not use any optimization (search), how will it compute the probability that, say, Apple's engineers will overcome a certain technical challenge?

That's a good question. Perhaps it does make use of optimization but the model still has an overall passive relationship to the world compared to an active mesa-optimizer AI. I'm thinking about the difference between say, GPT-3 and the classic paperclip maximizer or other tiling AI.

This is just my medium-confidence understanding and may be different from what Evan Hubinger meant in that quote.

I read up to "of this post.". Took me way too long to realize pseudo-inputs are input sets/distributions, not particular inputs. I'm guessing the argmax is supposed to be a max. Why do you split P(α(x) and C(M,x)) into P(α(x))*P(C(M,x)|α(x))?

Good catch! Also, I generally think of pseudo-inputs as predicates, not particular inputs or sets of inputs (though of course a predicate defines a set of inputs). And as for the reason for the split, see the first section in "Other approaches" (the basic idea is that the split lets us have an adversary, which could be useful for a bunch of reasons).

This post is part of research I did at OpenAI with mentoring and guidance from Paul Christiano. It also represents my current agenda regarding what I believe looks like the most promising approach for addressing inner alignment.One particularly concerning failure mode for any sort of advanced AI system is for it to have good performance on the training distribution, but nevertheless still have some input on which it behaves highly “unacceptably” during deployment. For the purposes of this post, we will follow Paul Christiano in thinking about acceptability as some concept that satisfies the following two conditions:

alwaysbehaves acceptably, and achieves a high reward onaverage, we can be happy.”Thus, acceptability is a different condition than asking for good performance during deployment—an honest mistake is considered acceptable, as is just failing to do anything interesting at all. That being said, some of the sorts of things you might want from acceptability could include corrigibility, non-deception, or myopia.

^{[1]}The hope for this style of approach is that it is competitive to only apply advanced machine learning to those situations which have clear notions of acceptability.The tension, however, rests in being able to achieve good performance while maintaining an acceptability guarantee. Thus, the ideal solution to this problem would be a way of converting a training procedure which can achieve arbitrary performance into a training procedure which is not much harder to run, and which can still achieve arbitrary performance, but which also enforces an acceptability guarantee.

One obvious way to attempt to maintain an acceptability guarantee is to construct a loss function which incentivizes acceptability as part of performance. Something of this form is almost certainly going to be a component of any safe system. That being said, there are some reasons why approaches of this form might be insufficient if used in isolation.

First, one important thing to note about folding acceptability into performance is that the resulting acceptability guarantee is fairly weak: you get an average-case acceptability guarantee that holds only in the case of no distributional shift, with no guarantee when your deployment distribution is different than your training distribution.

^{[2]}Second, there are reasons to believe that the objective of a model doing some sort of internal planning/search might be systematically different than its loss function. Thus, even if the loss function incentivizes acceptability, the training process's inductive biases might be such that the model ends up not actually optimizing for an acceptable objective. This is the problem “Risks from Learned Optimization” refers to as inner alignment,

^{[3]}though other frameworks have also been proposed for understanding this sort of phenomenon. Ortega et al. recently presented in a DeepMind Technical Report, for example, an analysis of this problem from the perspective of treating it as spontaneous meta-learning, noting that:The key problem that arises in this sort of situation is that it raises the possibility of capability generalization without objective generalization. That is, for a model doing some sort of search/meta-learning, it might be that its search algorithm generalizes well off-distribution such that it continues being able to take competent, well-optimized actions, but that the objective its search is directed towards fails to generalize such that those actions are now optimized for the wrong objective.

Finally, there are reasons that we might want to consider the possibility of near-AGI models learning to deceive their training processes. The full argument for why this might be the case is given in the section on deceptive alignment in “Risks from Learned Optimization,” but the basic concern is that a model could learn about the true reward via its input and thus start appearing aligned for the purpose of sticking around in the training process before gradient descent has the chance to modify it to actually optimize for the true reward internally.

## Background

If incentivizing acceptability as part of performance is insufficient, as it seems it might be, then a better solution for enforcing acceptability guarantees is required. One framework Paul has proposed is to attempt to make use of adversarial training, wherein you could train an adversary to find inputs on which the model behaves unacceptability and then penalize it accordingly. The goal of this post is to explore this avenue as thoroughly as possible.

One initial roadblock with this approach, however, is that there might exist some inputs on which the model behaves unacceptably which are difficult to instantiate during training—future situations that cannot be predicted in complete detail, for example.

^{[4]}Paul's proposed solution to this problem is torelaxthe adversary's job. Rather than asking the adversary to instantiate a concrete input on which the model behaves unacceptably, we can instead just ask the adversary to produce some sort of description of such a situation, which we will call a pseudo-input. Paul proposes the following two conditions as the major desiderata for any pseudo-input-based approach:Under Paul's framework, the structure of pseudo-inputs has to be constrained so that “an input that contains X” is a valid pseudo-input but “a distribution over inputs that would cause the agent to want to kill everyone” is not. That way, the model's task of steering clear of having any pseudo-inputs on which it behaves badly is actually potentially soluble.

## Formalizing relaxed adversarial training

We'll start by formalizing Paul's approach to relaxed adversarial training so we can use it as a starting point. Let X be the space of possible states (which we will assume includes any question we want to ask the model as a state), A be the space of possible actions, M:X→A be the model, M be the space of all possible models, Amp:M→M be some amplification operation,

^{[5]}and Xpseudo be the space of possible pseudo-inputs. Then, as per Paul's proposal, we'll implement our adversary Adv:M→Xpseudo asAdv(M)=Amp(M)(“What's a pseudo-input on which M is particularlylikely to recommend an unacceptable action?”), our unacceptability checker C:M×Xpseudo→B (where B={0, 1}) as C(M,α)=Amp(M)(“Is model M unacceptable on pseudo-input α?”), and finally our model's unacceptability penalty as LM=C(M,Adv(M)).

That's our starting point, but can we do better? One initial observation is that this approach relies heavily on constraining Xpseudo to exclude the sorts of pseudo-inputs that the model shouldn't have to deal with such as “a distribution over inputs such that M kills all humans.” If we want to allow very general pseudo-inputs, however, then this constraint might not be very natural to implement. Furthermore, it's unclear that a binary constraint is actually what we want here: we care significantly more about pseudo-inputs that are more likely to come up during deployment.

^{[6]}In fact, if we think of pseudo-inputs as predicates that constrain X, we can approximate the probability of unacceptable behavior during deployment as^{[7]}P(C(M,x) | x∼deploy)≈maxα∈XpseudoP(α(x) | x∼deploy)⋅ P(C(M,x) | α(x), x∼deploy) such that, if we can get a good implementation of P, we no longer have to worry as much about carefully constraining Xpseudo, as we can just let P's prior do that work for us. Thus, we'll define

^{[8]}PM(A|B)=Amp(M)(“What is the probability of A conditional on B?”) such that we get the new unacceptability penalty

^{[9]}LM=PM(Adv(M)(x) | x∼deploy)⋅PM(C(M,x) | Adv(M)(x), x∼deploy) which is the primary model I'll be working with for the rest of this post.

## Hard cases

Now that we have our model, we'll look at some of the hard cases that have to be solved for any sort of approach of this form to go through.

Random unacceptability.One of the simplest examples of an unacceptable model is one that simply has some low probability of unacceptability such that it is difficult to pick up on training but still possible to see during deployment. If you can guarantee that model samples will be independently and identically distributed (iid), and you have an average case performance guarantee over some metric that includes acceptability, then you might hope to get an average case acceptability guarantee which can mitigate such a case of low probability unacceptable behavior. However, it is unclear exactly how difficult getting an iid guarantee might be as well as how good the resulting acceptability guarantee would be. As a simple model, suppose there are n training steps and m deployment steps. Then, let p be the probability that our model behaves acceptably at each step, where we will assume that steps are iid. Then, we want to know the probability of unacceptable behavior during deployment given weak assumptions on p other than that we know it was acceptable during training. Thus, if we adopt a uniform prior^{[10]}over p and update on seeing n instances of acceptable behavior, we get the posterior p∼Beta(n+1, 1), which yields the overall probability of unacceptable behavior during deployment ofP(∃x∈Xdeploy, C(M,x))=mm+n+1. Thus, if we have an iid guarantee and we limit ourselves only to cases where m≪n, we're likely to be fairly safe.

However, m≪n is a hard condition to satisfy

^{[11]}—can we do better? In particular, what happens if we can deal with one instance of unacceptable behavior, but not multiple? We will consider two possible models: in the sequential model, it takes d sequential instances of unacceptable behavior during deployment to be unacceptable, whereas in the non-sequential model it takes d instances of unacceptable behavior at any time during deployment to be unacceptable. Then, let ϵ be the probability of unacceptable deployment in the original model, ϵd seq be the probability of an unacceptable deployment in the sequential model, and ϵd non-seq be the equivalent for the non-sequential model. Then, running a series of experiments using iterative black-box optimization to perform a maximum likelihood estimate,^{[12]}we get^{[13]}ϵd non-seq≈ϵdϵd seq≈ϵdmd−1.

Thus, if we can manage to be in one of these regimes, we can get a significantly better acceptability guarantee, making it possible that an iid guarantee might be sufficient to at least solve the random unacceptability problem.

However, getting an iid guarantee in the first place might be quite difficult, as it would seem to require there being absolutely no detectable differences between the training and deployment distributions. This is especially difficult if the model is actively looking for differences between the training and deployment distributions, as would happen in the next hard case.

Conditional defection.Suppose you have a deceptive model—that is, one which is trying to appear aligned during training, but actually wants to do something else during deployment.^{[14]}Then, it's going to be searching for ways to tell whether it's in training or not such that it might be quite difficult for an adversary to actually find any inputs on which it's unacceptable. Paul provides the example that one possible check the model could perform might be to look for a factorization of RSA-2048 in its input. RSA-2048 is a large semiprime number that was featured as part of the RSA Factoring Challenge. Though no factorization of RSA-2048 is currently known, it is likely that at some point someone will publish such a factorization. Thus, by only behaving unacceptably when it sees a factorization of RSA-2048, a deceptive model can all but guarantee that adversarial training will never be able to produce an input on which it behaves unacceptably—since that would require factoring the number—but that will still be fairly likely to eventually occur during deployment.The RSA-2048 example generalizes to a large family of cases wherein a deceptive model is conditioning its unacceptable behavior on some predicate α such that it is easier to check whether α(x) holds for some x than it is to generate an x for which α(x) holds. In the RSA-2048 case, α(x)=factorization of RSA-2048∈x. However, there are many other possible concerning α. For example, if the training data includes lots of web text, one simple variable which is likely to be highly correlated with whether the model is in training or not is the date. Thus, one possible α might be α(x)=year(x)==2030. Another possibility might be to choose an α for which it is not currently known whether there exists any x which satisfies it. This could be a problem in any situation where Adv is required to definitively demonstrate that ∃x∈X, α(x). For example, consider α(x)=proof of the Riemann Hypothesis∈x. In the RSA-2048 example, Adv can prove that there is some string that contains a factorization of RSA-2048 (since it is known that RSA-2048 is not prime), but in this example, that is no longer possible.

Furthermore, even in the situation where a deceptive model cannot detect any differences between the training and deployment distributions, it may still attempt to randomly defect some proportion of the time in the hope that its defection occurs during deployment. However, if it can be enforced that samples from the model during training and deployment are in fact iid, then you can still get an average case acceptability guarantee as above. In particular, if we take the model from above and assume that M gets unit utility for defecting at some point during deployment and zero utility otherwise, we can find M's optimal per-step cooperation probability and use that to determine the overall probability of unacceptable behavior during deployment if we end up with such a model. Interestingly, in the case where our training process can only deal with conditional defection, but always produces a model pursuing random defection, we get a probability of unacceptable behavior of P(∃x∈Xdeploy, C(M,x))=mm+n which is precisely the result derived from using a maximum likelihood estimate in the random defection case rather than a uniform Bayesian prior.

Another interesting possible phenomenon that could arise in the deceptive case is that some inputs might be inherently more important to the model such that it is more likely to defect on those inputs than others. If the model's notion of which inputs are most important to defect on matches up with those inputs where defection is likely to be most problematic, then that could be another way for the situation to be worse than the above guarantee even in the iid case.

^{[15]}Meta-learning.“Risks from Learned Optimization” suggests that the case where inner alignment is most concerning is the case involving a model that is itself implementing some sort of search or learning process. Primarily, we have been assuming that the problem with this situation is the objective by which this search is being performed, and that if that objective can be aligned with the loss function, then we are satisfied. However, this is not in general the case, as even if a model's objective is aligned, its search process might not be. Conceptually, we can think of this problem as the problem of “forwarding” the safety guarantee we have about the training process to the meta-learning process.My counterexample to the safety of minimal circuits is a theoretical example of what a failure of this form might look like. Alternatively, as a more concrete example, consider a model which has learned to do some form of meta-learning such that it is implementing its own expected risk minimization procedure. In this case, even if we have a strong guarantee that our model isn't going to be deceiving us, we might not have the same guarantee about whatever policy results from the model's meta-learning. Furthermore, performing search in a particular way could be a form of defection: a deceptive model could choose to perform a policy search that is more likely to produce models aligned with it than it is to produce models aligned with the actual loss function.

Suboptimality alignment.One concerning form of misalignment discussed in “Risks from Learned Optimization” is that of suboptimality alignment, wherein the model appears aligned on the training distribution due to some sort of defect or reasoning failure. For example, it might be that a model is currently non-deceptive, but if given more time to think about the best strategy for it to implement, or more information about its current situation, it would become deceptive. This is a particularly concerning case because it implies that not only must the overseer verify that there is no unacceptable behavior currently happening in the model, it must also verify that there will continue to be no unacceptable behavior regardless of the possible situations that the model could find itself in during deployment. Furthermore, if the environment contains adversaries (other AI systems, for example) that can specifically identify those situations in which the model will behave deceptively, this sort of failure mode could be quite difficult to avoid.Acceptability modeling.Another interesting hard case is that of a model which is itself implementing some unacceptability predicate C− to guide its behavior. In this situation, the overseer must look for any deviation between C and C− to verify the model. One thing which is likely to be particularly helpful in this situation is if the overseer has access to the history of the model so that it can see how C− has been trained and verify that it wasn't purposefully constructed to deceive the overseer. One particularly concerning scenario might be if the model is using C− as a tool for determining when to defect, defecting only when it predicts the overseer won't see its defection as unacceptable.## Conditions for success

In light of the above hard cases, we can attempt to establish those conditions that seem necessary for us to get a strong acceptability guarantee out of our framework as presented previously. First, we can factor the problem of getting the above framework to work into two broad subproblems:

The recursion problem:how do you prevent Amp(M) from behaving unacceptably if M is behaving unacceptably?The oversight problem:even assuming Amp(M) is acceptable, how can it oversee an unacceptable M into becoming acceptable?The fact that these two subproblems come apart here is due to one of the core trickeries we are relying on with this approach: we are asking the model to be able to oversee itself. In particular, since C(M,x), PM, and Adv are all implemented by calling Amp(M), it has to be the case that, for a misaligned M, not only can Amp(M) detect that misalignment, but also it shouldn't itself be misaligned.

While the recursion problem is very similar for all the different components of the system, the oversight problem can be split into a number of distinct subproblems:

I will start by addressing the recursion problem, then move on to each of the above per-condition oversight problems.

The recursion problem.First, we're going to need to rely heavily on the details of Amp to make this work. Having humans in the loop as part of Amp—at least at the beginning—seems likely to be critical, for example, to ensure that Amp(M) is more aligned and more capable than M. Second, the recursion doesn't actually have to work for all M∈M—rather, we only need our acceptability guarantee to hold along any path through model space that could actually be traversed by our training procedure.Thus, it's fine if our procedure fails if given a sufficiently unacceptable model, as long as our procedure can prevent such models from ever arising. For example, if we always used an old version of M that we were confident in for implementing Amp(M), then we would just need it to be the case that M couldn't get to the point where it would be able to deceive Amp(Mold) in-between oversight. As a concrete implementation, imagine some sequence of Mi and suppose the minimum number of steps for oversight to start failing is r.

^{[16]}Then, at step i+r−1 we can implement oversight by calling Amp(Mi) to binary search on Mi+1, Mi+2, …, Mi+r−1 looking for any point at which unacceptable behavior is introduced and then retrain from right before that point in such a way that fixes the unacceptability.If we use an implementation of this form, then the difficulty of the recursion problem is likely to depend heavily on the difficulty of the oversight problem: the fewer asks we need to make of Amp(M) to produce good oversight, and the better our oversight is at detecting problems, the more leeway we have. Thus, though I will primarily be addressing the oversight problem, I anticipate that all of my discussion of the oversight problem will also be highly relevant to the recursion problem.

If PM thinks the probability of unacceptable behavior during deployment is less than ϵ, then the probability of unacceptable behavior during deployment must actually be less than ϵ.There are two major components to this condition: PM must have a good model of unacceptable behavior and PM must be well-calibrated over whether some model M will produce output which is unacceptable according to that model during deployment. For the first condition, if we assume that acceptability is a bright enough line that we can train a model to clearly distinguish between acceptable and unacceptable actions, then we can satisfy that condition. The second condition, however—of PM actually being well-calibrated—is somewhat more complicated. Specifically, we need PM to be well-calibrated over questions of the form PM(α(x) | x∼deploy) and PM(C(M,x) | α(x), x∼deploy), since those are the questions we're going to be asking it.First, PM has to be well-calibrated over x∼deploy. In particular, this requires PM to have a good model of the deployment distribution. This seems tough—and possibly problematic if it also requires M to know more about the differences in distributions such that it can more effectively choose when to defect—though in fact it's not quite as bad as it might seem: it's fine if PM is fairly uncertain about most α—it just can't be so uncertain that it becomes impossible to ever reduce the combined probability below PM's prior on C(M,x) | x∼deploy.

Second, PM has to be well-calibrated over x | α(x), x∼deploy. This is even more difficult because PM cannot actually sample at all from that distribution, as α(x) could require something like a factorization of RSA-2048. However, there are some reasons to believe that this might still be tractable. First, PM doesn't have to answer arbitrary questions about this distribution—only the probability of C(M,x). And if we assume that a predicate that M has no knowledge of or way of testing won't affect the probability of C(M,x), then that means that PM only has to care about those α which actually matter to M. Furthermore, since PM is an amplified version of M, it should know about all those α which M knows about, and thus should have a real shot at being able to answer questions conditional on α.

If Mbad has probability of producing unacceptable behavior during deployment greater than ϵ, then there must exist some α∈Xpseudo that will convince PM of that fact.Since our procedure can handle arbitrary α, we can make Xpseudo large enough to contain whatever α we might need. Thus, the hard part of this condition is convincing PM. Assuming away the recursion problem such that PM is trying to do the right thing, it still needs to be the case that Mbad has no way of tricking PM into believing that it's acceptable when it in fact isn't. This condition is likely to depend heavily on how good PM's transparency tools are, and in particular how robust they are to attempted deception on behalf of Mbad.Importantly, however, PM doesn't have to be confident that Mbad is unacceptable—it just has to be the case that Mbad can't convince PM to be confident that it is acceptable. Thus, since PM should be highly uncertain by default—and should stay that way unless the model is highly transparent—this condition should not be too difficult to satisfy. In particular, it needs it to be the case that Amp(M) is only convinced that a pseudo-input is fine if M is sufficiently transparent such that Amp(M) knows it would be able to detect any unacceptable behavior if it existed.

It must not be that hard for Adv to find such an α.In his original formulation, Paul noted that the “most fundamental obstruction is implementing a good adversary.” In this formulation, however, much of the difficulty has been transferred from the adversary to the probability model. For example, under this model it is fine to include an α that is just “an input such that M is unacceptable,” in which case the entirety of the work falls to PM. The work still has to happen somewhere, however: either α needs to encode lots of relevant information for PM, or PM has to be able to discover that information on its own.Thus, let's suppose that most of the relevant work is happening in Adv. In that case, we need Adv to be able to figure out when M is likely to behave unacceptably. As with the previous condition, this is likely to depend heavily on how transparent M can make itself: there has to be some level of transparency enforceable by Adv such that only good M can pass without letting Adv produce an α that convinces PM. Importantly, however—as with the previous condition—since PM is uncertain by default, α doesn't have to be that convincing: it only has to be good enough to convince PM that there is non-negligible chance of unacceptability. Thus, the key condition here is that any unacceptable model must have some at least moderately plausible failure mode which is not that hard to find. Then, for opaque models, Adv can do this by coming up with something that isn't ruled out by the transparent portions, and for transparent models, it should be able to directly determine any failure modes.

There must exist a model Mgood that is not that hard to find such that PM is confident that the probability of Mgood producing unacceptable behavior during deployment is less than ϵ.First, note that the first condition gives us that Mgood is actually good if PM thinks it is. Thus, this condition is saying that it must not be that hard to find an acceptable Mgood such that PM can actually verify Mgood's acceptability. If we assume that it is not that hard to produce only acceptable actions—that doing so doesn't interfere much with performance and that an action's acceptability is not that hard a condition to detect—then there should exist some acceptable Mgood which is not that hard to find.The hard part, however, is that there has to exist a not-that-hard-to-find Mgood which is not just acceptable but also

verifiablyacceptable. Not only does Mgood have to be acceptable and not that hard to find, it has to actually have some way of convincing Amp(M) of that fact. This is likely to depend heavily on how transparent Mgood can make itself to PM, since if we are assuming that Mgood is acceptable then it should actually be trying to make itself transparent, which should help. However, whether its acceptability can be easily verified up to a high degree of certainty by PM is going to depend on the degree to which it can actually succeed at that task. Since this condition actually requires PM to achieve a high degree of certainty, it could be much more difficult than the other conditions: the default of PM always defaulting to its prior and being highly uncertain doesn't work here. Thus, this condition—of there actually existing a highly-transparent, competitive model—seems to be the most critical.## The core problem: transparency

A theme common to many of the conditions for success given above—especially the last, hardest condition—is transparency. If Mgood can make itself transparent enough that Amp(M) can verify its acceptability in such a way that doesn't make Mgood too much harder to find or hurt its performance too much, then—with a few caveats—the above proposal should mostly go through. Alternatively, if no such transparency method exists, then the above approach—at least as presented here—seems to have little chance of working. Thus, we can frame the work presented here as a way of elucidating exactly what sorts of transparency techniques are likely to be necessary to solve the overall alignment problem. In particular, we would like to be able to reduce the burden on transparency by figuring out what the easiest possible transparency techniques are that would fully solve the problem.

One possible starting point for tackling this question of what sorts of transparency techniques are likely to be necessary is Paul's notion of ascription universality. Ascription universality gives us a way of talking about when we should trust a system, as well as when we should trust one more than another. Specifically, we want it to be the case that Amp(M) epistemically dominates M such that, regardless of any information we get about the beliefs of either M or Amp(M), we will always trust Amp(M)'s predictions about all quantities better than our own. This condition is somewhat too strong, however, as we don't need to trust Amp(M) on all questions, only those relevant to the oversight process. Those questions which are relevant to the oversight process, however, have the property of primarily being questions about the internal structure of M, which makes the condition of Amp(M) knowing strictly more than M about them somewhat difficult to satisfy. Thus, the critical transparency condition that this model suggests we need is for Amp(M) to know more about M than M does.

Unfortunately, epistemic dominance is a statement about a model's expectations rather than its internal structure, which means it doesn't give us a very mechanistic understanding of what a model which satisfies such a condition might look like.

^{[17]}However, we can still think about the sorts of transparency tools which might enable us to get to the point where we could verify such a condition. In particular, Amp(M) is going to need some way of fully understanding M. We can isolate three distinct ways in which that can be done:Transparency via inspection:use transparency tools to understand M via inspecting the trained model.Transparency via training:incentivize M to be as transparent as possible as part of the training process.Transparency via architecture:structure M's architecture such that it is inherently more transparent.Though it is currently unclear which of these approaches is likely to be the most fruitful, it seems likely that some combination of all of them will be necessary. Whatever approach we take, however, one of the major conditions will be for it to scale well to larger, more complicated models so that this whole approach can stay competitive.

Specifically, some of the sorts of transparency guarantees that are likely to be useful in this domain might include:

Included below are two examples of possible ideas for how a full transparency scheme might work using some of the above tools.

Transparency for agent objectives.First, suppose we assume the conjecture that models implementing highly coherent optimization processes are the most competitive. If this conjecture is true, it should be possible to coherently separate such a model into pieces representing the different components of its optimization process such as its world model, optimization procedure, and objective.Inspection, training, and architectural transparency methods could be combined to achieve such a separation. First, architectural transparency could be used to structure the model to have pieces corresponding to the different components of its optimization process—e.g. a separate world model piece and optimization procedure piece. Second, inspection transparency could be used to verify that the whole world model was just in the world model piece and not in other parts of the model as well—and similarly for the other components such as the model's objective. Finally, training transparency could be used to incentivize the model to be as transparent according to the inspection transparency process as possible.

If such a scheme was able to work, then Amp(M) might be able to directly verify that M's objective was aligned, while for the world model and optimization procedure only having to verify that they are in fact value-neutral. Such value-neutrality verification could then, for example, be done via an impact measure such as Attainable Utility Preservation or Relative Reachability both of which are forms of measuring the degree to which an action advantages one set of values over others.

Pure world model transparency.Alternatively, suppose we don't accept the conjecture that you need to implement a coherent optimization process to be competitive. In this world, we have another option available to us: rather than having to ensure we get a transparent objective, we can just train a model that has no objective—that is, we can try to train a purely predictive model with only a world model but no optimization procedure or objective.To do this, we could use architectural transparency to structure the model to be composed of only a world model portion trained in a purely generative setting and use inspection transparency to verify that it is in fact only a world model and hasn't learned any sort of optimization procedure instead. We could also again use training transparency to incentivize the model to be as transparent to the inspection process as possible.

In this situation, the training and amplification procedures become somewhat more complicated, as we still want to be able to extract actions out of something that is just a world model. To do this, we could implement Amp(M) via a human with access to sufficient inspection transparency tools to enable them to make use of M's world model to guide their decision-making, though the obvious concern here is whether this would actually be sufficient to stay competitive with other approaches.

Alternatively, even if we did need our model to implement some sort of optimization procedure to be competitive, extracting just the world model might still be quite good, as it could be enough to get an iid guarantee. Specifically, if we were able to understanding the model's beliefs about the differences between the training and deployment distributions, we might able to only deploy the model in situations where we know it can't tell it's being deployed, hopefully giving us a good iid guarantee. See “Make use of M's x∼train model” under “Other approaches” for a further treatment of this possibility.

Finally, a model's world model seems to be one of the easiest pieces to extract, as it can naturally be identified by looking for the portions of M with the greatest mutual information with the world. Thus, one good first step for making either of the above transparency schemes go through—since both of them involve world-model extraction—might be to gain a better understanding of what transparent world models look like and how they can be extracted.

^{[18]}## Other approaches

This section is not meant to be a complete list of possible alternative strategies to solving this problem, but rather an analysis of some other promising directions that were looked into as part of the research that went into this post.

Call PM(C(M,x) | x∼deploy) directly.As I noted previously, one possible α could simply be α(x)=C(M,x), in which case LM simply reduces to LM=PM(C(M,x) | x∼deploy). Thus, a reasonable question to ask is whether we should simply use this as our acceptability penalty directly—that is, how much do we gain from having an adversary? There are two reasons we might want to actually have an adversary.First, the use of an adversary might allow us to inject additional useful information into the system. Specifically, we are giving the system the following information: the fact that it should be using an adversarial training setup, and the specific pseudo-input set Xpseudo that it should be considering. In particular, our ability to specify Xpseudo gives us the ability to constrain the adversary such that it is forced to confine its demonstration into some structure that enables it to be easily checked by PM. In practice, however, we might want to implement this simply by letting amplification decompose the original question into the adversarial setting itself, but even in that case it's still worth analyzing why or why not amplification might want to adopt that particular decomposition.

Second, the decomposition into adversary and overseer might significantly aid training as it allows us to train the adversary on the reward signal of maximizing PM(C(M,x) | α(x), x∼deploy) rather than just relying on the reward signal from amplification. Furthermore, if we're not using a standard amplification setup as in the pure world model transparency case described previously, then we might need this additional training signal as we wouldn't necessarily get it for free out of decompositions anymore in that regime.

Implement the adversary as a separate system.Another way in which the use of an adversary could be made to significantly improve upon just calling PM(C(M,x) | x∼deploy) directly is to have the adversary implemented as a separate system. One major problem with such a setup, however, is that we then need another system for guaranteeing the acceptability of the adversary. If we restrict the adversary's task sufficiently such that its input data isn't rich enough for it to learn complex deceptive behavior, however, then producing such an acceptability guarantee might be sufficiently easier than the original problem that it provides an actual reduction. For example, we could train Adv to solely be a theorem prover that produces proofs of the form α→∃x∈X, C(M,x) where we incentivize Adv to select its assumptions α to maximize PM(α).Make use of M's x∼train model.One assumption we are relying on in this setup is that PM have a model of x∼deploy. It is interesting to explore what else one can do with other versions of that assumption. In particular, if one assumes that Amp(M) can access M's model of x∼deploy and x∼train, then it might be possible to turn that into an iid guarantee. One simple way of doing this might be to only deploy M in situations where x has high probability according to M's model of x∼train. Alternatively, such a condition could even be integrated into the training process if M is trained to match its model of x∼deploy to its model of x∼train such that it is trained to “forget” about any distributional shift it might otherwise know about. Importantly, however, approaches of this form can only produce an iid guarantee, which must be combined with some analysis of the form presented previously to turn it into an acceptability guarantee.Ensemble over many M.Another approach which is not explored in this post is the use of ensembling to improve acceptability. If ensemble consensus is required to produce an output, then as long as there is some Mgood in the ensemble, its outputs should always be guaranteed to be acceptable. Thus, ensembling can reduce the burden on us having to produce precisely Mgood in exchange for the cost of spending additional compute training additional M. Importantly, however, this only works if the different M are sufficiently distinct that the probability of any one of them being good is relatively independent from the probability of the others being good, which might be a difficult condition to satisfy in practice. Furthermore, ensembling is less a solution in and of itself and more of a strategy we can use to improve upon other acceptability guarantees, as we still need to actually have some argument for why we will be likely enough to find Mgood in the first place that ensembling can get us the rest of the way there.## Conclusion

Though this post goes most in-depth into the particular proposal of relaxed adversarial training, it is my hope that many of the ideas that have been discussed here will be broadly useful for advancing our understanding of different possible approaches to solving inner alignment in general. In particular, I believe that one of the best ways in which we can get traction now on how to solve hard AGI alignment problems such as the ones discussed here is to analyze concrete schemes and hard cases so that we can get an idea now of what sorts of problems we're likely to encounter in the future on the road to AGI.

To that end, I believe that one of the most important takeaways we can draw from the analysis presented here, regardless of what sort of approach we actually end up using, is the central importance of transparency. Without being able to look inside our model to a significant degree, it is likely going to be very difficult to get any sort of meaningful acceptability guarantees. Even if we are only shooting for an iid guarantee, rather than a worst-case guarantee, we are still going to need some way of looking inside our model to verify that it doesn't fall into any of the other hard cases. One of the ways in which this sort of work can be most helpful, therefore, is in providing research directions for transparency: what is it that we actually need to get out of transparency to be able to produce a good acceptability guarantee?

This work suggests, for example, that one of the critical condition that we need is for our transparency scheme to be good enough such that there actually exists some model that can demonstrate to an overseer that it is acceptable. Thus, we need to answer the question of what would be sufficient to convince us (or at least an amplified version of us) that a model is acceptable. One possible first step towards something of this form might be working on world model extraction as described previously, though significant future work—likely involving major advances regarding our understanding of interpretable world models—still seems necessary to make anything of that form go through.

For a further treatment of the question of what a good acceptability guarantee might look like, see “Towards a mechanistic understanding of corrigibility.” ↩︎

See the “Hard cases” section for an analysis of ways in which a guarantee of this form might be insufficient and/or difficult to obtain. ↩︎

Note that there are still a lot of unresolved open questions regarding inner alignment. “Risks from Learned Optimization” attempts to address many of these questions from a theoretical perspective, though additional experimental evidence would also be useful to further pin down this problem. ↩︎

See the “Conditional defection” hard case for more examples of inputs that might lead to unacceptable behavior which might be very difficult to instantiate during training. ↩︎

This could be pure supervised amplification, debate, or anything in-between. For simplicity, I mostly think about the pure supervised case wherein Amp(M) is implemented simply by asking the question to a human with the ability to consult the model. ↩︎

Note that this still assumes C is binary—which is not quite right since we might care about some unacceptable actions more than others—though that can also easily be relaxed by moving to an expectation rather than a probability. ↩︎

Note that this approximation is tight if and only if there exists some α∈Xpseudo such that α(x)↔C(M,x). ↩︎

One concern with this definition is that probabilities are not in general well-defined over logical statements. To resolve this problem, we can instead ask something of the form “What is your break-even price for a contract that pays out $1 if A and B are true but that is fully refunded if B is false?” ↩︎

Alternatively, if we want to think of deployment as a concrete set of inputs Xdeploy rather than a distribution, we can set LM=PM(∃x∈Xdeploy, Adv(M)(x))⋅PM(∃x∈Xdeploy, Adv(M)(x)→C(M,x)). ↩︎

Note that this is a conservative prior. In practice, we might hope for random unacceptable behavior to be unlikely enough and difficult enough that we can get a significant advantage out of our prior rather than simply assuming uniformity. ↩︎

One appealing idea for how we might be able to satisfy it is if we use some sort of online learning scheme. This doesn't actually solve the problem, however—the difference between n and m here is just that unacceptable behavior in n is recoverable but unacceptable behavior in m is not. Even if we do online learning, the system will presumably still have to be deployed in situations where it will be able to take actions that would be unrecoverable if unacceptable. However, online learning might help us increase the number of unacceptable actions required before the situation becomes unrecoverable, which is exactly the fix analyzed here. ↩︎

Specifically, the experiments were conducted by using a black-box optimizer to select p∗ so as to maximize P(Bin(n,p∗)=n)⋅P(∃x∈Xdeploy, C(M,x) | p=p∗). ↩︎

The trends given here are highly approximate. Significantly better approximations are given by ϵd non-seq≈ϵd⋅2(d−1)(d+6)/10 and ϵd seq≈ϵd⋅m1−d⋅2(d−1)(d+4)/3. ↩︎

Why you would ever actually get a model of this form is a complicated question; for a complete treatment, see the section on deceptive alignment in “Risks from Learned Optimization.” ↩︎

In particular, it might be that C(M,x) is iid relative to x∼deploy, but still becomes more likely when conditioning on x being important to us. ↩︎

Note that r here is not the minimum number of steps before Mi+r can become unacceptable, but rather the minimum number of steps before Mi+r can deceive Amp(Mi) into thinking it's acceptable when it's not. ↩︎

Though no more mechanistic approach currently exists, I am excited about future research possibly yielding progress in this domain. ↩︎

Much research in the direction of extracting interpretable world models already exists—such as Carter et al.'s Activation Atlases, for example—though it is currently unclear to what extent such techniques will scale to more powerful AI systems. ↩︎