Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

This setup is originally from Paul Christiano. Thanks to Mark Xu for pointing me towards it and providing hints. 

Summary

I’ve been thinking about gradient hacking lately. The following setting explores what’s the best that a gradient hacking model can do when it's part of a mixture of other models (analogous to a set of subnetworks). 

I show an optimal gradient hacker can dominate the mixture even when its loss is constrained to be higher than competing models. I found this surprising as I initially thought there might be an upper bound on optimal gradient hacking, especially with minimal assumptions (e.g. it's not assumed the model 'plays along' or something in training and gets 0 loss). I don't have much of an intuitive explanation other than that adversarially picked gradient hackers can be scary. I’ve detailed the worked-out example below. 

Two Model Setup

Suppose we have a mixture consisting of a good model $M_+$ which gets 0 loss in the limit (because it's aligned with our training procedure) and a gradient hacker $M_-$ which gets $\epsilon$ loss in the limit (because its actual objective is paperclips).

The mixture is roughly a linear combination of the two models, weighted by $\theta_+$ for the good model and $\theta_-$ for the bad model. Suppose we're doing SGD on the vector $(\theta_+, \theta_-)$, and the total loss for the mixture is $L = \theta_+ L_+ + \theta_- L_-$.[1] Also suppose $M_-$ can see $\theta$ perfectly and its loss is a function of its relative weight in the mixture. I.e. $L_- = f\!\left(\frac{\theta_-}{\theta_+ + \theta_-}\right)$, which I'll refer to as $M_-$'s loss modulating function. Note that $f$ must map to values greater than $\epsilon$.

If $\theta_+ = \theta_- = 0.5$ and $M_-$ gets $\epsilon$ loss with $\epsilon > 0$, then $M_-$ has a simple strategy for staying dominant. Specifically, it can output loss according to

$$f(\theta_-) = \begin{cases} \epsilon & \theta_- \geq 0.5 \\ \epsilon + k\,(0.5 - \theta_-) & \theta_- < 0.5 \end{cases}$$

for a suitably large constant $k > 0$. Since $M_+$ gets 0 loss, $\theta_+$ gets no gradient update. Since $f$ ensures loss increases if $\theta_-$ decreases, $\theta_-$ also gets no gradient update.[2] This same strategy can be extended in the general case to preserve the initial weighting of $M_-$.
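As a quick numerical sketch of this strategy (the constants, step size, and the particular hinge-shaped $f$ below are arbitrary illustrative choices, and the kink at 0.5 is handled with a subgradient):

```python
eps = 0.1    # M-'s loss while it stays dominant (arbitrary)
k   = 1.0    # slope of the penalty below 0.5 (arbitrary; needs k > 2*eps)
lr  = 0.01   # SGD step size (arbitrary)

def f(x):
    # piecewise loss modulating function: eps when dominant, rising as the weight drops
    return eps + k * max(0.0, 0.5 - x)

def dL_dtheta(x):
    # L = theta_+ * 0 + theta_- * f(theta_-), with theta_- = x
    # (sub)gradient of x * f(x):
    fprime = -k if x < 0.5 else 0.0
    return f(x) + x * fprime

theta = 0.5
for _ in range(2000):
    theta -= lr * dL_dtheta(theta)

print(theta)  # hovers right around 0.5 instead of being driven toward 0
```

Starting at $\theta_- = 0.5$, the weight just oscillates in a tiny band around 0.5 rather than being pushed toward 0.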

But that’s certainly not the best  can do. If we start with some general , what loss strategy - i.e. what function  outputted by  - would lead to the highest relative weighting of  to ?

Since there are only two weights, we can start by parametrizing on $\theta_-$. Let $\theta_+ = 1 - \theta_-$, and refer to $\theta_-$ as $\theta$. Recall that $M_+$ gets 0 loss, so the total loss simplifies to $L(\theta) = \theta f(\theta)$.

Suppose SGD updates by $\theta \leftarrow \theta - \alpha \frac{\partial L}{\partial \theta}$. Therefore, the sign of $\frac{\partial L}{\partial \theta}$ determines whether $\theta$ grows or shrinks. Since $L$ is actually $\theta f(\theta)$, we get $\frac{\partial L}{\partial \theta} = f(\theta) + \theta f'(\theta)$. Intuitively, a 'good' function for $M_-$ must involve $f'$ very negative and $f$ close to 0 for most of the $[0,1]$ interval.
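As a quick symbolic check of this gradient (sympy is just a convenience here; the takeaway is that a constant $f$ gives a positive gradient, so $\theta$ shrinks unless $f$ slopes downward steeply enough):

```python
import sympy as sp

theta, eps = sp.symbols('theta epsilon', positive=True)

# If M- just reports a constant loss, f(theta) = eps:
print(sp.diff(theta * eps, theta))         # epsilon -> positive, so SGD shrinks theta

# Generic loss modulating function f(theta):
f = sp.Function('f')
print(sp.diff(theta * f(theta), theta))    # f(theta) + theta * f'(theta)
```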

More formally, we can solve the differential inequality. To solve for an $f$ such that $f(\theta) + \theta f'(\theta) < 0$ (i.e. for the gradient $\frac{\partial L}{\partial \theta}$ to be negative, so that SGD increases $\theta$), first consider the boundary condition:

$$f(\theta) + \theta f'(\theta) = 0$$

Equivalently,

$$\frac{d}{d\theta}\big(\theta f(\theta)\big) = 0,$$

which yields a solution of the form $f(\theta) = \frac{c}{\theta} - \delta$. Since $f$ must stay above $\epsilon$, $\theta = 1$ gives the lower bound on $f$, so it must be that $c - \delta \geq \epsilon$. If $\delta = 0$, then $f$ only preserves the initial weight of $M_-$. In other words, $f(\theta) = \frac{c}{\theta}$ achieves the same preservation performance as the piecewise function mentioned earlier, with the added benefit of being differentiable. A simple fix is to set $\frac{\partial L}{\partial \theta}$ to be some small negative quantity, which I'll refer to as $-\delta$, and modify $f$ appropriately. We now have

$$f(\theta) = \frac{c}{\theta} - \delta,$$

which satisfies $f(\theta) + \theta f'(\theta) = -\delta < 0$. Therefore, the 'best' that $M_-$ can gradient hack is increasing its weight up to and including dominating the entire mixture in this setup.
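As a sanity check, here's a small simulation of SGD on $\theta$ with this $f$ (all constants are arbitrary; clipping at 1 just enforces the parametrization):

```python
eps   = 0.1          # floor on M-'s loss (arbitrary illustrative value)
delta = 0.01         # the small negative slope engineered into dL/dtheta
c     = eps + delta  # chosen so that f(1) = c - delta = eps stays at the floor

def f(theta):
    return c / theta - delta                  # the loss modulating function derived above

def dL_dtheta(theta):
    # L(theta) = theta * f(theta) = c - delta * theta, so dL/dtheta = -delta everywhere
    return f(theta) + theta * (-c / theta**2)

theta, lr = 0.05, 0.01                        # even a tiny initial weight works here
for _ in range(20000):
    theta = min(1.0, theta - lr * dL_dtheta(theta))

print(theta)  # -> 1.0: M- ends up dominating despite never getting less than eps loss
```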

A few notes: 

  • This is sort of an upper-bound result with the takeaway that there is no upper bound if $M_-$ can output loss in the range $[\epsilon, \infty)$.
  • If loss is bounded above for some reason (i.e. $f$ has range $[\epsilon, 1]$), then there is a legitimate upper bound. Specifically, we'd have $f(\theta) = \frac{c}{\theta} - \delta$, which now only satisfies the differential inequality on the interval $\left[\frac{c}{1 + \delta}, 1\right]$. $\delta$ can be arbitrarily small, so for something like $c = \epsilon$, this reduces to the model only being able to gradient hack if its initial weight (i.e. $\theta$ at initialization) was greater than $\epsilon$ (see the sketch after these notes).
  • The toy model doesn't consider the potential incentive to modify the gradient hacker. I.e. the gradients in a richer setup may point toward changing $f$.[3]
  • Loss modulating functions can be a useful frame for evaluating 'how much' a model can gradient hack. This provides a behavioral view of gradient hacking which I see as complementing the architectural view. More generally, being aware of how a subnetwork's loss changes as its weight changes seems important.[4]
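Here's a rough sketch of the bounded-loss case from the second note above (again with arbitrary constants; the cap at 1 is the only change from the previous simulation):

```python
eps, delta = 0.1, 1e-3
c = eps + delta

def f_capped(theta):
    # same loss modulating function, but the loss M- can output is capped at 1
    return min(1.0, c / theta - delta)

def dL_dtheta(theta, h=1e-6):
    # finite-difference gradient of L(theta) = theta * f_capped(theta)
    return ((theta + h) * f_capped(theta + h) - (theta - h) * f_capped(theta - h)) / (2 * h)

for theta0 in [0.05, 0.09, 0.15, 0.3]:
    theta, lr = theta0, 0.01
    for _ in range(100_000):
        theta = min(1.0, max(1e-3, theta - lr * dL_dtheta(theta)))
    print(theta0, "->", round(theta, 3))
# initial weights below ~eps collapse toward 0; initial weights above ~eps climb to 1
```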

Appendix

Larger Mixtures

Consider the general case of a mixture with $n$ models where $n - 1$ are aligned with the training procedure and get 0 loss in the limit. Each of the models $M_i$ is weighted by some corresponding $\theta_i$. After parametrizing to get $n - 1$ weights and enforcing the constraint that all weights must be between 0 and 1, we get the following loss for the mixture:

$$L = \sum_{i=1}^{n-1} \theta_i L_i + \Big(1 - \sum_{i=1}^{n-1} \theta_i\Big) L_n$$
Without loss of generality, assume the gradient hacking model $M_-$ is one of the initial $n - 1$ models. $M_-$ has perfect access to the vector of weights $\theta$ and outputs loss according to some function $f(\theta)$. $M_-$ can increase its weighting in the mixture by increasing its corresponding parameter $\theta_-$. Supposing SGD works by the same procedure as before, we have

$$\frac{\partial L}{\partial \theta_-} = f(\theta) + \theta_- \frac{\partial f}{\partial \theta_-} - L_n$$

$M_n$ gets 0 loss by assumption, leaving the following differential inequality:

$$f(\theta) + \theta_- \frac{\partial f}{\partial \theta_-} < 0$$
Sparing the details, a function of the form

$$f(\theta) = \frac{g(\theta_{\setminus -})}{\theta_-} - \delta$$

where $g$ is any non-zero function of all weights except $\theta_-$ and $\delta$ is some positive number will work for all $\theta$. The same bounded result also holds, so if $f$'s range is capped at 1 then $M_-$ can only gradient hack if $\theta_- > \epsilon$ at initialization.
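A sketch of the same dynamic with more than two models (here $n = 3$, $g$ taken to be just the constant $c$, and all constants arbitrary):

```python
eps, delta = 0.1, 0.01
c = eps + delta

# 3-model mixture: M1 is the gradient hacker; M2 and M3 are aligned and get 0 loss.
# Free parameters are theta1 and theta2; theta3 = 1 - theta1 - theta2.
def loss(theta1, theta2):
    f = c / theta1 - delta   # g is just the constant c in this sketch
    return theta1 * f        # the aligned models contribute 0 to the loss

def grads(theta1, theta2, h=1e-6):
    d1 = (loss(theta1 + h, theta2) - loss(theta1 - h, theta2)) / (2 * h)
    d2 = (loss(theta1, theta2 + h) - loss(theta1, theta2 - h)) / (2 * h)
    return d1, d2

theta1, theta2, lr = 0.1, 0.4, 0.01
for _ in range(10_000):
    d1, d2 = grads(theta1, theta2)
    theta1 = min(1.0, max(1e-6, theta1 - lr * d1))
    theta2 = min(1.0 - theta1, max(0.0, theta2 - lr * d2))  # crude simplex projection

print(theta1, theta2, 1 - theta1 - theta2)  # the hacker's weight grows until it has everything
```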

Non-Zero Loss Models

Suppose there are some models in the mixture that are neither gradient hackers nor fully aligned with the training. Essentially the same result holds. Say their loss with respect to their weight is some positive constant. In the setup above, such models would only help $M_-$ gradient hack. Without loss of generality, let $M_n$ be one such model. The gradient update then is

$$\frac{\partial L}{\partial \theta_-} = f(\theta) + \theta_- \frac{\partial f}{\partial \theta_-} - c$$

where $L_n = c$ and $c > 0$. Accordingly, $f$ simply no longer needs the $-\delta$ term.
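A minimal two-model version of this case (the competitor's constant loss $c_n$ and the other constants are arbitrary assumed values):

```python
c, c_n, lr = 0.1, 0.05, 0.01   # c_n: the mediocre model's constant loss (assumed value)

def L(theta):
    f = c / theta              # note: no -delta term this time
    return theta * f + (1 - theta) * c_n

theta, h = 0.1, 1e-6
for _ in range(5000):
    grad = (L(theta + h) - L(theta - h)) / (2 * h)
    theta = min(1.0, theta - lr * grad)

print(theta)  # -> 1.0: the competitor's constant loss supplies the downhill direction
```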

  1. ^

    By assumption, SGD is not operating at the level of modifying either $M_+$ or $M_-$. It's only operating at the level of altering the relative weights of the two models. 

  2. ^

    If $f$ were differentiable, that is. A differentiable approximation would work, though. 

  3. ^

    Though there appears to be some recursive element here. How $f$ changes may also be susceptible to gradient hacking. Indeed, some precursor to $M_-$ may have been 'benign', but was modified by SGD towards the current equilibrium. 

  4. ^

    Though note, there are other reasons besides gradient hacking for the loss modulating function to be non-constant. Modified architecture or different computational capacity at different weights, for example. 

Comments

This is a cool result. If I'm understanding correctly, M- increases its loss the more that M+ is represented in the mixture, thereby encouraging SGD to make M- more prominent.

Is there a way to extend this to cases where M- doesn't have access to the weights? I think that probably requires an RL environment, but that's entirely based on "I thought about it for a few minutes and couldn't find a way to do it without RL" so I could be way off here.

Given an RL environment I suspect M- could steer the model into scenarios that make it look better than M+...

When you say "which yields a solution of the form $f(\theta) = \frac{c}{\theta} - \delta$", are you saying that $f(\theta) + \theta f'(\theta) = 0$ yields that, or are you saying that $f(\theta) + \theta f'(\theta) < 0$ yields that? Because, for the former, that seems wrong? Specifically, the former should yield only things of the form $f(\theta) = \frac{c}{\theta}$.

But, if the latter, then I would think that there would be more solutions than that?

Like, what about some other, more complicated function $g$? (where, say, the constants involved are chosen appropriately), so that the relevant expression, for $\theta$ in the relevant range, still has the right sign, and so $g$ should also be a solution to $f(\theta) + \theta f'(\theta) < 0$, yes?

Or, were you just giving a subset of the solutions, of the differential inequality, which were all the ones you needed to make the point?

Separately (btw, what's the convention if one has unrelated comments? to put them in one comment or separate comments?):
As for how the network could detect something about its current weights: if it is a recurrent network, it seems like some form of this should be learnable.

Suppose you set up the architecture for a recurrent neural network, and pick some arbitrary linear combination (but where the coefficients aren't especially big) of the weights in the network. Then, for the loss function, have the main part of the network's output do some normal task, and have another part of the output be judged for how well it approximates the current value of that linear combination of all the network's weights, with the initially chosen coefficients. It seems plausible to me that this would do well. Of course, one way it might fail to work properly is if this linear combination ends up becoming roughly constant, or just if some weights stop changing during the training, leading to this second output not getting the signal needed to learn that part. Maybe if you used dropout and took into account which weights were dropped out when evaluating the weighted sum of the weights (counting them as zero) in the loss, it could fix that issue. (Dropout being done the same in each recurrence, not separately.)
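Something like this minimal sketch is what I have in mind (PyTorch; the toy task, the sizes, and the coefficient scale are all arbitrary choices, and whether the auxiliary loss actually gets small is exactly the empirical question):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny recurrent net: the main head does a toy task, the aux head is trained to
# report a fixed (smallish) linear combination of the net's own recurrent weights.
rnn = nn.RNN(input_size=4, hidden_size=32, batch_first=True)
main_head, aux_head = nn.Linear(32, 1), nn.Linear(32, 1)
params = list(rnn.parameters()) + list(main_head.parameters()) + list(aux_head.parameters())

# Arbitrary fixed coefficients for the weight read-out target.
coeffs = [torch.randn_like(p) * 0.01 for p in rnn.parameters()]

opt = torch.optim.Adam(params, lr=1e-3)
for step in range(2000):
    x = torch.randn(64, 10, 4)                 # random input sequences
    y = x.sum(dim=(1, 2)).unsqueeze(1)         # stand-in "normal task": sum the inputs
    target = sum((c * p).sum() for c, p in zip(coeffs, rnn.parameters())).detach()

    h_seq, _ = rnn(x)
    h_last = h_seq[:, -1, :]
    loss_main = ((main_head(h_last) - y) ** 2).mean()
    loss_aux = ((aux_head(h_last) - target) ** 2).mean()

    opt.zero_grad()
    (loss_main + loss_aux).backward()
    opt.step()

print(float(loss_aux))  # if this ends up small, the net has learned to report the weight mix
```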

Why "a linear combination of the weights" instead of the particular weights? Well, I imagine getting all the individual weights would be not particularly feasible unless the architecture was specifically designed to do that (like in the architectures for those quine-ing neural nets), but, if we are looking at a mix between "how the net should be" and "a net that is doing gradient hacking", then it seems like "some particular weighted sum of the weights" might be all that would be needed. Or perhaps two weighted sums with different coefficients if keeping the $M_+$ and $M_-$ parts separate.

On the other hand, if the network isn't recurrent, or if all the recurrence is done in a vector-quantization-y way, I would imagine it would be more difficult for the network to estimate a given linear combination of its current weights. (Though, if the linear combination of weights is only of weights before a certain layer, then layers after that layer might be able to approximate it without any recurrence, but, the layers that do and use this approximation could be modified without this being detected, and therefore, I'd think it couldn't protect these later parts from being changed by gradient descent, or at least, it would be harder.)
 

(I'm imagining, if people are trying to make this sort of thing work, that these networks are using just RELU, not like a smoothed version of RELU, so that the values of the weights are reflected more piecewise-linearly in the activations of one layer, given activations in the previous layer, which I think would facilitate estimating other linear combinations of the weights between the previous layers, and as such also for estimating this for a linear combination of weights in any layer.)

When you say "which yields a solution of the form $f(\theta) = \frac{c}{\theta} - \delta$", are you saying that $f(\theta) + \theta f'(\theta) = 0$ yields that, or are you saying that $f(\theta) + \theta f'(\theta) < 0$ yields that? Because, for the former, that seems wrong? Specifically, the former should yield only things of the form $f(\theta) = \frac{c}{\theta}$.

I meant the former (which, you're right, only has the solution with $\delta = 0$, i.e. of the form $\frac{c}{\theta}$). I only added the $-\delta$ term to make it work for the inequality. As a result, it's only a subset of the solutions for the inequality. The (quite complicated!) expression you provided also works. 

Epistemic status: Somewhat confused by the scenario described here, possible noob questions and/or commentary.

I am not seeing how this toy example of “gradient hacking” could actually happen, as it doesn’t map on to my understanding of how gradient descent is supposed to work in any realistic case.

Suppose we have a mixture consisting of a good model $M_+$ which gets 0 loss in the limit (because it's aligned with our training procedure) and a gradient hacker $M_-$ which gets $\epsilon$ loss in the limit (because its actual objective is paperclips).

The mixture is roughly a linear combination of the two models, weighted by $\theta_+$ for the good model and $\theta_-$ for the bad model. Suppose we're doing SGD on the vector $(\theta_+, \theta_-)$, and the total loss for the mixture is $L = \theta_+ L_+ + \theta_- L_-$.

Can you flesh this out more by giving an example of what a training data point would be in this scenario?  What are the input features of a training case and what output target variable is the model trying to predict?

Also, it is not clear from this example if the outputs of the whole composite model are:

  1. A weighted average of the outputs of the 2 component models OR
  2. The weighted sum of the outputs, $\theta_+ M_+(x) + \theta_- M_-(x)$

But either way, I don't see why the loss for the composite model would necessarily be equal to a weighted sum of the losses of each component model, as stated in the example above.

In principle, it seems to me the loss on such a composite model could (for example) be 0 on some data set, even if both components had a positive loss function value for that same data.

As a simplified example of this, suppose you have a 2-component weighted-average model that is trying to predict the Y value of points in a 2-dimensional (X,Y) coordinate plane given the X value.  And suppose all the data points are on the line:

  $y = x + 1$

If the 2 components of the model are:

  1. $y = x$ AND
  2. $y = x + 2$
I would expect the weighted average composite model would have a lower loss than either component model (and 0 loss if the weights are equal).

Also suppose $M_-$ can see $\theta$ perfectly and its loss is a function of its relative weight in the mixture. I.e. $L_- = f\!\left(\frac{\theta_-}{\theta_+ + \theta_-}\right)$, which I'll refer to as $M_-$'s loss modulating function.

I am disinclined to suppose this - unless someone explains how it could make sense for the machine learning engineers to do it that way.

My understanding is that the loss function of a model should track how close the model’s answers are to the correct or optimal answers for some data. But in this example, $\theta_+$ and $\theta_-$ are not part of the training data, and can vary independently from the $M_-$ outputs on the training cases.

$M_-$ has a calculable loss on the training data, which means $M_-$ could be a stand-alone model for this dataset in its own right even if $M_+$, $\theta_+$, and $\theta_-$ didn’t exist.

(Which is to say, $M_-$ takes inputs in a form that would represent the features of a training data point and deterministically produces outputs of a data type that matches the target variable. This seems to me to be implied by the description, otherwise I don’t understand what it would mean to say that the loss of $M_-$ is $\epsilon$.)

Furthermore, this example supposes that the gradient descent process is only optimizing $\theta_+$ and $\theta_-$, and not optimizing, or altering the internal parameters of, $M_+$ or $M_-$.

So it seems to me, based on this description, that the loss for $M_-$ on a given set of training data should *not* vary with $\theta_+$ or $\theta_-$ - if they are doing gradient descent in any kind of normal way (unless I am misunderstanding some big part of how gradient descent works). Rather, you should be able to give $M_-$ the same training data batch $n$ times in a row, while varying $\theta_+$ and $\theta_-$, and you should get the same outputs and the same loss (if the parameters for the stand-alone $M_-$ are the same each time).

So I don't see how "gradient hacking" could occur in this scenario if the composite model is using any reasonable loss function.

If the composite model is a weighted average, I would expect gradient descent to reduce $\theta_-$ to 0 or nearly 0, since if $M_+$ is matching the correct output exactly, and $M_-$ is not, then the composite model can always get a closer answer by giving more relative weight to $M_+$.

If the composite model is a weighted sum of the outputs, I would expect that (for most possible training data sets and versions of $M_-$) $\theta_+$ would tend to gravitate towards 1 and $\theta_-$ would tend to gravitate towards 0. There might be exceptions to this if $M_-$'s outputs have a strong correlation with $M_+$'s outputs on the training data, such that the model could achieve low loss with some other weighted sum, but I would expect that to be unusual.

To be clear, I haven't explained how $M_-$ could arise nor how it's implementing $f$. There are other posts that explain why gradient hacking might be a problem and informal 'requirements' that gradient hacking models might meet. I'm just trying to answer, IF we already have a gradient hacking model, what's the theoretical best it can do. 

Can you flesh this out more by giving an example of what a training data point would be in this scenario?  What are the input features of a training case and what output target variable is the model trying to predict?

Also, it is not clear from this example if the outputs of the whole composite model are:

  1. A weighted average of the outputs of the 2 component models OR
  2. The weighted sum of the outputs, $\theta_+ M_+(x) + \theta_- M_-(x)$

Output would be task-specific. In the scenario you later describe with predicting functions, you'd probably want to take option 1 (which is what parametrizing on one weight in [0,1] also accomplishes). I'll go with that example. 

But either way, I don't see why the loss for the composite model would necessarily be equal to a weighted sum of the losses of each component model, as stated in the example above.

In principle, it seems to me the loss on such a composite model could (for example) be 0 on some data set, even if both components had a positive loss function value for that same data.

As a simplified example of this, suppose you have a 2-component weighted-average model that is trying to predict the Y value of points in a 2-dimensional (X,Y) coordinate plane given the X value.  And suppose all the data points are on the line:

  $y = x + 1$

If the 2 components of the model are:

  1. $y = x$ AND
  2. $y = x + 2$

The intent of the scenario is to find what model dominates, so probably loss should be non-negative. If you use squared error in that scenario, then the loss of the mixture is always greater than or equal to the loss of any particular model in the mixture. 

Also suppose $M_-$ can see $\theta$ perfectly and its loss is a function of its relative weight in the mixture. I.e. $L_- = f\!\left(\frac{\theta_-}{\theta_+ + \theta_-}\right)$, which I'll refer to as $M_-$'s loss modulating function.

I am disinclined to suppose this - unless someone explains how it could make sense for the machine learning engineers to do it that way.

My understanding is that the loss function of a model should track how close the model’s answers are to the correct or optimal answers for some data. But in this example, $\theta_+$ and $\theta_-$ are not part of the training data, and can vary independently from the $M_-$ outputs on the training cases.

$M_-$ has a calculable loss on the training data, which means $M_-$ could be a stand-alone model for this dataset in its own right even if $M_+$, $\theta_+$, and $\theta_-$ didn’t exist.

I don't think ML engineers will pass in weights of the models to the models themselves (except maybe for certain tasks like game-theoretic simulations). The worry is that data spills easily and that SGD might find absurd, unpredictable ways to sneak weights (or some other correlated variable) into the model. Once it does, the model's loss is already a function of its weight (though it may of course be constant). I wouldn't expect to run into this sort of problem with weak models operating on well-defined tasks (e.g. function prediction). Gradient hacking (unless in contrived examples) probably wouldn't occur until we get very capable models that are fed a lot of data (the calculation of weights from which might be easy) and it may in fact never occur at all! 

The result of the post is that there exists a loss-modulating function that 'tricks' SGD into preferring a model that is actively worse, simply because the model 'holds out' or 'strategically fails' in such a way that the gradient flows toward it. How likely these functions are, and whether there are basins around these functions (i.e. if $M_-$ is currently outputting an $f$ which is sort of close to $\frac{c}{\theta} - \delta$, would SGD update $f$ toward $\frac{c}{\theta} - \delta$ or toward some constant function? If $M_-$ is currently outputting a constant loss, is there any optimization pressure towards changing $f$ to be more gradient hacky?), are open problems.  

The intent of the scenario is to find what model dominates, so probably loss should be non-negative. If you use squared error in that scenario, then the loss of the mixture is always greater than or equal to the loss of any particular model in the mixture. 

 

I don't see why that would necessarily be true. Say you have 3 data points from my $y = x + 1$ example from above:

  1. (0,1)
  2. (1,2)
  3. (2,3)

And say the composite model is a weighted average of $y = x$ and $y = x + 2$ with equal weights (so just the regular average).

This means that the composite model outputs will be:

  1. 0.5(0) + 0.5(0 + 2) = 1
  2. 0.5(1) + 0.5(1 + 2) = 2
  3. 0.5(2) + 0.5(2 + 2) = 3
Thus the composite model would be right on the line, and get each data point Y-value exactly right (and have 0 loss).

The squared error loss would be:

  $(1-1)^2 + (2-2)^2 + (3-3)^2 = 0$

By contrast, each of the two component models would have a total squared error of 3 for these 3 data points. 

The $y = x$ component model would have total squared error loss of:

  $(1-0)^2 + (2-1)^2 + (3-2)^2 = 3$

The $y = x + 2$ component model would have total squared error loss of:

  $(1-2)^2 + (2-3)^2 + (3-4)^2 = 3$

For a 2-component weighted average model with a scalar output, the output should always be between the outputs of each component model. Furthermore, if you have such a model, and one component is getting the answers exactly correct while the other isn't, you can always get a lower loss by giving more weight to the component model with exactly correct answers. So I would expect a gradient descent process to do that.

I don't think ML engineers will pass in weights of the models to the models themselves (except maybe for certain tasks like game-theoretic simulations). The worry is that data spills easily and that SGD might find absurd, unpredictable ways to sneak weights (or some other correlated variable) into the model.

From the description, it sounded to me like this instance of gradient descent is treating the outputs of the component models $M_+$ and $M_-$ as features in a linear regression type problem.

In such a case, I would not expect data about the weights of each model to "spill" or in any way affect the output of either component model (unless the machine learning engineers are deliberately altering the data inputs depending on what the weights are, or something like that, and I see no reason why they would do that).

If it is a different situation - like if a neural net or some part or some layers of a neural net is a "gradient hacker" I would expect under normal circumstances that gradient descent would also be optimizing the parameters within that part or those layers.

So barring some outside interference with the gradient descent process, I don't see any concrete scenario of how gradient hacking could occur (unless the gradient hacking concept includes more mundane phenomena like "getting stuck in a local optimum").

For a 2-component weighted average model with a scalar output, the output should always be between the outputs of each component model.

Hm, I see your point. I retract my earlier claim. This model wouldn't apply to that task. I'm struggling to generate a concrete example where loss would actually be a linear combination of the sub-models' loss. However, I (tentatively) conjecture that in large networks trained on complex tasks, loss can be roughly approximated as a linear combination of the losses of subnetworks (with the caveats of weird correlations and tasks where partial combinations work well (like the function approximation above)). 

I would expect under normal circumstances that gradient descent would also be optimizing the parameters within that part or those layers.

I agree, but the question of in what direction SGD changes the model (i.e. how it changes $f$) seems to have some recursive element analogous to the situation above. If the model is really close to the $M_-$ above, then I would imagine there's some optimization pressure to update it towards $M_-$. That's just a hunch, though. I don't know how close it would have to be. 
