Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

This is a quick attempt at deconfusion, similar to my earlier post on instrumentality. Same ideas, different angle.

Extremely broad, dense reward functions constrain training-compatible goal sets

Predictors/simulators are typically trained against a ground truth for every output. There is no gap between the output and its evaluation; an episode need not be completed before figuring out how good the first token prediction was. These immediate evaluations for every training sample can be thought of as a broad and densely defined reward function.
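The contrast can be made concrete with a toy sketch. This is illustrative only: the probabilities, vocabulary, and function names are all invented, and it simply shows that predictor-style training yields one evaluation per output position while episodic reward yields one scalar per episode.

```python
# Toy contrast between dense (per-token) and sparse (per-episode) feedback.
# All values and names here are invented for illustration.
import math

def dense_feedback(predicted_probs, ground_truth):
    """Predictor-style training: one loss value per output position.

    Every token prediction is scored against ground truth immediately;
    no episode needs to finish before the first signal arrives.
    """
    return [-math.log(p[t]) for p, t in zip(predicted_probs, ground_truth)]

def sparse_feedback(trajectory, goal):
    """Traditional-RL-style training: one scalar for the whole episode."""
    return 1.0 if trajectory[-1] == goal else 0.0

# Toy "episode": three token predictions over a 3-token vocabulary.
probs = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]]
truth = [0, 1, 2]

print(dense_feedback(probs, truth))        # three signals, one per token
print(sparse_feedback([0, 1, 2], goal=2))  # a single signal for the episode
```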

It's easier for a model to fall into an undesired training-compatible goal set[1] when there are many accessible undesirable goal sets relative to desirable ones. As the number of constraints imposed by the training reward function increases, the number of training-compatible goal sets tends to decrease, and those that survive obey more of the desirable constraints.

There is no guarantee that SGD will find an agent that could be modeled by a utility function mapping perfectly onto the defined reward function. But if you throw trillions of constraints at the model, and simultaneously give it lots of highly informative hints about what path to walk, you should expect the space of potential outcomes to be far narrower than if you hadn't.

Impact on internal mesaoptimizers

The dense loss/reward function does not as heavily constrain out of distribution behavior. In principle, a strong misaligned mesaoptimizer within a predictive model could persist in these degrees of freedom by providing extremely good solutions to in-distribution samples while doing arbitrarily misaligned things out of distribution.

But how would that type of mesaoptimizer develop in the first place?

Steps toward it must serve the training objective; those constraints still shape the mesaoptimizer's training even if its most notable activity ends up being hidden.

The best story I've found so far goes something like this:

  1. Traditional reinforcement learning agents are mostly unconstrained. The reward function is sparse relative to state and action space.
  2. An agent faced with sparse rewards must learn actions that serve a later goal to get any reward at all.
  3. Not surprisingly, agents facing sparse reward relative to state/action space and few constraints have a much larger percentage of undesirable training-compatible goal sets.
  4. Mesaoptimizers are processes learned within a model and their local training influences may not perfectly match the outer training influences.
  5. If the mesaoptimizer's local training influences look more like the traditional reinforcement learning agent's influences than the predictor's outer influences, it would be more likely to fall into one of the undesirable training-compatible goal sets.
  6. The mesaoptimizer learns incorrect goals and a high propensity for goal-serving intermediate actions ("actions" within the scope of a single model execution!).
  7. The mesaoptimizer is kept around by SGD because it does well on the subset of outputs that the outer model is using it on. As capability grows, the mesaoptimizer strategically takes over other chunks of prediction space by performing well during training in an effort to be selected during out of distribution predictions.

In a previous post, I called the learned propensity for goal-serving intermediate action instrumentality. The constraints imposed by predictive model training clearly confer lower instrumentality than traditional RL in all current models. I suspect the path taken by the mesaoptimizer above is hard and unnatural[2], but perhaps not impossible for some form of predictor taken to the relevant extreme.

It seems critical to understand the degree to which outer constraints apply to inner learning, how different forms of training/architecture affect the development of instrumentality, and how much "space" is required for different levels of instrumentality to develop.

I expect:

  • Designs which target lower instrumentality will tend to exhibit less capable goal misgeneralization.
  • Instead, I would expect to mostly see nonsense behavior: poorly calibrated outputs not corresponding to any goals at all, just meaningless extrapolations from the constraints present on the training distribution.
  • To the extent that low instrumentality models do generalize, I would expect them to generalize in a manner more faithful to the original constraints.[3]

I really want these expectations tested![4]

No greater coherence

If you manage to train a highly capable minimally instrumental agent of this kind, there isn't really anywhere else for the model-as-agent to go. Using the term "values" loosely, its values do not extend beyond actions-in-contexts, and those values are immediately satisfied upon outputting the appropriate action. Satisfaction of values is utterly under the agent's control and requires no environmental modification. No external intermediate steps are required. It does not have exploitable preferences. Refining itself won't resolve any lingering internal inconsistencies.

There is no greater coherence to be found for such an agent; it is complete through shallowness.

This is not the case for a model that develops internal instrumentality and misaligned mesaoptimizers, and even zero instrumentality will not prevent a simulator from simulating more concerning kinds of agents. Still, this is a pretty odd property for any agent to have!

I'd view this as one extreme end of a spectrum of agents, ranging from:

  • Laser-focused utility representing an infinitesimal target in a hugely larger space of states and actions, to...
  • Utility so broad and dense that, for every input state, the agent's output is directly defined by the utility function.[5]

In other words, minimal instrumentality can be thought of as making the learned utility function as broad and densely defined as possible, such that there is simply no more room for intermediate goal-seeking behavior.
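The two ends of the spectrum can be sketched with a toy environment. Everything here (the three-state chain, the action names, the utility values) is invented for illustration: a sparse utility scores only the distant target, so the agent must search for instrumental intermediate steps, while a maximally dense utility ranks every (state, action) pair, so acting reduces to a lookup with no room for intermediate goal-seeking.

```python
# Toy sketch of the spectrum's two ends; all states, actions, and
# utility values are invented for illustration.

def transition(state, action):
    # Tiny deterministic environment: start -> mid -> goal.
    chain = {"start": "mid", "mid": "goal", "goal": "goal"}
    return chain[state] if action == "advance" else state

def sparse_utility(state):
    # Non-zero only on an "infinitesimal" target in state space.
    return 1.0 if state == "goal" else 0.0

def act_sparse(state, horizon=2):
    # The agent must plan: pick the action whose best reachable
    # *future* state scores well under the sparse utility.
    def best(s, h):
        if h == 0:
            return sparse_utility(s)
        return max(best(transition(s, a), h - 1) for a in ("advance", "wait"))
    return max(("advance", "wait"), key=lambda a: best(transition(state, a), horizon - 1))

# Dense utility: directly defined on every (state, action) pair,
# so acting is a lookup with no intermediate goal-seeking.
DENSE_UTILITY = {
    ("start", "advance"): 1.0, ("start", "wait"): 0.0,
    ("mid", "advance"): 1.0,   ("mid", "wait"): 0.0,
    ("goal", "advance"): 0.0,  ("goal", "wait"): 1.0,
}

def act_dense(state):
    return max(("advance", "wait"), key=lambda a: DENSE_UTILITY[(state, a)])

print(act_sparse("start"))  # "advance", found by searching forward
print(act_dense("start"))   # "advance", read off directly
```

Both agents behave identically here, which is the point of footnote [5]: the dense utility was constructed to reproduce the behavior without the search.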

  1. ^

    Using the term as in this post.

  2. ^

    It assumes a mesaoptimizer was found and that it outperforms other implementations, including other mesaoptimizers, that would have more closely matched the dense output constraints. 

    It assumes enough space for the mesaoptimizer to learn the relevant kind of instrumentality that would operate beyond a single invocation.

    It assumes the mesaoptimizer is able to learn and hold onto misaligned goals early while simultaneously having sufficient capability to realize it needs to hide that misalignment on relevant predictions. 

    It assumes the extra complexity implied by the capable misalignment is small enough that SGD can accidentally hop into its basin. 

    And so on.

  3. ^

    I also note the conspicuous coincidence that some of the most capable architectures yet devised are low instrumentality or rely on world models which are. It would not surprise me if low instrumentality, and the training constraints/hints it typically corresponds to, often implies relative ease of training at a given level of capability, which seems like a pretty happy accident.

  4. ^

    I'm working on some experiments, but if you have ideas, please do those experiments too. This feels like a weird little niche that has very little empirical data available.

  5. ^

    While you could construct such a function for an agent that ends up matching the behavior of a more restricted utility function and its accompanying intermediate behaviors, the usefulness-achieving assumption here is that the dense utility function was chosen to be not-that.

7 comments

I really like how you've laid out a spectrum of AIs, from input-imitators to world-optimizers. At some point I had a hope that world-optimizer AIs would be too slow to train for the real world, and we'd live for a while with input-imitator AIs that get more and more capable but still stay docile.

But the trouble is, I can think of plausible paths from input-imitator to world-optimizer. For example, if you can make AI imitate a conversation between humans, then maybe you can make an AI that makes real-world plans as fast as a committee of 10 smart humans conversing at 1000x speed. For extra fun, allow the imitated committee to send network packets and read responses; for extra extra fun, give them access to a workbench for improving their own AI. I'd say this gets awfully close to a world-optimizer that could plausibly defeat the rest of humanity, if the imitator it's running on is good enough (GPT-6 or something). And there's of course no law saying it'll be friendly: you could prompt the inner humans with "you want to destroy real humanity" and watch the fireworks.

Yup, agreed. Understanding and successfully applying these concepts are necessary for one path to safety, but not sufficient. Even a predictive model with zero instrumentality and no misaligned internal mesaoptimizers could still yield oopsies in relatively few steps.

I view it as an attempt to build a foundation: the ideal predictive model isn't actively adversarial, it's not obscuring the meaning of its weights (because doing so would be instrumental to some other goal), and so on. Something like this seems necessary for non-godzilla interpretability to work, and it at least admits the possibility that we could find some use that doesn't naturally drift into an amplified version of "I have been a good bing" or whatever else. I'm not super optimistic about finding a version of this path that's also resistant to the "and some company takes off the safeties three weeks later" problem, but at least I can't state that it's impossible yet!

Your scenario seems to suggest that dense real-world feedback at human speeds (i.e., compute surveillance) and decentralisation (primarily of the internet: a rogue AI shouldn't be able to replicate itself in minutes across thousands of servers around the globe) should serve as counter-measures.

Terminological note: I think it's confusing to call the estimator function of the predictive model's inferences (such as text continuations, plans, predictions, history reconstructions, etc.) either "utility" (as you did in the title) or "reward function" (in the text). It is this estimator function which is "dense", i.e., its result (a scalar value) may depend sharply on all elements of the inference output (text, plan, etc.). This is confusing because "reward functions" in RL and utilities in decision theory (or moral philosophy) apply to world states or outcomes, not plans, and imply constructive (or normative) strategies for inference and action. Training a predictive model (simulator) is a very different strategy of arriving at a certain behavior style (i.e., ethics) altogether.

I think that the term "reward function" shouldn't be exported from the narrow domain of RL at all, to avoid such confusions.

Cf. this criticism of the term "training-compatible goal sets", which you also refer to (though I don't actually criticise your usage of the term here, because in the context of this post it makes more sense to me, if I forget that in the original post the authors essentially meant "reward function" by "goal set").

This is confusing because "reward functions" in RL and utilities in decision theory (or moral philosophy) apply to world states or outcomes, not plans

While they are usually described in the context of world states and outcomes, I don't think there is something special about the distinction. Or to phrase it another way: an embedded agent that views itself as a part of the world can consider its own behavior as a part of world state that it can have valid preferences about.

The most direct link between traditional RL and this concept is reward shaping. Very frequently, defining a sparse and distant goal prevents effective training. To compensate for this, the reward function is modified to include incremental signals that are easier to reach. For locomotion, this might look like "reward velocities that are positive along the X axis," while the original reward might have just been "reach position.X >= 100."
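The locomotion example above can be sketched directly. The threshold and the shaping term are taken from the paragraph; the specific values are otherwise invented for illustration.

```python
# Minimal sketch of reward shaping for the locomotion example above.
# Threshold (x >= 100) and the velocity shaping term come from the text;
# the sampled values are invented.

def sparse_reward(position_x):
    # Original objective: reward only upon reaching the distant goal.
    return 1.0 if position_x >= 100.0 else 0.0

def shaped_reward(position_x, velocity_x):
    # Shaped objective: an incremental signal for positive x-velocity,
    # on top of the original goal bonus.
    return max(velocity_x, 0.0) + sparse_reward(position_x)

# Early in training the agent is nowhere near x = 100, so the sparse
# reward is silent while the shaped reward already provides signal.
print(sparse_reward(3.0))       # 0.0
print(shaped_reward(3.0, 0.5))  # 0.5
```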

Reward shaping can be pushed arbitrarily far. You could implement imitation in the reward function: no longer is the reward just about an outcome, but also about how that outcome comes about. (Or to phrase it the other way again- the how becomes an outcome itself.)

In the limit, the reward function can be made extremely dense such that every possible output is associated with informative reward shaping. You can specify a reward function that, when sampled with traditional RL, reconstructs gradients similar to that of predictive loss. I'm trying to get at the idea that there isn't a fundamental difference in kind.
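Pushing shaping toward imitation, as described above, can be sketched as a reward that scores how each step is produced rather than only the final outcome. The reference policy and action names are invented for illustration; the point is that the resulting signal is per-step and informative, structurally resembling a predictor's per-token objective.

```python
# Hedged sketch of imitation implemented in the reward function:
# each action is scored against a reference ("expert") action, so the
# reward is dense over the trajectory, not a single episodic scalar.
# All actions and names here are invented for illustration.

def imitation_reward(agent_action, expert_action):
    # Per-step score: rewards *how* the outcome comes about.
    return 1.0 if agent_action == expert_action else 0.0

episode_agent  = ["left", "left", "right", "forward"]
episode_expert = ["left", "right", "right", "forward"]

rewards = [imitation_reward(a, e) for a, e in zip(episode_agent, episode_expert)]
print(rewards)  # one informative signal per step, not one per episode
```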

A big part of what I'm trying to do with these posts is to connect predictors/simulators to existing frameworks (like utility and reward). If one of these other frameworks (which tend to have a lot of strength where they apply) suggested something bad about predictors with respect to safety efforts, it would be important to know.

Predictors/simulators are typically trained against a ground truth for every output. There is no gap between the output and its evaluation; an episode need not be completed before figuring out how good the first token prediction was. These immediate evaluations for every training sample can be thought of as a broad and densely defined reward function.

To me, this sounds like saying that simulator AIs are trained so as to optimise some quality of the predictions/simulacra/plans/inferences that they produce. This seems to be more or less a tautology (otherwise, these models wouldn't be called "simulators"). Also, this actually doesn't depend on the specific training procedure of auto-regressive LLMs, namely backpropagation with token-by-token cross-entropy loss. For example, GFlowNets are also squarely "simulators" (or "inference machines", as Bengio calls them), but their performance is typically evaluated (i.e., the loss is computed) on complete trajectories (which is admittedly inefficient, though there is some work on training them with "local credit" on incomplete trajectories).

I also note the conspicuous coincidence that some of the most capable architectures yet devised are low instrumentality or rely on world models which are. It would not surprise me if low instrumentality, and the training constraints/hints that it typically corresponds to, often implies relative ease of training at a particular level of capability which seems like a pretty happy accident.

It seems to me that "high instrumentality" is an instance of what I call an agent having an alien world model. However, although "bare" LLMs (especially with recurrence, such as RWKV-LM) seem able to develop powerful alien models that could "harbour" a misaligned mesaoptimiser, even rather stupid LMCAs on top of such LLMs, such as the "exemplary actor", seem to be immunised against this risk.

It's important to note, however, that viewing a cognitive system as having a probabilistic world model and performing inferences with this model (i.e., predictive processing) is just one way of modelling an agent's dynamics, and is not exhaustive. There could be unexpected surprises/failures that evade this modelling paradigm altogether.

It seems critical to understand the degree to which outer constraints apply to inner learning, how different forms of training/architecture affect the development of instrumentality, and how much "space" is required for different levels of instrumentality to develop.

Although GFlowNets (which are essentially a training algorithm, a loss design, rather than a network architecture) would apparently have less propensity for "high instrumentality" than auto-regressive LLMs under the currently dominating training paradigm, this is probably unimportant, given what I wrote above: it seems easy to "weed out" alienness at the level of the agent architecture on top of an LLM, anyway.

Also, this actually doesn't depend on the specific training procedure of auto-regressive LLMs, namely, backpropagation with token-by-token cross-entropy loss.

Agreed.

I do have some concerns about how far you can push the wider class of "predictors" in some directions before the process starts selecting for generalizing instrumental behaviors, but there isn't a fundamental uniqueness about autoregressive NLL-backpropped prediction.

It seems to me that "high instrumentality" is an instance of what I call an agent having an alien world model.

Possibly? I can't tell if I endorse all the possible interpretations. When I say high instrumentality, I tend to focus on 1. the model is strongly incentivized to learn internally-motivated instrumental behavior (e.g. because the reward it was trained on is extremely sparse, and so the model must have learned some internal structure encouraging intermediate instrumental behavior useful during training), and 2. those internal motivations are less constrained and may occupy a wider space of weird options.

#2 may overlap with the kind of alienness you mean, but I'm not sure I would focus on the alienness of the world model instead of learned values (in the context of how I think about "high instrumentality" models, anyway).