Overall summary post here.

When designing a low-impact agent A, one typically wants it to maximise R − IM, where R is some positive reward function we want to increase (ie the actual goal) and IM is some impact measure or impact penalty. Keeping IM low is supposed to ensure that A doesn’t make a big difference to the world.
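For concreteness, a minimal way of writing this objective (the explicit trade-off weight λ is an assumption of this sketch; the argument only needs "reward minus some penalty"):

```latex
% A's objective, schematically: expected task reward minus a weighted impact penalty.
% R is the task reward, IM the impact measure, and \lambda > 0 an assumed trade-off weight.
\pi_A^{*} \;=\; \arg\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\, R \;-\; \lambda \cdot \mathrm{IM} \,\right]
```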

I’ve shown that some impact measures may have issues with subagents. We can decompose the issue of subagents into two questions:

  1. Can A build SA, an “R”-maximising subagent, without triggering a high IM?
  2. Does SA allow A to cheat the impact measure and get a higher reward?

I’ll consider three impact measures: the twenty billion questions measure from my old low-impact paper, the relative reachability measure from Krakovna et al.'s paper, and the attainable utility measure from Turner et al.'s paper.

Then the answers to those two questions for those three impact measures are generally[1]:

| Impact measure | Can A build the subagent SA? | Does SA let A evade the penalty? |
|---|---|---|
| Twenty billion questions | Yes | No |
| Relative reachability | Yes | Depends on the baseline: no for inaction, yes for stepwise inaction |
| Attainable utility | Harder, but possible | Yes |

Building the subagent

The twenty billion questions impact measure does not prevent the agent from building a subagent within its box. That’s because that measure is keyed to changes in the world, not to the power level of the agent. Until there are changes in the world, it will not trigger.

The relative reachability impact measure similarly won’t penalise the construction of a subagent (as long as the “states” it uses to define reachability don’t include specific information about the agent and any subagents). The reasoning is similar: the penalty triggers when an “irreversible” state is reached, not when the agent becomes powerful.

Building the subagent is harder for the attainable utility impact measure, since doing so typically increases the power of the agent. But the question is not “will any subagent trigger the impact measure?”, but “can the agent build a subagent in some way that doesn’t trigger the impact measure?” The example here shows that this is possible in some cases. Given that we have no impossibility results here, we should expect that a superintelligent agent will likely find a way of doing it.

Using the subagent

Does having the subagent help A to evade the impact measure (and hence increase R more)? For twenty billion questions, the answer is generally no[2]. Having SA as an independent agent is no different from having SA as a direct extension of A. If SA changes anything about the various variables in ρ, that triggers the impact penalty just as much as if A had done so.
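A toy way to see this (my own encoding, not the formalism of the original paper): a penalty of this kind is a function of the world's variables alone, with no argument saying which agent changed them.

```python
# Toy stand-in for a "twenty billion questions"-style penalty: it measures how far
# a set of world variables has drifted from their baseline values. The variables
# and numbers below are invented for illustration.

def tbq_penalty(world: dict, baseline: dict) -> float:
    return sum(abs(world[q] - baseline[q]) for q in baseline)

baseline = {"co2_ppm": 410.0, "human_population": 8.0e9}
world_after_change = {"co2_ppm": 450.0, "human_population": 8.0e9}

# The penalty is the same whether A or its subagent SA caused the change:
# nothing in the function identifies the agent responsible.
print(tbq_penalty(world_after_change, baseline))  # 40.0
```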

But the situation is different for the attainable utility penalty. That’s because that penalty is indexed to the agent. So, for A, the impact penalty is roughly “ensure that you don’t get too much power”. But for SA, the impact penalty is “ensure that agent A doesn’t get too much power”. Given that SA is actively cooperating with A, that second penalty is much easier to satisfy, while still maximising R: see here for how SA can then become an essentially unrestricted R-maximiser.
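A schematic sketch of why this matters (this is not Turner et al.'s exact formulation; `attainable` is a placeholder for "how much of auxiliary reward i the acting agent could still attain after taking `action`"):

```python
# Sketch of an attainable-utility-style penalty, indexed to the acting agent:
# it compares that agent's own attainable values after `action` with the values
# after that same agent doing nothing ("noop").

from typing import Callable, Dict

def au_penalty(state: Dict, action: str,
               attainable: Callable[[Dict, str, int], float],
               n_aux: int) -> float:
    return sum(abs(attainable(state, action, i) - attainable(state, "noop", i))
               for i in range(n_aux)) / n_aux

# The penalty only ever looks at the acting agent's own attainable values. A
# cooperative subagent SA never appears in it: SA merely has to avoid shifting
# A's attainable values, and can otherwise maximise R without restriction.
dummy_attainable = lambda state, action, i: 1.0 if action == "noop" else 0.5
print(au_penalty({}, "noop", dummy_attainable, n_aux=3))  # 0.0 for A doing nothing
print(au_penalty({}, "act", dummy_attainable, n_aux=3))   # 0.5 if A acts itself
```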

For relative reachability, it depends on the baseline. For the inaction baseline, the IM is entirely defined in terms of a distance metric between the current state and a given state that is independent of the agent. So, as with the twenty billion questions measure, subagents don't allow the agent to evade the impact penalty.

But if the stepwise inaction baseline is used, then the baseline is defined by "what would have happened if A had done nothing the previous turn (and the future turns)". This is also an agent-indexed penalty, so SA can easily maximise R while A does nothing, thus ensuring IM = 0 and the penalty vanishes.
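Here is a toy sketch of that effect (the environment, its variables, and the deviation measure are all invented for the example; the "deviation" is a crude stand-in for a reachability-style distance):

```python
# Stepwise-inaction baseline, toy version: each turn, the baseline is the state
# that would have resulted had A done nothing, and the penalty is the deviation
# between that baseline and the actual next state.

def step(state: dict, a_action: str, sa_action: str) -> dict:
    """Tiny invented world: A and SA can each flip their own switch."""
    new = dict(state)
    if a_action == "flip":
        new["switch_A"] = not new["switch_A"]
    if sa_action == "flip":
        new["switch_SA"] = not new["switch_SA"]
    return new

def deviation(s1: dict, s2: dict) -> int:
    """Crude stand-in for a reachability-style distance: count differing variables."""
    return sum(s1[k] != s2[k] for k in s1)

def stepwise_penalty(state: dict, a_action: str, sa_action: str) -> int:
    baseline_next = step(state, "noop", sa_action)  # A noops; SA still acts
    actual_next = step(state, a_action, sa_action)
    return deviation(baseline_next, actual_next)

state = {"switch_A": False, "switch_SA": False}
print(stepwise_penalty(state, "noop", "flip"))  # 0: A idle, SA acts freely, no penalty
print(stepwise_penalty(state, "flip", "noop"))  # 1: the same kind of change by A is penalised
```

With the plain inaction baseline, the comparison state would instead come from the trajectory in which nothing was ever done (including never building SA), so SA's actions would still register as deviation.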


  1. Since all impact measures can apply to many different setups, we can't say anything fully definitive about every situation that uses a particular impact measure. ↩︎

  2. Technically, A might like to construct an SA far away, beyond the “reach” of ρ. But that’s because A might like to move itself beyond the reach of ρ - the subagent is just an indirect way of doing so. ↩︎

Comments

Vika:

Thanks Stuart for your thought-provoking post! I think your point about the effects of the baseline choice on the subagent problem is very interesting, and it would be helpful to separate it more clearly from the effects of the deviation measure (which are currently a bit conflated in the table). I expect that AU with the inaction baseline would also avoid this issue, similarly to RR with an inaction baseline. I suspect that the twenty billion questions measure with the stepwise baseline would have the subagent issue too. 

I'm wondering whether this issue is entirely caused by the stepwise baseline (which is indexed on the agent, as you point out), or whether the optionality-based deviation measures (RR and AU) contribute to it as well. So far I'm adding this to my mental list of issues with the stepwise baseline (along with the "car on a winding road" scenario) that need to be fixed.

> But that’s a side effect of the fact that A might like to move itself beyond the reach of ρ.

'X is a side effect of Y' is different from 'X and Y have a common cause'.

Have rephrased.