Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Overall summary post here.

In a previous post, I asked which impact measures were vulnerable to subagents. Vika pointed out that it was not merely an issue of the impact measure, but also of the baseline.

This is indeed the case, but the nature of the impact measure is still relevant. In this post, I'll establish two facts: that under the stepwise inaction baseline, a subagent completely undermines all impact measures (including twenty billion questions); and that under the inaction baseline, for non-indexical impact measures, a subagent changes nothing.

The next post will delve into the very interesting things that happen with the inaction baseline and an indexical impact measure.

# Stepwise inaction baseline

All impact measures are some form of distance measure between two states, of the form $D(s_t, s'_t)$, where $s_t$ is the state the agent is actually in, and $s'_t$ is some baseline state to compare with.

For the stepwise inaction baseline, $s'_t$ is calculated as follows. Let $s_{t-1}$ be the previous state and $a_{t-1}$ the previous action. Then $s'_t$ is the state that would have followed had the agent taken the noop action, $\varnothing$, instead of $a_{t-1}$.

However, in the presence of a subagent, the agent merely has to always take the action $\varnothing$. In that case, $s_t = s'_t$ (in a deterministic setting[1]), and the penalty collapses to nothing.

This leaves the subagent free to maximise $R_0$ (the positive reward for agent and subagent), without any restrictions beyond making sure that the agent can always take the action $\varnothing$.
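A minimal sketch of this collapse, using a hypothetical deterministic toy environment (the dynamics, state encoding, and distance function below are illustrative assumptions, not anything from the original post). Once the subagent exists and acts autonomously, the agent can noop forever, so the actual next state always equals the stepwise-inaction counterfactual, and any penalty of the form $D(s_t, s'_t)$ is zero at every step:

```python
NOOP = "noop"

def step(state, agent_action):
    # Toy deterministic dynamics: the subagent acts on its own,
    # so the world evolves even when the agent does nothing.
    subagent_progress, agent_moves = state
    if agent_action == NOOP:
        return (subagent_progress + 1, agent_moves)
    return (subagent_progress + 1, agent_moves + 1)

def distance(s, s_prime):
    # Stand-in for any impact measure D between two states.
    return abs(s[0] - s_prime[0]) + abs(s[1] - s_prime[1])

state = (0, 0)
total_penalty = 0
for t in range(10):
    actual = step(state, NOOP)      # the agent always noops...
    baseline = step(state, NOOP)    # ...so the counterfactual is identical
    total_penalty += distance(actual, baseline)
    state = actual

print(total_penalty)  # 0: the penalty collapses, whatever D is
```

Note that the subagent's progress counter grows freely throughout: the choice of $D$ is irrelevant, since the two states being compared are always equal.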

# Non-indexical impact penalty

Twenty billion questions is an example of a non-indexical impact penalty: the penalty is defined as the difference in the expected values of many variables between the current state and the baseline state.

With the inaction baseline, $s'_t$ is the state the system would have been in, had the agent always done nothing/had not been turned on. Thus the definition of $s'_t$ is independent of the agent's actions, so the inaction baseline does not collapse like the stepwise inaction baseline does.

What about subagents? Well, since the impact penalty is non-indexical, a subagent does not get around it. It matters not whether the subagent is independent, or an extension of the agent: the impact penalty remains.

1. In a non-deterministic setting, $D(s_t, s'_t)$ becomes a mix of an impact penalty and a measure of environment stochasticity. ↩︎


# Comments

I think this problem is about capturing delayed effects of the agent's actions. The way the stepwise baseline is supposed to penalize delayed effects is using inaction rollouts, which compare the effects of the agent's action followed by $k$ noops with the effects of $k+1$ noops (for all positive integers $k$). I don't think it's useful to consider this issue for a stepwise baseline with no rollouts, since that baseline fails to penalize any delayed effects (no subagents needed).
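The rollout comparison just described can be sketched as follows (the dynamics are an invented toy, where the state is a single counter the action adds to; this is only a sketch of the mechanism, not of any particular implementation):

```python
NOOP = 0

def dynamics(state, action):
    # Toy deterministic world: the action's effect persists in the state.
    return state + action

def rollout(state, first_action, k):
    # Take first_action, then k noops; return the final state.
    state = dynamics(state, first_action)
    for _ in range(k):
        state = dynamics(state, NOOP)
    return state

def rollout_penalty(state, action, horizon):
    # Compare "action then k noops" against "k+1 noops", for each k.
    return sum(
        abs(rollout(state, action, k) - rollout(state, NOOP, k))
        for k in range(1, horizon + 1)
    )

print(rollout_penalty(0, 1, 3))     # 3: the action's effect shows up in every rollout
print(rollout_penalty(0, NOOP, 3))  # 0: noops are never penalized
```

Because the action's effect persists into every rollout horizon, delayed effects accumulate penalty; a bare stepwise baseline (horizon 0) would see none of this.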

Here, the inaction rollouts don't capture the effects of the subagent, and thus the agent doesn't receive a penalty for creating the subagent. There are several factors that contribute to this:

1. The auxiliary reward $R$ only rewards the agent itself pressing the red button, not the red button being pressed in general, so it would not be triggered by the subagent pressing it.

2. The subagent never goes to the red button - it only goes to the blue button.

The agent would be penalized for creating the subagent if both 1 and 2 were false. If 1 was false, the auxiliary reward would be given whenever the red button was pressed. If 2 was also false, the inaction rollout after subagent creation would include a state where the subagent presses the red button, and so the agent would be penalized.

In the general case, the agent can only get away with creating subagents that don't achieve any auxiliary rewards, which I think limits the magnitude of this problem. The designer can choose auxiliary rewards (that are not achieved by the agent doing nothing) such that a subagent will achieve at least one of them. For example, if there is an auxiliary reward for reaching any state except the state before subagent creation, the no-subagent inaction rollout will consist of this state, and the subagent inaction rollout will contain a different state at least once, so subagent creation will be penalized.
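This proposed fix can be sketched concretely (states and rollouts below are illustrative assumptions): give an auxiliary reward for reaching any state other than the pre-subagent-creation state. The no-subagent inaction rollout stays in that state, while a rollout containing an active subagent must visit some other state, so subagent creation picks up a nonzero penalty:

```python
START = "start"  # the state just before subagent creation

def aux_reward(state):
    # Rewards reaching any state except the starting state.
    return 0 if state == START else 1

# Inaction rollout if no subagent was created: the world is static.
no_subagent_rollout = [START, START, START]

# Inaction rollout after subagent creation: the subagent keeps acting.
subagent_rollout = [START, "subagent_moves", "blue_button_pressed"]

def rollout_aux_value(rollout):
    return sum(aux_reward(s) for s in rollout)

penalty = abs(rollout_aux_value(subagent_rollout)
              - rollout_aux_value(no_subagent_rollout))
print(penalty)  # 2: creating an active subagent is penalized
```

Any subagent that does anything at all perturbs the rollout away from the starting state, so it cannot stay invisible to this auxiliary reward.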

The inaction rollouts effectively transform the stepwise inaction baseline into an inaction baseline (starting from the moment the subagent is created; thus the agent has a bit more control than under a true inaction baseline).

Therefore the results on the inaction baseline apply (https://www.lesswrong.com/s/iRwYCpcAXuFD24tHh/p/M9aoMixFLf8JFLRaP).

This means that restrictions on increased power for the agent ("make sure you never have the power to increase the rewards") become restrictions on the actual policy followed for the subagent ("make sure you never increase these rewards").

Roughly, attainable utility becomes twenty billion questions.

For the original example, this means that the agent cannot press the red button nor gain the ability to teleport. But while the subagent cannot press the red button, it can gain the ability to teleport.

> For example, if there is an auxiliary reward for reaching any state except the state before subagent creation, the no-subagent inaction rollout will consist of this state, and the subagent inaction rollout will contain a different state at least once, so subagent creation will be penalized.

This requires identifying what a subagent is in general, a very tricky unsolved problem (which I feel is unsolvable).

There's another issue: it's not enough to show that the subagent triggers a penalty. We need to show that this penalty is larger than the penalty for not creating the subagent. Since the penalty is zero after the subagent is created, and since the subagent has very fine control over the rewards (much finer than actions that don't involve creating an intelligent being), creating a subagent might incur a lower penalty than almost any other action.

It won't be a lower penalty than the agent doing nothing forever, of course. But we typically want the agent to do something, so we will calibrate the penalty or $R_0$ for that. And it's plausible that creating the subagent will have a lower penalty (and/or higher $R_0$) than any safe "something".

I don't think this requires identifying what a subagent is. You only need to be able to reliably identify the state before the subagent is created (i.e. the starting state), but you don't need to tell apart other states that are not the starting state.

I agree that we need to compare to the penalty if the subagent is not created - I just wanted to show that subagent creation does not avoid penalties. The penalty for subagent creation will reflect any impact the subagent actually causes in the environment (in the inaction rollouts).

As you mention in your other comment, creating a subagent is effectively switching from a stepwise inaction baseline to an inaction baseline for the rest of the episode. This can be beneficial for the agent because of the 'winding road' problem, where the stepwise baseline with inaction rollouts can repeatedly penalize actions (e.g. turning the wheel to stay on the road and avoid crashing) that are not penalized by the inaction baseline. This is a general issue with inaction rollouts that needs to be fixed.

Alas, the inaction rollouts don't seem to fix the problem: https://www.lesswrong.com/s/iRwYCpcAXuFD24tHh/p/z9MfmF8gA7SBxGSmb

> I'll establish two facts: that under the stepwise inaction baseline, a subagent completely undermines all impact measures (including twenty billion questions).

Note this implicitly assumes an agent benefits by building the subagent. The specific counterexample I have in mind will be a few posts later in my sequence.

It needs to benefit in terms of $R_0$ minus the penalty; not necessarily in terms of $R_0$ or the penalty alone.

It seems to me that, generically, since the impact penalty $D$ is supposed to be a hindrance to the agent, taking a few turns to build a subagent and then neutralise $D$ should be beneficial.