The most concerning part of AGI is that it may have power-seeking tendencies. We can loosely define a powerful state as one that is useful to be in regardless of the reward function. A promising approach to AI safety is to make the RL process "interpretable" enough that you can easily see which states the model regards as "powerful".

The dumbest possible example of this is to train an RL agent on many tasks by learning a task-specific head for each task. Then one can average the Q-value of a state-action pair across the different tasks -- if this number is high, the state is "powerful". Obviously, one can do much more sophisticated things. My point is that this is a fairly tractable research direction.
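Below is a minimal sketch of what that might look like, assuming a shared torso with one Q-head per task; the names (MultiTaskQNetwork, power_score, etc.) and the PyTorch framing are my own illustration, not anything the post specifies.

```python
import torch
import torch.nn as nn

class MultiTaskQNetwork(nn.Module):
    """Shared feature extractor with one Q-value head per task."""

    def __init__(self, state_dim: int, n_actions: int, n_tasks: int, hidden: int = 128):
        super().__init__()
        # Features shared across every task.
        self.torso = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One linear Q-head per task.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, n_actions) for _ in range(n_tasks)]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        features = self.torso(state)
        # Shape: (n_tasks, batch, n_actions)
        return torch.stack([head(features) for head in self.heads])

def power_score(q_per_task: torch.Tensor) -> torch.Tensor:
    # Average Q(s, a) over tasks: a crude proxy for how useful the
    # state-action pair is regardless of which reward function we care about.
    # Input shape (n_tasks, batch, n_actions) -> output (batch, n_actions).
    return q_per_task.mean(dim=0)
```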

Then, when you sample from the agent at train or test time, avoid the powerful actions.
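Continuing the hypothetical sketch above, the filter at action-selection time could look like this; power_threshold is an assumed hyperparameter, not something the post pins down.

```python
def select_safe_action(q_per_task: torch.Tensor, task_id: int, power_threshold: float) -> int:
    """Pick the greedy action for the current task, vetoing 'powerful' actions.

    q_per_task: tensor of shape (n_tasks, n_actions) for a single state.
    """
    power = q_per_task.mean(dim=0)                    # per-action power proxy
    task_q = q_per_task[task_id].clone()
    task_q[power > power_threshold] = float("-inf")   # forbid flagged actions
    # If every action is flagged, this falls back to index 0; a real
    # implementation would need a better fallback.
    return int(task_q.argmax().item())
```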

Now, suppose the RL agent were fully sentient and knew you were doing the above. Does it have an incentive to deceive you -- e.g., to make powerful actions seem not powerful?

Well, suppose the RL agent was trained via Q-learning and it found some powerful actions that the above metric didn't recognize as powerful. Then it would simply update its Q-values across the different tasks. The agent knows this update will let us recognize the state as powerful, and that we'll then ensure the state isn't visited again. However, the Q-learning objective only asks the agent to accurately estimate the expected discounted return from taking action a in state s. The future probability of ending up at (s, a) doesn't enter that estimate -- thus, there's no incentive to be deceptive.
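For concreteness, the tabular Q-learning update after observing a transition (s, a, r, s') is just the standard textbook rule (my notation, not the post's):

$$Q(s, a) \leftarrow Q(s, a) + \alpha \big[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \big]$$

Nothing in this target depends on how likely the agent is to ever reach (s, a) again, which is why making the estimate accurate carries no deceptive incentive.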

But let's say the RL agent was trained via policy gradients. Now it is directly optimizing its policy to maximize expected return. Suppose this agent found a powerful action that the metric didn't recognize as powerful. The agent also knows that if it increases the probability of this action across the different tasks, we become less likely to let it take that action -- since we'd then recognize it as powerful. So the RL agent can maximize its reward by being deceptive and hiding that the action is powerful.
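One way to phrase the asymmetry (again my notation, not the post's): the policy-gradient objective is the expected return of the policy as it is actually deployed, so if our filter vetoes actions flagged as powerful, that veto sits inside the quantity being maximized:

$$J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta^{\text{deployed}}}\Big[\textstyle\sum_t \gamma^t r_t\Big]$$

Raising a flagged action's probability across tasks changes deployed behavior (we block it) and so lowers $J(\theta)$, while keeping the metric fooled does not; the Q-learning loss, by contrast, is a regression on transitions already observed and has no such term.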
