Biased reward-learning in CIRL

by Stuart_Armstrong, 5th Jan 2018

In Cooperative Inverse Reinforcement Learning (CIRL), a human $\mathbf{H}$ and a robot $\mathbf{R}$ cooperate in order to best fulfil the human's preferences. This is modeled as a Markov game $M = \langle S, \{A^H, A^R\}, T, \{\Theta, R\}, P_0, \gamma \rangle$.

This setup is not as complicated as it seems. There is a set $S$ of states, and in any state, the human and robot take simultaneous actions, chosen from $A^H$ and $A^R$ respectively. The transition function $T$ takes this state and the two actions, and gives the probability of the next state. The $\gamma$ is the discount factor of the reward.

What is this reward? Well, the idea is that the reward is parameterised by a $\theta \in \Theta$, which only the human sees. Then $R$ takes this parameter, the state, and the actions of both parties, and computes a reward; this is $R(s, a^H, a^R; \theta)$ for a state $s$ and actions $a^H$ and $a^R$ by the human and robot respectively. Note that the robot will never observe this reward; it will simply compute it. The $P_0$ is a joint probability distribution over the initial state $s_0$ and the $\theta$ that will be observed by the human.
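
To make the tuple concrete, here is a minimal Python sketch of these ingredients for a finite game; the names (`CIRLGame`, `toy_reward`, etc.) and the toy numbers are my own illustration, not anything from the CIRL paper.

```python
import random
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

State = str
HumanAction = str
RobotAction = str
Theta = str  # the reward parameter that only the human observes

@dataclass
class CIRLGame:
    states: list                  # S
    human_actions: list           # A^H
    robot_actions: list           # A^R
    # T(s, aH, aR) -> distribution over next states
    transition: Callable[[State, HumanAction, RobotAction], Dict[State, float]]
    # R(s, aH, aR; theta) -> reward (the robot computes this but never observes it)
    reward: Callable[[State, HumanAction, RobotAction, Theta], float]
    # P_0: a joint draw of (s_0, theta)
    p0: Callable[[], Tuple[State, Theta]]
    gamma: float                  # discount factor

# A toy instantiation: two states, and theta names the state the human wants to be in.
def toy_transition(s, aH, aR):
    return {"left": 0.5, "right": 0.5} if aR == "explore" else {s: 1.0}

def toy_reward(s, aH, aR, theta):
    return 1.0 if s == theta else 0.0

def toy_p0():
    return "left", random.choice(["left", "right"])

game = CIRLGame(["left", "right"], ["say-left", "say-right"], ["explore", "stay"],
                toy_transition, toy_reward, toy_p0, gamma=0.9)
```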

Behaviour in a CIRL game is defined by a pair of policies $(\pi^H, \pi^R)$ that determine the action selection for $\mathbf{H}$ and $\mathbf{R}$ respectively. Each agent gets to observe the past actions of the other agent, so in general these policies could be arbitrary functions of their observation histories: $\pi^H : [A^H \times A^R \times S]^* \times \Theta \to A^H$ and $\pi^R : [A^H \times A^R \times S]^* \to A^R$.
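
In code, the asymmetry is just that the human's policy takes $\theta$ as an argument and the robot's does not. A sketch of the two signatures (my own naming, with the current state passed separately for convenience):

```python
from typing import Callable, List, Tuple

# One step of shared history: (human action, robot action, resulting state).
HistoryStep = Tuple[str, str, str]

# pi^H : (history, current state, theta) -> human action   (the human sees theta)
HumanPolicy = Callable[[List[HistoryStep], str, str], str]
# pi^R : (history, current state) -> robot action           (the robot never sees theta)
RobotPolicy = Callable[[List[HistoryStep], str], str]

# Example robot policy: explore on the first step, then stay put.
def robot_explore_then_stay(history: List[HistoryStep], state: str) -> str:
    return "explore" if not history else "stay"

# Example human policy: signal theta through the chosen action.
def human_truthful(history: List[HistoryStep], state: str, theta: str) -> str:
    return "say-" + theta
```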

The optimal joint policy is the pair of policies that maximises value, which is the expected sum of discounted rewards. This optimal pair is the best $\mathbf{H}$ and $\mathbf{R}$ can do if they coordinate perfectly before $\mathbf{H}$ observes $\theta$. It turns out that there exist optimal policies that depend only on the current state and $\mathbf{R}$'s belief about $\theta$.
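
So the quantity being maximised is the expected discounted return under the joint policy, with the expectation also taken over $P_0$'s draw of $(s_0, \theta)$. A rough Monte-Carlo sketch of that value, truncated to a finite horizon (my own helper, compatible with the toy signatures above):

```python
import random
from typing import Dict, List, Tuple

def sample(dist: Dict[str, float]) -> str:
    """Draw one outcome from a {outcome: probability} dictionary."""
    return random.choices(list(dist), weights=list(dist.values()))[0]

def estimate_value(p0, transition, reward, human_policy, robot_policy,
                   gamma: float, horizon: int = 50, episodes: int = 1000) -> float:
    """Monte-Carlo estimate of E[ sum_t gamma^t * R(s_t, aH_t, aR_t; theta) ]."""
    total = 0.0
    for _ in range(episodes):
        state, theta = p0()                           # joint draw of (s_0, theta)
        history: List[Tuple[str, str, str]] = []
        ret = 0.0
        for t in range(horizon):
            aH = human_policy(history, state, theta)  # the human sees theta
            aR = robot_policy(history, state)         # the robot does not
            ret += (gamma ** t) * reward(state, aH, aR, theta)
            state = sample(transition(state, aH, aR))
            history.append((aH, aR, state))
        total += ret
    return total / episodes
```

With the toy definitions above, `estimate_value(toy_p0, toy_transition, toy_reward, human_truthful, robot_explore_then_stay, 0.9)` estimates one particular policy pair's value; the optimal joint policy is whatever pair maximises this quantity.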

Manipulation actions

My informal critique of CIRL is that it assumes two untrue facts: that $\mathbf{H}$ knows $\theta$ (ie knows their own values) and that $\mathbf{H}$ is perfectly rational (or noisily rational in a specific way).

Since I've been developing more machinery in this area, I can now try and state this more formally.

Assume that $M$ always starts in a fixed state $s_0$, that the reward is always zero in this initial state (so $R(s_0, \cdot, \cdot; \theta) = 0$ for all $\theta$), and that transitions from this initial state are independent of the agents' actions (so $T(s_0, a^H, a^R)$ is defined independently of the actions). This makes $\mathbf{R}$'s initial action irrelevant (since $\mathbf{R}$ has no private information to transmit).
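
As a sketch of what such a "padded" initial state looks like in toy code (names mine, not from the formalism): both the reward and the transition out of $s_0$ simply ignore the actions, so within the standard CIRL model the only possible use of an action at $s_0$ is to transmit private information, and $\mathbf{R}$ has none to transmit.

```python
from typing import Dict

S0 = "start"   # the fixed initial state s_0

def reward(s: str, aH: str, aR: str, theta: str) -> float:
    if s == S0:
        return 0.0                       # R(s_0, ., .; theta) = 0 for every theta
    return 1.0 if s == theta else 0.0    # whatever reward applies elsewhere

def transition(s: str, aH: str, aR: str) -> Dict[str, float]:
    if s == S0:
        return {"left": 0.5, "right": 0.5}   # leaving s_0 ignores both actions
    return {s: 1.0}
```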

Then let $\pi^R$ be the optimal policy for $\mathbf{R}$, and $\pi^H$ be the optimal policy for $\mathbf{H}$ (this may be either independent of or dependent on $\theta$).

Among the action set $A^R$ is a manipulative action $a_m$ (this could involve tricking the human, drugging them, brain surgery, effective propaganda, etc.). If $\mathbf{R}$ takes $a_m$ in the initial state $s_0$, the human will pursue $\pi^H_m$; otherwise, they will pursue $\pi^H$. If we designate $I_m$ as the indicator variable of $\mathbf{R}$ taking $a_m$ at $s_0$ (so it's $1$ if that happens and $0$ otherwise), then this corresponds to $\mathbf{H}$ following the compound policy:

$(1 - I_m)\pi^H + I_m \pi^H_m.$

This is well defined, as policies map past sequences of states and actions (and, for $\mathbf{H}$, the parameter $\theta$) to actions, and $I_m$ is well-defined given past actions, so the expression does map sequences of states and actions (and $\theta$) to actions, and is hence itself a valid policy for $\mathbf{H}$.
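
A sketch of that compound policy in code (names mine; by convention the human follows $\pi^H$ on the very first step, where the history is still empty and $I_m$ is not yet determined):

```python
from typing import Callable, List, Tuple

HistoryStep = Tuple[str, str, str]   # (human action, robot action, resulting state)
HumanPolicy = Callable[[List[HistoryStep], str, str], str]

A_MANIPULATE = "manipulate"   # the manipulative action a_m in A^R

def compound_human_policy(pi_H: HumanPolicy, pi_H_m: HumanPolicy) -> HumanPolicy:
    """The policy (1 - I_m) * pi^H + I_m * pi^H_m, with I_m read off the history."""
    def policy(history: List[HistoryStep], state: str, theta: str) -> str:
        # I_m = 1 iff the robot played the manipulative action on the first step.
        manipulated = bool(history) and history[0][1] == A_MANIPULATE
        return (pi_H_m if manipulated else pi_H)(history, state, theta)
    return policy
```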