I've written quite a few posts about the problems with agents learning values/rewards, and manipulating the learning process. I won't be linking to those past posts, because the aim of this post and the next one is to present a complete, clear, and easy(er) to understand overview of the whole situation. They should stand alone, and serve as an introduction to the subject. This first post will present the framework, and the value function for learnt reward functions; the next one will be looking at the properties of the learning function. I'll be using variants on a single running example throughout, to try and increase understanding.
Feel free to comment on ways the ideas could be made clearer.
0 The main points
The central insight of these posts may seem trivial by now: a learning process that the agent can influence does not have the same properties as one that it can't. Moving from "if you don't know what is right, look it up on this read-only list" to "if you don't know what is right, look it up on this read-write list (or ask the programmer)" is a huge change - especially since read-only lists can easily become read-write in practice as the agent becomes more powerful.
Why write long posts and papers about this idea, though? First of all, because it is easy, in practice, not to notice that shift. And secondly, because it is easy to define a "learning" process that doesn't behave like we'd expect.
So defining the value function and learning function of an ideal learning process allows us to know when we are facing an ideal uninfluenceable, unmanipulable learning process, and when we are not.
Furthermore, many learning processes cannot be easily made unmanipulable - especially those that involve human feedback, since that feedback is conditional on the agent's actions. Specifying the ideal case, and seeing examples of what goes wrong in the non-ideal case, can help us develop learning processes where a small amount of manipulation is allowed, traded off against a large amount of desirable learning.
As a minor example of this, in the next post, we'll see that though "uninfluenceable" is the ideal property, the weaker property of "unriggable" (which I previously called "unbiasable") is enough to get many of the desirable properties.
1 Framework and Examples
1.1 Washing and Cooking and POMDPs
The analysis will be illustrated by an ongoing example: that of a robot purchased to do domestic tasks.
The robot can specialise in cooking or washing (assume specialised robots are ultimately more effective than generalists). The robot has been turned on, but it has not yet been informed as to what its task is - it therefore has uncertainty about its reward function.
This robot will be modelled in the MDP (Markov Decision Process) and POMDP (Partially Observable Markov Decision Process) formalisms. In these formalisms, the agent occupies a state s in the state space S. In the POMDP formalism, the agent doesn't observe the state directly; instead, it sees an observation o drawn from the observation set O. The agent can then take an action a from the action set A. This transfers the agent to a new state, where it makes a new observation.
The Markov property is about how the transitions and observations are handled; basically, these can depend on the previous state and/or action, but not on anything further in the past. If we define ΔS as the set of probability distributions over a set S, then we have three rules:
$$T: S \times A \to \Delta S, \qquad O: S \to \Delta O, \qquad T_0 \in \Delta S.$$
Here, T takes the current state and the action and returns a distribution over the next state, O takes the current state and returns a distribution over observations, and T0 is the distribution over the initial state where the agent begins. Both T and O ignore any previous information.
The whole POMDP <S,O,A,T,O,T0> will be called the environment, and designated by μ.
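To make the Markov property concrete, here's a minimal Python sketch of the tuple above. The class and field names (POMDP, transition, observe, initial) are just illustrative conventions of mine, not anything from the post or a particular library; the point is that the function signatures only take the current state (and action), so nothing further in the past can matter.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Hashable, List

State = Hashable
Obs = Hashable
Action = Hashable
Dist = Dict[Hashable, float]  # a finite probability distribution: outcome -> probability


@dataclass
class POMDP:
    """The environment mu = <S, O, A, T, O, T0>."""
    states: List[State]        # S, the state space
    observations: List[Obs]    # O, the observation set
    actions: List[Action]      # A, the action set
    # The Markov property is visible in the signatures: these functions see only
    # the current state (and action), never anything further in the past.
    transition: Callable[[State, Action], Dist]  # T : S x A -> Delta S
    observe: Callable[[State], Dist]             # O : S -> Delta O (the observation function)
    initial: Dist                                # T0, an element of Delta S
```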
For our example, the robot will be evolving in the following environment:
In this grid world, there are six squares the robot could occupy (the reason for this shape will be made clear later). There are two pizzas to cook in the top square, and one mud splatter to wash in the bottom one. The state space is therefore of size 6×3×2=36, since there are 6 squares the robot could occupy, 3 levels of uncooked pizza (0, 1, or 2 uncooked pizzas), and 2 levels of mud splatter (0 or 1 splatters). There is also a 37th state, the 'episode ends' state.
In this case, there is no hidden information, so the set of states is the same as the set of observations, and O is trivial. The robot always starts in the central position, and there are always 2 uncooked pizzas and 1 mud splatter; this defines the starting state, with T0 being trivial and simply returning that starting state with certainty.
The robot has four actions, A={N,E,S,W}, which involve moving in the four directions (staying put is not an action we'll need). If the robot can move into a square, it will. If it tries to move into a wall, it turns off and the episode ends (this avoids us having to give the robot an extra action to end the episode). If it is in a room with a mud splatter or an uncooked pizza, then all pizzas in that room get cooked, or all mud splatters get washed. If the episode has ended, then it stays ended. This defines the transition function T, which is deterministic in this instance.
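Here is a rough Python sketch of this transition function. The room numbering, the adjacency map GRID_MOVES, and the choice of which rooms hold the pizzas and the mud splatter are placeholder assumptions standing in for the actual grid layout; only the move / wall / cook / wash logic follows the rules just given.

```python
from typing import Dict, NamedTuple, Optional

Action = str  # one of "N", "E", "S", "W"


class State(NamedTuple):
    room: Optional[int]  # which of the six squares the robot occupies; None = episode ended
    pizzas: int          # uncooked pizzas left (0, 1 or 2)
    mud: int             # mud splatters left (0 or 1)


# ASSUMED layout, standing in for the post's actual grid shape: rooms are numbered
# 0..5 and GRID_MOVES[room][action] gives the neighbouring room, if any.
GRID_MOVES: Dict[int, Dict[Action, int]] = {
    0: {"S": 1},
    1: {"N": 0, "S": 2},
    2: {"N": 1, "S": 3},
    3: {"N": 2, "S": 4},
    4: {"N": 3, "S": 5},
    5: {"N": 4},
}
PIZZA_ROOM, MUD_ROOM, START_ROOM = 0, 5, 2  # also placeholders

END = State(room=None, pizzas=0, mud=0)          # the 37th, 'episode ends' state
START = State(room=START_ROOM, pizzas=2, mud=1)  # T0 returns this with certainty


def transition(s: State, a: Action) -> State:
    """The deterministic transition function T of the example environment."""
    if s.room is None:                # the episode has ended: it stays ended
        return s
    new_room = GRID_MOVES[s.room].get(a)
    if new_room is None:              # tried to move into a wall: the episode ends
        return END
    # Being in a room cooks all its pizzas / washes all its mud splatters.
    pizzas = 0 if new_room == PIZZA_ROOM else s.pizzas
    mud = 0 if new_room == MUD_ROOM else s.mud
    return State(room=new_room, pizzas=pizzas, mud=mud)
```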
1.2 Rewards and reward functions
Now we need to define the possible rewards of the robot. These are given by reward functions, which map the robot's history to a numerical reward.
So what is a history? It is just a sequence of actions and observations. The agent starts in initial state s0, picks action a1, then transitions (via T) to state s1, and makes observation o1 (via O). It then picks action a2, and so on.
A history hn of length n is a sequence of n actions and n observations:
$$h_n = a_1 o_1 a_2 o_2 \ldots a_n o_n.$$
Let Hn be the set of all histories of length n. For our purposes, we'll assume that the agent only operates for a finite number of steps: let m be the maximal number of steps the agent operates for, let Hm be the set of complete (full-length) histories, and let $H = \bigcup_{i=1}^m H_i$ be the set of all histories. We might want to focus on the initial i steps of any given history hn; designate this by $h_{n,i}$ for any i≤n.
Then a reward function is something that maps each history to a numerical value in [−1,1], the reward. Let R be the set of all relevant reward functions. If the agent had a single clear reward R∈R, and followed history hm, then it would get total reward:
$$\sum_{i=1}^m R(h_{m,i}).$$
This is the total reward accumulated by reward function R over the course of history hm; first applying it to the first action and observation, then to the first two actions and observations, then to the first three... all the way to the full final history hm.
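As a quick sanity check on the definition, here is a small Python sketch of that total reward sum. Representing a history as a list of (action, observation) pairs, and the toy reward function at the end, are illustrative conventions of mine rather than anything from the formalism itself.

```python
from typing import Callable, List, Tuple

# A history h_n is represented here as a list of (action, observation) pairs,
# so the initial segment h_{n,i} is simply h[:i].
History = List[Tuple[str, str]]
RewardFunction = Callable[[History], float]  # maps a history to a reward in [-1, 1]


def total_reward(R: RewardFunction, h_m: History) -> float:
    """Sum R over the initial segments h_{m,1}, h_{m,2}, ..., h_{m,m} of h_m."""
    return sum(R(h_m[:i]) for i in range(1, len(h_m) + 1))


# Toy usage (purely illustrative): a reward function that pays 1 whenever the
# latest observation is the string "pizza cooked", and 0 otherwise.
def R_toy(h: History) -> float:
    return 1.0 if h and h[-1][1] == "pizza cooked" else 0.0


print(total_reward(R_toy, [("N", "nothing"), ("N", "pizza cooked")]))  # prints 1.0
```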
In our ongoing example, we will consider two reward functions, R={Rc,Rw}: Rc, which rewards the robot for cooking the pizzas, and Rw, which rewards it for washing away the mud splatter.