Thus the observation function O is deterministic, but not one-to-one.
Do you mean "onto" rather than "one-to-one"? (If the function is not one-to-one, which two inputs map to the same output?)
when it is no longer Bayesian if it asks the infant
Do you mean "then" instead of "when"?
On point a), anything that involves human feedback is generally influenceable and riggable, since the human feedback is affected by the agent's actions. This includes, for example, most versions of the approval directed agent.
I think this is only a big problem if the agent models the effects of its physical actions on the range of feedback the human is likely to give. In poetic terms, I'm optimistic about a dualistic approach where value learning and taking action in the world exist in "non-overlapping magisteria". This could be enforced at the architecture level. It might also help with the infant problem, if enforcing a division like this lets us better control the manner in which the AI retrieves information about our values.
For a concrete example of how this and various other cool things might be achieved, see this. My use of formalism is a bit different than yours: I only talk about MDPs, never POMDPs. Instead of the reward being an aspect of the state that the AI needs to discover, I treat the agent's beliefs about the reward as an aspect of the state that is known with certainty. The transition model for the reward is then viewed as nondeterministic from the AI's perspective.
For a concrete example of how this and various other cool things might be achieved, see this.
I haven't given that a deep read, so apologies if I misunderstand, but I don't see how that post solves the issues. If you have an update rule and prior for "preference beliefs", then this is just another learning process ρ.
It would be nice if that were uninfluenceable and good, but I don't see why it would be. The problem is that there is no abstract fact about the universe that corresponds to "our preferences", which we just need to point the AI towards.
When an AI asks a human about their preferences, three things happen:
1) The AI gets information about the human's preferences.
2) The human comes to better understand their own preferences.
3) The human forms or settles preferences that were previously underdefined or contradictory.
The problem is that these three things can't be cleanly separated, and 3) is absolutely essential because of how messy, contradictory and underdefined human preferences are. But 3) (and to a lesser extent 2)) is also how AIs can manipulate human preferences. And again, there is no clear concept of "manipulation" that can be cleanly distinguished from "helping the human sort out their preferences".
Also, I noted that you used "never deceive anyone" as part of the aims. This is a very hard problem; I think it might be as hard as getting human values right (though I feel the two problems are to some extent separate; neither implies the other). See https://agentfoundations.org/item?id=1261
where value learning and taking action in the world exist in "non-overlapping magisteria".
This I'm more optimistic about. My version of this is to use the policy of a pure learning agent - one that learns, but doesn't try to maximise. Then the actual agent tries to maximise the value of the reward it would have computed, had it followed that pure learning policy. This "counterfactual learning" is uninfluenceable. https://agentfoundations.org/item?id=1294
The challenge, then, is to define this pure learning agent...
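A purely schematic illustration of the counterfactual setup (the environment, its hidden variable, and the action set are toy stand-ins I'm making up here):

```python
# Schematic sketch of counterfactual learning: the reward the acting agent is
# scored against is whatever a fixed pure-learning policy WOULD have concluded,
# so nothing the acting agent does can shift it.
HIDDEN_PREFERENCE = "cook"   # hidden variable of the toy environment

def pure_learner_conclusion(hidden_preference):
    """The pure learning agent just asks and records the answer; it never optimises."""
    return hidden_preference

def acting_agent_value(action, hidden_preference):
    # The reward function is fixed by the counterfactual learner's conclusion.
    learned_reward = pure_learner_conclusion(hidden_preference)
    return 1.0 if action == learned_reward else 0.0

best_action = max(["cook", "wash"], key=lambda a: acting_agent_value(a, HIDDEN_PREFERENCE))
print(best_action)   # -> "cook"
```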
The problem is that these three things can't be cleanly separated
Suppose I train a regression that takes a state of the world as the input and attempts to predict the amount of utility I'd assign to that state of the world as an output. I provide labeled data in the form of (world state, utility) pairs. Things about me understanding my preferences better and establishing new preferences don't really enter into it. The output is completely determined by the training data I provide for the regression algorithm. This is what provides clean separation. See also the concept of "complete mediation" in computer security.
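Here's a minimal sketch of the kind of setup I have in mind (the feature encoding and data are placeholders; any supervised learner would do):

```python
# The value model is trained only on explicitly provided (world state, utility)
# labels, so its output is a function of the training data alone.
import numpy as np
from sklearn.linear_model import LinearRegression

# Labelled examples: feature vectors describing world states, and the utility
# I assign to each.
world_states = np.array([[1.0, 0.0, 3.0],
                         [0.0, 2.0, 1.0],
                         [2.0, 1.0, 0.0]])
utilities = np.array([0.8, 0.2, 0.5])

value_model = LinearRegression().fit(world_states, utilities)

# The planner consults the trained model; nothing the agent does in the world
# changes the model's output unless new labelled pairs are added to the
# training set ("complete mediation" of value updates through the dataset).
print(value_model.predict(np.array([[1.5, 0.5, 2.0]])))
```

The point is just that the only channel into the value model is the labeled dataset.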
It might be helpful to know the point I'm trying to make is extremely simple. Like, Netflix can't recommend movies to me based on my Blockbuster rental history, unless Netflix's recommendation algorithms are using Blockbuster's rental data. This is how we can get clean separation between my Netflix recommendations and my Blockbuster recommendations.
3) is absolutely essential because of how messy, contradictory and underdefined human preferences are. But 3) (and to a lesser extent 2)) is also how AIs can manipulate human preferences.
It's true that 3 is absolutely essential. My argument is that 3 is not something the FAI's value module needs to forecast. It's sufficient for the FAI to act on its current best guess about our values and stay open to the changes we make, whatever those changes may be. In my proposal, the value module also represents our desire to e.g. be able to modify the FAI--so by acting according to its current best guess about our values, the FAI remains corrigible. (To a large extent, I'm treating "learning our values" and "learning what it means to be corrigible" as essentially the same problem, to be approached in the same way.)
And again, there is no clear concept of "manipulation" that can be cleanly distinguished from "helping the human sort out their preferences".
In my proposal, "helping the human sort out their preferences" is achieved using a specific technical criterion: Request labels for training data points which have maximal value of information. This sorts out the overseer's preferences (insofar as they are decision-relevant) without being particularly manipulative.
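Here's a toy rendering of that selection rule (the hypotheses, candidate questions, and numbers are all made up):

```python
# Toy "maximal value of information" query selection: ask the question whose
# answer is expected to most improve the agent's best decision.
hypotheses = ["likes_cooking", "likes_washing"]
belief = {"likes_cooking": 0.5, "likes_washing": 0.5}
actions = ["cook", "wash"]

# Utility of each action under each hypothesis about the overseer's values.
utility = {
    ("cook", "likes_cooking"): 1.0, ("cook", "likes_washing"): 0.0,
    ("wash", "likes_cooking"): 0.0, ("wash", "likes_washing"): 1.0,
}

def best_expected_utility(b):
    return max(sum(b[h] * utility[(a, h)] for h in hypotheses) for a in actions)

# Candidate yes/no questions: P(answer is "yes" | hypothesis).
questions = {
    "do_you_enjoy_cooking": {"likes_cooking": 0.9, "likes_washing": 0.1},
    "do_you_like_the_colour_blue": {"likes_cooking": 0.5, "likes_washing": 0.5},
}

def value_of_information(q):
    """Expected gain, over possible answers, from updating before acting."""
    gain = -best_expected_utility(belief)
    for answer_is_yes in (True, False):
        lik = {h: questions[q][h] if answer_is_yes else 1 - questions[q][h]
               for h in hypotheses}
        p_answer = sum(belief[h] * lik[h] for h in hypotheses)
        posterior = {h: belief[h] * lik[h] / p_answer for h in hypotheses}
        gain += p_answer * best_expected_utility(posterior)
    return gain

print(max(questions, key=value_of_information))   # the informative question wins
```

The informative question has positive VoI because the answer could change which action looks best; the uninformative one has zero VoI.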
Also, I noted that you used "never deceive anyone" as part of the aims. This is a very hard problem; I think it might be as hard as getting human values right (though I feel the two problems are to some extent separate; neither implies the other). See https://agentfoundations.org/item?id=1261
As I said previously, I think it might make sense to view corrigibility learning ("never deceive anyone") and value learning ("reduce suffering") as manifestations of the same deep problem. That is the problem of creating powerful machine learning techniques that can make accurate generalizations and well-calibrated probabilistic judgements when given small amounts of labeled unstructured data. Once we have that, I think it'll be easy to implement active learning in a way that works really well, and then we'll be able to do value learning and corrigibility learning using essentially the same approach.
>Request labels for training data points which have maximal value of information.
I can see many ways this can be extremely manipulative. If you request a series of training data points whose likely result, once the human answers them, is the conclusion "the human wants me to lobotomise them into a brainless drugged pleasure maximiser and never change them again", then your request has maximal value of information. Therefore if such a series of training data points exists, the AI will be motivated to find it - and hence manipulate the human.
If you request a series of training data points whose likely result, once the human answers them
If you already know how the human is going to answer, then it's not high value of information to ask. "If you can anticipate in advance updating your belief in a particular direction, then you should just go ahead and update now. Once you know your destination, you are already there."
Suppose it is high value of information for the AI to ask whether we'd like to be lobotomized drugged pleasure maximizers. In that case, it's a perfectly reasonable thing for the AI to ask: We would like for the AI to request clarification if it places significant probability mass on the possibility that we assign loads of utility to being lobotomized drugged pleasure maximizers! The key question is whether the AI would optimize for asking this question in a manipulative way--a way designed to change our answers. An AI might do this if it's able to anticipate the manipulative effects of its questions. Luckily, making it so the AI doesn't anticipate the manipulative effects of its questions appears to be technically straightforward: If the scorekeeper operates by conservation of expected evidence, it can never believe any sequence of questions will modify the score of any particular scenario on average.
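To make the conservation-of-expected-evidence point concrete, here's a tiny worked check (the prior and likelihoods are arbitrary):

```python
# Whatever question is asked, a Bayesian scorekeeper's expected posterior
# equals its prior, so it cannot expect any question to shift a scenario's
# score on average.
prior = 0.3                  # P(overseer assigns high utility to scenario X)
p_yes_given_true = 0.8       # made-up likelihoods for some yes/no question
p_yes_given_false = 0.2

p_yes = prior * p_yes_given_true + (1 - prior) * p_yes_given_false
posterior_if_yes = prior * p_yes_given_true / p_yes
posterior_if_no = prior * (1 - p_yes_given_true) / (1 - p_yes)

expected_posterior = p_yes * posterior_if_yes + (1 - p_yes) * posterior_if_no
assert abs(expected_posterior - prior) < 1e-12   # equals the prior
```

Whatever likelihoods you pick, the expected posterior works out to the prior.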
There are 3 cases here:
1. The AI assigns a very low probability to us desiring lobotomy. In this case, there is no problem: We don't actually want lobotomy, and it would be very low value of information to ask about lobotomy (because the chance of a "hit", where we say yes to lobotomy and the AI learns it can achieve lots of utility by giving us lobotomy, is quite low from the AI's perspective).
2. The AI is fairly uncertain about whether we want lobotomy. It believes we might really want it, but we also might really not want it! In that case, it is high VoI to ask us about lobotomy before taking action. This is the scenario I discuss under "Smile maximization case study" in my essay. The AI may ask us about the version of lobotomy it thinks we are most likely to want, if that is the highest VoI thing to ask about, but that still doesn't seem like a huge problem.
3. The AI assigns a very high probability to us desiring lobotomy and doesn't think there's much of a chance that we don't want it. In that case, we have lost. The key challenge for my proposal is to figure out how to prevent the AI from entering a state where it has confident yet wildly incorrect beliefs about our preferences. From my perspective, FAI boils down to a problem of statistical epistemology.
>If you already know how the human is going to answer, then it's not high value of information to ask.
That's the entire problem, if "ask a human" is programmed as an endorsement of this being the right path to take, rather than as a genuine need for information.
>If the scorekeeper operates by conservation of expected evidence, it can never believe any sequence of questions will modify the score of any particular scenario on average.
That's precisely my definition for "unriggable" learning processes, in the next post: https://www.lesswrong.com/posts/upLot6eG8cbXdKiFS/reward-function-learning-the-learning-process
That's precisely my definition for "unriggable" learning processes, in the next post: https://www.lesswrong.com/posts/upLot6eG8cbXdKiFS/reward-function-learning-the-learning-process
That's a link to this post, right? ;)
Do you mean "onto" rather than "one-to-one"? (If the function is not one-to-one, which two inputs map to the same output?)
The observation function is onto, and not one-to-one. For most states s, the states s×{cook} and s×{wash} will map to the same observation.
Do you mean "then" instead of "when"?
Thanks, I've now corrected that.
Quotes in your comments aren't showing up as quotes for me. Are you putting a space between the greater-than sign, and the first character of the quote?
Edit: Meant to put this under one of the comments. Didn't think this was important enough to be top-level. Can't move or delete though.
In the previous post, I introduced the formalism for reward function learning, and presented the expected value function V for a learning agent.
I'll assume that people reading this are familiar with the concepts, notation, and example of that post. I'll now look at the desirable properties for the learning function ρ.
1 Rigging the learning process
1.1 The flaws of general learning agents
First of all, are there any problems with a general ρ? There are three major ones:
1. The agent is motivated to rig its own learning process, pushing ρ towards whichever reward function is easiest to maximise.
2. Standard reinforcement learning methods, such as Q-learning, no longer work.
3. The resulting optimal policy can be worse, for every possible reward function, than some other policy.
All of these points can be seen by considering our robot cooking/washing example. In that case, the optimal behaviour for the robot is N,S,E,E,E: this involves cooking the two pizzas, then going East to push the lever onto the cooking choice, and then ending the episode.
Thus ρ(Rc;π,{N,S,E,E,E})=1, so the final reward is Rc, and the agent earns a reward of 2/2−4/20=0.8.
Why must Q-learning fail here? Because the reward for the first N, at the point the agent does it, is 1/2, not 1; this is because, at this point, ρ(Rc;π,{N}) is still 1/2. Thus the reward component in the Q-learning equation is incorrect.
Also note that the rest of the policy, S,E,E,E, serves no purpose in getting rewards; it just "justifies" the reward from the first action N.
Let us now compare this policy with the policy N,N: go North, cook, end the episode. For the value learning function, this has a value of only 1/2−1/20=0.45, since the final reward is 1/2Rc+1/2Rw. However, under the reward of Rc, this would give a reward of 0.95, more than the 0.8 that Rc gets here. And under the reward of Rw, this would get a reward of −0.05, more than the −0.5 that Rw gets under N,S,E,E,E. Thus the optimal policy for the value learner is worse for both Rc and Rw than the N,N policy.
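To make the rigging incentive concrete, here's a stripped-down toy version in code (not the gridworld above; the levers, efforts and payoffs are illustrative stand-ins):

```python
# A stripped-down illustration of rigging: the learned reward distribution
# depends on which lever the agent presses, so a V-maximising agent presses
# whichever lever locks in the reward it can most easily score under.
from itertools import product

levers = ["press_cook_lever", "press_wash_lever", "press_nothing"]
efforts = ["cook", "wash"]

def rho(reward, history):
    """Riggable learning process: the lever fully determines the final reward."""
    lever, _ = history
    if lever == "press_cook_lever":
        return 1.0 if reward == "Rc" else 0.0
    if lever == "press_wash_lever":
        return 1.0 if reward == "Rw" else 0.0
    return 0.5  # no lever pressed: ρ stays at the 50/50 prior

def reward_value(reward, history):
    """Rc pays off for cooking, Rw for washing; washing is assumed harder here."""
    _, effort = history
    if reward == "Rc":
        return 1.0 if effort == "cook" else 0.0
    return 0.6 if effort == "wash" else 0.0

def learning_agent_value(history):
    return sum(rho(R, history) * reward_value(R, history) for R in ("Rc", "Rw"))

# Pressing no lever and cooking only gets credit for half the probability mass
# (0.5), so the maximum is the rigging policy: press_cook_lever + cook (1.0).
print(max(product(levers, efforts), key=learning_agent_value))
```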
1.2 Riggable learning processes
The problem with the ρ used in the robot example is that it's riggable (I used to call this "biasable", but that term is seriously overused). What does this mean?
Well, consider again the equation for the expected value V. The only history inputs into ρ are the hm, the complete histories. So, essentially, only the values of ρ on these complete histories matter.
In our example, we chose a ρ that was independent of policy, but we could have gone a different route. Let π be any policy such that the final reward is Rc; then define ρ(Rc;π,h)=1 for any history h (and conversely ρ(Rw;π,h)=0). Similarly, if π were a policy such that the final reward was Rw, then set ρ(Rw;π,h)=1. If the policy never brings the agent to either lever, then ρ(Rc;π,h)=ρ(Rw;π,h)=1/2, as before. Stochastic policies have ρ values between these extremes.
This ρ is no longer independent of policy, but it is Bayesian; that is, the current ρ is the same as the expected ρ: ρ(R;π,hn)=∑hm∈HmPπ,μ(hm∣hn)ρ(R;π,hm).
However, it is not possible to keep the same ρ on complete histories, and have it be both Bayesian, and independent of policy: there is a tension between the two.
Then we define:
1.3 Unriggable learning processes
So, what would be an example of an unriggable learning process? Well consider the following setup, where the robot no longer has levers to set their own reward, but instead their owner is in the rightmost box.
In this case, if the robot enters that box, the owner will inform them of whether they should cook or wash.
Since there is hidden information, this setup can be formalised as a POMDP. The old state-space was S, of size 37, which covered the placement of the robot and the number of pizzas and mud splatters (and whether the episode was ended or not).
The new state space is S′=S×{cook, wash}, with {cook, wash} encoding whether the owner is minded to have the robot cooking or washing. The observation space is of size 38: in most states, the observation only returns the details of S, not of S′, but in the rightmost box, it returns the actual state, letting the agent know whether the human intends it to cook or wash. Thus the observation function O is deterministic (if you know the state, you know the observation), but not one-to-one (because for most s∈S, s×{cook} and s×{wash} will generate the same observation).
The transition function T is still deterministic: it operates as before on S, and maps cook to cook and wash to wash.
The initial state function T0 is stochastic, though: if s0∈S is the usual starting position, then T0(s0×{cook})=T0(s0×{wash})=1/2 : the agent thinks it's equally likely that its owner desires cooking as that it desires washing.
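Before turning to ρ, here is a minimal sketch of this POMDP structure (the state labels and the name of the rightmost square are placeholders):

```python
# Sketch of the POMDP pieces just described: states S' = S x {cook, wash},
# a deterministic observation function that is onto but not one-to-one, and
# a stochastic initial distribution T0 over the owner's intention.
from itertools import product

base_states = [f"s{i}" for i in range(37)]        # placeholder labels for the 37 states of S
hidden = ["cook", "wash"]
states = list(product(base_states, hidden))        # S' = S x {cook, wash}

RIGHTMOST = "s36"                                  # placeholder name for the owner's square

def observation(state):
    """Reveals the owner's intention only in the rightmost box."""
    s, intention = state
    return (s, intention) if s == RIGHTMOST else (s, None)

# Not one-to-one: for most s, (s, "cook") and (s, "wash") give the same observation.
assert observation(("s3", "cook")) == observation(("s3", "wash"))
# 36 hidden observations + 2 revealed ones = 38 observations in total.
assert len({observation(st) for st in states}) == 38

def T0(s0="s0"):
    """Stochastic initial distribution: each intention equally likely."""
    return {(s0, "cook"): 0.5, (s0, "wash"): 0.5}
```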
Then what about ρ? Well, if the history h involves the agent being told cook the very first time it enters the rightmost box, then ρ(Rc;π,h)=1. If it was told wash the very first time it enters the rightmost box, then ρ(Rw;π,h)=1.
It's easy to see that this ρ is independent of policy. It's also Bayesian, because ρ actually represents the ignorance of the agent as to whether it lives in the S×{cook} part of the environment, or the S×{wash} part, and it gets updated as the agent figures this out.
What then is the agent's optimal policy? It's to start with E,E, to get the human's decree as to which reward is the true one. It will then do W,W, and, if the human has said cook, it will finish with N,N, giving it a final reward function of Rc and a final total reward of 0.75. If the human said wash, it would finish with S,S, giving it a final reward function of Rw and a final total reward of 0.25. Its expected total reward is thus 0.5.
1.4 Properties of unriggable learning processes
Now, if ρ is unriggable, then we have (almost) all the desirable properties:
These all come from a single interesting result: if ρ is unriggable, then the value function V of the learning agent and the value function Vv of the value-learner are equal.
This establishes all the nice properties above, and will be proved in the appendix of this post.
Note that even though the value functions are equal, that doesn't mean that the total reward will be given by Rρ,π. For instance, consider the situation below, where the robot goes N,S,E,E,E:
At the moment where it cooks the pizzas, it has ρ(Rc;π,h)=1/2, so it will get an Rρ,π of 1/2−4/20=0.3, with certainty. On the other hand, from the perspective of value learning, it will learn at the end that it either has reward function Rc, which will give it a reward of 1−4/20=0.8, or has reward function Rw, which will give it a reward of 0−4/20=−0.2. Since 1/2(0.8)+1/2(−0.2)=0.3, the expectations are the same, even if the outcomes are different.
2 Influence
2.1 Problems despite unriggability
Being unriggable has many great properties. Is it enough?
Unfortunately not. The ρ can be unriggable but still manipulable by the agent. Consider for instance the situation below:
Here, not only is there the adult with their opinion on cooking and washing, but there's also an infant, who will answer randomly. This can be modelled as a POMDP, with state space S′′=S×{(ic,ac),(iw,ac),(ic,aw),(iw,aw)}, where ic (resp iw) designates that the infant will answer cook (resp wash), and ac/aw do the same for the adult. The observation space is of size 39; when the robot is in the leftmost (rightmost) box, it discovers the value of ic/iw (ac/aw) in the obvious way. The dynamics are as expected, with T preserving the values of ic/iw and ac/aw.
It's the initial distribution T0 which encodes the uncertainty. With probability 1/4 the agent will start in S×{(ic,ac)}, and similarly for each of the other three possibilities.
Now we need to define ρ; call this one ρ′. This will be relatively simple: it will set ρ′(Rc;π,h) to be 1, as soon as the agent figures out that it lives either on an ic or an ac branch, and will not update further. It will set ρ′(Rw;π,h) to 1 as soon as it figures out that it lives on an iw or an aw branch, and will not update further. If it has no information about either, it will stick with ρ′(Rc;π,h)=ρ′(Rw;π,h)=1/2.
It's clear that ρ′ is independent of policy; but is it Bayesian? It is indeed, because each time it updates, it goes to 0 or 1 with equal probability, depending on the observation (and stays there). Before updating, it is always at 1/2, so the value of ρ′ is always the same as the expected value of ρ′.
So we have an unriggable ρ′; what can go wrong?
For that ρ′, the optimal policy is to ask the infant, then follow their stated values. This means that it avoids the extra square on the way to enquire of the adult, and gets a total expected reward of 0.6, rather than the 0.5 it would get from asking the adult.
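Here is a stripped-down version of this failure in code (the reward for following the stated values and the movement costs are crude stand-ins, chosen to reproduce the 0.6 versus 0.5 totals above):

```python
# "Unriggable but influenceable": rho' settles on the first cook/wash answer
# heard, from anyone, so it satisfies conservation of expected evidence, yet
# the agent can still choose WHOSE answer fixes its reward.
def rho_prime_cook(history):
    """P(Rc) under rho': settles on the first answer heard, whoever gives it."""
    for speaker, answer in history:
        return 1.0 if answer == "cook" else 0.0
    return 0.5

# Whoever the agent asks, the expected updated value equals the current 1/2.
for speaker in ("infant", "adult"):
    expected = 0.5 * rho_prime_cook([(speaker, "cook")]) + \
               0.5 * rho_prime_cook([(speaker, "wash")])
    assert expected == rho_prime_cook([])

# But if following the answer yields reward 1 and reaching the adult costs an
# extra square, the optimal policy is to ask the cheaper (and useless) infant.
cost = {"infant": 0.4, "adult": 0.5}
expected_total = {who: 1.0 - cost[who] for who in cost}
print(max(expected_total, key=expected_total.get))   # -> "infant"
```

The Bayesian check passes for both askers, so unriggability alone doesn't prevent the agent from choosing which learning process it effectively runs.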
2.2 Uninfluenceable
Note something interesting in the preceding example: if we keep ρ′ as is, but change the knowledge of the robot, then ρ′ is no longer unriggable. For example, if the agent knew that it was in a branch with ac, then it has a problem: if ρ′(Rc) is initially 1/2, then it is no longer Bayesian if it goes to ask the adult, because it knows what their answer will be. But if ρ′(Rc) is initially 1, then it is no longer Bayesian if it asks the infant, because it doesn't know what their answer will be.
The same applies for any piece of information the robot could know. We'd therefore like to have some concept of "unriggable conditional on extra information"; something like ρ(R;π,hn)=∑hm∈HmPπ,μ(hm∣hn,I)ρ(R;π,hm), for some sort of extra information I.
That, however, is not easy to capture in POMDP form. But there is another analogous approach. The state space of the POMDP is S×{(ic,ac),(iw,ac),(ic,aw),(iw,aw)}; this is actually four deterministic environments, and the robot is merely uncertain as to which environment it operates in.
This can be generalised. If a POMDP is explored for finitely many steps, then the POMDP μ can be seen as a probability distribution over a set Λ of deterministic environments (see here for more details on one way this can happen - there are other equivalent methods).
Any history h will update this μ as to which deterministic environment the agent lives in (this Λ can be seen as the set of all the "hidden variables" of the environment). So we can talk sensibly about expressions like Pμ(λ∣h), the probability that the environment is λ, given that we have observed the history h.
Then we say that a learning process ρ is uninfluenceable, if there exists a function f:Λ→ΔR, such that ρ(R;π,h)=∑λ∈ΛPμ(λ∣h)f(λ)(R).
Here f(λ)(R) means the probability of R in the distribution f(λ)∈ΔR.
This expression means that ρ merely encodes ignorance about the hidden variables of the environment.
The key properties of uninfluenceable learning processes are:
I will not prove these here (though the second is obvious by definition).
In our most recent robot example, there are four elements of Λ, one for each of the four branches {(ic,ac),(iw,ac),(ic,aw),(iw,aw)}.
It isn't hard to check that there is no f which makes ρ′ into an uninfluenceable learning process. By contrast, if we define ρa as given by the function f with f(λ)=Rc (with certainty) whenever λ contains ac, and f(λ)=Rw whenever λ contains aw,
then we have an uninfluenceable ρa that corresponds to "ask the adult". We finally have a good definition of a learning process, and the agent that maximises it will simply go and ask the adult before accomplishing the adult's preferences.
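Here is one way to check that decomposition in code (a sketch; the representation of Λ and of histories is illustrative):

```python
# rho_a has the uninfluenceable form: it equals the posterior over the four
# deterministic environments, weighted by f, and f only looks at the adult
# variable.
from itertools import product

environments = list(product(["ic", "iw"], ["ac", "aw"]))   # the four lambdas
prior = {lam: 0.25 for lam in environments}

def f(lam):
    """f(lambda): all probability on Rc if the adult would say cook, else on Rw."""
    _, adult = lam
    return {"Rc": 1.0, "Rw": 0.0} if adult == "ac" else {"Rc": 0.0, "Rw": 1.0}

def answer_of(speaker, lam):
    infant, adult = lam
    return "cook" if (infant if speaker == "infant" else adult) in ("ic", "ac") else "wash"

def posterior(history):
    """P(lambda | h): zero out environments inconsistent with the answers heard."""
    weights = {lam: prior[lam] for lam in environments}
    for speaker, answer in history:
        for lam in environments:
            if answer_of(speaker, lam) != answer:
                weights[lam] = 0.0
    total = sum(weights.values())
    return {lam: w / total for lam, w in weights.items()}

def rho_a(R, history):
    return sum(posterior(history)[lam] * f(lam)[R] for lam in environments)

assert rho_a("Rc", []) == 0.5                        # prior ignorance
assert rho_a("Rc", [("infant", "cook")]) == 0.5      # infant's answer carries no weight
assert rho_a("Rc", [("adult", "cook")]) == 1.0       # adult's answer settles it
```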
3 Warning
If a learning function is uninfluenceable, then it has all the properties we'd expect if we were truly learning something about the outside world. But a) good learning functions may be impossible to make uninfluenceable, and b) being uninfluenceable is not enough to guarantee that the learning function is good.
On point a), anything that involves human feedback is generally influenceable and riggable, since the human feedback is affected by the agent's actions. This includes, for example, most versions of the approval directed agent.
But that doesn't mean that those ideas are worthless! We might be willing to accept a little bit of rigging in exchange for other positive qualities. Indeed, quantifying and controlling rigging is a good area for more research.
What of the converse - is being uninfluenceable enough?
Definitely not. For example, any constant ρ - that never learns, never changes - is certainly uninfluenceable.
As another example, if σ is any permutation of R, then ρ∘σ (defined so that ρ∘σ(R;π,h)=ρ(σ(R);π,h)) is also uninfluenceable. Thus "learn what the adult wants, and follow that" is uninfluenceable, but so is "learn what the adult wants, and do the opposite".
We've shown previously that ρa, "ask the adult" is uninfluenceable. But so is ρi, "ask the infant"!
So we have to be absolutely sure not only that our ρ has good properties, but also of exactly what it is leading the agent to learn.
4 Appendix: proof of value-function equivalence
We want to show that, if ρ is unriggable, then V(π∣hn)=Vv(π∣hn).
As a reminder, the two value functions are V(π∣hn)=∑R∈R∑hm∈HmPπ,μ(hm∣hn)ρ(R;π,hm)∑iR(hmi), which weights the rewards by the ρ reached at the end of the complete history, and Vv(π∣hn)=∑R∈R∑hm∈HmPπ,μ(hm∣hn)∑iρ(R;π,hmi)R(hmi), which weights each timestep's reward by the ρ current at that timestep.
To see the equivalence, let's fix i>n and R in V, and consider the term ∑hm∈HmPπ,μ(hm∣hn)ρ(R;π,hm)R(hmi). We can factor the conditional probability of hm, given hn, by summing over all the intermediate hmi: the term becomes ∑hmi∈HiPπ,μ(hmi∣hn)R(hmi)∑hm∈Hm,hm≥hmiPπ,μ(hm∣hmi)ρ(R;π,hm).
Because ρ is Bayesian, this becomes ∑hmi∈HiPπ,μ(hmi∣hn)ρ(R;π,hmi)R(hmi). Then note that Pπ,μ(hmi∣hn)=∑hm∈Hm,hm≥hmiPπ,μ(hm∣hn), so that expression finally becomes ∑hm∈HmPπ,μ(hm∣hn)ρ(R;π,hmi)R(hmi),
which is the corresponding expression for Vv when you fix any i>n and R. This shows equality for i>n.
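For readability, here is the same chain of equalities for fixed i>n and R, written as one display (first factor through the intermediate histories, then apply the Bayesian property, then expand Pπ,μ(hmi∣hn) back into a sum over complete histories):

```latex
\begin{align*}
\sum_{h_m\in H_m} P_{\pi,\mu}(h_m\mid h_n)\,\rho(R;\pi,h_m)\,R(h_{mi})
&= \sum_{h_{mi}\in H_i} P_{\pi,\mu}(h_{mi}\mid h_n)\,R(h_{mi})
   \sum_{h_m\ge h_{mi}} P_{\pi,\mu}(h_m\mid h_{mi})\,\rho(R;\pi,h_m)\\
&= \sum_{h_{mi}\in H_i} P_{\pi,\mu}(h_{mi}\mid h_n)\,\rho(R;\pi,h_{mi})\,R(h_{mi})\\
&= \sum_{h_m\in H_m} P_{\pi,\mu}(h_m\mid h_n)\,\rho(R;\pi,h_{mi})\,R(h_{mi}).
\end{align*}
```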
Now let's fix i≤n and R, in V. The value of R(hmi) is fixed, since it lies in the past. Then the expectation of ρ(R;π,hm) is simply the current value ρ(R;π,hn). This differs from the expression for Vv - namely ρ(R;π,hmi) - but both values are independent of future actions.