So to sum up so far, the basic idea is to shoot for a specific expected value of something by stochastically combining policies that have expected values above and below the target. The policies to be combined should be picked from some "mostly safe" distribution rather than being whatever policies are closest to the specific target, because the absolute closest policies might involve inner optimization for exactly that target, when we really want "do something reasonable that gets close to the target."
And the "aspiration updating" thing is a way to track which policy you think you're shooting for, in a way that you're hoping generalizes decently to cases where you have limited planning ability?
Exactly! Thanks for providing this concise summary in your words.
In the next post we generalize the target from a single point to an interval to get even more freedom that we can use for increasing safety further.
In our current ongoing work, we generalize that further to the case of multiple evaluation metrics, in order to get closer to plausible real-world goals; see our teaser post.
Summary. In this post, we present the formal framework we adopt throughout the sequence, and the simplest form of the type of aspiration-based algorithms we study. We do this for a simple form of aspiration-type goal: making the expectation of some variable equal some given target value. The algorithm is based on the idea of propagating aspirations along time, and we prove that the algorithm gives a performance guarantee if the goal is feasible. Later posts discuss safety criteria, other types of goals, and variants of the basic algorithm.
In line with the working hypotheses stated in the previous post, we assume more specifically the setting described in the following paragraphs: a single episode, a fixed environment, a known world model, and an expectation-type goal.
The challenge in this post is to design a decision algorithm for tasks where the agent's goal is to make the expected (!) Total equal (!) a certain value which we call the aspiration value.[2] This is a crucial difference from a "satisficing" approach that would aim to make expected Total at least as large as the aspiration value and would thus still be happy to maximize Total. Later we consider other types of tasks, both less restrictive ones (including those related to satisficing) and more specific ones that also care about other aspects of the resulting distribution of Total or states.
It turns out that we can guarantee the fulfillment of this type of goal under some weak conditions!
Notice that a special case of such expectation-type goals is making sure that the probability of reaching a certain set of acceptable terminal states equals a certain value, because we can simply assume that each such acceptable terminal state gives 1 Delta and all others give zero Delta. We will come back to that special case in the next post when discussing aspiration intervals.
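Spelled out (with $S^{\rm acc}$ denoting the set of acceptable terminal states, a notation we use only here): if each transition into an acceptable terminal state gives Delta 1 and every other transition gives Delta 0, then the Total $\tau$ of an episode is exactly the indicator of ending in $S^{\rm acc}$, so
$$\mathbb E\,\tau \;=\; \Pr\big(\text{the episode ends in } S^{\rm acc}\big),$$
and an aspiration of, e.g., 0.9 asks for a 90% chance of ending in an acceptable state.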
In later posts, we will generalize the above assumptions in several ways, for example by replacing the single target value with an aspiration interval and by allowing several evaluation metrics instead of a single one.
Episode. We focus on a single episode for a specific task.
Environment. We assume the agent's interaction with the environment consists of an alternating sequence of discrete observations and actions. As usual, we formalize this by assuming that after each observation, the agent chooses an action and then "calls" a (potentially stochastic) function provided by the environment that returns the next observation.[3]
World model. The episode's world model, $M$, is a finite, acyclic MDP. The model's discrete time counter, $t$, advances whenever the agent makes an observation. From the sequence of observations made until time $t$, the world model constructs a representation of the state of the world, which we call the model state or simply the state and denote by $s_t$, and which also contains information about the time index itself.[4] The set of all possible (model) states is a finite set $S$. A subset of states is considered terminal. If the state is terminal, the episode ends without further action or Delta, which represents the fact that the agent becomes inactive until given a new task. Otherwise, the agent chooses an action $a_t$ from the finite set $A(s_t)$ of actions available in $s_t$. The world model predicts the consequences of each possible action by providing a probability distribution $P(s_{t+1} \mid s_t, a_t)$ for the successor state $s_{t+1}$. It also predicts the expected Delta for the task-relevant evaluation metric as a function of the state, action, and successor state: $\delta(s_t, a_t, s_{t+1})$.
The following two graphs depict all this. Entities occurring in the world model are blue, those in the real world red, green is what we want to design, and dotted things are unknown to the agent:
We hide the process of constructing the next state from the previous state, action, and next observation by simply assuming that the agent can call a version of this function that is given the current state and action and returns the successor state constructed from the next observation: $s_{t+1} \sim \mathrm{step}(s_t, a_t)$.
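To make the later algorithm sketches concrete, here is a minimal world-model interface in Python that the code sketches in this post will build on; all names (`WorldModel`, `actions`, `transition`, `expected_delta`, `is_terminal`) are our own illustrative choices rather than part of the formal framework:

```python
from dataclasses import dataclass
from typing import Dict, Hashable, List, Set, Tuple

State = Hashable
Action = Hashable

@dataclass
class WorldModel:
    """A finite, acyclic MDP in the sense used above (illustrative sketch)."""
    # actions available in each non-terminal state
    actions: Dict[State, List[Action]]
    # transition[(s, a)] = list of (probability, successor state) pairs
    transition: Dict[Tuple[State, Action], List[Tuple[float, State]]]
    # expected Delta received on the transition (s, a, s')
    expected_delta: Dict[Tuple[State, Action, State], float]
    # the set of terminal states
    terminal: Set[State]

    def is_terminal(self, s: State) -> bool:
        return s in self.terminal
```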
Goal. The goal is given by an aspiration value $\aleph_0 \in \mathbb{R}$. The task is to choose actions so that the expected Total,
$$\mathbb E\,\tau, \qquad \text{where } \tau \text{ is the sum of all Deltas received during the episode,}$$
equals $\aleph_0$.
Auxiliary notation for interval arithmetic. We will use the following abbreviations: for numbers $x, y$ and a weight $\lambda \in [0,1]$, we write $x :_\lambda y := (1-\lambda)\,x + \lambda\,y$ for the corresponding interpolation; conversely, the relative position of a point $z$ in an interval $[x, y]$ is the $\lambda$ with $z = x :_\lambda y$, i.e., $\lambda = (z - x)/(y - x)$ (which we set to $0$ if $x = y$); and clipping a point $z$ into an interval $[x, y]$ means replacing it by $\max(x, \min(y, z))$, the point of $[x, y]$ closest to $z$.
Our agent will achieve the goal by propagating its aspiration along the realized sequence of states and actions ("aspiration propagation") and, in each state, randomizing between two suitably chosen actions whose aspirations bracket the current aspiration (the actual decision making).
For both aspiration propagation and decision making, the agent uses some auxiliary quantities that it computes upfront at the beginning of the episode from the world model as follows.
Similar to what is done in optimal control theory, the agent computes[5] the $V$- and $Q$-functions of the hypothetical policy that would maximize expected Total, here denoted $\overline V$ and $\overline Q$, by solving the respective Bellman equations
$$\overline Q(s,a) \;=\; \sum_{s'} P(s' \mid s, a)\,\big(\delta(s,a,s') + \overline V(s')\big), \qquad \overline V(s) \;=\; \max_{a \in A(s)} \overline Q(s,a),$$
with $\overline V(s) = 0$ for terminal states $s$. It also computes the analogous quantities for the hypothetical policy that would minimize expected Total, denoted $\underline V$ and $\underline Q$:
$$\underline Q(s,a) \;=\; \sum_{s'} P(s' \mid s, a)\,\big(\delta(s,a,s') + \underline V(s')\big), \qquad \underline V(s) \;=\; \min_{a \in A(s)} \underline Q(s,a),$$
with $\underline V(s) = 0$ for terminal states $s$. These define the state's and action's feasibility intervals,
$$\mathcal V(s) := \big[\underline V(s),\, \overline V(s)\big], \qquad \mathcal Q(s,a) := \big[\underline Q(s,a),\, \overline Q(s,a)\big].$$
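Because the world model is a finite, acyclic MDP, these quantities can be computed exactly by backward induction, i.e., by memoized recursion over the model. A minimal Python sketch, building on the illustrative `WorldModel` interface above (function names are ours):

```python
from functools import lru_cache

def v_q_extremes(model: WorldModel):
    """Compute V/Q of the maximizing and minimizing policies by backward induction.

    Returns four functions (V_min, V_max, Q_min, Q_max); the feasibility
    intervals are then V(s) = [V_min(s), V_max(s)] and
    Q(s, a) = [Q_min(s, a), Q_max(s, a)].
    """
    def make_v_q(best):  # best is either max or min
        def Q(s, a):
            # expected Delta plus extreme continuation value, per the Bellman equation
            return sum(p * (model.expected_delta[(s, a, s2)] + V(s2))
                       for p, s2 in model.transition[(s, a)])

        @lru_cache(maxsize=None)
        def V(s):
            if model.is_terminal(s):
                return 0.0  # no further Delta after a terminal state
            return best(Q(s, a) for a in model.actions[s])

        return V, Q

    V_max, Q_max = make_v_q(max)
    V_min, Q_min = make_v_q(min)
    return V_min, V_max, Q_min, Q_max
```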
The eventual use of these intervals will be to rescale aspirations from step to step. Before we come to that, however, we can already prove a first easy fact about goals of the type "make sure that expected Total equals a certain value":
If the world model predicts state transitions correctly, then there is a decision algorithm that fulfills the goal if and only if the episode's starting aspiration $\aleph_0$ is in the initial state's feasibility interval $\mathcal V(s_0)$.
Proof. The values $\overline V(s_0)$ and $\underline V(s_0)$ are, by definition, the expected Total of the maximizing resp. minimizing policy, and hence it is clear that there cannot be a policy which attains $\aleph_0$ in expectation if $\aleph_0$ is larger than $\overline V(s_0)$ or smaller than $\underline V(s_0)$.

Conversely, assuming that $\aleph_0$ lies inside the interval $\mathcal V(s_0)$, the following procedure fulfills the goal: at the very beginning of the episode, randomize once, following the maximizing policy for the whole episode with probability $p = \big(\aleph_0 - \underline V(s_0)\big)\big/\big(\overline V(s_0) - \underline V(s_0)\big)$ and the minimizing policy with probability $1-p$ (if $\underline V(s_0) = \overline V(s_0)$, either policy will do).

This fulfills the goal, since the correctness of the model implies that, when using the maximizing resp. minimizing policy, we actually get an expected Total of $\overline V(s_0)$ resp. $\underline V(s_0)$.
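Indeed, plugging in the probability $p$ chosen above, the resulting expected Total is
$$p\,\overline V(s_0) + (1-p)\,\underline V(s_0) \;=\; \underline V(s_0) + p\,\big(\overline V(s_0) - \underline V(s_0)\big) \;=\; \aleph_0.$$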
Of course, using this "either maximize or minimize the evaluation metric" approach would be catastrophic for safety. For example, if we tasked an agent with restoring Earth's climate to a pre-industrial state, using as our evaluation metric the global mean temperature, this decision algorithm might randomize, with carefully chosen probability, between causing an ice age and inducing a runaway greenhouse effect! This is very different from what we want, which is something roughly similar to pre-industrial climate.
Another trivial idea is to randomize in each time step between the action with the largest $\overline Q(s,a)$ and the one with the smallest $\underline Q(s,a)$, using a fixed probability $p$ resp. $1-p$. Since expected Total is a continuous function of $p$ which varies between $\underline V(s_0)$ and $\overline V(s_0)$, by the Intermediate Value Theorem there exists some value of $p$ for which this algorithm gives the correct expected Total $\aleph_0$; however, it is unclear how to compute the right $p$ in practice.
If the episode consists of many time steps, this method might not lead to extreme values of the Total, but it would still make the agent take an extreme action in each time step. Intuition also suggests that the agent's behavior would be less predictable and fluctuate more than in the first version, where it consistently maximizes or minimizes after the initial randomization, and that this is undesirable.
So let us study more intelligent ways to guarantee that $\mathbb E\,\tau = \aleph_0$.
In order to avoid extreme actions, our actual decision algorithm chooses "suitable" intermediate actions which it expects to allow it to fulfill the goal in expectation. When in state $s$ with state-aspiration $\aleph(s)$, it does so by propagating the aspiration onto individual actions and then randomizing between two suitably chosen candidate actions whose action-aspirations bracket $\aleph(s)$.

More precisely: First compute (or learn) the functions $\underline V, \overline V$ and $\underline Q, \overline Q$. Then, given a state $s$ and a feasible state-aspiration $\aleph(s) \in \mathcal V(s)$ (initially $s = s_0$ and $\aleph(s_0) = \aleph_0$), proceed as follows (decision algorithm 1):

1. For each available action $a$, compute an action-aspiration $\aleph(s,a) \in \mathcal Q(s,a)$ by clipping $\aleph(s)$ into the action's feasibility interval $\mathcal Q(s,a)$.
2. Pick one "under-achieving" candidate action $a^-$ with $\aleph(s,a^-) \le \aleph(s)$ and one "over-achieving" candidate action $a^+$ with $\aleph(s,a^+) \ge \aleph(s)$ (both exist because $\aleph(s)$ is feasible).
3. Compute the probability $p$ for which $(1-p)\,\aleph(s,a^-) + p\,\aleph(s,a^+) = \aleph(s)$.
4. With probability $p$ let $a := a^+$, otherwise let $a := a^-$.
5. Execute action $a$ in the environment and observe the successor state $s'$.
6. Compute the successor's state-aspiration $\aleph(s')$ by rescaling $\aleph(s,a)$ linearly from the interval $\mathcal Q(s,a)$ to the interval $\mathcal V(s')$: if $\lambda$ is the relative position of $\aleph(s,a)$ in $\mathcal Q(s,a)$, put $\aleph(s') := \underline V(s') :_\lambda \overline V(s')$.

Repeat from step 1 with $s'$ and $\aleph(s')$ until a terminal state is reached.
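The following Python sketch implements one pass through steps 1–6, using the illustrative `WorldModel` interface and the `v_q_extremes` helper from the earlier sketches (all function and variable names are ours). The rule used in step 2 for picking the two candidates, namely taking the ones whose action-aspirations lie closest to the state-aspiration, is just one simple possibility; as discussed toward the end of the post, this choice is exactly where additional safety criteria can enter.

```python
import random

def clip(x, lo, hi):
    return max(lo, min(hi, x))

def aspiration_step(model, s, aspiration, V_min, V_max, Q_min, Q_max, env_step):
    """One pass through steps 1-6 of the decision algorithm (sketch).

    `env_step(s, a)` executes action a in the (real or simulated) environment
    and returns the successor state.  Returns (action taken, successor state,
    successor state-aspiration).
    """
    acts = model.actions[s]

    # Step 1: clip the state-aspiration into each action's feasibility interval.
    action_asp = {a: clip(aspiration, Q_min(s, a), Q_max(s, a)) for a in acts}

    # Step 2: pick an under- and an over-achieving candidate action (both exist
    # whenever the state-aspiration is feasible).  Here we simply take the ones
    # whose action-aspirations are closest to the state-aspiration.
    under = max((a for a in acts if action_asp[a] <= aspiration),
                key=lambda a: action_asp[a])
    over = min((a for a in acts if action_asp[a] >= aspiration),
               key=lambda a: action_asp[a])

    # Step 3: the mixing probability that makes the expected aspiration exact.
    lo, hi = action_asp[under], action_asp[over]
    p = 0.5 if hi == lo else (aspiration - lo) / (hi - lo)

    # Step 4: sample the action accordingly.
    a = over if random.random() < p else under

    # Step 5: execute the action and observe the successor state.
    s_next = env_step(s, a)

    # Step 6: rescale the action-aspiration linearly from the action's
    # feasibility interval to the successor state's feasibility interval.
    q_lo, q_hi = Q_min(s, a), Q_max(s, a)
    lam = 0.5 if q_hi == q_lo else (action_asp[a] - q_lo) / (q_hi - q_lo)
    new_aspiration = V_min(s_next) + lam * (V_max(s_next) - V_min(s_next))

    return a, s_next, new_aspiration
```

Iterating this from $(s_0, \aleph_0)$ until a terminal state is reached produces one episode that, as proven below, hits the aspiration in expectation.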
If we add the state- and action-aspirations as entities to the diagram of Fig. 1, we get this:
We return to the apple-shopping scenario mentioned above, which we model by the following simple state-action diagram:
Our agent starts at home and wishes to obtain a certain number of apples, which are available at a market. It can either walk to the market, which will certainly succeed, or take public transportation, which gives a 2/3 chance of arriving successfully at the market and a 1/3 chance of not reaching the market before it closes and returning home empty-handed. Of course, the agent can also decide to take the null action and simply stay home the entire day doing nothing.

Once it reaches the market, the agent can buy either one or two packs of three apples before returning home at the end of the day.
To apply Algorithm 1, we first compute the $V$- and $Q$-functions for the maximizing and minimizing policies. Since there are no possible cycles in this environment, straightforwardly unrolling the recursive definitions from the back ("backward induction") yields the following values:
| State | $\underline V$ | $\overline V$ | Action | $\underline Q$ | $\overline Q$ |
|---|---|---|---|---|---|
| at home (morning) | 0 | 6 | walk to the market | 3 | 6* |
|  |  |  | take public transport | 2 | 4 |
|  |  |  | do nothing | 0* | 0 |
| at the market | 3 | 6 | buy one pack | 3* | 3 |
|  |  |  | buy two packs | 6 | 6* |

(The asterisks mark which actions attain the state's $\underline V$ resp. $\overline V$ value.)
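For reference, the same numbers can be computed mechanically with the backward-induction sketch from above; the state and action labels below are our own shorthand for the diagram's states and actions:

```python
HOME, MARKET, DONE = "home", "market", "done"  # labels are ours

apple_model = WorldModel(
    actions={
        HOME: ["walk", "transit", "nothing"],
        MARKET: ["buy one pack", "buy two packs"],
    },
    transition={
        (HOME, "walk"):    [(1.0, MARKET)],
        (HOME, "transit"): [(2/3, MARKET), (1/3, DONE)],
        (HOME, "nothing"): [(1.0, DONE)],
        (MARKET, "buy one pack"):  [(1.0, DONE)],
        (MARKET, "buy two packs"): [(1.0, DONE)],
    },
    expected_delta={
        (HOME, "walk", MARKET): 0.0,
        (HOME, "transit", MARKET): 0.0,
        (HOME, "transit", DONE): 0.0,
        (HOME, "nothing", DONE): 0.0,
        (MARKET, "buy one pack", DONE): 3.0,   # 3 apples
        (MARKET, "buy two packs", DONE): 6.0,  # 6 apples
    },
    terminal={DONE},
)

V_min, V_max, Q_min, Q_max = v_q_extremes(apple_model)
print(V_min(HOME), V_max(HOME))        # 0.0 6.0
print(V_min(MARKET), V_max(MARKET))    # 3.0 6.0
print(Q_min(HOME, "transit"), Q_max(HOME, "transit"))  # 2.0 4.0 (up to rounding)
```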
Suppose that the agent is in the initial state (at home) and works through steps 1 to 6 above with some feasible aspiration $\aleph_0 \in [0, 6]$. Suppose now that we started with the same initial aspiration $\aleph_0$, but instead chose a different action as our over-achieving action in step 2. In this case, algorithm execution would take a different path through the environment but, as we prove below, would still yield expected Total $\aleph_0$. These examples demonstrate two of the possible cases. There is one last case, in which the feasibility intervals of both candidate actions contain $\aleph_0$; this happens for suitable choices of $\aleph_0$ in the above environment. In that case both action-aspirations coincide with the state-aspiration, so the randomization probability computed in step 3 can be chosen arbitrarily. A small simulation of such episodes is sketched below.
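To see the correctness result proven below empirically, here is a small Monte-Carlo sketch that repeatedly runs the decision step from above on this model, with an illustrative initial aspiration of 4 apples (a value we pick for the sketch); the empirical mean Total should come out close to the aspiration:

```python
import random
import statistics

def run_episode(model, s0, aspiration0, V_min, V_max, Q_min, Q_max):
    """Simulate one episode, using the world model itself as the environment."""
    def env_step(s, a):
        pairs = model.transition[(s, a)]
        return random.choices([s2 for _, s2 in pairs],
                              weights=[p for p, _ in pairs])[0]

    s, aspiration, total = s0, aspiration0, 0.0
    while not model.is_terminal(s):
        a, s_next, aspiration = aspiration_step(model, s, aspiration,
                                                V_min, V_max, Q_min, Q_max, env_step)
        # Deltas are deterministic per transition in this example, so the
        # expected Delta is also the realized one.
        total += model.expected_delta[(s, a, s_next)]
        s = s_next
    return total

random.seed(0)
ASPIRATION = 4.0  # illustrative choice, anywhere in [V_min(HOME), V_max(HOME)] = [0, 6]
totals = [run_episode(apple_model, HOME, ASPIRATION, V_min, V_max, Q_min, Q_max)
          for _ in range(10_000)]
print(statistics.mean(totals))  # should be close to 4.0
```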
Now that we have an idea of how the decision algorithm works, it is time to prove its correctness.
If the world model predicts state transitions correctly and the episode-aspiration is in the initial state's feasibility interval, $\aleph_0 \in \mathcal V(s_0)$, then decision algorithm 1 fulfills the goal $\mathbb E\,\tau = \aleph_0$.
Proof.
First, let us observe that algorithm 1 preserves feasibility: if we start from state $s_0$ with a feasible state-aspiration $\aleph(s_0) = \aleph_0 \in \mathcal V(s_0)$, then for all states $s$ and actions $a$ visited, we will have $\aleph(s) \in \mathcal V(s)$ and $\aleph(s,a) \in \mathcal Q(s,a)$.

This statement is easily seen to be true for action-aspirations, as they are required to be feasible by definition in step 1, and correctness for state-aspirations follows from the definition of $\aleph(s')$ in step 6, which interpolates between $\underline V(s')$ and $\overline V(s')$ with a weight $\lambda \in [0,1]$.

Let us now denote by $V^{\rm alg}(s, \aleph(s))$ the expected Total obtained by algorithm 1 starting from state $s$ with state-aspiration $\aleph(s)$, and likewise by $Q^{\rm alg}(s, a, \aleph(s,a))$ the expected Total obtained by starting at step 5 in algorithm 1, i.e., by executing action $a$ in state $s$ with action-aspiration $\aleph(s,a)$ and then continuing to follow the algorithm.

Since the environment is assumed to be acyclic and finite[6], we can straightforwardly prove the following claims by backwards induction:

1. $V^{\rm alg}(s, \aleph(s)) = \aleph(s)$ for every state $s$ and every feasible state-aspiration $\aleph(s) \in \mathcal V(s)$.
2. $Q^{\rm alg}(s, a, \aleph(s,a)) = \aleph(s,a)$ for every state $s$, available action $a$, and feasible action-aspiration $\aleph(s,a) \in \mathcal Q(s,a)$.

Applied to $s_0$ and $\aleph_0$, claim 1 gives $\mathbb E\,\tau = V^{\rm alg}(s_0, \aleph_0) = \aleph_0$, which is the goal.

We start with claim 1. The core reason why this is true is that, for non-terminal states $s$, we chose the right $p$ in step 3 of the algorithm:
$$V^{\rm alg}(s,\aleph(s)) \;=\; p\,Q^{\rm alg}\big(s,a^+,\aleph(s,a^+)\big) + (1-p)\,Q^{\rm alg}\big(s,a^-,\aleph(s,a^-)\big) \;=\; p\,\aleph(s,a^+) + (1-p)\,\aleph(s,a^-) \;=\; \aleph(s),$$
where the middle equality uses claim 2 for the two candidate actions.

Claim 1 also serves as the base case for our induction: if $s$ is a terminal state, then $\mathcal V(s) = [0,0]$ is an interval made up of a single point, the episode yields no further Delta, and in this case claim 1 is trivially true.

Claim 2 requires that the translation between action-aspirations, chosen before the world's reaction is observed, and subsequent state-aspirations preserves expected Total. The core reason why this works is the linearity of the rescaling operation in step 6: writing $\lambda$ for the relative position of $\aleph(s,a)$ in $\mathcal Q(s,a)$, so that $\aleph(s') = \underline V(s') :_\lambda \overline V(s')$, we get
$$Q^{\rm alg}\big(s,a,\aleph(s,a)\big) \;=\; \mathbb E_{s'}\big[\delta(s,a,s') + V^{\rm alg}(s',\aleph(s'))\big] \;=\; \mathbb E_{s'}\big[\delta(s,a,s') + \aleph(s')\big] \;=\; \underline Q(s,a) + \lambda\,\big(\overline Q(s,a) - \underline Q(s,a)\big) \;=\; \aleph(s,a),$$
where the second equality uses claim 1 for the successor states, the third uses the linearity of expectations together with the Bellman equations for $\underline Q$ and $\overline Q$, and the last one is the definition of $\lambda$.
This concludes the correctness proof of algorithm 1.
The interesting question now is what criteria the agent should use to pick the two candidate actions in step 2. We might use this freedom to choose actions in a way that increases safety, e.g., by choosing randomly as a form of "soft optimization" or by incorporating additional safety criteria. We'll explore these ideas in the next post in this sequence.
When we extend our approach later to incorporate performance and safety criteria, we might also have to assume further functions, such as the expected squared Delta (to be able to estimate the variance of Total), or some transition-internal world trajectory entropy (to be able to estimate total world trajectory entropy), etc.
Note that this type of goal is also implicitly assumed in the alternative Decision Transformer approach, where a transformer network is asked to predict an action leading to a prompted expected Total.
If the agent is a part of a more complex system of collaborating agents (e.g., a hierarchy of subsystems), the "action" might consist in specifying a subtask for another agent; that other agent would be modeled as part of the "environment" here, and the observation returned by the environment might be what that other agent reports back at the end of performing that subtask.
It is important to note that the model state $s_t$ might be an incomplete description of the true environment state, which we rarely need to refer to here.
Note that we're in a model-based planning context, so the agent directly computes these values recursively using the world model, rather than trying to learn them by acting in the real or a simulated environment using some form of reinforcement learning.
These assumptions may of course be generalized, at the cost of some hassle in verifying that expectations are well-defined, to allow cycles or infinite time horizons, but the general idea of the proof remains the same.
This illustrates what can go wrong if action-aspirations are set by rescaling the state-aspiration into the actions' feasibility intervals rather than by clipping it. E.g., assume the agent can buy zero or one apple per day, has two days time, and the aspiration is one apple. On the first day, the state's feasibility interval is $[0,2]$, the action of buying zero apples has feasibility interval $[0,1]$ and would thus get action-aspiration 0.5, and the action of buying one apple has feasibility interval $[1,2]$ and would thus get action-aspiration 1.5. To get 1 on average, the agent would thus toss a coin. So far, this is fine, but on the next day the aspiration would not be 0 or 1 in order to land at 1 apple exactly, but would be 0.5. So on the second day, the agent would again toss a coin. Altogether, it would get 0 or 1 or 2 apples, with probabilities 1/4, 1/2, and 1/4, rather than getting 1 apple for sure.
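A quick exact enumeration of this two-day example under the rescaling-based propagation described above (variable names are ours) confirms the 1/4, 1/2, 1/4 distribution:

```python
from fractions import Fraction as F
from collections import Counter

# Day 1: aspiration 1 sits at relative position 1/2 of the feasibility interval [0, 2];
# rescaled into the two actions' intervals [0, 1] and [1, 2] this gives
# action-aspirations 1/2 and 3/2, so a fair coin decides how much to buy.
p_buy_day1 = F(1, 2)

# Day 2: as worked out in the footnote, the propagated aspiration is 1/2 in both
# branches, so again a fair coin decides between buying zero and one apple.
p_buy_day2 = F(1, 2)

dist = Counter()
for day1, p1 in [(0, 1 - p_buy_day1), (1, p_buy_day1)]:
    for day2, p2 in [(0, 1 - p_buy_day2), (1, p_buy_day2)]:
        dist[day1 + day2] += p1 * p2

print(dict(dist))  # {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}
```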