Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Prelude

The aim of this post is to share my understanding of some of the conceptual and theoretical background behind implementations of the Proximal Policy Optimisation (PPO) reinforcement learning (RL) algorithm. PPO is widely used due to its stability and sample efficiency - popular applications include beating the Dota 2 world champions and aligning language models. While the PPO paper provides quite a general and straightforward overview of the algorithm, modern implementations of PPO use several additional techniques to achieve state-of-the-art performance in complex environments [1]. You might discover this if you try to implement the algorithm solely based on the paper. I try to present a coherent narrative here around these additional techniques.

I'd recommend reading parts one, two, and three of SpinningUp if you're new to reinforcement learning. There are a few longer-form educational resources that I'd recommend if you'd like a broader understanding of the field [2], but this list isn't comprehensive. You should be familiar with common concepts and terminology in RL [3]. For clarity, I'll try to spell out any jargon I use here.

Recap

Policy Gradient Methods

PPO is an on-policy reinforcement learning algorithm. It directly learns a stochastic policy function parameterised by $\theta$ representing the likelihood of action $a$ in state $s$, $\pi_\theta(a \mid s)$. Consider that we have some differentiable function, $J(\pi_\theta)$, which is a continuous performance measure of the policy $\pi_\theta$. In the simplest case, we have $J(\pi_\theta) = \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)]$, which is known as the return [4] over a trajectory [5], $\tau$. PPO is a kind of policy gradient method [6] which directly optimizes the policy parameters $\theta$ against $J(\pi_\theta)$. The policy gradient theorem shows that:

$$\nabla_\theta J(\pi_\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[\sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, R(\tau)\right]$$

In other words, the gradient of our performance measure with respect to our policy parameters points in the direction of maximising the return $R(\tau)$. Crucially, this shows that we can estimate the true gradient using an expectation of the sample gradient - the core idea behind the REINFORCE [7] algorithm. This is great. This expression has the more general form which substitutes $R(\tau)$ for some lower-variance estimator of the total expected reward, $\Phi_t$ [8]:

$$\nabla_\theta J(\pi_\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[\sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \Phi_t\right]$$

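To make the sample-gradient idea concrete, here's a minimal sketch (not from the original post) of a REINFORCE-style surrogate loss for a small categorical policy in PyTorch. The network sizes and the names `reinforce_loss`, `obs`, `actions`, and `phi` are illustrative; minimising this loss performs gradient ascent on the expression above.

```python
import torch
import torch.nn as nn

# Illustrative categorical policy: 4-dimensional observations, 2 actions.
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))

def reinforce_loss(obs, actions, phi):
    """obs: [T, 4] states, actions: [T] sampled actions, phi: [T] weights
    (returns or advantages). The gradient of the negated mean matches the
    policy gradient expression above."""
    logits = policy(obs)
    log_probs = torch.distributions.Categorical(logits=logits).log_prob(actions)
    return -(log_probs * phi).mean()
```
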
Modern implementations of PPO make the choice of $\Phi_t = A^{\pi}(s_t, a_t)$, the advantage function. This function estimates the advantage of a particular action in a given state over the expected value of following the policy, i.e. how much better is taking this action in this state over all other actions? Briefly described here, the advantage function takes the form

$$A^{\pi}(s, a) = Q^{\pi}(s, a) - V^{\pi}(s)$$

where $V^{\pi}(s)$ is the state-value function, and $Q^{\pi}(s, a)$ is the state-action value function, or Q-function [9]. I've found it easier to intuit the nuances of PPO by following the narrative around its motivations and predecessor. PPO iterates on the Trust Region Policy Optimization (TRPO) method, which constrains the objective function with respect to the size of the policy update. The TRPO objective function is defined as [10][11]:

$$\underset{\theta}{\text{maximize}} \;\; \hat{\mathbb{E}}_t\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}\, \hat{A}_t\right] \quad \text{subject to} \quad \hat{\mathbb{E}}_t\left[\text{KL}\big[\pi_{\theta_{\text{old}}}(\cdot \mid s_t),\, \pi_\theta(\cdot \mid s_t)\big]\right] \le \delta$$

where KL is the Kullback-Leibler divergence (a measure of distance between two probability distributions), and the size of the policy update is defined as the ratio between the new policy and the old policy:

$$r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}$$

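In code, this ratio is usually computed from stored log-probabilities rather than raw probabilities, for numerical stability. A small sketch, assuming `logp_old` was recorded when the trajectory was collected and `logp_new` comes from the policy currently being optimised:

```python
import torch

def policy_ratio(logp_new: torch.Tensor, logp_old: torch.Tensor) -> torch.Tensor:
    # r_t(theta) = pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t)
    #            = exp(log pi_theta(a_t|s_t) - log pi_theta_old(a_t|s_t))
    return torch.exp(logp_new - logp_old)
```
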
Policy gradient methods optimise policies through (ideally small) iterative gradient updates to the parameters $\theta$. The old policy, $\pi_{\theta_{\text{old}}}$, is the one used to generate the current trajectory, and the new policy, $\pi_\theta$, is the policy currently being optimised [12]. If the advantage is positive, then the new policy becomes greedier relative to the old policy [13], and we have $r_t(\theta) > 1$ - the inverse applies for negative advantage, where we have $r_t(\theta) < 1$. The core principle of TRPO (and PPO) is to prevent large parameter updates which occur due to variance in environment and training dynamics, thus helping to stabilise optimisation. Updating using the ratio between old and new policies in this way allows for selective reinforcement or penalisation of actions whilst grounding updates relative to the original, stable policy [14].

PPO

PPO modifies the TRPO objective by replacing the hard KL constraint with a clipped surrogate objective:

$$L^{\text{CLIP}}(\theta) = \hat{\mathbb{E}}_t\left[\min\left(r_t(\theta)\,\hat{A}_t,\; \text{clip}\big(r_t(\theta),\, 1 - \epsilon,\, 1 + \epsilon\big)\,\hat{A}_t\right)\right]$$

To break this somewhat dense equation down, we can substitute our earlier expression for $r_t(\theta)$ in:

$$L^{\text{CLIP}}(\theta) = \hat{\mathbb{E}}_t\left[\min\left(\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}\,\hat{A}_t,\; \text{clip}\left(\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)},\, 1 - \epsilon,\, 1 + \epsilon\right)\hat{A}_t\right)\right]$$

The clip term constrains the policy ratio, $r_t(\theta)$, to lie between $1 - \epsilon$ and $1 + \epsilon$ [15]. Together with the $\min$, this disincentivises greedy policy updates in the direction of improving the objective: once the new policy places sufficiently higher density on an action with positive advantage (or sufficiently lower density on an action with negative advantage) compared to the old policy, the objective stops improving. This is why PPO is considered to place a pessimistic lower bound on the unclipped objective. The contribution of a sample saturates at $(1 + \epsilon)\hat{A}_t$ or $(1 - \epsilon)\hat{A}_t$, depending on whether the advantage function is positive or negative, respectively.
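
A minimal sketch of the clipped objective as it's typically implemented (returned as a negated loss so that gradient descent maximises it). The function name and the default $\epsilon$ of 0.2 are illustrative:

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    ratio = torch.exp(logp_new - logp_old)                        # r_t(theta)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Pessimistic bound: take the minimum, then negate for gradient descent.
    return -torch.min(unclipped, clipped).mean()
```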

So far, we've introduced the concept of policy gradients, objective functions in RL, and the core concepts behind PPO. Reinforcement learning algorithms place significant emphasis on reducing variance during optimisation [16]. This becomes apparent in estimating the advantage function, which relies on rewards obtained during a trajectory. Practical implementations of policy gradient algorithms take additional steps to trade variance for bias here by also estimating an on-policy state-value function $V^{\pi}(s)$, which is the expected return an agent receives from starting in state $s$ and following policy $\pi$ thereafter. Jointly learning a value function and policy function in this way is known as the Actor-Critic framework [17].
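
In practice the actor and critic are often two heads on one network. A sketch of such an architecture (the class name, layer sizes, and shared trunk are all illustrative choices on my part, not requirements of PPO):

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.actor = nn.Linear(hidden, n_actions)  # logits of pi_theta(a|s)
        self.critic = nn.Linear(hidden, 1)         # V_phi(s)

    def forward(self, obs):
        h = self.trunk(obs)
        dist = torch.distributions.Categorical(logits=self.actor(h))
        value = self.critic(h).squeeze(-1)
        return dist, value
```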

Value Learning and Actor-Critics

Value-function learning involves approximating the future rewards of following a policy from a current state. The value function is learned alongside the policy, and the simplest method is to minimise a mean-squared-error objective against the discounted return [18], $\hat{R}_t = \sum_{l=0}^{\infty} \gamma^l r_{t+l}$:

$$L^{V}(\phi) = \hat{\mathbb{E}}_t\left[\big(V_\phi(s_t) - \hat{R}_t\big)^2\right]$$

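A sketch of both pieces: computing discounted returns with a backward scan, and the mean-squared-error value loss. The discount of 0.99 and the function names are illustrative, and for simplicity this ignores episode boundaries within the batch.

```python
import torch

def discounted_returns(rewards: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    # R_t = sum_{l >= 0} gamma^l r_{t+l}, computed by scanning backwards.
    returns = torch.zeros_like(rewards)
    running = torch.tensor(0.0)
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def value_loss(values: torch.Tensor, returns: torch.Tensor) -> torch.Tensor:
    # Mean-squared error between V_phi(s_t) and the empirical discounted return.
    return ((values - returns) ** 2).mean()
```
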
We can now use this to learn a lower-variance estimator of the advantage function [19]:

$$\hat{A}_t = Q^{\pi}(s_t, a_t) - V^{\pi}(s_t) \approx r_t + \gamma V_\phi(s_{t+1}) - V_\phi(s_t)$$

We end up with an estimator of the advantage function that solely relies on sampled rewards and our learned approximate value function. In fact, our expression shows that the advantage function can be estimated with the temporal-difference [20] residual error, $\delta_t$, of the value function:

$$\hat{A}_t = \delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$$

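These one-step TD residuals are straightforward to compute across a whole trajectory. A sketch, assuming `values` holds $V(s_0), \dots, V(s_{T-1})$ and `last_value` is the bootstrap value $V(s_T)$ (names illustrative):

```python
import torch

def td_residuals(rewards, values, last_value, gamma=0.99):
    # delta_t = r_t + gamma * V(s_{t+1}) - V(s_t), for t = 0 .. T-1.
    next_values = torch.cat([values[1:], last_value.reshape(1)])
    return rewards + gamma * next_values - values
```
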
Generalised Advantage Estimation (GAE)

There's one more thing we can do to reduce variance. The current advantage estimator estimates the reward using samples collected from a single trajectory. However, there is a huge amount of variance in the possible trajectories that may evolve from a given state. These trajectories may look similar in the short term (except for policies early in the optimisation process, which are far more random), but longer-term rewards can vary significantly. We could consider a lower-variance estimate of the advantage by only sampling rewards near to the present, and replacing longer-term reward estimates with our lower-variance, biased value function estimates. This is the central idea behind $n$-step returns [21]. If we choose $n$ correctly, i.e. before trajectories start to diverge significantly, we can obtain better samples of near-term rewards.

Consider the term $r_t + \gamma V(s_{t+1})$ in our estimation of the advantage function. We take the initial reward observed from the environment, $r_t$, then bootstrap future estimated discounted rewards, and subtract a baseline estimated value function for the state [22]. For a single time-step, this can be denoted as:

$$\hat{A}_t^{(1)} = r_t + \gamma V(s_{t+1}) - V(s_t)$$

What if we sample rewards from multiple timesteps, and then estimate the future discounted rewards from then on? Let's denote the $n$-step advantage estimator $\hat{A}_t^{(n)}$ as follows:

$$\hat{A}_t^{(n)} = r_t + \gamma r_{t+1} + \dots + \gamma^{n-1} r_{t+n-1} + \gamma^n V(s_{t+n}) - V(s_t)$$

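As a concrete (illustrative) sketch, the $n$-step advantage for a single timestep can be computed like this, assuming `values` holds $V(s_0), \dots, V(s_T)$ including the bootstrap value:

```python
import torch

def n_step_advantage(rewards, values, t, n, gamma=0.99):
    # A_t^(n) = r_t + gamma*r_{t+1} + ... + gamma^(n-1)*r_{t+n-1}
    #           + gamma^n * V(s_{t+n}) - V(s_t)
    horizon = min(n, len(rewards) - t)                  # truncate at the trajectory's end
    ret = sum(gamma ** l * rewards[t + l] for l in range(horizon))
    ret = ret + gamma ** horizon * values[t + horizon]  # bootstrap with the critic
    return ret - values[t]
```
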
$n = 1$ and $n = \infty$ now become the extreme cases of $n$-step return estimation. Observe that for $n = \infty$ we recover an unbiased, high-variance expectation of the infinite-horizon discounted return, minus our baseline estimated value function. GAE [23] introduces a discount factor, $\lambda$, to take an exponentially weighted average over every $n$-th step estimator. Using notation from the paper [24] we can derive a generalized advantage estimator for cases where $0 < \lambda < 1$:

$$\hat{A}_t^{\text{GAE}(\gamma, \lambda)} = \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta_{t+l}$$

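The exponentially weighted sum has a convenient recursive form, $\hat{A}_t = \delta_t + \gamma\lambda \hat{A}_{t+1}$, which lets us compute GAE for a whole trajectory in a single backward pass. A sketch (the default $\lambda = 0.95$ is a commonly used value rather than a requirement, and episode termination handling is omitted for brevity):

```python
import torch

def gae(rewards, values, last_value, gamma=0.99, lam=0.95):
    # `values` holds V(s_0..s_{T-1}); `last_value` is the bootstrap value V(s_T).
    T = len(rewards)
    advantages = torch.zeros(T)
    next_value = last_value
    running = torch.tensor(0.0)
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * next_value - values[t]  # TD residual delta_t
        running = delta + gamma * lam * running              # A_t = delta_t + gamma*lambda*A_{t+1}
        advantages[t] = running
        next_value = values[t]
    return advantages
```
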
Great. As you may have noticed, there are two special cases of this expression - $\lambda = 0$ and $\lambda = 1$:

$$\hat{A}_t^{\text{GAE}(\gamma, 0)} = \delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$$

$$\hat{A}_t^{\text{GAE}(\gamma, 1)} = \sum_{l=0}^{\infty} \gamma^l \delta_{t+l} = \sum_{l=0}^{\infty} \gamma^l r_{t+l} - V(s_t)$$

We see that $\lambda = 0$ recovers the original biased, low-variance actor-critic advantage estimator. For $\lambda = 1$, we obtain a low-bias, high-variance advantage estimator, which is simply the discounted return minus our baseline estimated value function. For $0 < \lambda < 1$, we obtain an advantage estimator which allows control of the bias-variance tradeoff. The authors note that the two parameters, $\gamma$ and $\lambda$, control variance in different ways. Lower values of $\gamma$ discount future rewards, which will always result in a biased estimate of the return, since there's an implicit prior that future rewards are less valuable; $\gamma$ introduces this bias regardless of how accurate our value function is. On the other hand, $\lambda < 1$ introduces bias only when the value function is inaccurate, so lower values of $\lambda$ reduce variance at little cost when the value function is accurate. In other words, when $V = V^{\pi}$, we obtain an unbiased estimator of the advantage function for any $\lambda$.
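
A quick numerical sanity check of the two special cases, reusing the illustrative `gae`, `td_residuals`, and `discounted_returns` sketches from above (with a bootstrap value of zero so the comparison is exact):

```python
import torch

rewards = torch.tensor([1.0, 0.0, 2.0])
values = torch.tensor([0.5, 0.4, 0.3])
last_value = torch.tensor(0.0)

# lambda = 0 recovers the one-step TD residuals...
assert torch.allclose(gae(rewards, values, last_value, lam=0.0),
                      td_residuals(rewards, values, last_value))
# ...and lambda = 1 recovers the discounted return minus the value baseline.
assert torch.allclose(gae(rewards, values, last_value, lam=1.0),
                      discounted_returns(rewards) - values)
```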

Pseudocode

Tying everything together, we can show the general process of updating our policy and value function using PPO for a single trajectory of fixed length $T$:

Given policy estimator $\pi_\theta$, value function estimator $V_\phi$, $\gamma$, $\lambda$, and trajectory length $T$ time-steps.

For iteration $= 1, 2, \dots$:

Run policy $\pi_{\theta_{\text{old}}}$ in the environment and collect rewards, observations, actions, and value function estimates $\{r_t, s_t, a_t, V_\phi(s_t)\}$ where $t \in [0, T]$.

Compute $\delta_t = r_t + \gamma V_\phi(s_{t+1}) - V_\phi(s_t)$ for all $t$.

Compute the generalized advantage estimate, $\hat{A}_t^{\text{GAE}(\gamma, \lambda)} = \sum_{l} (\gamma\lambda)^l \delta_{t+l}$, for all $t$.

Sample log-probabilities $\log \pi_{\theta_{\text{old}}}(a_t \mid s_t)$ from stored actions and states.

Optimise $\theta$ using $L^{\text{CLIP}}(\theta)$, the PPO objective [25].

Optimise $\phi$ using $L^{V}(\phi)$, using the stored $V_\phi(s_t)$ and $\hat{A}_t$ [26].

... repeat!
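
To make the update step concrete, here's a compact sketch of the inner optimisation, reusing the illustrative `ActorCritic`, `gae`, and `ppo_clip_loss` pieces from earlier. The epoch count, minibatch size, and value-loss coefficient are common-but-arbitrary choices, not values prescribed by this post:

```python
import torch

def ppo_update(model, optimiser, obs, actions, logp_old, advantages, value_targets,
               clip_eps=0.2, epochs=4, minibatch_size=64, vf_coef=0.5):
    # `value_targets` is typically the advantage plus the stored value estimate,
    # i.e. a bootstrapped return target for the critic.
    T = obs.shape[0]
    for _ in range(epochs):
        for idx in torch.randperm(T).split(minibatch_size):
            dist, values = model(obs[idx])
            logp_new = dist.log_prob(actions[idx])
            policy_loss = ppo_clip_loss(logp_new, logp_old[idx], advantages[idx], clip_eps)
            vf_loss = ((values - value_targets[idx]) ** 2).mean()
            loss = policy_loss + vf_coef * vf_loss
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
```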

Feedback and corrections are very welcome and appreciated. I'll follow this post with implementation details soon. For now, I'd recommend the excellent resource by Shengyi Huang on reproducing PPO implementations from popular RL libraries. If you're able to implement the policy update from the PPO paper, hopefully there's enough detail here that you're able to reproduce other implementations of PPO. Thank you so much for reading.


  1. Procgen, Karl Cobbe et al.
    Atari, OpenAI Gymnasium ↩︎

  2. A (Long) Peek into Reinforcement Learning, Lilian Weng
    Artificial Intelligence: A Modern Approach, Stuart Russell and Peter Norvig
    Reinforcement Learning: An Introduction, Sutton and Barto
    CS285 at UC Berkeley, Deep Reinforcement Learning, Sergey Levine ↩︎

  3. Lilian Weng's list of RL notation is very useful here ↩︎

  4. The return represents the sum of rewards achieved over some time frame. This can be over a fixed timescale, i.e. the finite-horizon return, or over all time, i.e. the infinite-horizon return. ↩︎

  5. A trajectory, , (also known as an episode or rollout) describes a sequence of interactions between the agent and the environment. ↩︎

  6. Policy Gradient Algorithms, Lilian Weng
    Policy Gradients, CS285 UC Berkeley, Lecture 5, Sergey Levine ↩︎

  7. Reinforcement Learning: An Introduction, 13.3 REINFORCE: Monte Carlo Policy Gradient, Sutton and Barto ↩︎

  8. Section 2, High Dimensional Continuous Control Using Generalized Advantage Estimation, Schulman et al., 2016 ↩︎

  9. The value function, $V^{\pi}(s)$, describes the expected return from starting in state $s$. Similarly, the state-action value function, $Q^{\pi}(s, a)$, describes the expected return from starting in state $s$ and taking action $a$. See also Reinforcement Learning: An Introduction, 3.7 Value Functions, Sutton and Barto ↩︎

  10. I'm omitting the superscript $\pi$ from the value and advantage functions for brevity from here on. ↩︎

  11. See Proximal Policy Optimization Algorithms, Section 2.2, Schulman et al. and Trust Region Policy Optimization, Schulman et al. for further details on the constraint. ↩︎

  12. Note: at the start of a series of policy update steps, we have $\pi_\theta = \pi_{\theta_{\text{old}}}$, so $r_t(\theta) = 1$. ↩︎

  13. The new policy will place higher density on such actions relative to the old policy, i.e. $\pi_\theta(a_t \mid s_t) > \pi_{\theta_{\text{old}}}(a_t \mid s_t)$. ↩︎

  14. Consider optimising our policy directly with the policy gradient expression above - without normalising the update w.r.t. the old policy, gradient updates to the policy can be catastrophically large. ↩︎

  15. $\epsilon$ is usually set to $0.2$. ↩︎

  16. Stochasticity in environment dynamics, delayed rewards, and exploration-exploitation tradeoffs all contribute to unstable training. ↩︎

  17. Reinforcement Learning: An Introduction, 6.6 Actor-Critic Methods, Sutton and Barto
    A (Long) Peek into Reinforcement Learning, Value Function, Lilian Weng ↩︎

  18. The discounted return down-weights rewards obtained in the future by an exponential discount factor $\gamma \in (0, 1)$, i.e. rewards in the distant future aren't worth as much as near-term rewards. ↩︎

  19. The on-policy Q-function is defined as the expected reward of taking action $a$ in state $s$, and following policy $\pi$ thereafter: $Q^{\pi}(s, a) = \mathbb{E}_{\tau \sim \pi}\left[R(\tau) \mid s_0 = s, a_0 = a\right]$ ↩︎

  20. Reinforcement Learning: An Introduction, 6. Temporal-Difference Learning, Sutton and Barto ↩︎

  21. DeepMimic Supplementary A, Peng et al.
    Reinforcement Learning: An Introduction, 7.1 n-step TD Prediction, Sutton and Barto ↩︎

  22. Daniel Takeshi's post on using baselines to reduce variance of gradient estimates is useful here. ↩︎

  23. High Dimensional Continuous Control Using Generalized Advantage Estimation, Schulman et al., 2016
    Notes on the Generalized Advantage Estimation Paper, Daniel Takeshi ↩︎

  24. The identity $\hat{A}_t^{(n)} = \sum_{l=0}^{n-1} \gamma^l \delta_{t+l}$ is useful to note here. ↩︎

  25. This is usually done over minibatch steps for several epochs. $\pi_{\theta_{\text{old}}}$ is fixed as the initial policy at the start of the trajectory, and $\pi_\theta$ is taken as the policy at the current optimisation step. ↩︎

  26. Similarly to the policy optimisation step, this is also done over minibatch steps. $V_\phi$ is taken as the value function at the current optimisation step, i.e. value estimates are bootstrapped. ↩︎


Comments

Thank you for explaining PPO. In the context of AI alignment, it may be worth understanding in detail because it's the core algorithm at the heart of RLHF. I wonder if any of the specific implementation details of PPO or how it's different from other RL algorithms have implications for AI alignment. To learn more about PPO and RLHF, I recommend reading this paper: Secrets of RLHF in Large Language Models Part I: PPO.