Written during the SERI MATS program under the joint mentorship of John Wentworth, Nicholas Kees, and Janus.

[Image: a blindfolded robot and a blindfolded human playing Hungry Hungry Hippos.]


I am surrounded, incessantly, by other people, possessing a range of different capabilities, striving for goals both common and opposing — and, what is more, they are incessantly surrounded by me. Fangs pierce the Cartesian boundary. Outsideness leaks in. Insideness leaks out. It is the source of all joy and misery in my life.

Anyway. It's the interaction of two or more agents that characterises game theory.[1] Classical game theory says that when two utility-maximisers interact in a shared environment, they will choose options in nash equilibrium, where a profile of options is in nash equilibrium if no player can increase their own utility by unilaterally changing their option.

What does higher-order game theory (which dispenses with utility functions and maximisation) say about the interaction between agents? Would a satisficer cooperate with a quantiliser in prisoner's dilemma?

Let's find out.


All you need to know is Definition 3 from [Part 1], and some basic classical game theory.

Definition 3: Let $X$ be any set of options and $R$ be any set of payoffs. An optimiser is any functional $\psi \in ((X \to R) \to \mathcal{P}(X))$. An $R$-task is any function $f : X \to R$. An option $x \in X$ is $\psi$-optimal for a task $f$ if and only if $x \in \psi(f)$.

Let's review the textbook definition of classical simultaneous games.

Definition 5.  

  1. Let $(X_i)_{i \in I}$ be a family of sets indexed by a set $I$ of players.

    A classical simultaneous game over $(X_i)_{i \in I}$ is a function $G : \prod_{i \in I} X_i \to \mathbb{R}^I$.
  2. The option-profiles of $G$ are the elements of the product $\prod_{i \in I} X_i$.
  3. The $i$-th deviation map $d_i : \left(\prod_{j \in I} X_j\right) \times X_i \to \prod_{j \in I} X_j$ is given by $d_i(x, x_i')_j = x_j$ for all $j \neq i$, and $d_i(x, x_i')_i = x_i'$.

    In the finite case, where $I = \{1, \ldots, n\}$, then $d_i(x, x_i') = (x_1, \ldots, x_{i-1}, x_i', x_{i+1}, \ldots, x_n)$.

    The deviation map $d_i$ describes how the $i$th player can change an option-profile $x$ by unilaterally changing their own option. Note that the deviation maps depend only on the option spaces, not on the game itself.
  4. The best-response function of $G$ is the function $BR : \prod_{i \in I} X_i \to \mathcal{P}\left(\prod_{i \in I} X_i\right)$ defined by $BR(x) = \prod_{i \in I} \operatorname{argmax}\left(\lambda x_i'.\, G(d_i(x, x_i'))_i\right)$, where $d_i$ is the $i$th deviation map.
  5. Finally, the nash equilibria of $G$ is the set of option-profiles $x$ such that $x \in BR(x)$.
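Definition 5 can be sketched in a few lines for finite games. This is a minimal brute-force illustration, not the post's own code; the helper names and the stag-hunt payoff numbers are my own inventions.

```python
from itertools import product

def deviate(x, i, xi):
    """The i-th deviation map: replace player i's option in profile x, keep the rest."""
    return x[:i] + (xi,) + x[i+1:]

def classical_nash(option_sets, G):
    """G maps an option-profile to a tuple of real payoffs, one per player."""
    equilibria = []
    for x in product(*option_sets):
        # x is nash iff no player can raise their own payoff by unilateral deviation
        if all(G(x)[i] >= max(G(deviate(x, i, xi))[i] for xi in option_sets[i])
               for i in range(len(option_sets))):
            equilibria.append(x)
    return equilibria

# Stag hunt (0 = stag, 1 = hare), payoffs chosen for illustration
payoffs = {(0, 0): (3, 3), (0, 1): (0, 2), (1, 0): (2, 0), (1, 1): (2, 2)}
print(classical_nash([[0, 1], [0, 1]], lambda x: payoffs[x]))  # [(0, 0), (1, 1)]
```

Note that the check never mentions beliefs or common knowledge — it only tests the fixed-point condition $x \in BR(x)$, as the next paragraph emphasises.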

Note that the fact that the chosen option-profile is in nash equilibrium follows logically from the assumption that each player's choice is utility-maximising for the task they face, plus the assumption that, if the option-profile is $x$, then the $i$th player faces the task $\lambda x_i'.\, G(d_i(x, x_i'))_i$.

It doesn't require the assumption that the players have "common knowledge" of each other's rationality, or that the players have any kinds of beliefs whatsoever. That being said, an economist might explain the optimality of the choices by assuming that each player is rational and knows all the relevant facts (i.e. the rules of the game, the rationality of their peers, and the beliefs of their peers). However, an evolutionary biologist might appeal to natural selection to explain optimality. An ML engineer might appeal to SGD, etc.

Higher-Order Nash Equilibrium

By inspection, we can see that at no point does the definition of the classical nash equilibrium appeal to any special properties of $\mathbb{R}$ or of $\operatorname{argmax}$. This suggests that we can effortlessly generalise the definition of nash equilibrium to any family of optimisers, rather than only utility-maximisers.

Definition 6

  1. Let $(X_i)_{i \in I}$ be a family of sets indexed by a set $I$ of players. Let $R$ be a set of payoffs, and let $(\psi_i)_{i \in I}$ be a family of optimisers with $\psi_i \in ((X_i \to R) \to \mathcal{P}(X_i))$.[2]

    A higher-order simultaneous $(\psi_i)_{i \in I}$-game is a function $G : \prod_{i \in I} X_i \to R$.
  2. Like before, the option-profiles of $G$ are the elements of the product $\prod_{i \in I} X_i$.
  3. Like before, the $i$-th deviation map $d_i : \left(\prod_{j \in I} X_j\right) \times X_i \to \prod_{j \in I} X_j$ is given by $d_i(x, x_i')_j = x_j$ for all $j \neq i$, and $d_i(x, x_i')_i = x_i'$.
  4. The $(\psi_i)_{i \in I}$-best-response function of $G$ is the function $BR : \prod_{i \in I} X_i \to \mathcal{P}\left(\prod_{i \in I} X_i\right)$ defined by $BR(x) = \prod_{i \in I} \psi_i\left(\lambda x_i'.\, G(d_i(x, x_i'))\right)$, where $d_i$ is the $i$th deviation map.
  5. Like before, the $(\psi_i)_{i \in I}$-nash equilibria of $G$ is the set of option-profiles $x$ such that $x \in BR(x)$.

When clear from context, I'll just say game, best-response, and nash equilibrium.

The intuition is this —

  • Suppose that $x$ is the option-profile that's collectively chosen.
  • Then player $i$ can achieve a payoff $G(d_i(x, x_i'))$ by unilaterally changing their option to $x_i'$, because $d_i(x, x_i')$ is the profile that would result.
  • So player $i$ is faced with the task $\lambda x_i'.\, G(d_i(x, x_i')) : X_i \to R$.
  • Their choice $x_i$ must be $\psi_i$-optimal for this task, so $x_i \in \psi_i(\lambda x_i'.\, G(d_i(x, x_i')))$.
  • If we assume $\psi_i$-optimality for every player conjunctively, then $x \in \prod_{i \in I} \psi_i(\lambda x_i'.\, G(d_i(x, x_i'))) = BR(x)$.
  • If we know that the players chose optimally, and we know nothing else, then the possible option-profiles are $\{x \in \prod_{i \in I} X_i : x \in BR(x)\}$.
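The intuition above translates directly into code. Here's a sketch of Definition 6 for finite option sets, assuming the anchor-point satisficer of footnote 10; the payoff numbers and function names are my own, chosen to answer the opening question in miniature.

```python
from itertools import product

def argmaxer(options, u):
    """Utility-maximiser: pick any option whose payoff has maximal utility u."""
    def psi(task):
        best = max(u(task(x)) for x in options)
        return {x for x in options if u(task(x)) == best}
    return psi

def satisficer(options, u, anchor):
    """Satisficer: pick any option at least as good (by u) as the anchor option."""
    def psi(task):
        return {x for x in options if u(task(x)) >= u(task(anchor))}
    return psi

def nash_equilibria(option_sets, optimisers, G):
    """Profiles x where each x_i is psi_i-optimal for the task xi -> G(d_i(x, xi))."""
    eq = []
    for x in product(*option_sets):
        tasks = [lambda xi, i=i, x=x: G(x[:i] + (xi,) + x[i+1:])
                 for i in range(len(x))]
        if all(x[i] in optimisers[i](tasks[i]) for i in range(len(x))):
            eq.append(x)
    return eq

# Prisoner's dilemma, 0 = cooperate, 1 = defect; payoff space = pairs of reals
PD = {(0, 0): (2, 2), (0, 1): (0, 3), (1, 0): (3, 0), (1, 1): (1, 1)}
sat = satisficer([0, 1], u=lambda r: r[0], anchor=0)  # anchored at cooperation
amx = argmaxer([0, 1], u=lambda r: r[1])
print(nash_equilibria([[0, 1], [0, 1]], [sat, amx], lambda x: PD[x]))
# [(0, 1), (1, 1)]
```

With these assumed payoffs, a satisficer anchored at cooperation may cooperate in equilibrium even though the utility-maximiser always defects — a first answer to "would non-maximisers cooperate?"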

This definition applies even to unusual cases —

  • Zero-player games? When $I = \emptyset$, then a game $G$ is just an element of the payoff space $R$. There's only one option-profile, the empty sequence, and it's vacuously nash.
  • One-player games? When $I = \{1\}$, then a game $G : X_1 \to R$ is just a task for the solitary player. An option-profile for the game is specified by an option for the player, and this option-profile is nash whenever the option is $\psi_1$-optimal for the task.
  • When $I$ is countably infinite, or indeed uncountably infinite, then the definition still makes sense and gives the right answer.[3]

We obtain the classical simultaneous game when $R = \mathbb{R}^I$ and $\psi_i = \operatorname{argmax}_{\mathrm{pr}_i}$, the maximiser of the $i$th coordinate $\mathrm{pr}_i : \mathbb{R}^I \to \mathbb{R}$ of the payoff, and in this case the higher-order nash equilibrium coincides with the classical nash equilibrium.

The higher-order nash equilibrium allows us to answer somewhat esoteric questions —

An argmaxer, a satisficer, and a quantiliser walk into a bar. They can order from menus $X_1$, $X_2$, and $X_3$ respectively, and the barman will mix their three orders $(x_1, x_2, x_3)$ into a single cocktail $G(x_1, x_2, x_3) \in C$. The argmaxer's preferences over the cocktails are given by $u_1 : C \to \mathbb{R}$, and likewise $u_2$ for the satisficer and $u_3$ for the quantiliser.

What cocktail might they be served?

Well, the possible cocktails are $G(x_1, x_2, x_3)$ such that $x_1 \in \psi_1(\lambda x_1'.\, G(x_1', x_2, x_3))$, $x_2 \in \psi_2(\lambda x_2'.\, G(x_1, x_2', x_3))$, and $x_3 \in \psi_3(\lambda x_3'.\, G(x_1, x_2, x_3'))$, where $\psi_1$, $\psi_2$, and $\psi_3$ are the argmaxer, the satisficer, and the quantiliser respectively.
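The bar scene can be computed directly. Everything concrete below is invented for illustration: the "cocktail" is just the order-triple itself, the taste functions are arbitrary, the satisficer follows footnote 10's anchor definition, and the quantiliser is a crude top-$q$-fraction rule over a uniform base measure on the menu.

```python
from itertools import product
from math import ceil

def argmaxer(options, u):
    return lambda task: {x for x in options
                         if u(task(x)) == max(u(task(y)) for y in options)}

def satisficer(options, u, anchor):
    return lambda task: {x for x in options if u(task(x)) >= u(task(anchor))}

def quantiliser(options, u, q):
    # crude quantiliser: any option in the top q-fraction of the menu by utility
    def psi(task):
        ranked = sorted(options, key=lambda x: u(task(x)), reverse=True)
        return set(ranked[:ceil(q * len(options))])
    return psi

def nash_equilibria(option_sets, optimisers, G):
    eq = []
    for x in product(*option_sets):
        if all(x[i] in optimisers[i](lambda xi, i=i, x=x: G(x[:i] + (xi,) + x[i+1:]))
               for i in range(len(x))):
            eq.append(x)
    return eq

X1, X2, X3 = [0, 1], [0, 1], [0, 1, 2]        # the three menus
u1 = lambda c: c[0] * (1 + c[1])               # argmaxer's taste
u2 = lambda c: c[1] + c[2]                     # satisficer's taste, anchored at 0
u3 = lambda c: -abs(c[2] - c[0] - c[1])        # quantiliser's taste, q = 1/2
players = [argmaxer(X1, u1), satisficer(X2, u2, 0), quantiliser(X3, u3, 0.5)]
print(nash_equilibria([X1, X2, X3], players, lambda x: x))
# [(1, 0, 0), (1, 0, 1), (1, 1, 1), (1, 1, 2)]
```

Each equilibrium profile is a possible cocktail; nothing forces uniqueness, and heterogeneous optimisers typically leave several.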

Many well-studied variations of the classical nash equilibrium can be seen as the higher-order nash equilibrium between non-classical optimisers. For example, epsilon-equilibrium is precisely the higher-order nash equilibrium between the optimisers $(\operatorname{argmax}^{\epsilon_i}_{\mathrm{pr}_i})_{i \in I}$.[4]

Optimisation is nash-sum closed

There's a nice upshot of higher-order game theory — the nash equilibria between optimisers is itself an optimiser!

Suppose that two agents are modelled by optimisers $\psi_1 \in ((X_1 \to R) \to \mathcal{P}(X_1))$ and $\psi_2 \in ((X_2 \to R) \to \mathcal{P}(X_2))$. We can combine them into a single function which assigns, to each game $G : X_1 \times X_2 \to R$, the set of $(\psi_1, \psi_2)$-nash equilibria of the game $G$. This function has type-signature $((X_1 \times X_2 \to R) \to \mathcal{P}(X_1 \times X_2))$, so it corresponds to a single optimiser with option space $X_1 \times X_2$ and payoff space $R$.

This gives us a binary operator on optimisers, $\oplus$, given by $(\psi_1 \oplus \psi_2)(G) = \{(x_1, x_2) : x_1 \in \psi_1(\lambda x_1'.\, G(x_1', x_2)) \text{ and } x_2 \in \psi_2(\lambda x_2'.\, G(x_1, x_2'))\}$.

This operator is both associative and commutative, justifying the name "nash sum".

For example, the nash sum $\operatorname{argmax}_{\mathrm{pr}_1} \oplus \operatorname{argmax}_{\mathrm{pr}_2}$ will calculate the classical nash equilibria of a generic payoff matrix $G : X_1 \times X_2 \to \mathbb{R}^2$.

The operator can be extended to any family of optimisers, i.e. $\bigoplus_{i \in I} \psi_i$ where $\psi_i \in ((X_i \to R) \to \mathcal{P}(X_i))$.


  • $\bigoplus_{i \in I} \psi_i$ is a single optimiser, while $(\psi_i)_{i \in I}$ is a family of optimisers.
  • $\bigoplus_{i \in I} \psi_i$ is the nash-sum of $(\psi_i)_{i \in I}$, and the $\psi_i$ are subagents of $\bigoplus_{i \in I} \psi_i$.
  • An option for $\bigoplus_{i \in I} \psi_i$ is an option-profile for $(\psi_i)_{i \in I}$.
  • A task for the optimiser $\bigoplus_{i \in I} \psi_i$ is a game for the family of optimisers $(\psi_i)_{i \in I}$.
  • An option $x$ is optimal for $\bigoplus_{i \in I} \psi_i$ in the task $G$ if and only if the option-profile $x$ is a nash equilibrium for $(\psi_i)_{i \in I}$ in the game $G$.
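The nash sum is easy to exhibit as an honest optimiser: it consumes a family of optimisers and returns a single function from games to sets of profiles. A sketch for the finite case, with invented names:

```python
from itertools import product

def nash_sum(option_sets, optimisers):
    """Combine a family of optimisers (task -> set of options) into one optimiser
    whose option space is the set of profiles and whose tasks are games."""
    def combined(G):
        eq = set()
        for x in product(*option_sets):
            if all(x[i] in optimisers[i](
                       lambda xi, i=i, x=x: G(x[:i] + (xi,) + x[i+1:]))
                   for i in range(len(x))):
                eq.add(x)
        return eq
    return combined

def argmaxer(options, u):
    return lambda task: {x for x in options
                         if u(task(x)) == max(u(task(y)) for y in options)}

# The nash sum of two coordinate-maximisers computes classical nash equilibria.
PD = {(0, 0): (2, 2), (0, 1): (0, 3), (1, 0): (3, 0), (1, 1): (1, 1)}
psi = nash_sum([[0, 1], [0, 1]],
               [argmaxer([0, 1], lambda r: r[0]), argmaxer([0, 1], lambda r: r[1])])
print(psi(lambda x: PD[x]))  # {(1, 1)}
```

Note the type-signature: `psi` takes a game (a function from profiles to payoffs) and returns a set of profiles, exactly the shape Definition 3 demands of an optimiser.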

Totality is not nash-sum closed

Just because two optimisers $\psi_1$ and $\psi_2$ are total does not imply that their nash sum $\psi_1 \oplus \psi_2$ is total.[5] In other words, we may have a model of an agent which works perfectly in all solitary situations, but breaks down when many such agents interact in a shared environment.

The textbook example is Matching Pennies. Each player must choose either heads or tails — player 1 wins if the coins match and player 2 wins if they don't. If both players are utility-maximisers then there are no nash equilibria, even though utility-maximisers are total optimisers.


Formally speaking, $X_1 = X_2 = \{H, T\}$, and $G(x_1, x_2) = (\delta_{x_1 x_2},\, 1 - \delta_{x_1 x_2})$.[6] Then $(\operatorname{argmax}_{\mathrm{pr}_1} \oplus \operatorname{argmax}_{\mathrm{pr}_2})(G) = \emptyset$, although $\operatorname{argmax}_{\mathrm{pr}_1}(f) \neq \emptyset$ and $\operatorname{argmax}_{\mathrm{pr}_2}(f) \neq \emptyset$ for all tasks $f$.
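The empty equilibrium set is easy to check by brute force (a sketch with my own helper names; payoffs encode "player 1 wins on a match"):

```python
from itertools import product

def argmaxer(options, u):
    return lambda task: {x for x in options
                         if u(task(x)) == max(u(task(y)) for y in options)}

def nash_equilibria(option_sets, optimisers, G):
    return [x for x in product(*option_sets)
            if all(x[i] in optimisers[i](lambda xi, i=i, x=x: G(x[:i] + (xi,) + x[i+1:]))
                   for i in range(len(x)))]

# Matching pennies: 0 = heads, 1 = tails; player 1 wins iff the coins match.
def G(x):
    match = 1 if x[0] == x[1] else 0  # the Kronecker delta of footnote 6
    return (match, 1 - match)

players = [argmaxer([0, 1], lambda r: r[0]), argmaxer([0, 1], lambda r: r[1])]
print(nash_equilibria([[0, 1], [0, 1]], players, G))  # [] — no equilibrium exists
```

Each player's best-response set is always non-empty (the optimisers are total), yet no profile is a mutual fixed point — totality fails at the level of the nash sum.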

This shows why we couldn't have defined optimisers as functions with type-signature $((X \to R) \to \mathcal{P}^{+}(X))$, where $\mathcal{P}^{+}(X)$ denotes the set of non-empty subsets of $X$. In order to ensure nash-sum closure of optimisers, we needed to include non-total optimisers.

Utility-maximisation is not nash-sum closed.

It is well-known that the nash sum of $n$ utility-maximisers isn't a utility-maximiser for any $n \geq 2$. For many games, none of the nash equilibria are even pareto-optimal!

The textbook case is the prisoner's dilemma. Let $X_1 = X_2 = \{C, D\}$ and consider two players $\operatorname{argmax}_{\mathrm{pr}_1}$ and $\operatorname{argmax}_{\mathrm{pr}_2}$ which respectively value the first and second coordinate of the payoff. Their nash sum is the optimiser in $((X_1 \times X_2 \to \mathbb{R}^2) \to \mathcal{P}(X_1 \times X_2))$ which calculates the classical nash equilibria of 2-by-2 payoff matrices $G : X_1 \times X_2 \to \mathbb{R}^2$. In particular, when $G$ is the payoff matrix defined below, then $(\operatorname{argmax}_{\mathrm{pr}_1} \oplus \operatorname{argmax}_{\mathrm{pr}_2})(G) = \{(D, D)\}$. However, both players prefer the payoff $G(C, C)$ to $G(D, D)$.
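With standard illustrative payoff numbers (assumed here, since the original matrix isn't reproduced), the failure of pareto-optimality is a two-line check:

```python
from itertools import product

def argmaxer(options, u):
    return lambda task: {x for x in options
                         if u(task(x)) == max(u(task(y)) for y in options)}

# 0 = cooperate, 1 = defect; standard illustrative payoffs
PD = {(0, 0): (2, 2), (0, 1): (0, 3), (1, 0): (3, 0), (1, 1): (1, 1)}
players = [argmaxer([0, 1], lambda r: r[0]), argmaxer([0, 1], lambda r: r[1])]

eq = [x for x in product([0, 1], repeat=2)
      if all(x[i] in players[i](lambda xi, i=i, x=x: PD[x[:i] + (xi,) + x[i+1:]])
             for i in range(2))]
print(eq)                      # [(1, 1)]: mutual defection is the unique equilibrium
print(PD[(0, 0)], PD[(1, 1)])  # (2, 2) (1, 1): the equilibrium is pareto-dominated
```

No single utility function over payoffs can reproduce this behaviour: the nash sum rejects $(C, C)$ despite both coordinates being strictly higher there.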

Consequentialism is not nash-sum closed.

In fact, the nash sum of two utility-maximisers isn't even consequentialist![7]

Suppose Angelica is babysitting her cousin Tommy, and each child has been given a cookie by their grandmother. If Angelica is awake while Tommy sleeps, then she can steal both cookies for herself. If Angelica sleeps while Tommy is awake then he'll cause mayhem, getting both children in trouble. If Angelica and Tommy both stay asleep, or if they both wake up, then they'll each keep one cookie.

The payoffs  are summarised in the matrix below.

When Angelica and Tommy are both awake, this is nash — she doesn't want to sleep because then he'll cause mayhem, and he doesn't want to sleep because then she'll steal his cookie. In contrast, when they're both sleeping, this isn't nash — she can wake up and steal his cookie. But the same outcome will result whether Angelica and Tommy are both sleeping or both awake. So the nash sum of Angelica and Tommy isn't a consequentialist optimiser![8] 
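The argument can be checked mechanically. The payoff numbers below are my own encoding of the story (cookies kept, with "mayhem" as zero for both); the essential feature is that the two symmetric profiles yield the same payoff:

```python
from itertools import product

def argmaxer(options, u):
    return lambda task: {x for x in options
                         if u(task(x)) == max(u(task(y)) for y in options)}

# 0 = sleep, 1 = awake. Both "same action" profiles yield the same payoff (1, 1).
G = {(0, 0): (1, 1), (0, 1): (0, 0), (1, 0): (2, 0), (1, 1): (1, 1)}
kids = [argmaxer([0, 1], lambda r: r[0]),   # Angelica
        argmaxer([0, 1], lambda r: r[1])]   # Tommy

eq = [x for x in product([0, 1], repeat=2)
      if all(x[i] in kids[i](lambda xi, i=i, x=x: G[x[:i] + (xi,) + x[i+1:]])
             for i in range(2))]
print(eq)                      # [(1, 1)]: both awake is nash; both asleep is not
print(G[(0, 0)] == G[(1, 1)])  # True: same payoff, different optimality
```

Two profiles with identical payoffs, one optimal and one not — so the nash sum's output cannot be a function of payoffs alone, which is exactly the failure of consequentialism.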

This shows why we couldn't have defined optimisers as functions with type-signature $((X \to R) \to \mathcal{P}(R))$, which bakes in the assumption that all optimisers are consequentialist. In order to ensure the nash-sum closure of optimisers, we needed to include nonconsequentialist optimisers.[9]

Beyond CDT?

Warning: This section is a speculative extension of the existing literature; it's merely a proof-of-concept of how higher-order game theory might extend to non-CDT agents.

Let $G : \{C, D\}^2 \to \mathbb{R}^2$ be the game defined by the payoff matrix below. We may readily check that $(\operatorname{argmax}_{\mathrm{pr}_1} \oplus \operatorname{argmax}_{\mathrm{pr}_2})(G) = \{(D, D)\}$, which corresponds to the well-known result that two utility-maximisers will defect in the prisoners' dilemma.

Now, some decision theorists think that it's not generally true that two rational utility-maximisers will defect in the prisoners' dilemma — e.g. if they're both perfect replicas of one another then they'll both cooperate! The reasoning is this — cooperation from the first prisoner would count as evidence (to the first player) of high utility, and defection from the first prisoner would count as evidence of low utility. This corresponds to the observation that $\mathrm{pr}_1(G(C, C)) > \mathrm{pr}_1(G(D, D))$. So the first player should cooperate. Ditto the second player.

So what went wrong?

It all comes down to an ambiguity with the task $\lambda x_i'.\, G(d_i(x, x_i'))$. What relationship is this task supposed to capture, exactly? The formalism itself is agnostic.

  • According to causal decision theory, the task $f_i$ is supposed to track which payoff $f_i(x_i')$ is caused by the agent choosing $x_i'$.
  • According to evidential decision theory, the task $f_i$ is supposed to track which payoff $f_i(x_i')$ is evidenced by the agent choosing $x_i'$.

So let's explain our answer $(D, D)$, which agrees with CDT. We can trace the problem to the deviation maps $d_i$ in the definition of $BR$.

The $i$-th deviation map $d_i : \left(\prod_{j \in I} X_j\right) \times X_i \to \prod_{j \in I} X_j$ is given by $d_i(x, x_i')_j = x_j$ for all $j \neq i$, and $d_i(x, x_i')_i = x_i'$.

The map $\lambda x_i'.\, d_i(x, x_i')$ is the causal dependence of the overall option-profile on the $i$th player's choice. Hence, when the game $G$ is the causal dependence of the overall payoff on the option-profile, it follows that the task $\lambda x_i'.\, G(d_i(x, x_i'))$ is the causal dependence of the payoff on the $i$th player's choice. This is why we got the CDT answer — causation composed with causation is causation.

According to EDT, however, the task is supposed to capture the evidential dependence of the payoff on the player's choice. And when the game $G$ is the evidential dependence of the overall payoff on the option-profile, it isn't true that the task $\lambda x_i'.\, G(d_i(x, x_i'))$ is the evidential dependence of the payoff on the $i$-th player's choice. Evidence composed with causation isn't evidence.

To achieve the EDT answer, we need a function $e_i : \left(\prod_{j \in I} X_j\right) \times X_i \to \prod_{j \in I} X_j$ which captures the evidential dependence of the option-profile on the $i$th player's choice. In the case that all the players are replicas, $e_i(x, x_i') = (x_i', x_i', \ldots, x_i')$. Replacing $d_i$ in the definition of $\oplus$ with $e_i$, we'd get a different binary operator $\oplus'$ which yields the EDT prediction, namely that $(\operatorname{argmax}_{\mathrm{pr}_1} \oplus' \operatorname{argmax}_{\mathrm{pr}_2})(G) = \{(C, C)\}$.
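Swapping the deviation map really is the only change needed. A sketch with assumed standard PD payoffs, showing both deviation maps side by side:

```python
from itertools import product

def argmaxer(options, u):
    return lambda task: {x for x in options
                         if u(task(x)) == max(u(task(y)) for y in options)}

# 0 = cooperate, 1 = defect; standard illustrative payoffs
PD = {(0, 0): (2, 2), (0, 1): (0, 3), (1, 0): (3, 0), (1, 1): (1, 1)}
players = [argmaxer([0, 1], lambda r: r[0]), argmaxer([0, 1], lambda r: r[1])]

def cdt_deviation(x, i, xi):
    return x[:i] + (xi,) + x[i+1:]  # only player i's entry changes

def edt_deviation(x, i, xi):
    return (xi,) * len(x)           # perfect replicas: everyone's choice covaries

def nash(deviation):
    return [x for x in product([0, 1], repeat=2)
            if all(x[i] in players[i](lambda xi, i=i, x=x: PD[deviation(x, i, xi)])
                   for i in range(2))]

print(nash(cdt_deviation))  # [(1, 1)]: mutual defection
print(nash(edt_deviation))  # [(0, 0)]: mutual cooperation
```

Same optimisers, same game; only the deviation maps differ, and the prediction flips from defection to cooperation.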

It seems that we have another free parameter in our model! We can then replace the family of causal deviation maps $(d_i)_{i \in I}$ with an arbitrary family of non-causal deviation maps $(e_i)_{i \in I}$, and thereby encode information about each player's idiosyncratic decision theory. If the $i$th player is CDT then $e_i = d_i$, and if the $i$th player is EDT (and the players are replicas) then $e_i(x, x_i') = (x_i', \ldots, x_i')$.

Suppose we have a payoff space $R$, a family of sets $(X_i)_{i \in I}$, a family of optimisers $(\psi_i)_{i \in I}$ where $\psi_i \in ((X_i \to R) \to \mathcal{P}(X_i))$, and a family of decision theories $(e_i)_{i \in I}$ where $e_i : \left(\prod_{j \in I} X_j\right) \times X_i \to \prod_{j \in I} X_j$, and a game is a map $G : \prod_{i \in I} X_i \to R$. Then we can define the non-CDT best-response function of $G$ to be the function $BR(x) = \prod_{i \in I} \psi_i(\lambda x_i'.\, G(e_i(x, x_i')))$, and the non-CDT nash equilibria of $G$ to be the set $\{x : x \in BR(x)\}$. Or something like that.

With this machinery in place, we can answer even sillier questions —

An EDT utility-satisficer and a CDT utility-maximiser are playing the game $G : X_1 \times X_2 \to \mathbb{R}^2$. They are causally separated, but they are negligibly likely to choose different options. What options might they choose?[10]

Well, a pair of options $(x_1, x_2)$ is in (non-CDT) nash equilibrium if and only if $\mathrm{pr}_1(G(x_1, x_1)) \geq \mathrm{pr}_1(G(a, a))$ and $x_2 \in \operatorname{argmax}(\lambda x_2'.\, \mathrm{pr}_2(G(x_1, x_2')))$, where $a$ is the satisficer's anchor point. We can also filter out the option-profiles such that $x_1 \neq x_2$. The resulting list of option-profiles is easily computable from $G$ and $a$.

Suppose that $G$ is the prisoner's dilemma, and $a = C$ is the satisficer's anchor point. Then the unique nash equilibrium is $(C, D)$.

Exercise 7: If Angelica is an EDT utility-maximiser and Tommy is a CDT utility-maximiser, then which profiles are nash?


  • Classical game theory is the study of agents which choose options causing maximum utility. When many such agents (CDT utility-maximisers) make simultaneous choices they'll collectively choose an option-profile in nash equilibrium, i.e. an $x$ with $x \in BR(x)$, where $BR$ is the best-response function.
  • Many AI safety researchers are worried about agents which choose options causing maximum utility, because of the word "cause", "maximum", or "utility", so they study agents which relax one (or more) of these three assumptions.
  • Today we introduced higher-order game theory, which generalises classical game theory by considering agents which don't maximise utility. We generalised the classical nash equilibrium to arbitrarily many agents with arbitrary optimisers.
  • Finally, I gestured at how we might generalise the nash equilibrium even further to the simultaneous choices of non-CDT agents.

Pretty cool!

Next time...

By tomorrow, I will hopefully have posted about mesa-optimisation, sequential choice, and non-possibilistic models of agents.


  1. ^

The study of $0$-player games is called automata theory (in the discrete case) and dynamics (in the continuous case), while the study of $1$-player games is called optimisation.

  2. ^

    Recall that $A \to B$ denotes the set of functions of type $A \to B$.

    That is, if $f \in (A \to B)$ and $a \in A$, then $f(a) \in B$.

  3. ^

See Aumann's 1964 paper Markets with a Continuum of Traders for a game with uncountably many players.

  4. ^

    Recall that $\operatorname{argmax}^{\epsilon}_{u}$ is the optimiser which maximises the utility $u$ up to some fixed slack $\epsilon \geq 0$.

    That is, $\operatorname{argmax}^{\epsilon}_{u}(f) = \{x \in X : u(f(x)) \geq \sup_{x' \in X} u(f(x')) - \epsilon\}$.

  5. ^

    Recall that an optimiser $\psi \in ((X \to R) \to \mathcal{P}(X))$ is total if $\psi(f) \neq \emptyset$ for all tasks $f : X \to R$.

  6. ^

    $\delta_{x_1 x_2}$ is the Kronecker delta function.

  7. ^

    Recall that an optimiser $\psi \in ((X \to R) \to \mathcal{P}(X))$ is consequentialist if $\psi(f) = f^{-1}(q(f))$ for some quantifier $q \in ((X \to R) \to \mathcal{P}(R))$.

    In other words, for any task $f$, if $f(x) = f(x')$ then $x \in \psi(f) \iff x' \in \psi(f)$.

    This condition says that, once we know the agent's task, then the optimality of a particular choice is determined by its payoff.

  8. ^

    Maybe there's something deep here about group-level deontology arising from individual-level consequentialism.

  9. ^

    Warning: There is a remark that one regularly encounters in metaethics: moral consequentialism is tautological because we can consider the choice of the decision-maker as a consequence of the decision itself, ensuring that any decision-rule is consequentialist.

    This metaethical remark corresponds to the following fact in higher-order decision theory: if a nonconsequentialist optimiser $\psi \in ((X \to R) \to \mathcal{P}(X))$ faces the task $f : X \to R$, then it will choose the same options as some consequentialist optimiser with payoff space $X \times R$ facing the task $\lambda x.\, (x, f(x))$, whose payoffs track the choice itself. It follows that whether an agent is a consequentialist heavily depends on which physical consequences of the decision we are tracking.

  10. ^

    Recall that a satisficer might choose any option $x$ which dominates some fixed option $a$, i.e. $\operatorname{sat}^{a}_{u}(f) = \{x \in X : u(f(x)) \geq u(f(a))\}$. The option $a$ is called the anchor point.

14 comments

There's a nice upshot of higher-order game theory — the nash equilibria between optimisers is itself an optimiser!

Isn't this pretty trivial though? I guess it's still probably convenient for the math.

The observation is trivial mathematically, but it motivates the characterisation of an optimiser as something with the type-signature $((X \to R) \to \mathcal{P}(X))$.

You might instead be motivated to characterise optimisers by...

  • A utility function $u : R \to \mathbb{R}$
  • A quantifier $q \in ((X \to R) \to \mathcal{P}(R))$
  • A preorder $\preceq$ over the outcomes
  • Etc.

However, were you to characterise optimisers in any of the ways above, then the nash equilibrium between optimisers would not itself be an optimiser, and therefore we'd lose compositionality. The compositionality is conceptually helpful because it means that your many-player definitions/theorems reduce to the one-player case.

  • A quantifier 

Here you mean , right?

Wait I mean a quantifier in .

If we characterise an agent with a quantifier $q \in ((X \to R) \to \mathcal{P}(R))$, then we're saying which payoffs the agent might achieve given each task. Namely, $r \in q(f)$ if and only if it's possible that the agent achieves payoff $r$ when faced with the task $f$.

But this definition doesn't play well with a nash equilibria.

Thank you, this is incredibly interesting! Did you ever write up more on the subject? I'm excited to see how it relates to mesa-optimisation in particular.

In the finite case, where , then 

Typo: I think you mean  ?

I would have liked a footnote saying .

I'm confused about the last example, EDT-satisficer vs. CDT-maximizer. It says that we assume the agents to choose the same options. Accordingly, it says to "filter out the option profiles such that ". But then, in the concrete case of the Prisoner's dilemma, it says the solution is  instead of .

Exercise 7:

In the Prisoner's dilemma, the answer doesn't change compared to the one in the worked-out example, because in that example the satisficer was already anchoring to the best EDT solution . So the Nash equilibrium is . More in general, the first agent condition is replaced by .

4. The -best-response function of  is the the function  defined by  where  is the th deviation map

Here , right?

Yes, $\prod$, i.e. the cartesian product of a family of sets. Sorry if this wasn't clear; it's standard maths notation. I don't know what the other commenter is saying.

Got it, I misunderstood the semantics of what  was supposed to capture. I thought the elements needed to be mutual best-responses. Thank you for the clarification, I've updated my implementation accordingly!

I interpreted it the standard way too initially, but then I had a hunch there was... I dunno, something fishy, and then indeed it turned out @StrivingForLegibility understood it in a completely different way, so somehow it wasn't clear! Magic.

Edit: Cleo Nardo has confirmed that they intended  to mean the cartesian product of sets, the ordinary thing for that symbol to mean in that context. I misunderstood the semantics of what  was intended to represent. I've updated my implementation to use the intended cartesian product when calculating the best response function, the rest of this comment is my initial (wrong) interpretation of .


I needed to go back to one of the papers cited in Part 1 to understand what that  was doing in that expression. I found the answer in A Generalization of Nash's Theorem with Higher-Order Functionals. I'm going to do my best to paraphrase Hedges' notation into Cleo's notation, to avoid confusion.


TLDR: $\prod$ is picking out the set of option-profiles that are simultaneously best-responses by all players to that option-profile $x$. It does this by considering all of the option-profiles that can result by each player best-responding, then takes the intersection of those sets.


On page 6, Hedges defines the best response correspondence 




Hedges builds up the idea of Nash Equilibria using quantifiers rather than optimizers, (like  rather than ), but I believe the approaches are equivalent. Unpacking  from the inside out: 


That makes ) a -task. Since , we know that .


This is where I had to go looking through papers. What sort of product takes a set of best-responses from each player, relative to a given option-profile, and returns a set of option-profiles that are simultaneously regarded by each player as a best-response? I thought about just taking the Cartesian product of the sets, but that wouldn't get us only the mutual best-responses.


Let's call the way that each player maps option-profiles to best-responses . This is exactly the sets we want to take the product of:


Hedges introduces notation on page 3 to handle the operation of taking an option-profile, varying one player's option, and leaving the rest the same. Paraphrasing, Hedges defines  by 

You can read  as "give me a new copy of , where the  th entry has been set to the value ." Hedges uses this to define the deviation maps equivalently to the way Cleo did.  


The correspondences  take as input an option profile, and returns the set of option-profiles which are player 's optimal unilateral deviations from that option profile. To construct  from , we want to map  to the option-profiles which deviate from  in those exact ways. 


We can then use Hedges'  to get the best-response correspondence! We can unpack this to get a definition of  using objects that Cleo defined, using that deviation notation from Hedges: 


Thank you Cleo for writing this article! This was my first introduction to Higher-Order Game Theory, and I wrote up an implementation in TypeScript to help me understand how all of the pieces fit together!

I'm weirded out by this. To look at everything together, I write the original expression, and your expression rewritten using the OP's notation:



(I'm using the notation that a function applied to a set is the image of that set.)

So the big pi symbol stands for

So it's not a standalone operator: it's context-dependent because it pops out an implicit . The OP otherwise gives the impression of a more functional mindset, so I suspect the OP may mean something different from your guess.

Another problem with your interpretation: it yields the empty set unless all agents consider doing nothing an option. The only possible non-empty output is $\{x\}$. Reason: each set you are intersecting contains tuples with all elements equal to the ones in $x$, but for one. So the intersection will necessarily only contain tuples with all elements equal to those in $x$.

Edit: Cleo Nardo has confirmed that they intended  to mean the cartesian product of sets, the ordinary thing for that symbol to mean in that context. I misunderstood the semantics of what  was intended to represent. I've updated my implementation to use the intended cartesian product when calculating the best response function, the rest of this comment is based on my initial (wrong) interpretation of .


I write the original expression, and your expression rewritten using the OP's notation:



(I'm using the notation that a function applied to a set is the image of that set.)

This is a totally clear and valid rewriting using that notation! My background is in programming and I spent a couple minutes trying to figure out how mathematicians write "apply this function to this set."

I believe the way that  is being used is to find Nash equilibria, using Cleo's definition 6.5: 

Like before, the -nash equilibria of  is the set of option-profiles  such that .

These are going to be option-profiles where "not deviating" is considered optimal by every player simultaneously. I agree with your conclusion that this leads  to take on values that are either  or . When , this indicates that  is not a Nash equilibrium. When , we know that  is a Nash equilibrium.

Oh I see now,