
I previously claimed that most apparent Prisoner's Dilemmas are actually Stag Hunts. I now claim that they're Schelling Pub in practice. I conclude with some lessons for fighting Moloch.

This post turned out especially dense with inferential leaps and unexplained terminology. If you're confused, ask in the comments and I'll try to clarify.

Some ideas here are due to Tsvi Benson-Tilsen.

The title of this post used to be Most Prisoner's Dilemmas are Stag Hunts; Most Stag Hunts are Battle of the Sexes. I'm changing it based on this comment. "Battle of the Sexes" is a game where a male and female (let's say Bob and Alice) want to hang out, but each of them would prefer to engage in gender-stereotyped behavior. For example, Bob wants to go to a football game, and Alice wants to go to a museum. The gender issues are distracting, and although it's the standard, the game isn't that well-known anyway, so sticking to the standard didn't buy me much (in terms of reader understanding).

I therefore present to you,

the Schelling Pub Game:

Two friends would like to meet at the pub. In order to do so, they must make the same selection of pub (making this a Schelling-point game). However, they have different preferences about which pub to meet at. For example:

• Alice and Bob would both like to go to a pub this evening.
• There are two pubs: the Xavier, and the Yggdrasil.
• Alice likes the Xavier twice as much as the Yggdrasil.
• Bob likes the Yggdrasil twice as much as the Xavier.
• However, Alice and Bob also prefer to be with each other. Let's say they like being together ten times as much as they like being apart.
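These preferences can be turned into an explicit payoff matrix. Here's a minimal sketch with illustrative numbers (being alone at your preferred pub is worth 2, alone at the other pub 1, and company multiplies the value by 10):

```python
# Illustrative (made-up) numbers for the game above.
payoffs = {
    # (Alice's pub, Bob's pub): (Alice's payoff, Bob's payoff)
    ("X", "X"): (20, 10),
    ("X", "Y"): (2, 2),
    ("Y", "X"): (1, 1),
    ("Y", "Y"): (10, 20),
}

OTHER = {"X": "Y", "Y": "X"}

def is_nash(a, b):
    """Neither player gains by unilaterally switching pubs."""
    ua, ub = payoffs[(a, b)]
    return ua >= payoffs[(OTHER[a], b)][0] and ub >= payoffs[(a, OTHER[b])][1]

equilibria = [cell for cell in payoffs if is_nash(*cell)]
print(equilibria)  # [('X', 'X'), ('Y', 'Y')]
```

Both pure equilibria are Pareto-optimal, but Alice prefers one and Bob prefers the other.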

The important features of this game are:

• The Nash equilibria are all Pareto-optimal. There is no "individually rational agents work against each other" problem, like in prisoner's dilemma or even stag hunt.
• There are multiple equilibria, and different agents prefer different equilibria.

Thus, realistically, agents may not end up in equilibrium at all -- because (in the single-shot game) they don't know which to choose, and because (in an iterated version of the game) they may make locally sub-optimal choices in order to influence the long-run behavior of other players.

Here's a summary of the central argument which, despite the lack of pictures, may be easier to understand.

1. Most Prisoner's Dilemmas are actually iterated.
2. Iterated games are a whole different game with a different action space (because you can react to history), a different payoff matrix (because you care about future payoffs, not just the present), and a different set of equilibria.
3. It is characteristic of PD that players are incentivised to play away from the Pareto frontier; i.e., no Pareto-optimal point is an equilibrium. This is not the case with iterated PD.
4. It is characteristic of Stag Hunt that there is a Pareto-optimal equilibrium, but there is also another equilibrium which is far from optimal. This is also the case with iterated PD. So iterated PD resembles Stag Hunt.
5. However, it is furthermore true of iterated PD that there are multiple different Pareto-optimal equilibria, which benefit different players more or less. Also, if players don't successfully coordinate on one of these equilibria, they can end up in a worse overall state (such as mutual defection forever, due to playing grim-trigger strategies with mutually incompatible demands). This makes iterated PD resemble the Schelling Pub Game.

In fact, the Folk Theorem suggests that most iterated games will resemble the Schelling Pub Game in this way.

In a comment on The Schelling Choice is "Rabbit", not "Stag" I said:

In the book The Stag Hunt, Skyrms similarly says that lots of people use Prisoner's Dilemma to talk about social coordination, and he thinks people should often use Stag Hunt instead.

I think this is right. Most problems which initially seem like Prisoner's Dilemma are actually Stag Hunt, because there are potential enforcement mechanisms available. The problems discussed in Meditations on Moloch are mostly Stag Hunt problems, not Prisoner's Dilemma problems -- Scott even talks about enforcement, when he describes the dystopia where everyone has to kill anyone who doesn't enforce the terrible social norms (including the norm of enforcing).

This might initially sound like good news. Defection in Prisoner's Dilemma is an inevitable conclusion under common decision-theoretic assumptions. Trying to escape multipolar traps with exotic decision theories might seem hopeless. On the other hand, rabbit in Stag Hunt is not an inevitable conclusion, by any means.

Unfortunately, in reality, hunting stag is actually quite difficult. ("The Schelling Choice is Rabbit, not Stag... and that really sucks!")

Inspired by Zvi's recent sequence on Moloch, I wanted to expand on this. These issues are important, since they determine how we think about group action problems / tragedy of the commons / multipolar traps / Moloch / all the other synonyms for the same thing.

My current claim is that most Prisoner's Dilemmas are actually Schelling Pub games. But let's first review the relevance of Stag Hunt.

# Your PD Is Probably a Stag Hunt

There are several reasons why an apparent Prisoner's Dilemma may be more of a Stag Hunt.

• The game is actually an iterated game.
• Reputation networks could punish defectors and reward cooperators.
• There are enforceable contracts.
• Players know quite a bit about how other players think (in the extreme case, players can view each other's source code).

Each of these formal models creates a situation where players can get into a cooperative equilibrium. The challenge is that you can't unilaterally decide everyone should be in the cooperative equilibrium. If you want good outcomes for yourself, you have to account for what everyone else will probably do. If you think everyone is likely to be in a bad equilibrium where people punish each other for cooperating, then aligning with that equilibrium might be the best you can do! This is like hunting rabbit.

Exercise: is there a situation in your life, or within spitting distance, which seems like a Prisoner's Dilemma to you, where everyone is stuck hurting each other due to bad incentives? Is it an iterated situation? Could there be reputation networks which weed out bad actors? Could contracts or contract-like mechanisms be used to encourage good behavior?

So, why do we perceive so many situations to be Prisoner's-Dilemma-like rather than Stag-Hunt-like? Why does Moloch sound more like "each individual is incentivized to make things worse for everyone else" than "everyone is stuck in a bad equilibrium"?

Sarah Constantin writes:

A friend of mine speculated that, in the decades that humanity has lived under the threat of nuclear war, we’ve developed the assumption that we’re living in a world of one-shot Prisoner’s Dilemmas rather than repeated games, and lost some of the social technology associated with repeated games. Game theorists do, of course, know about iterated games and there’s some fascinating research in evolutionary game theory, but the original formalization of game theory was for the application of nuclear war, and the 101-level framing that most educated laymen hear is often that one-shot is the prototypical case and repeated games are hard to reason about without computer simulations.

To use board-game terminology, the game may be a Prisoner's Dilemma, but the metagame can use enforcement techniques. Accounting for enforcement techniques, the game is more like a Stag Hunt, where defecting is "rabbit" and cooperating is "stag".

# Schelling Pubs

But this is a bit informal. You don't separately choose how to metagame and how to game; really, your iterated strategy determines what you do in individual games.

So it's more accurate to just think of the iterated game. There are a bunch of iterated strategies which you can choose from.

The key difference between the single-shot game and the iterated game is that cooperative strategies, such as Tit for Tat (among others), are available. These strategies have the property that (1) they are equilibria -- if you know the other player is playing Tit for Tat, there's no reason for you not to; (2) if both players use them, they end up cooperating.

A key feature of the Tit for Tat strategy is that if you do end up playing against a pure defector, you do almost as well as you could possibly do against them. This doesn't sound very much like a Stag Hunt. It begins to sound like a Stag Hunt in which you can change your mind and go hunt rabbit if the other person doesn't show up to hunt stag with you.
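To make this concrete, here's a minimal simulation sketch, assuming the standard payoff ordering T=3 > R=2 > P=1 > S=0 (the exact numbers are illustrative):

```python
# Standard illustrative PD payoffs: (my move, their move) -> (my score, their score).
PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
          ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds=100):
    """Average per-round payoffs when two strategies play each other."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_b)  # each strategy sees the opponent's past moves
        b = strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a / rounds, score_b / rounds

tit_for_tat = lambda opp: "C" if not opp else opp[-1]
always_defect = lambda opp: "D"

print(play(tit_for_tat, tit_for_tat))    # (2.0, 2.0): mutual cooperation
print(play(tit_for_tat, always_defect))  # (0.99, 1.02)
```

Against the pure defector, Tit for Tat averages 0.99 versus the 1.0 it could have gotten by defecting from the start: it pays one round of showing up to hunt stag, then switches to rabbit.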

Sounds great, right? We can just play one of these cooperative strategies.

The problem is, there are many possible self-enforcing equilibria. Each player can threaten the other player with a Grim Trigger strategy: they defect forever the moment some specified condition isn't met. This can be used to extort the other player for more than just the mutual-cooperation payoff. Here's an illustration of possible outcomes, with the enforceable frequencies in the white area:

Alice could be extorting Bob by cooperating 2/3rds of the time, with a grim-trigger threat of never cooperating at all. Alice would then get an average payoff of 2⅓, while Bob would get an average payoff of 1⅓.
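This arithmetic can be checked directly, assuming the standard illustrative payoffs where mutual cooperation gives (2, 2) and defecting against a cooperator gives the defector 3 and the cooperator 0:

```python
from fractions import Fraction

# Bob, under threat, always cooperates; Alice cooperates 2/3 of the time.
p = Fraction(2, 3)               # Alice's cooperation frequency
alice_avg = p * 2 + (1 - p) * 3  # (C,C) gives Alice 2; (D,C) gives her 3
bob_avg = p * 2 + (1 - p) * 0    # (C,C) gives Bob 2; (D,C) gives him 0
print(alice_avg, bob_avg)        # 7/3 4/3, i.e. 2 1/3 and 1 1/3
```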

In the artificial setting of Prisoner's Dilemma, it's easy to say that Cooperate, Cooperate is the "fair" solution, and an equilibrium like I just described is "Alice exploiting Bob". However, real games are not so symmetric, and so it will not be so obvious what "fair" is. The purple squiggle highlights the Pareto frontier -- the space of outcomes which are "efficient" in the sense that no alternative is purely better for everybody. These outcomes may not all be fair, but they all have the advantage that no "money is left on the table" -- any "improvement" we could propose for those outcomes makes things worse for at least one person.

Notice that I've also colored areas where Bob and Alice are doing worse than payoff 1. Bob can't enforce Alice's cooperation while defecting more than half the time; Alice would just defect. And vice versa. All of the points within the shaded regions have this property. So not all Pareto-optimal solutions can be enforced.

Any point in the white region can be enforced, however. Each player could be watching the statistics of the other player's cooperation, prepared to pull a grim-trigger if the statistics ever stray too far from the target point. This includes so-called mutual blackmail equilibria, in which both players cooperate with probability slightly better than zero (while threatening to never cooperate at all if the other player detectably diverges from that frequency). This idea -- that 'almost any' outcome can be enforced -- is known as the Folk Theorem in game theory.
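The Folk Theorem's constraint can be sketched as follows, under the same illustrative payoffs: always-defecting guarantees each player an average of 1, so a grim-trigger threat can only enforce target points giving each player at least 1. (A full check would also require the point to be feasible, i.e. inside the white region; this sketch omits that.)

```python
SECURITY_LEVEL = 1  # the payoff each player can guarantee by always defecting

def enforceable(alice_avg, bob_avg):
    """Necessary condition for a target point to be grim-trigger enforceable."""
    return alice_avg >= SECURITY_LEVEL and bob_avg >= SECURITY_LEVEL

print(enforceable(7 / 3, 4 / 3))  # True: the extortion point described earlier
print(enforceable(2, 2))          # True: mutual cooperation
print(enforceable(0.5, 2.5))      # False: Alice would rather just defect
```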

The Schelling Pub part is that (particularly with grim-trigger enforcement) everyone has to choose the same equilibrium to enforce; otherwise everyone is stuck playing defect. You'd rather be in even a bad mutual-blackmail type equilibrium, as opposed to selecting incompatible points to enforce. Just like, in Schelling Pub, you'd prefer to meet together at any venue rather than end up at different places.

Furthermore, I would claim that most apparent Stag Hunts which you encounter in real life are actually Schelling Pub, in the sense that there are many different stags to hunt and it isn't immediately clear which one should be hunted. Each stag will be differently appealing to different people, so it's difficult to establish common knowledge about which one is worth going after together.

Exercise: what stags aren't you hunting with the people around you?

# Taking Pareto Improvements

Fortunately, Grim Trigger is not the only enforcement mechanism which can be used to build an equilibrium. Grim Trigger creates a crisis in which you've got to guess which equilibrium you're in very quickly, to avoid angering the other player; and no experimentation is allowed. There are much more forgiving strategies (and contrite ones, too, which help in a different way).

Actually, even using Grim Trigger to enforce things, why would you punish the other player for doing something better for you? There's no motive for punishing the other player for raising their cooperation frequency.

In a scenario where you don't know which Grim Trigger the other player is using, but you don't think they'll punish you for cooperating more than the target, a natural response is for both players to just cooperate a bunch.

So, it can be very valuable to use enforcement mechanisms which allow for Pareto improvements.

Taking Pareto improvements is about moving from the middle to the boundary:

(I've indicated the directions for Pareto improvements starting from the origin in yellow, as well as what happens in other directions; also, I drew a bunch of example Pareto improvements as black arrows to illustrate how Pareto improvements are awesome. Some of the black arrows might not be perfectly within the range of Pareto improvements, sorry about that.)

However, there's also an argument against taking Pareto improvements. If you accept any Pareto improvements, you can be exploited in the sense mentioned earlier -- you'll accept any situation, so long as it's not worse for you than where you started. So you will take some pretty poor deals. Notice that one Pareto improvement can prevent a different one -- for example, if you move to (1/2, 1), then you can't move to (1,1/2) via Pareto improvement. So you could always reject a Pareto improvement because you're holding out for a better deal. (This is the Schelling Pub aspect of the situation -- there are Pareto-optimal outcomes which are better or worse for different people, so, it's hard to agree on which improvement to take.)

That's where Cooperation between Agents with Different Notions of Fairness comes in. The idea in that post is that you don't take just any Pareto improvement -- you have standards of fairness -- but you don't just completely defect for less-than-perfectly-fair deals, either. What this means is that two such agents with incompatible notions of fairness can't get all the way to the Pareto frontier, but the closer their notions of fairness are to each other, the closer they can get. And, if the notions of fairness are compatible, they can get all the way.

# Moloch is the Folk Theorem

Because of the Folk Theorem, most iterated games will have the same properties I've been talking about (not just iterated PD). Specifically, most iterated games will have:

1. Stag-hunt-like property 1: There is a Pareto-optimal equilibrium, but there is also an equilibrium far from Pareto-optimal.
2. The Schelling Pub property: There are multiple Pareto-optimal equilibria, so that even if you're trying to cooperate, you don't necessarily know which one to aim for; and, different options favor different people, making it a complex negotiation even if you can discuss the problem ahead of time.

There's a third important property which I've been assuming, but which doesn't follow so directly from the Folk Theorem: the suboptimal equilibrium is "safe", in that you can unilaterally play that way to get some guaranteed utility. The Pareto-optimal equilibria are not similarly safe; mistakenly playing one of them when other people don't can be worse than the "safe" guarantee from the poor equilibrium.

A game with all three properties is like Stag Hunt with multiple stags (where you all must hunt the same stag to win, but can hunt rabbit alone for a guaranteed mediocre payoff), or Schelling Pub where you can just stay home (you'd rather stay home than go out alone).
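The "safe" property can be illustrated with made-up Stag Hunt numbers: stag pays 4 if both hunt it and 0 if you hunt it alone, while rabbit pays 3 no matter what the other player does.

```python
# (my action, their action) -> my payoff, with illustrative numbers.
PAYOFF_ME = {("stag", "stag"): 4, ("stag", "rabbit"): 0,
             ("rabbit", "stag"): 3, ("rabbit", "rabbit"): 3}

def security_level(my_action):
    """The payoff I can guarantee regardless of the other player's choice."""
    return min(PAYOFF_ME[(my_action, theirs)] for theirs in ("stag", "rabbit"))

print(security_level("rabbit"))  # 3: safe, but mediocre
print(security_level("stag"))    # 0: great only if coordination succeeds
```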

# Lessons in Slaying Moloch

0. I didn't even address this in this essay, but it's worth mentioning: not all conflicts are zero-sum. In the introduction to the 1980 edition of The Strategy of Conflict, Thomas Schelling discusses the reception of the book. He recalls that a prominent political theorist "exclaimed how much this book had done for his thinking, and as he talked with enthusiasm I tried to guess which of my sophisticated ideas in which chapters had made so much difference to him. It turned out it wasn't any particular idea in any particular chapter. Until he read this book, he had simply not comprehended that an inherently non-zero-sum conflict could exist."

1. In situations such as iterated games, there's no in-principle pull toward defection. Prisoner's Dilemma seems paradoxical when we first learn of it (at least, it seemed so to me) because we are not accustomed to such a harsh divide between individual incentives and the common good. But perhaps, as Sarah Constantin speculated in Don't Shoot the Messenger, modern game theory and economics have conditioned us to be used to this conflict due to their emphasis on single-shot interactions. As a result, Moloch comes to sound like an inevitable gravity, pulling everything downwards. This is not necessarily the case.

2. Instead, most collective action problems are bargaining problems. If a solution can be agreed upon, we can generally use weak enforcement mechanisms (social norms) or strong enforcement (centralized governmental enforcement) to carry it out. But, agreeing about the solution may not be easy. The more parties involved, the more difficult.

3. Try to keep a path open toward better solutions. Since wide adoption of a particular solution can be such an important problem, there's a tendency to treat alternative solutions as the enemy. This bars the way to further progress. (One could loosely characterize this as the difference between religious doctrine and democratic law; religious doctrine trades away the ability to improve in favor of the more powerful consensus-reaching technology of immutable universal law. But of course this oversimplifies things somewhat.) Keeping a path open for improvements is hard, partly because it can create exploitability. But it keeps us from getting stuck in a poor equilibrium.


I found these three papers highly useful, especially the first one.

What this means is that two such agents with incompatible notions of fairness can't get all the way to the Pareto frontier, but the closer their notions of fairness are to each other, the closer they can get.

this is helpful for clarifying some thoughts. Thanks.

most collective action problems are bargaining problems.

I came to this conclusion and the nice thing about it is that it collapses a bunch of problems into the signaling landscape problem. (I don't know if there's an academic term for the prevailing state of the signaling landscape at a given time.) The frustrating thing about the signaling landscape is that there's a lot of security through obscurity (e.g. shibboleths).

Going through these now. I started with #3. It's astoundingly interesting. Thank you.

A short note to start the review that the author isn’t happy with how it is communicated. I agree it could be clearer and this is the reason I’m scoring this 4 instead of 9. The actual content seems very useful to me.

AllAmericanBreakfast has already reviewed this from a theoretical point of view but I wanted to look at it from a practical standpoint.

***

To test whether the conclusions of this post were true in practice I decided to take 5 examples from the Wikipedia page on the Prisoner’s dilemma and see if they were better modeled by Stag Hunt or Schelling Pub:

• Climate negotiations
• Relationships
• Marketing
• Doping in sport
• Cold war nuclear arms race

Detailed analysis of each is at the bottom of the review.

Of these 5, 3 (Climate, Relationships, Arms race) seem to me to be very well modeled by Schelling Pub.

Due to the constraints on communication allowed between rival companies it is difficult to see marketing (where more advertising = defect) as a Schelling Pub game. There probably is an underlying structure which looks a bit like Schelling Pub but it is very hard to move between Nash Equilibria. As a result I would say that Prisoner’s Dilemma is a more natural model for marketing.

The choice of whether to dope in sport is probably best modeled as a Prisoner’s dilemma with an enforcing authority which punishes defection. As a result, I don’t think any of the 3 games are a particularly good model for any individual’s choice. However, negotiations on setting up the enforcing authority and the rules under which it operates are more like Schelling Pub. Originally I thought this should maybe count as half a point for the post but thinking about it further I would say this is actually a very strong example of what the post is talking about – if your individual choice looks like a Prisoner’s Dilemma then look for ways to make it into a Schelling Pub. If this involves setting up a central enforcement agency then negotiate to make that happen.

So my score is: 4 out of 5 Prisoner's Dilemmas examined are better modeled as Schelling Pubs, which is in line with the "most" claim of the post's title.

The example which was least like Schelling Pub was the one where communication was difficult/impossible due to external rules. I think the value of communication is implicit in the post but it would be helpful to have it spelled out explicitly.

One other thing which might be useful from a practical point is that things which don't initially seem iterated may be able to be iterated if you split them into smaller tasks. You don't have to reduce your nuclear arsenal or decarbonise all at once, you can do a little bit and then check that the others have done the same before continuing. This seems obvious on a national level but maybe not so obvious on a personal level.

***

(Read beyond here only if you're interested in more detail of the examples - it doesn't add to the conclusions)

Below is my analysis of the 5 items chosen from the Prisoner’s dilemmas example list on Wikipedia. In discussing Stag Hunts (SH) I use the post’s list of 4 items which might make something more like a SH than a Prisoner’s Dilemma (PD).

Climate negotiations

• PD shaped
• Each country benefits from stable climate
• Each would prefer that they put in minimum effort to achieve it
• SH shaped?
• Iteration: Yes - it is iterated over years and agreements
• Reputation: It seems like yes, although I’m not sure how this works out in practice
• Enforceable contracts: Not really
• Superrationality: Possibly
• SP shaped?
• It seems yes.
• As an example, recall the coal amendment from the Glasgow talks
• This is an amendment which favours some countries over others and means that those disfavoured will put in relatively more effort for a given amount of CO2 reduction compared to the original wording.
• It seems obvious that the actual agreement probably isn’t on the Pareto frontier
• I think the final figure from the post gives a good mental model of what is going on

Relationships

• PD shaped
• In theory, for an individual action, you’re better off if you get your own way over your partner
• SH shaped?
• Iteration: Yes (+ your partner can just leave)
• Reputation: Very important in small communities, less so in large ones
• Enforceable contracts: Sometimes (prenup)
• Superrationality: Potentially yes but not required
• It will be better to hunt stag together just from iteration
• Abusive relationships are not SH shaped as the partner can’t or won’t leave, especially when there’s no reputational effect that the abusive partner cares about
• SP shaped?

Marketing

• PD shaped
• 2 rival companies with equally effective marketing departments are in a roughly PD shaped game (assuming customer pool is fixed size)
• If one spends money on advertising (defect) then the other is disadvantaged if they don’t
• But both would be better off if neither advertised
• Often the customer pool is not of fixed size which would mean that this may not really be a PD in real life
• It is important here to note that collusion between companies is generally forbidden so communication is not allowed
• SH shaped?
• Iteration: Yes.
• Reputation: Not really – there aren’t negative reputation effects to advertising
• Enforceable contracts: No
• Modelling other players: Yes but they are (accurately) modelling each other as playing defect
• In theory the businesses could get into C-C but advertising is so ingrained as the default choice that this would be hard
• Possibly there’s an availability heuristic problem here – I’ll obviously remember the examples of industries which are stuck in D-D as I constantly see their adverts (Pepsi vs Coke, tech companies, supermarkets).
• I tried to think of industries that aren’t advertising much but I’m drawing a blank.
• SP shaped?
• In theory maybe, however the lack of communication makes it extremely hard for companies to change between solutions

Doping in sport

• PD shaped
• Using drugs gives you an edge but comes with a potential price in medical dangers
• There is a potential difference from “standard” PD in that doping is disadvantageous to a more able athlete if the less able athlete is not doping (which would make this an alibi game).
• SH shaped?
• Iteration: Yes (although results of previous iterations are not very legible)
• Reputation: Yes (again, legibility problems)
• Enforceable contract: Yes – drugs testing and bans
• Superrationality: No
• The enforceable contract is probably the biggest effect here – just relying on iteration and reputation would be insufficient
• SP shaped?
• Because the solution is an enforceable contract the decision is not very SP shaped on an individual basis
• Negotiating what the contract should be is SP shaped
• What makes a medical exemption ok?
• How much inhaler is an ok amount?

Cold war nuclear arms race

• PD shaped
• Making more nukes is equivalent to defect
• SH shaped?
• Iteration: Yes, treaty followed treaty. Inspections were allowed to verify adherence.
• Reputation: A bit but probably not in a relevant way
• Enforceable contract: Not really – if one country reneged then it’s not like the other country could sue
• Superrationality: Yes. It seems both players realised D-D was terrible and wanted to play C-C. They couldn't rely on just this so iteration was very important.
• SP shaped?
• Deciding the terms of the agreement is SP shaped
• What size arsenal is better for which country?
• Which particular weapons are better for which country?

So, why do we perceive so many situations to be Prisoner's-Dilemma-like rather than Stag-Hunt-like?

I don't think that we do, exactly. I think that most people only know the term "prisoners' dilemma" and haven't learned any more game theory than that; and then occasionally they go and actually attempt to map things onto the Prisoners' Dilemma as a result. :-/

The goal of this post is to help us understand the similarities and differences between several different games, and to improve our intuitions about which game is the right default assumption when modeling real-world outcomes.

My main objective with this review is to check the game theoretic claims, identify the points at which this post makes empirical assertions, and see if there are any worrisome oversights or gaps. Most of my fact-checking will just be resorting to Wikipedia.

Pareto-optimal: No player's payoff can be improved without another player's payoff worsening.

Nash equilibrium: No player can do better by unilaterally changing their strategy.

Here’s the payoff matrix from the one-shot Prisoner’s Dilemma and how it relates to these key concepts.

1. There are no Pareto-optimal Nash equilibria.
2. There is a single Pareto-optimal Nash equilibrium, and another equilibrium that is not Pareto-optimal.
3. There are multiple Pareto-optimal Nash equilibria, which benefit different players to different extents.
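These definitions can be checked mechanically against the one-shot game. A minimal sketch, assuming the standard illustrative payoffs (C,C)=(2,2), (C,D)=(0,3), (D,C)=(3,0), (D,D)=(1,1):

```python
# Apply the reviewer's two definitions to the one-shot Prisoner's Dilemma.
PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
          ("D", "C"): (3, 0), ("D", "D"): (1, 1)}
OTHER = {"C": "D", "D": "C"}

def is_nash(a, b):
    """Neither player can do better by unilaterally switching."""
    ua, ub = PAYOFF[(a, b)]
    return ua >= PAYOFF[(OTHER[a], b)][0] and ub >= PAYOFF[(a, OTHER[b])][1]

def is_pareto_optimal(cell):
    """No alternative outcome is at least as good for both and different."""
    ua, ub = PAYOFF[cell]
    return not any(pa >= ua and pb >= ub and (pa, pb) != (ua, ub)
                   for pa, pb in PAYOFF.values())

nash = [cell for cell in PAYOFF if is_nash(*cell)]
print(nash)                           # [('D', 'D')]
print(is_pareto_optimal(("D", "D")))  # False
```

The one-shot PD lands in arrangement 1: its only Nash equilibrium, mutual defection, is not Pareto-optimal.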

The author attempts to argue which of these arrangements best describes the world we live in, and makes the best default assumption when interpreting real-world situations as games. The claim is that real-world situations most often resemble iterated PDs, which have multiple Pareto-optimal Nash equilibria benefitting different players to different extents. I will attempt to show that the author’s conclusion only applies when modeling superrational entities, or entities with an unbounded lifespan, and give some examples where this might be relevant.

Iterated Prisoner’s Dilemma is a little more complex than the author states. If the players know how many turns the game will be played for, or if the game has a known upper limit of turns, the Nash equilibrium is always to defect. However, if the players are superrational, meaning that they not only are perfectly rational but also assume all other players are too and that superrational players always converge on the same strategy, then they’ll always cooperate.

As such, the Nash equilibrium for rational, but not superrational, players in games with a fixed or upper-bounded number of turns is the same as for the single-shot game. In real life, any game played between human beings that takes a non-zero amount of time has an upper bound on the number of turns, given that we currently must expect ourselves to die. Therefore, game theory suggests that the Nash equilibrium strategy for all iterated Prisoner's Dilemmas between rational players is defect/defect. Consequently, the claims about iterated PD in steps 2-5 of the author's argument summary only seem to hold if we are talking about non-human entities with unbounded life expectancies, or if humans are modeled as superrational agents.

Let’s gesture at some plausible but extremely speculative real-world examples of how games with an unbounded upper limit of turns or superrationality might be reasonable models for human games.

Social entities, such as governments, corporations, and cultures, if they can be modeled as agents, could be seen as having unbounded lifespans. When we make strict game-theoretic arguments, we can be equally strict in our mathematical assumptions about other facets of reality. If our understanding of physics is imperfect, and if there is a non-zero possibility of some continuity of agents not just into the distant future but infinitely through time, then they could be modeled as having unbounded lifespans. If this holds, and if social entities are causally responsible for the way that most games unfold, then this may rescue the argument.

A second angle on this idea is that humans may have a psychological tendency to interpret long periods of time as equivalent to infinite in length, analogously to how people intuitively round low probabilities to 0 and high probabilities to 1. This suggests that studies investigating strategies empirically chosen by people playing iterated PDs in the lab would increasingly approach optimal strategies for unbounded iterated PDs as the turn count increases. A complicating factor here is that we must disambiguate whether the assumption is that humans “round” long lengths of time or large numbers of turns to infinity.

Outcomes for superrational agents are better than those for rational agents in the turn-bounded iterated PD. Imagine a large set of agents, each playing a randomly-chosen strategy. Some of those strategies may match the superrational strategy in iterated PD. If they face selection pressure over time, with some spontaneous generation of new agents, agents playing the superrational strategy may eventually dominate the population of agents. Conceivably, humans, and our social ancestor species, may have been genetically hardwired via group selection to adopt superrational strategies by instinct. This aligns with the psychological explanation.

Edit: Vanessa Kosoy pointed out below that iterated PDs with a finite but unknown number of iterations, or slightly noisy agents, can have Nash equilibria involving cooperation. We therefore don't need to resort to such exotic explanations as I've offered here to explain how abramdemski's arguments 2-5 hold, and we don't need to "trick ourselves into cooperation" in such scenarios.

If this argument holds water, how does it affect the original agenda of this article, which was to inform our intuitions about how to model real-world games? It suggests that these twin questions, of psychological “rounding” and a possible group-selection account of how this might have evolved, would be important to investigate to increase our confidence in this heuristic. If true, it also suggests vulnerabilities in normal human approaches to games. Our ability to cooperate, under this hypothesis, depends on our ability to trick ourselves into cooperation by conveniently ignoring the inevitable end of our games.

The local argument made by this post needn’t be true for its conclusion to be true, and if the post seems plausible because it aligns with our real-world experience, we might want to appreciate the article for the conclusion it articulates as well as the argument it makes in support of that conclusion. The conclusion is that in most situations, we have multiple Pareto-optimal Nash equilibria, favoring different agents. Colloquially, many human problems are about fairness and resource allocation, and the threats and strategies people use to steer negotiation toward the outcome that favors them the most, while still achieving a fundamentally cooperative outcome.

This seems to me like an articulate, usefully predictive, simple, and realistic depiction of an enormous number of fundamental challenges in human organization. Although I don’t think that the original post’s game-theoretic argument is airtight, I think its psychological and sociological plausibility in conjunction with a tweaked game-theoretic argument makes it worthwhile and interesting. I also appreciate the care the author took to summarize, update, and respond to comments. Pointing out similarities and differences in the relationship between Nash equilibria and Pareto optimality in the various games also helped me understand them better, which I appreciate.

Cooperation can be a Nash equilibrium in the IPD if you have a finite but unknown number of iterations (e.g. geometrically distributed). Also, if the number of iterations is known but very large, cooperating becomes an ϵ-Nash equilibrium for small ϵ (if we normalize utility by its maximal value), so agents which are not superrational but a little noisy can still converge there (and, agents are sometimes noisy by design in order to facilitate exploration).

Thank you for pointing this out. Here's a source for the first claim.

Finitely repeated games with an unknown or indeterminate number of time periods, on the other hand, are regarded as if they were an infinitely repeated game. It is not possible to apply backward induction to these games.

And here's a source that at least provides a starting point for the second claim about ϵ-Nash equilibria.

Given a game and a real non-negative parameter ϵ, a strategy profile is said to be an ϵ-equilibrium if it is not possible for any player to gain more than ϵ in expected payoff by unilaterally deviating from his strategy. Every Nash Equilibrium is equivalent to an ϵ-equilibrium where ϵ = 0.

Another simple example is the finitely repeated prisoner's dilemma for T periods, where the payoff is averaged over the T periods. The only Nash equilibrium of this game is to choose Defect in each period. Now consider the two strategies tit-for-tat and grim trigger. Although neither tit-for-tat nor grim trigger are Nash equilibria for the game, both of them are ϵ-equilibria for some positive ϵ. The acceptable values of ϵ depend on the payoffs of the constituent game and on the number T of periods.
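The quoted claim is easy to check by brute force. The sketch below assumes the standard per-round payoffs (temptation 5, reward 3, punishment 1, sucker 0) and computes the exact ϵ for always-cooperating against tit-for-tat, with payoffs averaged over the T periods:

```python
from itertools import product

# Payoffs per round: temptation 5, reward 3, punishment 1, sucker 0
# (standard values, an assumption). PAYOFF[(my_move, opp_move)] = my payoff.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def score_vs_tft(moves):
    """Average per-period payoff of a fixed move sequence against tit-for-tat."""
    total, tft_move = 0, 'C'  # tit-for-tat opens with cooperation
    for m in moves:
        total += PAYOFF[(m, tft_move)]
        tft_move = m          # tit-for-tat copies our previous move
    return total / len(moves)

def epsilon(T):
    """Gain of the best response over always-cooperate, over all 2**T sequences."""
    coop = score_vs_tft(('C',) * T)
    best = max(score_vs_tft(seq) for seq in product('CD', repeat=T))
    return best - coop

# The best deviation is defecting on the final period only, gaining
# (5 - 3) / T = 2 / T per period, so epsilon shrinks as the horizon grows.
print(epsilon(5), epsilon(10))
```

This illustrates the dependence on T the quote mentions: as T grows, the deviation incentive vanishes, which is exactly why all-cooperate is an ϵ-equilibrium for small ϵ in long games.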

Only a third of the way through so far, but confused – you cite "iteration and enforcement mechanisms" as things that make PDs more like Stag Hunts. But aren't iteration and enforcement properties that either PD or SH can have? My understanding was that the difference between PD and Stag Hunt is in the actual payoffs: in a Stag Hunt, C/C can be a Nash equilibrium because you wouldn't get more payoff by defecting (although D/D is also a Nash equilibrium).

In game theory, iterated PD would be a different game than PD. PD as typically defined is a single-shot game. The same is true of stag hunt, battle of the sexes, and many other games: if I say "stag hunt" to a game theorist, they probably don't ask "single shot or iterated?". Rather, if I say "stag hunt" and then start talking about iterated strategies, they might be like "oh, you mean iterated stag hunt?"

Similarly with enforcement mechanisms and so on. None of these are assumed by default.

In (single-shot) PD, the "possible strategies" are the moves you can make (or mixtures of moves, if you have randomness available). In iterated PD, however, the strategy space is much more complex: it's the set of possible iterated strategies, including any possible function of the game history. This gives us a correspondingly much more complex set of equilibria to consider.

It is characteristic of PD that players are incentivised to play away from the Pareto frontier; IE, no Pareto-optimal point is an equilibrium. This is not the case with iterated PD.

It is characteristic of Stag Hunt that there is a Pareto-optimal equilibrium, but there is also another equilibrium which is far from optimal. This is also the case with iterated PD.

Hence my assertion that iterated PD is more like stag hunt.
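One way to see this concretely (with illustrative numbers: 10 rounds, standard per-round payoffs of 5/3/1/0, both assumptions) is to restrict the iterated PD to just two iterated strategies, Grim Trigger and Always-Defect, and inspect the resulting 2x2 meta-game:

```python
# Restricting 10-round iterated PD to two strategies, Grim Trigger and
# Always-Defect, yields a meta-game with the stag-hunt shape: two
# equilibria, one Pareto-optimal and one far from it. Per-round payoffs
# (temptation 5, reward 3, punishment 1, sucker 0) are an assumption.

N = 10

# Average per-round payoff of each pairing:
grim_vs_grim = 3.0                    # cooperate every round
grim_vs_alld = (0 + (N - 1) * 1) / N  # exploited once, then mutual defection
alld_vs_grim = (5 + (N - 1) * 1) / N  # exploit once, then mutual defection
alld_vs_alld = 1.0                    # defect every round

# Mutual Grim Trigger is an equilibrium: deviating to Always-Defect
# trades 3.0 per round for 1.4 per round.
grim_grim_is_eq = alld_vs_grim <= grim_vs_grim

# Mutual Always-Defect is also an equilibrium: deviating to Grim
# Trigger trades 1.0 per round for 0.9 per round.
alld_alld_is_eq = grim_vs_alld <= alld_vs_alld

print(grim_grim_is_eq, alld_alld_is_eq)  # both True: stag-hunt structure
```

The full strategy space is of course far richer than these two strategies, but this restricted game already exhibits the stag-hunt feature: a Pareto-optimal equilibrium alongside a bad one.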

However, it is furthermore true of iterated PD that there are multiple different Pareto-optimal equilibria, which benefit different players more or less. Also, if players don't successfully coordinate on one of these equilibria, they can end up in a worse overall state (such as mutual defection forever, due to playing grim-trigger strategies with mutually incompatible demands). This makes iterated PD resemble Battle of the Sexes.

However, it is furthermore true of iterated PD that there are multiple different Pareto-optimal equilibria, which benefit different players more or less. Also, if players don't successfully coordinate on one of these equilibria, they can end up in a worse overall state (such as mutual defection forever, due to playing grim-trigger strategies with mutually incompatible demands). This makes iterated PD resemble Battle of the Sexes.

I think this paragraph very clearly summarizes your argument. You might consider including it as a TL;DR at the beginning.

Okay. I think I get what you are saying now, but it wasn't clear on my initial read through.

I did understand on the initial read-through (or, currently think I understand?) that when you say "most games turn out to be Battle of the Sexes in practice", you mean that there is an emergent property of the iterated game that turns it into Battle of the Sexes.

My current summary of what you are intending to say (correct me if I got it wrong) is:

1. Most prisoner's dilemma games are actually iterated.

2. Iterated prisoner's dilemma is actually a different game, with a different payoff matrix and a different set of Nash equilibria. Choosing which strategy to play in iterated prisoner's dilemma is similar to playing Stag Hunt.

3. Then there is a further step: the process of deciding how to coordinate in a stag hunt (choosing a meta-strategy?) is more similar to Battle of the Sexes.

I think what I was missing the first time through was #2. I was interpreting you to mean "the thing about stag hunts is that they are iterated, and your PD is probably iterated", where what you actually meant was "your PD is iterated, and iterated PD is actually isomorphic to stag hunt."

One of the key things I think between the 3 games is whether communication beforehand helps (in single shot games).

In PD, communication doesn't really help much, as there is little reason to trust what the other person says.

In SH communication should be able to solve your problem as S-S is optimal for both players.

In BotS communication which results in agreement can at least be trusted as co-ordinating is optimal for both players. Choosing which option to co-ordinate on is another matter.

(assuming you've included the pleasure of spiting the other person etc. in the payoff matrix)
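The contrast can be checked directly by enumerating the pure-strategy Nash equilibria of the three one-shot games. The payoff numbers below are illustrative assumptions; the Schelling Pub values follow the Alice/Bob example at the top of the post (being together is worth ten times being apart, and each player likes their favorite pub twice as much as the other).

```python
# Pure-strategy Nash equilibria of one-shot PD, Stag Hunt, and the
# Schelling Pub game. Payoff numbers are illustrative assumptions.

def pure_nash(game):
    """Pure equilibria of a 2x2 game: (row, col) -> (row payoff, col payoff)."""
    eqs = []
    for (i, j), (a, b) in game.items():
        # Neither player gains by unilaterally switching moves.
        if a >= game[(1 - i, j)][0] and b >= game[(i, 1 - j)][1]:
            eqs.append((i, j))
    return sorted(eqs)

# 0 = cooperate / stag / Xavier, 1 = defect / hare / Yggdrasil.
pd = {(0, 0): (3, 3), (0, 1): (0, 5), (1, 0): (5, 0), (1, 1): (1, 1)}
stag_hunt = {(0, 0): (4, 4), (0, 1): (0, 3), (1, 0): (3, 0), (1, 1): (2, 2)}
schelling_pub = {(0, 0): (20, 10), (0, 1): (2, 2), (1, 0): (1, 1), (1, 1): (10, 20)}

print(pure_nash(pd))             # only mutual defection
print(pure_nash(stag_hunt))      # two equilibria; stag-stag is best for both
print(pure_nash(schelling_pub))  # two equilibria favoring different players
```

This matches the point about communication: in SH a pre-game agreement on stag-stag is self-enforcing and benefits both; in the Schelling Pub game an agreement is also self-enforcing, but which equilibrium to agree on is contested.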

See my reply to Raemon for the aforementioned confusion.

So... I think this is one of the most important posts I've read lately (it feels like it crystalizes and gives clarity to some key societal problems).

Unfortunately... "Battle of the Sexes" is a super distracting name, and a confusing one. I think it's reasonable for LW to not go out of their way to cave to this week's internet fashions. But, when I've tried to get other people to read this post, even people who care about game theory (but aren't necessarily super well versed in it), they had a very different expectation of what Battle of the Sexes meant. (They assumed this post had something to do with gender, which I suspect contributed to them bouncing off of it)

I'm not actually sure what to do, because there isn't another great name for Battle of the Sexes. Bach vs Stravinsky is another name it has been given, which at least has the advantage of not obviously suggesting any particular interpretation. It also has the same abbreviation as Battle of the Sexes (BoS), so it survives abbreviation in summaries.

So I weakly suggest changing the title to replace Battle of the Sexes with Bach or Stravinsky. I'm not super confident because it is a more complicated sounding title that's already pushing the boundary of complexity, but I think probably better on net.

(I think it should probably be basically required reading for LW users to know PD, SH and BoS games as core game theory concepts, but apparently we're a ways away from that world)

I think I will probably write a variation on this post with a title something like "which stag hunt are you coordinating on?", approaching the content from a slightly more actionable angle while also summarizing the key pieces here. But, meanwhile I like that this post focuses explicitly on the game theory connections.

I changed to Schelling Pub, but the overall result feels like even more of a mess (because I inserted an explanation of what Schelling Pub is, awkwardly, at the beginning).

This post badly needs (a) a major re-write (b) someone to write other, better posts making the same / related points.

There are a number of problems with how this post is communicated... it's even possible that I chose the wrong central narrative for the whole post. A different central narrative could be "what are coordination problems?" -- pointing out that people jump to prisoner's dilemma as the central example of a coordination problem, but there are multiple things "coordination problem" can mean. I could then examine the different notions of coordination problem, and consider which problems seem most common in practice.

Interesting.

If I had to name the game from scratch (which maybe we should), I intuitively jump to something like "Schelling pub game" -- two friends would prefer to encounter each other at the pub, but they each have different favorite pubs. Which pub should each select?

FYI, "most Stag Hunts are Schelling Pubs" left me a bit confused because I don't have that association with "pub" (maybe because I'm not British? although neither are you AFAICT). I thought it was something about "public" or "publishing." But I guess it's approximately as obscure as Bach vs Stravinsky, and it has fewer syllables.

In a conversation with John Wentworth recently, it came up that maybe Battle of the Sexes should be called The Schelling Problem, because the general class of game is that there isn't an obvious schelling point. "Most Stag Hunts are Schelling Problems" is... maybe slightly clearer if you already know what Schelling points are.

But really there needs to be a separate post that just says "hey, Battle of the Sexes has a shitty name but also you should know what it is. We should rename it X"

I think in general we should notice when things have bad names and change them, at least if it's not too costly, but especially when it's cheap. For example, cost disease isn't a good name; we should call it something like cost inflation.

My cultural version would be "coffee shop", but I felt like "pub" was the more universal reference (based on my media exposure).

I also considered "the Schelling game"... maybe I'll just expunge "pub" from the title.

LW team members today were confused by "schelling game" because the whole problem is there are multiple schelling points. I think maybe "Multi-Schelling Game" might work?

I do think "Schelling Cafe" might also be a fine name.

Schelling Dilemma? (though all three - Schelling pub, Schelling Cafe, and Schelling Game - sound fine to me. pub/cafe probably more than game)

Here's my summary of this post. Is this getting at the point you're trying to make?

The essential difference between a one-off Prisoner's Dilemma and a Stag Hunt is that in the one-shot PD, the prisoners cannot punish or reward each other for cooperating. In a Stag Hunt, the hunters can punish defection and reward cooperation. In both cases, the best outcome is equally good for all players.

The essential difference between a Stag Hunt and a Battle of the Sexes is that in the Stag Hunt, the best outcome is equally successful for all. In Battle of the Sexes, the best one-off outcomes are always unfair to at least one of the participants.

In most real-world situations, we can enforce cooperation. Generally, the outcomes won't be perfectly fair. They'll resemble a Battle of the Sexes more than a Stag Hunt or one-off Prisoner's Dilemma. So the problem is negotiating which unfair outcome the participants will choose. But because the Prisoner's Dilemma is so well-known, people often resort to it as their first game-theoretic analysis of any given situation.

This summary was helpful for me, thanks! I was sad cos I could tell there was something I wanted to know from the post but couldn't quite get it

In a Stag Hunt, the hunters can punish defection and reward cooperation

This seems wrong. I think the argument goes "the essential difference between a one-off Prisoner's Dilemma and an IPD is that players can punish and reward each other in-band (by future behavior). In the real world, they can also reward and punish out-of-band (in other games). Both these forces help create another equilibrium where people cooperate and punishment makes defecting a bad idea (though an equilibrium of constant defection still exists). This payoff matrix is like that of a Stag Hunt rather than a one-off Prisoner's Dilemma"

Nitpick:

While by the end of the article I feel like I understood what you mean by Battle of the Sexes, I didn't at the start, and there is neither an explanation of the Battle of the Sexes game (even at the beginning of the section titled Battle of the Sexes!) nor a link to a post or article about it.

In the Alice/Bob diagrams, I am confused why the strategies are parameterized by the frequency of cooperation. Don't these frequencies depend on what the other player does, so that the same strategy can have different frequencies of cooperation depending on who the other player is?

First off, I'm not trying to illustrate the many-player game here. So imagine there's just Alice and Bob. I agree that the many-player version is relevant, but I was just dealing with the complexities that arise from iteration.

Second, yeah, absolutely: strategies in iterated games can be any function of the history. But that's a really complicated strategy space to try and draw. Essentially I'm showing you just a very high-level summary, focusing on frequency of cooperation as a salient feature.

The idea is that frequency is something each player can observe about the other. Alice can implement a Grim Trigger strategy to enforce any given frequency of cooperation from Bob. It needs to have some wiggle room, to allow chance fluctuations in frequency without pulling the Grim Trigger; but Alice can include wiggle room while enforcing a tight enough guarantee that Bob is forced to cooperate with the desired frequency in the limit, while running only a small risk of spuriously Grim Triggering.
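Here is a minimal simulation of that enforcement scheme. The demand, slack, and burn-in values are illustrative assumptions, and Bob is modeled as cooperating i.i.d. at a fixed rate:

```python
import random

# Alice demands that Bob cooperate at least 60% of the time, and plays
# Grim Trigger on the *observed frequency*, with a burn-in period and
# some slack so chance fluctuations don't set it off. All numbers are
# illustrative assumptions.

DEMAND, SLACK, BURN_IN = 0.6, 0.1, 20

def alice_triggers(bob_coop_rate, rounds=1000, seed=0):
    """Simulate Bob cooperating i.i.d. at a fixed rate; return whether
    Alice's frequency-based Grim Trigger ever fires."""
    rng = random.Random(seed)
    coops = 0
    for n in range(1, rounds + 1):
        coops += rng.random() < bob_coop_rate
        if n >= BURN_IN and coops / n < DEMAND - SLACK:
            return True  # Alice switches to permanent defection
    return False

# A Bob who honors the demand with some margin is (with high probability)
# safe from spurious triggering; a Bob who shirks gets caught.
print(alice_triggers(0.8), alice_triggers(0.3))
```

With a wider slack Alice tolerates more fluctuation but enforces a weaker frequency; shrinking the slack as the history grows (e.g. proportionally to 1/sqrt(n)) would let her enforce the demanded frequency exactly in the limit.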

not all conflicts are zero-sum.

This should be the lede. Most real-world interactions lose a lot of options, and a lot of potential value, by being simplified to an (iterated) PD, SH, or BotS.

In reality, there's almost always un-modeled transfer and payouts - just being able to say "good job, thanks!" after a result is FREE UTILITY! Also, non-pathological humans have terms for the other player(s) in their utility function. Most importantly, there are far too many future games in the iteration set of a human lifetime for anyone to model, so reputation and self-image effects very often will dominate the modeled payouts.

The longer (i.e., the more iterations) you spend in the shaded triangles of defection, the more you'll be pulled to the defect-defect equilibrium as a natural reaction to what the other person is doing and the outcome you're getting. The longer you spend in the middle "wedge of cooperation", the more you'll end up moving up and to the right in Pareto improvements. So we want to make that wedge bigger.

The size of that wedge is determined by the ratio of a player's outcome from C-C to their outcome in D-D. In this case the ratio is 2:1, so the wedge is between the slopes of 2 and 1/2. If C-C only guaranteed 1.1-1.1 to each player while a defection got them at least 1, the wedge would be a tiny sliver. Conversely, if the payoff for C-C was 999-999 almost the entire square would be the wedge.
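Under one reading of the diagram (the wedge as the region of the unit payoff square between the lines of slope r and 1/r through the origin, where r is the C-C to D-D payoff ratio; this geometric interpretation is my assumption, not the post's), the wedge's share of the square works out to 1 - 1/r, which matches all three cases above. A quick Monte Carlo sanity check:

```python
import random

# Estimate the "wedge of cooperation" as a fraction of the unit payoff
# square, assuming the wedge is the region between the lines y = x/r and
# y = r*x, where r is the ratio of C-C payoff to D-D payoff. This
# geometric reading of the diagram is an assumption.

def wedge_fraction(r, samples=200_000, seed=0):
    """Monte Carlo estimate of the wedge's share of the unit square."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x / r <= y <= x * r:
            hits += 1
    return hits / samples

# Analytically the fraction is 1 - 1/r:
#   r = 2   -> half the square,
#   r = 1.1 -> a sliver (~9%),
#   r = 999 -> nearly the whole square.
for r in (2, 1.1, 999):
    print(wedge_fraction(r), 1 - 1 / r)
```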

But the bigger the wedge, the more difference there is between outcomes on the Pareto frontier, so the outcome of 100% C-C is a lot less stable than it would be if any deviation from it immediately led to non-equilibrium points that degenerate to D-D.

I'm OOTL, can someone send me a couple links that explain the game theory that's being referenced when talking about a "battle of the sexes"? I have a vague intuition from the name alone, but I feel this is referencing a post I haven't read.

I like this, in the sense that it's provoking fascinating thoughts and makes me want to talk with the author about it further. As a communication of a particular concept? I'm kinda having a hard time following what the intent is.

Promoted to curated: This post made a point that I have been grasping at for a while, and made it quite well. For better or for worse, I use Prisoner's Dilemma analogies at least 5 times a week, and so understanding the dynamics around those dilemmas is quite important to me. This post felt like it connected a number of ideas in this space in a way that I expect to refer back to in the future at least a few times.

I would prefer headlines to spell out the terms they use to make it clearer to a reader that scans the headlines what a post is about.

Fixed.

Thanks.

(The titular insight seems pretty deep, thanks for sharing this)