# Ω 38

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

This could have been a relatively short note about why "zero sum" is a misnomer, but I decided to elaborate on some consequences. This post benefited from discussion with Sam Eisenstat.

# "Zero Sum" is a misnomer.

The term intuitively suggests that an interaction transfers resources from one person to another. For example, theft is zero-sum in the sense that it cannot create resources, only transfer them. Elections are zero-sum in the sense that they only transfer power. And so on.

But this is far from the technical meaning of the term.

In order for the standard rationality assumptions used in game theory to apply, the payouts of a game must be utilities, not resources such as money, power, or personal property. Zero-sum transfer of resources is often far from zero-sum in utility.

But I'm getting ahead of myself. Let's examine the technical meaning of "zero sum" more precisely.

## It's used to mean "constant sum".

The term "zero sum" is often used as a technical term, referring to games where the payouts for different players always sums to the same thing.

For example, the game rock-paper-scissors is zero sum: every round has one winner and one loser (or ends in a tie, which we can score as zero for both).

More generally, constant-sum means that if you add up the utility functions of the players, you get a perfectly flat function.

## "Constant sum" doesn't really make sense as a category.

It makes sense to conflate "zero sum" and "constant sum" because utility functions are equivalent under additive and positive multiplicative transforms, so we can always transform a constant-sum game down to a zero-sum game. However, by that same token, the concept of "constant sum" is meaningless: we can multiply the utility of one side or the other, and still have the same game. If you have good reflexes, you should hear "zero sum"/"constant sum" and shout "Type error! Radiation leak! You can't sum utilities without providing extra assumptions!"

Let's look at the "zero sum" game matching pennies as an example. In this game, two players have to say "heads" or "tails" simultaneously. One player is trying to match the other, while the other is trying to mismatch. Here's one way of writing the payoff matrix, with Alice trying to match (payoffs listed as Alice, Bob):

|              | Bob: heads | Bob: tails |
|--------------|------------|------------|
| Alice: heads | 1, 0       | 0, 1       |
| Alice: tails | 0, 1       | 1, 0       |

In that case, the game has a constant sum of 1. We can re-scale it to have a constant sum of zero by subtracting 1/2 from all the scores:

|              | Bob: heads | Bob: tails |
|--------------|------------|------------|
| Alice: heads | 1/2, -1/2  | -1/2, 1/2  |
| Alice: tails | -1/2, 1/2  | 1/2, -1/2  |

But notice that we could just as well have re-scaled it to be zero sum by subtracting 1 from Alice's score:

|              | Bob: heads | Bob: tails |
|--------------|------------|------------|
| Alice: heads | 0, 0       | -1, 1      |
| Alice: tails | -1, 1      | 0, 0       |

Notice that this is exactly the same game, but psychologically, we think of it quite differently. In particular, the game now seems unfair to Alice: Bob only stands to gain, but Alice can only lose! As I mentioned earlier, we're tempted to think of the game as if it's an interaction in which resources are exchanged.

I'm not saying this is a bad thing to think about. In real life, there are situations we can understand as games of resource exchange much more often than there are single-shot games where the payoffs are clearly identifiable in utility terms. I just want to emphasize that resource exchange is not what basic game theory is about, so you should be very careful not to confuse the two!

Now, as I mentioned earlier, we can also re-scale utilities without changing what they mean, and therefore, without changing the game. For example, doubling Alice's payoffs in the first zero-sum matrix above gives:

|              | Bob: heads | Bob: tails |
|--------------|------------|------------|
| Alice: heads | 1, -1/2    | -1, 1/2    |
| Alice: tails | -1, 1/2    | 1, -1/2    |

This game is equivalent to the others, and so, must still be "zero sum" in the technical sense of game theory! Despite this, it isn't even constant-sum.
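To make the rescaling argument concrete, here's a quick sketch (my code, following the matching-pennies matrices described above, with payoffs listed as (Alice, Bob)):

```python
# Sketch: verify that shifting all payoffs preserves the constant-sum
# property, while rescaling one player's utilities (an equally valid
# transform of the same game) destroys it.

matching_pennies = {
    ("H", "H"): (1.0, 0.0),
    ("H", "T"): (0.0, 1.0),
    ("T", "H"): (0.0, 1.0),
    ("T", "T"): (1.0, 0.0),
}

def is_constant_sum(game):
    """True if every outcome's payoffs sum to the same value."""
    sums = {round(a + b, 9) for a, b in game.values()}
    return len(sums) == 1

# Subtracting 1/2 from every payoff: still constant-sum (now zero).
shifted = {k: (a - 0.5, b - 0.5) for k, (a, b) in matching_pennies.items()}
assert is_constant_sum(matching_pennies) and is_constant_sum(shifted)

# Doubling Alice's utilities: the same game, but no longer constant-sum.
rescaled = {k: (2 * a, b) for k, (a, b) in shifted.items()}
assert not is_constant_sum(rescaled)
```

The asserts pass: the shifted game is constant-sum, while the rescaled (but game-theoretically identical) version is not.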

So, how can we fix our concept of "zero sum" to better fit the underlying mathematical phenomenon?

# Fixing "Zero Sum"

Since the underlying problem is that utilities are not comparable without further assumptions, an obvious solution would be to make the concept of "zero sum" dependent on those further assumptions about how to compare utilities. The terms "zero sum" and "constant sum" would then be meaningful (and meaningfully distinct), provided one specifies how to sum utilities.

I don't think that's the right route, however. I think it's better to look at what "zero sum" is trying to do, and come up with a concept which does that more effectively.

## Linear Games

Picture the four game matrices above as sets of outcome points in the plane, with Alice's utility on one axis and Bob's utility on the other.

Clearly, any game matrix which is zero-sum will occupy the line x + y = 0. Similarly, any constant-sum game matrix will occupy a line x + y = c, where c is the constant sum. But when we apply a positive multiplicative transformation, we get a line ax + by = c for positive constants a and b. So what we can say about a zero-sum game which remains true regardless of transformations is: outcomes fall on a line of negative slope.
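A sketch of that criterion in code (my helper names; outcomes are assumed to be given as (Alice, Bob) utility pairs):

```python
# Sketch: a game matrix is robustly "zero sum" if its outcome points
# are collinear and the line through them has negative slope.

def line_through(points):
    """Direction (dx, dy) of the line through the points, or None if
    they are not collinear."""
    base = points[0]
    direction = next(((x - base[0], y - base[1]) for x, y in points
                      if (x, y) != base), None)
    if direction is None:  # all points coincide
        return (0.0, 0.0)
    dx, dy = direction
    for x, y in points:
        # cross product is zero exactly when (x, y) lies on the line
        if abs((x - base[0]) * dy - (y - base[1]) * dx) > 1e-9:
            return None
    return (dx, dy)

def is_linear_negative_slope(points):
    d = line_through(points)
    return d is not None and d[0] * d[1] < 0

# The rescaled matching-pennies outcomes still qualify...
assert is_linear_negative_slope([(1.0, -0.5), (-1.0, 0.5)])
# ...while a common-payoff game (positive slope) does not.
assert not is_linear_negative_slope([(0.0, 0.0), (1.0, 1.0)])
```

A zero or infinite slope (direction product zero) is also excluded here, matching the case where one player has no preferences.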

Note that if outcomes fall on a line of positive slope, then the utility functions of the players must match up to inconsequential transforms. In other words, the two players must have the same preferences.

So, for two-player linear games, there are just three choices: the players can have equivalent preferences, or completely opposed preferences (their utility functions being equivalent to the negative of each other), or some players can have no preferences (the slope of the line is zero or infinite).

For multi-player games, things get a little more complicated. Two players need not be perfectly aligned with or opposed to each other. In fact, any two-player game can be embedded in a zero-sum three-player game (and furthermore, any N-player game can be embedded in an (N+1)-player zero-sum game).
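The embedding can be sketched like this (the post doesn't spell out the construction, so treat the details as my assumption, following the standard "fictitious player" trick): add a dummy player with no choices whose payoff is minus the sum of everyone else's.

```python
# Sketch: embed any game in a zero-sum game with one extra player.
# The dummy player makes no choices, so the original players'
# incentives are unchanged.

def embed_zero_sum(game):
    """game: dict mapping strategy profiles to payoff tuples.
    Returns a game with one extra payoff whose total is always zero."""
    return {profile: payoffs + (-sum(payoffs),)
            for profile, payoffs in game.items()}

prisoners_dilemma = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

embedded = embed_zero_sum(prisoners_dilemma)
assert all(sum(payoffs) == 0 for payoffs in embedded.values())
```

Note that the embedded game is literally zero-sum, even though the two original players still face a perfectly ordinary Prisoner's Dilemma.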

However, the concept of linear game still applies well to multi-player games, and we can still easily generalize the concept of zero-sum to linear games where the slope between any two dimensions is negative.

Linear games with negative slope are probably what you should interpret "zero sum" to mean in most formal game-theoretic contexts. This is, after all, the most general thing that can be re-scaled to create an equivalent game which is literally zero-sum. However, we can still generalize further.

In the original paper about the Nash bargaining problem (John Nash, The Bargaining Problem), Nash divides bargaining into two phases. In the first phase, players make threats: binding commitments about what they'll do if negotiations break down. In the second phase, players make demands: they ask for a specific level of resources. This demand is backed up by their earlier threats.

(Such an adversarial view of negotiation!)

The interesting thing for us here is that he observes that the threat portion of the game, if we consider it in isolation, is not zero sum, but might as well be: the payoff structure is just as adversarial. Because the outcome of the second part of the game is bound to be Pareto-optimal (under some assumptions about rational play), the choice of threats simply changes which trade-off will be arrived at along the Pareto-optimal surface.

The feasible outcomes for the full game include Pareto improvements (otherwise there could not possibly be any gains due to bargaining), but when we assume optimal play for the "demand" step, the threat step considered in isolation has only Pareto-optimal outcomes.

So, we could consider a game completely adversarial if it has a structure like this: no strategy profiles are a Pareto improvement over any others. In other words, the feasible outcomes of the game equal the game's Pareto frontier. All possible outcomes involve trade-offs between players.

Note, however, that if we allow mixed strategies, the only completely adversarial games are the linear ones. This is because mixed strategies imply that the space of possible outcomes is convex. The convex hull of a Pareto frontier that's not already linear must have a nonempty interior, implying the possibility of Pareto improvements.
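A minimal numerical sketch of that argument (the outcome points are my own example): three pairwise Pareto-incomparable pure outcomes whose mixtures can nevertheless be Pareto-dominated.

```python
# Sketch: a nonlinear Pareto frontier plus mixed strategies yields
# Pareto-dominated feasible points, so the game is not completely
# adversarial once mixing is allowed.

pure_outcomes = [(1.0, 0.0), (0.0, 1.0), (0.7, 0.7)]

def dominates(p, q):
    """p is a Pareto improvement over q."""
    return all(a >= b for a, b in zip(p, q)) and p != q

# The pure outcomes form a Pareto frontier: none dominates another.
assert not any(dominates(p, q)
               for p in pure_outcomes for q in pure_outcomes if p != q)

# A 50/50 mix of the first two outcomes is feasible under mixed
# strategies...
mix = tuple(0.5 * a + 0.5 * b
            for a, b in zip(pure_outcomes[0], pure_outcomes[1]))
assert mix == (0.5, 0.5)
# ...and is Pareto-dominated by the third pure outcome.
assert dominates(pure_outcomes[2], mix)
```

Only when the pure outcomes already lie on a line does mixing fail to open up any interior, which is why the completely adversarial games collapse to the linear ones.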

So, since it's rather common to assume there are mixed strategies, this generalization will usually not be giving you anything over "linear games of negative slope".

# Notes on standard terminology and definitions.

I would have referred to my generalizations of zero sum as "adversarial games", and advocated "adversarial games" as an alternative to the problematic term "zero sum", except that "adversarial games" is a common term for non-cooperative game theory. Cooperative game theory is game theory with enforceable contracts. Non-cooperative game theory is the version of game theory readers are most likely familiar with, i.e., the study of games with Nash equilibria, etc. Therefore, games like Prisoner's Dilemma, Stag Hunt, etc., are "adversarial" or "non-cooperative" in the standard parlance, even though they are far from zero-sum.

So, I propose "completely adversarial" as the general term one should use as a replacement of "zero sum"; if someone asks you what this term means, you should say "there are no Pareto improvements in the set of possible outcomes" or something along those lines, and clarify that if we assume mixed strategies, this implies that the possible outcomes form a hyperplane.

In a comment, Vojta proposes "completely cooperative" to point to the opposite: a game where Pareto-domination forms a total order on the strategy profiles. He suggests "mixed-motive games" for games which are neither completely cooperative nor completely adversarial.

Note that if we assume mixed strategies, completely cooperative games are just linear games of strictly positive slope, the same way completely adversarial games must be linear (of negative slope) when we assume the existence of mixed strategies.

(Note also that my use of the term "linear" to describe games is, afaik, very nonstandard. It would be more precise for me to say "collinear", as in, all feasible points are collinear.)

The Wikipedia article on zero-sum games uses the term "conflict game" for what I term "completely adversarial". Feel free to use that term if you like, but I find "completely adversarial" more satisfying.

Frustratingly, Wikipedia currently defines zero sum as a special case of constant sum, erroneously implying that we can sensibly differentiate between the two. In the same section, it goes on to name resource-redistribution transactions like theft and gambling as examples of zero-sum games, which (as I mentioned in the introduction) is often far from the case.

The article on zero sum games includes a discussion of avoiding a game, which asserts that if avoiding playing the game is an option, then it will always be an equilibrium strategy for one or both players. This further reinforces the idea that zero-sum games are considered as resource transfers, since presumably the idea is that at least one player has nothing to gain from a zero-sum interaction.

I'm not sure what terminology to suggest for the resource-transfer transactions which are usually referred to as zero sum, such as theft, political elections, etc. Maybe it's even fine to keep calling these "zero sum", using the term for what it intuitively invokes, since I'm already proposing that when discussing game theory we should use more technically accurate terms. But the risk is that you'll be mistakenly interpreted as invoking game theory.

There's a whole additional section I could write about how we can try to formally understand what these resource-transfer situations even are, and what it means for them to be "zero sum" in the intuitive sense, but I think I will leave things here for now.


# Comments

"Completely adversarial" also better captures the strange feature of zero-sum games where doing damage to your opponent, by the nature of it being zero-sum, necessarily means improving your satisfaction, which is a very narrow class of situations.

I think the more self-descriptive the terminology is, the better. Fewer syllables is better too.

• Pareto-frontier games
• My-gain-your-loss games
• Inverse-reward games

"Completely adversarial" sounds a bit too much like "negative sum" games.

> In order for the standard rationality assumptions used in game theory to apply, the payouts of a game must be utilities, not resources such as money, power, or personal property. Zero-sum transfer of resources is often far from zero-sum in utility.

Hm, I feel like when I talk about game theory I don't usually use those assumptions? Admittedly I've never studied game theory in depth. But in particular, the concept of a Nash equilibrium only seems to rely on "each player has a preference order for payouts".

Actually, I'm not really sure what assumptions you mean. I assume "the players are indifferent between a certain payout of x and a 50% chance of 2x" is one, but I don't know if there's anything missing. More questions about these assumptions:

IIUC, if utility is logarithmic in a resource, then it's roughly linear in small changes of that resource. If I have £100 then I value a 50% chance of an extra £100 noticeably differently from a certain chance of an extra £50, but if I have £10000 it's about the same. Is it mostly reasonable to act as though the axioms work for resources, provided the amounts at stake are "small" for all players? (And when people talk about game theory over resources, does that tend to be the case, implicitly or explicitly?)
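A quick check of this with logarithmic utility (a sketch; the wealth numbers follow the example above):

```python
# Sketch: with u(w) = log(w), compare a 50% chance of an extra 100
# against a certain extra 50, at two different wealth levels.
import math

def gap(wealth):
    """Expected-utility gap between the sure thing and the gamble."""
    gamble = 0.5 * math.log(wealth + 100) + 0.5 * math.log(wealth)
    sure = math.log(wealth + 50)
    return sure - gamble

# At wealth 100 the certainty premium is noticeable; at 10,000 it
# nearly vanishes, i.e. utility is locally close to linear.
assert gap(100) > 0.05
assert gap(10_000) < 0.0001
```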

What do you lose if the assumptions are violated? Broadly speaking I assume many theorems about mixed and iterated games no longer apply.

It's a good question.

If we pretend "resource games" always have utility logarithmic in resources, then we can save almost everything, since that's just a transform. We might lose some results about iterated games, since normally iterated games are assumed to be worth a discounted sum (just like rewards in reinforcement learning).

One major case where game theory is applied to a resource is evolutionary game theory, where the payoffs are assumed to be reproductive success. I don't think they do anything logarithmic, and I'm confused about whether they should or not. (I think they should? Reproductive strategies should be evaluated like Kelly betting? But I could be missing something.)

Another case is mechanism design, where payoffs are often thought of as monetary. I think there, too, they often assume (erroneously) that utility is linear in money, when a logarithmic assumption is more appropriate. However, I'm not sure where this makes a real difference vs a cosmetic difference.

The colloquial usage of zero / negative / positive sum games as people apply it to daily life seems like it captures something pretty useful to me. Roughly, they're statements about whether we're neutral / worse off / better off as a result of playing the game. Playing a positive sum game makes the world better, a negative sum game makes it worse.

This conception is Utilitarian, but I think for enough people this is close enough to correct that even if they don't consider themselves Utilitarians it's still a good model for thinking about interactions?

Right, I totally agree. The problem is just that this usage sounds like game theory, but it's not. I want to emphasize that these are zero/positive/negative sum interactions (as opposed to "games").

Interactions are part of a larger context (allowing us to judge whether the interaction was net positive/negative). Games, as understood by game theory, are self-contained: a game includes everything you need to know about its context.

Another problem is that the game-theoretic concept "zero sum" applies to all the ways a game could turn out. If there's the possibility of mutual benefit, then the game isn't "zero sum".

But the colloquial usage is more naturally applied to specific interactions and their particular outcomes. An interaction was negative sum if everyone was worse off for it. The fact that there was a possibility of mutual benefit doesn't have to spoil the classification.

> This conception is Utilitarian, but I think for enough people this is close enough to correct that even if they don't consider themselves Utilitarians it's still a good model for thinking about interactions?

Not necessarily... we could define it in different ways. "Negative sum" might be defined as everyone being strictly worse off, rather than a utilitarian overall-things-are-worse-off-in-the-bargain.

That's the thing: we need to give the colloquial definition some formal framework, to be able to say anything specific about it. It currently lacks one, and game theory is being spuriously used despite not supporting the intuitive concept being conveyed.

You can recover (something close to) Jeff's explanation of the colloquial usage from the game theory version by positing that all games have a "do nothing" action such that if all players take "do nothing" then they all get zero utility.

Things like ad tech are often called zero-sum, when the speaker actually is trying to say that they are negative sum.

Unlike "zero-sum game", a meaningful concept that the post carefully analyzes and extends, "negative-sum game" seems to have no meaning at all.

Yeahhh. Maybe I should have emphasized this more.

In my ideal world, everyone would forget that "zero sum", "positive sum", and "negative sum" were terms which applied to game theory at all, but otherwise keep using them as they are.

"Negative sum" makes sense (at least, some sense) as a statement to make about interactions, as opposed to games. We can classify an interaction as positive or negative based on a comparison to a world where it didn't happen.

At higher levels of simulacra, a term can refer either to realized utility or to beliefs about utility.

I think they mean that ad tech (or perhaps a more consensus example is nukes) is a prisoner’s dilemma, which is nonzero sum as opposed to positive/negative/constant/zero sum.

attention is zero-sum: there's a fixed supply (well, as a simplification)

> attention is zero-sum: there's a fixed supply (well, as a simplification)

As the post notes, zero-sum in resources is not the same as zero-sum in satisfaction. Even if I can only spend a fixed attention budget, how I spend it determines global satisfaction, not just the distribution of satisfaction among players.

yep! I didn't mean to imply otherwise. but I should have specified, or maybe just phrased it differently; ex.: there's a fixed supply of attention (as a first approximation)

As a game theorist, I completely endorse the proposed terminology. Just don't tell other game theorists... Sometimes, things get even worse when some people use the term "general sum games" to refer to games that are not constant-sum.

I like to imagine different games on a scale between completely adversarial and completely cooperative. With things in the middle being called "mixed-motive games".

I started reading this post thinking I would disagree with the thesis, but I was persuaded by it:)

I think I'll switch to saying: zero-sum of X (ex.: zero-sum of dollars) and conflict game (given it's a fair term and is already in use) for their respective meaning.

Makes sense!

I think "zero sum in X" is pretty good for avoiding problems. For example, if someone says "politics is zero sum", it invites the mistaken application of minmax reasoning. If someone says "politics is zero sum in political power" then it's more clear that although there's only one presidency to hand out, only so many seats in Congress, etc, politics can produce outcomes of varying overall quality in other respects.

Another reason I don't like the term "zero sum" is that aggregating utility across different agents is more in the domain of moral philosophy than game theory.

I think people generally use zero sum to refer to zero sum (or constant sum) rewards e.g. one seat in congress or one minute of a viewer's attention. Even rock, paper, scissors would be negative sum if someone tried to disturb his opponent's sleep or spent a million dollars bribing the ref or fanatically practiced for a million games.

Even if we use a framework of rewards, it doesn't make sense to differentiate between zero sum, negative sum, positive sum, constant sum, etc. without (a) assuming that we can compare rewards across people (so you find the congress seat as rewarding as I would, etc) and (b) having a baseline to compare to (the two of us arm-wrestling for a candy bar is zero sum compared to a baseline of us somehow having to split the candy bar between us no matter what, positive sum if compared to a baseline where we wouldn't get any candy bar, and negative sum if we would have both gotten a candy bar otherwise).

> So, we could consider a game completely adversarial if it has a structure like this: no strategy profiles are a Pareto improvement over any others. In other words, the feasible outcomes of the game equal the game's Pareto frontier. All possible outcomes involve trade-offs between players.

I must have missed some key word - by this definition, wouldn't common-payoff games be "completely adversarial", because the "feasible" outcomes equal the Pareto frontier under the usual assumptions?

As an example, I think the game "both players win if they choose the same option, and lose if they pick different options" has "the two players pick different options, and lose" as one of the feasible outcomes, and it is not on the Pareto frontier, because if they picked the same thing, they would both win, and that would be a Pareto improvement.

Right, I understand how this correctly labels certain cases, but that doesn't seem to address my question?


How so? The common payoff game where you and I name a number and we both receive the sum of the numbers we name has a Pareto improvement on any strategy: we can always name higher numbers.

Maybe the confusion was the way I used "feasible"? Does it have a different definition in game theory? I stick by the first phrasing I used: a game is completely adversarial if no strategy profiles are Pareto over any others.

I read "feasible" as something like "rationalizable." I think it would have been much clearer if you had said "if no strategy profiles are Pareto over any others."

My game theory is a bit rusty, but I remember the Pareto frontier as referring to an equal overall utility condition, while a Pareto improvement requires no participant becoming worse off. In other words, you can move along the frontier by negatively impacting other players (which means by not making Pareto improvements). This situation makes the players adversaries because there are no longer, strictly speaking, benefits from cooperating.

You're thinking of a Kaldor-Hicks optimality frontier for {outcomes with maximal total payoff}, while the Pareto frontier is {maximal elements in the unanimous-agreement preference ordering over outcomes}.

Thanks for this - it's helpful to have a detailed description of some common misconceptions about types of games.  Personally, I don't particularly mind "zero sum" as the common term, interchangeable with "constant sum", and I'll only have to care about the misconception when someone's making an erroneous inference based on it.

I believe that the mistake in using the term "zero-sum" for games like "theft" or "elections" is NOT that the term zero-sum is limited, but that it throws out incredibly important information in the mapping.  It's just wrong to treat future interactions and trust as outside the decision.  In most real-world cases, the externalities and unmodeled effects are orders of magnitude bigger than the actual outcome of the game under discussion.

I think "zero/positive/negative sum" is just fine for the common term if everyone knows it's not referring to game theory, since "zero sum" seems just fine for describing an interaction which neither produces or destroys resources. I like the suggestion of "zero sum in X" to help make this clear (for example, theft is zero sum in property, at least if there's no property damage, even though it might be far from zero sum in terms of happiness or other things).

What I object to is the association between these common terms and game theory. In particular, I think the most common mistaken reasoning is to infer that minmax reasoning is appropriate in situations which have been described as zero sum.

> Personally, I don't particularly mind "zero sum" as the common term, interchangeable with "constant sum", and I'll only have to care about the misconception when someone's making an erroneous inference based on it.

> I believe that the mistake in using the term "zero-sum" for games like "theft" or "elections" is NOT that the term zero-sum is limited, but that it throws out incredibly important information in the mapping.

I think there are a few insurmountable problems to using the term for game theory:

• It's too tempting to think "zero sum" contrasts with "positive sum" and "negative sum". Those contrasting terms name perfectly good concepts if we just use them for interactions which lose/gain resources, but they can be given no sensible interpretation in game theory.
• As I outlined in the post, even "constant sum" isn't the right generalization. If you try to identify completely adversarial games this way, you'll miss examples where the utilities have to be re-scaled. So it's better to at least say "zero sum really means linear game of negative slope" or something along those lines, rather than "constant sum".

Granted, linear games of negative slope can be re-scaled to be zero-sum.

I think we're all a little guilty of using the term zero sum as a substitute for destructive or wasteful competition. Probably better to just call such situations "bad games" heh...

IIUC, a conflict game doesn't have to be a straight line; it can be a curved line, as long as "there are no Pareto improvements in the set of possible outcomes".

Right, and so can "completely adversarial" games, the way I've defined it.