
This post is an informal writeup of an idea discovered while investigating Power. It offers an additional tool for framing multi-agent games in terms of attainable utility, with the hope that data from both Impact and Power can offer a more complete view of Power-scarcity dynamics in multi-agent games.

## Overview

The purpose of this post is to define and give intuition for a measure of a player's impact on some real-valued outcome variable in an arbitrary multiplayer game; we call this measure "Impact". We present motivating examples of multiplayer games, then define Impact in the more general context of arbitrary Bayesian networks. We then explore some basic connections between Impact and standard multiplayer game theory and present some conjectures to motivate further research.

## Motivating Examples - Understanding Multiplayer Games

One of the difficulties in understanding multiplayer games is the principle that in general, a player's optimal action is dependent on the actions of other players. One way around this problem is to restrict consideration to Nash Equilibria, which fully specify optimal actions for us. However, this loses the power to describe sub-optimal actions, which are relevant in real-world examples involving imperfect humans and intractably large action spaces.

While Nash Equilibria "work" by first conditioning on optimal actions and then considering probabilistic strategies, we go the other way around: fix some arbitrary mixed strategy profile, then analyze expected utility. In the single-player setting, this is analogous to a value function and can be constructed straightforwardly. In the multiplayer setting, we're forced to deal with dependencies on other players' strategies that complicate the notion of "expected utility". To handle this dependency, our framework makes the following assumptions motivated by Bayesian games:

• Each player's posterior distribution over other players' actions is optimal given that player's information (the "common prior assumption")
• We consider interim expected utility (IEU), which takes an expectation across the player's posterior beliefs of other players' actions

Borrowing notation from Bayesian games, we now have enough machinery to consider various games in terms of IEU. In each example, we fix a strategy profile $\sigma$ and assume some common payoff $Y$ (which we'll later describe as the "outcome variable" in the formal definition of Impact).

Let $X_1, \dots, X_n \overset{\text{iid}}{\sim} \operatorname{Bern}(1/2)$ and $Y = \sum_{i=1}^n X_i$. This game can be thought of as follows: "everyone flips a coin, with reward given by the number of heads". Intuitively, the strategy for this game should be simple: flipping heads is better than flipping tails. Calculating IEU, we see that the results match our prediction:

$$\operatorname{IEU}_i(x_i) = \mathbb{E}[Y \mid X_i = x_i] = x_i + \frac{n-1}{2}.$$

In fact, the difference in IEU is exactly $1$, contributed by the added heads coming from coin $X_i$.
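
This IEU computation can be spot-checked by brute-force enumeration. A minimal Python sketch (the `ieu` helper and the choice $n = 4$ are ours, purely for illustration):

```python
from itertools import product

# Enumerate the "sum of coins" game: n players each flip a fair coin
# (0 = tails, 1 = heads) and the common payoff Y is the number of heads.
# Player i's IEU fixing action x_i is E[Y | X_i = x_i], with the other
# coins drawn uniformly.
n = 4

def ieu(i, x_i):
    total, count = 0.0, 0
    for flips in product([0, 1], repeat=n):
        if flips[i] != x_i:
            continue
        total += sum(flips)  # Y = number of heads
        count += 1
    return total / count

# Heads beats tails by exactly 1, matching the claimed IEU gap.
print(ieu(0, 1) - ieu(0, 0))  # → 1.0
```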

### Illusory Impact

Let $X_1, \dots, X_n \overset{\text{iid}}{\sim} \operatorname{Bern}(1/2)$ and $Y = X_1 \oplus \cdots \oplus X_n$. This game can be thought of as follows: "everyone flips a coin; you win iff there are an odd number of heads". This game has nice correlated equilibria: if players can coordinate and determine their coin's side, then they can easily win. However, we assume each player just blindly flips their coin, regardless of reward. Even though player $i$'s strategy is random, we can compute IEU for each choice of action:

$$\operatorname{IEU}_i(x_i) = \mathbb{E}[Y \mid X_i = x_i] = \frac{1}{2}.$$

Importantly, IEU is constant over the choice of $x_i$, which means that player $i$ has no "good" strategy (even assuming they could choose what side their coin lands on).
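
The constancy of IEU here is likewise easy to verify by enumeration (again, the `ieu` helper and $n = 4$ are illustrative choices of ours):

```python
from itertools import product

# "Illusory impact": Y = 1 iff the number of heads is odd. Player i's IEU is
# E[Y | X_i = x_i] with all other coins uniform; it comes out the same for
# both actions, so no choice of x_i helps.
n = 4

def ieu(i, x_i):
    outcomes = [f for f in product([0, 1], repeat=n) if f[i] == x_i]
    return sum(sum(f) % 2 for f in outcomes) / len(outcomes)

print(ieu(0, 0), ieu(0, 1))  # → 0.5 0.5
```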

### Coordinated impact

Let $C \sim \operatorname{Bern}(1/2)$. Now, let $X_i = C$ for each player $i$ and $Y = C$. This game can be thought of as follows: "a referee flips a coin, everyone shouts the result of the coin flip, and everyone wins iff it's heads". Intuitively, this is a ridiculous game: no player produces a meaningful action; the entire game is determined by the referee's coin flip. However, if we shut our eyes and blindly compute IEU, we find that $\mathbb{E}[Y \mid X_i = \text{heads}] = 1 > 0 = \mathbb{E}[Y \mid X_i = \text{tails}]$, thus player $i$ benefits from shouting "heads"?!

Well, in a sense, they do. The universes where player $i$ shouts "heads" are exactly the universes in which everyone wins. The problem is one of agency: player $i$ doesn't choose their action, the coin ($C$) does. If we condition on the value of $C$, then each player's action becomes deterministic, thus IEU is constant across each player's (trivial) action space.

Interestingly, we have a clear notion of the "IEU" of values of $C$, even though $C$ is an external variable rather than an action in the game. This suggests a limitation of conceptualizing Impact strictly in terms of games: variables have impact, not just actions. In the Bayesian network formalization to come, we'll see that the node $C$ impacts the outcome variable, while no nodes $X_i$ do.
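
The naive computation can be made concrete in a few lines (the world-as-dict encoding is an assumption of ours, for illustration):

```python
# "Coordinated impact": a referee's coin C is flipped; every player's action
# equals C, and Y = C. Naively, E[Y | X_i = heads] = 1 > 0 = E[Y | X_i = tails],
# so shouting "heads" looks beneficial. Conditioning on C, each player's
# action space collapses to a single value, and the gap disappears.
worlds = [{"C": 0}, {"C": 1}]  # uniform referee coin (1 = heads)

def ieu(action):
    matching = [w for w in worlds if w["C"] == action]  # X_i = C by definition
    return sum(w["C"] for w in matching) / len(matching)

print(ieu(1), ieu(0))  # → 1.0 0.0 — the "illusory" IEU gap
# After conditioning on C = c, player i can only "play" c, so their IEU is
# the constant c: no residual choice, hence no real impact for any player.
```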

## Defining Impact using Bayesian Networks

As suggested in the "Coordinated impact" example, the most principled approach is to define Impact as a property of dependent variables, then consider game theory as a useful application. Again motivated by the "Coordinated impact" example, we can show that Impact must explicitly consider variable dependencies to avoid issues with double-counting. We formalize with Bayesian networks to provide the desired dependency structure.

### Definition - General Bayesian Networks

Borrowing notation from Wikipedia, consider an arbitrary Bayesian network $G$ with variables $X_1, \dots, X_n$. Additionally, choose an "outcome node" $Y$ (we can assume that $Y$ is a descendant of each $X_i$, but the assumption isn't required); we use $Y$ as our outcome variable to measure Impact against.

We now define some notation:

• Given arbitrary R.V.s $A, B$, we define the estimate of $A$ given $B = b$ as

$$\mathbb{E}[A \mid B = b].$$

Note that $\mathbb{E}[A \mid B]$ is itself a random variable in the value of $B$.

• Given R.V.s $A, B$, we call $B$ a marginal variable of $A$ if the R.V. identity $\mathbb{E}[A \mid B] = B$ holds. Intuitively, we can think of $B$ as an estimate of $A$ given limited information.
• Consider nodes $X_i, X_j$. We say $X_i$ is an ancestor of $X_j$ (equivalently, $X_j$ is a descendant of $X_i$) iff $X_i \neq X_j$ and there exists a directed path $X_i \to \cdots \to X_j$. This relationship is direct iff such a path consists of a single edge.
• Let $\operatorname{An}(X_i)$ be the set of ancestors of node $X_i$. Let $\operatorname{dAn}(X_i)$ be the set of direct ancestors of node $X_i$.
• Given node $X_i$, define the Impact of $X_i$ on $Y$ to be the following R.V.

$$\operatorname{Imp}(X_i) := \mathbb{E}\big[Y \mid \operatorname{An}(X_i), X_i\big] - \mathbb{E}\big[Y \mid \operatorname{An}(X_i)\big].$$
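
To make the definition concrete, here is a sketch that evaluates the quantity $\mathbb{E}[Y \mid \operatorname{An}(V), V] - \mathbb{E}[Y \mid \operatorname{An}(V)]$ per world on the coordinated-impact network $C \to \{X_1, X_2\} \to Y$ (the `imp`/`cond_exp_Y` helpers and the dict encoding are ours):

```python
# The network C -> {X1, X2} -> Y with X1 = X2 = Y = C. Each world is a full
# assignment; the impact of node V, evaluated in world w, is
# E[Y | An(V), V] - E[Y | An(V)] with the conditioning values taken from w.
worlds = [{"C": c, "X1": c, "X2": c, "Y": c} for c in (0, 1)]  # uniform C

ancestors = {"C": [], "X1": ["C"], "X2": ["C"]}

def cond_exp_Y(world, given):
    """E[Y | the variables in `given` take their values in `world`]."""
    match = [w for w in worlds if all(w[v] == world[v] for v in given)]
    return sum(w["Y"] for w in match) / len(match)

def imp(node, world):
    an = ancestors[node]
    return cond_exp_Y(world, an + [node]) - cond_exp_Y(world, an)

for node in ["C", "X1", "X2"]:
    print(node, [imp(node, w) for w in worlds])
# C has impact -0.5 / +0.5 across the two worlds; X1 and X2 have impact 0
# in every world, matching the "coordinated impact" discussion above.
```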

We now work toward a notion of Impact scarcity - the idea that the total "magnitude" of impact across all nodes is bounded above. We will eventually demonstrate this claim in terms of the sum of variances of the $\operatorname{Imp}(X_i)$. First, we prove some necessary lemmas:

Lemma 1: Given an arbitrary topological ordering $X_1, \dots, X_n$ of the nodes, we can construct the following collection of R.V.s:

$$Y_k := \mathbb{E}[Y \mid X_1, \dots, X_k] \quad \text{for } 0 \le k \le n, \qquad Y_0 = \mathbb{E}[Y].$$

We now claim the following identity on R.V.s:

$$Y = \mathbb{E}[Y] + \sum_{k=1}^{n} (Y_k - Y_{k-1}).$$

Proof: The identity follows from a telescoping-sums argument, together with the observation that for $k = n$, we have $Y_n = \mathbb{E}[Y \mid X_1, \dots, X_n] = Y$ (the full set of nodes determines $Y$).
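
The telescoping identity can be checked numerically on the sum-of-coins game, where $Y_k = \mathbb{E}[Y \mid X_1, \dots, X_k]$ is computable by enumeration (encoding choices ours):

```python
from itertools import product

# Check of Lemma 1 on the "sum of coins" game: with X1..Xn uniform coins and
# Y = sum(X), define Y_k = E[Y | X_1..X_k]. The telescoping sum
# E[Y] + sum_k (Y_k - Y_{k-1}) should reproduce Y in every world.
n = 3
worlds = list(product([0, 1], repeat=n))

def Y(w):
    return sum(w)

def Y_k(w, k):
    """E[Y | X_1..X_k take their values in world w]."""
    match = [v for v in worlds if v[:k] == w[:k]]
    return sum(Y(v) for v in match) / len(match)

for w in worlds:
    total = Y_k(w, 0) + sum(Y_k(w, k) - Y_k(w, k - 1) for k in range(1, n + 1))
    assert total == Y(w)
print("telescoping identity verified")
```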

Lemma 2: Consider R.V.s $A, B$ s.t. $B$ is a marginal variable of $A$. Then $\operatorname{Var}(B) \le \operatorname{Var}(A)$.

Proof: Consider an arbitrary vector space of R.V.s containing $A$ and $B$. We see that the function $P \colon v \mapsto \mathbb{E}[v \mid B]$ is a projection, while $v \mapsto \sqrt{\operatorname{Var}(v)}$ is a (semi)norm. Thus, the claim is equivalent to $\lVert P(v) \rVert \le \lVert v \rVert$, which is a property of orthogonal projections on inner-product spaces.

Lemma 3: Consider arbitrary R.V.s $A, B, C$. If $A = f(B)$ for some function $f$ (if $B$ fully determines $A$), then $\mathbb{E}[C \mid A]$ is a marginal variable of $\mathbb{E}[C \mid B]$.

Proof: We observe the following (by the tower property, since $B$ fully determines $A$):

$$\mathbb{E}\big[\,\mathbb{E}[C \mid B] \;\big|\; A\,\big] = \mathbb{E}[C \mid A].$$

Now, consider the quantity $\mathbb{E}\big[\,\mathbb{E}[C \mid B] \;\big|\; \mathbb{E}[C \mid A]\,\big]$. We see that $A$ fully determines $\mathbb{E}[C \mid A]$, so this estimate is at most as accurate as $\mathbb{E}\big[\,\mathbb{E}[C \mid B] \mid A\,\big]$. However, the conditioning variable "knows" $\mathbb{E}[C \mid A]$ by definition, thus the optimal estimate is

$$\mathbb{E}\big[\,\mathbb{E}[C \mid B] \;\big|\; \mathbb{E}[C \mid A]\,\big] = \mathbb{E}[C \mid A].$$

The result follows by the definition of a marginal variable.

Note: We could also prove the result by expressing "$A$ is a marginal variable of $B$" as "some vector space projection maps $B$ to $A$" (equivalently, $A = \mathbb{E}[B \mid C]$ for some $C$); the result then follows from $P \circ P = P$.

Lemma 4: Consider arbitrary . Then the R.V. .

Proof: By "pausing" evaluation of the Bayesian network before $X_k$ is determined, we can argue that the R.V. $\mathbb{E}[Y_k \mid X_1, \dots, X_{k-1}] = Y_{k-1}$. Since $Y_{k-1}$ is fully determined by $(X_1, \dots, X_{k-1})$, we conclude by Lemma 3 that $Y_{k-1}$ is a marginal variable of $Y_k$.

By Lemma 2 (and noting that all vector space projections map $0$ to $0$), we conclude $\mathbb{E}[Y_k - Y_{k-1} \mid X_1, \dots, X_j] = 0$.

Lemma 5: For each $k$, $\operatorname{Imp}(X_k)$ is a marginal variable of $Y_k - Y_{k-1}$.

Proof: We invoke Lemma 3, choosing $C = Y$, $A = (\operatorname{An}(X_k), X_k)$, and $B = (X_1, \dots, X_k)$.

We can now proceed with our claim of Impact-scarcity:

Theorem 1 (Impact-scarcity):

$$\sum_{i=1}^{n} \operatorname{Var}\big(\operatorname{Imp}(X_i)\big) \le \operatorname{Var}(Y).$$

Proof: By Lemma 1, we have the following R.V. identity:

$$Y = \mathbb{E}[Y] + \sum_{k=1}^{n} (Y_k - Y_{k-1}).$$

We now compute the variance of both sides. By Lemma 4, the covariance terms $\operatorname{Cov}(Y_j - Y_{j-1},\, Y_k - Y_{k-1})$ are $0$ for $j \neq k$. Thus, we're left with

$$\operatorname{Var}(Y) = \sum_{k=1}^{n} \operatorname{Var}(Y_k - Y_{k-1}).$$

We finish by applying Lemmas 5 and 2, which prove $\operatorname{Var}\big(\operatorname{Imp}(X_k)\big) \le \operatorname{Var}(Y_k - Y_{k-1})$.

Note: The only inequality in the above proof is $\operatorname{Var}\big(\operatorname{Imp}(X_k)\big) \le \operatorname{Var}(Y_k - Y_{k-1})$. Thus, equality in the theorem is achieved exactly when this holds with equality for each $k$ (for example, in chain-shaped Bayesian networks).
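
Impact-scarcity can be spot-checked on the two coin games from earlier. There, each node's ancestor set is empty, so its impact reduces to $\mathbb{E}[Y \mid X_i] - \mathbb{E}[Y]$ (the helper names and encoding are ours):

```python
from itertools import product

# Spot-check of Impact-scarcity, sum_i Var(Imp(X_i)) <= Var(Y), on two games
# over independent fair coins X1..Xn. With empty ancestor sets,
# Imp(X_i) = E[Y | X_i] - E[Y], computable by enumeration.
n = 3
worlds = list(product([0, 1], repeat=n))

def var(f):
    vals = [f(w) for w in worlds]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def imp(i, Y):
    mean_Y = sum(Y(w) for w in worlds) / len(worlds)
    def f(w):
        match = [v for v in worlds if v[i] == w[i]]
        return sum(Y(v) for v in match) / len(match) - mean_Y
    return f

for name, Y in [("sum", lambda w: sum(w)), ("parity", lambda w: sum(w) % 2)]:
    lhs = sum(var(imp(i, Y)) for i in range(n))
    print(name, lhs, "<=", var(Y))
# sum:    0.75 <= 0.75  (equality: each coin's impact is "fully used")
# parity: 0.0  <= 0.25  (strict: the coins' individual influence cancels)
```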

### Application - Representing a Multiplayer Game as a Bayesian Network

As promised, we now apply our framework for Impact to the game theory framework from earlier. To begin, consider an arbitrary multiplayer (Bayesian) game with fixed strategy profile $\sigma$. We represent the mechanics of the game and the players' strategies as a Bayesian network, with a nature node $N$, type nodes $T_i$, action nodes $A_i$, and an outcome node $Y$:

Note: In an abuse of notation, we let $A_i$ refer both to the R.V. representing player $i$'s action and to the corresponding node in the Bayesian network (thus, $A_i$ (Bayesian network variable) $=$ $A_i$ (action)). Thus, statements like $\operatorname{Imp}(A_i)$ can be parsed as "the impact of player $i$'s action" without the need for cumbersome notation (and similarly for other nodes in the Bayesian network).

For now, we define an arbitrary outcome node $Y$ as a direct descendant of every other node. We will later set $Y$ to represent game-theoretically meaningful quantities; in particular, player $i$'s reward $u_i$.

Additionally, call a node $X$ deterministic if $X$ is fully determined by $\operatorname{dAn}(X)$ (equivalently, if $\operatorname{Var}(X \mid \operatorname{dAn}(X)) = 0$). Observe that for any deterministic node $X$, we have $\operatorname{Imp}(X) = 0$ by the definition of Impact. This has two important implications for our model:

• We see that the actions $A_i$ are deterministic functions of the types $T_i$, and the outcome $Y$ is a deterministic function of all variables in the Bayesian network. Thus, the only non-deterministic variables are $N$ and the $T_i$, which by the above must contribute all impact.
• Since $Y$ contributes zero Impact (because it's deterministic), its dependencies $\operatorname{dAn}(Y)$ won't matter for our analysis. Thus, we can safely let $Y$ depend on the entire Bayesian network, despite the fact that for certain relevant cases, $Y$ depends only on certain variables (example: $Y = u_i(A_1, \dots, A_n)$).

We're left with the impact terms from $N$ and each $T_i$, which we interpret in game-theoretic language:

• The impact $\operatorname{Imp}(N)$ can be thought of as "how good a random draw is the chosen value of $N$?" This doesn't translate precisely (as far as I know), but intuition can be gained from viewing the coordinated impact example's outcome as a function of the referee's coin $C$.
• The impact $\operatorname{Imp}(T_i)$ "looks like" player $i$'s IEU under the reward function given by $Y$. More specifically: letting $A_i = \sigma_i(T_i)$, we have

$$\operatorname{Imp}(T_i) = \mathbb{E}[Y \mid N, T_i] - \mathbb{E}[Y \mid N].$$

The residual term $\mathbb{E}[Y \mid N]$ is best understood as analogous to IEU, but considering $A_i$ as the fundamental random quantity (instead of $T_i$, its source of randomness). Since player $i$ only acts based on $T_i$, this corresponds to player $i$'s logic of "how good a random draw do I think $A_i$ is, given only knowledge of $N$?"

## Game Theory in terms of Impact

While research on game-theoretic results from the perspective of Impact is extremely limited (I only know of the preliminary work I've already done), the best litmus test for a proposed framework is to see if it readily produces meaningful results. In this section, I'll outline the immediate results from defining Impact in the setting of multiplayer games and suggest some avenues for further exploration.

### Power- and Impact-scarcity

The crux of these results is the fundamental notion of Impact-scarcity and the connection between $\operatorname{Imp}(T_i)$ and player $i$'s IEU. We begin by stating our Impact-scarcity result in terms of the outcome variable $Y$:

$$\sum_{i} \operatorname{Var}\big(\operatorname{Imp}(X_i)\big) \le \operatorname{Var}(Y).$$

One natural vein of results comes from plugging in variables for $Y$ and seeing what comes out. We give some basic examples:

• Letting $Y$ be constant, we find every Impact is constant. This makes sense - you can measure impact on a constant variable, but you can't do anything to change its value.
• Letting $Y = u_i$, we find a competitive dynamic between player $i$'s interests and "noise" generated by other players' actions. This can be understood by arguing that as $\operatorname{Var}\big(\operatorname{Imp}(T_i)\big)$ increases, player $i$'s optimal strategy becomes increasingly robust to other players' choices of action.

As mentioned in the intro, one goal of Impact research is to unify the ideas of Power- and Impact-scarcity. I suspect that the intuitive understanding is "Impact = change in Power", motivated by the simplification "Power = attainable utility, Impact = change in attainable utility". While the notion remains far from precise, I conjecture that Impact on a player's Power is a marginal variable of Impact on that player's reward, from which an upper bound on "Power" follows.

### IEU in terms of Impact

We now start from our other main result: the connection between $\operatorname{Imp}(T_i)$ and player $i$'s IEU. Since we don't assume any information about IEU by default, a natural starting point is the case of a Nash Equilibrium.

By definition of (Bayesian) Nash Equilibrium, each player's strategy must be a best response. Thus, for each action $a_i$ in the support of $\sigma_i$, we have $\operatorname{IEU}_i(a_i) = \max_{a_i'} \operatorname{IEU}_i(a_i')$. This implies $\operatorname{Imp}(T_i) = 0$ under $Y = u_i$, which can be understood by arguing that in a Nash Equilibrium, no player can unilaterally increase their expected reward.

Sensing a deeper connection between Impact and Nash Equilibria, we define the self-impact of an action $a_i$ to be $\operatorname{Imp}(T_i)$ given $A_i = a_i$, taking $Y = u_i$. Above, we showed that in a Nash Equilibrium, each player's actions have zero self-impact. Generalizing to suboptimal actions, we find that all actions have self-impact $\le 0$, with equality exactly when the action is a best response.

Unfortunately, the converse doesn't hold: each player taking only zero-self-impact actions doesn't imply a Nash Equilibrium. It implies a Nash Equilibrium of the game where each action space is restricted to the actions played with nonzero probability, but these notions aren't equivalent if strong actions remain unplayed (consider the mixed-strategy equilibrium for rock-paper-scissors when generalized to rock-paper-scissors-[insta-win action]).
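
The rock-paper-scissors caveat can be made concrete (the payoff encoding and the hypothetical "W" insta-win action are our illustrative choices):

```python
# Under the uniform mixed equilibrium of rock-paper-scissors, every action in
# the support has equal IEU, so self-impact is zero. Adding an unplayed
# "insta-win" action W leaves those IEUs untouched, yet the old profile is
# clearly no longer an equilibrium of the extended game.
def payoff(a, b):  # 1 = win, 0.5 = tie, 0 = loss; "W" beats everything
    if a == b:
        return 0.5
    if a == "W":
        return 1.0
    if b == "W":
        return 0.0
    beats = {("R", "S"), ("S", "P"), ("P", "R")}
    return 1.0 if (a, b) in beats else 0.0

support = ["R", "P", "S"]  # opponent plays uniformly over these

def ieu(a):
    return sum(payoff(a, b) for b in support) / len(support)

print({a: ieu(a) for a in ["R", "P", "S", "W"]})
# R, P, S all give 0.5 (zero self-impact on the support), while W gives 1.0:
# zero self-impact over the played actions does not certify an equilibrium.
```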

We can also explore the fact that in a Nash Equilibrium, Power equals IEU. Thus, we can write , which is strictly a function of . Intuitively, Impact accounts for variation in  while Power takes a max over it, thus they become equivalent in limiting cases for .

### Considering other players / Impact-trading

As a final and unexplored angle on Impact, consider the case where player 1 impacts $u_2$ and player 2 impacts $u_1$. Assuming it wouldn't adversely affect their own utilities, the players can "trade" by modifying their strategies to mutually grant each other increased reward. The premise itself immediately raises red flags; I'll attempt to briefly address them:

• "this requires communication between players!" - yep. Barring non-causal decision theory, agents need some way to coordinate strategies.
• "what if the trade accidentally hurts one player?" - the simplest answer is "they only trade if it's mutually beneficial", but that's equivalent to existing solutions to coordination problems like the Prisoners' Dilemma. Ideally, a notion of "reward exchange rate" could be computed using Impact, especially if allowing for generalizations of reward like quasilinear utility.
• "Impact is essentially a measure of variance, while utility is a measure of expectation. How do you convert between them?" - I don't know, but have ideas:
  • Trade off values of independent "sub-variables" of your action space. Example: if we're playing 10 simultaneous Prisoners' Dilemmas, then trade off "I cooperate if you do" for each individual PD instance.
  • Find some linear measure of Impact and trade with that instead. This looks much more like Power-trading, which offers a similar mechanism for mutually increased reward.

Temporarily putting the issues aside, I intend to explore Impact-trading as an attempt to understand coordination-centric games like the Prisoners' Dilemma. More generally, I hope to apply the Impact framework to a broad range of multiplayer game-theoretic phenomena and see if new insight can be gained.

## Comments

(midco developed this separately from our project last term, so this is actually my first read)

I have a lot of small questions.

What is your formal definition of the IEU? What kinds of goals is it conditioning on (because IEU is what you compute after you view your type in a Bayesian game)?

Multi-agent "impact" seems like it should deal with the Shapley value. Do you have opinions on how this should fit in?

You note that your formalism has some EDT-like properties with respect to impact:

> Well, in a sense, they do. The universes where player $i$ shouts "heads" are exactly the universes in which everyone wins. The problem is one of agency: player $i$ doesn't choose their action, the coin ($C$) does. If we condition on the value of $C$, then each player's action becomes deterministic, thus IEU is constant across each player's (trivial) action space.

This seems weird and not entailed by the definition of IEU, so I'm pretty surprised that IEU would tell you to shout 'heads.'

> Given arbitrary R.V.s $A, B$, we define the estimate of $A$ given $B=b$ as

Is this supposed to be $\mathbb{E}[A \mid B = b]$? If so, this is more traditionally called the conditional expectation of $A$ given $B = b$.