Consider any finite two-player game in normal form (each player can have any finite number of strategies; we can also easily generalize to certain classes of infinite games). Let $S_A$ be the set of pure strategies of player $A$ and $S_B$ the set of pure strategies of player $B$. Let $u_A : S_A \times S_B \to \mathbb{R}$ be the utility function of player $A$. Let $\epsilon \in \Delta(S_A \times S_B)$ be a particular (mixed) outcome. Then the alignment of player $B$ with player $A$ in this outcome is defined to be:

$$a_{BA}(\epsilon) := \frac{\mathbb{E}_{\epsilon}[u_A] - \min_{t \in S_B} \mathbb{E}_{\epsilon_A}[u_A(\cdot, t)]}{\max_{t \in S_B} \mathbb{E}_{\epsilon_A}[u_A(\cdot, t)] - \min_{t \in S_B} \mathbb{E}_{\epsilon_A}[u_A(\cdot, t)]}$$

where $\epsilon_A \in \Delta(S_A)$ is the marginal of $\epsilon$ on $S_A$.
Ofc so far it doesn't depend on $u_B$ at all. However, we can make it depend on $u_B$ if we use $u_B$ to impose assumptions on $\epsilon$, such as requiring $\epsilon$ to be a Nash equilibrium (or some other solution concept).
Caveat: If we go with the Nash equilibrium option, $a_{BA}$ can become "systematically" ill-defined (consider e.g. the Nash equilibrium of matching pennies, where the denominator vanishes). To avoid this, we can switch to the extensive-form game where $B$ chooses their strategy after seeing $A$'s strategy.
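A minimal sketch of this computation (my reconstruction of the formula above; rows index $A$'s pure strategies, columns index $B$'s, and the zero-denominator convention discussed further down the thread is included):

```python
import numpy as np

def alignment(u_A, eps):
    """Alignment a_BA of player B with player A at mixed outcome eps.

    u_A : (m, n) array, A's payoff for each pure strategy profile.
    eps : (m, n) array, joint distribution over pure strategy profiles.
    """
    actual = (eps * u_A).sum()         # E_eps[u_A]
    eps_A = eps.sum(axis=1)            # marginal of eps on S_A
    by_B_choice = eps_A @ u_A          # E_{eps_A}[u_A(., t)] for each t in S_B
    worst, best = by_B_choice.min(), by_B_choice.max()
    if np.isclose(best, worst):        # B cannot affect A's expected utility
        return 1.0                     # convention suggested below in the thread
    return (actual - worst) / (best - worst)
```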
✅ Pending unforeseen complications, I consider this answer to solve the open problem. It essentially formalizes B's impact alignment with A, relative to the counterfactuals where B did the best or worst job possible.
There might still be other interesting notions of alignment, but I think this is at least an important notion in the normal-form setting (and perhaps beyond).
This also suggests that "selfless" perfect B/A alignment is possible in zero-sum games, with the "maximal misalignment" only occurring if we assume B plays a best response. I think this is conceptually correct, and not something I had realized pre-theoretically.
In a sense, your proposal quantifies the extent to which B selects a best response on behalf of A, given some mixed outcome. I like this. I also think that "it doesn't necessarily depend on $u_B$" is a feature, not a bug.
EDIT: To handle constant-payoff games, we might want to define the alignment to equal 1 if the denominator is 0. In that case, the response of B can't affect A's expected utility, and so it's not possible for B to act against A's interests. So we might as well say that B is (trivially) aligned, given such a mixed outcome?
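A quick check with the `alignment` sketch from the answer above (which already bakes in this convention):

```python
import numpy as np

U_const = np.zeros((2, 2))       # A's payoff is the same in every cell
eps = np.full((2, 2), 0.25)      # an arbitrary mixed outcome
print(alignment(U_const, eps))   # 1.0 -> B is trivially aligned here
```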
So, something like "fraction of preferred states shared"? Describe preferred states for P1 as the cells in the payoff matrix that are best for P1, for each P2 action (and preferred states for P2 in a similar manner). The fraction of P1's preferred states that are also preferred for P2 is the measure of P1's alignment with P2. The fraction of shared states to the total number of preferred states is a measure of the total alignment of the game.
For a 2x2 game, each player will have 2 preferred states (corresponding to the 2 possible actions of the opponent). If one of them is the same cell, each player is 50% aligned with the other (1 of 2 shared) and the game in total is 33% aligned (1 of 3). This also generalizes easily to the NxN case and to >2 players.
And if there are K cells with the same payoff to choose from for some opponent action, we can give each of them weight 1/K instead of 1.
(It would be much easier to explain with a picture and/or table, but I'm pretty new here and haven't yet found out how to make them.)
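In lieu of a table, here is a minimal sketch of the rule as I read it (the 1/K tie-weighting from above is included; counting a shared tied cell by the smaller of the two weights is my own guess):

```python
import numpy as np

def preferred_weights_p1(U1):
    """For each P2 action (column), weight P1's best cells by 1/K (K = # ties)."""
    W = np.zeros(U1.shape)
    for j in range(U1.shape[1]):
        best = np.isclose(U1[:, j], U1[:, j].max())
        W[best, j] = 1.0 / best.sum()
    return W

def alignment_fractions(U1, U2):
    """(P1-to-P2, P2-to-P1, whole-game) alignment fractions."""
    W1 = preferred_weights_p1(U1)        # P1 picks a row, per column
    W2 = preferred_weights_p1(U2.T).T    # P2 picks a column, per row
    shared = np.minimum(W1, W2).sum()    # weight of jointly preferred cells
    return (shared / W1.sum(),
            shared / W2.sum(),
            shared / (W1.sum() + W2.sum() - shared))
```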
Does agency matter? There are 21 x 21 x 4 possible payoff matrices for a 2x2 game if we use ordinal payoffs. For the vast majority of them (all but about 7 x 7 x 4 of them), one or both players can make a decision without knowing or caring what the other player's payoffs are, and get the best possible result. Of the remaining 182 arrangements, 55 have exactly one box where both players get their #1 payoff (and will therefore easily select that as the equilibrium).
All the interesting choices happen in the other 128ish arrangements, 6/7 of which have the ...
|        | B1          | B2          |
|--------|-------------|-------------|
| **A1** | **1**/**1** | 0/0         |
| **A2** | 0/**0**     | **0.8**/-1  |
I have put each player's preferred payoff in bold. I think by your rule this works out to 50% aligned. However, the Nash equilibrium is both players choosing the 1/1 result, which seems perfectly aligned (intuitively).
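(Plugging this game into the `alignment_fractions` sketch from the comment above, with the payoffs as written, does give 50%:

```python
import numpy as np

U1 = np.array([[1.0, 0.0], [0.0, 0.8]])   # P1's payoffs
U2 = np.array([[1.0, 0.0], [0.0, -1.0]])  # P2's payoffs
print(alignment_fractions(U1, U2))        # (0.5, 0.5, 0.333...) per the rule
```
)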
|        | B1              | B2              |
|--------|-----------------|-----------------|
| **A1** | **1**/**0.5**   | 0/0             |
| **A2** | 0/0             | **0.5**/**1**   |
In this game, all preferred states are shared, yet there is a Nash equilibrium where each player plays, 2/3 of the time, the move that can get them 1 point, and the other move 1/3 of the time. I think it would be incorrect to call this 100% aligned.
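For concreteness, the standard indifference conditions give this equilibrium. With $p$ = P1's probability of A1 and $q$ = P2's probability of B1:

$$0.5\,p = 1 \cdot (1-p) \;\Rightarrow\; p = \tfrac{2}{3}, \qquad 1 \cdot q = 0.5\,(1-q) \;\Rightarrow\; q = \tfrac{1}{3},$$

so P1 plays A1 (their 1-point move) 2/3 of the time, and P2 plays B2 (their 1-point move) 2/3 of the time.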
(These examples were not obvious ...
I think this is backward. The game's payout matrix determines the alignment. Fixed-sum games imply (in the mathematical sense) unaligned players, and common-payoff games ARE the definition of alignment.
When you start looking at meta-games (where resource payoffs differ from utility payoffs, based on agent goals), "alignment" starts to make sense as a distinct measurement: how much the players' utility functions transform the payoffs (in the sub-games of a series, and in the overall game) from fixed-sum toward common-payoff.
I don't follow. How can fixed-sum games mathematically imply unaligned players, without a formal metric of alignment between the players?
Also, the payout matrix need not determine the alignment, since each player could have a different policy from strategy profiles to responses, which in principle doesn't have to select a best response. For example, imagine playing stag hunt with someone who responds 'hare' to stag/stag; this isn't a best response for them, but it minimizes your payoff. However, another partner could respond 'stag' to stag/stag, which (I think) makes them "less unaligned" with you than the partner who responds 'hare' to stag/stag.
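To illustrate with the alignment coefficient sketched in the answer above (hypothetical stag-hunt payoffs for you as player A; stag = first index):

```python
import numpy as np

U_A = np.array([[4, 0],
                [3, 3]])   # your payoffs: stag pays off only if B joins

stag_stag = np.array([[1.0, 0.0], [0.0, 0.0]])  # B answers your stag with stag
stag_hare = np.array([[0.0, 1.0], [0.0, 0.0]])  # B answers your stag with hare
print(alignment(U_A, stag_stag))  # 1.0 -> B gives you the best-case response
print(alignment(U_A, stag_hare))  # 0.0 -> B gives you the worst-case response
```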
Another point you could fix using intuition would be complete disinterest. It makes sense to put it at 0 on the [-1, 1] interval.
Assuming rational utility maximizers, a board that results in a disinterested agent would be:
|        | B1   | B2   |
|--------|------|------|
| **A1** | 1/0  | 1/1  |
| **A2** | 0/0  | 0/1  |
Then each agent cannot influence the rewards of the other, so it makes sense to say that they are not aligned.
More generally, if arbitrary changes to one player's payoffs have no effect on the behaviour of the other player, then the other player is disinterested.
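A quick check of the board above (reading x/y as P1's/P2's payoff), showing that neither player's payoff varies with the other's move:

```python
import numpy as np

U1 = np.array([[1, 1],
               [0, 0]])    # P1's payoffs from the board above
U2 = np.array([[0, 1],
               [0, 1]])    # P2's payoffs from the board above

print(np.ptp(U1, axis=1))  # [0 0] -> P2's column choice never moves P1's payoff
print(np.ptp(U2, axis=0))  # [0 0] -> P1's row choice never moves P2's payoff
```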
In my experience, constant-sum games are considered to provide "maximally unaligned" incentives, and common-payoff games are considered to provide "maximally aligned" incentives. How do we quantitatively interpolate between these two extremes? That is, given an arbitrary 2×2 payoff table representing a two-player normal-form game (like Prisoner's Dilemma), what extra information do we need in order to produce a real number quantifying agent alignment?
If this question is ill-posed, why is it ill-posed? And if it's not, we should probably understand how to quantify such a basic aspect of multi-agent interactions, if we want to reason about complicated multi-agent situations whose outcomes determine the value of humanity's future. (I started considering this question with Jacob Stavrianos over the last few months, while supervising his SERI project.)
Thoughts:
The function may or may not rely only on the players' orderings over outcome lotteries, ignoring the cardinal payoff values. I haven't thought much about this point, but it seems important. EDIT: I no longer think this point is important, but rather confused. If I were interested in thinking about this more right now, I would: