We’ll get there in the end; bear with me.

Introduction to ZD strategies in IPD

Feel free to skip if you’re already familiar with ZD strategies.

In the iterated prisoner’s dilemma (IPD) a zero-determinant (ZD) strategy is one which forces a linear relationship between your opponent’s payoff and your own. These strategies can take either generous or extortionate forms.

Think of it as a modified version of tit-for-tat.

***

A Generous ZD strategy still always responds to C with C but will sometimes also respond to D with C (it sometimes fails to punish). With "standard" PD utilities (T=0, R=-1, P=-2, S=-3) my opponent gains 1 utility by defecting. If I defect back in retaliation, I cost him 2 units of utility. If I defect back with probability 0.7, on average I cost him 1.4 units of utility. This still means that defecting is disadvantageous for my opponent (a loss of 0.4 utility) but not quite as disadvantageous as it would be if I were playing pure tit-for-tat (a loss of 1 utility).

This gets slightly more complex when you don't have constant gaps between T, R, P and S but the principle remains the same.

If he defects at all, my opponent will end up gaining more utility than me, but less than he would have got if he had co-operated throughout.
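To make the arithmetic concrete, here's a minimal sketch (using the payoff values and the 0.7 retaliation probability from above; the function name is just illustrative):

```python
# Payoff values from the example above (T > R > P > S, with unit gaps).
T, R, P, S = 0, -1, -2, -3

def expected_cost_of_defection(p_retaliate):
    """Net utility my opponent expects to lose, relative to mutual co-operation,
    by defecting once when I retaliate with probability p_retaliate."""
    gain_from_defecting = T - R   # he gets T instead of R in the round he defects: +1
    loss_when_punished = R - S    # he gets S instead of R in the round I defect back: 2
    return p_retaliate * loss_when_punished - gain_from_defecting

print(expected_cost_of_defection(1.0))  # pure tit-for-tat: net loss of 1
print(expected_cost_of_defection(0.7))  # generous, 0.7 retaliation: net loss of ~0.4
```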

Advantages of GZD are:

1. Total utility isn’t damaged as much by accidental defections as it is in pure tit-for-tat.

2. It won’t get caught in endless C-D, D-C, C-D, D-C as tit-for-tat can.

***

On the other hand, an Extortionate ZD strategy always responds to D with D but also sometimes responds to C with D. Provided I don’t respond to C with D too often, it is still advantageous for my opponent to play C (in terms of his total utility).

If my opponent co-operates at all I'll end up with more utility than him. If he gives in and plays C all the time (to maximise his own utility), I can achieve a higher utility than I would from mutual co-operation.

The main disadvantage of EZD in evolutionary games is that it defects against itself.

For both EZD and GZD you can vary your probabilities to be more or less generous/extortionate, provided you always ensure your opponent gets the most utility by co-operating.
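To illustrate the general shape of these strategies, here's a rough sketch of a memory-one player whose move depends only on the opponent's last move. The probabilities below are placeholders for illustration; actual ZD strategies condition on both players' previous moves and have to satisfy specific linear constraints, which I'm not reproducing here.

```python
import random

def memory_one_player(p_c_after_c, p_c_after_d):
    """A strategy that co-operates with probability p_c_after_c when the opponent's
    last move was C, and with probability p_c_after_d when it was D."""
    def play(opponent_last_move):
        p = p_c_after_c if opponent_last_move == "C" else p_c_after_d
        return "C" if random.random() < p else "D"
    return play

# Illustrative settings for the two families described above (not the published ZD values):
generous = memory_one_player(p_c_after_c=1.0, p_c_after_d=0.3)      # always rewards C, sometimes forgives D
extortionate = memory_one_player(p_c_after_c=0.8, p_c_after_d=0.0)  # sometimes defects on C, never forgives D
```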

Different opinions on fairness

Playing against an extortionate ZD strategy is similar to playing against an opponent who has a different perception of what is fair. Maybe your opponent had to pay to play the game while you got in free, so he wants a higher percentage of the winnings. Maybe you think this is just his bad luck and that a 50:50 split is fair.

If you give in to what seems to you to be an extortionate strategy, your opponent is encouraged to make more extortionate demands in future, or modify his definition of what is fair. At some point, the level of extortion is so high that you barely get any advantage from co-operating.

This brings us to a proposal of Eliezer’s.

When choosing whether to give in to an extortioner you can capitulate to some extent, provided that you ensure your opponent gains less utility than he would if he agreed to your favoured position (ideally you should let your opponent know that this is what you're doing).

This removes any motivation for your opponent to extort and encourages him to give his true estimate of what is fair.
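A minimal sketch of how that cap might be expressed, assuming you can estimate the average per-round payoff your opponent would get under your favoured fair split (the function and parameter names here are mine, purely for illustration):

```python
def acceptable_to_capitulate(opponent_avg_payoff, fair_agreement_payoff, margin=0.0):
    """Keep giving ground only while the extortioner's average payoff stays strictly
    below what he would have got by simply accepting your favoured (fair) position."""
    return opponent_avg_payoff < fair_agreement_payoff - margin

# For instance, if agreeing to my favoured split would give him an average of 3 per round:
print(acceptable_to_capitulate(opponent_avg_payoff=2.8, fair_agreement_payoff=3.0))  # True
print(acceptable_to_capitulate(opponent_avg_payoff=3.1, fair_agreement_payoff=3.0))  # False
```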

Two experiments in ZD strategies

Hilbe et al. performed an experiment with humans playing against computerised ZD strategies. Four different strategies were tried, ranging from strong extortion through to strong generosity. Regrettably, pure tit-for-tat wasn’t included; I would have liked to see a comparison with it.

The two generous strategies achieved higher average utility for the ZD programme than the two extortionate strategies. If the human players had acted purely in self-interest (they were paid according to the points gained), the extortionate strategists would have won. So what happened?

Firstly, a bit of detail about the experimental setup. The participants were not told that they were playing against a computer programme – the impression given was that they were playing against one of the other experimental subjects (although this wasn’t explicitly stated).

Looking at the results from each individual player it is clear that none of the human participants allowed the extortionate ZD strategists to beat the score that is achievable from co-operating (R=0.3).

It seems that the subjects automatically used a strategy similar to the one suggested by Eliezer (or at least that this represented a limit on how far they would co-operate) when dealing with a player who seemed to be extortionate.

In another experiment some players did allow the extortionate ZD strategy to achieve higher utility than it would have got by co-operating but over the two experiments there is a strong tendency not to let the unfair strategy get away with it.

***

The second experiment tested the effects of:

1. More rounds of IPD (500 vs 60)

2. Being told that your opponent is a computer (Aware (A), Unaware (U))

3. Extortionate (E) / Generous (G) ZD strategies

Interestingly, human players in this second experiment were, over a long game, much more willing to let their EZD opponent “get away with it” when they were told that their opponent was a computer (see the grouping of red dots in figure a below). For the final 60 rounds of a 500 round IPD the extortionate ZD strategist was achieving 3.127 average utility – significantly (p=0.021) more than the R=3 gained from both players co-operating.

***

So why are we more willing to let a computer “get away with it”?

Maybe we view a computer as incapable of changing its strategy and are therefore more likely to give in.

Alternatively, if you think you are playing against a human, even after 500 rounds you will probably be annoyed enough with him for not co-operating properly that you won’t be co-operating the whole time. You’re less likely to get annoyed at a computer for beating you at a game. As soon as you realise you can’t beat the computer you can just try to do the best you can for yourself. This doesn't dent your pride as much as it would against a human opponent.

(There was one person who just defected pretty much throughout the whole 500AE experiment despite knowing he was playing against a computer; maybe he just decided to play tit-for-tat, or maybe it's just the lizardman constant. Without him the 500AE ZD strategy would have had even more impressive results.)

***

To me this looks like humans having a heuristic to deal with extortionate opponents/people who have a different opinion on what a fair split is. People seem to apply this heuristic naturally through emotions such as pride, annoyance and anger.

The heuristic works out roughly similarly to Eliezer’s suggestion of regulating your opponent’s winnings to less than he would achieve if he played fair/by your rules.

Being told that your opponent is a computer effectively turns off this heuristic (if you have long enough to get a rough idea of the computer’s strategy). This motivates your opponent to become more extortionate, something which the original heuristic was protecting you against.

In praise of heuristics

All of that is a very long introduction/example of my main point.

Heuristics are good.

Heuristics are very good.

You don’t even know how many times your heuristics have saved you.

You possibly have no idea what they are saving you from.

Knowing about biases can hurt people. Getting rid of heuristics without understanding them properly is potentially even more dangerous.

***

A recent discussion made me aware of this post by Scott where he tried to come up with a way of dealing with people who claim you have caused them offense.

One of the motivations for the post was Everybody Draw Mohammed Day. EDMD seems like a natural outworking of the heuristic described above.

People see the terrorists increasing their utility unfairly by attacking people who draw pictures of Mohammed. To ensure they don’t get an advantage by defecting, people want to decrease their utility back to below where they started – hence Everybody Draw Mohammed Day.

The terrorists were also following a similar heuristic – the original cartoon decreased their utility, they are trying to decrease the utility of those who created it to demotivate further defection.

The heuristic isn’t there to improve the world – it is just there so that the person performing it doesn’t encourage increased defection against themselves and increased demands from others.

Scott’s post was an attempt to turn off the heuristic and replace it with a principled position:

The offender, for eir part, should stop offending as soon as ey realizes that the amount of pain eir actions cause is greater than the amount of annoyance it would take to avoid the offending action, even if ey can't understand why it would cause any pain at all. If ey wishes, ey may choose to apologize even though no apology was demanded

In this case, his proposal was criticised by others and Scott ended up rejecting it.

Had Scott applied his policy universally, he would likely have ended up losing out as those he dealt with modified their behaviour to become more demanding of him.

It’s likely that our heuristic doesn’t lead us to an optimal result, it just prevents some bad results which Scott’s proposal may have led to.

Possibly a principled application of Eliezer's proposal would optimise the result better than the heuristic does. In the experiments there was no standard amount by which people chose to penalise the EZD strategist; the results were fairly spread out over the region between full defection and Eliezer's defined maximum co-operation. Sometimes the heuristic doesn't stop there and co-operates more than Eliezer would suggest.

***

This all sounds a bit harsh on Scott. Actually, putting an idea out there, engaging with criticism and admitting when you were wrong is exactly the right thing to do.

I’ll give an example where I didn’t do this and it did, in fact, end up biting me in the butt.

***

A while back, I was thinking about status. Status is, within a fixed group, a zero-sum game. People in the workplace are constantly attempting to improve their position on the ladder at the expense of others. This doesn’t just apply to promotions, it applies to pretty much everything. Alex wants to feel like he’s important and will get massively offended if he feels that Bob is trying to take status which should be Alex’s. This probably accounts for ~95% of disagreements in my workplace.

Zero-sum games are, usually, for suckers. If you can get out of the game and into a positive sum game, you probably should. This is doubly true if you’re competing for a thing you’re not really interested in.

Status very much matches my definition of a zero-sum game which I don’t want to play. The problem is, status also allows access to things which I do want – it is a very useful instrumental value. It is a game which everyone else plays so it’s hard to unilaterally leave.

Instead, I made the decision not to play status games unless I really have a need of the status (e.g. I will attempt to achieve status in the eyes of the person who will decide on a potential promotion but not others). Essentially I was trying to turn off the heuristic of “always attempt to gain status with everyone” and replace it with a trimmed down version “attempt to gain status only with those people who make a decision about your pay/promotions etc.”

Now if you have any experience in how status games work, you may realise that this was a naïve approach. If it isn’t obvious to you, have a think about what might go wrong.

*

*

*

If you don’t fight for your status with your colleagues, it’s like blood in the water. If they can push themselves up at your expense they will, not always maliciously, it’s just “the thing to do” in a workplace. If there are no consequences then it will happen again and again. In the end, this will mean that the people who you care about impressing will see the status that others treat you as having and start to modify their own opinion of your status.

It took me a while to realise just how harmful this was. When I did, I had to do a lot of firefighting to re-establish a sense of normality.

All is fine again now but the experience did teach me that my heuristics are there for a reason and that I shouldn’t get rid of them entirely without properly understanding the consequences.

***

I haven’t decided exactly how I should deal with tackling heuristics in future but I have a few initial thoughts.

1. Don’t be overconfident that you have really understood why the heuristic is there

2. When comparing potential pros and cons, remember the cons are likely to be worse than you think

3. Discuss ideas with others

4. Where possible, make small changes first and monitor progress

27 comments
Heuristics are good. Heuristics are very good. You don’t even know how many times your heuristics have saved you. You possibly have no idea what they are saving you from.

Agreed. I want the t-shirt.

Status is, within a fixed group, a zero-sum game. People in the workplace are constantly attempting to improve their position on the ladder at the expense of others.

Oops, wrong. Status is a complex mix of different types of esteem and judgement that people apply to one another in different contexts. It's nowhere near zero-sum or even linear.

Which is _WHY_ heuristics are good - they are learned simple responses for complex inputs. They're not optimal for all (or perhaps any) possible applications, but they're a likely-good-enough approach for common situations when you don't know all the relevant details and haven't built a more complete model.

I’m not sure we disagree on anything other than the definition of “status” so let’s taboo it.

I am using the common meaning of “relative social rank”. Within a fixed group this is zero sum from the fact that it’s relative - for me to go up someone else has to go down.

If you replace “status” with “relative social rank” in the OP do you disagree with it?

Without committing to a position on the question of status, I just want to note that your account of relative rank (of any kind, social or otherwise) is flawed. What you say is true if and only if it is the case that:

For any members A, B, either Rank(A) > Rank(B) or Rank(B) > Rank(A).

There seem to be few rank-orderings in the real world that work like this. Usually, there are also other possibilities—namely:

  • Rank(A) == Rank(B)
  • Rank(A) || Rank(B) [this means “Rank(A) is incomparable with Rank(B)”]

(The latter possibility being available makes Rank() a partial order instead of a total order.)

Either of these possibilities make it possible to rise in rank without anyone else dropping in rank.

(Note that this is not the same thing as the value of Rank() being an absolute or cardinal measure. All of the above is consistent with Rank() being relative / ordinal.)

P.S.: As applies to social status in particular, the existence of the first possibility I listed (“Rank(A) == Rank(B)”) corresponds to “flat” parts or levels of otherwise hierarchical structures, whereas the existence of the second possibility I listed (“Rank(A) || Rank(B)”) corresponds to complex hierarchies, where relative status is difficult to judge across branches (within some range of possible values).

Do either of those things prevent it from being zero sum?

A fixed set of people in a fixed branching structure even with flat levels still has no way to add utility that I can see - I feel like I’m missing something.

Can you give an example?

I will add that theoretically flat structures generally don’t feel that way to those inside them and this is actually where the fiercest fighting seems to happen. Having difficult to compare social ranks doesn’t seem to stop people trying to outdo each other either!

Edit: Wait, I think I know what you mean now - if the ranking structure isn’t fixed then you can get additional utility, provided that when people are equal rank you e.g. list the top two both as rank 1 instead of averaging the available ranks and saying they both have rank 1.5.

I was disputing this claim of yours in particular:

for me to go up someone else has to go down

That is, as I have shown, not true given either or both of the two possibilities I listed (which are commonplace across a wide variety of real-world rank structures).

I take no position, here, on the question of “adding utility” (even discussing this question requires a number of non-trivial additional assumptions), nor the question of whether the rank structure (or alterations thereof) is “zero-sum” (it is not clear to me what this means; it would benefit from a more rigorous definition).

I will add that theoretically flat structures generally don’t feel that way to those inside them and this is actually where the fiercest fighting seems to happen. Having difficult to compare social ranks doesn’t seem to stop people trying to outdo each other either!

Indeed. Jo Freeman’s famous essay “The Tyranny of Structurelessness” discusses this in some detail.

I'm not sure you have shown what you say you have. It may depend on what is meant by "rising in rank" and "dropping in rank".

The following is obviously true: If people have status values somehow associated with them, these values lying in some sort of partially ordered set, then it is possible for one person's status value to rise without anyone else's falling.

I take it, though, that this is not how Bucky sees status as working. The picture is this: there are no separate status values, there are only comparisons -- the thing you write as "Rank(A) < Rank(B)", etc. Now, what does it mean for someone's rank to rise or fall in this picture? So far as I can see, it has to be something like this: your rank rises if (1) at least one rank-comparison between you and someone else changes in your favour and (2) no rank-comparison between you and anyone else gets worse for you. Likewise (with obvious changes of sign) for "falling".

If so, then by definition if a change happens that affects only relative rankings involving you, and if that change makes you rise in rank, then it makes at least one other person fall in rank. (Because some rank-comparison changes in your favour, which necessarily means it changes in the other party's disfavour; and rankings not involving you don't change, so the other person doesn't get any compensating improvements vis-a-vis other people.)

The possibility that some pairs of people might be equivalent or incomparable doesn't change this, so far as I can see, because if the result of comparing my rank with yours improves for me then it necessarily worsens for you -- even if one of the "before" or "after" rankings might have been "equivalent" or "incomparable".

(Arguably, the model of status that I'm foisting on Bucky -- Bucky, please let me know and accept my apologies if I've guessed wrong -- presupposes that status is a zero-sum game. But it might be just as reasonable to say that it is based on the observation that status, at least within a single group, is a zero-sum game. Perhaps that presupposition, or alleged observation, is in fact wrong; but if so, I think actual empirical evidence is what's needed to show that it's wrong, rather than theoretical observations about the nature of partial orders.)

Indeed, the distinction between status values existing, vs. only comparisons existing, is a valuable one. My notation was ambiguous between these two types of scenarios.

For now, let us continue to use notation like Rank(A) and Rank(B) to refer to the former case (where Rank() returns some value), and RankOrder(A, B) to refer to the latter case (where RankOrder() returns −1, 0, or 1 in cases where A is lower, where A and B are equivalent, and where A is higher, respectively, and ERR if A and B are incomparable).

First, let me verify that we are on the same page—consider a brief scenario, an instance of the latter case:

Suppose we have a hierarchy with members Alice, Bob, Carol, Dave. Suppose that currently the following are true:

  1. RankOrder(Alice, X) = 1 for all X in { Bob, Carol, Dave }
  2. RankOrder(Bob, Carol) = 0
  3. RankOrder(Dave, Y) = −1 for all Y in { Alice, Bob, Carol }

(In other words: Alice is on top; Dave is in the pits; Bob and Carol are equals, and together occupy the middle tier.)

Suppose Dave is elevated to the middle tier, such that now, RankOrder(Dave, Y) = 0 for all Y in { Bob, Carol }. You are saying that, in this case, Dave has risen in rank, while Bob and Carol have dropped in rank. Yes?

Suppose that instead of #2 being as above, it was RankOrder(Bob, Carol) = ERR; and afterwards, RankOrder(Dave, Y) = ERR for all Y in { Bob, Carol } (and all else as before). You would still say that Dave has risen in rank, while Bob and Carol have dropped in rank. Yes?
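(To make this concrete, here is a small sketch of the first scenario in code, with "rising" and "falling" read off purely from changes in pairwise comparisons; the encoding is illustrative, nothing more:)

```python
# Pairwise comparisons only, no underlying scores: 1 means the first person ranks higher,
# 0 means equivalent, -1 means lower.
before = {
    ("Alice", "Bob"): 1, ("Alice", "Carol"): 1, ("Alice", "Dave"): 1,
    ("Bob", "Carol"): 0,
    ("Bob", "Dave"): 1, ("Carol", "Dave"): 1,
}

after = dict(before)
after[("Bob", "Dave")] = 0    # Dave is elevated to the middle tier
after[("Carol", "Dave")] = 0

def comparison_changes(person, before, after):
    """Changes in comparisons involving `person`, from that person's point of view
    (+1 = a comparison improved for them, -1 = a comparison worsened for them)."""
    changes = []
    for (a, b), old in before.items():
        new = after[(a, b)]
        if old == new or person not in (a, b):
            continue
        sign = 1 if a == person else -1   # flip the sign when `person` is the second argument
        changes.append(((a, b), sign * (new - old)))
    return changes

for p in ("Alice", "Bob", "Carol", "Dave"):
    print(p, comparison_changes(p, before, after))
# Dave's comparisons with Bob and Carol improve (+1 each); Bob's and Carol's comparisons
# with Dave worsen (-1 each); Alice is untouched. On the comparison-only reading, Dave's
# rise just is Bob's and Carol's fall.
```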

Assuming I have understood your position correctly…

Yes, this is certainly a consistent account. As you say, on this account, by definition, for you to rise, someone has to fall—because we have defined “rising” and “falling” in a way that makes this true, necessarily and tautologically.

Given this, it cannot be the case that we are making any claims about what is required to “rise” or “fall” in a sense other than this definition. If it was our intention to make claims on the basis of some other definition, then our conclusion on the basis of this definition does not help us and is a non sequitur.

The questions, then, are:

  1. What sense of “status” or “rank” did Bucky have in mind? If yours, then of course this sort of “rank” is, by construction, zero-sum. If another sense, then my comments stand.
  2. Which sort of “status” or “rank” is predictive of real-world outcomes? Or, which sort of “status” or “rank” is more predictive of which sorts of real-world outcomes, in which sorts of situations?

Aside from what I take to be a slip-up (your postulate 3 should have {Alice, Bob, Carol} where it currently has {Alice, Carol, Dave}): yes, you've correctly described the purely-relative picture of status that I think Bucky had in mind.

I think there is some non-tautologous content associated with this sort of model -- namely, the claim that actual status relations are well modelled by this sort of system. That'll be true in so far as status governs the answers to questions like "if I have to pick one person to do a favour to, which will it be?" or "if Bob and Carol are in a fight, which of them do I expect to be in the right before I have any other information?" and false in so far as status governs the answers to questions like "if it's Dave's birthday and I'm buying him a present, how much time and money shall I put into it?" or "when Carol makes a statement, how inclined am I to believe it?".

It feels to me -- but I have no reason to take my feelings as authoritative -- as if status is mostly relative but some of these "absolute" questions are influenced by status too. Think about celebrity endorsements for products (of the kind where the celebrity isn't famous for being expert in a relevant domain): their point is that when people see a very high-status person using a particular brand of $thing and saying "$thing is great!" they're more inclined to use $thing themselves, and I don't think it's plausible that this is driven by some kind of comparison of $famous_person and all the other people in the world who might not be endorsing $thing.

... But maybe this is still relative; maybe the influence I gain, if I become famous, over what brand of shirt or phone people buy, comes at the cost of other famous people's influence. This would be true e.g. if everyone allocated a roughly fixed amount of attention to seeing what high-status other people are doing so as to copy it, so that when I become famous I'm competing for that attention with Bill Gates and Kanye West, and they get just a bit less of it. So maybe celebrity endorsements aren't a good example of non-relative status effects after all.

Hmmm. Interestingly, I was ready to assent to this comment:

I think there is some non-tautologous content associated with this sort of model—namely, the claim that actual status relations are well modelled by this sort of system.

But, when I read your examples—

That’ll be true in so far as status governs the answers to questions like “if I have to pick one person to do a favour to, which will it be?” or “if Bob and Carol are in a fight, which of them do I expect to be in the right before I have any other information?”

… I started having doubts. I… am actually not sure whether the “purely relative” model of status relations gives sensible / useful / “correct” answers to either of these questions. (Especially the latter; I actually am very confused about why you would even claim the latter sort of question to be governed by a “purely relative” model of status… which suggests that I am totally misunderstanding you. What do you mean by “in the right”? I struggle to see an interpretation of that phrase which makes the sentence be true.)

Re: celebrity endorsements: I think that is much more complicated than can be represented by any model of “status”. I don’t think it’s a good example, not because it supports one side or another of the dichotomy we’re discussing, but because it’s nowhere near a “clean” enough case study.

First of all, to clarify, questions like those will never be governed purely by considerations of status, and in some cases other factors will matter much more. (Bob might be much higher-status than Carol but I might like Carol much better, or be hoping to persuade her to sleep with me or offer me a job or something.) But to whatever degree those questions' answers are influenced by status it will be relative "zero-sum" status that matters, because those are relative zero-sum questions.

What I wrote makes it sound like I was suggesting that status is the only, or the dominant, thing determining the answers to those questions. My apologies for writing unclearly. (I think it was just unclarity -- I don't think I thought status was dominant in determining those answers. But it's easy to forget one's past states of mind.)

I don't know whether that suffices to clear things up. In case it doesn't, some more words about the "Bob and Carol in a fight" scenario: Suppose you see two acquaintances having an argument. Usually this indicates that at least one of them has been unreasonable somehow. Your initial assumption on seeing them cross at one another might be that Bob has been unreasonable to Carol, or that Carol has been unreasonable to Bob. (If you have a sufficiently well trained mind, you may be able to avoid such assumptions. I think many people can't.) In the -- admittedly unlikely -- event that there are no other factors at all to favour one of those assumptions over the other, I am guessing (and it is only a guess) that on balance the higher-status person would tend to get the benefit of the doubt, and more people would jump to "Low-Status-Guy has done something stupid / creepy / offensive" than to the equivalent guess about High-Status-Guy.

I agree that celebrity endorsements are probably too complicated to be useful here. I picked on them because they initially seemed like they might be a nice example of non-relative status effects, but the more I thought about it the less convinced I was of that.

I don’t know whether that suffices to clear things up. In case it doesn’t, some more words about the “Bob and Carol in a fight” scenario: Suppose you see two acquaintances having an argument. Usually this indicates that at least one of them has been unreasonable somehow. Your initial assumption on seeing them cross at one another might be that Bob has been unreasonable to Carol, or that Carol has been unreasonable to Bob. (If you have a sufficiently well trained mind, you may be able to avoid such assumptions. I think many people can’t.) In the—admittedly unlikely—event that there are no other factors at all to favour one of those assumptions over the other, I am guessing (and it is only a guess) that on balance the higher-status person would tend to get the benefit of the doubt, and more people would jump to “Low-Status-Guy has done something stupid /​ creepy /​ offensive” than to the equivalent guess about High-Status-Guy.

Hmm. I understand your point now, yes. However, I have two objections.

First: supposing that Bob and Carol are both members of your informal social group, what you say (“on balance the higher-status person would tend to get the benefit of the doubt”) seems true. However, it seems to me that in such a case, the status model that would best predict who would get the benefit of the doubt, and how much, and when, would not be the “purely relative” model. I can easily think of many cases, from my actual experience, where two people disagree/argue, and neither or both of them get the benefit of the doubt, because they have equivalent/comparable social status (in a situation where, if one of them were arguing with someone of lower social status, they would have gotten the benefit of the doubt).

Second: supposing that Bob and Carol are, instead, members of a formal hierarchy (like at a workplace), then, it seems to me, many people will often give the lower-status (and lower-ranking) person the benefit of the doubt, and assume the higher-status (and higher-ranking) person is abusing power, being foolish, etc.[1]

Now, let me not accuse you of dogmatism, since you did add this caveat:

In the—admittedly unlikely—event that there are no other factors at all to favour one of those assumptions over the other …

I guess what I would say is, “no other factors at all” is something that happens so rarely that it’s hard to even discuss the scenario coherently. Maybe it never happens! What if other factors are always present? What if that’s necessarily true? I don’t claim this, but it’s not obvious to me how we’d establish otherwise…


[1] This is basically the entire premise of Dilbert, which is not exactly an obscure comic strip; and lest you think I am generalizing from fictional evidence, recall that Scott Adams famously gets regular letters from readers expressing shock at how accurately the strip portrays their actual, everyday office existence (despite that the strip includes trolls, talking animals, etc.). In any case, this is hardly an unfamiliar phenomenon to anyone who’s had to work in a good-sized organization.

I don't think I understand your first objection. It seems to say that when there's a dispute between A and B, and neither A nor B has higher status than the other, onlookers don't give the benefit of the doubt to either ... which is precisely what the relative-status model we're talking about predicts should happen. How is this an objection?

On the second objection: I agree that many may assume that higher formal-hierarchy position makes a person more likely to be in the wrong. But status is not quite the same thing as position in a formal hierarchy, and I think it's possible to have both "assume lower-status people have wronged higher-status people rather than the other way around" and "assume formal superiors have wronged formal inferiors rather than the other way around" as heuristics. Also ... consider why people might have that latter heuristic. Presumably it's because higher-ups not infrequently do abuse their authority. Which is to say, they wrong people lower down in the hierarchy and get away with it because of their position.

Of course "no other factors at all" is a vanishingly rare situation. My expectation is that status effects are frequently present but often not alone, and I focused on the situation where there are no other effects for the sake of clarity. When (as usual) there are other effects, the final outcome will result from combining all the effects; the specific effects of status will be hard to disentangle but I see no reason to expect them to vanish just because other things are also present.

I don’t think I understand your first objection. It seems to say that when there’s a dispute between A and B, and neither A nor B has higher status than the other, onlookers don’t give the benefit of the doubt to either … which is precisely what the relative-status model we’re talking about predicts should happen. How is this an objection?

It’s an objection because what you said was that people’s behavior in such a case is predicted by a status model in which it cannot be true that “neither A nor B has higher status than the other”. But I am saying that, empirically, this does happen in such cases, therefore the status model in question (what we’ve been calling the “purely relative” model) is inapplicable / not predictive.

Also … consider why people might have that latter heuristic. Presumably it’s because higher-ups not infrequently do abuse their authority. Which is to say, they wrong people lower down in the hierarchy and get away with it because of their position.

Indeed, but this is very different from third parties concluding that said higher-ups must be ethically / procedurally / etc. “in the right”! Anyway, I think we’ve gotten somewhat far afield in this branch of the conversation, and I am happy to let it drop (unless you think there’s more that’s worth saying, here).

On the second objection: […]

Most of what you say here is reasonable, but simply goes to the point that—as you say later—‘“no other factors at all” is a vanishingly rare situation’. I am not convinced that it’s possible to (a) assume a “purely relative” status model, (b) properly integrate other factors, (c) cleanly separate out the one from the other. It seems to me that a less rigid status model would generally be more predictive. (After all, it is not like we are measuring some objectively real thing, some quantity which corresponds to some clearly separable physical phenomenon! There is no “status” primitive, out in the world…) But I am open to seeing it done the former way.

I still don't understand what you're saying about that first objection. What's this model in which it "cannot be true" that neither A nor B has higher status than the other?

If you're saying that that can never happen in a "purely relative" system, then what I don't understand is why you think that. If you're saying something else, then what I don't understand is what other thing you're saying.

It seems to me that there's no inconsistency at all between a "purely relative" system and equal or incomparable statuses. Equal status for A and B means that all status effects work the same way for A as for B (and in particular that if there's some straightforward status-driven competition between A and B then, at least as far as status goes, they come out equal). Incomparable status would probably mean that there are different sorts of status effect, and some of them favour A and some favour B, such that in some situations A wins and in some B wins.

I don't dispute (indeed, I insist on) the point that it's vanishingly rare to have no other factors. And I bet you're right that cleanly separating status effects from other effects is very difficult. It's not clear to me that this is much of an objection to "purely relative" models of status in contrast to other models. I guess the way in which it might be is: what distinguishes a "purely relative" model is that all you are entitled to say about status is what you can determine from examining who wins in various "status contests", and since pure status contests are very rare and disentangling the effects in impure status contests is hard you may not be able to tell much about who wins. That's all true, but I think there are parallel objections to models of "non-relative" type: if it's hard to tell whether A outranks B because status effects are inseparable from other confounding effects, I think that makes it just as hard to tell (e.g.) what numerical level of status should be assigned to A or to B.

In terms of how relative status works out in practice, I see it as affecting people’s first, instinctive reactions to many (probably most) social circumstances.

In an argument between Bob and Carol, who should I support?

If someone criticises me, how do I react?

If someone praises me, how good does it make me feel?

If I disagree with someone, how likely am I to fight my corner?

How do I react when someone takes something which I don’t feel they deserve?

Who do I want to spend time with?

Would this be a good person to date?

If Dave does this, should I do it too?

Other things will also affect these decisions such as liking the person or system 2 thinking - status is just one model of many required to explain human behaviour. These other models aren't zero-sum. I personally find it helpful for predictive power to have a separate model for relative status:

What would Dave's decision be if he only cared about relative status?

What would Dave's decision be if he based it entirely on whether he likes me?

If Dave just thought about this logically what would he decide?

...

How do I put these answers together to predict Dave's actions?

***

I’d argue that apparently absolute questions, such as buying Dave a birthday present, would include relative status considerations, as there’s a relative status between you and Dave to consider. This might or might not have a big effect on the final decision but it would probably change how you feel about spending time/money on Dave, which could easily subconsciously move your actions one way or the other.

Certainly if you're higher status than him you'd expect him to be more grateful than if he was higher status (within the status model alone).

(your postulate 3 should have {Alice, Bob, Carol} where it currently has {Alice, Carol, Dave}

Right you are; edited.

I wouldn’t be surprised if the model were to be less predictive as the group gets large enough that people can’t keep track of everyone’s relative status or where someone’s status is far away from your own.

In that case I would model people as giving people a placeholder status. e.g. “so high that I don’t have to worry about precisely how high” for VIPs or “just assume roughly average status” when we encounter new people. At this point zero sum might break down.

  1. This is about right.

To truly attack my own position I’d add that I would define RankOrder(A, B) as A’s opinion of their relative rank. If RankOrder(A, B) <> RankOrder(B, A) then it might be arguable that this is no longer zero sum (provided A and B care more about their own opinions than each other’s).

In my experience this is fairly rare - people are very keen to ensure that everyone knows their place.

  2. Obviously I think this is fairly predictive otherwise I wouldn’t believe it. I am aware that the model isn’t perfect but I certainly find it better than any other model I’ve tried. Does anyone have an alternative model?

You have me about right I think. Maybe I should rephrase:

“Relative social rank is, within a fixed group, to a close approximation, according to my observations, a zero sum game.”

This isn’t quite as snappy but may be more precise.

I’ll take Said’s advice into account if I ever try to make a formal model.

My understanding of zero-sum is: assume a pie of a fixed size that will be eaten, entirely, by several people. The size of any given person's slice can only be made larger by making at least one of the other slices smaller.

Positive-sum would be settings where the interactions of the eaters could increase the size of the pie -- or perhaps number of pies to eat. Negative sum is just bad all around -- stay away ;-)

Yes, I am familiar with the general usage of the term “zero-sum”. What was not clear to me was how it applies in this case.

For example: are we envisioning “status” as a single pie, which must be shared by everyone in the hierarchy? However, consider the case of adding more people to the bottom level of a hierarchy (or creating a new bottom level, below the current bottom, with a bunch of new people occupying that new lowest level). Is the entire pie now bigger? Or is it the same size, and everyone’s share has been recalculated? How much of the pie (whether enlarged or unchanged) do the new peons consume? Does it matter how many there are (i.e., is each new peon’s share fixed, or is the entire peon class’s share fixed)? (Note that in the real world, if I am the CEO of a company with 50 employees, and 100 employees join the company, resulting in me being CEO of a company with 150 employees, it seems to be the case that my status within the company has increased, and my status in the greater society has also increased.)

These questions undoubtedly have answers, but not obvious answers. Possibly the answers differ by situation; possibly there are other nuances. This is the sort of thing that should be considered carefully, and with precision and rigor, when attempting to apply concepts like “zero-sum” to such matters as social status.

My bad. I thought you were saying the term itself was not something you were familiar with.

I agree that it is difficult to understand in what settings status would fit the "X-sum" structure. My general thinking is that perhaps it is more about the mindset of the person in the situation (in this case, the author) than some external objective metric that outside observers would all be able to confirm.

That said, I took the zero-sum framing as a "for the sake of argument" type assumption. I was interested in the bits about heuristics, though it seems the main focus is really about how to deal with workplace relationships, in the context of status, which doesn't greatly interest me or shed much light on the value of heuristics as rules and why they may be more valuable than attempts at some rational calculus in making one's decisions in certain aspects of life.

If you replace “status” with “relative social rank” in the OP do you disagree with it?

I disagree that "ordinal social rank" is a thing which matters in almost any situation. Value, esteem, and respect are great determiners of promotions, choice work assignments, etc. They are not, however, strictly a relative measure against other coworkers. It's more a relative measure against the universe of possible employees. Which makes it more absolute than relative.

In fact, I suspect we experience very different things in our work and social life. I do recognize that there are situations where rank is more important than value, but I have trouble imagining functioning that way for very long. As a result I forget the diversity of human experience and that many people DO experience that.

Let me say I'm incredibly jealous! Functioning this way is a lot of work - that's why I was trying to decrease the amount of work required with my simplified rule.

The wish for social circles to be more about value than rank was one of the main reasons I started posting on Less Wrong. It's conversations like this which reassure me that it was a good idea - differences of opinion resolved without acrimony, where both parties are better off at the end. This does happen elsewhere in my life but not nearly often enough.

I agree that it should work like that but I don't think that this is how it works in practice.

1. Examples of when ordinal social rank matters

Choice work assignments are an interesting example - I'd say that this is a case where ordinal social rank is the thing which matters most; there is generally no universe of possible employees to consider.

Promotions theoretically include a universe of possible employees. In practice I would say that there's a minimum level of competence that you need to achieve to be considered for a promotion. Provided that at least one person has achieved that level of competence, the company is likely to just choose between the people they already have in the company rather than looking externally. Even if the company also looks externally the internal applicants start with a huge advantage in that they are a known quantity and know lots about the company already.

At this point the ordinal social rank of those who are sufficiently competent becomes the thing which matters.

2. How social emotions work in practice

Irrespective of the above, my general experience of people is that they consider ordinal social rank to be hugely important and that zero-sum games are common in this respect. I don't argue that this is always a good idea (quite the opposite, as I mentioned in the OP); it's just what I observe in how people act.

Look at any group of teenagers and you will see them engaging in just this conduct. When we get older we generally decrease this behaviour (possibly because we get put into clear-ish hierarchical structures). However, the same emotions seem to govern much of the conduct which I witness - maybe my workplaces are just unusually dysfunctional in this regard.

If I may delve into evolutionary history (not an expert, ignore if you like!), our status emotions evolved when we lived in fixed-ish groups of 50-odd people. Ordinal social rank would have been one of the main drivers of reproductive fitness (essentially like chimps/wolves etc. competing to become the alpha). I don't think we've had enough time in civilised society to de-evolve the tendency to act as though ordinal social rank is incredibly important, even when this is not advantageous to us.