A "truel" is something like a duel, but among three gunmen. Martin Gardner popularized a puzzle based on this scenario, and there are many variants of the puzzle which mathematicians and game theorists have analyzed.

The optimal strategy varies with the details of the scenario, of course. One take-away from the analyses is that it is often disadvantageous to be very skillful. A very skillful gunman is a high-priority target.
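
To make the effect concrete, here is a minimal Monte Carlo sketch in Python. The hit probabilities and the naive "shoot the most accurate surviving opponent" rule are illustrative assumptions, not any canonical variant of the puzzle:

```python
import random

# Illustrative hit probabilities (an assumption for this sketch):
# A never misses, B hits 80% of the time, C only 50%.
ACCURACY = {"A": 1.0, "B": 0.8, "C": 0.5}

def truel(order):
    """One sequential truel in which every gunman targets the most
    accurate surviving opponent -- the 'high-priority target' logic."""
    alive = set(ACCURACY)
    while len(alive) > 1:
        for shooter in order:
            if shooter not in alive or len(alive) == 1:
                continue
            target = max(alive - {shooter}, key=ACCURACY.get)
            if random.random() < ACCURACY[shooter]:
                alive.discard(target)
    return alive.pop()

wins = {"A": 0, "B": 0, "C": 0}
for _ in range(100_000):
    order = list(ACCURACY)
    random.shuffle(order)      # random firing order each game
    wins[truel(order)] += 1
print(wins)  # C, the worst shot, survives most often; A, the best, least.
```

A classic refinement, which this sketch omits, is that the weakest shooter often does even better by deliberately firing into the air while both rivals are still alive.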

The environment of evolutionary adaptedness undoubtedly contained multiplayer social games. If some of these games had a truel-like structure, they may have rewarded mediocrity. This might be an explanation of psychological phenomena like "fear of success" and "choking under pressure".

Robin Hanson has mentioned that there are costs to "truth-seeking". One example cost: a truth-seeker may lose the ability to convincingly declare "I believe in God" in order to be accepted into a religious community. I think truels are a game-theoretic structure that suggests that there are costs to (short-sighted) "winning", just as there are costs to "truth-seeking".

How can you identify truel-like situations? What should you (a rationalist) do if you might be in a truel-like situation?

 

pjeby:

"Fear of success" is a null concept; a name for a thing that doesn't exist in and of itself. The fact that the thing someone's afraid of is also labeled "success" (by the individual or others) doesn't make the fear special or different in any way. In essence, there's only fear of failure. It's just that sometimes one form of success leads to another form of failure. In other words, it's not "success" that people fear, just unpleasant futures.

Choking under pressure, meanwhile, is just a normal function of fear-induced shutdown, like a "deer in the headlights" freezing to avoid becoming prey. In humans, this response is counterproductive because human beings need to actively DO things (other than running away) to prevent problems. Animals don't need to go to work to make money so they won't go broke and starve a month later; they rely on positive drives for motivation.

Humans, however, freeze up when they get scared... even if it's fear of something that's going to happen later if they DON'T perform. This is a major design flaw, from an "idealized human" point of view, and in my work, most of what I teach people is about switching off this response or preventing it from arising in the first place, as it is (in my experience with clients, anyway) the #1 cause of chronic procrastination.

In other words, I doubt truels had any direct influence on "fear of success" and "choking under pressure"; they are far too easily explained as side-effects of the combination of existing mechanisms (fear of a predicted outcome, and fear-induced shutdowns) and the wider reach of those mechanisms due to our enhanced ability to predict the future.

That is, we more easily imagine bad futures in connection with our outcomes than other animals do, making us more susceptible to creating cached links between our plans... and our fears about the futures that might arise from them.

For example, not too long ago during a workshop, I helped a man debug a procrastination issue with his work, where simply looking at a Camtasia icon on his screen was enough to induce a state of panic.

As it turned out, he'd basically conditioned himself to respond that way by thinking about how he didn't really know enough to do the project he was responsible for -- creating a cached link between the visual stimulus and the emotions associated with his expected futures. (In other words, early on he got as far as starting the program and getting in over his head... then with practice he got better and better at panicking sooner!)

And we do this stuff all the time, mostly without even noticing it.

In some corporate structures, you may want to avoid overperforming at a job interview. Your manager wants to hire someone who is competent, but not so competent that the new hire will replace him.

For an older scenario, imagine you're a hunter in a tribe with an alpha leader. You want to be perceived as a good hunter so that you'll get a larger share of resources, but not so good that you threaten his power.

It then pays to signal that you do not intend to challenge the authority. One way to do that is to have poor self-esteem, attributing your successes to luck.

Multiplayer Magic: The Gathering (aka "MTG") has a truel-like strategic structure. In normal duels, half the game is deck building: constructing your deck optimally given the other decks you are likely to face (the metagame). The other half is in-game skill: playing the right cards at the right time.

In multiplayer, a third major component is added: diplomacy. If you keep winning, you'll get ganged up on. If you build a deck that sits back and doesn't do much, only to instantly win later using some well-tuned combo, you have to at least act like you are doing something conventional; otherwise your apparent passivity makes you seem more threatening. Experienced multiplayer MTG players are highly suspicious of someone who doesn't play anything substantial by turn 5, because they know that is typically a prelude to an instant loss.

More generally, you want to be 'friendly' in MTG multiplayer. This means appearing mediocre, more or less. If you look too good, you look more likely to win and are thus a target. Even if you are better, fighting multiple opponents is very difficult. If you appear relatively weak, then you might be trying to trick everyone, so again, you are a target.

Does this generalize? In other truel-like situations, is apparent weakness just as detrimental or at least nearly as detrimental as apparent strength? I suspect it's a common feature of zero-sum games, but probably not generally true.

I think truels are a game-theoretic structure that suggests that there are costs to (short-sighted) "winning", just as there are costs to "truth-seeking".

I found the post interesting... except for this penultimate paragraph; I don't think there's a good analogy here. An evolutionary motive for "choking" or signaling choking is an interesting enough observation on its own.

LW is about refining the art of thinking; there's no need to strain for segues.

(To be specific about where the analogy is strained: One question is about whether common human goals are likely to conflict with epistemic rationality; the other question is about signaling short-term failure as a road to long-term success under clearly defined criteria. Standard instrumental versus terminal goals, which is not at all as thorny as alleged instrumental epistemic stupidity.)

Thanks. I'll try to avoid strained segues in the future.

A very skillful gunman is a high-priority target, but also an attractive ally. I wonder what determines which effect dominates. (A wild stab: Social status is associated with number of allies, and with a moving average of accomplishment. If a low-status individual performs too well, but doesn't gratuitously signal submission, they are punished for getting uppity - by those with higher status to mitigate the threat, or by those with equal status to curry favor. A high-status individual, though, couldn't safely be punished even if anyone wanted to; seeking alliance is favorable.)

See also Wei Dai on a game where the smarter players lose.

It's worth emphasizing that in both these cases the competent agents are hurt not by being competent, but by being perceived as competent, whether or not that perception is correct.

Unless there is some reason for the perception of competence to be systematically biased (can anyone think of a reason?), the only way to credibly feign incompetence is to be in situations where acting competently would benefit you, yet you act as if you're not competent. And you have to do this in every such situation where your actions are observed.

Having to feign incompetence substantially reduces the benefits of being competent (depending on how often you're observed), while the cost of becoming competent still has to be borne. As a positive theory, this explains why competence might not be as common as we'd otherwise expect.

As a normative theory, it suggests that if you expect to be in a truel-like situation, you should consider not becoming competent in the first place; or, if the cost of becoming competent is already sunk but you're not yet known to be competent, you should feign incompetence by behaving incompetently whenever such behavior can be observed.
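
A toy back-of-the-envelope version of that trade-off (all numbers are made-up assumptions):

```python
# Competence pays off only on the occasions where acting on it goes
# unobserved; when observed, you must feign incompetence and forfeit
# the benefit. The training cost is paid regardless.

def net_value_of_competence(p_observed, benefit_per_use=10.0,
                            occasions=100, training_cost=300.0):
    usable = occasions * (1 - p_observed)
    return usable * benefit_per_use - training_cost

for p in (0.0, 0.5, 0.8):
    print(p, net_value_of_competence(p))
# 0.0 -> 700.0: unobserved, competence is clearly worth acquiring
# 0.5 -> 200.0: the benefit is much reduced
# 0.8 -> -100.0: better never to have become competent at all
```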

[anonymous]:

Unless there is some reason for the perception of competence to be systematically biased (can anyone think of a reason?)

If you don't have full information about my competence, then your estimate of my competence is "biased" toward your prior.

My first thought exactly. It reminds me of the story from the Chuang Tzu regarding the hideously gnarled tree that survives to a ripe old age due to its 'flaws'.

Likewise, many employees feign incompetence with respect to certain kinds of tasks -- e.g., programmers feigning incompetence with regard to anything managerial.

Wei Dai begins by assuming that cooperation on the Prisoner's Dilemma is not rational -- an assumption belonging to the same decision theory that two-boxes on Newcomb's Problem.

Last I saw, you were only advocating cooperation in one-shot PD for two superintelligences that happen to know each other's source code (http://lists.extropy.org/pipermail/extropy-chat/2008-May/043379.html). Are you now saying that human beings should also play cooperate in one-shot PD?

What goes on with humans is no proof of what goes on with rational agents. Also, truly one-shot PDs will be very rare among real humans.

In social science one is usually expected to outline some concrete real situation that you think is like the abstract game described. Just saying "here's a game, maybe there's some related real situation" is usually considered insufficient.

Warfare is just such a situation, and warlords are disproportionately represented in the human gene pool. The best representation of ancestral-environment warfare I've seen is a real-time strategy adaptation of Risk, where players receive resources proportional to their territory, and can only gain territory by taking it from others by force. I've played quite a few iterations of this, and the player who appears strongest almost never wins in the end; instead, the second-most powerful player wins.

Consider three warlords A, B, and C, starting out at peace, with A and B the same strength, and C significantly stronger. If A and B go to war with each other, then C will conquer them both, so they won't do that. If B and C go to war with each other, then A must also go to war with C, or else he'll find himself facing C plus the conquered remnants of B, with no possible allies; and conversely, if A goes to war with C then B must also go to war with C. In other words, if all players act rationally, then the only player who can't win is the one who starts with the most resources.
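
The 2-against-1 logic can be written out directly. A minimal sketch, where the strength numbers and the winner-splits-the-spoils rule are toy assumptions:

```python
strengths = {"A": 10.0, "B": 10.0, "C": 15.0}   # C starts strongest

def resolve_war(strengths, side1, side2):
    """The stronger coalition wins outright and splits the losers'
    strength evenly among its members (a deliberately crude model)."""
    s1 = sum(strengths[p] for p in side1)
    s2 = sum(strengths[p] for p in side2)
    winners, losers = (side1, side2) if s1 >= s2 else (side2, side1)
    spoils = sum(strengths[p] for p in losers) / len(winners)
    return {p: strengths[p] + spoils for p in winners}

# If B and C go to war, A must join B against C, giving:
print(resolve_war(strengths, ["A", "B"], ["C"]))
# {'A': 17.5, 'B': 17.5} -- the strongest player is the one eliminated,
# and the survivors are left evenly matched.
```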

Thanks, I'll do that if I post another ev. psych. hypothesis.

I think to be "truel-like" (that is, select for mediocrity) a social interaction would have to have:

  1. more than two players
  2. players know well the skills of the other players
  3. the possibility of coalitions (e.g. fair foot races don't work)

A tribe of apes would have more than two individuals, and the individuals would know each other well. I think coalitions are far more likely to be possible than not (though I don't have a good argument to back up my intuition). Almost every group social game should have some truel-like aspect to it.

HCE:

you're missing the essential ingredient:

  4. winner-takes-all

in any situation where the spoils of victory are shared, it's best to align with the most competent. contrarily, when the winner gets everything, like life or the girl or the title, it's almost always best to team up with your fellow incompetents to take down the likely victor.

the game show Survivor strikes me as especially illustrative. players routinely gang up on those perceived to be the most competent to increase everyone's chances of winning. once their usefulness as a workhorse or a "challenge winner" has been exhausted, or at least no longer outweighs concerns about winning a million bucks (as soon as the perceived probability of winning exceeds some minimum), the "strongest" or "most (apparently) cunning" player is often ousted.
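
A quick expected-value check of this, with made-up probabilities for a three-player contest between you (weak), another weak player, and one strong player:

```python
# Assumptions: a coalition of two beats a lone player 90-100% of the
# time; in any later one-on-one you beat the strong player 20% of the
# time and the other weak player 50% of the time. Prize = 1.

p_coalition_with_strong_wins = 1.0
p_coalition_of_weak_wins     = 0.9
p_beat_strong_1v1 = 0.2
p_beat_weak_1v1   = 0.5

# Shared spoils: each coalition member takes half the prize.
print(p_coalition_with_strong_wins * 0.5)    # 0.50 -> join the strong
print(p_coalition_of_weak_wins     * 0.5)    # 0.45

# Winner-takes-all: the coalition wins, then its members fight it out.
print(p_coalition_with_strong_wins * p_beat_strong_1v1)  # 0.20
print(p_coalition_of_weak_wins     * p_beat_weak_1v1)    # 0.45 -> gang up on the strong
```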

I was under the impression truels were sometimes real-life situations, and the abstract game is suggested based on them.

One question is to what extent truel situations produce mere underconfidence, versus actual mediocrity? As Johnnicholas aptly describes it, "fear of success" would be more a sign of underconfidence, while "choking under pressure" would produce mediocre performance. There is plenty of mediocrity in the world, but is it because of truels, or because everyone is trying as hard as they can to excel, but by definition only a few can rise above the crowd?

We have long discussed evolutionary situations where overconfidence is a benefit, so here we have a set of cases where underconfidence is good. This is a problem with evolutionary psychology: it is too easy to come up with theories that can explain anything. Whatever ratios we observe of overconfidence to underconfidence, we can explain them by postulating that the evolutionary environment had the corresponding ratios of scenarios where one or the other behavior would be of benefit.

The test would come if we could derive a prediction from this explanation which is non-obvious, and then we could see if underconfidence aligned with that prediction.

There is only a conflict between the predictions ("overconfidence is a benefit" and "underconfidence is a benefit") if the evolutionary situations are indistinguishable.

If the organism can distinguish the different kinds of games, we may have a single, nuanced prediction: it will be overconfident in some situations and underconfident in others.

I'm not recalling the structure of the ev. psych. / game theory examples that predict overconfidence. Could someone remind me or point me?

Interesting idea. If this was indeed a scenario that presented itself often in the Pleistocene, then we should expect individuals to signal that they do not do well under pressure (and in order to enable self-deception, actually believe that they will not do well), but consistently perform better under pressure than they expect to. There are cultural desires that shape our avowed expectations under many scenarios, so perhaps the best empirical test of this would be under a novel game situation.

If true, perhaps this could explain the pervasiveness of self-deprecating behavior?

Also, creating low expectations and exceeding them very likely creates a better impression than the same level of performance accurately expected.

But Nick, wouldn't it be a mistake for me to prefer a person who exceeded my low expectations, over a person who met my high expectations, if their performance levels were equal? Wouldn't I be more rational to update my opinion of the low-expectations person in the direction of higher ability, but not as high as the person who met my expectations? Since it's possible the low-expectations person did as well as he did just due to luck, I should not update my opinion all the way to his recent performance level. And so he should still be lower than the other guy.

Which means that, to the extent Nick's observation is correct, we have another puzzle to explain.
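
Hal's updating argument can be made concrete with a quick Beta-prior sketch (the numbers are made up):

```python
# Ability = probability of succeeding at a task; each person's ability
# has a Beta prior, updated on the same observed performance.

def posterior_mean(prior_a, prior_b, successes, failures):
    """Mean of Beta(prior_a + successes, prior_b + failures)."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

# Person 1: high prior expectations (mean 0.8), then 8 successes in 10.
# Person 2: low prior expectations (mean 0.3), then the same 8 in 10.
print(posterior_mean(8, 2, 8, 2))   # 0.80
print(posterior_mean(3, 7, 8, 2))   # 0.55
# Same performance, but person 2's estimate is shrunk toward their
# lower prior -- so, as Hal argues, the rational update still ranks
# person 1 higher, and Nick's observed preference is a puzzle.
```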

This is a really good question.

As others have pointed out, the real issue is not competence but perceived competence. But as others haven't really pointed out, as one deals with more perceptive rivals, the difference between competence and perceived competence approaches zero (if only asymptotically).

As for the last question -- what should we do in a truel-like situation -- the answer is, I guess, "That depends." If we're talking about a situation where one can falsify incompetence, or perhaps form an unbreakable contract to act incompetently, then the old chestnut applies: rational agents should win. In a literal truel, you could do this by, say, agreeing to fire roughly one-third of your shots (chosen by die roll) straight into the air, provided there were a way of holding you to it. In other cases, as some people pointed out, maybe you could just get really good at convincing people of your incompetence (a.k.a. "nonchalance").

But in a situation where this is impossible? Where competence and perceived competence are one? Then there is no strategy, any more than there was a strategy for passing as white in the last century. You will be punished for being good (unless you're so good that you win anyway).

Regarding the evolution of mediocrity: In some cases, Evolution selects for people who are good at convincing others that they are X, and, by a not-so-stunning coincidence, ends up with people who actually believe they are X, or even really are X. I don't know if "competence" is the sort of thing this works for, though, since it is in itself a genetic advantage almost by definition. Self-perceived incompetence is just so much better a strategy than actual incompetence, and self-delusion such a commonly evolved trait, that I have trouble believing even a dolt like Evolution would fail to think of it.

Your reply made me connect this with "The Usual Suspects".

pre:

I've seen this on the "Diplomacy" board. A player perceived as skilled is immediately ganged up on by all the other players. It's like a plateau of skillfulness, a hurdle that you need to jump.

I've seen it overcome by deliberately throwing a few bad moves to lower the other players' opinion of you, but that puts you at an inherent disadvantage, like a genuine handicap.

I've seen it overcome more effectively by just more application of the skill which wins at "Diplomacy": alliance building.

Mostly you see it overcome by losing a few games thanks to folks ganging up on you and then you don't seem so fierce.

But is "winning" like gaining skill at shooting people, or skill at not getting shot, or skill at helping people shoot other people, or skill at helping people not get shot by other people? It seems to me that the game-theoretic effects here are all over the place, and for every truel there is an anti-truel.

Maybe I can be more clear. There are two levels to the truel: gunmanship, and survival. Short-sighted winning at gunmanship leads to low probability of winning at survival.

The nonintuitive nature of the solution is the evidence that there's something to learn here, to refine your intuition.

I'm saying that "winning" at goals in real life rarely translates to purely offensive/destructive power, which is what the truels assume.

As for nonintuitiveness, at least part of it comes from the assumption that skill is fully visible.

You need to signal weakness. Trying something like Obfuscating Stupidity could probably work once or twice (although obviously not multiple times with the same opponents). The right strategy depends on things like payoffs, your relative strength, and your acting skills.

[anonymous]:

Looks to me like LW's karma system is also a multi-truel. The slick rational gunfighters are voted up. Posts which resemble and repeat the slick actions of the mighty gunfighters are copied and affiliated up. The inept rationalists struggling to suggest new ways of gunfighting are voted down. Just my experience - that's all!

In your analogy, are there coalitions of "mediocre gunfighters" targeting the "skilled gunfighters"? I haven't seen that here (yet).
