RorscHak

Why do the empirical results of the Traveller's Dilemma deviate strongly from the Nash equilibrium and seem to be close to the social optimum?

My assumption is that promises are "vague": playing $99 or $100 both fulfil the promise of giving a high claim close to $100, so there is no incentive to break it.

I think this vagueness stops the race to the bottom in TD, compared to the dollar auction, in which every bid can be outmatched by a tiny step without immediately risking going overboard.
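The "race to the bottom" in the standard Traveller's Dilemma (claims between $2 and $100, with a $2 reward/penalty) can be sketched by iterating myopic best responses; this is only an illustration of the standard game, not of the "vague promise" variant:

```python
def payoff(mine, theirs, bonus=2):
    """My payoff when I claim `mine` and the other player claims `theirs`."""
    if mine < theirs:
        return mine + bonus    # lower claim is paid and wins the bonus
    if mine > theirs:
        return theirs - bonus  # higher claim gets the lower claim minus the penalty
    return mine                # equal claims: both paid the claim

def best_response(theirs, low=2, high=100):
    """The claim that maximizes my payoff against a known opposing claim."""
    return max(range(low, high + 1), key=lambda c: payoff(c, theirs))

# Starting from $100, each undercut by $1 is profitable, all the way down.
claim = 100
steps = [claim]
while best_response(claim) != claim:
    claim = best_response(claim)
    steps.append(claim)

print(steps[0], "->", steps[1], "-> ... ->", steps[-1])  # 100 -> 99 -> ... -> 2
```

Each undercut gains only $1 over matching the opponent, which is why a promise covering both $99 and $100 can plausibly blunt the incentive to start sliding.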

I do think I overcomplicated the matter to avoid modifying the payoff matrix.

Why do the empirical results of the Traveller's Dilemma deviate strongly from the Nash equilibrium and seem to be close to the social optimum?

"Breaking a promise" or "keeping a promise" has no intrinsic utility here.

What I claim is that under this formulation, if the other player believes your promise and plays the best response to it, then your best response is to keep the promise.

Why do the empirical results of the Traveller's Dilemma deviate strongly from the Nash equilibrium and seem to be close to the social optimum?

"in this case, 'trust' is equivalent to changing the payout structure to include points for self-image and social cohesion"

I guess I'm just trying to model trust in TD without changing the payoff matrix. The payoff matrix of the "vague" TD works in promoting trust: a player has no incentive to break a promise.

Why do the empirical results of the Traveller's Dilemma deviate strongly from the Nash equilibrium and seem to be close to the social optimum?

This is true. The issue is that the Nash equilibrium analysis of TD predicts that everyone will claim $2, which is counter-intuitive and does not match empirical findings.

I'm trying to convince myself that the NE formulation of TD is not entirely rational.

Why do the empirical results of the Traveller's Dilemma deviate strongly from the Nash equilibrium and seem to be close to the social optimum?

If Alice claims close to $100 (say, $80), Bob gets a higher payoff claiming $100 (getting $78) instead of claiming $2 (getting $4).
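The arithmetic above checks out under the standard TD rules (lower claim wins and is paid with a $2 bonus; higher claim gets the lower claim minus $2), a minimal sketch:

```python
def td_payoff(mine, theirs, bonus=2):
    """Payoff in the standard Traveller's Dilemma with a $2 reward/penalty."""
    if mine < theirs:
        return mine + bonus
    if mine > theirs:
        return theirs - bonus
    return mine

print(td_payoff(100, 80))  # Bob claims $100 against Alice's $80 -> 78
print(td_payoff(2, 80))    # Bob claims $2 against Alice's $80 -> 4
```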

Could waste heat become an environmental problem in the future (centuries)?

I would assume Kelvin users outnumber Fahrenheit users on LW.

My poorly done attempt at formulating a game with incomplete information.

I think we should still keep b even with the iterations, since I made the assumption that "degree of loyalty" is a property of S, not entirely the outcome of rational game-playing.

(I still assume S is rational apart from having b in his payoffs.)

Otherwise those kinds of tests probably make little sense.

I also wonder what happens if M doesn't know the repulsiveness of the test for certain, only its distribution (i.e. the CIA only knows that killing your spouse is, on average, pretty repulsive, except this lady here really hates her husband, oops). Could that make a large impact?
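One way to get a feel for this is a Monte Carlo sketch. All the specifics here are my assumptions, not from the original game: an infiltrator gains v from being accepted, M sets the test's nominal repulsiveness r above v to screen them out, but each agent perceives the repulsiveness with multiplicative lognormal noise, so the screen can leak:

```python
import random

random.seed(0)

v = 10        # assumed: infiltrator's gain from being accepted
r = 12        # assumed: nominal repulsiveness, set above v to screen spies
n = 100_000   # number of simulated infiltrators

# An infiltrator accepts the test iff it, *as she perceives it*,
# costs less than the value of getting in.
leaks = sum(v >= r * random.lognormvariate(0, 0.5) for _ in range(n))
print(f"fraction of spies who still pass: {leaks / n:.3f}")
```

Under this toy model a substantial fraction of infiltrators slip through even with r > v, which suggests the answer to "could that make a large impact" may well be yes, at least for wide perception noise.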

I guess I was only trying to figure out whether this "repulsive loyalty test" story, which seems to appear in history, mythology, and real life across a few different cultures, has any basis in logic.

Nash equilibriums can be arbitrarily bad

Thanks, I had forgotten the proof before replying to your comment.

You are correct that in PD, (D, C) is Pareto-optimal, so the Nash equilibrium (D, D) is much closer to a Pareto outcome than the Nash equilibrium (0, 0) of TD is to its Pareto outcomes (somewhere along each person getting a million pounds, give or take a cent).
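The claim that (D, C) is Pareto-optimal can be checked mechanically; a minimal sketch, assuming the usual textbook PD payoffs (T=5, R=3, P=1, S=0), which the original thread does not specify:

```python
# Standard Prisoner's Dilemma payoffs (row player, column player).
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def pareto_optimal(outcome):
    """An outcome is Pareto-optimal iff no other outcome weakly improves
    both players' payoffs while strictly improving at least one."""
    a, b = payoffs[outcome]
    return not any(
        x >= a and y >= b and (x, y) != (a, b)
        for x, y in payoffs.values()
    )

for outcome in payoffs:
    print(outcome, pareto_optimal(outcome))
# Only (D, D) fails: it is Pareto-dominated by (C, C).
```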

It is still strange to see a game with only one round and no collusion land pretty close to the optimum, while its repeated version (the dollar auction) seems to deviate badly from the Pareto outcome.

My poorly done attempt at formulating a game with incomplete information.

Thanks, the final result is somewhat surprising; perhaps it's a quirk of my construction.

Setting r higher than v does remove the "undercover agents" that have practically zero obedience, but I didn't know it was the optimal choice for M.
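The screening effect of r > v can be illustrated with a toy model. The payoff structure here is my guess, not the original formulation; only the symbol names (b for loyalty, r for the test's repulsiveness, v for the value of infiltration) follow the thread. Assume a loyal agent accepts the test iff her loyalty covers its cost (b >= r), while an infiltrator with b ~ 0 gains v from being accepted and so accepts iff v >= r:

```python
def accepts(b, r, v, infiltrator):
    """Whether an agent takes a test of repulsiveness r, under the assumed
    payoffs: loyalty b motivates loyal agents, infiltration value v
    motivates spies."""
    gain = v if infiltrator else b
    return gain >= r

v = 10
for r in (5, 10, 11):
    spy = accepts(b=0, r=r, v=v, infiltrator=True)
    loyal = accepts(b=15, r=r, v=v, infiltrator=False)
    print(f"r={r}: spy accepts={spy}, loyal agent (b=15) accepts={loyal}")
# Once r exceeds v, infiltrators refuse the test,
# while loyal agents with b > v still take it.
```

Under these assumptions, any r slightly above v screens out spies, but raising r further only costs M loyal agents whose b falls below r, which is one intuition for why r just above v could be optimal.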

I like your last point a lot. Does it mean that governments/institutions are more interested in protecting the systems they are part of than their constituents? That indeed seems possible and could explain this situation.

I still wonder whether such a thing happens on an individual level as well, which could help shed some light.