
Earning to Give vs. Altruistic Career Choice Revisited

One thing those articles don't consider is whether your career causes large negative externalities in the world, which banking arguably does (depending on what exactly you do, and on your political views).

If so, you would need to give away even more than you earned just to undo the harm your career caused.

Many Weak Arguments vs. One Relatively Strong Argument

This is a great example. It's often very hard to tell whether many weak arguments (MWA) are independent or not. They could all derive from the same underlying factors, or they could all be produced by the same kind of motivated reasoning.

I think making that judgment well is what being a good "fox" means, à la Tetlock's hedgehog-vs.-fox distinction.

Thomas C. Schelling's "Strategy of Conflict"

It's amazing how good humans are at this sort of thing, by instinct. I'm reading the book Hierarchy in the Forest, which is about tribal bands of humans up to 100k years ago. Without law or formal social structure, they basically solved all of their social equality problems by game theory. And depending on when precisely you think they evolved this social dynamic, they may have had hundreds of thousands of years to perfect it before we became hierarchical again.


If you look at rationality on a spectrum, this type of game theory isn't at the most enlightened/sophisticated end of it. Thugs, bullies, despots, and drama queens are very good at this sort of manipulation. Rather, it's basically the most primitive, instinctive part of human reasoning.

However, that's not to say it doesn't work. The original post's description of not wanting to look at yourself in the mirror afterwards is very apt.

Raising the forecasting waterline (part 2)

It's much harder to make well-formed predictions than one would initially suspect. The fun part of PredictionBook (PB) is formulating them yourself, which you don't get to do on the Good Judgment Project (GJP).

Raising the forecasting waterline (part 2)

I also scored slightly toward the hedgehog end of the scale. I think people who like to "think about thinking" are already slightly hedgehog; true foxes don't believe in such grand theories.

Raising the forecasting waterline (part 2)

Good article. As a fellow GJPer, my only nitpick is that the Brier score is a squared rule, so the difference in loss between forecasting 95% and 100% is bigger than just 0.05. It's not as bad as a logarithm-based rule, though. Also, the way they compute it, the maximum loss is 2, not 1.
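To make the quadratic penalty concrete, here's a minimal sketch of the original two-sided Brier score (the variant with losses ranging from 0 to 2, which is how I understand GJP computes it):

```python
def brier(p, outcome):
    """Original two-outcome Brier score for a binary event.

    p: forecast probability of the event; outcome: 1 if it occurred, 0 if not.
    The score sums squared errors over both outcomes, so it ranges from
    0 (perfect) to 2 (maximally wrong).
    """
    return (p - outcome) ** 2 + ((1 - p) - (1 - outcome)) ** 2

# Being wrong at 100% confidence costs the maximum of 2; being wrong at 95%
# costs 1.805, a gap of 0.195 -- much more than the 0.05 forecast difference.
print(brier(1.0, 0))   # 2.0
print(brier(0.95, 0))  # 1.805
```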

Looking forward to the next part!

Paper: Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent

You're right, snarles. Thanks for spotting my error: I forgot the alternating signs in the formula for the adjugate.

What about the problem of a zero determinant in the denominator? Is that fatal? What's the real-world interpretation?
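For what it's worth, here's a sketch of the corrected computation (using numpy, with the transition-matrix construction as I understand it from the paper, states ordered (CC, CD, DC, DD) and both strategies written as per-state cooperation probabilities in X's state ordering). With the alternating cofactor signs included, the entries all share one sign, so normalizing gives a valid probability vector:

```python
import numpy as np

def stationary(M):
    """Stationary vector of Markov matrix M via cofactors of M' = M - I.

    Entry j is the signed 3x3 cofactor of M' at (row j, column 0); the
    alternating sign factor (-1)**j is essential -- it's what I had dropped.
    Normalizing by the sum yields the stationary probability vector
    (when the chain has a unique one).
    """
    Mp = M - np.eye(len(M))
    cof = [(-1) ** j * np.linalg.det(np.delete(np.delete(Mp, j, 0), 0, 1))
           for j in range(len(M))]
    v = np.array(cof)
    return v / v.sum()

p = [0.9, 0.7, 0.2, 0.1]   # the example strategy from my spreadsheet check
q = [0.5, 0.5, 0.5, 0.5]
M = np.array([[pi * qi, pi * (1 - qi), (1 - pi) * qi, (1 - pi) * (1 - qi)]
              for pi, qi in zip(p, q)])
v = stationary(M)
print(v)                      # all entries positive
print(np.allclose(v @ M, v))  # True: v really is stationary
```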

Paper: Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent

I find the article very interesting, but have trouble following the math. Maybe someone here who is better at math can help. I do have some understanding of linear algebra, and I've tried to check it with a spreadsheet:

  1. At the very beginning, their closed-form solution for V, the stationary vector, seems to allow V's with negative state probabilities, which can't describe a real game. E.g. if you set p = (0.9, 0.7, 0.2, 0.1) and q = (0.5, 0.5, 0.5, 0.5), you get V = (0.08, -0.08, 0.1, -0.1). [Here p is set to the Force-Opponent-Score-Equal-to-2 values, q is an arbitrary strategy, and V is calculated from 3x3 determinants of portions of M' as described in the paper.]

I don't know how to convert that into a V with no negative numbers. Some of the coefficients are positive and some negative, so you can't just scale it. Their formula for s_y correctly returns 2, but it's unclear whether that corresponds to a real-world equilibrium.

  2. Their formulas for the payoffs s_x and s_y require division by D(p,q,1), and D(p,q,1) can be 0, e.g. for the classic tit-for-tat strategies matched head to head: p = (1,0,1,0) and q = (1,1,0,0). I don't know whether that ruins the conclusion or not. If you match their Extort-3 strategy against tit-for-tat, you again get a 0 denominator.
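The tit-for-tat degeneracy is easy to check numerically. A sketch, with states ordered (CC, CD, DC, DD) and both strategies written as per-state cooperation probabilities in X's state ordering:

```python
import numpy as np

# Tit-for-tat vs. tit-for-tat: each player simply copies the other's
# previous move, so the chain cycles deterministically between states.
p = [1, 0, 1, 0]
q = [1, 1, 0, 0]
M = np.array([[pi * qi, pi * (1 - qi), (1 - pi) * qi, (1 - pi) * (1 - qi)]
              for pi, qi in zip(p, q)])
Mp = M - np.eye(4)
# Rank is 1, far below 3, so every 3x3 minor of M' vanishes and the
# determinant in the denominator is 0: the chain has several recurrent
# classes (CC and DD are absorbing, CD/DC cycle), hence no unique
# stationary distribution.
print(np.linalg.matrix_rank(Mp))  # 1
```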

Are these fatal problems? Not sure yet. Their overall conclusion matches my intuition. They're just saying that if one player only tries to maximize his own score, while the other player is strategic (in the sense of denying the first player a higher score), then the second player is going to win in the long run. Except they call the first player "evolutionary" and the second player "sentient."

And second, there's no point in being too "smart" (looking back many moves) when your opponent is "dumb" (looking back only 1 move).

You could say both of these things about the current bargaining position of the US political parties right now.