I'd like to play a game with you. Send me, privately, a real number between 0 and 100, inclusive. (No funny business. If you say "my age", I'm going to throw it out.) The winner of this game is the person who, after a week, guesses the number closest to 2/3 of the average guess. I will reveal the average guess, and will confirm the winner's claims to have won, but I will reveal no specific guesses.

Suppose that you're a rational person. You also know that everyone else who plays this game is rational, you know that they know that, you know that they know that, and so on. Therefore, you conclude that the best guess is P. Since P is the rational guess to make, everyone will guess P, and so the best guess to make is P*2/3. This gives an equation that we can solve to get P = 0.
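That fixed-point argument can be sketched in a few lines of Python (an illustrative sketch; the function name is invented): best-responding to a presumed common guess P means guessing (2/3)P, and iterating this drives any starting guess toward the only solution of P = (2/3)P, which is 0.

```python
# Best-responding to a presumed common guess P means guessing (2/3)*P.
# Iterating this shrinks any starting guess toward the fixed point
# P = (2/3)*P, whose only solution is P = 0.
def iterate_best_reply(p, rounds):
    for _ in range(rounds):
        p = p * 2 / 3
    return p

print(iterate_best_reply(100.0, 50))  # a tiny number, converging to 0
```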

I propose that this game be used as a sort of test to see how well Aumann's agreement theorem applies to a group of people. The key assumption the theorem makes--which, as taw points out, is often overlooked--is that the group members are all rational and honest and also have common knowledge of this. This same assumption implies that the average guess will be 0. The farther from the truth this assumption is, the farther the average guess is going to be from 0, and the farther Aumann's agreement theorem is from applying to the group.

Update (June 20): The game is finished; sorry for the delay in getting the results. The average guess was about 13.235418197890148 (a number which probably contains as much entropy as its length), meaning that the winning guess is the one closest to 8.823612131926765. This number appears to be significantly below the number typical for groups of ordinary people, but not dramatically so. 63% of guesses were too low, indicating that people were overall slightly optimistic about the outcome (if you interpret lower as better). Anyway, I will notify the winner right now.


I've played this game, with an actual small prize to give some incentive in favor of cooperating with the experiment. I was surprised at the number of intelligent-seeming people who did not understand that 0 was the "rational" solution. I was unsurprised at the number of people who understood, and submitted answers they knew were irrational just for fun.

This is a bad test of an agreement theorem. There's no reason to believe that participants are motivated to agree, or that their expression of guess is the same as their belief in the "correct" guess.

Honesty is also a condition for Aumann's agreement theorem, though I neglected to actually ask that people submit only honest guesses.
The Danish newspaper Politiken played this game too, for 5000 kroner. Turns out that the actual answer was 21.6 out of 100. I agree that it's a pretty flawed test of the agreement theorem, but the real assumption that this game tests is common knowledge of rationality. Only if that holds can we say 0 is the rational solution. If any player does not have that common knowledge, the rational solution is likely to be nonzero.
Did the Politiken game have an explicit policy for what to do in the case of ties? That becomes a more pressing question when kroner are involved.
Best I can tell from Google Translate's version of the page linked in that Wikipedia article, they split the winnings among those who tied at the best guess. The article says that five people guessed the same number and won.
Right. I'm going to guess the smallest number I think no one else will guess to maximize my chances of being the only winner. I don't place any value in winning along with a lot of other people. Also, the OP is wrong that the iterated 2/3 process will eventually produce 0. If everyone plays 1, then 2/3 of 1 will round back up to 1. Edit: Sorry, in a version of this game I once played you were restricted to guessing integers.
2/3 is a valid guess.

Shouldn't we have the results by now?

Maybe my submission blew up his spreadsheet.
I used a text editor and Haskell.

Proposal for a variation: players may guess any positive real number, and the winner is the one closest to the first quartile of the distribution of answers. This removes both the anchoring effect of the upper bound and the effects of a few jokesters guessing Graham's number and Busybeaver(100) and so on.

It also has the feature of being somewhat more opaque to game-theoretic analysis, at least for me.

Graham's number isn't between 0 and 100.
Obviously. The change from 2/3 the mean to first quartile is only required because of the no-upper-bound change.

This is also known as a Keynesian beauty contest.

Before reading the studies, we did this exercise in my Experimental Econ class a couple years ago. However, beforehand the teacher didn't let any of us know P=0, even though it should have been obvious. We did the test 4 times in a row. There were 12 students in my class (an upper-division econ class at a private school).

Test 1: I guessed 20 (answer was 22, I was closest)
Test 2: I guessed 12 (got it exactly)
Test 3: I guessed 7 (split the reward with one other student)
Test 4: I guessed 3 and the answer was 2

If more tests were done I could only assume the whole class would have eventually gone to 0. When reading the paper it amazed me how many people put 0 as the answer on single trials. Yes, P=0, but a lot of people don't know that (the study was done by advertising a monetary award in the newspaper), and even more may know that and still guess what others will put. The logical way to look at the test is breaking it down into what level you think people will guess on.

Level 1: everyone guesses 100, so guess 66.66
Level 2: What idiot would guess 100? Everyone guesses ~67, so guess 2/3*66.66 = 44.44
Level 3: But everyone will think ~44, so guess ~30
Level 4: Guess 30*2/3 = ~20

and so on
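The level-by-level reasoning above can be written out directly (a sketch; the level numbering follows the comment, with level 1 best-responding to a presumed anchor of 100):

```python
# Level-k sketch: a level-1 player assumes everyone guesses 100 and
# best-responds with 2/3 of that; each further level best-responds to
# the level below it.
def level_guess(k, anchor=100.0):
    g = anchor
    for _ in range(k):
        g = g * 2 / 3
    return g

for k in range(1, 5):
    print(k, round(level_guess(k), 2))  # 66.67, 44.44, 29.63, 19.75
```

These match the comment's levels: 66.66, 44.44, ~30, ~20.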
There's no reason for the game to go all the way down to 0. If everyone is playing 1, that's an equilibrium because 2/3 of 1 is closer to 1 than 0.
That's not a Nash equilibrium.
Are players allowed to guess non-integers? Edit: Warrigal says they are. I'm wrong, you're right.

Boring game. Let's make it interesting! I hereby swear that I sent Warrigal a guess of 100. Use this information wisely.

I was thinking that this game would pretty much only measure (belief of) rationality, but now I see that it measures (belief of) honesty to a good degree as well. By guessing 100, one is being dishonest.
No, it is not, especially since cousin_it has told us upfront what he values. You are assuming that everyone who submits has a utility function that highly values winning this game, which, given the comments around here, seems not to be true (or is at least widely believed not to be true). Don't confuse 'has different values than I do' with 'irrational'.
Or just plain wrong.
If we all take you at your word, that does indeed make it more interesting. If every other entrant acts rationally in choosing P, we must have P = (2/3)(PN + 100)/(N+1), if there are N other participants. This solves to give P = 200/(N+3). But we don't know N; we can only guess at it, or some probability distribution over it. N is at least 2, since Psy-Kosh has claimed to have entered, and anyone making this calculation must count themselves among the N. But Psy-Kosh posted before cousin_it revealed its guess of 100, so we might guess that Psy-Kosh voted 0 on the grounds given in the original post, which then modifies the calculation to give P = 200/(N+5). But suppose that of the N non-cousin_it entrants, K entered before knowing cousin_it's entry, and all chose 0. Then we get P = 200/(N+3+2K).

Now, Aumann agreement only applies if the parties confer to honestly share their information. However, this has been framed as a competitive game, and someone who wants to be the exclusive winner would do better to avoid any such procedure, or to participate in it dishonestly.

A simpler analysis would be to point out that if Psy-Kosh voted zero (as would have been rational without cousin_it), then if everyone else votes zero, all will win except cousin_it. However, if someone votes slightly more than zero, then that one will be the exclusive winner. Someone who values an exclusive win above a tie might try to persuade everyone to vote zero and then defect.

Edit: I have entered, based on the above considerations. My entry was greater than zero.
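The comment's fixed point can be checked numerically (a sketch; `fixed_point` is a hypothetical helper that just iterates the best-response equation):

```python
# With N rational players all guessing P, plus cousin_it's committed 100,
# the best response satisfies P = (2/3) * (P*N + 100) / (N + 1).
# Iterating this converges to the claimed solution P = 200 / (N + 3).
def fixed_point(n, iters=1000):
    p = 50.0
    for _ in range(iters):
        p = (2 / 3) * (p * n + 100) / (n + 1)
    return p

for n in (2, 10, 100):
    assert abs(fixed_point(n) - 200 / (n + 3)) < 1e-9
```

The iteration converges because the coefficient on P, 2N/(3(N+1)), is always below 1.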
Good, but it's even more interesting. a) I have successfully moved the average by ~2X compared to if I'd stayed silent. b) If N is known and everyone acts rationally as you describe, a pair of colluding players can screw everyone over: one guesses 100, the other guesses the new average. The combined effect is making my brain explode.
If I may attempt to contribute to your brain matter's outward velocity: When I was contemplating submitting a guess (which I didn't do), I actually concluded (I have no evidence for this, sorry) that you in particular would probably guess 100. Had I acted on that, the effect would have been nearly the same, except for the possibility that others would be drawing the same inference.
He's obviously lying.
Whether or not it's boring is a matter of taste. But the point was to test a hypothesis, not to be interesting. Have you just subverted the point entirely; or does your claim to have sent a guess of 100 actually serve a purpose?
The "guess 2/3 of the average" game is quite well-studied and has been played many times in controlled environments and on the Internet. No point in running yet another simulation. My claim serves the purpose of providing some simple mathematical exercises to all involved.
Except to potentially gain information specific to this community, which is what I assumed the point was. (This is not to suggest that your modification is not interesting. It is. I just think it's kind of poor form to hijack someone else's post like this. If you wanted to play a different game, with a different point, I think the best course would have been to start your own separate game, with you taking the entries, and to have left Warrigal's game as it was. There's no reason we couldn't have done both.)
Now that you said it, I see how my comment has made the community worse off. I somehow didn't see it then. I'm sorry. Edit: please don't upvote this.
If it's any consolation, I'd assign high probability that some other merry prankster is going to intentionally screw with the results by not trying to win at all. Any future study of group rationality on LW would be advised, based on past experience, to throw out some outliers on principle.
Assuming you have actually entered a guess of 100, you may change it. If we didn't let people change their entries, they might as well wait until the last second and then enter.
On the bright side, I suppose, you certainly have helped meet the goal of gaining information specific to this community.
I believe the "correct" answer a is now given by the equation: a = 2/3 (100 + ((n-1) a)) / n, where n is the total number of players, including cousin_it. Except this doesn't take into account people who didn't see, or ignored, cousin_it's post. But if cousin_it's guess was stated at the outset, I think the "correct" answer would be as above. Edit: Richard Kennaway points out that this simplifies to a = 200 / (n+2)

I saw Warrigal's comments as saying that any guess that is not trying to win is dishonest. This appears to mean that any answer over 67 is not only wrong but, if I know it is wrong, dishonest; but since I know that, any answer over 44 is not merely wrong but also dishonest, etc.

Clever, but not quite; if you knew you were playing the game with 3 other people, who you knew would each probably (dishonestly) play 100, it would be perfectly honest and rational to submit 60 (and simply rational to submit any x with 0 < x < 100, since you'd still win).

I think it's entirely rational to submit a non-zero answer.

I would prefer to win outright, rather than tie, and I think it's safe to assume this is true of more people than just me.

If everyone does the "rational" thing of guessing 0, it will be a big tie.

If anyone guesses above 0, anyone guessing 0 will be beaten by someone with a guess between 0 and the average.

Therefore, a small non-zero guess would seem superior to a guess of zero, to those who value outright wins above ties (EDIT: and don't value a tie as being much better than a loss).

Perhaps I'll write a program to simulate what the best guess would be if everyone reasons as above and writes a program to simulate it...
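One hypothetical version of that simulation (all parameters here are invented for illustration): each round, every player best-responds with 2/3 of the previous round's average, plus a small random offset to avoid ties, and the guesses collapse toward 0.

```python
import random

# Each round, every player guesses 2/3 of last round's average plus a
# tiny random tie-breaking offset (clipped at 0). The population's
# guesses collapse to small values near 0 within a few dozen rounds.
def simulate(players=20, rounds=30, seed=0):
    rng = random.Random(seed)
    guesses = [rng.uniform(0, 100) for _ in range(players)]
    for _ in range(rounds):
        target = 2 / 3 * sum(guesses) / players
        guesses = [max(0.0, target + rng.uniform(-0.5, 0.5))
                   for _ in range(players)]
    return guesses

print(max(simulate()))  # a small value near 0
```

The random offsets keep the average from reaching 0 exactly, which mirrors the point made elsewhere in the thread that a few non-zero guessers pull the rational guess above 0.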

Eliezer Yudkowsky
This is not a test of individual rationality. It is a test of how well the Aumann assumptions apply to the group.
OK, but I don't see why the assumptions listed above should result in everyone guessing 0 (which is my point), and 2 minutes with google hasn't told me about any further assumptions. If one of the assumptions is that a tie is as valuable as a win, then I see your point, but I haven't seen that made explicit.
You do see that zero is the only Nash equilibrium, right? If everyone plays zero, you gain nothing by defecting alone, because 1/N is still better than nothing (and your guess will always be greater than 2/3 of the average). So you're arguing that it's not rational, under the assumption of common rationality, to play the unique Nash equilibrium?
Everyone playing 0 is only better than everyone playing 67 because it corrects individual defectors. It doesn't correct multiple coordinated defectors. If we know that there are no defectors, 67 is as good as 0, and if we know that there is some number of defectors who could conspire to play something else, 0 is not much better than 67. This becomes more interesting if the payoff on the non-equilibrium choices is greater. Nash equilibrium is not a universal principle, it's merely a measure against individual uncoordinated madmen, agents unable to cooperate.
Actually, I think it does rather better against uncoordinated rational agents than it does against crazy people. I'm not sure why it should have any traction at all against the latter. More generally, you're right, but: (a) that didn't seem to be the nature of lavalamp's argument; and (b) unless it's also incentive compatible in the standard sense, I tend to consider the possibility of coordination as changing the rules of the game (though that's just a personal semantic preference).
By madmen I meant "rational" agents who refuse to consider an option or implications of coordination (the kind that requires no defection). Impossibility of coordination is a nontrivial concept, I don't quite understand what it means (I should...). If everyone follows a certain procedure that leads them to agree on 0, why can't they agree on 67 just as well?
Because given what others are doing, no individual has an incentive to deviate from 0 (regardless of whether they've agreed to it or not). In contrast, if they're really trying to win, every individual agent has an incentive to deviate from 67. ETA: You can get around the latter problem if you have an enforcement mechanism that lets you punish defectors; but that's adding something not in the original set up, which is why I prefer to consider it changing the rules of the game.
Coordination determines the joint outcome, or some property of the joint outcome; possibility of defection means lack of total coordination for the given outcome. Punishment is only one of the possible ways of ensuring coordination (although the only realistic one for humans, in most cases). Between the two coordinated strategies, 67 is as good as 0. What I wondered is what it could mean to establish the total lack of coordination, impossibility of implicit communication through running common algorithms, having common origin, sharing common biases, etc., so that the players literally can't figure out a common answer in e.g. battle of the sexes.
I'm sure I'm missing your point, but FWIW my original claim was only about the (im)possibility of coordination on a non-Nash equilibrium solution (i.e. of coordinating on a solution that is not incentive-compatible). Coordinating on one of a number of Nash equilibria (which is the issue in battle of the sexes) is a different matter entirely (and not one I am claiming anything about).
Agreed. This is why I specified that I think there are others who also would value a unique win, and why, in another comment I mentioned that of those of us who value a unique win, someone has to guess high. This leads to quite a nice dilemma (as we'd all prefer for someone else to guess high), unless we believe cousin_it, who says he guessed 100. Assuming that the rewards (and/or penalties) were adjusted such that everyone greatly prefers a tie to a loss, then I would have to agree that 0 is the Nash equilibrium (and would guess 0). However, given that the only available reward here is social capital (if even that), I'd rather win outright, even if it brings a risk of losing, and I don't see why I would be alone in that order of preferences. And I think I may be distorting the game as much as cousin_it, and equally unintentionally. Sorry...
I think we've basically resolved this, but just to clear up loose ends, I'm pretty sure it will be a Nash equilibrium provided everyone strictly prefers a tie to a loss; as far as I can tell the preference shouldn't need to be "great".
It has to be great enough to make me unwilling to risk a loss for the possibility of an outright win, which is why I said "greatly." But I suppose it's relative.
Nash equilibrium doesn't work like that. Each player's strategy must be optimal given perfect knowledge of others' equilibrium strategies. Your probabilistic reasoning only applies if you don't know others' equilibrium strategies (or if they're playing mixed strategies, but that isn't relevant here).
Sorry, context switch on my part-- I wasn't thinking about Nash equilibrium when I wrote that. But I still don't see your point--if I assume that everyone's utility function is exactly like mine, I don't see how my probabilistic reasoning would differ from an equilibrium strategy, if I'm using the term right.
Did you just switch context again? My claim is about what happens if everyone strictly prefers to tie rather than to lose. In this case, given others' strategies, any individual's optimal strategy is to answer 2/3 of the average. The only way everyone can answer 2/3 of the average is if everyone plays 0, and this is the only strategy that nobody has an incentive to deviate from.
Maybe I'm being dense, but bear with me for a moment.... Assume: I get X utilons from winning, Y from tying, and Z from losing, where X >= Y >= Z. Everyone playing the game has exactly the same preferences. If I (and everyone else) play 0, I get Y utilons. Straightforward. If I play a value that gives me W chance of winning outright, and (1-W) chance of losing (with an inconsequential chance of tying because I added a small random offset), I will gain W X - (1 - W) Z utilons on average. Assume W is fairly low, the worst and most likely case being 1/N where N is the number of participants, since we're assuming everyone is exactly like me. Therefore, if Y > (X/N - Z + Z/N), I (and everyone) should play 0. Otherwise, we should play the thing that gives us W chance of winning. (hopefully I did the algebra right) So, depending on the values for X, Y, and Z (and N), we could get your scenario or mine. If Y is close to X, we get yours. If it is greatly lower than X, we will probably get mine. All that to say I can create a scenario where the Nash equilibrium really is for everyone to play a small positive number by tweaking the players' utility functions, even given the constraint that winning, tying, and losing are valued in that order. If this is clear to you, then we've been talking past each other. If not, then I don't understand Nash equilibrium very well (or I'm an incredibly sucky writer). EDIT: on second thought, I think my math is probably quite bad, esp. with respect to Z. Anyway, perhaps the central idea of my post is still intelligible, so I'll leave it be. EDIT2: Ah, I got a sign backwards (consider that if the penalty for losing is your house gets burned down, Z is a large negative number). W X - (1 - W) Z should be W X + (1 - W) Z, and Y > (X/N - Z + Z/N) should be Y > (X/N + Z - Z/N).
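The corrected condition in that comment's EDIT2 can be checked with hypothetical utility values (a sketch; the numbers are invented for illustration):

```python
# With win probability W = 1/N, deviating from 0 yields W*X + (1-W)*Z
# in expectation, versus a guaranteed tie worth Y. Playing 0 is better
# exactly when Y > X/N + Z - Z/N, matching the comment's EDIT2.
def prefer_zero(x, y, z, n):
    w = 1 / n
    return y > w * x + (1 - w) * z

print(prefer_zero(10, 1, 0, 20))    # True: a tie worth 1 beats EV 0.5
print(prefer_zero(10, 0.1, 0, 20))  # False: a near-worthless tie loses
```

So whether everyone-plays-0 is an equilibrium does hinge on how a tie is valued relative to a 1/N shot at an outright win, as the comment argues.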
There are some games that don't have a Nash equilibrium. Consider a 1-player game where the available strategies are the numbers between 0 and 1, and your payoff is 1-x if you pick x>0 and 0 if you pick x=0. There is no Nash equilibrium. If many players assign 0 utilons to tying and losing in this game, and 1 to winning, then 0 is still a Nash equilibrium, but if there is any positive chance that some gimp will submit a nonzero answer just for the hell of it, then you definitely shouldn't play zero. By the way, I guessed 100. I'm not very good with numbers - I think 100 is the best answer, right ;-0
A Nash equilibrium is a set of strategies from which no player has an incentive to deviate, holding others' strategies constant. Take any putative set of (pure) equilibrium strategies; if there is any individual who loses when this set of strategies is played, then they have an incentive to change their guess to 2/3 of the average, and this set of strategies is not a Nash equilibrium. This implies that you are not in Nash equilibrium unless everyone wins.* Holding other players' strategies constant, you have a single optimal strategy, which is to play 2/3 of the average. If there is another player who has already guessed 2/3 of the (new) average then you tie with probability 1; if there is not, you win with probability 1. * Note that everyone winning is necessary, but not sufficient, for a Nash equilibrium. Everyone playing 67 lets everyone win, but it is not a Nash equilibrium. If anyone prefers not to tie, they could deviate and win by themselves.
So games in which there cannot be a tie have no Nash equilibrium? I must have misread the wikipedia page; I thought the requirement was that there's no way to do better with an alternative strategy. I was also assuming that everyone guesses at the same time, as otherwise the person to play last can always win (and so everyone will play 0). But this means it's no longer a perfect-information game, and that there's not going to be a Nash equilibrium. Thanks for your patience :)
No, that's not a general rule. It's just the case that in this particular game, if you're losing you always have a better option that can be achieved just by changing your own strategy. If your prospects for improvement relied on others changing their strategies too, then you could lose and still be in a Nash equilibrium. (For an example of such a game, see battle of the sexes.) Sort of. It's that there's no way to do better with an alternative strategy, given perfect knowledge of others' strategies. They do in the actual game; it's just that that's not relevant to evaluating what counts as a Nash equilibrium. I'm not entirely clear what you mean by the first half of this sentence, but the conclusion is false. Even if everyone guessed in turn, there would still be a Nash equilibrium with everyone playing zero. No problem. ;)
Sorry I didn't/can't continue the conversation; I've gotten rather busy.
Is making an "assumption of common rationality" really a rational choice, even here? With the stakes as low as this, I would assign a very high likelihood to someone getting greater utility from throwing a spanner in the works for the lulz than from a serious attempt at winning, even if at least one such person hadn't already announced their action.
FWIW, I never suggested it was. Lavalamp claimed that zero was not rational under the assumptions in the OP's original justification, one of which was common rationality. It was the validity of that argument I was defending, not its soundness.
The purpose of this game, admittedly, is to test just how complacent / obedient the Overcoming Bias / Less Wrong community has become. Think about your assumptions: First you've got "common rationality". But that's really a smokescreen to hide the fact that you're using a utility function and simply, dearly, hoping that everybody else is using the same one as you! Your second assumption is that "you gain nothing by defecting alone". There's no meaningful sense in which you're "winning" if everybody guesses zero and you do too. The only purpose of it, the only reward you receive for guessing 0 and 'winning', is the satisfaction that you dutifully followed instructions and submitted the 'correct' answer according to game theory and the arguments put forth by upper echelons of the Less Wrong community. In fact, there is much to gain by guessing a non-zero number. First of all, it costs nothing to play. Right away, all of your game theory and rationalization is tossed right out the window. It is of no cost to submit an answer of 100, or even to submit several answers of 100. Your theory of games can't account for this - if people get multiple guesses, submitted from different accounts, you'll be pretty silly with your submission of 0 as an answer. "But that would be cheating." Well, no. See, the game is a cheat. It's to test "Aumann's agreement theorem" among this community here. It's to test whether or not you will follow instructions and run with the herd, buying into garbage about a 'common rationality' and 'unique solutions', 'utility functions' and such. You see, for me at least, there's great value in defecting. You of course will try to scare people into believing they're defecting alone, but here you're presupposing the results of the experiment - that everybody else is dutifully following instructions. So anyway, I would be greatly pleased if the result turned out to be a non-zero number. It would restore my faith in this community, actually. And to that
I think that you are profoundly mistaken about the attitudes and dispositions of the vast majority here. You appear to be new, so that's understandable. As you look around, though, you'll find a wide array of opinions on the limits of causal decision theory, the aptness of utility functions for describing or prescribing human action, and other topics you assume must be dogma for a community calling itself 'rationalist'. You might even experience the uncomfortable realization that other people already agree with some of the brilliant revelations about rationality that you've derived.
I was an avid visitor of Overcoming Bias, but yes I am new to Less Wrong. I had assumed that the general feel of this place would be similar to Overcoming Bias - much of which was very dogmatic, although there were a few notable voices of dissent (several of whom were censored and even banned). Obviously. But there wouldn't be a point to my lecturing them, now would there? No, conchis made the canonical argument and I responded. And if you weren't so uncomfortable with my dissent you might have left a real response, instead of this patronizing and sarcastic analysis.
That's the problem with the internet: "I'm witty and incisive, you're sarcastic and sanctimonious". I'll admit the tenor of my last sentence was out of line; but I stand by the assertion that your psychoanalysis of this group is well off the mark. Also, what exactly is so awful about a group norm of playing certain games seriously even when for zero stakes, in order to gather interesting information about the group dynamics of aspiring rationalists?
Pretty much nails it. pswoo's initial comment was fairly patronizing itself, so it seems a bit rich to criticise you (orthonormal) for playing along. But whatever. By way of substantive response. Um, yeah. So, patronizing bits aside, I agree with much of your (pswoo's) comment. I just don't think it was especially relevant to the particular conversation you (pswoo) intervened in, which was about the validity of the standard argument rather than its soundness.
I will be very surprised if more than half of the answers are 0.
Aren't you forgetting that it's even worse to lose?
I think a reasonable assumption is that a tie between N people is worth 1/N of a unique win, to each of the winners. This would make sense if there were a prize to be split among the winners (and your utility function is linear with respect to the prize).
The problem is that the game is not well-defined, since we are not told how to value ties. If everyone is allowed to value ties as they please, the game is far more complicated. Instead the OP should say something explicit, and probably it should be "ties are exactly as valuable as wins". But it's hard to enforce that, isn't it?
Yes, because I would continue to prefer to win without a tie. :)
By allowing you to choose a real number rather than just an integer, Warrigal has made it easy to avoid ties. Since I have myself played a non-integer, it is extremely unlikely that anyone will guess the exact value of the answer, so there is no penalty to picking a probably unique real number very close to your actual guess. EDIT: On second thought, there is a penalty in that you could tie for the right answer with someone else if you chose a salient number, but you would roughly halve your chances of being the winner if you added or subtracted an epsilon.
To improve on this point, so that it's possible to move either way from anyone else's choice, we could make the choice to be from the open interval (0,100) rather than from [0,100]. ;-)
If the average is less than 3/4, then the zeros will still win.
It depends where in the range of 0-average the guess is. But of course I see what you mean; I meant between 0 and (average * 3/4), sorry. EDIT: (average * 3/4 + average * 3/8) is the upper bound, unless I forgot something or you're not allowed to go over. EDIT 2: The point being, there's a lot more winning non-zero answers than zero answers.
Sure, but what are the odds of you getting the right one? If they're too low, then you could still be better off with zero.
If I rank a tie and a loss the same, then I don't risk anything by guessing a non-0 value for the chance of winning outright.
Yes, but what are the odds of you getting the right non-zero answer, given that everyone else is trying to do the same thing? You seem to be forgetting that it's worse to lose than to share a win.
If everybody reasons as you describe then everyone will guess 1/∞ and everyone will tie. You can't get closer to 2/3 of an infinitesimal than an infinitesimal, so it's stable. Disclaimer: I'm not mathy. Maybe you actually can get closer to 2/3 of an infinitesimal than an infinitesimal.
The question required us to provide real numbers, and infinitesimals are not real numbers. Even if you allowed infinitesimals, though, 0 would still be the Nash equilibrium. After all, if 1/∞ is a valid guess, so is (1/∞)*(2/3), etc., so the exact same logic applies: any number larger than 0 is too large. The only value where everyone could know everyone else's choice and still not want to change is 0.
No, everyone who prefers to win outright will do the logic you just did and decide that 1/∞ is too small, for the reason you state. Also, those of us who prefer to win outright will note that someone has to take one for the team and vote high to give the possibility of an outright winner, unless we believe cousin_it...
Reminds me of yet another semi-interesting game: everyone chooses a positive integer in base-10 notation, and the second highest number wins.
If the game as I've understood it is played over and over, I think it'd be much like rock, paper, scissors: http://www.cs.ualberta.ca/~darse/rsbpc.html
Really? I'd expect the numbers to spiral higher and higher.
After the first round, I wouldn't think there's much reason to guess higher than the previous highest number (or lower than the previous third number), which suggests convergence. If everyone updates based just on what other people did last time, then won't they cycle progressively closer around the initial second number? (Actually, I'm pretty sure the same holds even if people anticipate others' responses, provided they all reason forward the same number of steps.) ETA: What happens if there is a tie for the highest number? Does the third highest guess win, or the two highest together? What if everyone guesses the same thing?
Eh? Why would anyone take one for the team? If it's bad to tie, surely it's worse to lose for the purpose of helping someone else win.
I should have explicitly stated this (you've caused me to realize that it's necessary for my whole line of reasoning), but it's also the case that losing is not a worse outcome for me than tying.
Ah. OK. I think that's generally supposed to be excluded by the set up (though in this case it was admittedly ambiguous). But given that you have those preferences, I agree that your reasoning makes sense.
There's no prize offered, but in theory, these people could collaborate to share whatever prize makes winning better than tying. Since there's no prize except winning, losing to help someone else does seem like it would be bad.

I haven't decided what my guess will be yet, but if the target were instead computed from the middle 95% or so of guesses, throwing out the outliers, I'd say "0" without hesitation.

I trust the vast majority to be able to get the right answer on this (and that they trust the vast majority to be able to do the same), but the possibility for a few kooks to screw it up would probably have me submitting a nonzero guess (especially considering that I'm unlikely to be alone in this thinking).

That's an interesting variation. Next time we might try this with the stipulation that the five highest answers (if there are at least five submissions greater than 0; otherwise all the positive answers) will get tossed out. I still doubt that 0 would win that one, though.
Of course 0 would be the 'winning' strategy if you dismissed enough non-zero answers. But then you're just cooking the books in a desperate attempt to make the canonical game theory solution seem viable, or interesting. In other words, you'd be denying reality in order to convince people that the theoretical model has some relationship with the empirical reality. You'd be an economist.
Well, no— I wouldn't be bothered if the modified game still wound up above 0. I'm just interested in the light such experiments shed on the LW community, and the modified version removes one facet that acts as noise to the rest of the group dynamic: the actions of the few merry pranksters, plus the reaction of the rest of us to the foreknowledge of the pranksters' tendencies. I think you'd get a similar effect to this modification if you instead played this game with significant stakes, such that it was in everyone's real interest to try to win.
It is quite difficult to manipulate people's interests in such a way. "Merry pranksters", for example, clearly derive enjoyment from claiming to vote for something other than zero, and possibly from actually doing so. If you offered money to everyone if zero won, the question would then become: which is more important to the pranksters, the money or the amusement? That's a very subjective question, and while there may be an ultimate and rational answer, it's not at all clear. The proximate response of people is often what you didn't predict - if the prediction is known, people will often act against it intentionally.
That would be an interesting experiment as well, but I was instead suggesting that most entries would be substantially lower (though perhaps still mostly nonzero) if there were a significant monetary prize for the winner (to be split N ways for an N-way tie), and that this new distribution of responses might look like the distribution that would occur without a prize but if it were known that a certain number of high responses would be thrown out.
You mean, for the people who chose the winning value. That is a different experiment, but I don't think the difference matters to my point.
Here's an interesting question: What percentage of outliers would need to be thrown out for you to confidently guess zero (assuming general pre-knowledge of how many will be thrown out, of course)? I'd probably feel extremely confident if, for instance, only the middle third was kept.
I submitted 100, just to piss everyone off.

Would an expression dependent on the number of entrants be acceptable, or would that be funny business?

Go ahead. You're judging the Aumann condition of the group, which is helped if you know who the group is.

I think there may be some signaling issues to be considered here. We all presumably self-identify to some degree as rationalists, and want to validate Less Wrong as a rationalist community. The more rational a community is, the closer to zero the average guess ought to come. So by guessing zero, lowering the average, you contribute to a signal that Less Wrong is a rational sort of place, and validate your own participation. This could be avoided by not revealing the average guess, but then we've gone away from interesting sociological experiment and into forum games.

Or rationalists on LessWrong could conspire to vote "100", thus winning and defeating a dogmatic traditional rationalist who voted "0".

I don't understand how the average guess will be 0. Can you please explain?

You will pick 100. I know that, so I'll pick 66. You know that I know that, so you'll pick 44 instead. But I know that you know that I know that, so I'll pick 29 instead. But you know that I know that you know that I know that, so you'll pick 20 instead. But I know- This continues to infinity until both of our guesses approach 0.
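The back-and-forth above can be sketched numerically: each level of "but I know that you know..." multiplies the previous guess by 2/3, so any starting guess decays toward 0. (A toy sketch; the starting value of 100 is just an illustration.)

```python
# Each level of "I know that you know..." multiplies the current
# guess by 2/3; iterating drives any starting guess toward 0.
guess = 100.0  # illustrative starting point
for level in range(10):
    guess *= 2 / 3
print(guess)  # roughly 1.73 after 10 levels; 100 * (2/3)^n -> 0 as n grows
```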
There is no "infinity" to be considered here. We are given a single equation P = (2/3)P with the unique solution P = 0:

P = (2/3)P
P - (2/3)P = 0
P(1 - 2/3) = 0
P(1/3) = 0
P = 0

QED. As a general rule, you shouldn't even mention infinity except in very select circumstances. Especially not when the solution is so simple!
But the correct equation is Pwin = (2/3)Pavg.
Naively appealing, but if the third step is "so you'll pick 44 instead", the first step claiming that "You will pick 100, and I know that" is incorrect.
But it can be rewritten in a different way: The correct answer cannot possibly be above 66. So everyone knows that nobody will answer above 66, and thus the correct answer will not in fact be above 44. But everybody knows that, and so the correct answer will not in fact be above 29... etc... Of course, where it breaks down is that we know some people will not reason as above.
If 4/5 of the players conspire to answer 100, they win over the rest of the players who answer 0, so it's not always a good idea to abide by the above argument. See also this strategy. Edit: I fixed the reply; this is the original mistaken/confused argument on which cousin_it replied below (although the conclusion remains the same): Given that a sole player answering 100 wins when all others answer 0, it's not a failure of reason to not abide by the above argument.
Wrong. If we have two players, one says 100 and the other says 0, the average is 50, 2/3 of the average is about 33, and player 2 wins. Add more players saying 0 and it gets even worse.
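Both scenarios are easy to check mechanically. A minimal sketch (the `winners` helper is hypothetical, just for this check; ties count as shared wins):

```python
# Return the guesses closest to 2/3 of the average.
def winners(guesses):
    target = (2 / 3) * sum(guesses) / len(guesses)
    best = min(abs(g - target) for g in guesses)
    return [g for g in guesses if abs(g - target) == best]

# 4/5 of players conspire at 100, the rest answer 0:
# average 80, target ~53.3, so the conspirators are closer.
print(winners([100, 100, 100, 100, 0]))  # -> [100, 100, 100, 100]

# Two players, 100 vs 0: average 50, target ~33.3, so 0 wins.
print(winners([100, 0]))  # -> [0]
```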
You are right; fixed.
The problem (with the edited scenario) is that, without an enforcement mechanism, all the players in the conspiracy have an incentive to defect (more precisely, they're indifferent between defecting and not defecting if they don't care how many winners there are; they strictly prefer defection if they would rather be the only winner.)
Fun fact: a similar conspiracy strategy won the 20th anniversary iterated Prisoner's Dilemma tournament, beating out Tit For Tat. Your words hint that, if adherence to Aumann's theorem is to be considered the measure of a rationalist, then invariably defecting in PD and conspiracy situations should be regarded as an equally valid measure. But then we'd have to drop Eliezer from the ship.
Conspiracy is a strategy that leads to winning, while "defecting" is something magical, a change that doesn't exist in isolation. If it was possible for one player to jump to a winning position, while other players remain where they were, then this is obviously preferable to that player, but that's not really the case.
Adhering to the conspiracy is a strategy that leads to losing if anyone else answers slightly below 100. It's not a strategy that exists in isolation either. ETA: More generally, if you're going to try to undermine a basic concept in game theory as being "magical" (whatever that is supposed to mean), I think you owe more of an argument than the one you've given.
Wei Dai:
Vladimir is just following the footsteps of Aumann, who in 1959 proposed the notion of Strong Nash Equilibrium, which requires that an agreement not be subject to an improving deviation by any coalition of players. Other game theorists then realized (like conchis) that this requirement is too strong, since agreements must be resistant to deviations which are not themselves resistant to further deviations. (I'm mostly quoting from http://www.u.arizona.edu/~jwooders/cpcethry.pdf here.) I propose that nobody should be downvoted for making a mistake that Aumann made. :)
What about mistakes that he continues to make? ;) More generally, although I didn't vote Vladimir down (I have a general policy against voting down comments in conversations I'm actively involved in) I'm perfectly happy to vote down mistakes regardless of whether someone smart has made them before.
Okay. You will pick X - a number whose value I don't know. I know that, so I'll pick X*2/3. You know that I know that, so you'll pick X*4/9 instead. But I know that you know that I know that, so I'll pick X*8/27 instead. But you know that I know that you know that I know that, so you'll pick X*16/81 instead. But I know- This continues to infinity until X is multiplied by 0. At this point the value of X doesn't matter.
Nope, it's the same argument. You can't know that I pick X and at the same time know that I pick X*4/9 instead. From the outset, you can't assume to know precisely what I pick, and considering all possible values that I pick and you know I pick (a set of situations indexed by X) doesn't fix that.
It's the only Nash equilibrium. The only way everyone can win (and thus, the only way no-one would want to change their guess if they knew all the other guesses) is for all of us to guess a number that is 2/3 of itself: i.e. 0. ETA: CannibalSmith's explanation is better. ETA2: AllanCrossman's is even better.
Then I don't see the point of the game.
For this to apply in the real world, the players not only have to be rational, they also have to have common knowledge of each others' rationality. E.g. even if you're rational, if you think I'm stupid, and will guess 5, then you should no longer guess zero. Even if I am rational, and everyone else has common knowledge of everyone else's rationality, if they know that you think I'm irrational, then they know that you'll guess higher than zero, so they'll all guess higher than zero, and so on... In general, the more "stupid" people there are, or the more "stupid" people we think there are, or the more "stupid" people we think others think there are, or... the further the average guess is likely to be from 0. So (I assume) the point is to test the assumption of common knowledge of rationality: i.e. how stupid people are, how stupid we think other people are, how stupid we think other people think other people are, etc.
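One way to make this concrete is a toy "level-k" model (the level-0 baseline of 50 is an assumption for illustration, not anything from the thread): a level-0 player guesses 50 on average, and a level-k player best-responds to a crowd of level-(k-1) players.

```python
# Toy level-k model: level 0 guesses 50 on average (an assumed
# baseline); level k best-responds with 2/3 of the level-(k-1) average.
for k in range(6):
    print(k, round(50 * (2 / 3) ** k, 2))
```

Under this toy model, a finite depth of "we know that they know" stops the guess only partway down rather than at 0; the reported average of about 13.2 would sit somewhere between two and three levels of reasoning.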

You can also try it online at: http://guess23.com

This comes to mind. The author claims that "the winner was accurate to six decimal places."

Can you tell us how many entries there were so I can see what my entry would have calculated out to? I already know I was too low.

It would be very interesting to know what the guesses were. I'm curious which of my assumptions was wrong.

There were 52 entries, if I remember correctly. As for the guesses themselves, I promised at the beginning not to reveal those. I might change my mind about revealing the guesses 70 years after every participant is dead. :-P

This game (along with the prisoner's dilemma and tragedy of the commons) nicely shows how the best choice to make is heavily influenced by how much you know about the other players (and therefore what they will guess). If you know that the other players are "rationalists", then you can safely submit 0 (assuming that this hypothetical rational intelligence indeed submits 0). In real-world tests you can pretty safely assume that the players are not-perfectly-rational humans. It may also be possible (as you can here) to influence other players.

Finally, my many years of watching The Price Is Right have paid off!

I guess 1. :p

I'm not going to count guesses made in public. Part of the point is to see how much people trust you.
I assume you mean "guesses made in public and not also sent in private don't count"? i.e., others shouldn't have absolute assurance of what someone's guess is, because the "public guess" could be something other than the real guess.

Here is my question: Is there any payoff whatsoever for everyone drawing?

Which raises the question: How would I contact Warrigal, or anyone else from LW?

You can send a message, but it's rare for people to check their 'inboxes', which include every response to all of their comments.

Is it still rare for people to check their inboxes, now that having stuff in your inbox turns the envelope beneath your karma bubble red?
Also, Warrigal is presumably expecting messages in response to this, and is therefore likely to be checking his or her inbox. More generally, it does seem like it would be useful to be able to separate messages from comment replies in one's inbox. Would that be possible as a feature request?
My envelope is always red, and I can't find any messages in my "inbox".
Perhaps you should mention this phenomenon on the Issues, Bugs, and Requested Features thread. For reference, 'inbox' should contain replies to all of your comments, as well as any private messages. It should turn red when there's a new one, and stop being red when you open your inbox.
Even when you click on the envelope? Huh.

Interesting. In trying to figure out my guess, I discovered that I care more about losing if everyone else wins, than I care about losing if most other people are going to lose as well. This gives me a greater incentive to pick 0, but in a way that doesn't necessarily reflect my beliefs about the rationality/common knowledge of rationality of this group.

The funny part is that if just one player guesses nonzero, all the people who do will miss by an infinite margin.

It's not the multiplicative distance from the average that counts.

I assume "no funny business" means "entries must be of the form 'A' or 'B/C' or 'D.E' for some numeral strings A,B,C,D,E with C nonzero".

In that case Warrigal would have said "rational" rather than "real". Numbers such as 17π would presumably be fine too, not just fractions. "No funny business" presumably means "I'd better be able to figure out whether it's the closest easily". For instance, the number "S(12)/2^n, where S is the max shifts function and n is the smallest integer such that my number is less than 100" is technically well-defined, in a mathematical sense. But if you can actually figure out what it is, you could publish a paper about it in any journal of computer science you liked.
That's right, some real numbers can be easily defined while being arbitrarily difficult to calculate the game result with. But there is another reason why we want to tighten the restriction for a submission beyond the standard of being able to "figure out whether it's the closest easily".

The point of the game is for people to try to submit 2/3 of the average guess. In order to calculate 2/3 of the average guess, you need two operations: addition and division by nonzero divisors. The rational numbers form a dense set (for all a<b there exists c such that a<c<b) that is closed under these two operations. It is the natural playing field for this game. The real numbers are constructed in a way that is unrelated to the structure of this game. (I think one typically invokes the concept of sets of rational numbers to construct the reals.)

If you want to see why allowing all real numbers adds nothing to the game play, just note that every real number is equal to a terminating decimal plus a real number smaller than epsilon, where epsilon is made to be much smaller than the difference between any two submitters' numbers. The only type of scenario in which submitting an irrational number could help you is this: you are playing against the submissions π, 2π, 7π. If you submit 2π, you tie for the win, while if you submit a rational number very close to 2π, then the 2π submitter wins. The scenario can happen with any irrational number, not just π. The irrational numbers just serve to add pointless additional elements to our already well-structured rationals.

I hope I have clarified my previous comment.
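The closure point can be illustrated with exact rational arithmetic (the particular guesses below are made up): 2/3 of the average of rational guesses is itself rational, so the game never forces any rounding.

```python
# With rational guesses, the target (2/3 of the average) is itself an
# exact rational number -- no irrationals or rounding ever enter.
from fractions import Fraction

guesses = [Fraction(13), Fraction(22, 7), Fraction(1, 3)]  # made-up entries
target = Fraction(2, 3) * sum(guesses) / len(guesses)
print(target)  # an exact rational: 692/189
```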

The Aumann agreement theorem, as I understand it, has to do with what happens when rational agents share all their data with each other.

But still, sending my guess.

I assume that the point is to test the assumption of common knowledge of rationality (which is crucial to Aumann agreement) rather than testing Aumann agreement directly.
The Aumann agreement theorem has to do with what happens when rational agents share a single probability estimate with each other. They need not share their evidence.
Ah, okay, thanks.
...and to whoever voted me and Robin Hanson down (not that they're necessarily the same person), if you think that we're wrong, please say so, so that our disagreement will be resolved through reasoning, not through brute force.
It wasn't me, but I think that your response is much better than Robin's, because instead of an unsupported flat-out contradiction, you described what the theorem is actually about. As near as I can tell, the theorem says that if two people have common knowledge (meaning not only that they both know, but also that each knows the other knows, ad infinitum) that they are Bayesian rationalists with the same priors, and they both give each other probability estimates for an event and don't then change their estimates, then those estimates must have been equal. It doesn't actually say how they should come to an agreement if their initial estimates differ, or even that they will.
Indeed, Aumann's original proof was not constructive. However, it has since been proved that the protocol "state your current posterior, update on the other agent's statement, repeat" will converge to agreement.
No, that is not what the agreement theorem is about.

I'm reasonably rational, and more-or-less honest, and I have no particular desire to 'win' this contest. But it amuses me greatly.

Thus I have submitted a value.

The exercises for the student: Did I submit zero, or not? And was my submission rational?

I'd bet two karma points against one that you submitted a number you believe has a less-than-average chance of winning this game. You might have submitted a very large number, or you might have submitted 0, confident in the knowledge that 0 will not actually win. As for whether you claim your submission was rational, I suppose it depends on which answer amuses you more today.
All of your points, although initially quite plausible, become uncertain upon deeper reflection - except the last, which is the best example of reasoning in this thread so far. I award you one point, and suggest that you think further upon the problem of the paradox of anticipation.
I wonder why this was voted down so far.
Because the tone seems arrogant and unconstructively insulting to LW readers, which decreases the quality of the discussion. The comment also lacks interesting content.