I've written up a rationality game which we played several times at our local LW chapter and had a lot of fun with. The idea is to put Aumann's agreement theorem into practice as a multi-player calibration game, in which players react to the probabilities that other players give, each player holding some privileged evidence. If you get very involved, this implies reasoning not only about how well your friends are calibrated, but also about how much your friends trust each other's calibration, and how much they trust each other's trust in each other.
You'll need a set of trivia questions to play. We used these.
The write-up includes a helpful scoring table which we have not play-tested yet. We did a plain Bayes loss rather than an adjusted Bayes loss when we played, and calculated things on our phone calculators. This version should feel a lot better, because the numbers are easier to interpret and you get your score right away rather than calculating at the end.
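The write-up has the exact scoring table; as a rough illustration of what a plain Bayes (logarithmic) loss looks like, here is a minimal Python sketch. The choice of base-2 logarithm is an assumption for readability (it makes a 50% answer cost exactly one bit), not necessarily the convention used in the write-up:

```python
import math

def bayes_loss(prob_on_truth: float) -> float:
    """Plain Bayes (logarithmic) loss: lower is better.

    prob_on_truth is the probability the player assigned to the
    answer that turned out to be correct. Assigning 50% costs
    exactly 1 bit; higher confidence on the truth costs less.
    """
    return -math.log2(prob_on_truth)

print(bayes_loss(0.5))  # 1.0
print(bayes_loss(0.8))  # about 0.32
```

The drawback the post alludes to is visible here: raw log losses like 0.32 are hard to interpret mid-game on a phone calculator, which is what the adjusted table in the write-up is meant to fix.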
For those interested in running the game, I put together a Python script that pulls questions from a trivia-question API and outputs two LaTeX files ready to compile: one for the quizmaster, and one with a set of corresponding cut-out cards for the players.
Pull requests and improvement suggestions are welcome!
Super excited that someone has made an online version of the game, here: https://aumann.io/
It's currently not working, but the GitHub code is online, and someone should get it running.
Is the code for https://aumann.io/ available online?
Yes - https://github.com/jacob13400/aumann-game and I spoke with the developer a couple of months ago, who said: "It's not working properly because I don't have much experience in backend development (The Calibration Game, for example, only had the React Native code and a static Firebase DB), and had to bring in a friend to take care of that part, and he lost interest in it after initial work... If you know anyone who knows Node fairly well (and isn't super averse to wading through what is probably very clunky code, or to writing it from scratch if that's actually easier), that's the biggest bottleneck on this right now." If you have someone who might be interested in a short-term job doing it, definitely let me know.
I asked because I tried it as part of a reading group earlier this week. It was quite broken. I thought I would remake it as a 'decentralized app' over the next few weeks. I'll DM you.
For the last couple of years, the Russian-speaking LW community has been running the AAG online, using this Google Sheets template: https://docs.google.com/spreadsheets/d/1tm4AYBMs8N-ZkdJJeNezG6H6tIPkQ5tHJmiFx8n3_Xo/edit
It supports and calculates points for 2-6 players.
The participants add and update their probabilities and see the history and the points they’ll get.
Feel free to use it!
The game gets better if multiple teams compete for the largest total amount of points instead of individual players competing with each other for individual points (use multiple sheets and moderators).
Once no one wants to update their probability any further, the round ends: players reveal their answers in the sheet column, the moderator reveals the correct answer, and the points are finalised.
In the multi-team mode, it’s fun to also add a KBS-style question: e.g., “What’ll be the sum of the final probabilities the players from all teams put on their answers to this question, mod 4?”
I like this very much. Did the game work in practice as you describe in the example?
Having run this in many other groups, yes it works pretty well.
Essentially, yes! There were often a few more revisions than this, and the trolling was more subtle.
I wonder whether the games you played resembled the expected Aumann process, which is akin to a random walk, or whether they looked more like a slow convergence of opinions. If it's the latter, then the game has little to do with Aumann agreement.
Regardless of how well it follows the random walk, it already violates the assumption of rational agents.
Then why take Aumann's name in vain?
I think the relationship to Aumann's theorem is direct and strong. It's the same old question of how Aumann-like reasoning plays out in practice, for only partially rational agents, that was much discussed back in the Overcoming Bias days.
Probably the most relevant post:
Another game proposed to shed light on this:
May I suggest that "Aumann Agreement Game" would be a better name than "Aumann's Agreement Game" because the latter suggests (falsely, I take it) that the game itself is Aumann's?
[EDITED to add:] In case it's not obvious, the title and content of this post used to say "Aumann's" rather than "Aumann".
Yes, good point.