I've written up a rationality game which we played several times at our local LW chapter and had a lot of fun with. The idea is to put Aumann's agreement theorem into practice as a multi-player calibration game, in which players react to the probabilities other players give (each player holding some privileged evidence). If you get very involved, this implies reasoning not only about how well your friends are calibrated, but also about how much your friends trust each other's calibration, and how much they trust each other's trust in each other.

You'll need a set of trivia questions to play. We used these.

The write-up includes a helpful scoring table which we have not play-tested yet. When we played, we used a plain Bayes loss rather than an adjusted Bayes loss, and calculated scores on our phone calculators. The table version should feel a lot better: the numbers are easier to interpret, and you get your score right away rather than tallying everything up at the end.
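For concreteness, here is a minimal sketch of what a plain Bayes (log) loss calculation looks like. The function name and the base-2 convention are illustrative choices of mine, not the write-up's actual scoring table.

```python
import math

def bayes_loss(prob_on_true_answer: float) -> float:
    """Plain Bayes (log) loss: the negative log of the probability a
    player assigned to the answer that turned out to be true.
    Lower is better; 0 means full confidence in the right answer."""
    return -math.log2(prob_on_true_answer)

# A player who gave the true answer probability 0.8 loses about 0.32 bits;
# giving it 0.5 costs exactly 1 bit.
for p in (0.99, 0.8, 0.5, 0.2):
    print(f"p = {p:.2f} -> loss = {bayes_loss(p):.2f} bits")
```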

For those interested in running the game, I put together a Python script that pulls questions from a trivia-question API and outputs two LaTeX files ready to be compiled: one for the quizmaster, and one with a corresponding set of cut-out cards for the players.

https://github.com/davidmanheim/Aumann_Game_Printouts

Pull requests and improvement suggestions are welcome!
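For anyone curious, here is a rough sketch of what the skeleton of such a script might look like. This is not the repository's actual code; it assumes the free Open Trivia DB endpoint and writes only a single quizmaster file rather than the full card set.

```python
import html
import json
import urllib.request

# Hypothetical sketch, not the repository's code: fetch multiple-choice
# questions from the free Open Trivia DB endpoint and emit a bare-bones
# LaTeX quizmaster sheet. Real use would also escape LaTeX special
# characters and generate the players' cut-out cards.
URL = "https://opentdb.com/api.php?amount=10&type=multiple"

with urllib.request.urlopen(URL) as resp:
    questions = json.load(resp)["results"]

lines = [r"\documentclass{article}", r"\begin{document}",
         r"\section*{Quizmaster sheet}"]
for i, q in enumerate(questions, 1):
    question = html.unescape(q["question"])
    answer = html.unescape(q["correct_answer"])
    lines.append(r"\paragraph{Q%d} %s \emph{Answer:} %s" % (i, question, answer))
lines.append(r"\end{document}")

with open("quizmaster.tex", "w") as f:
    f.write("\n".join(lines))
```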

Super excited that someone has made an online version of the game, here: https://aumann.io/

Currently not working, but the GitHub code is online, and someone should get it running again.

Is the code for https://aumann.io/ available online?

Yes: https://github.com/jacob13400/aumann-game. I spoke with the developer a couple of months ago, who said "It's not working properly because I don't have much experience in backend development (The Calibration Game, for example, only had the React Native code and a static Firebase DB), and had to bring in a friend to take care of that part, and he lost interest in it after initial work... If you know anyone who knows Node fairly well (and isn't super averse to wading through what is probably very clunky code, or to writing it from scratch if that's actually easier), that's the biggest bottleneck on this right now."

If you have someone who might be interested in a short-term job doing it, definitely let me know.

I asked because I tried it as part of a reading group earlier this week. It was quite broken. I'm thinking of remaking it as a 'decentralized app' over the next few weeks. I'll DM you.

I like this very much. Did the game work in practice as you describe in the example?

Having run this in many other groups: yes, it works pretty well.

Essentially, yes! There were often a few more revisions than this, and the trolling was more subtle.

I wonder whether the games you played resembled the expected Aumann process, which is akin to a random walk, or whether they looked more like a slow convergence of opinions. If it's the latter, then the game has little to do with Aumann agreement.

Regardless of how well it follows the random walk, it already violates the assumption of rational agents.

Then why take Aumann's name in vain?

I think the relationship to Aumann's theorem is direct and strong. It's the same old question of how Aumann-like reasoning plays out in practice, for only partially rational agents, that was much discussed back in the Overcoming Bias days.

May I suggest that "Aumann Agreement Game" would be a better name than "Aumann's Agreement Game" because the latter suggests (falsely, I take it) that the game itself is Aumann's?

[EDITED to add:] In case it's not obvious, the title and content of this post used to say "Aumann's" rather than "Aumann".

Yes, good point.
