# Aumann's Agreement Theorem


The external link "A write-up of the proof of Aumann's agreement theorem (pdf) by Tyrrell McAllister" seems to be broken. At least, I get a 404 Error. I am not sure how to best fix this but I thought I may as well point this out.

I feel like Aumann's Agreement Theorem is one of those concepts which the community was originally excited about, but which didn't quite pan out. It's valid as a piece of math, but people want to use it as a shorthand for "the fact that we disagree means one of us must be being irrational", and that is not what it says. The reason is that it's not enough for both people to be Bayesian agents, not enough for each person to also know that the other is a Bayesian agent, not enough for each person to know that the other person knows this, etc.; they need actual common knowledge, which closes off that infinite regress. And then it turns out that people mostly aren't Bayesian agents. And that's before getting into the weird anthropic stuff, where there are facts and pieces of evidence that aren't person-symmetric; e.g., I may think that my subjective experience means futures in which I-in-particular am mass-copied are more likely, but someone else should not believe this.
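For reference, here is a sketch of the standard formal statement (following Aumann's 1976 paper; the notation below is my own gloss, not quoted from either page under discussion):

```latex
\begin{theorem}[Aumann, 1976]
Let $(\Omega, p)$ be a finite probability space with a common prior $p$,
and let $\mathcal{P}_1$ and $\mathcal{P}_2$ be the two agents'
information partitions of $\Omega$. Fix an event $E \subseteq \Omega$,
and let agent $i$'s posterior at state $\omega$ be
\[
  q_i(\omega) = p\bigl(E \mid \mathcal{P}_i(\omega)\bigr).
\]
If at some state $\omega$ it is common knowledge that $q_1 = a$ and
$q_2 = b$, then $a = b$.
\end{theorem}
```

The proof idea: the event "it is common knowledge that $q_1 = a$ and $q_2 = b$" is a union of cells of each agent's partition, and the posterior for $E$ is constant on each such cell ($a$ for agent 1, $b$ for agent 2); averaging $p(E \mid \cdot)$ over that union therefore yields both $a$ and $b$, so $a = b$. Note how much the common-prior and common-knowledge hypotheses are doing here.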

I think there's work to be done on both this page and the Aumann Agreement page clarifying the relationship between the two. It would also be nice if this page had some of the math on it.

From the old Talk Page:

# Talk:Aumann's agreement theorem

Re the reversion of 1736 7 September: on reflection, I do agree that there should be a separate page for the theorem itself (which is math) and the intuitive gloss and discussion of implications (which is not), but I'm not sure the offending text belongs in the "Disagreement" article, either--would it be a good idea to create a page for Aumann agreement referring to the state of agents coming to agree with each other in an Aumann-esque fashion? I think that's what I'm going to do. Z. M. Davis 23:51, 7 September 2009 (UTC)

Rationale here being that two agents coming to agree by updating on each other's beliefs (which I'm calling "Aumann agreement") is distinct from more general discussion of disagreements and why they are problematic amongst rationalists (because there's actually a right answer in questions of fact). Z. M. Davis 23:55, 7 September 2009 (UTC)

You've created a duplicate piece of content for now -- and that's not good. If you create the new page, you should factor its topic out of the Disagreement page as well (but that will hurt that page now -- so maybe a subsection on the Disagreement page is a better solution for the moment, if there is a clear topic to split off).

Did you watch Hanson's talk on "are disagreements honest"? The link is in the article. In his model, agreement is not a process: you just tell me your opinion, I pronounce my conclusion, and we are done: you must agree with my conclusion immediately. --Vladimir Nesov 01:14, 8 September 2009 (UTC)

I've deleted the duplicate content on "Disagreement" for now, but will be sure to improve that page soon, as well as review Hanson's talk. (It's strange, I read the Cowen and Hanson "Are Disagreements Honest?" paper, and I don't remember it saying anything about agreement being instantaneous--and the Aaronson paper certainly models disagreement as a process; I'll have to look into this further.) Thanks for the help! --Z. M. Davis 01:33, 8 September 2009 (UTC)