Some interesting results from We Can't Disagree Forever by Geanakoplos and Polemarchakis (1982) that changed how I think of Aumann Agreement, along with some toy examples.
AI assistance: Claude helped with early feedback, copyedits, and deadlines. All words and errors are my own.
Recap and definitions
Consider two ideal Bayesians with common priors. By Aumann's Agreement Theorem, if they have common knowledge of their current probabilities for some proposition, then their probabilities will be the same.
Geanakoplos and Polemarchakis show that if two such agents don't have common knowledge, they can attain common knowledge and thus agreement in a finite number of steps. They do this by repeatedly sharing their current probabilities and updating accordingly. This is the indirect communication equilibrium. The paper also shows that there are cases where the number of steps is large.
Another way of resolving disagreement is for both agents to pool all their information. Trivially this also results in agreement. This is the direct communication equilibrium.
Fair Dice Example
Alice and Bob are ideal Bayesians with common priors. Their priors include the following beliefs:
Alice will roll a fair six-sided die today and Bob will not see the result.
Bob will roll a fair six-sided die today and Alice will not see the result.
Let SAME be the event that Alice and Bob get the same result. Before rolling the dice, Alice and Bob have P(SAME) = ⅙.
After the dice are rolled, Alice and Bob continue to have P(SAME) = ⅙, and have common knowledge of this fact. Therefore they are already in an indirect communication equilibrium.
Alice and Bob improve their beliefs by sharing data:
Alice: "My die landed on 6"
Bob: "P(SAME) = 100%" (Bob's die landed on 6)
There is now common knowledge that both dice landed on 6 and P(SAME) = 100%.
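The numbers in this example are easy to check with a short script. This is just my own sketch of the arithmetic, not anything from the paper:

```python
from fractions import Fraction

FACES = range(1, 7)
bob = {f: Fraction(1, 6) for f in FACES}  # Bob's roll distribution

# Prior P(SAME): both fair dice land on the same face.
p_same_prior = sum(Fraction(1, 6) * bob[f] for f in FACES)
assert p_same_prior == Fraction(1, 6)

# After Alice sees her own roll a, P(SAME | a) = P(Bob rolled a) = 1/6,
# identical for every a -- so her announced P(SAME) reveals nothing,
# and they are already in an indirect communication equilibrium.
posteriors = {a: bob[a] for a in FACES}
assert set(posteriors.values()) == {Fraction(1, 6)}

# Pooling information instead: once Alice reveals her actual roll,
# Bob compares it with his own and P(SAME) becomes 0 or 1.
alice_roll, bob_roll = 6, 6
p_same_direct = Fraction(int(alice_roll == bob_roll))
print(p_same_direct)  # 1
```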
Takeaway: an indirect communication equilibrium is not necessarily a direct communication equilibrium. This is proposition 3 in the paper.
Almost Fair Dice Example
Alex and Bella are in a similar situation to Alice and Bob, but their dice are not quite fair. Each die lands on the faces with these probabilities:
Face 1: 16.4%
Face 2: 16.5%
Face 3: 16.6%
Face 4: 16.7%
Face 5: 16.8%
Face 6: 17.0%
Alex and Bella have an initial P(SAME) that is very slightly higher than ⅙, about 16.669%. If Alex rolls a 6, his P(SAME) increases to 17.0%. When he shares his P(SAME) he reveals which number he rolled. This changes the situation so that the indirect and direct communication equilibria are the same.
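A quick script makes the contrast with the fair-dice case concrete. This is my own sketch; I'm taking the listed probabilities to be in face order 1 through 6, which is what the roll-a-6 case implies for face 6:

```python
# Face probabilities for the almost-fair die, faces 1..6 in order.
probs = [0.164, 0.165, 0.166, 0.167, 0.168, 0.170]
assert abs(sum(probs) - 1.0) < 1e-12

# Prior P(SAME): both dice land on the same face.
p_same_prior = sum(q * q for q in probs)
print(round(p_same_prior, 5))  # 0.16669

# After Alex rolls face f, P(SAME | f) = P(Bella rolls f) = probs[f-1].
# All six values are distinct, so announcing P(SAME) reveals the roll:
# the very first announcement already pools all the information.
assert len(set(probs)) == 6
```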
Takeaway: cases where proposition 3 applies are "rare" in some sense. This is proposition 4 in the paper.
Extremizing Example
Abigail and Benjamin are ideal Bayesians with common priors. Their priors include the following beliefs:
There is a single coin that will be flipped twice today.
There is a 50% chance it will be a heads-biased coin, in which case it will land heads with 90% probability.
There is a 50% chance it will be a tails-biased coin, in which case it will land tails with 90% probability.
Conditional on which coin it is, the two flip results are independent.
Abigail will see either the first flip (50%) or the second flip (50%). She will be told which flip she saw. Benjamin will not.
Benjamin will see either the first flip (50%) or the second flip (50%). He will be told which flip he saw. Abigail will not.
Let HEAD be the event that the coin is heads-biased. Suppose that Abigail and Benjamin both saw a heads result. Then before they communicate, they both have P(HEAD) = 90%.
Abigail and Benjamin already agree, but their agreement is not common knowledge. So they can Aumann Agree like this:
Abigail: "P(HEAD) = 90%"
Benjamin: "P(HEAD) = 93.956...%"
There is now common knowledge that P(HEAD) = 93.956...%; they reached the indirect communication equilibrium.
Takeaway: Sharing beliefs can improve beliefs even if you already agree.
For comparison, the direct communication equilibrium is either 90% (they saw the same flip, so only one head was observed) or 98.780...% (they saw different flips, so two heads were observed).
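The 93.956...% figure falls out of a straightforward Bayes computation. Here is my own sketch of it, using exact fractions:

```python
from fractions import Fraction

# Likelihood of a single heads result given the coin's bias.
P_HEADS = {"HEAD": Fraction(9, 10), "TAIL": Fraction(1, 10)}
PRIOR = Fraction(1, 2)  # P(HEAD) before any flips

def posterior(like_head, like_tail):
    """P(HEAD | data) given the likelihood of the data under each bias."""
    return PRIOR * like_head / (PRIOR * like_head + PRIOR * like_tail)

# Seeing one head: P(HEAD) = 90%.
assert posterior(P_HEADS["HEAD"], P_HEADS["TAIL"]) == Fraction(9, 10)

# Benjamin saw a head; Abigail's "90%" tells him she saw a head too --
# on the same flip with probability 1/2 (one head total), or on the
# other flip with probability 1/2 (two heads total).
def like_heard_ninety(bias):
    p = P_HEADS[bias]
    return Fraction(1, 2) * p + Fraction(1, 2) * p * p

p_indirect = posterior(like_heard_ninety("HEAD"), like_heard_ninety("TAIL"))
print(float(p_indirect))  # 0.93956...

# Direct equilibrium after pooling all information:
# same flip -> one head -> 90%; different flips -> two heads.
p_two_heads = posterior(P_HEADS["HEAD"] ** 2, P_HEADS["TAIL"] ** 2)
print(float(p_two_heads))  # 0.98780...
```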
My updates
Look at the structure of reasoning needed to reach the indirect communication equilibrium. It's not like a marketplace, where I bid 10%, you bid 90%, and we haggle till we reach consensus. Instead, you use my expressed beliefs as evidence about my hidden observations, and update based on this indirect observational evidence. Another way of putting it: you say what you believe, and I infer what you might have seen to believe that. If my brain isn't doing that type of thinking, it's not doing Aumann Agreement, it's doing something else. Perhaps I'm updating from shared evidence, or negotiating a compromise.
This is also a good reminder that ideal Bayesians are super-intelligent. As a human mimicking the process I'm likely to end up in a case where the indirect equilibrium is inferior to the direct equilibrium. I'm also likely to be unable to reach an indirect equilibrium, hitting a cognitive dead-end like: "So your credence is now 70%, which implies ... umm ... I got nothing". So sharing my actual observations and analysis ends up being key to reaching agreement.