Robert Aumann's Agreement Theorem shows that honest Bayesians cannot agree to disagree - if they have common knowledge of their probability estimates, they have the same probability estimate.

Um, doesn't this also depend on them having common priors?
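It does: the theorem assumes a common prior. A minimal sketch of why the assumption matters (the coin bias, likelihoods, and prior values below are illustrative, not from the original post): two honest Bayesians who process identical evidence reach identical posteriors only if they started from the same prior.

```python
from fractions import Fraction

def bayes_update(prior_h, p_data_given_h, p_data_given_not_h):
    """Posterior P(H | data) via Bayes' rule."""
    num = prior_h * p_data_given_h
    return num / (num + (1 - prior_h) * p_data_given_not_h)

# Both agents see the same evidence: one coin flip lands heads,
# where P(heads | H) = 3/4 and P(heads | not-H) = 1/2.
p_h, p_not_h = Fraction(3, 4), Fraction(1, 2)

# Common prior: identical evidence forces identical posteriors.
a = bayes_update(Fraction(1, 2), p_h, p_not_h)
b = bayes_update(Fraction(1, 2), p_h, p_not_h)
assert a == b == Fraction(3, 5)

# Different priors: the same evidence, honestly processed,
# still leaves the two agents with different estimates.
a = bayes_update(Fraction(9, 10), p_h, p_not_h)  # 27/29
b = bayes_update(Fraction(1, 10), p_h, p_not_h)  # 1/7
assert a != b
```

This is weaker than the full theorem (which is about common knowledge of posteriors, not shared data), but it shows the role the common-prior assumption plays: without it, nothing forces the estimates together.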


If you start out with a maximum-entropy prior, then you never learn anything, ever, no matter how much evidence you observe. You do not even learn anything wrong - you always remain as ignorant as you began.

Can you clarify what you mean here? Are you referring specifically to the monkey example or making a more general point?
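The claim can be checked directly for the sequence-prediction case. A small sketch (the 10-bit sequence length is an arbitrary choice for the demo): under a maximum-entropy prior, i.e. a uniform distribution over all binary sequences, the predictive probability of the next bit stays at 1/2 no matter what prefix has been observed.

```python
from fractions import Fraction
from itertools import product

def predictive_next(observed, seq_len=10):
    """P(next bit = 1 | observed prefix) under a uniform
    (maximum-entropy) prior over all binary sequences of seq_len bits."""
    consistent = [s for s in product([0, 1], repeat=seq_len)
                  if list(s[:len(observed)]) == observed]
    ones = sum(1 for s in consistent if s[len(observed)] == 1)
    return Fraction(ones, len(consistent))

# The prediction never moves, however lopsided the evidence:
assert predictive_next([]) == Fraction(1, 2)
assert predictive_next([1, 1, 1, 1, 1]) == Fraction(1, 2)
assert predictive_next([0, 0, 0, 0, 0]) == Fraction(1, 2)
```

Every observation rules out exactly half of the remaining sequences regardless of what the next bit is, so the posterior predictive is always 1/2 - which seems to be the general point behind the monkey example.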