I am trying to model a Bayesian agent who updates their credence in some proposition p based on the reports of a set of experts on p, each of whom responds to p in a purely binary way (i.e. "belief"/"disbelief"). Do you know of attempts (inside or outside the academic literature) to model something similar?
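Concretely, the kind of toy model I have in mind looks something like the sketch below (the per-expert "sensitivity"/"false-positive rate" numbers and the conditional-independence assumption are just illustrative placeholders, not something taken from an existing model):

```python
import math

def update_credence(prior, reports, p_yes_if_true, p_yes_if_false):
    """Update P(p) from binary expert reports, assuming the reports are
    conditionally independent given whether p is true.

    prior          -- P(p) before hearing the experts
    reports        -- list of booleans, True = expert says "belief"
    p_yes_if_true  -- P(expert i says "belief" | p)      ("sensitivity")
    p_yes_if_false -- P(expert i says "belief" | not p)  (false-positive rate)
    """
    log_odds = math.log(prior / (1 - prior))
    for says_yes, s, f in zip(reports, p_yes_if_true, p_yes_if_false):
        if says_yes:   # "belief": multiply odds by P(yes | p) / P(yes | not p)
            log_odds += math.log(s / f)
        else:          # "disbelief": multiply odds by P(no | p) / P(no | not p)
            log_odds += math.log((1 - s) / (1 - f))
    return 1 / (1 + math.exp(-log_odds))

# three experts, two say "belief", one says "disbelief"
print(update_credence(0.5, [True, True, False],
                      p_yes_if_true=[0.8, 0.7, 0.9],
                      p_yes_if_false=[0.2, 0.3, 0.4]))
```

The conditional-independence assumption is doing most of the work here; correlated experts would need a joint likelihood rather than a product, and that is exactly the part I'd like to see treated properly somewhere.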

My search wasn't very successful. Thanks a lot for any leads (even if they are only vaguely similar projects)! 


I don't know of any research that's this direct.  Well, that's not true - https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem is pretty famous, so if the "Bayesian agent" and the "experts" share a common prior and their posteriors are common knowledge, the updates (in both directions) are pretty straightforward.
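To make "pretty straightforward" a bit more concrete, here's a toy sketch of the back-and-forth the theorem licenses - two agents with a common prior repeatedly announce their posteriors for an event and condition on each other's announcements until they agree (roughly the dynamic studied by Geanakoplos & Polemarchakis in "We Can't Disagree Forever"). The state space, partitions, and event below are made up for illustration:

```python
from fractions import Fraction

def cell(partition, state):
    # the block of the partition containing `state`
    return next(c for c in partition if state in c)

def post(event, info):
    # uniform common prior over the states, so P(E | info) = |E ∩ info| / |info|
    return Fraction(len(event & info), len(info))

def refine(own, other, event):
    # split each of my information cells by "what would the other agent
    # announce if the true state were here?"
    new = []
    for c in own:
        groups = {}
        for s in c:
            groups.setdefault(post(event, cell(other, s)), set()).add(s)
        new.extend(frozenset(g) for g in groups.values())
    return new

def dialogue(event, part1, part2, true_state, max_rounds=20):
    for _ in range(max_rounds):
        a1 = post(event, cell(part1, true_state))
        a2 = post(event, cell(part2, true_state))
        print("agent 1 announces", a1, "| agent 2 announces", a2)
        if a1 == a2:
            return a1
        # both refine simultaneously, each using the other's pre-refinement partition
        part1, part2 = refine(part1, part2, event), refine(part2, part1, event)
    raise RuntimeError("did not converge")

# nine equally likely states; each agent observes a different coarse-graining
E = frozenset({0, 1, 4})
P1 = [frozenset({0, 1, 2}), frozenset({3, 4, 5}), frozenset({6, 7, 8})]
P2 = [frozenset({0, 3, 6}), frozenset({1, 4, 7}), frozenset({2, 5, 8})]
dialogue(E, P1, P2, true_state=0)
```

In this tiny example the dialogue happens to pin down the state exactly; in general the theorem only guarantees that the two announcements end up equal.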

But when you say "research", it seems like you're talking about humans, and there's not a Bayesian agent among us.  Neither the person in question nor the experts has any clue what their priors are or what evidence they're updating on.

You can still use some amount of Bayes-inspired logic in your updates.  "Update based on your level of surprise" is pretty solid in many cases.  The main problem I see is selection bias: which experts are actually sharing such statements, and how do you weight your surprise at different 'expert' pronouncements?
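As a toy illustration of why the selection matters (all numbers made up): compare a naive update that treats the k experts you happen to have heard assert p as the whole data set with an update that also models the silent experts - say there are n experts in total, a believer goes public only some of the time, and disbelievers stay quiet:

```python
import math

def naive_posterior(prior, k, s, f):
    # treat the k assertions you heard as k independent "belief" votes
    # and ignore everyone who said nothing
    odds = prior / (1 - prior) * (s / f) ** k
    return odds / (1 + odds)

def selection_aware_posterior(prior, k, n, s, f, v):
    # model the selection process: each of the n experts believes p with
    # probability s if p is true (f if false), and a believer goes public
    # with probability v; disbelievers stay silent.  Observing exactly k
    # public assertions is binomial evidence, so the n - k silent experts
    # count as (weak) evidence too.
    lik_true = math.comb(n, k) * (v * s) ** k * (1 - v * s) ** (n - k)
    lik_false = math.comb(n, k) * (v * f) ** k * (1 - v * f) ** (n - k)
    odds = prior / (1 - prior) * lik_true / lik_false
    return odds / (1 + odds)

# five public assertions of p, out of a field of fifty experts
print(naive_posterior(0.5, k=5, s=0.7, f=0.2))                         # ~0.998
print(selection_aware_posterior(0.5, k=5, n=50, s=0.7, f=0.2, v=0.3))  # ~0.17
```

With these made-up numbers the forty-five silent experts actually outweigh the five vocal ones, so the same observation pushes your credence in opposite directions depending on how you think the statements got selected to reach you - which is the weighting problem above in miniature.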