This is a question in the info-cascade question series. There is a prize pool of up to $800 for answers to these questions. See the link above for full background on the problem (including a bibliography) as well as examples of responses we’d be especially excited to see.

Mathematically formalising info-cascades would be great.

Fortunately, it's already been done in the simple case. See this excellent LW post by Johnicholas, where he uses upvotes/downvotes as an example and shows that after the second person has voted, all future voters add zero new information to the system. His explanation using likelihood ratios is the most intuitive I've found.
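The likelihood-ratio argument can be sketched in code. This is a toy model of my own construction (not Johnicholas's exact setup, and the parameter names are hypothetical): each voter gets a private signal that is correct with some probability, naively treats every earlier vote as one more independent signal, and votes with their posterior. That naive counting is exactly what produces the cascade: once the tally reaches ±2, no private signal can flip the decision.

```python
import random

def cascade_demo(n_voters=10, p_correct=0.7, seed=0):
    """Toy binary cascade: each voter gets a private signal that is right
    with probability p_correct, sees the running vote tally, and votes for
    whichever option their posterior favours (ties broken by their own
    signal). Voters naively treat each prior vote as an independent signal,
    which is the mistake that locks the cascade in. Returns votes (+1/-1)."""
    rng = random.Random(seed)
    truth = +1                         # assume the post is genuinely good
    lr = p_correct / (1 - p_correct)   # likelihood ratio of one signal
    votes = []
    for _ in range(n_voters):
        signal = truth if rng.random() < p_correct else -truth
        tally = sum(votes)
        # Naive posterior odds: every earlier vote counted as a signal.
        odds = lr ** (tally + signal)
        vote = +1 if odds > 1 else -1 if odds < 1 else signal
        votes.append(vote)
    return votes
```

Once the running tally hits ±2, every later vote matches its sign regardless of the voter's private signal, so those votes carry no information.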

The Wikipedia entry on the subject is also quite good.

However, these two entries primarily explain how information cascades when people have to make a binary choice - good or bad, left or right, etc. The question I want to understand is how to think of the problem in a continuous case - do the problems go away? Or more likely, what variables determine the speed at which people update to one extreme? And how far toward that extreme do people go before they realise their error?

Examples of continuous variables include things like project time estimates, stocks, and probabilistic forecasts. I imagine it's very likely that significant quantitative work has been done on the case of market bubbles; if anyone can write an answer summarising that work and explaining how to apply it to other domains like forecasting, that would be excellent.


A relevant result is Aumann's agreement theorem, and its offshoots in which two Bayesians who repeat their probability judgements back and forth will converge on a common belief. Note that this belief isn't always the one they would hold if they both knew all of each other's observations: suppose we both privately flip coins and state our probabilities that we got the same result; we'll spend all day saying 50% without actually learning the answer. Nevertheless, you shouldn't expect probabilities to badly asymptote in expectation.
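The coin-flip example can be made concrete. This is a minimal sketch of the scenario in the comment (function names are my own): each agent's probability that the two fair coins match is 1/2 whatever their own coin shows, so announcing it reveals nothing, and the agents "agree" from round one without ever learning the answer.

```python
from fractions import Fraction

def p_match(my_coin):
    """Posterior probability that the two coins match, given only my own
    flip: the other coin is fair and independent, so exactly one of its
    two equally likely outcomes matches mine."""
    matches = sum(1 for other in ('H', 'T') if other == my_coin)
    return Fraction(matches, 2)

def exchange(coin_a, coin_b, rounds=3):
    """Both agents announce p_match each round. The announcement is 1/2
    regardless of the private coin, so it carries no information and
    beliefs never move, even though the agents agree immediately."""
    return [(p_match(coin_a), p_match(coin_b)) for _ in range(rounds)]
```

This is the degenerate case: agreement is reached, but the shared belief differs from what either agent would conclude if they could see both coins.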

This makes me think that you'll want to think about boundedly rational models where people can only recurse 3 times, or something. [ETA: or models where some participants in the discourse are adversarial, as in this paper].

The Wikipedia entry on the subject is also quite good.

The link is not to Wikipedia itself, but to a pushy and confusing wiki-reader app. Consider fixing it.

On a different note, have you tried numerical simulations of the phenomenon you're describing? Simulate multiple agents interacting under various conditions, and watch whether an equilibrium emerges, and if so, what kind.
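A continuous-variable version of such a simulation might look like the following. This is purely an illustrative assumption of mine (the model and parameter names are hypothetical, not from the thread): each agent receives a noisy private signal of the true value and publishes a weighted average of the running public consensus and their own signal. With a high social weight, early agents' noise gets locked in and later private information barely moves the consensus.

```python
import random

def continuous_cascade(n_agents=50, truth=10.0, noise=2.0,
                       social_weight=0.9, seed=1):
    """Toy sequential-estimation model: agent i observes a private signal
    drawn from N(truth, noise), then publishes a mix of the mean of all
    previous public estimates (weight social_weight) and their own signal.
    High social_weight means early noise dominates the final consensus."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_agents):
        signal = rng.gauss(truth, noise)
        if estimates:
            consensus = sum(estimates) / len(estimates)
            est = social_weight * consensus + (1 - social_weight) * signal
        else:
            est = signal  # the first agent has only their private signal
        estimates.append(est)
    return estimates
```

Sweeping `social_weight` is one way to probe the question in the post: it controls how fast estimates collapse toward the early consensus, and how far from the truth they can settle before private signals stop mattering.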

We haven't, but there's some interesting stuff in the economics literature.

Note that the post you linked by Johnicholas contains a mistake that the author admits invalidates his point.

I think that the rewrite mentioned was actually made, and the post as it stands is correct.

(Although in this case it's odd to call it an information cascade: in the situation described in the post, people have no reason to think that a +50 karma post is any better than a +10 karma post, so it isn't really information that cascades, just karma.)