This is a question in the info-cascade question series. There is a prize pool of up to $800 for answers to these questions. See the link above for full background on the problem (including a bibliography) as well as examples of responses we’d be especially excited to see.


In my (Jacob's) work at Metaculus AI, I'm trying to build a centralised space for finding both forecasts and the reasoning underlying those forecasts. Having such a space might serve as a simple way for the AI community to avoid runaway info-cascades.

However, we are also concerned with situations where new forecasters overweight the current crowd opinion in their forecasts, compared to the underlying evidence, and see this as a major risk for the trustworthiness of forecasts to those working in AI safety and policy.

With this question, I am interested in previous attempts to tackle this problem, and how successful they have been. In particular:

  • What existing infrastructure has been historically effective for avoiding info-cascades in communities? (Examples could include short-selling to prevent bubbles in asset markets, or norms to share the causes rather than outputs of one’s beliefs)

  • What problems are not adequately addressed by such infrastructure?




The Systems Dynamics "Beer Game" seems like a useful example of how something like (but not the same as) an info-cascade happens: "The beer distribution game (also known as the beer game) is an experiential learning business simulation game created by a group of professors at MIT Sloan School of Management in early 1960s to demonstrate a number of key principles of supply chain management. The game is played by teams of at least four players, often in heated competition, and takes at least one hour to complete... The purpose of the game is to understand the distribution side dynamics of a multi-echelon supply chain used to distribute a single item, in this case, cases of beer."

Basically, passing information through a system with delays means everyone screws up wildly as the system responds in a nonlinear fashion to a linear change. In that case, Forrester and others suggest that changing viewpoints and using systems thinking is critical in preventing the cascades, and this seems to have worked in some cases.

(Please respond if you'd like more discussion.)

That's a really interesting effect, thanks for linking. I have two questions:

1) I'm confused about what the mechanism that produces the Bullwhip effect is.

One video suggested the following: as demand rapidly increases during time_step_1, suppliers aren't able to fully adapt and meet it, which causes an even larger shortage during time_step_2 and hence even larger demand; and somehow these effects compound down the supply chain.

Another mechanism is just that the demand signal is noisy, and so its variance will increase as one moves down the... (read more)

1) It's neither noise nor rapid increase - it's delayed feedback. Control theorists in engineering have this as a really clear, basic result: delayed feedback is really, really bad in various ways. There are entire books on how to do it well, but doing it without using these more complex techniques is bad.

2) You either hire a control theorist, or (more practically) you avoid the current feedback mechanism, and instead get people on the phone to talk about and understand what everyone needs, as opposed to relying on their delayed feedback in the form of numeric orders.
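As a toy illustration of how delayed feedback alone produces the oscillation, here is a minimal sketch (my own illustration, not the Beer Game itself; the single-stage setup, the naive ordering policy, and all the numbers are illustrative assumptions):

```python
def simulate(delay, steps=40, target=12):
    """One supply stage: each step, order enough to cover demand plus the
    inventory shortfall, naively ignoring orders already in the pipeline
    (the classic mistake the Beer Game punishes)."""
    inventory = float(target)
    pipeline = [4.0] * delay                 # shipments placed but not yet arrived
    orders = []
    for t in range(steps):
        demand = 4.0 if t < 5 else 8.0       # one permanent step up in demand
        if delay:
            inventory += pipeline.pop(0)     # receive the shipment ordered `delay` steps ago
        inventory -= demand                  # serve demand (negative = backlog)
        order = max(0.0, demand + (target - inventory))  # naive policy
        if delay:
            pipeline.append(order)
        else:
            inventory += order               # instant replenishment
        orders.append(order)
    return orders

no_delay = simulate(0)
delayed = simulate(2)
# With no delay, orders settle at the new demand level (8.0) after a brief
# transient; with a 2-step delay the *same* policy overshoots, then swings
# between over- and under-ordering indefinitely.
```

The point of the sketch is that nothing stochastic or nonlinear is needed on the demand side: a single linear step in demand, fed through a delay the policy doesn't account for, is enough to produce sustained oscillation.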

We (jacobjacob and Benito) decided to award $50 (out of the total bounty of $800) to this answer.

It offers a practical example of a cascade-like phenomenon, which is both generally applicable and has real economic consequences. Also, the fact that it comes with a game for understanding and practising responses is rare and potentially quite valuable (I'm of the opinion that deliberate practice is currently a neglected virtue in the rationality/EA spheres).



Abstract: If information cascades (both upwards and downwards) are viewed as a problem of incentives, better incentive design holds some promise. This academic paper suggests a model in which making truth-finding rewards contingent on reaching a certain number of votes prevents down-cascades, and in which an informed (self-interested) choice of payout odds and threshold can also prevent up-cascades in the limit of a large population of predictors.

1) cf. avturchin from the question about distribution across fields, pointing out that up-cascades and down-cascades are both relevant concerns, in many contexts.

2) Consider information cascades as related to a problem of incentives -- in the comments of the Johnichols post referenced in the formalization question, multiple commenters point out that the model fails if agents seek to express their marginal opinion, rather than their true (posterior) belief. But incentives to be right do need to be built into any system that you're trying to pump energy into, so the question remains whether a different incentive structure could do better while still encouraging truth-finding.

3) Up-Cascaded Wisdom of the Crowd (Cong and Xiao, working paper) considers the information-aggregation problem in terms of incentives, analysing the incentives at play in an all-or-nothing crowdfunding model, like venture capital or Kickstarter (assuming that a 'no' vote is irrevocable like a 'yes' vote is): 'yes' voters win if there is a critical mass of other 'yes' voters and the proposition resolves to 'yes'; they lose if there is a critical mass and the proposition resolves to 'no'; they have zero loss/gain if 'yes' doesn't reach a critical mass; 'no' voters are merely abstaining from voting 'yes'.

Their main result is that if the payment of incentives is conditioned on the proposition gaining a fixed number of 'yes' votes, a population of symmetric, common-prior/private-info agents will avoid down-cascades, as a single 'yes' vote that breaks a down-cascade will not be penalized for being wrong unless some later agent intentionally votes 'yes' to put the vote over the 'yes' threshold. (An agent i with negative private info still should vote no, because if a later agent i' puts the vote over the 'yes' threshold based in part on i's false vote, then i expects to lose on the truth-evaluation, since they've backed 'yes' but believe 'no'.)

A further result from the same paper is that if the actor posing the proposition can set the payout odds and the threshold in response to the common prior and known info-distribution, then a proposition-poser attempting to minimize down-cascades (perhaps because they will cast the first 'yes' vote, and so can only hope to win if the vote resolves to 'yes') will be incentivized to set odds and a threshold that coincidentally minimize the chance of up-cascades. In the large-population limit, the number of cascades under such an incentive design goes to 0.

4) I suspect (but will not here prove) that augmenting Cong and Xiao's all-or-nothing "crowdfunding for 'yes'" design with a parallel "crowdfunding for 'no'" design -- i.e., 'no' voters win (resp. lose) iff there is a critical mass of 'no' voters and the proposition resolves 'no' (resp. 'yes') -- can further strengthen the defenses against up-cascades (by making it possible to cast a more informed 'no' vote conditioned on a later, more-informed agent deciding to put 'no' over the threshold).

A related idea in non-punishment of "wrong" reports that have insufficient support (again in the common-prior/private-info setting) comes from this paper [pdf] (presented at the same conference), which suggests collecting reports from all agents and assigning rewards/punishments by assuming that agents' reports represent their private signal, computing their posterior, and scoring this assumed posterior. Under the model assumptions, this makes it an optimal strategy for agents to truly reveal their private signal to the mechanism, while allo... (read more)
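The core of that scoring idea can be shown in the simplest binary case (my own sketch: a uniform prior, an assumed signal accuracy of 0.7, and the Brier rule standing in for whatever proper scoring rule the paper actually uses). The mechanism pretends the report *is* the private signal, computes the posterior that signal would imply, and scores that posterior against the outcome, which makes truthful reporting optimal:

```python
Q = 0.7  # assumed signal accuracy; uniform prior over a binary state

def implied_posterior(report):
    # Mechanism's assumption: the report IS the agent's private signal,
    # so the implied posterior P(state = 1 | signal = report) is:
    return Q if report == 1 else 1 - Q

def expected_score(report, belief):
    # Agent's expected negative Brier loss, taken under their true
    # belief that P(state = 1) = belief.
    p = implied_posterior(report)
    return -(belief * (p - 1) ** 2 + (1 - belief) * p ** 2)

# An agent whose signal was 1 (so their belief is Q) does better reporting 1:
assert expected_score(1, Q) > expected_score(0, Q)
# ...and symmetrically, an agent whose signal was 0 does better reporting 0:
assert expected_score(0, 1 - Q) > expected_score(1, 1 - Q)
```

Because the Brier rule is proper, the expected score is maximised by the posterior closest to the agent's true belief, and under the mechanism's assumption the only way to get that posterior scored is to report the signal truthfully.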

We (jacobjacob and Ben Pace) decided to award $250 (out of the total bounty of $800) to this answer. It does several important things.

  • It references existing (and novel) work in economics and mechanism design, which might have been time-consuming to discover otherwise
  • It distills a technical paper, which is a valuable service that is usually underfunded (academic institutions comparatively incentivise novel and surprising insights)
  • The insights provided are quite action-guiding, and caused me (jacobjacob) to have ideas for how one can experiment with new kin
... (read more)
A further result from the same paper is that if the actor posing the proposition can set the payout odds and the threshold in response to the common prior and known info-distribution, ...

This is a really cool result, but I'm confused about why it holds. Is the idea something like: the actor themself is uncertain about the value of the project, and the kickstarter also helps them find out whether it's worth doing, so up-cascades are costly in expectation (they might end up having to run some awful project)?

But if that is the mechanism, it seems to apply to any rational actor using a kickstarter, as opposed to having anything to do with minimizing down-cascades?

Nope! The paper's model for this result assumes that the value conditioned on success is known to the proposer, so that the proposer's only incentive is to maximize their own profits by setting the payout odds and threshold. The (non-obvious to me) result that the paper proves is that this coincidentally minimizes the probability of up-cascades.




Pretty sure you know this already, and it's not exactly infrastructure, but it seems like if you have a nice formal process for eliciting people's beliefs, then you want to explicitly ask them for their impressions, not credences (or alternatively for both).


[This comment is no longer endorsed by its author]