Cross-posted from Overcoming Bias. Comments there.

***

If everyone knows that a tenth of the population dishonestly claims to observe alien spaceships, it can be very hard for an honest alien-spaceship observer to communicate the fact that she has actually seen one.

In general, if the true state of the world is seen as not much more likely than the possibility that you would somehow send the corresponding message falsely, it's hard to communicate the true state.

You might think that for the signal to get drowned out, there would need to be quite a bit of noise relative to true claims, or acting on true claims would need to be relatively unimportant. Yet it seems to me that a relatively small amount of noise could overwhelm communication, via feedback.

Suppose you have a network of people communicating one-on-one with one another. There are two possible mutually exclusive states of the world – A and B – which individuals occasionally get some info about directly. They can tell each other about info they got directly, and also about info they heard from others. Suppose that everyone wants both themselves and others to believe the truth, but also likes to say that A is true (or to suggest that it is more likely). However, making pro-A claims is a bit costly for some reason, so it's not worthwhile if A is false. Then everyone is honest, and everyone can trust what others say.

Now suppose that the costs people experience from making claims about A vary among the population. In the lowest reaches of the distribution, it’s worth lying about A. So there is a small amount of noise from people falsely claiming A. Also suppose that nobody knows anyone else’s costs specifically, just the distribution that costs are drawn from.

Now when someone gives you a pro-A message, there's a small chance that it's false. This slightly reduces the benefit to you of passing on such messages, since the value of bringing others closer to the truth is diminished. Yet you still bear the same cost. If your cost of sending pro-A messages was already near the threshold of being too high, you will now stop sending them.

From the perspective of other people, this decreases the probability that a given pro-A message is truthful, because some of the honest pro-A messages have been removed. This makes passing on pro-A messages even less valuable, so people further down the spectrum of costs also find it not worthwhile. And so on.
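
To put the loop in rough symbols (the notation is mine and doesn't appear in the setup above, and it ignores some details, such as the difference between direct and relayed claims): let p_t be the probability listeners assign to a received pro-A claim being honest in round t, e the chance a given person has direct evidence of A, F the distribution of costs, d the private benefit of saying A regardless of its truth, and v the value of relaying a claim that others believe and that is true. Honest relaying is then worth its cost only for costs below d + v·p_t, while lying is worth it for costs below d, so roughly

p_{t+1} = e·F(d + v·p_t) / [e·F(d + v·p_t) + (1 − e)·F(d)]

Each fall in p_t removes some honest claims from circulation while (in this simple version) leaving the liars in place, which pushes p down again.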

At the same time as the value of passing on A-claims declines because they are likely to be false, it also declines because others anticipate their falsehood and so stop listening to them. So even if you directly observe evidence of A in nature, the value of passing on such a claim declines (though it remains higher than the value of passing on an indirect claim).

I haven’t properly modeled this, but I guess that for many distributions of costs this soon reaches an equilibrium where everyone who still claims A honestly finds it worthwhile. But it seems that for some distributions, eventually nobody ever claims A honestly (though some people still say A, since they would have said it either way, and sometimes A is in fact true).
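
Here is a minimal toy simulation of that loop, just to see its shape – assuming a uniform cost distribution, using the rough notation above, and with parameter values that are entirely made up:

```python
import numpy as np

def trust_path(d=0.2, v=1.0, evidence_rate=0.1,
               cost_low=0.0, cost_high=1.5, rounds=30):
    """Iterate listeners' trust in pro-A claims, round by round.

    d             : private benefit of saying A, regardless of its truth
    v             : value of relaying a claim that is actually true
    evidence_rate : chance a given person has direct evidence of A
    Costs are uniform on [cost_low, cost_high]; all numbers are made up.
    """
    def F(x):  # fraction of the population with cost below x
        return float(np.clip((x - cost_low) / (cost_high - cost_low), 0.0, 1.0))

    p = 1.0              # start with pro-A claims fully trusted
    path = [p]
    for _ in range(rounds):
        honest = evidence_rate * F(d + v * p)   # informed people who still bother
        liars = (1 - evidence_rate) * F(d)      # people for whom lying is worth it
        p = honest / (honest + liars) if honest + liars > 0 else 0.0
        path.append(p)
    return path

# Trust unravels from 1.0 and settles at an interior equilibrium (about 0.17 here):
print([round(x, 3) for x in trust_path()])

# With a weaker truth motive, trust settles barely above the uninformative base rate
# (about 0.11, versus evidence_rate = 0.1): pro-A claims carry almost no information.
print([round(x, 3) for x in trust_path(v=0.2)])
```

With these made-up numbers the first run settles at a moderately informative equilibrium and the second slides most of the way to the uninformative base rate, which is roughly the contrast guessed at above.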

In this model the source of noise was liars at the bottom of the distribution of costs. Their number should also change during the above process. As the value of passing on A-claims declines, the cost threshold below which it is worth lying about A also lowers, removing liars from the top of the lying range. This offsets the loss of honest claims, and so leads to equilibrium faster. If the lying threshold falls below everyone's costs, lying ceases. If others knew that this had happened, they could trust A-claims again. This wouldn't help them with dishonest B-claims, which could potentially be rife, depending on the model. However, people should soon lose interest in sending false B-claims, so this would be fixed in time. But by then it will be worth lying about A again. This is all less complicated if the initial noise is exogenous.
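
One crude way to capture that in the toy code above would be to scale the liars' payoff by trust as well – for instance `liars = (1 - evidence_rate) * F(d * p)`, which assumes that claiming A only pays off to the extent that listeners believe it. With that change the pool of liars shrinks alongside the pool of honest claimers as trust falls.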

