A recent publication in Nature argues that the best way of getting at the true answer to a question is to ask people both for their own view of the matter and for their estimate of the proportion of people who would agree with them.

Then, it's claimed, the surprisingly popular answer is likely to be the correct one.

In this post, I'll attempt to sketch a justification for why this works, as far as I understand it.

First, an example of the system working well:

 

Capital City

Canberra is the capital of Australia, but many people think the actual capital is Sydney. Suppose only a minority knows that fact, and people are polled on the question:

Is Canberra the capital of Australia?

Then those who think that Sydney is the capital will consider the question trivially false, and will generally not see any reason why anyone would believe it true. They will answer "no" and estimate that a high proportion of people will also answer "no".

The minority who know the true capital of Australia will answer "yes". But most of them will likely know a lot of people who are mistaken, so they won't predict a high proportion of "yes" answers. Even if some of them do, there are few of them, so the population estimate for the population estimate of "yes" (that is, the average, across all respondents, of their estimates of the proportion answering "yes") will still be low.

Thus "yes", the correct answer, will be surprisingly popular.

A quick sanity check: if we asked instead "Is Alice Springs the capital of Australia?", then those who believe Sydney is the capital will still answer "no" and claim that most people would do the same. Those who believe the capital is Canberra will answer similarly. And since there is no large hidden group of people who believe Alice Springs is the capital, "yes" will not be surprisingly popular.

What is important here is that adding true information to the population tends to move the proportion of people who believe the truth more than it moves people's estimates of that proportion.
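To make this concrete, here is a minimal sketch of the decision rule in Python, with made-up numbers for the Canberra poll. The function name and the specific figures are my own invention for illustration, not the estimator from the Nature paper:

```python
# Sketch of the "surprisingly popular" rule for a yes/no question.
# Each respondent gives (a) their own answer and (b) their estimate of the
# fraction of respondents who will answer "yes".

def surprisingly_popular(answers, predicted_yes_fractions):
    """Pick the answer that is more popular than the crowd predicted."""
    actual_yes = sum(answers) / len(answers)          # actual fraction answering "yes"
    predicted_yes = sum(predicted_yes_fractions) / len(predicted_yes_fractions)
    return "yes" if actual_yes > predicted_yes else "no"

# Hypothetical Canberra poll: only 40% answer "yes" (the truth), and even the
# well-informed expect few others to agree with them.
answers = [True] * 40 + [False] * 60
predictions = [0.5] * 40 + [0.2] * 60   # knowers guess 50% "yes", the mistaken guess 20%
print(surprisingly_popular(answers, predictions))   # -> "yes" (0.40 actual vs 0.32 predicted)
```

"Yes" wins despite being the minority answer, because its actual popularity (40%) exceeds the average prediction of its popularity (32%).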

 

No differential information:

Let's see how that setup could fail. First, it could fail in a trivial fashion: the Australian Parliament and the Queen secretly conspire to move the capital to Melbourne. As long as they aren't included in the sample, nobody knows about the change. In fact, nobody can distinguish a world in which the move was vetoed from one where it passed. So the proportion of people who know the truth - those few deluded souls who already thought the capital was in Melbourne, for some reason - is no higher in the world where it's true than in the one where it's false.

So the population opinion has to be truth-tracking, not in the sense that the majority opinion is correct, but in the sense that more people believe X is true, relatively, in a world where X is true versus a world where X is false.
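In symbols (my own shorthand, not notation from the paper), the requirement is roughly:

$$\Pr(\text{a random respondent believes } X \mid X \text{ is true}) \;>\; \Pr(\text{a random respondent believes } X \mid X \text{ is false})$$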


Systematic bias in the estimated population proportion:

A second failure mode could happen when people are systematically biased in their estimate of the general opinion. Suppose, for instance, that the following headline went viral:

"Miss Australia mocked for claims she got a doctorate in the nation's capital, Canberra."

And suppose that those who believed the capital was in Sydney thought "stupid beauty contest winner, she thought the capital was in Canberra!", while those who knew the true capital thought "stupid beauty contest winner, she claimed to have a doctorate!". So the actual proportion holding each belief barely changes at all.

But then suppose everyone reasons: "now, I'm smart, so I won't update on this headline, but some other people, who are idiots, will start to think the capital is in Canberra." Then they will raise their estimate of the proportion of people answering "yes". And Canberra may no longer be surprisingly popular, just expectedly popular.
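With the same made-up numbers as in the sketch above, the bias looks like this: the split of answers is unchanged, but everyone inflates their guess of how many others will say "yes", and the surprise margin disappears.

```python
# Same hypothetical poll, but after the viral headline everyone raises their
# estimate of how many others will answer "yes".
actual_yes = 0.40                                     # still 40% answer "yes"
predictions = [0.6] * 40 + [0.5] * 60                 # inflated predictions of "yes"
predicted_yes = sum(predictions) / len(predictions)   # 0.54
print("yes" if actual_yes > predicted_yes else "no")  # -> "no"
```

The correct answer is now less popular than people expected, so the rule no longer picks it.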

 

Purely subjective opinions

How would this method work on a purely subjective opinion, such as:

Is Picasso superior to Van Gogh?

Well, there are two ways of looking at this. The first is to say that this is a purely subjective opinion, so people's beliefs are not truth-tracking, and the answers don't give any information. Indeed, if everyone accepts that the question is purely subjective, then there is no such thing as private (or public) information relevant to this question at all. Even if there were a prior on the question, no one could update it on any information.

But now suppose that there is a judgement that is widely shared, that, I don't know, blue paintings are objectively superior to paintings that use less blue. Then suddenly answers to that question become informative again! Except now, the question that is really being answered is:

Does Picasso use more blue than Van Gogh?

Or, more generally:

According to widely shared aesthetic criteria, is Picasso superior to Van Gogh?

The same applies to moral questions like "is killing wrong?". In practice, that is likely to reduce to:

According to widely shared moral criteria, is killing wrong?

 

Comments

I wonder how well it would work on questions like:

"Does homeopathy cure cancer".

Or, in general, on questions where a minority knows the majority won't side with them, but the majority might not know how many people hold the fringe view.


I've yet to dive deep into this topic, but a friend of mine mentioned that the Nature article seems similar to the Bayesian Truth Serum.

I think it's the same thing.

Nice article!

the population estimate for the population estimate of "yes", will still be low. Thus "yes", the correct answer, will be surprisingly popular.

I missed the 'population estimate of the population estimate' part ... took me a while to understand why you said surprisingly popular :)

Also, this aeon article explains the above concept pretty well.

My takeaways from the aeon article:

  • Experts are more likely to recognise that other people will disagree with them. Novices betray themselves by being unable to fathom any position other than their own.
  • This 'meta-knowledge', the ability to know your own beliefs and the reasons for them, is one of the best predictors of expertise. Studies that incorporate such data and discard predictions from people who score very poorly on the meta-knowledge scale generally perform better (except when everybody believes something that will in fact happen).
  • The reason for this is that the crowd often sources its predictions from the same resources, so the predictions 'double-count' those sources. But experts, or people who are more aware of both sides of the story, will be able to give more rational views.

So essentially, when doing a poll or survey, if you ask people what they believe and then ask them what proportion of other people they think believe the same thing, you can differentiate between novices (who are often unaware of alternative views/beliefs) and experts (who will generally know that other views exist and can better predict how many people believe what). Discard novice opinions and use expert opinions to find the answer.

My takeaways from the aeon article:

Those are real-world explanations for why the method might work, which helps reinforce the method in practice. But the mathematics works out even in situations where everyone is a rational Bayesian expert with access to different types of private information.

This feels like an attempt to reverse the Dunning–Kruger effect. Not exactly, but there is a similar assumption that "people who believe an incorrect X are usually unaware that answers other than X exist (otherwise they would start doubting whether X is the correct answer)".

Which probably works well for non-controversial topics. You may be wrong about the capital of Australia, but you don't expect there to be a controversy about the topic. If you are aware that many people disagree with you on what the capital of Australia is, you are aware that there is a lot of ignorance about the topic, and you have probably double-checked your answer. People who get it wrong probably don't even think about the alternatives.

But, like in the example whpearson gave, there are situations where people are aware that others disagree with them, but they have a handy explanation, such as "it's all a Big Pharma conspiracy", in which case they will neither reduce their certainty, nor research the topic impartially.

In other words, this may work for honest mistakes, but not for tribalism.

"it's all a Big Pharma conspiracy", in which case they will neither reduce their certainty, nor research the topic impartially.

The method presupposes rational actors, but is somewhat resilient to non-rational ones. If the majority of people know of the conspiracy theorists, then the conspiracy theory will not be a surprisingly popular option.

In the Nature Podcast from January 26th, the author of the paper, Dražen Prelec, said that he developed the hypothesis for this paper by means of some fairly involved math, but that he discovered afterwards he needed only a simple syllogism of a sort that Aristotle would have recognized. Unfortunately, the interviewer didn't ask him what the syllogism was. I spent ~20 minutes googling to satisfy my curiosity, but I found nothing.

If you happen to know what syllogism he meant, I'd be thrilled to hear it. Also it would suit the headline here well.

Maybe something like "there is one truth, but many ways of being wrong, so those who don't know the truth will spread their population estimates too widely?"

Two writers argue for incompatible positions A and B. The first writer says A, and does not mention B. The second writer says B, and mentions that they disagree with A. The writer who mentioned both positions is more likely to be correct.

This is the same effect, scaled up to a survey.

Not really. Your situation is one where the writers are being rhetorically honest or not. The method presented here is one where there are structural incentives for the players to be accurate (no one gains points for volunteering extra information), and the truth flows more naturally from the surprisingly popular position.

If the question, "Which interpretation of quantum mechanics is correct?" is posed to physicists, my guess is that the surprisingly popular opinion would be: the Everett interpretation, which in my opinion – and I consider myself a mild expert in the foundations of QM – is the correct one.

The exact same argument could be made for pilot wave theory.