Weak supporting evidence can undermine belief

by Lightwave · 1 min read · 29th Sep 2011 · 8 comments

Personal Blog

Article: Weak supporting evidence can undermine belief in an outcome

Defying logic, people given weak evidence can regard predictions supported by that evidence as less likely than if they aren’t given the evidence at all.

...

Consider the following statement: “Widespread use of hybrid and electric cars could reduce worldwide carbon emissions. One bill that has passed the Senate provides a $250 tax credit for purchasing a hybrid or electric car. How likely is it that at least one-fifth of the U.S. car fleet will be hybrid or electric in 2025?”

That middle sentence is the weak evidence. People presented with the entire statement (or similar statements with the same three-sentence structure on different topics) gave lower answers to the final question than people who read the statement without the middle sentence. They did so even though other people who saw the middle sentence in isolation rated it as positive evidence for, in this case, higher adoption of hybrid and electric cars.


Paper: When good evidence goes bad: The weak evidence effect in judgment and decision-making

Abstract:

An indispensable principle of rational thought is that positive evidence should increase belief. In this paper, we demonstrate that people routinely violate this principle when predicting an outcome from a weak cause. In Experiment 1, participants given weak positive evidence judged outcomes of public policy initiatives to be less likely than participants given no evidence, even though the evidence was separately judged to be supportive. Experiment 2 ruled out a pragmatic explanation of the result, that the weak evidence implies the absence of stronger evidence. In Experiment 3, weak positive evidence made people less likely to gamble on the outcome of the 2010 United States mid-term Congressional election. Experiments 4 and 5 replicated these findings with everyday causal scenarios. We argue that this "weak evidence effect" arises because people focus disproportionately on the mentioned weak cause and fail to think about alternative causes.


8 comments
[anonymous] · 10y · 39

Generally, if you're given evidence for something, the evidence-giver is trying to convince you of that something. If you're given only weak evidence, that itself is evidence that there is no strong evidence (if there is strong evidence, why didn't they tell you that instead?), and so in some circumstances it could be rational to downgrade your probability estimate.
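This filtered-evidence argument can be made precise with a toy Bayesian model. The sketch below is illustrative only; all the probabilities are assumptions I've made up, not numbers from the paper. It models a persuader who always reports the strongest evidence they have, so hearing only the weak item also tells you the strong item doesn't exist.

```python
# Toy model of "weak evidence implies no strong evidence".
# All probabilities are illustrative assumptions, not data from the paper.

def posterior(prior, like_true, like_false):
    """P(outcome | data) via Bayes' rule for a binary outcome."""
    num = prior * like_true
    return num / (num + (1 - prior) * like_false)

PRIOR = 0.5                            # P(outcome) before hearing anything
P_STRONG = {True: 0.7, False: 0.1}     # P(strong evidence exists | outcome)
P_WEAK = {True: 0.6, False: 0.4}       # P(weak evidence exists | outcome)

# Judged in isolation, the weak evidence is supportive: it raises the estimate.
naive = posterior(PRIOR, P_WEAK[True], P_WEAK[False])

# But a persuader reports the strongest evidence they have, so
# P(only weak reported | outcome) = P(no strong | outcome) * P(weak | outcome).
filtered = posterior(PRIOR,
                     (1 - P_STRONG[True]) * P_WEAK[True],    # 0.3 * 0.6 = 0.18
                     (1 - P_STRONG[False]) * P_WEAK[False])  # 0.9 * 0.4 = 0.36

print(round(naive, 3), round(filtered, 3))  # → 0.6 0.333
```

With these assumed numbers the filtered posterior (0.333) lands below the prior (0.5), even though the same weak evidence taken at face value would have raised it to 0.6, which is exactly the downgrade the comment describes.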

Sure, this makes perfect sense in a political environment - or in the ancestral environment, where I'm sure this kind of thing was very important to breeding (I could even take a shot at an evolutionary argument for this kind of instinct!). But that instinct is a net positive only in political situations; our current environment is significantly more factual-uncertainty based than political-uncertainty based. This may make the instinct a net negative.

Is that true? Surely even on a purely factual matter, it is still the case that he who makes a claim will typically give his best evidence for the claim, so if the best evidence offered is weak, that still suggests stronger evidence doesn't exist.

> that he who makes a claim will typically give his best evidence for the claim, so if the best evidence offered is weak,

If a person is making a claim to you and knowing whether this claim is right or wrong is important, things are already pretty political! I was thinking of a scientific study providing weak evidence in favour of something, and this heuristic hurting our estimates.

Also, in this case we have evidence that there is only token support in Congress for public measures to improve adoption. I'm kind of surprised the control group found this evidence to be net positive, really. And I wonder if the evidence gets evaluated a little differently when people have to use it rather than just evaluate it.

When I read the title I had expected that this was the point of the post. Perhaps because I've been intending to write a post to that effect for the last three years or so.

So I agree completely.

[anonymous] · 10y · 1

Eliezer's post "What Evidence Filtered Evidence?" deals with this idea.

Bayesians should update not just on the signs and portents that a person has reported to them, but also take into account the chain of cause and effect that led the person to report that evidence. So the paper is wrong and what Khoth said is sensible.

This actually feels like a variant on overjustification. In the absence of evidence, people are content with their belief on whatever intuitive basis led them to adopt it in the first place. When provided with weak evidence from an external source, they consciously reach for a better reason to believe and don't find one.