Defying logic, people given weak evidence can regard predictions supported by that evidence as less likely than if they had been given no evidence at all.
Consider the following statement: “Widespread use of hybrid and electric cars could reduce worldwide carbon emissions. One bill that has passed the Senate provides a $250 tax credit for purchasing a hybrid or electric car. How likely is it that at least one-fifth of the U.S. car fleet will be hybrid or electric in 2025?”
That middle sentence is the weak evidence. People who read the entire statement, or similar statements with the same three-sentence structure on other topics, rated the outcome in the final question as less likely than people who read the statement without the middle sentence. They did so even though other people who saw the middle sentence in isolation rated it as positive evidence for, in this case, higher adoption of hybrid and electric cars.
An indispensable principle of rational thought is that positive evidence should increase belief. In this paper, we demonstrate that people routinely violate this principle when predicting an outcome from a weak cause. In Experiment 1, participants given weak positive evidence judged outcomes of public policy initiatives to be less likely than participants given no evidence, even though the evidence was separately judged to be supportive. Experiment 2 ruled out a pragmatic explanation of the result, that the weak evidence implies the absence of stronger evidence. In Experiment 3, weak positive evidence made people less likely to gamble on the outcome of the 2010 United States mid-term Congressional election. Experiments 4 and 5 replicated these findings with everyday causal scenarios. We argue that this "weak evidence effect" arises because people focus disproportionately on the mentioned weak cause and fail to think about alternative causes.
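The normative principle can be stated in Bayesian terms: any evidence whose likelihood ratio exceeds 1 must raise the posterior probability of the hypothesis, however weakly it supports it. A minimal sketch of this updating rule (the function names are ours, for illustration only):

```python
def update_belief(prior: float, likelihood_ratio: float) -> float:
    """Bayesian update via the odds form of Bayes' rule.

    prior: P(H) before seeing the evidence, in (0, 1).
    likelihood_ratio: P(E | H) / P(E | not H); values > 1 mean
    the evidence supports H, even if only weakly.
    """
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Weak positive evidence (likelihood ratio just above 1) should
# nudge belief up, never down.
prior = 0.30
posterior = update_belief(prior, likelihood_ratio=1.5)
```

The weak evidence effect is the empirical finding that people's judgments move in the opposite direction from this rule: evidence with a likelihood ratio above 1 lowers their stated probability.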