Let me propose a charitable interpretation of what brazil84 is saying (he can correct me if I am wrong). Here is an example:

We are discussing who committed a crime. There are three and only three suspects: Peter, Paul and Mary. Mary has an excellent alibi, so she's basically out of the running. There is some evidence both for Peter's and for Paul's guilt. Let's say we agree that the probabilities of each being guilty are: Mary 2%, Peter 49%, Paul 49%.

Then a witness comes up who saw someone wearing a dress at the scene of the crime. Since men are a priori unlikely to wear dresses, this is evidence in favor of Mary and against both Peter and Paul.
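To make the arithmetic concrete, here is a minimal sketch of the update, assuming made-up likelihoods for the dress testimony: say the witness would report a dress 90% of the time if Mary were the culprit, and only 5% of the time for either of the men. The exact numbers are invented for illustration; only their rough shape matters.

```python
# A minimal sketch of the Bayes update for this example.
# The likelihoods for the dress testimony are made up purely for illustration.

priors = {"Mary": 0.02, "Peter": 0.49, "Paul": 0.49}

# Assumed P(witness reports a dress | suspect X is the culprit)
likelihoods = {"Mary": 0.90, "Peter": 0.05, "Paul": 0.05}

# Bayes' rule: posterior is proportional to prior * likelihood
unnormalized = {s: priors[s] * likelihoods[s] for s in priors}
total = sum(unnormalized.values())
posteriors = {s: w / total for s, w in unnormalized.items()}

for s in priors:
    print(f"{s}: prior {priors[s]:.2f} -> posterior {posteriors[s]:.2f}")

# Approximate output:
#   Mary: prior 0.02 -> posterior 0.27
#   Peter: prior 0.49 -> posterior 0.37
#   Paul: prior 0.49 -> posterior 0.37
```

On these invented numbers, the dress testimony drags both "Peter did it" and "Paul did it" down from 49% to roughly 37%, so it contradicts each of those hypotheses, yet Paul stays exactly as likely as Peter and still ahead of Mary. That is the sense in which the evidence doesn't help either side of the argument.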

I think that's pretty close. If I am arguing that Paul committed the murder (and you are arguing that Peter committed the murder), it doesn't really help your argument to point out that there is evidence the murderer was wearing a dress, since it undermines your own position just as much as it undermines the position I have taken.

Getting back to the original discussion, another poster pointed out that my "contested cases later" hypothesis is undermined by the observation that for some judges there is a zero percent approval rate...

TheOtherDave: I certainly agree that the situation you describe can occur. (I could quibble about whether the probability shift for Mary actually depends on the quality of her alibi here, as that seems like double-counting evidence, but either way it's entirely possible for the posterior probabilities to come out the way you describe.)

And, OK, sure, if "more likely to be correct" is understood as "more likely [than some other hypothesis] to be correct", rather than "more likely [than it was before that evidence arrived] to be correct", I agree that the phrase describes the situation. That is, as you say, a bit confusing, but not false.

So, OK. Provisionally adopting that interpretation and returning to the original comment [http://lesswrong.com/lw/58y/the_bias_you_didnt_expect/7lhy]... their initial comment was "situations can arise where evidence comes out which contradicts a hypothesis but still makes that hypothesis more likely to be correct". Which, sure, if I understand that to mean "more likely [than some other hypothesis] to be correct", is absolutely true.

All of which was meant, I think, to refute bigjeff5's comment [http://lesswrong.com/lw/58y/the_bias_you_didnt_expect/7lf7?context=1#7lf7] about what sort of evidence should increase confidence in the belief that there is no bias. Which I understood to refer to increasing confidence relative to earlier confidence.

The Bias You Didn't Expect

by Psychohistorian · 1 min read · 14th Apr 2011 · 92 comments



There are few places where society values rational, objective decision making as much as it does in judges. While there is a rather cynical discipline called legal realism that says the law is really based on quirks of individual psychology, "what the judge had for breakfast," there's a broad social belief that the decisions of judges are unbiased. And where they aren't unbiased, they're biased for Big, Important, Bad reasons, like racism or classism or politics.

It turns out that legal realism is totally wrong. It's not what the judge had for breakfast. It's how recently the judge had breakfast. A new study (media coverage) of Israeli judges shows that, when making parole decisions, they grant parole about 65% of the time right after meal breaks, and the rate falls almost all the way to 0% right before breaks and at the end of the day (i.e. as far from the last break as possible). There's a relatively linear decline between the two points.

Think about this for a moment. A tremendously important decision, determining whether a person will go free or spend years in jail, appears to be substantially determined by an arbitrary factor. Also, note that we don't know if it's the lack of food, the anticipation of a break, or some other factor that is responsible for this. More interestingly, we don't know where the optimal result occurred. It's probably not the near-0% at the end of each work period. But is it the post-break high of 65%? Or were the judges being too nice? We know there was bias, but we still don't know when the bias occurred.

There are at least two lessons from this. The little, obvious one is to be aware of one's own physical limitations. Avoid making big decisions when tired or hungry - though this doesn't mean you should try to make all your decisions right after eating. For particularly important decisions, consider contemplating them at different times if you can. Think about one thing Monday morning, then Wednesday afternoon, then Saturday evening, going only to the point of getting an overall feel for an answer, not to the point of settling on a solid conclusion. Take notes, and then compare them. This may not work perfectly, but it may help you notice inconsistencies. For big questions, the wisdom of crowds may be helpful - unless it's been a while since most of the crowd had breakfast.

The bigger lesson is one of humility. This provides rather stark evidence that our decisions are not under our control to the extent we believe. We can be influenced by factors we don't even suspect. Even knowing we have been biased, we may still be unable to identify what the correct answer was. While using formal rules and logic may be one of the best approaches to minimizing such errors, even formal rules can fail when applied by biased agents. The biggest, most condemnable biases - like racism - are in some ways less dangerous, because we know we need to look out for them. It's the bias you don't even suspect that can get you. The authors of the study think they basically got lucky with these results - if the effect had been to make decisions arbitrary rather than to increase rejections, this would not have shown up.

When those charged with making impartial decisions that control people's lives are subject to arbitrary forces they never suspected, it shows both how important it is, and how much more we can do, to be less wrong.

 
