There are few places where society values rational, objective decision making as much as it values it in judges. While there is a rather cynical discipline called legal realism that says the law is really based on quirks of individual psychology - "what the judge had for breakfast" - there's a broad social belief that the decisions of judges are unbiased. And where they aren't unbiased, they're biased for Big, Important, Bad reasons, like racism or classism or politics.

It turns out that legal realism is totally wrong. It's not what the judge had for breakfast. It's how recently the judge had breakfast. A new study (media coverage) on Israeli judges shows that, when making parole decisions, they grant parole about 65% of the time right after meal breaks, with the rate falling almost all the way to 0% right before breaks and at the end of the day (i.e., as far from the last break as possible). There's a relatively linear decline between the two points.

Think about this for a moment. A tremendously important decision, determining whether a person will go free or spend years in jail, appears to be substantially determined by an arbitrary factor. Also, note that we don't know if it's the lack of food, the anticipation of a break, or some other factor that is responsible for this. More interestingly, we don't know where the optimal result occurred. It's probably not the near 0% at the end of each work period. But is it the post-break high of 65%? Or were judges being too nice? We know there was bias, but we still don't know when bias occurred.

There are at least two lessons from this. The little, obvious one is to be aware of one's own physical limitations. Avoid making big decisions when tired or hungry - though this doesn't mean you should try to make decisions right after eating. For particularly important decisions, consider contemplating them at different times, if you can. Think about one thing Monday morning, then Wednesday afternoon, then Saturday evening, going only to the point of getting an overall feel for an answer, and not to the point of really making a solid conclusion. Take notes, and then compare them. This may not work perfectly, but it may help you realize inconsistencies, which could help. For big questions, the wisdom of crowds may be helpful - unless it's been a while since most of the crowd had breakfast.

The bigger lesson is one of humility. This provides rather stark evidence that our decisions are not under our control to the extent we believe. We can be influenced by factors we don't even suspect. Even knowing we have been biased, we may still be unable to identify what the correct answer was. While using formal rules and logic may be one of the best approaches to minimizing such errors, even formal rules can fail when applied by biased agents. The biggest, most condemnable biases - like racism - are in some ways less dangerous, because we know we need to look out for them. It's the bias you don't even suspect that can get you. The authors of the study think they basically got lucky with these results - if the effect had been to make decisions arbitrary rather than to increase rejections, this would not have shown up.

When those charged with making impartial decisions that control people's lives are subject to arbitrary forces they never suspected, it shows both how important it is, and how much more we can do, to be less wrong.


91 comments

Hypothesis: people eat a poor diet because their decision-making ability is most impaired right at the point where they are deciding what to eat next.

Related advice: "Don't go food shopping when you're hungry." So would the best option be to pre-commit to your next meal immediately after finishing the current one?
I've found it helps (not completely, but noticeably) to remember how various foods leave me feeling.
Your suggestion has a possible "remembering just how good my favourite unhealthy food tastes" counter-effect. Other than that, it sounds like a pretty slam-dunk test, for any particular individual, as to whether the effect I'm speculating about is actually real. Anyone want to commit to trying this? (improving diet isn't a goal for me right now unfortunately)
That people buy more, less healthy food when they are hungry is pretty well backed up, I understand. (Googling gives this right away) My experience is that how food tastes changes massively depending on my hunger, so you need to bear in mind that "how good my favourite food tastes" will likely be "not very" when you've just eaten. For example, I play sport for about 3 hours on Sundays, and immediately after (before leaving the pitches) I drink a litre of milk, mixed with milk powder (to make double strength milk), mixed with chocolate Ovaltine powder. It tastes great to me at that point in time. I tried it before a practise once, and it was just awful.
As the proverb goes, hunger is the best sauce.
Perhaps it’s the other way around? The study only suggests that the time of day affects the decision, not that worse decisions are made when hungry.
Very interesting suggestion. Thanks.

they grant about 65% after meal breaks, and about 10% right before breaks and at the end of the day

While the idea that judges are more lenient shortly after eating is plausible, this effect size does not pass sanity checking. I would like to see this reproduced. The judgments to be made should break down into some one-sided decisions and some borderline decisions, but these numbers imply not only that one-sided decisions are uncommon, but also that borderline decisions aren't influenced much by other things like the eloquence of the lawyers.

I agree the effects are dramatic and that replication would be ideal. But if you look at the study, it really does say what I said (actually, it drops close to 0%), and it seems like they control for severity of offense (which would hardly be time-of-day correlated anyhow). The N is over a thousand, with 8 different judges.

It wouldn't indicate that lawyer eloquence is irrelevant, only that lawyer eloquence is much less significant when the judge is hungry or tired. That actually isn't such a big stretch.

The true effect may be less noticeable than the effect observed in this study, but the study is nonetheless rather strong evidence that some meaningful effect exists. I bet there will be a lot of followups to this; it's rather staggering.

It wouldn't indicate that lawyer eloquence is irrelevant, only that lawyer eloquence is much less significant when the judge is hungry or tired. That actually isn't such a big stretch.

Or possibly that lawyers' and judges' meal schedules tend to be synchronized, and lawyer eloquence correlates with lawyer hunger. Which would not be too surprising, really, though arguably it's just as problematic as the interpretation it replaces.

One question worth considering is how parole hearings get scheduled in the first place.

If there's a general understanding among lawyers that their odds are adjusted even slightly based on judges' meals, that might drive a systematic tendency to schedule the most compelling cases right after meals (to "put all of my wood behind one arrow," as a former boss used to put it) and the least compelling cases right before them.

The study states that no one in the profession they talked to (judges or lawyers) expected this bias. So such a deliberate scheduling explanation seems unlikely.

How likely is it that lawyers never noticed an effect of such magnitude? I find that rather implausible, assuming that there is some minimal competitiveness in the profession.
Very easily. You wouldn't see a lot of data points, and you wouldn't be looking for it. You'd have one parole hearing maybe every few weeks. It'd be extremely hard to piece together that the time of day had a significant effect on the outcome, especially when you want credit for good results (and generally want to blame the judge for bad results). You probably don't have many lawyers in a position to observe enough parole hearings to really form a suspicion.
I can imagine that individuals might be unable to spot the correlation. But if the lawyering industry is competitive to any significant degree, it would be strikingly implausible if none of the competitors ever stumbled onto it and proceeded to take advantage of it. Especially considering that larger law firms, and even individual lawyers with long experience, could easily data-mine their past record for such correlations. Are they all really too stupid to think of that? Whenever social scientists -- a phrase I'm always tempted to put into scare quotes -- claim that they've found something that indicates unexploited profit opportunities if true, it is likely that they're either talking nonsense or that they've reinvented some wheel that is however infeasible or forbidden to use in practice (at least openly) for other reasons. Otherwise, it would violate the weak efficient markets hypothesis, in which I certainly have far more confidence than in "social science."
"But the market should fix everything!" is something you only get to say when you actually have a decent notion of the market in question. It is apparent that you do not, at least with respect to lawyers doing parole hearings (which may often not involve lawyers in the States). There is no money to be made here. Most of these attorneys are being paid by the state. The ones that are not do not generally disclose their past records in an indexed, searchable database. There is no scalability - there's a finite number of after-meal time slots. There are no (or virtually no) large law firms working in this field. Lawyers as a group are not scientists and are generally relatively innumerate. I could probably go on for pages about how lawyers' interests really aren't that closely aligned with positive outcomes; a thousand times more so when their clients aren't the ones paying the bill. It's one thing to say, "But if you figured this out, you could dominate the market!" It's another thing to articulate how to actually dominate the market with this knowledge. If you can't do that, the EMH doesn't really do much for you.
EDIT: You've edited your comment substantially since the time I posted this reply, so now it looks like I'm writing with disregard to your arguments, and I'm thus editing my reply too. I certainly don't have an idealized view of the legal system in general, or criminal defense lawyers in particular -- on the contrary. If there really is a complete disconnect between the lawyers' incentives and the destiny of their clients, then I can imagine that they wouldn't notice correlations like these (though I'd still find it implausible, given the purported magnitude of the effect). However, if such incentives do exist, then I find it absurdly implausible that lawyers would be unable to figure out these correlations and take advantage of them. (Even if they're theoretically unable to influence the scheduling, no bureaucratic system is immune to manipulation if there are smart and determined people with the right incentives.) As innumerate as they might be, there's no way lawyers could be so incompetent that a few social scientists could just show up and point out that they've been doing their job suboptimally because they're unable to put two and two together about such a simple issue. On the whole, when you argue that the interests of lawyers and their clients are misaligned, I find that a plausible claim. But when you argue that they'd be unable to figure this out even if they had the incentive, that's a far more extreme and implausible assertion.
It could also be that those who notice it, have attributed it to other factors (for example, that easier/simpler cases are generally scheduled first, as mentioned by some other posters). It could even be that these other factors are real, but there's still a pattern after adjusting for that. That said, I can't imagine the correlation being true unless there's some other factor significantly influencing things as well.
I think psycho's point is that not only would they have to notice it, they'd have to care. And since (1) we're talking about people already adjudicated guilty, and (2) the client is almost always NOT the party paying the lawyer, there is no real incentive for the lawyer to win, just to present a credible (professional) case. Most convicts have little money, and even if they had a private lawyer for the court case, they gave him most of their money and their now-ex wife spent the rest on booze and pool boys before the divorce, so now he's got a public defender who gets the same check win or lose.
Even if only a small percentage of prisoners seeking parole pay for their own lawyers, that still constitutes a market whose participants have the incentive to figure out the informal intricacies of parole hearings and adapt to them.
OK, but why haven't private law firms looked into the scheduling of hearings as a possible determinant of outcome? Surely they have an incentive to; or do these results only hold for paroles, in which "there is no money to be made"? If the hunger theory is right, then shouldn't judges of any type of case be under its effect?
Very few judicial decisions are actually made entirely during a hearing; despite what you see on television, most major issues are going to be briefed and the judge (or his staff) will have already read the briefs and come to a not-too-tentative decision about how they are going to rule. For issues that are so small as not to be briefed, lawyers have pretty much no control over when these will be heard by the judge, and the stakes tend to be relatively small anyways. Moreover, where there are two parties involved, it seems impossible to predict which direction this effect would take - would the judge be wiser, less wise, less agreeable, lazier? Even if someone were paying careful attention to the data, it seems unlikely they could discern a clear trend, and no one's paying such attention because (A) no one really has an incentive to and (B) the payoff is likely very close to 0 anyways.
Doesn't this strongly cut against the theory that the degree of hunger at the time of the hearing influences the decision?
... is the exact response I wanted to make. Most legal choices are either incredibly short term - like an objection that a judge must often respond to immediately - or medium to long term - like a motion that a judge will ask the parties to provide briefs (written legal arguments) on. Parole hearings like this are one of a few legal decisions where there really is a quick decision made - another area would be bail hearings, but there the outcome isn't binary, it's a dollar amount. There isn't much money to be made in gaming either.
The Israeli parole result was for a short single high-stakes decision; most hearings are not like that, I think.
No. The study was specifically on parole decisions, which often are made at the time of the hearing, although other judicial decisions generally aren't.
I agree, but you should consider that this is about to happen (in a widespread way) for the first time. Also, some people keep such knowledge to themselves - weighing the advantage they get from being one of the few to have and use it as more than they'd get by gaining fame in sharing it. (But it's true that such powerful secrets tend to spread rapidly, if shared at all.)
That seems like a really really large correlation to miss. One lawyer doesn't see 20 hearings?
Fair point. Not conclusive, but compelling.
This would probably split randomly. Also, lawyers are probably not scheduling blocks of hearings (if they even have any say in scheduling). You schedule for your client, so if you knew, all you would do is schedule ALL your cases for right after lunch, if possible. Even if that weren't the case, there's no particular reason why you'd want a more favorable judge for an easy win than for a tough one. Indeed, there are many reasons you'd want a more favorable judge for a tougher-to-win case - higher marginal benefit, mostly. In short, it'd be very hard for underlying factors to create meaningful systematic bias on such an obvious characteristic in a sample this size. Unless there's some odd organizational principle, like, "More serious offenses are put off until the end of the day," and such was not observed.
I've found two critiques of this study: 1. 2.

I don't know about Israel, but I do know that in American courts, cases are not heard in random order on a given day. It's very common that simple, quick matters are put first so that the attorneys can get out fast.

Presumably, a parole application is either contested by the prosecutor's office or uncontested. If it's uncontested, it's probably pretty quick. Just some perfunctory testimony from the convict and perhaps from the parole services office, a few questions from the judge, and that's that.

On the other hand, if the parole application is contested, one can expect more in the way of witnesses, cross-examination, and so forth.

It would be natural to put the contested applications towards the end of the morning and afternoon sessions.

Anyway, I am just speculating here. But it does seem suspicious that timing alone could make such a dramatic difference unless some other factor is at work.

In the paper, the authors spend a couple paragraphs considering the different biases that can be involved. Since this topic is outside my research area, I'm not allowing myself to spend time reviewing how thorough their analysis was.

Thus, completely independent of this particular paper, I just wanted to point out agreeably that such an observation (a high probability of passing early in a session and lower probabilities later in a session) could come about naturally if an application's length increases greatly when it is not going to be passed, even if the applications are ordered randomly. Suppose that a typical session is 3 hours and an application takes 5 minutes if it is going to pass and 2 hours if it is not. Then several passing applications can get through quickly at the beginning of a session, but most non-passing applications would be decided towards the end, or even right before a break if the judge insists on finishing the application before taking the break.

And further, of course, if this happens over enough sessions the statistical average would be similar to what we've had described. I'm not enough of a mathematician to be able to visualize the difference in the expected graphs though.
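This scheduling story is easy to sanity-check with a toy simulation. The sketch below uses the hypothetical numbers above (3-hour sessions, 5-minute approvals, 2-hour denials, random ordering) plus an assumed 65% underlying approval rate; binning outcomes by when the decision is handed down produces a high early approval rate even though every application has the same chance of passing:

```python
import random

def simulate_sessions(n_sessions=2000, session_min=180,
                      pass_min=5, fail_min=120, p_pass=0.65, seed=0):
    """Randomly ordered applications; approvals are quick, denials slow.
    Returns (decision_time_in_minutes, approved) pairs."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_sessions):
        t = 0
        while t < session_min:
            approved = rng.random() < p_pass
            # the judge finishes the current application before breaking
            t += pass_min if approved else fail_min
            outcomes.append((t, approved))
    return outcomes

outcomes = simulate_sessions()
rate = lambda xs: sum(p for _, p in xs) / len(xs)
first_half = [o for o in outcomes if o[0] < 90]
second_half = [o for o in outcomes if o[0] >= 90]

print(f"overall approval rate:   {rate(outcomes):.2f}")
# No denial can finish before minute 120, so the first-half rate is 1.00
# here, while the second half absorbs all the denials.
print(f"decided in first 90 min: {rate(first_half):.2f}")
print(f"decided after 90 min:    {rate(second_half):.2f}")
```

This only shows that the qualitative pattern *can* arise from hearing lengths alone; whether it matches the study's roughly linear decline would take comparing the actual graphs.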
I don't see any reason there wouldn't be the inverse as well. That is, applications which are immediately rejected, and therefore quite short. I also find it suspicious that the most highly contested applications would also be the least likely to be approved. Presumably these are the ones which are borderline, and require much argument, pro and con, to come to a decision. Immediate rejections wouldn't require long arguments, and neither would immediate acceptances. Under the above hypothesis, both of these types of cases should be early in the session. If there were no bias and the cases were arranged by length, I would expect to see nearly 50% of the lengthy contested cases, to be accepted. Or, if that many are simply not accepted, some number significantly greater than 0%. For a slightly different arrangement, if the immediate rejections were placed last in the session I would expect the acceptance rates to start at nearly 100% and progress down to 0%. Unless, of course, there is no such thing as a quick, immediate acceptance parole case, in which case the argument doesn't work anyway. If they are all long there isn't much point in arranging by likelihood to be accepted.
Well the situation is not symmetrical. The person applying for parole cares a good deal about having his application approved. If he did not care, then he would not be applying in the first place. On the other hand, the prosecutor's office either cares or it does not care. In the latter case, the application process will be both quicker and more likely to succeed. Let me put it another way. And this applies not just to parole proceedings, but any application to a court for relief: 1. Uncontested applications to a court are more likely to succeed. 2. Contested applications to a court are more likely to take longer. Therefore, 1. Applications which take longer to process are more likely to fail.
I can certainly buy that, but would there really be zero people who apply even though they don't have much chance of winning? I know a few stubborn people who I would expect to apply anyway even if they didn't have much chance of success. I'd be surprised to find out that the prison system has an insignificant number of people who are like that as well. Also, do the most highly contested applications (and therefore the longest, and therefore placed last on the docket) really have 0% chance of success? If so, would not those applications be better off not applying at all? It seems to contradict the idea that an application with 0% chance of success would not be filed. Lastly, with only a 65% approval rate for the early applications, I'm pretty surprised that the prosecution doesn't care about them very much. If they were completely uncontested wouldn't you expect closer to a 100% success rate?
I would think a lot of people would apply even with a small chance of winning. However their applications will not necessarily be disposed of quickly. It's not like the judge can say "Well, based on my experience you don't have much chance of persuading me so I'm not going to bother listening to witness testimony and hearing argument from attorneys; I'm just going to deny the application right now." I doubt it, but their chances of success are surely worse than those of uncontested applications. I don't know enough about the details of Israeli parole hearings to speak definitively about that. I can say as a lawyer that many times I have made uncontested applications which were denied. This was not in a criminal or parole context.
I agree that any contested case should be longer than an uncontested, however are there not cases where the prosecution simply doesn't need to go through a lengthy argument to prove their case? Prosecution lays out X, Y, and Z evidence that is definitive, and therefore the prosecution doesn't need to spend a lot of time arguing. Are these types of cases not generally shorter than cases that are contested but more likely to succeed? Or does a lengthy defense attempting to weasel out of the evidence make up for a short prosecution? And are these specific cases few and far between? In this context I could believe a very low success rate, but the researchers found a 0% success rate for a number of courts. That still makes me suspicious. I'm still not sure what "very low success rate" means for parole hearings though. Is 20% low? Is it more like 5%? Somewhere in between? Obviously, the lower a reasonable success rate for these types of cases the more likely you'll see 0% rates in different courts, just based on chance. Fair point. Like I said above I'm not really sure what my expectations should be for a reasonable success rate in these types of cases (or cases in general). Question though, did these applications tend to be closer to or further from a break than your more successful uncontested applications? (obviously purely anecdotal, but I'm sure you see my point)
I don't know enough about the Israeli parole process to address this definitively, but I would guess that yes, it does happen that there are contested cases which are such slam-dunks from the prosecution point of view that they don't need to do much at all. As with many things, the question is how significant this phenomenon is in terms of the overall average. Because what we are talking about is what happens in general and on average. I wasn't aware of that, but it actually makes me more confident that there is some factor in terms of how the cases are scheduled which is linked to the success rate. What else could it be?
The break distance bias found in the papers? You can't use two pieces of contradictory evidence to support the same argument. If the most highly contested cases still have a chance at success, finding 0% success rate at the furthest distance from the last break (because they are the longest cases and therefore placed last) should not increase your belief that there is no bias at work. It should reduce it. How significantly your belief is reduced depends on just how likely you would see 0% success rates at a high distance from break due only to scheduling, but I can't see any way it could legitimately raise your belief that there is no bias.
I kinda doubt it . . . it goes against common sense that there are judges who, once they get hungry, rule against any parole application no matter how compelling. Yes you can, and I can show it by stepping back and illustrating the point with an abstract example: Let's suppose that we are debating whether Hypothesis X is correct or Hypothesis Y is correct. I am relying on Evidence A, which seems to support Hypothesis X. You are relying on Evidence B, which seems to support Hypothesis Y. Ok, now suppose you present Evidence C, which contradicts my hypothesis -- Hypothesis X. Does Evidence C make my hypothesis less likely to be correct? Not necessarily. If Evidence C contradicts Hypothesis Y even more acutely than Hypothesis X, then Hypothesis X is actually more likely than it was before. So situations can arise where evidence comes out which contradicts a hypothesis but still makes that hypothesis more likely to be correct. And that's pretty much the situation here. Your observation about a zero percent success rate at the end of the day in some cases undermines the 'hunger' hypothesis at least as much as it undermines the hypothesis that contested cases are being put at the end of the session (or the hypothesis that there is some other ordering factor at work).
No, Hypothesis X and Hypothesis Y are now both less likely.
No, since the debate (in my abstract example) is over whether Hypothesis X is correct or Hypothesis Y is correct. In my abstract example, one and only one hypothesis is correct. So you can't have a situation where both are less likely.
I have no idea what you think the word "likely" means, or why on Earth anyone should care about that thing. The thing I talk about when I talk about likelihood is a thing that is affected by evidence. That thing absolutely goes down for X given evidence contradicting X, and goes down for Y given evidence contradicting Y, and is not affected by what debate I might or might not be having at any given moment.
"more likely" means greater probability. "less likely" means lower probability. Technically I agree with you, but in interpreting bigjeff5's post, I understood him to be using a more flexible definition of "contradictory evidence" and that is what I used in my response. Either way, my basic point is the same.
You've lost me completely. If we're talking about the probabilities of X and Y, as you say here, then the evidence against them lowers those probabilities, and the fact that the debate in your abstract example is over whether X or Y is correct doesn't change that. It is a situation where both are less likely than they were before C was known. If your basic point is consistent with that, then I do not understand your basic point. It sure sounds to me like your basic point was that C made one of those assertions more likely, which is false.
Let me propose a charitable interpretation of what brazil84 is saying (he can correct me if I am wrong). Here is an example: We are discussing who committed a crime. There are three and only three suspects: Peter, Paul and Mary. Mary has an excellent alibi, so she's basically out of the running. There is some evidence both for Peter's and for Paul's guilt. Let's say we agree that the probabilities of each being guilty are: Mary 2%, Peter 49%, Paul 49%. Then a witness comes up who saw someone wearing a dress at the scene of the crime. Since men are a priori unlikely to wear dresses, this lowers the probability of Peter or Paul doing it. Let's say, however, that for whatever reason we agree it is slightly less unlikely a priori that Peter would wear a dress than that Paul would. Mary's alibi is so good that the new evidence only raises her probability of being guilty very slightly. The posterior probabilities are: Mary: 6%, Peter: 48%, Paul: 46%. This seems like a situation which might be described with brazil84's quote in the sense that Peter's guilt, even though in the absolute sense less likely (the evidence "contradicted" it), should now be our top hypothesis; it is "more likely to be correct" compared to the only plausible alternative. I agree that brazil84's way of putting it was a bit confusing, if this is what he meant.
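The arithmetic of the Mary/Peter/Paul example is a one-step Bayes update. In the sketch below only the priors come from the comment; the dress likelihoods are invented for illustration, chosen so the posteriors land near the quoted 6% / 48% / 46%:

```python
# Priors from the comment; likelihoods P(dress seen | suspect) are made up.
priors = {"Mary": 0.02, "Peter": 0.49, "Paul": 0.49}
p_dress = {"Mary": 0.50, "Peter": 0.16, "Paul": 0.15}

# Bayes: posterior ∝ prior × likelihood, then normalize.
joint = {h: priors[h] * p_dress[h] for h in priors}
total = sum(joint.values())
posteriors = {h: j / total for h, j in joint.items()}

# Mary rises, both men fall, yet Peter now edges out Paul.
for h in priors:
    print(f"{h}: {priors[h]:.0%} -> {posteriors[h]:.1%}")
```

The point survives any similar choice of numbers: evidence can lower two hypotheses' absolute probabilities while reordering which of them is the leading candidate.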
I certainly agree that the situation you describe can occur. (I could quibble about whether the probability-shift for Mary actually depends on the quality of her alibi here, as that seems like double-counting evidence, but either way it's entirely possible for the posterior probabilities to come out the way you describe.) And, OK, sure, if "more likely to be correct" is understood as "more likely [than some other hypothesis] to be correct", rather than "more likely [than it was before that evidence arrived] to be correct", I agree that the phrase describes the situation. That is, as you say, a bit confusing, but not false. So, OK. Provisionally adopting that interpretation and returning to the original comment... their initial comment was "situations can arise where evidence comes out which contradicts a hypothesis but still makes that hypothesis more likely to be correct". Which, sure, if I understand that to mean "more likely [than some other hypothesis] to be correct" is absolutely true. All of which was meant, I think, to refute bigjeff5's comment about what sort of evidence should increase confidence in the belief that there is no bias. Which I understood to refer to increasing confidence relative to earlier confidence.
I think that's pretty close. If I am arguing that Paul committed the murder (and you are arguing that Peter committed the murder) it doesn't really help your argument to point out that there is evidence the murderer was wearing a dress, since it undermines your own position just as much as it undermines the position I have taken. Getting back to the original discussion, another poster pointed out that my "contested cases later" hypothesis is undermined by the observation that for some judges there is a zero percent approval rate for later cases. The problem with this argument is that it undermines the "hunger" hypothesis even more than the "contested cases later" hypothesis.
Not if it's just a matter of choosing X or Y. It's impossible in such a situation for a piece of evidence to lower both probabilities. Perhaps an example will make it clearer: Let's suppose that a victim is found dead in a pool of blood, apparently having died from a gunshot wound. There are two possibilities: (1) he was shot from a distance with a rifle; and (2) he was shot at close range with a small caliber handgun. I favor the first hypothesis and you favor the second. Ok, now let's suppose we find a new piece of evidence: there is no bullet found inside or around the victim's body. Further, it is known that if somebody is shot from a distance with a rifle, a bullet will be found in or around the person's body 99.99% of the time. In common parlance, one might say that such a piece of evidence contradicts or undermines the hypothesis that the person was shot from a distance with a rifle, since we have just seen something which is totally unexpected if our hypothesis is correct. On the other hand, suppose we know that being shot at close range with a handgun carries a 99.999% chance of finding a bullet in or around the victim's body. In that case, what has been reasonably described as "contradictory evidence" actually increases the chances that the first hypothesis is correct. Hope that makes things clear for you.
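Running the numbers in the rifle-vs-handgun example makes this concrete. The two bullet-recovery rates are the ones stated above; the 50/50 prior is an added assumption (the comment only says each side favors one hypothesis):

```python
# Two exhaustive hypotheses; 50/50 prior is assumed for illustration.
prior_rifle, prior_handgun = 0.5, 0.5
p_no_bullet_rifle = 1 - 0.9999      # rifle: bullet found 99.99% of the time
p_no_bullet_handgun = 1 - 0.99999   # handgun: bullet found 99.999% of the time

# P(no bullet) by the law of total probability, then Bayes' rule.
total = prior_rifle * p_no_bullet_rifle + prior_handgun * p_no_bullet_handgun
post_rifle = prior_rifle * p_no_bullet_rifle / total

print(f"P(rifle | no bullet found) = {post_rifle:.2f}")  # prints 0.91
```

The missing bullet is "unexpected" under both hypotheses, but because it is ten times less expected under the handgun story, the rifle hypothesis rises from 0.50 to about 0.91.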
The probability of both, in that case, plummets, and you should start looking at other explanations. Like, say, that the victim was shot with a rifle at close range, which only leaves a bullet in the body 1% of the time (or whatever). It might be true that, between two hypotheses, one is now more likely to be true than the other, but the probability of both still dropped, and your confidence in your pet hypothesis should still drop right along with its probability of being correct. So say you have hypothesis X at 60% confidence and hypothesis Y at 40%. New evidence comes along that shifts your confidence in X down to 20%, and Y down to 35%. Y didn't just "win". Y is now even more likely to be wrong than it was before the new evidence came in. The only substantive difference is that now X is probably wrong too. If you notice, there's 45% probability there we haven't accounted for. If this is all bound up in a single hypothesis Z, then Z is now the one most likely to be correct. Contradictory evidence shouldn't make you more confident in your hypothesis.
That's just not so, since the total of the two probabilities equals one. If the probability of murder with a rifle drops, the probability of murder with a handgun necessarily rises. I'm not sure how to make this point any clearer... perhaps a couple of equations will help. Let's suppose that X and Y are mutually exclusive and collectively exhaustive hypotheses. In that case, do you agree that P(X) + P(Y) = 1? Also, do you agree that P(X|E) + P(Y|E) = 1?
If either X or Y has to be true, you cannot have 20% for X and 35% for Y. The remaining 45% would be a contradiction (neither X nor Y, but "X or Y"). While you can work with those numbers (20 and 35), they are not probabilities any more - they are relative probabilities. It is very unlikely that the murderer won the lottery. However, if a suspect did win the lottery, this does not reduce the probability that he is guilty - he has the same (low) probability as everyone else.
I'm talking about probability estimates. The actual probability of what happened is 1, because it is what happened. However, we don't know what happened - that's why we make a probability estimate in the first place! Forcing yourself to commit to only one of two possibilities in the real world (which is what all of these analogies are supposed to tie back to), when there are a lot of initially low-probability possibilities that are initially ignored (and rightly so), seems incredibly foolish. Also, your analogy doesn't fit brazil84's murder example. What evidence does the lottery win give that allows us to adjust our probability estimate for how the gun was fired? I'm not sure where you're going with that at all. The real probability of however the bullet was fired is 100%. All we've been talking about are our probability estimates based on the limited evidence we have. They are necessarily incomplete. If new evidence makes both of our hypotheses less likely, then it's probably smart to check and see if a third hypothesis is now feasible where it wasn't before.
brazil84 stated that there are just two options, so let's stick to that example first. "[Rifle] no bullet will be found in or around the person's body 0.01% of the time" is contradictory evidence against the rifle (and for the handgun). But "[handgun] no bullet will be found in or around the person's body 0.001% of the time" is even stronger evidence against the handgun (and for the rifle). In total, we have some evidence for the rifle. Now let's add a 0.001% prior probability that it was not a gunshot wound at all - in this case, the probability of finding no bullet is (close to) 100%. Rifle gets an initial probability of 60% and handgun gets 40% (+ rounding error). So let's update by multiplying each prior by its likelihood: no gunshot: 0.001 -> 0.001; rifle: 60 -> 0.006; handgun: 40 -> 0.0004. Of course, the probability that one of those 3 happened has to be 1 (counting all guns as "handgun" or "rifle"), so let's normalize: 0.001 + 0.006 + 0.0004 = 0.0074, giving no gunshot: 0.001/0.0074 = 13.5%; rifle: 0.006/0.0074 = 81.1%; handgun: 0.0004/0.0074 = 5.4%. The rifle and handgun numbers increased the probability of a rifle shot, as the probability of "no gunshot" was very small. All numbers are our estimates, of course.
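The three-way update above can be sketched as follows (all the priors and likelihoods are the comment's illustrative estimates, with percentages converted to fractions):

```python
# Priors from the comment (percentages as fractions).
priors = {"no_gunshot": 0.00001, "rifle": 0.60, "handgun": 0.40}
# P(no bullet found | hypothesis)
likelihoods = {"no_gunshot": 1.0, "rifle": 0.0001, "handgun": 0.00001}

# Multiply each prior by its likelihood, then renormalize
# so the three posteriors sum to 1.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

for h, p in posteriors.items():
    print(f"{h}: {p:.1%}")
# no_gunshot: 13.5%
# rifle: 81.1%
# handgun: 5.4%
```

Note that the posterior for "no gunshot" is still dominated by the rifle, because its tiny prior (0.001%) isn't fully overcome even by a likelihood near 1.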
I believe brazil84 is describing this: P(X | C & (X v Y)) > P(X | X v Y) P(Y | C & (X v Y)) < P(Y | X v Y) while you are describing this: P(X | C) < P(X) P(Y | C) < P(Y) All four of these statements can be true.
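A concrete numerical check that all four statements can indeed hold at once (the three hypotheses and the likelihoods here are invented purely for illustration; X, Y, Z are mutually exclusive and exhaustive):

```python
# Three exclusive, exhaustive hypotheses and a piece of evidence C.
prior = {"X": 0.5, "Y": 0.3, "Z": 0.2}
lik_C = {"X": 0.1, "Y": 0.05, "Z": 0.9}  # P(C | hypothesis)

# Unconditional posteriors: C lowers both X and Y.
p_C = sum(prior[h] * lik_C[h] for h in prior)         # P(C)
post = {h: prior[h] * lik_C[h] / p_C for h in prior}  # P(h | C)
assert post["X"] < prior["X"] and post["Y"] < prior["Y"]

# Conditioned on "X or Y", C raises X (and hence lowers Y).
p_XY = prior["X"] + prior["Y"]                 # P(X v Y)
p_X_given_XY = prior["X"] / p_XY               # P(X | X v Y)
p_C_and_XY = prior["X"] * lik_C["X"] + prior["Y"] * lik_C["Y"]
p_X_given_C_XY = prior["X"] * lik_C["X"] / p_C_and_XY
assert p_X_given_C_XY > p_X_given_XY           # X rises given C & (X v Y)
assert (1 - p_X_given_C_XY) < (1 - p_X_given_XY)  # so Y falls, mirrored
```

Here C drops X from 50% to about 20% and Y from 30% to about 6% unconditionally, yet within the restricted X-vs-Y comparison it moves X up from 62.5% to about 77% - which is exactly the distinction the two commenters were talking past each other about.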
(nods) See also alejandro1's sibling comment and my reply.

Beware the sometimes subtle trap of thinking that, since you have thought about a big decision/belief at seemingly random intervals for a whole week (month, year) now, you have perspective on the decision/belief from a representative variety of your states of mind. State-dependent memory, habits, priming &c. make this unlikely unless you were deliberately making an effort.

If you seriously believe that the majority of Americans believe that judges are relatively unbiased, you need to spend some time with a lower class of people. And I'm not talking about middle class white kids pretending to be rebels.

EVERY convict I know (with one exception, and I'm not sure about him, it's been 20 years since he was in jail) and everyone who has ever tried to argue a child custody case, lost a small claims case, or whatever thought the judge was personally and malevolently biased against them for all sorts of reasons ranging from their haircut to the phase of the moon.

It's really only the upper middle class and the rich who think that judges are mostly unbiased. Although many do think any bias is for racial or cultural reasons, most of them just don't want to admit that they are guilty.

In another vein, most of us DO think about "big" questions at different times; this is why they are "big" questions--we don't wake up in the morning and, in between finishing grinding the beans and taking the first drink of coffee, decide to quit our job designing and printing t-shirts for underground punk bands, drop back into school and get a masters in busine... (read more)

EVERY convict I know ... and everyone who has ever tried to argue a child custody case, lost a small claims case, or whatever thought the judge was personally and malevolently biased against them...

Well, there could be a bit of selection bias there. The people who were acquitted of criminal charges, who were awarded sole custody of their kids, and who won their small claims cases might have different views.

The people who were acquitted of criminal charges, who were awarded sole custody of their kids, and who won their small claims cases might have different views.

I was once ticketed for speeding. (The circumstances were as follows: I was driving several of my friends home after a party -- I was as always completely sober -- and had a full car. After almost entirely highway driving, I pulled out of an on-ramp, waited at a red light, turned right [because there was no turn on red], and then almost immediately turned again because that's where the first couple I was dropping off lived. As I was making that second turn, police lights came on behind me. The officer spent five minutes trying to get me to admit to being under the influence in some way. After it became painfully clear I wasn't, he quickly wrote me up a ticket for going twenty over, claiming he had 'paced' me, and that he had to immediately run off to some other emergency.) Naturally, for this story to be relevant here, I contested the ticket. I had each of my friends sign a notarized letter stating that they personally recalled me turning on the cruise control of my car at a non-speeding velocity.

The officer didn't show up, and so... (read more)

I agree! Speaking as someone who was acquitted, the judge was unbiased; it was the arresting officer that had a problem ;) Seriously, though, humans seem to seek to assign blame for the stress and trouble in their life. I think anyone who has to deal with something as stressful as the legal system is probably going to look for people to blame, and I'd assume that judges get quite a lot of this blame from the losing parties.
Of course there is a selection bias there. I was commenting on the statement that "there's a broad social belief that the decision of judges are unbiased." I think that among those educated at state and private colleges (as opposed to community colleges or no college at all) that is mostly true, but I don't think that there are enough of those people to call it a "broad social belief".

Here's another highly politicized, mind-killing example. There is the opinion that Europe and Australia have a broad social consensus against the death penalty, but time and time again polls show dramatic SUPPORT for putting certain classes of criminals to death. In this case it is the BROAD SOCIAL CONSENSUS that is the flaw in the thinking.

I mean, if this is about being less wrong, let's not just work on the comfortable wrong shit, let's face all of it. The truth is that people aren't unbiased. Most don't even want to be. They just want other people to think they are.
I don't know that "the upper middle class and the rich" really think judges are unbiased. Judges are assumed to be biased in any case which directly affects their own interests, or in which the judge knows any of the parties. Also, judges in high-profile positions are often assumed to have political biases -- as is shown in the debates surrounding Senate confirmation hearings, as well as the recent judicial election in Wisconsin. Sometimes the judiciary in general is perceived to have a bias, such as being "soft on crime." What is not usual is to think that judges are strongly biased by how long it has been since their last meal. A lot of commenters on this thread have expressed skepticism on this point, and I do too.
I'm curious what "GU" is, as I've noticed that carrying some sort of energy bar is very helpful for me as well. Learning about something new to experiment with would probably be to my benefit :)
Google to the rescue:
Specifically this: Much easier to get down and keep down when you've been riding for 8 - 10 hours.

The Bias You Didn't Expect

There's only one?

I have observed and used a similar effect in myself. I do my most difficult studying right after eating, then use the long, almost flat plateau afterwards for studying or decision making that requires a high degree of sustained concentration, and try to use the tail for routine chores and R&R. The choice of food doesn't make a lot of difference: the peak seems slightly higher after a high-carb snack, but the same effect was present, only a bit weaker, when I was eating an almost zero-carb (meat-only) diet. A sudden change in diet can create unusual effects, though; for example, a really high-fiber meal (bean soup) can have unpredictable effects when I haven't been eating a lot of fiber recently.

...there is a rather cynical discipline called legal realism that says the law is really based on quirks of individual psychology, "what the judge had for breakfast,"...

In common law countries, sentencing decisions are made by trial judges. But the decisions of trial judges don't count as "precedent," and therefore are not "the law." Case law -- binding precedent -- is made up of the published decisions of appeals courts. Trial judges often have to make their decisions in real time, right there at the trial in front of ev... (read more)

If a judge lets too many of the day's first cases go free, there's no legal remedy.

I apparently can't access the article at all (I can't see any links for athens/shibboleth), so I can't tell, but does the paper make it clear that the scheduling of cases is purely random? That could have a huge effect on the apparent bias, as could other factors such as which lawyers are around for which cases, et cetera.

In addition, of course, this is a nonrandom sample of some judges in Israel. Generalising to assume that one will make different decisions on important cases depending on when one has eaten is probably not a good idea. That said, it's probably worth thinking about at least if your decision is important, though that's probably more to do with whether one is in a good/bad mood.

Or were judges being too nice? We know there was bias, but we still don't know when bias occurred.

I feel the need to point out that the observation does not necessarily result from a bias. We literally know nothing about the legal system's arrangements for parolees here other than this single data point.

It could be, for example, that there is an understanding that the review boards arrange cases before judges based on the boards' estimate of the potential parolee's worthiness of release, with the 'worst offenders' coming later in a given hearing bracket. T... (read more)

I thumbed you up because you were technically correct that a drop in positive judgments doesn't by itself prove a bias. However, there is some extra data in this Economist article on the same study to support the idea that there weren't factors in the arrangement of parole candidates that would account for such a drop:

Since I first read about this study, I have tried to move requests I make of strangers and authority figures to 1 PM when possible. Unfortunately I have not collected enough data to tell how effective this is.

The Economist reported on the Israeli study too:

The article makes an argument which I find persuasive: that it's not about food as much as it's about difficult decisions tiring the brain. When the brain is tired, it resorts to the easy and safe option.

Check out the Economist article for more.

... and these decisions are difficult. You have very little, poor quality information, you are constantly lied to, you get very little feedback on how your decisions went, and any feedback you do get is delayed and noisy.

It turns out that legal realism is totally wrong. It's not what the judge had for breakfast. It's how recently the judge had breakfast.

Was that part of the study? Did they collect information about what the judges consumed for their meal? If it is, in fact, the meal that is important and not the time since having a break, then it would be somewhat surprising if what they ate was not an important factor. A meal higher in protein and fat has a nutritional impact over a far larger time period than a meal high in refined carbohydrates.

No, the study did not address the effect of dietary choices, partly because that would be really, really, really hard to do. Chalk that statement up to artistic license.

Just a side note: there is growing consensus in the neuroscience community that certain activities, foremost among them self-control (rational control of emotions, via inhibitory projections from the prefrontal cortex downward), are highly dependent on blood glucose level.

In other words: you don't eat, your blood glucose goes down, and you are less able to be unbiased and detached. Also, it would seem, short-term memory suffers, so you are able to keep fewer things in mind for parallel consideration. All of this adds up to an increase in irritability, more impul... (read more)

For particularly important decisions, consider contemplating them at different times, if you can. Think about one thing Monday morning, then Wednesday afternoon, then Saturday evening, going only to the point of getting an overall feel for an answer, and not to the point of really making a solid conclusion.

This seems congruent with the folk idea of "sleeping on" difficult or particularly important decisions rather than coming to a decision on the spot, and with the legal practice of having "cooling off periods" after a purchase is made or a contract is signed, during which one party can void the agreement.

Very often, however, the immediate judgment is the most representative of the individual's honest opinion on any given topic. "Sleeping on" things is only useful when there is additional data to be reviewed before making said decision. Once you've already got all your givens, there's nowhere to go but to their conclusion.
The OP has already cited an important counterexample to this generalization. Others include anchoring and priming. Sleeping on a decision, cooling off after making it, or otherwise delaying its finality has benefits other than allowing one to review "additional data." It can change the way you think about a decision without adding more information, and this change might be to the good if it counteracts one of the effects mentioned above.
... I would disagree with that entirely. I'd be highly surprised if a statistically significant random sample of judges, after being informed of the 'meal break distance' bias, would believe that it applied to any particular case they had decided. Unless you have additional information -- new postulates -- whatever your conclusion was originally will remain your conclusion thereafter. If you get "primed" into thinking that it was colder yesterday than you normally would have, you will still believe that it was cold yesterday -- even if "primed" into thinking that the same temperature is "hot" today.

It strikes me that all the really effective ways of combating subjective bias involve bringing in groups of people, whether it is democratic votes, the communitarianism of science, or juries. It looks like having a single judge deciding a sentence is a loophole to be plugged.

I would expect that bringing in groups of people weakens the influence of idiosyncratic biases (as well as idiosyncratic rationality) and strengthens the influence of shared/conventional biases. I wouldn't expect it to reduce subjective bias per se, though. Note also that appeals courts are intended to counter the most egregious idiosyncrasies... though they aren't the same thing as having a council of judges making the decision in the first place.

That's a huge and horrific example of bias. Upvoted.