
I'm interested in how courts and juries might use rational techniques to arrive at correct decisions on guilt.

In a complex case, it would seem sensible to assess each component of the prosecution and defence cases and estimate their relative likelihoods. If the prosecution case is (say) 100 times more likely than the defence case, then you could say the defendant is guilty beyond reasonable doubt.
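The conversion from a "100 times more likely" ratio to a probability of guilt can be sketched with Bayes' rule in odds form. The 100:1 ratio and the 1:1 prior below are illustrative assumptions, not figures from any real case.

```python
# Sketch: turning a prosecution/defence likelihood ratio into a
# posterior probability of guilt, assuming an (illustrative) 1:1 prior.

def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds):
    """Convert odds in favor of a hypothesis to a probability."""
    return odds / (1 + odds)

odds = posterior_odds(1.0, 100.0)   # 1:1 prior, evidence 100x likelier under guilt
print(odds_to_probability(odds))    # ~0.990 probability of guilt
```

Note that with a less charitable prior (guilt is rare), the same 100:1 evidence ratio yields a much lower posterior, which is one reason the choice of prior matters for any "reasonable doubt" threshold.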

I have never heard of this being done, though. I recently made an analysis of the Massei report on the Amanda Knox case. It looked like this (see http://massei-report-analysis.wikispaces.com/ for the entire analysis and some insight into the numbers below).

| Event | Prosecution probability | Defence probability |
|---|---|---|
| Phone at cottage at 22:13 | 1% | 99% |
| DNA evidence correct | 50% | 50% |
| Break-in staged to look like Rudy did it vs. unstaged break-in | 10% | 90% |
| Conspiracy among 3 near-strangers with no apparent motive vs. burglary gone wrong | 5% | 95% |
| Murder weapon was two knives rather than one | 10% | 90% |
| Time of death 23:30 vs. 21:10 according to empty duodenum | 10% | 90% |

This is perhaps a bit vague. It's not a great example, because in the end I didn't find any credible prosecution evidence. It's not entirely clear what the "probability" numbers here actually are, or whether two columns are needed. But hopefully it shows that the Massei report's account of the murder is quite improbable, and that there is considerable doubt.

I'm interested in possibly devising a more complete framework for how such an assessment should be done, the pitfalls that need to be guarded against (how uncertain are the probability estimates?), and even views as to how "reasonable doubt" should be quantified.

Perhaps readers would like to make an assessment of other interesting cases, to explore the issues.

Or how would you approach this problem?


If I get to ignore real-world social constraints, I would approach this problem by more or less ignoring the courts, and concentrating on the schools.

Raise a generation of students who have been taught about decision theory and cognitive bias, who have been taught to analyze an argument for evidence to roughly the same degree that students today are taught to analyze a sentence for its subject and predicate, who have been taught to identify common fallacies of reasoning to roughly the same level that students today are taught to identify the years of famous historical events. Wait two generations and see what happens to the average quality of courtroom reasoning.

I'd probably also try to raise the floor... that is, establish some standard for arguments so common-sense fallacious that for a lawyer to make those arguments in a court of law is grounds for appeal. But I suspect I would fail, as regulatory capture would no doubt set in.

I have a low prior for the success of this idea because it does not address people's motives. If people still want to deceive (others and themselves), then making them cleverer at noticing and articulating evidence should have no net effect on accuracy, especially in court. If everyone became adept at noticing now-common fallacies, lawyers would have a strong incentive to find subtler fallacies or more effective obfuscatory techniques.

I liken this suggestion to the idea of giving advanced weapons to both sides in a war. A priori there's no way to say what effect it would have, and certainly there's no reason to suppose the "right" side will win.

I'm inclined to disagree with your reasoning. The sort of training I described will not significantly improve the toolkit available to the sorts of people who become lawyers, any more than improving grade-school math education will significantly improve the toolkit of accountants. But it might significantly improve the toolkit available to average people, and thus to average jurors, and might thereby change the sorts of arguments that are effective in courtrooms. Ideally, it leads to an arrangement where truth-preserving arguments are more effective than they are now, and therefore get used more, which seems as good an operational definition of using "rational techniques to arrive at correct decisions on guilt" as I expect to get while still keeping randomly selected humans involved. (I more or less endorse the use of randomly selected humans, as a way to avoid regulatory capture, though it's hard to say whether regulatory capture would be worse than the foolishness of juries.)

That said, I mostly agree with your conclusion: it probably wouldn't work, though not for the reason you describe. To keep your martial analogy, I think the result would be similar to that of instituting formal calisthenics programs in grade school in the hopes of improving the quality of our soldiers.

I see. If this critical-thinking curriculum raised the fallacy/bias-spotting abilities of ordinary folks, and did not raise the sophism-spinning abilities of lawyers, then the jurors' abilities would rise relative to the lawyers', which would improve juries' chances of reaching the correct verdict. I think I agree with this.

But why should we assume that lawyers' abilities will not rise as well? You write that this training would not "significantly improve the toolkit available to the sorts of people who become lawyers." But surely lawyers will play to their audience, and learn to present whatever fallacies the jurors will be susceptible to.

Thus I agree that such training might "change the sorts of arguments that are effective in courtrooms," but I don't think it would make outcomes any more accurate.

it probably wouldn't work, though not for the reason you describe

Why do you think it wouldn't work?

But why should we assume that lawyers' abilities will not rise as well?

I expect that the sorts of people who become lawyers today are already better acquainted with critical thinking techniques than the sorts of people who become jurors. Also, I expect an across-the-board training curriculum for subject X to have more of an impact on people who know little about X than it does on people who know a lot about X -- that is, I expect it to raise the floor more than the ceiling.

Therefore, I expect an across-the-board critical thinking curriculum to reduce the difference between a typical lawyer's abilities and a typical juror's abilities in areas related to critical thinking. If you raise the floor more than the ceiling, average height differences tend to decrease.

But surely lawyers will play to their audience, and learn to present whatever fallacies the jurors will be susceptible to.

Sure, but again, I don't expect them to be able to improve enough to maintain the same proportional superiority to jurors, for essentially the same reason that ten years of additional life experience sharply reduces the cognitive advantages that a typical 25-year-old has over a typical 15-year-old.

Also, if juries were sufficiently trained in critical thinking, then lawyers who actually had the facts on their side would eventually find that presenting the actual facts, and pointing out the fallacies in their opponents' arguments, would be a viable strategy. (Right now, I doubt that it is for most juries.) In other words, the more effective the jury is at distinguishing truth from falsehood, the more of an advantage the truth is to a lawyer.

Why do you think it wouldn't work?

Sorry; I thought my analogy was clearer than it was. I expect it not to work because there just isn't enough leverage between the place we'd be exerting the effort (classrooms) and the place where we'd be expecting the results (courtroom), much as with grade school calisthenics and soldiers. That is, I don't actually anticipate the kind of increase in critical thinking skill among jurors I'm discussing here based on the kind of curriculum I'm discussing here, any more than I expect a typical thirty-year-old to know how to factor a polynomial expression or diagram a sentence.

I'd love to be wrong, though.

TimS

Well, it's an empirical question whether current dis-rationality is caused more by cognitive bias or by bounded rationality with the bound set "too low." If it's the latter, then raising the baseline will improve the correlation between political decisions and truth.

And I know it's seldom wise to bet against motivated cognition, but if there really were more effective dark arts techniques that could be implemented by the average lawyer, then I would expect that the techniques would already be implemented. There's already lots at stake in the average lawyer's job.

What is the difference between a cognitive bias and a bound on rationality? I thought those were two ways of framing the same phenomenon.

I like your theory of efficient dark arts. (I hope you call it the efficient-dark-arts hypothesis.) I think you're right that lawyers are already strongly motivated to exploit all effective dark-arts techniques. I was not suggesting the existence of unexploited yet effective techniques. I was suggesting that changing the "baseline" (is this a specific application of raising the sanity waterline?) may increase the effectiveness of certain techniques, from pointlessness to practicability.

Here it is again, more concretely. There would be no point in constructing a fallacious argument, in the language of Bayesian probability, to persuade someone who had no previous understanding of that language. In the present world, that's almost everyone. So lawyers don't spend much time concocting pseudo-Bayesian sophisms. But if enough people learn about probability theory, it might pay for lawyers to do just that.

Thus educating lots of people in probability could usher in new fallacies. This is what we should expect from giving motivated thinkers new ways to think--they'll think in new ways, motivatedly.

TimS

As I understand the term, bounded rationality (a.k.a. rational ignorance) refers to the theory that a person might make the rational (perhaps not our definition of rational) decision not to learn more about some topic. Consider Alice. On balance, she has reason to trust the reliability of her education, and her education did not mention existential risk from AI going FOOM (which she has reason to expect would be mentioned if it was a "major" risk). Therefore, she does not educate herself about AI development or advocate for sensible AI policies. If Alice were particularly self-aware, she'd probably agree that any decisions she made about AI would not be rational because of her lack of background knowledge of AI. But that wouldn't bother her because she doesn't think that any AI-decisions exist in her life.

Note that the rationality of her ignorance depends on the correctness of her assertion that AI-decisions do not exist in her life. As the Wiki says, "Rational ignorance occurs when the cost of educating oneself on an issue exceeds the potential benefit that the knowledge would provide." Rational ignorance theory says that this type of ignorance is common across multiple topics.

Compare that to Bob, who has taken AI classes but is not concerned about existential risk from AI because he does not want to believe in existential risk. That's motivated cognition. I agree that changing the level of ignorance would change the words in the fallacies that get invoked, but I would expect that the amount of belief in the fallacies is controlled by the amount of motivated cognition, not by the amount the audience knows. Consider how explicitly racist arguments are no longer acceptable, but those with motivated cognition towards racism are willing to accept equally unsupported-by-evidence arguments that have the same racist implications. They "know" more, but they don't choose better.

I thought rational ignorance was a part of bounded rationality--people do not investigate every contingency because they do not have the computational power to do so, and thus their decision-making is bounded by their computational power.

You have distinguished this from motivated cognition, in which people succumb to confirmation bias, seeing only what they want to see. But isn't a bias just a heuristic, misapplied? And isn't a heuristic a device for coping with limited computational capacity? It seems that a bias is just a manifestation of bounded rationality, and that this includes confirmation bias and thus motivated cognition.

TimS

Yes, bounded rationality and rational ignorance are consequences of the limits of human computational power. But humans have more than enough computational power to do better than in-group bias, anchoring effects, deciding when to follow authority simply because it is authority, or believing something because we want it to be true.

We've had that capacity since recorded history began, but ordinary people tend not to notice that they are not considering all the possibilities. By contrast, it's not uncommon for people to realize that they lack some relevant knowledge. Which isn't to say that realization is common or easy to get people to admit, but it seems possible to change, which is much less clear for cognitive bias.

I strongly disagree with this. The improvements in reasoning, applied bilaterally AND to the jury, work in favour of the side that is in fact correct, just as an increase in everyone's IQ (including the jury's) would.

Consider a game of chess. There is one side, the other side, and an arbiter. If neither player cares about the rules of chess, and the arbiter is incompetent at enforcing them, you fail to play a game of chess at all. If you teach the rules of chess to all three and provide chess training to the players, you set up a situation where the better-reasoning opponent wins (and if one side lacks some pieces at the start, that side will reliably lose).

Consider a game of chess. Giving access to chess training to both sides makes the smarter (better-reasoning) side more likely to win.

Doesn't sound likely. I'd expect the advantage of superior reasoning to be reduced by equal amounts of training for both sides.

I wasn't good at making the analogy... the important thing is that there is also an arbiter (the jury). When that arbiter does not even know the rules of chess, you get a major problem. It is absolutely essential that the arbiter knows the rules of chess perfectly.

With regard to training the players, the issue is that without training the outcome gets decided by the mistake rate. Kasparov is guaranteed to win against me if we had equal amounts of training. He is not guaranteed to win against me if we were both playing chess for the first time in our lives.

I wasn't good at making the analogy

I think I actually agree with the point you are trying to make with the analogy.

Kasparov is guaranteed to win against me if we had equal amounts of training. He is not guaranteed to win against me if we were both playing chess for the first time in our lives.

I would still place the difference the other way. I'd give naive Kasparov (even) better odds against naive you than I would give trained Kasparov against a you of equal training and experience.

I would still place the difference the other way. I'd give naive Kasparov (even) better odds against naive you than I would give trained Kasparov against a you of equal training and experience.

Well, chess is prone to draws, so maybe I'd be able to reliably draw against Kasparov if we had equal training and I were nearly as smart as he is. I do think, though, that my chances of winning would be massively better if we both played chess for the first time after reading the rules out of a book. I think it'd be close to 50/50 at the start, with him gaining the lead with each successive game and winning reliably after a dozen or so games.

But the point with the legal debate is that one side is right and the other side is wrong; things do not start off symmetrical. Say we put random people to playing chess, with one side missing a queen. I play chess; it seems totally self-evident to me that among trained players the one lacking the queen is guaranteed to lose, while for untrained players the outcome will be much more random.

For trials, I think the original point was to educate the jury, paralleling the teaching of the rules to the chess arbiter. The jury doesn't need to think very hard to verify an argument; it only needs to know the valid rules of reasoning. Suppose we were to have a debate on some fact of mathematics: one well-trained guy trying to prove the Pythagorean theorem, and another well-trained guy trying to prove something wrong (e.g. approximating the hypotenuse with a staircase and "proving" c = a + b), in front of a jury which decides. With today's juries, chances are the jury would be unreliable in its decision. But it is a fact that trained mathematicians are not so easily misled.

1. Here are a couple links you may find interesting: Steve Landsburg opines on reasonable doubt and gwern cites a Bayesian approach to meting out justice. (Mmm, justice...my favorite kind of mete.)

2. I wonder which has a more adverse effect on trial outcomes, cognitive limitations or poor incentives. This post seems to address the former.

3. I object to the title. It's a sloppy use of the term "rational." I suggest "improved" or a synonym.

Agreed on title.

You might want to look at what Judge Richard Posner has written on the topic, for instance in Frontiers of Legal Theory.

[anonymous]

This is an excellent post. I've posted no-Karma articles to my page (can you see them? I haven't been here long...), dealing with some of these ideas. This is the subject I'm most interested in. Keep up the great posting, and feel free to call me. Jake Witmer 312-730-4037 (in and out of good cell locations this month)

[This comment is no longer endorsed by its author]
[anonymous]

You might want to look at what Judge Richard Posner has written on the topic, for instance in Frontiers of Legal Theory.

[This comment is no longer endorsed by its author]

I'm interested in how courts and juries might use rational techniques to arrive at correct decisions on guilt.

If letting the guilty party go free is net positive utility summed over all agents, I desire to let the guilty party go free. If punishing the guilty party is net positive utility summed over all agents, I desire to punish the guilty party. Let me not become attached to "justice-as-punishing-the-guilty" as a terminal goal.

If letting the guilty party go free is net positive utility summed over all agents, I desire to let the guilty party go free. If punishing the guilty party is net positive utility summed over all agents, I desire to punish the guilty party. Let me not become attached to "justice-as-punishing-the-guilty" as a terminal goal.

To the extent that shokwave has the power to enforce this will I desire to thwart and cripple shokwave's influence.

It's a step up from the most primitive 'happiness' utilitarians, but only a small one. It is a 'Justice' function that merely pushes, to whatever extent possible, in the direction of tiling the universe with as many agents as possible with as much 'utility' as possible.

It's not entirely clear what the "probability" numbers here actually are, and whether two columns are needed.

The simplest approach is just to use odds ratios. It doesn't matter whether the likelihoods you give are correct in absolute terms, so long as they are proportional to each other. It would look as follows:

Prior: 1:1

Evidence:

- 1:99
- 1:1
- 1:9
- 1:19
- 1:9
- 1:9

Product: 1:1371249

So there's 1371249-to-one odds in favor of the defense.
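The multiplication above can be checked mechanically. The prior and the six likelihood ratios below are the comment's illustrative numbers (taken from the table in the original post), not vetted figures from the actual case.

```python
from fractions import Fraction

# Combine a prior with a series of likelihood ratios by multiplying odds.
# Odds are expressed as prosecution : defence, so Fraction(1, 99) means 1:99.

prior = Fraction(1, 1)            # 1:1 prior
likelihood_ratios = [
    Fraction(1, 99),              # phone at cottage at 22:13
    Fraction(1, 1),               # DNA evidence correct
    Fraction(1, 9),               # staged vs. unstaged break-in
    Fraction(1, 19),              # conspiracy vs. burglary gone wrong
    Fraction(1, 9),               # two knives vs. one
    Fraction(1, 9),               # time of death 23:30 vs. 21:10
]

posterior = prior
for lr in likelihood_ratios:
    posterior *= lr               # odds multiply, assuming independent evidence

print(posterior)                  # 1/1371249, i.e. 1:1371249 odds
```

Using `Fraction` keeps the arithmetic exact; the independence assumption baked into the multiplication is itself one of the pitfalls the original post asks about.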

There are a few problems with this:

First, the prior shouldn't be 1:1. You'd probably use something like how likely it is for someone that close to the victim to commit the crime. If you really want to be rigorous, use 1:6840507000 as the prior and treat the relationship and all the rest as evidence, but that's probably overkill.

Second, there are biases that will cause problems that compound exponentially with the amount of evidence. You will tend to notice evidence more on one side, and you will tend to think evidence is more likely for one side than it really is. This would be hard to account for. The best I can figure is to keep track of an error term that grows exponentially with each step. You might end up with

(1/64 to 64) : (1371249/64 to 1371249×64) = 1 : (334.777588 to 5616635904)

From there, you could integrate in some way I can't think of right now, and you'd probably get something on the order of 2000:1 in favor of the defendant.
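The error band in the comment above can be reproduced as follows. The factor-of-64 uncertainty (64 = 2^6, i.e. each of the six estimates assumed off by at most a factor of 2) is the comment's own illustration, not a measured quantity.

```python
# Sketch of the exponentially growing error term described above.
# Point estimate: 1:1371249 odds in favor of the defense.
# Each side of the ratio is assumed uncertain by a factor of 64 (2**6),
# so the ratio itself is uncertain by 64 * 64 = 4096 in each direction.

point = 1371249                   # defence : prosecution odds, point value
band = 2 ** 6                     # 64: six estimates, each off by up to 2x

low = point / band ** 2           # ~334.78 : best case for the prosecution
high = point * band ** 2          # 5616635904 : best case for the defence

print(low, high)
```

Even the pessimistic end of this interval (roughly 335:1 for the defence) stays well past most intuitive "reasonable doubt" thresholds, which is presumably why the comment expects the integrated estimate to remain strongly in the defendant's favor.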
