Measuring police bias using simple ratios doesn't work. You can never cleanly separate the impact of race from other factors associated with race.

Instead, I want augmented-reality goggles that make race invisible. Then we could run the following experiment:

  • Have half of police wear race-invisibility goggles for a year.
  • Have the other half wear non-invisibility goggles.
  • Look at the difference between the two groups.

The police with invisibility goggles would not end up with equal statistics with respect to race, because race is correlated with many things other than how people appear. However, since the only difference between the two groups is whether police can see race, the difference between them reveals exactly the impact of police bias.

We can't do this experiment, but we can do a kind of low-tech approximation. Instead of augmented-reality goggles, we use the geometry of the earth and sun. [continues here]

28 comments

Instead of augmented reality goggles we use the geometry of the earth and sun.

Huh?

My guess is that the problem was that I didn't make it clear this is just the introduction from the linked post? Sorry, I edited to clarify.

Yes, that was it – thanks! No worries tho!

I'm not aware of any good and common convention here for handling link posts. I like to post the link and then my own separate commentary. But I've also seen a lot of people go to the opposite extreme and cross-post the entire piece here.

For this post, it would have been much less confusing had you quoted the entire last paragraph of the intro, and also added something like "Read the rest here". I like putting "[Link] ..." in the title of my link posts here too so that that info is available for people skimming titles. (I don't think that's always necessary or should be required; just a personal preference.)

What's the theory for why "state patrol agencies" are less racist/biased than "municipal police departments"?

This is a hard topic to discuss rationally (or reasonably) because of politics. I also worry there's a strong 'mistake theory vs. conflict theory' dynamic at play.

I like your idea of analyzing a bunch of dimensions, e.g. age, gender, income/wealth, education, and political identification, for things like police traffic stops and vehicle searches. That's something Andrew Gelman suggests a lot:

When you do have multiple comparisons, I think the right way to go is to analyze all of them using a hierarchical model—not to pick one or two or three out of context and then try to adjust the p-values using a multiple comparisons correction. ...

To put it another way, the original sin is selection. The problem with p-hacked work is not that p-values are uncorrected for multiple comparison, it’s that some subset of comparisons is selected for further analysis, which is wasteful of information. It’s better to analyze all the comparisons of interest at once.
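The "analyze all the comparisons at once" idea can be sketched with a minimal empirical-Bayes version of partial pooling, which is the simplest cousin of the hierarchical models Gelman describes. Everything below is a hypothetical illustration (simulated group means, made-up variances), not an analysis of any of the studies discussed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: an outcome (say, a search rate) measured across many
# demographic groups. True group effects are drawn from a common distribution,
# which is the "hierarchy" that ties the comparisons together.
n_groups, n_per_group = 20, 30
true_effects = rng.normal(0.0, 1.0, n_groups)
data = true_effects[:, None] + rng.normal(0.0, 2.0, (n_groups, n_per_group))

group_means = data.mean(axis=1)
grand = group_means.mean()

# Sampling variance of each group mean (assuming roughly equal within-group
# variance), and estimated between-group variance (floored at zero).
sigma2 = data.var(axis=1, ddof=1).mean() / n_per_group
tau2 = max(group_means.var(ddof=1) - sigma2, 0.0)

# Partial pooling: shrink each raw group mean toward the grand mean.
# Noisy means get pulled in hard; well-estimated means barely move.
shrink = tau2 / (tau2 + sigma2)
pooled = grand + shrink * (group_means - grand)
```

The point of analyzing every comparison jointly like this, rather than cherry-picking one group and correcting p-values, is that the shrinkage automatically discounts extreme-looking groups in proportion to how noisy they are.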

It'd be nice if the researchers for the studies you reference in your post had also published their data. (Did they? I expect they didn't – but I haven't checked.)