Replication Crisis
Personal Blog

Statistical error in half of neuroscience papers

by Paul Crowley
9th Sep 2011

The statistical error that just keeps on coming, Ben Goldacre, Guardian, Friday 9 September 2011 20.59 BST

We all like to laugh at quacks when they misuse basic statistics. But what if academics, en masse, deploy errors that are equally foolish? This week Sander Nieuwenhuis and colleagues publish a mighty torpedo in the journal Nature Neuroscience.

They've identified one direct, stark statistical error so widespread it appears in about half of all the published papers surveyed from the academic neuroscience research literature.

[...]

How often? Nieuwenhuis looked at 513 papers published in five prestigious neuroscience journals over two years. In half of the 157 studies where this error could have been made, it was. They broadened their search to 120 cellular and molecular articles in Nature Neuroscience during 2009 and 2010: they found 25 studies committing this fallacy, and not one single paper analysed differences in effect sizes correctly. These errors appear throughout the most prestigious journals in the field of neuroscience.

Update: Erroneous analyses of interactions in neuroscience: a problem of significance (PDF)
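The error in question is inferring a difference between two effects because one is statistically significant and the other is not; as Gelman and Stern put it, the difference between "significant" and "not significant" is not itself statistically significant. A minimal sketch of the point, with invented numbers, using the standard z-test on the difference between two estimates:

```python
# Illustration (numbers invented) of the error the paper describes: effect 1
# is "significant", effect 2 is not, yet the difference between them is far
# from significant. The difference itself must be tested directly.
import math
from scipy import stats

b1, se1 = 25.0, 10.0   # effect 1: z = 2.5, p ~ 0.012 -> "significant"
b2, se2 = 10.0, 10.0   # effect 2: z = 1.0, p ~ 0.32  -> "not significant"

# Direct test of the difference between the two effects:
z = (b1 - b2) / math.sqrt(se1**2 + se2**2)   # = 15 / 14.1 ~ 1.06
p = 2 * stats.norm.sf(abs(z))
print(f"z = {z:.2f}, p = {p:.2f}")           # p ~ 0.29: not significant
```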
7 comments, sorted by top scoring

lessdazed

This provides an excellent way for readers to infer the competence of the experimenter.

What surprises me is that the abstract doesn't say in how many of the incorrectly analyzed papers the statistically significant result depended on making the error. That would tell us something about how much of this is due to fraud. If all of the incorrect papers depended on the misinterpretation to get publishable p-values, that would be very disturbing.

satt

Andrew Gelman has been quite rightly beating this drum for a while.

BillyOblivion

I suspect that if you were to offer $100 for every statistical error found in published scientific and medical papers, a lot of stats majors could get their student loans paid off.

falenas108

A coworker in my lab had this problem: she was trying to say which of several related measures changed the most, and she refused to listen when I said she couldn't support that claim statistically.

Incidentally, she had no formal training in statistics (or it was so long ago that she didn't remember the connection between standard deviation and variance).
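For concreteness, here is a minimal sketch (simulated data, nothing from the lab in question) of why "A's change was significant and B's wasn't" cannot support "A changed the most": the difference between the changes has to be tested directly.

```python
# Two related measures with similar true changes: one may clear p < 0.05
# while the other misses it, even though the difference between the changes
# is nowhere near significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20
change_a = rng.normal(0.8, 1.5, n)   # measure A: true mean change 0.8
change_b = rng.normal(0.5, 1.5, n)   # measure B: true mean change 0.5

# Separate one-sample tests: A may look "significant" while B does not...
print(stats.ttest_1samp(change_a, 0.0))
print(stats.ttest_1samp(change_b, 0.0))

# ...but "A changed more than B" requires testing A - B directly
# (a paired test, if both measures come from the same subjects).
print(stats.ttest_rel(change_a, change_b))
```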

RobertLumley

Well, as purely anecdotal evidence, the (neuroscience) lab I was working in this summer analyzed our data correctly (with regard to this). Although, to be fair, I didn't notice it until my PI pointed it out to me...

DanielLC

You didn't notice that your data was analyzed correctly?

RobertLumley

I was trying to say more than we actually could say. I was the one analyzing the data.

We were studying the growth of neurons and patterns of gene regulation in the presence of various inhibitors of the depolarization-regulation mechanism. Say we had inhibitors A and B.

We would have done expression profiles for cultures like these, for example:

5 mM KCl (physiological conditions)
25 mM KCl (depolarizing conditions)
25 mM KCl + inhibitor A
25 mM KCl + inhibitor B
25 mM KCl + inhibitors A and B

I wanted to make comparisons between the A+B+25 cultures and the 25 mM culture (and the 5 mM one), but you can't do that. You can only compare A+B to its controls: A alone and B alone.
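To illustrate, a sketch of the comparisons that design does support. The condition names follow the list above; the expression values and group sizes are made up:

```python
# Hypothetical expression values per culture condition (numbers invented).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
expr = {
    "5mM":      rng.normal(1.0, 0.2, 6),   # physiological control
    "25mM":     rng.normal(3.0, 0.2, 6),   # depolarizing conditions
    "25mM+A":   rng.normal(2.2, 0.2, 6),
    "25mM+B":   rng.normal(2.5, 0.2, 6),
    "25mM+A+B": rng.normal(1.8, 0.2, 6),
}

# Supported: the combined-inhibitor culture against its immediate controls.
for control in ("25mM+A", "25mM+B"):
    t, p = stats.ttest_ind(expr["25mM+A+B"], expr[control])
    print(f"25mM+A+B vs {control}: t = {t:.2f}, p = {p:.3g}")

# Asking whether A and B together do more than expected from each alone is
# an interaction question, and needs an explicit interaction test (e.g. a
# two-way ANOVA), not a chain of separate significance tests.
```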
