The American Statistician just released a large special issue that outlines why p-values are problematic and compiles many potential alternative approaches.


Here is a summary of John Ioannidis on the topic, partially defending the use of p-values:

Full link, since this one just goes to the frontpage of the blog:

I haven’t ever seen an academic article so direct and even sardonic, especially not one railing against such an established practice. I guess that’s what a Molotov cocktail looks like in print.

That wasn’t just clear and impactful, it was fun to read. Thanks for linking, lifelonglearner.


One of the ideas voiced seems to be that too many scientists (in the hard and soft sciences alike) want to shortcut the work of actually studying and analyzing the data. Put more tersely, too many are lazy. I wonder how much of that is driven by

1) the demands to publish (and the refereeing process) in academia and other research-based organizations

2) Funding by the NSF and similar public-money grant programs.

3) The general view that education and degrees are necessary for successful participation in a modern economy -- particularly when most employment is within large corporate entities.

I also thought it a bit interesting that they mentioned confidence intervals as an alternative. The problem there, it seems, is that too many don't understand what those really are. They see that their estimated value is within a 95% confidence interval and claim there is a 95% probability that their estimate is correct. That is not what the statistic says: the 95% describes the long-run behavior of the interval-constructing procedure, not the probability that any single interval contains the true value.
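A quick simulation makes the distinction concrete. The sketch below (hypothetical; the true mean, sample size, and trial count are arbitrary choices, not from the discussion) repeats the same experiment many times and checks how often the computed 95% interval contains the true mean. It is the collection of intervals that covers the truth about 95% of the time; no single interval carries a 95% probability of being right.

```python
import random
import statistics

random.seed(0)
TRUE_MEAN = 10.0   # known only because we are simulating
N = 30             # sample size per experiment
TRIALS = 2000      # number of repeated experiments
Z = 1.96           # normal critical value for a 95% interval

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, 2.0) for _ in range(N)]
    mean = statistics.fmean(sample)
    sem = statistics.stdev(sample) / N ** 0.5
    lo, hi = mean - Z * sem, mean + Z * sem
    # Coverage is a property of the procedure across repetitions,
    # not of any one (lo, hi) pair.
    if lo <= TRUE_MEAN <= hi:
        covered += 1

print(f"coverage: {covered / TRIALS:.3f}")  # close to 0.95
```

Each individual interval either contains the true mean or it doesn't; the 95% only shows up when you look across many repetitions of the procedure.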

"2) Funding by NSF and similar public money grant program."

Based on both what I've heard and what I've experienced, it's private foundations that have the lower standards, because they are agenda-driven and the people who work there are on a mission to find scientists doing research on whatever the topic-of-the-year is.