Dealing with the high quantity of scientific error in medicine

by NancyLebovitz · 3 min read · 25th Oct 2010 · 61 comments


Replication Crisis, Medicine

In a recent article, John Ioannidis describes a very high proportion of medical research as wrong.

Still, Ioannidis anticipated that the community might shrug off his findings: sure, a lot of dubious research makes it into journals, but we researchers and physicians know to ignore it and focus on the good stuff, so what’s the big deal? The other paper headed off that claim. He zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community’s two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals. These were articles that helped lead to the widespread popularity of treatments such as the use of hormone-replacement therapy for menopausal women, vitamin E to reduce the risk of heart disease, coronary stents to ward off heart attacks, and daily low-dose aspirin to control blood pressure and prevent heart attacks and strokes. Ioannidis was putting his contentions to the test not against run-of-the-mill research, or even merely well-accepted research, but against the absolute tip of the research pyramid. Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable. That article was published in the Journal of the American Medical Association.

Part of the problem is that surprising results get more interest, and surprising results are more likely to be wrong. (I'm not dead certain of this-- if the baseline beliefs are highly likely to be wrong, surprising beliefs become somewhat less likely to be wrong.) Replication is boring. Failure to replicate a bright shiny surprising belief is boring. A tremendous amount isn't checked, and that's before you start considering that a lot of medical research is funded by companies that want to sell something.
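The base-rate point above can be made concrete with a standard Bayesian calculation (the numbers below are illustrative assumptions, not data from Ioannidis' paper): the chance that a "statistically significant" finding is actually true depends heavily on the prior odds that the tested hypothesis was true in the first place.

```python
# Sketch of the base-rate argument: P(true | positive result) under
# conventional power (0.8) and significance threshold (0.05).
# The priors 0.5 and 0.05 are made-up illustrative values.

def positive_predictive_value(prior, power=0.8, alpha=0.05):
    """P(hypothesis true | study reports a positive result)."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# A field testing unsurprising hypotheses (half of them true):
print(round(positive_predictive_value(0.5), 2))   # -> 0.94

# A field chasing surprising hypotheses (1 in 20 true):
print(round(positive_predictive_value(0.05), 2))  # -> 0.46
```

Under these assumptions, a significant result in the "surprising" field is wrong more often than not, which is essentially Ioannidis' headline claim.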

Ioannidis' corollaries:

Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.
Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.
Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.
Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.
Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.
Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.
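Corollaries 1 and 2 can be illustrated with a toy simulation (my own assumed setup, not Ioannidis' code): when samples are small and the true effect is small, the subset of studies that clears p < 0.05 systematically overestimates the effect — the so-called winner's curse.

```python
# Simulate many small two-group studies of a small true effect, then look
# only at the "significant" ones, as a journal filter would.
# TRUE_EFFECT and N are illustrative assumptions.

import random
import statistics

random.seed(0)
TRUE_EFFECT = 0.2   # small true effect, in standard-deviation units
N = 20              # small sample per group

significant_estimates = []
for _ in range(2000):
    treatment = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    control = [random.gauss(0, 1) for _ in range(N)]
    diff = statistics.mean(treatment) - statistics.mean(control)
    se = (statistics.stdev(treatment) ** 2 / N +
          statistics.stdev(control) ** 2 / N) ** 0.5
    if diff / se > 1.96:  # crude z-test for "significance"
        significant_estimates.append(diff)

# The published (significant) studies report an effect well above 0.2.
print(round(statistics.mean(significant_estimates), 2))
```

The filter only passes studies whose noisy estimate happened to be large, so the "literature" reports roughly triple the true effect.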

The culture at LW shows a lot of reliance on small psychological studies that invite large inferential leaps-- for example, the finding that doing a good deed leads to worse behavior later. Please watch out for that.

A smidgen of good news: Failure to Replicate, a website about failures to replicate psychological findings. I think this could be very valuable, and if you agree, please boost the signal by posting it elsewhere.

From Failure to Replicate's author-- A problem with metastudies:

Eventually, someone else comes across this small literature and notices that it contains “mixed findings”, with some studies finding an effect, and others finding no effect. So this special someone–let’s call them the Master of the Gnomes–decides to do a formal meta-analysis. (A meta-analysis is basically just a fancy way of taking a bunch of other people’s studies, throwing them in a blender, and pouring out the resulting soup into a publication of your very own.) Now you can see why the failure to publish null results is going to be problematic: What the Master of the Gnomes doesn’t know about, the Master of the Gnomes can’t publish about. So any resulting meta-analytic estimate of the association between lawn gnomes and subjective well-being is going to be biased in the positive direction. That is, there’s a good chance that the meta-analysis will end up saying lawn gnomes make people very happy, when in reality lawn gnomes only make people a little happy, or don’t make people happy at all.
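The file-drawer mechanism described above is easy to demonstrate with a toy simulation (hypothetical numbers, borrowing the quote's lawn-gnome example): run many small studies of an effect that is exactly zero, "publish" only the positive significant ones, and then average the published literature.

```python
# Toy version of the Master of the Gnomes problem: the true effect of lawn
# gnomes on well-being is zero, but only positive "significant" studies
# reach the meta-analyst. All parameters are illustrative assumptions.

import random
import statistics

random.seed(1)
N = 30  # participants per group in each small study

published = []
for _ in range(5000):
    gnomes = [random.gauss(0, 1) for _ in range(N)]      # true effect: zero
    no_gnomes = [random.gauss(0, 1) for _ in range(N)]
    diff = statistics.mean(gnomes) - statistics.mean(no_gnomes)
    se = (2 / N) ** 0.5  # known-variance standard error for simplicity
    if diff / se > 1.96:  # only positive "findings" get written up
        published.append(diff)

# A naive meta-analysis of the published studies sees a clearly positive
# effect even though the true effect is exactly zero.
print(round(statistics.mean(published), 2))
```

The meta-analytic soup is only as representative as the studies that made it into the pot.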

The people I've read who gave advice based on Ioannidis' article strongly recommended eating paleo. I don't think this is awful advice, in the sense that a number of people seem to actually feel better following it, and I haven't heard of disasters resulting from eating paleo. However, I don't know that it's a general solution to the problems of living with a medical system which does necessary work some of the time, but is also wildly inaccurate and sometimes destructive.

The following advice has a purely anecdotal basis, but at least I've heard a lot of it from people with ongoing medical problems. (Double meaning intended.)

Before you use prescription drugs and/or medical procedures, make sure there's something wrong with you. Keep an eye out for side effects and for interactions between medicines. Check for evidence that whatever you're thinking about doing actually helps. Be careful with statins-- they can cause reversible memory problems and permanent muscle weakness. Choose a doctor who listens to you.

Forum about self-experimentation-- note: even Seth Roberts is apt to oversell his results as applying to everyone.

The link to the Failure to Replicate site was found here.