Jonah Lehrer has up another of his contrarian science articles: "Trials and Errors: Why Science Is Failing Us".

Main topics: the failure of drugs in clinical trials, diminishing returns to pharmaceutical research, doctors over-treating, and the Humean distinction between causation and correlation, with some Ioannidis mixed throughout.

See also "Why epidemiology will not correct itself"


In completely unrelated news, Nick Bostrom is stepping down as Chairman of the Board of the IEET.

20 comments

Huh, I was expecting Science to mean "science as a social institution," but he really is making the strong claim that science as a way of learning things is "failing us" because the human body is complicated. Where of course the problem is, "failing us relative to what?"

[-]FAWS12y40

Failing us relative to our expectations.

It's not particularly failing me relative to my expectations. And why does he use, say, the Pfizer executive's expectations as an example of something that science is failing by? "Our expectations" seems suspiciously similar to "all expectations ever." Or, more likely, "expectations the author thought it would be a good idea to have had of science when writing the article."

[-]FAWS12y60

Well, most people seem to be surprised that the majority of medical science results (or at least a high percentage) turns out to be bogus.

see: social institution vs. way of learning things.

[-][anonymous]12y00

the majority of medical science results (or at least a high percentage) turns out to be bogus.

I assume that you really mean "the majority of drug results (or at least a high percentage) turns out to be ineffective"? A claim that is still far from uncontroversial.

Edit: Change "drug" results to "epidemiology".

[-]FAWS12y10

Drug results and correlation studies, both environmental and genetic, mostly. Which should be high enough volume that the "at least a high percentage" part should be true even if you add more reliable types of research, no? Or is medical science the wrong word for the category that includes both?

[-][anonymous]12y00

How much is a high percentage?

Or is medical science the wrong word for the category that includes both?

I do think so. A lot of pre-clinical medical science is more about understanding specific mechanisms than about looking at correlations and mapping out risk factors.

Drug results and correlation studies, both environmental and genetic, mostly.

Do you have some data? I do agree that it's hard to actually learn something solid from epidemiology; biology is complicated, and factors do not usually add up in any intuitive way. But then there are categories where epidemiology is invaluable: take, for example, people with hereditary colon cancer, where the majority (with a specific set of mutations) get colon cancer. But you might be right that a lot of it is not really useful information.

[-]FAWS12y30

How much is a high percentage?

Let's say more than 20%.

I do think so. A lot of pre-clinical medical science is more about understanding specific mechanisms than about looking at correlations and mapping out risk factors.

I didn't necessarily mean to exclude things like that, just to include both of the categories mentioned.

Do you have some data?

http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124

Of course there is data. Besides the Ioannidis citations in the linked article, I also linked my previous post on the topic, which, among other things, links to my section in the DNB FAQ on this topic with dozens of links/citations.
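The core argument of that Ioannidis paper can be sketched as a positive-predictive-value calculation: the chance that a "significant" finding is true depends on the prior odds that the hypotheses being tested are true at all. The power and prior-odds numbers below are my own illustrative assumptions, not figures from the paper:

```python
# Back-of-the-envelope sketch of Ioannidis's positive predictive value
# (PPV) argument from the linked PLoS Medicine paper.  The parameter
# values are illustrative assumptions, not numbers from the paper.

def ppv(prior_odds, power=0.8, alpha=0.05):
    """Probability that a statistically significant finding is true.

    prior_odds: R, the ratio of true to false relationships tested.
    power:      1 - beta, the chance of detecting a real effect.
    alpha:      the significance threshold (false-positive rate).
    """
    return (power * prior_odds) / (power * prior_odds + alpha)

# A mature field where 1 in 2 tested hypotheses is true:
print(round(ppv(1.0), 3))    # 0.941 -- most positive findings are real

# Exploratory gene-association work, where maybe 1 in 1000 is true:
print(round(ppv(0.001), 3))  # 0.016 -- almost every positive is bogus
```

The formula makes the thread's point quantitative: the same p < 0.05 machinery yields mostly true findings or mostly false ones depending entirely on the base rate of true hypotheses in the field.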

[-][anonymous]12y00

My bad: I only browsed through "Why Science Is Failing Us" and behaved kind of like a politician; I'll do my homework before opening my mouth next time.

But I still think that one should say "medical epidemiology" instead of the cluster word "medical science".

How much is a high percentage?

From the article:

One study, for instance, analyzed 432 different claims of genetic links for various health risks that vary between men and women. Only one of these claims proved to be consistently replicable. Another meta review, meanwhile, looked at the 49 most-cited clinical research studies published between 1990 and 2003. Most of these were the culmination of years of careful work. Nevertheless, more than 40 percent of them were later shown to be either totally wrong or significantly incorrect.

Those didn't analyze all of medicine, of course, but it does sound pretty bad for the overall percentage.

Then our expectations are wrong. The effectiveness of science should add up to normality.

Correct, of course that's still a problem.

[-]Pfft12y10

I guess one qualitative difference is that drug companies are now cutting down on research, suggesting that area of science has passed the point where it can pay for itself.

Something similar happened in particle physics: in the early 20th century experiments were cheap (and fit on a tabletop), yet the value of the discoveries was immense (x-rays, nuclear power). Nowadays the experiments needed to make new discoveries are staggeringly expensive (LHC, SSC), and they are not expected to have any technological implications at all (since the new science will only be relevant under extreme conditions). So investing in particle physics research went from being free money to being a net cost.

A better subtitle for the article would be "why statistics is failing us".

Summary: "Coincidences exist."

Most recently, two leading drug firms, AstraZeneca and GlaxoSmithKline, announced that they were scaling back research into the brain. The organ is simply too complicated, too full of networks we don’t comprehend.

Of course, pharmaceutical research into the brain isn't the same as cognitive science research into the brain, but still, I'm updating to a somewhat lower estimate of P(the brain will be reverse-engineered during the next 50 years) as a result of reading this. (Though there are still partial algorithmic replications of the hippocampus and the cerebellum, which do make it seem relatively probable that the reverse engineering will succeed nevertheless.)

Interesting article, thanks for directing my attention towards it.

Reading through the comments, we all seem to agree: there's nothing wrong with science. (I've grown to expect misleading titles and thesis statements that push too far, it seems part of the blog/internet culture, and the article can still be read for the interesting connecting bits.)

There's nothing wrong with "science" ... I interpret the article as pointing out the problem of induction in the context of a complex system with a limited number of observations. For example, an animation is a very complex system -- much more complex than Newtonian physics, requiring a model of the specific intentions of a human mind.

Given an observation, the scientific method goes, you form a hypothesis. Then you test that hypothesis, especially for the broader context that you would like to apply it to. Michotte's subjects formed a hypothesis about blue and red balls in a set of animations that would not hold up to further observations. Likewise, Pfizer formed hypotheses about cholesterol interactions in human systems that did not hold up. This is the scientific method working, just as well as ever.

An example of the scientific method not working would be experiments that change their behavior depending on what your expectations are and what hypotheses you are forming (excluding anything in psychology for the moment). For example, it would be really weird if objects knew you expected them to fall down due to gravity and were just obliging. (The scientific worldview rejects that sort of hypothesis universally in the absence of any evidence for it.)

It's unfortunate that the important points in this article are surrounded by such fallacious statements as "It’s mystery all the way down." I would love to use the good parts as a discussion piece elsewhere, but it's not worth the risk that people will only take away the author's irrational conclusions.