Pressure to publish increases scientists' vulnerability to positive bias

by lukeprog · 1 min read · 8th Sep 2011 · 9 comments



More evidence for this hypothesis:

The growing competition and “publish or perish” culture in academia might conflict with the objectivity and integrity of research, because it forces scientists to produce “publishable” results at all costs. Papers are less likely to be published and to be cited if they report “negative” results (results that fail to support the tested hypothesis). Therefore, if publication pressures increase scientific bias, the frequency of “positive” results in the literature should be higher in the more competitive and “productive” academic environments. This study verified this hypothesis by measuring the frequency of positive results in a large random sample of papers with a corresponding author based in the US. Across all disciplines, papers were more likely to support a tested hypothesis if their corresponding authors were working in states that, according to NSF data, produced more academic papers per capita. The size of this effect increased when controlling for state's per capita R&D expenditure and for study characteristics that previous research showed to correlate with the frequency of positive results, including discipline and methodology. Although the confounding effect of institutions' prestige could not be excluded (researchers in the more productive universities could be the most clever and successful in their experiments), these results support the hypothesis that competitive academic environments increase not only scientists' productivity but also their bias. The same phenomenon might be observed in other countries where academic competition and pressures to publish are high.

Fanelli (2010). Do Pressures to Publish Increase Scientists' Bias? An Empirical Support from US States Data. PLoS ONE 5(4): e10271.


I'm completely puzzled by the choice of state as the independent variable. The organization of the U.S. university system does not follow state lines in any straightforward way, so the choice of state can show up in the causal mechanism only through its correlation with some concrete elements in the organization of the system. However, from what I see, the paper doesn't even speculate on how exactly this could work, and the assumption that competition among researchers operates at the state level strikes me as utterly absurd.

Availability bias? The US is conveniently divided up into 50 chunks, and a lot of statistical information is aggregated at state level, so there's a great convenience for researchers in dividing things up that way, whether it makes sense or not.

From my perspective the big question is the magnitude of these effects: does this just reduce the marginal gain of more scientists/funding for science, or does it change the sign, so that beyond a certain point hiring more scientists actually slows progress? How costly are these false positive results?

Epidemiology is pretty expensive as it is. The sign seems to still be positive for spending more on scientists - diminishing returns have set in hard in some areas like pharmaceuticals, but I haven't heard of an actual net negative.

I suspect that you are correct, but I have to wonder: if there were a net negative, how would we easily tell?

Obviously scientists are not constant in how many problems they cause, or else the answer would be either 'science could never get off the ground' (if they caused more problems than they solved) or 'they're not a net negative' (since science is making progress and obviously did get off the ground). So presumably there's some sort of changing marginal return; usually, marginal returns diminish.

What does it look like if marginal returns are positive? You toss in 1 more scientist and get n more units of scientific output. What does it look like if marginal returns have fallen to 0? You toss in 1 more scientist and get 0 more units of scientific output. And if marginal returns have become negative, then you toss in 1 more scientist and see -n units, i.e. scientific output in absolute terms falls.

Currently, all the datapoints I know of, like the pharmaceutical industry, point to diminishing returns (e.g. a fall in per-capita output, but not in absolute output), and not negative ones. But it's very hard to quantify scientific output...
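The distinction between diminishing and negative marginal returns above can be made concrete with a toy sketch. The production functions and numbers below are hypothetical illustrations, not estimates from any data: one concave function where marginals shrink but stay positive, and one hump-shaped function where total output actually falls past a peak.

```python
import math

def output_diminishing(n):
    # Concave production (assumed form): each added scientist
    # contributes less, but total output keeps rising.
    return 10 * math.sqrt(n)

def output_negative(n):
    # Hump-shaped production (assumed form): beyond the peak at
    # n = 50, adding scientists lowers total output.
    return 10 * n - 0.1 * n ** 2

def marginal(f, n):
    # Marginal return of the (n+1)-th scientist.
    return f(n + 1) - f(n)

# Diminishing but positive: marginals shrink yet stay above zero.
assert marginal(output_diminishing, 10) > marginal(output_diminishing, 100) > 0

# Negative returns past the peak: absolute output falls.
assert marginal(output_negative, 10) > 0
assert marginal(output_negative, 60) < 0
```

The empirical question in the thread is which of these two regimes science funding is actually in; the assertions just encode what each regime would look like in the data.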

But it's very hard to quantify scientific output...

That doesn't stop people trying. Do other countries besides the U.K. have a similar system?

That's my guess for this particular effect, and overall, but I have heard plausible arguments that when you add up all the different externalities, their collective effect is large, proportionally. For instance, competition for grants diverts a lot of time from good scientists to grantsmanship, and reduces the autonomy of young investigators who might otherwise undertake higher-risk research. Further, it seems plausible that at the margin, funding brings in lower-quality scientists who produce less value relative to the negative externalities they generate. More quantitative data would be very nice for testing these claims.

Also, from what I've observed in practice, in many areas the publish-or-perish competition tends to produce not only this sort of bias, but also a perverse competition in writing papers not to present the findings clearly and objectively, but to give them the maximum self-promotional spin short of outright lying and fabrication. This problem is especially severe in areas that have run out of low-hanging fruit.