The problem of research innovation, or "disruptiveness", is a good riddle right now for anyone interested in understanding what we often call "technological progress". Holden Karnofsky talks about it in these blog posts, but it seemed worth sharing that the science world is noticing too.

It's a simple problem to articulate:

The number of science and technology research papers published has skyrocketed over the past few decades — but the ‘disruptiveness’ of those papers has dropped.

 

What explains this phenomenon?

“The data suggest something is changing,” says Russell Funk, a sociologist at the University of Minnesota in Minneapolis and a co-author of the analysis, which was published on 4 January in Nature. “You don’t have quite the same intensity of breakthrough discoveries you once had.”

Is it something about the way science is practiced? Is it academia and the publish-or-perish pressure? A few difficult questions to think about in 2023. 

2 comments

It seems like we should be accounting somehow for the total number of papers published in a field. If a constant amount of "disruptive science" occurs per year, then more published papers would simply lower the average disruptiveness per paper. So rather than measuring disruptiveness per paper, maybe we should be looking at disruptiveness per year, or even per worker-year.
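A toy illustration of that dilution point, with made-up numbers that are not from the paper: hold the amount of disruptive work per year fixed, let total output grow, and the per-paper average falls even though nothing about the underlying science has changed.

```python
# Toy illustration (hypothetical numbers): fixed disruptive output per year,
# growing publication volume -> falling average disruptiveness per paper.
disruptive_results_per_year = 10  # assumed constant

for total_papers in (1_000, 10_000, 100_000):
    avg_per_paper = disruptive_results_per_year / total_papers
    print(f"{total_papers:>7,} papers/year -> {avg_per_paper:.4%} disruptive per paper")
```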

While this might well be true, my immediate thought is to ask whether the measure of "disruptive" increases over time for any given paper after it is published; my second thought (on finding it's based on citation patterns) is to ask whether changes in citation methods and fashions over time might spuriously affect this measure.

And:

The authors reasoned that if a study was highly disruptive, subsequent research would be less likely to cite the study’s references, and instead would cite the study itself. Using the citation data from 45 million manuscripts and 3.9 million patents, the researchers calculated a measure of disruptiveness, called the CD index, in which values ranged from –1 for the least disruptive work to 1 for the most disruptive.

wouldn't that tend to increase over time for any given paper?
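For anyone curious about the mechanics, here is a minimal sketch of how a CD-style disruption index can be computed from forward citations, assuming a toy in-memory citation graph (the function and example papers are hypothetical; this follows the Funk & Owen-Smith formulation as I understand it, and the details in the Nature analysis may differ):

```python
# Sketch of a CD-style disruption index for a single focal paper.
# Scoring convention (as I understand the measure):
#   +1  a later paper cites the focal paper but none of its references (disruptive)
#   -1  a later paper cites both the focal paper and its references (consolidating)
#    0  a later paper cites only the focal paper's references
# The index is the mean of these scores, so it ranges from -1 to 1.

def cd_index(focal: str, references: set[str], later_papers: dict[str, set[str]]) -> float:
    """later_papers maps each subsequent paper id -> the set of ids it cites."""
    scores = []
    for cited in later_papers.values():
        cites_focal = focal in cited
        cites_refs = bool(cited & references)
        if not cites_focal and not cites_refs:
            continue  # unrelated paper; it doesn't enter the index
        if cites_focal and not cites_refs:
            scores.append(1)
        elif cites_focal and cites_refs:
            scores.append(-1)
        else:
            scores.append(0)
    return sum(scores) / len(scores) if scores else 0.0

# Hypothetical example: focal paper "F" cites references "A" and "B".
later = {
    "P1": {"F"},       # cites F only            -> +1
    "P2": {"F", "A"},  # cites F and a reference -> -1
    "P3": {"A"},       # cites a reference only  ->  0
}
print(cd_index("F", {"A", "B"}, later))  # (1 - 1 + 0) / 3 = 0.0
```

On that reading, yes, a given paper's score could drift as new citations accumulate; if I recall correctly, the published analysis limits this by fixing a citation window (something like five years after publication) rather than using all citations to date.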