The number of times I've read an abstract and later found that the results provided extraordinarily poor evidence for the claims (or, alternatively, extraordinarily good evidence -- it's hard to predict which I will find if I haven't read anything by the authors before) makes this system suspect. This seems partially conceded in the fictive dialogue ("You don't even have to dig into the methodology a lot") -- but it helps to look at it at least a little. I knew a senior academic whose system was as follows: read the abstract (to see if the topic of the paper is of any interest at all) but don't believe any claims in it; then skim the methodology and results and update based on that. This makes a bit more sense to me.
I didn't read the whole post, just the introduction and the bolded lines of the dialogue -- but from what I read, nice post! I think I agree. ETA: For things on the internet, I've also adopted the heuristic of reading the top few comments between reading the introduction and the body of the post.
An important modification in physics: look at the figures. This is such an important shortcut that it should be the absolute first thing on your mind when designing the figures for your paper.
This seems like it trusts the authors to be summarizing their own work accurately. Is that correct? Do you/your friend endorse that assumption?
I'd make a distinction between "The author believes (almost all of) what they are writing." and "The experiments conducted for the paper provide sufficient evidence for what the author writes."
The first one is probably true more frequently than the second. But the first one is also more useful than the second: the beliefs of the author are formed not only by the set of experiments leading to the paper but also through conversations with colleagues, explored dead ends, syntheses of relevant portions of the literature, ...
In contrast, the bar for making the second statement true is incredibly high.
As a result, I don't take what's written in a paper at face value. Instead, I add it to my model as "Author X believes that ...". And I invest a lot more work before I translate this into "I believe that ...".
On discussion boards, some researchers claim with a straight face to be reading 100 papers per month.
That would be a feat even for someone like Nicolas Bourbaki.
And even if researchers wanted to report everything, "high impact journals" often reward extreme brevity and don't allow more than 2000-2500 words to summarize years of research.
Might be interesting to compare with Twitter.
Argyle: What am I supposed to believe about "not reading papers" now? You haven't convinced me that it's actually better than reading a paper carefully.
Belka: That was never my aim; I, too, would prefer reading them carefully. But that's not really an option. My point is just that "not reading papers" is not as bad as it sounds.
Read the important papers carefully. If a paper is obviously not relevant to what you're working on and you don't care about it, then don't read it. "Not reading papers" is really a strategy for figuring out which paper you should read next. If you already know that, then just read the paper.
Sometimes I think trying to keep up with the endless stream of new papers is like watching the news -- you can save yourself time and become better informed by reading up on history (i.e., classic papers and textbooks) instead.
This is a comforting thought, so I’m a bit suspicious of it. But also it’s probably more true for a junior researcher not committed to a particular subfield than someone who’s already fully specialised.
"high impact journals" often reward extreme brevity
I read the linked article and I don't think it supports your claim. The author references a few examples of extremely short abstracts and papers written with the intention of setting records for brevity, then recounts a conversation with a friend about how shorter papers have been proliferating. The article does not provide a strong argument that high impact journals reward extreme brevity in general.
The emperor's new paper.
In a recent lunch conversation, a colleague of mine explained her system for reading scientific papers. She calls the system, tongue-in-cheek, "not reading the paper". It goes as follows:
I love this system, mostly because of the flippant name, but also because it points out that "the emperor is naked". It is a bit of an open secret that "not reading the paper" is the only feasible strategy for staying on top of the ever-increasing mountain of academic literature. But while everybody knows this, not everybody knows that everybody knows this. On discussion boards, some researchers claim with a straight face to be reading 100 papers per month. Skimming 100 papers per month might be possible, but reading them is not.
So why is this not common knowledge? A likely cause is the stinging horror that overcomes me every time I think about how science really shouldn't work. Our institutions are stuck in inadequate equilibria, the scientific method is too weak, statistical hypothesis testing is broken, nothing replicates, there is rampant fraud... and now scientists don't even read academic papers? That's just what we needed.
And yet it moves. For some reason, we still get some new insight out of this mess that calls itself science. Even though scientists don't read papers, the entire edifice does not collapse. How is that? In this post, I will go through some obvious and less obvious problems that come from "not reading the paper", and how those turn out to not be that bad. In the end, I tee up a better system that I will expand on in part two of this post.
And yet it moves.
“Not reading papers” is a fix, not a solution.
As useful as the method might be, it is born of necessity, not because it is the optimal way of conducting research. The scientific paper, as it exists, has one central shortcoming: the author(s) do not know the reader. When writing a paper, the author(s) try to break down their explanations to the lowest common denominator of hypothetical readers. As a consequence, no actual reader is served in an ideal way. A beginner might require a lot more background. An expert might want more raw data and more speculative discussion. A researcher from an adjacent field might need translations for certain terminology. A reader from the future might want to know how the study connects to subsequent work. None of them gets what they want from the average paper.
It would be so much better to always have the chance to talk to the author instead of reading their paper. You could go quickly over the easy parts and go deep on the parts you find difficult. I think this is what is supposed to happen in the question session after a talk, but when there are more than three people in the audience the idea rapidly falls apart. I know that this is what usually happens at conferences and workshops during tea time, where senior researchers talk with each other to get actually useful descriptions of each other's research projects.
This clearly does not scale. What would be a better solution? Arbital tried something in this direction by offering explanations at different “speeds”, but the platform has become defunct. Elicit from Ought might have something like this on the menu at some point in the future, but at the moment its results are not personalized (as far as I can tell). What we ideally want is a “researcher in a box” that can answer questions about their field of expertise on demand. Whenever the original researcher finishes a project, the new insights become available from the box. This sounds implausible; after all, you can't expect the researcher to hang around all day, can you? But I think there is a way to make it work with language models. In the next post in this series, I will outline my progress on this question. Stay tuned!
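To make the “researcher in a box” idea a bit more concrete, here is a minimal sketch assuming nothing more than access to some language-model completion function. The names (`ResearcherInABox`, `ask`, `complete`) are placeholders for illustration, not an actual API, and this is not the system I will describe in part two -- just one way such a box could be wired up: the author deposits the paper plus their informal notes, and each reader queries it at the level of detail they need.

```python
# Rough sketch: a language model primed with the paper plus the author's
# informal notes, answering questions at whatever level the reader asks for.
# All names here are placeholders, not a real library.

from dataclasses import dataclass
from typing import Callable


@dataclass
class ResearcherInABox:
    paper_text: str                  # the published paper
    author_notes: str                # dead ends, context, informal explanations
    complete: Callable[[str], str]   # any language-model completion function

    def ask(self, question: str, audience: str = "expert in the field") -> str:
        # Build a prompt that tailors the answer to the reader, then hand it
        # to whatever language model is available.
        prompt = (
            "Answer questions on behalf of the author of the paper below.\n"
            f"Tailor the answer to a reader who is: {audience}.\n\n"
            f"PAPER:\n{self.paper_text}\n\n"
            f"AUTHOR'S NOTES:\n{self.author_notes}\n\n"
            f"QUESTION: {question}\n"
        )
        return self.complete(prompt)


# Usage (with some completion function `my_llm`):
#   box = ResearcherInABox(paper_text, notes, complete=my_llm)
#   box.ask("Why this estimator and not a simpler one?", audience="beginner")
```

The point of the sketch is that the beginner and the expert query the same box and get different explanations -- exactly the thing the static paper cannot do.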