Figuring out that a paper contains fake research requires a lot of domain knowledge.  For instance, I have read enough software engineering papers to spot fake research, but would have a lot of trouble spotting fake research in related fields, e.g., database systems.  As for what counts as fake research, everybody has their own specific opinions.

My approach, based on experience reading very many software engineering papers, is to treat all papers as having low value (fake or otherwise) until proven otherwise.

Emailing the author to ask for a copy of their data is always interesting: around a third don't reply, and another third have lost or not kept the data.

Spotting fake research is a (very important) niche topic.  A more generally useful proposal would be to teach people how to read papers.  Reading one paper might almost be worse than reading none at all, because of the false feeling of knowing it gives the reader.  I always tell people to read the thesis from which the paper was derived (if there is one); a thesis provides a lot more context and is a much easier read than a paper (which is a very condensed summary of the thesis).  Researchers much prefer to have their paper cited, because thesis citations don't 'count'.

Is a Fake journal club worth the effort?  It's possible to spend more time debunking a paper than was spent on the original research, and for nothing to come of it.

Knowing whether North Korea is going to do a hydrogen bomb test this year also requires a lot of domain knowledge, and one can invest arbitrary effort into obtaining new data, like smuggling oneself into North Korea or interrogating defectors; it may in fact require knowledge that is impossible to obtain outside a particular skull in North Korea. Yet calibration training still exists, and it will improve forecasts both on North Korea and on how many M&Ms are in that big jar over there.
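To make the analogy concrete, calibration can be scored the same way regardless of domain. The snippet below is a minimal illustration (mine, not from the comment) using the Brier score; the questions, probabilities, and outcomes are entirely hypothetical.

```python
# Illustrative sketch only: scoring calibration with the Brier score,
# i.e. the mean squared error between stated probabilities and outcomes.
# The same scoring rule applies to geopolitics and to jars of M&Ms.

def brier_score(forecasts):
    """forecasts: list of (probability_assigned, outcome), outcome 0 or 1."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Hypothetical track record mixing question types.
track_record = [
    (0.7, 1),  # "North Korea tests a bomb this year": said 70%, it happened
    (0.2, 0),  # "the jar holds more than 500 M&Ms": said 20%, it didn't
    (0.9, 1),
    (0.6, 0),
]

print(brier_score(track_record))  # lower is better; always saying 50% scores 0.25
```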

This would definitely teach something, but I'm not sold that it actually teaches useful skills for detecting weaknesses in papers. Failures in real research are drawn from their own special distribution, which is very different from the sampling-from-GPT distribution.

Part of this can be blamed on GPT-3 not being smart enough - it doesn't understand (e.g.) magnetic permeability, and so, in trying to write about it, it will inevitably make mistakes that no human academic would. But even if our language model were smart enough to talk convincingly about magnetic permeability, the journal club members would still be stuck looking for tells that are going to be inhuman, unless you've somehow assembled a dataset of bad papers to train a classifier on.
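As a rough illustration of that last point, here is what "train a classifier on it" might look like, assuming such a labelled dataset of real versus generated papers actually existed; the data, labels, and model choice below are purely hypothetical, and a surface-level baseline like this would only pick up stylistic tells, not substantive errors.

```python
# Hypothetical sketch, not a claim about how the fake journal club proposal works:
# a crude baseline for separating human-written from model-generated abstracts,
# assuming a labelled dataset exists (which, as noted above, is the hard part).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy placeholder data; label 0 = human-written, 1 = generated.
abstracts = [
    "We measure the magnetic permeability of thin NiFe films at 4 K.",
    "Our results extend prior work on grain-boundary scattering in alloys.",
    "The magnetic permeability is permeable to magnetism in all samples.",
    "In conclusion, the results conclude that the conclusion is significant.",
]
labels = [0, 0, 1, 1]

# TF-IDF n-grams + logistic regression: this only learns surface "tells"
# (repetition, odd phrasing), not whether the physics is actually wrong.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(abstracts, labels)

print(clf.predict(["The permeability of the permeability was measured."]))
```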

I think that doing this with real papers (that have failed to stand the test of time, but grad students probably won't know that) is actually a lot better, because their mistakes are drawn from the distribution you actually need to learn. It also provides you with a richer supervised signal - you can learn not only that a paper was wrong, but also what process led to it having the contents it did, given that it didn't reflect reality.

A database of such teaching examples, submitted by professors, would be interesting but would probably get very contentious.

This post is about journal papers, not answering real world questions (although many authors would claim this is what they are doing).

With regard to nuclear weapons, Dominic Cummings' recent post is well worth a read; the book he recommends, "The Fallacies of Cold War Deterrence and a New Direction", is even more worth reading.

Is MAD doctrine fake research, or just research that might well be very wrong?

It may also be worth splitting out "correct reasoning based on invalid assumptions" from "invalid reasoning based on valid assumptions".