Don't know if it was apparent to everyone else, but it wasn't apparent to me that the bolded title was also a link.
The article seems to be heavily biased towards psychology. I wonder if the "harder" sciences like physics, chemistry and biology suffer from the same issues to a similar degree.
The author of the article, Ioannidis, has published extensively on the unreliability of reported medical and biochemical results over more than a decade. The article is not so much "biased" towards psychology as focused on that one area.
Right, "focusing" is a better description. But I wonder whether this focus has led to a generalization that is a bit too sweeping. The "publish or perish" race is certainly everywhere in academia, but its side effects might be better mitigated in some areas than in others.
I think of the work on blue LEDs that recently got the physics Nobel.
Blue LEDs work. You can buy them off the shelf. Each one works pretty much every time.
Is there anything in sociology or psychology of which the same can be said?
> Blue LEDs work. You can buy them off the shelf. Each one works pretty much every time.
> Is there anything in sociology or psychology of which the same can be said?
Depends on whether "Each one works pretty much every time" means a phenomenon which works on pretty much every individual on pretty much every occasion, or a phenomenon which can simply be replicated reliably given a big enough sample.
I can think of nothing in sociology or psychology satisfying the former criterion. But the latter, weaker criterion seems to be satisfied by anchoring bias, which was replicated by 36 sites out of 36 in the Many Labs project, as indicated by its table of summary statistics.
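The distinction being drawn here, between an effect visible in pretty much every individual and one that only shows up reliably in aggregate, can be made concrete with a small simulation. The effect size, sample sizes, and "replication" threshold below are arbitrary illustrative choices, not figures from the Many Labs project:

```python
# Illustrative sketch (assumed numbers throughout): a weak effect that
# fails the strong criterion -- it does not hold for most individual
# comparisons -- can still satisfy the weaker one, replicating in
# essentially every large-sample study.
import random
import statistics

random.seed(0)

EFFECT = 0.3       # assumed mean shift for the "treated" group, in SD units
N_PER_GROUP = 500  # participants per group in each replication attempt
N_STUDIES = 36     # echoes the number of Many Labs sites, purely for flavor

def one_study() -> bool:
    """Run one simulated study; 'replicates' if the group means differ
    in the predicted direction by a comfortable margin (a crude
    stand-in for a significance test)."""
    control = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    treated = [random.gauss(EFFECT, 1) for _ in range(N_PER_GROUP)]
    return statistics.mean(treated) - statistics.mean(control) > 0.1

replications = sum(one_study() for _ in range(N_STUDIES))

# Individual level: how often does a random "treated" person actually
# outscore a random control person? With a 0.3 SD effect this is only
# modestly above chance.
trials = 10_000
wins = sum(random.gauss(EFFECT, 1) > random.gauss(0, 1) for _ in range(trials))

print(f"studies replicating: {replications}/{N_STUDIES}")
print(f"individual-level 'wins': {wins / trials:.2f}")
```

The point of the sketch is just that "replicated by 36 sites out of 36" is compatible with an effect that would be invisible in any single person, which is the weaker of the two criteria above.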
Whether one counts anything in psychology as satisfying the former or not, I think depends on where one draws the line between psychology and neurology. There are certainly things we've discovered about how the brain works that tell us things about the thought processes of every human, but one might argue that these fall under the purview of neurology, and not psychology.
> Depends on whether "Each one works pretty much every time" means a phenomenon which works on pretty much every individual on pretty much every occasion, or a phenomenon which can simply be replicated reliably given a big enough sample.
Definitely the former. Each one, every time. The world around us is filled with such things, yet when it comes to the study of anything to do with living organisms, people dismiss the idea as "physics envy", a concept which makes no more sense than "separate magisteria", and serves the same function.
If you count medicine as a subfield of biology, people are already well aware of problems there...
> Finally, I discuss some proposed solutions to promote sound replication practices enhancing the credibility of scientific results
Which would these be? I skimmed through the article and found nothing beyond the standard 'truth must become more important', and I doubt if that should even be called a solution.
> Which would these be? I skimmed through the article and found nothing beyond the standard 'truth must become more important', and I doubt if that should even be called a solution.
I guess it's these, from the last section of the main text:
> Some suggestions for potential amendments that can be tested have been made in previous articles (Ioannidis, 2005; Young, Ioannidis, & Al-Ubaydli, 2008) and additional suggestions are made also by authors in this issue of Perspectives. Nosek et al. (2012) provide the most explicit and extensive list of recommended changes, including promoting paradigm-driven research; use of author, reviewer, editor checklists; challenging the focus on the number of publications and journal impact factor; developing metrics to identify what is worth replicating; crowdsourcing replication efforts; raising the status of journals with peer review standards focused on soundness and not on the perceived significance of research; lowering or removing the standards for publication; and, finally, provision of open data, materials, and workflow. Other authors are struggling with who will perform these much-desired, but seldom performed, independent replications. Frank and Saxe (2012) and Grahe et al. (2012) suggest that students in training could populate the ranks of replicators. Finally, Wagenmakers et al. (2012) repeat the plea for separating exploratory and confirmatory research and demand rigorous a priori registration of the analysis plans for confirmatory research.