In an article in Nature [1], Scott Marek and colleagues argue that current studies linking behaviour to brain imaging use datasets that are too small to be reliable. By taking a larger dataset and attempting to reproduce previously established results on different subsets of the data, they find that few of those results replicate.

Marek and his colleagues show that even large brain-imaging studies, including their own, are still too small to reliably detect most links between brain function and behaviour.
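To put rough numbers on this (my own illustration, not from the article): with the standard Fisher-z power calculation, detecting a correlation as small as those typically at issue in brain-wide association studies requires samples in the high hundreds to thousands. A minimal sketch in Python:

```python
import numpy as np
from scipy import stats

def n_for_correlation(r: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate sample size needed to detect a correlation of size r
    with a two-sided test, via the Fisher z-transformation."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)  # critical value for the test
    z_beta = stats.norm.ppf(power)           # quantile for the desired power
    return int(np.ceil(((z_alpha + z_beta) / np.arctanh(r)) ** 2 + 3))

print(n_for_correlation(0.1))   # ~783 participants for r = 0.1
print(n_for_correlation(0.05))  # ~3,138 participants for r = 0.05
```

Halving the effect size roughly quadruples the required sample, which is why studies with a few dozen participants have little chance of reliably detecting small brain-behaviour correlations.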

I was wondering whether the community has a prior on what other areas of recent academic interest have fallen into a similar trap?


References 
[1] https://www.nature.com/articles/d41586-022-00767-3

2 Answers

tailcalled

Apr 16, 2022

Social psychology seems infamous for this sort of thing: social priming, stereotype threat, ego depletion, etc.

Evolutionary psychology can also be understood as having a sample size that is too small. In my experience it does less badly than social psychology in terms of number of participants, but since evolutionary psychology is interested in cross-cultural universals, it should probably study people across diverse societies (and perhaps animals of other species as well). In practice, however, evolutionary psychology studies often investigate only a single society.

If you are willing to generalize the question a bit, there's also the issue of reliability. For instance, in polls, people may answer a question differently depending on how they are asked (really, it seems to me that the different "ways of asking" are often distinct but highly related questions - but the same point still holds). This introduces noise, and one way to reduce it is to ask multiple times in "different ways" and look at the overall response. Reliability statistics like Cronbach's alpha were invented to check whether you've done a good enough job of asking in multiple ways, but in many contexts this is simply not done. (This is related to sample size in the sense that it is the sample size of the "transposed data", where you turn variables into observations and observations into variables.)
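For concreteness, here is a minimal sketch (my own, not part of tailcalled's answer) of how Cronbach's alpha is computed: it compares the summed variance of the individual items to the variance of the total score, so that highly correlated items push alpha toward 1.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 5 respondents answering 3 related questions
# ("different ways of asking") on a 1-5 scale.
scores = np.array([[4, 5, 4],
                   [2, 2, 3],
                   [5, 4, 5],
                   [3, 3, 2],
                   [1, 2, 1]])
print(cronbach_alpha(scores))  # ~0.93: the items hang together well
```

A value near 1 suggests the different phrasings are measuring the same underlying thing; a low value is a warning that the items are adding noise rather than signal.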

Derek M. Jones

Apr 16, 2022

Where to start? In my own field of software engineering we have studies in effort estimation; for those readers keen on advocating particular programming languages, the evidence that strong typing is effective; and the case of a small sample getting lucky. One approach to a small sample size is to sell the idea, not the result.

Running a software engineering experiment with a decent sample size would cost about the same as a Phase I clinical drug trial.

Thanks Derek. I'm writing a blog post on results from small samples - may I cite your answer? 

Derek M. Jones
I'm always happy to be cited :-) Sample size is one major issue; the other is who/what gets to be in the sample. Psychology has its issues with using WEIRD (Western, educated, industrialized, rich, democratic) subjects. Software engineering has issues with the use of student subjects, because most of them have relatively little experience. It all revolves around convenience sampling.