This article critically examines previous studies that reported a link between working memory training (specifically n-back training) and fluid intelligence, arguing that the results may not have been as positive as reported for several reasons, including the use of no-contact rather than active control groups and the difficulty of selecting tests that isolate the impact of working memory on fluid intelligence. The authors also present findings from a new study showing no improvement in fluid intelligence from dual n-back training, whether compared against visual search training (an active placebo) or no training (a no-contact placebo).

 



This is fairly old news for people following n-back research, and over-emphasizes one particular study: there have been other studies, even just looking at post-Redick studies. From http://www.gwern.net/DNB%20meta-analysis :

  • Takeuchi et al 2012
  • Rudebeck 2012
  • Vartanian 2013
  • Heinzel et al 2013
  • Smith et al 2013
  • Stephenson & Halpern 2013
  • Nussbaumer et al 2013
  • Oelhafen et al 2013
  • Clouter 2013
  • Sprenger et al 2013

Thank you for the link to your meta-analysis. That's a lot more helpful than the limited literature review presented in the paper I linked.

After reading your analysis, I am confused about how you determined "that there is a gain of small-to-medium effect size." It seems like once you account for the passive placebo effect you actually showed that there is a small-to-non-existent effect.


That's the net effect. It might be overreaching at this moment to say that the passive studies are complete junk and should be ignored.

Part of the issue here is that the active studies show such a small effect partly because of Clouter 2013, which has a negative effect size. Because he didn't report standard deviations, I think my inferred numbers may exaggerate the strength of his effect, and I haven't been able to get hold of him yet. So when I got the Sprenger study last week, I just shrugged and added them both in until such time as I get the right numbers.
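(To make the effect-size talk above concrete, here is a minimal Python sketch with made-up numbers, not the meta-analysis's actual data. It shows two things: Cohen's d can only be computed directly when each group's standard deviation is reported, and pooling the active-control studies separately from the passive-control ones is what produces the smaller "net" estimate.)

```python
import numpy as np

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    # Standardized mean difference between a trained group and a control group.
    # Without each group's standard deviation this can't be computed directly,
    # which is why a study that omits SDs forces inferred/imputed numbers.
    sd_pooled = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled
    var_d = (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))  # approximate sampling variance
    return d, var_d

def fixed_effect_pool(studies):
    # Inverse-variance (fixed-effect) pooling of (d, variance) pairs.
    d = np.array([s[0] for s in studies])
    w = 1.0 / np.array([s[1] for s in studies])
    return float((w * d).sum() / w.sum())

# Example d: post-test means 105 vs. 100, both SDs 15, 20 people per group.
d_example, var_example = cohens_d(105, 15, 20, 100, 15, 20)

# Made-up illustrative numbers, NOT the meta-analysis data:
passive_control_studies = [(0.45, 0.04), (0.35, 0.05), (0.50, 0.06)]   # training vs. no-contact groups
active_control_studies  = [(0.15, 0.05), (0.05, 0.06), (-0.10, 0.07)]  # training vs. active-placebo groups

print("pooled d, passive controls:", round(fixed_effect_pool(passive_control_studies), 2))
print("pooled d, active controls: ", round(fixed_effect_pool(active_control_studies), 2))
# The gap between the two pooled values is roughly the placebo/expectancy component;
# the active-control estimate is the smaller "net effect" being discussed above.
```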

(It's really frustrating because Clouter is a young techy guy who is on both Google+ and Facebook, it shouldn't be hard at all to get ahold of him! But somehow it's not working out.)

https://www.facebook.com/photo.php?fbid=332127806875228&set=a.316702251751117.76832.316639368424072&type=1&theater seems to be the post through which he recruited the people for his study.

As far as I can see, it's possible to leave a comment on that post. If Facebook doesn't deliver your message directly to his inbox because you aren't friends, that might be a way to get in contact.


Actually, even better, it seems to include his email address! aclouter@dal.ca at the bottom; I can just use that in my next try. Thanks!

EDIT: got the numbers & updated

On a somewhat related note, I read an article today claiming that playing music makes one a "sharper thinker":

In the journal Neuropsychologia, the researchers describe an experiment featuring 36 young adults. They were divided into four groups: musicians who had accumulated at least 5,000 hours of practice; those who had clocked 2,000 to 5,000 hours; the lightweights (or newcomers to music) who had practiced for 200 to 2,000 hours; and non-musicians.

After answering a series of questions, all the participants took part in two standard cognitive tests: a Stroop task, in which they were asked to respond to words written in the color blue (even if the letters spelled out “red”); and a Simon task, in which they were instructed to respond with their right hand if they saw a red shape, and with their left hand if they saw a blue shape—even if the shapes popped up on the opposite side of the screen.

As they performed these tricky tasks, their brains were continuously monitored via EEG recording.

The results: People with more musical training responded faster than those with little or no training, with no loss in accuracy. “This result suggests that higher levels of musical training might result in more efficient information processing in general,” the researchers write.
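(To make the quoted task descriptions concrete, here is a minimal, hypothetical sketch of the Simon-task trial logic in Python; it is an illustration of the description above, not the researchers' actual experiment code.)

```python
import random

# Response rule from the quoted description: right hand for red shapes,
# left hand for blue shapes, regardless of which side of the screen the shape is on.
COLOR_TO_RESPONSE = {"red": "right", "blue": "left"}

def make_trial():
    color = random.choice(["red", "blue"])
    side = random.choice(["left", "right"])        # where the shape appears on screen
    correct_response = COLOR_TO_RESPONSE[color]    # required hand depends on color only
    congruent = (side == correct_response)         # does the location match the required hand?
    return {"color": color, "side": side,
            "correct_response": correct_response, "congruent": congruent}

# Incongruent trials (shape appearing opposite the required hand) create the response
# conflict whose cost in speed and accuracy is what the task is designed to measure.
trials = [make_trial() for _ in range(100)]
print(sum(t["congruent"] for t in trials), "of", len(trials), "trials are congruent")
```

A Stroop task has the same congruent/incongruent structure, except the conflict comes from the word's meaning rather than from where the stimulus appears on screen.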

Anyone have any critiques of this study? I would fit in the group of people who have over 5,000 hours of practice, so I'm skeptical of any study that puts me in a good light :)

From the article:

The researchers caution that they haven’t established “a causal link between musical activity and the effectiveness of frontal brain functions.” They concede it’s possible that people who generally perform well on cognitive tasks might be more likely to take up an instrument.

Or some separate gene, experience, or background might make a person both more attracted to playing music and more likely to attempt other cognition-sharpening tasks.

Also, the sample size was 36, split into four groups. That's TINY!! 8-10 per group!
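(To put numbers on "tiny": here is a rough, back-of-the-envelope power calculation of my own using statsmodels; the group size of 9 comes from 36/4, but the d = 0.5 "medium" effect is an assumption, not a number from the paper.)

```python
# Rough power check, illustrative assumptions only: two groups of ~9 participants,
# two-sided alpha = 0.05, and a "medium" standardized effect of d = 0.5.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
power = analysis.solve_power(effect_size=0.5, nobs1=9, ratio=1.0, alpha=0.05)
print(f"power to detect d = 0.5 with 9 per group: {power:.2f}")   # around 0.17

# Per-group sample size needed to reach 80% power at the same effect size:
n_needed = analysis.solve_power(effect_size=0.5, power=0.8, ratio=1.0, alpha=0.05)
print(f"n per group for 80% power: {n_needed:.0f}")               # around 64
```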

I checked out the original study and saw a bunch of weird things: most of the participants were female, and they played a variety of different instruments, so who knows what difference gender or instrument choice made.

That test also only measures an extremely specific type of thinking, which may not be linked to adeptness at other types of cognition.

The results seem completely plausible, but the study design seems pretty poor and the conclusions very shakily drawn.