In this post, Jesse Marczyk argues that psychological neuroscience research often adds little value per dollar spent, and therefore often isn't worth its cost.


In my last post, when discussing some research by Singer et al (2006), I mentioned as an aside that their use of fMRI data didn’t seem to add a whole lot to their experiment. Yes, they found that brain regions associated with empathy appear to be less active in men watching a confederate who had behaved unfairly towards them receive pain; they also found that areas associated with reward seemed slightly more active. Neat, but what did that add beyond what a pencil-and-paper or behavioral measure might? That is, let’s say the authors (all six of them) had subjects interact with a confederate who behaved unfairly towards them. This confederate then received a healthy dose of pain. Afterwards, the subjects were asked two questions: (1) how bad do you feel for the confederate, and (2) how happy are you about what happened to them? This sounds fairly simple, likely because, well, it is fairly simple. It’s also incredibly cheap, and pretty much a replication of what the authors did. The only difference is the lack of a brain scan. The question becomes: without the fMRI, how much worse is this study?

Two crucial questions come to mind here. The first is a matter of new information: how much new and useful information has the neuroscience data given us? The second is a matter of bang-for-your-buck: how much did that neuroscience information cost? Putting the two together, we have the following: how much additional information (in whatever unit information comes in) did we get from this study per dollar spent?
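For concreteness, that criterion can be written as a simple ratio (my notation, not anything from the post itself):

\[ \text{value of the scan} \;=\; \frac{\Delta I}{C} \]

where \(\Delta I\) is the information the imaging adds over the cheap survey version of the study and \(C\) is the extra cost of the imaging. The post's argument, in these terms, is that \(\Delta I\) is small while \(C\) is large.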

...I’ll begin my answer with a thought experiment: let’s say you ran the same initial study as Singer et al did, and in addition to your short questionnaire you put people into an fMRI machine and got brain scans. In the first imaginary world, we obtain results identical to what Singer et al reported: areas thought to be related to empathy decrease in activation, and areas thought to be related to pleasure increase in activation. The interpretation of these results seems fairly straightforward – that is, until one considers the second imaginary world. In this second world, the results of the brain scan show the reverse pattern: specifically, areas thought to be related to empathy show an increase in activation and areas associated with reward show a decrease. The trick to this thought experiment, however, is that the survey responses remain the same; the only differences between the two worlds are the brain pictures.

This makes interpreting our results rather difficult. In the second world, do we conclude that the survey responses are, in some sense, wrong? That the subjects “really” feel bad about the confederate being hurt, but are unaware of it? This strikes me as a bit off, as far as conclusions go. Another route might be to suggest that our knowledge of what areas of the brain are associated with empathy and pleasure is somehow off: maybe increased activation means less empathy, or maybe empathy is processed elsewhere in the brain, or some other cognitive process is interfering. Hell, maybe the technology employed by fMRI just isn’t sensitive to what you’re trying to look at. Though the brain scan might have highlighted our ignorance as to how the brain is working in that case, it didn’t help us to resolve it. Further, while the second interpretive route seems like a more reasonable one than the first, it also brings to our attention a perhaps under-appreciated fact: we would be privileging the results of the survey measure over the results of the brain scan.

...While such a thought experiment does not definitively answer the question of how much value is added by neuroscience information in psychology, it provides a tentative starting position: not the majority of it. The bulk of the valuable information in the study came from the survey, and all the subsequent brain information was interpreted in light of it.

11 comments

If you like psychology research, say "I'm sure this is much more efficient than the wars or farm subsidies the government was going to use this money on."

If you don't like psychology research, say "This is an outrage! We could be preventing malaria with that money."

Or "mostly the same science funding agencies fund both fMRI and non-fMRI psychology with mostly fixed budgets, so they should reallocate funding away from fMRI and towards studies with sample sizes large enough to be adequately powered for finding real effects."

Another way would be to start allocating funding to the replication of previous studies.

Is there a name for this fallacy? I have seen it in a lot of places before.

Yep. I noticed that in myself, too.


Yeah, same here. I guess I need to update my beliefs.

In my last post, when discussing some research by Singer et al (2006), I mentioned as an aside that their use of fMRI data didn’t seem to add a whole lot to their experiment. Yes, they found that brain regions associated with empathy appear to be less active in men watching a confederate who had behaved unfairly towards them receive pain; they also found that areas associated with reward seemed slightly more active. Neat, but what did that add beyond what a pencil-and-paper or behavioral measure might?

It means you don't have to rely on the honesty of their self reporting, which is pretty important since human self-analysis is frankly pretty shitty, and it doesn't take much to persuade them to lie.

To use an example from this book, which I recently finished reading: one experiment tested whether people who were allowed to electronically view an art display rated paintings higher when the paintings carried marks showing they were owned by a gallery the subjects were told had sponsored their viewing experience. They did, and they disavowed the sponsorship having anything to do with their ratings. The researchers could have concluded from this that the subjects viewed the paintings more favorably out of an unconscious sense of reciprocity, but that would have been pretty lousy science, because the subjects could easily have been acting on conscious reciprocity and then denying it, as social niceties demand. So they ran the experiment with brain scans and confirmed that the subjects' pleasure centers showed more activity for the paintings associated with the sponsor gallery. Had they not, we could have inferred that the subjects were acting out of conscious reciprocity.

Another route might be to suggest that our knowledge of what areas of the brain are associated with empathy and pleasure is somehow off: maybe increased activation means less empathy, or maybe empathy is processed elsewhere in the brain, or some other cognitive process is interfering. Hell, maybe the technology employed by fMRI just isn’t sensitive to what you’re trying to look at. Though the brain scan might have highlighted our ignorance as to how the brain is working in that case, it didn’t help us to resolve it.

Knowing that your model has flaws is a necessary step to correcting it. If a research tool gives you information you wouldn't otherwise have had, it's done you a valuable service, even if it doesn't by itself provide sufficient information to build you a new model.

A whole lot of psychology's poor reputation as a field relative to harder sciences like physics or chemistry comes from overinterpretation of open-ended results like surveys or behavioral studies, where the observations could be explained under multiple models. Brain scanning and other neuroscientific research options provide a useful mechanism for distinguishing between models and helping to avoid that half-assed state.

If we stalled physics research when we reached a point where we couldn't distinguish between hypotheses easily and cheaply, we wouldn't know that much about physics today.

Entirely agreed. Even if you more often than not get the same answers from fMRI and surveys, the fMRI externalizes the judgment of whether or not someone is empathizing, or in a given emotional or cognitive state, with regard to something else.

One might argue that we probably have a decent understanding of how well people's verbal statements line up with different facts, but where those statements diverge from the neurological reality is interesting enough to be worth spending money on the chance of finding the discrepancies. If we don't find them, that's also fascinating, and worth knowing about.

Even taking for granted that what people say about themselves is accurate, externalized measurement is worthwhile for its own sake.

He's largely missing the point.

Medicine advanced when we started cutting open bodies. Neuroscience is about getting beyond black-box testing of brains, understanding the underlying mechanisms of how a brain works, and integrating that knowledge with behavioral observations.

It's probably true that in many studies employing brain scans, they provided little value. So what? It's the basic reductionist program to do white box testing and come up with an integrated model. Neuroscience shouldn't be a separate magisteria from psychology.

I like the suggested thought experiment a lot. It's something like a cross between value of information and conservation of evidence.
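For reference, the conservation-of-expected-evidence identity this comment gestures at is standard Bayesian bookkeeping (H and E are my labels, not anything from the post): for a hypothesis H and a possible observation E,

\[ P(H) \;=\; P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E) \]

If you would interpret either scan outcome as supporting the same conclusion, then \(P(H \mid E) = P(H \mid \neg E) = P(H)\), and the identity says the scan was never expected to shift your belief at all.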

Investment in psychology research won't stop, I think, so if there are any issues with how it is being done, it would be best to resolve them, no? A minimal investment will have to be made, just to avoid people wasting more money in the future.