Jason Mitchell is [edit: has been] the John L. Loeb Associate Professor of the Social Sciences at Harvard. He has won the National Academy of Sciences' Troland Award as well as the Association for Psychological Science's Janet Taylor Spence Award for Transformative Early Career Contributions.

Here, he argues against the principle of replicability of experiments in science. Apparently, it's disrespectful, and presumptively wrong.

Recent hand-wringing over failed replications in social psychology is largely pointless, because unsuccessful experiments have no meaningful scientific value.

Because experiments can be undermined by a vast number of practical mistakes, the likeliest explanation for any failed replication will always be that the replicator bungled something along the way. Unless direct replications are conducted by flawless experimenters, nothing interesting can be learned from them.

Three standard rejoinders to this critique are considered and rejected. Despite claims to the contrary, failed replications do not provide meaningful information if they closely follow original methodology; they do not necessarily identify effects that may be too small or flimsy to be worth studying; and they cannot contribute to a cumulative understanding of scientific phenomena.

Replication efforts appear to reflect strong prior expectations that published findings are not reliable, and as such, do not constitute scientific output.

The field of social psychology can be improved, but not by the publication of negative findings. Experimenters should be encouraged to restrict their “degrees of freedom,” for example, by specifying designs in advance.

Whether they mean to or not, authors and editors of failed replications are publicly impugning the scientific integrity of their colleagues. Targets of failed replications are justifiably upset, particularly given the inadequate basis for replicators’ extraordinary claims.

This is why we can't have social science. Not because the subject is not amenable to the scientific method -- it obviously is. People are conducting controlled experiments and other people are attempting to replicate the results. So far, so good. Rather, the problem is that at least one celebrated authority in the field hates that, and would prefer much, much more deference to authority.


The amusing thing is that Mitchell's argument proves much more than he wants it to prove.

Because experiments can be undermined by a vast number of practical mistakes, the likeliest explanation for any failed replication will always be that the replicator bungled something along the way. Unless direct replications are conducted by flawless experimenters, nothing interesting can be learned from them.

Notice that the above argument applies just as well to the original experiment being replicated.

Yes, noticed.

Has anyone read his entire article? Does he attempt any justification for why this particular argument doesn't equally apply to the original experiment?

One principle I try to keep in mind is "The other guy is probably not a total moron. If it seems that way, you're probably missing something."

I read it. He has a section titled "The asymmetry between positive and negative evidence".

His argument is that a positive result is like seeing a black swan, and a null result is like seeing a white swan, and once you see a black swan, then no matter how many white swans you see it doesn't prove that all swans are white.

He addresses the objection that this leaves us unable to ever reject a spurious claim. His answer is that, since negative evidence is always meaningless, we should get positive evidence that the experimenter was wrong.

I think this is a fair summary of the section. It's not long, so you can check for yourself. I am... not impressed.

9buybuydandavis10y
Actually, it's like hearing a report of a black swan, which is why the burden of proof is generally put on the report.

It's even worse than that for him. What a bad analogy for him to rest his case on. Surely, the purpose of these social science studies is not to make a claim about the existence of some bizarre subset of the population (a black swan), but that the results will generalize to the population at large (all swans are black).

That's more than enough for me. Thanks for taking the bullet for us.

There's a lot wrong with the argument; he has no actual justification for assuming that social science is anything like swan-spotting.

But even within his unjustified analogy... apparently if someone reports a new color of swan in Australia, he might give polygraphs and vision tests to the reporter, but sending an expedition to Australia to check it out would be of no scientific value.

9Richard_Kennaway10y
Medicine has had its Ioannidis. Is anyone doing the same for psychology and social science?
3satt10y
Uri Simonsohn?
3ChristianKl10y
There's Ed Vul's "Voodoo Correlations in Social Neuroscience".
7philh10y
My interpretation is that he's saying false positives are unlikely but false negatives are common. If that's the case, then getting a positive result should greatly affect your beliefs; but a negative replication shouldn't shift them much. (I can't actually justify this interpretation by reading the text, but it does make the text seem fairly reasonable.)
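(To make the implied arithmetic concrete, here is a minimal Bayesian sketch of that reading. The prior, false positive rate, and power below are invented numbers chosen only for illustration, not figures from Mitchell or from the replication literature.)

```python
# Toy Bayesian update behind the "asymmetric evidence" reading: if false
# positives are rare but false negatives are common, one positive study
# shifts belief a lot, while one failed replication shifts it only a little.
# All numbers here (prior, alpha, power) are invented for illustration.

def update(prior, p_obs_if_real, p_obs_if_null):
    """Posterior P(effect is real | observation) via Bayes' rule."""
    joint_real = prior * p_obs_if_real
    joint_null = (1 - prior) * p_obs_if_null
    return joint_real / (joint_real + joint_null)

prior = 0.10   # assumed prior that the effect is real
alpha = 0.05   # chance of a (false) positive when there is no effect
power = 0.35   # chance of a positive when the effect is real

after_positive = update(prior, power, alpha)
# A failed replication is a negative result:
# P(negative | real) = 1 - power, P(negative | null) = 1 - alpha.
after_failed_rep = update(after_positive, 1 - power, 1 - alpha)

print(f"after one positive study:       P(real) ~ {after_positive:.2f}")   # ~0.44
print(f"after a failed replication too: P(real) ~ {after_failed_rep:.2f}")  # ~0.35
```

Under those made-up numbers the positive study moves the probability from 0.10 to about 0.44, while the failed replication only pulls it back to about 0.35; whether the error rates in social psychology actually have that shape is, of course, exactly what's in dispute.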
2Eliezer Yudkowsky10y
Beautiful point.
1[anonymous]10y
Congratulations, you're smarter than a Harvard professor. I noticed the same thing, so I'm also smarter than a Harvard professor. I would hope that everyone on this site is smarter than a Harvard professor. A Harvard professor who got his B.A. and M.S. from Yale and his Ph.D. from Harvard.

This is sociologically interesting: is it an isolated incident that a Harvard professor -- an award-winning one, no less -- would so loudly fail to comprehend how science and statistics work? That doesn't seem likely, for two reasons. First, he probably talked it over with other people in the field, or at least mentioned why he thought replication was bad and wrong, and no one talked him out of it. Second, the system he came through produced him, and it's unlikely that that sort of error would only be produced once, since education is fairly standardized.

So: what does this say about the relevant institutions? (The social sciences, HYP, academia, etc.)

I don't think that means you are smarter than that Harvard professor. He is a very successful person and has reached heights coveted by many very smart people. It just means that the game he is playing is not one where you get ahead by saying things that make sense.

For example, if you listen to a successful politician and spot a false statement he utters, that does not mean that you are smarter than that politician.

7James_Miller10y
Yes, academics are supposed to raise the status of their institution. This brings in money, which helps educate students, which makes the world a better place. Unsuccessful replication threatens this. Plus, replication does nothing to advance social justice.
3ChristianKl10y
His website describes his research as: That doesn't seem to me like someone who focuses on advancing social justice.

You should have looked at his vita for a more accurate description of his activities. If you had looked at his paper titles, you would have seen that some of them indicate he's no stranger to social justice-like theorizing and investigation, and likewise his funding sources, on top of Harvard's well-earned reputation: eg. "What’s in a forename?: Cue familiarity and stereotypical thinking", "Gender differences in implicit weight identity", "Deflecting negative self-relevant stereotype activation: The effects of individuation", "Me and my group: Cultural status can disrupt cognitive consistency", and the funding:

June 2007 – May 2010: National Science Foundation (BCS 0642448), "The neural basis of stereotyping", $609,800 (co-PI: Mahzarin Banaji)...September 2010 – August 2012: Templeton Foundation for Positive Neuroscience, "Vicarious Neural Response to Others as a Basis for Altruistic Behavior", $180,000 (co-PI: Jamil Zaki)

4ChristianKl10y
Sorry, my mistake, as far as the paper titles go. As far as the funding goes, the National Science Foundation isn't an entity that I would see as spearheading the social justice movement. The same goes for the Templeton Foundation. They have the reputation of wanting "progress in spiritual discoveries" instead of "advancing social justice".
5gwern10y
Both groups are quite big and may fund more than you think; not that the grantors always get what they think they're getting or are the only people who are then allowed to draw upon the research. For example, consider "stereotype threat", much beloved of social-justice types for explaining how bad white people keep test scores low for women and blacks; you can see the NSF certainly has been involved in that research in the past just with a cursory google: http://scholar.google.com/scholar?q=%22stereotype%20threat%22%20%22National%20Science%20Foundation%22 (I count ~4 NSF grants attested to just from the snippet-view for the first page).
3ChristianKl10y
I do accept that both groups fund a large variety of causes, but I still wouldn't conclude from the funding source that there's a bias in the direction of the social justice movement. When thinking about the title "The neural basis of stereotyping" you might be right that it smells like pseudoscience. It's a bit like "The neural basis of acupuncture". Instead of searching for the neural basis it would make much more sense to focus on studying the actual effect. I wonder if I picked up too many stereotypes about fMRI research ;)
6gwern10y
I never said the NSF is biased in favor of SJ. I said that if you had bothered to look at the vita instead of stopping at the most convenient place, you would have found a number of paper and grant titles which indicate a more than theoretical interest in topics strongly associated with SJ, on top of his affiliation with an institution with a strong background, both current and historical, in liberal thought & SJ-like figures such as Cornel West, and which, to borrow from parallel ongoing conversations, crucified Summers for his well-founded suggestion that the elite math achievement gap might be a necessary consequence of gender differences. And then you said the NSF was evidence against SJ association, which is either wrong or weak evidence, since they fund related research all the time.
2ChristianKl10y
They might have crucified Summers, but they let him rise to be president of Harvard in the first place, and didn't seem to have successfully taught him not to say things like that. Cornel West was a professor of religion and then a professor of African-American Studies. As far as my instincts go, I wouldn't expect the same thing from Harvard's department of psychology as I would from the department of African-American Studies. I didn't. I took your post as saying that I should update in the direction of him being part of the social justice movement based on his funding source, and wanted to reject updating in that direction based on that information.
-4James_Miller10y
He teaches at an elite university so it's highly probable he greatly values social justice.
0hairyfigment10y
Either you don't read your own comments, or you classify all social justice as signaling. In the latter case, I'm sure you'll come up with reasons why you aren't making a fully general counterargument. I would urge you to stop for a second and ask concretely what you would have thought about, say, abolitionists.
0James_Miller10y
I don't understand what you mean. Social justice is a label for a set of beliefs. I do not think that social justice is just about signaling. If it helps to understand what I mean, I'm a politically incorrect college professor.
-1hairyfigment10y
-3James_Miller10y
Are you implying that there is a contradiction in what you quoted? If so, I don't see it.
-2hairyfigment10y
There may indeed be a contradiction, but the real question - if we grant the first part - is what do you know and how do you know it? This seems to me like a really obvious point.
0[anonymous]10y
Reasoning for this claim? Do you also believe that original research can do nothing to advance social justice? That doesn't seem likely.

And now you find a man saying that it is an irrelevant demand to expect a repeatable experiment. This is science?

-- Richard Feynman, "Cargo Cult Science"

(Yes, I am aware of the irony of appealing to authority to mock someone who says we need to defer more to established authorities.)

6[anonymous]10y
What irony? People want to have authorities that can be appealed to, so if you value epistemic rationality, just saying it's bad to appeal to authority won't work as well as capturing the drive: providing authorities who can be appealed to in order to support memes beneficial to epistemic rationality, including that it's bad to appeal to authority. If you leave the drive uncaptured, someone else can capture it.
3DaFranker10y
It's ironic in the same way that adding the text "DEFACING STOP SIGNS" under the main text of a stop sign is ironic. The method used is the very one being condemned / warned against, and the fact that it works better than other methods (in both examples) only adds to the irony: one would expect that something which preaches against doing exactly what it is doing would invalidate itself, yet in practice it produces better results, thanks to a quirk of humans.

I sort of side with Mitchell on this.

A mentor of mine once told me that replication is useful, but not the most useful thing you could be doing because it's often better to do a followup experiment that rests on the premises established by the initial experiment. If the first experiment was wrong, the second experiment will end up wrong too. Science should not go even slower than it already does - just update and move on, don't obsess.

It's kind of like how some of the landmark studies on priming failed to replicate, but there are so many followup studies that are explained really well by priming that it seems a bit silly to throw out the notion of priming just because of that.

Keep in mind, while you are unlikely to hit statistical significance where there is no real result, it's not statistically unlikely to have a real result that doesn't hit significance the next time you do it. Significance tests are tuned to produce false negatives more often than false positives.
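(A quick simulation of that asymmetry -- a sketch only, assuming a two-sample t-test with 20 subjects per group and a true effect of 0.4 standard deviations; both numbers are arbitrary choices meant to stand in for a typical underpowered study.)

```python
# Rough check of the false-negative point: with a modest true effect and a
# small sample, a real effect routinely fails to reach p < .05, while a null
# effect only rarely produces a false positive. Effect size and n are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, true_effect, runs = 20, 0.4, 5_000

def comes_out_significant(effect):
    a = rng.normal(0.0, 1.0, n)      # control group
    b = rng.normal(effect, 1.0, n)   # treatment group
    return stats.ttest_ind(a, b).pvalue < 0.05

false_pos_rate = np.mean([comes_out_significant(0.0) for _ in range(runs)])
false_neg_rate = np.mean([not comes_out_significant(true_effect) for _ in range(runs)])

print(f"false positives (no real effect): {false_pos_rate:.2f}")  # ~0.05
print(f"false negatives (real effect):    {false_neg_rate:.2f}")  # ~0.77
```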

Emotionally though... when you get a positive result in breast cancer screening even when you're not at risk, you don't just shrug and say "probably a false positive" even though it is. Instead, you irrationally d... (read more)

If the first experiment was wrong, the second experiment will end up wrong too.

I guess the context is important here. If the first experiment was wrong, and the second experiment is wrong, will you publish the failure of the second experiment? Will you also publish your suspicion that the first experiment was wrong? And how likely are people to believe that your results prove the first experiment was wrong, if what you ran was a different experiment?

Here is what the selection bias will do otherwise:

20 people will try 20 "second experiments" with p = 0.05. 19 of them will fail; one will succeed and publish the results of their successful second experiment. Then, using the same strategy, 20 people will try 20 "third experiments", and again one of them will succeed... Ten years later, you can have a dozen experiments examining and confirming the theory from a dozen different angles, so the theory seems completely solid.
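(That selection process is easy to simulate. The sketch below assumes the most pessimistic case -- the effect does not exist at all, every test has a 5% false positive rate, and only positive results get published; all parameters are invented.)

```python
# Toy model of the selection bias described above: each generation, roughly
# 1 in 20 null experiments comes out "significant", only those get published,
# and the spurious literature keeps growing. All parameters are invented.
import random

random.seed(0)
alpha = 0.05           # false positive rate when there is no real effect
labs_per_round = 20    # labs attempting a follow-up in each generation
generations = 10

published = 1          # the original (false) positive finding
for gen in range(1, generations + 1):
    hits = sum(random.random() < alpha for _ in range(labs_per_round))
    published += hits  # only the "successful" follow-ups reach the journals
    print(f"generation {gen}: {hits} new confirmations, "
          f"{labs_per_round - hits} failures left unpublished")

print(f"total published studies supporting the nonexistent effect: {published}")
```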

It's kind of how some of the landmark studies on priming failed to replicate, but there are so many followup studies which are explained by priming really well that it seems a bit silly to throw out the notion of priming just because of that.

Is there a chance that the process I described was responsible for this?

0someonewrongonthenet10y
In practice, individual scientists like to be able to say "my work causes updates". If you do something that rests on someone else's work and the experiment doesn't come out, you have an incentive to say "Someonewrongonthenet's hypothesis X implies A and B. Someonewrongonthenet showed A [citation], but I tried B and that means X isn't completely right." Cue further investigation which eventually tosses out X. Whether or not A was a false positive is less important than whether or not X is right.

Yes, that's possible. I'm not sure direct replication actually solves that issue, though - you'd just shift over to favoring false negatives instead of false positives. The existing mechanism that works against this is the incentive to overturn other people's work.

A mentor of mine once told me that replication is useful, but not the most useful thing you could be doing because it's often better to do a followup experiment that rests on the premises established by the initial experiment. If the first experiment was wrong, the second experiment will end up wrong too. Science should not go even slower than it already does - just update and move on, don't obsess.

Tell me, does anyone actually do what you think they should do? That is, based on a long chain of ideas A->B->C->D, none of which have been replicated, upon experimenting and learning ~Z, do they ever reject the bogus theory D? (Or wait, was it C that should be rejected, or maybe the ~Z should be rejected as maybe the experiment just wasn't powered enough to be meaningful as almost all studies are underpowered or, can you really say that Z logically entailed A...D? Maybe some other factor interfered with Z and so we can 'save the appearances' of A..Z! Yes, that's definitely it!) "Theory-testing in psychology and physics: a methodological paradox", Meehl 1967, puts it nicely (and this is as true as the day he wrote it half a century ago):

This last methodological si

... (read more)
2someonewrongonthenet10y
Wait, I'm not sure we're talking about the same thing. I'm saying direct replication isn't the most useful way to spend time. You're talking about systematic experiment design flaws. According to your writing, the failures in this example stem from methodological issues (not using an active control group). A direct replication of the n-back-IQ transfer would have just hit p<.05 again, as it would have had the same methodological issues. Of course, if the methodological issue is not repaired, all subsequent findings will suffer from the same issues. I'm strictly saying that direct replication isn't useful. Rigorous checking of methods and doing it over again correctly where there is a failure in the documented methodology is always a good idea.
2gwern10y
But the Jaeggi cluster also sometimes use active control groups, with various kinds of differences in the intervention, metrics, and interpretations. In fact, Jaeggi was co-author on a new dual n-back meta-analysis released this month*; the meta-analysis finds the passive-active difference I did, and you know what their interpretation is? That it's due to the correlated classification of US vs international laboratories conducting particular experiments. (It never even occurred to me to classify the studies this way.) They note that sometimes psychology experiments reach different conclusions in other cultures/countries - which they do - so perhaps the lower results in American studies using active control groups are because Americans gain less from n-back training. The kindest thing I can say about this claim is that I may be able to falsify it with my larger collection of studies (they threw out or missed a lot). So, after performing these conceptual extensions of their results - as you suggest - they continue to So it goes. * http://www.gwern.net/docs/dnb/2014-au.pdf / https://pdf.yt/d/VMPWmd0jpDYvZIjm / https://dl.dropboxusercontent.com/u/85192141/2014-au.pdf ; initial comments on it: https://groups.google.com/forum/#!topic/brain-training/GYqqSyfqffA
-4ChristianKl10y
The first sentence in your dual-n-back article is: If you believe that there's a net gain of medium effect size then why do you think we should throw dual n-back under the bus?

You should probably have read part of the second sentence: "active vs passive control groups criticism: found, and it accounts for most of the net effect size".

4Luke_A_Somers10y
This is not good, and I guess is not what he meant. You design the second experiment so that it aims to find something assuming the first was right, but if the first was wrong, it can expose that too. Basically, it has to be a stronger experiment than the first one.
0someonewrongonthenet10y
Agreed, that is a better way to say what I was trying to say.
1Dan_Moore10y
If you're concerned about the velocity of scientific progress, you should also be concerned about wrong turns. A Type 1 Error (establishing a wrong result by incorrectly rejecting a null hypothesis) is, IMHO, far more damaging to science than failure to establish a correct result - possibly due to an insufficient experimental setup.
0someonewrongonthenet10y
Yeah, there's definitely an "exploration / rigor" trade-off here (or maybe "speed / accuracy") and I'm not sure it's clear which side we are erring on right now. I'm not terribly surprised that LW favors rigor, just due to the general personality profile of the users here, and that my favoring of exploration at the cost of being wrong a few times is in the minority. I definitely think a rational agent would be more exploratory than science currently is, but on the other hand we've got systematic biases to contend with and rigor might offset that.
1DanielLC10y
If you get a positive result, you run another test. If you keep getting positive results, you probably have breast cancer. Similarly, if an experiment fails to replicate, you try again. If it replicates this time, then it's probably fine. If it keeps failing to replicate, then there's a problem. At the very least, you need to try to replicate a random sample of studies, just to make sure there aren't more false studies than you've been assuming.
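(The repeated-testing logic works out roughly like this; the base rate, sensitivity, and false positive rate are made-up numbers, and the key assumption -- which the reply below pushes back on -- is that the repeated tests are independent.)

```python
# Illustration of the repeated-testing point: each independent positive test
# multiplies the odds, so even a low prior gets overwhelmed after a few
# positives. Base rate, sensitivity, and false positive rate are made up.

def posterior_after_positives(prior, sensitivity, false_pos_rate, k):
    """P(condition | k independent positive tests), via the odds form of Bayes."""
    odds = prior / (1 - prior)
    odds *= (sensitivity / false_pos_rate) ** k   # one likelihood ratio per test
    return odds / (1 + odds)

prior, sens, fpr = 0.01, 0.90, 0.10   # invented numbers
for k in range(4):
    p = posterior_after_positives(prior, sens, fpr, k)
    print(f"after {k} positive test(s): P(condition) ~ {p:.2f}")
# prints roughly 0.01, 0.08, 0.45, 0.88
```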
0someonewrongonthenet10y
Not an expert on cancer, but I don't think it works that way. I think the cancer test accurately tests a variable which is a proxy for cancer risk. So a patient who doesn't have cancer but tests positive will continue testing positive, because the variable that the cancer test measures as a proxy for cancer is elevated in that patient. Experiments do work that way, but I'm not arguing against that. I'm only arguing that direct replication isn't a better use of resources than just going on to a followup experiment with a different methodology (unless direct replication is really easy and you can just have some students do it or something).
0DanielLC10y
Is there only one kind of test? Couldn't they find another proxy? If the followup is testing the same thing with a different methodology, then the metaphor works. If you run followup experiments just to find more detail, it would be like someone testing positive for cancer so then you run a test for what kind of cancer. You're assuming they have cancer when you run the second test, so the results could be misleading.
0someonewrongonthenet10y
Generally an idea is considered well supported when multiple methodologies support it, yes. In the psychology lab I used to work in, at least, we never try to replicate, but we do try to show the same thing in multiple different ways. There are maybe 15 different experiments a year, but they're generally all centered around proving or disproving a cluster of 2 or 3 broad, conceptually linked hypotheses. Biology labs I've worked with do often do the whole "okay, the results are in and this is established now, let's find additional detail" thing, but that's because they were usually looking at much simpler systems, like a single protein or something, so they could afford to take liberties and not be so paranoid about experimental methods.
0Lumifer10y
...and now you have two problems X-) It's not a matter of speed, it's a matter of velocity. Going fast in the wrong direction is (much) worse than useless. You are quite likely. You start with a 5% chance under ideal circumstances and that chance only climbs from there. P-hacking is very widespread. 8-0 You think getting additional screenings after testing positive for cancer is "irrational"??
0someonewrongonthenet10y
The process of screening itself involves risks, not to mention the misplaced stress and possibility of unnecessary surgery.
0Lumifer10y
This is true for e.g. any visit to the doctor. Are you saying that it's irrational to go for medical checkups? In the cancer screening case, what do you think the cost-benefit analysis says?
0someonewrongonthenet10y
It would be irrational to go for medical checkups when they aren't necessary - if you did it every 3 days, for example. I'm looking at this from a bird's-eye view. A lot of people get unnecessary screenings, which give them information that is not worth acting upon whether it comes back positive or negative, and then start worrying and getting unnecessary testing and treatment. Information is only useful to the extent that you can act upon it.
0Lumifer10y
And from up there you take it upon yourself to judge whether personal decisions are rational or not? I think you're way too far away for that. That's a different issue. In a post upstream you made a rather amazing claim that additional tests after testing positive for cancer on a screening would be irrational. Do you stand by that claim?
0someonewrongonthenet10y
Er...I think that's a little harsh of you. Overscreening is recognized as a problem among epidemiologists. When I say overscreening is a problem, I'm mostly just trusting expert consensus on the matter. I stand by that a lot of smart people who study this issue believe that in actual medical practice, these screenings are either a problem in themselves, or that the information from the screenings can lead people to irrational behavior, and I do trust them. But really, that was just an illustrative example used to steelman Mitchell. You don't have to accept the actual example, just the general concept that this sort of thing can happen.
0Lumifer10y
Rationality does not specify values. I rather suspect that the cost-benefit analysis that epidemiologists look at is quite different from the cost-benefit analysis that individuals look at. LOL. Don't bother your pretty little head with too much information. No, you don't need to know that. No, you can't decide what you need to know and what you don't need to know. X-/
0eggman10y
Scientists, as a community of humans, should expect their research to return false positives sometimes, because that is what is going to happen, and they should publish those results. Scientists should also expect experiments to demonstrate that some of their hypotheses are just plain wrong. It seems to me replication is only not very useful if the replications of the experiment are likely prone to all the same crap that currently makes original experiments from social psychology not all that reliable. I don't have experience, or practical knowledge of the field, though, so I wouldn't know.

When natural scientists attempt to replicate famous experiments where the original result was clearly correct, with what probability do they tend to succeed? Is it closer to 1 than, say, .7?

I'd think that "famous experiments where the original result was clearly correct" are exactly those whose results have already been replicated repeatedly. If they haven't been replicated they may well be famous -- Stanford prison experiment, I'm looking at you -- but they aren't clearly correct.

7James_Miller10y
I was thinking more "What is the error rate in replication experiments when we know the results from the original experiment were correct?" So if mixing X and Y under certain conditions has to yield Z, how often when scientists actually try to do this do they get Z?
4Lightwave10y
The error rate in replication experiments in the natural sciences is expected to be much much lower than in the social sciences. Humans and human environments are noisy and complicated. Look at nutrition/medicine - it's taking us decades to figure out whether some substance/food is good or bad for you and under what circumstances. Why would you expect it be easier to analyze human psychology and behavior?
3ChristianKl10y
If you want to know whether food is good or bad you have to look at mortality which means you might have to wait a decade. A lot of psychology experiments claim effects over much shorter timeframes.
2LimberLarry10y
I think he is more suggesting that the number of confounding factors in psychology experiments is generally far higher than in the natural sciences. The addition of such uncontrollable factors leads to a generally higher error rate in human sciences.
0ChristianKl10y
The number of confounding factors isn't that important if it's possible to do controlled experiments that control for them. Nutrition science has the problem that you usually can't do good controlled experiments or those are very expensive.
2LimberLarry10y
Obviously if you can control for a confounding factor then it's not an issue; I was simply stressing that the nature of human sciences means that it is effectively impossible to control for all confounding factors, or even be aware of many of them.
7ChristianKl10y
To the extent that's true, careful replication of studies to identify factors is important if you don't want to practice what Feynman described as Cargo Cult Science. If you follow Feynman's argument, physicists would also get a bunch of bad results if they worked with the scientific standards used in psychology. Feynman on rat psychology: Nutrition is really a different case than a lot of psychology. There are questions in psychology such as whether doing certain things to a child in its childhood affects whether that child becomes a healthy adult. Those questions are hard to investigate scientifically because of the time lag. The same isn't true for many psychology experiments.
2LimberLarry10y
I don't think we actually disagree on anything, the only point I was making was that your reply to Lightwave, while accurate, wasn't actually replying to the point he made.
2ChristianKl10y
I did reply to his point. He spoke about nutrition science. That field has its own problems that psychologists don't have to deal with. It's a bad example if he wanted to make the point you think he wanted to make.
2LimberLarry10y
Well fair enough. His use of nutrition science as an example was probably poorly chosen.
6satt10y
I've suggested on LW before that most attempts at physics experiments are wrong, if one counts physics students' attempts. The standard reaction to a student getting a counterintuitive result is, "well, obviously they messed up the experiment". I notice I feel OK with that response in the case of physics but don't like Mitchell trying it for psychology. (I wonder whether biology students have to count chromosomes.)
7Pfft10y
Students are particularly bad at experimentation (which is why they have to take those labs in the first place), and the experiments they do are selected for being particularly fundamental and well-understood (in particular, they have already been replicated lots of times). I think this is a more important difference than physics versus psychology.

Rather, the problem is that at least one celebrated authority in the field hates that, and would prefer much, much more deference to authority.

I don't think this is true at all. His points against replicability are very valid and match my experience as a researcher. In particular:

Because experiments can be undermined by a vast number of practical mistakes, the likeliest explanation for any failed replication will always be that the replicator bungled something along the way.

This is a very real issue and I think that if we want to solve the current issues with science we need to be honest about this, rather than close our eyes and repeat the mantra that replication will solve everything. And it's not like he's arguing against accountability. Even in your quoted passage he says:

The field of social psychology can be improved, but not by the publication of negative findings. Experimenters should be encouraged to restrict their “degrees of freedom,” for example, by specifying designs in advance.

Now, I think he goes too far by saying that no negative findings should be published; but I think they need to be held to a high standard for the very reason he gives. On the other han... (read more)

What is the purpose of an experiment in science? For instance, in the field of social psychology? For instance, what is the current value of the Milgram experiment? A few people in Connecticut did something in a room at Yale in 1961. Who cares? Maybe it's just gossip from half a century ago.

However, some people would have us believe that this experiment has broader significance, beyond the strict parameters of the original experiment, and has implications for (for example) the military in Texas and corporations in California.

Maybe these people are wrong. Maybe the Milgram experiment was a one-off fluke. If so, then let's stop mentioning it in every intro to psych textbook. While we're at it, why the hell was that experiment funded, anyway? Why should we bother funding any further social psychology experiments?

I would have thought, though, that most social psychologists would believe that the Milgram experiment has predictive significance for the real world. A Bayesian who knows about the results of the Milgram experiment should better be able to anticipate what happens in the real world. This is what an experiment is for. It changes your expectations.

However, if a supp... (read more)

4fubarobfusco10y
http://en.wikipedia.org/wiki/Milgram_experiment#Replications_and_variations — (emphasis added)

Because experiments can be undermined by a vast number of practical mistakes, the likeliest explanation for any failed replication will always be that the replicator bungled something along the way.

This is a very real issue and I think that if we want to solve the current issues with science we need to be honest about this, rather than close our eyes and repeat the mantra that replication will solve everything.

Why is it more likely that the followup experiment was flawed, rather than the original? Are we giving a prior of > 50% to every hypothesis that a social scientist comes up with?

Either way, I think you are being quite uncharitable to Mitchell.

I disagree. Let's look at this section again:

Whether they mean to or not, authors and editors of failed replications are publicly impugning the scientific integrity of their colleagues. Targets of failed replications are justifiably upset, particularly given the inadequate basis for replicators’ extraordinary claims.

Contrast this to:

“This has been difficult for me personally because it’s an area that’s important for my research,” he says. “But I choose the red pill. That’s what doing science is.”

From here, linked before on LW here.

The first view seems to have the implied assumption that false positives don't happen to good researchers, whereas the second view has the implied assumption that theories and people are separate, and people should follow the facts, rather than the other way around.


But perhaps it is the case that, in social psychology, the majority of false positives are not innocent, and thus when a researcher's results do not replicate it is a sign that they're dishonest rather than that they're unlucky. In such a case, he is declaring that researchers should not try to expose dishonesty, which should bring down opprobrium from all decent people.

1Dan_Moore10y
The goal is to set up the experiments to make it solely about the results and not about colleagues. If 'scientific integrity' means sloppy, porous experimental setup, then impugning this is not a bad thing. Ideally the experimental design and execution should transcend the question of the researchers' motives.
4CarlShulman10y
Do you agree with the empirical claim about the frequencies of false positives in initial studies versus false negatives in replications?

This is why we can't have social science. Not because the subject is not amenable to the scientific method -- it obviously is. People are conducting controlled experiments and other people are attempting to replicate the results. So far, so good.

So, you say people are trying the scientific approach. My guess is, the nature of the problem is such that nothing much came out of these attempts. No great insights were gained, no theories were discovered. Real scientists had nothing to show for their efforts, and this is why these fields are now not owned by... (read more)

2Nornagest10y
This may have happened.

I think someone should mention Harry Collins and Trevor Pinch's book The Golem here. It's a collection of episodes from the history of science. The general theme is that in practice, new discoveries do not involve a clear-cut observation followed by theorizing, instead there is a lot of squabbling over whether the researchers involved carried out their experiments correctly, and these kind of feuds can persist for a scientific generation.

My view is that this makes replication attempts all the more important. But it also shows that some resistance and recri... (read more)

In the second paragraph of the quote the author ignores the whole point of replication efforts. We know that scientific studies may suffer from methodological errors. The whole point of replication studies is to identify methodological errors. If they disagree, then you know there is an uncontrolled variable or a methodological mistake in one or both of them; further studies and the credibility of the experimenters are then used to determine which result is more likely to be true. If the independent studies agree then it is evidence that they are both correct... (read more)

5DanArmak10y
Original, non-replication studies are mostly made by people who agree with what their studies are showing. (Also, publication bias.) So this is not a reason to think replication studies are particularly biased.

While I agree that this guy needs to hand in his "Scientist" card, this is an individual who no more reflects on his field than any other individual does on theirs.

There was a notable climate scientist whose response to people asking for his data was literally "no, you'll just try to use it to prove me wrong".

Edit: exact quote: "Even if WMO agrees, I will still not pass on the data. We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it."... (read more)

1waveman9y
You were down-voted in part because of bad grammar (who's=>whose) and the absence of citations.
0HungryHobo9y
edited