I’m currently reading Polio: An American Story, by David Oshinsky, and I came across a fascinating story:

In 1954, when it was time for large-scale human trials of the first polio vaccine, some researchers were against the idea of doing a properly randomized, double-blind, placebo-controlled clinical trial—including Jonas Salk, the inventor of the vaccine.

What did they want to do instead? An “observed control” trial: They would ask for volunteers (children) to get the vaccine, and then compare the rate of polio in the volunteers to the rate in their schoolmates who weren’t vaccinated. No placebo. No randomization. Not blind.

Of course, this was hopelessly confounded. In that era, the families most likely to volunteer were the more educated and affluent families (and those were actually the ones most at risk for the disease).

So why did Salk and others oppose proper randomized blind controls? The argument against a randomized trial was the urgency of protecting the nation’s children against a debilitating and deadly disease. If the vaccine worked, it would be a tragedy to withhold it from the control children. Quoting Oshinsky:

There were ethical issues as well. Were injected controls really suited to a polio trial? Was it proper, in short, to deny someone access to a potentially lifesaving vaccine in the name of statistical accuracy? Thousands of parents were going to volunteer their children to receive an injection—all of them hoping it contained the polio vaccine, not the placebo. Yet one-half of this study, composed of six- to nine-year-olds, the group most vulnerable to paralytic polio, would receive a worthless liquid. Some, including Salk himself, saw this as elite science at its worst, a cynical form of Russian roulette.

Some context: the trial was massive, involving hundreds of thousands (eventually over a million) children across the country. This is not n=50 we’re talking about here. And the disease was seasonal, striking in epidemic waves every summer. So even if all the controls were properly vaccinated at the end of the trial, it would be too late for anyone who had been stricken that year.

So there was a real dilemma here. Salk himself seemed to be already convinced that the vaccine worked, and wanted it to be administered as widely as possible:

… Salk refused to budge. There must be no placebo. He could not deny his own product to those who volunteered to receive it. If thousands of children were going to be injected, then every one of them deserved the benefit of his vaccine. The object of these trials should be to protect as many lives as possible, not to run a textbook experiment. Given the stakes, Salk wrote O’Connor, “I would feel that every child who [gets] a placebo and becomes paralyzed will do so at my hands. I know this truthfully is not the case, but I know equally well that if the same child were to receive a vaccine that proved to be effective, then he might have been spared.” It was enough, he said, “to make the humanitarian shudder [and] Hippocrates turn over in his grave.”

Ultimately, some people resigned over the issue, including Harry Weaver, the director of research for the National Foundation for Infantile Paralysis, which was sponsoring the trials, and Joseph Bell, the scientific director who was to run them. The Foundation replaced Bell with Thomas Francis, Salk’s old mentor—but Francis also insisted on a proper blind placebo control. Salk ended up going along with it (perhaps because he had to at that point, perhaps because he trusted Francis more than Bell?).

In the end they did a combination: Some counties did a placebo control, others an “observed control”, at their own discretion. Fortunately, there were enough randomized controls to draw sound scientific conclusions at the end of the study.

While I’m sympathetic to the practical issue of wanting to protect patients, I think the history of medicine shows how easy it is for scientific “knowledge” to become polluted with falsehoods based on less-than-perfect experiments. In medicine especially, these mistakes become tradition, entrenched “wisdom” from revered authority figures that can stand undefeated as common practice for decades or centuries. So it’s important to get things right, and I think Bell and Francis were clearly correct here.

Mostly I just find it fascinating that as late as the 1950s, the need for proper randomized blind placebo controls in clinical trials was not universally accepted, even among scientific researchers. Cultural norms matter, especially epistemic norms.


Comments
Mostly I just find it fascinating that as late as the 1950s, the need for proper randomized blind placebo controls in clinical trials was not universally accepted, even among scientific researchers. Cultural norms matter, especially epistemic norms.

This seems to misunderstand the dispute. Salk may have had an overly optimistic view of the efficacy of his vaccine (among other foibles your source demonstrates), but I don't recall him being a general disbeliever in the value of RCTs.

Rather, his objection is consonant with consensus guidelines for medical research, e.g. the Declaration of Helsinki (article 8): [See also the Nuremberg Code (art 10), relevant bits of the Hippocratic Oath, etc.]

While the primary purpose of medical research is to generate new knowledge, this goal can never take precedence over the rights and interests of individual research subjects.

This cashes out in a variety of ways. The main one is the principle of clinical equipoise - one should only conduct a trial if there is genuine uncertainty about which option is clinically superior. A consequence of this is that clinical trials are often stopped early if the panel supervising the trial finds clear evidence of (e.g.) the treatment outperforming the control (or vice versa), as continuing the trial keeps placing those in the 'wrong' arm in harm's way - even though this comes at an epistemic cost, as the resulting data is poorer than that which could have been gathered if the trial had continued to completion.

I imagine the typical reader of this page is going to tend unsympathetic to the virtue ethicsy/deontic motivations here, but there is also a straightforward utilitarian trade-off: better information may benefit future patients, at the cost of harming (in expectation) those enrolled in the trial. Although RCTs are the ideal, one can make progress with less (although I agree it is even more treacherous), and the question of the right threshold for these is fraught. (There are also natural 'slippery slope' style worries about taking a robust 'longtermist' position in holding that the value of the evidence for all future patients is worth much more than the welfare of the much smaller number of individuals enrolled in a given trial - the genesis of the Nuremberg Code need not be elaborated upon.)

A lot of this ethical infrastructure post-dates Salk, but this suggests his concerns were forward-looking rather than retrograde (even if he was overconfident in the empirical premise that 'the vaccine works' which drove these commitments). I couldn't in good conscience support a placebo-controlled trial for a treatment I knew worked for a paralytic disease either. Similarly, it seems very murky to me what the right call was given knowledge-at-the-time - but if Bell and Francis were right, it likely owed more to them having a more reasonable (if ultimately mistaken) scepticism of the vaccine's efficacy than Salk, rather than him just 'not getting it' about why RCTs are valuable.

even though this comes at an epistemic cost as the resulting data is poorer than that which could have been gathered if the trial continued to completion.

There are ways of handling that epistemically, although they're more complicated - if enough evidence is acquired quickly, the harmful arm of the trial is stopped - whichever arm that turns out to be.

Sure - there's a fair bit of literature on 'optimal stopping' rules for interim results in clinical trials to try and strike the right balance.

It probably wouldn't have helped much with Salk's dilemma: polio is seasonal and the outcome of interest is substantially lagged from the intervention - which has to precede the exposure, so the 'window of opportunity' is quickly lost; I doubt the statistical methods for conducting this were well-developed in the 50s; and the polio studies were already some of the largest trials ever conducted, so even if available, these methods may have imposed even more formidable logistical challenges. So there probably wasn't a neat Pareto improvement of "Let's run an RCT with optimal statistical control governing whether we switch to universal administration" that Salk and his interlocutors could have agreed to pursue.
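To make concrete why interim looks need special stopping rules rather than naive peeking, here is a toy simulation (my own illustration, not anything from the 1954 trial or its actual methodology; all names and parameters are made up). It simulates a two-arm trial with no true treatment effect, tests the accumulating data after every batch of patients at the ordinary z = 1.96 threshold, and shows that this "stop as soon as it looks significant" policy declares a spurious effect far more often than the nominal 5% - the inflation that group-sequential designs with stricter interim boundaries are built to control.

```python
import random

random.seed(0)

def run_trial(n_looks=10, n_per_look=100, z_crit=1.96):
    """Simulate one two-arm trial under the null (no real effect),
    peeking at the data after each batch of patients.
    Returns True if any interim look crosses the naive z threshold."""
    treat, control = [], []
    for _ in range(n_looks):
        treat += [random.gauss(0, 1) for _ in range(n_per_look)]
        control += [random.gauss(0, 1) for _ in range(n_per_look)]
        n = len(treat)
        diff = sum(treat) / n - sum(control) / n
        se = (2.0 / n) ** 0.5  # known unit variance in each arm
        if abs(diff / se) > z_crit:
            return True        # "stop early, declare an effect"
    return False

n_sims = 2000
false_positives = sum(run_trial() for _ in range(n_sims))
print(f"False-positive rate with naive peeking: {false_positives / n_sims:.3f}")
# Well above the nominal 0.05 a single final analysis would give.
```

The fix in the literature is to spend the error budget across looks (e.g. Pocock or O'Brien-Fleming boundaries), which demand a much more extreme interim result before stopping - exactly the kind of machinery that wasn't on the shelf in 1954.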
