Aquinas famously said: beware the man of one book. I would add: beware the man of one study.

For example, take medical research. Suppose a certain drug is weakly effective against a certain disease. After a few years, a bunch of different research groups have gotten their hands on it and done all sorts of different studies. In the best case scenario the average study will find the true result – that it’s weakly effective.

But there will also be random noise caused by inevitable variation and by some of the experiments being better quality than others. In the end, we might expect something looking kind of like a bell curve. The peak will be at “weakly effective”, but there will be a few studies to either side. Something like this:

We see that the peak of the curve is somewhere to the right of neutral – ie weakly effective – and that there are about 15 studies that find this correct result.

But there are also about 5 studies that find that the drug is very good, and 5 studies missing the sign entirely and finding that the drug is actively bad. There’s even 1 study finding that the drug is very bad, maybe seriously dangerous.

This is before we get into fraud or statistical malpractice. I’m saying this is what’s going to happen just by normal variation in experimental design. As we increase experimental rigor, the bell curve might get squashed horizontally, but there will still be a bell curve.
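
As a quick illustration (a hypothetical simulation with made-up numbers, not data from any real trial), here is a minimal sketch of how a drug with a genuine but weak effect, studied by research groups of varying size, produces exactly this kind of spread:

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.2            # the drug really is weakly effective
n_studies = 30               # independent research groups
sample_sizes = rng.integers(30, 300, size=n_studies)  # studies vary in size (a stand-in for rigor)

# Each study estimates the effect with sampling noise; for a two-arm trial with
# unit outcome variance, the standard error of the estimate is roughly sqrt(2/n).
estimates = np.array([true_effect + rng.normal(0, np.sqrt(2 / n)) for n in sample_sizes])

print(f"mean estimate across studies: {estimates.mean():+.2f}")
print(f"studies finding the drug harmful (estimate < 0): {(estimates < 0).sum()} of {n_studies}")
print(f"studies finding it 'very good' (estimate > 0.4): {(estimates > 0.4).sum()} of {n_studies}")
```

Every simulated study is honestly run against the same true effect, yet a typical run of this toy script turns up a few wrong-signed results and a few implausibly impressive ones, before anyone has committed any fraud at all.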

In practice it’s worse than this, because this is assuming everyone is investigating exactly the same question.

Suppose that the graph is titled “Effectiveness Of This Drug In Treating Bipolar Disorder”.

But maybe the drug is more effective in bipolar i than in bipolar ii (Depakote, for example).

Or maybe the drug is very effective against bipolar mania, but much less effective against bipolar depression (Depakote again).

Or maybe the drug is a good acute antimanic agent, but very poor at maintenance treatment (let’s stick with Depakote).

If you have a graph titled “Effectiveness Of Depakote In Treating Bipolar Disorder” plotting studies from “Very Bad” to “Very Good” – and you stick all the studies – maintenance, manic, depressive, bipolar i, bipolar ii – on the graph, then you’re going to end up running the gamut from “very bad” to “very good” even before you factor in noise and even before you factor in bias and poor experimental design.

So here’s why you should beware the man of one study.

If you go to your better class of alternative medicine websites, they don’t tell you “Studies are a logocentric phallocentric tool of Western medicine and the Big Pharma conspiracy.”

They tell you “medical science has proved that this drug is terrible, but ignorant doctors are pushing it on you anyway. Look, here’s a study by a reputable institution proving that the drug is not only ineffective, but harmful.”

And the study will exist, and the authors will be prestigious scientists, and it will probably be about as rigorous and well-done as any other study.

And then a lot of people raised on the idea that some things have Evidence and other things have No Evidence think holy s**t, they’re right!

On the other hand, your doctor isn’t going to a sketchy alternative medicine website. She’s examining the entire literature and extracting careful and well-informed conclusions from…

Haha, just kidding. She’s going to a luncheon at a really nice restaurant sponsored by a pharmaceutical company, which assures her that they would never take advantage of such an opportunity to shill their drug, they just want to raise awareness of the latest study. And the latest study shows that their drug is great! Super great! And your doctor nods along, because the authors of the study are prestigious scientists, and it’s about as rigorous and well-done as any other study.

But obviously the pharmaceutical company has selected one of the studies from the “very good” end of the bell curve.

And I called this “Beware The Man of One Study”, but it’s easy to see that in the little diagram there are like three or four studies showing that the drug is “very good”, so if your doctor is a little skeptical, the pharmaceutical company can say “You are right to be skeptical, one study doesn’t prove anything, but look – here’s another group that finds the same thing, here’s yet another group that finds the same thing, and here’s a replication that confirms both of them.”

And even though it looks like in our example the sketchy alternative medicine website only has one “very bad” study to go off of, they could easily supplement it with a bunch of merely “bad” studies. Or they could add all of those studies about slightly different things. Depakote is ineffective at treating bipolar depression. Depakote is ineffective at maintenance bipolar therapy. Depakote is ineffective at bipolar ii.

So just sum it up as “Smith et al 1987 found the drug ineffective, yet doctors continue to prescribe it anyway”. Even if you hunt down the original study (which no one does), Smith et al won’t say specifically “Do remember that this study is only looking at bipolar maintenance, which is a different topic from bipolar acute antimanic treatment, and we’re not saying anything about that.” It will just be titled something like “Depakote fails to separate from placebo in six month trial of 91 patients” and trust that the responsible professionals reading it are well aware of the difference between acute and maintenance treatments (hahahahaha).

So it’s not so much “beware the man of one study” as “beware the man of any number of studies less than a relatively complete and not-cherry-picked survey of the research”.

II.

I think medical science is still pretty healthy, and that the consensus of doctors and researchers is more-or-less right on most controversial medical issues.

(it’s the uncontroversial ones you have to worry about)

Politics doesn’t have this protection.

Like, take the minimum wage question (please). We all know about the Krueger and Card study in New Jersey that found no evidence that high minimum wages hurt the economy. We probably also know the counterclaims that it was completely debunked as despicable dishonest statistical malpractice. Maybe some of us know Card and Krueger wrote a pretty convincing rebuttal of those claims. Or that a bunch of large and methodologically advanced studies have come out since then, some finding no effect like Dube, others finding strong effects like Rubinstein and Wither. These are just examples; there are at least dozens and probably hundreds of studies on both sides.

But we can solve this with meta-analyses and systematic reviews, right?

Depends which one you want. Do you go with this meta-analysis of fourteen studies that shows that any presumed negative effect of high minimum wages is likely publication bias? With this meta-analysis of sixty-four studies that finds the same thing and discovers no effect of minimum wage after correcting for the problem? Or how about this meta-analysis of fifty-five countries that does find effects in most of them? Maybe you prefer this systematic review of a hundred or so studies that finds strong and consistent effects?

Can we trust news sources, think tanks, econblogs, and other institutions to sum up the state of the evidence?

CNN claims that 85% of credible studies have shown the minimum wage causes job loss. But raisetheminimumwage.com declares that “two decades of rigorous economic research have found that raising the minimum wage does not result in job loss…researchers and businesses alike agree today that the weight of the evidence shows no reduction in employment resulting from minimum wage increases.” Modeled Behavior says “the majority of the new minimum wage research supports the hypothesis that the minimum wage increases unemployment.” The Center for Budget and Policy Priorities says “The common claim that raising the minimum wage reduces employment for low-wage workers is one of the most extensively studied issues in empirical economics. The weight of the evidence is that such impacts are small to none.”

Okay, fine. What about economists? They seem like experts. What do they think?

Well, five hundred economists signed a letter to policy makers saying that the science of economics shows increasing the minimum wage would be a bad idea. That sounds like a promising consensus…

…except that six hundred economists signed a letter to policy makers saying that the science of economics shows increasing the minimum wage would be a good idea. (h/t Greg Mankiw)

Fine then. Let’s do a formal survey of economists. Now what?

raisetheminimumwage.com, an unbiased source if ever there was one, confidently tells us that “indicative is a 2013 survey by the University of Chicago’s Booth School of Business in which leading economists agreed by a nearly 4 to 1 margin that the benefits of raising and indexing the minimum wage outweigh the costs.”

But the Employment Policies Institute, which sounds like it’s trying way too hard to sound like an unbiased source, tells us that “Over 73 percent of AEA labor economists believe that a significant increase will lead to employment losses and 68 percent think these employment losses fall disproportionately on the least skilled. Only 6 percent feel that minimum wage hikes are an efficient way to alleviate poverty.”

So the whole thing is fiendishly complicated. But unless you look very very hard, you will never know that.

If you are a conservative, what you will find on the sites you trust will be something like this:

Economic theory has always shown that minimum wage increases decrease employment, but the Left has never been willing to accept this basic fact. In 1992, they trumpeted a single study by Card and Krueger that purported to show no negative effects from a minimum wage increase. This study was immediately debunked and found to be based on statistical malpractice and “massaging the numbers”. Since then, dozens of studies have come out confirming what we knew all along – that a high minimum wage is economic suicide. Systematic reviews and meta-analyses (Neumark 2006, Boockman 2010) consistently show that an overwhelming majority of the research agrees on this fact – as do 73% of economists. That’s why five hundred top economists recently signed a letter urging policy makers not to buy into discredited liberal minimum wage theories. Instead of listening to starry-eyed liberal woo, listen to the empirical evidence and an overwhelming majority of economists and oppose a raise in the minimum wage.

And if you are a leftist, what you will find on the sites you trust will be something like this:

People used to believe that the minimum wage decreased unemployment. But Card and Krueger’s famous 1992 study exploded that conventional wisdom. Since then, the results have been replicated over fifty times, and further meta-analyses (Card and Krueger 1995, Dube 2010) have found no evidence of any effect. Leading economists agree by a 4 to 1 margin that the benefits of raising the minimum wage outweigh the costs, and that’s why more than 600 of them have signed a petition telling the government to do exactly that. Instead of listening to conservative scare tactics based on long-debunked theories, listen to the empirical evidence and the overwhelming majority of economists and support a raise in the minimum wage.

Go ahead. Google the issue and see what stuff comes up. If it doesn’t quite match what I said above, it’s usually because they can’t even muster that level of scholarship. Half the sites just cite Card and Krueger and call it a day!

These sites with their long lists of studies and experts are super convincing. And half of them are wrong.

At some point in their education, most smart people usually learn not to credit arguments from authority. If someone says “Believe me about the minimum wage because I seem like a trustworthy guy,” most of them will have at least one neuron in their head that says “I should ask for some evidence”. If they’re really smart, they’ll use the magic words “peer-reviewed experimental studies.”

But I worry that most smart people have not learned that a list of dozens of studies, several meta-analyses, hundreds of experts, and expert surveys showing almost all academics support your thesis – can still be bullshit.

Which is too bad, because that’s exactly what people who want to bamboozle an educated audience are going to use.

III.

I do not want to preach radical skepticism.

For example, on the minimum wage issue, I notice only one side has presented a funnel plot. A funnel plot is usually used to investigate publication bias, but it has another use as well – it’s pretty much an exact presentation of the “bell curve” we talked about above.

This is more of a needle curve than a bell curve, but the point still stands. We see it’s centered around 0, which means there’s some evidence that zero is the real signal amid all this noise. The bell skews more to the left than to the right, which means more studies have found negative effects of the minimum wage than positive effects. But since the bell curve is asymmetrical, we interpret that as probable publication bias. So all in all, I think there’s at least some evidence that the liberals are right on this one.
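
For readers who haven’t met one, here is a rough sketch of how a funnel plot is built and read. The numbers, the zero true effect, and the crude publication filter below are all invented for illustration – this is not the plot from the minimum wage literature:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Hypothetical setup: suppose the true employment effect is roughly zero.
true_effect = 0.0
n_studies = 200
se = rng.uniform(0.5, 5.0, size=n_studies)      # standard error; small = precise study
estimates = rng.normal(true_effect, se)         # each study's estimated effect

# Toy publication filter: precise studies always get published; imprecise ones
# mostly get published only if they find a "significant" negative effect.
published = (se < 1.5) | (estimates < -1.96 * se) | (rng.random(n_studies) < 0.2)

plt.scatter(estimates[published], 1 / se[published], s=12)
plt.axvline(true_effect, linestyle="--")
plt.xlabel("estimated effect of the minimum wage on employment")
plt.ylabel("precision (1 / standard error)")
plt.title("Simulated funnel plot: left skew from selective publication")
plt.show()
```

Precise studies cluster near the true effect at the top of the funnel; if imprecise studies mostly get published when they find a negative effect, the bottom of the plot skews left – the asymmetry that gets read as publication bias.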

Unless, of course, someone has realized that I’ve wised up to the studies and meta-analyses and expert surveys, and figured out a way to hack funnel plots, which I am totally not ruling out.

(okay, I kind of want to preach radical skepticism)

Also, I should probably mention that it’s much more complicated than one side being right, and that the minimum wage probably works differently depending on what industry you’re talking about, whether it’s a state or federal wage, whether it’s a recession or a boom, whether we’re talking about increasing from $5 to $6 or from $20 to $30, etc, etc, etc. There are eleven studies on that plot showing an effect even worse than -5, and very possibly they are all accurate for whatever subproblem they have chosen to study – much like the example with Depakote where it might be an effective antimanic but a terrible antidepressant.

(radical skepticism actually sounds a lot better than figuring this all out).

IV.

But the question remains: what happens when (like in most cases) you don’t have a funnel plot?

I don’t have a good positive answer. I do have several good negative answers.

Decrease your confidence about most things if you’re not sure that you’ve investigated every piece of evidence.

Do not trust websites which are obviously biased (eg Free Republic, Daily Kos, Dr. Oz) when they tell you they’re going to give you “the state of the evidence” on a certain issue, even if the evidence seems very stately indeed. This goes double for any site that contains a list of “myths and facts about X”, quadruple for any site that uses phrases like “ingroup member uses actual FACTS to DEMOLISH the outgroup’s lies about Y”, and octuple for RationalWiki.

Most important, even if someone gives you what seems like overwhelming evidence in favor of a certain point of view, don’t trust it until you’ve done a simple Google search to see if the opposite side has equally overwhelming evidence.

1 comment

Your overall argument is well made.

However, I think choosing to use an economic example as your theme may be problematic. Economic studies are basically impossible to conduct rigorously. There can be no controlled, double-blind, repeatable experiments. In addition, given the fundamental interconnectedness of all things (Douglas Adams was a greater man than many people realize), it is extremely difficult to predict what the effects of a change actually are, and therefore what effects need to be measured (at least in order for any results to be usable for policy decisions, where it is necessary to consider all effects, not just a subset of them).

As a consequence at least one school of economics (Austrians) argues that all economic studies are bunk and that only economics based on deductive logic is valid. I'm not sure I completely agree with this position, but I can see where they are coming from, and it is interesting that the current state of economic "science", based upon such studies, appears incapable of successfully making any useful prediction at all. The Austrians aren't really any better: they are very limited in the predictions they are able to make because deductive logic limits their models; but at least in their case they know that they cannot make predictions.

For the sake of argument let us reject the Austrian view, and accept that evidence from flawed experiments is better than no evidence at all. Even so, we must, I think, at least accept that all evidence from studies in such a field must be treated exceedingly cautiously and not given the same weight as a laboratory experiment in a "hard" science, or even a well-designed psychology study or medical trial. And perhaps this means that we must also continue to give credence to well-founded logical theory even when we have some real-world evidence that conflicts with it.

So what are the implications of the above thinking when applied to the theme of minimum wage?

(1) Firstly I think that your assumption that the asymmetry of the funnel probably arises from publication bias is suspect. Asymmetry can also arise from a systematic difference between studies of higher and lower precision, or use of an inappropriate effect measure: exactly the sort of problems that we would expect to arise given the real world constraints on economic research. So I think you should interpret the asymmetry not as "probable publication bias", but as invalidating your use of this funnel plot to draw conclusions without further detailed investigation of individual studies.

(2) The economic law of supply and demand is a very strong and well-supported law, both by simple, elegant theory and a great deal of real-world evidence. To suggest that it doesn't apply to the price of labor is a staggering assertion which I suspect you are not making, although your blanket statement "all in all, I think there’s at least some evidence that the liberals are right on this one" seems somewhat misleading to me. To your credit you do identify the many shortcomings of this method, but my interpretation is that these shortcomings, which result from attempting to amalgamate the results of studies on different populations, subject to different inputs and measuring different effects, are sufficient to render any conclusion essentially meaningless.

(3) Even if we discovered that publication bias is the cause of the asymmetry (which seems unlikely to me and would take a great deal of investigative effort on our part), should we allow these studies to guide policy and raise minimum wages? Surely (at least if we are aiming to maximize utility rather than win votes) the answer is no? We have no idea what other potentially negative effects a change in minimum wage has on the economy, nor do we even have any specific information about the set of circumstances for which our tentative conclusion that raising minimum wage does not impact employment holds true.