Professional Patients: Fraud that ruins studies

by jimrandomh · 3 min read · 5th Jan 2012 · 13 comments


Personal Blog

I just read Antidepressants: Bad Drugs... Or Bad Patients, linked by wallowinmaya in a discussion post and based on the journal articles Antidepressant Clinical Trials and Subject Recruitment: Just Who Are Symptomatic Volunteers? and Failure Rate and "Professional Subjects" in Clinical Trials of Major Depressive Disorder (paywalled). The authors of the latter paper "were told anonymously by trial sponsors that duplicate subjects in some protocols have been as high as 5%", and write, "we believe that failure rates are rising due to the increase in 'professional subjects,' who go from site to site, learning inclusion and exclusion criteria and collecting stipends."

Aha! No wonder so much antidepressant research is crap. How many people do you suppose there are who sign up for lots of trials at once? They'd have to defraud the researchers, of course; no one would knowingly allow a patient like that into their study. What would that sort of person do, and how would it be reflected in the data? What other types of studies are affected? And how would we find out? (Dietary studies look vulnerable, and their results have been conspicuously unreliable. On the other hand, lots of people want to lose weight, so legitimate subjects are probably plentiful, driving down the percentage of fraudsters.)

Let's start with some speculative modeling. First, to be clear: we're talking about people who participate in multiple studies and lie about it. That means defrauding researchers, and they know it. If they're doing so rationally, then a lot of their responses are going to be lies, designed to maximize the chance they get paid and minimize the chance they get caught.

Now, if you were defrauding researchers anyway, there's no reason to actually take all the drugs, and good reasons not to (other drugs they denied taking could interact, for example). Instead, you'd start by figuring out whether you were in the placebo group. This isn't very hard; you can try the pills and see if there's any effect at all (real drugs have perceptible effects, even if they're only side effects), break one open and taste it, or even do a proper chemical test. If in the placebo group, you'd answer all questions in a manner consistent with getting no effect: no side effects, no change in status, etc. This could be even more placebo-like than a real placebo; normal subjects on placebo sometimes report unrelated things as side effects, but fraudsters probably wouldn't. If in the experimental group, on the other hand, you'd try to match the experimenters' expectations, to avoid attention. That would mean claiming the drug helped. So in a drug trial with an unverifiable endpoint, you'd expect fraudulent subjects to systematically bias the study towards showing effectiveness.
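
This model is easy to sketch numerically. The toy Monte Carlo below (every parameter value is invented for illustration; `simulate_trial` and its numbers are not from any real study) has honest subjects report a placebo response plus any real drug effect plus noise, while fraudsters unblind themselves and report zero change on placebo, a strong improvement on the drug:

```python
import random
import statistics

random.seed(0)

def simulate_trial(n=200, frac_fraud=0.05, true_effect=0.2,
                   placebo_response=1.0, noise_sd=1.0,
                   fraud_drug_report=2.0):
    """One toy trial. Honest subjects report placebo response plus any
    real drug effect plus noise; fraudsters unblind themselves and
    report exactly zero change on placebo, a strong improvement on
    the drug. Returns the measured drug-minus-placebo difference."""
    placebo, drug = [], []
    for i in range(n):
        on_drug = i % 2 == 0
        if random.random() < frac_fraud:  # fraudulent subject
            score = fraud_drug_report if on_drug else 0.0
        else:                             # honest subject
            score = (placebo_response
                     + (true_effect if on_drug else 0.0)
                     + random.gauss(0, noise_sd))
        (drug if on_drug else placebo).append(score)
    return statistics.mean(drug) - statistics.mean(placebo)

def mean_diff(frac_fraud, trials=2000):
    """Average measured effect across many simulated trials."""
    return statistics.mean(simulate_trial(frac_fraud=frac_fraud)
                           for _ in range(trials))

print(mean_diff(0.0))   # honest trials: close to the true effect, 0.2
print(mean_diff(0.05))  # with 5% fraudsters: inflated, roughly 0.29 here
```

With these made-up numbers, 5% fraudsters inflate the measured effect by nearly half, because they pull the placebo mean down and the drug mean up at the same time.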

One of the papers repeated an anonymous claim of finding up to 5% duplicate subjects in some studies. Those are only the subjects who used the same name each time; there could be more who use aliases. Since one fraudster can participate in many studies, a small number of them can have a big impact. And since many studies end with small effect sizes, but we count significance rather than size, their influence is magnified further.

The main obstacle to preventing this kind of fraud is confidentiality; medical records are supposed to be kept secret, which means it's hard to get at other studies' patient lists to cross-check. But it's certainly not impossible, and I think some auditing is called for. Another strategy is to conduct a "study on study fraud" - advertise it like an antidepressant study, conduct it like one, pay well, and give everyone placebos in a special bottle that logs the time and the weight of its contents whenever it's opened. A similar strategy for dietary studies is to include a food with distinctive metabolites, and test for those metabolites.


13 comments

You suggest "professional subjects" would try to decide whether they're in the placebo or treatment arm. But this seems difficult: people who are badly-off enough to consider "professional patient" a career choice are probably going to have a hard time testing for chemical compositions, and as you mention, placebos can cause placebo side effects. It also seems unnecessary (researchers can never say "You didn't give us the effect we expected, so we think you're cheating and we're not going to pay you for this study and we're blacklisting you for future studies" without opening a can of worms). And it contradicts the observed results: if fake patients did this successfully, it would mean a greater difference between placebo and experimental groups, since placebo would report no change and experimental would report strong positive effects - but the conundrum to be explained is that antidepressant studies seem to be showing less efficacy over time.

I think it's much more plausible that if these "professional subjects" existed, they would just say "Yes, the drug worked great" on the theory that this is what the researchers want to hear (they may not understand the idea of placebo-control, and even if they do, they know that it means they can totally get away with being "cured" by a fake pill). This would present as placebos appearing more effective than they did in previous, more honest trials - which seems to be happening - and a decrease in positive findings.

I would be surprised if this were large enough to make a big difference, but it's certainly worth considering. I wonder if there are any meta-analyses comparing studies recruiting by public advertising, versus studies recruiting by a doctor or institution referring their own patient population.

(researchers can never say "You didn't give us the effect we expected, so we think you're cheating and we're not going to pay you for this study and we're blacklisting you for future studies" without opening a can of worms)

I linked the previous post on my Facebook and got this comment from Paul Riddell:

"This reminds me of tales related by friends living in Austin, which has both the University of Texas and a big pharmaceutical testing facility. Although researchers would swear up and down that they'd never withhold payment for testing if a test subject showed, say, severe allergic reactions, it happened all of the time. To that end, subjects learned very early on, and they'd pass it on to friends in need of cash, that they should never report adverse symptoms. And now we see the legacy of this, with lots of very disturbing side effects only now being recognized from the line of antidepressants tested there."

That is seriously f'ed up.

Likewise, the fact that we're better at checking for serial credit card applicants than serial test patients.

The research itself is full of bad incentives. Ben Goldacre has frequently suggested a database of all clinical trials, including failed ones. He has a raft of newspaper columns on the hows and whys of bad science in medicine. Professional patients are the very least of it.

Back when there were no good treatments for AIDS and it was regarded as pretty much a death sentence, patients would in fact conspire to see who was getting the drug or placebo, and drop out if they had the placebo and try to sign up for a study with the drug. This of course defeats the purpose of an RCT, but the point is that desperate people aren't in it for the science.

A while ago, I read in Guinea Pig Zero, an occupational jobzine for people who are used as medical or pharmaceutical research subjects, about people who felt that they were under no obligation to be honest to help develop medical treatments they'd never be able to afford to use. IIRC, a lot of the non-compliance was about dietary restrictions-- and there are such restrictions in some drug studies as well as in nutritional and diet studies.

Aha! No wonder so much antidepressant research is crap.

This, perhaps facetiously, suggests way more explanatory power than the potential problem in question deserves.

Indeed. I'd heard about this before, so this is not news.

Since many studies end with small effect sizes, but we count significance rather than size, their effect is magnified further.

This is what I want to hear more about. Given 5% dupes, how much significance and effect size does this leech away? My intuition says this could explain a non-zero number of failed drugs, but not the entire dismal litany; intuition, though, is far inferior to some worked-out numbers.
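
As one toy answer (every number here is invented, and the code only sketches the "yes, the drug worked great" behavior suggested in an earlier comment, not any real trial data): fraudsters who report the same large improvement in both arms dilute the measured drug-placebo difference by roughly their fraction of the sample, and their outlier answers inflate the variance. The simulation below compares the average two-sample z statistic with and without 5% such subjects:

```python
import math
import random
import statistics

random.seed(1)

def mean_z(n_per_arm=100, frac_fraud=0.0, true_effect=0.2,
           placebo_response=1.0, noise_sd=1.0, fraud_report=3.0,
           trials=2000):
    """Average two-sample z statistic across simulated trials.
    'Yes-man' fraudsters report the same large improvement in both
    arms, which dilutes the measured drug-placebo difference and
    inflates the variance (all parameter values are invented)."""
    def arm(effect):
        return [fraud_report if random.random() < frac_fraud
                else placebo_response + effect + random.gauss(0, noise_sd)
                for _ in range(n_per_arm)]
    zs = []
    for _ in range(trials):
        placebo, drug = arm(0.0), arm(true_effect)
        diff = statistics.mean(drug) - statistics.mean(placebo)
        se = math.sqrt(statistics.variance(drug) / n_per_arm +
                       statistics.variance(placebo) / n_per_arm)
        zs.append(diff / se)
    return statistics.mean(zs)

print(mean_z(frac_fraud=0.0))   # honest: about 0.2/sqrt(2/100), i.e. ~1.4
print(mean_z(frac_fraud=0.05))  # 5% yes-men: noticeably smaller
```

With these made-up parameters, 5% yes-men trim only about 5% off the measured effect plus a variance penalty - small per study, consistent with the intuition that this can't explain the entire dismal litany, though a fraudster who enters many studies raises the effective fraction.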


From my buddy, a research psychologist:

"Oh yeah, those are a big issue. They're a big problem for medical shit and business shit. Less so for university stuff, for obvious reasons. Even the most basic recruitment agency knows about 'em."

Indeed. The issue with antidepressants is that the reported experiences are entirely subjective to start with (and so are the symptoms used when screening volunteers - "I feel kinda down because I am low on money, I guess I'll enter this antidepressant trial").

How would we tell anti-whining medication from one that actually improves the condition? Usually we look at objective criteria - results of blood tests, death rate, whatever. The closest we have to an objective metric for depression is the (attempted and successful) suicide rate. The suicide rate is at best not affected and at worst increased by antidepressants, strongly suggesting that antidepressants are not an effective treatment.

It is easy to create anti-whining medication. Anything that has more severe or more common side effects than the placebo would do, to some extent. The business conducting the trials would try its best to use a placebo that un-blinds the trial as much as possible while remaining legally passable - that is simply the rational thing for them to do.

Another strategy is to conduct a "study on study fraud" - advertise it like an antidepressant study, conduct it like one, pay well, and give everyone placebos in a special bottle that logs the time and the weight of its contents whenever it's opened.

That "special bottle" would be ridiculously expensive to make, and it would be really obvious that it wasn't an ordinary bottle.

We could just remove the incentive. Compensating subjects with things other than money would reduce participation much less than fraud.

What else might we compensate them with?