When you're suffering from a life-changing illness, where do you find information about its likely progression? How do you decide among treatment options?

You don't want to rely on studies in medical journals because their conclusion-drawing methodologies are haphazard. You'll be better off getting your prognosis and treatment decisions from a social networking site: PatientsLikeMe.com.

PatientsLikeMe.com lets patients with similar illnesses compare symptoms, treatments, and outcomes. As Jamie Heywood explains in his TEDMED 2009 talk, this represents an enormous leap forward in the scope and methodology of clinical trials. I highly recommend his excellent talk, and I paraphrase part of it below.

Here is a report in the Proceedings of the National Academy of Sciences (PNAS) about lithium, a drug used to treat bipolar disorder, which a group in Italy found slowed the progression of ALS in 16 patients. When PNAS published this, 10% of the patients in our system started taking lithium, based on data from 16 patients in a bad publication.

This one patient, Humberto, said, "Can you help us answer these kinds of treatment questions? I don't want to wait for the next trial; I want to know now!"

So we launched some tools to help patients track their medical data like blood levels, symptoms, side effects... and share it.

People said, "You can't run a clinical trial like this. You don't have blinding, you don't have data, it doesn't follow the scientific method -- you can't do it."

So we said, OK, we can't do a clinical trial? Let's do something even harder. Let's use all this data to say whether Lithium is going to work on Humberto.

We took all the patients like Humberto and brought their data together: bringing in their histories, lining up their timelines along meaningful points, and integrating everything we know about each patient -- full information about the entire course of their disease. And we saw that the resulting curve, the orange line on his chart, is what's going to happen to Humberto.
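A minimal sketch of what this matching-and-alignment step can look like (the feature names, the distance measure, and the FRS-series layout are my own assumptions, not PatientsLikeMe's actual method):

```python
import numpy as np

def predict_trajectory(target, patients, k=10):
    """Predict a patient's decline curve from the k most similar patients.

    Each patient is a dict with hypothetical baseline fields ("age",
    "months_since_onset", "baseline_frs") and "frs_series", a list of
    FRS scores indexed by months since symptom onset.
    """
    def distance(a, b):
        keys = ("age", "months_since_onset", "baseline_frs")
        return sum((a[key] - b[key]) ** 2 for key in keys) ** 0.5

    # The k patients whose baselines most resemble the target's.
    neighbors = sorted(patients, key=lambda p: distance(target, p))[:k]

    # Align every neighbor's series on months-since-onset, then average
    # point-wise: the mean curve is the predicted course for the target.
    horizon = min(len(p["frs_series"]) for p in neighbors)
    curves = np.array([p["frs_series"][:horizon] for p in neighbors])
    return curves.mean(axis=0)
```

The point is that the prediction comes entirely from the observed histories of similar patients, not from any model of the disease.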

And in fact he took lithium, and his decline followed that line. This works almost all the time -- it's scary.

So we couldn't run a clinical trial, but we could see whether Lithium was going to work for Humberto.

Here's the mean decline curve for the most dedicated lithium patients we had, the ones who stuck with it for at least a year because they believed it was working. And even for this hardcore sample, we still have N = 4x the number in the journal study.

When we line up these patients' timelines, it's clear that the ones who took lithium didn't do any better. And we had the power to detect an effect only 1/4 the strength of the one reported in the journal. And we reached this conclusion a year before the first clinical trial, funded with millions of dollars by the NIH, announced its negative results last week.
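To get a feel for that power claim, here is how the minimum detectable effect shrinks with sample size (the sample sizes, alpha, and power target are stand-ins, not the study's actual figures):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Smallest standardized effect detectable at 80% power, alpha = 0.05,
# for an n-per-arm of 16 (the published study) and larger pooled samples.
for n in (16, 64, 256):
    d = analysis.solve_power(nobs1=n, alpha=0.05, power=0.8)
    print(f"n per arm = {n:3d} -> minimum detectable effect d = {d:.2f}")
```

Because the detectable effect shrinks roughly as 1/sqrt(n), detecting an effect a quarter the strength takes about sixteen times the sample -- the kind of leverage a pooled patient database has over a 16-patient study.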


Something like this is useful for the types of data points patients would have no reason to self-deceive over; however, I worry that the general tendency for people to make their 'data' fit the stories they've written about themselves in their minds will promote superstitions. For example, a friend of mine is convinced that the aspartame in diet soda caused her rosacea/lupus. She's sent me links to chat rooms that have blamed aspartame for everything from diabetes to Alzheimer's, and it's disturbing to see the kind of positive feedback loops that are created from anecdotes in which chat members state that a clear link exists between symptoms and usage. One says, "I got symptom X after drinking diet soda," and another says, "I have symptom X, it must be from drinking diet soda!" and another says, "Thanks, after reading your comments, I stopped drinking diet soda and symptom X went away!" In spite of chat rooms dedicated to blaming diet soda for every conceivable health problem and the fall of American values, no scientific study to date has shown ANY negative side effect of aspartame even at the upper bounds of current human consumption.

Another example of a hysterical positive-feedback loop is the proliferation of insane allegations that the MMR vaccine causes autism. I would guess that angry parents who wanted to believe MMR caused their child's autism would plot their 'data points' for the onset of their child's symptoms right after vaccination.

A site like this one may allow certain trends to rise out of the noise, but we must not forget the tendency people have to lie to themselves for a convenient story.

In spite of chat rooms dedicated to blaming diet soda for every conceivable health problem and the fall of American values, no scientific study to date has shown ANY negative side effect of aspartame even at the upper bounds of current human consumption.

And in spite of those studies, I get a terrible splitting headache within minutes of drinking a diet soda containing aspartame.

I'm in the middle of preparing a proposal that explains one way in which all previous aspartame studies are flawed. Sorry, not going to explain it now. Aspartame studies are actually pretty complicated. One flaw, which is not the flaw I'm focusing on, is that studies are done using fresh aspartame, even though it's known that aspartame breaks down into other by-products after sitting on a shelf for a few months. Those by-products are not studied for safety.

The vast majority of studies demonstrating the safety of aspartame were done on lab animals, which are incapable of getting (or of reporting, if they did get) many of the symptoms that people claim to get from aspartame.

Another interesting fact about aspartame is that many of the studies demonstrating its safety were funded by Donald Rumsfeld, who was CEO of Searle at the time.

There's some information here about the history of its approval. I don't know whether it's accurate.

Whatever happened to this?

You don't want to rely on studies in medical journals because their conclusion-drawing methodologies are haphazard.

I dispute none of this, but so far as I can tell or guess, the main thing powering the superior statistical strength of PatientsLikeMe is the fact that medical researchers have learned to game the system and use complicated ad-hoc frequentist statistics to get whatever answer they want or think they ought to get, and PatientsLikeMe has some standard statistical techniques that they use every time.

Also, I presume, PatientsLikeMe is Bayesian or Bayes-like in that they take all available evidence into account and update incrementally, while every medical experiment is a whole new tiny little frequentist universe.

This is not really an article about PatientsLikeMe being strong, it is an article about the standard statistical methods of academic science being weak and stupid.
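For concreteness, here is the simplest possible version of "update incrementally on every report": a Beta-Binomial posterior over a response rate, revised one patient at a time (a toy sketch, not the site's actual statistics):

```python
from scipy import stats

# Uniform prior over the probability that a patient improves: Beta(1, 1).
a, b = 1.0, 1.0

# Each new report (True = improved) updates the posterior immediately;
# there is no separate "tiny frequentist universe" per experiment.
for improved in [True, False, False, True, False, False, False]:
    if improved:
        a += 1
    else:
        b += 1

posterior = stats.beta(a, b)
lo, hi = posterior.interval(0.95)
print(f"posterior mean response rate: {posterior.mean():.2f}")
print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")
```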

I dispute none of this, but so far as I can tell or guess, the main thing powering the superior statistical strength of PatientsLikeMe is the fact that medical researchers have learned to game the system and use complicated ad-hoc frequentist statistics to get whatever answer they want or think they ought to get, and PatientsLikeMe has some standard statistical techniques that they use every time.

1) I'd like to see independent evidence of their "superior statistical strength".

2) On the face of it, the main difference between these guys and a proper clinical trial is an assumption that you can trust self-reports. Placebo effect be damned.

In particular, I'd really, really like to see the results for some homeopathic "remedy" (a real one, not one of those that silently include real active compounds).

Isn't the main difference just that they have a bigger sample? (E.g., "4x" in the hardcore group.)

What is your evidence for the claim that the main thing powering the superior statistical strength of PatientsLikeMe is the fact that medical researchers have learned to game the system and use complicated ad-hoc frequentist statistics to get whatever answer they want or think they ought to get? What observations have you made that are more likely to be true given that hypothesis?

I fear this might suffer from Google's problem: once it becomes relied on, the persistent factor which was relied upon gets corrupted.

You can treat it as information, but not as science - it's not protected from malice the same way scientific studies are.

Google is still pretty reliable - as is Wikipedia, despite warnings that "anybody can edit" would lead to noise and inaccuracy.

The dedicated employees of Google and dedicated editors of Wikipedia do a lot of work to make sure that it is as little of a problem as it is - and there are still persistent issues. If it takes off, the dedicated employees of PatientsLikeMe will have to expect to do the same, and I will expect problems to fall through the cracks the same way they do on Wikipedia and Google.

Strategies for gaming search engines are relatively easy to automatically detect and counteract, once you know what they are. On Wikipedia, a person who knows the subject in question will easily notice fraudulent or incorrect information. But I'm not sure what criteria you could use to clearly detect cheaters on a site like this. You could ignore outliers, but that risks losing information from real people who actually have an unusual reaction to the medication. Even if you accepted that as a necessary sacrifice, nothing prevents the cheaters from creating enough accounts to make themselves into non-outliers.
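To make the difficulty concrete, here is the naive screen described above (a robust outlier flag) and the reason sockpuppets defeat it; the threshold and the data are invented:

```python
import numpy as np

def flag_outliers(scores, threshold=3.5):
    """Flag reports far from the median, using the robust MAD z-score."""
    scores = np.asarray(scores, dtype=float)
    median = np.median(scores)
    mad = np.median(np.abs(scores - median))
    if mad == 0:
        return np.zeros(len(scores), dtype=bool)
    robust_z = 0.6745 * (scores - median) / mad
    return np.abs(robust_z) > threshold

honest = list(np.random.normal(50, 10, 200))  # genuine symptom scores
shills = [95.0] * 5                           # a few fake glowing reports
print(flag_outliers(honest + shills).sum())   # the fakes get flagged

# But with enough sockpuppet accounts, the fakes shift the median itself
# and stop looking like outliers: the screen fails exactly when the
# gaming scales up.
```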

You can treat it as information, but not as science - it's not protected from malice the same way scientific studies are.

Scientific studies aren't protected from malice the way PatientsLikeMe is either.

I intuitively agree with your fears, with a mild caveat or two; how much of my fear comes from not respecting doctors? How much comes from boo lights for social networking? Or did I have applause lights and am overcompensating for them?

Since I have no idea what your sentence means past "I fear...Google's problem" I think a lot of my agreement is bias. What might help is a better explanation of what the persistent factor is and how it would get corrupted.

Of course generally treating information as information and science as science is good, but making fast medical decisions involving lots of money and major health issues seems like a good place to introduce new information that could help lots of people. Skepticism is good but barriers to use and acceptance could be harmful.

"Google's problem" = "Search engine optimization" = people with an agenda trying to game the algorithm (so their site gets ranked higher than better ones). For example, linkspam makes "number of inbound links" a less reliable metric of site quality than it would otherwise be.

If something is known to be used as a proxy for quality and people are rewarded accordingly, then you'll end up with people trying to achieve the proxy for quality at the expense of actual quality.

I'm not sure what that has to do with the original topic, though. Are you anticipating that quacks will go on sites like this and say "You should buy my snake oil - look at all these sockpuppets that it's helped!"

If something is known to be used as a proxy for quality and people are rewarded accordingly, then you'll end up with people trying to achieve the proxy for quality at the expense of actual quality.

But isn't that exactly what happens in science too? Citation stat gaming, ghostwritten papers, senior person's name first, etc.?

And those kinds of gaming are much harder for outsiders to detect, as well as harder to fix or regulate, especially when they're the commonly accepted official practices.

So even if the public data aggregation approach is more open to gaming, it doesn't necessarily follow that it's more vulnerable to gaming. Gaming might also be much easier to detect and/or eliminate by automated or semi-automated means.

I'm not sure what that has to do with the original topic, though. Are you anticipating that quacks will go on sites like this and say "You should buy my snake oil - look at all these sockpuppets that it's helped!"

I'm saying it's definitely a concern.

Okay, that clarifies things.

Another great thing about patient social networks is that you can go back and ask the patients questions. Currently, whenever you want to study the correlation between genes and a phenotype, you put together a microarray study, gathering data on a thousand patients and hiring a doctor to ask each patient "Do you, or have you ever, X, Y, or Z?" at a couple hundred bucks per patient. Then you get a DNA sample from each patient and run it on a microarray, at about $700 per patient.

If you decide you want to look at phenotype W, you do it all over again.

With a social network, patients can post their microarray data, and you can ask them "Do you ever W?" even after the initial study.
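As rough arithmetic, using the per-patient figures above as stand-ins:

```python
# Stand-in per-patient costs from the figures above.
patients = 1000
questionnaire = 200   # doctor-administered phenotype questions
microarray = 700      # running one DNA sample on a microarray

# Conventional route: every new phenotype means a whole new study.
per_phenotype = patients * (questionnaire + microarray)
print(f"cost per phenotype, conventional study: ${per_phenotype:,}")  # $900,000

# Shared-data route: the microarray data is uploaded once, so asking
# about a new phenotype is just a follow-up questionnaire, nearly free.
```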

Also importantly, you can study things that you would have a hard time getting a grant to study because they're controversial. For instance, correlations between genes and SAT scores.

Last June, I wrote up a brief description for the NSF of a proposed project to use the Affymetrix Genome-Wide Human SNP Array 6.0, which looks for 900,000 SNPs and 900,000 copy-number variations. The idea was to provide a website onto which patients could upload their own data, and on which researchers - or anyone at all - could distribute questionnaires to specific sets of patients. This would let people conduct studies for almost nothing that would currently cost millions of dollars. (A lot of people would argue that you can't ask patients to self-report their symptoms/attributes. I would say, among other things, that I'd rather have self-reporting by 5,000 patients than doctor observations on 20 patients.)
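A minimal sketch of the kind of analysis such a site would enable, a per-SNP case/control association test with a Bonferroni correction (the data layout here is hypothetical, not the actual proposal):

```python
import numpy as np
from scipy.stats import chi2_contingency

def associate(genotypes, phenotype, alpha=0.05):
    """Find SNPs associated with a self-reported phenotype.

    genotypes: (n_patients, n_snps) array of minor-allele counts (0/1/2)
    phenotype: (n_patients,) boolean array of questionnaire answers
    Returns indices of SNPs significant after Bonferroni correction.
    """
    n_snps = genotypes.shape[1]
    hits = []
    for j in range(n_snps):
        # 2x3 contingency table: phenotype no/yes vs. genotype 0/1/2.
        table = np.array([
            [np.sum((phenotype == p) & (genotypes[:, j] == g))
             for g in (0, 1, 2)]
            for p in (0, 1)
        ])
        # Skip tables with an empty row or column (e.g. monomorphic SNPs).
        if (table.sum(axis=0) == 0).any() or (table.sum(axis=1) == 0).any():
            continue
        _, p_value, _, _ = chi2_contingency(table)
        if p_value < alpha / n_snps:  # Bonferroni over ~900,000 tests
            hits.append(j)
    return hits
```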

The fact that patients would provide their own data should have made it possible to get around the strict privacy requirements. Unfortunately, some states, including California, are ahead of me: a patient may not get their own microarray data in CA without going through a doctor.

Also unfortunately, the NSF was so uninterested that they never even responded to my email, which has never happened before.

23andme.com is doing something a little similar. But they don't let you ask phenotype questions of the members, so it's not enough.

The talk was interesting, but I wish he'd spent less time stating that the service is great and more time explaining why it is great. I didn't really get a very good picture of how accurate it is on average or how well it compares to medical studies in general.

Awesome.

Pie in the sky fantasizing:

What I'd really love to see is much cheaper and faster chemical/bio assays, something like a blood glucose monitor that could read out hormone levels, vitamin levels, hematocrit, etc. Something easy enough for millions of people to adopt and commit to on a daily basis. Throw it all into a giant database along with each participant's genome for good measure. The personal insights individuals could gain by comparing their data to the entire database would probably have a much more profound effect on health than a few decades worth of drug development -- of course, the tool would be invaluable for drug development as well.

What I would like are implantable bio-sensors that people could use to measure as much as possible about their physiology. I would love to have a device that can graph my blood sugar levels through the day, painlessly, or tell me any vitamins that I'm running low on, or warn about chronically elevated blood pressure. There's work being done on this sort of thing, with quite a lot of funding, so I expect this to become a reality soon, probably starting with soldiers.

By the way, the usual objection to any sort of monitoring implant chip is privacy. A simple solution would be to only transmit information from the implant via short-range infrared signals (PDF), with no radio capability at all.

Blood sugar can also be measured noninvasively; see http://www.orsense.com/Glucose. There is no strong reason why their tool shouldn't be able to measure blood pressure and hemoglobin as well, once they have enough funding.

Vitamins are a bit more complicated, as they appear in smaller quantities. I'm not sure, however, that an implantable chip would do a better job of measuring vitamins. You can't easily refill chemicals in an implant; you can only transfer energy wirelessly (or burn glucose), and energy allows you to run a centrifuge or a laser.

When you use implantable chips, you won't be able to do fMRI on those patients anymore.

What I'd really love to see is much cheaper and faster chemical/bio assays, something like a blood glucose monitor that could read out hormone levels, vitamin levels, hematocrit, etc. Something easy enough for millions of people to adopt and commit to on a daily basis.

Yes, this is a beautiful vision. This would be a whole new way of doing medical science.

I'd love to see a lot more study of healthy people. I'm not convinced that what constitutes improvement in the factors that get measured is well understood.

You don't want to rely on studies in medical journals because their conclusion-drawing methodologies are haphazard.

This is an incredibly strong claim. Mainstream medical research is far from optimal but the medical advancement we've seen in the last 100 years indicates that it's certainly better than random.

Other than that I strongly agree with the article.

The first thing that comes to my hyper paranoid mind is what would happen, for example, if the lithium did seem to work for these patients. Who would then prescribe the lithium, and based on what? If it isn't supported by ordinary studies (let's assume it isn't) and it's not even remotely standard of care, then there's going to be some trouble.

So I'm a progressive doctor, and I look this data over and I say to give it a try; and the patient dies of lithium toxicity. Seems like they have a good case against me.

I do think it's a defensible maneuver (they could still sue me, though) but my worry is more about the very difficult position this puts everyone in. I'm not against this type of research, just wondering aloud.

I'd speculate that this is better suited for surveillance, or for identifying which subgroups benefit more or less from a treatment. Everyone's going to hate this idea, but I think it's quite realistic: it would be great if doctors in general uploaded such data to a common site, especially about illnesses for which novel treatments are lacking. Pharma could/would help cover the costs, of this I am sure.

Hi, welcome to LW! I've been following your blog for longer than I've been following LW!

Perhaps findings on PatientsLikeMe will trigger mainstream clinical trials to investigate treatments that may otherwise have gone unnoticed. Ultimately, though, this isn't a very good option, and it'd certainly be desirable for this kind of data to be an acceptable basis for medical decisions.

Free the data!

The reporting engine they have created is impressive, but it is always better to have more people looking at the data. Who knows what mashups hackers would make on the side?

I applaud the idea of getting a large set of medical data together through crowdsourcing. I just wish anyone could run their own statistics on the data.

Freeing data is problematic for privacy reasons.

Even if you don't have problems with releasing your own medical data, some other people do. Anything that might result in some people refusing to sign up reduces the virality of the service. Successful web 2.0 sites need high virality.

You could add an optional button to release health information to the public but I would still imagine that it would be hard to convince a sizeable number of users to switch to that option. Users are selfish ;)

Why is this post being voted down? Do you consider virality/quality tradeoffs to be distracting?

This is great. But what's FRS?

Functional Rating Scale

I'm going to forward this to my mom.

The "people" in the quoted bit are correct. This is not science; this is statistical analysis.

It is possible that an individual would be better served by this social network, though I have generally agreed that a physician who treats himself has a fool for a patient, and the more so for a layman who neglects to consult competent medical authorities. These social networks certainly cannot take the place of original research; they rely on existing observed trends.

This depends on the situation.

With a rare diagnosed condition, it is relatively easy for the patient to have more knowledge than a typical doctor. The doctor heard 15 minutes about it 20 years ago in med school, while the patient has gone through all the recent research.

Self-diagnosing is typically problematic. Self-managing chronic conditions is often quite rational.

Doctors make decisions based on a mix of theoretical knowledge and experience -- more the experience than the knowledge.

'Experience' is another word for their subjective view of the patient histories that they have observed through their career. Why not make the decision based on an empirical measure of patient histories, taken over a large random-ish sample, rather than one particular physician's subjective interpretation of only the patients he has seen?

Better yet, why not present this data to your physician and have a talk about it?

Well, you will have to be careful how you do it; my understanding is that most doctors are exasperated at people who self-diagnose based on reading things on the Internet. It's a bias, sure, but it doesn't seem to be an unreasonable one. So you wouldn't want to bring it up on your very first visit. You will need to wait until you've demonstrated your non-crank-ness.

Once you and your doctor know each other better, though, I think it would be an excellent idea to bring more data to the table. My objection is to an article entitled "Med Patient Social Networks Are Better Scientific Institutions", not one entitled "Med Patient Social Networks Are A Useful Tool In Improving Care".

These social networks certainly cannot take the place of original research

The way you phrased that implies that these social networks cannot be used for original research.

According to the article, they lack crucial features such as double-blinding. Most social networks lack the openness and data retention critical for effective peer review. It is possible to learn something from a network like the one described, but I would hesitate to call it science.

Lack of double-blinding ought to increase the false positive rate, right? But the result presented in the OP (the lithium) was a finding of a negative.

No. Lack of double-blinding will increase the false negative rate too, if the patients, doctors or examiners think that something shouldn't work or should be actively harmful. If you test a bunch of people who believe that aspartame gives them headaches or that wifi gives them nausea without blinding them you'll get garbage out as surely as if you test homeopathic remedies unblinded on a bunch of people who think homeopathic remedies cure all ills.
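A toy simulation of that failure mode, where a real benefit is washed out by unblinded patients who expect harm (all the parameters are invented):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n = 200
true_benefit = 2.0  # the drug genuinely improves scores by 2 points

control = rng.normal(50, 5, n)
treated = rng.normal(50 + true_benefit, 5, n)

# Unblinded self-report: half the treated patients expect the drug to
# be harmful and shade their reported scores down by 4 points.
bias = np.where(rng.random(n) < 0.5, -4.0, 0.0)
reported = treated + bias

print(ttest_ind(control, treated).pvalue)   # usually small: real effect
print(ttest_ind(control, reported).pvalue)  # usually large: washed out
```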

In this particular case I think it's likely the system worked because it's relatively hard to kid yourself about progressing ALS symptoms, and even with a hole in the blinding sometimes more data is just better. This is about as easy as medical problems get.

Generalising from this to the management of chronic problems seems like a major mistake. There's far, far more scope to fool oneself with placebo effects, wishful thinking, failure to compensate for regression to the mean, attachment to a hypothesis and other cognitive errors with a chronic problem.

Fair enough. I don't think the biases are symmetrical though: these people have a real and life-threatening disease, so they approach any intervention hoping strongly that it will work; hence we should expect them to yield more false positives than false negatives compared to whatever an equal medical trial would yield. On the other hand, when we're looking at the chatrooms of hypochondriacs & aspartame sufferers, I think we can expect the bias to be reversed: if even crazy people find nothing to take offense to in something, that something may well be harmless.

This yields the useful advice that when looking at any results, we should look at whether the participants have an objectively (or at least, third-party) validated problem. If they do, we should pay attention to their nulls but less attention to their claims about what helps. And vice versa. (Can we then apply this to self-experimentation? I think so, but there we already have selection bias telling us to pay little attention to exciting news like 'morning faces help my bipolar', and more attention to boring nulls like 'this did nothing for me'.)

Kind of a moot point I guess, because the fakes do not seem to be well-organized at all.

I think you're probably right in general, but I wouldn't discount the possibility that, for example, a rumour could get around the ALS community that lithium was bad, and be believed by enough people for the lack of blinding to have an effect. There was plenty of paranoia in the gay community about AZT, for example, despite the fact that they had a real and life-threatening disease, so it just doesn't always follow that people with real and life-threatening diseases are universally reliable as personal judges of effective interventions.

Similarly, if the wi-fi "allergy" crowd claimed that anti-allergy meds from a big, evil pharmaceutical company did not help them, that could be a finding that would hold up under blinding -- but then again, it might not.

I do worry that some naive Bayesians take personal anecdotes to be evidence far too quickly, without properly thinking through the odds that they would hear such anecdotes in worlds where the anecdotes were false. People are such terrible judges of medical effectiveness that in many cases I don't think the odds get far off 50% either way.