In a world where 85% of doctors can't solve simple Bayesian word problems...

In a world where only 20.9% of the reported results that one pharmaceutical company tried to investigate for development purposes fully replicated...

In a world where "p-values" are anything the author wants them to be...

...and where there are all sorts of amazing technologies and techniques which nobody at your hospital has ever heard of...

...there's also MetaMed.  Instead of just having “evidence-based medicine” in journals that doctors don't actually read, MetaMed will provide you with actual evidence-based healthcare.  Their Chairman and CTO is Jaan Tallinn (cofounder of Skype, major funder of xrisk-related endeavors), one of their major VCs is Peter Thiel (major funder of MIRI), their management includes some names LWers will find familiar, and their researchers know math and stats and in many cases have also read LessWrong.  If you have a sufficiently serious problem and can afford their service, MetaMed will (a) put someone on reading the relevant research literature who understands real statistics and can tell whether the paper is trustworthy; and (b) refer you to a cooperative doctor in their network who can carry out the therapies they find.

MetaMed was partially inspired by the case of a woman who had her fingertip chopped off, was told by the hospital that she was screwed, and then read through an awful lot of literature on her own until she found someone working on an advanced regenerative therapy that let her actually grow the fingertip back.  The idea behind MetaMed isn't just that they will scour the literature to find how the best experimentally supported treatment differs from the average wisdom - people who regularly read LW will be aware that this is often a pretty large divergence - but that they will also look for this sort of very recent technology that most hospitals won't have heard about.

This is a new service and it has to interact with the existing medical system, so they are currently expensive, starting at $5,000 for a research report.  (Keeping in mind that a basic report involves a lot of work by people who must be good at math.)  If you have a sick friend who can afford it - especially if the regular system is failing them, and they want (or you want) their next step to be more science instead of "alternative medicine" or whatever - please do refer them to MetaMed immediately.  We can’t all have nice things like this someday unless somebody pays for it while it’s still new and expensive.  And the regular healthcare system really is bad enough at science (especially in the US, but science is difficult everywhere) that there's no point in condemning anyone to it when they can afford better.

I also got my hands on a copy of MetaMed's standard list of citations that they use to support points to reporters.  What follows isn't nearly everything on MetaMed's list, just the items I found most interesting.

90% of preclinical cancer studies could not be replicated.

"It is frequently stated that it takes an average of 17 years for research evidence to reach clinical practice. Balas and Bohen, Grant, and Wratschko all estimated a time lag of 17 years measuring different points of the process." -

"The authors estimated the volume of medical literature potentially relevant to primary care published in a month and the time required for physicians trained in medical epidemiology to evaluate it for updating a clinical knowledgebase.... Average time per article was 2.89 minutes, if this outlier was excluded. Extrapolating this estimate to 7,287 articles per month, this effort would require 627.5 hours per month, or about 29 hours per weekday." 

One-third of hospital patients are harmed by their stay in the hospital, and 7% of patients are either permanently harmed or die.

(I emailed MetaMed to ask for the actual bibliography for the following citations, since that wasn't included in the copy of the list I saw.  I already recognize some of the citations having to do with Bayesian reasoning, which makes me fairly confident of the others.)

Statistical Illiteracy

Doctors often confuse sensitivity and specificity (Gigerenzer 2002); most physicians do not understand how to compute the positive predictive value of a test (Hoffrage and Gigerenzer 1998); a third overestimate benefits if they are expressed as positive risk reductions (Gigerenzer et al 2007).
Physicians think a procedure is more effective if the benefits are described as a relative risk reduction rather than as an absolute risk reduction (Naylor et al 1992).
Only 3 out of 140 reviewers of four breast cancer screening proposals noticed that all four were identical proposals with the risks represented differently (Fahey et al 1995).
60% of gynecologists do not understand what the sensitivity and specificity of a test are (Gigerenzer et al 2007).
95% of physicians overestimated the probability of breast cancer given a positive mammogram by an order of magnitude (Eddy 1982).
When physicians receive prostate cancer screening information in terms of five-year survival rates, 78% think screening is effective; when the same information is given in terms of mortality rates, 5% believe it is effective (Wegwarth et al, submitted).
Only one out of 21 obstetricians could estimate the probability that an unborn child had Down syndrome given a positive test (Bramwell, West, and Salmon 2006).
Sixteen out of twenty HIV counselors said that there was no such thing as a false positive HIV test (Gigerenzer et al 1998).
Only 3% of questions in the certification exam for the American Board of Internal Medicine cover clinical epidemiology or medical statistics, and risk communication is not addressed (Gigerenzer et al 2007).
British GPs rarely change their prescribing patterns and when they do it’s rarely in response to evidence (Armstrong et al 1996).
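The Eddy result in the list above is a straight application of Bayes' theorem. A minimal sketch, using the illustrative figures commonly quoted from Eddy's mammography problem (1% prevalence, 80% sensitivity, 9.6% false-positive rate; treat these as assumptions for illustration, not data from this post):

```python
# Hedged illustration of the Eddy (1982) base-rate problem: even with a
# fairly accurate test, most positive mammograms are false positives
# because the disease is rare.  All figures are illustrative assumptions.
prevalence = 0.01        # P(cancer) among women screened
sensitivity = 0.80       # P(positive | cancer)
false_positive = 0.096   # P(positive | no cancer)

p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive
p_cancer_given_positive = prevalence * sensitivity / p_positive

print(f"P(cancer | positive mammogram) = {p_cancer_given_positive:.1%}")
# roughly 8%, an order of magnitude below the typical physician estimate
```

Physicians who answered "around 80%" were in effect reading the sensitivity back as the posterior, ignoring the 1% base rate.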

Drug Advertising

Direct-to-consumer advertising by pharmaceutical companies, which is intended to sell drugs rather than to educate, often does not contain information about a drug's success rate (only 9% did), alternative methods of treatment (29%), behavioral changes (24%), or the treatment duration (9%) (Bell et al 2000).
Patients are more likely to request advertised drugs, and doctors to prescribe them, regardless of their misgivings (Gilbody et al 2005).

Medical Errors

44,000 to 98,000 patients are killed in US hospitals each year by documented, preventable medical errors (Kohn et al 2000).
Despite the proven effectiveness of simple checklists in reducing infections in hospitals (Pronovost et al 2006), most ICU physicians do not use them.
Simple diagnostic tools which may even ignore some data give measurably better outcomes in areas such as deciding whether to put a new admission in a coronary care bed (Green and Mehr 1997).
Tort law often actively penalizes physicians who practice evidence-based medicine instead of the medicine that is customary in their area (Monahan 2007).
Out of 175 law schools, only one requires a basic course in statistics or research methods (Faigman 1999), so many judges, jurors, and lawyers are misled by nontransparent statistics.
93% of surgeons, obstetricians, and other health care professionals at high risk for malpractice suits report practicing defensive medicine (Studdert et al 2005).

Regional Variations in Health Care

Tonsillectomies vary twelvefold between the counties in Vermont with the highest and lowest rates of the procedure (Wennberg and Gittelsohn 1973).
Fivefold variations in one-year survival from cancer across different regions have been observed (Quam and Smith 2005).
Fiftyfold variations in the number of people receiving drug treatment for dementia have been reported (Prescribing Observatory for Mental Health 2007).
Rates of certain surgical procedures vary tenfold to fifteenfold between regions (McPherson et al 1982).
Clinicians are more likely to consult their colleagues than medical journals or the library, partially explaining regional differences (Shaughnessy et al 1994).


Researchers may report only favorable trials, only report favorable data (Angell 2004), or cherry-pick data to only report favorable variables or subgroups (Rennie 1997).
Of 50 systematic reviews and meta-analyses on asthma treatment, 40 had serious or extensive flaws, including all 6 associated with industry (Jadad et al 2000).
Less high-tech knowledge and applications tend to be considered less innovative and ignored (Shi and Singh 2008).

Poor Use of Statistics In Research

Only about 7% of major-journal trials report results using transparent statistics (Nuovo, Melnikow, and Chang 2002).
Data are often reported in biased ways: for instance, benefits are often reported as relative risks (“reduces the risk by half”) and harms as absolute risks (“an increase of 5 in 1000”); absolute risks seem smaller even when the risk is the same (Gigerenzer et al 2007).
Half of trials inappropriately use significance tests for baseline comparison; two-thirds present subgroup findings, a sign of possible data fishing, often without appropriate tests for interaction (Assmann et al 2000).
One third of studies use mismatched framing, where benefits are reported one way (usually relative risk reduction, which makes them look bigger) and harms another (usually absolute risk reduction, which makes them look smaller) (Sedrakyan and Shih 2007).
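The mismatched-framing point is easy to see with numbers. A toy example (the figures are our own invention, not from any cited study): a drug that cuts deaths from 10 per 1,000 patients to 5 per 1,000 can honestly be described either way.

```python
# Same trial outcome, two framings.  The 10-per-1000 and 5-per-1000
# death rates are invented purely for illustration.
control_rate = 10 / 1000   # death rate, untreated
treated_rate = 5 / 1000    # death rate, treated

rrr = (control_rate - treated_rate) / control_rate  # relative risk reduction
arr = control_rate - treated_rate                   # absolute risk reduction
nnt = 1 / arr                                       # number needed to treat

print(f"RRR = {rrr:.0%}")   # "halves the risk of death"
print(f"ARR = {arr:.1%}")   # "saves 5 lives per 1,000 treated"
print(f"NNT = {nnt:.0f}")   # treat 200 patients to prevent one death
```

A press release quoting the 50% relative reduction and a safety sheet quoting the 0.5-percentage-point absolute harm are describing effects of identical size.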

Positive Publication Bias

Positive publication bias overstates the effects of treatment by up to one-third (Schulz et al 1995).
More than 50% of research is unpublished or unreported (Mathieu et al 2009).
In ten high-impact medical journals, only 45.5% of trials were adequately registered before testing began; of these 31% show discrepancies between outcomes measured and published (Mathieu et al 2009).
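The mechanism behind the inflation figure can be demonstrated with a toy simulation (entirely our own construction, not the Schulz et al analysis): run many small trials of a treatment with a modest true effect, "publish" only the statistically significant ones, and the published average overstates the truth.

```python
import math
import random
import statistics

random.seed(0)
TRUE_EFFECT = 0.2             # assumed true treatment effect (invented)
SE = math.sqrt(2 / 20)        # standard error: two arms of n=20, unit variance

published = []
for _ in range(10_000):       # 10,000 hypothetical small trials
    estimate = random.gauss(TRUE_EFFECT, SE)  # one trial's observed effect
    if estimate / SE > 1.96:                  # only "significant" results published
        published.append(estimate)

print(f"true effect: {TRUE_EFFECT}")
print(f"mean published effect: {statistics.mean(published):.2f}")
# the published literature overstates the true effect several-fold
```

Because a small trial can only reach significance by overshooting the true effect, the filter guarantees that the surviving estimates are biased upward.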

Pharmaceutical Company Induced Bias

Studies funded by the pharmaceutical industry are more likely to report results favorable to the sponsoring company (Lexchin et al 2003).
There is a significant association between industry sponsorship and both pro-industry outcomes and poor methodology (Bekelman and Kronmal 2008).
In manufacturer-supported trials of non-steroidal anti-inflammatory drugs, half the time the data presented did not match claims made within the article (Rochon et al 1994).
68% of US health research is funded by industry (Research!America 2008), which means that research that leads to profits for the health care industry tends to be prioritized.
71 out of 78 drugs approved by the FDA in 2002 are “me too” drugs that are more profitable because of the patent but not substantially different from existing medication (Angell 2004).
“Seeding trials” by pharmaceutical companies promote treatments instead of testing hypotheses (Hill et al 2008).
Even accurate research may be misreported by pharmaceutical company advertising, including ads in medical journals (Villanueva et al 2003).
In 92% of cases, pharmaceutical leaflets distributed to doctors have data summaries that either cannot be verified or inaccurately summarize available data (Kaiser et al 2004).

I don't plan on becoming seriously sick, but if I do, I think I'll check in with MetaMed just to make sure nobody is ignoring the research results showing that you shouldn't feed the patient rat poison.

It is fairly terrifying that the term "evidence-based medicine" exists because that implies that there are other kinds.

LessWrong is a non-evidence-based method of teaching rationality. We don't have good evidence that someone will get more rational after reading the sequences.

You can make a reasonable theoretical argument that people will get more rational. You don't have the kind of evidence that you would need for an EBM treatment. In most of the domains where we make choices in our lives, we don't follow practices that are supported by evidence from peer-reviewed trials.

You don't get a haircut from a barber who practices evidence-based barbering. Even the people who pay a lot of money for their haircuts don't. Reading scientific papers just isn't the only way to gather useful knowledge.

The term evidence-based medicine comes from a paper published in 1992.

It says:

Evidence-based medicine de-emphasizes intuition, unsystematic clinical experience, and pathophysiologic rationale as sufficient grounds for clinical decision making and stresses the examination of evidence from clinical research.

I wouldn't want someone to practice open-heart surgery on me based on his intuition, but I don't see a problem with getting a massage from someone who has read no scientific papers but lets themselves be guided by intuition and has a positive track record with other patients.

Sure, but the idea that one should explicitly go about trying to teach rationality because there are these things called biases is much younger than the idea of medicine. Doctors have had a much longer time than LessWrong to get their act together.

Doctors have had a much longer time than LessWrong to get their act together.

The idea of teaching people to think better isn't new. Aristotle also tried to teach a form of rationality. But even if the idea were radically new, why would that matter?

Why should newer ideas be subject to a lower standard of evidence? Fairness? If you want to know the truth, fairness has no place.

Let's look at another example: romantic courtship. Do you practice evidence-based courtship when you seek a fulfilling relationship with a woman? Would you say there is no other form of courtship besides evidence-based courtship?

Most humans don't practice evidence-based courtship. Sometimes courtship doesn't work out. You can blame it on the couple not being familiar with the scientific papers published on the subject of human courtship.

Nobody has shown that giving the couple those scientific papers improves their relationship chances. Nobody has shown with EBM-like evidence standards that doctors who are in touch with the scientific literature achieve better health outcomes for their patients.

That doesn't mean to me that EBM has no place, but I don't see a reason to reject any approach to increase...

I think there's some sort of rule against discussing PUA here.
Not so much a rule against it as an understanding that it consistently leads to low-quality discussion.
(Which ChristianKI didn't do. His kind of general observation isn't the kind that brings on the notorious failure mode of courtship moralizing.)
That's not exactly 100.00% true -- I once overheard a barber priding himself with the fact that someone once got laid the night after getting a haircut from him. Jokes aside, barbering is evidence-based -- given that it works at all, then barbers either have knowledge of how to do that hard-coded in their DNA (unlikely) or have learned to do that -- using evidence (even though not in a systematized way). You can immediately see that if you use this cutting technique then your client's hair will look this way. OTOH, a practitioner of non-evidence-based medicine cannot immediately see that giving a patient this substance diluted in 10^20 times as much water or sticking a needle in this particular spot or whatever will help cure the patient. (Likewise, musicians are normally evidence-based musicians to some extent, but astrologists are not evidence-based astrologists; can you find more examples?)
If you interpret evidence-based in the widest sense possible, the phrase sort of loses its meaning. Note that the very post you quote explains the intended contrast between systematic and statistical use of evidence versus intuition and traditional experience-based human learning. Besides, would you not say that astrologers figure out how to be optimally vague, avoiding being wrong while exciting their readers, much the same way musicians figure out what sounds good?
Yes, but “intuition and traditional experience based human learning” is probably much less reliable in medicine than it is in barbering, so the latter isn't a good example in a discussion about the former. :-) Something similar could be said about practitioners of alternative medicine, though.
The goal of barbering is to create haircuts that increase the attractiveness of the client to people besides the barber and the client. A barber might think: "All my clients look really great", when in reality his haircuts reduce the attractiveness of the clients.
Surely, judging someone's attractiveness using your System 1 alone is less hard than judging someone's health using your System 1 alone, for most people in most situations?
A professional barber is likely to notice a lot of things about a haircut that the average person doesn't see. It could be that he creates haircuts that look impressive to other barbers but don't look good to the average person of the opposite sex who isn't a barber. I do think that you can get a decent assessment of someone's back pain by asking them whether it has gotten better. Actually, that's even how most scientific studies that measure pain do it. They let the person rate their pain subjectively, and when the subjective rating gets better through the drug they see it as a win. For a lot of serious health issues it's easy to see when a person gets better. Most homeopaths spend more time interviewing their patients and getting a good understanding of their condition than the average mainstream doctor who takes 5 minutes per patient.
I think the barbering example is excellent: it illustrates that, while controlled experiments are more or less physics, and while physics is great, it is probably not going to bring a paradigm shift to barbering any time soon. One should not expect all domains to be equally well suited to a cut-and-dried scientific approach. Where medicine lies on this continuum of suitedness is an open question; it is probably even a misleading question, with medicine being a collection of vastly different problems. However, it is not at all obvious that simply turning up the scientificness dial is going to make things better. It is, for instance, conceivable that there are already people treating medicine as a hard science, and that the current balance of intuition and evidence in medicine reflects how effective these two approaches are. I am not trying to argue whether astrology is evidence-based or not. I am saying that the very inclusive definition of evidence-based which encompasses barbering is (a) nearly useless because it includes every possible way of doing medicine and (b) probably not the one intended by the others using the term.
Huh? What evidence are homoeopathy and crystal healing and similar (assuming that's what Qiaochu_Yuan meant by “other kinds”) based on? EDIT: Apparently not.
"Other kinds" meant "whatever mainstream medicine does that doesn't fall under the evidence-based label," not alternative medicine. I should've been clearer.
Yes, I realized that later, while reading another branch of the thread (see my edit).
What do you mean with "mainstream medicine" in that context?
What ambiguity is there in what I mean by "mainstream medicine" here?
A lot of people use the term evidence-based medicine interchangeably with mainstream medicine. What, in your opinion, is medicine that counts as mainstream medicine but doesn't count as evidence-based medicine?
That doesn't agree with my experience. Evidence-based medicine refers to a specific and recent movement within mainstream medicine, which is much older.
If a doctor today practiced medicine the exact way it was practiced in 1950, I don't think you would say that the doctor practices mainstream medicine. If you define "mainstream" by the number of people who use it, then homeopathy is probably "mainstream medicine". Even if you go by the status of the people, when the Queen uses homeopathy it's no low-status treatment. There are two reasons why homeopathy gets classified as "alternative medicine": (1) it uses an ideological framework that goes against the reductionist world view; (2) there are not enough high-quality double-blind studies to allow an institution such as Cochrane to recommend homeopathy as a treatment. The term evidence-based medicine was made up by a bunch of university professors in 1992 to describe the style of medicine that they were teaching. At the beginning the term intentionally downplayed clinical experience. Today most medical schools say that they teach evidence-based medicine, but they have weakened the definition in a way that allows clinicians to use their clinical experience while still focusing on peer-reviewed trials. If you don't practice medicine the way the universities teach it in their normal programs, then in my opinion you are practicing "alternative medicine".
There are even dozens of scientific studies that support homeopathy, summarized in a report titled "Effectiveness, Safety and Cost-Effectiveness of Homeopathy in General Practice – Summarized Health Technology Assessment" commissioned by the Swiss government. There are plenty of people out there who can explain to you why all those homeopathy studies are flawed, but on the other hand, how many double-blind controlled trials do you know of that show that barbers can create haircuts that increase someone's chances with the opposite sex? In general, though, people don't buy homeopathic medicine because they read the report of the Swiss government and believe it. They buy it based on anecdotal evidence. They hear that some friend had success with homeopathy and then they go out and buy it. The fact that you are ideologically opposed to homeopathy and crystal healing working doesn't mean that they fail to produce anecdotal evidence.
If that was the only point of barbers, then already-married people, prepubescent children, homosexuals, etc. would never go to the barber's.
That's wrong. If an acupuncturist puts needles in 10 people and 5 of them lose their back pain, then he has "unsystematic clinical experience" that provides evidence for his treatment. The core of evidence-based medicine is the belief that you shouldn't use that kind of evidence for clinical decision making, but that doctors should read medical journals that report clinical trials showing whether or not a treatment works. Actually, musicians and astrologists are very similar. Both make money by providing entertaining performances for their clients. Members of those professions who ignore evidence about what entertains their clients go out of business.
Maybe some of those 5 would have lost their pain even without needles. Whereas the barber knows what his client would have looked like without the hair cut.
Right, that's why it's unsystematic. In the Bayesian sense of the word, "I stuck a needle in this person and the amount of pain he reported went down" would have to be considered evidence that increases the Bayesian probability that your hypothesis that acupuncture helps back pain is correct. However, it's not systematic, scientific evidence. To get that kind of evidence, you would have to do systematic studies of a large number of people, give some of them acupuncture and some of them aspirin, and see what the statistical result is. I think that's what's bogging this discussion down here: the word "evidence" is being used in two different ways. If we were perfectly rational beings, we would be able to use either kind of evidence, but the problem is that the first kind (individual, unsystematic personal experiences) tends to be warped by all kinds of biases (selection bias, especially), making it hard to use in any reliable way. You use it if it's all you have, but systematic evidence is very much preferable.
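The thread's acupuncture example (5 of 10 patients improving) can be put in explicitly Bayesian terms. A sketch with invented numbers: suppose "acupuncture works" predicts a 70% response rate while "spontaneous recovery alone" predicts 50%. The likelihood ratio of the observation tells you which way it pushes:

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n trials with success rate p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Invented response rates for the two hypotheses -- assumptions, not data.
likelihood_works = binom_pmf(5, 10, 0.7)        # P(5 of 10 | acupuncture works)
likelihood_spontaneous = binom_pmf(5, 10, 0.5)  # P(5 of 10 | recovery anyway)

lr = likelihood_works / likelihood_spontaneous
print(f"likelihood ratio = {lr:.2f}")
# below 1: under these assumptions, 5-of-10 actually favors spontaneous recovery
```

Which direction the evidence points depends entirely on what the no-treatment baseline would have been, which is exactly the unsystematic-evidence problem the comment describes.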
OK, if you consider the point of astrology to be “making money”, as opposed to “predicting people's personalities and future events”, then it is evidence-based -- but then again, if you consider the point of alternative medicine to be “making money”, as opposed to “improving people's health”, then it is evidence-based as well. (But now that Qiaochu_Yuan has made clear that it's not alternative medicine that he was talking about, this is kind of moot, so I'll tap out now.)
I didn't. I advocated another goal, entertainment. I don't know that much about astrology, but I think a fair percentage of the people who pay astrologists do it for entertainment purposes. Letting someone stick needles inside you when you go to an acupuncturist is less about getting entertainment. The kind of people who like astrology often also like other personality tests that they find in magazines. People enjoy going through those tests. If an astrologer told people something about their personality that's accurate but that those people aren't willing to accept, I doubt he would stay long in business. A bit like the musician who only plays music that he himself considers to be good, but that's "too advanced" for his audience. If the musician only sees his own opinion of his work, he's no different than an astrologer who only sees whether his horoscope is good. If you call that musician "evidence-based", then the astrologer who goes after his own judgement of his work is also "evidence-based". Why does that matter to the question of whether barbers can meaningfully be said to practice evidence-based barbering?
I was claiming that barbering is more evidence-based than alternative medicine, but if alternative medicine is not what's being discussed, then even if I turned out to be right it still wouldn't be relevant.

Only in the sense that the term "pro-life" implies that there exist people opposed to life.

Opposed to all life? No. Opposed to specific, nonsentient life when weighed against the mother's choice? Yes.
pro-life is an intentional misuse of ontology.
A perusal of murder and suicide statistics - even the fact that such statistics exist - suggests the conclusion that there may, in fact, exist some people opposed to life; sometimes their own, sometimes that of others.
That's irrelevant to the point that incogn is making, though, which is that you can't make that inference from the fact that a label called "pro-life" exists because it's rhetoric. I'm willing to believe that the label "evidence-based medicine" is also rhetoric, but I don't actually know that yet; I would first have to know what doctors were doing before EBM became a thing.
And how good the followers of EBM are at actually being evidence-based, as opposed to Straw Vulcans.
Homeopathy? Crystal Therapy? Color Therapy? A quick Google search for "alternate medicine" should produce all sorts of non-evidence-based medical philosophies.
You mean like acupuncture?
Wikipedia informs me that evidence-based medicine is a movement in the health care community that really only got underway in the 90s. I am not sure I want to know what the health care community was doing before the 90s. I'm not talking about alternative medicine, I'm talking about whatever mainstream medicine was and is doing that doesn't fall under this label.
Well, until the mid-80's doctors believed that infants either a) didn't feel pain or b) wouldn't remember it anyway (mostly because of this study from the 40's), so they didn't use anesthesia for infants when performing heart surgery until someone collected evidence that babies were more likely to live through the surgery if given something to knock them out. EDIT: Removed extraneous word
Really? I notice (with some relief) that the control babies in the linked study still got anaesthesia; it's just that they got nitrous oxide instead of nitrous oxide and fentanyl.
On the bright side, some of it was just evidence-based medicine without the branding. For example, the UK Medical Research Council put randomized trials on the map in 1948 with its randomized trial of streptomycin, which had been discovered only a few years before. The massive 1954 trials of the famous Salk polio vaccine also included a randomized trial comprising over 700,000 children. (That said, the non-randomized trial was even larger; the origin of this odd, hybrid study design is an interesting bit of history.)
Quoth said Wikipedia article, in the "criticisms": "EBM applies to groups of people but this does not preclude clinicians from using their personal experience in deciding how to treat each patient. One author advises that "the knowledge gained from clinical research does not directly answer the primary clinical question of what is best for the patient at hand" and suggests that evidence-based medicine should not discount the value of clinical experience.[26] Another author stated that "the practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research".[1]" Which suggests that the precursor to EBM is a combination of Education and Intuition. Sorry if I'm not framing it terribly well - there's an intuitive category in my head for this method, but I've never really had to refer to it explicitly. It's the same technique I use to troubleshoot computer problems - I get a hunch as to what is causing it, and then proceed through a mixture of "safe, generalized advice" (try rebooting!) and "advice specific to the problem I think it is" (aha, you must not have your DNS configured correctly). If both of those fail, THEN I'll resort to actually collecting data, analyzing it, and seeing where that leads me - "have you had other problems?", "hmm, let me look up this error code..." I've generally observed this path as the default human behavior, with "call someone else" occurring when they hit the limit of their abilities.
Not a bad plan if you know the limits of your abilities and aren't trained to act confident even when you're not.
For a lot of the medical advances we had earlier in the 20th century, you didn't really need to do large-scale clinical studies to see if it was working. You gave someone an antibiotic, and they suddenly got much better. You gave people a polio vaccine, and they didn't get polio. You took someone's appendix out, and they didn't die. It was really later in the 20th century, when medicine got more and more focused on treating and preventing long-term degenerative illnesses like cancer or heart attacks or high blood pressure, that it became more vital to measure the difference in a large-scale statistical way between how effective different types of treatment were over a long period of time.
Not the best examples, although you're right about appendectomies! I nonetheless agree with the broader point that decades ago there was less need for fine-grained, systematic medical studies (you were just unlucky in your choice of examples).
IIRC, acupuncture has some limited use, probably as a combination of placebo and endorphin release. Unless you knew about those, the evidence would suggest you were on to something.
On the positive side, evidence-based medicine promotes greater measurement of patient outcomes and sharing of that information to weigh treatment options. On the negative side, it is largely about denying patients coverage and access to treatments based on officially approved disease models and preference trade-offs that ignore evidence not used in the model and overrule patient preferences. It's evidence-ignoring and patient-controlling medicine.
I think you unknowingly submitted this comment prematurely? :)
Thanks. I edited around and left that last line when I should have deleted it. All tidy now.

Once MetaMed has been paid for and done a literature search on a given item, will that information only be communicated to the individual who hired them, or will it be made more widely available?

A related question: Assuming that the information remains private (as seems to be the most viable business model) will the company attempt to place restrictions on what the clients may do with the information? That is, is the client free to publish it?

Clients are free to publish whatever they like, but we are very strict about patient confidentiality, and do not release any patient information without express written consent.

I like the idea of clients being free to publish anything... but what will you do if they misrepresent what you said, and claim they got the information from you? It could be an honest mistake (omitting part of the information that did not seem important to them, but which in fact changes the results critically), oversimplification for the sake of popularity ("5 things you should do if you have cancer" for a popular blog), or outright fraud or mental illness. For example, someone could use your services and in addition try some homeopathic treatment, and at the end they would publish your advice edited to include the recommendation for homeopathy.

So there should be a rule like: "either publish everything verbatim... or don't mention our name". (I guess you probably already have it, but I say this for the case you don't.)

I assume that means that you won't be publishing your findings stripped of the client's identifying information?
Oh, Tom is involved too! Thank you for responding to our questions. I was curious.

We won't publish anything, but clients are free to publish whatever they wish to in any manner that they wish.

Caveats: they're new; it's hard to do what they're doing; they have to look serious; and this is more valuable the more it's taken seriously.

They have really wonderful site design/marketing...except that it doesn't give me the impression that they will ever be making the world better for anyone other than their clients. Here's what I'd see as ideal:

  • They've either paid the $5k themselves (apparently a drop in the bucket of their funding) and put up one report as both a sample and proof of their intent to publish reports for everyone, or (better) gotten a client who has received a report to agree to its release.
  • This report, above, is linked to from their news section and there's a prominent search field on the news section (ok), or there's a separate reports section (better)
  • The news section has RSS (or the reports section has RSS, or both, best)

On a more profiteering viewpoint, they could offer a report for either $5k for a private report, or $3k for a public report, with a promise to charge $50 for the public report until they reach $5k (or $6k, or an internal number that isn't unreasonable) and then release it.

Most people who are seriously sick tend to get into a pretty idealist...

A patient might profit from open publishing of the report. If MetaMed starts getting a reputation for good reports, they will get read by medical experts. If an expert reads something that's wrong in a report, it would be great if there were a way for that expert to write a comment under the report. That comment could be very helpful to the patient.
I'm not sure that's the best scheme, but I'm hoping MetaMed finds some way of taking their findings public.
They're all hardcore x-risk reductionists and good people. I would be very surprised if they didn't do everything they could to help people in any way they could as soon as it made sense, and they weren't sacrificing long term goals for shorter term ones.
I suspect that later, when they have more presence in the public and expert view, they will open up new payment options to increase visibility of their reports, but only after they have employed significantly more researchers and run them through rigorous epistemic ethics training. Otherwise, there's little stopping a Big Pharma company from hiring Metamed for a $3,000 report, and then posting a biased summary of the report on their news page, along with an "APPROVED BY METAMED" sticker. Even worse if Metamed considers the "approval sticker" to be useful to spreading awareness of evidence-based medicine. The potential for corruption is just too high.

Shouldn't there be a disclosure of some sorts that MetaMed shares some sponsors with MIRI?

Simple diagnostic tools which may even ignore some data give measurably better outcomes in areas such as deciding whether to put a new admission in a coronary care bed (Green and Mehr 1997).

Better outcomes than what? Typical doctors' diagnostics, I assume?
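For context, the Green and Mehr (1997) tool quoted above is, as I recall, a fast-and-frugal decision tree rather than a statistical model: three yes/no questions asked in order, ignoring everything else. A rough sketch (the first two questions follow my recollection of the paper; the actual list of additional risk factors in the third step is collapsed here into a single placeholder boolean):

```python
# Sketch of a Green-and-Mehr-style fast-and-frugal triage tree for
# coronary care admissions. The specific risk-factor list from the paper
# is simplified to one placeholder flag -- see the original for details.

def suggest_bed(st_segment_change: bool,
                chest_pain_is_chief_complaint: bool,
                any_other_risk_factor: bool) -> str:
    """Return a suggested bed assignment from three yes/no questions."""
    if st_segment_change:
        return "coronary care unit"
    if not chest_pain_is_chief_complaint:
        return "regular nursing bed"
    if any_other_risk_factor:
        return "coronary care unit"
    return "regular nursing bed"
```

The striking finding was that this kind of deliberately information-poor rule matched or beat both physicians' unaided judgment and a logistic regression on the same task.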

Shouldn't there be a disclosure of some sorts that MetaMed shares some sponsors with MIRI?

I thought that was obvious by listing Jaan Tallinn as an x-risk funder and Peter Thiel, but yes, you're very correct that this should be explicitly stated on general principles. Will edit.

Obvious to LW readers, perhaps, but this is the kind of article that would be good to share! Thanks for adding it.
Why? That doesn't sound like much of a conflict of interest. Am I missing something? If Metamed sponsored MIRI, that would definitely be an issue.

"Hey, MIRI folks, we're giving you a lot of money, how about you say a couple of nice words about this other company of ours?"

"Hey, our sponsors are funding another company, maybe if we helped promote that company and it ended up doing well, our sponsors would have more money to give us."

"Hey, our sponsors are funding another company, maybe if we helped promote that company our sponsors would be grateful and give us some extra money."

For the sake of humanity, cute kittens, whatever it takes to get past your qualms about this being advertising...

Please promote this immediately to the front page so it can get as much attention as possible.

I'm overall not impressed - looking at their reports, what do they offer that UpToDate doesn't? Sure, they advertise to patients, and UpToDate is aimed at institutions - but most hospitals I'm familiar with (and hence almost any specialist physician) already have access to UpToDate. Also, in general, I'm willing to bet most doctors are in a better position to digest a research report than the average patient.

Sure, it's a good idea, but it's already being done in a very comprehensive fashion by a company that already has something like 90+% market penetration in academic hospitals. What is MetaMed's comparative advantage?

I get the impression that Metamed also figures out likely diagnoses, which would be a pre-requisite for using Up to Date.

That seems a very tricky proposition - for $5k you get a team of medical students and PhD students doing a literature search for 24 hours. Without a diagnosis to start with, and without an ability to order and receive test results (even if you suggest a test, will the results be back in 24 hours?), my prior would be that diagnosis is extremely unlikely. Without a diagnosis, I'm not even sure how informative such a short literature search can be.

In the case of symptoms-just-started/no diagnosis, doesn't an experienced doctor at a hospital (with all the support staff a hospital implies - labs, etc.) have a pretty high competitive advantage? A priori, an experienced physician with diagnostic equipment and several days should outperform some medical students with journal access and 24 hours.

Also, throughout this whole thread I find myself shilling for the status quo, but I should make it clear - hospitals scare the hell out of me. I've done statistical work for internal performance reviews for a large carrier in Southern California and found tons of alarming medical mistakes. I just don't see how MetaMed solves any of the actual problems. Most mistakes are of the form transfer orders go t...

This matches my feeling that a lot of what's wrong with (American?) medicine is the result of patients being viewed as low status. What you've been seeing is what can go wrong at the hospital. I've heard a fair amount of anecdotes about sloppy diagnosis-- patients' symptoms being ignored for months or years of doctor visits. My impression is that doctors who listen and think are not terribly common.
A typical family doctor's appointment is scheduled every 15 min where I am (except for annual checkups). This includes the time between patients for any necessary paperwork. So, not much you can do for people with rare symptoms in that setting. This is where MetaMed can help, since they spend 100 to 1000 times more time than that on each case and are looking specifically for edge cases and individualized treatment.
I agree that fifteen minutes minus paperwork is shockingly short. Still, there are doctors who do reasonably well at paying attention. Most of my information is from the fat acceptance community, where there are a great many stories about doctors who just tell fat patients to lose weight*, regardless of symptoms. The typical stories seem to be either "I had to go to three or four doctors to find one who would listen" or "I must be lucky, I have a great doctor". I can't derive a strong opinion about the proportion of attentive doctors from this, though I wouldn't be surprised to find that it's under half.

*I've also seen a few stories from unusually thin people who were simply told to gain weight, and one from a man who (as far as I could tell) was lean and muscular, but was told to lose weight by a doctor who literally only looked at his BMI.
International list of fat-friendly medical professionals
Sure - but if you have some rare symptom, any decent family doctor should say "go see a specialist" and refer you. You certainly aren't going to contact MetaMed every time you get sick, and for chronic conditions, a specialist (with journal and UpToDate access) is going to be the managing physician. Anything other than routine sniffles, vaccinations and check-ups, and you have probably exceeded your family doctor's expertise. The big problem for misdiagnosis at the family-med level is the horde of relatively rare diseases with common symptoms, but this is a very hard problem to solve. Having spent some time dealing with this as a statistical problem: even if you have a rare cluster of common symptoms, it's usually the case that you are more likely to have a rare presentation of a common disease than a rare disease.
I think MetaMed is intended to supplement the treatment advice you'd otherwise receive from specialists.
Presumably what you're paying for is for someone smarter than you to do the literature research for you; if I'm reading uptodate's product page correctly, they make the information available but it's up to you (or your physician) to sort through it and figure out what applies to you.

UpToDate provides a summary of research by disease, i.e. for this disease, these treatments are recommended because of studies A, B and C and physiological facts D, E and F; there are contraindications and risks from these treatments because of X, Y and Z. Unfortunately, I can't reproduce one of their reports here, but it's not just a huge literature dump - it's summarized, and treatment options are graded.

Looking at the metamed concierge report on Gout (linked to elsewhere in this thread) its formatting appears to be very much like an UpToDate report- the most recent literature is digested and summarized at a decently high level, but it doesn't strike me as better than (or even different from!) the UpToDate recommendations. Given that 90% of academic hospitals already have paid for UpToDate, and honestly in most cases it will be better for the physician to interpret the report, I can't see very much for metamed to bring to the table.

Also worth pointing out - the people who write the summaries for UpToDate are most often researchers in the field of the illness. As near as I can tell from their webpage, MetaMed's researchers are often medical students or non-medical PhD students (the point being that with MetaMed you are paying for something general called "expertise", and in many cases not actual field-relevant medical expertise).

What's the price difference between MetaMed and going out of your way to go to an academic hospital? Am I wrong in thinking most hospitals are not academic hospitals?
Probably depends on your insurance (i.e. if you are with an HMO, you'll be locked in to the network unless you get a referral). Outside of HMOs, I'm not aware of an insurance that has a copay difference between going to an academic or community hospital. If you go to a community hospital with something complicated, you'll almost certainly end up transferred to an academic referral center anyway.

Only one out of 21 obstetricians could estimate the probability that an unborn child had Down syndrome given a positive test

Say the doctor knows false positive/negative rates of the test, and also the overall probability of Down syndrome, but doesn't know how to combine these into the probability of Down syndrome given a positive test result.

Okay, so to the extent that it's possible, why doesn't someone just tell them the results of the Bayesian updating in advance? I assume a doctor is told the false positive and negative rates of a test. But what matters to the doctor is the probability that the patient has the disorder. So instead of telling a doctor, "Here is the probability that a patient with Down syndrome will have a negative test result," why not just directly say, "When the test is positive, here is the probability of the patient actually having Down syndrome. When the test is negative, here is the probability that the patient has Down syndrome."

Bayes theorem is a general tool that would let doctors manipulate the information they're given into the probabilities that they care about. But am I crazy to think that we could circumvent much of their n...

This stops working in the case where some of the people upstream can't be trusted. Consider the following statement: "The previous test, if you have a positive result, means that the baby has a 25% chance of having Down syndrome, according to the manufacturer. But my patented test will return a positive result in 99% of cases in which the baby has Down syndrome."
"False positive rate" and "False negative rate" have strict definitions and presumably it is standard to report these numbers as an outcome of clinical trials. Could we similarly define a rigid term to describe the probability of having a disorder given a positive test result, and require that to be reported right along with false positive rates? Seems worth an honest try, though it might be too hard to define it in such a way as to forestall weaseling.
If I understand the relevant Wikipedia page correctly: the term you are requesting is positive predictive value (PPV), and negative predictive value (NPV) is the term for the probability of not having a disorder given a negative test result. It also points out that these are not solely dependent on the test; they also require a prevalence percentage. That being said, you could require each test to be reported with multiple different prevalence percentages: for instance, using the above example of Down syndrome, you could report the results using the prevalence of Down syndrome at several different maternal ages (since the prevalence of Down syndrome is significantly related to maternal age).
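The maternal-age idea above is easy to sketch. A minimal example with made-up numbers (nothing here reflects a real Down syndrome screen - the sensitivity, specificity, and prevalences are all illustrative):

```python
# PPV/NPV as functions of a test's sensitivity and specificity and the
# population prevalence. All numbers below are illustrative placeholders,
# not figures for any real screening test.

def ppv(sensitivity, specificity, prevalence):
    """P(disorder | positive result), via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

def npv(sensitivity, specificity, prevalence):
    """P(no disorder | negative result)."""
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)

# The same hypothetical test (90% sensitive, 95% specific) at three
# different prevalences, standing in for different maternal ages:
for prevalence in (1 / 1500, 1 / 350, 1 / 100):
    print(f"prevalence {prevalence:.4f} -> PPV {ppv(0.90, 0.95, prevalence):.3f}")
```

Even this toy version shows why reporting a single PPV is misleading: the same test's PPV swings by an order of magnitude across plausible prevalences.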
Thanks, PPV is exactly what I'm after. The alternative to giving a doctor positive and negative predictive values for each maternal age is to give false positive and negative rates for the test plus the prevalence rate for each maternal age - not much difference in terms of information load. One concern I didn't consider before is that many doctors would probably resist reporting PPVs to their patients, because they are currently recommending tests that would look ridiculous if they actually admitted the PPVs (e.g. breast cancer screening).
Another alternative is to provide doctors with a simple, easy-to-use program called Dr. Bayes. The program would take as input: (a) the doctor's initial estimate of the chance the patient has the disorder (taking into account whatever the doctor knows about various risk factors), and (b) the false positive and false negative rates of a test. The program would spit out the probability of having the disorder given positive and negative test results. Obviously there are already tools on the internet that will implement Bayes theorem for you. But maybe it could be sold to doctors if the interface were designed specifically for them. I could see a smart person in charge of a hospital telling all the doctors at the hospital to incorporate such a program into their diagnostic procedure. Failing this, another possibility is to solicit the relevant information from the doctor and then do the math yourself (being sure to get the doctor's prior before any test results are in). Not every doctor would be cooperative... but come to think of it, refusal to give you a number is a good sign that maybe you shouldn't trust that particular doctor anyway.
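The core of such a hypothetical "Dr. Bayes" program (the name and interface are invented in the comment above) is just a few lines; a sketch with illustrative numbers:

```python
# Hypothetical "Dr. Bayes" core: turn the doctor's prior and the test's
# error rates into the post-test probabilities the doctor cares about.

def dr_bayes(prior, false_positive_rate, false_negative_rate):
    """Return (P(disorder | positive), P(disorder | negative))."""
    sensitivity = 1 - false_negative_rate                  # P(+ | disorder)
    p_positive = (sensitivity * prior
                  + false_positive_rate * (1 - prior))     # P(+) overall
    post_positive = sensitivity * prior / p_positive
    post_negative = false_negative_rate * prior / (1 - p_positive)
    return post_positive, post_negative

# Illustrative inputs: 1% prior, 5% false positives, 10% false negatives.
pos, neg = dr_bayes(prior=0.01, false_positive_rate=0.05,
                    false_negative_rate=0.10)
print(f"P(disorder | +) = {pos:.3f}, P(disorder | -) = {neg:.4f}")
```

With those inputs the positive result moves a 1% prior to only about 15% - exactly the kind of counterintuitive answer the obstetricians in the quoted study got wrong.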
Because then they would be assuming they had all relevant prior information for that particular patient. They don't. For example, age of mother, age of father, their genes, when they've lived where, what chemicals they've been exposed to, etc., are many factors the manufacturer has no knowledge of, but the doctor might. Naturally, it would be helpful for the company to make an online diagnostic model of all known relevant factors available online, updated as new information comes in, but given the regulatory and legal climate (at least here in the US), something so sensible is likely completely infeasible.
The incidence of the disease may be different for different populations while the test manufacturer may not know where and on which patients the test is going to be used. Also, serious diseases are often tested multiple times by different tests. What would a Bayes-ignorant doctor do with positives from tests A and B which are accompanied with information: "when test A is positive, the patient has 90% chance of having the syndrome" and "when test B is positive, the patient has 75% chance of having the syndrome"? I'd guess most statistically illiterate doctors would go with the estimate of the test done last.
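If the two tests really are conditionally independent given disease status (a big if for related assays), the Bayes-literate move is to update sequentially: the posterior after test A becomes the prior for test B. A sketch with made-up numbers:

```python
# Sequential updating on two positive results, assuming the tests are
# conditionally independent given disease status. Numbers are illustrative.

def update_on_positive(prior, sensitivity, specificity):
    """Posterior P(disease) after one positive result."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

prior = 0.01
after_a = update_on_positive(prior, sensitivity=0.90, specificity=0.95)
after_b = update_on_positive(after_a, sensitivity=0.75, specificity=0.90)
print(f"prior {prior:.3f} -> after A {after_a:.3f} -> after A and B {after_b:.3f}")
```

Note the combined posterior is higher than either single-test figure - neither "take the last test's number" nor "average the two" gets this right.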
Is anyone else attempting to do this? Is there any data on MetaMed's success rate (other than the fact that they went under)?

Am I correct in thinking this is a continuation of the vanished company Personalized Medicine?

What's the story there?

Companies often go by one name pre-launch, then adopt a new one so they can have a 'clean slate', publicity-wise.

you shouldn't feed the patient rat poison

Are you referring to warfarin here or am I imagining things?

Eliezer Yudkowsky:
(Blinks.) Hadn't thought of that. Actually, from what I understand, the status of warfarin is mostly okay now because they test for unusual sensitivity to it before they administer it?
Prescription warfarin -- actually they might use related molecules these days with the same basic mechanism of action -- kills tens of thousands per year: they die from loss of blood because the warfarin-like molecules have inhibited the clotting mechanism more than intended. So, for example, someone I was friends with died (in his sleep) this way. Nevertheless, warfarin-like molecules have positive expected global utility, because clots cause so much negative utility. So, for example, I was on it for about 6 years. You're supposed to get a blood test every 2 weeks for as long as you're on it. Since Alex was a grad student in pharmacy, he'll probably correct any untruths in the above in the unlikely event there are any. ADDED: "Unusual sensitivity" is the wrong way to describe it.
Of note, 23andme tests for genetically determined warfarin tolerance. I can copypasta their references on demand.
This is actually not relevant as warfarin dosage is determined by regular testing and dose adjustment. Your inborn metabolic rate is a very small effect compared to, for example, dietary preferences. (for those who are unfamiliar with the agent, warfarin antagonises the effects of vitamin K and so must be adjusted against dietary intake) Unfortunately there are many people in the health sector offering tests that, whilst factually correct, are irrelevant to a patient's care.
Someone tell the NHS, which is sponsoring a large trial to explore just that question. The influence of the genotype varies from "typical sensitivity" to "may require greatly decreased warfarin dose". A range that is all but irrelevant, regular testing or no (think for example of the initial dosage).
There is the option of being tested for polymorphisms of 1-2 of the most relevant metabolic enzymes, which account for some of the bleeding risk. My impression is that genotyping is not routinely done. Also, warfarin is risky in normal metabolizers (many many drug/food/disease interactions). I agree with Richard on the overall cost-benefit. (ETA: Though there are new expensive drugs approved for some of the same indications -- rivaroxaban and dabigatran -- that show some promise of being safer.)

This is a new service and it has to interact with the existing medical system, so they are currently expensive, starting at $5,000 for a research report. (Keeping in mind that a basic report involves a lot of work by people who must be good at math.) If you have a sick friend who can afford it - especially if the regular system is failing them, and they want (or you want) their next step to be more science instead of "alternative medicine" or whatever - please do refer them to MetaMed immediately.

A friend of mine suffers from debilitating effects of fibromyalgia, to the degree that she had to quit her job. She has tried all possible conventional and alternative medicine, with little success. She would certainly be prepared to pay $5000 or more for a near-certain relief, but not for yet another literature search of undetermined benefit. I'm guessing she is not the target audience for MetaMed?


Well, according to their FAQ, they offer a trial service. So your friend would not have to continue to a larger report if the trial seemed to be indicating low benefits of further research.

Can I try MetaMed before committing to a large purchase? Yes. If your case has a larger budget, we can start with a smaller, trial report to ensure quality of service, and confirm that MetaMed is the right choice for you.

And they also offer Financial aid - There is almost no information about this posted, other than that it exists. I guess you would have to call to determine more about how it worked. If your friend did qualify, that would be a substantial boon:

Once you have consulted with our medical team, if you need financial aid to help with the cost of your MetaMed service, we will email you an application right away.

And it looks like overall there are at least three tiers of potential research:

And here are examples of reports at each tier: (Standard) (Plus)

Those results do not impress me as to the value of their research. There is nothing there that isn't covered by Up-to-Date, and every hospital I've done stats work for (several, all over the country, both community and academic) has provided their physicians with Up-to-Date access.

Your best bet (apparently) would be simply asking your physician for the up-to-date report for your diagnosis. If your physician does not have up-to-date access, get a referral to the nearest academic center.

I'm unsure why I've been voted down here - if shminux's friend wants a version of a report similar in quality to what MetaMed can provide, she can ask her physician whether the physician has Up-to-Date (or another research aggregator) access. If the physician doesn't, that's potentially a measure of quality (which can otherwise be hard to judge), and she should get a referral to an academic medical center, which will definitely have something like Up-to-Date available. This seems to me like decently practical advice for those who have insurance but don't have the money for MetaMed. I'm still relatively new, so I'm requesting explicit feedback to improve the quality of my posts.

Edited to Add: According to the company, > 90% of American academic hospitals already have subscribed to Up-to-Date, so getting a referral to an academic center has a great chance of getting you to someone with already-paid-for-access to this sort of report.

I'm glad you linked competition, so we can compare similar industries for reference (I asked a very similar thing about Watson.) One item that stood out on their site: And I was also able to find their methods for that grading as well, in case anyone wants to compare Meta Research Methods across Meta Research Organizations:
Sleep apnea seems like something regular doctors should be able to figure out, and I know gout has at least been known for a long time. Are these meant to be examples of what the reports look like more than examples of how Metamed can find obscure treatments? $5000 seems a bit much for someone to be told 'get checked for sleep apnea and lose weight'.
If she's tried all possible conventional and alternative medicine, MetaMed will not help. If she missed something (1) obscure but promising, (2) cutting edge and promising, or (3) unique about her particular body that makes an unusual treatment promising; MetaMed might be able to help. So, if $5000 is what certain relief is worth to her, MetaMed isn't for her. If certain relief is worth $10000 to her, she should estimate how likely it is that paid, reasonably savvy researchers can find something she's missed; and go for it if she feels it's over 50% likely.
No, it's much worse than that: "how likely it is that paid, reasonably savvy researchers can find something she's missed" AND that it has a near-certainty of helping. Current prior: nothing has helped so far, so the odds that something she missed would end up being useful are pretty low. If the estimate of helpfulness is, say, 1% (that's pretty optimistic), and the odds that MetaMed will find something new are 50%, then certain relief has to be worth $100k.

You meant $1 mil, right?

Right, sorry.
So about what do you think it IS worth? FYI, I think, based on experience with people who have tried everything, that a 1% chance of finding something is unrealistically low. 20% with the first $5K and a further 30% with the next $35K would fit my past experience.
Define tried everything? Your prior is that there is a 1/5 chance a handful of researchers can find something helpful in 24 hours that isn't listed in something like an Up-To-Date report on the diagnosis (a decent definition of 'everything')? Does Metamed do patient tracking to see if their recommendations lead to relief? Or do they deliver a report and move on?
From the body of the main post (source): Granted, I know very little about Up-To-Date, but I would be surprised if they completely eliminated that 17-year lag, especially in the more obscure conditions. They do, after all, have to cover all the conditions, and their return on investment is obviously going to be much higher on common conditions than on obscure ones. In fact, if they put out a fantastically detailed report on Stage III Boneitis (fictional) and nobody suffers a case that year, they've wasted their money. I strongly suspect Up-To-Date is aware of this, though I obviously have no way of knowing whether it affects their decisions. MetaMed's offer is, as far as I understand it, "pay us 5k and we'll eliminate the 17-year lag for your particular case". This lets them plausibly offer value that Up-To-Date can't, in some cases. Disclaimer: I am not associated with MetaMed, but I do think they're cool.
Have you actually read the MetaMed sample reports, and what do you think MetaMed actually does? As far as I can tell, their core product is to have a team of medical and PhD students do a literature search for about one working day (compare to Up-to-Date, where actual researchers in various fields write the reports and clinicians edit the treatment plans). This seems highly unlikely to move that 17-year lag even a little bit. I have no horse in this race, but I have worked as a statistician for hospital researchers and for health insurance companies. I just happen to think MetaMed's boosters here are dramatically underestimating the availability of evidence-based-medicine literature surveys in the clinical hospital setting.
This assumes she's good at sifting through the massive expanse of information available, and good at implementing the suggestions therein. These are two extremely questionable assumptions. Knowing nothing about her except that she has severe fibromyalgia and that she's the friend of a frequent poster on LW--two factors that hardly seem very relevant, and I'd put the likelihood of those two assumptions holding up to be very low. Quite bluntly, most people have no idea what's really out there. The Internet is a vast space.
No, because -$100k is much more than 20 times worse than -$5k.
Saying 1% is extremely optimistic... about the quality and competence of the medical profession as encountered by average people with mildly unusual conditions.
That assumes they find only one thing that she hasn't tried. I have a sister with fibro and some cursory googling on my part suggests that there are so many theories out about what's wrong and what can bring relief that it's difficult to believe that a systematic search by good researchers will only turn up a single thing that she hasn't tried. That said, you're right that 1% helpfulness is probably pretty optimistic.
All possible conventional and alternative medicine? I doubt it. This is a mind-destroying sentence if I ever saw one. I'd suggest re-wording it to "she's tried a ton of different approaches from both conventional and alternative medicine".

First thing to be said: fibromyalgia is one of those health issues where there is no widely adopted hypothesis for the base mechanism at work. This means, quite simply, that there is little hope of targeting it specifically. It's not a case where e.g. your lips are chapped and your knuckles are splitting, and one of the first places you look is hydration--more water, more trace minerals, etc. Instead it's a health issue where you have nothing to target, and your only real hope is to do whatever you can to improve your general health, and hope the yet-to-be-discovered underlying cause is taken out by fortunate accident.

Look to the other symptoms. What other symptoms does she have? It doesn't matter whether they're considered to be related. Constipation, headaches, splitting nails, PMS, dry skin, cold extremities, dandruff, frequent colds, dizziness upon standing too quickly, acne... anything at all. Note it, target it, fix it. Keep doing this for years. Make a checklist. Anything to be considered a symptom: notice it, treat it, move on. Do this for a long enough time, and either the fibromyalgia will go away or get better, or it won't. But at least you tried, and believe me: her life will be better either way. Well, unless she doesn't like hard work.

Potential leads I found through a few brief Google searches:
Not sure why the parent is upvoted so much. Trivial and rather useless advice, some platitudes, a few rather suspect google hits (paleohacks? really?), and a veiled insult "unless she doesn't like hard work".
I'm surprised as well. I expected to be downvoted to -2 or so pretty quickly, and stay around there. As for your disagreements, I should stress that what I said is perhaps the absolute most important thing for the average person with a health issue like that to hear. All too many people get hung up on trying to target the problem specifically, when they're dealing with an issue where doing so is not practical. Day after day, they ask, "What causes fibromyalgia? What are the new treatments suggested for it?" They remain fixated on these questions, while they sweep all sorts of other symptoms under the rug--random symptoms like headaches or splitting nails, which may be coming from the same source. As for the Google hits, I'm not sure why you're calling them suspect. Jon Barron is one of the best alternative health writers out there, the Weston A. Price Foundation has a huge following, PaleoHacks is perhaps the best forum on paleo (which is a diet and lifestyle with a massive following), and the other link is a blog that I've seen cited a bunch of times in paleo circles as being someone who is less likely than average to fall for various forms of silliness. Is this enough evidence to suggest you should read the links and take them seriously? No idea. They have a lot of links within them though. My goal was to as quickly as possible find some articles that put the conditions for 'tab explosion' in place in a way I thought would be beneficial. Generally when conventional medicine doesn't have the answer, the best place to look is where people are talking about paleo. Even stereotypically non-paleo things like raw vegan juicing, such as the Gerson Diet, will come up in paleo circles--quite simply because it seems to work.
I recommend looking into low-dose naltrexone (LDN). Cheap, safe, and with reported success for fibromyalgia (I haven't looked into that use in particular). It's generally appropriate for pain issues, as it is an opioid receptor antagonist; overnight use is modeled (and perhaps verified) as upregulating opioid receptors, and thereby providing pain relief, during the day. I believe D-phenylalanine limits the breakdown of opioids, and would be another cheap, safe, and effective addition to this treatment. Also, the stimulation of GnRH release is likely generally helpful in people past 30. Recent small study out of Stanford on LDN for fibromyalgia: If you're interested, message me and I'll tell you where I get it cheap through a compounding pharmacy in NYC.
I have glanced at the abstract, and the study appears to be deeply flawed. They see significant self-reported pain reduction in about 30% of the patients vs 20% for placebo. Whoa, big deal. What are the odds that another study would replicate these results? Moreover, they did not compare it with the mainline painkillers like acetaminophen or ibuprofen, or anything else cheap currently on the market, or with the classic fibro drugs, like pregabalin. To sum up, there is zero reason to try it specifically, except maybe as one of the many random things to try in desperation.
That's not how I read the abstract. It's not % of patients, but % of pain reduction, whatever that means. I assume they're referring to some self-reported numeric pain scale. The percentage of responders for "significant pain reduction" was 32% naltrexone vs 11% placebo, which strikes me as significant. If I had serious pain, and someone offered me a "widely available, inexpensive, safe, and well-tolerated" treatment with a 30% chance of "significant pain reduction", I'd be all over it. Your mileage may vary. Really? "Zero reason"? So you predict equal efficacy to a random treatment, such as spinning around and squawking like a chicken? Plenty of reasons. You don't have to like them or know of them. I wasn't attempting or claiming to prove anything, just trying to point you to some information I thought would be helpful. I hope she finds something.
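For what it's worth, the 32%-vs-11% dispute above is easy to sanity-check. Assuming a hypothetical arm size of 28 patients each (chosen so the counts 9/28 and 3/28 reproduce the quoted percentages; the actual Stanford study was a small crossover, so these numbers are purely illustrative), a pooled two-proportion z-test can be sketched in a few lines:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF, via erf
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Illustrative counts only: ~32% vs ~11% responders, 28 patients per arm
z, p = two_proportion_z(9, 28, 3, 28)
print(round(z, 2), round(p, 3))  # z ≈ 1.95, p ≈ 0.051
```

With samples that small, the quoted effect sits right on the p ≈ 0.05 boundary, which is consistent with both readings above: the effect size is large, but the odds of a clean replication are genuinely uncertain.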
I've read that some of the pain in fibromyalgia typically comes from trigger points; has she researched those?

I am under the impression that IBM's Watson is being tested in a few hospitals for something which seems at least somewhat similar to MetaMed, but I don't know enough about either to judge well. Sample link to what I am referring to:

Is anyone familiar enough with both MetaMed and Watson to help me compare and contrast the support provided by the two of them?

From what I understand, Watson is more supposed to do machine learning and question answering in order to do something like make medical diagnoses based on the literature.

MetaMed tries to evaluate the evidence itself, in order to come up with models for treatment for a patient that are based on good data and an understanding of their personal health.

They both involve reviewing literature, but MetaMed is actually trying to ignore and discard parts of the literature that aren't statistically/logically valid.

My view: MetaMed is designed to extract the maximal amount of information out of the medical research community that exists today. Much of their value-add involves 'meta' evidence that is difficult for others to collect or interpret. (A doctor may be skilled at understanding how a part of the body works, but not how the medical research community actually works.) If you have a condition that is serious, rare, or strange enough that investing $5k in making medical attention more effective seems like a good idea, then you should talk to MetaMed. MetaMed is in no way a substitute for doctors; it's a way to find which doctors you should be talking to, and about what.

Watson can be a substitute for doctors. The key enabler for Watson is massive amounts of data on patients, and the statistical knowledge to make good use of that data. One of the things to remember here is that expert diagnosis systems have been around for a long time, but if you're expert enough to prepare the relevant information for the computer, you're probably expert enough to make an okay guess yourself, at which point using the computer doesn't seem very high priority. Eventually, Watson will enable patients and nurses to input most of the necessary information using natural language. It doesn't look like Watson is a substitute for medical research, but rather a complement to it: if you have all the patient data together, you can build great models, and great models allow for superior discoveries. (Watson might eventually be able to automate parts of the hypothesis-generating and testing aspects of medical research, but I expect humans to have a strong to moderate comparative advantage here for at least two decades.)

The short version: MetaMed makes better use of existing evidence than anyone else; Watson will generate a river of new evidence that will dramatically alter all parts of medicine.
[Emphasis mine] Dear Bayes, I hope not! I'd hope there's much more precise info that could be input instead.
The question is not what's most useful for the system, but what's most useful for the user.

According to their site Jaan Tallinn is not the CEO but chairman of the board. Zvi Mowshowitz is the CEO.

Wow -- that is former MTG pro Zvi, one of the best innovators in the game during his time. Awesome to see him involved in something like this.
Jaan is also the CTO, I'm not sure if that's on the website.

Have enough people at MetaMed been influenced sufficiently by (meatspace) LessWrong, or do they think 'similarly enough' to LW rationality, that we should precommit to updating by prespecified amounts on the effectiveness of LW rationality in response to its successes and failures?

At a first glance, I'm not sure humans can update by prespecified amounts, much less prespecified amounts of the right quantity in this case: something like >95% of all startups fail for various reasons, so even if LW-think could double the standard odds (let's not dicker around with merely increasing effectiveness by 50% or something, let's go all the way to +100%!), you're trying to see the difference between... a 5% success rate and a 10% success rate. One observation just isn't going to count for much here.
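The 5%-vs-10% comparison can be made concrete as a likelihood-ratio calculation. A minimal sketch, using the hypothetical rates from the comment above (not real base-rate estimates):

```python
# One startup outcome treated as a single Bernoulli observation.
p_base = 0.05      # hypothetical startup success rate without LW-style rationality
p_boosted = 0.10   # hypothetical rate if rationality "doubles the odds"

# Bayes factors (likelihood ratios) for the two possible observations
lr_success = p_boosted / p_base              # if MetaMed succeeds
lr_failure = (1 - p_boosted) / (1 - p_base)  # if MetaMed fails

# Starting from even prior odds, one failure barely moves you:
posterior_odds_after_failure = 1.0 * lr_failure

print(lr_success, round(lr_failure, 3))  # 2.0 0.947
```

So a single success at most doubles your odds, while a single failure multiplies them by about 0.95; as the comment says, one observation just isn't going to count for much.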

Definitely, though others must decide the update size.
Interesting question! Since it's an especially interesting question for those not fully in the in-crowd, I thought it might be worth rephrasing in less technical language: Is MetaMed composed of LessWrong folks, or significantly influenced by LessWrong folks or that style of thinking? If so, this sounds like a great test of the real-world efficacy of LessWrong ideas. In other words, if MetaMed succeeds, that's some powerful evidence that this rationality shit works! (And to be intellectually honest, we have to also precommit to admitting that -- should MetaMed fail -- it's evidence that it doesn't.) PS: Since Michael Vassar is involved, it's safe to say the answer to the first part is yes!
But, either way, not much evidence at all.

Is there a postmortem somewhere of why this didn't work? Ah, I see there is.

Austin Chen: Thank you for posting this; I found it very helpful in clarifying some things with my current startup. And thanks to Zvi (presumably) for writing this too!
Daniel Kokotajlo: I think Sarah Constantin is the author, not Zvi.

It's spam, but it's our spam. Upvoted. (I don't mean I work for MetaMed; I just support the Cause and the things/community supporting it.)

Upvoted, but I'm a bit confused as to what we're trying to refer to with "spam".

If by spam we mean advertising, yes. Definitely.

If by spam we mean undesirable messaging that lowers the quality of the site, then I would think that this is very much not spam.

Some people (myself included) use "spam" to refer to any kind of advertising in a public setting, e.g. you might preface an email sent out to multiple mailing lists as "sorry for the spam, guys, but..." even if it's a valuable and high-quality email. The connotation, to me, is mildly self-deprecating rather than strictly negative.

If this startup was not associated with MIRI I would downvote it; there are lots of great startups but this is not the place to advertise them.
It's medicine, done rationally. This is a site about rationality. The relevance seems clear regardless of its origin.
A lot of businesses could have "done rationally" appended to them. MetaMed is "medicine, done rationally" (using statistics). Google is "search, done rationally" (with statistics). The only reason medicine stands out is due to the rather poor baseline.
Alternative words to "only" include "valid" and "sufficient". Your example doesn't support your intended conclusion. In a world with irrational and often unhelpful search engines and an unknown, newly formed "Google", it would be entirely appropriate to make people aware of it, in a similar post to this one.

Seeing these statistics has got me thinking.

I've checked the undergraduate course requirements at my local university's medical faculty, and there's nothing listed for probability and statistics. I'm considering setting up an appointment with somebody about this, assuming doctors not being able to interpret test results properly is a serious problem.

Would this be worth it, or am I wasting time?

Eliezer Yudkowsky: I hate to say it, but my guess is that you're wasting time unless your universe^H^H^H university has unusually good undergraduate statistics courses.
I don't think it's a waste of time. If you pay attention in your introductory courses, you'll learn a good chunk of how to abuse NHST and what the criticisms of it mean. I have learned very little Bayesian statistics, but for trying to understand the very large existing medical/psychological research corpus, I have never regretted focusing my reading on frequentist material.
Eliezer Yudkowsky: I defer to your superior domain knowledge of universities.
You don't have to; you can see CMU's "Probability & Statistics" for yourself, for example.
Oh, not for me -- I'm doing CS -- but it seems like we could get very large returns in hospital performance for the effort expended in teaching med students proper stats. I'm not sure what to expect here, except that at best they'll flat-out say that the program is difficult enough as it is, and at worst shrug with some kind of vague "corporate-representative-being-questioned" answer. In my wildest dreams they'd come up with some newfangled "Life Stats" course, streamlined so that only the parts related to diagnostics and prognostics are taught.
I can't decide if this is a typo or not.

Hanson recently commented on MetaMed on OB, but not here, so might as well quote some of it:

I wrote this post because I know several of the folks involved, and they asked me to write a post endorsing MetaMed. And I can certainly endorse the general idea of second opinions; the high rate and cost of errors justifies a lot more checking and caution. But on what basis could I recommend MetaMed in particular? Many in the rationalist community think you should trust MetaMed more because they are inside the community, and therefore should be presumed to be mor...
It is indeed a kind of poisoned 'endorsement' that MetaMed could have done without -- a Nessus' tunic. It's telling that he obviously didn't run it by MetaMed before publishing, and didn't straight-out decline to give a recommendation, instead wrapping it in "I expect it will be corrupted like all the others". It's surprising after reading his recent praise of Yvain, who's involved with MetaMed.

This seems like a really good idea. Especially given the impossibility of a single doctor keeping up with all the literature...

Moreover, I rather expect MetaMed to be able, after a while, to suggest profitable research opportunities to people looking to do medical research.

This is a new service and it has to interact with the existing medical system, so they are currently expensive, starting at $5,000 for a research report. (Keeping in mind that a basic report involves a lot of work by people who must be good at math.) If you have a sick friend who can afford it - especially if the regular system is failing them, and they want (or you want) their next step to be more science instead of "alternative medicine" or whatever - please do refer them to MetaMed immediately.

What might it be worth to people to find out that some or all of the usual procedures are so dangerous and/or ineffective as to be not worth doing?

Likely more than the list price of those procedures. People who have expensive, potentially harmful procedures done on them would get great benefit from having MetaMed review those procedures.

Statistical and Health Illiteracy (Patients)

Is this a placeholder for more citations?

Eliezer Yudkowsky: I accidentally left that in after deleting the section underneath it. Like I said, this was only a fraction of their total citations list.

So much room for improvement in healthcare, even without new stuff :).

I'd be interested in the linked Begg paper, but it's behind a paywall. Can someone please tell me what exactly they did, and how they obtained all those various p-values?

"On inferences from Wei's biased coin design for clinical trials". You can always request fulltexts here.
Thank you.

What are the capital investments that need to be recouped in the early adopter period? Is the price tag based on "It is worth more than this to our target market.", or "It costs this much to do this research, with reasonable amortizing of capital costs."?

At a guess: computer hardware, office space, recruiting competent people, training them to work together effectively, and a well-organized library of the company's previous reports so that not every request requires them to reinvent the wheel.
Computer hardware and support adequate for a large-scale implementation is roughly $500k capital and $240k/yr; office space incl. utilities should be in the realm of $500k/yr (again, for a large-scale operation); and one hundred competent people should be about $4m/yr, with another $1m for management expenses. Total capital plus first year's operating expenses is ~$6m; if they expect to sell a thousand reports per year (over one man-month each) at $5k each, they repay the investors almost in the first year. I haven't tried to price the custom software involved, but for such a (by big-business standards) small investment I don't see why they didn't start at full scale.
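Summing the figures above (these per-item costs are the parent comment's rough estimates, not actual MetaMed numbers):

```python
# First-year cash sketch using the parent comment's rough figures (USD)
capital = 500_000       # computer hardware
support = 240_000       # hardware support, per year
office = 500_000        # office space incl. utilities, per year
staff = 4_000_000       # ~100 competent people, per year
management = 1_000_000  # management expenses, per year

total_year_one = capital + support + office + staff + management
revenue = 1_000 * 5_000  # a thousand reports at $5k each

print(total_year_one, revenue)  # 6240000 5000000
```

That's $6.24m out against $5m in: most, but not quite all, of the first year's spend recouped, matching the "almost in the first year" claim.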
You're only budgeting $40k per person? That seems low, especially considering overhead, health insurance, etc.
I think it's a reasonable rate for part-time independent contractors putting out one tenth of a report in a month.
Is price ever based on anything other than "this will maximize our revenue over this period of time"?
The terminology you wanted was 'maximize our profit'. And yes, some pricing is based on a goal other than maximizing profit.
I've worked at three jobs where the firm was not even in the ballpark of approximately maximizing profit. The first is now out of business. The second and third were in government.
Government jobs are not going out of business. You still have to understand the bottom line to be in business, but you don't have to worship it above e.g. employee health and welfare.
For example, it is sometimes based on the goal "Maximize the bonus given to the CEO".
I was referring to corporations with core values that don't involve Laffer peaks of money.
This position wasn't challenged, missed, nor even weakened by my reply. Rather, it was strengthened by the agreement with the actual claim you made.
Sorry; it seemed to me that you were agreeing with the claim I made by subverting the intent. As someone who intends to create and invest heavily into a corporation with goals other than maximizing the taxable investment income I receive, it is a sensitive subject to me.
Exciting (and brave, considering the failure rate). I'm curious... what industry/goal if you don't mind sharing?
Coffee shop / social justice. It's a planned attempt to reduce class division by making the middle-class investor(s) a little richer while making the working-class employees each absolutely richer. I currently have someone with the philosophical background but not quite enough business training to serve as general manager. I've figured on a low six-digit upfront investment, along with a couple of years to get a fully qualified general manager, a couple more hundred $k in capital costs, and another in operating losses each year for four years, leading to a recoup starting eight years after beginning. I've got about half that now, and just need to get my day job adjusted and settled in a better location to oversee the operation; I expect to cover the remaining investment out of income. (Before I actually start, I'm going to develop enough of a plan to know how much reality deviates from the plan at any given point.)