Prediction-based-Medicine instead of E̵v̵i̵d̵e̵n̵c̵e̵ ̵b̵a̵s̵e̵d̵ ̵M̵e̵d̵i̵c̵i̵n̵e̵ Authority-based-Medicine

by ChristianKl · 4 min read · 30th Jul 2021 · 6 comments


Medicine · Expertise (topic) · World Modeling · World Optimization

This article explores how evidence-based medicine currently works in 2021, with ivermectin as a case study. I don't argue for or against ivermectin; instead I look at the deeper issue of how we decide which drugs and which experts to trust. First I describe the status quo, and then I present Prediction-based Medicine as an alternative to it.

The idea of Evidence-based Medicine

Over the last three decades, the idea of Evidence-based Medicine arose. A large part of what Evidence-based Medicine means in practice is doctors reading the medical literature and making treatment decisions based on the evidence available there.

No study has shown any benefit from doctors reading the medical literature. So instead of being supported by the standards of evidence that Evidence-based Medicine itself proposes, the framework is accepted without empirical evidence and backed only by intellectual arguments. For a long time that wasn't a real problem, because everybody more or less agreed on what the dogma of Evidence-based Medicine was supposed to mean.

Ivermectin

At the moment we have two meta-analyses of ivermectin that went through competent peer review before getting published. By competent peer review I mean review by people who are familiar enough with the subject to spot when control and placebo get swapped in the reporting of one trial.

The two competently peer-reviewed meta-analyses were published in Reviews in Medical Virology and the American Journal of Therapeutics, which have impact factors of 4.221 and 1.955 respectively. This suggests that both are normal journals, not obscure ones willing to publish anything. Given that more prestigious journals are more likely to publish wrong data, publication at this level is good enough for a meta-analysis, and it doesn't make sense to require publication in more prestigious journals.

While the two meta-analyses conclude that there's moderate-certainty evidence for ivermectin working, authorities like the FDA, CDC, WHO and EMA reject ivermectin as a treatment.

A doctor deciding how to treat a patient has to decide what to trust. While we can make theoretical arguments about whom we should trust, the question is essentially which authority you want to believe in. There's no empirical evidence for any heuristic that's proven to help us decide whom to trust. This is bad, because one side of the debate is likely right, and since some doctors follow each side, a lot of patients get bad treatment.

Deciding about ivermectin is about believing in the heuristics of authorities

I could argue here for the decision-making heuristics of one authority or the other, but the argument I could make wouldn't be based on good empirical evidence. This has the great advantage that even when people read this article in a few years, I run no risk of being seen as having made the wrong call.

Instead, I argue that this case shows that medicine is currently, in essence, about believing the heuristics of authorities without any evidence about which heuristics are best. When people say they make a treatment decision because the CDC, FDA, WHO, or EMA holds a position, the decision is essentially about trust in authority.

Prediction-based Medicine

Once you accept that treatment decisions are about believing in authorities, the logical next question is how to determine which authority is better than another. This is similar to the problem of determining which political analyst to trust, and Philip Tetlock found a way to evaluate those experts: he asked them to predict future political outcomes. Scoring rules such as the Brier score or the logarithmic score then allow us to evaluate the predictions.
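As an illustrative sketch (not from the original post), here is how the two scoring rules work for a single binary prediction. A forecaster states a probability; once the outcome is known, the Brier score measures the squared error (lower is better) and the logarithmic score the log probability assigned to what actually happened (closer to zero is better):

```python
import math

def brier_score(forecast: float, outcome: int) -> float:
    """Squared difference between the stated probability and the 0/1 outcome."""
    return (forecast - outcome) ** 2

def log_score(forecast: float, outcome: int) -> float:
    """Log of the probability the forecaster assigned to the realized outcome."""
    return math.log(forecast if outcome == 1 else 1.0 - forecast)

# A forecaster who said 90% and was right scores better than one who said 60%.
confident = brier_score(0.9, 1)  # small penalty
hedged = brier_score(0.6, 1)     # larger penalty
```

Both rules are "proper" scoring rules: a forecaster minimizes their expected penalty by reporting their true belief, which is what makes them suitable for comparing experts.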

The same process can be used in medicine to decide whom to trust, and it can be done at two levels. At one level, we could ask individual scientists and institutions to predict the outcomes of clinical trials. At the other, we can ask doctors to predict the outcomes of individual patients.

While the first is also valuable, I will focus here on the second. When dealing with a pandemic caused by a new virus, no trials about the best treatments are available at the beginning, but there are still patients who need to be treated.

The FLCCC is a group of Intensive Care Unit (ICU) doctors who came up with treatment guidelines that differed from the official ones. When Dr. Pierre Kory of the FLCCC gave his first Senate testimony in May 2020, one way to evaluate whether to trust him would have been to ask: "How good are you at predicting the outcomes of your COVID-19 patients compared to ICU doctors who follow the standard protocol?" If Dr. Kory has a bad score at predicting outcomes for COVID-19 patients, he should be ignored. If, however, he advocates a different treatment and is good at making predictions, that suggests other ICU doctors are well advised to learn from him.
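A hypothetical sketch of such a comparison (all labels and numbers here are invented for illustration): each doctor attaches a recovery probability to each of their patients, and doctors are ranked by their mean Brier score over the realized outcomes:

```python
def mean_brier(predictions):
    """Average squared error over (stated probability, 0/1 outcome) pairs; lower is better."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

# Invented data: (predicted recovery probability, actual 0/1 outcome).
doctors = {
    "standard_protocol": [(0.5, 1), (0.5, 0), (0.6, 1), (0.4, 1)],  # weakly informative
    "dissenting_doctor": [(0.9, 1), (0.2, 0), (0.8, 1), (0.7, 1)],  # well-calibrated
}

scores = {name: mean_brier(preds) for name, preds in doctors.items()}
best = min(scores, key=scores.get)  # whose judgment the comparison would favor
```

In this invented example the dissenting doctor's sharper, correct probabilities produce the lower mean Brier score, which is the signal the post proposes using to decide whose treatment approach deserves attention.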

In a pandemic, where a fast, good response to the first wave of patients is very important, this process lets expertise travel much faster through the medical community, because it becomes easy for doctors to identify the people who understand treating COVID-19 well and learn from them.

Outside of the pandemic, we can give the doctors with the best prediction scores in their field seats at the table where treatment guidelines are written, because they are the people best at synthesising all available evidence into treatment decisions for a particular disease.

Conclusion

We need to use the empirical method to find out how best to interpret the available evidence. Asking doctors to predict the outcomes of their clinical cases is an empirical method that allows for scoring their predictions, which gives us a good heuristic for deciding whom to trust.



 


Comments

> suggests that both [are] normal journals

Also, maybe mention something about quality of outcomes vs. just ability to predict. If I give all my patients poison, then I can score very well on my outcome predictions.

The argument about getting better 'outcomes' by giving patients poison works similarly with clinical trials in our existing system. A drug company that uses a poison as a placebo, or a badly dosed existing treatment as control, is going to make their drug look better.

I don't advocate that prediction scores alone should drive decisions, but that they should be an important part of the debate, and the debate should actually happen between experts who share their views on how things work.

Why would poison (or anything with a known negative effect) be used as a placebo?

Of course I understand the drug companies' incentives, but I don't get how that could be justified or look reasonably scientific. Do you have a specific example?

My main point is that the ability exists in both the existing system and also a prediction-based one.

But when it comes to justifying poison, imagine you have a drug that gives everybody nausea after they take it. Because of informed consent, you have to tell your trial participants that everybody who gets the verum (the active drug) will very likely get nausea soon after taking it.

You can justify giving the patients in the placebo group a poison that causes nausea, because otherwise all the patients would know whether or not they got the placebo.

Many studies don't specify what they are using as placebo. 

I do remember cases where someone argued that the dose of the control group was likely net negative but don't have specific links right now. 

I think his point is that the same failure mode Measure mentioned, doctors giving patients poison and correctly predicting the outcomes, is just as possible under the current clinical trial scheme.