"Ivermectin and outcomes from Covid-19 pneumonia: A systematic review and meta-analysis of randomized clinical trial studies" comes to the conclusion: "Our study suggests that ivermectin may offer beneficial effects towards Covid-19 outcomes. More randomized clinical trial studies are still needed to confirm the results of our study."

On the other hand, "Ivermectin for the treatment of COVID-19: A systematic review and meta-analysis of randomized controlled trials" comes to the conclusion: "In comparison to SOC or placebo, IVM did not reduce all-cause mortality, length of stay or viral clearance in RCTs in COVID-19 patients with mostly mild disease. IVM did not have an effect on AEs or severe AEs. IVM is not a viable option to treat COVID-19 patients."

What did the studies do differently to come to their conclusions? How do I go about working out which of them provides the better analysis?

Bucky

Jun 30, 2021

240

I had a quick look and essentially it seems the latter found fewer studies (10 vs 19) and therefore fewer patients (1173 vs 2768)*

They have similar central estimates for the RR of all-cause mortality (0.37 vs 0.31), but thanks to having more patients the former gets a tighter CI (0.15 to 0.62) and concludes that there is an effect, while the latter has a wider CI (0.12 to 1.13) and concludes that there isn't one.
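To see why more patients mainly tighten the interval rather than move the point estimate, here is a minimal fixed-effect pooling sketch with made-up event counts (not the actual trial data):

```python
# Minimal fixed-effect meta-analysis sketch (made-up counts, not the real trials).
import numpy as np

def log_rr_and_se(events_tx, n_tx, events_ctl, n_ctl):
    """Log risk ratio and its standard error (0.5 continuity correction)."""
    a, c = events_tx + 0.5, events_ctl + 0.5
    n1, n2 = n_tx + 0.5, n_ctl + 0.5
    log_rr = np.log((a / n1) / (c / n2))
    se = np.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    return log_rr, se

def pooled_rr(studies):
    """Inverse-variance pooling of (log_rr, se) pairs; returns RR and 95% CI."""
    log_rrs, ses = map(np.array, zip(*studies))
    w = 1 / ses**2
    mu = (w * log_rrs).sum() / w.sum()
    se = np.sqrt(1 / w.sum())
    return np.exp([mu, mu - 1.96 * se, mu + 1.96 * se])

trial = log_rr_and_se(2, 50, 4, 50)   # hypothetical trial: 2/50 vs 4/50 deaths
print(pooled_rr([trial] * 5))         # ~0.56 (0.28, 1.08): CI crosses 1
print(pooled_rr([trial] * 10))        # ~0.56 (0.35, 0.89): CI excludes 1
```

The pooled RR is identical in both cases; only the width of the interval, and therefore the "significant or not" verdict, changes.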

The latter could claim that there is as yet insufficient evidence of an effect based on the studies in their analysis, but not that there isn't an effect. I especially take issue with the claim that "IVM is not a viable option for treating COVID-19 patients" when they themselves take such pains to talk about how low quality much of the evidence is!

The two meta-analyses also differ in their ratings of individual papers - for instance, the largest study (n=400, Lopez-Medina et al.) is rated as high quality (7 out of 7) in the former but at high risk of bias in the latter (due to deviations from intended interventions).

Scanning the paper, there are a few issues. For the most part the problems are mitigated, but some concerns remain:

  • There was a period of 17 days where the placebo group were receiving ivermectin(!)
    • The people from these 17 days were excluded from the primary analysis and additional subjects were enrolled
  • The primary outcome was changed during the study
    • This occurred ~30% through the study.
    • The reasoning for changing from the old outcome seems reasonable
    • It's hard to say whether the choice of the new outcome could have introduced bias
  • Placebo was changed during the study from dextrose water to something tasting more like ivermectin
    • This was ~25% through the study
    • Results did not differ much between the 2 placebo groups
    • I don't feel like this would make a massive difference, but I'm not sure

This paper is fairly typical of the quality of the studies (according to meta-analysis 2) or at the top end of study quality (according to meta-analysis 1), which causes me some concern.

In conclusion, if I were offered Ivermectin I would take it at this point (side effects seem to be small), and I might even look to sign up to a trial if I had COVID - in the UK some people would be eligible for this one.

* 6 studies were common to both analyses.

ndr (3y)

The Medina study received some methodological complaints; see the JAMA letter.

Ivermectin proponents seem to consistently push for a regimen of:

  • high dosage (0.2mg/kg once-a-week for prevention)
  • early usage, ideally as prevention
  • usage with/after meals

If they're right, one can imagine studies that see no effect because of low dosage, late administration, or administration on an empty stomach (the anti-parasite regimen) - which is what the Medina study does.
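As a rough illustration of what that regimen works out to per dose (my own arithmetic, not a figure from either paper):

```python
# Back-of-the-envelope: weekly dose implied by the 0.2 mg/kg regimen
# for a few adult body weights (illustration only).
for weight_kg in (50, 70, 90):
    print(f"{weight_kg} kg -> {0.2 * weight_kg:.0f} mg per weekly dose")
```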

Bucky (3y)
That’s super-helpful - thanks! One thing the negative meta-study noted was the variation in doses between studies (12 - 210mg).

I had a quick look and essentially it seems the latter found fewer studies (10 vs 19) and therefore fewer patients (1173 vs 2768)*

It seems like one of the reasons they found fewer studies was that they only searched till March 22, 2021, which is odd for a review paper published on 28 June 2021.

The pro-Ivermectin one was published earlier and its search went till May 10th.

Bucky (3y)
Yes, I think this definitely had some effect (I misread 10th May as 10th March originally!). I also think the fundamental search must have been better in #1. Of the 13 studies present in #1 but missing in #2, 6 were from 2020 and 7 from 2021 (I haven't looked at specific dates for the 2021 ones, but I'd guess some of them are from before March 22nd). For #1 the original search turned up 1237 articles, of which 95 had their full text analysed, leading to the 19 included. The corresponding numbers for #2 are 256, 12 and 10.

waveman

Jun 30, 2021

190

Some general comments about medical research. Source: I have studied the statistics books in detail, and have read several cubic meters of medical papers and learned most of the lessons the hard way. 

When reading medical papers, look for:

1. Funding sources for the study or for the authors of the study (e.g. "speaking fees" and "consulting fees"). He who pays the piper calls the tune. 

2.  Statistical incompetence, which is rife in medical research. For example, you routinely see "lack of statistical significance" interpreted as "proof of no effect". 

3. Pre-registration (advance publication) of the study design, endpoints and intended statistical analysis. Without it there is a lot of scope to move the goalposts and engage in p-hacking and other nefarious activities.

4. Differences between the abstract and the text. Often you can read the abstract and wonder if it refers to the same paper.

5. In meta-analyses, check whether the stated selection criteria were actually adhered to, and whether subjective criteria were used to exclude inconvenient studies.

6. Financial interests. For example, it is notable that countries like India, which make generic drugs, appear to be more favourable to generic drugs. Meanwhile in the US there seems to be a strong bias in favour of drugs still under patent.

7. Read the methods section very carefully. Once you have read enough papers this will become instinctive. 

8. Be ready for the vast majority of papers to be of low quality and worthless. 

9. I routinely see studies rigged to deliver a predetermined outcome. For example, if you want to find a non-statistically significant effect which can be misrepresented as "no effect", then run a small study, for a short period, and use suboptimal doses or take other measures to minimize differences between the groups compared (the power sketch below shows how well this works).
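A rough power calculation with made-up numbers (a generic sketch, not any particular trial) illustrates points 2 and 9: a small trial is expected to come back "not statistically significant" even when the effect is real, so non-significance cannot be read as "no effect".

```python
# Sketch: approximate power of a two-arm trial to detect a real halving of
# mortality (hypothetical 10% control vs 5% treated), at alpha = 0.05.
import numpy as np
from scipy.stats import norm

def power_two_proportions(p_ctl, p_tx, n_per_arm, alpha=0.05):
    p_bar = (p_ctl + p_tx) / 2
    se_null = np.sqrt(2 * p_bar * (1 - p_bar) / n_per_arm)
    se_alt = np.sqrt(p_ctl * (1 - p_ctl) / n_per_arm + p_tx * (1 - p_tx) / n_per_arm)
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf((abs(p_ctl - p_tx) - z_crit * se_null) / se_alt)

for n in (50, 200, 1000):
    print(f"n = {n} per arm: power ~ {power_two_proportions(0.10, 0.05, n):.2f}")
# Roughly 0.15, 0.48 and 0.99: a 50-per-arm trial will usually report
# "no significant difference" even though the drug halves mortality here.
```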

This all sounds rather grim, an extreme case of the hype and uneven quality that probably afflicts all research areas now... Number 8 seems especially grim, even though it doesn't involve outright corruption, since it means that any counter-institution trying to do quality control will be overwhelmed by the sheer quantity of papers... Nonetheless: What you describe is a way to check the quality of an individual paper. Is there any kind of resource that reliably turns up high-quality papers? Perhaps literature reviews or citation counts? 

waveman (3y)
No, you just have to filter. In any particular field you get to know the agendas and limitations of many of the researchers: X is a shill for company Y, A pushes the limits of p-hacking, B has a fixed mindset about low-fat diets, etc. Some researchers also tend to produce me-too and derivative papers; others are more innovative. Also, you do get quicker at spotting the fatal flaw. In finance there are blogs that pick out recent good papers; these are a huge time saver (e.g. Alpha Architect, which I have mentioned before).

gwern

Jul 16, 2021

60

The obvious difference is that the second does not include Elgazzar while the first does, which is bad for the first one because Elgazzar faked its data so incompetently that it has been retracted: https://grftr.news/why-was-a-major-study-on-ivermectin-for-covid-19-just-retracted/ https://gidmk.medium.com/is-ivermectin-for-covid-19-based-on-fraudulent-research-5cc079278602 https://www.theguardian.com/science/2021/jul/16/huge-study-supporting-ivermectin-as-covid-treatment-withdrawn-over-ethical-concerns

ndr (3y)

I have not managed to see Hariyanto et al. reproduced yet (any help welcome), so I don't know what effect removing Elgazzar from it would have on that specific meta-study.

For Bryant et al., though, this is the result with both Elgazzar studies in: [forest plot not reproduced here]

And this is the result with both Elgazzar studies out: [forest plot not reproduced here]

RR moved, but the result is fundamentally the same.
Do you think it would change the result for Hariyanto et al.?

ndr

Jul 02, 2021

60

Another meta-analysis (Bryant et al.) has a very similar title but positive claims: "Ivermectin for Prevention and Treatment of COVID-19 Infection: A Systematic Review, Meta-analysis, and Trial Sequential Analysis to Inform Clinical Guidelines".

The authors have put out an official rebuttal of the negative meta-analysis, which is an interesting read and points out many of its perceived flaws.

The comments on the preprint of the negative study (Roman et al) are also interesting.

For instance:

Hi, I'm Dr. Niaee and I was surprised that even basic data from our RCT is completely misrepresented and is WRONG. We had 60 individuals in control groups and 120 in intervention groups and even this simple thing is misrepresented.

And:

after your "mistake" inverting the control and IVM arm of the Niaee study, the RR goes from 1.11 to 0.37 yet you dare to not change a single word in your conclusion

My current impression is that the negative study is not very high quality at the moment, whether due to a rush to publish, incompetence or malice.
For the sake of argument I still have to look at which studies Roman et al. included that were omitted by Bryant et al. and Hariyanto et al., as that would reveal any pro-IVM biases.

Well worth reading the linked material - quite damning.

ndr (3y)

Update:
A recent preprint compares Roman et al. and Bryant et al.: "Bayesian Meta Analysis of Ivermectin Effectiveness in Treating Covid-19 Disease".

Summary:
The two studies find similar RR (relative risk):

Bryant found RR = 0.38 [CI 95%: (0.19, 0.73)]
Roman found RR = 0.37 [CI 95%: (0.12, 1.13)]
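A quick back-of-the-envelope reading of those intervals (my own rough approximation, not the preprint's actual model) shows how little the two results disagree: treat log(RR) as normal and ask how much probability each puts on RR < 1.

```python
# Approximate P(RR < 1) implied by each reported estimate and 95% CI,
# treating log(RR) as normal with a flat prior. A rough reading of the
# numbers above, not a reproduction of the Bayesian preprint.
import numpy as np
from scipy.stats import norm

def prob_rr_below_1(rr, ci_low, ci_high):
    mu = np.log(rr)
    sigma = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    return norm.cdf(0.0, loc=mu, scale=sigma)

print(f"Bryant: {prob_rr_below_1(0.38, 0.19, 0.73):.3f}")  # ~0.998
print(f"Roman:  {prob_rr_below_1(0.37, 0.12, 1.13):.3f}")  # ~0.96
```

On this reading both analyses put the bulk of the probability on a benefit; the difference is only whether the 95% interval happens to cross 1.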

Roman et al. should conclude that there's not enough evidence, because they can't rule out RR >= 1 at 95% confidence. Instead they conclude:

In comparison to SOC or placebo, IVM did not reduce all-cause mortality, length of stay or viral clearance in RCTs in COVID-19 patients with mostly mild disease.

... (read more)

Alexey Lapitsky

Jun 30, 2021

60

The negative meta-study is borderline malicious.

"This article has an embarrassing history whereby treatment arms in the study of Niaee were reversed, attracting protest from Dr Niaee himself. This egregious error has been corrected in the revised version, but with no change to the Conclusions in spite of dramatic change…" - from BIRDGroup twitter

Pubpeer is also useful in cases like this:

https://pubpeer.com/publications/955418F3D4D39742CFFA8C1B023AA3

ChristianKl

Jun 30, 2021

50

I googled a bit to see whether ivermectin can be ordered online, and while there are websites that superficially look like normal online pharmacies selling it, they seem to be lacking an impressum and look pretty shady.

The online pharmacies I found that sell it and aren't shady all only give it out on prescription.

Alex Power

Jul 13, 2021

30

Because it's political.  Some people are invested in Ivermectin being effective, other people are invested in it not being effective.  The extant studies are all inconclusive due to a small N, and mostly have problems with their methodology; if you pick and choose your studies in the right way you can get whatever result you want.

And the individual studies are often extremely bad. I note Cadegiani et al., who claim that Ivermectin (and also Hydroxychloroquine, and also Nitazoxanide) are each so effective, either individually or combined (they didn't bother to track which patients got which drugs), that it is unethical to use a placebo group in studying those drugs. I'm not sure how Elsevier can be affiliated with a journal that publishes material like that and retain any credibility.

Because it's political.  

Pretending that just because something is political you can believe whatever you want is hugely problematic. 

if you pick and choose your studies in the right way you can get whatever result you want.

It's interesting what malpractice the contra-ivermectin study engages in. Not withdrawing it from publication after they misstated the results of a key study (and not giving it to any peer reviewer competent enough to notice the error) seems to me a lot more ethically problematic than allowing a low-quality study to be published ... (read more)

Alex Power (3y)
My longer-form thoughts are at Substack.

Slider

Jun 30, 2021

30

By reading them?

It seems they hit different studies, and one can check that. One also says that the quality of evidence is low across the board, while the other says everything is very applicable for analysis.

It is also a bit funny how one of the papers goes study by study saying "IVM reduced mortality but QoE was low" and then goes on to conclude that overall "IVM does not reduce mortality".

The two statements are not necessarily so much in conflict; they are just weaseled in opposite directions. One of them says "suggests" and the other says "is not proven", which is what you get if you have a faint trace going one way.

By reading them?

I think the question of whether ivermectin works is important enough that I don't want to completely rely on the impression that I personally get by reading them. 

The question is both important on the policy level and also on the personal level in case any rationalist does get infected with COVID-19.

and the other says "is not proven"

In the abstract they make a definitive statement that IVM is not useful. This goes well past any rational or reasonable interpretation of the evidence. This raises the question of bias/motivated reasoning. I will read the paper in full today and may comment further.

I read the negative paper (I had already read the positive one). 

The positive one concludes, rightly I think, that there is evidence falling short of proof that IVM is likely to be useful.

I am not at all happy with the negative paper. 

1. Lots of highly emotive language against IVM, suggesting a lack of objectivity. Another thing suggesting a lack of objectivity is that they put <did not find IVM useful> in their list of strengths. I wonder who would find this a strength and why? Also, sneering about studies done in low-income countries did not endear them to me.

2. They really went all out, above and beyond the call of duty, trying to exclude papers. Again it did not seem like they were humbly and objectively seeking the truth; it seemed to reek of motivation. Having reduced the papers that qualified to a tiny number, then, surprise surprise, the result is non-significant - which they can then misrepresent (see next point).

3. Misstatement of the conclusion. Lack of statistical significance does not mean you showed the thing doesn't work, especially given P=13% and RR=0.37. Given the small numbers, the reduction in deaths would have had to be enormous (~80%) to achieve si... (read more)

ChristianKl (3y)
The number I have in memory is that it takes roughly a week. If you think it's longer, can you point me to a resource?
ChristianKl (3y)
Prophylaxis is a strong point given its potential effect, but given that other studies found that the evidence for treatment effects is currently stronger than the evidence for prophylaxis, focusing on the issue that's more studied seems reasonable to me. I consider the other points more concerning. At the moment that raises the question for me of whether it makes sense to order Ivermectin from India (which would likely take a month to arrive). Given that Delta is enough to produce R > 1 in the UK in summer - while people are more outside, the UK still has a lot of restrictions, 85% have had one vaccination dose and 50% are fully vaccinated - and that Delta Plus already has a mutation that likely makes it better at evading vaccines, a new wave in autumn seems very likely to me.