I've recently gotten in touch with Alexandros Marinos, who has been writing lengthy critiques of Scott's most famous post, his review of Ivermectin.

After hearing his point of view, I must admit I've been swayed toward thinking that:

  1. There are some obvious statistical errors in Scott's post and in his interpretation of Bitterman's epidemiological data
  2. The evidence for ivermectin, while poor, is about as reasonable as that for any repurposed covid drug (e.g. fluvoxamine), and even purpose-made ones (Paxlovid)

While I fancy myself an off-label statistician, I'm prone to error, and maybe my interpretation is wrong. Given that Alexandros has recently finished this post, which condenses his core criticisms, I was curious to see what people on LW thought about it.

This is a shotgun comment, and sorry if I'm being very ignorant here (I've stopped following most covid science-y stuff in the last year+), but:

The evidence for ivermectin, while poor, is about as reasonable as that for any repurposed covid drug (e.g. fluvoxamine), and even purpose-made ones (Paxlovid) [emphasis mine]

Aren't the effects of Paxlovid pretty extreme? 

  1. In studies, the mortality benefits are large enough (>10x?) that they really slam you in the face -- see the quick sketch at the end of this comment.
  2. My understanding is that in 2022, covid mortality has gotten lower in the general population, even among unvaccinated people. 

My understanding is that you don't have effects nearly this large and well-studied for ivermectin on covid. If anything (feel free to correct me!) countries that use ivermectin a lot have pretty high covid mortality, though of course there are many confounders.
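
For concreteness, here's a quick sketch of the arithmetic behind a ">10x" mortality claim -- the counts are invented for illustration, not taken from any actual Paxlovid study:

```python
# Illustrative only: hypothetical counts, not data from any actual trial.
def relative_risk(events_treated, n_treated, events_control, n_control):
    """Risk ratio: event rate in the treated arm over the control arm."""
    return (events_treated / n_treated) / (events_control / n_control)

# A ">10x" mortality benefit means a risk ratio below 0.1, e.g.:
rr = relative_risk(7, 1000, 70, 1000)
print(f"risk ratio: {rr:.2f}")  # 0.10 -> treated arm dies at 1/10th the rate
```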

Given that this was my claim and not the author's, I'd rather not discuss it, since I'd detract from the point.

But tl;dr: strength of evidence aside, if you count all trials, ivm efficacy is arguably higher than Paxlovid's. Once you start eliminating trials and de-biasing, the story changes -- the direction and magnitude of that change is the whole story that generated e.g. Scott's ivm post and this reply.

Scott's post is interesting in many regards. If everything that Scott says in his post were true, it would imply that ivermectin should be given to COVID patients in areas with a high number of worm infections and not in areas with low numbers of worm infections. It does not imply that we shouldn't give ivermectin to any COVID patients.

The fact that this didn't happen tells you a lot about the discourse. It's unfortunately not as deep as I would have liked it to be.

One issue that might be worth looking at more is the sharing of trial data from the TOGETHER trial. Creating Manifold markets and/or Metaculus questions about whether or not the data will be shared more widely, and about what it will say if that happens, would be a useful way to ground the discussion in something more concrete.

I think it would be great if Alexandros Marinos would formulate specific predictions around the TOGETHER trial that can be put into prediction markets. If a Manifold market is created, it would make sense for someone other than Alexandros to be the judge/market creator for the market.

Given that I have access to insider sources of information and a lot of inside data that I can't yet release publicly (you will have to take my word on this, sadly), it would be pretty bad form for me to make predictions other than the ones I have already made (many of which were made before I had that inside data):

The TOGETHER trial suffered randomization failure: the placebo group is not concurrent, and that triggered a chain of events that led to it allocating disproportionately sick patients to ivermectin and disproportionately healthy patients to fluvoxamine, with placebo in the middle. This was amplified by several puzzling decisions by the TOGETHER team. All this against a backdrop of indefensible dosing for ivermectin and widespread community use in Brazil, where it was available OTC.

I've summarized many of my concerns here: https://doyourownresearch.substack.com/p/10-questions-for-the-together-trial

And I've shared my model of what I think happened here: https://doyourownresearch.substack.com/p/together-trial-solving-the-3-day

There's a lot more to go over, but long story short: what I do doesn't involve a lot of probabilistic arguments. It's mostly logical inference, inference that anyone can replicate, since I try to post receipts as much as possible. As a result, whenever I've had the chance to see internal data, it's matched my models pretty well.

Given that I have access to insider sources of information and a lot of inside data that I can't yet release publicly (you will have to take my word on this, sadly), it would be pretty bad form for me to make predictions other than the ones I have already made (many of which were made before I had that inside data)

It's not at all bad form. If you have the market and are open about that, there are two things that might happen:

  1. People believe your claim, and the market sits at ~99% that the trial had a randomization failure.
  2. People don't believe you and bet against you.

If the people who don't believe you lose money because they misjudged you, that's not your problem. It's basically a market about whether or not you are trustworthy.
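
As a toy illustration of the payoff logic (a simplified binary market where shares pay $1, with no fees -- not the exact mechanics of Manifold or Metaculus):

```python
# Simplified binary market: a share of one side costs `price` and
# pays $1 if that side resolves true. No fees, no slippage.
def expected_profit(price, my_probability, shares=100):
    """Expected profit from buying `shares` of a side at `price`,
    given my subjective probability that the side resolves true."""
    return my_probability * shares - price * shares

# Market says 99% YES ("randomization failure"), so NO costs $0.01.
# A skeptic who thinks the claim is only 50% likely expects to profit:
print(f"{expected_profit(price=0.01, my_probability=0.50):.1f}")  # 49.0
```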

Having that market is helpful because the complexity of what you are saying is very high and not enough people want to spend the time to wade through that complexity. "Prediction market says 99% he's trustworthy", on the other hand, is a simple story.

In addition to being a simple story, it's a notable story for rationalists -- one that's interesting to people who care about prediction markets, like Scott or Zvi.

Admittedly I've not looked into how Metaculus works. How would I go about registering such a prediction?

Understanding that there was randomization failure, and that the failure was at the expense of ivermectin, takes about 10-15 minutes for someone who can do addition and subtraction -- I've got all the receipts here:

https://doyourownresearch.substack.com/p/demonstrating-randomization-failure

Maybe a little more time if they want to confirm the receipts and make sure there's no credible counter-argument to be made. It's either that, or the numbers coming out of the trial are false -- not sure which is worse.
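
To show the kind of arithmetic involved, here's a toy version of the concurrency check -- the counts below are made up for illustration and are not the trial's data:

```python
# Toy concurrency check: under concurrent 1:1 randomization, each arm's
# share of enrollment should stay roughly constant in every calendar
# window. These counts are invented, NOT the TOGETHER trial's data.
enrollment = {
    # window: (ivermectin_n, placebo_n)
    "window 1": (150, 50),
    "window 2": (100, 100),
    "window 3": (50, 150),
}

for window, (ivm, plc) in enrollment.items():
    print(f"{window}: ivermectin share = {ivm / (ivm + plc):.0%}")
# Large swings (75% -> 50% -> 25% here) mean the arms were not enrolled
# over the same period, i.e. the comparison group is not concurrent.
```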

Ever since I wrote that post, I've seen more internal data from the trial that confirms it.

How would I go about getting people to bet against me on this? And crucially, how would it help get the data released? I already offered to donate $25k to ACX Grants if Scott helps get the data released, which is my main objective. Will this help in that direction?

You can create a new Metaculus question. After you create it, the Metaculus moderators decide whether the question is specific enough. If it is, they release it; otherwise, they give feedback about what needs improvement.

Creating the Metaculus question is likely a good first step, as the process helps make the question well-defined.

How would I go about getting people to bet against me on this?

If nobody bets against you, you can say on Twitter that nobody is willing to bet against you because they don't sincerely believe you are wrong.

That will either get people to bet against you, or give you a good way to demonstrate to people who are not willing to read through all your posts that you are likely telling the truth.

I already offered to donate $25k to ACX Grants if Scott helps get the data released, which is my main objective.

If Scott wanted more money for ACX Grants, he could ask the FTX Foundation and likely get it in a way that causes him much less of a headache. Donating $25k to ACX Grants is costly for you but not very motivating for Scott.

And crucially, how would it help get the data released?

Via a market on when the data will get released. You put in money that it won't get released; people who bet that the data will be released then have an incentive to do things that get the data released.

Rationalists like Scott care about using prediction markets to resolve scientific questions. If the market can't resolve because the trial team violates its promise to release the data, that's likely to feel viscerally wrong to people like Scott and be more motivating than the $25k to ACX Grants.
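
To spell out the incentive in toy numbers (the same kind of simplified $1-payout market as before; all figures hypothetical):

```python
# A YES holder's expected profit rises with any action that raises the
# probability of release. Simplified $1-payout binary market, no fees.
def yes_holder_ev(price, p_release, shares=100):
    """Expected profit of holding `shares` of YES bought at `price`."""
    return (p_release - price) * shares

print(f"{yes_holder_ev(price=0.30, p_release=0.30):.1f}")  # 0.0: fairly priced
print(f"{yes_holder_ev(price=0.30, p_release=0.60):.1f}")  # 30.0: pushing for
# release raises the holder's expected payout, so YES bettors will push
```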

I also expect it to make a bunch of rationalists who currently are not interested in reading posts about Ivermectin curious to figure out what's going on. 

Much of this sounds very speculative, to be completely honest, and I'm not sure I agree with your diagnosis of what "rationalists like Scott care" about.

I would be interested in hearing what prediction, specifically, would be interesting and specific enough to put up on metaculus. Or was that the one about the data not being released? Because I'm actively working on multiple fronts to get it released, so "predicting it won't" just feels wrong.

It seems that you believe that the data getting openly released will have a positive effect in convincing people to take another look at ivermectin and/or to believe that we currently have evidence that it works.

If that's true, then there's likely something specifiable that you think one will learn from the data being openly released.

You can also just bet at whatever probability you consider valid for the data getting released. That still sets the incentive for people to move the probability by taking actions to make it more likely that the data gets released.

Would the stakes be high enough to get participants in the market? Manifold and Metaculus have always seemed fairly unreliable to me: hype is required to generate answers, and answers can be heavily biased due to the lack of a real incentive.

Still, if you'd be down for creating the markets yourself, or know someone who would, I'm pretty sure the author would be ok with sharing more specific predictions around TOGETHER.