Previous Covid-19 thoughts: On R0, Taking Initial Viral Load Seriously

Epistemic Status: Something Is Wrong On The Internet. Which should almost always be ignored even when you are an expert, and I am nothing of the kind. Thus, despite this seeming like a necessary exception, I expect to regret writing this.

People are taking the projection of 60,000 American deaths from Covid-19 as if it were a real prediction. This number is being used to make policy, to deny states medical equipment, to make plans that spend trillions of dollars, and to decide when to reopen entire economies.

Ignoring this in the hopes it will go away does not seem reasonable.

My suspicions that this was necessary were more than confirmed when, failing to realize just how obvious the nonsense in question was and thinking I needed to justify labeling it nonsense, I wrote a reference post called The One Mistake Rule. 

The second comment on that post was to argue that we should indeed use exactly the model that motivated me to write the post. The comment is here in full:

>> If a model gives a definitely wrong answer anywhere, it is useless everywhere.

Except if it needs to be used right now to make important decisions and it’s the best model we have. See: https://covid19.healthdata.org/united-states-of-america

We could plausibly think this is the best model we have? Oh my, are we screwed.

The Baseline Scenario That Makes No Sense

There seems to be a developing consensus on many fronts, for now, that the model linked above represents our reality. The model says it is ‘designed to be a planning tool’ and that is exactly what is happening here. 

What is this model doing? Time to look at the PDF.

Here’s the money quote that describes the core of what they are actually doing.

A covariate of days with expected exponential growth in the cumulative death rate was created using information on the number of days after the death rate exceeded 0.31 per million to the day when 4 different social distancing measures were mandated by local and national government:

School closures, non-essential business closures including bars and restaurants, stay-at-home recommendations, and travel restrictions including public transport closures. Days with 1 measure were counted as 0.67 equivalents, days with 2 measures as 0.334 equivalents and with 3 or 4 measures as 0. For states that have not yet implemented all of the closure measures, we assumed that the remaining measures will be put in place within 1 week. This lag between reaching a threshold death rate and implementing more aggressive social distancing was combined with the observed period of exponential growth in the cumulative death rate seen in Wuhan after Level 4 social distancing was implemented, adjusted for the median time from incidence to death. For ease of interpretation of statistical coefficients, this covariate was normalized so the value for Wuhan was 1.
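Concretely, here is how I read that covariate construction, as a rough sketch rather than their actual code; the day weights and threshold come from the passage above, while the example inputs and the Wuhan normalization value are hypothetical.

```python
# Sketch of the social-distancing covariate as I read the quoted methods
# text (not IHME code). Day weights come from the passage above; the example
# inputs and the Wuhan normalization value below are made up.

def day_weight(n_measures: int) -> float:
    """How much of a 'no distancing' day a given day counts as."""
    if n_measures >= 3:
        return 0.0
    return {0: 1.0, 1: 0.67, 2: 0.334}[n_measures]

def covariate(measures_per_day, wuhan_raw_value):
    """Sum of day weights for days after deaths exceed 0.31 per million,
    normalized so that Wuhan's value equals 1."""
    raw = sum(day_weight(n) for n in measures_per_day)
    return raw / wuhan_raw_value

# Example: 5 days with no measures, 3 with one, 2 with two, then three or
# more measures in place for the rest (all hypothetical).
days = [0] * 5 + [1] * 3 + [2] * 2 + [4] * 10
print(covariate(days, wuhan_raw_value=8.0))  # ~0.96
```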

In other words, this model assumes that social distancing measures work really, really well. Absurdly well. All you have to do to stop Covid-19 is any three of: Close schools, close non-essential businesses, tell people to stay at home, impose travel restrictions.

If you do that and maintain it, people stop dying. Entirely.

Look at the graph they have up as of this writing (updated on 4/10). By June 20, they predict actual zero deaths that day and every future day. They have us under 100 deaths per day by the end of May.

The peak in hospital use? Today, April 11.

The peak in deaths? Yesterday, April 10. For New York, several days ago, with our last death on May 20. 

In other words, considering the delay in deaths is about three weeks, they predict that no one in New York State will be infected after April. No one! We’ll all be safe in only three weeks! 

This is despite us not yet seeing any evidence of a major decline in positive test rates in New York. Deaths lag positive tests by weeks.

Hard to be more maximally optimistic than that. One could call this the ‘theoretical beyond best case scenario.’ 

(The statement is actually even more absurd than that, considering variation in time to case progression, but I’m going to let that one go.)

(Exercise for the reader, you have five seconds: What is the implied R0?)

(Second exercise for the reader: If there are four things that reduce the spread of infection some amount, and R0 is about 4 initially, and you implement three of them, what is the new R0?) 
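Here is my back-of-the-envelope arithmetic for that second exercise, under the (already generous) assumption that each of the four measures independently multiplies transmission by the same factor:

```python
# Back-of-the-envelope arithmetic for the reader exercises. Assumptions are
# mine: R0 = 4, and each of the four measures independently multiplies
# transmission by the same factor x.
R0 = 4.0

# If even all four measures together only just reach containment (R = 1),
# each measure multiplies transmission by:
x_if_four_reach_R1 = (1.0 / R0) ** (1 / 4)    # ~0.71

# Then three of the four measures give:
print(R0 * x_if_four_reach_R1 ** 3)           # ~1.41, still above 1

# For three measures alone to drive infections toward zero within weeks,
# you need something like R ~ 0.3, which requires each measure to cut
# transmission to:
x_needed = (0.3 / R0) ** (1 / 3)
print(x_needed)                               # ~0.42 per measure
```

Either way, three partially implemented measures do not plausibly produce the near-zero R that these projections require.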

They Account for Uncertainty, Right?

They generously account for uncertainty with the following ‘confidence interval’:

Figure 9 shows the expected cumulative death numbers with 95% uncertainty intervals. The average forecast suggests 81,114 deaths, but the range is large, from 38,242 to 162,106 deaths.

(Note: this was as of paper publishing, numbers are now lower.)

That is not how this works. That is not how any of this works.

The way this works, once we correct for all the obvious absurdities, is that this projection is a lower bound, a picture of how well things could possibly go.

If I am incorrect, and that is how any of this works, I have some very, very large bets I would like to place.

A Simpler Version of the Same Model

The model seems functionally the same as this:

Assume all reported numbers are accurate, and assume that no one gets infected once you nominally implement three of the four social distancing measures. Which you assume every US state will do within a week of the model's start.

Let’s simplify that again. 

Assume that no one under an even half-serious (three-quarters serious?) lockdown ever gets infected out-of-household.

We still see deaths for a few weeks, because there is a lag, but then it’s all over.
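To make that simplification concrete, here is a toy version with made-up numbers (not the IHME code): infections stop cold on the day three measures are nominally in place, and deaths simply play out the roughly three-week lag.

```python
# Toy version of the 'no one gets infected after lockdown' model.
# All parameters are illustrative, not fit to any data.
LAG_DAYS = 21           # assumed infection-to-death lag
DAILY_GROWTH = 1.25     # assumed pre-lockdown growth in daily infections
LOCKDOWN_DAY = 30       # day three distancing measures are nominally in place
IFR = 0.01              # assumed infection fatality rate

def infections(day: int) -> float:
    """Daily new infections: exponential growth, then exactly zero."""
    if day > LOCKDOWN_DAY:
        return 0.0
    return 10 * DAILY_GROWTH ** day

def deaths(day: int) -> float:
    """Daily deaths are just lagged infections times the IFR."""
    return IFR * infections(day - LAG_DAYS)

# Deaths keep rising for about three weeks after lockdown, then drop to zero.
for day in range(LOCKDOWN_DAY, LOCKDOWN_DAY + LAG_DAYS + 5, 5):
    print(day, round(deaths(day), 1))
```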

What the Model Outputs

As of when I wrote this line, this more-than-maximally-optimistic model projects 61,545 deaths in the United States.

People with power, people with influence, what some might call our “best people,” are on television and in the media predicting around 60,000 total American deaths.

I will say that again.

We are telling the public a death count that effectively implies that by about a month from now, and in many places earlier than that, no new American ever gets infected with Covid-19.

The model assumes that our half measures towards social distancing will have the same impact as was reported in Wuhan. In Wuhan, they blockaded apartment buildings, took anyone suspected of being positive away for isolation, and still, months after this model says there are no infections or even deaths, have severe movement restrictions and blockades up all over the place.

Whereas the New York City subways continue to run, and California thinks weed sales are an essential business.

I hope that my perception of this is wrong. Perhaps everyone knows this model is nonsense. Perhaps there are better ones out there – if you know of one you respect, please let me know about it!

But again, this is a maximally optimistic model on every front. I keep seeing people whose voice matters share this same final answer of predicting 60,000 deaths. If it’s not from a model doing more or less this, I don’t know how you get an answer in that ballpark.

Unless of course answers are being chosen without regard to reality.

 

 

 

Comments

The model seems not far off estimating peak hospitalization date, at least for states that are currently peaking like CA and NY. The peaks in places that are close to peaking can be pretty accurately estimated just with curve fitting, though; I assume that being fit to past data is why the model works OK for this.

It's clearly overly optimistic about the rate of drop-off after the peak in deaths, at least in some cases. Look at Spain and Italy. Right now here's how they look:

Italy: the graph shows 610 deaths on April 9. It predicts 335 on April 10 and 281 on April 11. Actual: 570 on April 10, 619 on April 11.

Spain: the graph shows 683 on April 8. It predicts 372, 304, and 262 over the next three days. Actual: 655, 634, 525.


The model for New York says deaths will be down to 48, 6% of the peak, in 15 days. Italy is 15 days from its peak of 919 and is only down to 619, 67% of the peak.

The model for the US as a whole is a little less obviously over-optimistic, assuming the peak really was April 10: it's only predicting a 40% decline over the next 15 days. The California model predicts an even slower decline. The model seems to think fast growth in cases during the outbreak phase leads to fast recovery, which has not been borne out thus far in Italy and Spain.
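For comparison, the implied average daily decline rates behind those percent-of-peak figures (my arithmetic, assuming a constant daily decline over the 15 days):

```python
# Average daily decline implied by 'X% of peak after 15 days'
# (assuming a constant daily decline rate).
def daily_decline(fraction_of_peak_after, days=15):
    return 1 - fraction_of_peak_after ** (1 / days)

print(daily_decline(0.06))  # model's New York path: ~17% fewer deaths per day
print(daily_decline(0.67))  # Italy's actual path so far: ~2.6% per day
print(daily_decline(0.60))  # model's US path (40% decline): ~3.3% per day
```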


This increases my estimated odds of the federal government attempting to suppress positive test numbers via defunding and not collecting statistics.

Italy seems to me to have stalled in decreasing R at about R=0.9. China and South Korea both got down to R=0.5. I have a concern that the UK has stalled at about R=1.3 (25% confidence) but I suspect that a few days more data may disprove this.

The US appears to still be on a downwards trajectory (currently just above R=1) but where exactly it stops will make a huge difference to the final tally. If I were to be making a model then this is the main place where I would focus my attention to give reasonable confidence intervals.
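For reference, here is a crude way to back out R figures like these from a short window of daily counts. It assumes simple exponential growth or decay and a fixed five-day serial interval; real R estimators are considerably more careful than this.

```python
# Crude R estimate from a short window of daily counts (cases or deaths).
# Assumes exponential growth/decay over the window and a fixed serial
# interval; illustrative only.
import math

SERIAL_INTERVAL = 5.0  # days, assumed

def rough_r(daily_counts):
    """R ~ exp(growth_rate * serial_interval), with the growth rate taken
    from the ratio of the last count to the first over the window."""
    days = len(daily_counts) - 1
    growth_rate = math.log(daily_counts[-1] / daily_counts[0]) / days
    return math.exp(growth_rate * SERIAL_INTERVAL)

# Example: a week of counts falling ~3% per day implies R ~ 0.86.
print(rough_r([1000 * 0.97 ** d for d in range(8)]))
```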

We need a new model, I think. The purpose of the IHME model was to figure out how to allocate hospital resources at the peak. Now we are roughly at or past the peak, and we need to figure out how to re-open and what calculated risks are worth taking to ensure that businesses don't get devastated even more. Hopefully someone is working on it.

Below is a simplified COVID-19 framework:

Data acquisition ---> social engineering based on the model ---> better results

Yes, a better model would definitely be helpful. However (as someone else pointed out indirectly earlier), to the best of my knowledge there are no good, robust models for dynamic systems with large lags. Models of that kind can easily produce chaotic, random-looking results. Thus, I believe that increasing data-acquisition capability is the key (South Korea's approach).

[This comment is no longer endorsed by its author]

April 17th Stat News story: Influential Covid-19 model uses flawed methods and shouldn’t guide U.S. policies, critics say:

“It’s not a model that most of us in the infectious disease epidemiology field think is well suited” to projecting Covid-19 deaths, epidemiologist Marc Lipsitch of the Harvard T.H. Chan School of Public Health told reporters this week, referring to projections by the Institute for Health Metrics and Evaluation at the University of Washington.
Other experts, including some colleagues of the model-makers, are even harsher. “That the IHME model keeps changing is evidence of its lack of reliability as a predictive tool,” said epidemiologist Ruth Etzioni of the Fred Hutchinson Cancer Center, home to several of the researchers who created the model, and who has served on a search committee for IHME. “That it is being used for policy decisions and its results interpreted wrongly is a travesty unfolding before our eyes.”

“Deaths lag positive tests by weeks.”

False. Deaths lag new infections by 3-4 weeks.

Positive tests are an extremely misleading stat and definitely do NOT represent actual infection rates, except in the few places where testing is widespread (i.e. a few small countries that have highly prioritized testing, like Iceland, Estonia, and Bahrain).

Epistemic Status: Something Is Wrong On The Internet.

If you think this applies, it would seem that "The Internet" is being construed so broadly that it includes the mainstream media, policymaking, and a substantial fraction of people, such that the "Something Is Wrong On The Internet" heuristic points against correction of public disinformation in general.

This is a post that is especially informative, aligned with justice, and likely to save lives, and so it would be a shame if this heuristic were to dissuade you from writing it.

"If I am incorrect, and that is how any of this works I have some very, very large bets I would like to place."

Maybe you can state what bets you'd like to make? Are you predicting that the number of cases or deaths in, say, NYC will look very different from consensus estimates?

An update by the OP on what bets they are willing to make would be much appreciated.

Zvi commenting on his The One Mistake Rule post: "E.g. if you want to bet me that there will be no American Covid-19 deaths in July, I will be very, very surprised."

Yes, the model isn't properly sensitive to uncertainties, but the projection that deaths are near zero isn't unreasonable, if transmission is stopped.

You can be pretty sure that whatever forecast is touted by authorities is one designed to increase support for, and compliance with, whatever measures they decided to take this time. Just as the previous one badly overestimated severity with social distancing (and probably without, too), I'm willing to believe this one is optimistic about a gradual reopening of physical commerce in select areas.

This post was important to my own thinking because it solidified the concept that there exists the thing Obvious Nonsense, that Very Serious People would be saying such Obvious Nonsense, that the government and mainstream media would take it seriously and plan and talk on such a basis, and that someone like me could usefully point out that this was happening, because when we say Obvious Nonsense oh boy are they putting the Obvious in Nonsense. It's strange to look back and think about how nervous I was then about making this kind of call, even when it was this, well, obvious. Making that first correct call makes a difference.

But in terms of being part of an overall 'best of' or 'most important' collection for a community as a whole, it would only count if you think it had the same effect on you/others, and made it clear how nonsensical all the Very Serious People could be, and that you had to think for yourself. If all it did for others was point out that the Obvious Nonsense was obvious nonsense in this particular case, there's not much point.

Note that the model assumes that those level 4 measures remain in place forever.


What do you predict will be, assuming level 4 restrictions remain in place, the last day in which fewer than 100 people are infected who go on to become symptomatic in King County, and in CA?


Again assuming that there is a continuous level 4 quarantine, when do you predict the first non-weekend day without a C19-attributed death will be in those areas?

The money quote is misleading, because they don't actually have a mechanistic model. They're just fitting a parameterized logistic curve to all the death data in the world. They incorporate some black-box factor that causes more deaths without social distancing, and arbitrarily declare that factor's effect is 66%/33%/0 with 1/2/3+ social distancing measures. The goal isn't to claim that nobody's ever infected in the 0 case, just that the not-social-distancing factor is gone, so our course should follow the empirical progression of countries that do social distancing.
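A minimal sketch of that kind of fit, assuming a plain logistic form for cumulative deaths and made-up data; the actual model layers the social-distancing covariate and other adjustments on top of something like this.

```python
# Fit a parameterized logistic curve to cumulative deaths -- the basic move
# described above. The data here are synthetic; this is not the IHME code.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, t0, r):
    """Cumulative deaths: plateau K, midpoint t0, steepness r."""
    return K / (1.0 + np.exp(-r * (t - t0)))

days = np.arange(0, 40)
fake_cumulative_deaths = (logistic(days, K=60000, t0=25, r=0.25)
                          + np.random.default_rng(0).normal(0, 500, size=days.size))

(K, t0, r), _ = curve_fit(logistic, days, fake_cumulative_deaths,
                          p0=[50000, 20, 0.2])
print(round(K), round(t0, 1), round(r, 3))  # fitted plateau ~ projected total deaths
```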

From a quick skim of the paper it looks like they effectively assume that implementing any 3 of those social distancing measures at the same time that Wuhan implemented their lockdown would lead to the same number of total deaths (with some adjustments).

This is less aggressive than assuming no new deaths after lockdown, but does seem quite optimistic given that the lockdown in Wuhan seems (much) more severe than school closures + travel restrictions + non-essential business closures. And this part of the model seems to be assumed rather than fit to data.

Your rule of models is flawed. Newton's mechanical model of the universe is good enough for all practical purposes except where it isn't. Then you have to go relativistic or quantum. Probabilities only apply to future events. Once events have been observed, the probability changes.

The real rules have no exceptions

In Newton’s case the real rule (or at least the practical rule) is the meta-rule of when Newton is good enough and what to use when it isn’t. Without that knowledge you can’t form a meta-rule and you don’t know when to believe the model and when not to. You can maybe assess it probabilistically but I wouldn’t want to place much on the result.

They are not very explicit about it (which is a huge problem by itself), but they seem to be saying that they are only predicting the "first wave". So they are not predicting 0 deaths after July; they just define those deaths not to be part of the "first wave" anymore. The way they present the model's predictions is even more unbelievably wrong than the model itself!

Even with that as the goal this model is useless - social distancing demonstrably does not lead to 0 new infections. Even Wuhan didn't manage that, and they were literally welding people's doors shut.

But don't you see - those infections are a second wave, so do not have to be counted. The model is almost tautologically true that way. But terribly misleading, and very irresponsibly so.


Interesting that the model hasn't been updated since April 13, which was the point when daily deaths started to rise above the model's predictions.


Update: I'm quite surprised that total expected deaths have gone down in today's update. I would have expected them to rise after this week's data.

The problem the modelers have is how to account for reduced transmission in a continuous model. If you don't set it to zero, you can end up with 1/10,000th of a person still sick, and then the virus comes back full force a couple months later, despite having literally eradicated it. So yes, setting it to zero is wrong, but not doing so is also wrong. Because all models are wrong.

Perhaps you think they should be using an entirely different and more sophisticated model, and maybe they should, but it turns out that those have other drawbacks, like needing far more data than we have to calibrate and build, or needing you to make up inputs.
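As a toy illustration of the fractional-person problem (my own sketch, a standard discrete-step SIR with made-up parameters): in a continuous model the number infected never reaches exactly zero, so the epidemic can regrow from a vanishingly small remnant once restrictions lift.

```python
# Toy SIR run showing the 'fraction of a person still sick' problem.
# Parameters are made up; the point is only that I never hits exactly zero
# under restrictions, so reopening later re-ignites growth.
N = 1_000_000
S, I, R = N - 100.0, 100.0, 0.0
gamma = 0.1            # recovery rate (mean 10-day infectious period)
beta_lockdown = 0.05   # transmission under restrictions (R_eff ~ 0.5)
beta_open = 0.3        # transmission after reopening (R_eff ~ 3.0)

for day in range(300):
    beta = beta_lockdown if day < 150 else beta_open
    new_inf = beta * S * I / N
    new_rec = gamma * I
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    if day in (149, 200, 250):
        print(day, round(I, 6))  # tiny but nonzero at day 149, then regrowth
```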

With actual numbers very, very large, this isn't remotely a concern; the domain of a correct continuous model might be "so long as there are at least 100 positive tests per week" or the like. Once we're there, we obviously need to treat things more discretely.

It's just not a sufficient reason for the modelers to make this egregious an optimistic error in setting R as a function of social distancing measures.

those have other drawbacks, like needing far more data than we have to calibrate and build, or needing you to make up inputs

Those are exactly the drawbacks Zvi is pointing to! And they're not even putting distributions on the parameter values they pulled from their asses!

(1)

A wrong model can be useful if the actions based on the model can compensate for the model's error effectively. Usually, you need to know some properties of that error.

(2)

Even a wrong model can be very useful. For example: the earth is flat. That wrong model set up the question correctly, so that people could start thinking about the shape of the earth.

[This comment is no longer endorsed by its author]