Weekly Incidence vs Cumulative Infections

by jefftk
7th Sep 2023

6 comments

jefftk (2y):

The equations aren't displaying properly because it looks like LW currently strips MathML (only a specific list of allowed tags make it through). You can see the version with full math at https://www.jefftk.com/p/weekly-incidence-vs-cumulative-infections

jimrandomh (2y):

I've adjusted our sanitizer to let MathML through (will take effect after PR review and deploy), which should affect future crossposts. For this post, I used my moderator power to edit the stuff the sanitizer removed back in.

jefftk (2y):

Awesome, thanks!

dr_s (2y):

LW uses its own system for equations; you can just use dollar signs in Markdown or Ctrl+M/Ctrl+4 in LessWrong Docs mode.

So you get, for example,

$$i(t) = \frac{dc(t)}{dt}$$

jimrandomh (2y):

Jeff is someone for whom we've configured LW to auto-crosspost posts from his blog via RSS, so it's not authored within the LW editor.

jefftk (2y):

I draft my posts in HTML and they're imported into LW via RSS. I can't make just part of a post Markdown or LessWrongDocs.


Imagine you have a goal of identifying a novel disease by the time some small fraction of the population has been infected. Many of the signs you might use to detect something unusual, however, such as doctor visits or shedding into wastewater, will depend on the number of people currently infected. How do these relate?

Bottom line: if we limit our consideration to the time before anyone has noticed something unusual, where people aren't changing their behavior to avoid the disease, the vast majority of people are still susceptible, and spread is likely approximately exponential, then:

$$\text{incidence} = \text{cumulative infections} \times \frac{\ln(2)}{\text{doubling time}}$$

Let's derive this! We'll call "cumulative infections" $c(t)$, and "doubling time" $T_d$. So here's cumulative infections at time $t$:

$$c(t) = 2^{t/T_d}$$

The math will be easier with natural exponents, so let's define $k = \frac{\ln(2)}{T_d}$ and switch our base:

$$c(t) = 2^{t/T_d} = e^{t \ln(2)/T_d} = e^{kt}$$

Let's call "incidence" $i(t)$, which will be the derivative of $c(t)$:

$$i(t) = \frac{d}{dt}c(t) = \frac{d}{dt}e^{kt} = k\,e^{kt}$$

And so:

$$\frac{i(t)}{c(t)} = \frac{k\,e^{kt}}{e^{kt}} = k = \frac{\ln(2)}{T_d}$$

Which means: $i(t) = c(t)\,\frac{\ln(2)}{T_d}$
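
If you want to sanity-check the algebra, here's a quick symbolic version of the same derivation (a sketch, assuming sympy is available):

import sympy as sp

t, T_d = sp.symbols('t T_d', positive=True)
c = 2**(t / T_d)           # cumulative infections, c(t) = 2^(t/T_d)
i = sp.diff(c, t)          # incidence is the derivative of c(t)
print(sp.simplify(i / c))  # log(2)/T_d, i.e. ln(2) / doubling time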

What does this look like? Here's a chart of weekly incidence at the time when cumulative incidence reaches 1%:

For example, if it's doubling weekly, then when 1% of people have ever been infected, 0.69% of people became infected in the last seven days, representing 69% of everyone who has ever been infected. If it's doubling every three weeks, then when 1% of people have ever been infected, 0.23% of people became infected this week, or 23% of cumulative infections.
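
Those percentages come straight out of the closed form; here's a minimal check of the arithmetic:

import math

cumulative = 0.01  # 1% of the population has ever been infected
for doubling_weeks in (1, 3):
  weekly_incidence = cumulative * math.log(2) / doubling_weeks
  print(doubling_weeks, round(weekly_incidence, 4))
# 1 0.0069  -> 0.69% this week, 69% of cumulative infections
# 3 0.0023  -> 0.23% this week, 23% of cumulative infections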

Is this really right, though? Let's check our work with a bit of very simple simulation:

def simulate(doubling_period_weeks):
  # Step forward one week at a time until cumulative infections pass
  # the threshold, then report that final week's incidence.
  cumulative_infection_threshold = 0.01
  initial_weekly_incidence = 0.000000001
  cumulative_infections = 0
  current_weekly_incidence = 0
  week = 0
  while cumulative_infections < \
        cumulative_infection_threshold:
    week += 1
    current_weekly_incidence = \
        initial_weekly_incidence * 2**(
          week/doubling_period_weeks)
    cumulative_infections += \
        current_weekly_incidence

  return current_weekly_incidence

for f in range(50, 500):
  doubling_period_weeks = f / 100
  print(doubling_period_weeks,
        simulate(doubling_period_weeks))

This looks like:

The simulated line is jagged, especially for short doubling periods, but that's not especially meaningful: it comes from running the calculation a week at a time, so some weeks land just above or just below the (arbitrary) 1% goal.
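
One way to see how closely the simulation tracks the closed form is to print both side by side, reusing the simulate() function above (a rough sketch; the simulation stops slightly past the 1% threshold, so the two won't match exactly):

import math

for f in (50, 100, 200, 400):
  doubling_period_weeks = f / 100
  simulated = simulate(doubling_period_weeks)
  analytic = 0.01 * math.log(2) / doubling_period_weeks
  print(doubling_period_weeks, simulated, analytic)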

Comment via: facebook, mastodon