DirectedEvolution

Pandemic Prediction Checklist: H5N1

Pandemic Prediction Checklist: Monkeypox

I have lost my trust in this community’s epistemic integrity, no longer see my values as being in accord with it, and don’t see hope for change. I am therefore taking an indefinite long-term hiatus from reading or posting here.
 

Correlation does imply some sort of causal link.

For guessing its direction, simple models help you think.

Controlled experiments, if they are well beyond the brink

Of .05 significance will make your unknowns shrink.

Replications prove there's something new under the sun.

Did one cause the other? Did the other cause the one?

Are they both controlled by something already begun?

Or was it their coincidence that caused it to be done?


Great, that's clarifying. I will start with Tamiflu/Xofluza efficacy as it's important, and I think it will be most tractable via a straightforward lit review.

I've been researching this topic in my spare time and would be happy to help. Do you have time to clarify a few points? Here are some thoughts and questions that came up as I reviewed your post:

  1. Livestock vs. Wild Birds
    The distinction between livestock and wild birds is significant. Livestock are in much closer contact with humans and are biologically closer as well. How granular of an analysis are you interested in here?
  2. US-specific H5N1 Trends
    It's peculiar that H5N1 seems so prevalent in the US. Could this be due to measurement bias, or does the US simply have more factory farming? How interested are you in exploring the reasons behind this trend?
  3. Citations and Depth
    While most points aren’t cited (which is fine), it might be valuable to compile both a list of key aspects and resources for further reading. Are you looking for a more polished, thoroughly cited document?
  4. Biological Factors of Severity
    Binding to human receptors is just one factor controlling the severity and infectiousness of a virus. Would you like a deeper dive into the biology of respiratory infections and what makes them dangerous?
  5. Tamiflu and Xofluza
    Wikipedia notes that Tamiflu has limited evidence of being worth the side effects. Are you interested in a detailed evaluation of its effectiveness? Similarly, how interested are you in assessing the likelihood of shortages and efficacy of Tamiflu/Xofluza during an H5N1 pandemic?
  6. Over-the-counter Tests
    Is the issue a lack of over-the-counter tests specifically for H5N1, or for flu in general? General flu PCR testing is likely available—should we investigate this?
  7. Trajectory of Illness
    For past H5N1 cases, is there a treatable "window of opportunity" before the infection becomes severe? How critical is it to determine whether mild cases might escalate and require aggressive intervention?
  8. Historical Epidemics
    I could pull together a list of relevant modern epidemics (human-to-human airborne transmission without an animal vector). Are there any specific criteria you'd like to prioritize?
  9. Cross Immunity
    While cross immunity seems important, determining decision-relevant information may be challenging. Would you like a summary of existing knowledge or only actionable insights?
  10. Respiratory Infection Dynamics
    Epidemiologists suggest that respiratory infections are deadlier lower in the lungs but more infectious higher in the system. Is this a fundamental tradeoff? Would a "both-and" virus be possible? What evolutionary advantages might viruses have in infecting the lower lungs?
  11. Government Stockpiles and Interventions
    What stockpiles of H5N1 vaccines exist? What options are available for increasing testing and vaccination of livestock? How are governments incentivizing medication, vaccine, and PPE production?
  12. Political Considerations
    Should we examine how a Trump presidency or similar political scenarios might influence the interaction between local and federal health agencies?
  13. Species-to-Species Spread
    The rapid spread of H5N1 to multiple bird and mammal species raises the question of whether humans will inevitably be affected. Is this worth exploring in-depth?
  14. Mortality and Long-term Effects
    What demographics do other flu strains tend to affect most? Are there long-term side effects comparable to "long COVID"?
  15. Mutation and Vaccine Efficacy
    How quickly do flu strains, especially H5N1, tend to mutate? What implications does this have for vaccine efficacy and cross-reactivity? How much asymptomatic spread occurs with flu, and how long does it remain airborne?
  16. No Deaths Yet
    How should we update based on the fact that, contrary to past occurrences of H5N1 that had a ~50% CFR, none of the 58 confirmed cases have died?

Finally, I’d be interested to hear which of these questions or areas you find most compelling. Are there other questions or directions you’d like to explore? This will help me prioritize my efforts.


Epidemic Scares That Did Not Pan Out

  • 1976 - Legionnaires' Disease: Initially alarming but identified as a bacterial infection treatable with antibiotics. (Not relevant: bacterial)
  • 2001 - Anthrax Attacks: Bioterrorism-related bacterial outbreak causing fear but limited deaths. (Not relevant: bacterial)
  • 2005 - Avian Flu (H5N1): No confirmed US human cases despite global fears. (Relevant)
  • 2014 - Ebola: Strict public health measures limited US cases to three. (Relevant)
  • 2016 - Zika Virus: Local transmission limited to parts of Florida and Texas. (Not relevant: mosquito vector)

I had to write several new Python versions of the code to explore the problem before it clicked for me.

I understand the proof, but the closest I can get to a true intuition that B is bigger is:

  • Imagine you just rolled your first 6, haven't rolled any odds yet, and then you roll a 2 or a 4.
  • In the consecutive-6 condition, it's quite unlikely you'll end up keeping this sequence, because you now still have to get two 6s before rolling any odds.
  • In the two-6 condition, you are much more likely to end up keeping this sequence, which is guaranteed to include at least one 2 or 4, and likely to include more than one before you roll that 6.

I think the main thing I want to remember is that "given X" or "conditional on X" means that you use the unconditional probability distribution and throw out results not conforming to X, not that you substitute a different generating process that only ever produces events conforming to X.
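This "condition by filtering" view is easy to check with a quick Monte Carlo sketch. This is my own reconstruction of the puzzle as described above: roll a fair die, condition on no odd numbers ever appearing, and compare three stopping rules (first 6, two consecutive 6s, two 6s in total). The stopping rules are assumptions read off the comment, not the original post's exact wording.

```python
import random

def conditional_mean_length(stop_fn, n_trials, rng):
    """Estimate E[sequence length | no odd rolls] by rejection sampling:
    run the unconditional roll process and keep only sequences containing
    no odd numbers. (Breaking out at the first odd roll is equivalent to
    finishing the sequence and then discarding it, since a single odd
    already disqualifies the whole sequence.)"""
    lengths = []
    for _ in range(n_trials):
        rolls = []
        while True:
            r = rng.randint(1, 6)
            if r % 2 == 1:
                break                       # sample rejected: an odd appeared
            rolls.append(r)
            if stop_fn(rolls):
                lengths.append(len(rolls))  # sample accepted and complete
                break
    return sum(lengths) / len(lengths)

first_six  = lambda rolls: rolls[-1] == 6
two_in_row = lambda rolls: rolls[-2:] == [6, 6]
two_total  = lambda rolls: rolls.count(6) == 2

rng = random.Random(0)
for name, stop in [("first 6", first_six),
                   ("two consecutive 6s", two_in_row),
                   ("two 6s total", two_total)]:
    print(name, conditional_mean_length(stop, 200_000, rng))
```

By my own back-of-envelope Markov-chain calculation (worth double-checking), the three conditional means should come out near 1.5, 30/11 ≈ 2.73, and 3.0 respectively, so the two-6s-total condition ("B") does come out larger, even though unconditionally two consecutive 6s takes far longer on average (42 rolls vs. 12).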

Well, ideas from outside the lab, much less from outside academia, are unlikely to be well suited to that lab’s specific research agenda. So even if an idea is in theory suited to some lab, the work of matching it to that lab may make it not worth pursuing.

There are a lot of cranks and they generate a lot of bad ideas. So a < 5% probability seems not unreasonable.

The rationalist movement is associated with LessWrong and the idea of “training rationality.” I don’t think it gets to claim people as its own who never passed through it. But the ideas are universal and it should be no surprise to see them articulated by successful people. That’s who rationalists borrowed them from in the first place.

This model also seems to rely on an assumption that there are more than two viable candidates, or that voters will refuse to vote at all rather than vote for a candidate who supports half of their policy preferences.

If there were only two candidates and all voters chose whoever was closest to their policy preference, both would occupy the 20% block, since the extremes of the party would vote for them anyway.

But if there were three rigid categories and either three candidates, one per category, or voters who refused to vote for any candidate outside their preferred category, then the model would predict that more extreme candidates win.
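The two regimes above can be sketched as a toy model. The specific numbers here (a 101-voter electorate spread over a line, a 40/20/40 bloc split) are my own illustrative assumptions, not figures from the original post:

```python
def proximity_shares(candidate_positions, voter_positions):
    """Every voter votes for the nearest candidate (first listed wins ties)."""
    shares = [0] * len(candidate_positions)
    for v in voter_positions:
        nearest = min(range(len(candidate_positions)),
                      key=lambda i: abs(candidate_positions[i] - v))
        shares[nearest] += 1
    return shares

def bloc_votes(candidate_blocs, bloc_sizes):
    """Voters vote only for a candidate in their own bloc; otherwise abstain."""
    return [bloc_sizes.get(b, 0) for b in candidate_blocs]

# Regime 1: two candidates, proximity voting -> the more central one wins.
voters = [i / 100 for i in range(101)]       # electorate spread evenly on [0, 1]
print(proximity_shares([0.5, 0.8], voters))  # [66, 35]: the median candidate wins

# Regime 2: three rigid blocs, one candidate each -> an extreme bloc wins.
sizes = {"left": 40, "center": 20, "right": 40}
print(bloc_votes(["left", "center", "right"], sizes))  # [40, 20, 40]
```

Under proximity voting, the candidate nearer the voter median captures everyone on their side of the midpoint between the two candidates, which is the median-voter-theorem logic; under rigid blocs with abstention, the candidate of the largest bloc wins regardless of how extreme that bloc is.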

I'm torn between the two for American elections, because:

  • The "correlated preferences" model here feels more true to life, psychologically.
  • Yet American politics goes from extremely disengaged primaries to a two-candidate FPTP general election, where the median voter theorem and the "correlated preferences" model seem to predict the same thing.
  • Voter turnout seems like a critically important part of democratic outcomes, and a model that only takes the order of policy preferences into account, rather than the intensity of those preferences, seems too limited.
  • Politicians often seem startlingly incompetent at inspiring the electorate, and it seems like we should think perhaps in "efficient market hypothesis" terms, where getting a political edge is extremely difficult because if anybody knew how to do it reliably, everybody would do it and the edge would disappear. In that sense, while both models can explain facets of candidate behavior and election outcomes, neither of them really offers a sufficiently detailed picture of elections to explain specific examples of election outcomes in a satisfying way. 

Yes, I agree it's worse. If only a better understanding of statistics by PhD students and research faculty were at the root of our cultural confusion around science.

It’s not necessary for each person to personally identify the best minds on all topics and exclusively defer to them. It’s more a heuristic of deferring to the people those you trust most defer to on specific topics, and calibrating your confidence according to your own level of ability to parse who to trust and who not to.

But really these are two separate issues: how to exercise judgment in deciding who to trust, and the causes of research being “memetic.” I still say research is memetic not because mediocre researchers are blithely kicking around nonsense ideas that take on an exaggerated life of their own, but mainly because of the political and business ramifications of the research.

The idea that wine is good for you is memetic both because of its way of poking at “established wisdom” and because the alcohol industry sponsors research in that direction.

Similar for implicit bias tests, which are a whole little industry of their own.

Clinical trials represent decades of investment in a therapeutic strategy. Even if an informed person would be skeptical that current Alzheimer’s approaches are the way to go, businesses that have invested in it are best served by gambling on another try and hoping to turn a profit. So they’re incentivized to keep plugging the idea that their strategy really is striking at the root of the disease.

It's not evidence, it's just an opinion!

But I don't agree with your presumption. Let me put it another way. Science matters most when it delivers information that is accurate and precise enough to be decision-relevant. Typically, we're in one of a few states:

  • The technology is so early that no level of statistical sophistication will yield decision-relevant results. Example: most single-cell omics in 2024 that I'm aware of, with respect to devising new biomedical treatments (this is my field).
  • The technology is so mature that any statistics required to parse it are baked into the analysis software, so that they get used by default by researchers of any level of proficiency. Example: Short read sequencing, where the extremely complex analysis that goes into obtaining and aligning reads has been so thoroughly established that undergraduates can use it mindlessly.
  • The technology's in a sweet spot where a custom statistical analysis needs to be developed, but it's also so important that the best minds will do that analysis and a community norm exists that we defer to them. Example: clinical trial results.

I think what John calls "memetic" research is just areas where the topics or themes are so relevant to social life that people reach for early findings in immature research fields to justify their positions and win arguments. Or where a big part of the money in the field comes from corporate consulting gigs, where the story you tell determines the paycheck you get. But that's not the fault of the "median researcher," it's a mixture of conflicts of interest and the influence of politics on scientific research communication. 

In academic biomedicine, at least, which is where I work, it’s all about tech dev. Most of the development is based on obvious signals and conceptual clarity. Yes, we do study biological systems, but that comes after years, even decades, of building the right tools to get a crushingly obvious signal out of the system of interest. Until that point all the data is kind of a hint of what we will one day have clarity on rather than a truly useful stepping stone towards it. Have as much statistical rigor as you like, but if your methods aren’t good enough to deliver the data you need, it just doesn’t matter. Which is why people read titles, not figure footnotes: it’s the big ideas that really matter, and the labor going on in the labs themselves. Papers are in a way just evidence of work being done.

That’s why I sometimes worry about LessWrong. Participants who aren’t professionally doing research and spend a lot of time critiquing papers over niche methodological issues may be misallocating their attention, or searching under the spotlight. The interesting thing is growth in our ability to measure and manipulate phenomena, not the exact analysis method in one paper or another. What’s true will eventually become crushingly obvious, and you won’t need fancy statistics at that point; before then the data will be crap, so the fancy statistics won’t be much use. Obviously there’s a middle ground, but I think the vast majority of time is spent in the “too early to tell” or “everybody knows that” phase. If you can’t participate in that technology development in some way, I am not sure it’s right to say you are “outperforming” anything.
