jimrandomh

LessWrong developer, rationalist since the Overcoming Bias days. Connoisseur of jargon.

jimrandomh's Comments

Jimrandomh's Shortform

This tweet raised the question of whether masks are more effective when placed on sick people (blocking outgoing droplets) or on healthy people (blocking incoming droplets). Everyone in public or in a risky setting should have a mask, of course, but we still need to allocate the higher-quality vs. lower-quality masks somehow. When sick people are few and obvious, and masks are scarce, masks should obviously go on the sick people. However, COVID-19 transmission is often presymptomatic, and masks (especially lower-quality improvised masks) are not becoming less scarce over time.

If you have two people in a room and one mask, one infected and one healthy, which person should wear the mask? Thinking about the physics of liquid droplets, I think the answer is that the infected person should wear it.

  1. A mask on a sick person prevents the creation of fomites; masks on healthy people don't.
  2. Outgoing particles have a larger size and shrink due to evaporation, so they'll penetrate a mask less, given equal kinetic energy. (However, kinetic energies are not equal; they start out fast and slow down, which would favor putting the mask on the healthy person. I'm not sure how much this matters.)
  3. Particles that stick to a mask but then un-stick lose their kinetic energy in the process, which helps if the mask is on the sick person, but doesn't help if the mask is on the healthy person.

Overall, it seems like for a given contact-pair, a mask does more good if it's on the sick person. However, mask quality also matters in proportion to the number of healthy-sick contacts it affects; so, upgrading the masks of all of the patients in a hospital would help more than upgrading the masks of all the workers in that hospital, but since the patients outnumber the workers, upgrading the workers' masks probably helps more per-mask.
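As a toy illustration of that total-benefit vs. per-mask distinction, here's a back-of-the-envelope calculation; every number in it is invented for the example, not an estimate of anything real:

```python
# Toy model: a mask upgrade's benefit is proportional to the number of
# healthy-sick contact pairs that mask participates in, times a per-contact
# effectiveness gain. All numbers below are invented for illustration only.

n_patients = 200          # sick people in the hospital (assumed)
n_workers = 50            # healthy workers (assumed)
contacts_per_worker = 40  # patient contacts per worker per day (assumed)

total_contacts = n_workers * contacts_per_worker          # 2000 contact pairs
contacts_per_patient_mask = total_contacts / n_patients   # 10
contacts_per_worker_mask = contacts_per_worker            # 40

# Per the argument above, assume upgrading the sick person's mask reduces
# per-contact transmission risk more than upgrading the healthy person's mask.
gain_on_sick = 0.30     # assumed per-contact risk reduction
gain_on_healthy = 0.15  # assumed per-contact risk reduction

benefit_all_patients = n_patients * contacts_per_patient_mask * gain_on_sick
benefit_all_workers = n_workers * contacts_per_worker_mask * gain_on_healthy

print(f"Upgrade all patient masks: total {benefit_all_patients:.0f}, "
      f"per mask {benefit_all_patients / n_patients:.1f}")
print(f"Upgrade all worker masks:  total {benefit_all_workers:.0f}, "
      f"per mask {benefit_all_workers / n_workers:.1f}")
# Total benefit favors upgrading the patients' masks; per-mask benefit
# favors upgrading the (fewer) workers' masks.
```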

Has the effectiveness of fever screening declined?

For now, mods do it. This will be expanded to all users and to a wider variety of tags when tagging leaves beta.

History's Biggest Natural Experiment

It would show up as people with a particular year of birth having a much lower risk than people born one year earlier or later. Since most research includes collecting date of birth, this should be easy to check.
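A minimal sketch of what that check could look like, assuming a patient-level table with a birth year and a binary severe-outcome flag (both the data and the column names here are hypothetical):

```python
import pandas as pd

# Hypothetical patient-level data: year of birth and a binary severe-outcome flag.
# Column names and values are made up; real data would come from case records.
df = pd.DataFrame({
    "birth_year": [1952, 1953, 1953, 1954, 1954, 1954, 1955, 1955, 1956],
    "severe":     [1,    0,    1,    0,    0,    0,    1,    1,    0],
})

# Risk of a severe outcome by birth-year cohort.
risk = df.groupby("birth_year")["severe"].mean()

# Compare each cohort against the average of its immediate neighbours;
# a protected cohort would show up as a large negative difference.
neighbour_avg = (risk.shift(1) + risk.shift(-1)) / 2
print((risk - neighbour_avg).dropna().sort_values())
```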

Virus As A Power Optimisation Process: The Problem Of Next Wave

Biological global catastrophic risks were neglected for years, while AGI risks were on the top.

This is a true statement about the attention allocation on LessWrong, but definitely not a true statement about the world's overall resource allocation. Total spending on pandemic preparedness is and was orders of magnitude greater than spending on AGI risk. It's just a hard problem, which requires a lot of expensive physical infrastructure to prepare for.

Will nCoV survivors suffer lasting disability at a high rate?

"Impaired consciousness" doesn't sound unusual for patients with severe fever, but five strokes out of 214 hospitalized patients is pretty noteworthy.

SARS-CoV-2 pool-testing algorithm puzzle

It's a newspaper article based on an unpublished paper; that reference class of writing can't be trusted to report the caveats.

(I could be wrong about the mechanics of PCR; I'm not an expert in it; but the article itself doesn't provide much information about that.)

SARS-CoV-2 pool-testing algorithm puzzle

This can only be used on groups where everyone is asymptomatic, and there will be low limits on the pool size even then.

The first step of a PCR test is RNA amplification; you use enzymes which take a small amount of RNA in the sample, and produce a large number of copies. The problem is that there are other RNA viruses besides SARS-CoV-2, such as influenza, and depending when in the disease course the samples were taken, the amount of irrelevant RNA might exceed the amount of SARS-CoV-2 RNA by orders of magnitude, which would lead to a false negative.
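For concreteness, here's a sketch of the standard two-stage (Dorfman) pooling scheme with the pool size kept small; the prevalence, pool sizes, and sample counts are placeholders, and none of them come from the article itself:

```python
import math

def expected_tests(n, pool_size, prevalence):
    """Expected number of tests under two-stage (Dorfman) pooling.

    Stage 1 tests each pool of `pool_size` samples; stage 2 retests every
    sample from a positive pool individually. Infections are assumed
    independent, each with probability `prevalence`.
    """
    n_pools = math.ceil(n / pool_size)
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    return n_pools + n_pools * pool_size * p_pool_positive

# The dilution concern above is why pool_size can't just keep growing: past
# some point, one positive sample's RNA may be too dilute relative to the
# rest of the pool to amplify reliably. So keep pools small and see how many
# tests 1000 asymptomatic people at ~1% prevalence would need.
for pool_size in (4, 8, 16):
    print(pool_size, round(expected_tests(1000, pool_size, 0.01)))
```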

Preprint says R0=~5 (!) / infection fatality ratio=~0.1%. Thoughts?

tl;dr: Someone wrote buggy R code and rushed a preprint out the door without proofreading or sanity checking the numbers.

The main claim of the paper is this:

The total number of estimated laboratory-confirmed cases (i.e. cumulative cases) is 18913 (95% CrI: 16444–19705) while the actual numbers of reported laboratory-confirmed cases during our study period is 19559 as of February 11th, 2020. Moreover, we inferred the total number of COVID-19 infections (Figure S1). Our results indicate that the total number of infections (i.e. cumulative infections) is 1905526 (95% CrI: 1350283–2655936)
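Dividing the paper's estimated confirmed-case count by its estimated total infections gives the implied detection rate; a quick check using only the point estimates and interval bounds quoted above:

```python
# Point estimates quoted above.
confirmed = 18_913
infections = 1_905_526
print(confirmed / infections)   # ~0.0099: under 1% of infections confirmed

# Most favourable corner of the quoted credible intervals for the detection
# rate: highest confirmed-case bound over lowest total-infection bound.
print(19_705 / 1_350_283)       # ~0.0146: at most ~1.5%
```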

So, they conclude that less than 1% of cases were detected, and claim 95% confidence that no more than 1.5% of cases were detected. They combine this with the (unstated) assumption that 100% of deaths were detected and reported, and conclude that the IFR is therefore two orders of magnitude lower than is commonly believed. This is an extraordinary claim, which the paper doesn't even really acknowledge; they just sort of throw numbers out and fail to mention that their numbers are wildly different from everyone else's. Their input data is

the daily series of laboratory-confirmed COVID-19 cases and deaths in Wuhan City and epidemiological data of Japanese evacuees from Wuhan City on board government-chartered flights

This is not a dataset which is capable of supporting such a conclusion. On top of that, the paper has other major signals of low quality. The paper is riddled with typos. And there's this bit:

Serial interval estimates of COVID-19 were derived from previous studies of nCov, indicating that it follows a gamma distribution with the mean and SD at 7.5 and 3.4 days, respectively, based on [14]

In this post I collected estimates of COVID-19's serial interval. 7.5 days was the chronologically first published estimate, was the highest, and was an outlier based on a small sample size. Strangely, reference [14] does not point to the paper which estimated 7.5 days; that's reference [21], whereas reference [14] points to this paper, which makes no mention of the serial interval at all.
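For reference, a mean of 7.5 days and SD of 3.4 days pin down the gamma distribution's shape and scale; a quick conversion using the standard moment-matching parameterization (the numbers are the paper's, the conversion is mine):

```python
# Serial interval mean and SD (days), as quoted from the paper.
mean, sd = 7.5, 3.4

# Standard moment-matching for a gamma distribution.
shape = (mean / sd) ** 2   # k = mean^2 / variance
scale = sd ** 2 / mean     # theta = variance / mean

print(shape, scale)        # ~4.87 and ~1.54; shape * scale recovers the 7.5-day mean
```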

Is the Covid-19 crisis a good time for x-risk outreach?

Right now, most people are hyperfocused on COVID-19; this creates an obvious incentive for people to try to tie their pet issues to it, which I expect a variety of groups to try and which I expect to mostly backfire if tried in the short run. (See, for example, the reception the WHO got when it tried to talk about stigma and discrimination; people interpreted it as the output of an "always tie my pet issue to the topic du jour" algorithm and ridiculed them for it.) Talking about AI risk in the current environment risks provoking the same reaction, because it probably would in fact be coming from a tie-my-pet-topic algorithm.

A month from now, however, will be a different matter. Once people start feeling like they have attention to spare, and have burned out on COVID-19 news, I expect them to be much more receptive to arguments about tail risk and to model-based extrapolation of the future than they were before.

Covid-19 Points of Leverage, Travel Bans and Eradication

To start, the severity estimates that Joscha assumed were worst case and are implausible. The very alarmist Ferguson et al. paper has much lower numbers than Joscha Bach's claim that "20% will develop a severe case and need medical support to survive."

I believe the 20% figure comes from the WHO joint report which says

13.8% have severe disease (dyspnea, respiratory frequency ≥30/minute, blood oxygen saturation ≤93%, PaO2/FiO2 ratio <300, and/or lung infiltrates >50% of the lung field within 24-48 hours) and 6.1% are critical (respiratory failure, septic shock, and/or multiple organ dysfunction/failure).

13.8% severe plus 6.1% critical is about 20%, which matches Joscha's figure. There are a lot of modeling assumptions behind those percentages, and the true number is probably lower, but not so low as to invalidate Joscha's point.
