AdamGleave

Immobile AI makes a move: anti-wireheading, ontology change, and model splintering

My sense is that Stuart's assumption of an initially-specified reward function is a simplification, not a key part of the plan, and that he'd also be interested in e.g. generalizing a reward function learned from other sources of human feedback, like preference comparison.

IRD (inverse reward design) would do well on this problem because it maintains an explicit distribution over possible reward functions, but this property isn't really unique to IRD -- Bayesian IRL or preference comparison would have it too.

What fraction of breakthrough COVID cases are attributable to low antibody count?

It could be net-negative if receiving a booster shot caused stronger imprinting, making future immune responses less adaptive. I don't have a good sense of whether this original antigenic sin effect has already saturated after two doses (or even a single dose), or whether it continues to get stronger.

My sense is this is an open question. From Petras et al (2021):

As suggested by a recent observation in naturally immunized individuals receiving two doses of the Pfizer COVID-19 (Comirnaty) vaccine, original antigenic sin may pose a problem in future research and development of vaccines.16 While the first dose of the vaccine was able to raise the preexisting levels of functional and specific antibodies, these either failed to change or even declined after the second dose (virus-neutralizing antibodies), and the same applied to the levels of antigen-specific antibody-secreting cells. As this observation was made in only a small group of 13 subjects with naturally acquired immunity against SARS-CoV-2, who had rather average or below-average levels of the antibodies assessed, one may expect an enhanced effect of original antigenic sin after new vaccination against COVID-19 in those with manyfold higher antibody levels after complete immunization.

That said, I'd expect a third booster to be protective against Delta, given that vaccines against the ancestral variant are still highly effective against Delta and that Delta is a significant threat right now. But I do think it's plausible (though not firmly established) that a third booster shot may reduce the effectiveness of future variant-specific boosters. Targeting dramatically different proteins might well help, although it might also take longer to get approved.

Ultimately, I expect a third booster will still make sense for a lot of people, if (a) your immune response has waned (e.g. 6 months or longer since your 2nd dose, or you're immunocompromised); and (b) you expect significant exposure to Delta in the immediate future.

What fraction of breakthrough COVID cases are attributable to low antibody count?

I largely agree with this analysis. One major possible "side-effect" of a third booster is original antigenic sin. Effectively, the immune system may become imprinted on the ancestral variant of the spike protein, preventing adaptation to new variants (whether via direct exposure or via future boosters targeting new variants). This would be the main way I could see a third booster being seriously net-negative, although I don't have a good sense of the probability. Still, if antibody levels are low, the benefit of a booster is greater, and I'd guess (caveat: not an immunologist) the risk of antigenic imprinting is somewhat lower (on the basis that the immune response has already decayed).

A Better Time until Sunburn Calculator

Thanks for sharing this! I did notice a weird non-monotonicity: if I go from 90 minutes of exposure to 120 minutes, the "Percent of Population w/ Sunburn Degree 1 at Time Exposed" drops from 96.8% to 72.7%. There is a warning in both cases that it's outside the normal range, but it still seems odd that more exposure gives lower risk.

Delta Strain: Fact Dump and Some Policy Takeaways

Just to flag: I messed up the original calculation and underestimated everything by a factor of ~2x; I've added an errata to my comment below.

I'd also recommend Matt Bell's recent analysis, which estimates 200 days of life lost. This is much higher than the estimates in my comment and the OP. I found his assumptions and sources somewhat pessimistic but ultimately plausible.

The main things driving the difference from my comment were:

  • Uses data from the UK's Office for National Statistics that I'd missed, which reports a very high figure of 55% of people with symptoms after 5 weeks, with fairly slow rates of recovery all the way out to 120 days post-infection. Given this is significantly higher than most other studies I've seen, I think Matt is being pessimistic in only down-adjusting to 45%, but I should emphasize these numbers are credible and the ONS study is honestly better than most out there.
  • Assumes long COVID makes your life 20% worse, which is on the pessimistic end; I put most mild symptoms at 5% worse. This is ultimately subjective and highly dependent on what symptoms you get.
  • I think the difference in hospitalized vs. non-hospitalized risk is closer to 10x (based on the Al-Aly figures), not Matt's estimate of 2x, which means we should multiply by a factor of ~60%, not ~97%.
Delta Strain: Fact Dump and Some Policy Takeaways

This is a good point: the demographics here are very skewed. I'm not too worried about it overstating risk, simply because the risk ended up looking not that high (at least after adjusting for hospitalization). I think at this point most of us have incurred more than 5 days of costs from COVID restrictions, so if that were really all the cost from COVID, I'd be pretty relaxed.

The gender skew could be an issue, e.g. chronic fatigue syndrome seems to occur at twice the rate in women than men.

Delta Strain: Fact Dump and Some Policy Takeaways

This is an accurate summary, thanks! I'll add that my calculation was only for long-term sequelae. Including ~10 days of cost from acute effects, my all-things-considered view would be a mean of ~40 days, corresponding to 1041 uCOVIDs per hour.

This is per actual hour of (quality-adjusted) life expectancy. But given we spend ~1/3 of our time sleeping, you probably want to value a waking hour at 1.5x a life-hour (assuming being asleep has neutral valence). If you work a 40-hour week and only value your productive time (I do not endorse this, by the way), then you'd want to adjust upwards by a factor of (7*24)/40 = 4.2.
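
For concreteness, here's the arithmetic behind those figures (a quick sketch; the 40-day cost is my all-things-considered mean from above):

```python
# How many microCOVIDs (uCOVIDs, a 1-in-a-million chance of symptomatic
# COVID) cost one hour of quality-adjusted life expectancy?
days_lost_per_case = 40                      # all-things-considered mean, incl. acute
hours_lost_per_ucovid = days_lost_per_case * 24 * 1e-6

ucovids_per_life_hour = 1 / hours_lost_per_ucovid
print(f"{ucovids_per_life_hour:.0f} uCOVIDs per life-hour")   # ~1042 (1041 above, modulo rounding)

# Valuing a waking hour at 1.5x a life-hour (8h sleep, neutral valence):
print(f"{ucovids_per_life_hour / 1.5:.0f} uCOVIDs per waking hour")   # ~694

# Valuing only a 40-hour work week:
work_adjust = (7 * 24) / 40                  # = 4.2
print(f"{ucovids_per_life_hour / work_adjust:.0f} uCOVIDs per work hour")  # ~248
```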

However, this is purely the private cost. You probably also want to take into account the cost of infecting other people. I'm not confident in how to reason about the exponential growth side of things. If you're in a country like the US where vaccination rates have plateaued, I tend to expect Delta to spread amongst unvaccinated people until herd immunity is reached. In this scenario you basically want infection rates to be as high as possible without overwhelming the healthcare system, so we get to herd immunity quicker. (This seems to actually be the strategy the UK government is pursuing -- although obviously they've not explicitly stated this.) But if you're in a country that's still actively vaccinating vulnerable people, or where flattening the curve makes sense to protect healthcare systems, then please avoid contributing to exponential growth.

Neglecting the exponential growth side of things and just considering immediate impact on your contacts, how likely are you to transmit? I'd be surprised if it was above 40% per household contact assuming you quarantine when symptomatic (that's on the higher end of transmission seen even with unvaccinated primary cases), but I'd also be surprised if it was below 5% (lowest figure I've seen); I'd guess it's around 15% for Delta. This means if you have ~6-7 contacts as close as housemates, then your immediate external cost roughly equals your private cost.

Specifically, two studies I've seen on secondary attack rate given vaccination (h/t @Linch) give pretty wildly varying figures, but suggest at least a 2x reduction in transmission from vaccination. Layan et al (2021) found 40% of household contacts of Israeli medical staff developed an infection (when Alpha was dominant), with vaccination of the primary case reducing transmission by 80%, giving an 8% chance of transmission overall. Harris et al (2021) from Public Health England suggest vaccination cuts transmission risk from 10% to 5%, but these figures are likely skewed low because contacts were not systematically tested.
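
A quick check on the parity claim above (a minimal sketch; the 15% attack rate is my guess for Delta, it assumes contacts bear a similar per-case cost to you, and it ignores onward transmission chains):

```python
# At what number of housemate-level contacts does your immediate external
# cost (their expected infections) equal your private cost of getting COVID?
sar = 0.15                        # guessed secondary attack rate per household contact
print(f"{1 / sar:.1f} contacts")  # ~6.7: with 6-7 close contacts, external ~= private cost

# Layan et al: 40% baseline attack rate, 80% reduction if the index case is vaccinated
print(f"implied attack rate: {0.40 * (1 - 0.80):.0%}")   # 8%
```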

Delta Strain: Fact Dump and Some Policy Takeaways

Errata: My original calculation underestimated the risk by a factor of about 2x. I neglected two key considerations, which fortunately somewhat canceled each other out. My new estimate from the calculation is 3.0 to 11.7 quality-adjusted days lost to long-term sequelae, with my all-things-considered mean at 45. 

The two key things I missed:

  - I estimated the risk of a non-hospitalized case to be about 10x less than a hospitalized case, and so divided the estimates of disease burden by 10x. The first part is correct, but the second would only make sense if all disease burden came from hospitalized cases. In fact, there's a 15%:85% split between hospitalized and non-hospitalized patients in the study (13,654:73,435). So if the disease burden for non-hospitalized patients is x, the average burden is 0.15*10x + 0.85*x = 2.35x, meaning we should divide by 2.35, not 10 (see the sketch after this list).

  - However, as Owain pointed out below, the [demographics](https://www.nature.com/articles/s41586-021-03553-9/tables/1) are non-representative and probably skew high-risk, given the median age is 60. Indeed, this is suggested by the 15% hospitalization figure (which also, I suspect, means they simply never included asymptomatic and most mildly symptomatic cases). An ONS survey (Figure 4) put symptoms reported after 5 weeks at 25% (20-30%) for 50-69 year olds and 17.5% (12.5-22.5%) for 17-24 year olds, which is surprisingly little difference: about a 1.5x decrease. I'd conjecture a 2x decrease in risk (noting that assuming no hospitalization is already doing a lot of work here).
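
Putting the two corrections together (a minimal sketch; the 2x demographic adjustment is the conjecture from the second bullet):

```python
# Let x be the per-case burden for non-hospitalized patients;
# hospitalized patients bear ~10x.
hosp_frac = 0.15                 # 13,654 of 87,089 patients in Al-Aly et al
avg_over_nonhosp = hosp_frac * 10 + (1 - hosp_frac) * 1
print(f"cohort average = {avg_over_nonhosp:.2f}x non-hospitalized burden")  # 2.35x

demographic_adjust = 2           # conjectured decrease for a younger, healthier population
net_divisor = avg_over_nonhosp * demographic_adjust
print(f"net divisor ~{net_divisor:.1f}, vs. the 10 I originally used")      # ~4.7
print(f"so I underestimated risk by ~{10 / net_divisor:.1f}x")              # ~2.1x
```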

Original post:

I did my own back-of-the-envelope calculation and came up with a similar but slightly higher estimated cost of 1.4 to 5.5 quality-adjusted days lost to long-term sequelae, conditional on getting a symptomatic COVID case. FWIW, I originally thought the OP's numbers seemed way too low and was going to write a take-down post -- but unfortunately the data did not cooperate with this agenda. I certainly don't fully trust these numbers: they're based on a single study, and there were a bunch of places where I didn't keep track of uncertainty, so the true credible interval should be a lot wider. Given that, and the right-tailed nature of the distribution, my all-things-considered mean is closer to 30 -- but I figured I'd share the BOTEC anyway in case it's helpful to anyone.

My model is pretty simple:

  1. What fraction of people report symptoms at some short-term follow-up (e.g. 4 to 12 weeks)? This we actually have data on.

  2. How bad are these symptoms? This is fairly subjective.

  3. How much do we expect these symptoms to decay long-term? This is going off priors.

For 1, I used Al-Aly et al (2021) as a starting point, which compared medical records between a COVID-positive group and a demographically matched non-COVID control group in the US Department of Veterans Affairs database. Anna Ore felt this was one of the more rigorous studies, and I agree. Medical records seem more reliable than self-report (though far from infallible), the authors seem to have actually done a Bonferroni correction, and they tested that their methodology didn't pick up false positives via both negative-outcome and negative-exposure controls. Caveat: many other studies have scarier headline figures, and it's certainly possible relying on medical records skews this low (e.g. doctors might be reluctant to give a diagnosis, many patients won't go to the doctor for mild symptoms, etc.).

They report outcomes that occurred between 30 and 180 days after COVID exposure, although infuriatingly they don't seem to break it down any further by date. Figure 2 shows all statistically significant symptoms, in terms of the excess burden (i.e. increase above control) of each reported symptom per 1000 patients. There were 38 in total, ranging from 2.8% (respiratory signs and symptoms) to 0.15% (pleurisy). In total, the excess burden was 26%.

I went through and rated each symptom with a very rough and subjective high/medium/low severity: 2% excess burden of high-severity symptoms, 19% medium, and 5% low. I then ballparked that high-severity symptoms (e.g. heart disease, diabetes, heart failure) wipe out 30% of your QALYs, medium-severity (e.g. respiratory signs, anxiety disorders, asthma) 5%, and low-severity (e.g. skin rash) 1%. Caveat: there's a lot of uncertainty in these numbers, and I suspect I've gone for higher costs than most people would, since I tend to think health has a pretty big impact on productivity.
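
Spelling out the weighting arithmetic (a minimal sketch, using the subjective buckets and weights just described):

```python
# Severity-weighted QALY reduction implied by the Al-Aly excess burdens.
excess_burden = {"high": 0.02, "medium": 0.19, "low": 0.05}  # excess diagnoses per patient
qaly_loss     = {"high": 0.30, "medium": 0.05, "low": 0.01}  # subjective severity weights

total = sum(excess_burden[s] * qaly_loss[s] for s in excess_burden)
print(f"{total:.1%}")   # 1.6% QALY reduction while symptoms persist
```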

Using my weightings, we get a 1.6% reduction in QALYs conditional on a symptomatic COVID case. I think this is misleading for three reasons:

  1. Figure 3 shows that excess burden is much higher for people who were hospitalized, and if anything the gap seems bigger for more severe symptoms (e.g. about 10x less heart failure in people who were positive but not hospitalized, whereas rates of skin rash were only 2x less). This is good news, as vaccines seem significantly more effective at preventing hospitalization, and if you are fortunate enough to be a young healthy person your chance of being hospitalized was pretty low to begin with. I'm applying a 10x reduction for this.

  2. This excess burden is per diagnosis, not per patient. Sick people tend to receive multiple diagnoses. I'm not sure how to handle this. In some cases, badness-of-symptoms does seem roughly additive: if I had a headache, I'd probably pay a similar amount not to also develop a skin rash than if my head didn't hurt. But it seems odd to say that someone who drops dead from cardiac arrest was more fortunate than another patient with the same cause of death, who also had the misfortune of being diagnosed with heart failure a week earlier. So there's definitely some double-counting in the diagnoses, which I think justifies a 2-5x decrease.

  3. This study presumably covered predominantly the original COVID strain (the cohort ran from March 2020 to 30 November 2020). Delta seems, per the OP, about 2-3x worse, so let's increase the estimate by that factor.

Overall we decrease 1.6% by a factor of between ~6.7 (10*2/3) and 25 (10*5/2), to get a short-term QALY reduction of 0.064% to 0.24%.
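
Combining the three adjustments (a minimal sketch; the two ends of the range pair the extremes of the double-counting and Delta factors):

```python
base = 0.016                  # short-term QALY reduction from the severity weighting

low_divisor  = 10 * 2 / 3     # /10 hospitalization, /2 double-counting, *3 Delta ~= 6.7
high_divisor = 10 * 5 / 2     # /10 hospitalization, /5 double-counting, *2 Delta  = 25

print(f"{base / high_divisor:.3%} to {base / low_divisor:.2%}")   # 0.064% to 0.24%
```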

However, Al-Aly et al include any symptom reported between 30 and 180 days. What we really care about is the chance of lifelong symptoms: if someone is still experiencing a symptom after 6 months, there seems a considerable chance it'll be lifelong, but if only 30 days have elapsed, the chance of recovery seems much higher. A meta-review by Thompson et al (2021) seems to show a drop of around 2x between symptoms in the 4-12 week period vs. 12+ weeks (Table 2), although with some fairly wild variation between studies, so I don't trust this that much. In an extremely dubious extrapolation, we could say that perhaps symptoms halve again from 12 weeks to 6 months, again from 6 months to a year, and after that persist as a permanent injury. In this case, we'd divide the "symptom after 30 days" figure from Al-Aly et al by a factor of 8 to get the permanent injury figure, which seems plausible to me (but again, you could totally argue for a much lower number).

With this final fudge, we get a lifelong QALY reduction of 0.008% to 0.03%. Assuming 50 years of remaining life expectancy, this amounts to 1.4 to 5.5 days of cost from long-term sequelae. Of course, there are also short-term costs (and mortality risk!) that are omitted from this analysis, so the total cost will be higher.
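
The last step, from short-term burden to days of life (a sketch; the factor-of-8 halving schedule is the dubious extrapolation just described):

```python
short_term_low, short_term_high = 0.00064, 0.0024   # 0.064% to 0.24% from above

decay = 2 * 2 * 2    # /2 (30d -> 12wk, per Thompson), /2 (12wk -> 6mo), /2 (6mo -> 1yr)
lifelong_low, lifelong_high = short_term_low / decay, short_term_high / decay
print(f"{lifelong_low:.3%} to {lifelong_high:.2%}")  # 0.008% to 0.03%

remaining_days = 50 * 365                            # assumed remaining life expectancy
print(f"{lifelong_low * remaining_days:.1f} to {lifelong_high * remaining_days:.1f} days")
# ~1.5 to 5.5 days; the 1.4 above comes from unrounded intermediate values
```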

Inner Alignment in Salt-Starved Rats

I googled "model-based RL Atari" and the first hit was this which likewise tries to learn the reward function by supervised learning from observations of past rewards (if I understand correctly)

Ah, the "model-based using a model-free RL algorithm" approach :) They learn a world model using supervised learning, and then use PPO (a model-free RL algorithm) to train a policy in it. It sounds odd but it makes sense: you hopefully get much of the sample efficiency of model-based training, while still retaining the state-of-the-art results of model-free RL. You're right that in this setup, as the actions are being chosen by the (model-free RL) policy, you don't get any zero-shot generalization.
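
For readers unfamiliar with this setup, here's a self-contained toy illustration of the two-stage recipe (hypothetical stand-ins throughout: a 1-D environment, a least-squares dynamics model, and naive hill-climbing in place of PPO; this is not the actual pipeline from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def real_step(s, a):
    """Unknown 'true' environment: a 1-D point, rewarded for being near s = 1."""
    s_next = s + 0.1 * a + 0.01 * rng.normal()
    return s_next, -abs(s_next - 1.0)

# Stage 1: learn a world model by supervised learning on real transitions.
# Here: least-squares fit of s' - s ~= w * a from random interaction data.
S, A, dS = [], [], []
for _ in range(500):
    s, a = rng.uniform(-2, 2), rng.uniform(-1, 1)
    s_next, _ = real_step(s, a)
    S.append(s); A.append(a); dS.append(s_next - s)
w = np.linalg.lstsq(np.array(A)[:, None], np.array(dS), rcond=None)[0][0]

def model_step(s, a):
    """Learned dynamics, plus a reward head assumed known for simplicity."""
    s_next = s + w * a
    return s_next, -abs(s_next - 1.0)

# Stage 2: train a policy purely inside the learned model, using only
# sampled returns (a crude stand-in for a model-free learner like PPO).
def imagined_return(k, s=0.0, horizon=20):
    total = 0.0
    for _ in range(horizon):
        s, r = model_step(s, np.clip(k * (1.0 - s), -1, 1))
        total += r
    return total

k = 0.0
for _ in range(100):                 # hill-climb the policy parameter
    k_new = k + 0.1 * rng.normal()
    if imagined_return(k_new) > imagined_return(k):
        k = k_new
print(f"learned gain k = {k:.2f}, imagined return {imagined_return(k):.2f}")
```

Since actions always come from the trained policy, nothing in this loop does zero-shot planning against a changed reward, which matches the point above.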

I added a new sub-bullet at the top to clarify that it's hard to explain by RL unless you assume the planner can query the ground-truth reward function in arbitrary hypothetical states. And then I also added a new paragraph to the "other possible explanations" section at the bottom saying what I said in the paragraph just above. Thank you.

Thanks for updating the post to clarify this point -- I agree with the new wording.

In ML today, the reward function is typically a function of states and actions, not "thoughts". In a brain, the reward can depend directly on what you're imagining doing or planning to do, or even just what you're thinking about. That's my proposal here.

Yes indeed, your proposal is quite different from RL as usually practiced. The closest thing I can think of to rewards over "thoughts" in ML would be regularization terms that take into account weights or, occasionally, activations -- but that's very crude compared to what you're proposing.

Inner Alignment in Salt-Starved Rats

Thanks for the clarification! I agree that if the planner does not have access to the reward function, it will not be able to solve this task. Though, as you say, it could explore more given the uncertainty.

Most model-based RL algorithms I've seen assume they can evaluate the reward function at arbitrary states. Moreover, it seems to me like this is the key thing that lets rats solve the problem: I don't see how you'd otherwise solve this problem in general in a sample-efficient manner.

One class of model-based RL approaches is based on [model-predictive control](https://en.wikipedia.org/wiki/Model_predictive_control): sample random actions, "roll out" the trajectories in the model, pick the trajectory with the highest return, take the first action from that trajectory, then replan. That said, assumptions vary: [iLQR](https://en.wikipedia.org/wiki/Linear%E2%80%93quadratic_regulator) makes the stronger assumption that reward is quadratic and differentiable.
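
A minimal sketch of that random-shooting loop (illustrative stand-ins: a hand-coded 1-D double-integrator model and a simple reward, not any particular paper's setup). Note the planner only needs to *evaluate* the reward at imagined states:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(s, a):
    """Assumed dynamics model: state = (position, velocity)."""
    pos, vel = s
    vel = vel + 0.1 * a
    return np.array([pos + 0.1 * vel, vel])

def reward(s, a):
    """Drive position to 0, with a small effort penalty."""
    return -(s[0] ** 2) - 0.01 * a ** 2

def mpc_action(s, horizon=10, n_samples=200):
    best_ret, best_first = -np.inf, 0.0
    for _ in range(n_samples):                 # sample random action sequences
        actions = rng.uniform(-1, 1, size=horizon)
        s_sim, ret = s.copy(), 0.0
        for a in actions:                      # roll out each sequence in the model
            ret += reward(s_sim, a)
            s_sim = model(s_sim, a)
        if ret > best_ret:
            best_ret, best_first = ret, actions[0]
    return best_first                          # execute the first action, then replan

s = np.array([1.0, 0.0])
for _ in range(30):
    s = model(s, mpc_action(s))                # (using the model as the "real" env too)
print(f"final position {s[0]:.3f}")            # should approach 0
```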

I think methods based on [Monte Carlo tree search](https://en.wikipedia.org/wiki/Monte_Carlo_tree_search) might exhibit something like the problem you discuss. Since they sample actions from a policy trained to maximize reward, they might end up not exploring enough in this novel state if the policy is very confident it should not drink the salt water. That said, they typically include explicit methods for exploration like [UCB](https://en.wikipedia.org/wiki/Thompson_sampling#Upper-Confidence-Bound_(UCB)_algorithms) which should mitigate this.
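
For reference, here's the UCB1 selection rule in isolation (a generic sketch, not a full MCTS implementation; the node representation is made up for illustration):

```python
import math

def ucb_select(children, c=1.4):
    """Pick the child maximizing mean value plus an exploration bonus."""
    total_visits = sum(ch["visits"] for ch in children)
    def score(ch):
        if ch["visits"] == 0:
            return math.inf                    # always try unvisited actions once
        mean = ch["value_sum"] / ch["visits"]
        bonus = c * math.sqrt(math.log(total_visits) / ch["visits"])
        return mean + bonus
    return max(range(len(children)), key=lambda i: score(children[i]))

# An action the policy rates poorly (low mean) still gets revisited once its
# sibling's visit count grows, because the exploration bonus dominates:
children = [{"value_sum": 45.0, "visits": 50}, {"value_sum": -1.0, "visits": 1}]
print(ucb_select(children))                    # -> 1, despite the lower mean
```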
