the body starts attacking the cells that produce the antigen... including the brain as polyethylene glycol goes through the blood brain barrier
How do you know what you think you know? Specifically, regarding the PEG enabling the LNPs to cross the BBB, and regarding a follow-up by immune cells that have crossed the BBB?
Various points on Delta & vaccination:
-On the UK vaccination data, the 79% number is for Pfizer and AZ combined. Since the vast majority of US vaccinations are Pfizer or Moderna, the Pfizer-only number should be much closer to the truth. Their expected-value VE is 87.9%, with a confidence interval from 78.2% to 93.2%.
-Looking at Israel's Delta/vaccination document linked to in my other comment, they don't have many hospitalizations or severe disease cases for either vaccinated or unvaccinated. So I don't expect their expected value number to be very meaningful, due to huge confidence intervals.
-When you compare predictions to reality in "Transmissibility", you seem to assume that vaccine efficacy (VE) against cases should equal the VE implied by R. But vaccination seems to reduce peak viral load by a lot, regardless of conditioning on symptoms, so we should not expect R to be very predictive of case-based VE.
Various points on Delta and R:
-When I dig into R estimates for new variants, I find that a lot of the disagreement comes from the serial-interval estimate. Personally, I now convert everything to weekly growth so I don't have to hold that information in my head.
-Regarding the calculation you did in "Transmissibility", there's pretty good data from the UK. While Delta was taking over, they estimated that the natural log of the Delta/Alpha ratio increased by 0.91/week to 0.93/week (a factor of ~2.5). I trust this value more because it is less biased by the control system. See for example Table 7 on pg 25 of https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/993879/Variants_of_Concern_VOC_Technical_Briefing_15.pdf
-I've been doing a similar estimate for the US based on weekly proportions of Delta and its descendants, albeit with a shoddier method because of time constraints and heterogeneous data. The best fit is usually a relative growth rate of ~1.85 per week (although it can vary by +/- 0.1 depending on the day). I've been surprised because the US overall growth rate has been faster than 1.85 times the pre-Delta rates. It might be that the highest-transmission states are now contributing much more than before? Data from https://outbreak.info/situation-reports?pango=B.1.617.2&loc=USA&selected=USA and equivalents for the AY's.
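To make the conversions in the bullets above concrete, here's the arithmetic I use (a minimal sketch; the 5-day serial interval in the last line is just an assumed placeholder, not a value from any of the linked sources):

```python
import math

def weekly_growth_from_R(R, serial_interval_days):
    """Convert a reproduction number to a weekly multiplicative growth
    factor, assuming one generation per serial interval."""
    return R ** (7 / serial_interval_days)

def weekly_ratio_from_log_slope(slope_per_week):
    """Convert a slope in ln(variant/baseline) per week into a weekly
    growth-rate ratio."""
    return math.exp(slope_per_week)

# The UK briefing's 0.91-0.93/week slope in ln(Delta/Alpha):
print(weekly_ratio_from_log_slope(0.91))  # ~2.48
print(weekly_ratio_from_log_slope(0.93))  # ~2.53

# An R of 1.5 with an assumed 5-day serial interval:
print(weekly_growth_from_R(1.5, 5))  # ~1.76
```

This is why I prefer weekly growth: the serial-interval assumption only enters once, in the last conversion, instead of contaminating every R comparison.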
I dug into the Israel vaccine data some. Full data is lacking, and I strongly suspected the true VE is significantly higher, based on the UK's 78.2-93.2% estimate for 2-dose Pfizer. Below is my thought process.
TL;DR I thought I would find a clear reason the Israeli data was wrong. I tried to see if the interval was so large that the Israeli estimate was meaningless or if there was a huge bias, but nothing solid came up, so I've gone from "confident" to "somewhat nervous".
Here's the announcement: https://www.gov.il/en/departments/news/06072021-04
And a more quantitative version (fortunately Google Translate was pretty good at Modern Hebrew, at least in this context): https://www.gov.il/BlobFolder/news/06072021-04/en/NEWS_Corona_vaccine-eficacy.pdf
They say they used the same methodology for Delta effectiveness as what's in this older paper: https://www.gov.il/BlobFolder/news/06052021-02/ru/NEWS_Corona_lancet-article.pdf
It looks like for the recent numbers they did a VE estimate for each age group and combined the results. They work through an example point estimate of VE for ages 35-44 and get 55.7% efficacy, based on 47 vaccinated and 15 unvaccinated infections in a population with a 7.08:1 ratio of [2nd dose 7+ days ago]:[no vaccine]. Populations are measured in person-days.
That's not many cases (for this age bin). What are the bounds on the example efficacy? It turns out there's a Bayesian way to calculate this. Assuming I did it right, the 95% credible interval is 16.5-74.3% for this age group.
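For concreteness, here's a sketch of one version of that calculation: put a uniform prior on p, the probability that an observed case is vaccinated, so the posterior is Beta(47+1, 15+1), and map each p to a VE through the person-day ratio. Monte Carlo sampling stands in for a closed-form Beta quantile, and the exact bounds shift by a few points depending on the prior and approximations used, so treat the interval as ballpark rather than a reproduction of the numbers above.

```python
import random

def ve_credible_interval(vax_cases, unvax_cases, exposure_ratio,
                         n_samples=200_000, seed=0):
    """Under a uniform prior, the posterior for p (share of cases that
    are vaccinated) is Beta(vax_cases + 1, unvax_cases + 1); each p
    maps to VE = 1 - (p / (1 - p)) / exposure_ratio."""
    rng = random.Random(seed)
    ves = sorted(
        1 - (p / (1 - p)) / exposure_ratio
        for p in (rng.betavariate(vax_cases + 1, unvax_cases + 1)
                  for _ in range(n_samples))
    )
    point = 1 - (vax_cases / exposure_ratio) / unvax_cases
    return point, ves[int(0.025 * n_samples)], ves[int(0.975 * n_samples)]

# Ages 35-44: 47 vaccinated vs 15 unvaccinated cases, 7.08:1 person-days
point, lo, hi = ve_credible_interval(47, 15, 7.08)
print(f"VE = {point:.1%}, 95% CrI ({lo:.1%}, {hi:.1%})")  # point ~55.7%
```

The key qualitative takeaway survives any reasonable prior choice: with only 62 cases, the interval is tens of percentage points wide.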
So could the 64% expected value of VE be similarly low-confidence? There's no obvious way to bound the 64% for the full population without knowing the relative population sizes of all the groups. But it looks like the total case count is 257:1271, something like 25x the data points in the example bin. I'd expect a tighter interval (naively sqrt(25) = 5x, since width shrinks roughly with the square root of counts), but not necessarily that much tighter, because of complicated statistics.
My other thought is that there is a bias. Something that seems pretty funky is the use of person-days. During the June 6 - July 3 interval, the first two weeks had <10% as many cases as the last two weeks: https://ourworldindata.org/explorers/coronavirus-data-explorer?zoomToSelection=true&time=2021-06-06..2021-07-03&pickerSort=asc&pickerMetric=location&Metric=Confirmed+cases&Interval=New+per+day&Relative+to+Population=false&Align+outbreaks=false&country=~ISR Since people strictly leave the unvaccinated group and strictly enter the fully vaccinated group, the average fully vaccinated person-day fell on a more case-heavy day than the average unvaccinated person-day. Combined with 1 week being too short to capture the actual effect of dose 2 on positives, maybe this introduces a heavy bias? The paper referenced for methodology includes numbers adjusted for week, but it's not clear whether that means week-of-vaccination or weekly case counts, and it's not clear whether the Delta numbers were adjusted this way. So a bias here seems plausible.
But only 1.97% of the population was vaccinated in this interval, and only 0.29% got Dose 2 in [interval - 1 week]: https://ourworldindata.org/explorers/coronavirus-data-explorer?zoomToSelection=true&time=2021-06-06..2021-07-03&pickerSort=asc&pickerMetric=location&Metric=People+vaccinated&Interval=New+per+day&Relative+to+Population=true&Align+outbreaks=false&country=~ISR and https://ourworldindata.org/explorers/coronavirus-data-explorer?zoomToSelection=true&time=2021-05-30..2021-06-26&pickerSort=asc&pickerMetric=location&Metric=People+fully+vaccinated&Interval=7-day+rolling+average&Relative+to+Population=true&Align+outbreaks=false&country=~ISR Compared to the June 6 numbers, the unvaccinated population count only decreased by ~5.3% by the end, and the fully vaccinated count only had a 0.5% relative increase. The only way I could see this making a big difference is if most cases and new vaccinations were in the same age bin (since that bin would see >5.3%/0.5% relative changes, and more cases mean more weight). It would not be surprising if that were a big enough factor to account for the differences with the UK data, given the age distribution of cases.
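A toy two-period model (with hypothetical numbers roughly matching the shifts above: the later half of the window has ~10x the case rate, the unvaccinated person-days tilt ~5.3% earlier, and the fully vaccinated person-days tilt ~0.5% later) suggests the aggregate-level bias is small:

```python
# Naive pooled-person-day VE in a two-period world, vs the true VE.
def pooled_ve(true_ve, hazards, unvax_pd, vax_pd):
    """VE estimated from incidence rates pooled over person-days."""
    cases_u = sum(h * pd for h, pd in zip(hazards, unvax_pd))
    cases_v = sum((1 - true_ve) * h * pd for h, pd in zip(hazards, vax_pd))
    return 1 - (cases_v / sum(vax_pd)) / (cases_u / sum(unvax_pd))

hazards = [1, 10]  # second period ~10x as case-heavy
balanced = pooled_ve(0.88, hazards, [100, 100], [100, 100])      # no tilt
tilted = pooled_ve(0.88, hazards, [102.7, 97.3], [99.75, 100.25])  # ~5.3%/0.5% shifts
print(balanced, tilted)
```

With no tilt the pooled estimate recovers the true 88% exactly; with tilts of this size it only drops to about 87.7%. So the person-day artifact can't explain a 64%-vs-high-80s gap at the aggregate level; it would need the much larger within-bin concentration described above.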
So long story short, I'm leaning towards the UK data being correct. But my expectation was that the Israeli data would be confidently falsified with a couple hours of thought, and that didn't happen, so I'm no longer as highly confident.
Oops, missed this. I don't check LW messages much. 20% was not an exact value; at the time I wasn't aware of any estimates. Since then I've heard that the standard curve fit returns ~50% growth per 6.5 days, some or all of which may be due to immune escape.
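For comparison with the weekly figures elsewhere in this thread, that rate converts to roughly 1.55x per week:

```python
# ~50% growth per 6.5 days, rescaled to a 7-day factor:
weekly = 1.5 ** (7 / 6.5)
print(round(weekly, 2))  # ~1.55
```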
I had a couple assumptions that made me think the SA strain was less contagious in expectation:
I notice I'm confused: SA's variant, if its spread is legitimately due to a huge jump in R, doesn't have huge numbers of mutations.
If the UK variant had a 45% jump in R and SA's a 20% jump, and jumps >20% are much more commonly due to immunocompromised (IC'd) patients, then it seems reasonable that the super-fit, highly mutated strains show up alongside the more mundanely fit, moderately mutated ones. The super-fit strains take longer to bake, but they take off faster. Then again, I'm building a theory to explain 2 data points that I'm not 100% sure are both correct, so as much as this feels right it probably isn't.
AFAICT the reason immunocompromised patients are important is that they can stay infected for several months. I read a paper recently where such a patient held on for about 5 months, and by my count the samples averaged 3 mutations per month (although I'm sure there's a better way to adjust the numbers than what I did). So there's time to infect enough IC'd patients, plus n months to evolve in them. If antibodies are a necessary ingredient, that would delay these steps further. Then there's time for the highly fit strain to outcompete other strains, which is proportional to 1/ln(Rfit/Rother). And finally, time to establish that the strain is growing and time to check for evidence of causality. IIRC the UK strain became a major issue later, but the UK has nearly the best genome surveillance, which is why the announcements happened so close to each other. I'm fuzzy on the timelines, but I think SA announced theirs later. Maybe SA decided to call the press because of the UK announcement instead of waiting for better proof, and/or sped up the search for evidence.
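The takeover-time term can be made concrete with a toy competition model (illustrative numbers, not values from any source: it assumes the fitter strain's frequency odds multiply by Rfit/Rother each generation):

```python
import math

def generations_to_takeover(r_fit, r_other, f0=0.001, f1=0.5):
    """Generations for a fitter strain to grow from frequency f0 to f1,
    assuming its odds multiply by r_fit / r_other each generation."""
    odds0 = f0 / (1 - f0)
    odds1 = f1 / (1 - f1)
    return math.log(odds1 / odds0) / math.log(r_fit / r_other)

# A 45% fitness advantage vs a 20% one (illustrative):
print(generations_to_takeover(1.45, 1.0))  # ~18.6 generations
print(generations_to_takeover(1.20, 1.0))  # ~37.9 generations
```

This is the 1/ln(Rfit/Rother) dependence: halving the log fitness advantage roughly doubles the time to dominance, which is why the mundanely fit strains can surface first even if they emerged later.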
Regardless, assuming both are legit, the close announcement times seem to be mostly coincidence. But I think we should expect other strains with large jumps in R to start being an issue soon, even if most won't be recognized as quickly.
This post helped me clarify my thoughts on interference with supervisors.
Before this, I was unclear on how to draw the boundary between interference (like a cleaning robot disabling a human to stop punishments for broken furniture) and positive environmental changes (like turning on a light fixture to see better) in a concrete way. The difference I thought of is that the supervisor exerts direct pressure to keep the agent from altering the supervisor. So a rule to prevent treacherous turns might look like "if an aspect of the environment is optimizing against change by the agent, act as though the defenses against change had no loophole."
Of course, we'd eventually want something finer-grained than that- we'd want a sufficiently aligned agent to be able to dismantle a dangerous object, or eventually carry out a complicated brain surgery that was too tricky for a human doctor.
I don't think this demonstration truly captures treacherous turns, precisely because the agent needs to learn how it can misbehave over multiple trials. As I understand it, a treacherous turn involves the agent modeling the environment well enough to predict the payoff of misbehaving before taking any overt actions. What's happening here is the Goertzel prediction instead.
It's important to start getting a grasp on how treacherous turns may work, and this demonstration helps; my disagreement is on how to label it.
Currently we can access all course materials at once. For the time being, it might be better to hide the incomplete bits so nobody wanders ahead and misses things. Alternatively, it might be better to force users to attempt one section before unlocking the next; otherwise people might put off the hard sections forever.
That said, the platform looks new so it might not support this.