I wonder what “O(n) performance” is supposed to mean, if anything?
The question here is whether general arguments that experts make based on inference are reliable, or whether you need specific evidence. What is the track record for expert inferences about vaccines?
From a quick search, it seems that the clinical trial success rate for vaccines is about 33%, which is significantly higher than for medical trials in general, but still not all that high? Perhaps there is a better estimate for this.
Estimation of clinical trial success rates and related parameters
I found an answer on the PCR question here:
But there is something good to say about their data collection: since the UK study that’s included in these numbers tested its subjects by nasal swab every week, regardless of any symptoms, we can actually get a read on something that everyone’s been wondering about: transmission.
AstraZeneca has not applied for emergency use authorization, because it has been told not to do so.
That resolves a mystery for me if true. How do you know this?
(I was wondering if maybe they are selling all they can make in other countries.)
I'm not sure about this statement in the blog post:
In the meantime, the single dose alone is 76% effective, presumably against symptomatic infection (WaPo) and was found to be 67% effective against further transmission.
I read another article saying that this is disputed by some experts:
With a seductive number, AstraZeneca study fueled hopes that eclipsed its data

Media reports seized on a reference in the paper from Oxford researchers that a single dose of the vaccine cut positive test results by 67%, pointing to it as the first evidence that a vaccine could prevent transmission of the virus. But the paper, which has not yet been peer-reviewed, does not prove or even claim that — although it hints at the possibility.
If a person tests negative, Andrew Pollard, one of the study authors and a professor of pediatric infection and immunity at the University of Oxford, told STAT via email, “then it is a reasonable assumption that they cannot transmit.”
But it is a big and unjustified leap, outside experts agree, from that suggestion to proof of decreased transmission from people who are vaccinated.
“The study showed a decrease in [viral] shedding, not ‘transmission,’” said Carlos del Rio, a professor of infectious diseases at the Emory University School of Medicine. “The bottom line is, no, one cannot draw a conclusion or straight line.”
Unfortunately the article doesn't say specifically why these experts consider this an unreasonable inference while the study's author thinks it's a reasonable inference. The closest thing is "There are too many, in my view, moving variables."
I can imagine one possibility for a counterintuitive result. Suppose the vaccine turns severe cases into asymptomatic cases, and transmission happens mostly from asymptomatic cases?

Also, I was unable to tell from the paper when they do PCR+ tests. I have read that in some studies, they only test when a subject shows symptoms, which would mean that some asymptomatic cases might be missed?

As a non-expert, I think we need to hedge our bets when experts disagree.
What’s an example of a misconception someone might have due to having a mistaken understanding of causality, as you describe here?
This is a bizarre example, sort of like using Bill Gates to show why nobody needs to work for a living. It ignores the extreme inequality of fame.
Tesla doesn’t need advertising because they get huge amounts of free publicity already, partly due to having interesting, newsworthy products, partly due to having a compelling story, and partly due to publicity stunts.
However, this free publicity is mostly unavailable for products that are merely useful without being newsworthy. There are millions of products like this. An exciting product might not need advertising but exciting isn’t the same as useful.
So it seems like the confidence to advertise a boring product might be a signal of sorts? However, given that many people in business are unreasonably optimistic, it doesn't seem like a particularly strong one. Faking confidence happens quite a lot.
It seems like some writers have habits to combat this, like writing every day or writing so many words a day. As long as you meet your quota, it’s okay to try harder.
Some do this in public, by publishing on a regular schedule.
If you write more than you need, you can prune more to get better quality.
One aspect that might be worth thinking about is the speed of spread. Seeing someone once a week means that it slows down the spread by 3 1/2 days on average, while seeing them once a month slows things down by 15 days on average. It also seems like they are more likely to find out they have it before they spread it to you?
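The averages above follow from a simple assumption: if exposure happens at a uniformly random moment within the meeting interval, the expected wait until the next meeting is half the interval. A minimal sketch (the function name is my own, just for illustration):

```python
# Expected delay until the next meeting, assuming the infection could begin
# at a uniformly random moment within the interval: on average, interval / 2.

def expected_delay_days(interval_days: float) -> float:
    """Mean wait (in days) until the next scheduled meeting."""
    return interval_days / 2

print(expected_delay_days(7))   # weekly meetings: 3.5 days
print(expected_delay_days(30))  # monthly meetings: 15 days
```

So less frequent contact doesn't just reduce the number of chances to transmit; it also buys time for the other person to notice symptoms or get a positive test first.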
Yes, sometimes we don't notice. We miss a lot. But there are also ordinary clarifications like "did I hear you correctly" and "what did you mean by that?" Noticing that you didn't understand something isn't rare. If we didn't notice when something seems absurd, jokes wouldn't work.