You're missing the very real possibility of long-term negative side-effects from the vaccine, such as triggering an auto-immune disease or actually increasing your susceptibility, both mentioned in the whitepaper (whose risk-assessment I would be pretty sceptical of). I would think of this as more a trade-off between risks of side effects and COVID risks, rather than whether or not you can afford it.
Surprised no one has brought up the Fourier domain representation/characteristic functions. Over there, convolution is just multiplication, so convolving a distribution with itself $n$ times gives $\hat f(\omega)^n$. Conveniently, Gaussians stay Gaussians, and the fact that we have probability distributions fixes $\hat f(0) = 1$. So what we're looking for is how quickly the product above squishes to a Gaussian around $\omega = 0$, which looks to be in large part determined by the tail behavior of $\hat f$. I suspect what is driving your result of needing few convolutions is the fact that you're working with smooth, mostly low-frequency functions. For example, the exponential distribution, which is pretty bad, still has a $1/\omega$ decay. By throwing in some jagged edges, you could probably concoct a function which will eventually converge to a Gaussian, but will take rather a long time to get there (for functions which are merely piecewise smooth, the decay is $O(1/\omega)$).
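If it helps, here's a quick numerical sketch of that picture (the Exp(1) example and the grid are my own choices, purely for illustration): the density of a sum of $n$ iid Exp(1) variables is recovered by raising the FFT of the density to the $n$th power and inverting, then compared against the CLT Gaussian with matching moments.

```python
import numpy as np

# Convolution becomes multiplication in the Fourier domain: the density
# of a sum of n iid Exp(1) variables is the inverse FFT of F**n, where F
# is the FFT of the Exp(1) density. Compare to the CLT Gaussian N(n, n).
dx = 0.01
x = np.arange(0, 200, dx)
f = np.exp(-x)                        # Exp(1) density on the grid
F = np.fft.fft(f) * dx                # approximate characteristic function
errs = {}
for n in (2, 10, 30):
    g = np.fft.ifft(F**n).real / dx   # density of the n-fold sum
    gauss = np.exp(-(x - n) ** 2 / (2 * n)) / np.sqrt(2 * np.pi * n)
    errs[n] = np.max(np.abs(g - gauss))
    print(n, errs[n])
```

The sup-norm distance to the matching Gaussian shrinks as $n$ grows, but slowly (roughly like $1/\sqrt{n}$), consistent with the exponential's relatively heavy-tailed characteristic function.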
One of these days I'll take a serious look at characteristic functions, which are roughly the statistician's way of thinking about what I was saying. There's probably an adaptation of the characteristic function proof of the CLT that would be useful here.
There's generally a simpler explanation in this case: Trump and the Joint Chiefs of Staff have had a rocky relationship, so the military has no interest in assisting a coup attempt, even if it were willing to renounce democratic norms (it is sworn to protect the Constitution, after all). Without cooperation from the military, a coup is a non-starter.
The classic *Expert Political Judgment: How Good Is It? How Can We Know?* The cover even has adorable foxes and hedgehogs on it.
What needs to be assumed when reasoning about existential risk, and how are the high stakes responsible for forcing us to assume it?
I guess I opted for too much brevity. By their very nature, existential threats are things we don't* have any examples of actually happening, so we have to rely very heavily on counterfactuals, which aren't the most reliable kind of reasoning. How can we reason about what conditions lead up to a nuclear war, for example? We have no data about what led up to one in the past, so we have to rely on abstractions like game theory and on reasoning about how close we came to nuclear war in the past. But we need to develop some sort of policy to make sure it doesn't kill us all either way.
*at a global scale, at least. There are civilizations which completely died off (Rapa Nui is an example), but we have few such cases, and they're only vaguely relevant, even as far as climate change goes.
I believe the term you are looking for is a fox, in the sense of Tetlock. But honestly, as someone who is generally pro-toolboxism, I don't understand why that's offensive. The whole point is that you have a whole toolbox of different approaches.
Often the issue is that what you're trying to predict is sufficiently important that you need to assume *something*, even if the tools you have available are insufficient. Existential risks generally fall in this category. Replace the news with an upcoming cancer diagnosis, and telepathy with paying very careful attention to that organ, and whether Sylvanus is being an idiot becomes much less clear.
On the other hand, if someone is taking even odds on an extremely specific series of events, yeah, they're kind of dumb. And I wouldn't be surprised to find pundits doing this.
In a Bayesian context, seeking evidence is about narrowing the probability distribution from what should be a relatively flat prior. One could probably make a case for not making a decision until the cost of putting it off outweighs the gain from decreasing the uncertainty.
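A minimal sketch of that narrowing, using a Beta-Bernoulli model (the flat prior and the observation sequence here are made up for illustration): each piece of evidence updates the Beta parameters, and the posterior variance shrinks.

```python
# Beta(1, 1) is a flat prior on an unknown binary event's probability;
# each observation bumps one of the Beta parameters, and the variance
# (our uncertainty) shrinks as evidence accumulates.
def beta_var(a, b):
    # Variance of a Beta(a, b) distribution.
    return a * b / ((a + b) ** 2 * (a + b + 1))

a, b = 1, 1                              # flat prior
observations = [1, 0, 1, 1, 0, 1, 1, 1]  # hypothetical evidence
prior_var = beta_var(a, b)
for obs in observations:
    a += obs
    b += 1 - obs
posterior_var = beta_var(a, b)
print(prior_var, posterior_var)
```

Whether the shrinkage is worth waiting for is then a value-of-information question: compare the expected gain from a narrower posterior against the cost of delaying the decision.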