Right! My untrained intuition still resists a bit; I should play with the numbers.
Niice, it makes sense! Thanks!
So to recap, I was right that riskier assets can have higher avg returns, but I was missing the usually bigger and opposing effect: as the asset gets riskier, the same avg return relies more and more on lucky, very big gains, while doing worse more often (at least if returns are sort of lognormal).
My second point I still think was correct, right? -- i.e., that if Scott believed ETH had some chance of total collapse (a mixture distribution), then this skews the distribution to the other side and pushes the median above the mean, which gives some reason to think ETH is more likely to outperform BTC. Does this make sense?
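(Playing with the numbers as promised -- a quick sketch of the mean-vs-median effect. All figures are made up: two hypothetical assets with the *same* average return of +50%, one low-volatility and one high-volatility, both lognormal.)

```python
import numpy as np

rng = np.random.default_rng(0)
mean_return = 1.5  # hypothetical: both assets return +50% on average

results = {}
for sigma in (0.5, 1.5):  # low-risk vs high-risk (std dev of log returns)
    mu = np.log(mean_return) - sigma**2 / 2  # choose mu so the mean stays 1.5
    samples = rng.lognormal(mu, sigma, 1_000_000)
    results[sigma] = (samples.mean(), np.median(samples))
    print(f"sigma={sigma}: mean ~ {samples.mean():.2f}, median ~ {np.median(samples):.2f}")
# Same mean, but the riskier asset's median (typical outcome) is much lower:
# analytically, median = mean_return * exp(-sigma**2 / 2).
```

So with equal avg returns, the riskier asset *usually* does worse, and the mean is propped up by rare huge wins.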
If ETH is less risky than BTC, then the median performance of ETH will outperform BTC, and his probability could be consistent with the EMH.
Wait. Does this mean that the EMH expects less risky investments to have higher performance on average? That sounds shocking enough that I must be confusing something here. Or is this some sort of median-vs-mean distinction that I'm not seeing?
About 17 and the EMH. Can't Scott just be thinking that ETH is sufficiently more risky than BTC that it may have higher expected returns even under the EMH (the EMH allows this, right?)? Or even that he thinks ETH has some chance of total collapse (like an outlier at 0), so even with equal expected returns it's much more probable that ETH outperforms BTC than the other way around (?)
What's this supposed to be estimating or predicting with Bayes here? The thing you'll end up doing? Something like this?:
Each of the 3 processes has a general prior about how often it "wins" (the priors add up to 100%, or maybe the basal ganglia normalizes them), and a Bayes factor given the specific "sensory" inputs related to its own process, while remaining agnostic about the options of the other processes. For example, the reinforcer would be thinking: "I get my way 30% of the time. Also, this level of desire to play the game is 2 times more frequent when I end up getting my way than when I don't (regardless of which of the other 2 won, let's assume, or I don't know how to keep this modular)." Similarly, the first process would be looking at the level of laziness, and the last one at the strength of the arguments or something.
Then, the basal ganglia does Bayes to update the priors given the 3 pieces of evidence, and arrives at a posterior probability distribution over the 3 options.
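(To make the update I'm imagining concrete -- all numbers made up, and I'm glossing over the modularity worry by just treating each process's Bayes factor as a likelihood ratio and renormalizing at the end:)

```python
# Hypothetical priors: how often each process "wins" (sum to 1)
priors = {"reinforcer": 0.30, "lazy": 0.50, "planner": 0.20}

# Hypothetical Bayes factors from each process's own "sensory" input:
# P(its evidence | it wins) / P(its evidence | it loses)
bayes_factors = {"reinforcer": 2.0, "lazy": 0.5, "planner": 1.2}

# Posterior ~ prior * likelihood ratio, then normalize
# (maybe the normalizing is the basal ganglia's job in this picture)
unnorm = {p: priors[p] * bayes_factors[p] for p in priors}
total = sum(unnorm.values())
posterior = {p: v / total for p, v in unnorm.items()}
print(posterior)
```

In this toy run the reinforcer's strong evidence more than overcomes its smaller prior, so it "wins" the posterior.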
And finally you'll end up doing what was estimated because, well, the brain does what minimizes the prediction error. Is this the weird sense in which the info is mixed with Bayes and this is all Bayesian stuff?
I must be missing something. If this interpretation were correct, what would increasing dopamine in, e.g., the frontal cortex be doing? Increasing the "unnormalized" prior for that process (like, it falsely thinks it wins more often than it does, regardless of the evidence)? Falsely biasing the Bayes factor (like, it thinks it almost never happens that it feels this convinced of what should happen in the cases where it doesn't end up winning)?
Whatever prevents the most infection, hospitalization and death is the right answer either way
I first read this sentence as suggesting that killing people is the best way to prevent infection.
Yeah, if R0 is held constant and also COVID-UK is going up in absolute numbers.
Israel's deaths are dropping more slowly than I would have intuitively expected given the vaccinations; I now wonder if it's because of the longer duration of the new strains, which means we may have to wait a little longer until most of the previous infections resolve. Anyone who's been looking at detailed data (strain prevalence, the ages of the people still dying, etc.) have an opinion? (I just looked at the daily death and vaccination rates.)
I haven't read the papers so, please correct me if I guess wrong (most likely), anybody.
I'm guessing the UK strain's advantage was estimated from relative growth between strains while UK cases were skyrocketing, and that gave a ~40% higher R0 than COVID-classic.
Now, say they were underestimating the duration of the UK strain. That would mean it is actually more transmissible than estimated -- the extra transmissibility was masked by the longer timescales (transmissibility meaning R, right?). That would also mean it's that much harder to contain than we thought (yet it was contained in the UK, which is great and suggests I'm talking BS). And it means that it comes to dominate COVID-classic that much faster when COVID overall is going down.
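(The duration-masks-transmissibility guess, numerically -- numbers are made up, and I'm using the rough exponential-growth relation that a growth-rate advantage r over a generation interval of T days implies a transmissibility ratio of about e^(r*T):)

```python
import math

r = 0.10  # hypothetical: new strain's cases grow 10%/day faster than the old one's
implied = {}
for T in (5.0, 7.0):  # assumed vs. longer (underestimated) generation interval, days
    implied[T] = math.exp(r * T)  # rough relation: advantage ratio ~ e^(r*T)
    print(f"generation interval {T} days -> implied transmissibility ratio ~ {implied[T]:.2f}")
```

Same observed relative growth, but if the generation interval is really 7 days instead of the assumed 5, the implied transmissibility advantage is noticeably bigger.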
> This means that we should expect the English strain to arrive in numbers somewhat slower than its level of infectiousness would otherwise indicate.
I'd instead guess that we should expect it to arrive faster, since it would be more infectious than previously estimated and the US seems to be mitigating much more decently than the UK was at that time? Does this make any sense?
I think you get more points for earlier predictions.