
Comments

How useful would a pre-mortem have been, of the kind many tech products get before they are released to the public?   https://medium.com/paypal-tech/pre-mortem-technically-working-backwards-1724eafbba02

How well did they follow the Wisdom of Crowds tenets? (James Surowiecki)

Diversity of opinion - Each person should have private information even if it's just an eccentric interpretation of the known facts. 

Independence - People's opinions aren't determined by the opinions of those around them. 

Decentralization - People can specialize and draw on local knowledge. 

Aggregation - Some mechanism exists for turning private judgments into a collective decision. 
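As a minimal sketch of the aggregation tenet (my own illustration, not from Surowiecki, with made-up numbers), the simplest mechanism is just averaging independent private estimates, in the spirit of Galton's ox-weighing story:

```python
# Minimal illustration of the "aggregation" tenet: independent private
# estimates are combined into one collective judgment by simple averaging.
# The guesses below are invented purely for illustration.

private_estimates = [1198, 1250, 1085, 1320, 1175, 1242, 1150]  # e.g., guesses of an ox's weight in lbs

collective_estimate = sum(private_estimates) / len(private_estimates)
print(f"Collective estimate: {collective_estimate:.0f}")
# With diverse, independent guesses, the average tends to land closer to the
# true value than most individual guesses do.
```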

I would say this is based on Matt Ridley's book The Rational Optimist?  Rational Optimist Book | Matt Ridley

What is considered the half-life of facts now?

“In an unlikely but conceivable turn of events, what if that scientist becomes infected with the virus, which leads to an outbreak and ultimately triggers a pandemic?” Fauci wrote in a paper reported on by The Australian. “Scientists working in this field might say — as indeed I have said — that the benefits of such experiments and the resulting knowledge outweigh the risks.”

National Institute of Allergy and Infectious Diseases (NIAID) Director Anthony Fauci defended “gain-of-function” research in 2012.

Flatland for machine learning?

How would you use a Brier score on this going forward? I also ran across this podcast last night on bias. https://knowledge.wharton.upenn.edu/article/want-better-forecasting-silence-the-noise/
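For reference, a Brier score is just the mean squared difference between probabilistic forecasts and what actually happened (0 is perfect, higher is worse). A minimal sketch, with invented forecasts and outcomes:

```python
# Brier score: mean squared difference between forecast probabilities and
# outcomes (1 if the event happened, 0 if it did not). Lower is better.
# Forecasts and outcomes below are invented purely for illustration.

forecasts = [0.9, 0.7, 0.2, 0.5]   # stated probabilities that each event occurs
outcomes  = [1,   1,   0,   1]     # what actually happened

brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")  # 0.0 would be a perfect set of forecasts
```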

Is there a trustworthy confusion matrix for all the various testing methods? The drive to quote "cases" in a hysterical media seems misleading without it. Also, the definition of "cases" seems to vary widely.
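To make the point concrete, here is a rough sketch of why raw positive counts are hard to read without a confusion matrix: the share of positives that are real depends heavily on prevalence. The sensitivity, specificity, and prevalence figures below are assumptions for illustration, not real test data.

```python
# Why a confusion matrix matters: with assumed test characteristics and
# prevalence, compute how many reported positives are actually true positives.

population  = 100_000
prevalence  = 0.01     # 1% actually infected (assumed)
sensitivity = 0.95     # P(test positive | infected)  (assumed)
specificity = 0.98     # P(test negative | healthy)   (assumed)

infected = population * prevalence
healthy  = population - infected

true_pos  = infected * sensitivity
false_pos = healthy * (1 - specificity)

ppv = true_pos / (true_pos + false_pos)  # positive predictive value
print(f"Reported positives: {true_pos + false_pos:.0f}")
print(f"Share of positives that are real cases (PPV): {ppv:.1%}")
```

With these made-up numbers, only about a third of reported positives would be true cases, which is why quoting raw "case" counts without the underlying test performance can mislead.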

My other issue is net deaths versus dying from other things as in previous years. Is there a good metric, seeing as there is an economic incentive to label deaths as Covid over the other usual suspects?
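One candidate metric here is excess mortality: observed all-cause deaths minus a baseline expected from prior years, which sidesteps how individual deaths get labeled. A minimal sketch, with fabricated weekly counts purely for illustration:

```python
# Excess mortality sketch: compare observed all-cause deaths against a baseline
# built from prior years, rather than relying on cause-of-death labeling.
# All weekly counts below are made up for illustration.

baseline_weekly_deaths = [50_000, 51_000, 49_500, 50_500]  # prior-year average (assumed)
observed_weekly_deaths = [53_000, 56_500, 58_000, 54_500]  # current year (assumed)

excess = [obs - base for obs, base in zip(observed_weekly_deaths, baseline_weekly_deaths)]
print(f"Weekly excess deaths: {excess}")
print(f"Total excess deaths over the period: {sum(excess):,}")
```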

Does anyone believe that experts in this field are any better than others?  Check out Tetlock's work on forecasting.

Tetlock decided to put expert political and economic predictions to the test. With the Cold War in full swing, he collected forecasts from 284 highly educated experts who averaged more than 12 years of experience in their specialties. To ensure that the predictions were concrete, experts had to give specific probabilities of future events. Tetlock had to collect enough predictions that he could separate lucky and unlucky streaks from true skill. The project lasted 20 years and comprised 82,361 probability estimates about the future.

The result: The experts were, by and large, horrific forecasters. Their areas of specialty, years of experience, and (for some) access to classified information made no difference. They were bad at short-term forecasting and bad at long-term forecasting. They were bad at forecasting in every domain. When experts declared that future events were impossible or nearly impossible, 15 percent of them occurred, nonetheless. When they declared events to be a sure thing, more than one-quarter of them failed to transpire. As the Danish proverb warns, “It is difficult to make predictions, especially about the future.”

A guy in a cornfield in the Midwest beat the rest in a competition against the so-called intelligence of think tanks, PhDs, and others who should have known better and were paid to.
