A second round is scheduled to begin this Saturday, 2020-02-08. New predictors should have a minor advantage in later rounds as the winners will have already exhausted all the intellectual low-hanging fruit. Please join us!
Thanks! Also, thanks to Pablo_Stafforini, DanielFilan and Tamay for judging.
I would also like to convert it to a more flexible e-reader format. It appears to have been typeset using LaTeX... Would it be possible to share the source files?
It's time to test the Grue Hypothesis! Anyone have some Emeralds handy?
It occurs to me that the world could benefit from more affirmative fact checkers. Existing fact checkers are appropriately rude to people who publicly make false claims, but there's not much in the way of celebrating people who make difficult true claims. For example, Politifact awards "Pants on Fire" for bald lies, but only "True" for bald truths. I think there should be an even higher-status classification for true claims that run counter to the interests of the speaker. For example, we could award "Bayesian Stars" to public figures who publicly update on new evidence, or "Bullets Bitten" to public figures who promulgate true evidence that weakens their own arguments.
It occurs to me that "following one's passion" is terrible advice at least in part because of the lack of diversity in the activities we encourage children to pursue. It follows that encouraging children to participate in activities with very high-competition job markets (e.g. sports, the arts) may be a substantial drag on economic growth. After five minutes of searching, I could not find research on this relationship. (It seems the state of scholarship on the topic is restricted to models in which participation in extracurriculars early in childhood leads to better metrics later in childhood.) This may merit a more careful assessment.
Attention Conservation Warning: I envision a model which would demonstrate something obvious, and decide the world probably wouldn't benefit from its existence.
The standard publication bias is that a result must clear the 95% confidence bar (p < 0.05) before it is publishable, at which point it becomes sufficiently "confirmed" to treat the phenomenon as a factual claim. But the statistical confidence in a phenomenon conveys interesting and useful information regardless of what that confidence is.
Consider the space of all possible relationships: most of these are going to be absurd (e.g. the relationship between the number of minted pennies and the number of atoms in the moons of Saturn), and exhibit no correlation. Some will exhibit weak correlations (in the range of p ≈ 0.5). Those are still useful evidence that a pathway to a common cause exists! The universal prior on any given random relationship should be roughly zero, because most relationships will be absurd.
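To make that concrete with invented numbers: a sample correlation of r = 0.1 over 50 points has p ≈ 0.49, nowhere near publishable, yet its likelihood ratio between candidate effect sizes is not 1, so the result is informative either way. A minimal sketch using the Fisher z-transform (all effect sizes here are assumptions chosen for illustration):

```python
import math

def norm_pdf(x, mu, sd):
    """Normal density at x with mean mu and standard deviation sd."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

r_obs, n = 0.10, 50           # a weak, "unpublishable" observed correlation
z_obs = math.atanh(r_obs)     # Fisher z-transform of the observed r
sd = 1 / math.sqrt(n - 3)     # approximate sampling sd of z at this n

# How does this non-result shift the odds between "no relationship" (rho = 0)
# and two hypothetical real effect sizes?
lr_small = norm_pdf(z_obs, math.atanh(0.1), sd) / norm_pdf(z_obs, 0.0, sd)
lr_large = norm_pdf(z_obs, math.atanh(0.5), sd) / norm_pdf(z_obs, 0.0, sd)
print(f"LR for rho=0.1 vs rho=0: {lr_small:.3f}")
print(f"LR for rho=0.5 vs rho=0: {lr_large:.3f}")
```

Even this null-ish result mildly favors a small true correlation over none, and strongly rules out a large one — information that a p < 0.05 publication filter simply discards.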
What would science look like if it could make efficient use of the information disclosed by presently unpublishable results? I think I can generate a sort of agent-based model to imagine this. Here's the broad outline:
I believe that both agents will converge on the correct DAG, but the un-publication-biased agent will converge much more rapidly. There are a bunch of open parameters that need careful selection and defense here. How do the properties of the original DAG affect the outcome? What if agents can update on a relationship multiple times (e.g. run a test on 100 samples, then on 10,000)?
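As a drastically simplified sketch of the comparison — pairwise relationships rather than a full DAG, and every parameter value invented for illustration — one could pit a publication-biased agent, which only ever sees significant results, against an unbiased agent that Bayes-updates on every study:

```python
import math
import random

random.seed(0)

N_PAIRS = 200      # candidate relationships to investigate
P_REAL  = 0.10     # fraction with a genuine underlying effect
EFFECT  = 0.30     # standardized effect size when an effect exists
N_OBS   = 30       # observations per study
STUDIES = 5        # independent studies of each relationship

def norm_pdf(x, mu):
    """Unit-variance normal density at x with mean mu."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

truth = [random.random() < P_REAL for _ in range(N_PAIRS)]

biased_correct = unbiased_correct = 0
for real in truth:
    # Unbiased agent starts from the base rate and updates on every study.
    log_odds = math.log(P_REAL / (1 - P_REAL))
    # Biased agent believes in a link only after seeing a significant result.
    biased_belief = False
    mu = EFFECT * math.sqrt(N_OBS) if real else 0.0
    for _ in range(STUDIES):
        z = random.gauss(mu, 1.0)  # the study's z-statistic
        # Likelihood ratio of this z under "effect exists" vs "no effect".
        lr = norm_pdf(z, EFFECT * math.sqrt(N_OBS)) / norm_pdf(z, 0.0)
        log_odds += math.log(lr)
        if abs(z) > 1.96:          # p < 0.05: the only results the biased agent sees
            biased_belief = True
    unbiased_correct += (log_odds > 0) == real
    biased_correct += biased_belief == real

print(f"unbiased agent accuracy: {unbiased_correct / N_PAIRS:.2f}")
print(f"biased agent accuracy:   {biased_correct / N_PAIRS:.2f}")
```

Under these made-up parameters the unbiased agent classifies relationships markedly more accurately from the same stream of studies, since the biased agent both misses true links that never clear significance and is misled by the occasional false positive it cannot discount. A full version would replace the independent pairs with a ground-truth DAG and measure convergence speed rather than one-shot accuracy.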
Given defensible positions on these issues, I suspect that such a model would demonstrate that publication bias reduces scientific productivity by roughly an order of magnitude (and perhaps much more).
But what would the point be? No one will be convinced by such a thing.
Please please please make this happen!