Summary: I think it is worth running a contest to measure ability to make accurate predictions, and am prepared to put up a prize of $200 for the winner. I need to identify a base of 20-50 propositions, all likely to settle within 2-10 months from now, and would like to ask the community for their suggestions, or for some suggested algorithms for picking them. I'd also like feedback on the idea.
I think that practicing predicting future events significantly improves calibration and provides valuable feedback on one's own level of rationality, especially on the topics one predicts about.
More fuzzily, I think that at the community level, a greater prevalence of prediction-making could provide gains throughout the community, via feedback on how effective our peers are at making predictions. I don't think it quite solves the schools-proliferating-without-evidence problem (it's a single, very narrow metric), but schools proliferating with a single very narrow metric is a positive step, I think.
Thirdly, I think a lot of people would like to practice prediction-making but don't get around to it for various reasons, one of which is the difficulty of identifying which propositions to make predictions about. As a result, I think that with a ready-made base of propositions, quite a small expected-value nudge would get a decent number of people to try making predictions.
Bringing all these thoughts together, the candidate strategy of running a prediction contest came to mind pretty easily. And the simplest way to see whether it's a good idea is to try it. If I get 20 entrants I'd consider it a weak success and worth running again next year; more would be more of a success. I put maybe 60% odds on that, conditional on the contest being run.
Once I have identified a set of 20-50 propositions, I'll create predictions for all of them on PredictionBook, and make a subsequent post here on Less Wrong, listing them all and announcing the contest.
From that time, anyone will have until a specified deadline (~1 month from posting time) to submit predictions on all of them, along with a contact email and their PredictionBook account name, through a Google Form. If someone submits multiple entries, the latest one before the deadline will be used; allowing later revisions removes the incentive to delay until the last minute to minimise uncertainty.
Once the predictions have all settled (in about ten months), I'll score everyone's predictions using log scoring, and the winner is whoever's score is highest. I'll make a subsequent post listing how the propositions settled, along with the winner and the immediate runners-up, and email the winner asking them to make a comment on one of the predictions containing a string I provide, in order to prove ownership of the PredictionBook account. Once that's done, I'll ask for a PayPal account to send the prize to, or I can send it via a cryptocurrency, or even Western Union if preferred.
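For concreteness, log scoring just sums the log of the probability each entrant assigned to the outcome that actually occurred. A minimal sketch (the function and variable names here are my own, purely for illustration):

```python
import math

def log_score(predictions):
    """Sum of log-probabilities assigned to the outcomes that occurred.

    predictions is a list of (p, outcome) pairs: p is the probability the
    entrant gave to the proposition being true, outcome is how it settled.
    Scores are negative; higher (closer to zero) is better.
    """
    return sum(math.log(p if outcome else 1 - p) for p, outcome in predictions)

# A confident, well-calibrated entrant beats a hedging one:
confident = [(0.9, True), (0.8, True), (0.1, False)]
hedging = [(0.6, True), (0.6, True), (0.4, False)]
log_score(confident)  # ≈ -0.43
log_score(hedging)    # ≈ -1.53
```

One property worth noting: assigning probability 0 or 1 to a proposition that settles the wrong way gives a score of negative infinity, so entrants are effectively forced to avoid absolute certainty.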
(In subsequent years I might solicit community contributions to the prize, but for this experiment I'll take on the risk.)
Any issues people see? Anything worth changing? Anything that makes this a bad idea?
The blocker on executing this at the moment (aside from giving people the chance to give feedback on whether this is actually a good idea) is identifying the propositions to use. The hard requirements are that they need to:
Preferred characteristics of the propositions, if I'm lucky enough to get enough ideas to be choosy (don't let these stop you from making suggestions that violate them, because there's a good chance I won't be able to be choosy):
So, any ideas? Any thoughts on where to look for good propositions? This is the main place I'd like to crowdsource ideas from the community, because figuring this out on my own would probably produce lower-quality results than what people here could come up with.
If people have particular ideas, it might also be worth throwing in up to five longer-term propositions, settling years or even decades out, which entrants would be required to assign probabilities to in order to enter but which wouldn't be judged as part of the contest, if there are any for which community consensus probabilities would be significantly high-value.
There'd be no incentive not to just put in random numbers for these, but I predict that most people won't (though they might put in less effort).
I'm interested in what thoughts people have here. Worth doing? Annoying and would put you off participating? What if they were not required, but just linked from the contest post as an optional extra?
You might want to try adapting some of the ones from http://slatestarcodex.com/2018/02/06/predictions-for-2018/ and the lists linked at the bottom.
Sounds good. I've looked over them and I could definitely use a fair few of those.
User jacobjacob and I recently made ~150 questions to do predictions on for the next year, you could PM him and he can give you them (sorry I'm busy right now).
Thanks for letting me know! I've sent them a PM, and hopefully they'll get back to me once they're free.
Many questions on Good Judgment Open seem to fit your hard characteristics, if not your preferred ones. Might you consider running a challenge for people's Brier scores on GJO (if the Ts & Cs allow), or cribbing some of its questions? An advantage here is that judging is settled for you, and people have evidence with which to model GJO's resolution process.
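(For reference, the Brier score is the mean squared distance between forecast probability and outcome. A rough sketch of the simple binary form, with names of my own invention; GJO's exact convention may differ, since the original Brier score sums over both outcome categories, which doubles this value for binary questions:)

```python
def brier_score(predictions):
    """Mean squared error between forecast probability and outcome.

    predictions is a list of (p, outcome) pairs: p is the probability
    given to the proposition being true, outcome is how it settled.
    Lower is better; always answering 50% scores 0.25.
    """
    return sum((p - (1.0 if outcome else 0.0)) ** 2
               for p, outcome in predictions) / len(predictions)

brier_score([(0.9, True), (0.2, False)])  # (0.01 + 0.04) / 2 = 0.025
```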
I need to take a good look over what GJO has to offer here. I'm not sure running a challenge for score on it would meet the goals well: I think the contest needs to be bounded in the amount of predicting it requires in order to motivate entry, yet not gameable by just doing easy questions, and I'd like to be able to see the probability assignments on specific questions. But I've not looked at it closely with this in mind. I should at least be able to crib a few questions, or more.