So, I've been thinking about prediction markets and why they aren't really catching on as much as I think they should.
My suspicion is that (besides Robin Hanson's signaling explanation, and the amount of work it takes to reach the large numbers of predictors where the quality of results becomes interesting) the basic problem of prediction markets is that they look and feel like gambling. Or at best like the stock market, which for the vast majority of people is no less distasteful.
Only a small minority of people are neither disgusted by nor terrified of gambling. Prediction markets right now are restricted to this small minority.
Poker used to have the same problem.
But over the last few decades, Poker players have established that Poker is (also) a sport. They kept repeating that winning isn't purely a matter of luck, they acquired the various trappings of tournaments and leagues, and they developed a culture of admiration for the most skillful players that pays in prestige rather than only money and makes it customary for everyone involved to show their names and faces. For Poker, this has worked really well. There are many more Poker players now, more really smart people are deciding to get into Poker, and I assume the art of the game has probably improved as well.
So we should consider re-framing prediction the same way.
The calibration game already does this to a degree, but sport needs competition, so results need to be comparable, so everyone needs to make predictions on the same events. You'd need something like standard cards of events that players place their predictions on.
Here's a fantasy of what it could look like.
- Late in the year, a prediction tournament starts with the publication of a list of events in the coming year. Everybody is invited to enter the tournament (and maybe pay a small participation fee) by the end of the year, for a chance to be among the best predictors and win fame and prizes.
- Everyone who enters plays the calibration game on the same list of events. All predictions are made public as soon as the submission period is over and the new year begins. Lots of discussion of each event's distribution of predictions.
- Over the course of the year, events on the list happen or fail to happen. This allows for continually updated scores, a leaderboard and lots of blogging/journalistic opportunities.
- Near the end of the year, as the leaderboard turns into a shortlist of potential winners, tension mounts. Conveniently, this is also when the next tournament starts.
- At New Year's, the winner is crowned (and I'm open to having that happen literally) at a big celebration, which is also the end of the submission period for the next tournament and the revelation of what everyone is predicting for the next round. This is a big event that happens to fall on a holiday, when more people have time for big events.
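The continually updated leaderboard needs an agreed scoring rule. The post doesn't fix one, so as an assumption this sketch uses the Brier score (squared error between stated probability and outcome, lower is better); the event names and data layout are made up for illustration:

```python
# Sketch of tournament scoring, assuming the Brier score as the rule
# (squared error between stated probability and the 0/1 outcome; lower is better).

def brier_score(prediction: float, outcome: bool) -> float:
    """Squared error between a stated probability and the actual outcome."""
    return (prediction - (1.0 if outcome else 0.0)) ** 2

def leaderboard(entries: dict, resolved: dict) -> list:
    """Rank players by mean Brier score over the events resolved so far.

    entries  -- player name -> {event id: predicted probability}
    resolved -- event id -> whether the event happened
    """
    scores = {}
    for player, predictions in entries.items():
        resolved_scores = [brier_score(p, resolved[event])
                           for event, p in predictions.items()
                           if event in resolved]
        if resolved_scores:  # only rank players with at least one resolved event
            scores[player] = sum(resolved_scores) / len(resolved_scores)
    return sorted(scores.items(), key=lambda item: item[1])

# Illustrative data: one event resolved mid-year, one still open.
entries = {
    "alice": {"rain-in-june": 0.7, "gdp-up": 0.6},
    "bob":   {"rain-in-june": 0.3, "gdp-up": 0.9},
}
resolved = {"rain-in-june": True}
print(leaderboard(entries, resolved))  # alice leads: lower mean Brier score
```

Re-running this as each event resolves gives exactly the rolling leaderboard described above, and the per-event score breakdown is the raw material for the blogging opportunities.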
I think your starting assumptions are false.
Not true. Most financial markets are prediction markets. They seem to be popular.
Not true. In fact, I do not know a single person who would characterize the stock market as "distasteful". Caveat: I don't know many tankies.
Not true. Look at how many people are playing the lotteries, going to casinos, etc.
In general, "prediction market as a sport" is called trading the financial markets. HUGE prizes :-D
That may be technically true, but only in a superficial sense. Stock prices have a very complicated relationship to real-world events, except in the very long term. That's very different from markets like https://www.predictit.org, which have clear connections to questions like who will win an election, with objective resolution criteria.
First, I didn't say "stocks", I said "financial markets".
Second, all markets, prediction included, have a complicated relationship to real-world events. The markets strongly react to some, weakly react to others, and ignore the great majority of them.
I think you're trying to say that financial markets ignore some events you're interested in. That's a fair point, but it also applies to all markets.
My favorite word of the day!
Currently, people who don't want to gamble for money can take part in prediction-making at https://www.gjopen.com and Metaculus. Your alternative suggestion is to have all the prediction-making happen in a shorter time frame during an event. A special event can bring people together, but most games of Poker don't happen in a tournament setting.
Poker has the advantage of being playable casually, with short feedback loops. Adrenaline rises while you play, and you don't have to wait a year to see whether or not you win.
I think we need games like the credence game that have tight feedback loops. One of the core problems of the credence game is that most of the questions aren't very interesting.
I can think of a few ways to get better questions:
① Pull facts from Wikidata and generate questions from them. Test every question in the database for player engagement and slowly optimize your way to highly engaging questions.
② Use real-world data. Sensors like heart rate monitors provide interesting data that can be predicted. Weather APIs provide interesting data. Time-tracking software could ask me questions such as whether I spent more or less than 3 hours on Facebook in the last week.
③ Tie predictions to common tasks with binary outcomes. Every time I run my automated tests, I could be asked to predict whether the test suite will find an error.
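Item ③ could be as small as a wrapper around the test command: state a failure probability, run the suite, and log the score. A hypothetical sketch — the function name, log format, and Brier scoring are all my assumptions, not an existing tool:

```python
# Hypothetical sketch of tying predictions to test runs: record a stated
# probability that the suite will fail, run the tests, then append the
# Brier (squared-error) score to a log file. Names and format are illustrative.
import json
import subprocess
import time

def predicted_test_run(command: list, p_failure: float,
                       log_path: str = "prediction_log.jsonl") -> bool:
    """Run the test command, score the stated failure probability, log it."""
    failed = subprocess.run(command).returncode != 0
    score = (p_failure - (1.0 if failed else 0.0)) ** 2
    with open(log_path, "a") as log:
        log.write(json.dumps({"time": time.time(),
                              "p_failure": p_failure,
                              "failed": failed,
                              "brier": score}) + "\n")
    return failed

# Usage: predict a 20% chance of failure before running the suite, e.g.
# predicted_test_run(["pytest", "-q"], p_failure=0.2)
```

Over weeks the log file becomes a personal calibration dataset with exactly the tight feedback loop described above.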
Of course finding ways to actually drive economic decisions with predictions like http://lesswrong.com/lw/oe0/predictionbased_medicine_pbm/ is also important.
Oh, absolutely. But the big events are great PR and lead to lots of private games. So if there's a big event for prediction I think that helps people start to think in probabilities and conscious handling of prediction on other occasions.
That would be cool. I have a hard time imagining them though. Maybe a group of players could watch a video together, pause every couple of minutes and place percentages on a set of predictions on what happens next?
As long as you have an existing set of questions whose answers are known but unknown to the participants of the game, you can have instant feedback.
Public knowledge that you can find on Wikidata works for an offline tournament. An online tournament can instead use data from nonpublic experiments; the CASP tournament for protein structure prediction uses that method. For our purposes, I think surveys make good experimental data.
But in that case, it isn't really about prediction anymore. A game like that rewards knowledge, not the ability to do research and deal with probabilistic information.
Someone who has read a lot of Wikipedia, or who happens to have read papers on topics similar to the experiment in question, could outperform someone who constructs predictions very rationally but from a different set of domain knowledge facts. This makes it closer to a quiz show, i.e. a less original and less interesting event.
A slow, online tournament (where everyone has the same internet to do research in) greatly reduces the value of blunt knowledge and makes success more dependent on the ability to weigh evidence.
I'm not sure why you consider quiz shows to be uninteresting. It's quite a successful format when it comes to gathering an audience.
I don't know why you think that. Quiz shows need huge production values and very valuable prizes to still be interesting.
With the kind of budget that's conceivable for a startup group of amateur organizers, you have to be novel and creative to be found worth noticing outside the immediate circle of participants. Sure, you could run a quiz show on a shoestring budget, but nobody is going to talk about it afterwards.
And since this is about reaching people with the ideas of thinking in probabilities and updating on evidence, anything that doesn't get talked about afterwards is a failure, even if the event itself was entertaining.
I hold that opinion because a variety of quiz shows are commercially successful. I think most successful entertainment provides experiences with short feedback loops.
I don't see how the event you propose is about updating on evidence. Updating on evidence in the sense it was done in the Good Judgement Project needs longer time frames than a tournament of a few days.
I see that the offline model doesn't let people compete on research abilities, but competing on calibration still gives you an event that's about probabilities. It has the advantage that players can make a lot more predictions in a short time frame, and it's less likely that the tournament gets won by lucky, overconfident participants.
A 2-day event where people do 1 hour research per question likely doesn't give you a dataset that allows you to pick a winner based on skill.
To reach a level of entertainment as close as possible to that of poker tournaments, you should consider building a prediction game with at least two characteristics:
1) Restricted to low-stakes events (the consequences of the outcomes are minimal and affect only a small set of parties; ideally within an artificial environment rather than real-world events).
2) The timeframe should be reduced to a single day.
In that type of scenario, I think it is hard to avoid a situation where domain knowledge dominates rational handling of evidence. It might be better, I don't know. Could you describe in more detail what kind of game you're imagining?
I'm slightly biased against it because this seems even more like gambling. Specifically, like sports betting. And as soon as you involve prizes, it'll be hard to avoid being subject to gambling legislation.
Before releasing the results of the LW survey, we might have a tournament about predicting the outcome of many survey questions.
We essentially have this already occurring in the form of fantasy football leagues, which itself has gone from basically being gambling to basically being an e-sport. If you haven't considered it already, perhaps you should look into some of the ways that the NFL is making use of fantasy football for both marketing and information gathering purposes.
It's obvious - we need buzzfeed to create a "which celebrities will get divorced this year" quiz (with prizes?). There is no way people will be interested in predicting next year's GDP.
Yes, rationality could use more competitive tests of skill. I'm not sure about having them take a year, though. Have you thought about ways to make it faster?
A simple fix would be to include predictions about particular parts of the year. For example, have a bunch of predictions about each quarter, on top of the ones about the whole year. And then you could have an extra prize for each quarter where you score only that subset of predictions.
You could easily go more short term, like "what are your predictions for September", but I think this requires more participation and work from everyone so maybe it would be better as a second step that you do only if the relatively relaxed yearly tournament has turned out to be cool.
There are way too many "shoulds" in this post. If anyone can have fun predicting important events at all, then it would probably be people in this forum. Can we make something like this happen? Would we actually want to participate? I'm not sure that I do.
I'd definitely want to participate, and looking at the yearly predictions SSC and others do, I'm surely not the only one.
But someone would have to set it up, run it and advertise it. You don't even strictly need to write software for it. It could be done on any forum, as a thread or series of threads. It could be done here, if this place wasn't so empty nowadays.
Well, I can imagine a post on SSC with 5 statements about the next week, where other users would reply with probabilities of each becoming true, and arguments for that. Then, after the week, you could count the scores and name the winners in the OP. It would probably get a positive reaction. Why not give it a try?
I'm not sure what the 5 statements should be though. I think it must be "next week" not "next year", because you can't enjoy a game if you've forgotten you're playing it. Also, for it to be a game, it has to be repeatable, but if you start predicting the most important events of the year, you'll run out very fast. On the other hand, weekly events tend to be unimportant random fluctuations. I think that's a big problem with the whole idea.
One possible solution could be to do experiments rather than predict natural events, i.e. "On day X I will try to do Y. Will it work?".