# 51

Betting markets are the gold standard of expert predictions because bets are the ultimate test of what people truly believe.

The best betting markets are highly liquid. A liquid market is one where you can place a large bet without moving the price very much. Liquid prediction markets work well when no individual person can influence the outcome. Betting markets are a great way to find out whether "it will rain tomorrow" or whether "a particular candidate will be elected president next year".

But what if a single person can influence the outcome? For example, what would happen if I created a betting market for "Lsusr will publish a blog post tomorrow"?

Suppose I am indifferent about whether I will publish a blog post tomorrow. If the price of "Lsusr will publish a blog post tomorrow" drops below 1.00 then I will buy shares of "Lsusr will publish a blog post tomorrow" and then pocket a risk-free profit by posting a blog post tomorrow. If the price of "Lsusr will publish a blog post tomorrow" rises above 0.00 then I will buy shares of "Lsusr will not publish a blog post tomorrow" and then pocket a risk-free profit by not posting a blog post tomorrow.
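A minimal sketch of this arbitrage, assuming a standard binary market where a YES share costs the quoted price and pays \$1 if the event happens (the function name is mine, for illustration):

```python
# Sketch of the arbitrage available to someone who controls the outcome.
# Assumes a binary market: a YES share costs `price` and pays $1 if the
# event happens, $0 otherwise; a NO share costs (1 - price) symmetrically.

def guaranteed_profit(price: float, shares: int, will_publish: bool) -> float:
    """Risk-free profit for the author, who can force the outcome."""
    if will_publish:
        # Buy YES at `price`, then publish: each share pays out $1.
        return shares * (1.0 - price)
    # Buy NO at (1 - price), then don't publish: each NO share pays out $1.
    return shares * price

# Any price strictly between 0 and 1 offers a profit in one direction or
# the other, so the controller always has a risk-free trade available.
print(guaranteed_profit(0.99, 100, will_publish=True))
print(guaranteed_profit(0.01, 100, will_publish=False))
```

At a price of 0.99, forcing the outcome nets only a penny per share; the closer the price sits to certainty, the smaller the remaining arbitrage.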

This equilibrium arises even if I am unaware that the prediction market exists. Suppose the price drops to 0.99. A trader could buy shares and then pay me a small fee to influence the outcome.

Of course, I am not truly indifferent. Suppose I'm willing to influence the outcome in exchange for \$500. What happens? If the market liquidity is less than \$500 then we have a functional prediction market. If the market liquidity is more than \$500 then we have a regular market.

Prediction markets function best when liquidity is high, but they break completely if the liquidity exceeds the price of influencing the outcome. Prediction markets function only in situations where outcomes are expensive to influence.
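The condition above can be written out as a toy classifier (the \$500 figure is the author's hypothetical price for influencing the outcome; the function name is mine):

```python
# Toy check of the liquidity-vs-influence-cost condition described above.

def market_type(liquidity: float, cost_to_influence: float) -> str:
    """Classify a market on an outcome someone can influence."""
    if liquidity < cost_to_influence:
        # Not enough money at stake to pay for manipulation:
        # prices can still reflect honest forecasts.
        return "functional prediction market"
    # Profits from manipulation exceed its cost: the price now reflects
    # who is willing to pay to cause the outcome, like an ordinary market.
    return "regular market"

print(market_type(liquidity=100, cost_to_influence=500))
print(market_type(liquidity=10_000, cost_to_influence=500))
```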

## Comments

So... what I'm getting is that prediction markets will be just as annoying but necessary to police for insider trading as the stock market? Alas.

Not exactly. You do want people who have insider knowledge to contribute (say, Lsusr's friend, who knows him well and can better guess whether he will post). But you don't want people abusing their influence (rather than knowledge) to buy and then tip things away from what the rest of the market thinks will happen, or trying to make sure something does happen just because they bet on it already.

Yep.

> Prediction markets function best when liquidity is high, but they break completely if the liquidity exceeds the price of influencing the outcome. Prediction markets function only in situations where outcomes are expensive to influence.

There are a ton of fun examples of this failing.

I think one of the strengths of a prediction market is that they ARE markets.  Your participation in a market you control (by buying "will post" shares) makes it more accurate, just like any other good predictor.  And if someone knows you have this behavior, they can use their own capital to keep the price artificially high, reducing your incentive to post.

People who want things to be "fair" hate this.  People who want to maximize information love it.

> People who want to maximize information love it.

If by "maximize information", you mean "make the world more predictable", I think this is wrong and sometimes the exact opposite of right. When all of the outcomes are equally cheap to boost, prediction markets incentivize increasing the likelihood of the least likely outcome (because this is where you get the best odds), making the world less predictable.

"equally cheap to boost" is explicitly NOT the result of markets.  The true prediction is cheapest (in that it's profitable).  In cases where the market can influence the outcome, it's cheapest to encourage the outcome that actually wins (i.e. truth), and it's cheapest when there is less weight against it.

The key to markets is that "weight" of prediction/vote/influence is inversely proportional to risk/cost of failure.   It costs a lot to move the market significantly, so it better be worth it.

Suppose a prediction market predicts 1% probability that I will make a post tomorrow. In that case, I can earn 100x returns by buying some bets and making the post anyway. So I buy a bunch of bets - but that of course changes the odds, and so changes my incentives. As it reaches 50% probability, I can only earn 1x returns by buying some bets, which is still something, but I might hold off on buying more, since I've already earned the vast majority of the possible value, and something might come up tomorrow which makes it impractical for me to write a post.

Suppose people pick up on the fact that the odds have risen, and assume that therefore I've participated in my own market and conclude that I will be highly incentivized to write a post, so therefore they bid it up to 99% probability. In that case, I can earn money by selling my bets, bringing it back down to 50% probability and leaving me in basically as good a position as if I had just written the post, but eliminating my incentives to write the post.

In general, whenever there is a very low-probability event that some person in the market can influence, they can get extremely good odds by buying into the market, which gives them an incentive to cause that low-probability event to happen. However, the odds worsen as they bid it up, so they are generally only weakly incentivized to make it ~100% likely to happen. Instead, they are merely incentivized to cause chaos, to make low-probability events maybe happen.
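The commenter's payoff math can be sketched directly: the profit multiple for forcing an outcome you bet on shrinks as you bid the implied probability up (the function name is mine, for illustration):

```python
# Profit per dollar staked on YES at implied probability p, for someone
# who can then make the event happen. At p = 0.01 the multiple is ~99x;
# by p = 0.50 it has fallen to 1x, so the incentive to keep buying
# (and to keep forcing the outcome) weakens as the price rises.

def profit_multiple(p: float) -> float:
    """Profit multiple from buying YES at price p and forcing the event."""
    return (1.0 - p) / p

for p in (0.01, 0.10, 0.50, 0.99):
    print(f"buy at {p:.0%}: {profit_multiple(p):.2f}x profit per dollar")
```

This is why the incentive is strongest for the least likely outcomes: the multiple diverges as p approaches zero, but flattens out well before certainty.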

The problem underlying this is lack of liquidity in the specific market.  When one or a few participants can cheaply have outsized impact, it's not a very functional market.

I don't see how the example changes when you add liquidity. Could you clarify, e.g. by tracing out a modified example?

What? What is the "true prediction"?

> In cases where the market can influence the outcome, it's cheapest to encourage the outcome that actually wins

This looks like a recursive definition, with no base case.

Isn't this Moldbug's argument in the Moldbug/Hanson futarchy debate?

(Though I'd suggest that Moldbug would go further and argue that the overwhelming majority of situations where we'd like to have a prediction market are ones where it's in the best interest of people to influence the outcome.)

Doesn't that argument prove too much?

To some extent this is because our definition of 'function' is somewhat more complicated than 'predict well'.

If by 'function' you mean 'successfully predict an outcome', the example above is great! We started out unsure if you would publish a blog post, then you entered the market, and now we are certain of the result! Hooray!

But in practice we have a fuzzier definition of 'function' along the lines of 'predict the outcome as accurately as you can without actually affecting it', and prediction markets suffer an AI-alignment-esque issue in pursuing this goal.

> But in practice we have a fuzzier definition of 'function' along the lines of 'predict the outcome as accurately as you can without actually affecting it', and prediction markets suffer an AI-alignment-esque issue in pursuing this goal.

Part of the issue is that "predict the outcome as accurately as you can without actually affecting it" is a causal concept, but prediction markets can't do anything about causality because they can't model counterfactuals such as what would have happened if someone was not allowed to make the bet. This makes it a fundamentally unsolvable problem if one is restricted to purely mechanical rules like those of prediction markets.

> If by 'function' you mean 'successfully predict an outcome', the example above is great! We started out unsure if you would publish a blog post, then you entered the market, and now we are certain of the result! Hooray!

Note that this doesn't work in general; see my response to Dagon.

I think this is a special case of the more general fact that probabilities are for outcomes beyond our control.

So suppose there is a desired outcome, like lsusr writing posts. How can that be incentivized?

Or to put it another way, how can accuracy (and greater yields) both be incentivized?

> So suppose there is a desired outcome, like lsusr writing posts. How can that be incentivized?

You buy shares in the outcome you don't want.

In most cases, there are more direct ways to pay for what you want.  In cases where the controller is obscured or partial, you can look at encouragement and hedging as related: you want to make bets that make the dispreferred outcome less painful for you.

> Betting markets are the gold standard of expert predictions

Citation needed.