Given that prediction markets currently don't really have enough liquidity, saying 'you need 1000x more liquidity to try to entice traders into putting work into something that can only pay off 0.1% of the time' does in fact sound like something of a flaw.
Thanks, I should add this point to the post: providing 1000x more liquidity 0.1% of the time should cost only a little more than 1x. There would obviously be companies providing this service; it's straightforward, uncorrelated insurance.
You can anti-correlate it by running 1000 markets on different questions you're interested in, and announcing that all but a randomly chosen one will N/A, so as to not need to feed an insurer. This also means traders on any of your markets can get a free loan to trade on the others.
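As a sketch of that batching structure (market mechanics omitted, all names hypothetical), you run the 1000 markets, pick one uniformly at random to actually resolve, and N/A the rest:

```python
import random

def resolve_batch(n_markets=1000, seed=None):
    """Run n_markets conditional markets on different questions and announce
    that all but one randomly chosen market will resolve N/A (its trades
    reverted). Only the resolution structure is modelled here."""
    rng = random.Random(seed)
    live = rng.randrange(n_markets)   # the single market that actually resolves
    return ["RESOLVE" if i == live else "N/A" for i in range(n_markets)]

outcomes = resolve_batch(seed=0)
```

Since every trader knows exactly one market will resolve, capital committed across all 1000 markets is at risk in only one of them, which is the "free loan" point above.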
Just for the record, Dynomight proposed this back in 2022: https://dynomight.net/prediction-market-causation/#commit-to-randomization. (I assume that the idea has been around for longer.)
(Also I would phrase it as being able to use the same money to trade on all 1000 of the markets at once. I think that is equivalent to your free loan.)
The post does indeed propose implementing the do() operator this way, but I don't think it proposes running 1000 markets on different questions and choosing only one not to N/A, so that the cost of providing liquidity or of trading doesn't increase due to that structure?
Here are the relevant quotes:
- Gather proposals for a hundred RCTs ...
- Randomly pick 5% of the proposed projects, fund them as written, and pay off the investors who correctly predicted what would happen.
- Take the other 95% of the proposed projects, give the investors their money back, and use the SWEET PREDICTIVE KNOWLEDGE [to take useful actions]
Other than the difference in the portion of the markets you run (1/20 vs 1/1000), this is equivalent.
(It does not discuss liquidity costs, just the randomization as a way to avoid having to take many random actions.)
This really works to make CDT decisions! Try thinking through what the market would do in various decision-theoretic problems.
I thought through whether it works in Newcomb’s problem and it was unexpectedly complicated and confusing (see below) and I now doubt that it always recovers CDT in Newcomblike problems. I may have done something wrong.
Newcomblike problems are obviously not the reason we want causal decision markets. But more generally I’m realising now that the relationship between evidential and causal decision markets is quite different from the relationship between EDT and CDT as normally conceived. EDT and CDT agree with each other in everyday problems and only disagree in Newcomblike problems, whereas evidential and causal decision markets disagree in everyday problems.
So perhaps ‘EDT’ and ‘CDT’ are not good terms to use when talking about decision markets.
Consider Newcomb’s problem. Say I make markets for “If I twobox, will I get the million?” and “If I onebox, will I get the million?” and follow the randomisation scheme. In the nonrandomised cases, I should twobox if $1,000 + $1,000,000·p1 > $1,000,000·p2, where p1 is the first market probability and p2 is the second. That is, if p2 - p1 < 0.001.
Let’s assume omega can’t predict when I’ll randomise or what the outcome will be, and so always fills the boxes according to what I do in the nonrandomised cases.
If p2 - p1 < 0.001, then I’ll twobox in the nonrandomised cases, so the box will be empty even in the randomised cases, so both the twobox and the onebox markets will resolve NO, and so both p1 and p2 should be bet down.
If p2 - p1 ≥ 0.001, then I’ll onebox in the nonrandomised cases, so the box will always be full, and both the twobox and onebox markets will resolve YES, and so both p1 and p2 should be bet up.
I think the only equilibrium here is p1 = p2 = 0, in which case I’ll twobox and in the randomised cases both markets will correctly resolve NO. That does agree with CDT in the end but it’s kind of a weird way to get there. Not sure if it generalises.
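A minimal sketch of that equilibrium argument (function name and price encoding are hypothetical, and it assumes, as above, that Omega fills the box based on the nonrandomised decision):

```python
def best_response(p1, p2):
    """Given prices p1 = P(million | twobox) and p2 = P(million | onebox),
    return the prices that would be correct given the decision those
    prices induce."""
    twobox = 1000 + 1_000_000 * p1 > 1_000_000 * p2   # CDT rule: twobox iff p2 - p1 < 0.001
    box_full = not twobox                              # Omega rewards oneboxing
    resolved = 1.0 if box_full else 0.0                # both markets resolve the same way
    return resolved, resolved

# (0, 0) is the fixed point: prices say twobox, box is empty, both resolve NO.
# Prices with p2 - p1 >= 0.001 imply oneboxing and push both markets to 1,
# but (1, 1) again favours twoboxing, so it is not stable.
```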
Alternatively we could assume that omega can predict the randomisation. But then the twobox market will be at 0% and the onebox at 100%, so I would onebox.
In the academic literature, this sort of scheme has been analyzed by Chen et al., e.g.: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/04/TEAC-final1.pdf
Thanks for the link, but (having only skimmed it, so maybe I missed it) I don’t think the paper analyzes this sort of scheme? It says that you need to have at least some randomness so that options are explored, but this is somewhat orthogonal to my claim (that you might want to cancel the market 99.9% of the time and take a random decision which is not informed by the market 0.1% of the time to make the market predict the causal consequences of your decision via implementing the do() operator this way).
I would be curious if any literature actually analyzes the type of scheme that uses policy markets to implement CDT instead of EDT.
Yes but this decreases traders' alpha by 99.9%, right? At least for traders who are constrained by number of markets where they have an edge (maybe some traders are more constrained by risk or something).
I don't understand the footnote.
In 99.9% of cases, the market resolves N/A and no money changes hands. In 0.1% of cases, the normal thing happens.
What's wrong with this reasoning? Who pays for the 1000x?
Thanks, I'll update the footnote.
Imagine you want to spend $10k on subsidizing a market. (One way you can do that is to use Manifold-like AMM with a liquidity subsidy.)
For a trader, it doesn't make sense to spend an hour of their time for a 0.1% chance of winning even the whole $10k.
So you might want to provide $10m of subsidy that you get back if the market resolves to N/A.
Paying $10m with p=0.1% would naturally be a service costing $10.01k or something.
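The arithmetic behind that price, as a quick sanity check (the $10 margin is illustrative, not from the thread):

```python
subsidy_if_live = 10_000_000                  # $10m posted, paid out only if the market resolves
expected_payout = subsidy_if_live // 1000     # 0.1% chance of not N/A-ing -> $10k in expectation
margin = 10                                   # hypothetical insurer fee on top of expected cost
price = expected_payout + margin              # the "$10.01k or something" above
```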
Markets usually take some time to resolve, and money has a time value. Paying only $10 seems incredibly cheap for tying up a million dollars for even one day, and cheaper still when you consider any of the possible risks of putting $1M into a market that claims to resolve N/A with 99.9% chance.
I’m confused. What in your mind prevents a service that just takes a million clients (each with a $10k->$10m at 1/1000 market) and earns $10m by taking $10.01k from each of them and paying out on average $10k to each of them?
Like, no one needs to tie up any money; you should be able to just pay someone $10.01k for them to pay out $10m with 0.1% chance.
(Obviously randomization can be run on the exchange level in some very transparent way so that as a market participant you don’t have to unusually trust anyone.)
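A back-of-the-envelope check on the pooling claim, using the $10.01k fee and $10m payout discussed above (the variable names are mine):

```python
import math

n_clients = 1_000_000
payout = 10_000_000                 # $10m owed if a client's market doesn't N/A
fee = 10_010                        # $10.01k charged per client
expected_live = n_clients // 1000   # 0.1% of the markets actually resolve

revenue = n_clients * fee
expected_cost = expected_live * payout          # $10bn in expected payouts
profit = revenue - expected_cost                # the $10m the service earns

# Fluctuation in the number of live markets (binomial), relative to its mean:
std_live = math.sqrt(n_clients * 0.001 * 0.999)
relative_spread = std_live / expected_live      # a few percent, shrinking as 1/sqrt(n)
```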
A trader will probably want considerably more than 1000x payout if the probability goes down by 1000x, right?
Another thing I'm not sure this gives us, which we might want: P(X | do(some difficult action)).
E.g. P(AI safety movement grows | do(I write the most highly rated Narnia fanfic)).
We could separate it into P(AI safety movement grows | the fanfic gets written) and P(the fanfic gets written | do(I attempt to write it)), but I'm pretty sure that's not equivalent (e.g. maybe the movement is more likely to grow in worlds where writing the fic is hard).
Not sure how useful it is to have this - maybe we more often prefer to elicit P(X | do(attempt some difficult action)).
Yep, it doesn't give us P(X | do(event that involves randomness)). The two are indeed not equivalent.
This only lets you get causal probabilities for actions performed by people who opt in. Idk if that's a problem for futarchy but it makes it more limited than I think we'd like. E.g. I'm worried Apple might sue me for something, so I want a market for P(Apple wins lawsuit | do(they sue me)), and I can't commit them to following this scheme.
My intuitive reaction is
With 0.1% chance, transparently take a random decision among available decisions (the randomness is pre-set and independent of specific decisions/market data/etc.)
As a note, I think this doesn't need to be uniformly random. So if there's a decision that you a priori think is a terrible idea, you can downweight it in the random choice, as long as the market prices don't affect that.
Uniform randomness is easier to think about here and has fewer problems with correlations (though the choice of the set of available actions can still be gamed by the market creator, for example by splitting an action into two to get a better outcome in the 0.1% case). An algorithm doing some reasoning and then deciding on the weights the actions get in the 0.1% case might break the do() operator.
Ideally, you’d maybe want to have some independent body that assigns weights to actions relative to some description length, so that the market creator’s choice of how fine-grained some actions are also doesn’t really break the do() operator.
Why not?
Mostly "priors on this kind of thing".
(I might be able to get something more specific but that comment won't come for a week minimum, if ever.)
Assuming the 99.9% / 0.1% trick does work and there are large numbers of markets to compensate for the small chance of any given market resolving, what would be the defense against actors putting large bets on a single market with the sole intent of skewing the signal? If the vast majority of bets are consequence-free, it seems:
(1) the cost of such an operation would be comparatively cheaper, and
(2) the incentive for rational profit-seeking traders to put enough volume of counter-bets to "punish" that would be comparatively smaller,
than in a regular (non-N/A resolving) market.
There’s correspondingly more liquidity subsidies on these markets, which makes the consequences the same in expectation (i.e., others would love to eat your free money by correcting the attempted manipulation just as much as they would on normally structured decision markets). Everyone just makes bets 1000x larger than they normally would.
Yes, in expectation. But you're adding a lot of variance. I thought the stochasticity of the punishment affects its effectiveness, and that's a tradeoff you make to get the causal structure.
(A response to this post.)
If you use prediction markets to make decisions, you might think they’ll generate EDT decisions: you’re asking for P(A|B), where you care about A, and B is something like “a decision … is taken”.
Okay, so say you want to use prediction markets to generate CDT decisions. You want to know P(A|do(B)).
There’s a very simple way to do that:
- With 99.9% chance, resolve all the conditional markets N/A[1] and make your decision however you like (e.g., using the market data).
- With 0.1% chance, transparently take a random decision among available decisions (the randomness is pre-set and independent of specific decisions/market data/etc.) and let the markets resolve based on what happens.
Now, 99.9% of the time, you can freely use market data to make decisions without impacting the market! You have screened off everything upstream of the decision from the market. All you need for that is to make a random choice every now and then.
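A minimal sketch of the decision rule (names hypothetical; market mechanics omitted):

```python
import random

def decide(options, market_price, rng=None):
    """99.9% of the time, N/A the conditional markets and decide however you
    like (here: follow the highest-priced option); 0.1% of the time, take a
    uniformly random action and let the markets resolve based on its outcome."""
    rng = rng or random.Random()
    if rng.random() < 0.001:
        return rng.choice(options), "RESOLVE"    # randomised case: the do() happens
    best = max(options, key=lambda o: market_price[o])
    return best, "N/A"                           # market data is safe to use: no feedback loop

action, resolution = decide(["A", "B"], {"A": 0.3, "B": 0.7}, rng=random.Random(0))
```

Because traders only get paid in the randomised branch, prices can't profit from predicting (or influencing) the decision you make in the 99.9% branch.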
This really works to make CDT decisions! Try thinking through what the market would do in various decision-theoretic problems.
I claim that real markets actually do the exact same thing.
In the limit, you can imagine that when Tim Cook hires an awful designer, there’s no incentive for the markets to go down: if they went down at all, Tim Cook would notice and fire the designer.
But in reality, people and institutions are sometimes random! If there’s a chance Tim Cook doesn’t listen to the markets, the markets should go down at least a tiny bit, to reflect that probability.
When Trump announced the tariffs, real markets showed only something like 10% of the reaction they would have if they thought the tariffs were to stay in place, because the markets correctly anticipated that the government would react to the market change.
But also, they went down, a lot, because they thought Trump might ignore them and not go back on the tariffs.
Even if you’re lower variance than Trump, to the extent you can be modeled well as mostly making a good choice but sometimes taking random decisions, the difference in conditional markets would already reflect at least some of the causal difference between different actions you can take.
And if that’s not enough, you can explicitly simulate the do() operator using the scheme above. It requires you to provide more liquidity, for traders to want to trade despite the overwhelming chance the market will resolve N/A[2]; yet it enables you to make decisions based on the markets estimating only the causal effects of your actions, and not anything that might merely correlate.
(This is not new; related ideas have been proposed in 1, 2, and by many people in personal conversations. The Trump and Apple examples were offered in a conversation with Nick Decker in a somewhat similar context.)
(I want to mention that I believe I know of two ways to structure prediction markets to generate FDT decisions, though I wasn’t able to come up with a single real-life situation where that could possibly be helpful, so I consider it to be the realm of agent foundations, where AIs might do something like that internally, rather than futarchy.)
Cancelling the market and reverting all transactions. This operation is not strictly necessary for running conditional prediction markets (instead of running a market on P(A|B) you could run a single market on all outcomes of A and B), but makes them a lot more straightforward.
Imagine you want to spend $10k on subsidizing a market. (One way you can do that is to use Manifold-like AMM with a liquidity subsidy.)
For people at a trading firm, it doesn't make sense to spend an hour of their time for a 0.1% chance of winning even the whole $10k.
So you might want to provide $10m of subsidy that you get back if the market resolves to N/A: for the market participants, in expectation, this is $10k of subsidy that pays for information.
Paying $10m with p=0.1% would naturally be a service costing $10.01k or something.
So the cost of providing 1000x liquidity 1/1000 of the time is not noticeably higher than providing 1x of liquidity all the time.