Summary
Epistemic status: quite confident.
Futarchy is bound to fail because conditional decision markets are structurally incapable of estimating causal policy effects once their outputs are acted upon. Traders must price contracts based on welfare conditional on approval, not welfare caused by approval. As a result, decision markets systematically reward traders for exploiting non-causal correlations between policy adoption and latent welfare fundamentals. We can expect futarchy markets to endogenously generate such correlations. Policies that signal strong fundamentals are favored even if causally harmful, while policies that signal weakness are disfavored even if causally beneficial. This effect persists under full rationality, common knowledge, and perfect supporting institutions (welfare metric, courts, legislatures, etc.).
This bias is worst when individual estimates of fundamentals are noisy and dispersed, i.e. where markets should be most useful as information aggregators. The resulting inefficiency cost is paid by the organization being governed, while gains accrue to market participants, making futarchy parasitic on its host. Randomization schemes can recover causal estimates only by breaking the feedback loop between prices and decisions, but doing so either renders futarchy ineffective as a decision-making tool, fails to fix the problem, or collapses it into an influence market where the wealthy can buy policy.
There is no payout structure that simultaneously incentivizes decision market participants to price in causal knowledge and allows that knowledge to be acted upon.
Introduction
Futarchy, invented by Robin Hanson, is a form of governance that uses conditional prediction markets to make decisions. In theory, because markets are excellent at aggregating dispersed, tacit information, futarchy could lead to better decisions than private-business autocracy or democracy, but it has so far failed to gain much traction as a practical decision-making tool. Many concerns about futarchy have been raised over the years, ranging from the difficulty of defining the welfare metric needed to settle the bets, to oligarchy concerns and market manipulation.[1] Today, we will be talking about a more fundamental problem, one that would be sufficient to cripple futarchy by itself.
The problem is that futarchy is based on a fundamental confusion between prediction markets, which have no causal effect on the event they are trying to predict, and decision markets, which do have a causal effect on the event or metric they are trying to predict. While it is generally correct that prediction markets are outstanding institutions for aggregating dispersed predictive information, this effectiveness does not transfer to the ability of decision markets to make good decisions, because causal probabilities and conditional probabilities are different game-theoretic objects.
In this article, I intend to prove that:
Futarchy's reliance on conditional probability would lead to systematically suboptimal decision-making relative to causal decision making.
We can expect this to be the default outcome of futarchy, not an edge case.
Randomization schemes, which aim to "fix" futarchy by making it provide causal expected values, destroy futarchy as a decision-making tool.
The reason behind this failure is that rational traders will systematically price information about welfare fundamentals into futarchy decision markets through a "superstition" signaling mechanism. This mechanism persists because it is capital-efficient for market participants: the governed organization pays the cost of bad policies while traders profit from gambling on welfare fundamentals. In that sense, it is parasitic.
Appendix A provides some responses to anticipated questions, while Appendix B is a mathematical formalization of the argument made in the article.
Prior Work
I am not the first to point out that decision markets implement a form of evidential decision theory, in which decisions are made based on what is correlated with favorable welfare rather than what causes favorable welfare. Dynomight wrote a series of thorough articles in 2022-2025 on the inability of decision markets to provide causal welfare estimates, which helped spark my interest in the question. Caspar Oesterheld pointed out that futarchy implements EDT in 2017. Anders_H showed the same result using a toy example in 2015.
However, those articles demonstrate the problem using confounders whose source is external to the market: a trick coin for Dynomight, a geopolitical event for Anders_H, Omega's prediction for Caspar's Newcomb paradox. They use toy examples that could be seen as somewhat convoluted and adversarially constructed.[2] This has allowed Hanson and other proponents of futarchy, while agreeing that confounders are a problem ("decision selection bias" is the term he uses), to consistently answer that the solution is endogenizing the decision within the market as much as possible: "putting the market in charge of decision-making", or "allowing the decision-makers to trade" in advisory markets. Under those conditions, Hanson assures us that decision selection bias is "rare", and we are led to believe those prior adversarial examples are edge cases: futarchy would still work well most of the time. The point of my article is to close those escape hatches right now: those solutions do not work.
Consider a simple example we might call the Bronze Bull problem. Suppose someone submits this proposal to a futarchic decision market: "let's build a massive bronze statue of a bull in Times Square as a prosperity monument. It will cost half a billion dollars and be ten times taller than the Wall Street one". Would this policy be approved?
If we assume that this policy has a slightly negative effect on national welfare, because any tourism or aesthetic benefits fail to cover the construction costs of the statue, a naive futarchist would answer that it would (and should) be rejected. But this is wrong. Even with a negative causal effect on national welfare, a prosperity bull statue could, and I argue would, be approved by a futarchic decision market.
This is because the payout structure of the decision market rewards estimates of welfare W conditional on the market approving the policy, not the causal impact of the policy itself. Approval of such a wasteful confidence-signaling policy signals one thing: the market aggregate believes that economic fundamentals are strong enough that resources can be wasted on prosperity symbols. Conversely, rejecting the policy means that economic fundamentals are so dire that we cannot afford such a waste. The policy's approval is endogenous to the very economic conditions that determine welfare.
Therefore, a market trader would—correctly—estimate that "worlds where the market approves the Bronze Bull" are high-welfare worlds, not because the Bull causes prosperity, but because approval signals underlying confidence and strong fundamentals: E(W|approve the bull) is high. Conversely, "worlds where the market rejects the Bronze Bull", because it is a frivolous waste that we can't afford, are low-welfare worlds: E(W|reject the bull) is low. Result: E(W|approve the bull)>E(W|reject the bull), and the Bronze Bull gets approved despite having a net negative impact on welfare.
Critically, this bias manifests even when traders are rational, use causal decision theory, and know perfectly well that the Bronze Bull actively hurts welfare. The problem is the payout structure of futarchy itself. A trader who ignores the selection effect and tries to price contracts based solely on the Bull's causal effect on national welfare would lose money. If they treat approve-the-bull contracts as less valuable than reject-the-bull contracts, they will either overpay for reject-the-bull contracts that only pay off in low-welfare worlds, or undersell approve-the-bull contracts that pay off in high-welfare worlds.
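To make the selection effect concrete, here is a minimal numeric sketch of the trader's pricing problem. Every number (the prior, the welfare levels, the Bull's causal effect, and the approval superstition) is assumed purely for illustration:

```python
# Minimal sketch of the Bronze Bull pricing problem. Every number here is
# assumed purely for illustration.
p = 0.5                 # prior Pr(good fundamentals)
W_G, W_B = 1.0, 0.5     # baseline welfare in good / bad times
tau = -0.05             # causal effect of the Bull (harmful in both states)
a = 0.9                 # superstition: Pr(approve | good times)

# Posterior beliefs about fundamentals, conditional on the market's decision
pr_approve = a * p + (1 - a) * (1 - p)
pr_G_given_approve = a * p / pr_approve
pr_G_given_reject = (1 - a) * p / (1 - pr_approve)

# What traders must price: welfare conditional on each decision
E_W_approve = pr_G_given_approve * (W_G + tau) + (1 - pr_G_given_approve) * (W_B + tau)
E_W_reject = pr_G_given_reject * W_G + (1 - pr_G_given_reject) * W_B
print(E_W_approve, E_W_reject)  # ~0.90 vs ~0.55: approval wins despite tau < 0
```

Even though the Bull destroys 0.05 units of welfare in every state, welfare conditional on approval beats welfare conditional on rejection, so the market approves it.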
The Bailout Problem
Construction of the Dam, by William Gropper. This art was commissioned as part of the New Deal.
The Bronze Bull shows how a harmful policy can be approved when it signals confidence in fundamentals. But the bias also works in reverse, causing futarchy to reject beneficial policies because they signal weak fundamentals.
Consider the example of deciding whether to pre-emptively pass a bailout/stimulus package when an economic crisis may be looming. Does approving the stimulus package provide enough causal benefit to offset the market wisdom that any stimulus amounts to a confirmation that crisis is right around the corner?[3] Besides the causal effect of the policy, the answer depends on two factors: the strength of the market norm about what rejection and approval mean for underlying welfare fundamentals, and the accuracy of each trader's own estimate of welfare fundamentals based on "off-decision" sources (research, gut feeling, media, anything but decision markets).
When every trader has excellent information about welfare fundamentals, market norms lose some of their informative power. Once everyone knows, with high confidence, that things are going great, "the market picked the bailout" or "the market rejected the bailout" do not provide much additional information about fundamentals. At this point, decision markets do provide a better estimate of the causal effects of each policy. But note that this is a better estimate, not an estimate free from decision selection bias. A rational trader must still consider the possibility that the market decision reveals something about fundamentals, because other traders might know things they do not.
Conversely, when traders have noisy estimates of welfare fundamentals, confidence bias reigns supreme. If no one is quite sure how good things will be in the future, "the market picked the bailout" and "the market rejected the bailout" become extremely meaningful aggregate signals. This leads to an unfortunate conclusion for futarchy: precisely when markets are most helpful as aggregation mechanisms, i.e. when information is dispersed and individual estimates are noisy, decision markets are most vulnerable to endogenous superstitions steering them away from causal decision-making. When information is widely distributed and consensus reigns, decision markets provide better estimates of causal policy effects (but given that consensus reigns, you probably do not need them in the first place!).
This is the crux: under conditions of uncertainty about welfare fundamentals, we can expect futarchy to adopt, on average, systematically worse policies than an organization using causal decision-making. This conclusion stands even if the institutional machinery around it (courts, legislature, agenda setting, defining and measuring welfare) works perfectly.
Endogenous Conditioning and Market Superstitions
It is reasonable to wonder whether confidence bias would be common in practice or whether it would remain a weird edge case. For example, one of Hanson's main lines of defense against "decision selection bias" is an intuition that such conditions are rare and depend entirely on external confounders (e.g., decision-maker psychology) that disappear when we "put the market in charge". I fundamentally disagree. Absent an external source of confounders, a market is entirely capable of generating its own confounders via the beliefs of its participants, and we can in fact expect this failure to be the default outcome.
Consider the Bronze Bull example we just examined. Here, the confounder is the state of unobserved welfare fundamentals, acting on policy via the traders' shared belief about what adoption of the Bull would mean regarding those fundamentals. Because adoption also depends on the behavior of traders, this belief is self-fulfilling, arbitrary, and endogenous to the market itself: it cannot be eliminated easily. If traders believe you only build bulls in good times, they will price good times into approve-the-bull contracts, making approval more likely. If they believe bronze bulls are only approved in desperation, when fundamentals are terrible, they will price bad times into approve-the-bull contracts, making approval less likely. The result is a confidence bias directionally pointing toward adopting whatever policies signal good fundamentals, embedded within futarchy's payout structure.
In either case, the bull is causally harmful, and adoption depends only on arbitrary market folklore, which we might adequately call a superstition. Because the superstition is a coordination point (i.e. the collective belief about what adoption or rejection means), it nevertheless carries valuable information for individual traders. To be precise, a superstition allows market participants to use their capital more efficiently when trying to profit from private information about fundamentals.
Consider a savvy trader who has just learned that future welfare is likely to be low. If adoption has no directional bias from underlying fundamentals, the trader must hedge by trading on both sides of the adoption branch, immobilizing capital on the ultimately rejected branch for the duration of the market, for zero return. This is inefficient.
If a market superstition makes adoption more likely under a specific state of fundamentals, the savvy trader can concentrate his trades on the branch made more likely by his private information. He is rewarded with higher profits than if there were no superstition in the first place. Under this lens, the decisional inefficiencies of futarchy are a parasitic externality of traders using approval as an information channel to trade on welfare fundamentals: the costs to society are diffuse (inefficiency, bad policy), while the benefits are concentrated among informed market participants.
Once a superstition takes hold, there is nothing to arbitrage, which makes it persistent despite being collectively groundless.[4] Economists call this class of problems a sunspot equilibrium. The confidence bias induced by sunspot beliefs can potentially be much larger than the causal impact, depending on what traders collectively believe each option signals about welfare fundamentals.
Can Randomization Rescue Futarchy?
It is often said that the solution to decision selection bias is simple: partial randomization. By breaking the confounder between the selection of the decision and the context of the decision (including the underlying welfare fundamentals), the conditional odds of the decision market contracts should correspond more closely to the causal effects of adopting or rejecting the policy.
This is correct in a technical sense, but it does not rescue futarchy. Hanson and others have mentioned a small randomization fraction, say 5% or 1% of all markets, being decided at the flip of a coin. Sounds reasonable, doesn't it? A modest price to pay for accurate causal decision-making.[5] Futarchists mention two ways to go about this: an ineffective one (randomization after market choice) and a bad one (randomization as the settlement trigger on advisory markets).
Approach 1: Randomizing the Decision (Ineffective)
Let the futarchy decision markets proceed normally (1−ϵ) of the time, with decisions reached according to market prices. A fraction ϵ of the time, upon resolution of the market, the policy is implemented randomly at the flip of a coin.
This method pulls the conditional probability a = Pr(A|G) between approval A and the underlying fundamentals state G toward a pure coin flip:
a_ϵ = a + ϵ(1/2 − a)
Or equivalently:
a_ϵ = (1 − ϵ)a + ϵ/2
Randomization scales the superstition strength (2a − 1) by a factor (1 − ϵ). When adoption is strongly correlated with fundamentals (a → 1), you must randomize a lot, perhaps most of the time, to recover anything more than crumbs of a causal estimate. The 5% randomization fraction mentioned by Hanson would be mostly ineffective.[6]
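The scaling claim is easy to check numerically. Mixing in a coin flip a fraction ϵ of the time only multiplies the superstition strength by (1 − ϵ); the sketch below, with an assumed a = 0.9, shows how little a 5% randomization fraction dents it:

```python
# Sketch of Approach 1: with probability eps, the decision is made by a coin
# flip instead of the market. a = Pr(approve | good fundamentals) is assumed
# to be 0.9 for illustration; "superstition strength" is 2a - 1.
def randomized_a(a, eps):
    """Effective Pr(approve | G) after mixing in a coin flip a fraction
    eps of the time: a_eps = (1 - eps) * a + eps / 2."""
    return (1 - eps) * a + eps / 2

a = 0.9
for eps in (0.01, 0.05, 0.50):
    strength = 2 * randomized_a(a, eps) - 1
    # Identity from the text: the residual strength is (1 - eps) * (2a - 1)
    print(f"eps={eps:.2f}: superstition strength {strength:.3f} "
          f"(was {2 * a - 1:.3f})")
```

Only at ϵ = 1, i.e. pure randomization with no decision-making at all, does the correlation fully vanish.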
Approach 2: Randomizing Settlement (Straight Up Pay-for-Play)
Under this architecture, markets are advisory and do not directly control policy adoption, which is a significant departure from Hanson's pure futarchy proposal. Instead, the conditional prediction markets resolve randomly, according to a coin flip, for a fraction ϵ of bets. The remaining (1−ϵ) of bets are called off, and bettors are reimbursed for their trades. Since markets can only resolve upon random adoption of the policy, E[W|approve] should be priced as E[W|do(policy)]. Congratulations! We should now have causal estimates, which decision-makers can use (1−ϵ) of the time to inform their thinking, while implementing random policy ϵ of the time. If ϵ is small, this should be a manageable cost.
The unfortunate truth is that there is no such thing as a market-derived causal E[W|do(policy)] that one can act on, even indirectly. If decision-makers use the predictions of the market in any regular way (perhaps, let's be bold, by adopting policies whose impact on welfare is higher than the alternative), the market can, and will, price this fact in. We are back to estimating welfare conditional on adoption, just like in regular futarchy, but this time with a payout structure that explicitly rewards market manipulation.
Let's look at a practical example, with a reasonably small ϵ of 0.01. What will welfare be if a government contract is awarded to Pork, Inc. or to Honest Little Guy (HLG), LLC? For the sake of argument, assume that welfare will be higher if the contract goes to HLG, but that Pork, Inc. happens to have deeper pockets. Let's also assume that when the market resolves to N.A. (that is, 99% of the time), the decision-makers pick the policy with the highest price ~80% of the time.
Despite being the worse contractor, if Pork can use its credit to keep its contracts priced higher than HLG's, it stands to profit handsomely. It risks its capital only 0.5% of the time, while being awarded the contract 79.2% of the time, because decision-makers observe and act on market prices even from markets that won't resolve.
Pork's expected gain is:
Gain_Pork = (1 − ϵ)αB − ϵL/2
with α=0.8 the probability that decision-makers select the highest priced decision contract, B the contract payout, and L the amount of capital Pork can commit to market manipulation. Pork can commit up to:
L_max = 2(1 − ϵ)αB / ϵ
That is 160 times the contract payout (!) in manipulation capital, and Pork still ends up in the green. The decision market has stopped being a contest of who is best informed. Instead, it's a contest of who can best deploy capital to influence the thinking of decision-makers, with a lottery-ticket risk of ruin if your trades have the misfortune to execute.
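The manipulation arithmetic above can be reproduced in a few lines. The sketch uses the numbers from the example (ϵ = 0.01, α = 0.8), with the contract payout normalized to B = 1:

```python
# Reproducing the Pork, Inc. example (numbers from the text; the contract
# payout B is normalized to 1, so L is measured in contract payouts).
eps = 0.01    # fraction of advisory markets that resolve via coin flip
alpha = 0.8   # chance decision-makers follow the highest-priced contract
B = 1.0       # contract payout if Pork is awarded the contract

def pork_gain(L):
    """Expected manipulation gain: Pork wins the contract when the market is
    merely advisory, and loses L/2 on average when it resolves randomly."""
    return (1 - eps) * alpha * B - eps * L / 2

awarded = (1 - eps) * alpha               # ~0.792: awarded 79.2% of the time
L_max = 2 * (1 - eps) * alpha * B / eps   # break-even manipulation budget
print(awarded, L_max, pork_gain(L_max))   # ~0.792, ~158.4, ~0.0
```

At the break-even budget L_max ≈ 158.4·B, Pork still loses nothing in expectation; any smaller manipulation budget is pure expected profit.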
What about arbitrage? Let's assume an external arbitrageur who a) has no opportunity for insider profit on either branch and b) knows that HLG is better for welfare than Pork, Inc. To profit from this knowledge, he must bid up HLG using as much capital as Pork, Inc., but he profits only 0.5% of the time. Otherwise, he immobilizes his funds for no payout. Unless the welfare mispricing is on the order of 1/ϵ, no arbitrageur would touch this market with a ten-foot pole. Holding treasuries would be better business.
Providing a better payout for arbitrageurs requires cranking ϵ up, which causes the same problems as Approach 1.
The main takeaway is that there exists no payout structure that simultaneously incentivizes revelation of causal effects, and allows decision-makers to act on those revelations. If market prices influence decisions in any predictable way, rational traders must price in that influence, returning to conditional rather than causal estimates. If prices don't influence decisions, futarchy ceases to be a decision mechanism and becomes a randomized controlled trial (RCT) that you can bet on.
Do We Have Empirical Examples of Decision Markets Failing Due to Decision Selection Bias?
We might, but the evidence is circumstantial. Because futarchy has rarely been implemented at scale, we must rely on evidence from conditional prediction markets (i.e. "what will Y be if X happens?") without direct decision-making power. There is Dynomight's coin experiment, of course, which did succeed in showing that futarchy implements EDT, but it was an adversarially constructed case. However, Ford's internal prediction market program in the mid-2000s included conditional prediction markets, as presented in the paper "Corporate Prediction Markets: Evidence from Google, Ford, and Firm X"[7] by Cowgill and Zitzewitz. This is an empirical, large-scale test performed in good faith by an organization genuinely eager to harness the power of prediction markets.
Ford's conditional "features markets" asked traders whether specific car features would attract consumer interest if they were tested via conventional market research. Because market research is expensive to run, narrowing down the field of features to test using the wisdom of crowds seemed fairly sensible. However, settling the features markets would have exposed valuable information to market participants at large, since it would have revealed quite directly which features Ford tested and how well they did with customers. Ford chickened out halfway into the experiment and turned the whole thing into a Keynesian beauty contest, killing the markets' predictive value. However, before they pulled the plug, here is what the authors observed:
"[Conditional feature] markets were poorly calibrated. Markets trading at high prices were roughly efficient, but those trading at low and intermediate prices displayed a very large optimism bias. Features with securities that traded below their initial price never achieved the threshold level of customer interest, and therefore were always expired at zero, and yet the market appeared to not anticipate this. Subsequent discussions with Ford revealed that these markets included features that were not shown to customers, and that these markets may have been unwound rather than expired at zero."
I have good reason to suspect that the "optimism bias" of "low and intermediate price" securities is simply decision selection bias under another name. Quite straightforwardly, traders believed that if management decided to test a feature at all, it must have some value they might be unaware of, regardless of their own personal feelings about the feature. After all, even if I think an in-car vacuum is a stupid idea, the simple fact that we are testing it in the first place means the idea might not be that stupid. This is limited evidence, but it is consistent with the case I present here.
Conclusion
Prediction markets can either provide accurate causal predictions of policies you cannot act on, or conditional estimates that you can, but should not, act on. There is no secret third way. In the case of futarchy, decision markets will be systematically hijacked to let traders gamble on underlying welfare fundamentals in addition to the causal effects of the policy. This mechanism leads to the systematic adoption of wasteful policies that signal strong fundamentals and the rejection of helpful policies that signal weak fundamentals. Because this signaling operates at the expense of the governed organization, which bears the cost of those harmful policies, and to the benefit of futarchy traders, it fits the definition of parasitism.
Appendix A: Response to Anticipated Objections
Objection 1: What about the various crypto projects that do use futarchy today?
Futarchy may genuinely be well-suited to crypto governance. In crypto, value is reflexive and determined primarily by market sentiment rather than external fundamentals. In such systems, E(W|A) may actually be the correct objective, if signaling confidence is the desired causal effect. When "the market believes the Bronze Bull will pump the coin" causes the pump, then building the Bull genuinely increases welfare. This is generally not true outside of crypto.
Objection 2: You are just proving that Futarchy implements Evidential Decision Theory (EDT) and not Causal Decision Theory (CDT).
This is true. And since EDT is considered a valid decision-theoretic framework by many philosophers, with well-known arguments in its favor from the Newcomb and Smoking Lesion paradoxes, why couldn't futarchy simply be valid under EDT? Because policy is an inherently causal domain. A polity that adopts policies because they are causally beneficial will systematically dominate one that adopts policies that are merely correlated with good fundamentals. The entire edifice of evidence-based science relies on breaking confounders via randomization to calculate the causal effect of interventions. Regardless of whether you are a one-boxer or a two-boxer, you should support causal policymaking.
Objection 3: Wouldn't a CDT arbitrageur be able to exploit the decision selection bias against the market?
No. As we explained in the Bronze Bull section, the problem is inherent to the payout structure of futarchy, not to the rationality or decision theory of market participants. A CDT arbitrageur would lose money under futarchy by over-selling causally harmful policies that get executed in good times (Bronze Bulls) and over-buying policies that are causally beneficial but only pay out in bad times (Bailouts).
Appendix B: Mathematical Model of Decision Selection Bias
Model Setup
Fundamentals and Priors
Let's assume that the world has two possible future states S ∈ {G, B}: good (G) and bad (B), with prior probability p of being good. The respective welfare values in each state are W_G and W_B (with W_G > W_B, since things are better in good times).
Policy Effects
Consider a policy P that, if adopted, adds state-dependent causal effects:
τ_G in good times
τ_B in bad times
The realized welfare in the future state is:
W = W_S + τ_S · 1_A
where A denotes policy adoption and 1_A is the indicator of adoption.
Summary of Variables
S ∈ {G, B}: fundamentals state (good times, bad times)
p ≡ Pr(S = G): prior probability of the good state
W_G, W_B: baseline welfare in each state (W_G > W_B)
τ_G, τ_B: causal policy effects in each state
A: policy adoption event
We assume that adoption is correlated with the state of underlying fundamentals: some policies are more likely in good times, others in bad times (e.g. building Bronze Bulls is more likely in good times, stimulus in bad times). We model the informativeness of the decision about welfare fundamentals as:
Pr(A|G) = a,  Pr(A|B) = b = 1 − a
When a > 1/2, approval is more likely in good times. From this, we can calculate the expected value of rejecting and adopting the policy, and therefore the decision that a conditional decision market will adopt.
Derivation
First, we calculate the probabilities of adopting and rejecting the policy from a and p:
Pr(A) = ap + (1 − a)(1 − p) = (1 − a) + p(2a − 1)
Pr(R) = 1 − Pr(A) = a − p(2a − 1)
We can then calculate the posterior beliefs of market participants about welfare fundamentals after the policy is adopted or rejected using Bayes' formula:
Posterior Given Adoption:
Pr(G|A) = Pr(A|G)·Pr(G) / Pr(A) = ap / [(1 − a) + p(2a − 1)]
Posterior Given Rejection:
Pr(G|R) = Pr(R|G)·Pr(G) / Pr(R) = (1 − a)p / [a − p(2a − 1)]
We can then calculate the expected welfare conditional on adoption and rejection, including the causal effect of the policy τ_S and the effect of the fundamentals most associated with either decision:
E[W|A] = Pr(G|A)·(W_G + τ_G) + (1 − Pr(G|A))·(W_B + τ_B)
E[W|R] = Pr(G|R)·W_G + (1 − Pr(G|R))·W_B
Which we can compare to the difference in expected welfare value due purely to the causal effect of the policy:
E[W|do(A)] − E[W|do(R)] = τ_G·p + τ_B·(1 − p)
Those equations tell us that the signaling effect of a is strongest when p≈1/2, i.e. when the uncertainty of market participants about fundamentals and the informative value of adoption about fundamentals are highest. While a binary state world is a simplification over a continuously varying welfare distribution, the derivation can be extended to an arbitrarily large number of future states, eventually converging to the continuum case.
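The derivation above is easy to verify numerically. The sketch below implements the posteriors and the conditional and causal welfare gaps, then checks, with assumed Bronze Bull-style parameters, that the market approves a causally harmful policy, and that setting a = 1/2 (no signaling) collapses the conditional gap onto the causal one:

```python
# Numerical check of the appendix model. Parameter values below are assumed
# for illustration; the formulas follow the derivation in the text.
def posteriors(a, p):
    """Pr(G | adopt) and Pr(G | reject), given adoption informativeness a
    and prior p of the good state."""
    pr_adopt = a * p + (1 - a) * (1 - p)
    return a * p / pr_adopt, (1 - a) * p / (1 - pr_adopt)

def conditional_gap(a, p, W_G, W_B, tau_G, tau_B):
    """E[W | adopt] - E[W | reject]: the quantity a futarchy market prices."""
    gA, gR = posteriors(a, p)
    e_adopt = gA * (W_G + tau_G) + (1 - gA) * (W_B + tau_B)
    e_reject = gR * W_G + (1 - gR) * W_B
    return e_adopt - e_reject

def causal_gap(p, tau_G, tau_B):
    """E[W | do(adopt)] - E[W | do(reject)]: the decision-relevant quantity."""
    return tau_G * p + tau_B * (1 - p)

# Bronze Bull-style parameters (assumed): harmful in both states, but
# approval is correlated with good times (a > 1/2).
params = dict(W_G=1.0, W_B=0.5, tau_G=-0.05, tau_B=-0.05)
print(conditional_gap(a=0.9, p=0.5, **params))  # positive: market approves
print(causal_gap(0.5, -0.05, -0.05))            # negative: policy is harmful
# With a = 1/2, adoption signals nothing and the two gaps coincide:
print(conditional_gap(a=0.5, p=0.5, **params), causal_gap(0.5, -0.05, -0.05))
```

The last line reproduces the Figure 3 observation: when adoption carries no information about fundamentals, the futarchy decision metric matches the causal one.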
Graphical Example: The Bronze Bull and the Bailout
The next two plots show the difference in expected welfare between policy approval and policy rejection across priors p ∈ [0, 1], for cases with different values of a, W_B, W_G, τ_B, τ_G. While the cases are chosen to exemplify the specific failure modes of futarchy, they are hardly pathological, and can manifest over a broad range of conditions. The green region marks a positive EV difference (the policy is approved), and the red region a negative one (the policy is rejected). The red line shows the EV difference due to causal policy effects alone, and the blue line shows the futarchy decision metric, i.e. the difference in conditional EV including selection bias.
Figure 1: Approval Threshold for the Bronze Bull
This first plot represents the Bronze Bull: a wasteful policy with net negative causal effects but high informational value about fundamentals. More specifically, the policy is correlated with good fundamentals (a > 0.5), and the gap between good and bad fundamentals is large (W_G − W_B = 0.5). As a result, the futarchy decision metric is positive for a broad range of priors (p > 0.266), despite the causal effect being negative for every value of p, because the signaling value of the policy is large enough to overcome its negative causal effects.
Figure 2: Approval Threshold for the Bailout
This second plot represents the Bailout, which is the flip side of the Bronze Bull. The Bailout has positive causal EV over a broad range of priors, which should lead to approval most of the time, unless the market is confident that the fundamentals are good (causal approval for p<0.80). However, because the Bailout is usually adopted when fundamentals are dire (a<0.5), the conditional EV of rejecting the policy is higher than adopting it for a much broader range of p than with causal EV. Here, instead of adopting a noxious policy because it signals strength, the market rejects a beneficial policy because it signals weakness.
Figure 3: Approval Threshold for Bailout if a = 0.5
This last plot represents the effect of decorrelating policy adoption on futarchic estimates of conditional EV. When a=0.5, i.e. adopting the policy signals nothing about underlying fundamentals, then the conditional EV matches the causal EV.
Footnotes
Many were tackled by Hanson in his original article formalizing the idea of futarchy. ↩︎
No disrespect intended to them. The flaw they pointed out is real and their method is sound. But proving the existence of a flaw using an abstract toy model unrelated to governance and proving that the flaw is sufficiently severe to render the concept dead on arrival for practical governance are different things. ↩︎
This example isn't theoretical at all. It is more or less the conundrum pre-Keynesian institutional economics (including president Hoover) faced in the early days of the 1929 market crash. ↩︎
This is essentially the same reason why technical analysis persists and "works". It allows traders to monetize random-walk patterns by collectively agreeing on what patterns mean, which makes the movement signaled by those patterns self-fulfilling: a bull flag signals higher stock prices because every chartist will buy the stock after seeing it, in anticipation of the rise... which they collectively create. ↩︎
Randomization creates its own problems too. If decision markets cease to be a meaningful policy filter under futarchy, then political battles will shift to getting on the agenda in the first place. Which political group could resist a lottery ticket to implement their preferred policy without democratic or market oversight? ↩︎
Hanson has said that because adopting random policy could get "very expensive", one might imagine only rejecting policy at random, which would provide a partially causal estimate of welfare on the "adopt" branch, while leaving the question of how to estimate the causal welfare impact of the reject branch as an exercise to the reader. We could retort that "adopting" and "rejecting" policy are conventions relative to what "business as usual" means rather than categorical absolutes, which makes them vulnerable to gaming. Rejecting Keynesian stimulus is functionally identical to adopting a bold liquidationist policy, for example. ↩︎
Summary
Epistemic status: quite confident.
Futarchy is bound to fail because conditional decision markets are structurally incapable of estimating causal policy effects once their outputs are acted upon. Traders must price contracts based on welfare conditional on approval, not welfare caused by approval. As a result, decision markets systematically reward traders for exploiting non-causal correlations between policy adoption and latent welfare fundamentals. We can expect futarchy markets to endogenously generate such correlations. Policies that signal strong fundamentals are favored even if causally harmful, while policies that signal weakness are disfavored even if causally beneficial. This effect persists under full rationality, common knowledge, and perfect supporting institutions (welfare metric, courts, legislatures, etc.).
This bias is worst when individual estimates of fundamentals are noisy and dispersed, i.e. where markets should be most useful as information aggregators. The resulting inefficiency cost is paid by the organization being governed, while gains accrue to market participants, making futarchy parasitic on its host. Randomization schemes can recover causal estimates only by breaking the feedback loop between prices and decisions, but doing so either renders futarchy ineffective as a decision making tool, fails to fix the problem, or collapses it into an influence market where the wealthy can buy policy.
There is no payout structure that simultaneously incentivizes decision market participants to price in causal knowledge and allows that knowledge to be acted upon.
Introduction
Futarchy is a form of governance, invented by Robin Hanson, that leverages conditional prediction markets to make decisions. In theory, because markets are great at aggregating dispersed, tacit information, futarchy could lead to better decisions than private-business autocracy or democracy, but it has so far failed to gain much traction as a practical decision-making tool. Many concerns over futarchy have been raised over the years, ranging from the difficulty of defining the welfare metric needed to settle the bets, to oligarchy concerns and market manipulation.[1] Today, we will be talking about a more fundamental problem, one that would be sufficient to cripple futarchy by itself.
The problem is that futarchy is based on a fundamental confusion between prediction markets, which have no causal effect on the event they are trying to predict, and decision markets, which do have a causal effect on the event or metric they are trying to predict. While it is generally correct that prediction markets are outstanding institutions for aggregating dispersed predictive information, this effectiveness does not transfer to the ability of decision markets to make good decisions, because causal probabilities and conditional probabilities are different game-theoretic objects.
In this article, I intend to prove that futarchy's decision markets are structurally incapable of recovering causal policy effects once their prices are acted upon, and that this failure is the default outcome rather than a rare edge case.
The reason behind this failure is that rational traders will systematically price information about welfare fundamentals into futarchy decision markets using a "superstition" signaling mechanism. This signaling mechanism persists because it is capital-efficient for market participants. It is parasitic: the governed organization bears the cost of the resulting bad policies while market participants profit from gambling on welfare fundamentals.
Appendix A provides some responses to anticipated questions, while Appendix B is a mathematical formalization of the argument made in the article.
Prior Work
I am not the first to point out that decision markets implement a form of evidential decision theory, in which decisions are made based on what is correlated with favorable welfare instead of what causes favorable welfare. Dynomight wrote a series of thorough articles in 2022-2025 on the inability of decision markets to provide causal welfare estimates, which helped spark my interest in the question. Caspar Oesterheld pointed out that futarchy implements EDT in 2017. Anders_H showed the same result using a toy example in 2015.
However, those articles use confounders whose source is external to the market to demonstrate the problem: a trick coin for Dynomight, a geopolitical event for Anders_H, Omega's prediction for Caspar's Newcomb paradox. They use toy examples that could be seen as a bit convoluted and adversarially constructed.[2] This allowed Hanson and other proponents of futarchy, while agreeing that confounders are a problem ("decision selection bias" is the term he uses), to consistently answer that the solution is endogenizing the decision within the market as much as possible: "putting the market in charge of decision-making", or "allowing the decision-makers to trade" in advisory markets. Under those conditions, Hanson assures that decision selection bias is "rare", and we are led to believe those prior adversarial examples would be edge cases: futarchy would still work well most of the time. The point of my article is to close those escape hatches right now: those solutions do not work.
The Bronze Bull Problem
Consider a simple example we might call the Bronze Bull problem. Suppose someone submits this proposal to a futarchic decision market: "let's build a massive bronze statue of a bull in Times Square as a prosperity monument. It will cost half a billion dollars and be ten times taller than the Wall Street one". Would this policy be approved?
If we assume that this policy has a slight negative effect on national welfare, because any tourism or aesthetic benefits fail to cover the construction costs of the statue, a naive futarchist would answer that it would (and should) be rejected. But this is wrong. Even with a negative causal effect on national welfare, a prosperity bull statue could, and I argue would, be approved by a futarchic decision market.
This is because the payout structure of the decision market rewards estimating W conditional on the market approving the policy, not the causal impact of the policy itself. Approval of such a wasteful confidence-signaling policy signals one thing: the market aggregate believes that economic fundamentals are strong enough that resources can be wasted on prosperity symbols. Conversely, rejecting the policy means that economic fundamentals are so dire we cannot afford such a waste. The policy's approval is endogenous to the very economic conditions that determine welfare.
Therefore, a market trader would—correctly—estimate that "worlds where the market approves the Bronze Bull" are high-welfare worlds, not because the Bull causes prosperity, but because approval signals underlying confidence and strong fundamentals: E(W|approve the bull) is high. Conversely, "worlds where the market rejects the Bronze Bull", because it is a frivolous waste that we can't afford, are low-welfare worlds: E(W|reject the bull) is low. Result: E(W|approve the bull)>E(W|reject the bull), and the Bronze Bull gets approved despite having a net negative impact on welfare.
Critically, this bias manifests even when traders are rational, use causal decision theory, and know perfectly well that the Bronze Bull actively hurts welfare. The problem is the payout structure of futarchy itself. A trader who ignores the selection effect and tries to price contracts based solely on the Bull's causal effect on national welfare would lose money. If they treat approve-the-bull contracts as less valuable than reject-the-bull contracts, they would either overpay for reject-the-bull contracts that only pay off in low-welfare worlds, or undersell approve-the-bull contracts that pay off in high-welfare worlds.
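The Bull's approval can be priced directly using the Bayesian model formalized in Appendix B. Here is a minimal sketch with illustrative numbers of my own (a 50% prior on good fundamentals, a small negative causal effect, and folklore linking approval to good times), not figures from any real market:

```python
# Toy Bronze Bull pricing sketch. All numbers are illustrative assumptions:
# baseline welfare is 1.0 in good times, 0.5 in bad times; the prior on good
# times is 50%; the Bull causally destroys 0.05 welfare either way; and market
# folklore says the Bull is approved 70% of the time in good times.
p_good = 0.5
w_good, w_bad = 1.0, 0.5
bull_effect = -0.05          # causal effect of building the Bull (negative)
a = 0.7                      # Pr(approve | good fundamentals)

pr_approve = a * p_good + (1 - a) * (1 - p_good)
pr_good_given_approve = a * p_good / pr_approve
pr_good_given_reject = (1 - a) * p_good / (1 - pr_approve)

# Contract prices: expected welfare conditional on each decision branch.
ev_approve = (pr_good_given_approve * w_good
              + (1 - pr_good_given_approve) * w_bad
              + bull_effect)
ev_reject = (pr_good_given_reject * w_good
             + (1 - pr_good_given_reject) * w_bad)

print(ev_approve, ev_reject)  # 0.80 vs 0.65: the harmful Bull gets approved
```

Despite destroying welfare on every branch, the Bull wins because approval shifts the posterior toward good fundamentals by more than the causal damage.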
The Bailout Problem
The Bronze Bull shows how a harmful policy can be approved when it signals confidence in fundamentals. But the bias also works in reverse, causing futarchy to reject beneficial policies because they signal weak fundamentals.
Consider the example of deciding whether to pre-emptively pass a bailout/stimulus package when an economic crisis might be looming. Does approving the stimulus package provide sufficient causal benefit to offset the market wisdom that any stimulus amounts to a confirmation that crisis is right around the corner?[3] Besides the causal effect of the policy, the answer to this question depends on two factors: the strength of the market norm about what rejection and approval mean for underlying welfare fundamentals; and the accuracy of each trader's own estimate of welfare fundamentals based on "off-decision" sources (research, gut feeling, media, anything but decision markets).
When every trader has excellent information about welfare fundamentals, market norms lose some of their informative power. Once everyone knows, with high confidence, that things are going great, then "the market picked the bailout" or "the market rejected the bailout" do not provide much additional information about fundamentals. At this point, decision markets do provide a better estimate of the causal effects of each policy. But note that this is a better estimate, not an estimate free from decision selection bias. A rational trader must still consider the possibility that the market decision might nevertheless reveal something about fundamentals, because other traders might know things he or she does not.
Conversely, when traders have noisy estimates of welfare fundamentals, confidence bias reigns supreme. If no one is quite sure how good things will be in the future, "the market picked the bailout" and "the market rejected the bailout" are extremely meaningful aggregate signals. This leads to an unfortunate conclusion for futarchy: precisely when markets are most helpful as aggregation mechanisms, i.e. when information is dispersed and individual estimates are noisy, decision markets are most vulnerable to endogenous superstitions steering them away from causal decision-making. When information is widely distributed and consensus reigns, decision markets provide better estimates of causal policy effects (but given that consensus reigns, you probably do not need them in the first place!).
This is the crux: under conditions of uncertainty about welfare fundamentals, we can expect futarchy to adopt, on average, systematically worse policies than an organization using causal decision-making. This conclusion stands even if the institutional machinery around it (courts, legislature, agenda setting, defining and measuring welfare) works perfectly.
Endogenous Conditioning and Market Superstitions
It is reasonable to wonder whether confidence bias would be common in practice or would remain a weird edge case. For example, one of Hanson's main lines of defense against "decision selection bias" is an intuition that such conditions are rare, and depend entirely on external confounders (e.g., decision-maker psychology) that disappear when we "put the market in charge". I fundamentally disagree with this argument. Absent an external source of confounders, a market is entirely capable of generating its own confounders via the beliefs of market participants, and we can in fact expect this failure to be the default outcome.
Consider the Bronze Bull example we just examined. Here, the confounder is the state of unobserved welfare fundamentals, acting on policy via the shared belief of traders about what adoption of the Bull would mean regarding those fundamentals. Because adoption also depends on the behavior of traders, this belief is self-fulfilling, arbitrary, and endogenous to the market itself: it cannot be eliminated easily. If traders believe you only build bulls in good times, they will price good times into approve-the-bull contracts, making approval more likely. If they believe bronze bulls are only approved in desperation when fundamentals are terrible, then they will price bad times into approve-the-bull contracts, making approval less likely. The result is a confidence bias directionally pointing toward adopting whatever policies signal good fundamentals, embedded within futarchy's payout structure.
In either case, the Bull is causally harmful, and adoption depends only on arbitrary market folklore, which we could adequately call a superstition. Because the superstition is a coordination point (i.e. the collective belief about what adoption or rejection means), it nevertheless carries valuable information for individual traders. To be precise, a superstition allows market participants to use their capital more efficiently when trying to profit off private information about fundamentals.
Consider the case of a savvy trader who just got information that future welfare is likely to be low. If adoption has no directional bias from underlying fundamentals, the trader must hedge his knowledge by trading on both branches, immobilizing capital on the ultimately rejected branch for the duration of the market for zero return. This is inefficient.
If a market superstition makes adoption more likely under a specific state of fundamentals, the savvy trader can focus his trades on the branch made more likely by his private information. He is rewarded with higher profits than if there were no superstition in the first place. Under this lens, the decisional inefficiencies of futarchy are a parasitic externality of traders using approval as an information channel to trade on welfare fundamentals: the costs to society are diffuse (inefficiency, bad policy), while the benefits are concentrated among informed market participants.
Once a superstition takes hold, there is nothing to arbitrage, which makes it persistent despite being collectively groundless.[4] This is an instance of what economists call a sunspot equilibrium. The confidence bias induced by sunspot beliefs can potentially be much larger than the causal impact, depending on what traders collectively believe each option signals about welfare fundamentals.
Can Randomization Rescue Futarchy?
It is often said that the solution to decision selection bias is simple: partial randomization. By breaking the confounding link between the selection of the decision and the context of the decision (including the underlying welfare fundamentals), the conditional odds of the decision market contracts should correspond more closely to the causal effects of adopting or rejecting the policy.
This is correct in a technical sense, but it does not rescue futarchy. Hanson and others have mentioned a small randomization fraction, say 5% or 1% of all markets, being decided at the flip of a coin. Sounds reasonable, doesn't it? A modest price to pay for accurate causal decision-making.[5] Futarchists mention two ways to go about this: an ineffective one (randomization after market choice) and a bad one (randomization as the settlement trigger on advisory markets).
Approach 1: Randomizing the Decision (Ineffective)
Let the futarchy decision markets proceed normally (1−ϵ) of the time, with decisions reached according to market prices. A fraction ϵ of the time, upon resolution of the market, the policy is implemented randomly at the flip of a coin.
This method pulls the conditional probability a=Pr(A|G) between approval A and underlying fundamentals state G toward a pure coin flip:
a_ϵ = a + ϵ(1/2 − a)
Or equivalently:
a_ϵ = (1 − ϵ)a + ϵ/2
Randomization scales the superstition strength (2a − 1) by a factor (1 − ϵ). When adoption is strongly correlated with fundamentals (a → 1), you must randomize a lot, perhaps most of the time, to hope to recover anything but crumbs of causal estimates. The 5% randomization fraction mentioned by Hanson would be mostly ineffective.[6]
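This scaling is easy to verify numerically. A quick sketch (the value a = 0.9 is my own illustrative choice for a strong superstition):

```python
# Effect of randomizing a fraction eps of decisions on the adoption-fundamentals
# correlation. The superstition strength (2a - 1) shrinks by exactly (1 - eps).
def effective_a(a, eps):
    """Pr(A|G) after a fraction eps of decisions are settled by coin flip."""
    return (1 - eps) * a + eps / 2

a = 0.9  # strong superstition: adoption is very informative about fundamentals
for eps in (0.05, 0.50, 0.90):
    strength = 2 * effective_a(a, eps) - 1
    print(f"eps={eps:.2f}: superstition strength {strength:.2f}")
# A 5% randomization fraction leaves 95% of the superstition intact; only
# near-total randomization pushes the conditional estimate toward the causal one.
```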
Approach 2: Randomizing Settlement (Straight Up Pay-for-Play)
Under this architecture, markets are advisory and do not directly control policy adoption, which is a significant departure from Hanson's pure futarchy proposal. Instead, the conditional prediction markets resolve randomly, according to a coin flip, for a fraction ϵ of bets. The rest of the bets (1−ϵ) are called off, and bettors are reimbursed for their trades. Since markets can only resolve upon random adoption of policy, E[W|approve] should be priced as E[W|do(policy)]. Congratulations! We should now have causal estimates that decision-makers can use (1−ϵ) of the time to inform their thinking, while implementing random policy ϵ of the time. If ϵ is small, this should be a manageable cost.
The unfortunate truth is that there is no such thing as a market-derived causal E[W|do(policy)] that one can act on, even indirectly. If decision-makers use the predictions of the market in any regular way (perhaps, let's be bold, by adopting policies whose impact on welfare is higher than the alternative), the market can, and will, price this fact in. We are back to estimating welfare conditional on adoption, just like in regular futarchy, but this time with a payout structure that explicitly rewards market manipulation.
Let's look at a practical example, with a reasonably small ϵ of 0.01. What will welfare be if a government contract is awarded to Pork, Inc. or to Honest Little Guy (HLG), LLC? For the sake of argument, assume that welfare will be higher if the contract goes to HLG, but that Pork, Inc. happens to have deeper pockets. Let's also assume that when the market resolves to N.A. (that is, 99% of the time), the decision-makers pick the policy with the highest price ~80% of the time.
Despite being a worse contractor, if Pork can use its capital to keep its contracts priced higher than HLG's, it stands to profit handsomely. Pork risks its capital only 0.5% of the time (the coin-flip resolutions that go against its positions), while being awarded the contract 79.2% of the time (99% × 80%), because decision-makers observe and act on market prices even from markets that won't resolve.
Pork's expected gain is:
Gain_Pork = (1 − ϵ)αB − ϵL/2
with α = 0.8 the probability that decision-makers select the highest-priced decision contract, B the contract payout, and L the amount of capital Pork can commit to market manipulation. Pork can commit up to:
L_max = 2(1 − ϵ)αB/ϵ
That is about 160 times the contract payout (!) in manipulation capital, and Pork still ends up in the green. The decision market has stopped being a contest of who is best informed. Instead, it's a contest of who can best deploy capital to influence the thinking of decision-makers, with a lottery-ticket risk of ruin if your trades have the misfortune to execute.
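The arithmetic above can be checked in a few lines (ϵ = 0.01 and α = 0.8 are the figures from the example; B is normalized to 1):

```python
# Pork, Inc.'s manipulation calculus under randomized settlement.
eps = 0.01    # fraction of markets that actually resolve (randomly)
alpha = 0.8   # Pr(decision-makers follow the highest-priced contract)
B = 1.0       # contract payout, normalized

def pork_expected_gain(L):
    """Expected profit from committing L in manipulation capital."""
    return (1 - eps) * alpha * B - eps * L / 2

L_max = 2 * (1 - eps) * alpha * B / eps   # break-even manipulation budget
pr_awarded = (1 - eps) * alpha            # chance Pork wins the contract

print(L_max, pr_awarded)  # 158.4, 0.792
```

At L_max ≈ 158 times the contract's value, Pork exactly breaks even; anything less and manipulation is strictly profitable.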
What about arbitrage? Let's assume an external arbitrageur who is a) without opportunity for insider profit on either branch, and b) knows that HLG is better for welfare than Pork, Inc. To profit from this knowledge, he must bid up HLG using as much capital as Pork, Inc., but he may profit only 0.5% of the time. Otherwise, he immobilizes his funds for no payout. Unless the welfare mispricing is on the order of 1/ϵ, no arbitrageur would touch this market with a ten-foot pole. Holding treasuries would be better business.
Providing a better payout for arbitrageurs requires cranking ϵ up, which causes the same problems as Approach 1.
The main takeaway is that there exists no payout structure that simultaneously incentivizes revelation of causal effects, and allows decision-makers to act on those revelations. If market prices influence decisions in any predictable way, rational traders must price in that influence, returning to conditional rather than causal estimates. If prices don't influence decisions, futarchy ceases to be a decision mechanism and becomes a randomized controlled trial (RCT) that you can bet on.
Do We Have Empirical Examples of Decision Markets Failing Due to Decision Selection Bias?
We might, but it is circumstantial. Because futarchy has rarely been implemented at scale, we must rely on evidence from conditional prediction markets (i.e. "what will Y be if X happens?") without direct decision-making power. There is Dynomight's coin experiment, of course, which did succeed in showing that futarchy implements EDT, but this was an adversarially constructed case. However, Ford's internal prediction market program in the mid-2000s included conditional prediction markets, as presented in the paper "Corporate Prediction Markets: Evidence from Google, Ford, and Firm X"[7] by Cowgill and Zitzewitz. This is an empirical, large-scale test performed in good faith by an organization genuinely eager to harness the power of prediction markets.
Ford's conditional "features markets" asked traders whether specific car features would attract consumer interest if they were tested via conventional market research. Because market research is expensive to run, narrowing down the field of features to test using the wisdom of crowds seemed fairly sensible. However, settling the features markets would have exposed valuable information to market participants at large, since it told quite directly which features Ford tested and how well they did with customers. Ford chickened out halfway into the experiment and decided to turn the whole thing into a Keynesian Beauty Contest, killing the predictive value. Before they pulled the plug, however, the authors observed an "optimism bias" in the prices of "low and intermediate price" securities.
I have good reasons to suspect that this "optimism bias" is simply decision selection bias under another name. Quite straightforwardly, traders believed that if management decided to test the feature at all, it must have some value they might be unaware of, regardless of their own personal feeling about the feature. After all, even if I think an in-car vacuum is a stupid idea, the simple fact that we test it in the first place means the idea might not be that stupid. This is limited evidence, but it is consistent with the case I present here.
Conclusion
Prediction markets can either provide accurate causal predictions about policies you cannot act on, or conditional estimates that you can, but should not, act on. There is no secret third way. In the case of futarchy, decision markets will be systematically hijacked to allow market traders to gamble on underlying welfare fundamentals in addition to the causal effects of the policy. This mechanism leads to the systematic adoption of wasteful policies signaling strong fundamentals and the rejection of helpful policies signaling bad fundamentals. Because this signaling operates at the expense of the organization being governed, which will bear the cost of those harmful policies, and to the benefit of futarchy market traders, it fits the definition of parasitism.
Appendix A: Response to Anticipated Objections
Objection 1: What about the various crypto projects that do use futarchy today?
Futarchy may genuinely be well-suited to crypto governance. In crypto, value is reflexive and determined primarily by market sentiment rather than external fundamentals. In such systems, E(W|A) may actually be the correct objective, if signaling confidence is the desired causal effect. When "the market believes the Bronze Bull will pump the coin" causes the pump, then building the Bull genuinely increases welfare. This is generally not true outside of crypto.
Objection 2: You are just proving that Futarchy implements Evidential Decision Theory (EDT) and not Causal Decision Theory (CDT).
This is true. And since EDT is considered a valid decision-theoretic framework by many philosophers, with strong support in the Newcomb Paradox and the Smoking Lesion Paradox, why couldn't futarchy simply be valid under EDT?
Because policy is an inherently causal domain. A polity that adopts policies because they are causally beneficial will systematically dominate one that adopts policies that are merely correlated with good fundamentals. The entire edifice of evidence-based science relies on breaking confounders via randomization to calculate the causal effect of interventions. Regardless of whether you are a one-boxer or a two-boxer, you should support causal policymaking.
Objection 3: Wouldn't a CDT arbitrageur be able to exploit the decision selection bias against the market?
No. As we explained in the Bronze Bull section, the problem is inherent to the payout structure of futarchy, not to the rationality or decision theory of market participants. A CDT arbitrageur would lose money under futarchy by over-selling causally harmful policies that get executed in good times (Bronze Bulls) and over-buying policies that are causally beneficial but only pay out in bad times (Bailouts).
Appendix B: Mathematical Model of Decision Selection Bias
Model Setup
Fundamentals and Priors
Let's assume that the world has two possible future states S ∈ {G, B}: good (G) and bad (B), with prior probability p of being good. The respective values of welfare in each state are W_G and W_B (where W_G > W_B, since things are better in good times).
Policy Effects
Consider a policy P that, if adopted, adds state-dependent causal effects: τ_G in the good state and τ_B in the bad state.
The realized welfare in the future state is:
W = W_S + τ_S · 1_A
where 1_A is the indicator that policy A is adopted.
Summary of Variables
S ∈ {G, B}: future state of the world (good or bad); p = Pr(G): prior probability of the good state; W_G, W_B: baseline welfare in each state; τ_G, τ_B: state-dependent causal effects of the policy; A, R: adoption and rejection of the policy; a = Pr(A|G), b = Pr(A|B): adoption probabilities in each state.
We assume that adoption is correlated with the state of underlying fundamentals: some policies are more likely in good times, others in bad times (e.g. building Bronze Bulls is more likely in good times, stimulus in bad times). We model the informativeness of the decision about welfare fundamentals as:
Pr(A|G) = a,  Pr(A|B) = b = 1 − a
When a > 1/2, approval is more likely in good times. From this, we can calculate the expected value of rejecting and adopting the policy, and therefore the decision that a conditional decision market will adopt.
Derivation
First, we calculate the probability of adopting and rejecting the policy based on a and p.
Adoption:
Pr(A) = Pr(A|G)Pr(G) + Pr(A|B)Pr(B) = ap + (1 − a)(1 − p) = (1 − a) + p(2a − 1)
Rejection:
Pr(R) = 1 − Pr(A) = a − p(2a − 1)
We can then calculate the posterior beliefs of market participants about welfare fundamentals after the policy is adopted or rejected using Bayes' formula:
Posterior Given Adoption:
Pr(G|A) = Pr(A|G)Pr(G) / Pr(A) = ap / [(1 − a) + p(2a − 1)]
Posterior Given Rejection:
Pr(G|R) = Pr(R|G)Pr(G) / Pr(R) = (1 − a)p / [a − p(2a − 1)]
We can then calculate the expected welfare conditional on rejection and adoption, including the causal effect of the policy τ_S and the effect of the fundamentals most associated with either decision:
Expected Welfare Given Adoption:
E[W|A] = E[W_S + τ_S | A] = E[W_S|A] + E[τ_S|A] = W_B + (W_G − W_B)Pr(G|A) + τ_B + (τ_G − τ_B)Pr(G|A) = W_B + τ_B + (W_G − W_B + τ_G − τ_B)Pr(G|A)
Expected Welfare Given Rejection (no policy effects under rejection):
E[W|R] = E[W_S|R] = W_B + (W_G − W_B)Pr(G|R)
The difference in expected welfare value, which under futarchy determines whether to adopt the policy, decomposes as:
E[W|A] − E[W|R] = (W_G − W_B)[Pr(G|A) − Pr(G|R)] + τ_B + (τ_G − τ_B)Pr(G|A)
where the first term is the signaling value and the remaining terms are the policy effects.
Substituting the priors, we obtain:
E[W|A] − E[W|R] = (W_G − W_B)[ap/((1 − a) + p(2a − 1)) − (1 − a)p/(a − p(2a − 1))] + τ_B + (τ_G − τ_B) · ap/((1 − a) + p(2a − 1))
Which we can compare to the difference in expected welfare value due purely to the causal effect of the policy:
E[W|do(A)] − E[W|do(R)] = τ_G·p + τ_B(1 − p)
Those equations tell us that the signaling effect of a is strongest when p ≈ 1/2, i.e. when the uncertainty of market participants about fundamentals and the informative value of adoption about fundamentals are highest. While a binary-state world is a simplification over a continuously varying welfare distribution, the derivation can be extended to an arbitrarily large number of future states, eventually converging to the continuum case.
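The whole derivation fits in a short function, which makes the decomposition easy to check numerically. A sketch, with Bronze Bull-like parameter values that are my own illustration:

```python
def conditional_ev_gap(a, p, WG, WB, tG, tB):
    """E[W|A] - E[W|R]: signaling value plus conditional policy effects."""
    pr_A = a * p + (1 - a) * (1 - p)
    pG_A = a * p / pr_A                  # Pr(G|A)
    pG_R = (1 - a) * p / (1 - pr_A)      # Pr(G|R)
    signaling = (WG - WB) * (pG_A - pG_R)
    policy = tB + (tG - tB) * pG_A
    return signaling + policy

def causal_ev_gap(p, tG, tB):
    """E[W|do(A)] - E[W|do(R)]: pure causal effect of adoption."""
    return tG * p + tB * (1 - p)

# Causally harmful policy (tau < 0) correlated with good fundamentals (a > 1/2):
# futarchy's conditional metric approves it while causal reasoning rejects it.
print(conditional_ev_gap(a=0.7, p=0.5, WG=1.0, WB=0.5, tG=-0.05, tB=-0.05))  # 0.15
print(causal_ev_gap(p=0.5, tG=-0.05, tB=-0.05))                              # -0.05
```

When a = 1/2, the posteriors collapse to the prior, the signaling term vanishes, and the two functions agree exactly.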
Graphical Example: The Bronze Bull and the Bailout
The next two plots show the difference in expected welfare value between policy approval and policy rejection across values of p ∈ [0, 1], for cases with different values of a, W_B, W_G, τ_B, τ_G. While the cases are chosen to exemplify the specific failure modes of futarchy, they are hardly pathological, and can manifest over a broad range of conditions. In green is the region where the difference in EV is positive (the policy is approved), and in red the region where it is negative (the policy is rejected). The red line shows the difference in EV due to causal policy effects, and the blue line shows the futarchy decision metric, i.e. the difference in conditional EV including selection bias.
This first plot represents the Bronze Bull: a wasteful policy with net negative causal effects but high informational value about fundamentals. More specifically, the policy is correlated with good fundamentals (a > 0.5), and the delta between good and bad fundamentals is large (W_G − W_B = 0.5). As a result, the futarchy approval threshold is positive over a broad range of priors (p > 0.266), despite the causal effects being negative for every value of p, because the signaling value of the policy is sufficiently large to overcome its negative causal effects.
This second plot represents the Bailout, which is the flip side of the Bronze Bull. The Bailout has positive causal EV over a broad range of priors, which should lead to approval most of the time, unless the market is confident that the fundamentals are good (causal approval for p<0.80). However, because the Bailout is usually adopted when fundamentals are dire (a<0.5), the conditional EV of rejecting the policy is higher than adopting it for a much broader range of p than with causal EV. Here, instead of adopting a noxious policy because it signals strength, the market rejects a beneficial policy because it signals weakness.
This last plot represents the effect of decorrelating policy adoption on futarchic estimates of conditional EV. When a=0.5, i.e. adopting the policy signals nothing about underlying fundamentals, then the conditional EV matches the causal EV.
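This decorrelation limit is easy to verify: as a approaches 1/2, the posterior gap between the adoption and rejection branches shrinks to zero, taking the signaling value with it. A short sketch of that convergence (the choice p = 0.5 is my own illustrative value):

```python
# As a -> 1/2, the posterior gap Pr(G|A) - Pr(G|R) -> 0, so the signaling value
# (W_G - W_B) * gap vanishes and only the causal policy effects remain.
def posterior_gap(a, p):
    """Pr(G|A) - Pr(G|R) for adoption informativeness a and prior p."""
    pr_A = a * p + (1 - a) * (1 - p)
    return a * p / pr_A - (1 - a) * p / (1 - pr_A)

for a in (0.9, 0.7, 0.55, 0.5):
    print(a, posterior_gap(a, 0.5))
# For p = 0.5 the gap is exactly 2a - 1: 0.8, 0.4, 0.1, then 0.0 at a = 0.5,
# where the conditional EV coincides with the causal EV.
```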
Footnotes
Many were tackled by Hanson in his original article formalizing the idea of futarchy. ↩︎
No disrespect intended to them. The flaw they pointed out is real and their method is sound. But proving the existence of a flaw using an abstract toy model unrelated to governance and proving that the flaw is sufficiently severe to render the concept dead on arrival for practical governance are different things. ↩︎
This example isn't theoretical at all. It is more or less the conundrum pre-Keynesian institutional economics (including President Hoover) faced in the early days of the 1929 market crash. ↩︎
This is essentially the same reason why technical analysis persists and "works". It allows traders to monetize random-walk patterns by collectively agreeing on what patterns mean, which makes the movement signaled by those patterns self-fulfilling: a bull flag signals higher stock prices because every chartist will buy the stock after seeing it, in anticipation of the rise... which they collectively create. ↩︎
Randomization creates its own problems too. If decision markets cease to be a meaningful policy filter under futarchy, then political battles will shift to getting on the agenda in the first place. Which political group could resist a lottery ticket to implement their preferred policy without democratic or market oversight? ↩︎
Hanson has said that because adopting random policy could get "very expensive", one might imagine only rejecting policy at random, which would provide a partially causal estimate of welfare on the "adopt" branch, while leaving the question of how to estimate the causal welfare impact of the reject branch as an exercise to the reader. We could retort that "adopting" and "rejecting" policy are conventions relative to what "business as usual" means rather than categorical absolutes, which makes them vulnerable to gaming. Rejecting Keynesian stimulus is functionally identical to adopting a bold liquidationist policy, for example. ↩︎
("Firm X" is Koch Industries.) ↩︎