We're now approaching parity between AI and human prediction markets.
See: https://arxiv.org/abs/2402.18563
It's an interesting idea to replace the information-gathering role of prediction markets in Futarchy with AI [1]. Ultimately, AI will probably come to dominate political decisions in the coming decades, and we have a choice: let that happen in opaque ways, or formalize the outcome in a system such as the AI-Futarchy you propose.
One way to limit AI power here is to restrict the AI to something very narrow and non-agentlike - for example, a system not much more powerful than current ones, fine-tuned simply to forecast policy outcomes.
One of the major issues with Futarchy for public policy is that defining a universally acceptable welfare function is very difficult. Unfortunately, this is not addressed by replacing the prediction markets with AI.
Another issue with Futarchy is the information revealed when a vote resolves in a given direction (e.g. in a conditional market on whether lockdowns will reduce Covid spread, the information that lockdowns are going to happen is itself baked into voters' bets). See Linch's comment on this post regarding Covid lockdowns: https://forum.effectivealtruism.org/posts/ijohdoDbPvdeXMpiz/summary-and-takeaways-hanson-s-shall-we-vote-on-values-but
As you mention, one issue AI solves for Futarchy is thin markets - very little liquidity in the prediction market. AI does not have this problem, so AI-based decision-making could be used at a micro scale, at the organizational or even personal level. This could aid uptake of the system on small scales.
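At that micro scale, the decision rule is simple enough to sketch. Below is a hypothetical toy version in Python: the policy names, the stub forecaster, and its numbers are all made up for illustration - a real system would replace `forecast_welfare` with a call to a fine-tuned narrow forecasting model.

```python
def forecast_welfare(policy: str) -> float:
    """Stub AI forecaster: returns the expected value of the agreed
    welfare metric under each policy. Values here are invented;
    a real system would query a fine-tuned narrow model."""
    stub_forecasts = {
        "remote-first": 0.72,
        "hybrid": 0.65,
        "office-only": 0.41,
    }
    return stub_forecasts[policy]

def choose_policy(policies: list[str]) -> str:
    """Futarchy decision rule ('vote on values, bet on beliefs'):
    adopt the policy with the highest forecasted welfare."""
    return max(policies, key=forecast_welfare)

print(choose_policy(["remote-first", "hybrid", "office-only"]))
# -> remote-first
```

The point is that this loop needs no market liquidity at all: an organization can run it over any candidate set, which is exactly what thin markets prevent.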
One issue AI introduces when replacing prediction markets in Futarchy: markets have a natural tendency to correct manipulation (prices adjust as other traders bet against a manipulated policy), but AI development is currently highly centralized. Manipulating a model's training is relatively easy - specific training data could be introduced to bias its policy forecasts. There may be ways to get "skin in the game" for correcting AI forecasts, such as monetary incentives to detect biases in a model's training data or parameters; interpretability techniques could aid this.
[1] When I say "AI", I am referring to narrow AI, presumably fine-tuned to produce forecasts of policy impacts on the relevant welfare metric that are comparable to or better than human ones.