There’s been a lot of talk lately, ranging from claims of an “AI explosion that will automate everything” to predictions that “AI will produce huge rents”. While it’s far from clear whether any of these predictions will pan out, there’s a more grounded version of such questions that we can address quantitatively:
Suppose AI automated every task that’s currently automatable — which is still not all jobs — and didn’t even create new ones. How capable would it need to be before its rents could fund something like a universal basic income (UBI)?
It turns out one can, in fact, give a clean, analytic threshold for this question.
Off the bat, it’s perhaps worth mentioning that this is explicitly a funding result, not a welfare result. I’m not arguing whether UBI is good or bad, how it should be distributed, or how society should handle purpose and meaning in a post-work world. In fact, we don’t even have to assume the world is fully post-work in our treatment here; we can calibrate to current U.S. economic parameters. Our goal is just to identify whether AI rents can become large enough to cover basic needs, and to make explicit which societal levers make this threshold more or less feasible.
Below I give the broad strokes. For full technical details, proofs, and calibrations, see my paper:
“An AI Capability Threshold for Rent-Funded UBI in an AI-Automated Economy” https://arxiv.org/abs/2505.18687
In a Solow-Zeira task model where AI boosts the productivity of automated tasks by a factor A, one can derive the minimum capability A* needed to fund a transfer B (as a share of GDP) from AI rents alone; the closed-form expression and its proof are in the paper linked above.
The meaningful levers are the transfer size B, the public revenue share Θ captured from AI rents, the operating and alignment cost c, and the degree of competition between AI firms. A minimal sketch of the funding condition behind the threshold is given below.
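To make the logic concrete, here is a toy version of the funding condition the threshold encodes. To be clear, this is a sketch under assumptions of my own for this post: the rent function R(A) = s·(A − c) and the automatable task share s are illustrative placeholders, not the paper’s closed form or calibration.

```python
def capability_threshold(B, theta, c, s):
    """Toy rent-funded-UBI threshold (illustrative; not the paper's formula).

    Funding condition, with everything expressed as a share of GDP:
        theta * R(A) >= B
    using the placeholder rent function
        R(A) = s * (A - c),
    i.e. rents proportional to net productivity gains on the automatable
    task share s. Solving theta * s * (A_star - c) = B for A_star:
    """
    return B / (theta * s) + c
```

Even this toy form reproduces the qualitative comparative statics discussed below: the threshold falls hyperbolically in Θ and rises linearly in the cost c.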
Calibrating the model to U.S. data (sources in the paper), this gives:
Based on current estimates, AI needs to be ~5-7× as productive as today’s automation on automatable tasks to fund a UBI-sized transfer.
Not 50×. Not 500×. About 5-7× beyond current automation productivity.
And that’s a worst-case upper bound. After all, we assumed:
- AI creates no new tasks or jobs;
- only tasks that are already automatable today get automated;
- AI firms compete much of their rents away rather than concentrating them;
- the public captures only the current U.S. revenue share (~15%) of those rents.
In the paper, we analyze relaxing each of these assumptions and show that doing so lowers the required AI capability threshold A*.
Figure 1. AI capability doubling trajectories vs. UBI threshold.
Figure 1 shows when different capability-doubling trajectories first reach A*. Even very conservative AI capability growth scenarios cross the bar by mid-century.
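That’s less surprising than it may sound: under exponential growth, the arrival time depends only logarithmically on the threshold. A quick sketch, with doubling times that are illustrative rather than the paper’s scenarios:

```python
import math

def years_to_threshold(a_star, doubling_time_years, a0=1.0):
    """Years for capability starting at a0 (today's automation = 1x)
    and doubling every doubling_time_years to reach a_star."""
    return doubling_time_years * math.log2(a_star / a0)

# Even a slow ten-year doubling time reaches a 7x threshold in
# roughly 28 years, i.e. around mid-century from today.
print(years_to_threshold(7, 10))  # ~28.1
```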
In Figure 2, we analyze monopolistic and concentrated oligopolistic markets and show that they reduce the threshold by increasing economic rents, whereas heightened competition between AI firms significantly raises it. We’ll skip the details to keep this post short; the supporting result is Proposition 2 in the paper.
The cleanest comparative static is this: increasing the public revenue share Θ from ~15% to ~33% cuts A* roughly in half.
Note that this doesn’t require full-scale nationalization; rather, it mirrors the difference between current U.S. and Scandinavian-style corporate-profit capture. As Figure 3 below shows, beyond ~50% public ownership there are diminishing returns, unless operating and AI-alignment costs c rise.
This makes Θ the most practical policy knob for reducing how much AI capability is needed to fund a basic transfer.
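The halving shows up directly even in the toy solver sketched earlier (the inputs below are purely illustrative, not the paper’s U.S. calibration):

```python
# Purely illustrative inputs, not the paper's calibration:
low_capture  = capability_threshold(B=0.25, theta=0.15, c=1.0, s=0.4)  # ~5.2
high_capture = capability_threshold(B=0.25, theta=0.33, c=1.0, s=0.4)  # ~2.9
```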
Figure 3 (from the paper). Public revenue share vs. AI capability threshold under cost regimes.
Figure 3’s economic message is simple: capturing more of the rents makes an AI-based UBI more feasible without requiring nationalization, unless operating and aligning more capable AI systems becomes too costly.
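The diminishing-returns shape is visible even in the toy version: because the threshold falls like 1/Θ, most of the gain comes from the first increases in the public share (again with illustrative inputs):

```python
for theta in (0.15, 0.33, 0.50, 0.70, 0.90):
    print(f"theta={theta:.2f}  threshold ~ {capability_threshold(0.25, theta, 1.0, 0.4):.2f}")
```

The “unless costs rise” caveat corresponds to letting c grow with capability, which shifts the whole curve upward.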
This result isn’t an endorsement of UBI. It’s a feasibility benchmark.
If AI-generated rents exceed this threshold, society would have the financial slack to fund a basic transfer — or any number of alternative redistributive mechanisms (sovereign wealth dividends, tax credits, negative income tax), depending on political taste.
The point is:
AI capabilities don’t need to grow very far to support a UBI even without creating new jobs.
Whether one likes or dislikes that fact, we think it’s useful to know where the tipping point is.
We believe the analytic result (and associated comparative statics) is a more lasting contribution than any specific numbers we plug in here. We’ve made our code available, so anyone can try their own calibrations here.
There’s also a public tool made by the AI+Wellbeing Institute that implements our model based on the code above, connecting UBI feasibility to certain well-being measures.
It’s fun to play with and makes the comparative statics more intuitive.