TL;DR: Most debates about AI and automation focus on income: how to protect people if labor income falls, and how to distribute the gains from AI more fairly. That is why proposals like UBI and broad-based ownership of AI capital matter. But there is a major blind spot here. Even if people retain income, they may still lose agency: the practical ability to initiate projects, solve problems, create, coordinate, and act in the world without depending entirely on a few dominant firms or institutions. If AI becomes part of the basic infrastructure of production, education, coordination, and problem-solving, then the question is not just who gets money from the system. It is also who retains meaningful access to the system’s productive power. My tentative conclusion is that beyond some point, societies may need not only UBI or AI-capital share, but also some form of universal productive access to AI, compute, automated tools, and perhaps eventually limited robotic production.
Epistemic status: This is not a finished political program. It is an attempt to identify what I think is an underexplored gap in current AI policy debates. My claim is not that income no longer matters. It is that discussions centered on income may miss a deeper transition: from loss of wages to loss of agency.
The current debate is mostly about income
As AI and automation advance, the most common concerns are familiar: jobs, wages, bargaining power, unemployment, and inequality.
That focus makes sense. If more tasks are done by models, agents, and robots, then the share of income going to labor may fall, while more of the gains flow to the owners of models, compute, data, platforms, and automated infrastructure. Even without total technological unemployment, labor may lose bargaining power and capital may gain a larger claim on growth.
That is why two responses dominate so much of the discussion.
The first is UBI: if labor markets no longer distribute enough income, give everyone an unconditional cash floor. This addresses insecurity directly and does not require judging who is “deserving.”
The second is what I’ll call AI-capital share: some form of broad public claim on the gains from AI capital. This might mean a social wealth fund, public equity stakes, taxes on AI rents, sovereign holdings, or another mechanism that ties public benefit to AI-driven growth. UBI says people need protection; AI-capital share says people deserve a stake in the new productive system.
Both proposals are serious. Both may be necessary. But both are still primarily answers to one question:
How do we protect people’s income as labor becomes less central?
I think that is where the blind spot begins.
The blind spot: people can keep income and still lose agency
A society can, in principle, solve much of the income problem while leaving a deeper problem untouched.
People might receive a basic income. They might receive dividends from AI-driven wealth. They might be materially secure.
And yet they could still cease to be meaningful participants in production.
They could lose the practical ability to start projects, solve local problems, build things, coordinate with others, and shape their own environment without depending on permission from a small number of firms, state institutions, or infrastructure gatekeepers.
That is what I mean by agency here. Not a vague feeling of empowerment. Something more concrete: the practical ability to act in the world in nontrivial ways.
This distinction matters because consumption is not the same thing as agency. A person may be protected from destitution and still be locked out of the systems through which future wealth, coordination, and institutional power are created.
So the real question is not just:
Who gets income from the AI economy?
It is also:
Who retains the ability to do meaningful things within it?
Why this could become more important than it sounds
At first glance, this may sound abstract. Why not say that as long as people are comfortable and secure, that is enough?
Because AI may do something unusual: it may radically lower the barrier to entry for many forms of creation and problem-solving, while at the same time concentrating the underlying infrastructure.
In many domains, the bottleneck used to be technical skill: programming, drafting, design, research support, legal analysis, simulation, entrepreneurship, and many kinds of planning. Strong AI systems can compress some of those bottlenecks. They do not eliminate judgment, responsibility, or coordination. But they can make it possible for far more people to move from intention to execution.
That means access to AI is not just access to a consumer service. It begins to look more like access to literacy, industrial tools, or basic digital infrastructure.
And that raises the stakes. If AI becomes part of the baseline environment through which people learn, design, organize, build, and create, then lacking access to it may start to function like a form of structural exclusion.
Thinking about this through accessibility
One useful analogy is accessibility.
We already accept that physical and digital infrastructure should not be built in ways that systematically exclude people from meaningful participation. A society that builds stairs everywhere and then shrugs at wheelchair users has made a political choice, not just an architectural one.
The same logic may eventually apply to AI-enhanced agency.
If production, education, coordination, and problem-solving are increasingly organized around AI tools and automated systems, then access to those systems may stop being a luxury add-on. It may become part of the conditions of full participation.
This does not mean that someone without AI access is literally disabled in a biological sense. That would be the wrong way to put it. The point is institutional, not medical.
A society built around AI-enhanced capability may leave those without access in a position of structurally limited practical capacity inside an environment designed for those who do have it. They may still survive. But they may do so in a systematically dependent and subordinate way.
That is exactly the kind of condition societies usually try to avoid when thinking seriously about accessibility.
Why money alone may not solve this
At this point the obvious objection is:
If people already have UBI or AI dividends, why can’t they just buy AI tools, compute, and robotic services on the market?
This is the most important objection, and it deserves a clear answer.
The answer is that in a sufficiently advanced AI economy, the most important productive systems may stop functioning like ordinary consumer goods. They may instead resemble a combination of strategic infrastructure, high-fixed-cost industrial capacity, safety-regulated dual-use technology, and quasi-monopolistic platforms.
In that world, money may not be enough to secure meaningful access.
There are at least four reasons for that.
First, frontier capability may be too concentrated. The most economically consequential systems may be controlled by a very small number of firms, states, or public-private alliances. Ordinary people may be able to purchase narrow services while having no realistic access to the tools that would allow genuinely independent action.
Second, access may be permissioned, not merely priced. For safety, misuse-prevention, national security, or liability reasons, advanced AI and robotics may be tightly governed. The key question may become not “can you pay?” but “are you allowed?”
Third, productive capability may come in bundles. Serious use of advanced AI may require integrated access to compute, models, tooling, data interfaces, physical execution layers, logistics, and institutional permissions. Buying fragments on the market is not the same as having a real share of productive capacity.
Fourth, market access may preserve dependence rather than autonomy. Renting intelligence from a dominant provider is not the same as having secure access to the means of action. A person can be a paying customer and still remain structurally subordinate.
This is why the issue is not simply redistribution after the fact. It is also whether markets alone can guarantee non-subordinate access to the productive tools of an AI society.
What agency could mean in practice
“Agency” is easy to endorse in the abstract and hard to picture concretely. So here are examples of the kind of thing I mean.
A neighborhood group could use guaranteed access to advanced AI tools and bounded compute to design a local hydroponic system, simulate maintenance costs, generate fabrication plans, and coordinate deployment without needing a venture-backed company.
Parents, teachers, or local communities could build customized educational simulations, language environments, or tutoring systems for their own children instead of depending entirely on whatever mass-market platform dominates their country.
A small community could use AI tools plus limited automated production access to create assistive devices, repair workflows, environmental monitoring systems, or local software tailored to real needs rather than generic commercial demand.
An artist, engineer, or researcher outside elite institutions could explore ambitious projects that today require expensive software stacks, specialized labor, or institutional gatekeepers.
These are deliberately modest examples. The point is not that every individual becomes a self-sufficient industrial sovereign. The point is that ordinary people retain some nontrivial ability to initiate and execute meaningful projects, rather than being confined to passive consumption.
A possible answer: universal productive access
If that diagnosis is right, then we may eventually need something beyond income redistribution:
a universal right of access to productive AI and automation.
Not just a chatbot. Not just a discounted subscription. Something more like a socially guaranteed baseline share of the tools that make agency possible in an AI economy: strong AI assistance, compute access, planning and design tools, educational and creative systems, and perhaps later some bounded access to robotic production or fabrication.
The purpose would not be to abolish markets overnight. Nor would it be to promise unlimited access to everything.
The purpose would be narrower and more defensible:
to ensure that ordinary people retain some direct capacity to act within an increasingly automated society, rather than being reduced to passive recipients of its output.
Why non-transferability matters
If such access exists, I suspect at least part of it would need to be substantially non-transferable.
Otherwise, it would quickly become an object of purchase, rental, aggregation, or indirect capture by larger concentrations of capital. A right meant to preserve broad agency could easily mutate into a new market in access rights, which would reproduce the inequalities it was supposed to solve.
This is not a remote theoretical concern. One can easily imagine gray markets in which individuals effectively lease out their quotas, or act as legal shells through which corporations aggregate supposedly personal access rights.
But this immediately creates a new problem. Preventing reconcentration too aggressively can push the system toward surveillance, identity control, and invasive monitoring of how people use their access. That would undermine the very freedom the system is meant to protect.
So this is not a simple design problem with a clean answer. It is a real institutional tension: how to prevent reconcentration without building an infrastructure of total inspection.
A rough three-layer architecture
One way to think about universal productive access is in three layers.
1. Infrastructure layer
Some share of productive capacity must remain dedicated to maintaining the system itself: energy, data centers, logistics, repair, raw materials, safety, model upkeep, and resilience. This layer cannot simply be distributed. But because it cannot be distributed, it is also the most likely site of hidden power. That means it would need strong transparency and public accountability.
2. Personal access layer
Each person would have a baseline personal claim on productive AI tools and automated services for their own use: assistance, education, design, planning, creativity, and some bounded productive capability. This is the layer most directly tied to preserving individual agency.
3. Collective project layer
People should also be able to combine some of their access into collaborative projects: research, engineering, local initiatives, cultural efforts, and public missions. Otherwise the system risks splitting into isolated personal use on one side and centralized institutional power on the other.
I do not present this as a finished blueprint. It is only a way of making the problem more legible.
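To make the interaction between the personal and collective layers slightly more concrete, here is a purely illustrative toy sketch in Python. All names and units are invented for the example; nothing here is a proposal for a real mechanism. The point it illustrates is the non-transferability constraint: a personal quota can only be spent by its owner, and contributing to a collective project consumes the contributor's own quota rather than creating a tradable right that could be sold or aggregated.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalQuota:
    """Non-transferable baseline claim on productive capacity (layer 2).

    'total' is in made-up units (say, compute-hours per month)."""
    owner: str
    total: float
    used: float = 0.0

    def remaining(self) -> float:
        return self.total - self.used

    def spend(self, amount: float) -> None:
        # Only the owner's own quota object can be drawn down; there is
        # no operation that moves quota from one person to another.
        if amount > self.remaining():
            raise ValueError("insufficient quota")
        self.used += amount

@dataclass
class CollectiveProject:
    """Pooled contributions (layer 3): usable together, never resold."""
    name: str
    contributions: dict = field(default_factory=dict)

    def contribute(self, quota: PersonalQuota, amount: float) -> None:
        # Contributing spends the individual's own quota; the project
        # gains capacity, but no transferable right changes hands.
        quota.spend(amount)
        self.contributions[quota.owner] = (
            self.contributions.get(quota.owner, 0.0) + amount
        )

    def capacity(self) -> float:
        return sum(self.contributions.values())

# Example: a neighborhood pools quota for a shared project.
alice = PersonalQuota("alice", total=100.0)
bob = PersonalQuota("bob", total=100.0)
garden = CollectiveProject("hydroponics")
garden.contribute(alice, 30.0)
garden.contribute(bob, 20.0)
# garden.capacity() is now 50.0; alice.remaining() is 70.0
```

Even this toy version surfaces the tension described above: enforcing "only the owner can spend" in the real world requires identity verification, which is exactly the surveillance pressure the section on non-transferability warns about.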
The central political tension: safety vs agency
Any serious version of this idea has to confront a central tension.
Powerful AI systems and automated infrastructure create obvious pressures toward centralization: safety, alignment, misuse prevention, security, liability, resilience, and control.
But preserving human agency seems to require at least some degree of decentralized access. If all meaningful AI capability remains tightly centralized, then formal rights may coexist with deep practical dependency.
This suggests a core political tension for advanced AI:
Safety pushes toward concentration and control.
Agency pushes toward wider access and decentralization.
I do not think this can be solved by a slogan. It is probably one of the central institutional problems of the coming era.
Four phases of the transition
To keep the argument disciplined, I find it useful to think in phases.
Phase 1: automation displaces labor income
The first problem is insecurity. Labor still matters, but more of the gains flow elsewhere. At this stage, UBI, wage insurance, and other income-stabilizing measures make sense.
Phase 2: gains from AI increasingly concentrate in capital
The next problem is ownership. If AI-driven growth mainly benefits the owners of automated systems, then AI-capital share becomes important: social wealth funds, public equity, dividends, and other ways of broadening the public claim on AI-generated wealth.
Phase 3: many people lose not only income centrality, but role centrality
At this stage, income support and capital dividends are no longer sufficient. People may be materially secure while becoming economically passive. This is where universal productive access becomes necessary: access not only to output, but to the tools of meaningful action.
Phase 4: automated infrastructure becomes the dominant core of society
At this stage, the main question is governance. Who controls the infrastructure layer? Who defines acceptable access? Who prevents a new technocratic elite from monopolizing effective power? Here the issue becomes democratic co-governance, constitutional limits, and protection against infrastructural oligarchy.
My core claim
My basic claim is simple:
UBI helps preserve economic security.
AI-capital share helps preserve a public claim on AI-driven wealth.
Universal productive access helps preserve human agency.
Democratic governance of AI infrastructure helps preserve political freedom.
Current debates understandably focus on the first two. But that may leave a crucial blind spot.
A world in which people receive money from automated abundance but lack meaningful access to the tools of creation may be far better than mass deprivation. But it may still leave most people in a structurally subordinate position.
If so, then the next institutional question is not only how to distribute AI-generated wealth.
It is how to distribute access to AI-generated power.
Questions I’d most want criticism on
I do not think this is a finished proposal, and several challenges seem obvious:
Under what conditions does money stop being enough to secure meaningful access?
What forms of access would preserve real agency rather than merely simulate it?
How non-transferable should such access be?
How do we prevent gray markets without building intrusive systems of surveillance?
How should we navigate the safety–agency tradeoff in practice?
How do we stop the infrastructure layer from becoming the hidden seat of a new elite?
Should access be personal, cooperative, municipal, or some combination?
My hope is not that this post settles those questions. It is that it makes one neglected issue harder to ignore:
the AI distribution debate is heavily focused on income, but it pays much less attention to agency.
And that, in the long run, may turn out to be the more politically important problem.