If AI is aligned, human labor may still be needed.
Many of the roughly 14,000 articles on AI’s labor impact assume that AI will largely substitute for human labor and warn of the consequences - like Trammell and Patel examining how to address the resulting inequality, or the Something Big is Happening post arguing “If your job happens on a screen…then AI is coming for significant parts of it”.
An anecdotally smaller set of articles assures us that AI-assisted machines will transform but ultimately complement human labor (Why I’m not concerned about AI job loss or Why I'm not worried about AI causing mass unemployment), creating more new types of jobs even if others are replaced.
The answer isn’t clear, but for human labor to persist we need to count on two conditions:
Market demand is defined by the preferences of humans, not machines
Human labor remains a complement to (AI) capital
We can preserve the first condition through our engineering and policy.
The second condition is empirical and harder to intentionally ensure. But it may be more likely to hold than the scarier AI articles argue.
A look at labor data hints at growing demand for human ‘specification’ inputs, beyond simply cognitively complex labor. This could persist, provided humans still direct what gets produced in the economy.
Data, findings and scripts to reproduce here
Korinek and Autor - Divided by Assumptions
One labor model parameter will likely determine who’s right about human labor in the age of AI - the elasticity of capital-labor substitution (σ). This captures the percentage change in the capital-to-labor ratio that we’d see in response to a 1% change in the relative factor price (wages over capital costs). If wages relative to capital costs go up 1%, then at σ=0.9, companies raise their mix of machines to workers by 0.9%.
If σ is below 1, wage increases drive only partial substitution by machines. In this case, AI capital should ultimately complement human workers and keep generating new jobs even as some are automated. If σ is above 1, wage increases yield aggressive substitution of human labor: jobs are automated sooner or later, and it’s time to grapple with a post-work world.
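To make the mechanics concrete, here is a minimal sketch under a CES production function. The functional form, the share parameter alpha, and the 1% wage bump are illustrative assumptions for exposition, not anything estimated in this post:

```python
# Sketch: under CES production, the cost-minimizing capital/labor ratio
# responds to relative factor prices with elasticity sigma:
#   K/L = (alpha/(1-alpha) * w/r)^sigma
# alpha (the capital share parameter) is an illustrative assumption.

def capital_labor_ratio(w_over_r, sigma, alpha=0.5):
    return (alpha / (1 - alpha) * w_over_r) ** sigma

base, bumped = 1.0, 1.01  # relative wage rises 1%
for sigma in (0.9, 1.5):
    change = capital_labor_ratio(bumped, sigma) / capital_labor_ratio(base, sigma) - 1
    print(f"sigma={sigma}: machines-per-worker mix rises {change:.2%}")
# sigma=0.9 -> ~0.90% (partial substitution); sigma=1.5 -> ~1.50%
```

At σ below 1 the machine mix shifts less than proportionally with the wage change; above 1 it shifts more than proportionally, which is the whole dispute in one parameter.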
Oberfield and Raval (2021) measure σ in US manufacturing over the last 80 years, estimating it to be in the range of 0.5 to 0.7 since 1970. At the same time, Acemoglu and Restrepo (2019) find overall more intensive substitution, with stronger labor displacement and relatively lower new job creation over the last 30 years. Labor has been a complement to capital through modern history, but we can’t be sure the future won’t be different.
Two economists broadly represent these contrasting views on the labor market: Anton Korinek and David Autor.
Anton Korinek models a future in which σ runs over 1 and the value of human labor ultimately descends below subsistence wages (Korinek and Juelfs 2022). The only work left is ‘nostalgic’ work, where ‘humans are hired merely for the fact of being human, even if machines could perform their work more cheaply and more effectively’. In this future, policy interventions are required to provide humans income in the absence of livable wages. His view lines up with many internet writers, whom you could consider the ‘Korinek’ camp, expecting σ > 1 and full automation of human labor.
David Autor looks at jobs created by technological changes over time (Autor et al 2024) and argues that new jobs will continue to match or exceed the jobs being automated well into the future. His models envision labor demand migrating from low-skill to high-skill, with net new job creation persisting as a result of emerging labor-augmenting technology and growth in the markets using tech-augmented labor. You could consider writers skeptical of mass AI-driven job displacement, and hopeful for new AI-augmented jobs, part of the ‘Autor’ camp.
Production is for Humans
To help me think about the role of human labor in the future and figure out which of the camps seems more plausible, I imagined a series of products - a meal, a movie, a smart phone, national defense. How do these get made? Are there uniquely human inputs required?
The execution of work to deliver all these goods and services may eventually be done by AI. Provided a specification and a means of verifying that output matches it, it’s hard to imagine how advancing machine intelligence doesn’t execute better than a human could.
But how does the specification get created? Who decides what needs to get executed? If these inputs aren’t uniquely human, then that would seem an ominous outcome. If production decisions are being made outside human-expressed preferences then we are also in a world outside of legible utility functions. Then we’re competing for scarce resources with entities driven by alien motives and potentially superior intelligence.
Indicative future human / AI division of production tasks
(note: cell examples initially seeded with AI from author’s prompt, edited by author)
Korinek (Korinek 2019) relaxes the assumption that humans determine the production function and do the specification, suggesting:
Demand does not have to derive from humans….What matters for a well-functioning economy is not human absorption but aggregate absorption, which includes AIA [‘Artificially Intelligent Agent’] absorption.
With human demand distinct from agent demand, Korinek derives a result in which the human population goes to 0 absent strong mitigating assumptions (sufficient property rights for a ‘human reservation’, labor input complementarity for a good necessary for agents, or ‘factor separability’ - agents not needing inputs humans consume), none of which seem good to rely on.
Historically economics has relied on assumptions about human choices. Maintaining this stance seems critical for alignment. If we grant machines independent (and potentially competitive) will, then we’ve probably lost more than just our labor demand. We need engineering and policy approaches that ensure that output is an expression of human preferences. If we are not starting from this tenet, we’ve conceded our jobs, and perhaps our survival.
Markets need Human Preferences
If we restrict the production function to serving human utility, human preferences remain central. They are a decisive input, both as consumption signals to the market and as entrepreneurial vision:
Deciding what to consume - while this act hasn’t historically been considered labor, it takes time and effort, and it delivers signals to the market that shape production. Consumption decisions derive from the consumer’s irreducibly private information and lived experience.
Deciding what to produce - entrepreneurs don’t just follow explicitly expressed preferences; they choose what to realize based on perceptions of potential demand, but also on their own preferences, desires and perceived skills. They are specifying objects in the world from which consumers choose (as expressed by Steve Jobs: “people don't know what they want until you show it to them”).
Markets represent a constant interplay between the specified and realized objects that consumers choose from. The specification of a good or service draws from the lived experience and the speculation of the entrepreneur or manager.
Specification is the act of bringing together lived experience, private information and an understanding of the preferences of those to be served. While machines may assist, informing the specification with unprecedented insights, the entrepreneur, the contributor, the worker will still have to decide what is realized.
This is the labor that may persist. Labor in the execution of products may be substituted, but labor involving the expression of human will and choice won’t.
Beyond Complexity, Specification is in Demand
If specification is a uniquely human input, then with greater diffusion of AI we should expect to see related tasks earning wage premia. Can we detect these premia already? Particularly in occupations with greater AI exposure?
Similar to Autor’s optimistic take on new work emerging (as in Autor 2024 - Applying AI to Rebuild Middle Class Jobs), my investigations suggest signs of AI-human labor complementarity that could sustain labor demand. Tasks with attributes associated with ‘specification’ have enjoyed wage premia and greater job growth over the last 20 years.
To support this claim, I looked at O*NET Generalized Work Activity (GWA) importance scores per job title, where scores cover 41 activity categories and range from 1 to 5. Below are the top 5 most important scores for lawyers, occupation 23-1011 in the database:
4.62 Getting Information
4.59 Communicating with Supervisors, Peers, or Subordinates
4.57 Making Decisions and Solving Problems
4.51 Evaluating Information to Determine Compliance with Standards
4.49 Resolving Conflicts and Negotiating with Others
Applying PCA, we find a top axis (PC1) that corresponds closely to indications of cognitive complexity, with a 0.85 correlation to the ‘non-routine cognitive’ task measure used in Acemoglu and Autor 2011. Top-scoring PC1 jobs also align intuitively with higher complexity levels; the following jobs show the highest PC1 scores:
Education Administrators
Chief Executives
Human Resources Managers
Medical and Health Services Managers
Neurologists
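The PCA step above can be sketched as follows. The random matrix is a stand-in with the right shape for the occupation x GWA importance table; the actual O*NET file layout and values are not reproduced here:

```python
# Sketch of extracting the 'cognitive complexity' axis (PC1) from an
# occupation x GWA importance matrix. Data here are random stand-ins:
# rows = occupations, columns = 41 GWA importance scores on a 1-5 scale.
import numpy as np

rng = np.random.default_rng(0)
importance = rng.uniform(1, 5, size=(800, 41))

# PCA via SVD of the column-centered matrix; PC1 is the top principal axis
centered = importance - importance.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
pc1_scores = centered @ Vt[0]  # one PC1 score per occupation

# Rank occupations by PC1 (highest 'complexity' first, in the post's reading)
top5 = np.argsort(pc1_scores)[::-1][:5]
```

On the real importance matrix, the occupations landing in `top5` would be the kind of high-complexity roles listed above.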
I then combined the cognitive complexity scores per GWA with the attributes most uniquely associated with executive leadership to find a residual that should correspond more purely to task specification and orchestration.
Management attributes were generated using activity importance scores for roles in SOC 11-XXXX, which covers Chief Executives, Managers and Administrators, then taking the difference between the average GWA importance scores for executive roles and those for non-executive positions.
PC1 cognitive complexity scores were fit to management attributes over GWAs, generating a residual that represents executive attributes not explained by cognitive complexity. This residual should correspond to the pure ‘deciding/directing’ content inherent in production tasks, not the cognitive load. Top five GWA scores for management attributes, our cognitive complexity measure (PC1) and the ‘specification’ residual are summarized in the table below.
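A minimal sketch of this residualization step, with simulated stand-ins for the actual per-GWA values:

```python
# Sketch: regress each GWA's management attribute (executive minus
# non-executive mean importance) on its PC1 'complexity' loading;
# the OLS residual is the 'specification' score. Values are simulated.
import numpy as np

rng = np.random.default_rng(1)
n_gwa = 41
pc1_loading = rng.normal(size=n_gwa)  # complexity content per GWA (stand-in)
mgmt_attr = 0.6 * pc1_loading + rng.normal(scale=0.5, size=n_gwa)

X = np.column_stack([np.ones(n_gwa), pc1_loading])   # intercept + complexity
beta, *_ = np.linalg.lstsq(X, mgmt_attr, rcond=None)
spec_residual = mgmt_attr - X @ beta  # 'deciding/directing' content net of complexity
```

By construction the residual is orthogonal to the complexity axis, so a GWA scores high on `spec_residual` only for management content that cognitive complexity cannot explain.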
The GWAs highest in this specification score align with an intuitive sense of ‘specification’, and in the top five are almost identical in rank to our raw management scores, aside from the substitution of ‘Selling or Influencing Others’ for ‘Developing and Building Teams’.
Jobs with overall management traits saw much greater growth in the last two decades, adding 20 million new roles vs 5 million added for jobs not associated with management tasks.
If we break down growth attributable more to cognitive complexity and more to residual ‘specification’-intensity, we see that more of the overall ‘management’ occupation growth is in higher specification occupations, rather than higher complexity alone.
If the early diffusion of AI were contributing to these labor market dynamics, we’d also expect occupational AI exposure to interact meaningfully with growth rates and these management-task characteristics.
To check for this, I looked at the AI Occupational Exposure (AIOE) dataset, developed in Felten et al 2021, which measures how much AI could be applied to occupational abilities and roles. In the dataset, roles which map to O*NET SOC codes are marked with an ‘AI-exposure’ score indicating the extent to which AI could be applied to their activities.
The top 5 AIOE-score roles are Genetic Counselors, Financial Examiners, Actuaries, Budget Analysts and Judges (bottom AIOE roles are Slaughterers and Meat Packers, Roofers and Fence Erectors).
Using our SOC occupation codes we can group occupations by estimated AI-exposure. Employment growth for AI-exposed, spec-heavy occupations has been rapid, while execution-heavy roles with AI exposure have seen striking levels of job loss.
These findings are illustrative and not conclusive, particularly as we are still potentially early in the process of AI technology diffusion.
But looking more closely at the interactions between AI exposure, occupation spec attributes and log wages through the regression specified below, we see a specification x AI interaction term that is consistently positive across years, with annual error bands that just dip below 0 but a pooled confidence interval spanning 0.11 to 0.52.
ln(wage_j) = α + β₁·spec_residual_j + β₂·AIOE_j + β₃·(spec × AIOE)_j + γ·PC1_j + ε_j
Any value above 0 is consistent with specification labor being complementary to AI capital, and these positive estimates indicate that greater AI exposure in an occupation is associated with a higher wage premium for specification-intensity.
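As a sketch, the interaction regression can be run like this on simulated occupation-level data, where a positive interaction (0.3) is built in by construction; the real analysis uses OEWS wages and the O*NET/AIOE-derived measures:

```python
# Sketch of the log-wage regression with a spec x AIOE interaction,
# on simulated data. The true interaction coefficient is 0.3 by
# construction, so the estimate should recover roughly that value.
import numpy as np

rng = np.random.default_rng(2)
n = 700
spec, aioe, pc1 = rng.normal(size=(3, n))
ln_wage = (0.2*spec + 0.1*aioe + 0.3*spec*aioe + 0.4*pc1
           + rng.normal(scale=0.3, size=n))

# Design matrix: intercept, main effects, interaction, complexity control
X = np.column_stack([np.ones(n), spec, aioe, spec*aioe, pc1])
beta, *_ = np.linalg.lstsq(X, ln_wage, rcond=None)
interaction = beta[3]  # positive -> spec premia rise with AI exposure
```

The sign of `interaction` is the quantity of interest: positive values are the complementarity signal the post describes.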
These values are consistent with the manufacturing σ estimates from Oberfield and Raval, with the positive sign indicating that AI exposure increases the relative marginal product of specification labor. More AI capital makes this kind of human contribution more valuable, not less - perhaps auguring well for alignment.
If the specification labor x AI interaction remains in the same range, we’d expect to see more demand and higher wages for the unique human inputs independent of complexity, but associated with direction, choice and specification.
The AI-exposed specification input ties back to an overall σ if we model human labor input in an overall production function as a composite of execution and specification inputs. While execution labor may be substituted by capital, if specification labor is 1) required for or complementary to capital and/or execution labor and 2) a sufficiently important part of the human labor composite input, then overall human labor may continue to complement capital.
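One way to see how this composite structure could keep overall labor complementary is a toy nested-CES sketch. The inner-nest and outer-nest elasticities (10 and 0.5) are purely illustrative assumptions:

```python
# Toy nested CES: capital substitutes easily for execution labor
# (inner sigma = 10), while specification labor complements the rest
# (outer sigma = 0.5). All parameter values are illustrative.

def output(K, L_exec, L_spec):
    rho_inner = 0.9    # sigma = 1/(1-rho) = 10: near-substitutes
    machine_side = (K**rho_inner + L_exec**rho_inner) ** (1 / rho_inner)
    rho_outer = -1.0   # sigma = 1/(1-rho) = 0.5: complements
    return (machine_side**rho_outer + L_spec**rho_outer) ** (1 / rho_outer)

def mp_spec(K, L_exec=1.0, L_spec=1.0, eps=1e-6):
    # numerical marginal product of specification labor
    return (output(K, L_exec, L_spec + eps) - output(K, L_exec, L_spec)) / eps

# As AI capital accumulates, specification labor's marginal product rises
mps = [mp_spec(K) for K in (1.0, 10.0, 100.0)]
```

Under these assumed parameters, accumulating AI capital raises rather than lowers the return to specification labor - the complementarity condition the interaction estimates are probing.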
Results are also robust to excluding healthcare occupations, which represented an outsized portion of job growth over the sample. While the specification-intensive job growth measure is slightly lower when healthcare jobs are excluded, excluding them actually increases our spec x AIOE interaction estimate, as healthcare occupations have a negative mean spec residual. Restricting the interaction term to Language Model-exposed professions from the AIOE dataset shows even stronger results, with a higher estimated value.
Keep an Eye on the Next OEWS Release
We can’t be certain that human labor won’t be replaced, but we shouldn’t yet write it off as a critical input to our economy - particularly if we assume the economy is there to serve humans, not machines.
To properly orient policy and our own career planning, we should keep an eye on the spec x AI interaction term as new BLS data releases arrive. If this sign of human-AI production complementarity holds up, we might avoid the scarier wealth inequality spirals envisioned in Trammell and Patel, and secure durable human incomes beyond the government-provided UBI envisioned by Korinek.
I’ll predict here that this value will remain positive when re-running the analysis with the upcoming 2026 OEWS release (covering data through May 2025) - at, say, 75% confidence.
If so, this provides another sign human labor will still be needed, at least for the next decade, if not well beyond. I’m prepared to be wrong, and if I am, we should all prepare for a post-work world.