Anton Korinek is an economist at UVA and the Brookings Institution who focuses on the macroeconomics of AI. This is a lightly edited transcript of a recent lecture where he lays out what economics actually predicts about transformative AI — in our view it's the best introductory resource on the topic, and basically anyone discussing post-labour economics should be familiar with this.
The talk traces the historical development of the economy's bottleneck factor: for most of history, land was the bottleneck and human labor was dispensable. The Industrial Revolution flipped this, making labor the scarce factor. AI may flip it again: if labor becomes reproducible, humans are no longer the bottleneck.
Korinek walks through what this implies for growth (potentially dramatic), for wages (likely positive effects until some threshold of automation, then a sudden collapse), and for policy (e.g. our entire tax system assumes labor income exists). He also addresses some common confusions: "prices falling" and "productivity rising" are two descriptions of the same thing; "post-scarcity" is a misnomer, since even very cheap resources still have prices; and no, there is no economic law that technology must create jobs: that was just an empirical regularity from the era when humans were the bottleneck.
The uncomfortable conclusion is the economy doesn't need us. It can run perfectly well "of the machines, by the machines, and for the machines." Whether that's what we want is a different question.
The transcript of the lecture is below the video.
I want to start by talking about some high-level lessons about the economics of transformative AI—or the economics of AGI, which are not exactly the same thing but are close substitutes for each other. I've split this into three big themes, which I call "the good," "the bad," and "the ugly."
What do I mean by that? There is the productivity and growth impact of AGI, transformative AI—that's "the good". There is the labor market disruption and inequality aspect—which is "the bad". And then there are the TAI risks and alignment. I unfortunately don't have very much to say on that, but I'm still including it—which is "the ugly". It also has some economic repercussions. So let's first jump into the good, so that we can end on a high point.
The Good: Productivity and Growth
I'll start with some economics 101. You can forget about the equations, but maybe some of you have seen this in your undergraduate econ. The way that we economists think about the economy is that we produce output Y by combining capital K and labor L through some sort of production function. That production function operates at a certain level of technology, which we call A. We mix those two things together, use our technology, and produce output.
From that output, we take a part and invest it, we take a part and use it for government spending, and the rest is consumption. And that consumption—I guess that's the critical part—is supposed to deliver us some sort of utility u(C). So the big question that I want to unpack in the next 20 minutes or so is: how will AGI affect all of that, and what will all these effects depend on?
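In symbols, a minimal version of that setup (standard textbook notation; the Cobb-Douglas example at the end is an illustrative assumption of mine, not something specified in the talk):

```latex
Y = A\,F(K, L)   % output: technology A applied to capital K and labor L
Y = C + I + G    % output split into consumption, investment, and government spending
u(C)             % consumption delivers utility
% A common textbook example of the production function:
F(K, L) = K^{\alpha} L^{1-\alpha}, \qquad 0 < \alpha < 1
```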
What the Most Cited Economist Thinks
Let's start with the good. What will the output effects of AGI depend on? Well, if we ask the most cited economist in the world, Daron Acemoglu: he published a paper last year, "The Simple Macroeconomics of AI," in which he predicted that AI will raise growth by only 0.07% a year. So he does not believe that this is going to be very transformative. He thinks AI is essentially BS.
But that means there is more work for the rest of us to do. This guy is the closest thing to an economic superintelligence—if he doesn't pay attention to these questions, then I guess many of us have to.
A Longer Arc of History
I think to take this question seriously, we have to actually take a step back and look at a much longer arc of history to understand how transformative AI will be for the economy.
In prior decades—or even for the past two centuries—everything in the economy revolved around capital and labor. But if we take a step even further back towards the Malthusian age, it wasn't always like that.
During the Malthusian times, the most important factor of production, the most important thing to produce output in the economy, was actually land. And then you needed human labor to work the land. But unfortunately, the human labor was actually pretty dispensable. The way that economists put it is the marginal human had just enough to cover their subsistence needs. What they ate and what they produced was roughly equal, which means they did not produce any excess economic value. This is what Malthus described as essentially the Malthusian trap.
During Malthusian times—during the Middle Ages and so on—human populations multiplied until they ran out of resources to sustain themselves. So everybody, or at least everybody except for a few special kings and so on, had just enough to meet their basic necessities. Living standards were very low. Utility, as we economists would characterize it, was not particularly high. In some sense, from a material perspective, those were pretty bleak times—people were not doing particularly well. But since land was essentially the bottleneck factor of the economy, those who controlled the land were the most powerful: the lords.
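As a toy illustration of that trap, here is a minimal sketch (my construction, with made-up parameters): land is fixed, output has diminishing returns to labor, and population grows whenever income per person exceeds subsistence. Wherever you start, the population converges back to the point where the marginal person is exactly at subsistence.

```python
# Toy Malthusian economy (illustrative parameters, my construction).
X = 100.0           # fixed stock of land
alpha = 0.5         # land's share in production (assumed)
subsistence = 1.0   # consumption needed per person to survive
N = 10.0            # initial population

for t in range(300):
    income_per_person = (X / N) ** alpha  # per-capita output: Y/N with Y = X^a * N^(1-a)
    # Population expands when income exceeds subsistence and contracts below it.
    N *= 1 + 0.05 * (income_per_person - subsistence)

print(round((X / N) ** alpha, 3))  # -> ~1.0: back at subsistence, the Malthusian trap
```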
The Industrial Revolution
Well, then something really miraculous happened: the industrial revolution—driven probably by scientific advances, the age of enlightenment, and so on. What happened during the industrial revolution is that we developed technologies to produce stuff so that land was no longer the primary bottleneck factor in the economy. Instead, we started to produce things with machines, and we combined those machines with labor. That gave us the productive structure that all of you who took econ 101 saw in your undergraduate studies, where we combine capital and labor to produce output. This is just given as one of the most fundamental economic laws—although maybe it will soon no longer be that way.
The other thing that happened with the industrial revolution is that we suddenly started to advance technology in a sustained way. Technology started to progress at a rate of like one-and-a-half to two percent a year. We constructed more and more machines, and the accumulation of better technologies and more capital allowed the economy to grow—and to grow in a sustained way.
But what was the bottleneck? The bottleneck was suddenly the human: the human worker. Humans did not multiply at the same rate as technological progress advanced and as we accumulated capital. That means humans became very scarce. And that scarcity of human labor is what drove the sustained advances in living standards over the past two-and-a-half centuries, and what made living standards in advanced countries something like 20× what they were before the industrial revolution.
So the fact that we were the bottleneck, the fact that we were scarce—that's what made us economically valuable, and that's what basically supported the standard of living that all of us are experiencing today.
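A quick sanity check on those numbers (my arithmetic, not from the talk): a 20-fold rise in living standards over roughly 250 years corresponds to sustained growth of

```latex
g = 20^{1/250} - 1 \approx 1.2\% \text{ per year},
```

which is in the same ballpark as the one-and-a-half to two percent rate of technological progress mentioned above.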
The Age of AI
Now we may be about to enter the age of AI. What's going to happen there?
Technology is almost certainly going to accelerate. Capital, just like it was during the industrial age, can be reproduced. But labor is no longer bound to how many human bodies we have—it can also be simply reproduced by starting up another server farm, by building another robot.
And those machines are going to compete with human labor.
All the indications right now are that they are going to perform the kinds of jobs that humans can perform at what is currently a lower cost. And if a machine can perform your job, then competitive forces will ultimately drive your wage to the cost of the machine.
Simulating the Output Effects
What will this imply for output? I have some simulations in a paper that I published with the IMF two years ago. The bottom line shows the baseline of how growth proceeded for the past couple of decades, and then there are two AGI scenarios which essentially assume that we automate the economy within five years versus within 20 years. The faster we automate everything, the faster growth takes off. I'll discuss the effects on wages when we talk about the labor market in the corresponding diagram.
[Figure: simulated output under the baseline and under 5-year and 20-year AGI automation scenarios. Source: Anton Korinek. Note: AGI = artificial general intelligence.]
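The figure itself is not reproduced here, but the qualitative shape of such scenarios is easy to sketch. Below is a toy version (my construction, not the model from the IMF paper): the fraction of automated tasks ramps up linearly over 5 or 20 years, and annual growth interpolates between a 2% baseline and an assumed 30% fully-automated rate. All numbers are illustrative.

```python
# Toy AGI growth scenarios (illustrative only; not the IMF paper's model).
def output_path(ramp_years, horizon=30, base_growth=0.02, agi_growth=0.30):
    y, path = 1.0, [1.0]
    for t in range(1, horizon + 1):
        frac = min(t / ramp_years, 1.0)  # fraction of tasks automated so far
        g = (1 - frac) * base_growth + frac * agi_growth
        y *= 1 + g
        path.append(y)
    return path

baseline = output_path(ramp_years=float("inf"))  # never automate: 2% forever
slow = output_path(ramp_years=20)                # automate within 20 years
fast = output_path(ramp_years=5)                 # automate within 5 years
print(baseline[-1], slow[-1], fast[-1])          # faster automation -> far larger output
```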
So, back more conceptually to the output effects. These output effects are going to be driven by technological progress, because I think all of us expect that AGI will allow us to make advances—both scientific advances and organizational advances—more quickly than what we currently can in the human economy.
They will be driven by automating labor. And I think it's important—I visit this website every couple of weeks just to make sure I really read it correctly. The charter of OpenAI, for example, mentions "highly autonomous systems that outperform humans at most economically valuable work". Well, that means if they really succeed at what they're saying, then labor is toast.
But right now I'm supposed to talk about the good—the output effects. The good is: if you relieve this bottleneck of humans, if you basically allow machines to perform all the valuable tasks in the economy, then you can grow the economy to a much bigger size. You can expand output and move beyond that bottleneck. That means growth rates that are currently unthinkable may be possible.
One way of thinking about it—you hear all these numbers, is it going to be 20% or 50%, I have no idea—but one way of thinking about it is: imagine you have robots and server farms. How long is it going to take those robots and server farms together to double themselves? That's going to give you a good perspective on what reasonable estimates of growth rates will be.
And the third factor that's also very important: we need this capital accumulation. We need these additional machines, these additional robots and server farms, in order to advance growth in the economy.
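To put numbers on the doubling-time heuristic above (my formalization, not from the talk): if the combined stock of robots and server farms can replicate itself every T years, the implied annual growth rate is

```latex
g = 2^{1/T} - 1, \qquad \text{e.g. } T = 2 \Rightarrow g = \sqrt{2} - 1 \approx 41\%, \quad T = 5 \Rightarrow g \approx 15\%.
```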
The Intelligence Explosion
Now what may this growth takeoff look like, and what may it be driven by? In a paper that I'm about to put out with three co-authors—Tom Davidson, Basil Halperin, and Tom Houlden—we look at how an intelligence explosion may trigger essentially a growth takeoff.
Source: "Is Automating AI Research Enough for a Growth Explosion?" (with T. Davidson, B.Halperin, and T. Houlden), Nov. 2025
In one diagram, the right column shows you how economic growth proceeded during the industrial age. You had output being driven by advances in technology—or total factor productivity, as we call it technically in economics—and by the accumulation of capital. Total factor productivity kind of fed on itself, because if you have better technology, you can produce even further advances in the future. And the capital stock rose because we used part of our output for investment and accumulated more of it. So this right column captures what is driving growth during the industrial age.
Now what would happen if we do experience AGI, and what are the potential dynamics for an intelligence explosion?
First of all, we would suddenly have all this AI labor that can perform tasks that previously were reserved for human labor. Having all this additional labor will immediately allow our output to jump up. In addition, a lot of that labor can be dedicated to advancing technology—to increasing total factor productivity.
Now let's unpack what drives the increase in AI labor. There are two forces: the first one is advancing software quality, and the second one is advancing hardware quality, plus hardware accumulation. These two can feed back on each other and imply that essentially the available amount of AI labor is going to grow very rapidly, therefore drive these increases in productivity and increases in output.
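As a toy sketch of that feedback loop (my construction, with made-up functional forms and parameters; the paper's actual model will differ): AI labor is software quality times the hardware stock, and part of each period's output is plowed back into better software and more hardware, so the two channels compound on each other.

```python
# Toy intelligence-explosion feedback loop (illustrative, not the paper's model).
software, hardware = 1.0, 1.0
for year in range(10):
    ai_labor = software * hardware   # effective AI labor supply
    output = ai_labor ** 0.7         # output produced with AI labor (assumed exponent)
    software *= 1 + 0.10 * output    # output devoted to AI R&D improves software...
    hardware += 0.20 * output        # ...and investment accumulates chips and robots
    print(year, round(ai_labor, 1))  # AI labor grows at an accelerating rate
```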
Important Factors to Consider
I want to add a little bit more texture and discuss a few factors that I think are important to keep in mind if we dig a little bit more into the weeds.
Bottlenecks. In some ways, you can say bottlenecks are like the weakest link in the chain of expanding output. They are what's holding us back from producing more. The simplest story is like the O-ring story: if you need to follow 100 steps to produce something but you can only automate 99 of them, and the hundredth relies on some bottleneck, then you cannot produce more because that bottleneck is always going to hold you back.
In practice, it's not going to be as stark. Bottlenecks can be to some extent substituted for. If you don't have enough energy, you can focus on developing technologies that do the same thing with a little bit less energy. So you can get around bottlenecks. But still, the more bottlenecks there are—and many of them we're probably not aware of yet—the more that will hold back economic growth.
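A standard CES example makes the bottleneck point quantitative (my construction, not from the talk): with strong complementarity between an automated input and a bottleneck input, scaling the automated input a hundredfold barely moves output, while with substitutability it helps a lot.

```python
# CES illustration of bottlenecks (standard functional form, my parameters).
def ces(x_automated, x_bottleneck, rho):
    # rho far below 0 approaches the O-ring/Leontief case Y = min(x1, x2);
    # rho = 0.5 allows substantial substitution between the two inputs.
    return (0.5 * x_automated**rho + 0.5 * x_bottleneck**rho) ** (1 / rho)

for rho in (-5.0, 0.5):
    before = ces(1.0, 1.0, rho)
    after = ces(100.0, 1.0, rho)  # scale the automated input 100x, bottleneck unchanged
    print(f"rho={rho}: output {before:.2f} -> {after:.2f}")
# rho=-5.0: output 1.00 -> 1.15 (the bottleneck binds)
# rho=0.5:  output 1.00 -> 30.25 (substitution routes around it)
```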
Cognitive versus physical work. This is a really important distinction that is oftentimes conflated in this debate. For all those of you with aggressive timelines, your timelines are probably at first about cognitive advances—about the fact that AI may take off on the intelligence side. But that may or may not give rise to physical automation. And even if we have the physical automation, we also have to produce a lot of machines, a lot of robots to take full advantage of that.
I'll go back again to OpenAI's charter. They wrote about basically automating the majority of economically valuable tasks. Now, the majority of economically valuable tasks is actually physical, or involves at least some physical component. It's only—depending on how exactly you measure it—10 to 20% of the economy that is purely cognitive. I guess many of us are in that segment, and that's why we can feel it acutely.
But still, the majority of jobs have an important physical component. If we only have cognitive intelligence at human or superhuman levels, that doesn't mean we should expect a dramatic growth takeoff. It only means that we can do these 10 or 20% of the economy much more efficiently. What that means is: if that happens, there's going to be so much economic value in automating the physical parts as well that we should expect a lot of investment flowing into that.
Progressive bottleneck relief. You can think of the long trajectory of economic development as progressively relieving bottlenecks. The industrial revolution relieved the bottleneck of land but then introduced the bottleneck of labor. Now we may be on the verge of relieving the bottleneck of labor, and it's not quite clear what will hold back growth after that. It may be energy. It may be the availability of certain rare earths. I don't know—none of them seems like the obvious predominant one, especially if we are really close to fusion or things like that. Ultimately—and I'm listening to people like Anders when I say something like that—maybe it's just going to be energy and matter within our event horizon. But that's kind of beyond my expertise. It's a huge economic question, because whatever is the bottleneck will be the most valuable.
Confusions Between Economists and Technologists
I want to talk about some confusions that sometimes occur in the debate between economists and technologists.
Productivity versus prices. Economists always like to talk about productivity, but technologists oftentimes talk about prices going down. In some sense, that's actually just using different language for the same thing. I remember Sam Altman wrote this blog post "Moore's Law for Everything" in which he suggested the prices of everything are going to halve every two years in accordance with Moore's law.
We economists don't really measure things in dollar terms when we talk about GDP—we measure them in real terms, adjusted for price changes. So if you say prices go down by half every two years, what you probably mean is: we can produce the same dollar amount but at half the price, which corresponds to producing twice as much every two years. That would be a growth rate of the square root of two minus one, about 41% a year.
Those two statements are economically equivalent. Economists prefer talking about the growth rate adjusted for prices. Halving costs means essentially doubling productivity. And one of the reasons is because prices ultimately are a unit of account. It doesn't really matter whether you say my economy is $1 trillion big or 100 trillion yen big—it's the same, you just convert things. People in Japan are not wealthier because $1 is 100 yen. These are just units of account, and we want to adjust for that to measure the real effects of economic growth.
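Spelling out that arithmetic (mine, consistent with the speaker's numbers): if nominal output stays constant while prices halve every two years, real output doubles every two years, so the annual real growth rate g solves

```latex
(1+g)^2 = 2 \;\Longrightarrow\; g = \sqrt{2} - 1 \approx 41.4\% \text{ per year}.
```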
Post-scarcity. This is maybe more of a very strong opinion on my side than a confusion. I think in part inspired by Star Trek and so on, a lot of people talk about a post-scarcity economy. If they use that term to just say an economy that will be a lot wealthier, in which there will be a lot more abundance, then I'm okay with that.
But technically, we economists call a resource "scarce" whenever it has a nonzero price. Even if something is really cheap, it meets that definition. Now, if AI takes off in a good way and produces all this abundance for us, resources will still have nonzero prices. There will be a lot more of them, but they will still be valuable, and the relative value of different resources is going to be reflected in their relative prices. So I think it makes more sense to talk about material abundance than about post-scarcity.
One thought experiment that's highly useful in this context: imagine we were having this conversation in 1800, and we're going to talk about a post-Malthusian economy because some people are very foresightful and are seeing the writing of the industrial revolution on the wall. Imagine you told people, "Okay, every one of you is going to be 20 times wealthier 200 years from now." People would say, "Well, that's unbelievable—it's inconceivable."
In some sense, you could say we almost already live in this post-scarcity economy compared to the conceptions that people had in 1800. But for a lot of people, it doesn't quite feel like post-scarcity. Prices are still positive, and what matters are their incomes relative to those prices.
The Bad: Labor Market Disruption and Inequality
That brings me to the bad—the potential labor market disruptions and inequality effects. Again, what will drive these? What will they depend on?
Here I won't start by citing Daron Acemoglu, but I will put up an interview that Dario Amodei gave to Axios a couple of months ago, in which he warned that AI could wipe out half of all entry-level white-collar jobs and spike unemployment to 10 to 20% in the next—there's a wide confidence interval—one to five years. So this is what an industry insider describes as the potential labor market effects.
The Economic Channels
Let me look at what are the economic channels that would drive this disruption.
The first one—and I should say that I'm talking about the channels that drive the effects on the labor market, because some of these effects are positive—is just technological progress, which has a tendency to lift all boats, to make everybody more productive. If I just give you ChatGPT or Claude or Gemini, you are more productive as a worker, and that makes your labor more valuable.
The second one, which is the hugely disruptive one, is the automation of labor. Again, if a machine can do a worker's job, the worker's wage will tend towards the machine's cost.
And the third one, which is again positive, is capital accumulation. If I give you better machines to work with, then your labor becomes more valuable.
So what this tells us is there are three main channels. Two of them are actually positive, but one of them is negative. Ultimately, there is a horse race going on between the positive and negative effects. In the short term, it is plausible that a lot of workers are going to benefit from the positive effects. But then, if we reach full AGI as in the OpenAI charter definition I quoted earlier, I think it is very likely that the negative effects are going to predominate.
Adding More Texture
Task displacement versus job displacement. Right now we are talking about task displacement rather than job displacement: there are very few jobs that can be wholly done by AI, and a lot of the economic work in this area is on which tasks can be replaced, not which jobs can be replaced.
Labor demand, not just jobs. A second point I want to emphasize: when you read the newspaper, you often hear about what will be the jobs impact of AI. But the more interesting and more useful question from an economic perspective is: what will be the effects on labor demand?
The reason I say that's more useful is that we think of the labor market as an equilibrium driven by both demand and supply. What usually happens—and what has been happening for the past 200 years—is that the supply of labor is pretty much fixed. Essentially every working-age adult nowadays, or at least the vast majority who are not occupied for family reasons, supplies their labor to the labor market. That means supply is pretty much inelastic, as we say. So supply is fixed, but labor demand is what fluctuates when technology changes.
If you reduce labor demand but you have a fixed supply, what happens in equilibrium is that wages actually bear the brunt of adjustment. In the very short term, there's some job displacement. But as the economy re-equilibrates, wages go down, and the total number of jobs is not materially different from before the shock—but you can see that wages are at a lower level when you have a negative labor market shock. That means focusing just on the job numbers may be a useful short-term guide, but talking about medium- to longer-term developments, we have to focus on wages, not just jobs. And that's captured by essentially the relationship that we call the demand curve, which captures at which wage will employers hire how many workers.
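A minimal numerical sketch of that re-equilibration (my construction, with an assumed constant-elasticity demand curve and illustrative numbers): supply is fixed, demand shifts down, and the entire adjustment shows up in the wage rather than in the number of jobs.

```python
# Inelastic labor supply meets a falling labor demand curve (illustrative).
L_supply = 100.0                   # fixed supply: everyone works regardless of the wage

def demand_wage(L, shift):
    # Inverse demand: the wage at which employers are willing to hire L workers.
    return shift * L ** (-0.5)     # assumed constant-elasticity functional form

w_before = demand_wage(L_supply, shift=200.0)
w_after = demand_wage(L_supply, shift=100.0)  # automation halves labor demand
print(w_before, w_after)           # 20.0 -> 10.0: same jobs, the wage bears the brunt
```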
Simulation Results
Let me show you a simulation from a paper on scenarios for the transition to AGI, in which we trace out how the fraction of automated tasks will affect both the wage bill and the returns to capital.
See Korinek and Suh, "Scenarios for the Transition to AGI," NBER, May 2024
What you can see is: if you automate, and if capital and labor are complements, at first labor becomes more and more valuable because it makes the economy more productive to use machines for tasks that previously required very scarce labor. That means at first, almost all of the benefits go to labor.
But then, after a specific threshold—here in this simulation it's like 80% automation—all of a sudden, the abundance of capital and the fact that we need workers only for very few remaining jobs implies that wages plummet and that all the returns suddenly go to capital.
This is one specific simulation and specific parameter values, but I want to put it up just to show the possibility that as we automate, for a long time we'll see positive effects on labor, and then they may suddenly flip.
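That flip can be reproduced in a stylized task-based model (my construction in the spirit of the Korinek-Suh setup, not their actual code; all parameters are illustrative). A unit mass of tasks aggregates Cobb-Douglas: a fraction f is automated and performed by abundant capital, and the remaining tasks are performed by fixed labor. The wage rises with automation up to roughly 80% and then collapses.

```python
# Stylized task-based automation model (illustrative, in the spirit of Korinek-Suh).
K, L = 100.0, 1.0   # abundant capital, scarce labor (assumed values)

def wage(f):
    # Output Y = (K/f)^f * (L/(1-f))^(1-f); the wage is labor's marginal product.
    output = (K / f) ** f * (L / (1 - f)) ** (1 - f)
    return (1 - f) * output / L

for f in (0.2, 0.5, 0.8, 0.9, 0.99):
    print(f"automated share {f:.0%}: wage {wage(f):.2f}")
# Wages rise with automation up to roughly 80%, then collapse toward zero.
```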
And now I'll show you the counterpart to the output effects of growth on the wage front. The baseline was that wages and output kind of grow in tandem, but if we have AI in 20 years or five years, then you can see that wages at first rise faster, and then they plummet.
[Figure: simulated wages under the baseline and under 5-year and 20-year AGI automation scenarios. Source: Anton Korinek. Note: AGI = artificial general intelligence.]
Where Does the Value Go?
If labor is devalued, the big question is: where does the value go? Because the value doesn't disappear. If we can do something more cheaply—let's say using an AI that costs a hundredth of the cost of a worker right now—that means the economy is still producing the same thing. And the value goes, depending on how the economy is arranged, either to workers or capitalists, or frankly a lot of it just goes to consumers. Which is actually good, but it also highlights the importance of distributive policies.
Managing the Transition
That's where I want to go next. Managing the transition is going to be particularly hard. There will be big winners and big losers on the road to AI. The big winners never want to share what they won. The big losers always want to be compensated and don't want to lose.
There will be an important role for steering technological progress—maybe also a role for slowing down progress, because having so many winners and losers is hugely disruptive.
There are many longer-term policies that are being discussed: UBI (universal basic income), UBK (universal basic capital), job guarantees, compute slices. Right now in the economic policy debate, they all seem outlandish. Nobody's taking these seriously except on the fringes of the political spectrum—which is too bad, if we are expecting these disruptions to happen very quickly.
And of course, there are very important non-economic forces as well: what does this all mean for meaning, for control, for agency, and so on.
Adjusting the Tax System
In a recent paper, I look at how our tax system will have to adjust under AGI. Right now, you can say that taxes on labor (τ_L) are the primary way of raising revenue. But if labor suddenly becomes devalued, our government is not going to have a lot of financial resources. That means in the post-labor economy, we have to switch at first to the taxation of consumption (τ_C), and then ultimately to the taxation of the capital accumulation of AI itself.
See Korinek and Lockwood, "Public Finance in the Age of AI," NBER Economics of TAI, Nov 2025.
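In accounting terms (my notation, standard public-finance bookkeeping), government revenue is

```latex
R = \tau_L\, wL + \tau_C\, C + \tau_K\, rK.
```

Today the labor term dominates. If the wage bill wL collapses, revenue has to migrate first to the consumption term and ultimately to the tax on the returns to AI capital.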
Roles for Labor in the Post-AI Economy
There will be some roles left for labor in the post-AI economy. Some of them transitional, some of them for fundamental human-centric purposes. But from my perspective, it is unlikely that the importance and the share of labor in overall economic value will be anywhere near where it is right now.
Clearing Up Confusions
Again, I want to clear up some confusions in the debate.
There is no "economic law" that new technologies always create jobs. In fact, it's just the law of demand and supply that as long as people are willing to supply their labor at any price, the market is going to hire them. And sometimes the price needs to go down to clear the market. Sometimes the price goes down a lot, as people during the disruptions of industries in the Midwest experienced in the past few decades.
There is no fundamental law that labor always plays some sacred role in the economy. It was just an empirical regularity for the past 200 years.
What ultimately matters is relative prices. Even if the price of everything declines—or some other form of what Sam Altman predicted in "Moore's Law for Everything"—what ultimately matters for workers is how fast the price of their consumption goods declines compared to their wages.
There are a lot of reasons to expect that wages are going to go down faster than energy prices. And a lot of the things that we consume to keep ourselves alive, like food, require a lot of energy. So I think even if you have a strong belief that everything will become cheaper, labor is going to get cheaper even faster.
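In symbols (mine): the real wage is the nominal wage deflated by the price of the worker's consumption basket, and its growth rate is the difference of the two growth rates,

```latex
\omega = \frac{w}{p}, \qquad \hat{\omega} = \hat{w} - \hat{p} \quad (\text{hats denote growth rates}),
```

so if wages fall 20% a year while food and energy prices fall only 5% a year, real wages still fall roughly 15% a year.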
The Ugly: Risks and Alignment
Last point: the ugly—the risks of AGI and alignment.
Economic Aspects of Alignment
First, I want to observe that there are a number of really important economic aspects to alignment.
People need material resources to survive. We may be able to do things more efficiently and so on, but we need some sort of income.
Something else that's inherent in human preferences: people don't like uncertainty—certainly not really big uncertainty. People don't like big inequality. People don't like excessively rapid change, in part because it entails a lot of uncertainty.
I think we need to consider all these factors when we want to align AI to human values, because these basic economic preferences are part of our human values.
The Central Economic Challenge
Ultimately, the main economic challenge in this debate is—to put it kind of in the brute and cold-blooded economic fashion—trading off the expected costs of disaster versus the expected benefits that AI may deliver, both in terms of abundance and in terms of things like longer lives.
In some sense, you can see that lives are both on the left-hand side and on the right-hand side of this tradeoff. If the AGI kills us, that's a cost. But if it makes us all live for hundreds of years, that's also a benefit in terms of lives—not just in terms of pure abundance.
But I think the central part is: there are vast externalities in who makes these tradeoffs. Right now, it is a handful of executives in a handful of very powerful organizations that are making the decision on behalf of all of humanity. And their decisions entail vast externalities—vastly asymmetric payoffs. If they succeed, they will pocket a huge amount of the benefits. If they lose, all of us are going to pay the costs. It's kind of the classical definition of an externality, and I don't think that's good.
Does the Economy Need Humans?
Let me clear up a confusion that you sometimes hear in this debate: Does the economy need humans? Does it need human demand?
No. For the economy, it is perfectly possible to have an economy that's driven only by the machines—to paraphrase Lincoln, to have an economy of the machines, by the machines, and for the machines. We don't need to be there for the economy to function.
But—sorry, David, you said we should not impose values—I do think that's not what we want.
Thank you.