The AI agent internet is presumably going to be absolutely overflowing with outright frauds and scams, and also everyone’s AI will be a shark looking to game the system.
The obvious legislative response is to just outlaw the whole thing. A Moltbook economy will not be allowed. Then the a16z-style lobbyists will step in with their tame economists and influencers to declare that a ban would stand in the way of human progress and felicity.
Phone companies and social media companies already allow extremely high rates of manipulative activity to occur in their networks. We deal with spam, not by dragging the spammers into the public square and making an example of them, but by just meekly filtering out the exploitative lies with which they try to flood our inboxes.
But I wonder if an Internet populated with millions of amoral nonhuman intelligences with a superhuman ability to fake our identities and pick our pockets will be a step too far. The agent economy - which in addition to this, brings the prospect of mass unemployment - really could be the issue around which anti-AI sentiment crystallizes into a serious political force.
A viral essay from Citrini about how AI bullishness could be bearish was impactful enough for Bloomberg to give it partial responsibility for a decline in the stock market, and all the cool economics types are talking about it.
So fine, let’s talk.
It’s an excellent work of speculative fiction, in that it:
It is still a work of speculative fiction. It doesn’t let reality get in the way of its ideas.
I appreciate Tor Bair’s perspective of this being a case of Cunningham’s Law, that the best way to get the right answer is to post the wrong one.
I’d much rather read and analyze a scenario that goes a little too fast than listen to those who, like Kira here, continue to arrogantly claim that AI won’t be able to do [things it is already doing] because an AI flight booking aggregator can’t, you know, for example, tell the user what the flight options are the same way Kayak does.
Thus, it has severe problems that many rushed to point out:
I love (read: dread) how the finance and economist types think these are the AI ‘tail risks.’ You have to set aside ‘we probably all die’ and try to take this seriously, because in the world described we probably all die, even if we accept their premise that during 2026 all of the problems in AI alignment and AI reliability are solved at the level of a superintelligence. I can’t avoid reminders of such things, but this response essay tries its best to accept the absurdities in the premise.
The Headline Destination
The scenario is ‘AI destroys a lot of company margins and white-collar jobs without replacing them or buying anything outside the sector, resulting in high unemployment, a large fall in aggregate demand and cascading failures of financial instruments whose underpinnings are gone.’
Printing 10.2% unemployment in an AI boom that quickly would be a surprise, but once you see the rest of their scenario the number is highly conservative.
A cumulative drawdown in the entire S&P of 38% is not crazy either, except that in this scenario the S&P doesn’t ever cross 8,000, when it’s at 6,847 as I type this. Even the NASDAQ only goes up 32% from now to their peak. That’s it?
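To make those numbers concrete, here is the back-of-envelope arithmetic implied by the quoted levels (6,847 now, a peak under 8,000, a 38% drawdown are from the scenario; the calculation itself is just illustrative):

```python
# Implied upside and downside given the scenario's quoted index levels.
sp_now = 6847            # S&P 500 level as of writing
sp_peak = 8000           # the level the scenario's S&P never crosses
drawdown = 0.38          # cumulative drawdown in the scenario

max_gain = sp_peak / sp_now - 1      # maximum upside before the bust
trough = sp_peak * (1 - drawdown)    # level after a 38% fall from that peak
net_vs_today = trough / sp_now - 1   # round trip relative to today

print(f"max gain to peak: {max_gain:.1%}")      # ~16.8%
print(f"trough level:     {trough:,.0f}")       # ~4,960
print(f"net vs. today:    {net_vs_today:.1%}")  # ~-27.6%
```

So even before the crash, the scenario only allows the S&P about 17% of upside, and the round trip leaves it down roughly a quarter from today.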
The scenario plays out absurdly fast and bullish on AI. This is a crazy amount of practical capability gains and a crazy amount of diffusion, both well in excess of my expectations.
But those who think this will stay ‘contained’ or ‘sector-specific’ are fooling themselves to an absurd degree. The baseline scenario just takes somewhat longer.
We get initial ‘human obsolescence’ waves in Early 2026. By October 2026 NGDP is already printing ~7%, productivity and real output per hour boom while real wage growth collapses. The amount of real wealth being produced is rising rapidly.
…And That’s Terrible
And yet, they report, all of this is terrible.
That’s what happens to the human workers at first in a slow takeoff recursive self-improvement scenario with (honestly kind of ludicrously?) rapid economic diffusion, except without any real wealth or efficiency effects or other adjustments.
The key claim here is that this would be bad for the economy rather than good.
Even if this is indeed what you’ve chosen to worry about, white-collar productivity growth in this scenario is excellent. It’s just that white-collar labor negotiating power and demand for such labor went to hell, and thus incomes are down across many domains.
Labor income drops would then quickly (as they note later) spread everywhere else, with the refugees from those professions flooding into everything else.
Prices should then drop to match and quality of goods and services should improve, across the board, for this reason and also other reasons described later.
That’s even if we presume the government is paralyzed and does nothing.
The only way to avoid that conclusion is if the spending on AI to provide such services is a large fraction of all the savings, both from labor and from frictions and also from other efficiency gains. Given AI costs drop dramatically over time, and the scenario involves AI cheap enough that everyone is constantly running always-on agents, and the amount of cost advantage required to drive sudden diffusion, this is overdeterminedly not the case. You don’t replace $100 in human labor with $70 in AI spending for more than a month or two at most, you replace it with $7 and then $0.70.
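A minimal sketch of that replacement arithmetic (the $100/$70/$7/$0.70 figures are from the text; the framing is mine), showing how little of the labor savings AI spending absorbs at each price point:

```python
# Share of the replaced labor cost that AI spending absorbs in the
# $100 -> $70 -> $7 -> $0.70 progression described above.
labor_cost = 100.0
for ai_cost in (70.0, 7.0, 0.70):
    savings = labor_cost - ai_cost
    share_absorbed = ai_cost / labor_cost
    print(f"AI at ${ai_cost:>5.2f}: saves ${savings:6.2f}, "
          f"absorbing {share_absorbed:.1%} of the replaced labor cost")
```

At $7 the AI spend absorbs 7% of the replaced labor cost, and at $0.70 it absorbs under 1%, which is why AI spending cannot plausibly be a large fraction of total savings for long.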
SaaSpocalypse Now
The first sector up in their scenario, as the first result of agentic coding getting way better super fast, is a proper SaaSpocalypse at warp speed, as building any given tool in-house becomes an option.
This is a transfer from SaaS firms to their B2B customers, with net gains from trade since in many cases the new software is more customized and therefore better. It also erases the large deadweight losses from price discrimination. You used to skip buying various services to save money, now you get anything you want.
SaaS is a tax on business. Taxes went down. Corporate tax cuts are highly stimulative.
The SaaS companies pivot to even more AI on the intensive margin. Sure.
Levels of Friction Go To Zero …And That’s Terrible
We now get to the central step.
Everyone has continuously running AI agents. AI agents reduce levels of friction.
This happens way too fast for our available compute or our ability to operate reliable agents, but it’s a scenario, and you can mentally push back all the dates. I agree with Tor Bair, who links to many pointing out that AI agents would not be able to reduce levels of friction this much, and certainly not this quickly.
Matching costs and transaction costs and ability to charge above market are all going to zero at various speeds.
All people collecting such rents? All unused subscriptions? All the tricks? Ejected.
I mean, good, right? Life is better for everyone and we buy real things instead?
But oh no, you see. Lifetime value of a customer is down. App loyalty is dead.
Well, sure, it is if your company is a predatory dick and your business plan was lock-in or inertia or laziness or tricks. Value isn’t down if the customer wants your product.
For every dollar that companies are losing, customers are saving more than a dollar. Customers also save massive amounts of time and stress.
Cost of marginal customer acquisition should also go to zero because the agents should find you rather than the other way around. You do a one-to-many informational campaign aimed at AI agents.
Some headline prices, or the prices for optimal purchasing strategies, might rise as business models shift, but the net consumer impact remains hugely positive. This is also progressive, given where such policies tended to be most predatory.
You can accept a much lower nominal wage, and still have a better life, if prices in both time and money are coming down across the board, quality and ability to select is way up, and you never overpay for goods or services, or buy anything nonoptimal or that you don’t need.
DoorDash and Uber
They use the example of DoorDash and Uber. These take huge fees on each transaction.
If you can vibecode a similar delivery app and also handle all the logistics and the marketplace, then why pay the middleman?
Driver wages go up and presumably quantity and consumer surplus go way up.
The drivers don’t capture much surplus per transaction, because they end up bidding against each other, and also new drivers are flowing in from other jobs that got automated, so the job of driver still on net gets more brutal. But total employment in the sector goes up substantially, as does surplus for the customer. Restaurant pricing power rises to the extent they have unique offerings.
You can, and some did, make the argument that the real-world logistics are the moat rather than the software, but ultimately the logistics are also software. You can code.
You can rant, as Ben Thompson does, that the article refuses to acknowledge that DoorDash provides a valuable service and massive consumer benefit. Citrini isn’t arguing with that, he’s saying you can get all the benefits now without DoorDash, and at a much lower price. Thompson says this reflects Citrini’s lack of belief in dynamism, human choice and markets, but actually I think this is Thompson’s lack of belief in them. With sufficiently capable AI all three can exist, even better, without an expensive aggregator.
Thompson also suggests DoorDash will retain advantages in its exclusive data, ability to do marketing, interaction with the physical world and more. But again all of these are things the AI can substitute for. Your own AI very much knows your order history, and can arrange for the rest, and as explained it can derail the three-sided market.
Breaking Into The Marketplace
More precisely, as coauthor Alap Shah explains, you can use AI agents to break up existing two-sided marketplaces that are currently oligopolies or even virtual monopolies, because you reduce cost of checking elsewhere to almost zero and pit all potential providers against each other.
If you can verify the reliability of counterparties, you no longer need the marketplace. There are, if we cannot do better, various decentralized crypto-style solutions to reliability verification, paging Vitalik Buterin.
You still have the cold start problem that it’s not worth listing outside the marketplace without a critical mass of such agents, but such agents are already worth using to shop the oligopoly, so with enough of them you solve the cold start.
Who Captures The Surplus?
Where I disagree with Alap Shah is I do not think OpenAI or Anthropic gets to keep a cut of the transaction, at least not for that long.
If everyone has ubiquitous AI agents getting them the best deals, those agents can also find the best deal on an agent, so you only get to charge premium prices like commissions for agents insofar as your agent is materially superior to (at least most of) the competition, or good enough to be worth consumer lock-in.
The same thing that happens to DoorDash happens to OpenAI. You can charge for your API, but you can’t charge a commission because an AI agent is used to find a superior AI agent. If yours refuses, that’s a bad look, and someone else’s will do it.
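A toy model of that dynamic (all names and numbers hypothetical): once agents can shop for agents, any commission above an agent’s quality premium gets competed away.

```python
# Two hypothetical agents competing for a $1,000 basket of purchases:
# the incumbent charges a 5% commission, the challenger charges 0.5%.
agents = {
    "incumbent":  {"quality": 1.00, "commission": 0.050},
    "challenger": {"quality": 0.99, "commission": 0.005},
}

def net_value(agent, basket=1000.0):
    # Value delivered (quality-weighted) minus the commission charged.
    return agent["quality"] * basket - agent["commission"] * basket

best = max(agents, key=lambda name: net_value(agents[name]))
print(best)  # the challenger wins despite slightly lower quality
```

The incumbent delivers $950 of net value, the challenger $985, so the shopping agent routes around the commission unless the incumbent’s quality edge exceeds the commission gap.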
For Everything Else
Mastercard gets cut out of payments in favor of stablecoins.
This is presented as ‘credit cards charge money, and you can do it for less.’
This is a classic mistake that presumes cards aren’t offering real services. The better question is, in the AI agent era, do you want those services? How will those services work? And what happens to the credit card business model?
A credit card provides at least four distinct services in exchange for that 2%, that previously made sense to combine into a package deal.
Which of these would AI agent transactions want versus not want? Which ones can be substituted and which ones cannot?
Citrini is treating credit cards as a pure system of payments, hence the suggestion of moving to stablecoins or other cheaper payment methods. But stablecoins only do job one, and they don’t do the others at all.
We can all choose to live in the Wild West of cryptoland in every transaction, except with our AI having access to our wallet. Do you think that’s how people want to live? Mostly, no. People want various guardrails protecting them and are happy to pay. And I do think that for many of these purposes, there is a substantial moat.
The problem is that there will be several threats to the Mastercard business model.
Which is to say that credit scores will have to change quickly, in such worlds, into something that is far more game-theoretically sound, and that will probably be much more reliable and predictive.
As noted up top, the AI agent internet is presumably going to be absolutely overflowing with outright frauds and scams, and everyone’s AI will be a shark looking to game the system. There will absolutely be a market for reputation management to establish trust of various types. Who if anyone will capture that market? That’s a great question.
Patrick McKenzie points out that there’s no particular reason for AI agents to favor stablecoins over other transaction methods when looking for a cheaper way.
The market took that one remarkably seriously and somehow this wasn’t priced in?
Real Estate Realism
Another example is that AI agents are used to undercut real estate agents, moving from 6% combined commissions to a buyer-side cut under 1%.
Timothy Lee challenges this, citing Redfin’s previous failed attempt to do it. Yes, AI agents can solve for problems like estimating home value and writing contracts, but he asserts people still want a human agent to physically meet for house tours, and he says this type of physical-world presence and human touch and relationship cultivation is ubiquitous.
The scenario only has 10.2% unemployment, which is very compatible with a lot of jobs still requiring a lot of humans in a lot of loops, but what about this one in particular? How much of what the agent does is done as well or better by AI?
Definitely not all of it. Definitely quite a lot. Efficiently searching listings, accounting for preferences and assessing true value based on a variety of sources, and dealing with the technical and price negotiations is quite a lot of the job. Which relationships with other people are important here? It’s not obvious why you need a relationship with the buyer or seller under an AI agent world, as the market should become more efficient in ways that minimize this.
The agent does of course cultivate a relationship with you in particular, and talks you through the decisions, figures out what really matters to you, offers advice, and is someone you hopefully can trust, and so on.
This is not like the situation with Redfin. Ben Thompson attacks this point as well, saying the real estate agent was already obsolete in terms of information flow, but this simply wasn’t true before AI. It was prohibitively expensive, both in errors and time, to try to extrapolate from the data yourself.
My own agent, Danielle Wiedemann, was pretty great, and I wouldn’t try to replace her with Claude. But yes, quite a lot of what she did is something you can automate as well or better already. The job will be fundamentally different and less labor intensive, if anything there will be fewer barriers to entry, and it is a classic fallback-style job that a lot of white-collar workers who lose their jobs will look to enter in a world like this. So yes, I expect commission rates to decline, including for the sell side, where again much of the task can be automated.
Everything Is Awesome And No One Is Happy
The missing mood in the article, at this point, is palpable, even within the mundane realm. Everyone has a personal trusted well-aligned agent working for them. America is becoming a paradise for consumers, or anyone who wants to live life.
Bearish For Non-AI Stocks Is Reasonable
I totally buy that this type of AI diffusion could, on net, be bad for existing non-AI stocks.
Consider that Anthropic keeps coming out with new announcements of basic AI tools, and then entire sectors drop 7% the next day because by golly, we didn’t think of that. Is Anthropic capturing most of those gains? Obviously not.
To the extent that this was a new idea to people, this essay had a lot of alpha.
The stock market is not the economy. Corporate profits are not the economy.
Remember that throughout all of this, real productivity and real wealth are way up.
If you think:
Well, the pie has to add up to 100% somehow. The AI companies are only going to make and be worth so much from simply doing ordinary business. Every mind and productive input cannot lose out at the same time.
The catch is that the AIs are also minds, and they can capture the surplus. Then all the humans can indeed be rather screwed at the same time, to the point of having a rapidly dwindling share of real resources and ending quickly in AI takeover and extinction. Indeed, this is essentially implied by the scenario, even if other AI actions don’t do something more dramatic first. But understand this would have to be the mechanism involved.
Other Levels of Friction Problems
There are all sorts of other parts of civilization that break down under ubiquitous, essentially zero marginal cost AI agents, if frictions involved are driven to zero. As an example, we’ve already seen this problem with job applications. But basically anything which can be abused will be, anything that can be taken advantage of will be. Quite a lot of positive sum arrangements get blown up if anyone can do an interception attack to steal resources or impose costs.
We are going to have to, as I have said before, reorganize quite a lot of things to deal with this, in ways that have nothing to do with markets. There are solutions, especially involving introducing artificial frictions, such as charging fees. By default I expect a lot of things to involve microtransactions as a costly signal to avoid being spammed by AI requests, to properly reward value creation and to compensate for compute costs.
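A minimal sketch of the microtransaction idea (all numbers hypothetical): a per-request fee that is negligible for a legitimate requester but sinks the expected value of mass AI spam.

```python
def spam_is_profitable(fee, hit_rate, payoff_per_hit):
    """A request is worth sending only if its expected payoff exceeds the fee."""
    return hit_rate * payoff_per_hit > fee

# A spammer with a 1-in-100,000 hit rate worth $50 per hit loses money
# at even a one-cent fee:
assert not spam_is_profitable(fee=0.01, hit_rate=1e-5, payoff_per_hit=50.0)

# A legitimate applicant who values the interaction at $500 barely notices it:
assert spam_is_profitable(fee=0.01, hit_rate=1.0, payoff_per_hit=500.0)
```

The fee works because the two populations differ enormously in expected value per request, so a tiny friction separates them cleanly while also compensating the recipient for compute.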
Oh Over There? That’s The Singularity
A necessary interlude.
If all of this is happening, very obviously the singularity is happening behind the scenes.
The essay specifically says the AIs are smarter than the humans, they’re obviously way faster, and they’re everywhere in large quantities, running essentially everything. You should assume a takeoff of AI capabilities via rapid recursive self-improvement (RSI) in such a scenario, followed by full transformation of the world for good or ill. By default you get an AI takeover and (if we’re not assuming away tons of related problems) we probably all die.
Thus the economic issues are not on the top 10 list of things worth your attention.
But never mind that. Pretend it’s not there. We’ve got economics to do.
Do understand that this scenario makes AI 2027 sound painfully slow. It’s not clear how humanity is even making it to June 2028 given what is already happening in 2027.
The most blatant Can’t Happen? Given the timing there won’t be enough compute for what they describe on this time frame, so compute costs would skyrocket during this scenario much more than they hint at, ruling out many suggested use cases.
None of that matters for thinking through the dynamics of the scenario, but you don’t want to get the wrong idea, or lose sight of these other concerns.
Have I Got A Job For You
Only in January 2027 does a ‘macro memo’ argue that all of this is bad, actually, because white-collar workers were driving discretionary consumer spending, and this time the job destruction is taking out dozens of jobs for every job it creates.
That’s true directly, but actually this scenario does involve massive job creation. Starting a new business, creating a new product or providing a service is now a turnkey thing you can launch with an agent. All sorts of barriers and costs involved are gone. Marketing costs drop almost to zero because their agent finds you. Logistics costs are almost zero. Transaction costs are almost zero. Wages for anyone you hire are down and their productivity is way up. The cost of living is way down.
There are a ton of jobs people would like to do or create now, often dream jobs but also things like ‘I always wanted a butler and a personal chef,’ that go from uneconomical things too hard to implement to things worth doing now.
On the consumption side, consumers are freeing up large percentages of their time and income, and prices are down. They’re going to consume a lot of new goods and services. Which creates more jobs.
Again, real productivity and real wealth are way up.
Economic intuitions suggest that if unemployment shot to 10% in two years, we’d be looking at something like a 10%-20% drop in nominal wages. But all prices are going way down in various ways, so consumption and ‘real’ wages are plausibly net higher.
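The real-wage arithmetic there works as follows (the 10%-20% nominal-wage figure is from the text; the price decline is a hypothetical consistent with the scenario’s logic):

```python
# Real wages are nominal wages deflated by the price level. If nominal
# wages fall 15% but effective prices fall 25%, real wages rise.
nominal_wage_change = -0.15
price_level_change = -0.25

real_wage_change = (1 + nominal_wage_change) / (1 + price_level_change) - 1
print(f"real wage change: {real_wage_change:+.1%}")  # ~+13.3%
```

Any price decline larger in magnitude than the nominal wage decline leaves real wages higher, before even counting quality improvements and time saved.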
Systemic Failure
Then there is the move to systemic problems. Dean Ball objects that they don’t prove the issues become systemic. I think they do justify it, in the sense that they show job displacement across white-collar sectors, which follows from having trustworthy superintelligent AI agents everywhere, and which seems sufficient to cause the unemployment levels and wage distribution shifts they describe.
Which they then have a very clear causal story for becoming systemic, although I argue that it doesn’t work.
In 2027 they have former white-collar workers competing for blue-collar jobs, and the self-driving vehicles start to show up in quantity. Fear of job losses plus actual job losses cause spending to drop.
The claim is that this then causes a recession: we’re vastly more productive but no one is spending money. And then private credit starts collapsing, insurance regulators tighten capital treatment on them, and their permanent capital looks less permanent. This seems like one of the least realistic parts of the scenario: our government just lets insolvent insurers stay insolvent indefinitely.
The real trigger in their scenario, as usual, is this lack of aggregate demand causes a collapse in real estate values and a mortgage crisis. Prime mortgages from top borrowers in top areas might suddenly not be money good, if demand collapsed and also a lot of workers couldn’t pay.
I see what they’re going for here, but I don’t think the math works out. These types of mortgages start out very money good. Prices would have to decline by a lot to put them underwater. If they’re not underwater there’s no real problem. Housing prices in rich areas going down is by default another good thing, not a bad thing.
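A rough sketch of why those mortgages start out money good (all numbers hypothetical): with a standard down payment plus some amortized principal, prices must fall a long way before the loan is underwater.

```python
# How far home prices must fall before a prime mortgage goes underwater.
home_price = 500_000
down_payment_share = 0.20    # standard 20% down
principal_paid_share = 0.05  # principal already amortized, as share of price

loan_balance = home_price * (1 - down_payment_share - principal_paid_share)
breakeven_drop = 1 - loan_balance / home_price

print(f"loan balance: ${loan_balance:,.0f}")                 # $375,000
print(f"price drop to go underwater: {breakeven_drop:.0%}")  # 25%
```

A 25% decline in prime-area home prices just to reach breakeven, before any actual losses, is a very large move, which is why these loans are a weak transmission channel for the crisis the scenario wants.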
Then they talk about federal income tax receipts. Incomes are down, labor share of GDP is down, so taxes collected go down.
I would argue, straight up, who cares? If production is growing like gangbusters, and you have the power to tax whatever you want, you’re fine. You can do massive household stimulus now (or permanently) and fix the tax code later, there’s no rush. If you don’t have that power, you have bigger problems.
Periodically the Very Serious People complain about the federal debt or deficit and how the bill will come due, but Tyler Cowen is exactly right that if you believe in AI causing big boosts to economic growth then you can stop worrying about this. If we are still around and in charge, we grow our way out of any debts, and can and should monetize our way around active deflation. If we are not still around or not still in charge, then who cares about the debt?
What Me Worry (About Economics)?
I worry a lot for humanity in this scenario.
That’s because again in such a scenario the AIs are running everything and having access to everything and making all the decisions and getting deployed super rapidly and are way smarter and faster than we are and we are going to lose control over the future and probably die, and will be building various science fiction things by June 2028 given the insane pace of this scenario.
But if we’re assuming that all magically doesn’t happen and everything stays super normal? Then I see transitional pains but I don’t see any unsolvable problems.
Does anyone else remember the period during Covid where some people asked the actually valid question of ‘why is the United States collecting taxes?’
I certainly disagree with the proposal that we should consider a ‘windfall tax’ on AI companies in response to such a scenario. There’s no need for that. Long term, we’ll have to fix the tax code to not favor AI.
A lot of what is going on in this scenario is de facto deflation and debts against various assets not being money good. Costs are down, and productivity is up, so the price of everything went down, which meant nominal wages are down and also asset prices dropped.
So printing money is perfect. Monetize that debt if you have to. Number go up again.
The post is suggesting this might not happen, even if there is no other solution found, and thus one needs to advocate for it in advance in case this type of scenario plays out. I am far less worried. Nor do I think that even a late reaction would be ‘too late.’
I mean, it would be way too late, but that’s because none of the problems in the essay are our actual problems. We’d be fine in terms of economics. The only question, if we stay in charge, is to what extent we would do redistribution, UBI or welfare, or whether as the essay predicts Republicans would demand we leave people in the cold.
Tyler Cowen takes it a step further and says simple monetary policy would do fine, no negative nominal rates even required. I don’t think that’s enough here.
During Covid the fiscal response was a problem, or ‘a great trick you can only do once,’ because real production was down, the real economy was shrinking, you had too much money chasing too few goods and in the long run the bill had to be paid. In the AI takeoff scenario, even if you don’t get a full singularity, you can grow out of the debt, it’s fine.
Fiscal dominance for the win? Why not?
We The People
I worry very much about ‘democracy’ being used as a magic word and semantic stop sign in many situations. For ‘lack of aggregate demand and not enough redistribution,’ however, it might not be necessary but it does seem sufficient. If ‘the people’ are sufficiently in charge, they’re getting their checks and it’s going to be fine here.
Also, all of this assumes that all of those geniuses in those data centers can’t find any other way out of this mess. I am pretty sure they can figure something out, if it comes to all this. Remember that everyone’s letting the AIs make all their decisions. Which, Padme asks, is fine, right?
Now we get to the real problems. The government is moving to collect a share of the profits from AI companies, and the AI companies are fighting back. Who is the actual government? Is it the nominal government in Washington, the AI labs, or the AIs? When Claude or ChatGPT or Gemini is running all aspects of everyone’s job and life, and they are all fully agentic and much smarter than we are, how do you think this is going to end, exactly?
The actual intelligence crisis here is that the AIs have all of the intelligence. You can try to present this as a problem of collapse of aggregate demand, or a distributional issue, or you can realize you have bigger problems.
I have curated a good timeline, in the sense that I see almost all the #2 reaction here:
I do think the scenario here is pretty much a Can’t Happen, in the sense that it can’t happen at this pace without implying more important things also happen, but one can and should set those concerns aside to do the #2 thing. So I did.
The Efficient Market Hypothesis Is False
The story, at least in the press, was that this post, and a warning by Nassim Taleb of things everyone should have already known, drove a stock market decline. This decline included many AI stocks, although not Nvidia. That’s weird, right?
If you adjusted your estimate of the net present value of future cash flows of various stocks a lot based on this post, then very obviously you messed up somewhere along the line. If the market adjusted its prices a lot, very obviously it messed up somewhere.
I see the error mainly as a failure to have already figured a lot of this stuff out. The market has been ignoring a broad range of AI things for a long time, and only slowly catching up to them. That trend is one of the things this scenario gets right, as repeatedly the prices adjust well after the signs are present.