Charles: There have been so many terrible arguments for why we’re in an AI bubble lately, and so few good ones, that I’ve been convinced the appropriate update is in the “not a bubble” direction and increased my already quite long position.
Fwiw that looks like now being 1.4x leveraged long a mix of about 50% index funds and 50% specific bets (GOOG, TSMC, AMZN the biggest of those).
Nowadays, I think updating on the weakness of the other side's arguments, as done in the quote, is the biggest epistemic mistake a lot of people make, and it causes them to radicalize into incorrect empirical beliefs, because poor criticism of an idea will always exist independent of the idea's quality.
I'll make a linkpost on this tomorrow. I'd argue this is the single biggest reason selection effects are so hard to deal with empirically: you simply have to ignore the vast majority of possible critics, the people who aren't believers, but that also means you lose the ability to tell whether an effect is an artifact of selection unless you have access to ground truth or a way to cheaply verify the theory independent of the social context.
Edit: I made the linkpost, two days late, on what I was talking about regarding selection effects and the fact that poor criticism of an idea will always exist independent of the idea's quality; it's here, titled "the main way I've seen people turn ideologically crazy [Linkpost]".
I think the strongest case for AI stocks being overpriced is to ignore any specific facts about how AI works and take the outside view on historical market behavior. I don't see a good argument being made in the quotes above so I will try to make a version of it.
I'm going from memory instead of looking up sources; I'm probably wrong about the exact details of the claims below, but I believe they're approximately true.
(These are five different perspectives on the same general market phenomenon, so they're not really five independent pieces of evidence.)
On the outside view, I think there's pretty good reason to believe that AI stocks are overpriced. However, on the inside view, the market sort of still doesn't seem like it appreciates how big a deal AGI could be. So on balance I'm pretty uncertain.
Nothing importantly bearish happened in that month other than bullish deals
What happened that made a bunch of people more bearish is that AI stocks went up a good deal, especially some of the lesser known ones.
I'm unsure what exact time period you're talking about, but here are some of the more interesting changes between Aug 29 and Oct 15:
IREN +157%
CLSK +145%
APLD +136%
INOD +118%
NBIS +75%
MU +61%
AMD +47%
If I thought AI was mostly hype, that kind of near-panic buying would have convinced me to change my mind from "I don't know" to "some of those are almost certainly in a bubble". (Given my actual beliefs, I'm still quite bullish on MU, and weakly bullish on half of the others.)
I wouldn't even obviously put AMD on the list, given that they're up on rather big single-stock news, but yes, good note, there is that.
expect such a crisis to have at most modest effects on timelines to existentially dangerous ASI being developed
It may be my lack of economics education speaking, but how can that be the case? Are current timelines not relying heavily on the ability of the labs to raise huge amounts of capital for building huge datacenters and for paying many people who are smarter than current frontier models to manually generate huge amounts of quality data? Wouldn't such a crisis make it much harder for them, plausibly beyond what makes direct economic sense, due to what responsible investors think a responsible investor is expected to do?
Yes, that is a plausible scenario. But the project could in theory be directly sponsored by the government, or a Chinese project could be sponsored by the CCP. What I suspect is that creating superhuman coders or researchers is infeasible due to problems not just with the economy, but with scaling laws and the quantity of training data, unless someone makes a bold move and applies some new architecture.
My other predictions of progress on benchmarks
If my suspicions are true, then the bubble will pop once it becomes clear that the METR law[1] has reverted to its original trend of the time horizon doubling every 7 months, along with training compute costs (and do inference compute costs grow even faster?).
However, my take on scaling laws could be invalidated in a few days if it mispredicts things like the METR-measured time horizon of Claude Haiku 4.5 (which I forecast to be ~96 minutes) or the performance of Gemini 3[2] on the ARC-AGI-1 benchmark. (Since o4-mini, o3, and GPT-5 form a nearly straight line, while Claude Sonnet 4.5[3] produces results on the line or a bit under it, I don't expect Gemini 3 to land above the line.)
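For concreteness, here is a minimal sketch of the arithmetic behind that kind of doubling-law forecast; the 60-minute anchor horizon, the dates, and the 30.44-day average month are illustrative assumptions of mine, not METR's published figures.

```python
from datetime import date

def horizon_minutes(anchor_minutes, anchor_date, target_date, doubling_months=7.0):
    """Project a time horizon forward under a simple exponential doubling law:
    the horizon doubles every `doubling_months` months past the anchor."""
    months_elapsed = (target_date - anchor_date).days / 30.44
    return anchor_minutes * 2 ** (months_elapsed / doubling_months)

# Illustrative only: a hypothetical model measured at a 60-minute horizon on
# 2025-03-01, projected forward to 2025-10-15 under a 7-month doubling time.
print(round(horizon_minutes(60, date(2025, 3, 1), date(2025, 10, 15))))  # ~126 minutes
```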
We have the classic phenomenon where suddenly everyone decided it is good for your social status to say we are in an ‘AI bubble.’
Are these people short the market? Do not be silly. The conventional wisdom response to that question these days is that, as was said in 2007, ‘if the music is playing you have to keep dancing.’
So even with lots of people newly thinking there is a bubble the market has not moved down, other than (modestly) on actual news items, usually related to another potential round of tariffs, or that one time we had a false alarm during the DeepSeek Moment.
So, what’s the case we’re in a bubble? What’s the case we’re not?
My Answer In Brief
People get confused about bubbles, often applying that label any time prices fall. So you have to be clear on what question is being asked.
If ‘that was a bubble’ simply means ‘number go down’ then it is entirely uninteresting to say things are bubbles.
So if we operationalize ‘bubble’ to simply mean that at some point there is a substantial drawdown in market values (e.g. a 20% drop in the Nasdaq sustained for 6 months), then I would be surprised by this happening, but the market would need to be dramatically, crazily underpriced for it not to be a plausible thing to happen.
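As a sketch of how one might check that operationalization against a daily price series; the 126-trading-day window as a stand-in for six months, and the pandas handling, are my assumptions:

```python
import pandas as pd

def sustained_drawdown(prices: pd.Series, drop: float = 0.20, window_days: int = 126) -> bool:
    """True if the series stays at least `drop` below its prior running peak
    for `window_days` consecutive trading days (roughly six months)."""
    peak = prices.cummax()                      # running high-water mark
    below = prices <= peak * (1 - drop)         # days at least 20% off the peak
    run_lengths = below.groupby((~below).cumsum()).cumsum()  # consecutive-True run lengths
    return bool(run_lengths.max() >= window_days)
```

On this definition a sharp crash that recovers within a few months would not count, which matches the ‘sustained’ qualifier above.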
If a bubble means something similar to the 2000 dot com bubble, as in valuations that are not plausible expectations for the net present values of future cash flows? No.
[Standard disclaimer: Nothing on this blog is ever investment advice.]
Before I dive into the details, a time sensitive point of order, that you can skip if you would not consider political donations:
So They’re Saying There’s a Bubble
So a month ago most people thought things were fine and now it’s a bubble?
This is a very light bubble definition, as these things go.
Nothing importantly bearish happened in that month other than bullish deals, so presumably this is a ‘circular deals freak us out’ shift in mood? Or it could be a cascade effect.
AI Is Atlas And People Worry It Might Shrug
There is definitely reason for concern. If you remove the label ‘bubble’ and simply say ‘AI’ then the quote from Deutsche Bank below is correct, as AI is responsible for essentially all economic growth. Also you can mostly replace ‘US’ with ‘world.’
Can A Bubble Be Common Knowledge?
Not quite, at this size you need some doubt involved. But the basic answer is yes.
It is definitely possible to get into an Everybody Knows situation with a bubble, for various reasons, both when it is and when it isn’t actually a bubble. For example, there’s Bitcoin, and Bitcoin, and Bitcoin, and Bitcoin, but there’s also Bitcoin.
Is it evidence for or against a bubble when everyone says it’s a bubble?
My gut answer is that it depends on who ‘everyone’ is.
If everyone is everyone working in the industry? Then yeah, evidence for a bubble.
If everyone is everyone at the major economic institutions? Not so much.
So I decided to check.
There was essentially no correlation, with 42.5% of AI workers and 41.7% of others saying there is a bubble, and that’s a large percentage, so things are certainly somewhat concerning. It certainly seems likely that certain subtypes of AI investment are ‘in a bubble’ in the sense that investors in those subtypes will lose money, which you would expect in anything like an efficient market.
Steamrollers, Picks and Shovels
In particular, consensus seems to be, and I agree with it (reminder: not investment advice), that investments in ‘companies with products in position to get steamrolled by OpenAI and other frontier labs’ are as a group not going to do well. If you want to call that an ‘AI bubble’ you can, but that seems net misleading. I also wouldn’t be excited to short that basket, since you’re exposed if even one of them hits it big. Remember that if you bought a tech stock portfolio at the dot com peak, you still got Amazon.
Whereas if you had a portfolio of ‘picks and shovels’ or of the frontier labs themselves, that still seems to me like a fine place to bet, although it is no longer an ‘I can’t believe they’re letting me buy at these prices, this is free money’ level of fine. You now have to actually have beliefs about the future and an investment thesis.
What Can Go Up Must Sometimes Go Down
Noah Smith speculates on bubble causes and types when it comes to AI.
You could have a speculative bubble, or an extrapolative bubble, or simply a big mistake about the value of the tech. He thinks if it is a bubble it would be of the latter type, proposing we use Bezos’ term ‘industrial bubble.’
I don’t think that’s quite right. The market reflects a variety of perspectives, and it will almost always be way below where the ardent optimists would place it. The ardent optimists are the rock with the word ‘BUY!’ written on it.
What is right is that if AI over some time frame disappoints relative to expectations, sufficiently to shift forward expectations downward from their previous level, that would cause a substantial drop in prices, which could then break momentum and worsen various positions, causing a larger drop in prices.
Thus, we could have AI ultimately having a huge economic impact and ultimately being fully transformative (maybe killing everyone, maybe being amazingly great), and have nothing go that wrong along the way, but still have what people at the time would call ‘the bubble bursting.’
What Can Go Up Quite A Lot Can Go Even More Down
Indeed, if the market is at all efficient, there is a lot of ‘upside risk’ of AI being way more impactful than suggested by the market price, which means there has to be a corresponding downside risk too. Part of that risk is geopolitical, an anti-AI movement could rise, or the supply chain could be disrupted by tariff battles or a war over Taiwan. By traditional definitions of ‘bubble,’ that means a potential bubble.
Even a small chance of a big upside should mean a big boost to valuation. Indeed that is the reason tech startups are funded and venture capital firms exist. If you don’t get the fully transformational level of impact, then at some point value will drop.
Consider the parallel to Bitcoin, and to thinking there is some small percentage chance of it becoming ‘digital gold’ or even the new money. If you felt there was no way it could fall by a lot from any given point in time, or even if you were simply confident that it was probably not going to crash, it would be a fantastic screaming buy.
Step Two Remains Important
AI also has to retain expectations that providers will be profitable. If AI is useful but it is expected to not provide enough profits, that too can burst the bubble.
Matthew Yglesias writes more thoughts here, noting that AI is propping up the whole economy and offering reasons some people believe there’s a bubble and also reasons it likely isn’t one, and especially isn’t one in the pure bubble sense of cryptocurrency or Beanie babies, there’s clearly a there there.
To say confidently that there is no bubble in AI is to claim, among other things, that the market is horribly inefficient, and that AI assets are and will remain dramatically underpriced but reliably gain value as people gain situational awareness and are mugged by reality. This includes the requirement that the currently trading AI assets will be poised to capture a lot of value.
Oops We Might Do It Again
Alternatively, how about the possibility that there could be a crash for no reason?
The DeepSeek moment is sobering, since the AI market was down quite a lot on news that should have been priced in and if anything should have made prices go up. What is to stop a similar incorrect information cascade from happening again? Other than a potential ‘Trump put’ or Fed put, very little.
Derek Thompson Breaks Down The Arguments
Derek Thompson provides his best counterargument, saying AI probably isn’t a bubble. He also did an episode on this for the Plain English podcast with Azeem Azhar of Exponential View to balance his previous episode from September 23 on ‘how the AI bubble could burst.’
Well, yeah, when there’s a bubble everyone goes around saying ‘there’s a bubble’ but no one does anything about it, until they do and then there’s no bubble?
As Tyler Cowen sometimes asks, are you short the market? Me neither.
Derek breaks down the top arguments for a bubble.
A lot of overlap here. We have a lot of money being invested in and spent on AI, without much revenue. True that. The AI companies all invest in and buy from each other, at a level that yeah is somewhat suspicious. Yeah, fair. The chips are only being depreciated on five-year horizons and that seems a bit long? Eh, that seems fine to me, older chips are still useful as long as demand exceeds supply.
So why not a bubble? That part is gated, but his thread lays out the core responses. One, the AI companies look nothing like the joke companies in the dot com bubble.
AI Revenues Are Probably Going To Go Up A Lot
Two, the AI companies need AI revenues to grow 100%-200% a year, and that sounds like a lot, but so far you’re seeing even more than that.
As Matt Levine says, OpenAI has a business model now, because when you need debt investors you need to have a business plan. This one strikes a balance between ‘good enough to raise money’ and ‘not so good no one will believe it.’ Which makes it well under where I expect their revenue to land.
Frankly, OpenAI is downplaying their expectations because if they used their actual projections then no one would believe them, and they might get sued if things didn’t work out. The baseline scenario is that OpenAI (and Anthropic) blow the projections out of the water.
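To make the growth requirement above concrete, here is a back-of-the-envelope compounding sketch; the ~$20 billion starting figure echoes the revenue number cited later in this post, and the five-year horizon is my own assumption:

```python
def project_revenue(start_billion: float, annual_growth: float, years: int) -> float:
    """Compound `start_billion` of annual revenue at `annual_growth` per year."""
    return start_billion * (1 + annual_growth) ** years

# Illustrative: roughly $20B of current annual revenue compounded for 5 years.
for growth in (1.0, 1.5, 2.0):  # 100%, 150%, 200% growth per year
    print(f"{int(growth * 100)}%/yr -> ~${project_revenue(20, growth, 5):,.0f}B after 5 years")
# 100%/yr -> ~$640B, 150%/yr -> ~$1,953B, 200%/yr -> ~$4,860B
```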
True Costs That Matter Are Absolute Not Relative
Timothy Lee thinks it’s probably ‘not a bubble yet,’ partly citing Thompson and partly because we are seeing differentiation on which models do tasks best. He also links to the worry that there might be no moat, as fast followers offer the same service much cheaper and kill your margin, since your product might be only slightly better.
The thing about AI is that it might in total cost a lot, but in exchange you get a ton. It doesn’t have to be ten times better to have the difference be a big deal. For most use cases of AI, you would be wise to pay twice the price for something 10% better, and often wise to pay 10 or 100 times as much. Always think absolute cost, not relative.
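A toy calculation of the absolute-versus-relative point, with entirely made-up numbers for the task value, quality levels, and prices:

```python
def net_value(task_value: float, quality: float, ai_cost: float) -> float:
    """Value you capture from a task, as a fraction `quality` of its full value,
    minus what you paid the AI for help."""
    return task_value * quality - ai_cost

# Made-up numbers: a task worth $500, a cheap model at $2 versus a model that
# costs 10x more but does the job 10% better (0.88 vs 0.80 of full value).
cheap  = net_value(500, 0.80, ai_cost=2)    # $398
better = net_value(500, 0.88, ai_cost=20)   # $420
print(cheap, better)  # the 10x higher price is still clearly worth paying
```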
We Are Spending a Lot But Also Not a Lot
It is, but is it? I spend more than that on one of my AI subscriptions, and get many times that much value in return. Thinking purely in terms of present day concrete benefits, when I ask ‘how many different use cases of AI are providing me $1,800 in value?’ I can definitely include taxes and accounting, product evaluation and search, medical help, analysis of research papers, coding and general information and search. So that’s at least six.
Similarly, does this sound like a problem, given the profit margins of these companies?
Valuations Are High But Not Super High
Similarly, Vinay notes that the valuations are only somewhat high.
That’s a P/E ratio, and all this extra capex spending if anything reduces short term earnings. Does a 28x forward P/E ratio sound scary in context, with YoY growth in the 20% range? It doesn’t to me. Sure, there’s some downside, but it would be a dramatic inefficiency if there wasn’t.
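A quick sketch of why that combination doesn’t look scary: if earnings actually compound at roughly 20% a year, today’s 28x multiple shrinks fast relative to future earnings. The 28x and 20% figures are from the text above; the 3- and 5-year horizons are my own assumptions.

```python
def forward_pe_after_growth(pe_now: float, annual_growth: float, years: int) -> float:
    """P/E on today's price if earnings compound at `annual_growth` for `years`."""
    return pe_now / (1 + annual_growth) ** years

# 28x forward P/E with ~20% annual earnings growth.
print(round(forward_pe_after_growth(28, 0.20, 3), 1))  # ~16.2x on year-3 earnings
print(round(forward_pe_after_growth(28, 0.20, 5), 1))  # ~11.3x on year-5 earnings
```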
Vinay offers several other notes as well.
Official GPU Depreciation Schedules Seem Pretty Reasonable To Me
One thing I find confusing is all the ‘look how fast the chips will lose value’ arguments. Here are Vinay’s supremely confident claims, as another example of this:
Hedgie takes a similar angle, calling the economics unsustainable because the lifespan of data center components is only 3-10 years due to rapid technological advances.
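For scale, here is a minimal straight-line depreciation sketch showing how much the assumed useful life moves the annual expense; the $10 billion capex figure is made up for illustration:

```python
def straight_line_depreciation(capex_billion: float, useful_life_years: float) -> float:
    """Annual depreciation expense under a simple straight-line schedule."""
    return capex_billion / useful_life_years

# Illustrative: $10B of GPU capex expensed over assumed lifespans of 3 to 6 years.
for life in (3, 4, 5, 6):
    print(f"{life}-year life -> ${straight_line_depreciation(10, life):.1f}B/year")
# 3 -> $3.3B/yr, 4 -> $2.5B/yr, 5 -> $2.0B/yr, 6 -> $1.7B/yr
```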
No one I have seen is saying that chip capability improvements are accelerating dramatically. If that were the case, we would need to update our timelines.
When Nvidia releases a new chip every year, that doesn’t mean they do the 2027 chip in 2026 and then do the 2029 chip in 2027. It means they do the 2027 chip in 2027, and before that do the best chip you can do in 2026, and it also means Nvidia is good at marketing and life is coming at them fast.
Huang’s statement about free Hoppers is obviously deeply silly, and everyone knows not to take such Nvidia statements seriously or literally. The existence of new better chips does not invalidate older worse chips unless supply exceeds demand by enough that the old chips cost more to run than the value they bring.
That’s very obviously not going to happen over three years let alone one or two. You can do math on the production capacity available.
If the marginal cost of hoppers in 2028 was going to be approximately zero, what does that imply?
By default? Stop thinking about capex depreciation and start thinking about whether this means we get a singularity in 2028, since you can now scale compute as long as you have power. Also, get long China, since they have unlimited power generation.
If that’s not why, then it means AI use cases turned out to be severely limited, and the world has a large surplus of compute and not much to do with it.
It kind of has to be one or the other. Neither seems plausible.
I see not only no sign of overcapacity, I see signs of undercapacity, including a scramble for every chip people can get and compute being a limiting factor on many labs in practice right now, including OpenAI and Anthropic. The price of compute has recently been rising, not falling, including the price for renting older chips.
Dave Friedman looked into the accounting here, ultimately not seeing this as a solvency or liquidity issue, but he thinks there could be an accounting optics issue.
Could recent trends reverse, and faster-than-expected depreciation and a reduced ability to charge for older chips cause problems for the accounting in data centers? I mean, sure, that’s obviously possible, if we actually produce enough better chips, or demand sufficiently lags expectations, or some combination thereof.
This whole question seems like a strange thing for those investing hundreds of billions and everyone trading the market to not have priced into their plans and projections? Yes, current OpenAI revenue is on the order of $20 billion, but if you project that out over 3-10 years, that number is going to be vastly higher, and there are other companies.
The Bubble Case Seems Weak
I mostly agree with Charles that the pro-bubble arguments are remarkably weak, given the amount of bubble talk we are seeing, and that when you combine these two facts it should move you towards there not being a bubble.
Unlike Charles, I am not about to use leverage. I consider leverage in personal investing to be reserved for extreme situations, and a substantial drop in prices is very possible. But I definitely understand.
Most of what changed, I think, is that there were a bunch of circular deals done in close succession. Combine that with the exponential growth expectations for AI, people’s lack of understanding of the technology and what it will be able to do, and valuations approaching the point where one can question whether there is any room left to grow, and this reasonably triggered various heuristics and freaked people out.
If we define a bubble narrowly as ‘we see a Nasdaq price decline of 20% sustained for 6 months’ I would give that on the order of 25% to happen within the next few years, including as part of a marketwide decline in prices. It has happened to the wider market as recently as 2022, and about 5 times in the last 50 years.
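That estimate is roughly what a naive base-rate calculation gives: about five such episodes in fifty years, treated (as a strong simplification) as a constant-rate process:

```python
from math import exp

def prob_at_least_one(events_per_year: float, years: float) -> float:
    """Probability of at least one event in `years`, assuming a constant
    Poisson-style rate (a crude simplification for market drawdowns)."""
    return 1 - exp(-events_per_year * years)

# ~5 sustained 20% drawdowns in the last 50 years -> a rate of 0.1 per year.
print(round(prob_at_least_one(5 / 50, 3), 2))  # ~0.26 over the next three years
```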
What It Would Mean If Prices Did Go Down
If a decline does happen, I predict I will probably use that opportunity to buy more.
That does not have to be true. Perhaps there will have been large shifts in anticipated future capabilities, or in the competitive landscape and ability to capture profits, or the general economic conditions, and the drop will be fully justified and reflect AI slowing down.
But most of the time this will not be what happened, and the drop will not ultimately have much effect, although it would presumably slow down progress slightly.