There’s no bigger narrative than the one AI industry leaders have been pushing since before the boom: AGI will soon be able to do just about anything a human can do, and will usher in an age of superpowerful technology the likes of which we can only begin to imagine. Jobs will be automated, industries transformed, cancer cured, climate change solved; AI will do quite literally everything.
Unfortunately, the article does not seriously consider the possibility that AGI could automate most jobs within a few years. In that case, the large investments in AI would be justified even if current revenue is small! I think that is an important difference from past bubbles.
OpenAI, Anthropic, and the AI-embracing tech giants are burning through billions, inference costs haven’t fallen (those companies still lose money on nearly every user query), and the long-term viability of their enterprise programs is a big question mark at best.
The claim about inference costs seems false, unless the author means total inference spending across all deployed instances rather than the cost per query.
Novice investor participation is nowhere near what it was at the 2000 dot-com peak. Current conditions look more like 1998. A bubble is probably coming, but there's still lots of room for novice enthusiasm to grow.
From journalist Brian Merchant:
The explanations given for each of the four heuristics were insightful. I left them out here, but you can find them in the original article.