Research coordinator of Stop/Pause area at AI Safety Camp.
See explainer on why AGI could not be controlled enough to stay safe:
lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable
Glad to read your thoughts!
Agreed on being friends with communities who are not happy about AI.
I’m personally not a fan of working with OpenAI or Anthropic, given that they’ve defected on people here concerned about a default trajectory to mass extinction, and used our research for their own ends.
Yes, I get you don’t just want to read about the problem but a potential solution.
The next post in this sequence will summarise the plan by those experienced organisers.
These organisers led one of the largest grassroots movements in recent history. That took years of coalition building, and so will building a new movement.
So they want to communicate the plan clearly, without inviting misinterpretations down the line. I myself rushed writing on new plans before (when I nuanced a press release put out by a time-pressed colleague at Stop AI). That backfired because I hadn’t addressed obvious concerns. This time, I drafted a summary that the organisers liked, but still want to refine. So they will run sessions with me and a facilitator, to map out stakeholders and their perspectives, before going public on plans.
Check back here in a month. We should have a summary ready by then.
Thanks for your takes! Some thoughts on your points:
Those are my takes. Curious if this raises new thoughts.
Yes, the huge ramp-up in investment by companies into deep learning infrastructure and products (since 2012), at billion-dollar losses, also reminds me of the dot-com bubble. With the exception that now it’s not only small investment firms and individual investors providing the money; big tech conglomerates are also diverting profits from their cash-cow businesses.
I can't speak with confidence about whether OpenAI is more like Amazon or other larger internet startups that failed. Right now though, OpenAI does not seem to have much of a moat.
Glad you spotted that! Those two quoted claims do contradict each other, as stated. I’m surprised I had not noticed that.
“but I'm not sure where that money goes.”
The Information had a useful table on OpenAI’s projected 2024 costs. Linking to a screenshot here.
But I'm not sure why the article says that "every single paying customer" only increases the company's burn rate given that they spend less money running the models than they get in revenue.
I’m not sure either why Ed Zitron wrote that. When I’m back on my laptop, I’ll look at older articles for any further reasoning.
Looking at the cost items in The Information’s table, revenue share with Microsoft ($700 million) and hosting ($400 million) definitely seem mostly variable with subscriptions. It’s harder to say for the sales & marketing ($300 million) and general administrative costs ($600 million).
Given that information, the revenue that OpenAI earns for itself would still be higher than just the cost of running the models and hosting (which we could call the “cost of running software”).
It’s hard to say, though, how much cost is added on the margin per additional normal-tier user. Partly, it depends on how much more they use OpenAI’s tools than free users do. But I guess you’d be more likely right than not that (if we exclude past training and research compute costs, and other fixed costs) the overall revenue per normal-tier user added would be higher than the accompanying costs.
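To make that margin reasoning concrete, here is a minimal sketch of the per-user arithmetic. Every number in it is an illustrative assumption, not a reported figure – The Information’s table gives cost totals, not per-user breakdowns:

```python
# Hypothetical unit economics for one paying subscriber, per month.
# All numbers below are assumptions for illustration only.
subscription_revenue = 20.00   # e.g. a $20/month tier
inference_and_hosting = 7.00   # assumed variable "cost of running software"
revenue_share = 0.20 * subscription_revenue  # assumed revenue-share rate

marginal_cost = inference_and_hosting + revenue_share
marginal_profit = subscription_revenue - marginal_cost

print(f"marginal profit per paying user: ${marginal_profit:.2f}")
```

Under these assumed numbers, each added paying user would reduce, not increase, the burn rate – unless their usage pushes the variable costs above what they pay in. That’s the crux: the conclusion flips entirely on whether the true per-user inference cost sits below or above the subscription price.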
Now the article claims that OpenAI spent $9 billion in total
Note also that the $9 billion total cost amount seems understated in three ways:
Update: back up to 70% chance.
Just spent two hours compiling different contributing factors. Now that I’ve weighed those factors up more comprehensively, I don’t expect to change my prediction by more than ten percentage points over the coming months. Though I’ll write here if I do.
My prediction: 70% chance that by August 2029 there will be a large reduction in investment in AI and a corresponding crash in AI company stocks, etc., and that both will last for at least three months.
For:
Against:
Update: back up to 60% chance.
IMO I overreacted before in updating down to 40% (and then undercompensated in updating to 80%, which I soon after thought should have been 70%).
The leader in terms of large-model revenue, OpenAI, has basically failed to build something worth calling GPT-5, and Microsoft is now developing more models in-house to compete with them. If OpenAI fails in its effort to combine its existing models into something new and special (likely), that’s a blow to the perception of the industry.
A recession might also be coming this year, or at least in the next four years, which I made a prediction about before.
This is a solid point that I forgot to take into account here.
What happens to the GPU clusters inside the data centers built out before the market crash?
If user demand slips and/or various companies stop training, that means that compute prices will slump. As a result, cheap compute will be available for remaining R&D teams, for the three years at least that the GPUs last.
I find that concerning. Because not only is compute cheap, but many of the researchers left using that compute will have reached an understanding that scaling transformer architectures on internet-available data has become a dead end. With investor and managerial pressure to release LLM-based products gone, researchers will explore their own curiosities. This is the time you’d expect the persistent researchers to invent and tinker with new architectures – ones that could end up being more compute- and data-efficient at encoding functionality.
~ ~ ~
I don’t want to skip over your main point. Is your argument that AI companies will be protected from a crash, since their core infrastructure is already built?
Or more precisely:
That sounds right, given that compute accounts for over half of their costs. Particularly if the companies secure another large VC round ahead of a crash, then they should be able to weather the storm. E.g. the $40 billion just committed to OpenAI (assuming that by the end of this year OpenAI exploits a legal loophole to become for-profit, that their main backer SoftBank can lend enough money, etc).
Just realised that your point seems similar to Sequoia Capital’s:
“declining prices for GPU computing is actually good for long-term innovation and good for startups. If my forecast comes to bear, it will cause harm primarily to investors. Founders and company builders will continue to build in AI—and they will be more likely to succeed, because they will benefit both from lower costs and from learnings accrued during this period of experimentation.”
~ ~ ~
A market crash is by itself not enough to deter these companies from continuing to integrate increasingly automated systems into society.
I think a coordinated movement is needed; one that exerts legitimate pressure on our failing institutions. The next post will be about that.