Remmelt

Research coordinator of Stop/Pause area at AI Safety Camp.

See explainer on why AGI could not be controlled enough to stay safe:
lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable

 

Sequences

Preparing for an AI Market Crash
Bias in Evaluating AGI X-Risks
Developments toward Uncontrollable AI
Why Not Try Build Safe AGI?

Comments


This is a solid point that I forgot to take into account here. 

What happens to the GPU clusters inside the data centers built out before the market crash? 

If user demand slips and/or various companies stop training, that means that compute prices will slump. As a result, cheap compute will be available for remaining R&D teams, for the three years at least that the GPUs last. 

I find that concerning. Because not only is compute cheap, but many of the researchers left using that compute will have reached an understanding that scaling transformer architectures on internet-available data has become a dead end. With investor and managerial pressure to release LLM-based products gone, researchers will explore their own curiosities. This is the time you’d expect the persistent researchers to invent and tinker with new architectures – ones that could end up being more compute- and data-efficient at encoding functionality. 

~ ~ ~

I don’t want to skip over your main point. Is your argument that AI companies will be protected from a crash, since their core infrastructure is already built? 

Or more precisely: 

  • that since data centers were built out before the crash, compute prices end up converging on mostly just the cost of the energy and operations needed to run the GPU clusters inside,
  • which in turn acts as a financial cushion for companies like OpenAI and Anthropic, for whom inference costs are now lower,
  • where those companies can quickly scale back expensive training and R&D, while offering their existing products to remaining users at lower cost,
  • as a result of which, those companies can continue to operate during the period that funding has dried up, waiting out the 'AI winter' until investors and consumers are willing to commit their money again.

That sounds right, given that compute accounts for over half of their costs. Particularly if the companies secure another large VC round ahead of a crash, then they should be able to weather the storm. E.g. the $40 billion just committed to OpenAI (assuming that by the end of this year OpenAI exploits a legal loophole to become for-profit, that their main backer SoftBank can lend enough money, etc). 

Just realised that your point seems similar to Sequoia Capital’s:
“declining prices for GPU computing is actually good for long-term innovation and good for startups. If my forecast comes to bear, it will cause harm primarily to investors. Founders and company builders will continue to build in AI—and they will be more likely to succeed, because they will benefit both from lower costs and from learnings accrued during this period of experimentation.”

~ ~ ~

A market crash is by itself not enough to deter these companies from continuing to integrate increasingly automated systems into society.

I think a coordinated movement is needed; one that exerts legitimate pressure on our failing institutions. The next post will be about that. 

Glad to read your thoughts!

Agreed on being friends with communities who are not happy about AI. 

I’m personally not a fan of working with OpenAI or Anthropic, given that they’ve defected on people here concerned about a default trajectory to mass extinction, and used our research for their own ends.

Yes, I get you don’t just want to read about the problem but a potential solution. 

The next post in this sequence will summarise the plan by those experienced organisers.

These organisers led one of the largest grassroots movements in recent history. That took years of coalition building, and so will building a new movement. 

So they want to communicate the plan clearly, without inviting misinterpretations down the line. I myself have rushed writing about new plans before (when I added nuance to a press release put out by a time-pressed colleague at Stop AI). That backfired because I hadn’t addressed obvious concerns. This time, I drafted a summary that the organisers liked, but still want to refine. So they will run sessions with me and a facilitator, to map out stakeholders and their perspectives, before going public on plans.

Check back here in a month. We should have a summary ready by then.

Thanks for your takes!  Some thoughts on your points:

  • Yes, OpenAI has useful infrastructure and brands. It's hard to imagine a scenario where they wouldn't just downsize and/or be acquired by e.g. Microsoft.
  • If OpenAI or Anthropic goes down like that, I'd be surprised if some other AI companies don't go down with them. This is an industry that very much relies on stories convincing people to buy into the promise of future returns, given that most companies are losing money on developing and releasing large models. When those stories fail to play out with an industry leader, the common awareness of that failure will cascade into people dropping their commitments throughout the industry.
  • AI companies may fail in part because people stop using their products. For example, if a US recession happens, paid users may switch to cheaper alternatives like DeepSeek's, or stop using the tools altogether. Also, ChatGPT started as a flashy product that relied on novelty and future promises to get people excited to use it. After a while, people get bored of a product that isn't changing much anymore, and is not actually delivering on OpenAI's proclamations of how AI will rapidly improve.
  • Sure, companies fund interesting research. At the same time, do you know other examples of $600 billion+ being invested yearly into interesting research without expectations of much profit?
  • Other communities I'm in touch with are already outraged about the AI thing. This includes creative professionals, tech privacy advocates, families targeted by deepfakes, tech-aware environmentalists, some Christians, and so forth. More broadly, there has been growing public frustration about tech oligarchs extracting wealth while taking over the government, about a 'rot economy' that pushes failing products, about algorithmic intermediaries creating a sense of disconnection, and about a lack of stable dignified jobs. 'AI' sits at the intersection of all of those problems, and has therefore become a salient symbol for communities to target. An AI market crash, alongside other correlated events, can bring to the surface and magnify their frustrations.


Those are my takes. Curious if this raises new thoughts.

Yes, the huge ramp-up in investment by companies into deep learning infrastructure & products (since 2012), at billion-dollar losses, also reminds me of the dot-com bubble. The exception is that now it's not only small investment firms and individual investors providing the money – big tech conglomerates are also diverting profits from their cash-cow businesses.

I can't speak with confidence about whether OpenAI is more like Amazon or other larger internet startups that failed. Right now though, OpenAI does not seem to have much of a moat.

Glad you spotted that! Those two quoted claims do contradict each other, as stated. I’m surprised I had not noticed that.

 

but I'm not sure where that money goes.

The Information had a useful table on OpenAI’s projected 2024 costs. Linking to a screenshot here.

 

But I'm not sure why the article says that "every single paying customer" only increases the company's burn rate given that they spend less money running the models than they get in revenue.

I’m not sure either why Ed Zitron wrote that. When I’m back on my laptop, I’ll look at older articles for any further reasoning. 

Looking at the cost items in The Information’s table, revenue share with Microsoft ($700 million) and hosting ($400 million) definitely seem mostly variable with subscriptions. It’s harder to say for the sales & marketing ($300 million) and general administrative costs ($600 million). 

Given that information, the revenue that OpenAI earns for itself would still be higher than just the cost of running the models and hosting (which we could call the “cost of running software”).

It’s hard to say, though, how much cost overall is added on the margin per normal-tier user. Partly, it depends on how much more they use OpenAI’s tools than free users. But I guess you’d be more likely right than not that (if we exclude past training and research compute costs, and other fixed costs) the overall revenue per normal-tier user added would be higher than the accompanying costs.
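To make the rough arithmetic above explicit, here is a minimal sketch using the figures cited in this thread (The Information's projected 2024 numbers: ~$3.7 billion revenue, $700 million revenue share to Microsoft, $400 million hosting). All values are estimates, and which cost items count as variable is an assumption:

```python
# Rough contribution-margin sketch; all figures are estimates from the thread.
revenue = 3.7e9          # projected revenue
revenue_share = 0.7e9    # paid to Microsoft; treated as varying with revenue
hosting = 0.4e9          # inference/hosting; treated as varying with usage

variable_costs = revenue_share + hosting
contribution = revenue - variable_costs  # left over for fixed costs (training, research, salaries)

print(f"Variable costs: ${variable_costs / 1e9:.1f}B")
print(f"Contribution toward fixed costs: ${contribution / 1e9:.1f}B")
```

On these numbers, each dollar of subscription revenue exceeds its directly attributable running costs, which is why adding paying users would not, by itself, increase the burn rate.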

 

Now the article claims that OpenAI spent $9 billion in total

Note also that the $9 billion total cost amount seems understated in three ways: 

  • the amortised research compute amount lags behind the recent higher compute costs for research.
  • the data costs (which I assume are not variable with user count) do not appear to price in possible compensation OpenAI will be ordered to pay for any past violations identified in ongoing lawsuits.
  • from an investor perspective, the $1.5 billion(?) worth in profit shares handed out are also a ‘cost’. 

Yes, I was also wondering what ordering it by jurisdiction contributed. I guess it's nice for some folks to have it be more visual, even if the visual aspects don't contribute much?

Update: back up to 70% chance.

Just spent two hours compiling different contributing factors. Now that I've weighed those factors up more comprehensively, I don't expect to change my prediction by more than ten percentage points over the coming months. Though I'll write here if I do.

My prediction: 70% chance that by August 2029 there will be a large reduction in investment in AI and a corresponding market crash in AI company stocks, etc, and that both will persist for at least three months.

 

For:

  • Large model labs losing money
    • OpenAI made a loss of ~$5 billion last year.
      • Takes most of the consumer and enterprise revenue, but that's still only $3.7 billion.
      • The GPT-4.5 model is the result of 18 months of R&D, but offers only a marginal improvement in output quality, while being even more compute-intensive.
      • If OpenAI publicly fails, as the supposed industry leader, this can undermine the investment narrative of AI as a rapidly improving and profitable technology, and trigger a market meltdown.
    • Commoditisation
      • Other models by Meta, etc, around as useful for consumers.
      • DeepSeek undercuts US-designed models with compute-efficient open-weights alternative.
    • Data center overinvestment
      • Microsoft cut at least 14% of planned data center expansion.
  • Subdued commercial investment interest.
    • Some investment firm analysts skeptical, and second-largest VC firm Sequoia Capital also made a case of lack of returns for the scale of investment ($600+ billion).
    • SoftBank is the main other backer of the Stargate data center expansion project, and needs to raise debt to cover ~$18 billion. OpenAI also needs to raise more investment funds next round to cover ~$18 billion, and it's a question whether there is enough interest.
  • Uncertainty around US government funding
    • Mismatch between US Defense interest and what large model labs are currently developing.
      • Model 'hallucinations' get in the way of deployment of LLMs on the battlefield, given reliability requirements.
        • On the other hand, this hasn't prevented partnerships and attempts to deploy models.
      • Interest in data analysis of integrated data streams (e.g. by Palantir) and in self-navigating drone systems (e.g. by Anduril).
        • The Russo-Ukrainian war and Gaza invasion have been testbeds, but we're seeing relatively rudimentary and straightforward AI models used there (Ukraine's drones are still mostly remotely operated by humans, and Israel used an LLM for shoddy target identification).
    • No clear sign that US administration is planning to subsidise large model development.
      • Stargate deal announced by Trump did not involve government chipping in money.
  • Likelihood of a (largish) US economic recession by 2029.
    • Debt/misinvestment overload after long period of low interest.
    • Early signs, but nothing definitive:
      • Inflation
      • Reduced consumer demand
      • Business uncertainty amidst changing tariffs.
    • Generative AI subscriptions seem to be a luxury expense for most people rather than essential for completing work (particularly because ~free alternatives exist to switch to and for most users those aren't significantly different in use). Enterprises and consumers could cut heavily on their subscriptions once facing a recession.
  • Early signs of large progressive organising front, hindering tech-conservative allyships.
    • #TeslaTakedown.
    • Various conversations by organisers with a renewed motivation to be strategic.
      • Last few years' resurgence of 'organising for power' union efforts, overturning top-down mobilising and advocacy approaches.
    • Increasing awareness of fuck-ups in the efficiency drives by Trump-Musk administration coalition.

Against:

  • Current US administration's strong public stance on maintaining America's edge around AI.
    • Public announcements.
      • JD Vance's speech at the renamed AI Action Summit.
    • Clearing out regulation
      • Scrapped Biden AI executive order.
      • Copyright
        • Talks, as in the UK and EU, about effectively scrapping copyright protections for AI training materials (with opt-out laws, or by scrapping opt-outs too).
    • Stopping enforcement of regulation
      • Removal of Lina Khan as head of the FTC, which was investigating AI companies.
      • Musk internal dismantling of departments engaged in oversight.
    • Internal deployment of AI model for (questionable) uses.
      • US IRS announcement.
      • DOGE attempts at using AI to automate evaluation and work by bureaucrats.
  • Accelerationist lobby's influence has been increasing.
    • Musk, Zuckerberg, Andreessen, other network-state folks, etc, have been very strategic in
      • funding and advising politicians,
      • establishing coalitions with people on the right (incl. Christian conservatives, and channeling populist backlashes against globalism and militant wokeness),
      • establishing social media platforms for amplifying their views (X, network of popular independent podcasts like Joe Rogan show).
    • Simultaneous gutting of traditional media.
  • Faltering anti-AI lawsuits
    • Signs of corruption of plaintiff lawyers,
      • e.g. in case against Meta, where crucial arguments were not made, and judge considered not allowing class representation.
  • Defense contracts
    • US military has budget in the trillions of dollars, and could in principle keep the US AI corporations propped up.
      • Possibility that something changes geopolitically (war threat?) resulting in large funds injection.
      • My guess: the Pentagon is already treating AGI labs such as OpenAI and Anthropic as a strategic asset (to control, and possibly prop up if their existence is threatened).
    • Currently seeing cross-company partnerships.
      • OpenAI with Anduril, Anthropic with Palantir.
  • National agenda pushes to compete in various countries.
    • Incl. China, UK, EU.
    • Recent increased promotion/justification in and around US political circles of the need to compete with China.
  • New capability development
    • Given the scale of AI research happening now, it is quite possible that some teams will develop a new cross-domain-optimising model architecture that's data- and compute-efficient.
    • As researchers come to acknowledge the failure of the 'scaling laws'-focussed approach using existing transformer architectures (given limited online-available data, and reduced marginal returns on compute), they will naturally look for alternative architecture designs to work on.

Update: back up to 60% chance. 

I overreacted before, IMO, in updating down to 40% (and undercompensated when updating down to 80%, which I soon after thought should have been 70%).

The leader in terms of large-model revenue, OpenAI, has basically failed to build something worth calling GPT-5, and Microsoft is now developing more models in-house to compete with it. If OpenAI fails in its effort to combine its existing models into something new and special (likely), that's a blow to the perception of the industry.

A recession might also be coming this year, or at least in the next four years, which I made a prediction about before.
