Remmelt

Research coordinator of the Stop/Pause area at AI Safety Camp.

See explainer on why AGI could not be controlled enough to stay safe:
lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable


Sequences

Preparing for an AI Market Crash
Bias in Evaluating AGI X-Risks
Developments toward Uncontrollable AI
Why Not Try Build Safe AGI?

Comments


It's about world size, not computation, and has a startling effect that probably won't occur again with future chips


Thanks, I have to say I’m a total amateur when it comes to GPU performance, so I will take the time to read your linked comment to understand it better.

Thanks, I might be underestimating the impact of new Blackwell chips with improved computation. 

I’m skeptical whether offering “chain-of-thought” bots to more customers will make a significant difference. But I might be wrong, especially if new model architectures come out as well.

And if corporations throw enough cheap compute behind it, plus widespread personal data collection, they can reach commercially very useful model functionality. My hope is that there will be a market crash before that can happen, and that we can enable other concerned communities to restrict the development and release of dangerously unscoped models.


But even then, OpenAI might get to ~$25bn annualized revenue that won't be going away

What is this revenue estimate assuming?

This is a neat and specific explanation of how I approached it. I did try to be transparent about it, though.

Remmelt

If your bet is that something special about the economics of AI will cause it to crash, maybe your bet should be changed to this?

What's relevant for me is that there is an AI market crash, such that AI corporations are weakened and we in turn have more leeway to restrict their reckless activities. Practically, I don't mind if that's actually the result of a wider failing economy; I mentioned a US recession as a causal factor here.

Having said that, it would be easier to restrict AI corp activities when there is not a general market crash at the same time, since the latter would make it harder to fund organisers and for working citizens to mobilise.

PS: I don't exactly have $25k to bet, and I've said elsewhere I do believe there's a big chance that AI spending will decrease.

Understood!  And I appreciate you discussing thoughts with me here.

Another thought is that changes in the amount of investment may swing further than changes in the value...?

Interesting point! That feels right, but I lack experience/clarity about how investments work here.

Remmelt

That's a good distinction.

I want to take you up on measuring actual inflows of capital into the large-AI-model development companies, rather than e.g. the prices of stocks in companies leading on development, where declines may not reflect much of an actual reduction in investment and spending on AI products.

Consumers and enterprises cutting back on their subscriptions and private investors cutting back on their investment offers and/or cancelling previous offers – those seem reliable indicators of an actual crash.

It's plausible that a general market crash feeds into, and is reflective of, worsening economics at the AI companies. So it seems hard to decouple causation there. And I'd still call it an AI market crash even if investments and valuations are going down to a similar extent in other industries. So I would not try to control for other market declines happening around the same time, but your suggested indicators make sense!

Remmelt

For sure!  Proceeds go to organisers who can act to legitimately restrict the weakened AI companies.

(Note that with a crash I don’t just mean some large reduction in the stock prices of tech companies that have been ‘leading’ on AI. I mean a broad-based reduction in the investments and/or customer spending going into the AI companies.)

Remmelt

Maybe I'm banking too much on some people in the AI Safety community continuing to think that AI "progress" will follow a rapid upward curve :)

Elsewhere I posted a guess of a 40% chance of an AI market crash this year, though I did not have precise crash criteria in mind there, and would lower the percentage once it's judged by a few measures rather than by my sense of "that looks like a crash".


Remmelt

Thanks, I hadn't seen that graph yet! I had only searched Manifold.

The odds of 1:7 imply a 12.5% chance of a crash. That's far outside the consensus on that graph, though I also notice that their criteria for a "bust or winter" are much stricter than where I'd set the threshold for a crash.

That makes me wonder whether I should have selected a lower odds ratio (for a higher return on the upside). Regardless, this month I'm prepared to take this bet.
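The odds-to-probability conversion above can be sketched with the standard odds formula (the 1:15 alternative ratio is a hypothetical example, not from the thread):

```python
def implied_probability(odds_for: int, odds_against: int) -> float:
    """Convert betting odds (for : against) into the implied probability."""
    return odds_for / (odds_for + odds_against)

# Odds of 1:7 imply a 12.5% chance of a crash:
print(implied_probability(1, 7))   # → 0.125

# A lower odds ratio means a higher return on the upside, but it
# corresponds to assigning the event a smaller probability:
print(implied_probability(1, 15))  # → 0.0625
```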


but calling this "near free money" when you have to put up 25k to get it...

Fair enough – you'd have to set aside this amount in your savings. You could still earn some interest from the bank, but that's not much.
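As a rough illustration of how little that forgone interest amounts to (the 4% savings rate here is an assumed figure, not from the thread):

```python
# Opportunity cost of setting aside the $25k stake for one year,
# at an assumed (hypothetical) 4% annual savings rate:
stake = 25_000
rate = 0.04  # assumption for illustration
interest_forgone = stake * rate
print(interest_forgone)  # → 1000.0
```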

Remmelt

This is a solid point that I forgot to take into account here. 

What happens to the GPU clusters inside the data centers built out before the market crash?

If user demand slips and/or various companies stop training, compute prices will slump. As a result, cheap compute will be available for the remaining R&D teams, for at least the three years that the GPUs last.

I find that concerning, because not only is compute cheap, but many of the researchers still using that compute will have reached an understanding that scaling transformer architectures on internet-available data has become a dead end. With investor and managerial pressure to release LLM-based products gone, researchers will explore their own curiosities. This is the time you’d expect the persistent researchers to invent and tinker with new architectures that could end up being more compute- and data-efficient at encoding functionality.

~ ~ ~

I don’t want to skip over your main point. Is your argument that AI companies will be protected from a crash, since their core infrastructure is already built?

Or more precisely: 

  • that since data centers were built out before the crash, compute prices end up converging on mostly just the cost of the energy and operations needed to run the GPU clusters inside;
  • which in turn acts as a financial cushion for companies like OpenAI and Anthropic, for whom inference costs are now lower;
  • where those companies can quickly scale back expensive training and R&D, while offering their existing products to remaining users at lower cost;
  • as a result of which, those companies can continue to operate during the period that funding has dried up, waiting out the 'AI winter' until investors and consumers are willing to commit their money again.

That sounds right, given that compute accounts for over half of their costs. Particularly if the companies secure another large VC round ahead of a crash, they should be able to weather the storm. E.g. the $40 billion just committed to OpenAI (assuming that by the end of this year OpenAI exploits a legal loophole to become for-profit, that their main backer SoftBank can lend enough money, etc.).

Just realised that your point seems similar to Sequoia Capital’s:
“declining prices for GPU computing is actually good for long-term innovation and good for startups. If my forecast comes to bear, it will cause harm primarily to investors. Founders and company builders will continue to build in AI—and they will be more likely to succeed, because they will benefit both from lower costs and from learnings accrued during this period of experimentation.”

~ ~ ~

A market crash is by itself not enough to deter these companies from continuing to integrate increasingly automated systems into society.

I think a coordinated movement is needed; one that exerts legitimate pressure on our failing institutions. The next post will be about that. 
