Anthropic is raising even more funds and the pitch deck seems scary. A choice quote from the article:

“These models could begin to automate large portions of the economy,” the pitch deck reads. “We believe that companies that train the best 2025/26 models will be too far ahead for anyone to catch up in subsequent cycles.”

This frontier model could be used to build virtual assistants that can answer emails, perform research and generate art, books and more, some of which we have already gotten a taste of with the likes of GPT-4 and other large language models.

Anthropic estimates its frontier model will require on the order of 10^25 FLOPs, or floating point operations — several orders of magnitude larger than even the biggest models today. Of course, how this translates to computation time depends on the speed and scale of the system doing the computation; Anthropic implies (in the deck) it relies on clusters with “tens of thousands of GPUs.”
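
As a rough sanity check of how a 10^25 FLOP training run translates into wall-clock time on a cluster of that size, here is a minimal back-of-envelope sketch. The hardware numbers (cluster size, per-GPU throughput, utilization) are illustrative assumptions, not figures from the deck.

```python
# Rough back-of-envelope: how long 1e25 FLOP of training might take on a
# cluster with "tens of thousands of GPUs". All hardware numbers here are
# illustrative assumptions, not figures from the pitch deck.

TOTAL_FLOP = 1e25        # claimed training compute
NUM_GPUS = 25_000        # assumed cluster size ("tens of thousands of GPUs")
FLOP_PER_GPU_S = 1e15    # assumed ~1 PFLOP/s per accelerator
UTILIZATION = 0.4        # assumed fraction of peak throughput actually achieved

cluster_flop_per_s = NUM_GPUS * FLOP_PER_GPU_S * UTILIZATION
seconds = TOTAL_FLOP / cluster_flop_per_s
print(f"~{seconds / 86_400:.0f} days of training")  # ~12 days under these assumptions
```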

22 comments

terrible news, sucks to see even more people speedrunning the extinction of everything of value.

reminder that if you destroy everything first you don't gain anything. everything is just destroyed earlier.

Anthropic estimates its frontier model will require on the order of 10^25 FLOPs, or floating point operations — several orders of magnitude larger than even the biggest models today.

This doesn't make sense. GPT-4 used around 2*10^25 FLOP, someone estimated.

dsj:

Got a source for this estimate?

Epoch says 2.2e25. Skimming that page, it seems like a pretty unreliable estimate. They say their 90% confidence interval is about 1e25 to 5e25.

dsj:

My guess is “today” was supposed to refer to some date when they were doing the investigation prior to the release of GPT-4, not the date the article was published.

Minerva (from June 2022) used 3e24; there's no way "several orders of magnitude larger" was right when the article was being written. I think the author just made a mistake.
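
Putting the figures cited in this subthread side by side makes the "several orders of magnitude" wording easy to check. A quick sketch; the numbers are just the rough estimates quoted above (Epoch's GPT-4 figure and Minerva's), nothing more precise.

```python
import math

# Training-compute figures quoted in this thread (FLOP); all are rough estimates.
deck_claim = 1e25    # "on the order of 10^25 FLOPs" from the pitch deck
gpt4_epoch = 2.2e25  # Epoch's central estimate for GPT-4 (cited above)
minerva = 3e24       # Minerva, June 2022 (cited above)

for name, flop in [("GPT-4 (Epoch)", gpt4_epoch), ("Minerva", minerva)]:
    gap_in_ooms = math.log10(deck_claim / flop)
    print(f"{name}: log10(1e25 / estimate) = {gap_in_ooms:+.2f}")
# GPT-4 (Epoch): -0.34 -> the 1e25 figure is actually *smaller* than this estimate
# Minerva:       +0.52 -> only about half an order of magnitude larger
```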

just looks like keeping pace in the arms jog to me. not good news, but not really much of an update either, which is the minimum I want to hear.

Reading this reminds me of the red flags that some people (e.g. Soares) saw when interacting with SBF and, once shit hit the fan, ruminated over not having acted on.

Not directly relevant, given the differences in the metrics being discussed, but it does recall seeing, many years ago, 10^25 flops given as an estimate for the human brain.

For training a brain-like model? Are you talking about bio anchors?

it was an AI Impacts estimate from probably 2015-16, iirc

Please don't call it an arms race, or it might become one. (Let's not spread that meme to onlookers.) This is just about the wording, not the content.

it looks to me like it's behaving like an arms jog: people are keeping up but moving at a finite smooth rate. correctly labeling it does help a little, but mostly it's the actual behavior that matters.

[anonymous]:

Would the cold war not be a cold war if it wasn't called that? Your suggestion is useless. The dynamics of the game make it an arms race.

The way we communicate changes how people think. So if they currently just think of AI as normal competition but then realize it's worth racing to powerful systems, we may give them the intention to race. And worse, we might get additional actors to join in, such as the DOD, which would accelerate it even further.

you've really caught a nasty case of being borged by an egregore. you might want to consider tuning yourself to be less adversarial about it - I don't think you're wrong, but there's ape-specific stuff going on; to me, someone who disagrees on the object level anyway, it seems like you're reducing the rate of useful communication by structuring your responses to have mutual information with your snark subnet. though of course I'm maybe doing it back just a little.

That is only about 300 H100 GPU-years: 10^15 FLOP/s per GPU × 10^9 s (≈ 30 years) × 10 = 10^25 FLOP.
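
Spelling that arithmetic out (assuming, as the comment does, roughly 10^15 FLOP/s per H100):

```python
# The parent comment's arithmetic, spelled out. Assumes ~1e15 FLOP/s per H100,
# as the comment does; real throughput depends on precision and utilization.

total_flop = 1e25
flop_per_gpu_s = 1e15                       # assumed per-GPU throughput
gpu_seconds = total_flop / flop_per_gpu_s   # 1e10 GPU-seconds
gpu_years = gpu_seconds / 3.15e7            # ~3.15e7 seconds per year
print(f"~{gpu_years:.0f} H100 GPU-years")   # ~317, i.e. roughly 300
```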

It is easy to understand why such news could increase P(doom) even more for people with a high P(doom) prior.

But I am curious about the following question: what if an oracle told us that P(doom) was 25% before the announcement (suppose it was not clear to the oracle what strategy Anthropic would choose; it was inherently unpredictable due to quantum effects or whatever)?

Would it still increase P(doom)?

What if the oracle said P(doom) is 5%?

I am not trying to make any specific point, just interested in what people think.

Anthropic is, ostensibly, an organization focused on safe and controllable AI. This arms race is concerning. We've already seen this route taken once with OpenAI. Seems like the easy route to take. This press release sure sounds like capabilities, not alignment/safety.

Over the past month, reinforced every time I read something like this, I've firmly come to believe that political containment is a more realistic strategy, with a much greater chance of success, than focusing purely on alignment. Even comparing this past month to December 2022, things have accelerated dramatically. It only took a few weeks between the release of GPT-4 and the development of AutoGPT, which is crudely agentic. Capabilities is starting with a pool of people OOMs larger than alignment's, and as money pours into the field at ever-growing rates (toward capabilities, of course, because that's where the money is), it's going to be really hard for alignment folks (whom I deeply respect) to keep pace. I believe this year is the crucial moment for persuading the general populace that AI needs to be contained, and for doing so effectively, because if we use poor strategies and backfire, we may have missed our best chance.