Anthropic estimates its frontier model will require on the order of 10^25 FLOPs, or floating point operations — several orders of magnitude larger than even the biggest models today.
This doesn't make sense. GPT-4 is already estimated to have used around 2×10^25 FLOP.
My guess is “today” was supposed to refer to some date when they were doing the investigation prior to the release of GPT-4, not the date the article was published.
Minerva (from June 2022) used about 3×10^24 FLOP; there's no way "several orders of magnitude larger" was right when the article was being written. I think the author just made a mistake.
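To spell out the arithmetic with the rough figures above (both of which are themselves estimates): 10^25 / (3×10^24) ≈ 3, so the quoted figure is only about 3x Minerva, well under one order of magnitude, and it's actually smaller than the ~2×10^25 estimate for GPT-4. Neither comparison gets anywhere near "several orders of magnitude larger."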
just looks like keeping pace in the arms jog to me. not good news, but not really much of an update either, which is about the minimum I'd want to be able to say.
Not genuinely relevant due to the difference in metrics, but it reminds me of seeing, many years ago, 10^25 FLOPS given as an estimate for the human brain.
Please don't call it an arms race, or it might become one. (Let's not spread that meme to onlookers.) This is just about the wording, not the content.
to me it looks like it's behaving like an arms jog: people are keeping up, but moving at a finite, smooth rate. correctly labeling it does help a little, but mostly it's the actual behavior that matters.
Would the Cold War not have been a cold war if it hadn't been called that? Your suggestion is useless. The dynamics of the game make it an arms race.
The way we communicate changes how people think. If people currently see AI as normal competition, but then come to believe it's worth racing to powerful systems, we may end up giving them the intention to race. Worse, additional actors such as the DOD might join in, which would accelerate things even further.
you've really caught a nasty case of being borged by an egregore. you might want to consider tuning yourself to be less adversarial about it - I don't think you're wrong, but there's ape-specific stuff in play here, and to me, someone who disagrees on the object level anyway, it seems like you're reducing the rate of useful communication by structuring your responses to have mutual information with your snark subnet. though of course I'm maybe doing it back just a little.
It is easy to understand why such news could increase P(doom) even further for people with a high P(doom) prior.
But I am curious about the following question: what if an oracle had told us that P(doom) was 25% before the announcement (suppose the oracle couldn't tell what strategy Anthropic would choose; it was inherently unpredictable due to quantum effects or whatever)?
Would it still increase P(doom)?
What if the oracle said P(doom) is 5%?
I am not trying to make any specific point, just interested in what people think.
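(One way to make the question precise, just as a sketch: if the oracle's 25% was the probability before Anthropic's choice was determined, then after observing the announcement A we would have P(doom | A) = P(A | doom) × 0.25 / P(A), which moves away from 25% whenever the announcement is more likely in doom-worlds than in non-doom-worlds. The same algebra applies to a 5% prior; only the size of the shift changes.)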
Anthropic is, ostensibly, an organization focused on safe and controllable AI. This arms race is concerning. We've already seen this route taken once with OpenAI; it seems like the easy route to take. This press release sure sounds like capabilities, not alignment/safety.
Over the past month, reinforced a bit more every time I read something like this, I've come to firmly believe that political containment is a more realistic strategy, with a much greater chance of success, than focusing purely on alignment. Even comparing this past month to December 2022, things are accelerating dramatically: it took only a few weeks from the release of GPT-4 to the development of AutoGPT, which is crudely agentic. Capabilities research starts with a pool of people orders of magnitude larger than alignment's, and as money pours into the field at ever-growing rates (toward capabilities, of course, because that's where the money is), it's going to be really hard for alignment folks (whom I deeply respect) to keep pace. I believe this year is the crucial moment for persuading the general populace that AI needs to be contained, and for doing so effectively, because if we use poor strategies that backfire, we may have missed our best chance.
Anthropic is raising even more funds and the pitch deck seems scary. A choice quote from the article: