Meta is delaying its Behemoth model launch because of disappointing evals.
This makes another major lab (OpenAI and Anthropic have also experienced this) to see disappointing results when trying to scale a model via raw parameter count into the next generation, which suggests to me that there really is some sort of soft or hard wall at this size. It's good news for people favoring a slowdown or pause, though of course there is now RL to pursue. I'm genuinely curious what's going on, though: maybe the issue is simply getting enough high-quality tokens, with synthetic data too hard to produce at the needed quality, or it could be a qualitative shift, a reverse of the original phase change that gave us LLMs.
I do think this should update the priors of RSI folks, though: if barriers like this keep cropping up along different avenues of scaling, I would expect linear increases in intelligence rather than exponential ones.
This is also somewhat confounded by the Meta-specific issues that now seem very hard to ignore (i.e. bad execution), so maybe this particular instance shouldn't update you too much, but it is still of note.