Damin Niohe
Karma: 16130

Posts (sorted by new)

1 · Damin Niohe's Shortform · 4mo · 1

Wikitag Contributions

No wikitag contributions to display.

Comments (sorted by newest)

Julian Bradshaw's Shortform
Damin Niohe · 3mo · 10

I really wish someone would try out o3/Gemini with a weaker harness (say, equal to Claude's); that's where it would be more interesting, and it would also make cross-model comparison easier.

Meditations on Doge
Damin Niohe · 4mo · 143

I just want to note another data point about reforming institutions: postwar Iraq. De-Baathification was an explicit policy of removing and replacing members of the government associated with the Saddam-affiliated Ba'ath Party, and it's generally considered a failure, having led to a lot of sectarian violence, contributed to the rise of ISIS, and generally left an ineffective government afterwards.

It's a somewhat different situation, since that was more of an ideological project, but I think it's notable and relevant.

Damin Niohe's Shortform
Damin Niohe · 4mo · 40

Meta is delaying their Behemoth model launch because of disappointing evals. 

This is another major lab (OpenAI and Anthropic have also experienced this) seeing disappointing results from trying to scale its models into the next generation via raw parameter count, which suggests to me that there really is some sort of soft or hard wall at this size. It's good news for people favoring a slowdown or pause, though of course there is now RL to pursue. I'm genuinely curious what's going on, though; maybe the issue is simply getting enough high-quality tokens, with synthetic data being too hard to get, or it could be a qualitative shift, something like a reverse of the original jump that came with LLMs.

I definitely think this should update the priors of RSI folks though, because if these sorts of barriers keep cropping up along different avenues of scaling, I would expect linear increases in intelligence rather than exponential ones.

This is also somewhat confounded by the Meta-specific issues that now seem very hard to ignore (i.e. bad execution), so maybe this particular instance shouldn't update you too much, but it is still worth noting.
