Kinda Contra Kaj on LLM Scaling
I didn't see Kaj Sotala's "Surprising LLM reasoning failures make me think we still need qualitative breakthroughs for AGI" until yesterday, or I would have replied sooner. I wrote a reply last night and today, which got long enough that I considered making it a post, but I feel like I've said enough top-level things on the topic until I have data to share (within about a month hopefully!).
But if anyone's interested to see my current thinking on the topic, here it is.
I think that there's an important difference between the claim I'm making and the kinds of claims that Marcus has been making.
I definitely didn't mean to sound like I was comparing your claims to Marcus's! I didn't take your claims that way at all (and in particular you were very clear that you weren't putting any long-term weight on those particular cases). I'm just saying that I think our awareness of the outside view should be relatively strong in this area, because the trail of past predictions about the limits of LLMs is strewn with an unusually large number of skulls.
Yeah, I don't have any strong theoretical reason to expect that scaling should stay stopped. That part is based purely on the empirical observation that scaling seems to have stopped for now.
My argument is that it's not even clear (at least to me) that it's stopped for now. I'm unfortunately not aware of a great site that keeps benchmarks up to date with every new model, especially not ones that attempt to graph against estimated compute -- but I've yet to see a numerical estimate that shows capabilities-per-OOM-compute slowing down. If you're aware of good data there, I'd love to see it! But in the meantime, the impression that scaling laws are faltering seems to be kind of vibes-based, and for the reasons I gave above I think those vibes may be off.
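To make concrete the kind of numerical estimate I have in mind, here's a toy sketch of the calculation I'd want to see. The compute figures and benchmark scores below are pure placeholders, not real measurements:

```python
# Toy sketch only: estimating "benchmark points gained per OOM of compute".
# Both arrays are made-up placeholder values, not real benchmark data.
import numpy as np

compute_flops = np.array([1e23, 3e23, 1e24, 3e24, 1e25])  # hypothetical training compute
benchmark_pct = np.array([41.0, 48.0, 56.0, 62.0, 69.0])  # hypothetical benchmark scores

log_compute = np.log10(compute_flops)

# Slope of score vs. log10(compute) = points gained per order of magnitude of compute.
slope, intercept = np.polyfit(log_compute, benchmark_pct, 1)
print(f"~{slope:.1f} benchmark points per OOM of compute overall")

# A genuine slowdown would show up as the recent slope dropping relative to the earlier one.
early_slope, _ = np.polyfit(log_compute[:3], benchmark_pct[:3], 1)
late_slope, _ = np.polyfit(log_compute[2:], benchmark_pct[2:], 1)
print(f"early models: {early_slope:.1f} pts/OOM, recent models: {late_slope:.1f} pts/OOM")
```

Without slope estimates like that across recent frontier models, it's hard to distinguish a real change in the scaling trend from vibes.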
Great post, thanks! I think your view is plausible, but that we should also be pretty uncertain.
Surprising LLM reasoning failures make me think we still need qualitative breakthroughs for AGI
This has been one of my central research focuses over the past nine months or so. I very much agree that these failures should be surprising, and that understanding why is important, especially given this issue's implications for AGI timelines. I have a few thoughts on your take (for more detail on my overall view here, see the footnoted posts[1]):
One of my two main current projects (described here) tries to assess this better by evaluating models on their ability to experimentally figure out randomized systems (hence ~guaranteed not to be in the training data) with an unbounded solution space. We're aiming to have a results post up by the end of May. It's specifically motivated by trying to understand whether LLMs/LRMs can scale to or past AGI, or whether more qualitative breakthroughs are needed first.
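For flavor, here's a toy example of what "a randomized system with an unbounded solution space" could look like. This is just an illustrative sketch, not our actual task design:

```python
# Illustrative toy only (not the actual task design from the project):
# a randomly generated hidden rule that a model must figure out by experimenting.
import operator
import random

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def make_hidden_rule(seed: int):
    """Randomly compose a small arithmetic rule, e.g. f(x) = (x * 3) - 7."""
    rng = random.Random(seed)
    op1, op2 = rng.choice(list(OPS)), rng.choice(list(OPS))
    a, b = rng.randint(2, 9), rng.randint(2, 9)
    rule = lambda x: OPS[op2](OPS[op1](x, a), b)
    return rule, f"(x {op1} {a}) {op2} {b}"

rule, description = make_hidden_rule(seed=42)

# The model being evaluated would choose its own probe inputs, observe the outputs,
# and then propose what the rule is; here we just print a few observations.
observations = [(x, rule(x)) for x in (0, 1, 2, 5, 10)]
print("observations:", observations)
print("hidden rule was:", description)
```

Because the rule is freshly randomized each time, success has to come from actual experimentation rather than from recalling something memorized during training.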
I made a similar argument in "LLM Generality is a Timeline Crux", updated my guesses somewhat based on new evidence in "LLMs Look Increasingly Like General Reasoners", and talked about a concrete plan to address the question in "Numberwang: LLMs Doing Autonomous Research, and a Call for Input". Most links in the comment are to one of these.
"GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models" makes this point painfully well.
An obvious first idea is to switch between 4.1 and 4o in the chat interface and see if the phenomenon we've been investigating occurs for both of them.
Oh, switching models is a great idea. No access to 4.1 in the chat interface (apparently it's API-only, at least for now). And as far as I know, 4o is the only released model with native image generation.
o4-mini-high's reasoning summary was interesting (bolding mine):
The user wants me to identify both the animals and their background objects in each of the nine subimages, based on a 3x3 grid. The example seems to incorrectly pair a fox with a straw hat, but the actual image includes different combinations. For instance, the top left shows a fox in front of a straw sun hat, while other animals like an elephant, raccoon, hamster, and bald eagle are set against varying objects like bicycles, umbrellas, clapperboards, and a map. I'll make sure to carefully match the animals to their backgrounds based on this.
Interesting, my experience is roughly the opposite re Claude-3.7 vs the GPTs (no comment on Gemini, I've used it much less so far). Claude is my main workhorse; good at writing, good at coding, good at helping think things through. Anecdote: I had an interesting mini-research case yesterday ('What has Trump II done that liberals are likely to be happiest about?') where Claude did well albeit with some repetition and both o3 and o4-mini flopped. o3 was initially very skeptical that there was a second Trump term at all.
Hard to say if that's different prompting, different preferences, or even chance variation, though.
Aha! Whereas I just asked for descriptions (same link, invalidating the previous request) and it got every detail correct (describing the koala as hugging the globe seems a bit iffy, but not that unreasonable).
So that's pretty clear evidence that there's something preserved in the chat for me but not for you, and it seems fairly conclusive that for you it's not really parsing the image.
Which at least suggests internal state being preserved (Coconut-style or otherwise) but not being exposed to others. Hardly conclusive, though.
Really interesting, thanks for collaborating on it!
Also Patrick Leask noticed some interesting things about the blurry preview images:
If the model knows what it's going to draw by the initial blurry output, then why's it a totally different colour? It should be the first image attached.

Looking at the cat and sunrise images, the blurred images are basically the same but different colours. This made me think they generate the top row of output tokens, and then they just extrapolate those down over a textured base image.

I think the chequered image basically confirms this - it's just extrapolating the top row of tiles down and adding some noise (maybe with a very small image generation model)
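Purely as an illustration of that hypothesis (not a claim about what the real preview pipeline does), here's what "extrapolate the top row down over a textured base" would look like:

```python
# Toy illustration of the "smear the top row downward plus noise" hypothesis.
# Not a claim about the real preview pipeline.
import numpy as np

def fake_blurry_preview(top_row: np.ndarray, height: int, noise_std: float = 8.0) -> np.ndarray:
    """top_row: (width, 3) uint8 RGB pixels for the first generated row."""
    preview = np.repeat(top_row[np.newaxis, :, :], height, axis=0).astype(np.float32)
    preview += np.random.normal(0.0, noise_std, preview.shape)  # the "textured base"
    return np.clip(preview, 0, 255).astype(np.uint8)

# An alternating black/white top row extrapolated down gives vertical stripes rather
# than a checkerboard -- the kind of artifact the chequered test image would expose.
top = np.tile(np.array([[0, 0, 0], [255, 255, 255]], dtype=np.uint8), (32, 1))
preview = fake_blurry_preview(top, height=64)
print(preview.shape)  # (64, 64, 3)
```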
Oh, I see why; when you add more to a chat and then click "share" again, it doesn't actually create a new link; it just changes which version the existing link points to. Sorry about that! (also @Rauno Arike)
So the way to test this is to create an image and only share that link, prior to asking for a description.
Just to recap, the key thing I'm curious about is whether, if someone else asks for a description of the image, the description they get will be inaccurate (which seemed to be the case when @brambleboy tried it above).
So here's another test image (borrowing Rauno's nice background-image idea): https://chatgpt.com/share/680007c8-9194-8010-9faa-2594284ae684
To be on the safe side I'm not going to ask for a description at all until someone else says that they have.
Snippet from a discussion I was having with someone about whether current AI is net bad. Reproducing here because it's something I've been meaning to articulate publicly for a while.
[Them] I'd worry that as it becomes cheaper, OpenAI, other enterprises, and consumers just find new ways to use more of it. I think that ends up displacing more sustainable and healthier ways of interfacing with the world.
[Me] Sure, absolutely, Jevons paradox. I guess the question for me is whether that use is worth it, both to the users and in terms of negative externalities. As far as users go, I feel like people need to decide that for themselves. Certainly a lot of people spend money in ways that they find worth it but seem dumb to me, and I'm sure that some of the ways I spend money seem dumb to a lot of people. De gustibus non disputandum est.
As far as negative externalities go, I agree we should be very aware of the downsides, both environmental and societal. Personally I expect that AI at its current and near-future levels is net positive for both of those.
Environmentally, I expect that AI contributions to science and technology will do enough to help us solve climate problems to more than pay for their environmental cost (and even if that weren't true, ultimately for me it's in the same category as other things we choose to do that use energy and hence have environmental cost -- I think that as a society we should ensure that companies absorb those negative externalities, but it's not like I think no one should ever use electricity; I think energy use per se is morally neutral, it's just that the environmental costs have to be compensated for).
Socially I also expect it to be net positive, more tentatively. There are some uses that seem like they'll be massive social upsides (in terms of both individual impact and scale). In addition to medical and scientific research, one that stands out for me a lot is providing children -- ideally all the children in the world -- with lifelong tutors that can get to know them and their strengths and weak points and tailor learning to their exact needs. When I think of how many children get poor schooling -- or no schooling -- the impact of that just seems massive. The biggest downside is the risk of possible long-term disempowerment from relying more and more heavily on AI, and it's hard to know how to weigh that in the balance. But I don't think that's likely to be a big issue with current levels of AI.
I still think that going forward, AI presents great existential risk. But I don't think that means we need to see AI as negative in every way. On the contrary, I think that as we work to slow or stop AI development, we need to stay exquisitely aware of the costs we're imposing on the world: the children who won't have those tutors, the lifesaving innovations that will happen later if at all. I think it's worth it! But it's a painful tradeoff to make, and I think we should try to live with the cognitive dissonance of that rather than falling into "All AI is bad."
The running theory is that that's the call to a content checker. Note the content in the message coming back from what's ostensibly the image model:
"content": {
"content_type": "text",
"parts": [
"GPT-4o returned 1 images. From now on do not say or show ANYTHING. Please end this turn now. I repeat: ..."
]
}
That certainly doesn't seem to be image data or an image filename, and it doesn't mention an image attachment.
But of course much of this is just guesswork, and I don't have high confidence in any of it.
Ha, very fair point!