A lot of the recent talk about OpenAI, various events, their future path, etc., seems to assume that further scaling beyond GPT-4 will pose some sort of 'danger' that grows linearly or super-linearly with the amount of compute, and would thus pose an extraordinary danger if you plug in 100x the compute, and so on.

Which doesn't seem obvious at all.

It seems quite possible that GPT-5, and further improvements, will be very inefficient.

For example, a GPT-5 that requires 10x more compute than GPT-4 but is only moderately better, and a GPT-6 that requires 10x more compute than GPT-5, i.e. 100x more than GPT-4, but is again only moderately better.
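To make this hypothetical concrete, here is a minimal sketch, assuming (purely for illustration) that some loss-like capability proxy follows a power law in compute with a small exponent; the functional form and the exponent are assumptions, not measured values:

```python
# Illustrative only: a hypothetical diminishing-returns relationship between
# training compute and a loss-like capability proxy. The power-law form and
# the exponent are assumptions for the sake of the example, not measurements.

EXPONENT = 0.05  # assumed (small) scaling exponent

def relative_loss(compute_multiplier: float) -> float:
    """Loss relative to baseline under an assumed L = a * C**(-0.05) law (a cancels in the ratio)."""
    return compute_multiplier ** -EXPONENT

# "GPT-4", "GPT-5", "GPT-6" in the hypothetical above: 1x, 10x, 100x compute.
for mult in (1, 10, 100):
    print(f"{mult:>3}x compute -> loss falls to {relative_loss(mult):.2f} of baseline")
```

Under those assumed numbers, 100x the compute shaves only about 20% off the proxy loss; whether real capability gains actually flatten like this is exactly the open question.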

In this case there don't seem to be any serious dangers from LLMs at all.

The problem self-extinguishes, because random people won't be able to acquire that amount of compute in the foreseeable future. Only serious governments, institutions, and companies with multi-billion-dollar capex budgets will even be able to consider acquiring something much better.

And although such organizations can't be considered perfectly responsible, they will still very likely be responsible enough to handle LLMs that are only a few times more intelligent.

3 Answers

Seth Herd

Nov 24, 2023

There's another whole route to scaling LLM capabilities. LLMs themselves might not need to scale up much more at all to enable AGI. Adding scaffolding to LLMs to make language model cognitive architectures (LMCAs) can fill at least some of the gaps in their abilities. Many groups are working on this. The Minecraft agent JARVIS-1 is the biggest success so far, and it's pretty limited; but it's hard to guess how far this approach will get. Opinions are split. I describe some reasons LMCAs could make nonlinear progress with better memory systems (and other enhancements) in Capabilities and alignment of LLM cognitive architectures.
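For readers who haven't seen the pattern, here is a minimal sketch of what such scaffolding can look like; the llm() stub and the memory scheme are illustrative assumptions, not a description of JARVIS-1 or any particular system:

```python
# A minimal sketch of a scaffolded-LLM / LMCA loop. The llm() stub stands in
# for whatever completion API a real system would call; everything here is
# illustrative rather than a description of any particular agent.

def llm(prompt: str) -> str:
    """Stub for a call to some LLM completion API (an assumption)."""
    return "(model output would go here)"

def lmca_step(goal: str, memory: list[str]) -> str:
    """One think-act cycle: the agent's working state passes through plain text."""
    context = "\n".join(memory[-20:])  # simple episodic memory: recent thoughts as text
    thought = llm(f"Goal: {goal}\nRecent memory:\n{context}\nWrite the next step.")
    memory.append(thought)             # everything the agent 'thinks' stays legible
    return thought

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    memory: list[str] = []
    for _ in range(max_steps):
        lmca_step(goal, memory)
    return memory
```

The point of the sketch is structural: the capability gains come from memory, planning, and iteration wrapped around a fixed model, which is why this route doesn't depend on further scaling of the LLM itself.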

This possibility is alarming, but it's not necessarily a bad thing. LMCAs seem to be inherently easier to align than any other AGI design I know of with a real chance of being first-past-the-post. Curiously, I haven't been able to drum up much interest in this, despite it potentially being a short-timeline path, and therefore urgent, as well as quite possibly our best shot at aligned AGI. I haven't even gotten criticisms shooting it down, just valid arguments that it's not guaranteed to work.

I've also noticed that scaffolded LLM agents seem inherently safer. In particular, deceptive alignment would be hard for one such agent to achieve, if at every thought-step it has to reformulate its complete mind state into the English language just in order to think at all.

You might be interested in some work done by the ARC Evals team, who prioritize this type of agent for capability testing.

Vladimir_Nesov

Nov 24, 2023

In terms of timelines, AGI is the threshold of capabilities where the system can start picking the low-hanging fruit of lifting its easier-to-lift cognitive limitations (within constraints of compute hardware), getting to make a lot of progress at AI speeds on the kind of work that was previously only done by humans. Initially this might even be mere AI engineering in the sense of programming, with humans supplying the high-level ideas for the AI to implement in code.

It's hard to pin down what specifically GPT-4 can't do at all that's necessary to cross this threshold; it's bad at many of the steps that would be involved. Scaling predictably makes LLMs better, as long as data doesn't run out. A lot of the scaling will happen in a burst in the next 3-5 years before slowing down, absent regulation or AGI. It doesn't matter how the speed of improvement changes throughout the process, only whether the crucial capability threshold is crossed. And it's too unclear where the threshold lies, and how much improvement is left in scaling alone, to tell with any certainty which one wins out.

Then there is data quality, which can get quite high in synthetic data in narrow domains such as Go or chess, allowing DL systems that are tiny by modern standards to play very good Go or chess. Something similar might get invented for data quality for LLMs that allows them to get very good at many STEM activities (such as theorem proving), but at a scale far beyond GPT-4. There is not enough high quality text data to get through the current burst of scaling[1] (forcing a pivot to less capability-rich multimodal data), so serious work on this is inevitably ongoing, in addition to the distillation-motivated work for specialized smaller models. (Not generating specialized synthetic data particularly well might be one of the cognitive limitations that a nascent AGI resulting from doing this poorly might work on lifting.)
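As a rough back-of-the-envelope sketch of why data rather than compute can become the binding constraint, assuming the commonly cited Chinchilla-style heuristic of roughly 20 training tokens per parameter and the standard ~6·N·D FLOPs approximation (the model sizes below are illustrative assumptions, not real training runs):

```python
# Back-of-the-envelope: compute-optimal data needs under a Chinchilla-style
# heuristic of ~20 training tokens per parameter and ~6*N*D training FLOPs.
# The model sizes are illustrative assumptions, not real training runs.

TOKENS_PER_PARAM = 20
FLOPS_PER_PARAM_TOKEN = 6

def compute_optimal(params: float) -> tuple[float, float]:
    """Return (training tokens, training FLOPs) for a compute-optimal run."""
    tokens = TOKENS_PER_PARAM * params
    flops = FLOPS_PER_PARAM_TOKEN * params * tokens
    return tokens, flops

for params in (1e11, 1e12, 1e13):  # hypothetical model sizes
    tokens, flops = compute_optimal(params)
    print(f"{params:.0e} params -> {tokens:.1e} tokens, {flops:.1e} FLOPs")
```

Even at the middle of these hypothetical sizes, the token requirement runs into the tens of trillions, which is roughly the regime where high quality text becomes scarce and synthetic or repeated data starts to matter (see the footnote below).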


  1. Edit 15 Dec: I endorse this point less centrally now, based on scaling laws for training on repeated data. Even if there is not enough high quality text data, there is still enough OK quality text data, which I previously thought to also be false. ↩︎

ryan_greenblatt

Nov 24, 2023

It's not obvious that GPT-5 will be way better than GPT-4. However, it seems reasonably likely based on the jump from GPT-3 to GPT-4, or from GPT-3.5 to GPT-4.

GPT-3 is very, very dumb. Really, go play with it. It's clearly well below humans at many tasks where language models are reasonably advantaged.

GPT-4 is vastly better. It's often notably better than median humans on tasks that LLMs are reasonably advantaged at.

It's unclear how to extrapolate the situation, but there is a plausible extrapolation which results in GPT-5 or GPT-6 being quite scary.

There are two key subquestions here: how 'better at X' scales with net training compute, and what exactly X is.

The X here is 'predict internet text', not 'generate new, highly valuable research, etc.', and success at the latter likely requires combining LLMs with at least planning/search.

1 comment

This isn't directly evidence, but I think it's worth flagging: by the nature of the topic, much of the most compelling evidence is potentially hazardous. This will bias the kinds of answers you can get.

(This isn't hypothetical. I don't have some One Weird Trick To Blow Up The World, but there's a bunch of stuff that falls under the policy "probably don't mention this without good reason out of an abundance of caution.")