35 Thoughts About AGI and 1 About GPT-5

by snewman
16th Aug 2025
Linkpost from secondthoughts.ai
20 min read
20 comments, sorted by top scoring
[-]ryan_greenblatt3mo*60

The best public estimate is that GPT-4 has 1.8 trillion “parameters”, meaning that its neural network has that many connections. In the two and a half years since it was released, it’s not clear that any larger models have been deployed (GPT-4.5 and Grok 3 might be somewhat larger).

The human brain is far more complex than this; the most common estimate is 100 trillion connections, and each connection is probably considerably more complex than the connections in current neural networks. In other words, the brain has far more information storage capacity than any current artificial neural network.

Which leads to the question: how the hell do LLMs manage to learn and remember so many more raw facts than a person[4]?

I don't have any expertise in neuroscience, but I think this is somewhat confused:

  • Probably the effective number of parameters in the human brain is actually lower than 100 trillion because many of these "parameters" are basically randomly initialized or mostly untrained. (Or are trained very slowly/weakly.) The brain can't use a global learning algorithm, so it might effectively use parameters much less efficiently.
  • It's a bit confusing to describe GPT-4 as having 1.8 trillion connections as 1.8 trillion is the number of floating point operations (roughly) not the number of neurons. In general, the analogy between the human brain and LLMs is messy because a single brain neuron probably has far fewer learned parameters than a LLM neuron, but plausibly somewhat more than a single floating point number.
Reply
[-]ryan_greenblatt3mo20

@Steven Byrnes might have takes on this.

Reply
[-]snewman3mo10

I'm having trouble parsing what you've said here in a way that makes sense to me. Let me try to lay out my understanding of the facts very explicitly, and you can chime in with disagreements / corrections / clarifications:

The human brain has, very roughly, 100B neurons (nodes) and 100T synapses (connections). Each synapse represents at least one "parameter", because connections can have different strengths. I believe there are arguments that it would in fact take multiple parameters to characterize a synapse (connection strength + recovery time + sensitivity to various neurotransmitters + ???), and I'm sympathetic to this idea on the grounds that everything in the body turns out to be more complicated than you think, but I don't know much about it.

Regarding GPT-4, I believe the estimate was that it has 1.8 trillion parameters, which if shared weights are used may not precisely correspond to connections or FLOPs. For purposes of information storage ("learning") capacity, parameter count seems like the correct metric to focus on? (In the post, I equated parameters with connections, which is incorrect in the face of shared weights, but does not detract from the main point, unless you disagree with my claim that parameter count is the relevant metric here.)

To your specific points:

Probably the effective number of parameters in the human brain is actually lower than 100 trillion because many of these "parameters" are basically randomly initialized or mostly untrained. (Or are trained very slowly/weakly.) The brain can't use a global learning algorithm, so it might effectively use parameters much less efficiently.

What is your basis for this intuition? LLM parameters are randomly initialized. Synapses might start with better-than-random starting values, I have no idea, but presumably not worse than random. LLMs and brains both then undergo a training process; what makes you think that the brain is likely to do the worse job of training its available weights, or that many synapses are "mostly untrained"?

Also note that the brain has substantial sources of additional parameters that we haven't accounted for yet: deciding which synapses to prune (out of the much larger early-childhood count), which connections to form in the first place (the connection structure of an LLM can be described in a relative handful of bits, while the connection structure of the brain has an enormous number of free parameters; I don't know how "valuable" those parameters are, but natural systems are clever!), where to add additional connections later in life.

It's a bit confusing to describe GPT-4 as having 1.8 trillion connections as 1.8 trillion is the number of floating point operations (roughly) not the number of neurons.

I never mentioned neurons. 1.8 trillion is, I believe, the best estimate for GPT-4's parameter count. Certainly we know that the largest open-weight models have parameter counts of this order of magnitude (somewhat smaller but not an OOM smaller). As noted, I forgot about shared weights when equating parameters to connections, but again I don't think that matters here. FLOPs to my understanding would correspond to connections (and not parameter counts, if shared weights are used), but I don't think FLOPs are relevant here either.

In general, the analogy between the human brain and LLMs is messy because a single neuron probably has far fewer learned parameters than a LLM neuron, but plausibly somewhat more than a single floating point number.

GPT-5 estimates that GPT-4 had just O(100M) neurons. Take that figure with a grain of salt, but I mention it to point out that in both modern LLMs and the human brain, there are far more connections / synapses than nodes / neurons, and the vast majority of parameters will be associated with connections, not nodes. (Which is why I didn't mention neurons in the post, and I don't think it's useful to talk about learned parameters in reference to neurons.)

Reply
[-]ryan_greenblatt3mo20

Regarding GPT-4, I believe the estimate was that it has 1.8 trillion parameters, which if shared weights are used may not precisely correspond to connections or FLOPs.

For standard LLM architectures, forward pass FLOPs are ≈2⋅parameters (because of the multiply and accumulate for each matmul param). It could be that GPT-4 has some non-standard architecture where this is false, but I doubt it.

So, yeah we agree here, I was just noting that connection == FLOP (roughly).
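
As a back-of-the-envelope sketch of that rule (the 1.8T figure below is the rumored GPT-4 estimate discussed above, not a confirmed number):

    # For a dense transformer, forward-pass FLOPs per token are roughly
    # 2 * parameters: one multiply and one add per weight. A mixture-of-experts
    # model only uses its active parameters, so this would overestimate.
    params = 1.8e12                   # rumored GPT-4 parameter count (assumption)
    flops_per_token = 2 * params      # ~3.6e12
    tokens = 1000                     # roughly a 750-word response
    print(f"~{flops_per_token:.1e} FLOPs/token, ~{flops_per_token * tokens:.1e} FLOPs total")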

What is your basis for this intuition? [...] what makes you think that the brain is likely to do the worse job of training its available weights, or that many synapses are "mostly untrained"?

The brain is purely local, which makes training all the parameters efficiently much harder; my understanding is that in at least the vision-focused part of the brain there is a bunch of use of randomly initialized filters, and I seem to recall some argument made somewhere (by Steven Byrnes?) that the effective number of parameters was much lower. Sorry I can't say more here.

Reply
[-]ryan_greenblatt3mo4-2

To put it another way: compared to people, large language models seem to be superhuman in crystallized knowledge, which seems to be masking shortcomings in fluid intelligence. Is that a dead end, great for benchmarks but bad for a lot of work in the real world?

You seem to imply that AIs aren't improving on fluid intelligence. Why do you think this? I'd guess that AIs will just eventually have sufficiently high fluid intelligence while still compensating with (very) superhuman crystallized knowledge (like an older experienced human professional).

If fluid intelligence wasn't improving, this would be a dead end, but if there is some pipeline which is improving fluid intelligence (quickly), then I don't see a particular reason to think that high crystallized knowledge is a reason for discounting AI.

Reply
[-]snewman3mo10

I do believe that AIs will eventually surpass humans at fluid intelligence, though I'm highly uncertain as to the timeline.

My point here is really just the oft-repeated observation that when we see an AI do X, intuitively we tend to assess the AI the way we would assess a human being who could do X, and that intuition can lead to very poor estimates of whether the AI can also do Y. (For instance, bar exam → practicing law.) For instance, the relative ratios of fluid vs. crystal intelligence may capture much of the reason that AIs are approaching superhuman status at competition coding problems but are still far from superhuman at many real-world coding tasks. It doesn't mean AIs will never get to real-world tasks. It just suggests (to me) that they might be farther from that milestone than their performance on crystal-intelligence-friendly tasks would imply.

Reply
[-]ryan_greenblatt3mo20

It just suggests (to me) that they might be farther from that milestone than their performance on crystal-intelligence-friendly tasks would imply.

I basically agree, but we can more directly attempt extrapolations (e.g. METR horizon length) and I put more weight on this.

I also find it a bit silly when people say "AIs are very good at competition programming, so surely they must soon be able to automate SWE" (a thing I have seen at least some semi-prominent frontier AI company employees imply). That said, I think AIs being good at competitive programming is substantially not based on better crystallized intelligence and is instead based on this being easier to train for with RL and easier to scale up inference compute on.

Reply
[-]ryan_greenblatt3mo3-2

In general, this post seems to make a bunch of claims that LLMs have specific large qualitative barriers relative to humans, but these claims seem mostly unsupported. The evidence as far as I can tell seems more consistent with LLMs being weaker in a bunch of specific quantitative ways which are improving over time. For instance, LLMs can totally do continuous learning or consolidate memory; it's just that the best methods for this work pretty poorly. (But plausibly are still within the human range for most/many relevant tasks.)

Reply
[-]snewman3mo10

Agreed that I have not supported my claims here – this was a vibes piece.

I agree that LLMs are improving at ~everything, but my intuition is that some of those improvements – for instance, regarding continuous learning – may currently be of the "climbing a ladder to get closer to the moon" variety. Sounds like we just have very different intuitions here.

Reply
[-]ryan_greenblatt3mo30

AIs have been demonstrating what arguably constitutes superhuman performance on FrontierMath, a set of extremely difficult mathematical problems.

AIs aren't superhuman on frontier math. I'd guess that Terry Tao with 8 hours per problem (and internet access) is much better than current AIs. (Especially after practicing on some of the problems etc.)

At a more basic level, this superhumanness would substantially be achieved by broadness/generality rather than by being superhuman within some field (which is arguably less important/impactful). Like, if you compared AIs to a group of humans who are pretty good at this type of math, the humans would probably also destroy the AI.

Reply
[-]snewman3mo10

Yeah, I was probably too glib here. I was extrapolating from the results of the competition Epoch organized at MIT, where "o4-mini-medium outperformed the average human team, but worse than the combined score across all teams, where we look at the fraction of problems solved by at least one team". This was AI vs. teams of people (rather than any one individual person), and it was only o4-mini, but none of those people were Terence Tao, and it only outperformed the average team.

I would be fascinated to see how well he'd actually perform in the scenario you describe, but presumably we're not going to find out.

if you compared AIs to a group of humans who are pretty good at this type of math, the humans would probably also destroy the AI.

I wonder? Given that, to my understanding, each FrontierMath problem is deep in a different subfield of mathematics. But I have no understanding of the craft of advanced / research mathematics, so I have no intuition here.

Anyway, I think we may be agreeing on the main point here: my suggestion that LLMs solve FrontierMath problems "the wrong way", and your point about depth arguably being more important than breadth, seem to be pointing in the same direction.

Reply1
[-]ryan_greenblatt3mo30

Anyway, I think we may be agreeing on the main point here: my suggestion that LLMs solve FrontierMath problems "the wrong way", and your point about depth arguably being more important than breadth, seem to be pointing in the same direction.

Yep, though it's worth distinguishing between LLMs often solving FrontierMath problems the "wrong way" and always solving them the "wrong way". My understanding is that they don't always solve them the "wrong way" (at least for Tier 1/2 problems rather than Tier 3 problems), so you should (probably) be strictly more impressed than you would be if you only knew that LLMs solved X% of problems the "right way".

Reply
[-]snewman3mo10

Good point.

Reply
[-]ryan_greenblatt3mo30

Is sample-efficient learning a singularly important step on the path to AGI?

Almost definitionally, learning as efficiently as top humans would suffice for AGI. (You could just train the AI on way more data/compute and it would be superhuman.)

AIs will probably reach milestones like full automation of AI R&D before matching top human sample efficiency in broad generality (though they might be better in some/many cases).

Reply
[-]StanislavKrym3mo30

Will the journey from here to AGI feature “aha” moments?

Looks like it did feature such moments in the past. The METR graph that you quote had a GPT-4 to GPT-4o plateau, and all subsequent models used CoTs and context window lengtheners and rapidly increased compute spending on RL. This strategy began to crumble when Claude Opus 4 (which didn't even reach SOTA on time horizon), Grok 4 and GPT-5 failed to follow the 4o-o3[1] faster trend.

something deep about the nature of large tasks vs. small tasks, and the cognitive skills that people and LLMs bring to each.

A human brain, unlike current AIs, has a well-developed dynamic memory which is OOMs bigger (and OOMs worse trained, forcing evolution to use high learning rates) than current context windows or CoTs, let alone the number of neurons in a layer of a LLM. What if the key to AGI lies in a similar direction?

  1. ^

    However, METR observed the trend by using 4o-o1 because o3 had yet to be released. Another complication is that the set of METR's tasks is no longer as reliable as it once was, potentially causing us to underestimate the models' abilities.

Reply
[-]Noosphere893mo20

A take I haven't seen yet is that scaling our way to AI that can automate away jobs might fail for fundamentally prosaic reasons, and that new paradigms might be needed not because of fundamental AI failures, but because scaling compute starts slowing down when we can't convert general chips into AI chips.

This doesn't mean the strongest versions of the scaling hypothesis were right, but I do want to point out that fundamental changes in paradigm can happen for prosaic reasons, and I expect a lot of people to underestimate how much progress was made in the AI summer, even if it isn't the case that imitative learning scales to AGI with realistic compute and data.

Reply
[-]ryan_greenblatt3mo20

But if this were true, you’d think they’d be able to handle ARC-AGI puzzles (see the example image just above)

In a footnote you note that models do well on ARC-AGI-1, but I think your description of the situation is misleading:

  • AIs trained on the training set of ARC-AGI (and given a bunch of compute) can beat humans on ARC-AGI-1.
  • The example puzzle you show for ARC-AGI is one of the easiest puzzles; AIs have been able to succeed at this for a while. The ones AIs get wrong are typically ones which are very large (causing difficulties with perception) or which are actually hard for humans.
  • ARC-AGI-2 isn't easy for humans. It's hard for humans and AIs probably do similarly to random humans (e.g. mturkers) given a limited period.
  • ARC-AGI-3 is much more "perception" loaded than ARC-AGI-2/ARC-AGI-1 due to being structured as a video game (which implicitly means you have to process and understand a long series of frames). I expect this is why AIs struggle.

Overall, I think LLMs do handle ARC-AGI puzzles. They are well within the human range for ARC-AGI-1/2 and their failures are pretty often perception failures.

Fair enough if your objection is that the level of sample efficiency on this type of task for typical humans isn't sufficient. (I agree.)

Maybe they’re only good at picking up ideas from an example, if they’d already learned that idea during their original training? In other words, maybe in-context learning is helpful at jogging their memory, but not for teaching new concepts.

My view is that LLMs are generally qualitatively dumber than the most capable humans in a bunch of ways (including ability to learn new things), but that this is improving over time. There isn't some dichotomy between "sample efficient learning" and not. I think you'll struggle to find tasks where AIs haven't been improving, even if you pick tasks by following the heuristic "what haven't AIs already learned?" (though AIs do gain an advantage by knowing lots of stuff, they are also improving at all kinds of stuff).

Reply
[-]snewman3mo10

AIs trained on the training set of ARC-AGI (and given a bunch of compute) can beat humans on ARC-AGI-1.

Say more? At https://arcprize.org/leaderboard, I see "Stem Grad" at 98% on ARC-AGI-1, and the highest listed AI score is 75.7% for "o3-preview (Low)". I vaguely recall seeing a higher reported figure somewhere for some AI model, but not 98%.

ARC-AGI-2 isn't easy for humans. It's hard for humans and AIs probably do similarly to random humans (e.g. mturkers) given a limited period.

This post states that the "average human" scores 60% on ARC-AGI-2, though I was unable to verify the details (it claims to be a linkpost for an article which does not seem to contain that figure). Personally I tried 10-12 problems when the test was first launched, and IIRC I missed either 1 or 2.

The leaderboard shows "Grok 4 (Thinking)"  on top at 16%... and, unfortunately, does not present data for "Stem Grad" or "Avg. Mturker" (in any case I'm not sure what I think of the latter as a baseline here).

Agreed that perception challenges may be badly coloring all of these results.

There isn't some dichotomy between "sample efficient learning" and not.

Agreed, but (as covered in another comment – thanks for all the comments!), I do have the intuition that the AI field is not currently progressing toward rapid improvement on sample efficient learning, and may currently be heading toward a fairly low local maximum.

Reply
[-]ryan_greenblatt3mo40

Say more? At https://arcprize.org/leaderboard, I see "Stem Grad" at 98% on ARC-AGI-1, and the highest listed AI score is 75.7% for "o3-preview (Low)". I vaguely recall seeing a higher reported figure somewhere for some AI model, but not 98%.

By "can beat humans", I mean AIs are well within the human range, probably somewhat better than the average/median human in the US at ARC-AGI-1. In this study, humans get 65% right on the public evaluation set.

This post states that the "average human" scores 60% on ARC-AGI-2, though I was unable to verify the details (it claims to be a linkpost for an article which does not seem to contain that figure). Personally I tried 10-12 problems when the test was first launched, and IIRC I missed either 1 or 2.

I'm skeptical; I bet mturkers do worse. This is very similar to the score that was found for humans on ARC-AGI-1, which from my understanding is much easier (see this study).

By "hard for humans", I just mean that it takes substantially effort even for somewhat smart humans, I don't mean that humans can't do it.

Reply
[-]M.-A. Wolf2mo10

Many thanks for sharing your reflections. I found them very valuable and agree with most of them fully or mostly, while you are much closer to LLM foundational model developments than I am, so this is just a humble feeling on my side (I am an intensive user in the data space and in understanding developments in society/policy/economy, since early 2022, and we plan to integrate LLMs into a SaaS we want to develop). I was actually surprised to read that you were active in the 80s already (which makes you even older than me, being from 1968 😉).

Anyway, I wanted to share some reflections back with you (inserted after the numbers from your text - it didn't want to keep the quotations due to the reformatting of what I pasted in, my apologies) about some aspects where I have some ideas, mostly around Sample-Efficient Learning and a bit on other points. If you find them useful, please use them; I would also be happy about your feedback:

 

Sample-Efficient Learning
 

on 10. and 11.

Marc: I think it is a contributor: filtering out fluff; but intelligence is also (or more so) about combining and transferring things we have learned in comprehensive and complex ways.
 

12.

Marc: possibly, largely (with the filtering and the interpreting/classifying during intake being a key part, I think).

 

13 and 14.

Marc: this fits my interpretation above: GPUs see all the pixels, but they don't interpret/filter the pattern; they store it all, and the pattern emerges (very inefficiently) from the sheer number of pixel "bags".
 

15.  

Marc: That is not the same, of course. It can obviously compensate for a lot, just as thinking does very effectively and also rather efficiently, but hallucinations and easily losing context/track (more than smart humans would) appear to be the inherent price to pay.

 

16.

Marc: In my experience humans can do this; I have worked across domains and found this to be true for me.

 

18.

Marc: This fits my interpretation above: many more facts/data, but not explicitly filtered/interpreted/classified during ingestion.

 

19.

Marc: I think it is context pollution from inner reflection, plus context window size, and then getting lost in complexity: too many different sub-contexts that are mixed up in a single "thinking", where each wrong turn derails the whole effort, due to a lack of de-pollution measures and a lack of retro-inspection from outside the active context window - at least that is my understanding of how this is currently done, while this could even rather easily be changed! I wish I had moved to work in the AI field as a developer - I feel like this fits my way of thinking and the problems to solve.

 

30.

Marc: I would have some ideas how to improve this situation in LLMs – the tricks I am aware of now are arguably unsuitable (here humans are indeed better, but I think something similar can be achieved, even with a mix of LLM&ML).

 

36.

Marc: Indeed. There are even more reasons why this graphic does not tell us much, really (yes, I should name them - maybe CAPEX/OPEX ratios, available cash for investment, narrowness of topic, scale of economic expectations, who finances it, etc. - but you already named several, so that should suffice). But putting things in perspective is always compelling (including, clearly, to me - even if only to realise that other perspectives would be needed).

 

Another thought: I think AGI and ASI are not defined/understood as they should be - this is currently overly anthropocentric - but why?

 

Note: No LLM was abused or at least used in writing this feedback 😉

Reply
If this is GPT-5 in “Thinking” mode, I wonder what “Pro” mode looks like

 

Amidst the unrelenting tumult of AI news, it’s easy to lose track of the bigger picture. Here are some ideas that have been developing quietly in the back of my head, about the path from here to AGI.

  1. I drafted this post a couple of weeks ago. The subsequent launch of GPT-5 didn’t lead me to make any changes. That says something about how uneventful GPT-5 is.
     
  2. Current AIs aren’t AGI. But I don’t know why.

    I mean, I have thoughts. I talk about missing functions like “memory” and “continuous learning”, and possibly “judgement” and “insight”. But these are all debatable; for instance, ChatGPT has a form of memory. The honest answer is: I dunno what’s missing, but something is, because there are a lot of things AI still can’t do. Even if it’s getting harder and harder to articulate exactly what those things are.
     
  3. Prior to GPT-5, ChatGPT users had to tell the chatbot whether to think hard about a problem (by selecting a “reasoning” model like o3), or just give a direct answer. One of the biggest changes in GPT-5 is that the system decides for itself whether a question calls for “thinking hard”. And, according to many reports, it often gets this wrong. In other words, current cutting-edge AIs can solve Ph.D-level math and science problems, but can’t reliably decide which questions deserve thinking about before answering.

    (OK, I lied: GPT-5 didn’t lead me to change any of my previously written points, but it did prompt me to add this one.)
     
  4. Often, when embarking on a large software project, I can’t see how it’ll come together. The task may be too complex to wrap my head around, or there may be conflicting requirements that seem difficult to reconcile. Sometimes this eventually leads to an “aha” moment, when I find a clever reframing that changes the problem from confusing and intractable to straightforward and tractable. Other times I just grind away and grind away until there’s nothing left to do.

    The latter cases, with no aha moment, are disconcerting. Lacking a specific moment when the difficulty was overcome, I find myself questioning whether I have in fact overcome it. I worry that I've missed something important – that I've built something in my workshop that will never fit out the door, or an airplane that's too heavy to fly. Sometimes this does in fact turn out to be the case; other times everything is fine.

    Will the journey from here to AGI feature “aha” moments? Or will it be a long slow grind, and when we get to the end and look back to see what the key insights were, we won’t be able to find any?
     
  5. Back in the 80s and 90s, I used to attend SIGGRAPH, the annual computer graphics conference. The highlight of the week was always the film show, a two-hour showcase demonstrating the latest techniques. It was a mix of academic work and special-effects clips from unreleased Hollywood movies.

    Every year, the videos would include some important component that had been missing the year before. Shadows! Diffuse lighting! Interaction of light with texture! I’d gaze upon the adventurer bathed in flickering torchlight, and marvel at how real it looked. Then the next year I’d laugh at how cartoonish that adventurer’s hair had been, after watching a new algorithm that simulated the way hair flows when people move.

    In the late 80’s I would have thought this looked sooooo real

    I think AI is a little like that: we’re so (legitimately!) impressed by each new model that we can’t see what it lacks… until an even-better model comes along. As I said when I first started blogging about AI: as we progress toward an answer to the question “can a machine be intelligent?”, we are learning as much about the question as we are about the answer.

    (Case in point: in the press briefing for the GPT-5 launch, Sam Altman said that we’ll have AGI when AIs get continuous learning. I’ve never heard him point to that particular gap before.)
     

  6. Moravec's paradox states that, in AI, “the hard problems are easy and the easy problems are hard”[1] – meaning that the most difficult things to teach an AI are the things that come most naturally to people. However, we are often surprised by which things turn out to be easy or hard. You might think running is easy, until you see a cheetah do real running.
     
  7. The accepted explanation for Moravec’s paradox is that some things seem easy to us, because evolution has optimized us to be good at those things, and it’s hard for clumsy human designs to outdo evolution. Evolution didn’t optimize us for multiplying large numbers or shifting gigantic piles of dirt, and so crude constructions such as calculators and bulldozers easily outperform us.

    With that in mind: evolution would laugh if it saw how crude our algorithms are for training neural networks.
     
  8. Evolution’s grin might fade a bit when it sees how much sheer scale (of computing capacity and data) we can devote to training a single model. A child’s intellectual development is driven by processes far more sophisticated than our procedures for training AIs, but that child has access to only a tiny fraction as much data.
     
  9. The human genome (DNA) contains a few billion bits of information. Loosely speaking, this means that you are the product of a design that reflects billions of decisions, most of which have been ruthlessly optimized by evolution. Not all of those decisions will be relevant to how the brain works: your DNA also includes instructions for the rest of your body, there’s a certain amount of junk that hasn’t been optimized away yet, etc. But still, the design of our brains probably reflects hundreds of millions of optimization decisions.

    I don’t know how many carefully optimized decisions are incorporated into the design of current LLMs[2], but I doubt it comes to hundreds of millions. This is why I believe current AI designs are very crude.


    Sample-Efficient Learning
     
  10. When comparing human and AI capabilities, one important concept is “sample-efficient learning”. This refers to the ability to learn a new idea or skill from a small number of examples or practice sessions. In general, current AI models are much less sample-efficient than we are: a teenager learns to drive in less than 100 hours; Waymo vehicles have logged millions of hours and are still working their way up to driving on the highway, in snow, etc.

    (This suggests that sample-efficient learning is one of the things evolution optimized us for.)
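
    As a crude sense of scale for the driving example (the Waymo figure below is an assumed stand-in for "millions of hours", not an official number):

        teen_hours = 100          # rough time for a teenager to learn to drive
        waymo_hours = 5e6         # assumed stand-in for "millions of hours"
        print(f"~{waymo_hours / teen_hours:,.0f}x more driving experience")   # ~50,000x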
     
  11. “Sample efficiency” is probably a crude label for a complex tangle of capabilities. Just as there are many flavors of intelligence, there must be many flavors of sample efficiency. Some people argue that true intelligence is in fact more or less the same thing as sample efficiency.
     
  12. Here are some examples of sample efficiency in humans: learning to drive a car in a few dozen hours. Figuring out the rule in an ARC-AGI task from just a couple of examples. Learning the ropes at a new job. Sussing out the key trick to solve a difficult mathematical problem. Are these all basically the same skill?

    In this ARC-AGI puzzle, the goal is to look at the first two pairs of images, identify the rule, and apply it to the last image. It’s quite doable for people, and quite challenging for AIs.
  13. I often encounter the assertion that LLMs are sample-efficient learners within their context window. That is, while they need many examples of a concept to learn that concept during the training process, they can (it is said) quickly pick up a new idea if you supply a few targeted examples while asking them a question.

    But if this were true, you’d think they’d be able to handle ARC-AGI puzzles (see the example image just above)[3]. Maybe they’re only good at picking up ideas from an example, if they’d already learned that idea during their original training? In other words, maybe in-context learning is helpful at jogging their memory, but not for teaching new concepts.
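
    To make "supply a few targeted examples while asking them a question" concrete, here is a toy sketch of such an in-context prompt; the grids, the rule, and the format are invented for illustration and are not the actual ARC-AGI harness:

        # Toy few-shot prompt: two worked examples of an unstated rule
        # (here, reflect the grid left-to-right), then a test input.
        def render(grid):
            return "\n".join(" ".join(str(c) for c in row) for row in grid)

        examples = [([[1, 0], [2, 0]], [[0, 1], [0, 2]]),
                    ([[3, 4], [0, 5]], [[4, 3], [5, 0]])]
        test_input = [[6, 0], [0, 7]]

        prompt = "Infer the rule from the examples, then apply it to the test input.\n\n"
        for i, (inp, out) in enumerate(examples, 1):
            prompt += f"Example {i} input:\n{render(inp)}\nExample {i} output:\n{render(out)}\n\n"
        prompt += f"Test input:\n{render(test_input)}\nTest output:\n"
        print(prompt)   # this text would be sent to a model; no API call shown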
     
  14. Is sample-efficient learning a singularly important step on the path to AGI? If so, could other strengths of large language models (e.g. their superhuman breadth of knowledge) compensate for the lack of sample efficiency?
     
  15. “Judgement” and “insight” also seem like crude labels that will turn out to encompass many different things. Will these things transfer across domains? If we develop a model that has judgement and insight in mathematics or coding, will it have a big head start on developing those same capabilities in other, messier domains? Or will current AI architectures struggle to generalize in this way? For that matter, are people able to transfer judgement, insight, and taste from one domain to another?


    Crystallized and Fluid Intelligence
     
  16. AIs have been demonstrating what arguably constitutes superhuman performance on FrontierMath, a set of extremely difficult mathematical problems. But they mostly seem to do it “the wrong way”: instead of finding elegant solutions, they either rely on knowledge of some obscure theorem that happens to make the problem much easier, or grind out a lengthy brute-force answer.

    Does this matter? I mean, if you get the answer, then you get the answer. But in mathematics, much of the value in finding a proof is the insights you acquired along the way. If AIs begin knocking off unsolved problems in mathematics, but in ways that don’t provide insight, perhaps we’ll still need mathematicians to do the real work of advancing the overall field. Or maybe, once AIs can solve these problems at all, it’ll be a short step to solving them with insight? My instinct is that it’s not a short step, but that could be cope. In any case, the big question is what this tells us about AI’s potential in applications other than mathematics. What portion of human activity requires real insight?

    Source: https://x.com/dmimno/status/949302857651671040

     

  17. To put it another way: compared to people, large language models seem to be superhuman in crystallized knowledge, which seems to be masking shortcomings in fluid intelligence. Is that a dead end, great for benchmarks but bad for a lot of work in the real world? Or is it a feasible path to human-level performance?

    (Taren points out the irony that while LLMs know far more facts than people, they also routinely get facts wrong – “hallucinations”.)
     
  18. The best public estimate is that GPT-4 has 1.8 trillion “parameters”, meaning that its neural network has that many connections. In the two and a half years since it was released, it’s not clear that any larger models have been deployed (GPT-4.5 and Grok 3 might be somewhat larger).

    The human brain is far more complex than this; the most common estimate is 100 trillion connections, and each connection is probably considerably more complex than the connections in current neural networks. In other words, the brain has far more information storage capacity than any current artificial neural network.

    Which leads to the question: how the hell do LLMs manage to learn and remember so many more raw facts than a person[4]?


    One possible answer: perhaps models learn things in a shallower way, that allows for more compact representations but limits their ability to apply things they’ve learned in creative, insightful, novel ways. Perhaps this also has something to do with their poor sample efficiency.
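
    The rough ratio implied by the estimates above (both numbers are loose public estimates, not confirmed figures):

        brain_synapses = 100e12      # ~100 trillion connections (common estimate)
        gpt4_params = 1.8e12         # rumored GPT-4 parameter count
        print(f"~{brain_synapses / gpt4_params:.0f}x more connections in the brain")   # ~56x

        # At 2 bytes per parameter (bf16), the raw weight storage would be:
        print(f"~{gpt4_params * 2 / 1e12:.1f} TB of weights")                           # ~3.6 TB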


    Solving Large Problems

  19. When projecting the future of AI, many people look at this graph:

    This is the latest version, updated to include GPT-5 (source)

    It shows that the size of software engineering tasks an AI can complete has been roughly doubling every 7 months. This trend has held for over 5 years (arguably[5]), during which the achievable task size increased from around 3 seconds to around 2 hours. Why should the trend be so steady?

    It’s not obvious that the difficulty that AI will have in completing a task should increase steadily with the size of the task. If a robot could assemble 10 Ikea bookshelves, it could assemble 20 bookshelves. If a coding agent can create a form with 10 fields, it can probably create a form with 20 fields. Why is it that if an AI can complete a 10-minute project, it still may not be able to complete a 20-minute project? And why does the relationship between time and difficulty hold steady across such a wide range of times?

    I think the predictable(-ish) trend of AIs tackling larger and larger software engineering tasks has something to do with large tasks containing a fractal distribution of subtasks. There is a fuzzy collection of tactical and strategic skills involved, ranging from “write a single line of code” to “design a high-level architecture that breaks up a one-month project into smaller components that will work well together”. Larger tasks require high-level skills that are more difficult for AIs (and people) to master, but every task requires a mix of skills, tasks of the same size can involve different mixes (building one fancy model airplane vs. 20 bookshelves), and the fuzzy overlaps smooth out the graph.
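
    A minimal extrapolation sketch of that trend, taking the ~2-hour horizon and 7-month doubling time as given (both rough figures; "one month" is crudely counted in calendar hours):

        import math

        current_horizon_hours = 2.0
        target_hours = 30 * 24                 # a "one-month" project, very crudely
        doubling_time_months = 7

        doublings = math.log2(target_hours / current_horizon_hours)   # ~8.5
        months = doublings * doubling_time_months
        print(f"{doublings:.1f} doublings ≈ {months:.0f} months ≈ {months / 12:.1f} years")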
     

  20. Even so, if the doubling time for task sizes holds steady at 7 months, the consistency of that trend will point to something deep about the nature of large tasks vs. small tasks, and the cognitive skills that people and LLMs bring to each.
     
  21. It's been argued that the skills needed to solve tasks of a given size flatten out as you progress toward larger tasks. That is, there’s a big difference between solving 1-minute vs. 2-minute tasks, but (the argument goes) once you can carry out a project that requires a full month, you’re pretty much ready to tackle two-month projects. And so we should expect to see an acceleration in the size of tasks which AIs can handle – it should start doubling more frequently than every 7 months.

    I don’t share this intuition. I’d expect increases in task length to keep upping the difficulty, even when the base line task length is already large. For instance, if I’m approaching a 2-month project, perhaps I should start out by spending a couple of weeks prototyping several different approaches, or taking an online course to learn a new programming technique. Those are high-level ideas that might not make sense if I only have one month.

    Heck, maybe we should expect 1 month → 2 months to be a bigger leap than 1 minute → 2 minutes: the former is an increase of one month, the latter is an increase of only one minute! I wouldn’t actually expect successive doublings to get more difficult, but it’s not obvious to me why we should expect them to get easier, either.
     
  22. If you’ve mastered tasks that take a single day, what additional skills do you need to handle week-long, month-long, or year-long projects? Do we have any clear idea of what those skills are? I suspect that we don’t understand them very well, that we tend to discount the skills involved, and that this contributes to some people having (what I believe are) unrealistically short estimates of the time it will take to develop AGI.
     
  23. Model developers are working hard to train their models to carry out complex tasks. The current approach involves letting the model attempt practice tasks, and tweaking the model after each success or failure. Roughly speaking, this approach generates one bit of learning for each attempted task.

    This is feasible under current conditions, when models are mostly handling tasks that take a few seconds to a few minutes to carry out. What happens when we’re trying to train models to independently manage month-long projects? One bit of learning per month is a slow way to make progress. Perhaps sample-efficient learning becomes more important as you attempt longer tasks.
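
    To illustrate the worry, here is the "one bit per attempted task" arithmetic; the numbers are purely illustrative and assume episodes run one after another:

        # If each RL episode yields roughly one binary success/failure signal,
        # longer tasks mean far fewer bits of feedback per month of task time.
        minutes_per_month = 30 * 24 * 60

        for task_minutes in [1, 60, 60 * 24, minutes_per_month]:
            episodes = minutes_per_month / task_minutes
            print(f"{task_minutes:>6}-minute tasks -> ~{episodes:,.0f} success/failure signals per month")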
     
  24. One workaround to the difficulty of learning how to carry out long tasks is to break them into subtasks, and handle each subtask as a separate project. But this simplifies away many important aspects of problem solving. A large project does not neatly decompose into tidy subprojects.

    For instance, when I tackle a subtask during a large software project, the result is not just that a certain chunk of the code gets written. I come out with a slightly deeper understanding of the problem. I may have learned new things about the existing codebase. I may have hit upon a handy trick for testing the kinds of operations that this code performs, or had an insight about the data being processed. I may have gotten some little nudge that will eventually accumulate with 20 other nudges across other portions of the project, eventually leading me to rethink my entire approach. If the subtask is assigned to a separate agent, whose memories are discarded as soon as the subtask is complete, none of that learning can take place.
     
  25. At the same time, perhaps efficiently managing a complex month-long project is not something that evolution optimized us for? In which case AIs might eventually blow past us the way they did with chess.


    Continuous Learning
     
  26. “Continuous learning” refers to the ability to assimilate new skills and knowledge while carrying out a task. People are of course continuous learners. Current LLMs are not: a model like GPT-5 first undergoes “training”, and then is released for use, at which point the model is frozen and can never learn anything new.

    Continuous learning seems related to sample-efficient learning. Cutting-edge models today are trained on tens of trillions of “tokens” (roughly meaning, words). This is astronomically more data than a person might encounter over the course of a project. So to learn new things on the fly, you need to be sample-efficient.
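
    A crude sense of how "astronomically more" the training data is, under some assumed reading-speed numbers:

        training_tokens = 15e12        # "tens of trillions" (assumed figure)
        words_per_minute = 250         # a fast reader (assumption)
        tokens_per_word = 1.3          # typical English tokenization (assumption)

        tokens_per_year = words_per_minute * tokens_per_word * 60 * 8 * 365   # 8 h/day
        print(f"~{tokens_per_year:.1e} tokens per year of full-time reading")
        print(f"~{training_tokens / tokens_per_year:,.0f} reader-years to cover the training set")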
     
  27. Continuous learning seems related to the way models often struggle when we try to integrate them deeply into our work. Think about your experience on the first day of a new job: you struggle, too. You don’t know how to do anything, you don’t know where to find anything. Every little task requires conscious deliberation and extensive research. Roaming around the internal website, not to find the information you need, but just to get a sense of where to look; asking someone for help in figuring out who to ask for help.

    Everything that current models do, they do in their first hour on the job.
     
  28. There’s an argument that once AIs have mastered continuous learning, we’ll apprentice them to do every possible job, and amalgamate the resulting learnings into a single model that’s pre-trained to be good at everything. It’s not obvious to me that this will work. The accountant-bot will have taken its neural network in a different direction than the therapy-bot, and jamming two different things together may not work out any better than it did for Jeff Goldblum in The Fly. This is like an extreme version of “distributed training” (using multiple data centers to train a single model), which is something that has only been demonstrated in limited ways, requiring close coordination between the different learning centers.
     
  29. A model that has been working as an apprentice accountant will have learned a lot of things about accounting, but also a lot of sensitive details regarding specific clients. Those details would have to somehow be excluded from the knowledge aggregation process, both for privacy reasons, and to avoid overwhelming the unified model with unimportant detail.
     
  30. Arguably, “continuous, sample-efficient learning” is a good description of the way you keep track of what you’re doing over the course of a project. You accumulate knowledge of the project’s context – for instance, if the assignment is to add some new functionality to a piece of software, you’ll need to learn how the existing code works. As you work, you also remember what you’ve already done and what blind alleys you’ve explored.

    It’s been proposed that LLMs can rely on their “context window” – the transcript of everything you’ve said to them, everything they’ve said to you, and their own internal monologue as they work through the task – as a substitute for continuous learning. I have trouble accepting that this will scale up to large projects. Current LLM architectures make it very expensive to keep increasing the context window, and this seems like a fundamental barrier. People are able to fluidly “wake up”, “put to sleep”, consolidate, and otherwise manage sections of our memory according to need, and today’s LLMs cannot.


    Phase Changes as We Get Closer to AGI
     
  31. Currently, when AIs are used in the workplace, they are a tool, an extra ingredient that is sprinkled into processes that remain fundamentally human-centered. A human manager assigns tasks to a human worker, who might rely on an AI to handle some isolated subtasks. Even in the infrequent examples where AI is reported to have written most of the code for some software project, that’s still happening in small to medium-sized chunks, organized and supervised by people.

    When AIs start to do most of the higher-level work, workplace dynamics will change in hard-to-anticipate ways. Futurists like to point out that AIs can run 24 hours a day, never get bored, can be cloned or discarded, and do many things much faster than people. So long as people are the glue that holds the AI workers together, all this is of limited consequence. When the AIs take center stage, all of those AI advantages will come into play, and the result will be something strange.

    Think about a long vacation when you really unplugged – not in December when nothing was happening, but during a busy period. Think about how much catching up you had to do afterwards. Now imagine if every single morning, you discover that you have that much catching up to do, because the AI team you’re attempting to supervise has done the equivalent of two weeks of work while you slept. You’ll no longer be a central participant in your own job; it’ll be all you can do to follow the action and provide occasional nudges.

    The transition from AI-as-a-tool to people-are-mostly-spectators could happen fairly quickly, like a phase change in physics.
     
  32. An important moment in the history of AI was the conversation in which Bing Chat, built around an early version of GPT-4, tried to break up a New York Times reporter’s marriage. This was far outside the behavior that Microsoft or OpenAI had observed (or desired!) in internal tests. My understanding is that they had only tried short, functional interactions with the chatbot, whereas reporter Kevin Roose carried out an extended conversation that took the bot into uncharted territory.

    This may be related to the recent phenomenon where for some people, extended interactions with a chatbot, over the course of weeks or months, appear to be exacerbating mental health issues. The common theme is that when machines transition from bit parts to leading roles, unexpected things happen.

    (Those unexpected things don’t have to be bad! But until AIs get better at managing the big picture, and/or people learn a lot more about the dynamics of AI-powered processes, the surprises will probably be bad more often than not.)
     
  33. It’s widely recognized that AIs tend to perform better on benchmark tests than in real-world situations. I and others have pointed out that one reason for this is that the inputs to benchmark tasks are usually much simpler and more neatly packaged than in real life. It is less widely recognized that benchmark tasks also have overly simplified outputs.

    We score an AI’s output on a benchmark problem as “correct” or “incorrect”. In real life, each task is part of a complex web of ongoing processes. Subtle details of how the task is performed can affect how those processes play out over time. Consider software development, and imagine that an AI writes a piece of code that produces the desired outputs. This code might be “correct”, but is it needlessly verbose? Does it replicate functionality that exists elsewhere in the codebase? Does it introduce unnecessary complications?

    Over time, a codebase maintained mostly by AI might become a bloated mess of conflicting styles, redundant code, poor design decisions, and subtle bugs. Conversely, the untiring nature of AI may lead to codebases that are inhumanly well-maintained, every piece of code thoroughly tested, every piece of documentation up to date. Just as it was hard to guess that extended conversations could lead early chatbots into deranged behavior, it is hard to guess what will result from giving a coding agent extended responsibility for a codebase. Similarly, it is hard to guess what will result from putting an AI in charge of the long-term course of a scientific research project, or a child’s education.


    Other Thoughts
     
  34. In every field, some people accomplish much more than others. Nobel-winning scientists, gifted teachers, and inspiring leaders are held up as an argument for the potential of “superintelligence”. Certainly, the argument goes, it should be possible to create AIs that are as capable as the most capable people. An Einstein in every research department, a Socrates in every classroom. And if it’s possible for an Einstein to exist, why not an (artificial) super-Einstein?

    I find this argument compelling up to a point, but I suspect we may incorrectly attribute the impact of great scientists to brilliance alone. Einstein contributed multiple profound insights to physics, but he did that at a time of opportunity – there was enough experimental data to motivate and test those insights, but that data was new enough that no one else had found them yet[6]. Edison’s labs originated or commercialized numerous inventions, but his earlier successes provided him with the resources to vigorously pursue further lines of research, and the opportunity to bring his further inventions to market.

    Steve Jobs’ accomplishments owed much to his ability to attract talented employees. Great leaders achieve success in part by edging out other leaders to rise to the top of an organization. If AI allows us to create a million geniuses, we won’t be able to give them all the same opportunities that (some of) today’s geniuses are afforded.
     
  35. As OpenAI progressed from GPT-1 to GPT-2 to GPT-3 to GPT-4, the theme was always scale: each model was at least 10x larger than its predecessor, and trained on roughly 10x as much data.
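
    Since training compute scales roughly with parameters times training tokens, "10x larger and 10x more data" implies on the order of 100x more training compute per generation (a rough rule of thumb, not an exact accounting):

        params_factor, data_factor = 10, 10
        print(f"~{params_factor * data_factor}x more training compute per generation")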

    In the two and a half years since GPT-4 launched, frontier developers have continued to increase the amount of data used to train their models, but model sizes are no longer increasing; many subsequent cutting-edge models seem to be smaller than GPT-4. This has been interpreted as a sign that the benefits of scaling may have stalled out. However, it might simply be that no one would have much use for a larger model right now, even if it were more intelligent. All of the leading-edge AI providers (with the possible exception of Google?) are clearly capacity constrained – they already can't offer all of their goodies to everyone who would like to use them. Larger models would make this much worse, both increasing demand (presumably, if models were smarter, people would use them more) and reducing supply.

    We might not see much progress toward larger models until we’ve built enough data centers to saturate demand. Given how much room there is for AI to diffuse into further corners of our personal and work lives, that could be quite a while.
     
  36. Everyone is sharing this graph, which compares the level of investment in railroads in the 1880s, telecommunications infrastructure during the dot-com bubble, and AI data centers today:

    source: https://paulkedrosky.com/honey-ai-capex-ate-the-economy/

    The usual takeaway is: wow, the AI boom (or bubble) is bigger than the dot-com bubble. I don’t understand why people aren’t focusing more on the fact that railroad investments peaked at three times the dot-com boom and AI datacenter rollout put together. Holy shit, the 1880s must have been absolutely insane. The people of that time must have really believed the world was changing, to be willing to sustain that level of investment. (I’ve seen arguments that the pace of change in the late 1800s and early 1900s made our current era seem positively static. Steam power, electricity, railroads, the telegraph, telephones, radio, etc. This graph makes that a bit more visceral.)

    (I also wonder whether these numbers may turn out to be wrong. There’s at least one obvious error – the dot-com boom took place around 2000, not 2020. Some commentator noted that older GDP figures may be misleading because the informal economy used to play a much larger role. When a startling statistic spreads like wildfire across the Internet, it often turns out to be incorrect.)

Quick reminder that the regular application deadline for The Curve is next Friday, August 22nd! In case you missed it: on October 3-5, in Berkeley, we’ll bring together ~250 folks with a wide range of backgrounds and perspectives for productive discussions on the big, contentious questions in AI. Featuring Jack Clark, Jason Kwon, Randi Weingarten, Dean Ball, Helen Toner, and many more great speakers! If you’d like to join us, fill out this form.

Thanks to Taren for feedback and images.

  1. ^

    This quote is from a restatement of the paradox by Steven Pinker. Moravec’s original statement, in 1988:

    It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.

  2. ^

    Large Language Models, such as GPT.

  3. ^

    Yes, some models can now post high scores on the original ARC-AGI-1 test, but they still struggle with ARC-AGI-2 and ARC-AGI-3. Also, yes, it seems likely that one reason models struggle on ARC-AGI problems is that they don't have much experience looking at pixelated images. But I still stand by the observation that models seem to only be selectively skilled at in-context learning.

  4. ^

    I asked ChatGPT, Claude, and Gemini to compare the number of “facts” known by a typical adult to a frontier LLM. They all estimated a few million for people, and a few billion for LLMs. To arrive at those estimates, they engaged in handwaving so vigorous as to affect the local weather, so take with a grain of salt. (ChatGPT transcript, Claude transcript, Gemini transcript)

  5. ^

    The data does suggest that the rate of progress has accelerated recently, perhaps to a 4 month doubling time, but this is debated and there isn’t enough data to be confident in either direction.

  6. ^

    Though some of the relevant data had been available for several decades.