Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

This is my personal opinion, and in particular, does not represent anything like a MIRI consensus; I've gotten push-back from almost everyone I've spoken with about this, although in most cases I believe I eventually convinced them of the narrow terminological point I'm making.

In the AI x-risk community, I think there is a tendency to ask people to estimate "time to AGI" when what is meant is really something more like "time to doom" (or, better, point-of-no-return). For about a year, I've been answering this question "zero" when asked.

This strikes some people as absurd or at best misleading. I disagree.

The term "Artificial General Intelligence" (AGI) was coined in the early 00s, to contrast with the prevalent paradigm of Narrow AI. I was getting my undergraduate computer science education in the 00s; I experienced a deeply-held conviction in my professors that the correct response to any talk of "intelligence" was "intelligence for what task?" -- to pursue intelligence in any kind of generality was unscientific, whereas trying to play chess really well or automatically detect cancer in medical scans was OK. 

I think this was a reaction to the AI winter of the 1990s. The grand ambitions of the AI field, to create intelligent machines, had been discredited. Automating narrow tasks still seemed promising. "AGI" was a fringe movement.

As such, I do not think it is legitimate for the AI risk community to use the term AGI to mean 'the scary thing' -- the term AGI belongs to the AGI community, who use it specifically to contrast with narrow AI.

Modern Transformers[1] are definitely not narrow AI. 

Calling these models narrow AI might still have been plausible in, say, 2019. You might then have argued: "Language models are only language models! They're OK at writing, but you can't use them for anything else." It had been argued for many years that language was an AI-complete task: if you can solve natural-language processing (NLP) sufficiently well, you can solve anything. In 2019, however, it was still possible to dismiss this; basically every narrow-AI subfield had people who would argue that their specific subfield was the best route to AGI, or the best benchmark for AGI.

The NLP people turned out to be correct. Modern NLP systems can do most things you would want an AI to do, at some basic level of competence. Critically, if you come up with a new task[2], one which the model has never been trained on, then odds are still good that it will display at least middling competence. What more could you reasonably ask for, to demonstrate 'general intelligence' rather than 'narrow'?

Generative pre-training is AGI technology: it creates a model with mediocre competence at basically everything.
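For concreteness, here is a minimal sketch of what "generative pre-training" refers to: training a model to do nothing but predict the next token of its input. The model, data, and hyperparameters below are illustrative stand-ins (a real run would use a deep causal decoder over web-scale text), not any particular lab's setup.

```python
# Minimal sketch of the generative pre-training objective: next-token prediction.
import torch
import torch.nn as nn

vocab_size, d_model, seq_len = 1000, 64, 32

# Stand-in "transformer": embedding -> one encoder layer -> logits over the vocab.
# (A real model would be a deep causal decoder, masked so position t cannot see
# position t+1; this single unmasked layer is only illustrative.)
model = nn.Sequential(
    nn.Embedding(vocab_size, d_model),
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    nn.Linear(d_model, vocab_size),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (8, seq_len))  # stand-in batch of token ids
inputs, targets = tokens[:, :-1], tokens[:, 1:]      # predict each next token

logits = model(inputs)                               # (batch, seq-1, vocab)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
optimizer.step()
```

Nothing in the objective mentions any particular task; the breadth of competence comes from the breadth of the text being predicted.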

Furthermore, when we measure that competence, it usually falls somewhere within the human range of performance. As a result, it seems sensible to call these models human-level as well. It seems to me like people who protest this conclusion are engaging in goalpost-moving.

More specifically, it seems to me like complaints that modern AI systems are "dumb as rocks" are comparing AI-generated responses to human experts. A quote from the dumb-as-rocks essay:

GenAI also can’t tell you how to make money. One man asked GPT-4 what to do with $100 to maximize his earnings in the shortest time possible. The program had him buy a domain name, build a niche affiliate website, feature some sustainable products, and optimize for social media and search engines. Two months later, our entrepreneur had a moribund website with one comment and no sales. So genAI is bad at business.

That's a bit of a weak-man argument (I specifically searched for "generative ai is dumb as rocks what are we doing"). But it does demonstrate a pattern I've encountered. Often, the alternative to asking an AI is to ask an expert; so it becomes natural to get in the habit of comparing AI answers to expert answers. This becomes what we think about when we judge whether modern AI is "any good" -- but this is not the relevant comparison we should be using when judging whether it is "human level".

I'm certainly not claiming that modern transformers are roughly equivalent to humans in all respects. Memory works very differently for them, for example, although that has been significantly improving over the past year. One year ago I would have compared an LLM to a human with a learning disability and memory problems, but who has read the entire internet and absorbed a lot through sheer repetition. Now, those memory problems are drastically reduced.

Edited to add:

There have been many interesting comments. Two clusters of replies stick out to me:

  1. One clear notion of "human-level" which these machines have not yet satisfied is the competence to hold down a human job. 
  2. There's a notion of "AGI" where the emphasis is on the ability to gain capability, rather than the breadth of capability; this is lacking in modern AI.

Hjalmar Wijk would strongly bet that even if there were more infrastructure in place to help LLMs autonomously get jobs, they would be worse at this than humans. Matthew Barnett points out that economically-minded people have defined AGI in terms of what percentage of human labor the machine is able to replace. I particularly appreciated Kaj Sotala's in-the-trenches description of trying to get GPT4 to do a job.

Kaj says GPT4 is "stupid in some very frustrating ways that a human wouldn't be" -- giving the example of GPT4 claiming that an appointment has been rescheduled, when in fact it does not even have the calendar access required to do that.

Commenters point out that this is not an unusual customer service experience.

I do want to concede that AIs like GPT4 are quantitatively more "disconnected from reality" than humans, in an important way, which will lead them to "lie" like this more often. I also agree that GPT4 lacks the overall skills which would be required for it to make its way through the world autonomously (it would fail if it had to apply for jobs, build working relationships with humans over a long time period, rent its own server space, etc). 

However, in many of these respects, it still feels comparable to the low end of human performance, rather than entirely sub-human. Autonomously making one's way through the world feels very "conjunctive" -- it requires the ability to do a lot of things right.

I never meant to claim that GPT4 is within human range on every single performance dimension; only lots and lots of them. For example, it cannot do realtime vision + motor control at anything approaching human competence (although my perspective leads me to think that this will be possible with comparable technology in the near future).

In his comment, Matthew Barnett quotes Tobias Baumann:

The framing suggests that there will be a point in time when machine intelligence can meaningfully be called “human-level”. But I expect artificial intelligence to differ radically from human intelligence in many ways. In particular, the distribution of strengths and weaknesses over different domains or different types of reasoning is and will likely be different – just as machines are currently superhuman at chess and Go, but tend to lack “common sense”.

I think we find ourselves in a somewhat surprising future where machine intelligence actually turns out to be meaningfully "human-level" across many dimensions at once, although not all.

Anyway, the second cluster of responses I mentioned is perhaps even more interesting. Steven Byrnes has explicitly endorsed "moving the goalposts" for AGI. I do think it can sometimes be sensible to move goalposts; the concept of goalpost-moving is usually used in a negative light, but there are times when it must be done. I wish it could be facilitated by a new term, rather than a redefinition of "AGI"; but I am not sure what to suggest.

I think there is a lot to say about Steven's notion of AGI as the-ability-to-gain-capabilities rather than as a concept of breadth-of-capability. I'll leave most of it to the comment section. To briefly respond: I agree that there is something interesting and important here. I currently think AIs like GPT4 have 'very little' of this rather than none. I also think individual humans have very little of this. In the anthropological record, it looks like humans were not very culturally innovative for more than a hundred thousand years, until the "creative explosion" which resulted in a wide variety of tools and artistic expression. I find it plausible that this required a large population of humans to get going. Individual humans are rarely really innovative; more often, we can only introduce basic variations on existing concepts.

  1. ^

    I'm saying "transformers" every time I am tempted to write "LLMs" because many modern LLMs also do image processing, so the term "LLM" is not quite right.

  2. ^

    Obviously, this claim relies on some background assumption about how you come up with new tasks. Some people are skilled at critiquing modern AI by coming up with specific things which it utterly fails at. I am certainly not claiming that modern AI is literally competent at everything.

    However, it does seem true to me that if you generate and grade test questions in roughly the way a teacher might, the best modern Transformers will usually fall comfortably within human range, if not better.

Comments

Steven Byrnes:

Well I’m one of the people who says that “AGI” is the scary thing that doesn’t exist yet (e.g. FAQ  or “why I want to move the goalposts on ‘AGI’”). I don’t think “AGI” is a perfect term for the scary thing that doesn’t exist yet, but my current take is that “AGI” is a less bad term compared to alternatives. (I was listing out some other options here.) In particular, I don’t think there’s any terminological option that is sufficiently widely-understood and unambiguous that I wouldn’t need to include a footnote or link explaining exactly what I mean. And if I’m going to do that anyway, doing that with “AGI” seems OK. But I’m open-minded to discussing other options if you (or anyone) have any.

Generative pre-training is AGI technology: it creates a model with mediocre competence at basically everything.

I disagree with that—as in “why I want to move the goalposts on ‘AGI’”, I think there’s an especially important category of capability that entails spending a whole lot of time working with a system / idea / domain, and getting to know it and understand it and manipulate it better and better over the course of time. Mathematicians do this with abstruse mathematical objects, but als... (read more)

abramdemski:

Thanks for your perspective! I think explicitly moving the goal-posts is a reasonable thing to do here, although I would prefer to do this in a way that doesn't harm the meaning of existing terms. 

I mean: I think a lot of people did have some kind of internal "human-level AGI" goalpost which they imagined in a specific way, and modern AI development has resulted in a thing which fits part of that image while not fitting other parts, and it makes a lot of sense to reassess things. Goalpost-moving is usually maligned as an error, but sometimes it actually makes sense.

I prefer 'transformative AI' for the scary thing that isn't here yet. I see where you're coming from with respect to not wanting to have to explain a new term, but I think 'AGI' is probably still more obscure for a general audience than you think it is (see, e.g., the snarky complaint here). Of course it depends on your target audience. But 'transformative AI' seems relatively self-explanatory as these things go. I see that you have even used that term at times.

I disagree with that—as in “why I want to move the goalposts on ‘AGI’”, I think there’s an especially important category of capability that entails spending a

... (read more)

I think the kind of sensible goalpost-moving you are describing should be understood as run-of-the-mill conceptual fragmentation, which is ubiquitous in science. As scientific communities learn more about the structure of complex domains (often in parallel across disciplinary boundaries), numerous distinct (but related) concepts become associated with particular conceptual labels (this is just a special case of how polysemy works generally). This has already happened with scientific concepts like gene, species, memory, health, attention and many more. 

In this case, it is clear to me that there are important senses of the term "general" which modern AI satisfies the criteria for. You made that point persuasively in this post. However, it is also clear that there are important senses of the term "general" which modern AI does not satisfy the criteria for. Steven Byrnes made that point persuasively in his response. So far as I can tell you will agree with this. 

If we all agree with the above, the most important thing is to disambiguate the sense of the term being invoked when applying it in reasoning about AI. Then, we can figure out whether the source of our disagreements i... (read more)

My complaint about “transformative AI” is that (IIUC) its original and universal definition is not about what the algorithm can do but rather how it impacts the world, which is a different topic. For example, the very same algorithm might be TAI if it costs $1/hour but not TAI if it costs $1B/hour, or TAI if it runs at a certain speed but not TAI if it runs many OOM slower, or “not TAI because it’s illegal”. Also, two people can agree about what an algorithm can do but disagree about what its consequences would be on the world, e.g. here’s a blog post claiming that if we have cheap AIs that can do literally everything that a human can do, the result would be “a pluralistic and competitive economy that’s not too different from the one we have now”, which I view as patently absurd.

Anyway, “how an AI algorithm impacts the world” is obviously an important thing to talk about, but “what an AI algorithm can do” is also an important topic, and different, and that’s what I’m asking about, and “TAI” doesn’t seem to fit it as terminology.

Yep, I agree that Transformative AI is about impact on the world rather than capabilities of the system. I think that is the right thing to talk about for things like "AI timelines" if the discussion is mainly about the future of humanity. But, yeah, definitely not always what you want to talk about.

I am having difficulty coming up with a term which points at what you want to point at, so yeah, I see the problem.

I agree with Steve Byrnes here. I think I have a better way to describe this.
I would say that the missing piece is 'mastery'. Specifically, learning mastery over a piece of reality. By mastery I am referring to the skillful ability to model, predict, and purposefully manipulate that subset of reality.
I don't think this is an algorithmic limitation, exactly.


Look at the work Deepmind has been doing, particularly with Gato and more recently AutoRT, SARA-RT, RT-Trajectory, UniSim, and Q-transformer. Look at the work being done with the help of Nvidia's new Robot Simulation Gym Environment. Look at OpenAI's recent foray into robotics with Figure AI. This work is held back from being highly impactful (so far) by the difficulty of accurately simulating novel interesting things, the difficulty of learning the pairing of action -> consequence compared to learning a static pattern of data, and the hardware difficulties of robotics.

This is what I think our current multimodal frontier models are mostly lacking. They can regurgitate, and to a lesser extent synthesize, facts that humans wrote about, but not develop novel mastery of subjects and then report back on their findings. This is the... (read more)

Lukas:
From what I understand, I would describe the skill Steven points to as "autonomously and persistently learning at deploy time". How would you feel about calling systems that possess this ability "self-refining intelligences"? I think mastery, as Nathan comments above, is a potential outcome of employing this ability rather than the skill/ability itself.
adastra22:
I think there is a fundamental issue here in the history that is causing confusion. The originators of the AGI term did in fact mean it in the context of narrow vs general AI as described by OP. However they also (falsely!) believed that this general if mediocre capability would be entirely sufficient to kickstart a singularity. So in a sense they simultaneously believed both without contradiction, and you are both right about historical usage. But the events of recent years have shown that the belief AGI=singularity was a false hope/fear.

I propose that LLMs cannot do things in this category at human level, as of today—e.g. AutoGPT basically doesn’t work, last I heard. And this category of capability isn’t just a random cherrypicked task, but rather central to human capabilities, I claim.

What would you claim is a central example of a task which requires this type of learning? ARA type tasks? Agency tasks? Novel ML research? Do you think these tasks certainly require something qualitatively different than a scaled up version of what we have now (pretraining, in-context learning, RL, maybe training on synthetic domain specific datasets)? If so, why? (Feel free to not answer this or just link me what you've written on the topic. I'm more just reacting than making a bid for you to answer these questions here.)

Separately, I think it's non-obvious that you can't make human-competitive sample efficient learning happen in many domains where LLMs are already competitive with humans in other non-learning ways by spending massive amounts of compute doing training (with SGD) and synthetic data generation. (See e.g. efficient-zero.) It's just that the amount of compute/spend is such that you're just effectively doing a bunch ... (read more)

Steven Byrnes:

I’m talking about the AI’s ability to learn / figure out a new system / idea / domain on the fly. It’s hard to point to a particular “task” that specifically tests this ability (in the way that people normally use the term “task”), because for any possible task, maybe the AI happens to already know how to do it.

You could filter the training data, but doing that in practice might be kinda tricky because “the AI already knows how to do X” is distinct from “the AI has already seen examples of X in the training data”. LLMs “already know how to do” lots of things that are not superficially in the training data, just as humans “already know how to do” lots of things that are superficially unlike anything they’ve seen before—e.g. I can ask a random human to imagine a purple colander falling out of an airplane and answer simple questions about it, and they’ll do it skillfully and instantaneously. That’s the inference algorithm, not the learning algorithm.

Well, getting an AI to invent a new scientific field would work as such a task, because it’s not in the training data by definition. But that’s such a high bar as to be unhelpful in practice. Maybe tasks that we think of as more suited to ... (read more)

faul_sname:
I think "doesn't fully understand the concept of superradiance" is a phrase that smuggles in too many assumptions here. If you rephrase it as "can determine when superradiance will occur, but makes inaccurate predictions about physical systems will do in those situations" / "makes imprecise predictions in such cases" / "has trouble distinguishing cases where superradiance will occur vs cases where it will not", all of those suggest pretty obvious ways of generating training data. GPT-4 can already "figure out a new system on the fly" in the sense of taking some repeatable phenomenon it can observe, and predicting things about that phenomenon, because it can write standard machine learning pipelines, design APIs with documentation, and interact with documented APIs. However, the process of doing that is very slow and expensive, and resembles "build a tool and then use the tool" rather than "augment its own native intelligence". Which makes sense. The story of human capabilities advances doesn't look like "find clever ways to configure unprocess rocks and branches from the environment in ways which accomplish our goals", it looks like "build a bunch of tools, and figure out which ones are most useful and how they are best used, and then use our best tools to build better tools, and so on, and then use the much-improved tools to do the things we want".
Alexander Gietelink Oldenziel:
 I don't know how I feel about pushing this conversation further. A lot of people read this forum now. 
Nathan Helm-Burger:
I feel quite confident that all the leading AI labs are already thinking and talking internally about this stuff, and that what we are saying here adds approximately nothing to their conversations. So I don't think it matters whether we discuss this or not. That simply isn't a lever of control we have over the world. There are potentially secret things people might know which shouldn't be divulged, but I doubt this conversation is anywhere near technical enough to be advancing the frontier in any way.
Alexander Gietelink Oldenziel:
Perhaps.
abramdemski:
I think Steven's response hits the mark, but from my own perspective, I would say that a not-totally-irrelevant way to measure something related would be: many-shot learning, particularly in cases where few-shot learning does not do the trick.
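For concreteness, one way such a measurement might be operationalized; the `query_model` interface, prompt format, and task data here are hypothetical placeholders rather than any existing benchmark:

```python
# Hypothetical sketch: compare few-shot vs. many-shot accuracy on a task the model
# is assumed not to know already, as a rough proxy for learning-on-the-fly.
from typing import Callable

def shot_accuracy(query_model: Callable[[str], str],
                  examples: list[tuple[str, str]],
                  test_items: list[tuple[str, str]],
                  n_shots: int) -> float:
    """Prompt with n_shots solved examples, then score held-out test items."""
    prefix = "".join(f"Input: {x}\nOutput: {y}\n\n" for x, y in examples[:n_shots])
    correct = sum(
        query_model(prefix + f"Input: {x}\nOutput:").strip() == y
        for x, y in test_items
    )
    return correct / len(test_items)

# The interesting signal, on this view, is the gap between the two:
# few_shot  = shot_accuracy(query_model, examples, test_items, n_shots=3)
# many_shot = shot_accuracy(query_model, examples, test_items, n_shots=300)
# Large many-shot gains where few-shot fails would indicate some of the
# on-the-fly learning ability being discussed, happening in context.
```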
Random Developer:
Yes, this is almost exactly it. I don't expect frontier LLMs to carry out a complicated, multi-step process and recover from obstacles. I think of this as the "squirrel bird feeder test". Squirrels are ingenious and persistent problem solvers, capable of overcoming chains of complex obstacles. LLMs really can't do this (though Devin is getting closer, if demos are to be believed). Here's a simple test: Ask an AI to open and manage a local pizza restaurant, buying kitchen equipment, dealing with contractors, selecting recipes, hiring human employees to serve or clean, registering the business, handling inspections, paying taxes, etc. None of these are expert-level skills. But frontier models are missing several key abilities. So I do not consider them AGI. However, I agree that LLMs already have superhuman language skills in many areas. They have many, many parts of what's needed to complete challenges like the above. (On principle, I won't try to list what I think they're missing.) I fear the period between "actual AGI and weak ASI" will be extremely short. And I don't actually believe there is any long-term way to control ASI. I fear that most futures lead to a partially-aligned super-human intelligence with its own goals. And any actual control we have will be transitory.
AnthonyC:
  I agree that this is a thing current AI systems don't/can't do, and that aren't considered expert-level skills for humans. I disagree that this is a simple test, or the kind of thing a typical human can do without lots of feedback, failures, or assistance. Many very smart humans fail at some or all of these tasks. They give up on starting a business, mess up their taxes, have a hard time navigating bureaucratic red tape, and don't ever learn to cook. I agree that if an AI could do these things it would be much harder to argue against it being AGI, but it's important to remember that many healthy, intelligent, adult humans can't, at least not reliably. Also, remember that most restaurants fail within a couple of years even after making it through all these hoops. The rate is very high even for experienced restauranteurs doing the managing. I suppose you could argue for a definition of general intelligence that excludes a substantial fraction of humans, but for many reasons I wouldn't recommend it.

Yeah, the precise ability I'm trying to point to here is tricky. Almost any human (barring certain forms of senility, severe disability, etc) can do some version of what I'm talking about. But as in the restaurant example, not every human could succeed at every possible example.

I was trying to better describe the abilities that I thought GPT-4 was lacking, using very simple examples. And it started looking way too much like a benchmark suite that people could target.

Suffice to say, I don't think GPT-4 is an AGI. But I strongly suspect we're only a couple of breakthroughs away. And if anyone builds an AGI, I am not optimistic we will remain in control of our futures.

AnthonyC:
Got it, makes sense, agreed.
No77e:
One way in which "spending a whole lot of time working with a system / idea / domain, and getting to know it and understand it and manipulate it better and better over the course of time" could be solved automatically is just by having a truly huge context window. Example of an experiment: teach a particular branch of math to an LLM that has never seen that branch of math. Maybe humans have just the equivalent of a sort of huge context window spanning selected stuff from their entire lifetimes, and so this kind of learning is possible for them.
abramdemski:
I don't think it is sensible to model humans as "just the equivalent of a sort of huge context window", because this is not a particularly good computational model of how human learning and memory work; but I do think that the technology behind the increasing context size of modern AIs contributes to them having a small but nonzero amount of the thing Steven is pointing at, due to the spontaneous emergence of learning algorithms.
[anonymous]:
You also have a simple algorithm problem. Humans learn by replacing bad policy with good. Aka a baby replaces "policy that drops objects picked up" -> "policy that usually results in object retention". This is because, at a mechanistic level, the baby tries many times to pick up and retain objects, and a fixed amount of circuitry in their brain has the connections that resulted in a drop down-weighted and the ones that resulted in retention reinforced. This means that over time, as the baby learns, the compute cost for motor manipulation remains constant. Technically O(1), though that's a bit of a confusing way to express it.

With in-context-window learning, you can imagine an LLM + robot recording:

Robotic token string: <string of robotic policy tokens 1> : outcome, drop
Robotic token string: <string of robotic policy tokens 2> : outcome, retain
Robotic token string: <string of robotic policy tokens 2> : outcome, drop

And so on, extending and consuming all of the machine's context window, and every time the machine decides which tokens to use next it needs O(n log n) compute to consider all the tokens in the window. (Used to be n^2, this is a huge advance.)

This does not scale. You will not get capable or dangerous AI this way. Obviously you need to compress that linear list of outcomes from different strategies to update the underlying network that generated them, so it is more likely to output tokens that result in success. Same for any other task you want the model to do. In-context learning scales poorly. This also makes it safe....
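A toy rendering of the scaling argument above; the cost model is deliberately schematic (the exponent depends on the attention variant, and the constants are made up), but it shows the shape of the claim that keeping every trial in context grows per-decision cost while distilling trials into weights does not:

```python
# Schematic cost model: per-decision compute as the number of recorded trials grows.
def context_learning_cost(num_trials: int, tokens_per_trial: int,
                          exponent: float = 2.0) -> float:
    """Cost of one decision when all past trials stay in the context window.

    exponent=2.0 corresponds to vanilla attention; sub-quadratic variants use a
    smaller exponent, but the cost still grows with the length of the history.
    """
    context_len = num_trials * tokens_per_trial
    return float(context_len) ** exponent

def weight_learning_cost(forward_cost: float = 1e6) -> float:
    """Cost of one decision when past trials were compressed into the weights."""
    return forward_cost  # constant: the network is the same size regardless

for n in (10, 1_000, 100_000):
    print(f"trials={n:>6}  context: {context_learning_cost(n, 50):.2e}"
          f"  weights: {weight_learning_cost():.2e}")
```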
Alexander Gietelink Oldenziel:
Yes. This seems so obviously true to me in way that it is profoundly mysterious to me that almost everybody else seems to disagree. Then again, probably it's for the best.  Maybe this is the one weird timeline where we gmi because everybody thinks we already have AGI. 

Furthermore, when we measure that competence, it usually falls somewhere within the human range of performance. 

I think that for this to be meaningfully true, the LLM should be able to actually replace humans at a given task. There are some very specific domains in which this is doable (e.g. creative writing assistant), but it seems to me that they are still mostly too unreliable for this. 

I've worked with getting GPT-4 to act as a coach for business customers. This is one of the domains that it excels at - tasks can be done entirely inside a chat, the focus is on asking users questions and paraphrasing them so hallucinations are usually not a major issue. And yet it's stupid in some very frustrating ways that a human wouldn't be. 

For example, our users would talk with the bot at specific times, which they would schedule using a separate system. Sometimes they would ask the bot to change their scheduled time. The bot wasn't interfaced to the actual scheduling system, but it had been told to act like a helpful coach, so by default it would say something like "of course, I have moved your session time to X". This was bad, since the user would think the session had been... (read more)

I don't mean to belabor the point as I think it's reasonable, but worth pointing out that these responses seem within the range of below average human performance.

I was going to say the same. I can't count the number of times a human customer service agent has tried to do something for me, or told me they already did do something for me, only for me to later find out they were wrong (because of a mistake they made), lying (because their scripts required it or their metrics essentially forced them into it), or foiled (because of badly designed backend systems opaque to both of us). 

Hjalmar_Wijk:

I agree the term AGI is rough and might be more misleading than it's worth in some cases. But I do quite strongly disagree that current models are 'AGI' in the sense most people intend.

Examples of very important areas where 'average humans' plausibly do way better than current transformers:

  • Most humans succeed in making money autonomously. Even if they might not come up with a great idea to quickly 10x $100 through entrepreneurship, they are able to find and execute jobs that people are willing to pay a lot of money for. And many of these jobs are digital and could in theory be done just as well by AIs. Certainly there is a ton of infrastructure built up around humans that help them accomplish this which doesn't really exist for AI systems yet, but if this situation was somehow equalized I would very strongly bet on the average human doing better than the average GPT-4-based agent. It seems clear to me that humans are just way more resourceful, agentic, able to learn and adapt etc. than current transformers are in key ways.
  • Many humans currently do drastically better on the METR task suite (https://github.com/METR/public-tasks) than any AI agents, and I think this captures some i
... (read more)

Current AIs suck at agency skills. Put a bunch of them in AutoGPT scaffolds and give them each their own computer and access to the internet and contact info for each other and let them run autonomously for weeks and... well I'm curious to find out what will happen, I expect it to be entertaining but not impressive or useful. Whereas, as you say, randomly sampled humans would form societies and find jobs etc.

This is the common thread behind all your examples Hjalmar. Once we teach our AIs agency (i.e. once they have lots of training-experience operating autonomously in pursuit of goals in sufficiently diverse/challenging environments that they generalize rather than overfit to their environment) then they'll be AGI imo. And also takeoff will begin, takeover will become a real possibility, etc. Off to the races.
 

Hjalmar_Wijk:
Yeah, I agree that lack of agency skills are an important part of the remaining human<>AI gap, and that it's possible that this won't be too difficult to solve (and that this could then lead to rapid further recursive improvements). I was just pointing toward evidence that there is a gap at the moment, and that current systems are poorly described as AGI.
Daniel Kokotajlo:
Yeah I wasn't disagreeing with you to be clear. Just adding.
abramdemski:
With respect to METR, yeah, this feels like it falls under my argument against comparing performance against human experts when assessing whether AI is "human-level". This is not to deny the claim that these tasks may shine a light on fundamentally missing capabilities; as I said, I am not claiming that modern AI is within human range on all human capabilities, only enough that I think "human level" is a sensible label to apply. However, the point about autonomously making money feels more hard-hitting, and has been repeated by a few other commenters. I can at least concede that this is a very sensible definition of AGI, which pretty clearly has not yet been satisfied. Possibly I should reconsider my position further. The point about forming societies seems less clear. Productive labor in the current economy is in some ways much more complex and harder to navigate than it would be in a new society built from scratch. The Generative Agents paper gives some evidence in favor of LLM-base agents coordinating social events.
mic:

I think humans doing METR's tasks are more like "expert-level" rather than average/"human-level". But current LLM agents are also far below human performance on tasks that don't require any special expertise.

From GAIA:

GAIA proposes real-world questions that require a set of fundamental abilities such as reasoning, multi-modality handling, web browsing, and generally tool-use proficiency. GAIA questions are conceptually simple for humans yet challenging for most advanced AIs: we show that human respondents obtain 92% vs. 15% for GPT-4 equipped with plugins. [Note: The latest highest AI agent score is now 39%.] This notable performance disparity contrasts with the recent trend of LLMs outperforming humans on tasks requiring professional skills in e.g. law or chemistry. GAIA's philosophy departs from the current trend in AI benchmarks suggesting to target tasks that are ever more difficult for humans. We posit that the advent of Artificial General Intelligence (AGI) hinges on a system's capability to exhibit similar robustness as the average human does on such questions.

And LLMs and VLLMs seriously underperform humans in VisualWebArena, which tests for simple web-browsing capabilities... (read more)

Nathan Helm-Burger:
I think METR is aiming for expert-level tasks, but I think their current task set is closer in difficulty to GAIA and VisualWebArena than what I would consider human expert-level difficulty. It's tricky to decide though, since LLMs circa 2024 seem really good at some stuff that is quite hard for humans, and bad at a set of stuff easy for humans. If the stuff they are currently bad at gets brought up to human level, without a decrease in skill at the stuff LLMs are above-human at, the result would be a system well into the superhuman range. So where we draw the line for human level necessarily involves a tricky value-weighting problem of the various skills involved.
[anonymous]:
This is what jumped out at me when I read your post. A transformer LLM can be described as a "disabled human who is blind to motion and needs seconds to see a still image, paralyzed, costs expensive resources to live, cannot learn, and has no long term memory". Oh, and they finished high school and some college across all majors. "What job can they do and how much will you pay?" "Can they support themselves financially?" And you end up with "well, for most of human history, a human with those disabilities would be a net drain on their tribe. Sometimes they were abandoned to die as a consequence."

And it implies something like: can perform robot manipulation and wash dishes, or pass the "make a cup of coffee in a stranger's house" test. And reliably enough to be paid minimum wage, or at least some money under the table, to do a task like this. We really could be 3-5 years from that, if all you need for AGI is "video perception, online learning, long term memory, and 5-25th percentile human-like robotics control". 3 of the 4 elements exist in someone's lab right now; the robotics control maybe not.

This "economic viability test" has an interesting followup question. It's possible for a human to remain alive, living in a car or tent under a bridge, for a few dollars an hour. This is the "minimum income to survive" for a human. But a robotic system may blow a $10,000 part every 1000 hours, or need $100 an hour of rented B200 compute to think with. So the minimum hourly rate could be higher. I think maybe we should use the human dollar figures for this "can survive" level of AGI capabilities test, since robotic and compute costs are so easy and fast to optimize.

Summary: AGI when the AI systems can do a variety of general tasks, completely, that you would pay a human employee to do, even a low-end one. Transformative AGI (one of many thresholds) when the AI system can do a task and be paid more than the hourly cost of compute + robotic hourly costs. Note "transformation" is reach
abramdemski:
The replace-human-labor test gets quite interesting and complex when we start to time-index it. Specifically, two time-indexes are needed: a 'baseline' time (when humans are doing all the relevant work) and a comparison time (where we check how much of the baseline economy has been automated). Without looking anything up, I guess we could say that machines have already automated 90% of the economy, if we choose our baseline from somewhere before industrial farming equipment, and our comparison time somewhere after. But this is obviously not AGI. A human who can do exactly what GPT4 can do is not economically viable in 2024, but might have been economically viable in 2020.
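A sketch of how the time-indexed version of the test could be written down; the task shares below are invented purely to illustrate the baseline-dependence, not real economic data:

```python
# Hypothetical formalization of the time-indexed replace-human-labor test.
# Each task is weighted by its share of the *baseline* economy, so automation is
# always scored against what humans were doing at the baseline time.
def automation_fraction(baseline_tasks: dict[str, float],
                        automated_at_comparison: set[str]) -> float:
    """baseline_tasks maps task -> share of baseline-time labor (shares sum to 1)."""
    return sum(share for task, share in baseline_tasks.items()
               if task in automated_at_comparison)

# Invented example: with a pre-industrial-farming baseline, most of the measured
# economy is already automated; a 2020 baseline would give a much smaller number.
baseline_tasks = {"plowing": 0.5, "harvesting": 0.4, "bookkeeping": 0.1}
print(automation_fraction(baseline_tasks, {"plowing", "harvesting"}))  # ~0.9
```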
[anonymous]:
Yes, I agree. Whenever I think of things like this, I focus on how what matters, in the sense of "when will AGI be transformational", is the idea of criticality. I have written on it earlier, but the simple idea is that our human world changes rapidly when AI capabilities in some way lead to more AI capabilities at a fast rate. This whole "is this AGI" thing is totally irrelevant; all that matters is criticality. You can imagine subhuman systems using AGI reaching criticality, and superhuman systems being needed. (Note ordinary humans do have criticality, albeit with a doubling time of about 20 years.) There are many forms of criticality, and the first one unlocked that won't quench easily starts the singularity. Examples:

  • Investment criticality: each AI demo leads to more investment than the total cost, including failures at other companies, to produce the demo. Quenches if investors run out of money or find a better investment sector.
  • Financial criticality: AI services bring in more revenue than they cost, and each reinvestment effectively has a greater than 10 percent ROI. This quenches once further reinvestments in AI don't pay for themselves.
  • Partial self-replication criticality: robots can build most of the parts used in themselves, using post-2020 automation. This quenches at the new equilibrium determined by the percent of automation; e.g. 90 percent automation makes each remaining human worker 10 times as productive, so we quench at 10x the number of robots possible if every worker on earth were building robots.
  • Full self-replication criticality: this quenches when the matter mineable in the solar system is all consumed and made into either more robots or waste piles.
  • AI research criticality: AI systems research and develop better AI systems. Quenches when you find the most powerful AI the underlying compute and data can support.

You may notice two are satisfied, one at the end of 2022, one later in 2023. So in that sense the Singularity began and will accel
leogao:

I believe that the important part of generality is the ability to handle new tasks. In particular, I disagree that transformers are actually as good at handling new tasks as humans are. My mental model is that modern transformers are not general tools, but rather an enormous Swiss army knife with billions of specific tools that compose together to only a limited extent. (I think human intelligence is also a Swiss army knife and not the One True Tool, but it has many fewer tools that are each more general and more compositional with the other tools.)

I think this is heavily confounded because the internet is so huge that it's actually quite hard to come up with things that are not already on the internet. Back when GPT-3 first came out, I used to believe that widening the distribution to cover every task ever was a legitimate way to solve the generality problem, but I no longer believe this. (I think in particular this would have overestimated the trajectory of AI in the past 4 years)

One way to see this is that the most interesting tasks are ones that nobody has ever done before. You can't just widen the distribution to include discovering the cure for cancer, or solving alignment. T... (read more)

Nathan Helm-Burger:
I think my comment (link https://www.lesswrong.com/posts/gP8tvspKG79RqACTn/modern-transformers-are-agi-and-human-level?commentId=RcmFf5qRAkTA4dmDo ) relates to yours. I think there is a tool/process/ability missing that I'd call mastery-of-novel-domain. I also think there's a missing ability of "integrating known facts to come up with novel conclusions pointed at by multiple facts". Unsure what to call this. Maybe knowledge-integration or worldview-consolidation?

I agree with virtually all of the high-level points in this post — the term "AGI" did not seem to usually initially refer to a system that was better than all human experts at absolutely everything, transformers are not a narrow technology, and current frontier models can meaningfully be called "AGI".

Indeed, my own attempt to define AGI a few years ago was initially criticized for being too strong, as I initially specified a difficult construction task, which was later weakened to being able to "satisfactorily assemble a (or the equivalent of a) circa-2021 Ferrari 312 T4 1:8 scale automobile model" in response to pushback. These days the opposite criticism is generally given: that my definition is too weak.

However, I do think there is a meaningful sense in which current frontier AIs are not "AGI" in a way that does not require goalpost shifting. Various economically-minded people have provided definitions for AGI that were essentially "can the system perform most human jobs?" And as far as I can tell, this definition has held up remarkably well.

For example, Tobias Baumann wrote in 2018,

A commonly used reference point is the attainment of “human-level” general intelligence (also cal

... (read more)
Nathan Helm-Burger:
I think my comment is related to yours: https://www.lesswrong.com/posts/gP8tvspKG79RqACTn/modern-transformers-are-agi-and-human-level?commentId=RcmFf5qRAkTA4dmDo Also see Leogao's comment and my response to it: https://www.lesswrong.com/posts/gP8tvspKG79RqACTn/modern-transformers-are-agi-and-human-level?commentId=YzM6cSonELpjZ38ET
Nisan:

I'm saying "transformers" every time I am tempted to write "LLMs" because many modern LLMs also do image processing, so the term "LLM" is not quite right.

"Transformer"'s not quite right either because you can train a transformer on a narrow task. How about foundation model: "models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks".

I think this mostly just reveals that "AGI" and "human-level" are bad terms.

Under your proposed usage, modern transformers are (IMO) brutally non-central with respect to the terms "AGI" and "human-level" from the perspective of most people.

Unfortunately, I don't think there is any definition of "AGI" and "human-level" which:

  • Corresponds to the words used.
  • Is also central from the perspective of most people hearing the words.

I prefer the term "transformative AI", ideally paired with a definition.

(E.g. in The case for ensuring that powerful AIs are controlled, we use the terms "transformatively useful AI" and "early transformatively useful AI", both of which we define. We were initially planning on some term like "human-level", but we ran into a bunch of issues with using this term due to wanting a more precise concept and thus instead used a concept like not-wildly-qualitatively-superhuman-in-dangerous-domains or non-wildly-qualitatively-superhuman-in-general-relevant-capabilities.)

I should probably taboo human-level more than I currently do, this term is problematic.

I also like "transformative AI."

I don't think of it as "AGI" or "human-level" being an especially bad term - most category nouns are bad terms (like "heap"), in the sense that they're inherently fuzzy gestures at the structure of the world. It's just that in the context of 2024, we're now inside the fuzz.

A mile away from your house, "towards your house" is a useful direction. Inside your front hallway, "towards your house" is a uselessly fuzzy direction - and a bad term. More precision is needed because you're closer.

AnthonyC:
This is an excellent short mental handle for this concept. I'll definitely be using it.
abramdemski:
Yeah, I think nixing the terms 'AGI' and 'human-level' is a very reasonable response to my argument. I don't claim that "we are at human-level AGI now, everyone!" has important policy implications (I am not sure one way or the other, but it is certainly not my point).
kromem:
'Superintelligence' seems more fitting than AGI for the 'transformative' scope. The problem with "transformative AI" as a term is that subdomain transformation will occur at staggered rates. We saw text based generation reach thresholds that it took several years to reach for video just recently, as an example. I don't love 'superintelligence' as a term, and even less as a goal post (I'd much rather be in a world aiming for AI 'superwisdom'), but of the commonly used terms it seems the best fit for what people are trying to describe when they describe an AI generalized and sophisticated enough to be "at or above maximal human competency in most things." The OP post, at least to me, seems correct in that AGI as a term belongs to its foundations as a differentiator from narrow scoped competencies in AI, and that the lines for generalization are sufficiently blurred at this point with transformers we should stop moving the goal posts for the 'G' in AGI. And at least from what I've seen, there's active harm in the industry where 'AGI' as some far future development leads people less up to date with research on things like world models or prompting to conclude that GPTs are "just Markov predictions" (overlooking the importance of the self-attention mechanism and the surprising results of its presence on the degree of generalization). I would wager the vast majority of consumers of models underestimate the generalization present because in addition to their naive usage of outdated free models they've been reading article after article about how it's "not AGI" and is "just fancy autocomplete" (reflecting a separate phenomenon where it seems professional writers are more inclined to write negative articles about a technology perceived as a threat to writing jobs than positive articles). As this topic becomes more important, it might be useful for democracies to have a more accurately informed broader public, and AGI as a moving goal post seems counterproductive to those
ryan_greenblatt:
To me, superintelligence implies qualitatively much smarter than the best humans. I don't think this is needed for AI to be transformative. Fast and cheap-to-run AIs which are as qualitatively smart as humans would likely be transformative.
kromem:
Agreed - I thought you wanted that term for replacing how OP stated AGI is being used in relation to x-risk. In terms of "fast and cheap and comparable to the average human" - well, then for a number of roles and niches we're already there. Sticking with the intent behind your term, maybe "generally transformative AI" is a more accurate representation for a colloquial 'AGI' replacement?
ryan_greenblatt:
Oh, by "as qualitatively smart as humans" I meant "as qualitatively smart as the best human experts". I also maybe disagree with: Or at least the % of economic activity covered by this still seems low to me.
AnthonyC:
Oh, by "as qualitatively smart as humans" I meant "as qualitatively smart as the best human experts". I think that is more comparable to saying "as smart as humanity." No individual human is as smart as humanity in general.
Nisan:

I agree 100%. It would be interesting to explore how the term "AGI" has evolved, maybe starting with Goertzel and Pennachin 2007 who define it as:

a software program that can solve a variety of complex problems in a variety of different domains, and that controls itself autonomously, with its own thoughts, worries, feelings, strengths, weaknesses and predispositions

On the other hand, Stuart Russell testified that AGI means

machines that match or exceed human capabilities in every relevant dimension

so the experts seem to disagree. (On the other hand, Russell and Norvig's textbook cites Goertzel and Pennachin 2007 when mentioning AGI. Confusing.)

In any case, I think it's right to say that today's best language models are AGIs for any of these reasons:

  • They're not narrow AIs.
  • They satisfy the important parts of Goertzel and Pennachin's definition.
  • The tasks they can perform are not limited to a "bounded" domain.

In fact, GPT-2 is an AGI.

Cf. DeepMind's "Levels of AGI" paper (https://arxiv.org/abs/2311.02462), calling modern transformers "emerging AGI" there, but also defining "expert", "virtuoso", and "superhuman" AGI.

I agree. GPT-4 is an AGI for the kinds of tasks I care about such as programming and writing. ChatGPT4 in its current form (with the ability to write and execute code) seems to be at the expert human level in many technical and quantitative subjects such as statistics and programming.

For example, last year I was amazed when I gave ChatGPT4 one of my statistics past exam papers and it got all the questions right except for one which involved interpreting an image of a linear regression graph. The questions typically involve understanding the question, think... (read more)

Perhaps AGI but not human level. A system that cannot drive a car or cook a meal is not human level. I suppose it's conceivable that the purely cognitive functions are at human level, but considering the limited economic impact I seriously doubt it. 

Maybe a better question than "time to AGI" is time to mundanely transformative AGI. I think a lot of people have a model of the near future in which a lot of current knowledge work (and other work) is fully or almost-fully automated, but at least as of right this moment, that hasn't actually happened yet (despite all the hype).

For example, one of the things current A(G)Is are supposedly strongest at is writing code, but I would still rather hire a (good) junior software developer than rely on currently available AI products for just about any real program... (read more)

abramdemski:
Yeah, I don't disagree with this -- there's a question here about which stories about AGI should be thought of as defining vs extrapolating consequences of that definition based on a broader set of assumptions. The situation we're in right now, as I see it, is one where some of the broader assumptions turn out to be false, so definitions which seemed relatively clear become more ambiguous. I'm privileging notions about the capabilities over notions about societal consequences, partly because I see "AGI" as more of a technology-oriented term and less of a social-consequences-oriented term. So while I would agree that talk about AGI from within the AGI community historically often went along with utopian visions, I pretty strongly think of this as speculation about impact, rather than definitional.

I agree it is not sensible to make "AGI" a synonym for superintelligence (ASI) or the like. But your approach to compare it to human intelligence seems unprincipled as well.

In terms of architecture, there is likely no fundamental difference between humans and dogs. Humans are probably just a lot smarter than dogs, but not significantly more general. Similar to how a larger LLM is smarter than a smaller one, but not more general. If you doubt this, imagine we had a dog-level robotic AI. Plausibly, we soon thereafter would also have human-level AI by growing... (read more)

abramdemski:
I'm not sure how you intend your predictive-coding point to be understood, but from my perspective, it seems like a complaint about the underlying tech rather than the results, which seems out of place. If backprop can do the job, then who cares? I would be interested to know if you can name something which predictive coding has currently accomplished, and which you believe to be fundamentally unobtainable for backprop. lsusr thinks the two have been unified into one theory. I don't buy that animals somehow plug into "base reality" by predicting sensory experiences, while transformers somehow miss out on it by predicting text and images and video. Reality has lots of parts. Animals and transformers both plug into some limited subset of it. I would guess raw transformers could handle some real-time robotics tasks if scaled up sufficiently, but I do agree that raw transformers would be missing something important architecture-wise. However, I also think it is plausible that only a little bit more architecture is needed (and, that the 'little bit more' corresponds to things people have already been thinking about) -- things such as the features added in the generative agents paper. (I realize, of course, that this paper is far from realtime robotics.) Anyway, high uncertainty on all of this.
cubefox:
No, I was talking about the results. lsusr seems to use the term in a different sense than Scott Alexander or Yann LeCun. In their sense it's not an alternative to backpropagation, but a way of constantly predicting future experience and to constantly update a world model depending on how far off those predictions are. Somewhat analogous to conditionalization in Bayesian probability theory. LeCun talks about the technical issues in the interview above. In contrast to next-token prediction, the problem of predicting appropriate sense data is not yet solved for AI. Apart from doing it in real time, the other issue is that (e.g.) for video frames a probability distribution over all possible experiences is not feasible, in contrast to text tokens. The space of possibilities is too large, and some form of closeness measure is required, or imprecise predictions, that only predict "relevant" parts of future experience. In the meantime OpenAI did present Sora, a video generation model. But according to the announcement, it is a diffusion model which generates all frames in parallel. So it doesn't seem like a step toward solving predictive coding. Edit: Maybe it eventually turns out to be possible to implement predictive coding using transformers. Assuming this works, it wouldn't be appropriate to call transformers AGI before that achievement was made. Otherwise we would have to identify the invention of "artificial neural networks" decades ago with the invention of AGI, since AGI will probably be based on ANNs. My main point is that AGI (a system with high generality) is something that could be scaled up (e.g. by training a larger model) to superintelligence without requiring major new intellectual breakthroughs, breakthroughs like figuring out how to get predictive coding to work. This is similar to how a human brain seems to be broadly similar to a dog brain, but larger, and thus didn't involve a major "breakthrough" in the way it works. Smarter animals are mostly smar
abramdemski:
I haven't watched the LeCun interview you reference (it is several hours long, so relevant time-stamps to look at would be appreciated), but this still does not make sense to me -- backprop already seems like a way to constantly predict future experience and update, particularly as it is employed in LLMs. Generating predictions first and then updating based on error is how backprop works. Some form of closeness measure is required, just like you emphasize.
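For what it's worth, the predict-then-update-on-error loop being described is just the shape of an ordinary training step; a minimal sketch with an illustrative model and synthetic data (not a claim about how any particular LLM is trained):

```python
# Minimal predict / measure error / update loop, the pattern described above.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                          # stand-in predictor
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for _ in range(100):
    x = torch.randn(32, 10)                       # new "experience" arrives
    outcome = x.sum(dim=1, keepdim=True)          # what actually happens
    prediction = model(x)                         # 1. predict
    error = ((prediction - outcome) ** 2).mean()  # 2. measure the surprise
    optimizer.zero_grad()
    error.backward()                              # 3. propagate the error...
    optimizer.step()                              # 4. ...and update the model
```

Whether this counts as "predictive coding" in LeCun's sense is exactly the point under dispute; the sketch only illustrates that prediction and error-driven updating are already the core of the standard setup.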
cubefox:
Well, backpropagation alone wasn't even enough to make efficient LLMs feasible. It took decades, till the invention of transformers, to make them work. Similarly, knowing how to make LLMs is not yet sufficient to implement predictive coding. LeCun talks about the problem in a short section here from 10:55 to 14:19.
jmh:

I found this an interesting but complex read for me -- both the post and the comments. I found a number of what seemed good points to consider, but I seem to be coming away from the discussion thinking about the old parable of the blind men and the elephant.

Ilio (1mo)
That's a great analogy. To me the strength of the OP is to pinpoint that LLMs already exhibit the kind of general ability we would expect from AGI, and the weakness is to forget that LLMs do not exhibit some specific abilities most thought were easy, such as the agency that even clownfish exhibit. In a way, the universe is once again telling us we should rethink what intelligence is. Chess is hard and doing the dishes is easy? Nope. Language is hard and agency is central? Nope.
jmh (1mo)
I'm not even sure where I would start, but I do wonder whether John Wentworth's concept of Natural Latents might offer a useful framework for better grounding the subject in this type of discussion.
Ilio (1mo)
My understanding of this framework is probably too raw to be sound (a natural latent is a convolution basis useful for analyzing natural inputs, and it's powerful because function composition is powerful), but it could fit nicely with the idea that agency is what neurons in the biological movement areas detect.
Phil H (1mo)

I very much agree with this. You're not the only one! I've been thinking for a while that actually, AGI is here (by all previous definitions of AGI). 

Furthermore, I want to suggest that the people who say we don't yet have AGI will in fact never be satisfied by what an AI does. The reason is this: an AI will never ever act like a human. By the time its abilities at basic human things like speaking and driving are up to human standards (which has already happened), its abilities in other areas, like playing computer games and calculating, will far exceed ours... (read more)

I've gotten push-back from almost everyone I've spoken with about this

I had also expected this reaction, and I had always thought I was the only one who thinks we basically achieved AGI with ~GPT-3. But looking at the upvotes on this post, I wonder whether this is a much more common view.

I agree that "general intelligence" is a concept that already applies to modern LLMs, which are often quite capable across different domains. I definitely agree that LLMs are, in certain areas, already capable of matching or outperforming a (non-expert) human.

There is some value in talking about just that alone, I think. There seems to be a bias in play that prevents many from recognizing AI as capable. A lot of people are all too eager to dismiss AI capabilities, whether out of some belief in human exceptionalism, some degree of insecurity, some manner of... (read more)

adastra22 (20d)

Thank you for writing this. I have been making the same argument for about two years now, but you have argued the case better here than I could have. As you note in your edit, it is possible for goalposts to be purposefully moved, but this irks me for a number of reasons beyond mere obstinacy:

  1. The transition from narrow AI to truly general AI is socially transformative, and we are living through that transition right now. We should be having a conversation about this, but we are being hindered from doing so because the very concept of Artificial General Intelligence

... (read more)

Obvious bait is obvious bait, but here goes.

Transformers are not AGI because they will never be able to "figure something out" the way humans can.

If a human is given the rules of Sudoku, they first try filling in the squares randomly.  After a while, they notice that certain things work and certain things don't.  They begin to define heuristics for the things that work (for example, if all but one number already appears in the same row or column as a cell, the remaining number goes in that cell).  Eventually they work out a complete algorithm for solving Sudoku... (read more)
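The kind of heuristic being described can be written down directly. A minimal sketch (my own toy code, extended to also check the 3x3 box, not anything produced by an LLM) of the "only one number left for this cell" rule, for a 9x9 grid stored as a list of lists with 0 for empty cells:

```python
def naked_single(grid, row, col):
    """If the row, column, and 3x3 box around (row, col) already contain
    eight of the nine digits, return the missing digit; otherwise None."""
    if grid[row][col] != 0:
        return None
    seen = set(grid[row]) | {grid[r][col] for r in range(9)}
    br, bc = 3 * (row // 3), 3 * (col // 3)
    seen |= {grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)}
    candidates = set(range(1, 10)) - seen
    return candidates.pop() if len(candidates) == 1 else None

def fill_naked_singles(grid):
    """Repeatedly apply the heuristic until it stops making progress."""
    progress = True
    while progress:
        progress = False
        for r in range(9):
            for c in range(9):
                digit = naked_single(grid, r, c)
                if digit is not None:
                    grid[r][c] = digit
                    progress = True
    return grid
```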

abramdemski (1mo)
Yeah, I didn't do a very good job in this respect. I am not intending to talk about a transformer by itself. I am intending to talk about transformers with the sorts of bells and whistles that they are currently being wrapped with. So not just transformers, but also not some totally speculative wrapper.
Matt Goldenberg (1mo)
It seems likely to me that you could create a prompt that would have a transformer do this.
Logan Zoellner (1mo)
In the technical sense that you can implement arbitrary programs by prompting an LLM (they are Turing-complete), sure. In a practical sense, no. GPT-4 can't even play tic-tac-toe.  Manifold spent a year getting GPT-4 to implement (much less discover) the algorithm for Sudoku and failed.

Now imagine trying to implement a serious backtracking algorithm.  Stockfish checks millions of positions per turn of play.  The attention window for your "backtracking transformer" is going to have to be at least {size of chess board state}*{number of positions evaluated}. And because of quadratic attention, training it is going to take on the order of {number of parameters}*({chess board state size}*{number of positions evaluated})^2.

Even with very generous assumptions for {number of parameters} and {chess board state}, there's simply no way we could train such a model this century (and that's assuming Moore's law somehow continues that long).
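To make the back-of-the-envelope concrete, here is a minimal sketch of the estimate being gestured at. Every number below is an assumption I picked purely for illustration (a roughly FEN-sized board encoding, a Stockfish-like search width, a 1B-parameter model), not a figure from the comment:

```python
# Back-of-envelope for the "backtracking transformer" estimate above.
# All numbers are illustrative assumptions, not measurements.
params = 1e9                 # assumed model size: 1B parameters
board_state_tokens = 80      # assumed tokens per chess position (~FEN-sized encoding)
positions_evaluated = 1e6    # assumed positions searched per move (Stockfish-like)

context_tokens = board_state_tokens * positions_evaluated    # required attention window
training_cost = params * context_tokens ** 2                 # the formula quoted above

print(f"context window: {context_tokens:.2e} tokens")
print(f"training cost:  {training_cost:.2e} (in the comment's own units)")
```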
Matt Goldenberg (1mo)
The question is how far we can get with in-context learning.  If we filled Gemini's 10 million tokens with Sudoku rules and examples, showing where it went wrong each time, would it generalize? I'm not sure, but I think it's possible.
Logan Zoellner (1mo)
It certainly wouldn't generalize to, e.g., Hidoku.
AnthonyC (1mo)
I agree that filling a context window with worked Sudoku examples wouldn't help for solving Hidoku. But there is a common element to the two games. Both look like math, but aren't about numbers except in that there's an ordered sequence; the sequence of items could just as easily be an alphabetically ordered set of words. Both are much more about geometry, or topology, or graph theory: how a set of points is connected. I would not be surprised to learn that there is a set of tokens containing no examples of either game which, combined with a checker (like your link has) that points out when a mistake has been made, enables solving a wide range of similar games.

I think one of the things humans do better than current LLMs is that, as we learn a new task, we vary what counts as a token and how we nest tokens. How do we chunk things? In Sudoku, each box is a chunk, each row and column is a chunk, the board is a chunk, "Sudoku" is a chunk, "checking an answer" is a chunk, "playing a game" is a chunk, and there are probably lots of others I'm ignoring.

I don't think just prompting an LLM with the full text of "How to Solve It" in its context window would get us to a solution, but at some level I do think it's possible to make explicit, in words and diagrams, what it is humans do to solve things, in a way that is legible to it. I think it largely resembles repeatedly telescoping in and out, to lower and higher abstractions, applying different concepts and contexts, locally sanity-checking ourselves, correcting locally obvious insanity, and continuing until we hit some sort of reflective consistency. Different humans have different limits on what contexts they can successfully do this in.
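A minimal sketch of the "generator plus mistake-pointing checker" loop described above, with `propose_solution` as a hypothetical stand-in for whatever model call you like (not a real API), and a deliberately toy checker that only looks for duplicate digits:

```python
from typing import Callable, List, Optional

Grid = List[List[int]]

def find_mistake(grid: Grid) -> Optional[str]:
    """Toy checker: report the first row or column containing a duplicate digit."""
    lines = [(f"row {i}", row) for i, row in enumerate(grid)]
    lines += [(f"column {j}", [row[j] for row in grid]) for j in range(len(grid[0]))]
    for name, values in lines:
        digits = [v for v in values if v != 0]
        if len(digits) != len(set(digits)):
            return f"duplicate digit in {name}"
    return None

def solve_with_feedback(propose_solution: Callable[[str], Grid],
                        puzzle: str, max_rounds: int = 10) -> Optional[Grid]:
    """Ask the generator for a grid, hand any mistake back as feedback, repeat."""
    feedback = ""
    for _ in range(max_rounds):
        grid = propose_solution(puzzle + feedback)   # hypothetical model call
        mistake = find_mistake(grid)
        if mistake is None:
            return grid
        feedback = f"\nYour last attempt was wrong: {mistake}. Try again."
    return None
```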
Logan Zoellner (1mo)
Absolutely.  I don't think it's impossible to build such a system; in fact, I think a transformer is probably about 90% of the way there.  You'd need to add trial and error, some kind of long-term memory/fine-tuning, and a handful of default heuristics.  Scale will help too, but no amount of scale alone will get us there.
ReaderM (1mo)
GPT-4 can play tic-tac-toe: https://chat.openai.com/share/75758e5e-d228-420f-9138-7bff47f2e12d
Logan Zoellner (1mo)
Sure: 4000 words (~8000 tokens) to play a 9-square, 9-turn game, with the entire strategy written out by a human.  Now extrapolate that to chess, Go, or any serious game. And this doesn't address my actual point at all, which is that transformers cannot teach themselves to play a game.
ReaderM (1mo)
Ok? That's how you teach anybody anything.  LLMs can play chess and poker just fine:

* gpt-3.5-turbo-instruct plays at about 1800 Elo, consistently making legal moves: https://github.com/adamkarvonen/chess_gpt_eval
* A grandmaster-level chess transformer: https://arxiv.org/abs/2402.04494
* Poker: https://arxiv.org/abs/2308.12466

Oh, so you wrote (or can provide) a paper proving this, or..? This is kind of the problem with a lot of these discussions: wild confidence in ability estimates that are ultimately just gut feeling. You said GPT-4 couldn't play tic-tac-toe. Well, it can. You said it would be impossible to train a chess-playing model this century. Already done. Now you're saying transformers can't "teach themselves to play a game". There is zero theoretical justification for that stance.
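The linked eval works roughly by having the model continue a game transcript and rejecting illegal moves. A minimal sketch of that legality check, assuming the `python-chess` library and a hypothetical `llm_next_move(pgn)` function standing in for the real API call:

```python
import chess  # pip install python-chess

def llm_next_move(pgn_so_far: str) -> str:
    """Hypothetical stand-in for an LLM call that continues a PGN transcript."""
    raise NotImplementedError

def play_one_move(board: chess.Board, pgn_so_far: str, retries: int = 3) -> bool:
    """Ask the model for a move in SAN notation; only push it if it is legal."""
    for _ in range(retries):
        candidate = llm_next_move(pgn_so_far).strip()
        try:
            board.push_san(candidate)   # raises ValueError on illegal/garbled moves
            return True
        except ValueError:
            continue                    # count as a failed attempt and re-prompt
    return False                        # (updating the PGN string is omitted here)
```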
Logan Zoellner (1mo)
Have you never figured out something by yourself?  The way I learned to do Sudoku was: I was given a book of Sudoku puzzles and told "have fun".

I didn't say it was impossible to train an LLM to play chess. I said it was impossible for an LLM to teach itself to play a game of similar difficulty to chess if that game is not in its training data. These are two wildly different things.

Obviously LLMs can learn things that are in their training data; that's what they do.  Obviously, if you give LLMs detailed step-by-step instructions for a procedure small enough to fit in their attention window, LLMs can follow that procedure; again, that is what LLMs do. What they do not do is teach themselves things that aren't in their training data via trial and error, which is the primary way humans learn things.
[anonymous] (1mo)
It seems like this would be because the transformer weights are fixed, and we have not built a mechanism for the model to record the things it needs to learn to improve performance, or an automated way to practice offline to do so. It's just missing all this, like a human patient with large sections of their brain surgically removed. It doesn't seem difficult or long-term to add this, does it? How many years before one of the competing AI labs adds some form of "performance-enhancing fine-tuning and self-play"?
Andrew Burns (1mo)
Less than a year. They probably already have toy models with periodically or continuously updating weights.
ReaderM (1mo)
So, few-shot + scratchpad? More gut claims.  Setting up the architecture that would allow a pretrained LLM to trial-and-error whatever you want is relatively trivial. The current state of the art isn't that competent, but the backbone for this sort of work is there. Sudoku and Game of 24 solve rates are much higher with Tree of Thoughts, for instance.  There's stuff for Minecraft too.
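A minimal sketch of the Tree-of-Thoughts-style scaffolding being referred to: breadth-limited search over partial solutions, with `propose_steps`, `score_state`, and `is_solved` as hypothetical stand-ins for the model calls and task checker (the actual paper's prompts and scoring are more involved).

```python
from typing import Callable, List, Optional

def tree_of_thought_search(start: str,
                           propose_steps: Callable[[str], List[str]],
                           score_state: Callable[[str], float],
                           is_solved: Callable[[str], bool],
                           beam_width: int = 5, max_depth: int = 4) -> Optional[str]:
    """Expand each partial solution, keep the best few, repeat."""
    frontier = [start]
    for _ in range(max_depth):
        candidates = []
        for state in frontier:
            for step in propose_steps(state):          # hypothetical model call
                candidates.append(state + "\n" + step)
        solved = [c for c in candidates if is_solved(c)]
        if solved:
            return solved[0]
        # Keep only the highest-scoring partial solutions (hypothetical model scoring).
        frontier = sorted(candidates, key=score_state, reverse=True)[:beam_width]
    return None
```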
Logan Zoellner (1mo)
I agree.  Or at least, I don't see any reason why not.

My point was not that "a relatively simple architecture that contains a transformer as the core" cannot solve problems via trial and error (in fact, I think it's likely such an architecture exists).  My point was that transformers alone cannot do so. You can call it a "gut claim" if that makes you feel better, but the actual reason is that I did some very simple math (about the window size required, given quadratic scaling for transformers) and concluded that, practically speaking, it was impossible.

Also, importantly, we don't know what that "relatively simple" architecture looks like.  If you look at the various efforts to "extend" transformers into general learning machines, there are a bunch of different approaches: AlphaGeometry, diffusion transformers, BabyAGI, Voyager, Dreamer, chain-of-thought, RAG, continuous fine-tuning, V-JEPA.  Practically speaking, we have no idea which of these techniques is the "correct" one (if any of them are).

In my opinion, saying "Transformers are AGI" is a bit like saying "Deep learning is AGI".  While it is entirely possible that an architecture that heavily relies on transformers and is AGI exists, we don't actually know what that architecture is. Personally, my bet is either on a sort of generalized AlphaGeometry approach (where the transformer generates hypotheses and then GOFAI is used to evaluate them) or on diffusion transformers (where we iteratively de-noise a solution to a problem).  But I wouldn't be at all surprised if, a few years from now, it is universally agreed that some key insight we're currently missing marks the dividing line between Transformers and AGI.
ReaderM (1mo)
If you're talking about this, then that's just irrelevant. You don't need to evaluate millions of positions to backtrack (unless you think humans don't backtrack) or to play chess.  There's nothing the former can do that the latter can't. "Architecture" is really overselling it, but I couldn't think of a better word. It's just function calling.
Logan Zoellner (1mo)
  Humans are not transformers. The "context window" for a human is literally their entire life.
ReaderM (1mo)
Not really. The majority of your experiences and interactions are forgotten and discarded; the few that aren't are recalled, triggered by the right input when necessary, rather than just sitting there in your awareness at all times. Those memories are also modified at every recall.

And that's really just beside the point. However you want to spin it, evaluating that many positions is not necessary for backtracking or for playing chess. If that's the basis of your "impossible" rhetoric, then it's a poor one.
