The classical meaning of AGI is something like humans, rather than chimps or calculators, together with the implied potential impact; the classical meaning of ASI is something qualitatively more intelligent, and immediately more impactful, than humanity. The explosion of discussion around LLMs eroded the terms, but these are still the key thresholds of impact.
AGI is the point where humans are no longer needed for some kind of civilization to keep developing towards superintelligence and beyond; superintelligence is the point where humanity's efforts are washed away by its capabilities, so it's definitely too late to fix anything the superintelligence wouldn't cooperate in fixing.
Drowning in nuance doesn't change the utility of these simple concepts where they centrally apply. But the words might need to change at some point to protect the concepts from erosion of meaning, if it gets too bad.
I like both of those definitions. They do match the centroids of how I see those terms used, including my own usage. Perhaps I'll call them V-AGI and V-ASI and link the terms here.
When we have a particular query at hand, we don't need the broad terms, and that's great. But we do need shorthands when defining a particular query doesn't fit the time/space/attention span. Having a wider variety of sub-terms within the community would be extremely helpful, but I'm not sure how to work toward that.
I think about this a fair amount. I think terminology matters, and having terms of art is practically necessary. We're not going to restate a functional definition in every article, let alone every comment. AGI centrally means "the type of AI that changes everything", which is the central topic of discussion in the safety community for good reason.
I agree that the term AGI is used in so many different ways that it's problematic and probably not useful. I think we'd benefit a lot by having some standard reference terminology that's carefully defined.
To that end, I wrote a couple of articles trying to give a better definition, which includes talking about what type of cognitive capabilities are necessary to create a system that "thinks for itself" and is smarter than humans in most useful ways. I wrote Sapience, understanding, and "AGI" on this in 2023, and "Real AGI" more recently.
One problem I discovered in my attempts to create better-defined terms is that the project was wrapped up in theories about exactly how AI will progress to have transformative properties. I expect a certain set of cognitive capacities to be the turning point, so I define it with those. Others have different theories. That would be fine if we had multiple distinct definitions in circulation. Transformative AI is an attempt to sidestep addressing the particulars of which AI capabilities will be important, but I think the relative disuse of that term suggests that addressing that question is pretty critical to most of the discussion.
I think the term ASI is usually used distinctly (at least on LessWrong) and is pretty useful; it usually points to something more capable than the first things you'd call AGI, and can designate takeover-capable AI (one of my recently preferred terms for pointing to the important properties).
Maybe if somebody with sufficient reputation wrote a post saying "hey let's settle on some more specific sub-terms" that would work. But I suspect that would feel locally like a waste of time to that busy person.
There's a joke that academic philosophers would rather use each other's toothbrushes than each other's terminology. This tendency creates a profusion of different terms, and subtly different uses of the same terms. We could at least improve on that by having some reference posts intended to define the terms in particular ways.
One of the cruxes here is whether one believes that "AGI" is in fact a real distinct thing, rather than there just being a space of diverse cognitive algorithms of different specializations and powers, with no high-level organization. I believe it is a real distinct thing, and that the current LLM artefacts are not AGI. Some people disagree. That's a crux.
If you're one of the people who thinks that "general intelligence" is a thing, then figuring out whether a given system is an AGI or not, or whether a given paradigm scales to AGI, can be a way to figure out whether that system/paradigm is going to be able to fully automate the economy/AI research. An AGI would definitely be able to do that, so "LLMs are an AGI" implies "LLMs will be transformative". "LLMs are not AGI" does not imply "LLMs won't be transformative/omnicide-capable", so determining the AGI-ness doesn't give you all the answers (such as the exact limits of the LLM paradigm). Nevertheless, "is this an AGI?" is still a useful query to run[1], and getting "no" does somewhat update you away from "LLMs will be transformative", since it removes some of the reasons they may be transformative.
Tertiarily, one question here may be, "what the hell do you mean by a 'real AGI' as distinct from a 'good-enough AGI approximant'"? Here's a long-winded semi-formal explanation:
(Responding to the claim: "I deny the assumption that behavioral definitions of AGI/ASI are akin to denying the distinction between the Taylor-polynomial approximation of a function, and that function itself (at least if we assume infinite compute like this).")
Consider a function $f$ that could be expanded into a power series with an infinite convergence radius, e.g. $f(x) = e^x$.[2] The power series is that function, exactly, not an approximation; it's a valid way to define $f$.
However, that validity only holds if you use the whole infinite-term power series, $\sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} x^n$. Partial sums $T_N(x) = \sum_{n=0}^{N} \frac{f^{(n)}(0)}{n!} x^n$, with finite $N$, do not equal $f$, "are not" $f$. In practice, however, we're always limited to such finite partial sums. If so, then if you're choosing to physically/algorithmically implement $f$ as some $T_N$, the behavior of $T_N$ as you turn the "crank" of $N$ is crucial for understanding what you're going to achieve.
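(To make "turning the crank of $N$" concrete, here's a minimal sketch of mine, not from the original comment: it evaluates the partial sums $T_N$ of $e^x$ and shows that, at a fixed truncation order $N$, the error is tiny near the expansion point $0$ and large far from it.)

```python
import math

def taylor_exp(x: float, n: int) -> float:
    """Partial sum T_N(x) = sum_{k=0}^{N} x^k / k! of the power series of e^x around 0."""
    total, term = 0.0, 1.0
    for k in range(n + 1):
        total += term
        term *= x / (k + 1)
    return total

# For a fixed truncation order N, the error depends strongly on how far x is from 0.
for x in (0.5, 2.0, 10.0):
    for n in (2, 5, 20):
        err = abs(math.exp(x) - taylor_exp(x, n))
        print(f"x = {x:5.1f}   N = {n:2d}   |e^x - T_N(x)| = {err:.3e}")
```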
Suppose that we define an AGI as some function $A$. As above, $A$ can have many exactly-equivalent representations/implementations, call their set $\mathcal{R}$. One of them may be some entity $R_i$ such that $R_i = \lim_{N \to \infty} R_i^{(N)}$: the limit of ever-larger finite structures $R_i^{(N)}$, analogous to the partial sums above.
Does it matter which member of $\mathcal{R}$ we're working with, if we're reasoning about $A$'s behavior or capabilities? No: they are all exactly equivalent. Some members of $\mathcal{R}$ may define $A$ behaviorally, some may do it structurally; we don't care. We pick whichever definition is more convenient to work with.
However, implementing $A$ in practice – incarnating it into algorithms and physics – is a completely different problem. To do so, we have to pick some structural representation/definition from $\mathcal{R}$, and we have to mind its length/complexity, because it'll be upper-bounded (by our finite compute).
In this case, the pick of representation matters. Notably, for some $R_i \in \mathcal{R}$, the complexity-limited truncation $R_i^{(N)}$ is not a member of $\mathcal{R}$. If so, we have to start wondering about the error $\epsilon_i(x) = |A(x) - R_i^{(N)}(x)|$, how it behaves as we move $x$ around, and what is the largest $N$ such that $R_i^{(N)}$'s description complexity is below our upper bound.
For example, perhaps $R_i$ is an expansion of $A$ around some abstract "point" $x_0$, and while $R_i$ exactly equals $A$ everywhere, $R_i^{(N)}$ only serves as a good approximation around that point $x_0$, with the error growing rapidly as you move away from $x_0$. (We may call that "failure to generalize out of distribution".)
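(To make "the error growing rapidly as you move away from $x_0$" quantitative, here's a standard bound I'm adding, not from the original comment, for the running $e^x$ example expanded around $x_0 = 0$. The Lagrange form of the remainder gives

$$\left| e^{x} - \sum_{k=0}^{N} \frac{x^{k}}{k!} \right| \;=\; \frac{e^{\xi}\,|x|^{N+1}}{(N+1)!} \quad \text{for some } \xi \text{ between } 0 \text{ and } x,$$

which is negligible while $|x|$ is small compared to $N$, but for a fixed $N$ grows at least like $|x|^{N+1}/(N+1)!$ as $|x|$ increases.)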
If so, nitpicking about the definition/representation/implementation of $A$ that is being used to incarnate $A$ absolutely matters.
For some members of $\mathcal{R}$, say $R_1$, even their complexity-limited approximations are good enough everywhere. Formally, we may say that $|A(x) - R_1^{(N)}(x)| \le \epsilon$ for all $x$, for some small $\epsilon$. The informal statement "$R_1^{(N)}$ is a genuine AGI" is then only slightly imprecise.
For other members of $\mathcal{R}$, like $R_2$, their complexity-limited approximations are not that good. Formally, $|A(x) - R_2^{(N)}(x)| \ge \delta$ for some large $\delta$ and large sets of $x$. The informal statement "$R_2^{(N)}$ is an AGI" is then drastically incorrect: it essentially says $R_2^{(N)} = A$. Translated into practical considerations, reasoning about $A$ is not a good way to predict $R_2^{(N)}$'s behavior.
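(A toy illustration of mine, not from the original comment, of the $R_1$-vs-$R_2$ distinction: two exactly-equivalent definitions of $\sin$ whose equal-complexity truncations behave very differently. Truncating the Taylor series directly is accurate only near $0$; the same truncation applied after reducing the argument into $[-\pi, \pi]$, which is also exactly $\sin$ in the infinite limit, stays accurate everywhere.)

```python
import math

def sin_taylor(x: float, n_terms: int = 8) -> float:
    """Truncated Taylor series of sin around 0: the first n_terms nonzero terms."""
    total, term = 0.0, x
    for k in range(n_terms):
        total += term
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

def sin_reduced(x: float, n_terms: int = 8) -> float:
    """Same truncation, applied after first reducing x into [-pi, pi] (an exact identity)."""
    return sin_taylor(math.remainder(x, 2 * math.pi), n_terms)

# Same complexity budget (8 terms); very different behavior away from the expansion point.
for x in (0.3, 3.0, 30.0, 300.0):
    print(f"x = {x:7.1f}   naive error = {abs(math.sin(x) - sin_taylor(x)):.1e}   "
          f"reduced error = {abs(math.sin(x) - sin_reduced(x)):.1e}")
```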
(Edit: Further, if what you care about are practically feasible AGI designs, you may restrict the set $\mathcal{R}$ only to those definitions/representations whose small-error approximations have a finite description length. Hence you may somewhat-imprecisely say that "$R_2$ is not a real AGI" – since that's true of $R_2^{(N)}$ for all finite $N$, and the exact infinite-complexity $R_2$ is implicitly ruled to be outside the consideration.
I note that this is what I was doing in this comment, and yeah, my using the practical and the theoretical definitions of $A$ is unhelpful. I'll try to avoid that in the future.)
Since figuring out whether it's an AGI/AGI-complete may be easier than predicting its consequences from first principles.
Formally, suppose all members of some set $S$ have a property $P$, and suppose we're wondering whether some entity $x$ has that property. Proving $x \in S$ is a valid way to prove that $x$ possesses $P$, and it may be easier than proving $P(x)$ in a more "direct" way (without referring to $S$).
Formally, assume $f$ is an entire function.
Inspired by this thread, where there's a whole lot of discussion around what the term AGI actually means. I'm starting to wonder whether the term is at this point far too broadly used, with people not distinguishing similar outcomes precisely enough, now that we've made progress in AI.
Now, @Thane Ruthenis at least admitted that his talk around AGI was trying to address the question of whether we can actually have AI that fully automates the economy/AI research soon with the level of resources we have, and he was claiming that LLMs might just fail to be impactful at that level of resources without algorithmic improvements (which is very plausible).
But once we have that, I don't see a reason to actually use the AGI/ASI distinction anymore, and I deny the assumption that behavioral definitions of AGI/ASI are akin to denying the distinction between the Taylor-polynomial approximation of a function and that function itself (at least if we assume infinite compute like this).
I think talking and reasoning about approximations is fine, and I think the question of what AIs and humans can do in practice given limits on resources is an excellent question to be studying, but I currently see no reason why we need the AGI/ASI terms once we actually have the query at hand.
And I'm currently confused about why people care so much about the distinction between AGIs/ASIs and non-AGIs/ASIs in 2025.