ACCount · 1mo · 10

I agree that "general intelligence" is a concept that already applies to modern LLMs, which are often quite capable across very different domains. And I agree that LLMs can already, in certain areas, match or outperform a (non-expert) human.

There is some value in talking about just that alone, I think. There seems to be a bias in play that prevents many from recognizing AI as capable. A lot of people are all too eager to dismiss AI capabilities - whether out of a belief in human exceptionalism, some degree of insecurity, some manner of "uncanny valley" response, a sense that "it seems too sci-fi to be true", or something else entirely.

But I don't agree that the systems we have now are "human level", and I'm against using "AGI", which implies a human or superhuman level of intelligence, to refer to systems like GPT-4.

Those AIs are very capable. But a few glaring, massive deficiencies prevent them from being broadly "human level". Off the top of my head, they are deficient in:

  • Long-term memory
  • Learning capabilities
  • Goal-oriented behavior

I do like the term "subhuman AGI" for systems like GPT-4, though. It's a concise way to strip the "human-level" implication out of "AGI" and refocus on the "general intelligence" part of the term.