Building AGI Using Language Models

by leogao · 1 min read · 9th Nov 2020 · 1 comment


Language Models · AI
Frontpage

Despite the buzz around GPT-3, it is, in and of itself, not AGI. In many ways, this makes it similar to AlphaGo or Deep Blue; while approaching human ability in one domain (playing Chess/Go, or writing really impressively), it doesn't really seem like it will do Scary AGI Things™ any more than AlphaGo is going to be turning the Earth into paperclips anytime soon. While its writings are impressive at emulating humans, GPT-3 (or any potential future GPT-x) has no memory of past interactions, nor is it able to follow goals or maximize utility. However, language modelling has one crucial difference from Chess or Go or image classification. Natural language essentially encodes information about the world—the entire world, not just the world of the Goban—in a much more expressive way than any other modality ever could. By harnessing the world model embedded in the language model, it may be possible to build a proto-AGI.

2 comments

Sure. It might also be worth mentioning multimodal uses of the transformer algorithm, or the use of verbal feedback as a reward signal to train reinforcement learning agents.

As for whether this is a fire alarm, this reminds me of the old joke: "In theory, there's no difference between theory and practice. But in practice, there is."

You sort of suggest that in theory, this would lead to an AGI, but that in practice, it wouldn't work. Well, in theory, if it fails in practice, that means you didn't use a good enough theory :)
