I've argued before against the view that intelligence is a single coherent concept and that AI will someday suddenly cross a threshold of general intelligence, resulting in a hard takeoff. This paper doesn't resolve that debate entirely, but it provides strong evidence that language models often show surprising jumps in capability as they scale.

From the abstract: 

Scaling up language models has been shown to predictably improve performance and sample efficiency on a wide range of downstream tasks. This paper instead discusses an unpredictable phenomenon that we refer to as emergent abilities of large language models. We consider an ability to be emergent if it is not present in smaller models but is present in larger models. Thus, emergent abilities cannot be predicted simply by extrapolating the performance of smaller models. The existence of such emergence implies that additional scaling could further expand the range of capabilities of language models.
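The abstract's definition can be made concrete with a toy sketch (not from the paper): one metric that improves smoothly with scale, as a power law, and one "emergent" metric that stays near chance below some scale and then jumps. All the numbers here (model sizes, the 1e10-parameter threshold, the four-way task at 25% chance accuracy) are hypothetical, chosen only to illustrate why extrapolating from the small-model regime fails to predict the jump.

```python
import math

# Hypothetical model sizes, 1e6 .. 1e12 parameters.
sizes = [10**p for p in range(6, 13)]

def smooth_loss(n):
    # Power-law loss: improves predictably with scale (made-up constants).
    return 5.0 * n ** -0.05

def emergent_accuracy(n, threshold=1e10):
    # Near chance (25% on a hypothetical 4-way task) below the threshold,
    # then rises sharply above it -- a stylized emergence curve.
    x = (math.log10(n) - math.log10(threshold)) * 6
    return 0.25 + 0.70 / (1.0 + math.exp(-x))

for n in sizes:
    print(f"{n:>16,d}  loss={smooth_loss(n):.3f}  acc={emergent_accuracy(n):.2f}")
```

Fitting a curve to the accuracy values below 1e10 parameters would predict roughly 25% forever; the jump at larger scale is invisible from the small-model data, which is exactly the unpredictability the abstract describes.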

Key figures: (images not reproduced here; see the linked paper)

Related: More is Different for AI, Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets, Yudkowsky and Christiano on Takeoff Speeds


This is very cool. Thanks for link-posting!

Those are fascinating emergent behaviors, and thanks for sharing your updated view.
