[Linkpost] Scaling laws for language encoding models in fMRI

by Bogdan Ionut Cirstea
8th Jun 2023
This is a linkpost for https://arxiv.org/abs/2305.11863

Representations from transformer-based unidirectional language models are known to be effective at predicting brain responses to natural language. However, most studies comparing language models to brains have used GPT-2 or similarly sized language models. Here we tested whether larger open-source models such as those from the OPT and LLaMA families are better at predicting brain responses recorded using fMRI. Mirroring scaling results from other contexts, we found that brain prediction performance scales log-linearly with model size from 125M to 30B parameter models, with ~15% increased encoding performance as measured by correlation with a held-out test set across 3 subjects. Similar log-linear behavior was observed when scaling the size of the fMRI training set. We also characterized scaling for acoustic encoding models that use HuBERT, WavLM, and Whisper, and we found comparable improvements with model size. A noise ceiling analysis of these large, high-performance encoding models showed that performance is nearing the theoretical maximum for brain areas such as the precuneus and higher auditory cortex. These results suggest that increasing scale in both models and data will yield incredibly effective models of language processing in the brain, enabling better scientific understanding as well as applications such as decoding.

(bolding mine)

One particular quote from the Twitter summary thread by the senior author:

And these models aren't just good at predicting a few parts of temporal lobe — they're good at predicting ALL of the language-responsive areas in cortex. Individual voxels in precuneus, PFC, and ITL are all predicted with correlation > 0.75. (this is insane to me)
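To make the abstract's "encoding performance as measured by correlation with a held-out test set" concrete, here is a minimal sketch of a voxelwise encoding model. It uses synthetic data in place of real language-model features and fMRI responses, and ridge regression as the feature-to-voxel mapping; the paper's actual pipeline (feature extraction from specific layers, temporal delays/HRF handling, per-voxel regularization, noise-ceiling estimation) is more involved, so treat this only as an illustration of the evaluation idea.

```python
# Minimal sketch of a voxelwise fMRI encoding model, assuming precomputed
# language-model features aligned to fMRI time points (one row per TR).
# Synthetic data stands in for real LM hidden states and voxel responses.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical sizes: time points (TRs), LM feature dimensions, voxels.
n_trs, n_feat, n_vox = 1200, 1024, 500
X = rng.standard_normal((n_trs, n_feat))              # LM-derived stimulus features
true_w = rng.standard_normal((n_feat, n_vox)) / np.sqrt(n_feat)
Y = X @ true_w + rng.standard_normal((n_trs, n_vox))  # noisy "voxel responses"

# Train/test split along time, so evaluation is on held-out stimuli.
split = int(0.8 * n_trs)
X_tr, X_te, Y_tr, Y_te = X[:split], X[split:], Y[:split], Y[split:]

# Ridge regression mapping features -> all voxels at once
# (the regularization strength would normally be tuned per voxel).
model = Ridge(alpha=10.0).fit(X_tr, Y_tr)
Y_hat = model.predict(X_te)

# Encoding performance: Pearson correlation between predicted and measured
# responses, computed independently for each voxel on the held-out set.
def voxelwise_corr(y_true, y_pred):
    yt = y_true - y_true.mean(axis=0)
    yp = y_pred - y_pred.mean(axis=0)
    return (yt * yp).sum(axis=0) / (
        np.linalg.norm(yt, axis=0) * np.linalg.norm(yp, axis=0)
    )

corr = voxelwise_corr(Y_te, Y_hat)
print(f"mean held-out voxel correlation: {corr.mean():.3f}")
```

Comparing this kind of held-out correlation across feature sets taken from models of different sizes, and against a per-voxel noise-ceiling estimate, is how the scaling and ceiling claims in the abstract are quantified at a high level.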


 
