Energy-Based Transformers are Scalable Learners and Thinkers

by Matrice Jacobine
8th Jul 2025
1 min read

This is a linkpost for https://energy-based-transformers.github.io/

Inference-time computation techniques, analogous to human System 2 Thinking, have recently become popular for improving model performances. However, most existing approaches suffer from several limitations: they are modality-specific (e.g., working only in text), problem-specific (e.g., verifiable domains like math and coding), or require additional supervision/training on top of unsupervised pretraining (e.g., verifiers or verifiable rewards). In this paper, we ask the question "Is it possible to generalize these System 2 Thinking approaches, and develop models that learn to think solely from unsupervised learning?" Interestingly, we find the answer is yes, by learning to explicitly verify the compatibility (unnormalized probability) between inputs and candidate-predictions, and then re-framing prediction problems as optimization with respect to this verifier. Specifically, we train Energy-Based Transformers (EBTs)---a new class of Energy-Based Models (EBMs)---to assign an energy (unnormalized probability) value to every input and candidate-prediction pair, enabling predictions through gradient descent-based energy minimization until convergence. This formulation enables System 2 Thinking to emerge from unsupervised learning, making it modality and problem agnostic. Across both discrete (text) and continuous (visual) modalities, we find EBTs scale faster than the dominant Transformer++ approach during training, achieving an up to 35% higher scaling rate with respect to data, batch size, parameters, FLOPs, and depth. During inference, EBTs improve performance with System 2 Thinking (i.e., extra computation) by 29% more than the Transformer++ on language tasks, and EBTs outperform Diffusion Transformers on image denoising while using 99% fewer forward passes. Further, we find that System 2 Thinking with EBTs yields larger performance improvements on data that is farther out-of-distribution, and that EBTs achieve better results than existing models on most downstream tasks given the same or worse pretraining performance, enabling EBTs to out-generalize existing paradigms. Consequently, EBTs are a flexible and exciting new paradigm for scaling both the learning and thinking capabilities of models.
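
For readers who want a concrete picture of the inference procedure the abstract describes, here is a minimal sketch of "prediction as gradient-descent energy minimization", assuming a PyTorch setting. It is not the authors' implementation: ToyEnergyModel is a stand-in MLP rather than a Transformer, the step count and step size are arbitrary, and training (which the paper handles separately) is omitted.

```python
# Minimal illustrative sketch (not the authors' code): prediction as
# gradient-descent energy minimization over a candidate output, as the
# abstract describes. A toy MLP stands in for the Transformer energy
# function; step count and step size are arbitrary assumptions, and the
# training procedure is omitted entirely.
import torch
import torch.nn as nn


class ToyEnergyModel(nn.Module):
    """Maps an (input, candidate-prediction) pair to a scalar energy."""

    def __init__(self, x_dim: int, y_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + y_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)


@torch.enable_grad()
def predict(model: nn.Module, x: torch.Tensor, y_dim: int,
            steps: int = 30, lr: float = 0.1) -> torch.Tensor:
    """Inference-time 'thinking': start from a random candidate and
    descend the energy landscape with respect to the candidate only."""
    y = torch.randn(x.shape[0], y_dim, requires_grad=True)
    for _ in range(steps):  # more steps = more System-2-style compute
        energy = model(x, y).sum()
        (grad_y,) = torch.autograd.grad(energy, y)
        y = (y - lr * grad_y).detach().requires_grad_(True)
    return y.detach()


if __name__ == "__main__":
    model = ToyEnergyModel(x_dim=16, y_dim=8)
    x = torch.randn(4, 16)
    y_hat = predict(model, x, y_dim=8)
    print(y_hat.shape)  # torch.Size([4, 8])
```

The amount of "System 2 Thinking" in this sketch is simply the number of gradient steps spent refining the candidate before returning it.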

5 comments, sorted by top scoring

Raemon (2mo):

I do sure wish that abstract was either Actually Short™, or broken into paragraphs. (I'm assuming you didn't write it, but it's usually easy to find natural paragraph breaks on the authors' behalf.)

lemonhope (2mo):
  1. Inference-time computation techniques, analogous to human System 2 Thinking, have recently become popular for improving model performances.
  2. However, most existing approaches suffer from several limitations:
    1. They are modality-specific (e.g., working only in text).
    2. They are problem-specific (e.g., verifiable domains like math and coding).
    3. They require additional supervision/training on top of unsupervised pretraining (e.g., verifiers or verifiable rewards).
  3. In this paper, we ask the question “Is it possible to generalize these System 2 Thinking approaches, and develop models that learn to think solely from unsupervised learning?”
  4. Interestingly, we find the answer is yes, by learning to explicitly verify the compatibility (unnormalized probability) between inputs and candidate-predictions, and then re-framing prediction problems as optimization with respect to this verifier.
  5. Specifically, we train Energy-Based Transformers (EBTs)—a new class of Energy-Based Models (EBMs)—to assign an energy (unnormalized probability) value to every input and candidate-prediction pair, enabling predictions through gradient descent-based energy minimization until convergence.
  6. This formulation enables System 2 Thinking to emerge from unsupervised learning, making it modality and problem agnostic.
  7. Across both discrete (text) and continuous (visual) modalities, we find EBTs scale faster than the dominant Transformer++ approach during training, achieving an up to 35% higher scaling rate with respect to data, batch size, parameters, FLOPs, and depth.
  8. During inference, EBTs improve performance with System 2 Thinking (i.e., extra computation) by 29% more than the Transformer++ on language tasks, and EBTs outperform Diffusion Transformers on image denoising while using 99% fewer forward passes.
  9. Further, we find that System 2 Thinking with EBTs yields larger performance improvements on data that is farther out-of-distribution, and that EBTs achieve better results than existing models on most downstream tasks given the same or worse pretraining performance, enabling EBTs to out-generalize existing paradigms.
  10. Consequently, EBTs are a flexible and exciting new paradigm for scaling both the learning and thinking capabilities of models.

anaguma (2mo):

Unfortunately they only extended the scaling curves to ~10B tokens, over 3 OOMs less than the data used to train frontier models. So it’s unclear whether this will work at scale, and the fact that they didn’t extend them further is some evidence against it working.

[This comment is no longer endorsed by its author]

Martin Vlach (2mo):

You seem to report one OOM less than this figure: https://alexiglad.github.io/blog/2025/ebt/#:~:text=a%20log%20function).-,Figure%208,-%3A%20Scaling%20for

anaguma (2mo):

Interesting, I was looking at Figure 7, but that seems to be a much smaller run. I retract my original comment.

Crossposted to the EA Forum.