Transformers

This page is a stub.
Posts tagged Transformers
Karma · Title · Author(s) · Posted · Comments
([Ω] marks posts crossposted to the AI Alignment Forum; [Q] marks question posts.)
37 · Striking Implications for Learning Theory, Interpretability — and Safety? · RogerDearnaley · 2y · 4
137 · How LLMs are and are not myopic [Ω] · janus · 2y · 16
218 · Modern Transformers are AGI, and Human-Level [Ω] · abramdemski · 1y · 87
87 · Google's PaLM-E: An Embodied Multimodal Language Model · SandXbox · 2y · 7
77 · Residual stream norms grow exponentially over the forward pass [Ω] · StefanHex, TurnTrout · 2y · 24
62 · Tracr: Compiled Transformers as a Laboratory for Interpretability | DeepMind [Ω] · DragonGod · 3y · 12
57 · Concrete Steps to Get Started in Transformer Mechanistic Interpretability [Ω] · Neel Nanda · 3y · 7
53 · How fast can we perform a forward pass? · jsteinhardt · 3y · 9
33 · AGI will be made of heterogeneous components, Transformer and Selective SSM blocks will be among them [Ω] · Roman Leventov · 2y · 9
27 · How Do Induction Heads Actually Work in Transformers With Finite Capacity? · Fabien Roger · 2y · 0
7 · If I ask an LLM to think step by step, how big are the steps? [Q] · ryan_b · 10mo · 1
425 · Transformers Represent Belief State Geometry in their Residual Stream [Ω] · Adam Shai · 1y · 100
92 · An Analogy for Understanding Transformers · CallumMcDougall · 2y · 6
78 · Attention SAEs Scale to GPT-2 Small [Ω] · Connor Kissane, robertzk, Arthur Conmy, Neel Nanda · 1y · 4
70 · Adam Optimizer Causes Privileged Basis in Transformer LM Residual Stream · Diego Caples, rrenaud · 10mo · 7
(Showing 15 of 56 tagged posts.)