The key details and FAQ are copied below for convenience; the rest is in the doc.
Advice on producing a good application in 20 hours
What does a good application look like?
My core goal is to teach you how to do great mechanistic interpretability research.
I run the Google DeepMind mechanistic interpretability team and have a lot of experience supervising research. In the past 3 years, I have mentored 50 junior researchers and supervised 30+ MATS papers, including 15 top conference papers[3].
The program often helps scholars get into mech interp careers
Past scholars also do excellent research in the program itself, even those totally new to mech interp! Some highlights:
Can I use LLMs in my application?
Yes. In fact, I strongly recommend it! LLMs are a crucial research tool nowadays, and are especially useful for those getting into a new field.
You're welcome to use them for coding, writing, etc. - whatever you want. I want to gauge how well you'll do as a researcher, which includes whatever tools you'd actually use.
I recommend using Cursor for coding (replacing e.g. VS Code) and Gemini 2.5 Pro[4] for browser-based tasks.
I've compiled a folder of useful text files for mech interp research, containing relevant docs and source code for key libraries, tutorials from ARENA, key papers, and my relevant blog posts.
See much more info and guidance in the other tabs of the doc.
Other MATS apps open in late August ↩︎
In my 4 most recent cohorts, I’ve had 3 independent researchers, 9 ML PhD students/recent PhD grads, 7 undergrads, 3 ML masters students, 5 former software engineers, 1 physics PhD student, 1 ML postdoc, 1 neuroscience postdoc, 2 quant traders, and 1 former entrepreneur ↩︎
Note that almost all scholars in recent cohorts have published at least one co-first-author conference paper, and many of the 30 papers are too recent to have finished peer review - list here. But my top priority is to help you do great research; publishing is a bonus. ↩︎
I'm not just saying this because I work for Google! It's a frontier model, it's free, it's pretty fast, and it can take a million tokens of context. The best paid models from other providers are also great choices, but they can't take as much context. ↩︎
It starts with a table of contents explaining what’s in it. ↩︎