TLDR: We’re running a small iteration of MLAB (~10 participants) in Oxford towards the end of September. If you’re interested in participating, apply here by 7 September. If you’re interested in being a TA, please email us directly at oxfordmlab@gmail.com.

Edit: The dates are now confirmed as 23 September - 7 October.

Background

MLAB is a program, originally designed by Redwood Research, to help people upskill for alignment work. We think it’s a good use of time if you want to eventually do technical alignment work, or if you want to work on theoretical alignment or related fields and think understanding ML would be useful. The program we’re running is slightly shorter than the full MLAB: two weeks instead of three. We’ve condensed the curriculum similarly to how WMLB was condensed last year.

We plan to have just under 10 participants and 2-3 TAs.

Curriculum

This curriculum might change slightly. Depending on participant interest, we might also have two optional days before the course to work through prereqs (the W0 materials) together.

W0D1 - pre-course exercises on PyTorch and einops (CPU); see the example sketch after this list
W1D1 - practice PyTorch by building a simple raytracer (CPU)
W1D2 - build your own ResNet (GPU preferred)
W1D3 - build your own backpropagation framework (CPU)
W1D4 - model training. Part 1: model training and optimizers (CPU); Part 2: hyperparameter search (GPU preferred)
W1D5 - GPT. Part 1: build your own GPT (CPU); Part 2: sampling text from GPT (GPU preferred)
W2D1&2 - transformer interpretability (CPU)
W2D3 - transformer interpretability on algorithmic tasks (CPU)
W2D4 - intro to RL. Part 1: multi-armed bandit (CPU); Part 2: DQN (CPU)
W2D5 - policy gradients and PPO (CPU) 
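
To give a flavour of the W0D1 pre-course exercises, here’s a minimal sketch (a made-up example for illustration, not taken from the actual materials): reimplementing a tensor rearrangement with einops and checking it against plain PyTorch.

```python
# Hypothetical warm-up in the spirit of the W0D1 exercises (not from the
# actual curriculum): manipulate tensors with einops and verify against
# the equivalent plain-PyTorch operations.
import torch
from einops import rearrange

# A fake batch of 8 RGB "images", 32x32, in (batch, channel, height, width) layout.
images = torch.randn(8, 3, 32, 32)

# Move the channel axis last with einops...
nhwc = rearrange(images, "b c h w -> b h w c")

# ...and check it matches the equivalent torch.permute call.
assert torch.equal(nhwc, images.permute(0, 2, 3, 1))

# einops earns its keep on less trivial rearrangements, e.g. tiling the
# batch into a single 2x4 grid image:
grid = rearrange(images, "(gh gw) c h w -> c (gh h) (gw w)", gh=2, gw=4)
print(grid.shape)  # torch.Size([3, 64, 128])
```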

Other activities will include guest speakers and reading groups.

Logistics

Dates: 23 September - 7 October

Location: Oxford

Housing will be covered for participants not already living in Oxford.

Travel from within the UK is covered. Travel from outside the UK is not covered.

Questions

Feel free to ask questions in the comments below or DM any of us.
