This post may interest people who
- are interested in getting into AI alignment / the MATS program
- are interested in the soft skills I've found valuable to develop while working on a research project


In 2023 I was working as a machine learning engineer. I wanted to work on AI alignment problems, so I quit my job and participated in the MATS Summer 2023 program. The MATS program puts you together with others to work on AI alignment problems under a specific mentor. I'd recommend it.
I was in Ethan Perez's stream, where we worked on improving Chain of Thought transparency. I started in June 2023 and continued under the extension program until ~March 2024, when my paper was submitted.

Soft skills

I think a big part of what I took away was on the intangible side: soft skills. I did not expect this to be the case before joining MATS, but looking back I really appreciate it. During MATS, you'll have access to Scholar Support - think of it as a coach that helps you with your personal development. Some things that I've learnt from them are:
- How to tackle the planning fallacy. For example, if I aimed to have a draft out by Friday, I would ask myself, "Would I be surprised if, on Saturday, I still didn't have the draft out?" Most of the time, I wouldn't be that surprised if I did not reach my goal on time. This question helped me identify areas that I needed to address first.
- How to tackle procrastination. It turns out that I procrastinate when one part of myself really wants to do something, but another part thinks doing that thing is dumb. I did some Internal Double Crux sessions to resolve disagreements between those two parts. It helped me with my motivation to write the paper, and to do things like LeetCode practice.
- How to resolve disagreements. Before MATS, my style of resolving disagreements was to try to persuade the other person to my point of view, but that could lead to a lot of frustration. I learnt that when disagreeing, it is better to try to fully understand the other person's point of view. Keep asking and digging deeper into their point of view to understand their mental model, rather than trying to persuade them. This helps the two of you converge faster to a common understanding.

How to communicate my ideas better. I had to make slides for my weekly meetings with Ethan. At the start, my slides were quite poor, but I got better over time. Showing my slides to my stream-mates before my weekly meetings helped me communicate my ideas better. Rob Miles gave a talk about why communication is so important: in the end, we probably won't be in the room where AGI gets invented. If we want to influence the development of safe AGI, we need to publish research that actually influences the people who invent AGI. We want them to read our research. That means writing and conveying our ideas well.

How to research something: do the minimal thing that will update your beliefs the most. For example, Ethan suggested that we could few-shot a model as a proxy for fine-tuning it. Few-shotting a model is much faster than fine-tuning it, and gives a quick signal about whether optimizing a model for a certain behaviour will work. I also trained my sense of what experiments to run next: I think that by the end of MATS, I had a distilled model of Ethan's feedback in my head, which told me what kind of experiments he was looking for in my weekly meetings.
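The few-shot-as-proxy idea can be sketched roughly as follows. This is a hypothetical illustration, not code from our project: you prepend a handful of labelled examples to the prompt so the model imitates the target behaviour, instead of spending hours fine-tuning on those examples. A real experiment would send the resulting prompt to a model API.

```python
def build_few_shot_prompt(examples, question):
    """Format (question, answer) pairs plus a new question into one prompt.

    Hypothetical helper: each pair becomes a "Q: ... / A: ..." block,
    and the final block leaves the answer blank for the model to fill in.
    """
    parts = [f"Q: {q}\nA: {a}" for q, a in examples]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

# Made-up examples nudging the model toward terse, direct answers.
examples = [
    ("What is 2 + 2?", "4"),
    ("What is the capital of France?", "Paris"),
]

prompt = build_few_shot_prompt(examples, "What colour is the sky?")
print(prompt)
```

If the few-shot version of the behaviour doesn't work at all, that's cheap evidence that fine-tuning for it may not be worth the compute.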

Making friends / networking. Coming from Singapore, where few people work on AI alignment, I greatly appreciated the opportunity to make friends in the AI alignment community. For example, through this project I worked with Miles Turpin, who mentored us and gave us a great deal of help with our research direction. From him, I learnt what kind of alignment research was interesting and impactful to other researchers. Furthermore, being physically based in Berkeley and London allowed me to meet other researchers in the area. Knowing other researchers is important for getting feedback on your research, so that you know you're on the right track.

Tangible outcomes

One tangible outcome was that I co-wrote a paper for submission to a conference. Before MATS, I was a machine learning engineer without any publications, looking for an alignment-related job, and I got rejected from every organization I applied to. I don't think having a paper is an absolute must - I was at least able to get past resume screening for a few alignment organizations before joining MATS. Still, I think that writing a paper showed I am capable of working on research, which increases my chances of getting hired as a research engineer. Counterfactually, I believe that without MATS, publishing any work would have been much more challenging. As someone outside academia, not working at an organization that publishes research, I struggle to think of how else I would have gotten the opportunity to write a paper, so I really appreciate MATS for that.

In terms of technical skills, I learnt productivity and coding tricks from my co-author Edward Rees. He taught me many terminal / vim tricks to help me become a 2X engineer. We also had to implement a whole bunch of language model evaluations (evals), which taught me the pain of managing many different types of evals. Pair programming with him was really fun - I would recommend that any future MATS participants pair program with their stream-mates. You learn a lot from each other.
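To give a flavour of that eval-management pain, here is a minimal sketch of one common way to keep many eval types manageable: a registry mapping eval names to scoring functions that share a single signature, so new eval types plug in uniformly. Everything here (names, examples, scoring rules) is a hypothetical illustration, not our actual codebase.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class EvalExample:
    prompt: str
    expected: str

# Registry: eval name -> scoring function with a shared signature.
EVALS: Dict[str, Callable[[EvalExample, str], bool]] = {}

def register_eval(name: str):
    """Decorator that adds a scoring function to the registry."""
    def wrap(fn):
        EVALS[name] = fn
        return fn
    return wrap

@register_eval("exact_match")
def exact_match(example: EvalExample, model_answer: str) -> bool:
    return model_answer.strip() == example.expected

@register_eval("contains_answer")
def contains_answer(example: EvalExample, model_answer: str) -> bool:
    return example.expected.lower() in model_answer.lower()

# Usage: score one model answer under every registered eval.
ex = EvalExample(prompt="What is 2+2?", expected="4")
scores = {name: fn(ex, "The answer is 4") for name, fn in EVALS.items()}
print(scores)  # {'exact_match': False, 'contains_answer': True}
```

The design choice here is that every eval shares one calling convention; the hard part in practice is that real evals rarely fit one signature so neatly, which is where the pain comes from.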
