Five months ago, I received a grant from the Long Term Future Fund to upskill in AI alignment. A few days ago, I was invited to Berkeley for two months of full-time alignment research under Owain Evans’s stream in the SERIMATS program. This post is about how I got there.
The post is partially a retrospective for myself, and partially a sketch of the path I took so that others can decide if it’s right for them. This post was written relatively quickly - I’m happy to answer more questions via PM or in the comments.
Summary
* I was a software engineer for 3-4 years with little to no ML experience before I was accepted for my grant.
* I worked on fundamental ML maths, on understanding RL and transformers, and on deepening my understanding of alignment.
* Having tutors, getting feedback on my plan early on, and being able to pivot as I went were all very useful for avoiding getting stuck on work that was no longer valuable.
* I probably wouldn’t have gotten into SERIMATS without that ability to pivot midway through.
* After SERIMATS, I want to finish the last part of the grant while I look for a job, then start as a Research Engineer at an alignment organisation.
* If in doubt, put in an application!
My Background
My background is more professional and less academic than most. Until I was 23, I didn’t do much of anything - then I got a Bachelor of Computer Science from a university ranked around 1,000th, with little maths and no intent to study ML at all, let alone alignment. It was known for strong graduate employment though, so I went straight into industry from there. I had 3.5 years of software engineering experience (1.5 at Amazon, 2 as a senior engineer at other jobs) before applying for the LTFF grant. I had no ML experience at the time, besides being halfway through the fast.ai course in my spare time.
Not going to lie, seeing how many Top-20 university PhD students I was sharing my cohort with (at least three!) was a tad intimidating.