I became aware of the AI safety problem around 8 years ago. I was intrigued by it, and when I re-read Superintelligence, I decided to get a master's degree in Computer Science (my bachelor's degree is in Information Systems). Shortly after I graduated, however, my interest in AI safety waned, mainly because I felt that I couldn't make a significant contribution unless I was employed by a company like OpenAI or DeepMind, and I didn't think I could realistically work there, for reasons I describe below.
After I graduated, I got a job as a machine learning engineer. I had stumbled upon How To Get Into Independent Research On Alignment/Agency and Study Guide, but I kept them in my bookmarks and hadn't read them until now.
Very recently, I decided that I'd like to take a stab at being an AI safety researcher. This was mainly because I had never really tried striking out on my own, and I want to try it. My constraint is that I'm doing this as a side project, alongside my full-time job and the other things in my life (gym, social life, etc.).
The other reason I hadn't tried so far was that I felt a bit unsure of myself: my college education was missing some courses, such as physics (my physics stopped in high school), and while I did take some math, I never sat an exam where I had to compute multi-dimensional integrals or solve differential equations. I wouldn't say my math background is non-existent; I just felt that people at leading research organizations had a much stronger background than mine, and I thought to myself: "What's the point?"
With that being said, I recently realized that I'm not getting any younger (I'm 26 now) and that I'd like to at least try to become a professional AI safety researcher. If I fail, that will hurt, but at least I will sleep peacefully knowing that I tried.
Hence, I designed a plan for how I intend to do this, which I describe below. Before that, I'd like to state my goal clearly:
My goal: Test, as fast as possible, whether I'm viable as an AI safety researcher. I'm treating this as a side project with no set deadline; it's something I will do until my interest wanes. Also note that it's possible I turn out to be a good AI safety researcher without being a professional one, that is, without getting paid for it. This could happen if, for example, no one wants to fund me because I want to work remotely. I'm still thinking about this possibility: ideally I'd like to get paid for AI safety research if I'm good at it, but I'm also open to doing it as a hobby, provided I'm actually making contributions.
My plan, based on what I learned from the posts I linked above, is as follows:
I'd prefer to do it this way because I've found that I'm not good at learning things for their own sake if I'm not motivated (i.e. I don't want to learn math just because I love math; I will learn it if I see an application that will help me). I think this "recursive" way of learning is more natural for me, though it carries the danger of not knowing when to take the plunge and properly learn the topic that is consistently tripping you up. Also, I'd like to test my fit for AI safety research as fast as I can; I think trying to contribute directly is the best way to do this, but I may be wrong.
One note on the above paragraph: I am not saying that I am not serious or self-disciplined about this. However, I am practical. If I went the other way around and said, "Let me learn all of the topics in the Study Guide in my free time, and then I'll revisit AI safety research once I'm done studying them", that would be a huge investment of time and effort before I even know whether this is a viable career (or hobby) for me. That's another reason my plan is constructed the way it is: I get feedback early, and if it's good, I can consider progressively bigger investments of time and effort. Also, since I have a bachelor's degree in Information Systems and a master's degree in Computer Science, I think I have enough practical and theoretical technical knowledge to test things out initially. I may not be at the level of an average Berkeley PhD, but I think I know enough to see how I fare in the AI safety research field.
With all this being said, I have some questions at the end.
Thank you for reading.