Introduction & context

I became aware of the AI safety problem around 8 years ago. I was intrigued by it, and when I re-read Superintelligence, I decided to get a master's degree in Computer Science (my bachelor's degree is in Information Systems). Shortly after I graduated, however, my interest in AI safety waned, mainly because I felt that I couldn't make a significant contribution unless I was employed by a company like OpenAI or DeepMind, and I didn't think I could really work there because:

  • they seem to hire people from top-tier American universities (e.g. MIT, Berkeley, etc.)
  • I wanted to work remotely, since I'm not from the USA and there are other things besides AI safety research that I want to do with my life

After I graduated, I got a job as a machine learning engineer. At some point I stumbled upon How To Get Into Independent Research On Alignment/Agency and the Study Guide, but I kept them in my bookmarks and hadn't read them until now.

Very recently, I decided that I'd like to take a stab at being an AI safety researcher, mainly because I've never really tried striking out on my own and I want to try it. My constraint is that I'm doing this as a side project, alongside my full-time job and the other things in my life (gym, social life, etc.).

The other reason I hadn't tried so far was that I felt a bit unsure of myself: during my college education, I didn't take certain courses, such as physics (my physics education stopped in high school), and while I did have some math, I never took a test where I had to compute multi-dimensional integrals or solve differential equations. I wouldn't say my math background is non-existent; I just felt that people at leading research organizations had a much better background than me, and I thought to myself: "What's the point?"

With that being said, I recently realized that I'm not getting any younger (I'm 26 now) and that I'd like to at least try to become a professional AI safety researcher. If I fail, that will hurt, but at least I'll sleep peacefully knowing that I tried.

Hence, I designed a plan for how I intend to do this, which I lay out below. Before that, I'd like to state my goal clearly:

My goal: Test the viability of my becoming an AI safety researcher, as fast as possible. I'm treating this as a side project with no set deadline; it is something I will do until my interest wanes. Note also that I might turn out to be a good AI safety researcher without being a professional one, that is, without getting paid for it. This could be the case if, for example, no one wants to fund me because I want to work remotely. I'm still thinking about this possibility: ideally I'd like to get paid for AI safety research if I'm good at it, but I'm also open to doing it as a hobby, if I'm actually making contributions.

My plan

My plan, based on what I learned from the posts I linked above, is as follows:

  1. Read up on the topics I am most interested in. What interests me most is value learning, so I could start there.
  2. Make comments / ask questions about things I don't understand or disagree with.
  3. Write up my own posts about some topics.
    1. An important note: if I notice that my knowledge consistently breaks down somewhere, I'll go study that topic. For example, if multiple texts make heavy use of differential equations and I haven't worked with them so far, I'll go study differential equations on their own, then come back to the texts.
  4. Stay attuned to the feedback I'm getting. Am I getting a decent amount of engagement on my comments/questions/posts? Am I having thoughtful discussions, or are people telling me that I'm missing a fundamental piece of knowledge?
  5. Eventually (if ever), apply for a grant, once I have a clearly defined research direction.

I'd prefer to do it this way because I've found that I'm not really good at learning things for their own sake, without external motivation (i.e. I don't want to learn math just because I love math; I will learn it if I see an application that will help me). I think this "recursive" way of learning is more natural for me, but it does carry the danger of not knowing when to take the plunge and go learn the topic that is consistently tripping you up. Also, I'd like to test my fit for AI safety research as fast as I can; I think trying to contribute directly is the best way to do this, but I may be wrong.

One note on the above paragraph: I am not saying that I am not serious or self-disciplined about this; rather, I am practical. If I went the other way around and said, "Let me learn all of the topics in the Study Guide in my free time, and then I'll revisit AI safety research once I'm done studying them", that would be a huge investment of time and effort before I even know whether this is a viable career (or hobby) opportunity for me. That's another reason my plan is constructed as it is: I get to see feedback, and if it's good, I may consider bigger and bigger investments of time and effort. Also, since I have a bachelor's degree in Information Systems and a master's degree in Computer Science, I think I have enough practical and theoretical technical knowledge to initially test things out. I may not be on the same level as the average Berkeley PhD, but I think I know enough to test how I'm doing in the AI safety research field.

With all this said, I have some questions.

Questions

  1. How will I know if I'm not a good fit for AI safety research? Conversely, how will I know if I am? I think step #4 of my plan (staying attuned to the feedback I'm getting) is essentially the answer, but maybe someone can elaborate on what they would consider indicators of a good or bad fit for an AI safety research position.
  2. How will I know when I have to study a topic on its own (e.g. physics) instead of taking the specific thing I don't understand and asking for help on a forum (e.g. Stack Exchange)? This one is a bit tricky for me, since learning a topic on its own is a big time investment, and I wouldn't like to make it if I don't have to.
  3. If you were me, would you do things differently? If yes, how? Again, my goal is to test, as fast as possible, whether I would be good at AI safety research.

Thank you for reading.
