(Caveat: as an aspiring AI Safety researcher myself, I'm both qualified and unqualified to answer this. Also, I'll focus on technical AI Safety, because it's the part of the field I'm most interested in.)
As a first approximation, there is the obvious advice: try it first. Many of the papers/blog posts are freely available on the internet (which might not be a good thing, but that's a question for another time), and thus any aspiring researcher can learn what is going on and try to do some research.
Now, to be more specific about AI safety, I see at least two sub-questions here:
- Am I the right "kind" of researcher for working in AI Safety? Here, my main intuition is that the field needs more "theory-builders" than "problem-solvers", to take the archetypes of Gowers's "The Two Cultures of Mathematics". By that I mean that AI Safety has not yet crystallized into a field where the main approaches and questions are well understood and known. Almost every researcher has a different perspective on what is fundamental in the field. Therefore, the most useful work will be the kind that clarifies, deconfuses and characterizes the fundamental questions and problems of the field.
- Can I get a job at a research lab in AI Safety? Of course, new researchers can also get funding, from the Long-Term Future Fund for example. But every grant write-up that I saw mentioned a recommendation by someone already in the field. So even looking for funding probably requires making some team interested in you. As for the answer to the question, it really depends on the lab (because they all have different approaches to AI Safety). For example, MIRI is interested in brilliant programmers (if possible in Haskell) who can understand and master complex maths and dependent type theory; CHAI is interested in researchers with (or able to build) expertise in the theory of deep RL; OpenAI is interested both in good researchers in practical deep RL and in researchers in the theoretical computer science underlying Christiano's agenda; and so on. The great thing about most of these labs is that you can find someone to ask questions about what they are looking for.
I'd say a pretty good way is to try out AI alignment research as best you can, and see if you like it. This is probably best done by being an intern at some research group, but sadly these spots are limited. Perhaps one could factor it into "do I enjoy AI research at all", which is easier to gain experience in, and "am I interested in research questions in AI alignment", which you can hopefully determine through reading AI alignment research papers and introspecting on how much you care about the contents.
In my mind it's something like you need:
- strong interest in solving AI safety
- being okay with breaking new ground and having to figure out what "right" means
- strong mathematical reasoning skills
- decent communication skills (you can rely less on strong existing publication norms and may have to get more creative to convey your ideas than in other fields)
- the courage and care to work on something where the stakes are high and if you get it wrong things could go very badly
I think people tend to emphasize the technical skills the most, and I'm sure other answers will offer more specific suggestions there. But I also think there's an important aspect of having the right mindset for this kind of work, such that a person with the right technical skills might not make much progress on AI safety without these other "soft" skills.