I am very new to the AI safety space. Unlike many others who are students pursuing ML/AI, working on their PhDs, or coming from strong technical backgrounds, I don't fit into any of these buckets. I previously worked in the crypto industry as an infrastructure analyst; I have a math background and a beginner-to-intermediate understanding of ML/AI concepts. I have been exploring interpretability, reward models, scheming, and related topics by reading papers.
I want to understand whether it's possible for someone like me to build up competence and enter the safety field. Not knowing the requirements, current demand, barriers, pathways, or even the probability of success is giving me anxiety and holding me back from fully committing.
It would be really helpful if someone could clarify these things.
What's your background and familiarity/experience with safety/alignment research?