I think it's hard to help if you don't have any specific questions. The standard advice is to check out 80,000 Hours' career guide, the resources on AISafety.com, and, if you want to do technical research, to go through the ARENA curriculum.
AI Safety Fundamentals' alignment and governance courses are the main intro classes people recommend, but I honestly think the program might have lost its way a bit (i.e., it doesn't focus enough on x-risk prevention). You might be better off looking at older curricula from Richard Ngo or Eleuther, and then getting up to speed with the latest research by reading this overview post, Anthropic's recommendations on what to study, and what mentors in SPAR and MATS are interested in.
Hi! I'm a rising junior in undergrad, working on a cognitive science major with neuroscience and AI focuses, and I was hoping to get some advice/pointers on AI safety work. I'm interested in both the governance and technical sides, but my academic work slightly predisposes me toward the latter. Any advice, help, ideas, or links to other posts that could point me in the right direction would be appreciated!