I'm about to begin doctoral studies in multi-agent RL as applied to crowd simulation, but somewhere on the horizon I see myself working on AI Safety-related topics. (I find the Value Alignment problem to be of particular interest.)

Now I'm asking myself: if my PhD is in a roughly related area of AI, but not closely aligned with AI Safety, does that make anything more difficult further down the line? Or is it still perfectly fine?

evhub

May 15, 2020

Hi Ariel—I'm not sure I'm the best person to weigh in on this, since I opted to go straight to OpenAI after completing my undergrad rather than pursue a PhD (and am now at MIRI), but I'm happy to schedule a time to talk if you'd be interested. I've also written a couple of posts on possible concrete ML experiments relevant to AI safety that I think could be exciting for somebody in your position to work on, and I'd be glad to chat about any of those as well.