As part of Nonlinear's work to improve the state of AI safety research, we're going to be talking with a range of researchers. One of our aims is to help people who are unsure about doing AI safety research figure out whether it's a good fit for them. We're planning to talk about things like:

  • What skills are actually useful?
  • What is the day-to-day work like?
  • What are common misconceptions about the job?
  • What other advice do they have?

We’d love to hear questions about doing AI safety research from people who are considering moving into the field but are still on the fence.

8 Answers

What are some boring parts of your job that you have to do?

What are some frustrating parts of your job that you have to do?

What aspects of your job/place of work are different from what you expected from the outside?

Do you feel like you have good job security?

What is the day-to-day work like?

From reading this site, it looks like the “heavy-hitters” mostly debate timelines and possible architectures. There doesn’t seem to be much concrete work for new researchers to do…

What proportion of independent researchers in AI safety manage to secure a second research grant after the first one runs out? What does it take to stay in the field? 

Are you supposed to publish papers? In which journals? Or do you just post to the Alignment Forum?

What happens to people who leave the field, is it difficult to find a job? What if you leave MIRI after working on a secret project?

Can you give something like a Glassdoor review for your employer?

On working in industrial software companies:

What is the transfer between working on near-to-medium term Applied AI Alignment problems (e.g. "How to make my chatbot not coercive" or "How to align my recommender system?") and long term AI safety / reduction of AI X-risk?

On working in academic labs:

How important is it that my research advisor sees eye-to-eye with me on AI safety? Is it possible to do good AI safety research even if my advisor doesn't care about long-term risks?

What do employers look for in a college student who wants to work on AI safety (please be blunt)? What is the typical starting salary and average number of hours worked per week for people coming right out of college? How are female applicants viewed? (I write this as a professor at a women's college who incorporates AI safety into one of his econ classes.)

Say it turns out you're only mediocre at doing AI safety research. How much of a positive impact can you have then?

How many places did you apply for before getting your current role or position?

How much time have you spent on applying for open opportunities?

What are some things that your org has that others don’t and should?

What are some things that other orgs have that your org should have?


The field seems to be highly competitive; for example, the OpenAI residency reportedly accepts only about 5% of applicants. What are the most effective ways for someone with no AI/ML background to get into the field?