I feel there is a particular lack of introductory materials for people who do not have a computer science background, despite the fact that those people can have very outside-the-box ideas.
Also, on doomerism - I have been dealing with that a lot in my climate activism. Many of us have benefited immensely from books like "Hope in the Dark" and "Active Hope", which frame hope as something arising not from justified optimism, but from the importance of the goal and the action taken toward it, even when success seems unimaginable at this time. Solnit in particular makes a good argument that a complete lack of certainty that we can save ourselves is not the same as certainty that we are doomed, and that uncertainty brings with it opportunities and potential.
Thank you for this piece -- simply identifying the issues is encouraging for newcomers to the field (like me). More established fields (such as those represented by university departments) often have a centralized and searchable jobs board. Is there something like that already? I could easily have missed it. If not, what are the obstacles to aggregating information about open jobs in AI safety?
Another thought -- Computer Science was a new field not that long ago. The first department was founded only 61 years ago. There was significant resistance initially from more established fields. Newcomers didn't know where to start. Curricula and mentorship were far from standardized. Are there lessons to learn from CS's success?
TL;DR
I talked to people who recently got interested in AI safety to discuss their problems. The interviewees reported that the field is hard to navigate for newcomers. It is also hard to advance a career after the initial introduction to AI safety, which is probably a major bottleneck for the field. In addition, some people experience anxiety and hopelessness, which affects their mental health and well-being, and this problem, in my opinion, gets less attention than it deserves.
Background
It seems that people who are new to AI safety face a number of difficulties. Although there are some studies of members of the AI safety community, I did not find any relatively recent studies that explore these difficulties in depth and cover topics similar to what I am interested in, so I decided to talk to people new to the field and listen to their problems.
Methodology
I conducted 14 interviews with people who recently joined the AI safety field. Most of them got interested in AI safety more than 6 months and less than 18 months ago. I had a script for the interviews, but they were semi-structured and each one differed somewhat from the others, so the data I collected is qualitative rather than quantitative. My goal is to get intuitions about the everyday problems of my interviewees and possible bottlenecks in the field, so I can generate ideas for field-building projects. Also, the sample size is too small for meaningful statistical analysis, so I decided not to focus on counting percentages of particular responses, since they might be misleading.
Results
How do people get interested in AI safety? Are there any common patterns in their stories, or do they differ significantly?
Several people said they got interested in AI safety by reading Nick Bostrom's book Superintelligence. I did not find any other common patterns.
What projects created by the EA and AI safety communities are the most valuable for people who are completely new to the field?
Two projects people often mentioned as valuable to them are 80,000 Hours career advising and the AGI Safety Fundamentals course.
80,000 Hours career advising helps people better understand their preferences and abilities, connects them with the right people, and suggests next steps. AGI Safety Fundamentals is a course covering the basics of AI safety, which people can complete on their own or with a learning group.
These projects helped my interviewees, but in their current state they seem hard to scale. Right now they help only a fraction of the people applying to them because their resources are limited.
What do interviewees think about AGI-related existential risks, and how does this affect their mental health?
My interviewees' AGI timelines vary widely, from several years to several decades. Some think that we are basically doomed, while others put P(Doom) below 10%. This distribution seems to resemble that of the field in general.
The effect of doomerism on mental health also varies a lot. Some people do not report any AGI-related anxiety at all, but nearly half of the respondents report anxiety or depressive symptoms ranging from mild to strong and debilitating. Some people also mentioned that they do not make plans far into the future. One of my interviewees said that they only make short-term financial investments, and several others mentioned that they are unsure whether they want to have kids if the world is going to end anyway.
Do most people struggle to get a position in the AI safety field?
Only 2 of my 14 interviewees have a financially stable job in the AI safety field, although many of them have applied to numerous fellowships and positions in AI safety organizations. Two interviewees said they don't know how to earn money by contributing to AI safety, and they only continue doing it because they have enough money to do whatever they want.
I recognized three patterns of career development among people without stable positions:
Is the AI safety field hard to navigate for newcomers?
Almost all my interviewees mentioned that the AI safety field is hard to navigate, especially for people with backgrounds unusual for the field, such as psychology or product management in startups. They mentioned that there are many research topics and many people to follow, and that this information is usually not structured in a newcomer-friendly way.
There are several projects aimed at mitigating this problem. As I mentioned earlier, many of my interviewees said that the AGI Safety Fundamentals course and 80,000 Hours career advising helped them navigate the field and better understand the career paths they want to pursue. However, these projects do not eliminate the problem completely, as the field is very complex and consists of many narrow subfields that remain hard for newcomers to navigate.
Also, both AGI Safety Fundamentals and 80,000 Hours career advising accept only a fraction of the applications they receive, since they have limited capacity, so they mitigate the problem but do not solve it completely.
A number of the interviewees mentioned that the AI safety community is very helpful, and that people are generally willing to help newcomers navigate the field.
Is there demand for mentorship or guidance from experienced people?
Despite the problems I described earlier, there are far more people seeking junior positions than the field can absorb.
This, combined with the complexity of the field and its many specialized subfields, makes cutting-edge research hard to navigate. It is hard to know what to do to get a job and make a meaningful contribution, and there are too few people available to steer newcomers in the right direction.
For example, one interviewee said they are interested in interpretability research for AI safety, but they are unsure of the best ways to do useful research, and the only people qualified to steer them in the right direction are those doing interpretability research at a small number of organizations such as OpenAI or Anthropic.
Do people feel emotionally isolated due to the lack of connections with people who care about AI safety?
Almost half of the interviewees said they don't have people around them who are also interested in AI safety. Almost all of my interviewees who live outside the US and the UK mentioned this problem, and a number of them said that if they want to pursue a career in AI safety they will probably have to move to the US or the UK, which is another barrier.
How hard is it to keep up with AI safety research, news, and other important information?
Most of my interviewees did not mention any problems keeping up with important information in the AI safety field, but some said that a lot of important material is scattered across many places, such as LessWrong, personal blogs, Twitter, and news websites. These bits of valuable information are usually poorly structured, so it is hard to digest and keep up with everything.
Study limitations
Sample biases
The people I interviewed are mostly members of the "AI alignment" Slack community, and it takes some knowledge of the field to get there, so I suspect that people who got interested in AI safety very recently, or who have a more casual interest in AI alignment, are underrepresented in my sample.
Another possible bias stems from the fact that when I asked people for interviews, I stated that I wanted to ask about their problems, so people who don't experience problems in AI safety might have been less inclined to contact me.
Heterogeneity among respondents
Due to the relatively small sample size and the heterogeneity among respondents in terms of location, background, interests, and employment status, I did not find patterns within specific groups of my target audience (e.g. people interested in AI governance, or people from the EU). This might be possible with a bigger sample size or more uniformity among respondents.
Discussion
One of my interviewees mentioned that they know a couple of talented people who initially got interested in AI safety but, because of the problems I discuss in this post, lost interest and decided to work on other topics. It was sad to hear. The field might have an enormous impact on humanity's future, and these problems make it unfriendly to bright people who want to contribute to it.
I believe that the Overton window on public perception of AI-related risks is shifting, and that the AI safety field will soon experience a big surge of money and talent. Some problems will be much easier to solve with this influx of money, other resources, and new people with new expertise, but I believe other problems require action now, before this surge.
To conclude, I want to share my thoughts on the projects I would love to see, so we can alleviate critical bottlenecks now and help the field scale in the future.
Onboarding into the field for beginners
As I showed earlier, it is hard for beginners to understand where to start. There are many subfields in AI safety, and often there are no beginner-friendly introductions to them.
Most of my interviewees reported feeling disoriented when they first tried to navigate the field. Although projects like 80,000 Hours career advising, AGI Safety Fundamentals, and others mitigate this problem to some degree, they do not resolve it completely, and it also seems to me that they cannot easily be scaled at the moment.
This led me to two ideas: a. It is worth supporting the existing projects that bring the most value and developing them in a way that makes them easier to scale. b. People like the target audience of my research, who are not top researchers but are competent enough to guide people who don't know the field at all, could offer this kind of advising. In my experience, there are plenty of people who might be ready to do it, and I think such advising could bring a lot of value.
Scalable Mentorship
As I discussed earlier, one of the major bottlenecks in the AI safety field is that there are far more "junior" people than the field can absorb. Since the field is very diverse and rapidly evolving, introductory projects like the AGI Safety Fundamentals course, with a defined curriculum and learning groups, seem to work better for absolute beginners, while for more experienced people mentorship seems to me a natural solution. There are projects focused on this problem, for example SERI MATS, GovAI, PIBBSS, and AI Safety Camp. But to my knowledge, competition for these programs is fierce, and they accept only a fraction of applicants.
In my opinion, this problem is a major bottleneck for the AI safety field in general, and the most important to solve among those I discuss here.
Mental health and doomerism
Some of my interviewees mentioned that they feel humanity might be doomed, and they experience anxiety and depressive moods because of it. Some said that they do not make long-term investments, are unsure whether they should bring children into a world they see as doomed, or do not think about their long-term health.
Also, as I described earlier, some of my interviewees mentioned that they feel lonely: they have nobody to talk to about AI safety in their daily lives.
I think these problems are important and get less attention than they deserve. They matter both because I generally prefer people not to suffer, and because mentally healthy people are more productive than those who suffer, and the productivity of AI safety researchers is very important for the world.
Unfortunately, my current research was not meant to focus on this particular topic, so I did not explore it in depth, but as a clinical psychologist I would love to. If you believe you are on the doomer side and this affects your mental health or your life choices, feel free to write me a direct message; I will be happy to talk to you and, hopefully, provide some help if you think you need it.
Post scriptum
I would be glad to implement my ideas, as well as any other good field-building ideas, so feel free to send me a direct message on this topic if you are interested in this kind of work.