Epistemic Status: Low-Medium (I've spent 25-30 hours thinking about and discussing this idea as a non-expert in AI Safety)
 

Disclaimer: This claim may be biased by my work in the Bay Area this summer, but it seems to me that most individuals interested in AI safety or in reducing existential risks from AGI are leaning towards doing technical AI research. Most AI safety community builders (in my limited knowledge) appear to follow the same playbook: find the best mathematicians and programmers for technical research in AI Alignment. While I do believe there is tremendous value in that, I also think individuals without a technical background can contribute to a field that has so far been mostly inaccessible to them. Thus, I want to outline why I think this can be valuable, what the intersection of different mainstream academic fields and AI Safety might look like, how we could do this outreach, and the potential pitfalls of doing so.

TL;DR: It seems important to involve non-technical experts in AI safety research and governance, both to benefit from their unique knowledge and skills and to encourage interdisciplinary collaboration that can lead to innovative ideas and insights. Experts from fields such as economics, political science, and public policy can all contribute valuable perspectives and expertise to AI safety. For example, economists can provide an understanding of incentives and of the potential impacts of AI on the economy, while political scientists and public policy professionals can help develop policies and governance frameworks for the development and use of AI.

Why should we do outreach to non-technical academics and professionals?

  1. Non-technical academics and professionals can bring diverse perspectives and expertise to the field of AI safety: It is important to involve people from a variety of backgrounds in order to benefit from the unique knowledge and skills that they bring. For example, a philosopher may bring insights into ethical considerations, while a sociologist may offer insights into how AI could impact society. By including people from diverse fields, we can more effectively address the complex and multifaceted challenges of AI safety.
  2. Interdisciplinary collaboration can lead to innovative ideas and insights: By combining knowledge from various fields, we can generate approaches and solutions that might not emerge within a single discipline. Interdisciplinary work has a track record of producing progress on complex problems, which makes it a valuable approach for tackling multifaceted issues such as AI safety.
  3. Involving non-technical experts can save time for AI safety researchers: When AI safety researchers need to learn about a topic outside their own field, they can save time by seeking out relevant experts. For example, a computer scientist working on a project related to the economic impact of AI may get up to speed far more efficiently by consulting an economist.
     

What does this intersection between different fields and AI Safety look like?


Some examples come to mind that illustrate what this intersection could look like.

1. Economics

Economists have valuable insights and expertise that can help us build safer AI systems. One specific example that comes to mind is mechanism design, the branch of microeconomics concerned with designing institutions and rules that achieve desired outcomes even when participants act in their own interest. Aligning AI systems with human values and ensuring that they do not pose unnecessary risks is a central challenge in AI safety, and mechanism design could inform governance structures and regulations that push development in that direction.
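To make the idea of mechanism design concrete, here is a minimal sketch of a second-price (Vickrey) auction, a textbook mechanism designed so that bidding one's true value is a dominant strategy. The bidders and numbers are made up for illustration; this is not a proposal for AI governance, just a small example of designing rules so that self-interested behaviour produces the outcome we want.

```python
# Second-price (Vickrey) auction: the highest bidder wins but pays the
# second-highest bid, which makes truthful bidding a dominant strategy.
# Bidder names and values below are purely illustrative.

def vickrey_auction(bids):
    """bids: dict mapping bidder name -> bid amount. Returns (winner, price)."""
    if len(bids) < 2:
        raise ValueError("Need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # the winner pays the second-highest bid
    return winner, price

print(vickrey_auction({"alice": 120, "bob": 100, "carol": 90}))
# -> ('alice', 100)
```

The point is not the auction itself but the design pattern: choosing the rules of an institution so that participants' incentives line up with the outcome we care about, which is the same kind of reasoning AI governance work requires.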

Economists' understanding of incentives can also be useful in AI safety: it helps identify the factors that may push organizations to prioritize short-term goals over long-term risks in the development and deployment of AI systems. That understanding can inform policy frameworks or economic incentives that encourage AI organizations, such as OpenAI, to prioritize safety and accountability and to consider the long-term consequences of their actions. This could include measures such as requiring organizations to bear a portion of the liability for any negative consequences of their AI systems, or providing financial rewards for organizations that demonstrate a commitment to safety and responsible AI practices.
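As a rough illustration of why liability changes behaviour, here is a toy model with entirely made-up numbers: a hypothetical fixed harm cost and a simple assumed relationship between safety spending and failure probability. It is a sketch of the incentive logic only, not an empirical claim about any real organization.

```python
# Toy model: a developer chooses how much to spend on safety.
# Illustrative assumptions: the probability of a harmful failure falls as
# safety spending rises, and the developer bears some share of a fixed
# harm cost if a failure occurs.

def expected_cost(safety_spend, liability_share, harm_cost=100.0):
    p_harm = 1.0 / (1.0 + safety_spend)  # assumed failure probability
    return safety_spend + liability_share * p_harm * harm_cost

def best_spend(liability_share, grid=range(0, 51)):
    # Pick the spending level on a coarse grid that minimizes expected cost.
    return min(grid, key=lambda s: expected_cost(s, liability_share))

print(best_spend(0.0))  # 0  -> with no liability, spending nothing looks "optimal"
print(best_spend(0.5))  # 6  -> bearing half the harm cost makes safety spending worthwhile
```

Under these assumptions, shifting even part of the expected harm onto the developer moves its cost-minimizing choice toward more safety investment, which is the basic rationale behind liability-based proposals.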

In addition, economists can help us understand the potential impacts of AI on the economy, including issues related to job displacement, job creation, and inequality. For example, AI could have significant consequences for the labor market, both in terms of replacing human jobs and creating new ones. Economists can help us understand these impacts and identify strategies to mitigate any negative consequences, such as providing support for workers who may be displaced by automation. Similarly, economists can help us understand the potential impacts of AI on issues related to inequality, such as the distribution of wealth and opportunities. By working together with economists and other experts from diverse fields, we can develop more comprehensive and effective strategies for addressing these issues and ensuring that AI systems are used in a safe and ethical manner.

2. Politics/Public Policy 

I think that political science and public policy students and professionals could be quite valuable for AI Governance, supporting the bigger goal of reducing x-risks from AI. Their expertise in policy analysis, regulation, and forecasting can be valuable in developing strategies for ensuring that AI systems are aligned with human values and do not pose unnecessary risks.

For example, public policy and political science students could help identify the most effective regulatory frameworks for governing AI and ensure that it is used in a responsible and ethical manner. They could also assist in developing strategies for addressing the potential impacts of AI on issues such as job displacement, inequality, and privacy.

In addition, public policy and political science students could be helpful in analyzing the political and social dynamics related to AI governance and identifying ways to build support and consensus around these issues. This could involve working with policymakers, industry stakeholders, and civil society groups to develop and advocate for effective AI governance policies.

3. Communications

Communications professionals, including writers, artists, and journalists, have the power to shape public understanding and engagement with the risks posed by advanced AI. By using their skills and platforms to educate and engage the public, they can help to build support for research and initiatives that aim to ensure the safe and responsible development of AI and reduce the existential risks posed by advanced artificial intelligence.

One way that communications professionals can contribute to AI safety is by producing high-quality journalism and other content that accurately and thoughtfully explores the potential risks and benefits of AI. For example, the Washington Post's article on the potential for AI to be racist and sexist reportedly received four times the engagement of comparable posts on social media, which suggests a significant appetite for this type of content. By producing and promoting well-researched, engaging discussions of AI safety, journalists and writers can help educate and inform the general public and build support for initiatives that address these risks.

In addition to journalism, fiction writing can also be a powerful tool for raising awareness about existential risks from advanced AI. While dystopian science fiction has long explored the dangers of AI, more realistic and accurate fiction that is written in consultation with experts in AI safety could be especially effective in helping the public to understand and grapple with these complex issues. By using their storytelling skills to paint a picture of what the future could look like with advanced AI, writers can help to inspire thought and discussion about how we can mitigate these risks.

Finally, artists and curators can also help to raise awareness about AI safety through their work. Artistic mediums such as comics and visual art can provide an engaging and accessible way to explore complex issues and can be particularly effective in reaching audiences who might not be interested in traditional forms of content. By creating and promoting artwork that touches on the risks of advanced AI, artists and curators can help to build understanding and support for initiatives that aim to reduce these risks.
 

How can this outreach be done?

As a first pass, here are some ways I think this outreach could be done:

  1. Host events such as Intro to AI Safety talks and workshops in non-CS faculties of universities: These can also be held in a variety of other settings, such as community centers, libraries, or professional associations, and can be tailored to the specific interests and needs of different audiences, for example "AI Safety for Economists" or "AI Safety for Public Policy Students".
  2. Involve non-technical professionals as consultants or collaborators: By reaching out to experts in other fields and inviting them to help with specific projects, it is possible to bring new insights and approaches to bear on AI safety research and initiatives. For example, more AI safety researchers could collaborate with economists to explore the potential economic impacts of advanced AI, understand how economic growth affects AI capabilities, or design mechanisms for ensuring its safe and responsible development.
  3. Create more accessible materials for learning about AI safety: While there are already many excellent resources available for those with a technical background, such as AGI Safety Fundamentals, these can be difficult for the general public to understand. By developing materials that are more accessible to non-technical audiences, it is possible to broaden the reach of AI safety research and initiatives. For example, a group of AI safety experts could create a series of videos or interactive resources that explain key concepts in a way that is easy for non-technical audiences to follow. Even materials that simply explain how modern AI systems work at a basic level would flatten the learning curve for people just getting started.
  4. More interdisciplinary research programs: There have been recent efforts to engage people from different disciplines, such as PIBBSS, and these seem quite promising. I'd be curious to learn how successful their outcomes have been, and I believe that more programs for other academic fields, run at a larger scale, could be quite impactful.

 

Some reasons not to do outreach to traditional academics and non-technical professionals

In this section, I’d like to red-team my own arguments by highlighting potential reasons to not work on outreach to non-technical experts for assisting in AI Safety work. 

  1. Most non-technical professionals might not be interested: Many non-technical professionals, particularly those who are well established in their careers, might simply not be interested in AI safety. They might have other priorities and concerns, and may not be willing to devote time and resources to this issue.
  2. Encouraging novel ideas might be difficult: Traditional academics and other non-technical professionals might be more attached to established ideas and approaches, and might not be as open to radical or unconventional thinking as people already working on AI safety. This could limit the scope and impact of AI safety research and initiatives.
  3. Too much interest in AI safety might not be a good thing: While it is generally desirable to build support and understanding for AI safety, having too many people interested in the issue could present its own challenges, such as making the community harder to coordinate and lead.
  4. A larger community could lead to more disagreements: With more people involved, it might be harder to achieve consensus and maintain a cohesive group, and more competing agendas could make it harder to steer the community in productive directions.


Concluding Thoughts

I believe it is worth finding ways to minimize these risks. One way to mitigate them is to conduct a thorough screening process so that only qualified and reliable individuals are invited to join the community. By examining candidates' past experience, background, and references, it is possible to identify those who have the skills and commitment necessary to contribute to AI safety research and initiatives.

Additionally, it may be useful to focus on targeted outreach to the best and potentially most impactful professionals and experts in specific fields, such as writing, economics, political science, etc. I do think that these individuals might bring valuable perspectives and expertise to the field and can help to ensure that AI safety research and initiatives are informed by a diverse range of disciplines.

Comments

Thanks - interesting article. As someone from a Communications background who has retrained in AI & Data Science, I see both sides of the discussion. The issue for many leaders of organisations that could use AI is that they lack basic knowledge of AI and AI safety, so they are not able to ask the right questions or bring the right expertise together. So executive education on the basics of the possibilities and pitfalls of AI is essential.

I am in strong agreement here. There are definitely aspects of AI Safety that rely on a confluence of skills beyond technical research. This is clearly a multi-disciplinary endeavor that has not (yet) fully capitalized on the multiple perspectives talented, interested people can bring.

One cautionary tale comes from my own perspective that there is a bit of a divide between "AI Safety" folks and "AI Ethics" folks, at least when it comes to online discourse. There isn't a ton of overlap, and there is potential animosity between strong adherents of one perspective or the other. I think this is born of a scarcity mindset, where people see a tradeoff between X-risk focus and other ethical goals, like fairness.

However, while that divide seems to be real (and potentially well-founded) in some conversations, many safety practitioners I know are more pragmatic in their approaches. Institutional capacity building, regulations focusing on responsible AI dimensions, technical alignment research - all can coexist and represent different and complementary hypotheses on how we can best develop AI systems.

The full breadth of this endeavor is beyond any one community, but a problems-focused view could be attractive to a broad range of people and benefit from perspectives that have not typically been part of this community. It's inevitable that many more people will want to work in this space, given the recent popularization of generative systems.

Also consider including non-ML researchers in the actual org building - project management, for example, or other administration folks: people who have experience ensuring organizations don't fail. ML researchers need to eat, pay their taxes, etc.

Another risk is that if we broaden the focus too much, it could distract us from preventing existential risks.