Epistemic status

Written as a non-expert to develop and get feedback on my views, rather than to persuade. It will probably be somewhat incomplete and inaccurate, but it should provoke helpful feedback and discussion.

Aim

This is the first part of my series ‘A proposed approach for AI safety movement building’. Through this series, I outline a theory of change for AI Safety movement building. I don’t necessarily want to immediately accelerate recruitment into AI safety because I take concerns (e.g., 1,2) about the downsides of AI Safety movement building seriously. However, I do want to understand how different viewpoints within the AI Safety community overlap and aggregate.

I start by attempting to conceptualise the AI Safety community. I originally planned to outline my theory of change in my first post. However, when I got feedback, I realised that i) I conceptualised the AI Safety community differently from some of my readers, and ii) I wasn’t confident in my understanding of all the key parts.

TLDR

  • I argue that the AI Safety community mainly comprises four overlapping, self-identifying groups: Strategy, Governance, Technical and Movement Building.
  • I explain what each group does and what differentiates it from the other groups.
  • I outline a few other potential work groups.
  • I integrate these into an illustration of my current conceptualisation of the AI Safety community.
  • I request constructive feedback.

My conceptualisation of the AI Safety community

At a high level of simplification and low level of precision, the AI Safety community mainly comprises four overlapping, self-identifying groups working to prevent an AI-related catastrophe. These groups are Strategy, Governance, Technical and Movement Building. They are illustrated below.

https://lh3.googleusercontent.com/6o3RLxNR5hhwRULlcoYhXcbe_yS-fLosxMh2m5pHbwXmlvEPJ1cQ2kwD5IYcLB1Jt6hBpf-PnGd-Rkm8lI70bC-5MXMaVbHZI8a9_LTvU7y3Q249FHgm4BpHQVpgFfD5KsFUilZE-FKrSoDTNDZTf19og74TyQJUIpPJae4XT7-_vRari4vwn-isv_TBfA

We can compare the AI Safety community to a government and relate each work group to a government body. I think this helps clarify how the parts of the community fit together (though of course, the analogies are imperfect). 

Strategy

The AI Safety Strategy group seeks to mitigate AI risk by understanding and influencing strategy. 

Their work focuses on developing strategies (i.e., plans of action) that maximise the probability that we achieve positive AI-related outcomes and avoid catastrophes. In practice, this includes researching, evaluating, developing, and disseminating strategy (see this for more detail). 

They attempt to answer questions such as i) ‘how can we best distribute funds to improve interpretability?’, ii) ‘when should we expect transformative AI?’, or iii) ‘what is happening in areas relevant to AI?’.

Due to a lack of ‘strategic clarity/consensus’, most AI strategy work focuses on research. However, Toby Ord’s submission to the UK Parliament is arguably an example of developing and disseminating an AI Safety-related strategy.

We can compare the Strategy group to the Executive Branch of a government, which sets a strategy for the state and parts of the government, while also attempting to understand and influence the strategies of external parties (e.g., organisations and nations). 

AI Safety Strategy exemplars: Holden Karnofsky, Toby Ord and Luke Muehlhauser. 

AI Safety Strategy post examples (1,2,3,4). 

Governance

The AI Safety Governance group seeks to mitigate AI risk by understanding and influencing decision-making. 

Their work focuses on understanding how decisions are made about AI and what institutions and arrangements help those decisions to be made well. In practice, this includes consultation, research, policy advocacy and policy implementation (see 1,2 for more detail).

They attempt to answer questions such as i) ‘what is the best policy for interpretability in a specific setting?’, ii) ‘who should regulate transformative AI, and how?’, or iii) ‘what is happening in areas relevant to AI Governance?’.

AI strategy and governance overlap in cases where i) the AI Safety governance group focuses on its internal strategy, or ii) AI Safety governance work is relevant to the AI Safety strategy group (e.g., when strategizing about how to govern AI).

Outside this overlap, AI Safety Governance is distinct from AI Safety strategy because it focuses on relatively specific and concrete decision-making recommendations (e.g., ‘Organisation X should not export semiconductors to country Y’) rather than relatively general and abstract strategic recommendations (e.g., ‘we should review semiconductor supply chains’).

We can compare the Governance group to a government’s Department of State, which supports the strategy (i.e., long term plans) of the Executive Branch by understanding and influencing foreign affairs and policy. 

AI Safety Governance group exemplars: Allan Dafoe and Ben Garfinkel.

AI Safety Governance group post examples (see topic).

Technical 

The AI Safety Technical group seeks to mitigate AI risk by understanding and influencing AI development and implementation.

Their work focuses on understanding current and potential AI systems (i.e., interactions of hardware, software and operators) and developing better variants. In practice, this includes theorising, researching, evaluating, developing, and testing examples of AI (see this for more detail).

They attempt to answer questions such as i) ‘what can we do to improve interpretable machine learning?’, ii) ‘how can we safely build transformative AI?’, or iii) ‘what is happening in technical areas relevant to AI Safety?’.

AI technical work overlaps with AI strategy work in cases where i) the AI technical group focuses on its internal strategy, or ii) AI Safety technical work is relevant to the AI Safety strategy group (e.g., when strategizing about how to prevent AI from deceiving us).

AI technical work overlaps with AI governance work where AI Safety technical work is relevant to policy and decision-making (e.g., policy to reduce the risk that AI will deceive us).

Outside these overlaps, AI Safety technical work is distinct from AI governance and strategy work because it focuses on technical approaches (e.g., how to improve interpretable machine learning) rather than strategic approaches (e.g., how to distribute funds to improve interpretability) or governance approaches (e.g., what the best policy for interpretability is in a specific setting).

We can compare the Technical group to the US Cybersecurity and Infrastructure Security Agency (CISA) which supports the strategy (i.e., long term plans) of the Executive Branch by identifying and mitigating technology-related risks. While CISA does not directly propose or drive external strategy or policy, it can have indirect influence by impacting the strategy of the Executive Branch, and the policy of the Department of State.

AI Safety Technical group exemplars: Paul Christiano, Buck Shlegeris and Rohin Shah.

AI Safety Technical group post examples (1).

Movement Building

The AI Safety Movement Building group seeks to mitigate AI risk by helping the AI Safety community to succeed. 

Their work focuses on understanding, supporting and improving the AI Safety community. In practice, this includes activities to understand community needs and values, such as collecting and aggregating preferences, and activities to improve the community, such as targeted advertising, recruitment and research dissemination. See 1,2 for more detail.

They attempt to answer questions such as i) ‘what does the AI safety community need to improve its work on interpretable machine learning?’, ii) ‘when does the AI safety community expect transformative AI to arrive?’, or iii) ‘what is happening within the AI safety community?’.

AI Safety Movement Building work overlaps with the work of the other groups where (i) Movement Building work is relevant to their work, or (ii) the other groups' work is relevant to Movement Building. It also overlaps with Strategy when the Movement Building group focuses on its internal strategy.

Outside these overlaps, movement building work is distinct from the work of the other groups because it focuses on identifying and solving community problems (e.g., a lack of researchers or a lack of coordination) rather than addressing strategy, governance or technical problems.

We can compare the Movement Building group to the Office of Administrative Services and USAJOBS, organisations which support the strategy (i.e., long term plans) of the Executive Branch by identifying and solving resource and operational issues in government. Such organisations do not manage strategy, policy, or technology, but they indirectly affect each via their impact on the Executive Branch, the Department of State and CISA.

AI Safety Movement Building group exemplars: Jamie Bernardi, Akash Wasil and Thomas Larsen.

AI Safety Movement Building group post examples (1).

Outline of the major work groups within the AI safety community

  • Strategy. Focus: developing strategies (i.e., plans of action) that maximise the probability that we achieve positive AI-related outcomes and avoid catastrophes. Government analog: an Executive Branch (e.g., the Executive Office of the President).
  • Governance. Focus: understanding how decisions are made about AI and what institutions and arrangements help those decisions to be made well. Government analog: a foreign affairs department (e.g., the Department of State).
  • Technical. Focus: understanding current and potential AI systems (i.e., interactions of hardware, software and operators) and developing better variants. Government analog: a technical agency (e.g., CISA).
  • Movement Building. Focus: understanding, supporting and improving the AI Safety community. Government analog: a human resources agency (e.g., USAJOBS).

Many people in the AI Safety community are involved in more than one work group. In rare cases, someone might be involved in all four: for instance, they may do technical research, consult on strategy and policy, and give talks to student groups. In many cases, movement builders are involved with at least one of the other work groups. As I will discuss later in the series, I think that cross-group involvement is beneficial and should be encouraged.

Other work groups

I am very uncertain here and would welcome thoughts/disagreement.

Field-builders

Field-building refers to influencing existing fields of research or advocacy, or developing new ones, through advocacy, creating organisations, or funding people to work in the field. I regard field-building as a type of movement building with a focus on academic/research issues (e.g., increasing the number of supportive researchers, research institutes and research publications).

Community builders

I treat community building as a type of movement building focused on community growth (e.g., increasing the number of contributors and the sense of connection within the AI Safety community).

AI Funders

I regard any AI safety-focused funding as a type of movement building (potentially at the overlap of another work group), focused on providing financial support.

AI Ethics

The ethics of artificial intelligence is the study of the ethical issues arising from the creation of artificial intelligence. I regard safety-focused AI ethics as a subset of strategy, governance and technical work.

Summary of my conceptualisation of the AI Safety community

Based on the above, I conceptualise the AI Safety community as shown below.

https://lh5.googleusercontent.com/p-8aaHT8m8spXoyHd6hSmHw6UoFnr062HgT5B3eVeIK8_cVUxiHkGSJw72WhI_5nZ3cPaAUKNmGeYss9F3R6JeXOyvSqfE05H3sOfr87uKXxxcms0cBhJVctZkWSzckRcqDQwSPy3AQMoqd_Xhm1tOczov0lMTJ-qTxHObeJMYAZ17BkcjQBsvULeJt4vA

Feedback

Does this all seem useful, correct and/or optimal? Could anything be simplified or improved? What is missing? I would welcome feedback. 

What next?

In the next post, I will suggest three factors/outcomes that AI Safety Movement Building should focus on: contributors, contributions and coordination. 

Acknowledgements

The following people helped review and improve this post: Amber Ace, Bradley Tjandra, JJ Hepburn, Greg Sadler, Michael Noetel, David Nash, Chris Leong, and Steven Deng. All mistakes are my own.

This work was initially supported by a grant from the FTX Regranting Program to allow me to explore learning about and doing AI safety movement building work. I don’t know if I will use it now, but it got me started.

Support

Anyone who wants to support me to do more of this work can help by offering:

  • Feedback on this and future posts
  • A commitment to reimburse me if I need to repay the FTX regrant 
  • Any expression of interest in potentially hiring or funding me to do AI Safety movement building work.
