Authors: Siebe Rozendal, Justin Shovelain, David Kristoffersson

Crossposted to the Effective Altruism Forum


To achieve any ambitious goal, some strategic analysis is necessary. Effective altruism has ambitious goals and focuses heavily on doing research. To understand how to best allocate our time and resources, we need to clarify what our options in research are. In this article, we describe strategy research and relate it to values research, tactics research, informing research, and improvement research. We then apply the lens of strategy research to existential risk reduction, a major cause area of effective altruism. We propose a model in which the marginal value of a research type depends strongly on the maturity of the research field. Finally, we argue that strategy research should currently be given higher priority than other research in existential risk reduction because of the significant amount of strategic uncertainty, and we provide specific recommendations for different actors.


Effective altruism is regularly framed as “figuring out how to do the most good, and then doing it.” However, figuring out how to do the most good is not easy. Different groups reach different conclusions. So how do we figure out how to do the most good?

Quite obviously, the first step is to figure out our values. We need to know roughly what we mean by ‘the most good.’ However, once our moral uncertainty is significantly diminished, what is the next step in figuring out how to do the most good? We believe the next step should be strategy research: high-level research on how to best achieve a high-level goal. A brief case for strategic analysis was made by Nick Bostrom in Superintelligence (p. 317):

"Against a backdrop of perplexity and uncertainty, [strategic] analysis stands out as being of particularly high expected value. Illumination of our strategic situation would help us target subsequent interventions more effectively. Strategic analysis is especially needful when we are radically uncertain not just about some detail of some peripheral matter but about the cardinal qualities of the central things. For many key parameters, we are radically uncertain even about their sign…”

This was written in the context of existential risk from AI. We believe it applies to existential risks in general, and that strategy research should be a core part of other effective altruism areas as well. However, this leaves many open questions. What is strategy research? How does strategy research relate to other types of research? When should strategy research be prioritized and when should it not be? These questions are relevant to potential and current researchers, research managers, and funders. The answers are necessary to allocate resources effectively.

This article also provides motivation for the founding of the existential risk strategy research organization Convergence. Convergence will be publishing more strategic analyses going forward. This article represents our current best (and somewhat simplified) understanding of the concepts outlined. Because we strive to clarify basic concepts and arguments, we have left out some of the finer details and complexities. We intend to further disentangle, clarify, and develop the ideas in the future. Furthermore, the underlying ideas presented here generalize to other fields, but some fields are in a different stage than existential risk reduction is and therefore need different research priorities.

To clarify what we are arguing for, we first describe strategy research and relate it to other types of research. We then argue that strategy research is important for reducing existential risk. We propose that the marginal value of strategy research depends on the maturity of the research field. We conclude that the current immaturity of the existential risk research field makes further strategy research highly valuable.

What is strategy research?

Strategy research seems intuitively valuable. But what is it about more precisely? Understanding this and the different options in research will help us make good decisions about how to allocate our resources and how to direct our research efforts. In this section, we position strategy research within a framework of different research types in effective altruism, we give an explicit definition, and we distinguish our terms from other commonly used terms.

Five classes of effective altruism research

To put strategy research in the context of other types of research, we have developed a classification of different research types. Naturally, this classification is a simplification, and research will often not fit neatly into a single category.

The research spine of effective altruism: three levels

We can approach ‘figuring out what to do’ at three different levels of directness (inspired by the same kind of goal hierarchy as the Values-to-Actions Chain). Most indirectly, we can ask ‘what should we value?’ We call that values research, which is roughly the same as ethics. From our values, we can derive a high-level goal to strive for. For longtermist values, such a goal could be to minimize existential risk.[1] For another set of values, such as animal-inclusive neartermism, the high-level goal could be to minimize the aggregate suffering of farm animals.[2]

More directly, we can ask ‘given our goal, how can we best achieve it?’ We call the research to answer that question strategy research. The result of strategy research is a number of strategic goals embedded in a strategic plan. For example, in existential risk reduction, strategy research could determine how to best allocate resources between reducing various existential risks based on their relative risk levels and timelines.

Most directly, we can ask ‘given our strategic plan, how should we execute it?’ We call the research to answer that question tactics research. Tactics research is similar to strategy research, but operates at a more direct level. This makes tactics more specific. For example, in existential risk reduction, tactics research could take one of the subgoals from a strategic plan, say ‘reduce the competitive dynamics surrounding human-level AI’, and ask a specific question that deals with part of the issue: ‘How can we foster trust and cooperation between the US and Chinese governments on AI development?’ In general, less direct questions have more widely relevant answers, but they also provide less specific recommendations for actions to take.

Finally, the plans can be implemented based on the insights from the three research levels.

Each level of research requires some inputs, which it then processes to produce some outputs for the more direct level of research. For example, strategy research requires a goal or value to strive for, and this needs to be informed by moral philosophy.[3] When strategy research is skipped, tactics research and implementation are only driven by implicit models. For example, a naive and implicit model is ‘when something seems important, try to persuade influential people of that.’ Acting on such a model can do harm. In emerging research fields, implicit models are often wrong because they have received less thought and have not been exposed to feedback. For tactics research and implementation to be effective, they should often be driven by a well-informed and thoughtfully crafted strategy.

The boundary between strategy and tactics is gradual rather than sharp. Thus, some research questions fall somewhere in between. Note as well that implementation is very simplified here; it refers to a host of actions. Implementation can be ‘doing more research’, but it can also be ‘trying to change opinions of key stakeholders’ or ‘building up research capacity.’

A spine is not sufficient: informing and improvement research

You could say that these levels form a spine: they create a central structure that supports and structures the rest of the necessary building blocks. For instance, strategic clarity makes information more useful by giving it a structure to fit into. To illustrate this, imagine learning a piece of information about an improved method of gene writing. Without any strategic understanding, it would just be an amorphous piece of information; it would not be clear how learning it should affect your actions. However, with more strategic clarity it would be more clear how this new method could affect important parameters, the possible consequences of that, and how one should best react to it.

Still, a spine is not a complete body; it needs additional building blocks. Strategic clarity cannot be achieved without being sufficiently informed about the state of the world, or without understanding how to effectively conduct research in a domain.

Therefore, in addition to the research levels, we also identify two additional research classes:[4] informing research and improvement research. Informing research mostly concerns questions about what the world is like. These can be very important questions, and science has built an enormous trove of such knowledge that effective altruism can draw from. Improvement research helps to improve other types of research by identifying important considerations, by improving existing research methods, and by identifying useful models from other fields. Philosophy of science, epistemology, mathematics, economics, and computer science can all be used for improvement research. For example, improvement research focused on ethics could discuss the role that intuitions should have in the methodology of moral philosophy.

A definition of strategy research

Based on the model of the research classes above, we will formulate a definition of strategy research. We want a definition that is simple and captures the core purpose of strategy research. Strategy research is an imprecise concept, so the definition should reflect that. We also want the term to correspond to how people have used it in the past. For these reasons, we propose the following definition for strategy research:

   High-level research on how to best achieve a high-level goal.

Thus, the central strategy question is “how to best achieve our high-level goal?” And to achieve a goal, you implicitly or explicitly need to form and act on plans. The challenge of strategy research is to figure out the best plans: those that best achieve a particular high-level goal given the existing constraints. To figure out the best plans, a lot of different activities are necessary. It requires, among other things, understanding which parts of the world are relevant for making plans, what actions lead to what consequences, how to compose actions into plans, and how to prioritize between plans.

This means that, in order to figure out the best plans, strategy research will involve a substantial amount of informing research, as well as improvement research. For example, if you have a model of how different risk levels and timelines should affect resource allocation, you also need to know what the different risk levels and timelines are (i.e. informing research) in order to form a comprehensive strategic plan. This research is high-level because it is focused on plans to achieve a high-level goal. In contrast, research on figuring out one’s values is top-level, and research on how to best achieve a tactical goal is low-level.[5]

How do other research terms in effective altruism relate to this framework?

In effective altruism, there have been many terms used for different types of research. Each organization uses a term slightly differently, and it is difficult to find precise definitions of these terms. Let’s briefly consider some research terms in effective altruism that relate to strategy research.

Cause prioritization, prioritization research, global priorities research

These three terms have been used interchangeably to describe roughly similar types of research: prioritization between and within cause areas.[6] Prioritization between cause areas overlaps significantly with values research, although in practice it often does not deal with the more fundamental issues in ethics. Prioritization within cause areas overlaps significantly with strategy research.

Macro-strategy research

This term is mostly used by FHI, and seems to refer to uncovering crucial considerations with regard to improving the long-term future. Crucial considerations can “radically change the expected value of pursuing some high-level sub goal.”[7] A high-level subgoal refers here to things like “increase economic progress” or “decrease funding into AGI research”. The intention appears to be a focus on the higher-level questions of strategy research (hence “macro”), although FHI also classifies their paper on the unilateralist’s curse as macro-strategy. That concept does not seem to be a crucial consideration, but rather a strategic consideration for multiple existential risks.

AI strategy

As the term has been used in effective altruism, AI strategy research is simply strategy research focused on reducing existential risk from AI specifically.[8]

Charity evaluation

A number of organizations evaluate interventions and charities, or select charities to donate to (e.g. GiveWell, Animal Charity Evaluators, Open Philanthropy Project, Founders Pledge, Rethink Priorities). Although we would not classify charity evaluation itself as strategy research, it heavily relies on strategic views, and many of the mentioned organizations perform some kind of strategy research. For example, given neartermist human-centric values, we would call GiveWell’s research to identify their priority programs strategy research, and would call their evaluation of charities tactics or tactics-informing research.

Why strategy research is important to reduce existential risk

Because of strategic uncertainty, we believe that more strategy research is currently particularly important for reducing existential risk. In this section, we give our main reasons and support them with a model in which the value of a research class depends on the maturity of the field. We then note some other considerations that affect the importance of strategy research and discuss how strategy research could do harm.

The current stage of existential risk research makes strategy research valuable

Strategy research makes the most sense when (1) a community knows roughly what it wants (e.g. to reduce existential risk), (2) this goal is unlikely to undergo substantial changes from further research or reflection on values, and (3) the field has not yet reached strategic clarity. This stage of strategic uncertainty, in between the stages of value uncertainty and strategic clarity, is where the expected value of strategy research is the highest.

Here we argue that doing strategy research should be a high priority because it is currently unclear how to best achieve existential risk reduction. Strategic uncertainty means that we are uncertain which actions are (in expectation) valuable, which are insignificant, and which are harmful. This implies that there is valuable information to be gained.

We are currently strategically uncertain

To see whether we are actually strategically uncertain, we can ask what strategic clarity would look like. The further we are away from that ideal, the more strategically uncertain we are. With strategic clarity we would know what to do. Specifically, we would know...

  • who the relevant actors are
  • what actions are available to us
  • how the future might develop from those actions
  • what good sequences of actions (plans) are
  • how to best prioritize plans
  • that we have not missed any important considerations

We currently have only a basic understanding of each of these in existential risk reduction. The claim that we are strategically uncertain in the field of existential risk seems widely shared. For example, it is echoed in this post by Carrick Flynn, and again in Superintelligence (p. 317).

Strategic uncertainty implies there is information to be gained

The cost of strategy research is only worth it if it significantly improves our understanding of which actions are (in expectation) valuable, which are insignificant, and which are harmful. Useful information has been gained in the past by uncovering crucial considerations that had a massive influence on our current priorities and plans. These include the separate realizations that AI and synthetic biology might be existential risks. More crucial considerations could be uncovered by strategy research. In addition, there are many current open questions to which different answers would imply substantially different priorities. Examples include ‘how widely is existential risk distributed over different possible causes?’, ‘when would an AI takeoff happen?’, and ‘how likely is human civilization to recover after collapse?’. There is still substantial disagreement on these questions, and progress on these questions would reduce our strategic uncertainty.

In addition, the information needs to be acquirable with a reasonable amount of effort. Strategy research would not be valuable if it was completely intractable. We believe some actors and attempts at strategy research can succeed, but it is hard to predict success beforehand.

Strategic uncertainty implies that interacting with the ‘environment’ has a reduced net value of information

Interacting with one’s environment can be highly informative. Interacting with a complex system often yields a substantial amount of information that cannot be obtained by outside observation. For example, it is hard to assess how receptive policy makers are towards existential risk reduction without engaging with them. Interacting with them would allow efficient learning about the domain.

However, this information comes with a risk. Strategic uncertainty also implies that tactical recommendations and direct implementations can be harmful. For example, approaching the wrong policy makers or approaching them in the wrong way can reduce the chance for existential risk to be taken seriously by governments. Taking uninformed action to reduce existential risk may backfire catastrophically in hard-to-reverse and hard-to-predict ways. This reduces the net value of that action.[9]

In contrast, strategy research is less likely to directly cause harm because it gives general and imprecise recommendations. This means they are less likely to be wrong and that they are further away from implementation, which allows for more opportunities to correct mistakes. Strategy research is also self-correcting: it can change its focus and method based on its own generated insights; part of strategy research is to analyze whether we should continue doing strategy research.

A model of research value as a function of a field’s maturity

We have argued that we are currently strategically uncertain with respect to existential risk reduction and that this implies that strategy research is high priority. However, we can make a more complex model than “first solve values, then solve strategy, then solve tactics, then implement plans”. In practice, resources (e.g. capital and labour) are spread over multiple levels of research and resources become specialized. The optimal allocation of marginal resources depends on the current state of knowledge.

We propose a model in which the cumulative value of research levels (i.e. values, strategy, and tactics research) follows s-curves. S-curves are described as “fundamental patterns that exist in many systems that have positive feedback loops and constraints. The curve speeds up due to the positive feedback loop, then slows down due to the constraints.” In this section, we describe the different constraints and the positive feedback loop that creates the s-shaped curve we expect the value of a research level to exhibit.

Early phase: constraints need to be addressed

When research on a particular level (e.g. strategy research) in a particular field (e.g. x-risk reduction) is just getting started, we expect progress to be slowed down by two constraints. The first constraint is a lack of clarity on the higher level. For instance, it is not valuable to try to figure out a good strategy when you are uncertain about your values, because you are much more likely to work on questions that turn out to be not very relevant to your values. The first constraint should be addressed at the higher level.

The second constraint is that doing early research in a field is hard. There is not yet an established paradigm; the problems are messy, entangled, and vague, rather than structured, independent, and clear. What is needed in an early stage is disentanglement: structuring the research field, identifying the central questions, and clarifying concepts. This constraint cannot be addressed by research at a higher level (resolving moral uncertainty does not help us any further in our strategic uncertainty). Consequently, it needs to be addressed head-on, which means that progress will be slow at first.

Middle phase: positive feedback loops create exponential growth

The middle phase starts when the constraints become weaker. Answers to higher-level questions narrow down the range of relevant questions at the lower level. Generally, we expect that a higher proportion of research projects produce value, because irrelevant questions can be better identified beforehand. Furthermore, as the field becomes more structured, each successful piece of research tends to identify multiple new and compelling research questions. This is a period of exponential growth.

Late phase: new constraints arise

The late phase starts when new constraints arise. One constraint is that the big questions have either been solved or have been found intractable. The remaining questions will either be conceptually hard, require information that is not (yet) available, or be lower-level questions. At this point, the lower research level has progressed through its own early phase, and the marginal value of doing research at a lower research level surpasses the value of doing research at the current level.

In summary, as our insight progresses, the marginal value of research shifts towards lower-level questions. A good heuristic in this model is to ‘do research at the highest level that is most sensitive to new information’.
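The three phases above can be sketched as a logistic s-curve. The following is an illustrative sketch only, with hypothetical parameter values: the feedback term stands in for ‘each insight opens new questions’ and the constraint term for ‘tractable questions run out’.

```python
# Illustrative sketch of cumulative research value as an s-curve.
# The r * v term models the positive feedback loop (insights open up
# new questions); the (1 - v / cap) term models the constraints (the
# pool of tractable questions is finite). All parameters are
# hypothetical and chosen only to produce the qualitative shape.

def research_value_curve(r=0.5, cap=1.0, v0=0.01, steps=40):
    """Return cumulative research value over time under logistic growth."""
    values = [v0]
    for _ in range(steps):
        v = values[-1]
        values.append(v + r * v * (1 - v / cap))
    return values

curve = research_value_curve()

# The marginal value of research is the per-step gain: it is small in
# the early phase, peaks in the middle phase, and declines in the late
# phase, which is when marginal effort should shift to the next level.
gains = [b - a for a, b in zip(curve, curve[1:])]
peak_step = gains.index(max(gains))
```

Under this toy model, the heuristic ‘do research at the highest level that is most sensitive to new information’ amounts to allocating marginal effort to whichever level is currently nearest its peak per-step gain.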

Implications of the model

First, this model does not imply that, at any point in time, we should invest all resources into a single level of research. Rather, it suggests where to spend our marginal effort, which depends on the stage we are in. It is often useful to keep some resources in an earlier type, because those resources have become specialized and may be in their best position. For example, moral philosophers who believe in longtermism and existential risk reduction may want to keep working on moral philosophy to improve the rigour of the arguments and potentially uncover new (though most likely more minor) considerations. Furthermore, insights down the line might give rise to new questions higher up, so we should maintain some capacity to answer these questions.

Second, even if most of the marginal expected value today lies within strategy research, it would be useful to invest some marginal resources into tactics research and even some into implementation. There might be some easy-to-uncover tactical insights applicable to a wide range of strategic plans, trying out some tactics research might illuminate some strategic uncertainties, and building the capacity to do tactics research allows for a faster response to strategic insight.

Third, the model assumes that research at each level also involves improvement and informing research. However, this does not mean that improvement, strategy, and strategy-informing research are equally represented in each phase. It is possible that early research involves more improvement than informing research or vice versa, but it is unclear which is more likely.

This model also addresses a common criticism that the effective altruism community frequently receives, namely that the community spends so much time thinking, discussing, and doing research, and so little time taking action. (This criticism is not completely off-mark: there is productive discussion and unproductive discussion.) It is tempting to reply by pointing out all the things the effective altruism community has achieved: moved money to effective charities, set up new organisations, et cetera. However, we can also give another answer based on this model: "Yes, currently we are still focusing on research. But we are progressing at what seems to be the appropriate speed, and we will increase the amount of implementation we do as we gain more clarity."

Other considerations that affect the value of strategy research

We believe the reasons in the previous section provide enough support for the claim that strategy research should be highly prioritized. However, there are additional important considerations that affect the strength of our claim. We believe they pose important questions, but have significant uncertainty about them. Analyzing these considerations and providing evidence for them is beyond the scope of this article. We welcome further discussion on these points.

How much time is there for strategic insights to compound or mature into implementation?

Before a robustly good strategy can be implemented, models need to be created and refined and crucial considerations need to be uncovered. This means that strategy research needs enough time to pay off.

The higher one’s credence is that we will encounter an existentially risky event soon - such as the invention of human-level AI - the more likely it is that acting on our current best guess for handling existential risk is better than systematically creating a top-down strategy.

However, we (the authors) are significantly uncertain about the timelines of various existential risks, especially of AI. Therefore we are reluctant to act as if timelines are short. Such short-term actions (e.g. raising the alarm without nuance, or trying to build a capable and reputable research field rapidly) often seem costly or harmful in the long term. In addition, many promising strategies can only affect existential risk on a medium or long timeframe. Even discounted by the probability that there is not enough time for them to be impactful, strategies with medium-to-long timeframes probably have a high expected value.

How likely are the strategic insights to affect concrete actions and the environment?

Information is only valuable if it eventually affects the world. It is possible that there is already enough actionable strategic knowledge available, but that only a few people are willing and able to act on it. In such a case, resources would be better spent on lobbying influential people so they make better decisions for the future of humanity, or on increasing the influence of people who are expected to make good decisions for the future of humanity.

We believe it is hard to assess how likely insights are to affect other actors. Lobbying influential people and coalition building could be the best action for some people. In addition, influence and coalition building may take decades, which would imply that early action on this front is valuable. Nonetheless, some strategy research also takes a long time to come to fruition.

How likely is it that there are hard-to-reverse developments that require immediate action?

Sometimes it is necessary to act on insufficient information, even if we would prefer to do more strategic analysis. Our hands may be forced by other actors that are about to take hard-to-reverse actions, such as implementing premature national AI policies. New policies by major actors could significantly limit the range of possible and desirable strategies in the future if these policies are implemented prematurely or with a lack of nuance. In cases where key decision makers cannot be persuaded to exercise ‘strategic restraint’, it may be beneficial to step in and do ‘damage control’ even if everything would have been better if no one had moved early.

We believe that some hard-to-reverse actions are in fact being taken, but only some actors could find good opportunities to effectively advocate strategic restraint or do ‘damage control’. Some could even create good conditions for further (strategic) action.

How could strategy research do harm?

Just like for every other project, it’s important to consider the possibility of doing harm. We identify the following three important ways strategy research might do harm.

Strategy research may carry information hazards. Some knowledge may be dangerous to discover, and some knowledge may be dangerous if it spreads to the wrong people. In mapping possible existential risks, strategy research may uncover new ways for humans to risk existential catastrophe. Sharing those possible risks could make them more likely to occur by inspiring malicious or careless actors. Another information hazard is when plans become known to actors with conflicting (instrumental) goals, which allows them to frustrate those plans. Some goals are more likely to conflict with other agents’ goals than others. We generally recommend against publicly identifying these conflicts, unless the other party is definitely already aware of you and your plans.

Strategy research may create strategic confusion. Badly executed or communicated research could confuse, rather than illuminate, important actors. Creating bad research makes it more difficult to find good research. Furthermore, strategy research could overstate the amount of strategic uncertainty and thereby excessively limit the behavior of careful actors while less careful actors could take the lead.

Strategy research may waste resources. It is hard to assess the expected value of specific strategy research projects, even after they have been completed, because it is difficult to trace consequences back to specific research projects. Even if strategy research is not worse than inaction, resources like money and talent still carry opportunity costs: they might have been used better elsewhere. We believe it is very likely that a number of projects are a waste of resources in this sense. This waste can be reduced by effective feedback loops, such as the evaluation of research organizations (like this one).


The goal of this article was to describe strategy research more clearly and to argue that it should currently be given a high priority in the field of existential risk reduction. This article has introduced some terms and models that can increase our collective understanding of different research classes, as well as provide input for fruitful discussion. Based on our model, we proposed the heuristic to ‘do research at the highest level that is most sensitive to new information’. We believe that strategy research is currently this highest level in the field of existential risk reduction.


Our main recommendation is to expand the existential risk strategy field. We would like to see more strategy research from both existing and new actors in the field. What follows are some recommendations for particular groups. We encourage readers to come up with other implications.

Researchers: explore the big picture and share strategic considerations[10]

We recommend that current existential risk researchers grapple with the question of how their research focus fits within the larger picture. We especially encourage researchers to share their strategic insights and considerations in write-ups and blog posts, unless they pose information hazards. We believe most researchers have some implicit models which, when written up, would not meet the standards for academic publication. However, sharing them will allow these models to be built upon and improved by the community. This will also make it easier for outsiders, such as donors and aspiring researchers, to understand the crucial considerations within the field.

Research organizations: incentivize researchers

Research organizations should incentivize researchers to explore strategy research and to write up their ideas and findings in public venues, even if those ideas are provisional and therefore do not meet the standards for academic publication.

Donors: increase funding for existential risk strategy

We encourage donors to explore opportunities to fund new existential risk strategy organizations, as well as opportunities within existing organizations to do more strategy research. Given the newness of the research field, and given that there are few established researchers, we believe this is currently a space for hits-based giving. Not all projects will pay off, but those that do will make a big difference. As funders learn and the field matures, we expect strategy research to become a 'safer bet'.

Effective altruists: learn, support, start

For those who aspire to move into existential risk strategy research, we recommend testing one's fit by doing an internship with a strategy organization or by writing and sharing a simple model of a strategy-related topic. People with operations skills can have a large impact by supporting existing strategy research, or even by starting a new organization, since we believe there is room for more existential risk strategy organizations.

Limitations & further research

We have simplified a number of points in this article, and it contains a number of gaps that should be addressed in further research.

Focused on basics → elaborate on the details of strategy research

We have strived to make the basics of strategy research clear, but many details have been left out. Further research could delve deeper into the different parts of strategy research to assess what they are and which parts are most valuable, and to examine how to do effective strategy research. This research could also disentangle the difference between 'narrow' and 'broad' strategy research that we allude to in footnote 4.

Focused on x-risk → assess the need for strategy research in other areas

This article, because it is written by Convergence, focuses on existential risk strategy. However, we could also have chosen to focus on effective altruism strategy, longtermism strategy, or AI strategy. Further research could approach the strategic question for a wider, narrower, or otherwise different high-level goal. For example, it appears that both community building and animal welfare would benefit greatly from more strategy research.

Incomplete risk analysis → research how strategy research can do harm

We have only briefly discussed how strategy research can do harm, and have argued that it is less likely to do harm because it is more indirect. Further research could investigate this claim further and draft guidelines to reduce the risk of harmful strategy research.


This article has explained, in part, why we believe strategy research is important and neglected. We hope it contributes towards strategic clarity for important goals such as reducing existential risk. Finally, we hope this article motivates other research groups, as well as donors and other effective altruists, to focus more on strategy research.


This post was written by Siebe Rozendal as a Research Assistant for Convergence in collaboration with Justin Shovelain, who provided many of the ideas, and David Kristoffersson, who did a lot of editorial work. We are especially grateful for the thorough feedback from Ben Harack, and also want to thank Tam Borine and Remmelt Ellen for their useful input.

  1. Other high-level goals for longtermism have also been suggested, such as Beckstead’s “make path-dependent aspects of the far future go as well as possible.” ↩︎

  2. Interestingly, animal-inclusive neartermist values do not have a clear analogue to the goal 'minimize x-risk'. We understand that a focus on farm animals might not be the optimal goal, because it excludes the suffering of non-farm animals. ↩︎

  3. Actors do not necessarily need to share the same values to have the same high-level goals. For example, many cause areas would benefit from an effective altruism community that is healthy, full of capable people, and strategically wise. ↩︎

  4. Research often falls under multiple of these classes at the same time. For instance, research into how to build prudent national AI policies may be highly informing to strategy research (important to high-level strategy) and tactical (important to tactical questions of policy making) at the same time. Further, if a researcher is figuring out important improvement and informing issues for strategy, isn't that strategy research? We believe it is; we prefer a “broad” definition of strategy research. In contrast, a “narrow” definition of strategy research would refer only to pure questions of strategy construction. We think there are some important distinctions and tradeoffs here that we hope to illuminate in further work. ↩︎

  5. That something is low-level does not mean it is not high quality, or not important. The level refers to the level of directness: how closely it informs action. ↩︎

  6. Whether some research is between or within a cause area depends on how a ‘cause area’ is defined. However, just like the term ‘prioritization research’, different people use the term ‘cause area’ differently. In this article, we regard ‘existential risk reduction’ as a single cause area. ↩︎

  7. Bostrom (2014). ‘Crucial Considerations and Wise Philanthropy.’ ↩︎

  8. AI governance and AI policy are two related terms. Possibly, AI policy maps to AI-risk-specific tactics research, and AI governance maps to the combination of AI strategy and AI policy, but we are uncertain about this classification. We also advise against the use of the term 'AI tactics research', as it may sound adversarial or military-like. ↩︎

  9. Actions during strategic uncertainty can be harmful, but trying to take action could still provide useful information. This is a good reason to focus current AI policy on the near- and medium-term; those policies will still yield a good (though smaller) amount of information while carrying significantly lower risk of doing long-term harm. ↩︎

  10. Allan Dafoe, director of the Centre for the Governance of AI, has a different take: “Some problems are more important than others. However, we are sufficiently uncertain about what are the core problems that need to be solved that are precise enough and modular enough that they can be really focused on that I would recommend a different approach. Rather than try to find really the highest-leverage, most-neglected problem, I would advise people interested in working in this space to get a feel for the research landscape.” ↩︎


I was recently told that there's a "fair bit" of AI strategy/policy/governance research and discussion happening non-publicly (e.g., via Google docs) by people at places like FHI and OpenAI. Looking at the acknowledgements section of this post, it appears that the current authors are not very "plugged in" to those non-public discussions. I am in a similar situation in that I'm interested in AI strategy but am not "plugged in" to the existing discussions. It seems like there are a few different ways to go from here, and I'm not sure which is best:

  1. Try to get "plugged in" to the non-public discussions.
  2. Assuming there's not serious info hazard concerns, try to make the current discussions more public, e.g., by pushing for the creation of a public forum for discussing strategy and inviting strategy researchers to participate.
  3. Try to create a parallel public strategy discussion.

My guess is that assuming resources (and info hazards) aren't an issue, 3 is best because different kinds of research/discussion setups create different biases and it's good to have diversity to avoid blind spots. (For example Bitcoin and UDT both came out of informal online discussion forums instead of academia/industry/government research institutions.) But:

  1. Are there enough people and funding to sustain a parallel public strategy research effort and discussion?
  2. Are there serious info hazards, and if so can we avoid them while still having a public discussion about the non-hazardous parts of strategy?

I'd be interested in the authors' (or other people's) thoughts on these questions.

I agree with you that #3 seems the most valuable option, and you are correct that we aren't as plugged in - although I am much less plugged in (yet) than the other two authors. I hope to learn more in the future about:

  • How much explicit strategy research is actually going on behind closed doors, rather than just people talking and sharing implicit models.
  • How much of all potential strategy research should be private, and how much should be public. My current belief is that more strategy research should be public than private, but my understanding of info hazards is still quite limited, so this belief might change drastically in the future.

To respond to your other questions:

  1. Are there enough people and funding to sustain a parallel public strategy research effort and discussion?

I am not sure whether I get the question: I don't think there are currently enough people or enough funding allocated to public strategy research, but I think a public strategy research field could be sustained. I also think the threshold for a critical mass is not high: just a few researchers communicating with an engaged audience seems enough to sustain the research field.

  1. Are there serious info hazards, and if so can we avoid them while still having a public discussion about the non-hazardous parts of strategy?

Yes, there are serious info hazards. And yes, I think the benefits of having a public discussion outweigh the (manageable) risk that comes with public discussion. If there is a clear place for info-hazardous content to be shared (which there is: the draft-sharing network) and when there is a clear understanding and policy for limiting info-hazards (which can be improved on a lot), public discussion will have at least the following advantages:

  • Exposure to a wider array of feedback will, in expectation, improve the quality of ideas
  • Outsiders have more accessible knowledge to learn from to contribute later. There are probably also a lot of benefits to be gained from making other people more strategically savvy!
  • It makes it easier for non-affiliated/less-connected individuals to create and share knowledge


  • In my experience there are infohazard/attention-hazard concerns. Public strategy research likely has negative expected value: if it is good, it will run into infohazards; if it is bad, it will create confusion.
  • I would expect prudent funders will not want to create a parallel public strategy discussion.

I am not sure why you believe good strategy research always has infohazards. That's a very strong claim. Strategy research is broader than 'how should we deal with other agents'. Do you think Drexler's Reframing Superintelligence: Comprehensive AI Services or The Unilateralist's Curse had negative expected value? Because I would classify them as public, good strategy research with positive expected value.

Are there any specific types of infohazards you're thinking of? (E.g. informing unaligned actors, getting media attention and negative public opinion)

Depends on what you mean by public. While I don't think you can have good public research processes which would not run into infohazards, you can have a nonpublic process which produces good public outputs. I don't think the examples count as something public - e.g. do you see any public discussion leading to CAIS?

My guess is that the ideal is to have semi-independent teams doing research. Independence in order to better explore the space of questions, and some degree of plugging in to each other in order to learn from each other and to coordinate.

Are there serious info hazards, and if so can we avoid them while still having a public discussion about the non-hazardous parts of strategy?

There are info hazards. But I think if we can discuss Superintelligence publicly, then yes: we can have a public discussion about the non-hazardous parts of strategy.

Are there enough people and funding to sustain a parallel public strategy research effort and discussion?

I think you could get a pretty lively discussion even with just 10 people, if they were active enough. I think you'd need a core of active posters and commenters, and there needs to be enough reason for them to assemble.

We especially encourage researchers to share their strategic insights and considerations in write-ups and blog posts, unless they pose information hazards.

I've been doing quite a bit of this recently, and I'd love to see other researchers do more of this:

However, I haven't gotten much engagement from people who work on strategy professionally. I'm not sure if they just aren't following LW/AF, or don't feel comfortable discussing strategically relevant issues in public. So this kind of ties into my other comment, and is part of what I'm thinking about as I try to puzzle out how to move forward, both for myself and for others who may be interested in writing up their strategic insights and considerations.

Allan Dafoe, director of the Centre for the Governance of AI, has a different take

I'm not sure I understand what Allan is suggesting, but it feels pretty similar to what you're saying. Can you perhaps explain your understanding of how his take differs from yours?

Nice work, Wei Dai! I hope to read more of your posts soon.

However I haven't gotten much engagement from people who work on strategy professionally. I'm not sure if they just aren't following LW/AF, or don't feel comfortable discussing strategically relevant issues in public.

A bit of both, presumably. I would guess a lot of it comes down to incentives, perceived gain, and habits. There's no particular pressure to discuss on LessWrong or the EA Forum. LessWrong isn't perceived as your main peer group. And if you're at FHI or OpenAI, you'll already have plenty of contact with people who can provide quick feedback.

I'm not sure I understand what Allan is suggesting, but it feels pretty similar to what you're saying. Can you perhaps explain your understanding of how his take differs from yours?

I believe he suggests that there is a large space that contains strategically important information. However, rather than first trying to structure that space and trying to find the questions with the most valuable answers, he suggests that researchers should just try their hand at finding anything of value. Probably for two reasons:

  1. By trying to find anything of value, you get much more rapid feedback on whether you are good at finding information than by taking a longer time to find high-value information.
  2. When there is a lot of information acquirable ('low-hanging fruit'), it doesn't matter as much where you start, as long as you start quickly.

In addition, he might believe that fewer people are good at strategy research than at tactics or informing research, and he might have wanted to give more generalizable advice.

I'm very confused why you think that such research should be done publicly, and why you seem to think it's not being done privately.

Also, regarding the following:

Strategy research would not be valuable if it was completely intractable. We believe some actors and attempts at strategy research can succeed, but it is hard to predict success beforehand.

Given the first sentence, I'm confused as to why you think that "strategy research" (writ large) is going to be valuable, given our fundamental lack of predictive ability in most of the domains where existential risk is a concern.

What do you expect the signal of successful private strategy research to be?

There don't seem to be that many outliers around, as far as I can tell, which strongly suggests that either the research isn't being done or it is failing to yield results.

I'm very confused why you think that such research should be done publicly, and why you seem to think it's not being done privately.

I don't think the article implies this:

Research should be done publicly

The article states: "We especially encourage researchers to share their strategic insights and considerations in write-ups and blog posts, unless they pose information hazards."
Which means: share more, but don't share if you think there could be negative consequences.
Though I guess you could mean that it's very hard to tell what might lead to negative outcomes. This is a good point, and it is why we (Convergence) are prioritizing research on information hazard handling and research-shaping considerations.

it's not being done privately

The article isn't saying strategy research isn't being done privately. What it is saying is that we need more strategy research and should increase investment into it.

Given the first sentence, I'm confused as to why you think that "strategy research" (writ large) is going to be valuable, given our fundamental lack of predictive ability in most of the domains where existential risk is a concern.

We'd argue that to get better predictive ability, we need to do strategy research. Maybe you're saying the article makes it looks like we are recommending any research that looks like strategy research? This isn't our intention.

Glad to hear that you aren't recommending strategy research in general - because that's what it looked like.

And yes, I think it's incredibly hard to make sure we're not putting effort into efforts with negative expected value, and I think that attention hazards are critical, and are the biggest place where I think strategy research has the potential to increase risks rather than ameliorate them. (Which is exactly why I'm confused that anyone would suggest that more such research should be done publicly and/or shared. And it's why I don't think that a more detailed object level discussion makes sense here, in public.)

The methodological diversity necessary to get any consilience in highly abstract areas makes it very hard for donors to evaluate such projects. Many of the ideas that form the basis of the AI memeplex, for instance, originally came from druggy-artist-scientists. So what happens in practice is that this stuff revolves around smoking-gun-type, highly legible philosophical arguments, even though we know this is more hedgehog than fox, and that it guarantees we'll only, on average, prepare for dangers that large numbers of people can comprehend.

Concretely: the more money you have, the higher the variance on weird projects you should be funding. If the entire funding portfolio of the Gates Foundation consists of things almost everyone thinks sound like good ideas, that's a failure. It's understandable for small donors: you don't want to 'waste' all your money only to have nothing you fund work. But if you have 10 billion, and thus need to spend 500 million to 1 billion a year just to not grow your fund, you should be spending a million here and there on things most people think are crazy (how quickly we forget concrete instances like the initial responses to the idea of shrinking objects to the nanoscale). This is a fairly straightforward porting of reasoning from startup land.

the more money you have, the higher the variance on weird projects you should be funding.

Only if you're sure the mean is positive - and there's no reason to think that. In fact, it's arguable that in a complex system, a priori, we should consider significant changes destabilizing and significantly net negative unless we have reason to think otherwise.

I think this is too general a counterargument; extended to its logical conclusion, it becomes self-defeating, as consequentialist cluelessness applies equally to action and inaction.

No, it just means you need an actual system model that is at least somewhat predictive in order to make decisions, and therefore a better grasp on the expected value of your investments than "let's try something, who knows, let's just take risks."

I agree, I think I was mostly responding to the " and there's no reason to think that" since it is a case by case thing.

With strategic clarity we would know what to do. Specifically, we would know...
- who the relevant actors are
- what actions are available to us
- how the future might develop from those actions
- what good sequences of actions (plans) are
- how to best prioritize plans
- that we have not missed any important considerations

Out of curiosity, has your research so far uncovered any example domains which have strategic clarity? Or do you have an intuition for domains that do?