Or on the types of prioritization, their strengths, pitfalls, and how EA should balance them
The cause prioritization landscape in EA is changing. Prominent groups have shut down, others have been founded, and everyone is trying to figure out how to prepare for AI. This is the first in a series of posts examining the state of cause prioritization and proposing strategies for moving forward.
Arguably, a key, if not the key, contribution of Effective Altruism is that it helps us prioritize opportunities for doing good. After all, answering EA’s central question, ‘How do we do the most good?’, requires just that: comparing the value of different altruistic efforts.
Prima facie, we might wonder why we need to consider different levels of prioritization at all. Why not simply estimate the cost-effectiveness of all possible altruistic projects, regardless of their cause area, and then fund the most impactful ones? Yet, in practice, EAs don’t tend to do this. Instead, much of EA prioritization work has focused on cause prioritization and, as we argue below, most current EA prioritization work focuses on within-cause prioritization.
Indeed, as of 2025, most key organizations and actors in Effective Altruism operate within the framework of 3 broad categories of problems, or ‘cause-areas’ as EAs call them: Global Health and Development, Animal Welfare, and Global Catastrophic Risks.[2] [3] (See, for example, Open Philanthropy’s focus areas, the four EA funds (the fourth being ‘EA infrastructure’), and this 80,000 Hours article on allocation across issues.[4])
This particular way of prioritizing altruistic projects, splitting interventions into cause areas and ranking them, is only one of several approaches.[5] This piece seeks to explicitly put on the table what could go wrong with this and other types of prioritization work that we observe in the Effective Altruism movement.
We think the different types have a variety of shortcomings, and they may well complement each other under the right circumstances. As such, failing to appreciate the shortcomings and unintentionally, or mistakenly, backing a single horse would be a mistake. By primarily focusing on within-cause prioritization, and sidelining the others, the EA community may well be radically misallocating its prioritization efforts.
Prioritization can take many shapes, but three main types arise when we look at what is being ranked and within which domain that ranking takes place:
Within-Cause Prioritization (WCP): work that ranks interventions within a particular cause.
Cause Prioritization (CP): work that ranks causes.[6]
Cross-Cause Prioritization (CCP): work that ranks interventions across causes.
Despite the substantial resources EA invests in prioritization, there has been little explicit discussion of how to balance these kinds of prioritization. Ideally, we’d know how many resources EA should allocate to each type; with that in mind, we could evaluate the current state and work out how to bring it closer to that ideal.
Providing a full answer to the question of balance is well beyond the ambition of this post, but we will still try to better grasp the strengths and weaknesses of each prioritization type. There is room for reasonable disagreement about the benefits of each kind of prioritization, which suggests that EA should at least take all three seriously. In fact, EA should arguably invest substantial resources in each of them, and, to spoil the next section, it doesn’t currently do that.
A quick reading of EA history suggests that when the movement was born, it focused primarily on identifying the most cost-effective interventions within pre-existing cause-specific areas (e.g. the early work of GiveWell and Giving What We Can). Subsequently, it paid increased attention to additional causes, and evaluated which of these (Global Health, Animal Welfare, or Global Catastrophic Risk) was most promising. However, as we shall see below, almost all prioritization work in the community takes place at the level of within-cause prioritization, with little devoted to cause prioritization and even less to cross-cause prioritization.
In this section we consider some salient organizations in EA and provide a quick classification of their activities into different types of prioritization.[7]
In 2022, GiveWell directed $439 million and, as of January 2025, it employed 77 people.[8] The key figure is the amount of time spent on prioritization work, which is approximately 39 full-time equivalents (FTE) based on the number of researchers listed on their website (distributed among the Commons, Cross-cutting, Malaria, New Areas, Nutrition, Research Leadership, Vaccines, and Water & Livelihoods teams, but excluding Research Operations). Because GiveWell is focused on Global Health, this figure counts towards the within-cause type of prioritization.[9]
Rethink Priorities engages in research across various EA causes. All research teams focus on prioritization work within specific causes, except for the Surveys Team – which does 85% within-cause (i.e. 1.9 FTEs) and 15% cross-cause prioritization (i.e. 0.3 FTEs) – and the Worldview Investigations Team, with 4.9 FTEs across CCP (60%, i.e. 3 FTEs) and CP (40%, i.e. 1.9 FTEs). The rest adds up to 29 FTEs on WCP, including 7 FTEs for Animal Welfare and 7 FTEs for Global Health and Development.[10]
The Global Priorities Institute doesn't fit neatly into any single category; it addresses foundational questions relevant to all levels of prioritization. Some of its work assumes worldviews like longtermism and focuses on issues such as risks from artificial intelligence. However, GPI generally doesn't compare specific interventions by name, or indeed causes, but provides considerations useful for prioritization across the board. With 16 full-time researchers and 19 affiliates or scholars, GPI can be thought of as (sometimes indirectly) doing a mix of the three, with a higher load of CP at 15.8 FTEs, some WCP at 4.2 FTEs, and almost no CCP at 0.6 FTEs.[11]
Open Philanthropy disburses on the scale of 0.75 billion dollars annually and now employs nearly 150 individuals. Its work spans object-level analysis, grantmaking, and prioritization research. As part of its prioritization efforts, some teams explicitly conduct within-cause prioritization: approximately 7 people focus on prioritization within Global Health, 5 within Global Catastrophic Risks, and 2 within Farm Animal Welfare. On the grantmaking front, about 25 people work principally on Global Catastrophic Risks, 15 on Global Health, and 5 on Farm Animal Welfare. In total, we estimate 59 FTEs on WCP and 1.5 FTEs on CP (reflecting, for example, internal cause prioritization exercises and needs, and including some participation from the leadership).[12]
Ambitious Impact (formerly Charity Entrepreneurship) is dedicated to enabling and seeding effective charities through its research process and incubation program. Over the past year, it has allocated roughly 4 FTEs to within-cause prioritization, including 3 full-time permanent research team members, with additional support from fellows and contractors on a fluctuating basis. In contrast, only around 0.1 to 0.2 FTEs were spent on cause prioritization, reflecting its emphasis on deeper within-cause reports and analyses.[13]
80,000 Hours dedicated the majority of its prioritization capacity to cause prioritization: roughly 3 FTEs, primarily from the web team that researches problems and recommends cause rankings, though that figure decreased to about 2 FTEs in 2024 after the departure of one researcher. This included the team lead but excluded, for instance, the content associate. An additional 1 FTE came from the collective effort of the leadership team and other staff in investigating questions such as the importance of AI safety, bringing the CP total to about 3 FTEs. For cross-cause prioritization, we estimate about 1 FTE for the web team’s work on neglectedness, tractability, and importance comparisons. Finally, about 1 FTE went to within-cause prioritization from a combination of advisory work and the job board, e.g. determining which roles at AI labs should appear on the job board. However, given their recent announcement of going all-in on AI, we’re classifying this total capacity of 5 FTEs as within-cause.[14]
Longview Philanthropy works with donors to increase the impact of their giving. Three full-time staff focus on within-cause prioritization in artificial intelligence, and two are dedicated to nuclear risk within-cause work (total 5 FTEs). Additionally, a small portion — about 0.05 full-time equivalent — is devoted to cause prioritization. (In this role, the content team and leadership periodically assess resource allocations, such as how to best deploy The Emerging Challenges Fund among artificial intelligence, biosecurity, and nuclear risk initiatives.)[15]
EA Funds supports organizations and projects within the Effective Altruism community through its grantmaking and funding prioritization activities. The organization carries out a range of prioritization work across its four funds: Global Health and Development, Animal Welfare, Long-Term Future, and EA Infrastructure. Most team members hold full-time positions elsewhere in addition to their EA Funds role; for example, the current fund chair at the Animal Welfare Fund is the only full-time member dedicated exclusively to that fund. Overall, EA Funds’ combined capacity is estimated at 7 FTEs, and this work is classified as within-cause prioritization, where each fund can be thought of as a distinct cause area. We assigned an additional 0.1 FTEs to cause prioritization for the relevant strategic and big-picture thinking that the leadership and advisors perform.[16]
A summary of the relevant figures for each organisation is presented below.
| Organisation | Within-Cause Prioritization (FTEs) | Cause Prioritization (FTEs) | Cross-Cause Prioritization (FTEs) |
| --- | --- | --- | --- |
| GiveWell | 39 | 0 | 0 |
| Animal Charity Evaluators | 12 | 0 | 0 |
| Rethink Priorities | 30.9 | 1.9 | 3.3 |
| Global Priorities Institute | 4.2 | 15.8 | 0.6 |
| Open Philanthropy | 59 | 1.5 | 0 |
| Ambitious Impact | 4 | 0.2 | 0 |
| 80,000 Hours | 5 | 0 | 0 |
| GovAI | 32 | 0 | 0 |
| Longview | 5 | 0.05 | 0 |
| EA Funds | 7 | 0.1 | 0 |
| Total | 198.1 | 19.5 | 3.95 |
| Proportion | 89.4% | 8.8% | 1.8% |
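For transparency, here is a minimal sketch in Python that recomputes the column totals and proportions from the per-organization figures above, taking those rough estimates at face value; small differences from the table’s totals likely just reflect rounding in the per-organization numbers.

```python
# Recompute the summary rows of the table from the per-organization estimates.
# The figures are the rough FTE estimates quoted in this post, not authoritative data.
ftes = {
    # organization: (within-cause, cause, cross-cause) FTEs
    "GiveWell": (39, 0, 0),
    "Animal Charity Evaluators": (12, 0, 0),
    "Rethink Priorities": (30.9, 1.9, 3.3),
    "Global Priorities Institute": (4.2, 15.8, 0.6),
    "Open Philanthropy": (59, 1.5, 0),
    "Ambitious Impact": (4, 0.2, 0),
    "80,000 Hours": (5, 0, 0),
    "GovAI": (32, 0, 0),
    "Longview": (5, 0.05, 0),
    "EA Funds": (7, 0.1, 0),
}

totals = [sum(row[i] for row in ftes.values()) for i in range(3)]
grand_total = sum(totals)
for label, total in zip(["WCP", "CP", "CCP"], totals):
    print(f"{label}: {total:.2f} FTEs ({100 * total / grand_total:.1f}%)")
```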
It's striking that, at least in the context of the organizations above, the revealed effort devoted to within-cause area prioritization is about 9 times that allocated to cause prioritization and cross-cause prioritization combined.[17] Perhaps there are practical considerations that justify this distribution, but it's also possible that this simply isn't the optimal allocation of efforts among the different types of prioritization work. Let us consider the strengths and weaknesses of each type next.
This section is a summary of our more comprehensive breakdown of the strengths and pitfalls of each type, available in the appendix.
Cause prioritization involves comparing broad cause areas (like global health vs. existential risk) to figure out which problems most deserve attention. Since there are fewer causes than there are interventions, a shorter target list simplifies the task at hand, freeing up researcher time.
One specific benefit of engaging in prioritization at the level of causes is that this process can highlight neglected but high-potential areas that might otherwise be overlooked. If people try to prioritize interventions directly (rather than causes), they risk missing out on whole large areas simply because those areas are not salient: for example, they might rank all the neglected tropical diseases they can think of while ignoring AI entirely. Thus, engaging in prioritization at a higher level of abstraction (that of causes) can make it less likely that we miss large areas of high-impact interventions that were not hitherto salient.
Moreover, perhaps counterintuitively, it can be easier to do prioritization research at the level of causes (for example thinking about the value of solving general problems) than thinking about an intervention’s impact – especially for those that are more speculative. Put another way, it is often easier to sketch out how much the world would benefit from preventing risky AI technologies than to know how much a particular intervention would ultimately do towards mitigating that risk, given our uncertainty about future events.[18] That said, this is tentative. The reverse can be true, and practical considerations should ideally be taken into account in cause prioritization work.
However, in other ways, comparing the value of different causes can be especially challenging. Researchers must consider ethical trade-offs, uncertainty, and the potential for model errors. At its best, this means that cause prioritization can lead to the beneficial development of frameworks, metrics, and criteria that improve prioritization methods overall. At its worst, and sometimes more commonly, it just leads to lots of intuition-jousting between vague qualitative heuristics.
Another issue is the tendency to overlook practical difficulties of implementation and infrastructure. It’s one thing to identify “AI risk” as critical; it’s another to actually fund and execute effective projects in that area. High-level cause analysis can gloss over tractability – e.g. a cause like artificial intelligence safety might score as hugely important, but there may be a shortage of shovel-ready interventions, experienced organizations, or clear pathways to impact in the short term.[19] In contrast, a cause like global health has an extensive infrastructure (proven charities, supply chains for bednets, etc.) that makes turning funding into impact much more straightforward. Cause prioritization sometimes underestimates these on-the-ground realities, risking plans that sound great on paper but falter in practice.
Organizations and movements might also sensibly choose to diversify their efforts – perhaps because of normative uncertainty, decision-theoretic uncertainty, or empirical uncertainty about causes in practice. For cross-cause prioritization to enable altruistic organizations to spread risk across multiple causes, cause prioritization must first decide how to make sense of these causes, and point cross-cause work in the right direction.
Ultimately, cause prioritization can be of particularly high stakes: if it goes wrong, it can lead to the dismissal of entire classes of interventions and deprioritization of key problems. Conversely, it can also lead to incorrectly ruling in large classes of interventions, which fall in a superficially promising cause area, but which are not, say, actually tractable. Even if the new prioritized causes do well, there is a risk of having deprioritized the most promising interventions (because they do not fall within the most promising causes).
Within-cause prioritization zooms in on a single cause area (for example, global health, climate change, or animal welfare) and asks: which interventions or projects in this domain do the most good? Within-cause prioritization has several advantages that can allow more rigorous estimates for fine-grained interventions. One such possible virtue is the specialization of within-cause research. This specialization means analysts and organizations become domain experts, often uncovering nuanced improvements that dramatically boost impact (such as optimizing vaccine schedules or finding new treatments). This evidence-driven, granular approach excels a) in areas where institutional expertise can be leveraged and b) where success can be measured and repeated, producing confident recommendations. In short, within-cause prioritization offers precision. When you hold the cause area fixed, you can more genuinely compare apples to apples – and often find that some apples are far juicier than others.
Another strength of this approach is its empirical tractability. Working within one field means we can often gather concrete data and use consistent metrics to compare options. In global health, for instance, researchers can run randomized trials or use epidemiological data to measure outcomes like lives saved or DALYs averted. This yields clear rankings of interventions by cost-effectiveness. We’ve seen that rigor pay off – some studies found that in health, the top interventions (like distributing insecticide-treated bednets for malaria or, more recently, lead elimination) were dozens of times more effective than the average intervention.
A cause-specific approach can also better attract certain classes of funders. Potential donors, especially those outside of the EA space, are more likely to fund interventions within familiar contexts tied to their cause-specific values, preferences, and commitments. Moreover, it can be easier to build movements (of both funders and problem-solvers) around the importance of a particular problem or cause. This can be because of shared worldviews, identifiable recruiting pathways (e.g. from existing research departments, conferences, and organizations), and a shared language.
All that said, a laser focus within one cause can lead to local optima and tunnel vision. By keeping our heads down in one field, we might miss the bigger picture across causes. An intervention can be the best in its category and still be suboptimal from an overall welfare perspective if the category itself isn’t where resources can do the most good. For example, a global health expert might spend time debating whether anti-malaria bednets are more cost-effective than malaria vaccines – a valuable comparison within global health – yet completely overlook animal welfare interventions as an alternative use of funds. Moreover, within-cause prioritization is prone to neglect out-of-cause sources of value or disvalue (e.g. global health specialists risk neglecting animal welfare, or vice versa). Ultimately, it can become easy to assume some cause area as a given and not question it, especially as institutions and individuals’ careers become consolidated. These dynamics can be exacerbated by the increased potential for groupthink as prioritization becomes dominated by specialists with an interest in one particular cause. Such tunnel vision can result in overlooking opportunities in other causes, or synergies between causes, that potentially dwarf the gains of even the best in-cause option.
There’s also the danger of metric myopia: within a single domain, people tend to optimize for what’s easily measurable (e.g. DALYs for health, CO₂ emissions for climate), which can marginalize important but harder-to-measure benefits. Thus, while within-cause prioritization brings scientific rigor and specialization, it must be balanced with a willingness to occasionally look up and zoom out. Otherwise, we might perfect the wrong plan – achieving the best outcome in a narrower field that no longer is, or never was, the top priority overall.
Cross-cause prioritization takes the widest view, while still operating at the level of interventions – it tries to compare and allocate resources across fundamentally different causes based on impact. This approach is maximally flexible – instead of committing to one cause, it allows an altruist to continually ask “Where can my next dollar or hour do the most good, anywhere?” The strength here is that it embraces an impact-first mindset, unconstrained by silos.
A cross-cause framework can detect synergies and neglected opportunities that a single-cause focus might miss. For example, it might reveal that a dollar spent on a particular pandemic prevention intervention yields benefits for both global health and existential risk reduction, a two-for-one impact that a siloed analysis wouldn’t fully capture. Many EA-aligned philanthropists use cross-cause comparisons on some level to build a portfolio of interventions – funding a mix of global health, climate, animal welfare, and long-term future projects in proportion to how much good they expect additional resources in each would do.
This approach is also adaptive: as new evidence or cause areas emerge, cross-cause reasoning can redirect effort dynamically. For example, given its cause-transcending and evolving nature, the movement is potentially able to quickly shift resources between interventions across many causes. The advantage is an ever-present focus on what we ultimately care about: overall impact maximization. In cross-cause work you’re constantly evaluating trade-offs, which helps ensure that easy wins in any domain aren’t left on the table. Taking stock of these strengths: cross-cause prioritization empowers EAs to find the best interventions, period – it’s the tool that attempts to put everything on a common ledger and identify where resources can do the most good across all of reality’s suffering and opportunity.
In the end, while this type of work can be more challenging, it forces researchers to navigate difficult trade-offs and compare apples and oranges, which can lead to progress in decision theory, ethics, and other fields that make cross-cause prioritization possible.
With that broad a mandate, however, come significant challenges. One of the biggest issues is ethical commensurability – essentially, how do you compare ‘good done’ across wildly different spheres? Each cause tends to have its own metrics and moral values, and these don’t easily line up. Saving a child’s life can be measured in DALYs or QALYs, but how do we directly compare that to reducing the probability of human extinction, or to sparing chickens from factory farms? Cross-cause analysis must somehow weigh very different outcomes against each other, forcing thorny value judgments. One concrete example is comparing global health vs. existential risk. Global health interventions are often evaluated by cost per DALY or life saved, whereas existential risk reduction is about lowering a tiny probability of a huge future catastrophe. A cross-cause perspective has to decide how many present-day lives saved is “equivalent” to a 0.01% reduction in extinction risk – a deeply fraught question. Likewise, comparing human-centric causes to animal-focused causes requires assumptions about the relative moral weight of animal suffering vs. human suffering. If there’s no agreed-upon exchange rate (and people’s intuitions differ), the comparisons can feel too disparate. Researchers have attempted to resolve this by creating unified metrics or moral weight estimates (for instance, projects to estimate how many shrimp-life improvements rival a human-life improvement), but there’s often no escaping the subjective choices involved. This means cross-cause prioritization can be especially contentious and uncertain: small changes in moral assumptions or estimates can flip the ranking of causes, leading to debate.
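To make the commensurability problem concrete, here is a toy calculation – a sketch with illustrative, assumed parameters, not estimates we endorse – showing how the exchange rate between present-day lives and a small reduction in extinction risk hinges entirely on contested inputs:

```python
# Toy illustration of the exchange-rate problem discussed above.
# Every parameter below is an assumption chosen purely for illustration.

current_population = 8e9        # people alive today
risk_reduction = 0.0001         # a 0.01% absolute reduction in extinction risk (assumed)
project_cost = 100e6            # assumed cost of the risk-reduction project, in dollars
cost_per_life_saved = 5000      # assumed cost per life saved by a top global health charity

# Counting only people alive today and ignoring all future generations:
expected_lives_risk = risk_reduction * current_population      # 800,000 expected lives
lives_global_health = project_cost / cost_per_life_saved       # 20,000 lives

print(f"Risk project (present lives only): {expected_lives_risk:,.0f} expected lives")
print(f"Same budget on global health:      {lives_global_health:,.0f} lives")
# Whether this comparison favors the risk project depends on contested choices:
# whether future generations count, how credible the 0.01% figure is, and how
# expectations over tiny probabilities should be treated.
```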
Another downside is that there is less institutional expertise to leverage in cross-cause comparisons, along with greater potential for metric inconsistency and complexity. Aggregating evidence across causes is very hard – the data and methodology you use to assess a poverty program vs. an AI research project are entirely different. Having worked on broad cross-domain analyses of this kind, we have previously noted how difficult it is to incorporate “the vast number of relevant considerations and the full breadth of our uncertainties within a single model” when comparing across domains. Despite these difficulties, cross-cause prioritization remains a key tool in EA’s toolkit for optimally allocating resources. More than that, we might say that being able to identify the highest impact interventions, across all causes, is what we as EAs ultimately want to achieve with any prioritization work, and that we should be cautious about succumbing to the streetlight effect merely because cross-cause comparison is difficult.[20]
We have put together the relevant considerations (see their breakdown here) into a table below. Often, the strengths of one approach are directly related to the weaknesses of another.[21]
There are clear tradeoffs between the different modes of prioritizing: not a single consideration is a weakness or strength for all three types. At a glance, this points towards a mixed strategy.
In particular, there are some cruxes that would change the ideal composition of the prioritization strategy. We outline eight of these below.
Cruxes
If the differences in impact among individual interventions across causes are very large, then cross-cause prioritization becomes a more compelling strategy.
In situations where the distribution of impact across individual projects spans a wider range than the distribution across cause categories, prioritizing at the level of causes may be less effective and prioritizing at the level of interventions across causes more so.[22] For example, if an artificial intelligence risk reduction project proves to be vastly more cost-effective than any intervention in global health, cross-cause analysis will highlight this exceptional potential. This strategy allows decision-makers to compare disparate outcomes and select the intervention with the highest overall impact.
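As a rough illustration of this crux, the sketch below (using made-up lognormal parameters, purely for illustration) shows how, when variation within a cause is large, the single best intervention can sit in a cause whose average intervention looks unremarkable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical causes, each with 1,000 hypothetical interventions.
# Parameters are (mean, spread) of log cost-effectiveness, made up for illustration.
causes = {
    "Cause A (higher mean, low spread)": (2.0, 0.3),
    "Cause B (lower mean, high spread)": (0.0, 1.5),
}

for name, (mu, sigma) in causes.items():
    impact = rng.lognormal(mean=mu, sigma=sigma, size=1000)
    print(f"{name}: average intervention {impact.mean():.1f}, best intervention {impact.max():.1f}")

# With these made-up numbers, Cause A has the better average intervention, but the
# single most cost-effective intervention is found in Cause B. A ranking of causes
# by average impact would steer resources away from the best individual opportunity.
```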
With the previous analysis in mind, below are several potential next steps aimed at refining our approach to prioritization by exploring variance, value of information, tractability, and the reliability of the different methods.
Regardless of whether they engage in more systematic research into questions like those above, we encourage people in this space to think critically about both the quantitative and qualitative value of the prioritization types we’ve discussed, especially their value relative to one another.
This post sought to introduce different types of prioritization work and make the case that we should be deliberate about how we devote resources to each type. We saw some examples of organizations in the EA space and how their research might be classified using this framework. We outlined the main strengths and weaknesses of each type. Finally, it seems that though there are clear virtues of within-cause prioritization – the dominant type of research today by roughly nine to one – the EA movement would likely benefit from spending more research time on the other two types given their importance relative to within-cause research, the impartial orientation of the movement, and the uncertainty we’re enveloped by.
This suggestion calls for more detailed research into the ideal levels of effort that should be put into each kind of prioritization. Every type of prioritization has its pros and cons (see the appendix’s breakdown), and by integrating them, effective altruists can aim to overcome the limitations of each — choosing causes with big upside, pursuing the best interventions in those causes, and constantly checking if our focus should shift elsewhere. This balance is how we try to do the most good with the resources we have.
This post was written by Rethink Priorities' Worldview Investigations Team. Thank you to Oscar Delaney, Elisa Autric, Shane C., and Sarah Negris-Mamani for their helpful feedback. Rethink Priorities is a global priority think-and-do tank aiming to do good at scale. We research and implement pressing opportunities to make the world better. We act upon these opportunities by developing and implementing strategies, projects, and solutions to key issues. We do this work in close partnership with foundations and impact-focused non-profits or other entities. If you're interested in Rethink Priorities' work, please consider subscribing to our newsletter. You can explore our completed public work here.
This appendix breaks down the strengths and potential weaknesses of each form of prioritization work.
Greater depth can spark ideas for promising new interventions, allow researchers to better diagnose problems, and result in more detailed recommendations to improve existing interventions and cover key funding gaps.[23] For example, consider wild animal prioritization research that leads to the discovery that targeted habitat modifications can reduce heat stress for certain species.
There are numerous strengths to the within-cause approach. But, given the risks and likely pitfalls, it would be reckless not to do at least a minimal amount of cross-cause work. We explore that next.
Given that many of the strengths and weaknesses are parallel (and opposite) to those of within-cause prioritization, the reader should feel free to jump to the summary table. Cause prioritization is also definitionally cross-cause, and it shares many of the considerations below.
The number and dimensions of regions and dots are not meant to indicate anything substantive.
Cause-areas will generally be ‘causes’ hereafter.
While the previous paragraph suggests an ordering of intervention groups based on overall impact, it is worth noting that some view cause areas differently. Many see the three major cause areas — global health, animal welfare, and catastrophic risks — as clusters of altruistic opportunities reflecting fundamentally different values rather than merely sets of interventions (that ought to be ranked). In practice, however, people often favor certain causes and implicitly make trade-offs between them according to their values, resulting in an informal cause ranking. Additionally, some individuals express uncertainty about the relative importance of these values and therefore advocate for diversification across causes as a hedge. Open Philanthropy’s explicit framing of these three cause areas as driven by uncertainty over values is a clear example of this perspective.
Of course, this is an oversimplification of Effective Altruism more broadly. There are EA projects that don’t neatly fit into these three – like thinking about how to healthily grow and expand social movements (see also the fourth EA fund on movement infrastructure), or investigating foundational questions like ‘what is good in the first place?’
Rankings should be thought of as cardinal, not merely ordinal, throughout this piece. That is to say, we use ‘ranking’ to mean more than just ordering causes or interventions from highest to lowest value; causes and interventions are to be evaluated and positioned on a scale.
There could be more layers. For example, there could be worldviews (e.g. short-termist and animal-focused, short-termist and human-focused, longtermist and suffering-focused, etc.), then causes as they arise in each worldview, and finally, interventions as they arise in each cause. This is a natural alternative, but we have opted for the above framework for this post.
It should be emphasised that these estimates are rough and not necessarily fully accurate or up to date. The list of organizations is meant as an illustration and is by no means exhaustive of all the relevant efforts in this space.
These figures are drawn from https://www.givewell.org/about/impact and https://www.givewell.org/about/people. 2022 is “the most recent year for which data is available and analyzed”.
GiveWell’s cross-cutting team is the closest to doing cross-cause flavoured research. However, as expected, so far their work has focused on cutting across subareas within global health and development, and thus falls into WCP for the purposes of the categories here. We’ve excluded the research operations team from the FTE calculations. All the other organizations’ figures are as of March 2025, unless specified.
Rethink Priorities is also the fiscal sponsor of the Institute of AI Policy and Strategy. While prioritization is only a subset of their portfolio, we include a 15-FTE WCP estimate on their behalf.
Our figures are drawn from public information available through their website, especially https://www.openphilanthropy.org/team/ and a brief discussion with them. Our estimates are rough, and stem from our best understanding of the organization. In particular, several team members often wear multiple hats, meaning that the actual full-time equivalent numbers allocated to each category might vary in either direction. Here is an older but potentially relevant link to CP: https://www.openphilanthropy.org/research/update-on-cause-prioritization-at-open-philanthropy/.
Our figures are drawn from public information available through their website, especially https://www.charityentrepreneurship.com/about-us and https://www.charityentrepreneurship.com/research. Our estimates stem from our best understanding of the organization, after a superficial consultation with them.
Our figures are drawn from public information available through their website, especially https://80000hours.org/about/meet-the-team/ and https://80000hours.org/latest/. Our estimates are rough, and stem from our best understanding of the organization.
Our estimates are rough, and stem from our best understanding of the organization. They are based on information available through their website, especially https://www.longview.org/about/#team and a quick clarification with them.
These (particularly rough) estimates are drawn from our current best understanding, https://funds.effectivealtruism.org/team and https://forum.effectivealtruism.org/posts/sbLaCdguxZPhZkEon/awf-is-looking-for-full-time-or-part-time-fund-managers.
This captures institutional efforts, but doesn’t reflect individual prioritization, which might change the balance a bit. Of course, individual efforts may be more informal; in fact, it seems plausible that most cause prioritization done by the community consists of informal individual prioritization.
A potentially relevant analogy is modeling the climate vs. modeling the weather.
Common EA cause prioritization frameworks, such as the ITN framework, often explicitly include consideration of ‘Tractability’. However, when applied to a whole cause area, rather than to specific interventions, such assessments often rely on abstract or heuristically driven assessments of in-principle tractability, rather than on identifying specific interventions or opportunities that are tractable and cost-effective.
That said, the fact that what we ultimately want is to be able to prioritize interventions across causes does not immediately imply that the decision process we should use to achieve this is to attempt cross-cause prioritization directly. Hence, all of the pros and cons of the different approaches to prioritisation outlined here remain live considerations.
Some considerations do not point in favor or against a type of prioritization work. For example, discovering neglected or emerging problems is not automatically more or less likely when doing cross-cause research, and could be somewhere in between how likely it is in WCP and CP work.
In the scenario we illustrated, variation is both high at the intervention level and low at the cause level, for all causes. However, even in cases where variation is very high only within certain causes, cross-cause prioritization at the level of interventions may still be recommended, as the highest impact interventions might be found within the high variation cause even if that cause is, on average, lower impact than other causes.
Prioritization is inevitably tied to problem solving and idea generation.