OpenPhil recently posted an update on their overall cause prioritization strategy, in particular exploring the concept of worldview diversification:

When choosing how to allocate capital, we need to decide between multiple worldviews. We use “worldview” to refer to a highly debatable (and perhaps impossible to evaluate) set of beliefs that favor a certain kind of giving. Worldviews can represent a mix of moral values, views on epistemology and methodology, and heuristics. Worldviews that are particularly salient to us include:
• a “long-termist” worldview that ascribes very high value to the long-term future, such that it assesses grants by how well they advance the odds of favorable long-term outcomes for civilization;
• a “near-termist, human-centric” worldview that assesses grants by how well they improve the lives of humans (excluding all animal welfare considerations) on a relatively shorter time horizon;
• a “near-termist, animal-inclusive” view that focuses on a similarly short time horizon but ascribes significant moral weight to animals.
If we evaluated all grants’ cost-effectiveness in the same terms (e.g., “Persons helped per dollar, including animals” or “Persons helped per dollar, excluding animals”), this would likely result in effectively putting all our resources behind a single worldview.
For reasons given below (and previously), we don’t want to do this. Instead, we’re likely to divide the available capital into buckets, with different buckets operating according to different worldviews and in some cases other criteria as well. E.g., we might allocate X% of capital to a bucket that aims to maximize impact from a “long-termist” perspective, and Y% of capital to a bucket that aims to maximize impact from a “near-termist” perspective, with further subdivisions to account for other worldview splits (e.g., around the moral weight of animals) and other criteria.
These buckets would then, in turn, allocate “their” capital to causes in order to best accomplish their goals; for example, a long-termist bucket might allocate some of its X% to biosecurity and pandemic preparedness, some to causes that seek to improve decision-making generally, etc.
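
To make the structure concrete, here is a minimal sketch of the two-level allocation the post describes: capital is first split across worldview buckets, and each bucket then sub-allocates to causes. The percentages and most cause names are made-up placeholders (only the long-termist examples come from the post itself), not OpenPhil's actual figures.

```python
# Hypothetical sketch of the nested "bucket" allocation described above.
# All dollar amounts, percentages, and (except the long-termist examples
# from the quote) cause names are illustrative placeholders.

total_capital = 100_000_000  # hypothetical total capital, in dollars

# Top-level split across worldviews (the X% / Y% in the quote).
worldview_buckets = {
    "long-termist": 0.50,
    "near-termist, human-centric": 0.30,
    "near-termist, animal-inclusive": 0.20,
}

# Each bucket then sub-allocates "its" capital to causes chosen to best
# accomplish that worldview's goals.
cause_allocations = {
    "long-termist": {
        "biosecurity and pandemic preparedness": 0.40,
        "improving decision-making": 0.30,
        "other long-termist causes": 0.30,
    },
    "near-termist, human-centric": {
        "global health": 0.70,
        "other near-term human-focused causes": 0.30,
    },
    "near-termist, animal-inclusive": {
        "farm animal welfare": 1.00,
    },
}

for worldview, share in worldview_buckets.items():
    bucket_capital = total_capital * share
    for cause, cause_share in cause_allocations[worldview].items():
        print(f"{worldview:32s} -> {cause}: ${bucket_capital * cause_share:,.0f}")
```

The point of the two-level structure is that cost-effectiveness comparisons happen *within* a bucket, in that worldview's own terms, while the top-level split is set by a separate judgment about how much credence or weight each worldview deserves.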

The full post is quite long, but it seemed worth sharing, both to highlight a useful way of thinking and to stay up-to-date on how OpenPhil is strategizing.
