Today, Open Philanthropy announced that our Potential Risks from Advanced Artificial Intelligence program will now be called Navigating Transformative AI. Excerpts from the announcement post:
We're making this change to better reflect the full scope of our AI program and to address some common misconceptions about our work. While the vast majority of work we fund is still aimed at catastrophic risk mitigation, the new name better captures the full breadth of what we aim to support: work that helps humanity successfully navigate the transition to transformative AI.
…some have mistakenly understood us to be broadly opposed to AI development or technological progress generally. We want to correct that misconception.
As explained in two recent posts, we are strongly pro-technology and believe that advances in frontier science and technology have historically been key drivers of massive improvements in human well-being. As a result, we are major funders of scientific research and "Abundance" policies …
We think AI has the potential to be the most important technological development in human history. If handled well, it could generate enormous benefits: accelerating scientific discovery, improving health outcomes, and creating unprecedented prosperity. But if handled poorly, it could lead to unprecedented catastrophe: many experts think the risks from AI-related misuse, accidents, loss of control, or drastic societal change could endanger human civilization.
Today, we continue to believe that most of the highest-impact philanthropic work for navigating the AI transition successfully is aimed at mitigating the worst-case AI catastrophes, because (a) catastrophic AI risk mitigation is a public good that market incentives neglect, (b) competitive dynamics create local incentives to underinvest in AI safety, and (c) governments and civil society have been slow to respond to rapid AI progress.
However, we also think that some of the most promising philanthropic opportunities in AI may focus on realizing high-upside possibilities from transformative AI, such as those articulated here.[1] This is a nascent area of research where we're still developing our thinking; more work is needed. More broadly, we want to remain open-minded about how we can help others as much as possible in the context of the AI transition, and "Navigating Transformative AI" captures that better than the previous name.[2]
In this post, we'd like to clarify some additional points that may be of particular interest to the LessWrong audience. In particular:
This work could be framed as "mitigating the risk of humanity missing the highest-upside outcomes," but we think a clearer program name better serves the audiences for our work. ↩︎
Additional less central reasons for the name change include (a) following the principle of "saying what you're for, not what you're against," and (b) preferring a more succinct new name. ↩︎