I think this is very hard, but FWIW, a bunch of other people from the Long Term Future Fund and I are likely to start working more on the AI Risk Mitigation Fund, which is hoping to fill some of this gap (though, to be clear, it will likely end up doing much more hits-based grantmaking than GiveWell).
Agreed with the other answers on the reasons why there's no GiveWell for AI safety. But in case it's helpful, I should say that Longview Philanthropy offers advice to donors looking to give >$100K per year to AI safety. Our methodology is a bit different from GiveWell's, but we do use cost-effectiveness estimates. We investigate funding opportunities across the AI landscape, from technical research to field-building to policy in the US, EU, and around the world, trying to find the most impactful opportunities for the marginal donor. We also do active grantmaking, such as our calls for proposals on hardware-enabled mechanisms and digital sentience. More details here. Feel free to reach out to aidan@longview.org or simran@longview.org if you'd like to learn more.
Are there any public cost-effectiveness analyses of different AI safety charities? For instance, I'm aware of Larks' AI Alignment Literature Review and Charity Comparison, but those reviews didn't include any concrete impact measures, and the series stopped in 2021.
I'm looking for things like "donating $10M to this org would reduce extinction risk from AI by 2035 by 0.01-0.1 percentage points", or even "donating $10M to this org would result in X-Y QALYs".
(I understand there are many uncertain variables here that could swing the results substantially, but I think even rough quantitative estimates would be useful for donors.)
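To make the shape of what I'm asking for concrete, here's a minimal Monte Carlo sketch in Python. Every input is made up for illustration: the $10M grant size, the 0.01-0.1 percentage-point absolute risk reduction (sampled log-uniformly, since the uncertainty spans an order of magnitude), and the crude choice to count only people alive today rather than future generations or QALY weights.

```python
# A rough Monte Carlo sketch of the kind of estimate I have in mind.
# All inputs are made up for illustration -- the point is the shape of
# the calculation, not the numbers.
import random

N_SAMPLES = 100_000
DONATION = 10_000_000      # hypothetical $10M grant
WORLD_POPULATION = 8e9     # people alive today, rounded

def sample_lives_saved() -> float:
    # Assumption: the grant reduces absolute extinction risk by 2035 by
    # 0.01-0.1 percentage points, sampled log-uniformly.
    risk_reduction = 10 ** random.uniform(-4, -3)  # 1e-4 to 1e-3
    # Crude conversion: count only people alive today, ignoring future
    # generations, QALY weights, and partial catastrophes.
    return risk_reduction * WORLD_POPULATION

samples = sorted(sample_lives_saved() for _ in range(N_SAMPLES))
median = samples[N_SAMPLES // 2]
p5 = samples[int(0.05 * N_SAMPLES)]
p95 = samples[int(0.95 * N_SAMPLES)]

print(f"expected lives saved: {median:,.0f} median "
      f"(90% interval: {p5:,.0f} to {p95:,.0f})")
print(f"cost per expected life saved (median): ${DONATION / median:,.2f}")
```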
Ideally, such an analysis would also include a comparison to the most cost-effective charities listed by GiveWell, though I understand this means comparing estimates with very different error bars.
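For the comparison piece, one could naively append a couple of lines to the script above (reusing DONATION and median from there). The $5,000 figure below is a round stand-in for GiveWell's top-charity cost-per-life estimates, an assumption for illustration rather than GiveWell's actual number:

```python
# Appended to the sketch above (reuses DONATION and median from there).
# Assumption: ~$5,000 per life saved as a round GiveWell-style benchmark.
GIVEWELL_COST_PER_LIFE = 5_000

ratio = GIVEWELL_COST_PER_LIFE / (DONATION / median)
print(f"naive comparison: ~{ratio:,.0f}x more lives saved per dollar "
      f"than the benchmark, at the median of a made-up distribution")
```

A single ratio like this hides the real issue, of course: the numerator rests on an input I invented, with error bars spanning orders of magnitude, while GiveWell's estimates are comparatively tight. That is exactly the comparability problem I mean.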