I get the sense that we can't trust Open Philanthropy to do a good job on AI safety, and this is a big problem. Many people would have more useful things to say about this than I do, but I still feel that I should say something.
My sense comes from:
A lot of people (including me as of ~one year ago) consider Open Phil the gold standard for EA-style analysis. But I think Open Phil is actually quite untrustworthy on AI safety (though probably still good on other causes).
I don't know what to do with this information.
Epistemic status: Speculating about adversarial and somewhat deceptive PR optimization, which is inherently very hard and somewhat paranoia-inducing. I am quite confident of the broad trends here, but it's definitely more likely that I am getting things wrong here than in other domains where evidence is more straightforward to interpret, and people are less likely to shape their behavior in ways that include plausible deniability and defensibility.
I agree with this, but I actually think the issues with Open Phil are substantially broader. As a concrete example, as far as I can piece together from various things I have heard, Open Phil does not want to fund anything that is even slightly right of center in any policy work. I don't think this is because of any COIs; it's because Dustin is very active in the Democratic Party and doesn't want to be affiliated with anything that is right-coded. Of course, this has huge effects by incentivizing polarization of AI policy work with billions of dollars, since any Open Phil-funded AI policy organization that wants to engage with people on the right might just lose all of their funding because of that, and so you can be confident they will s...
Adding my two cents as someone who has a pretty different lens from Habryka but has still been fairly disappointed with OpenPhil, especially in the policy domain.
Relative to Habryka, I am generally more OK with people "playing politics". I think it's probably good for AI safety folks to exhibit socially-common levels of "playing the game"– networking, finding common ground, avoiding offending other people, etc. I think some people in the rationalist sphere have a very strong aversion to some things in this genre, and labels like "power-seeking" and "deceptive" get thrown around too liberally. I also think I'm pretty OK with OpenPhil deciding it doesn't want to fund certain parts of the rationalist ecosystem (and probably less bothered than Habryka about how their comms around this weren't direct/clear).
In that sense, I don't penalize OP much for trying to "play politics" or for breaking deontological norms. Nonetheless, I still feel pretty disappointed with them, particularly for their impact on comms/policy. Some thoughts here:
It feels sorta understandable to me (albeit frustrating) that OpenPhil faces these assorted political constraints. In my view this seems to create a big unfilled niche in the rationalist ecosystem: a new, more right-coded, EA-adjacent funding organization could optimize itself for being able to enter many of those blacklisted areas with enthusiasm.
If I were a billionaire, I would love to put together a kind of "completion portfolio" to complement some of OP's work. Rationality community building, macrostrategy stuff, AI-related advocacy to try and influence Republican politicians, plus a big biotechnology emphasis focused on intelligence enhancement, reproductive technologies, slowing aging, cryonics, gene drives for eradicating diseases, etc. Basically it seems like there is enough edgy-but-promising stuff out there (like studying geoengineering for climate, or advocating for charter cities, or just funding oddball Substack intellectuals to do their thing) that you could hope to create a kind of "alt-EA" (obviously IRL it shouldn't have EA in the name) where you batten down the hatches, accept that the media will call you an evil villain mastermind forever, and hop...
not even ARC has been able to get OP funding (in that case because of COIs between Paul and Ajeya)
As context, note that OP funded ARC in March 2022.
I think OP has funded almost everyone I have listed here in 2022 (directly or indirectly), so I don't really think that is evidence of anything (though it is a bit more evidence in ARC's case, because it means the COI can be overcome).
Imo sacrificing a bunch of OpenPhil AI safety funding in exchange for improving OpenPhil's ability to influence politics seems like a pretty reasonable trade to me, at least depending on the actual numbers. As an extreme case, I would sacrifice all current OpenPhil AI safety funding in exchange for OpenPhil getting to pick which major party wins every US presidential election until the singularity.
Concretely, the current presidential election seems extremely important to me from an AI safety perspective, I expect that importance to only go up in future elections, and I think OpenPhil is correct on what candidates are best from an AI safety perspective. Furthermore, I don't think independent AI safety funding is that important anymore; models are smart enough now that most of the work to do in AI safety is directly working with them, most of that is happening at labs, and probably the most important other stuff to do is governance and policy work, which this strategy seems helpful for.
I don't know the actual marginal increase in political influence that they're buying here, but my guess would be that the numbers pencil out and OpenPhil is making the right call.
...I cannot think of anyone
Furthermore, I don't think independent AI safety funding is that important anymore; models are smart enough now that most of the work to do in AI safety is directly working with them, most of that is happening at labs,
It might be the case that most of the quality-weighted safety research involving working with large models is happening at labs, but I'm pretty skeptical that having this mostly happen at labs is the best approach, and it seems like OpenPhil should be actively interested in building up a robust safety research ecosystem outside of labs.
(Better model access seems substantially overrated in its importance, and large fractions of research can and should happen with just prompting or on smaller models. Additionally, at the moment, open-weight models are pretty close to the best models.)
(This argument is also locally invalid at a more basic level. Just because this research seems to be mostly happening at large AI companies (which I'm also more skeptical of, I think) doesn't imply that this is the way it should be, and funding should try to push people to do better stuff rather than merely reacting to the current allocation.)
Wasn't the relevant part of your argument like, "AI safety research outside of the labs is not that good, so that's a contributing factor among many to it not being bad to lose the ability to do safety funding for governance work"? If so, I think that "most of OpenPhil's actual safety funding has gone to building a robust safety research ecosystem outside of the labs" is not a good rejoinder to "isn't there a large benefit to building a robust safety research ecosystem outside of the labs?", because the rejoinder is focusing on relative allocations within "(technical) safety research", and the complaint was about the allocation between "(technical) safety research" vs "other AI x-risk stuff".
Imo sacrificing a bunch of OpenPhil AI safety funding in exchange for improving OpenPhil's ability to influence politics seems like a pretty reasonable trade to me, at least depending on the actual numbers. As an extreme case, I would sacrifice all current OpenPhil AI safety funding in exchange for OpenPhil getting to pick which major party wins every US presidential election until the singularity.
Yeah, I currently think Open Phil's policy activism has been harmful for the world, and will probably continue to be, so by my lights this is causing harm with the justification of causing even more harm. I agree they will probably get the bit right about which major political party would be better, but sadly the effects of policy work are much more nuanced and detailed than that, and also they will have extremely little influence on who wins the general elections.
We could talk more about this sometime. I also have some docs with more of my thoughts here (which I maybe already shared with you, but would be happy to do so if not).
...Separately, this is just obviously false. A lot of the old AI safety people just don't need OpenPhil funding anymore because they're working at labs or governments
sacrificing a bunch of OpenPhil AI safety funding in exchange for improving OpenPhil's ability to influence politics seems like a pretty reasonable trade
Sacrificing half of it to avoid things associated with one of the two major political parties, and being deceptive about doing this, is of course not equal to half the cost of sacrificing all of such funding; it is a much more unprincipled, distorting, and actively deceptive decision that messes up everyone's maps of the world in a massive way and reduces our ability to trust each other or understand what is happening.
Isn't it just the case that OpenPhil generally doesn't fund that many technical AI safety things these days? If you look at OP's team on their website, they have only two technical AI safety grantmakers. Also, you list all the things OP doesn't fund, but what are the things in technical AI safety that they do fund? Looking at their grants, it's mostly MATS and METR and Apollo and FAR and some scattered academics I mostly haven't heard of. It's not that many things. I have the impression that the story is less like "OP is a major funder in technical AI safety, but unfortunately they blacklisted all the rationalist-adjacent orgs and people" and more like "AI safety is still a very small field, especially if you only count people outside the labs, and there are just not that many exciting funding opportunities, and OpenPhil is not actually a very big funder in the field".
Open Phil is definitely by far the biggest funder in the field. I agree that their technical grantmaking has been limited over the past few years (though still on the order of $50M/yr, I think), but they also fund a huge amount of field-building and talent-funnel work, as well as a lot of policy stuff (I wasn't constraining myself to technical AI safety; the people listed have been as influential, if not more so, on public discourse and policy).
AI Safety is still relatively small, but more like $400M/yr small. The primary other employers/funders in the space these days are big capability labs. As you can imagine, their funding does not have great incentives either.
what actually happened is that Open Phil blacklisted a number of ill-defined broad associations and affiliations
Is there a list of these somewhere, or details on what happened?
You can see some of the EA Forum discussion here: https://forum.effectivealtruism.org/posts/foQPogaBeNKdocYvF/linkpost-an-update-from-good-ventures?commentId=RQX56MAk6RmvRqGQt
The current list of areas that I know about is:
There are a bunch of other domains where OP hasn't had an active grantmaking program but where my guess is most grants aren't possible:
I don't have a long list, but I know this is true for Lightcone, SPARC, ESPR, and any of the Czech AI-Safety/Rationality community building stuff, and I've heard a bunch of stories since then from other organizations that got pretty strong hints from Open Phil that if they start working in an area at all, they might lose all funding. (Also, the "yes, it's more like a blacklist; if you work in these areas at all we can't really fund you, though we might make occasional exceptions if it's really only a small fraction of what you do" story was confirmed to me by multiple OP staff, so I am quite confident in this, and my guess is OP staff would be OK with confirming it to you as well if you ask them.)
Yep, my model is that OP does fund things that are explicitly bipartisan (like, they are not currently filtering on being actively affiliated with the left). My sense is that in practice it's a fine balance, and if there were some high-profile thing where Horizon became more associated with the right (like maybe some alumnus becomes prominent in the Republican Party and very publicly credits Horizon for that, or there is some scandal involving someone on the right who is a Horizon alumnus), then I do think their OP funding would have a decent chance of being jeopardized, and the same is not true on the left.
Another part of my model is that one of the key things about Horizon is that they are of a similar school of PR as OP themselves. They don't make public statements. They try to look very professional. They are probably very happy to compromise on messaging and public comms with Open Phil and be responsive to almost any request that OP would have messaging-wise. That makes up for a lot. I think if you had a more communicative and outspoken organization with a similar mission to Horizon, the funding situation would be a bunch dicier (though my guess is if they were competent, an or...
And I agree with Bryan Caplan's recent take that friendships are often a bigger conflict of interest than money, so Open Phil higher-ups being friends with Anthropic higher-ups is troubling.
No kidding. From https://www.openphilanthropy.org/grants/openai-general-support/:
OpenAI researchers Dario Amodei and Paul Christiano are both technical advisors to Open Philanthropy and live in the same house as Holden. In addition, Holden is engaged to Dario’s sister Daniela.
Wish OpenPhil and EAs in general were more willing to reflect/talk publicly about their mistakes. Kind of understandable given human nature, but still... (I wonder if there are any mistakes I've made that I should reflect more on.)
"Open Phil higher-ups being friends with Anthropic higher-ups" is an understatement. An Open Philanthropy cofounder (Holden Karnofsky) is married to an Anthropic cofounder (Daniela Amodei). It's a big deal!
I want to add the gear of "even if it actually turns out that OpenPhil was making the right judgment calls the whole time in hindsight, the fact that it's hard from the outside to know that has some kind of weird Epistemic Murkiness effects that are confusing to navigate, at the very least kinda suck, and maybe are Quite Bad."
I've been trying to articulate the costs of this sort of thing lately and having trouble putting it into words, and maybe it'll turn out this problem was less of a big deal than it currently feels like to me. But, something like the combo of
a) the default being for many people to trust OpenPhil
b) many people who are paying attention think that they should at least be uncertain about it, landing somewhere on a "slightly wary" to "paranoid" scale, and...
c) this at least causes a lot of wasted cognitive cycles
d) it's... hard to figure out how big a deal to make of it. A few people (e.g. habryka, or previously Benquo or Jessicata) make it their thing to bring up concerns frequently. Some of those concerns are, indeed, overly paranoid, but, like, it wasn't actually reasonable to calibrate the wariness/conflict-theory-detector to zero, you have to make guesses. Thi...
Some of those concerns are, indeed, overly paranoid
I am actually curious if you have any overly paranoid predictions from me. I was today lamenting that despite feeling paranoid on this stuff all the time, I have de-facto still been quite overly optimistic in almost all of my predictions on this topic (like, I only gave SPARC a 50% chance of being defunded a few months ago, which I think was dumb, and I was not pessimistic enough to predict the banning of all right-associated projects, and not pessimistic enough to predict a bunch of other grant decisions that I feel weird talking publicly about).
My sense is my predictions about Anthropic have also not been pessimistic enough, though we have not yet seen most of the evidence. Maybe a good time to make bets.
What's going on with /r/AskHistorians?
AFAIK, /r/AskHistorians is the best place to hear from actual historians about historical topics. But I've noticed some trends that make it seem like the historians there generally share some bias or agenda, though I can't exactly tell what that agenda is.
The most obvious thing I noticed is from their FAQ on historians' views on other [popular] historians. I looked through these and in every single case, the /r/AskHistorians commenters dislike the pop historian. Surely at least one pop historian got it right?
I don't know about the actual object level, but a lot of /r/AskHistorians' criticisms strike me as weak:
I was reading some scientific papers and I encountered what looks like fallacious reasoning, but I'm not quite sure what's wrong with it (if anything). It goes like this:
1. Alice formulates hypothesis H and publishes an experiment that moderately supports H (p < 0.05 but > 0.01).
2. Bob does a similar experiment that contradicts H.
3. People look at the differences in Alice's and Bob's studies and formulate a new hypothesis H': "H is true under certain conditions (as in Alice's experiment), and false under other conditions (as in Bob's experiment)". They look at...
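This pattern looks a lot like the "difference between significant and not significant is not itself significant" trap: one significant and one non-significant study is surprisingly weak evidence that the effect actually differs between the two setups. Here is a minimal simulation sketch of that point (the effect size, sample size, and one-sample t-test setup are my own illustrative assumptions, not taken from the papers in question):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Assumed setup (illustrative): one fixed true effect, with no
# condition under which H flips, measured by two independent studies.
true_effect = 0.3   # standardized effect size (assumption)
n = 50              # observations per study (assumption)
trials = 10_000

splits = 0
for _ in range(trials):
    # Alice's and Bob's studies sample from the *same* distribution.
    alice = rng.normal(true_effect, 1.0, n)
    bob = rng.normal(true_effect, 1.0, n)
    p_alice = stats.ttest_1samp(alice, 0.0).pvalue
    p_bob = stats.ttest_1samp(bob, 0.0).pvalue
    # Count pairs where exactly one study crosses p < 0.05.
    if (p_alice < 0.05) != (p_bob < 0.05):
        splits += 1

print(f"pairs splitting into significant/non-significant: {splits / trials:.0%}")
```

With these assumed numbers, roughly half of all study pairs split into one "significant" and one "non-significant" result even though the underlying effect is identical in both, so the Alice-worked/Bob-didn't pattern alone doesn't license H'; the proposed moderating condition would need its own independent (ideally pre-registered) test.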
Have there been any great discoveries made by someone who wasn't particularly smart?
This seems worth knowing if you're considering pursuing a career with a low chance of high impact. Is there any hope for relatively ordinary people (like the average LW reader) to make great discoveries?
What's the deal with mold? Is it ok to eat moldy food if you cut off the moldy bit?
I read some articles that quoted mold researchers who said things like (paraphrasing) "if one of your strawberries gets mold on it, you have to throw away all your strawberries because they might be contaminated."
I don't get the logic of that. If you leave fruit out for long enough, it almost always starts growing visible mold. So any fruit at any given time is pretty likely to already have mold on it, even if it's not visible yet. So by that logic, you should never eat frui...
When people sneeze, do they expel more fluid from their mouth than from their nose?
I saw this video (warning: slow-mo video of a sneeze. kind of gross) https://www.youtube.com/watch?v=DNeYfUTA11s&t=79s and it looks like almost all the fluid is coming out of the person's mouth, not their nose. Is that typical?
(Meta: Wasn't sure where to ask this question, but I figured someone on LessWrong would know the answer.)