Hi Jameson,
I lead the EA Grants program at CEA, and anyone should feel free to contact me (nicole.ross@centreforeffectivealtruism.org) if they have any questions or if a time-sensitive opportunity comes up before the next grant round opens. Please feel very free to reach out!
Also, in case it's helpful: I looked at your other post briefly, and I don't think the topic automatically excludes it from EA Grants.
More generally, I'd be interested in hearing your thoughts about the types of projects that might be falling through the cracks. I only recently started at CEA and am still thinking through what EA Grants should look like in the future (e.g. what niche it should fill within the funding space, how it can be better and more efficient). If you (or others) have thoughts on this topic, please email me: nicole.ross@centreforeffectivealtruism.org.
Thanks for writing this up!
I wouldn't translate "organization isn't currently holding a funding round" into "dead end." Yes, it does mean you can't get money right now, but getting a grant is (afaict) almost always a fairly involved process that takes a while to pay out. The way to get money via grants involves multiple-month-long time horizons even in the best of circumstances.
I think each of the organizations involved has done at least one funding round per year, and often multiple, which means your time horizon is something like 6-9 months instead of 1-6 months (i.e. the way it would be if you'd lucked into asking this question right as a funding round was underway).
So I'd say "not currently in a funding round" should translate into a next action of "make sure you'll be notified when the next round opens; possibly send a 1-3 sentence email sooner to check whether the general scope of your work is something they're interested in" (which they might or might not reply to).
If you do need money now to get started, I think that circumstance almost always requires you to either:
This does suck, but any improvements on that status quo would be fairly difficult.
My time horizon is about 6 months. I could probably extend that by a few months but that would involve (tolerable but noticeable) sacrifices. So the difference between 1-6 months and 6-9 is meaningful to me, though not completely dispositive.
Just a short note to say that CEA’s “EA Grants” programme is funded in part by OpenPhil.
https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-general-support-2017#Budget_and_room_for_more_funding
Paul Christiano might still be active in funding stuff. (There are a few more links to funding opportunities in the comments of that post.)
Thanks.
I'll look into those possibilities. However, though my proposed work relates to AI alignment, it is not focused on that issue, and I'd consider it "outside the dominant paradigm" of AI alignment work.
Edited to add: I was going to do a separate post about those possibilities, but this website appears to be a reasonably up-to-date summary of all the funding sources linked from that post, so repeating that work would be redundant.
I'm considering applying for some kind of a grant from the effective altruism community. A quick sketch of the specifics is here. Raemon replied there with a list of possibilities. In this post, I'll look into each of those possibilities, to make this process easier for whoever comes next. In the order Raemon gave them, those are:
OpenPhil
This looks like a case where it's at least partially about "who you know". I do in fact have some contacts I could approach in this regard, and I may do so as this search proceeds.
But this does seem like a bias that it would be good to reduce. I understand that there are serious failure modes for "too open" as well as "too closed", but based on the above I think it currently tilts towards the latter. Perhaps a publicly announced process for community vetting? I suspect there are people who are qualified and willing to help sort the slush pile that such a process would create.
CEA (Center for Effective Altruism)
This would seem to be a dead end for my purposes in two regards. First, applications are not currently open, and it's not clear when they will be. And second, this appears to focus on projects with immediate benefits, and not meta-level basic research like what I propose.
BERI (Berkeley Existential Risk Initiative) individual grants
Another dead end, at the moment, as applications are not open.
EA Funds
There are 4 funds (Global Development, Animal Welfare, Long-Term Future, and Effective Altruism Meta). Of these 4, only Long-Term Future appears to have a process for individual grant applications, linked from its main page. (Luckily for me, that's the best fit for my plan anyway.)
This is definitely the most promising for my purposes. I will be applying with them in the near future.
Conclusions
I'm looking for funds in the $10K-$100K range for a short-term project that would probably fall through the gaps of traditional funding mechanisms: an individual basic research project. The EA community seems to be trying to fund this kind of project in a way that has fewer arbitrary gaps while still maintaining rigorous standards. Nevertheless, I think the landscape I surveyed above is still fragmented in arbitrary ways, and worthy projects are probably still falling through the gaps.
Raemon suggested in a comment on my earlier post that "something I'm hoping can happen sometime soon is for those grantmaking bodies to build more common infrastructure so applying for multiple grants isn't so much duplicated effort and the process is easier to navigate, but I think that'll be awhile". I think such "common infrastructure" would also support a more unified triage process, so that the best proposals wouldn't fall through the cracks; that benefit seems even greater than the ones Raemon mentioned (less duplicated effort and easier navigation). I understand that this refactoring takes time and work and probably won't be ready in time for my own proposal.
PS: see also this website on AI alignment funding options, which came up in comments.