This is fantastic, tons of things I agree with strongly.
That said, my big unaddressed question is about scale: obviously it's easier to fund one $1M project than five $200k projects, but the smaller projects are often higher-leverage. And that goes for smaller things too.
So taking this much further: in my experience, lots of really great early-stage opportunities are $5k or $10k grants (help someone write a paper, or fund a small experiment to check whether a new idea works), and these can have as much expected impact as a marginal $200k spent on other opportunities. How do you manage these, both in terms of finding and filtering them, and in terms of handling their relatively very high overhead costs? (Or do you not find that this is true, or do you have a minimum grant size?)
Good point; I agree small opportunities can be great.
how do you manage these, both in terms of filtering and finding them, and managing the relatively very high overhead costs for them?
This post is more "here are my a priori observations" than "here are processes I know work well in practice"; I don't claim the latter. But since you asked:
I don't do a good job of finding small opportunities. When small opportunities come to my attention, my process is something like:
An abbreviated heuristic is: if it's in scope, it seems great, and it's hard to imagine regretting it substantially more than lighting the money on fire, just fund it. Funding lots of small opportunities is better than funding few.
Note that being exploitable has downsides beyond wasting money. (Internet people reading this, please don't ask me for money because you read this; I'm very unlikely to give you money even for good things because my expertise is limited to a small fraction of good things.)
Probably in my domain relative to yours, (1) there are far fewer small one-off opportunities and (2) a greater fraction of them carry substantial downside risk.
Thanks for writing this, Zach. After spending the last 2.5 years working as a grantmaker, a lot of this resonates with me!
Rather than flag the specific bits I agree with, I’ll just say: this seems to me like a pretty useful piece for anyone trying to understand the mental models many AI safety grantmakers tend to use in practice.
I really liked this, thank you for writing it.
If you have reading recommendations, please share!
What I didn't expect about being a funder by James Ozden came to mind.
On BOTEC maximalism and your bar, can you say more? I guess I've been a bit cluster-pilled, especially given how bad the thinking in many BOTECs I've seen is. If anyone else said this I'd be skeptical, but I respect your thinking, and I thought Eric's CEA of donating $1k to Alex Bores was good, so I'm intrigued.
Written to a new grantmaker.
Note: I subscribe to BOTEC maximalism: I put numbers on things whenever possible, and those numbers are pretty load-bearing. As far as I know, nobody outside my team does that, and I think most people are correct not to. It works great for us, especially for comparing interventions that target different desiderata, e.g. "make the US government better on AI safety" vs. "make technical AI safety research happen." But it only works because we're good at quantifying the value (for the long-term future) of many desiderata and interventions (AI safety, better futures, politics, etc.), and because we can share state and resolve disagreements (it would work worse for large teams). For most people, even many math-y people, their BOTECs are often terrible, much worse than mere intuition. Sometimes it's crucial to assess value in abstract units, especially for comparing different kinds of interventions. But it mostly seems fine to say "here are some different things that are similarly good (and here's how they compare to our bar)" and then just compare new stuff to those things.
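To make the comparison-in-abstract-units idea concrete, here is a minimal sketch of the kind of BOTEC described above. Every number, intervention name, and the `value_per_dollar` helper are made up for illustration; nothing here reflects the author's actual estimates or process.

```python
# Hypothetical BOTEC sketch: compare interventions targeting different
# desiderata by converting each to a common abstract unit of long-term value
# per dollar, then checking each against a funding bar.
# All numbers are invented for illustration.

BAR = 1.0  # assumed funding threshold, in abstract value units per dollar


def value_per_dollar(p_success: float, value_if_success: float, cost: float) -> float:
    """Expected abstract value per dollar: the core BOTEC quantity."""
    return p_success * value_if_success / cost


# Two interventions with different mechanisms, made comparable via one unit.
interventions = {
    "make the US government better on AI safety": value_per_dollar(0.05, 200.0, 5.0),
    "make technical AI safety research happen": value_per_dollar(0.30, 40.0, 8.0),
}

for name, vpd in interventions.items():
    verdict = "above bar" if vpd >= BAR else "below bar"
    print(f"{name}: {vpd:.2f} units/$ ({verdict})")
```

The point of the sketch is only that a shared abstract unit lets very different interventions land on one scale, which is what makes the "compare new stuff to reference grants and to the bar" move possible.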
Note: many of these takes are a priori observations. You shouldn't update as if these are all based on real-world experience.
Grantmaking reading recommendations
The best thing is Linch's Some unfun lessons I learned as a junior grantmaker (which loosely inspired this post's title). After that, consider (these all happen to be from CG):
If you have reading recommendations, please share! I asked various grantmakers and they didn't really have others.
This post is the beginning of my sequence inspired by my prioritization research and donation advising work.
You counterfactually generated $9M of value. The people/orgs that actually do the project, if relevant, are also counterfactual for that value, but that's fine: counterfactual contributions don't sum to the total. The donor generated $1M of value. I assume your 10x judgment is after accounting for the opportunity cost of the people/orgs, if relevant: the value you generate is the value of the project minus the opportunity cost of the people/orgs and of the money required.
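The arithmetic above can be spelled out in a short sketch. The figures are the ones implied by the comment (a $1M grant, a 10x judgment, hence a $10M project); they are illustrative, not real grant data.

```python
# Counterfactual-attribution arithmetic implied by the comment above.
# Assumed numbers: a $1M grant to a project the grantmaker judges to be
# worth 10x the money, i.e. $10M of gross value.

money_cost = 1_000_000               # the grant (the donor's contribution)
multiplier = 10                      # the grantmaker's "10x" judgment
project_value = multiplier * money_cost  # $10M gross value

# The grantmaker's counterfactual value: the project's value minus what
# the money alone would have produced without their judgment.
grantmaker_value = project_value - money_cost  # $9M

# The donor's counterfactual value is the money itself.
donor_value = money_cost  # $1M

# Note the people/orgs executing the project are also counterfactually
# necessary for the full $10M; counterfactual shares need not sum to it.
print(grantmaker_value, donor_value)
```

This is just bookkeeping for the point that several parties can each be counterfactually responsible for overlapping portions of the same total.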