Thanks for writing this up. I have various thoughts, but here's the counterargument that I think people are most likely to miss, so I'll make it here:
I think that one year from now, we will be a decent amount wiser than we are now about what the best donation opportunities are. This means that one year from now, we may regret donation decisions made today.
An example: last year I put a decent fraction of my wealth in a DAF. At the time, I hadn't heard any warnings not to do that. Today, I think that it would have been better if I had not put that money in the DAF, because I think the best donation opportunities are not 501(c)(3)s.
Similarly, I find it plausible that if today I donate to the cause I consider best, a year from now I will wish I had that money back, because it will turn out I'm currently wrong about what the best cause is.
I don't think that this effect trumps the various effects that you guys point out. I just think it's a substantial consideration in the opposite direction.
I think that the consideration you raise is important, but here's something that came to mind while reading your comment:
> An example: last year I put a decent fraction of my wealth in a DAF. At the time, I hadn't heard any warnings not to do that. Today, I think that it would have been better if I had not put that money in the DAF, because I think the best donation opportunities are not 501(c)(3)s.
There's an interpretation of this experience that supports the opposite conclusion from the one you draw. Specifically, you didn't try to make a donation; you tried to punt the choice down the road. If you had been more focused on maximizing impact through donations last year, that might have forced you to learn more about the situation, and you might have noticed that political donations were a good opportunity.
I think this is a good point. At the same time, I suspect the main reason we're likely to be wiser a year from now is that we'll have done stuff over the coming year that we'll learn from. And the more we spend over the next year, the more we'll be able to do, leading to more learning. In some ways this feels like "yes, maybe at an individual level it'll feel better to wait and learn more, but your spending now not only lets you learn better, it also lets others learn better." I think the factor I'm pointing to is actually substantial, in particular if you're funding highly promising areas that are relatively new and that others are skeptical of or feel insufficiently knowledgeable about.
Example with fake numbers: my favorite intervention is X. My favorite intervention in a year will probably be (stuff very similar to) X. I value $1 for X now equally to $1.7 for X in a year. I value $1.7 for X in a year equally to $1.4 unrestricted in a year, since it's possible that I'll believe something else is substantially better than X. So I should wait to donate if my expected rate of return is >40%; without this consideration I'd only wait if my expected rate of return is >70%.
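Spelling that out (same fake numbers, purely to make the thresholds explicit):

$$\$1\ \text{(for }X\text{, now)} \;\sim\; \$1.70\ \text{(for }X\text{, in a year)} \;\sim\; \$1.40\ \text{(unrestricted, in a year)}$$

So waiting beats donating now only if my expected return $r$ satisfies $1 + r > 1.4$, i.e. $r > 40\%$; if I ignored the possibility of changing my mind about X, the threshold would instead be $1 + r > 1.7$, i.e. $r > 70\%$.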
I don't really buy this as a significant concern. (I agree it's nonzero, just pretty swamped by other things.) It also feels like it's abstracting over stuff that doesn't make sense to abstract over.
Just looking at the arguments in the OP, this feels pretty dominated by "in the future there will be way more money around." The bottleneck in the future will not be money, it'll be attention on projects that are important but hard to reason about. Anything you can make a pretty clear case for being important, you'll probably be able to get funding for.
This argument made sense as a consideration to me in the past, but, man, we just look like we're in the endgame[1] now. We will learn more, but not until the window for new projects to spin up is much, much shorter. Now is the moment that all the previous "wait till we have more information" might possibly have been for.
...
I think my main reason for sort of (awkwardly, backwardly) agreeing with this argument is "well, I think the people with a lot of frontier lab equity are probably systematically wrong about stuff: undervaluing 'technical philosophy', and being too bullish on AI projects that seem to me likely to be net negative or to just be neutrally following a tide." So, in that case, maybe I do hope they wait.
But mostly, if you are uncertain or feel like you don't know enough to start confidently making donations by now, you should specifically be looking for ways to invest in stuff that improves your understanding.
This argument also feels pretty swamped by "compounding growth of the various altruistic AI enterprises". We want to be finding compounding resources that actually can help with the problems.
("Money" isn't actually a good proxy resource for this because it's not the main bottleneck. Two compounding resources that feel more relevant are "Good (meta)cognitive processes entangled with the territory" an "Coordination capital pointed at the right goals." See Compounding Resource X for more thoughts there)
If there is a project that could be getting off the ground now, or hiring more people to spin up more subprojects, or spearheading more communication initiatives that change the landscape of what future billionaires/politicians/researchers are thinking about... those projects could be growing and having second-order effects. They could be accumulating reputation that lets them help direct the attention of new billionaires to more subtly important but undervalued things in tomorrow's landscape.
Instead of thinking generically "I might learn more", I think you should be making lists of the things you aren't sure about, or that, if you changed your mind about them, would radically change your strategy, and figuring out how to find and invest in projects that reduce those uncertainties.
Even if you think LLMs are a dead end, there's a pretty high chance of a ton of investment producing new trailheads, and compute is getting more plentiful and cheaper. If you wait a couple of years, it seems pretty likely that you'll know more, but you'll have lost most of your potential leverage, and there won't be enough time left for whatever projects you're then knowledgeable enough about to pay off.
Perhaps. I expected there to be massively more donor interest after the CAIS letter, but it didn't really seem to eventuate.
I think this stuff just takes a while, and things happened to coincide with the collapse of FTX, which masked much of the already existing growth (and the collapse of FTX also indirectly led some other funders to withdraw funds).
I will gladly take bets with people that there will be a lot more money interested in the space in 2 years than there is now.
I'm not sure about funding size, but one thing to note is that there are now government agencies involved and, I think, more government funding.
I think the deal is that we're bottlenecked on vetting/legitimacy/legibility (and still will be in a couple of years, by default). If you're a billionaire and aren't really sure what would meaningfully help, right now it may feel like a more obvious move to found a company than to make donations.
But I think "donate substantially to a thing you think is good and write up your reasons for thinking that thing is good", is pretty useful. (If you do a good job with the writeup, I bet you get a noticeable multiplier on the donation target, somewhat via redirection and somewhat via getting more people to donate at all)
This does require being a more active philanthropist who's treating it a bit more like a job. But I think if you have the sort of money the OP is talking about, it's probably worth prioritizing that. And even if you're not that kind of active philanthropist, I think we're just bottlenecked on time so much more than money.
I mean, this argument holds generally for any kind of investment in future events. Supposing that some kind of TAI gets produced in year y, investments made in year y-10 are probably less likely to be well-targeted than investments made in year y-9, and so on for y-8, all the way to y-0, when we know for sure which group of actors will make TAI (which, of course, happens when they succeed). Unfortunately, the commensurate difficulty of using funding to make an impact also increases as we approach y-0.
So I agree with you that such considerations cannot provide too much sway, because on their own they justify indefinite inaction until it is definitely too late.
Tl;dr: We believe shareholders in frontier labs who plan to donate some portion of their equity to reduce AI risk should consider liquidating and donating a majority of that equity now.
Epistemic status: We’re somewhat confident in the main conclusions of this piece. We’re more confident in many of the supporting claims, and we’re likewise confident that these claims push in the direction of our conclusions. This piece is admittedly pretty one-sided; we expect most relevant members of our audience are already aware of the main arguments pointing in the other direction, and we expect there’s less awareness of the sorts of arguments we lay out here.
This piece is for educational purposes only and not financial advice. Talk to your financial advisor before acting on any information in this piece.
For AI safety-related donations, money donated later is likely to be a lot less valuable than money donated now.
There are several reasons for this effect, which we elaborate on in this piece:
Given the above reasons, we think donations now will have greater returns than waiting for frontier lab equity to appreciate and then donating later. This perspective leads us to believe frontier lab shareholders who plan to donate some of their equity eventually should liquidate and donate a majority of that equity now.
We additionally think that frontier lab shareholders who are deliberating on whether to sell equity should consider:
4. Reasons to diversify away from frontier labs, specifically.
For donors who are planning on liquidating equity, we would recommend they do not put liquidated equity into a donor-advised fund (DAF), unless they are confident they would only donate that money to 501(c)(3)s (i.e., tax-deductible nonprofits). Money put into a DAF can only ever be used to fund 501(c)(3)s, and there are many high-value interventions, in particular in the policy-influencing space, that cannot be pursued by 501(c)(3)s (e.g., 501(c)(3)s can only engage in a limited amount of lobbying). We think the value of certain non-501(c)(3) interventions far exceeds the value of what can be funded by 501(c)(3)s, even considering multipliers for 501(c)(3)s due to tax advantages.
We understand the obvious counterargument to our main claim – that frontier lab equity will likely see outsized returns in the run up to powerful AI, such that holding the equity may allow you to grow your donation budget and do more good in the future. Nevertheless, we believe our conclusions hold even if you expect frontier lab equity to grow substantially faster than the market as a whole. In part, we hold this view because donations enable organizations and activities to grow faster than they otherwise could have, which lets them raise and absorb more funding sooner, which lets them have more impact (especially if donation budgets soon balloon such that the bottleneck on valuable AI safety work shifts away from available funding and towards available funding opportunities). In this way, donations themselves are similar to held equity, in that they serve as investments with large, compounding growth.
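One rough way to see why (purely illustrative notation, not figures from this piece): suppose held equity grows by a factor $g_e$ by some future date, the impact of a dollar donated now compounds by a factor $g_d$ over the same period through organizational and field growth, and a dollar donated at that future date is worth $v \le 1$ times as much as a dollar donated today (lower if the space is flush by then). Then the comparison is roughly:

$$\text{donate now} \;\approx\; g_d \qquad \text{vs.} \qquad \text{hold and donate later} \;\approx\; g_e \cdot v,$$

so donating now wins whenever $g_d > g_e \cdot v$, which can hold even if $g_e$ comfortably beats the market, provided donated dollars compound quickly or future dollars buy much less at the margin.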
Below, we elaborate on our reasons for holding these views, which we outlined above.
The AI safety community is likely worth tens of billions of dollars, but most of this is tied up in illiquid assets or otherwise invested to give in the future. Safety-oriented Anthropic shareholders alone are likely worth many billions of dollars (often with up to 80% of their net worth earmarked for nonprofit donations), Dustin Moskovitz is also worth over ten billion dollars (with perhaps half of that to be spent on AI safety), and there remain other pools of safety donors investing to give in the future.
Yearly spending on AI safety, meanwhile, is in the hundreds of millions (spending on just technical AI safety research and related areas is ~$100M/yr, and total spending on AI safety writ large is presumably some multiple of this figure). This yearly spending likely represents a few percent of the community's wealth. This is a somewhat modest spending rate for such an altruistic community, which we think largely reflects the fact that much of this wealth is tied up in illiquid assets. As some of these assets become more liquid (e.g., with Anthropic tender offers), the spending rate of the community will likely rise substantially.
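As a rough order-of-magnitude check on that spending rate (the ranges below are illustrative assumptions consistent with "hundreds of millions" of annual spending and "tens of billions" of wealth, not precise estimates):

$$\frac{\text{annual spending}}{\text{community wealth}} \;\approx\; \frac{\$0.2\text{–}\$0.5\text{B}}{\$20\text{–}\$40\text{B}} \;\approx\; 1\text{–}2.5\%.$$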
Further, we think it is likely that frontier AI investments will do quite well in the coming years, ballooning wealth in the AI safety community, and raising donation amounts with it. If Anthropic were to 5x again from its current valuation and Anthropic employees had access to liquidity, it’s conceivable that several Anthropic employees may set up counterparts to Open Philanthropy.
We’ve already seen that as AI has become more powerful, more people have started paying attention to AI risk – including wealthy people. For instance, billionaire investor Paul Tudor Jones has recently begun talking publicly about catastrophic risks from AI. It’s plausible that some portion of people in this class will start donating substantial sums to AI safety efforts.
Additionally, the U.S. government (and other governments) may ramp up spending on AI safety as the technology progresses and governing bodies pay more attention to the issue.
The sentiment from many grantmakers and people doing direct nonprofit AI safety work is that the current bar for funding is pretty high, and good opportunities are going unfunded. One way to get a sense of this is to browse Manifund for AI safety projects that are in search of funding. If you do this, you’ll likely notice some promising projects that aren’t getting funded, or are only partially getting funded, and many of these projects also have difficulty getting sufficient funding elsewhere. Another way to get a sense of this is to look at the recent funding round from the Survival and Flourishing Fund. Our impression is that many of the projects funded in this recent funding round are doing fantastic work, and our understanding is that some of these projects were highly uncertain about whether they would secure funding at all (from SFF or elsewhere).
We also are personally aware of organizations that we believe are doing great work, which members of the AI safety community are generally enthusiastic about, but which have been struggling to fundraise to the degree they want. If you want to check out some donation opportunities that we’d recommend, view the last section of this piece.
Collectively, for the AI safety community as a whole, there are diminishing returns to donations within any time period. This phenomenon also holds for specific donation categories, such as technical AI research, AI governance research, and donations for influencing AI policy. This phenomenon is a reason to want to spread out donations across time periods, as the alternative (concentrating donations within time periods) will force donations into lower-impact interventions. It’s also an argument against holding onto equity to donate later if many other AI safety advocates will also be holding on to highly correlated (or even the same) equity to donate later – in the event that your investments grow, so would those of other donors, making the space flush and reducing the value per dollar substantially (meanwhile, if no one donated much in earlier time periods, the missed opportunities of low-hanging fruit there may simply be gone).
Spending by AI safety advocates to influence U.S. policy (through activities such as lobbying) is only in the ballpark of ~$10M per year. Enthusiasm for this area, meanwhile, has been rising rapidly – a couple of years ago, spending on it was basically zero. The amount spent on it could easily increase by an order of magnitude or more.
Insofar as you buy the argument that we should want to increase spending in earlier time periods where spending is by default lower, this argument should be particularly strong for interventions aimed at influencing policy. As an example of how this phenomenon works in the AI policy space – the political landscape regarding AI is uncertain, and it isn’t clear when the best opportunities for passing legislation will be. For instance, it’s possible that at any time there could be a mid-sized AI accident or other event which creates a policy window, and we want AI safety policy advocates to be ready to strike whenever that happens. Certain types of interventions can help better position the AI safety community to advance AI safety policies in such an event. Spreading donations across time periods can help ensure the best of these interventions are pursued throughout, increasing the chances that AI safety advocates will have a major seat at the table whenever such a policy window opens.
Notably, major figures in the AI industry recently announced intentions to spend >$100M on efforts to stave off AI regulations in the U.S. Until our side of these policy debates is able to muster a large response (say, a substantial fraction of their announced spending on influencing policy), we’ll likely be ceding a large portion of our seat at the table.
In worlds where your financial investments see huge returns, you probably won’t be alone in making large gains. Other interests will also likely see large returns, increasing the cost of changing the direction of society (such as via policy).
Even if you manage to beat the market by a huge factor, opponents of AI regulation may see similar gains to you, increasing the total amount donated to affect AI policy (on both sides), and decreasing the value per dollar donated on the issue, specifically. Notably, opponents of AI regulation include Big Tech firms (especially those with major exposure to AI in particular) as well as ideological accelerationists (who tend to have large exposure to both AI and crypto) – in order for your investment gains to give you a major advantage in terms of the value of influencing AI policy, you’d likely need substantial gains above those groups.
Again, this is an argument for AI safety advocates as a whole to spread donations across time, not for ignoring future time periods. But it does cut against the argument that investing now can lead to much more money and thus much more impact, as the money would wind up being less valuable per dollar.
Further, AI policy is currently a relatively low salience issue to voters (i.e., approximately no voters are actually changing their votes based on stances politicians take on AI). At some point in the future, that’s likely to no longer be true. In particular, after an AI warning shot or large-scale AI-driven unemployment, AI policy may become incredibly high salience, where voters consistently change their vote based on the issue (e.g., like inflation or immigration are today, or perhaps even higher, such as the economy in the 2008 elections or anti-terrorism in the immediate aftermath of 9/11).
Once AI is a very high salience issue, electoral incentives for politicians may strongly push toward following public preferences. As public preference becomes a much larger factor in terms of how politicians act on the issue, other factors must (on average) become smaller. Therefore, donations to interventions to influence policy may become a relatively smaller factor.
Notably, money spent today may still be helpful in such situations. For instance, preexisting relationships with policymakers and past policy successes on the issue may be key for being seen as relevant experts in cases where the issue becomes higher salience and politicians are deciding where to turn to for policy specifics.
The impact of donations often accrues over time, just like equity in a fast growing company. So even if the dollar value of the money donated now is lower than it would be in the future, the impact is often similar or greater, due to the compounding.
For instance, funding can unblock organizational growth. Successful organizations often grow on a literal exponential, so donating earlier may help them along that exponential faster. Further, donations aimed at fieldbuilding or talent development can allow the pool of AI safety talent to grow faster, likewise helping these areas along an exponential. And even interventions that aren’t explicitly geared toward talent cultivation can indirectly have benefits in that domain for the grant recipient, potentially increasing the number of mentors in the field.
In the AI policy space, where reputation and relationships are highly valuable, early donations can also act as a lever on later donations. It also takes time to cultivate relationships with policymakers or to establish a reputation as a repeat player in the policy-space, and successful policy interventions rarely emerge fully formed without prior groundwork. Furthermore, legislative successes early on create foundations that can be expanded upon later. Many major government institutions that now operate at scale were initially created in much smaller forms. Social Security began modestly before expanding into the comprehensive program it is today. The 1957 Civil Rights Act, though limited in scope, established crucial precedents that enabled the far more sweeping Civil Rights Acts of 1964 and 1965. For AI safety, early successes like the establishment of CAISI (even if initially modest) create institutional footholds and foundations which can be expanded in the future. We want more such successes, even if their immediate effects seem minor.
If relationships and proto-policies are essential precursors to later, more substantial policies, then money spent on advancing policy now is not merely “consumption” but an “investment” – one which very well may outstrip the returns to investment in AI companies. If we don't spend the money now, the opportunity to build these relationships and develop these early successes is lost forever.
The AI safety community has extremely concentrated exposure to frontier AI investments, creating significant collective risk. Outside of Dustin Moskovitz, most of the community's wealth appears to be tied up in Anthropic and other AI institutions (both public and private). This concentration means the community's collective financial health is heavily dependent on the performance of a small number of AI companies.
We think there is a strong case for being concentrated in the AI industry (both due to broader society under-appreciating the possible impact of AI, and due to mission hedging), but at the same time we suspect the community may currently have overdone it. From a community-wide perspective, moving some funds out of frontier AI investments increases valuable diversification.
And if AI investments dramatically increase in value, the community will be extremely wealthy. Due to all the reasons for diminishing returns to donations, that would imply each dollar donated would be worth much less than a dollar donated today.
Selling private AI stock and obtaining liquid assets creates substantially more option value, even for those who wish to remain leveraged on AI. Private stock holdings, while potentially valuable, lack the flexibility and liquidity of public market instruments, which is very valuable for being able to use the assets if strong opportunities arise (for either donations or personal use).
It's entirely possible to maintain high leverage on AI performance using public markets. For example, investing in companies like Nvidia and Google can allow for maintaining large AI exposure while increasing liquidity. Note that most financial advisors would tend to advise against keeping much of one's personal wealth invested in a single industry, and keeping it invested in one particular company is even riskier and even less advised. Admittedly, investing in public AI stocks like hyperscalers would give your portfolio less concentrated AI exposure than holding frontier lab equity directly. Of course, talk to your financial advisor before acting on this information.
For those uncertain about whether they'll want to donate funds earlier or later, selling private stock when there’s an opportunity to do so and creating liquidity provides significantly more option value, even while remaining substantially leveraged on AI outcomes. As long as a donor places substantial probability on deciding to donate more in the near term, there are large gains to be had from liquidity and from moving from private to public markets.
In worlds where frontier lab returns are particularly high, timelines are likely short. In that case, the community will probably regret not having spent more sooner on interventions. On the other hand, if timelines are longer, it’s likely that holding frontier lab stock won’t be quite as good of an investment anyway.
From an individual perspective, the marginal utility of additional wealth decreases substantially as total wealth increases. For many safety-minded individuals with significant exposure to AI companies, reducing concentration risk may be personally optimal even if it means lower expected absolute returns.
This creates a case for some diversification even if it costs funds in expectation. Being better off personally with less variance, even if absolute returns are lower, can be the rational choice when facing such concentrated exposure to a single sector or company. The psychological and financial benefits of reduced variance may outweigh the potential opportunity cost of somewhat lower returns, particularly when those returns are already likely to be substantial.
While there is an argument for investing donation money to give more later, there are several counterarguments to prioritize donating now. Donors with frontier lab stock should carefully weigh these factors against expected investment returns, only retaining their frontier lab stock to invest to give later if they believe their expected returns are strong enough to outweigh the arguments here. We would also advise donors to think from a community-wide level – even if you think the community as a whole should retain substantial frontier lab equity, if the community is overinvested in frontier labs then you may think individual donors should liquidate to a substantial degree to better move the community in line with the optimal exposure to frontier AI.
If you’re swayed by the logic in this piece and you want to give, we think the following donation opportunities are among the best places in the world to donate for advancing AI safety. All can also absorb more funds effectively. Notably, the largest philanthropic donor in the AI safety space by far is Dustin Moskovitz (primarily via his foundation Good Ventures acting on the recommendations of his donor advisor organization Open Philanthropy), and all the following opportunities either aren’t being funded by Dustin Moskovitz or are limiting the amount of funding they accept from Dustin. We therefore consider all the following opportunities to be at the intersection of impactful and neglected, where your donation can go a long way.
501(c)(3) opportunities (i.e., tax-deductible nonprofit donations):
If you’re open to donating to entities besides 501(c)(3) nonprofits, there are also donation opportunities for influencing AI policy to advance AI safety which we think are substantially more effective than even the best 501(c)(3) donation opportunities, even considering foregone multipliers on 501(c)(3) donations (e.g., from tax benefits). If you’re interested in these sorts of opportunities, you should contact Jay Shooster (jayshooster@gmail.com), who is a donor advisor and can advise on policy-influencing opportunities.