Thanks for the pointers! I think there should probably be more, but I'm glad to know there's more than I was aware of.
I don't know about other labs, but OpenAI employees are not permitted to take out loans against OpenAI equity.
I know that some AI companies have donation-matching policies for approved organizations. Influencing which organizations are eligible for matching could be very valuable, and employees should not restrict their giving to already-approved organizations.
I am not aware of OpenAI or Anthropic having such policies.
(Google does, but it's just "which organisations are on benevity.com" and matches are capped at $10K, so it's not too relevant here IMO)
If you genuinely believe that your future net worth will be measured in galaxies, and that your own work is the most likely to reduce AI x-risk, that would plausibly change the calculus quite a bit.
Update (March 12): Transformer published on this. Their article (and comments here) note that there has already been some public discussion, which I hadn't seen.
Many Anthropic employees, especially, are sympathetic to AI safety and (will) have lots of money. This is something that is being talked about a lot (semi-)privately, but I haven't seen any public discussion of it.
I find that striking. The topic seems worthy of extensive public discussion, and I suspect this community has inherited unhelpful cultural norms against publicly discussing how individuals use their money.
It also seems likely that many/most AI company employees who are passionate about reducing AI risk should rapidly give much/most of their money to effective projects that would otherwise not be adequately funded.
There's a lot of potential for this to do tremendous good. There are of course things like political giving. But I think most of this potential would come from employees having different theories of change than institutional funders, moving faster, and having higher risk appetite. This is especially true given short timelines.
A few specific thoughts:
To the extent things like the above are issues, it seems like coordination failures amongst company employees might be a large contributing factor. Groups of AI company employees could address this by delegating relevant work to individual members who volunteer or are selected randomly.
I'm fundraising for my nonprofit, Evitable, and might benefit from such things. But my purpose in writing this is to promote public discussion that I think can benefit others in similar situations to me/Evitable.
I haven't put much effort into fundraising for Evitable yet, and expect I will learn a lot more about the situation as I do.
Much of the discussion here could equally well apply to giving by high-net-worth individuals (HNWIs) more broadly.