TL;DR: * I think many of the marginal hires at larger organizations doing AI safety technical or policy work right now (including e.g. Apollo, Redwood, METR, RAND, GovAI, Epoch, UKAISI, and Anthropic’s safety teams) would be capable of founding (or being early employees of) organizations focused on building capacity in...
[cross-posted from the EA Forum] Post authors: Eli Rose, Asya Bergal Posting in our capacities as members of Open Philanthropy’s Global Catastrophic Risks Capacity Building team. Note: This program, together with our separate program for work that builds capacity to address risks from transformative AI, has replaced our 2021 request...
(cross-posted from the Effective Altruism Forum) We’ve recently made a few updates to the program page for our career development and transition funding program (recently renamed, previously the “early-career funding program”), which provides support – in the form of funding for graduate study, unpaid internships, independent study, career transition and...
EDIT 2023/11/21 We have finished evaluating the first batch of applications. Since we have not yet finished the hiring process, people are still welcome to apply. We are still very likely to skim future applications, but we make no firm commitment to do so. Candidates are welcome...
The Long-Term Future Fund has an AMA up on the Effective Altruism Forum. There's no hard deadline for questions, but we have a soft commitment to focus on questions asked on or before September 8th. I'd prefer to centralize the questions in one place. If you don't want to...
(Cross-posted from the EA Forum.) Introduction This payout report is meant to cover the Long-Term Future Fund's grantmaking starting January 2022 (after our December 2021 payout report), going through April 2023 (1 January 2022 - 30 April 2023). * Total funding recommended: $13.0M * Total funding paid out: $12.16M *...