
It’s been another busy year at Open Philanthropy; after nearly doubling the size of our team in 2022, we’ve added over 30 new team members so far in 2023. Now we’re launching a number of open applications for roles in all of our Global Catastrophic Risks (GCR) cause area teams (AI Governance and Policy, Technical AI Safety, Biosecurity & Pandemic Preparedness, GCR Cause Prioritization, and GCR Capacity Building[1]).

The application, job descriptions, and general team information are available here. Notably, you can apply to as many of these positions as you'd like with a single application form!


We’re hiring because our GCR teams feel pinched and really need more capacity. Program Officers in GCR areas think that growing their teams will lead them to make significantly more grants at or above our current bar. We’ve had to turn down potentially promising opportunities because we didn’t have enough time to investigate them[2]; on the flip side, we’re likely currently allocating tens of millions of dollars suboptimally in ways that more hours could reveal and correct.

On the research side, we’ve had to triage important projects that underpin our grantmaking and inform others’ work, such as work on the value of Open Phil’s last dollar and deep dives into various technical alignment agendas. And on the operational side, maintaining flexibility in grantmaking at our scale requires significant creative logistical work. Both last year’s reduction in capital available for GCR projects (in the near term) and the uptick in opportunities following the global boom of interest in AI risk make our grantmaking look relatively more important; compared to last year, we’re now looking at more opportunities in a space with less total funding[3].

GCR roles we’re now hiring for include: 

  • Program associates to make grants in technical AI governance mechanisms, US AI policy advocacy, general AI governance, technical AI safety, biosecurity & pandemic preparedness, EA community building, AI safety field building, and EA university groups. 
  • Researchers to identify and evaluate new areas for GCR grantmaking, conduct research on catastrophic risks beyond our current grantmaking areas, and oversee a range of research efforts in biosecurity. We’re also interested in researchers to analyze issues in technical AI safety and (separately) the natural sciences.
  • Operations roles embedded within our GCR grantmaking teams: the Biosecurity & Pandemic Preparedness team is looking for an infosec specialist, an ops generalist, and an executive assistant (who may also support some other teams); the GCR Capacity Building team is looking for an ops generalist. 

Most of these hires have multiple possible seniority levels; whether you’re just starting in your field or have advanced expertise, we encourage you to apply. 

If you know someone who would be great for one of these roles, please refer them to us. We welcome external referrals and have found them extremely helpful in the past. We also offer a $5,000 referral bonus; more information here.

How we’re approaching these hires

  1. You only need to apply once to opt into consideration for as many of these roles as you’re interested in. A checkbox on the application form will ask which roles you’d like to be considered for. We’ve also tried to streamline work tests and use the same tests for multiple roles where possible; however, some roles use different tests, so you may still need to complete more than one, especially if you’re interested in roles spanning a wide array of skillsets (e.g., both research and operations). You may also interview with multiple hiring managers if multiple teams are still interested in considering you later in the process, and you may receive an offer for one or more of the roles you expressed interest in.
  2. Many of these roles have specific location requirements. We offer relocation benefits to assist with the cost of moving, and we’re able to sponsor visas for many (though not all) of these roles, so we encourage international candidates to apply for any role unless otherwise specified. We don’t control who is and isn’t eligible for visas and can’t guarantee visa approval, but we have a dedicated Business Immigration team to advise and assist candidates with visa applications. 
  3. If it’s important to you to know your chances of being hired, or to know whether you’ll receive an offer by a certain date, just ask. We’ll be glad to tell you how many people are at your stage in the pipeline, and may be willing to expedite especially promising candidates who need a decision sooner. Conversely, if you need more time to complete work tests, please flag this as well; we may be able to offer an extended timeline. 
  4. We let candidates opt into being referred to other organizations if they are rejected later in our process. If you check this box on the application form, we may share some basic information (your email, resume, how far you made it in our process, and a high-level sentence or two about why we think you should be on their radar) with organizations looking for similar talent. We will not share work tests or work test scores, and will not respond to inquiries asking for more detailed evaluative feedback without first checking with candidates. The goal of this change is to make matchmaking for promising candidates more streamlined by asking for candidates’ consent up front, rather than after rejection. 
  5. Candidates rejected at the final interview stage will receive individualized feedback. A desire for more feedback is the most frequent piece of feedback we get from applicants. Unfortunately, due to the volume of candidates and requests for feedback we receive, we can’t guarantee this for candidates rejected earlier in the process, but we’re working on potential alternatives (e.g., providing a summary of the kinds of mistakes most commonly made on work tests), so there’s a chance we’ll be able to share something along those lines. 

If you have a question or concern, or are a recent applicant to Open Philanthropy and want to discuss your specific situation before applying again, please email jobs@openphilanthropy.org.

  1. Formerly known as the Effective Altruism Community Growth (Longtermism) team.

  2. For more color, see this comment by Ajeya Cotra, Program Officer for our technical AI safety grantmaking.

  3. Grantmaking work can be looked at in financial terms, just like “earning to give” work. Suppose your investigation correctly causes a funder to update their assessment of a $200k opportunity from “roughly as good as my last dollar” to “roughly zero,” causing them to not make the grant. This has the same impact as that funder having another $200k to give. We haven’t done a quantitative analysis, but we think it’s likely that GCR-focused grantmaking work at Open Phil has had a higher financial return than most GCR-focused earning-to-give work, since our small teams dispense tens to hundreds of millions of dollars per year in grants. Of course, this is a fairly simplistic analysis that focuses on only one aspect of grantmaking.
