Join a global cohort of ambitious researchers in Cape Town for a fully funded cooperative AI research fellowship! Spend 3 months from January to April 2026 researching with world-class mentors from Google DeepMind, Oxford, MIT, CMU, and Toronto, among others. Join us for an online information session on 18 September and apply here by 28 September.
Interested in mentoring for this program? Fill out this form.
The fellowship is a full-time 3-month research program for participants from diverse backgrounds around the world to pursue AI safety research from a cooperative AI perspective. The fellowship will run from January to April 2026 in Cape Town, South Africa, and kicks off with a week-long retreat.
While working from a co-working space in Cape Town, participants will receive mentorship from top researchers in the field of cooperative AI, including from organisations such as Google DeepMind, the University of Oxford, and MIT. Alongside this, participants will be provided with resources for building their knowledge and network in cooperative AI, and financial support covering their living and travel expenses.
The aim of this program is to prepare fellows for research careers in cooperative AI, and to support the burgeoning AI safety and cooperative AI ecosystem in South Africa. In line with this, the University of Cape Town (UCT) will be launching the African AI Safety Hub at the UCT AI Initiative. We aim to support this emerging institution with research direction-setting and talent from this program.
Location: In-person in Cape Town, South Africa.
Application Deadline: 28 September 2025.
Start Date: 10 January 2026.
Duration: Full-time for 3 months, ending 13 April 2026.
Stipend: $3000/month for living expenses. Note that this is generous given the comparatively low cost of living in South Africa.
Accommodation: Private room in a group house with other fellows.
Amenities: We will provide an office space (with a beautiful view of Table Mountain), and workday meals.
Travel Support: We cover flights to and from Cape Town.
Visas: We are unable to provide visa sponsorship; however, visitor visas are easy to acquire for many countries and last up to 90 days, with relatively simple processes for extension. We can provide support with handling your visitor visa extension process.
Compute Budget: We will provide compute based on your project requirements.
Participants: We are looking for candidates across the globe for this program. Additionally, we aim to provide special consideration to applicants who would otherwise have trouble accessing in-person programs in the UK or US due to visa requirements.
Multi-Agent Safety
This research area focuses on understanding and preventing risks from systems of many autonomous AI agents. This includes work on cooperation failures (especially in mixed-motive settings, where not all agents have the same objective), as well as risks from AI collusion, and systemic failures emerging from unstable or insecure networks of agents.
AI for Facilitating Human Cooperation
Many of the greatest challenges that humanity faces can be understood as cooperation challenges, where we would benefit from working together. Yet we often see tragedies of the commons, where environmental incentives make cooperation difficult or unstable. In this area, we would like to see proposals to develop AI tools that help humans resolve major cooperation challenges. By virtue of their potentially greater ability to identify mutually beneficial agreements or to create novel institutional designs, for example, AI systems could have a huge positive impact by helping humans to cooperate.
See the associated grant area on the Cooperative AI Foundation website for more details.
Mitigating Gradual Disempowerment
As AI deployment increases and critical social systems, such as the economy, the state, and culture, become less reliant on human labor and cognition, the extent to which humans can explicitly or implicitly align such social systems could dramatically decrease. Competitive pressures and 'wicked' interactions across systems and scales could make it systematically difficult to avoid outsourcing critical societal functions to AI. As a result, these systems, and the outcomes they produce, might drift further from providing what humans want. In this area, we're looking to develop mitigations to preserve human agency and ensure that our institutions serve us.
You can read more about this topic here.
Wildcard
Pitch us a project! Your project must aim to reduce catastrophic risks arising from AI, but other than that there are no constraints here. Note that you may be less likely to be matched with a mentor (and therefore accepted to the fellowship) if you choose this option, but we will make an effort to find mentors for exceptional candidates who don't exactly fit the tracks above.
Each fellow will be matched with an expert mentor, who will provide supervision for the duration of the fellowship. In addition, fellows will be supported by a research manager who will provide general research advice, career coaching, and ensure they are on track to meet their goals.
We have gathered some truly world-class mentors for this fellowship, including:
There will be a five-phase application process. We encourage you to submit your application even if it feels unpolished; we value authenticity and substance over perfect presentation. We value inclusion and encourage applicants from diverse backgrounds. Please contact us if you require special accommodation in order to apply.
Phase 1 - Initial Review: Applications are reviewed on a rolling basis. The deadline is 28 September 2025, and decisions will be made by 6 October 2025. Early submission is encouraged.
Phase 2 - Paid Work Sample: Selected candidates will be asked to complete a compensated research task (2–3 hours). Successful applicants will be notified by 20 October 2025.
Phase 3 - Interview: Selected candidates will participate in a 45–60 minute interview with program staff. Successful applicants will be notified by 29 October 2025.
Phase 4 - Mentor Matching and Offers: Selected candidates will be interviewed by one or more mentors, based on research interests and compatibility; mentors will then extend offers to suitable candidates.
Phase 5 - Final Offers: Here, our team reviews the final mentor-mentee pairings to ensure their projects are within scope. We expect the vast majority of candidates who pass phase 4 to be accepted at this stage.
We welcome participants from anywhere in the world and at many levels of experience, but a basic understanding of machine learning is required (i.e. the equivalent of having completed one undergraduate course in ML). Our intention for this fellowship is to catalyse career growth in early-stage researchers who aim to contribute significantly to the field of cooperative AI and AI safety. As such, we are looking for candidates with high potential whose careers could be significantly accelerated by this program.
As evidence of this, we evaluate applications based on the following criteria:
General Program Fit: We look for candidates who have demonstrated the ability to complete projects, solve problems independently, and drive results despite obstacles or uncertainty. We also look for candidates who have demonstrated prior engagement with topics in AI safety or cooperative AI through reading, coursework, workshops, conferences, or other learning activities.
Domain Competence & Research Skills: We value experience relating to the field that you wish to contribute to. This may include relevant coursework, skills, publications or other evidence of track record, appropriate to your career stage. We also strongly value research experience, though this is not strictly required.
Career Goals: We expect clear alignment between the fellowship and your career aspirations in cooperative AI or AI safety research. We look for candidates with thoughtful, well-articulated plans for contributing to the field.
Research Proposal Potential: We will examine the quality and feasibility of your proposed research within our tracks. We evaluate understanding of the research area, connection to existing literature, and potential for meaningful contribution within the 3-month timeline. Note that we expect many fellows will end up working on projects quite different from their original proposal. Our main motivation for including this section is to test your ability to synthesize ideas and develop a promising direction.
Counterfactual: We also consider the potential counterfactual impact of the fellowship on your career trajectory. We give particular consideration to candidates who would have limited access to similar opportunities elsewhere, those from underrepresented communities, and those who could significantly benefit from exposure to the cooperative AI research community.
Powerful AI systems are increasingly being deployed with the ability to autonomously interact with the world. This is a profound change from the more passive, static AI services with which most of us are familiar, such as chatbots and image generation tools.
In the coming years the competitive advantages offered by autonomous, adaptive agents will likely drive their adoption in high-stakes domains with increasingly complex and important tasks. In order to fulfil their roles, these advanced agents will need to communicate and interact with each other and with people, giving rise to new multi-agent systems of unprecedented complexity.
While the broader fields of AI safety and AI governance often focus on individual AI systems, cooperative AI focuses specifically on multi-agent safety and how AI can overcome cooperation challenges between many actors. This includes reducing risks associated with interactions between advanced AI agents, as well as making use of AI to overcome human cooperation challenges. You can learn more about cooperative AI through the Cooperative AI self-paced online course.
Through the fellowship, we are supporting global talent in advancing research during a crucial phase of AI development. Our partners in South Africa and abroad aim to facilitate collaboration across continents to solve safety and alignment problems, enabling researchers to build ongoing relationships that lead to impactful careers.
We expect rapid AI adoption in Africa given that it is, demographically, the youngest and fastest-growing continent. We believe that preparing African nations with societal safeguards for the mass adoption of AI will be crucial for preventing and mitigating human suffering. We also believe that AI can be used beneficially in this context to uplift human coordination and resolve resource-sharing problems. This perspective aligns with the Continental Strategy on AI outlined by the African Union.
Given this, South Africa is an excellent home for this program as it hosts the top academic institutions on the continent. In particular, the University of Cape Town – the continent's highest-ranked institution and a core partner of this program – has strong national and continental academic ties and a rapidly expanding internal AI ecosystem. AI Safety South Africa (AISSA) has been working alongside the University of Cape Town to integrate AI safety topics into the university's curriculum since AISSA's inception. With this in mind, we aim to support the burgeoning AI safety and cooperative AI ecosystem in South Africa with the fellowship, including supporting the establishment of the African AI Safety Hub at the University of Cape Town.
Furthermore, due to more lenient visa requirements, we expect that hosting this program in South Africa will enable a more diverse pool of applicants to contribute to this critical, globally relevant field. Lastly, as an added bonus, the fellowship takes place during South Africa's summer, with sunny beaches just 15 minutes away from the co-working space!
The fellowship is a collaboration between the Cooperative AI Foundation (CAIF), Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS), The AI Initiative at the University of Cape Town (UCT), and AI Safety South Africa (AISSA). AISSA is driving this project, building on PIBBSS's fellowship methodology with research oversight from CAIF. This initiative serves as both a talent pipeline and a research direction-setting mechanism for UCT's emerging African AI Safety Hub. The program is funded by the AI Safety Tactical Opportunities Fund and the Cooperative AI Foundation.
AI Safety South Africa (AISSA): A capacity-building organisation focused on developing skills, networks, and community for preventing global catastrophic outcomes from advanced AI. AISSA drives impact through events, courses, partnerships, and AI Safety Cape Town, a co-working and events space.
The Cooperative AI Foundation (CAIF): A charitable entity, backed by a $15 million philanthropic commitment from Macroscopic Ventures. CAIF's mission is to support research that will improve the cooperative intelligence of advanced AI for the benefit of all.
The UCT AI Initiative: A research, teaching, and knowledge translation ecosystem dedicated to advancing world-class AI rooted in African realities. The initiative's mission is to design technologies that drive justice, dignity, and collective flourishing. The AI Initiative will have focus areas in AI applications, including improving outcomes in health, climate, and poverty, as well as AI safety and foundational AI theory.
Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS): A research initiative aiming to leverage insights on the parallels between intelligent behaviour in natural and artificial systems towards progress on important questions in AI risk, governance, and safety. PIBBSS has successfully run multiple research fellowships, developing a unique methodology for mentoring early-career researchers in AI safety.