Glad to see this! A suggestion: if you have lots of things you want to screen candidates for, people who value their time aren't going to want to gamble it on an application with a high time cost and a low chance of success. There is a way to solve this: split the application into stages.
E.g.:
- Stage 1: easy things for the candidate (checkboxes, name, email, CV, links to work)
- Stage 2
- Stage 3
This saves candidates time and also makes things better for you: you get to fast-track people who seem especially promising at earlier stages, and you're more likely to attract such candidates, since they won't feel their time is at as much risk of being wasted. Having the competence and care to notice this also lets them update positively about you.
Thanks for the suggestion. We considered this but decided against it for various reasons (though we did cut down the application length from our first draft). I agree that it's frustrating that application time costs are high. One consideration is that we often find ourselves relying on free-response questions when reviewing applications, even at the initial screen; without at least some of those, initial screening would be considerably harder.
Why not have the initial screen ask just one question, and say what you're looking for? That way, candidates who happen to already know what you're looking for aren't advantaged and better able to Goodhart.
MIRI’s Technical Governance Team plans to run a small research fellowship program in early 2026. The program will run for 8 weeks and include a $1,200/week stipend. Fellows are expected to work on their projects 40 hours per week. The program is remote-by-default, with an in-person kickoff week in Berkeley, CA (flights and housing provided). Participants who already live in or near Berkeley are free to use our office for the duration of the program.
Fellows will spend the first week either picking a scoped project from a list provided by our team or designing an independent research project (related to our overall agenda), and will then spend the remaining seven weeks working on that project under the guidance of our Technical Governance Team. One of the main goals of the program is to identify full-time hires for the team.
If you are interested in participating, please fill out this application as soon as possible (it should take 45-60 minutes). We plan to set dates based on applicant availability, but we expect the fellowship to begin after February 2, 2026 and end before August 31, 2026 (i.e., some 8-week period in spring/summer 2026).
Strong applicants care deeply about existential risk, have existing experience in research or policy work, and are able to work autonomously for long stretches on topics that merge considerations from the technical and political worlds.
Unfortunately, we are not able to sponsor visas for this program.
Here are a few example projects we could imagine fellows approaching:
Adversarial detection of ML training on monitored GPUs: Investigate which hardware signals and side-channel measurements can most reliably distinguish ML training from other intensive workloads in an adversarial setting. (A toy illustration of one candidate signal appears after this list.)
Confidence-building measures to facilitate international acceptance of an agreement: Analyze historical arms control and treaty negotiations to identify which confidence-building measures could help distrustful nations successfully collaborate on an international AI development halt before verification mechanisms are in place.
Interconnect bandwidth limits / "fixed-sets": Flesh out the security assumptions, efficacy, and implementation details of a verification mechanism that would restrict AI cluster sizes by severely limiting the external communication bandwidth of chip pods. (A rough back-of-envelope sketch appears after this list.)
The security of existing AI chips for international agreement verification: Investigate whether the common assumption that current AI chips are too insecure for remote verification is actually true, or whether existing chips (potentially augmented with measures like video surveillance) could suffice without requiring years of new chip development.
Monitoring AI chip production during an AI capabilities halt: Produce detailed technical guidance for how governments and international institutions could effectively monitor AI chip production as part of an international agreement halting AI capabilities advancement.
Executive power to intervene in AI development: Analyze the legal powers relevant to the U.S. President’s ability to halt AI development or govern AI more broadly.
Subnational and non-state actor inclusion in AI governance: Analyze how international AI agreements could account for non-state actors (companies, research institutions, individuals) who control critical capabilities, drawing on precedents from environmental and cyber governance.
Mapping and preparing for potential AI warning shots: Identify the most plausible near-term AI incidents or capability demonstrations that could shift elite and public opinion toward supporting stronger AI governance measures. For each scenario, develop policy responses, communication strategies, and institutional preparations.
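To give a flavor of the GPU-monitoring project: synchronized training steps tend to leave a strongly periodic signature in a GPU's power trace (one pulse per optimizer iteration), while many other intensive workloads don't. The sketch below is a toy illustration only; the function, thresholds, and synthetic traces are all hypothetical, not part of the project description.

```python
# Toy sketch, not a real verification tool: score how periodic a GPU power
# trace is. Synchronized training steps concentrate spectral energy at one
# frequency; aperiodic workloads spread it out. All numbers are made up.
import numpy as np

def periodicity_score(power_trace: np.ndarray) -> float:
    """Fraction of (non-DC) spectral energy in the single strongest FFT bin."""
    x = power_trace - power_trace.mean()
    spectrum = np.abs(np.fft.rfft(x))
    spectrum[0] = 0.0  # ignore the DC component
    return float(spectrum.max() / (spectrum.sum() + 1e-12))

rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.01)  # 60 s of samples at 100 Hz

# "Training-like" trace: a 2 Hz step rhythm plus noise.
training = 300 + 40 * np.sign(np.sin(2 * np.pi * 2 * t)) + rng.normal(0, 5, t.size)
# "Other workload": same average power, no dominant period.
other = 300 + rng.normal(0, 20, t.size)

print(periodicity_score(training))  # high: strong step rhythm
print(periodicity_score(other))     # low: energy spread across frequencies
```

The "adversarial" framing is exactly why a toy like this is insufficient on its own: a monitored lab could smooth its power draw or inject decoy periodicity, so the research question is which signals survive deliberate evasion.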
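And to give a flavor of the interconnect-bandwidth project: naive data-parallel training across pods must move the full gradient over the capped external link every optimizer step, so a severe cap makes cross-pod scaling impractical. The numbers below are a rough, hypothetical back-of-envelope, not a proposed design.

```python
# Back-of-envelope sketch with made-up numbers: time for one cross-pod
# gradient synchronization under a capped external link.
def sync_seconds(params: float, bytes_per_param: float, cap_gbps: float) -> float:
    grad_bytes = params * bytes_per_param   # gradient volume per step
    link_bytes_per_s = cap_gbps * 1e9 / 8   # Gbit/s -> bytes/s
    # A ring all-reduce moves roughly 2x the gradient volume over each link.
    return 2 * grad_bytes / link_bytes_per_s

# Example: 1e12 parameters, fp16 gradients (2 bytes each), 10 Gbit/s cap.
print(f"{sync_seconds(1e12, 2, 10) / 60:.0f} min per optimizer step")  # ~53 min
```

At nearly an hour of communication per step, cross-pod data parallelism becomes useless, which is the intended effect; the open questions are how robust that bound is to techniques like gradient compression or other communication-efficient training schemes.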