Copying over a comment from the EA forum (and my response), because it speaks to something that was in some earlier drafts of this post, that I expect to come up, and that seems worth addressing directly.
IMO it would help to see a concrete list of MIRI's outputs and budget for the last several years. My understanding is that MIRI has intentionally withheld most of its work from the public eye for fear of infohazards, which might be reasonable for soliciting funding from large private donors but seems like a poor strategy for raising substantial public money, both prudentially and epistemically.
If there are particular projects you think are too dangerous to describe, it would still help to give a sense of what the others were, a cost breakdown for those, anything you can say about the more dangerous ones (e.g. number of work hours that went into them, what class of project they were, whether they're still live, any downstream effect you can point to, and so on).
My response:
(Speaking in my capacity as someone who currently works for MIRI)
I think the degree to which we withheld work from the public for fear of accelerating progress toward ASI might be a little overstated in the above. We adopted a stance of closed-by-default research years ago for that reason, but that's not why we don't publish, e.g., concrete and exhaustive lists of outputs and budgets.
We do publish some lists of some outputs, and we do publish some degree of budgetary breakdowns, in some years.
But mainly, we think of ourselves as asking for money from only one of the two kinds of donors. MIRI feels that it's pretty important to maintain strategic and tactical flexibility, to be able to do a bunch of different weird things that we think each have a small chance of working out without exhaustive justification (or post-hoc litigation) of each one, and to avoid the trap of focusing only on clearly legible short chains of this → that (as opposed to trying both legible and less-legible things).
(A colleague of mine once joked that "wages are for people who can demonstrate the value of their labor within a single hour; I can't do that, which is why I'm on a salary." A similar principle applies here.)
In the past, funding MIRI led to outputs like our alignment research publications and the 2020/2021 research push (that didn't pan out). In the more recent past, funding MIRI has led to outputs like the work of our technical governance team, and the book (and its associated launch campaign and various public impacts).
That's enough for some donors—"If I fund these people, my money will go into various experiments that are all aimed at ameliorating existential risk from ASI, with a lean toward the sorts of things that no one else is trying, which means high variance and lots of stuff that doesn't pan out and the occasional home run."
Other donors are looking to more clearly purchase a specific known product, and those donors should rightly send fewer of their dollars to MIRI, because MIRI has never been and does not intend to ever be quite so clear and concrete and locked-in.
(One might ask "okay, well, why post on the EA forum, which is overwhelmingly populated by the other kind of donor, who wants to track the measurable effectiveness of their dollars?" and the answer is "mostly for the small number who are interested in MIRI-like efforts anyway, and also for historical reasons since the EA and rationality and AI safety communities share so much history." Definitely we do not feel entitled to anyone's dollars, and the hesitations of any donor who doesn't want to send their money toward MIRI-like efforts are valid.)
This year, we received a $1.6M matching grant from the Survival and Flourishing Fund, which means that the first $1.6M we receive in donations before December 31st will be matched 1:1. We will only receive the grant funds if it can be matched by donations.
Does anyone know (i.e. actually know, not just have a speculative guess) why SFF does this sort of thing? Naively it seems like a weird way to operate from a perspective of maximizing the good done by their donations. Is the reasoning something like "if other people donate to MIRI, that increases our estimate of how valuable MIRI is, and so we want to donate more to it"?
this is an experiment to incentivise/help applicants to fundraise more widely, as SFF funds are getting ridiculously contested. also, TBC, this is an option the applicants are free to ignore. more here: https://survivalandflourishing.fund/matching-pledges.
I just wanted to confirm that the matching donations are managed on MIRI's end, and that I don't need to do anything special?
Two things you have to do to make sure your donation is matched: 1) Confirm that the date is no later than Dec 31, 2025; 2) Check the fundraiser progress bar to see whether MIRI has already exceeded the $1.6M in donations that SFF will match: https://intelligence.org/2025/12/01/miris-2025-fundraiser/
If it's before the new year, and MIRI hasn't yet raised $1.6M, your donation will be matched.
(There may be minor caveats to this; I'm pretty confident of the above, per private communications and from reading https://survivalandflourishing.fund/matching-pledges#what_donations_are_eligible. I will shortly donate myself, and am confident I will be matched.)
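To make the mechanics concrete, here is a rough sketch of the matching arithmetic as I understand it (the function and names below are just illustrative, not anything official from MIRI or SFF): a donation made on or before the deadline is matched 1:1, but only for the portion that still fits under the $1.6M cap.

```python
from datetime import date

MATCH_CAP = 1_600_000          # SFF matches the first $1.6M of donations, 1:1
DEADLINE = date(2025, 12, 31)  # donations after this date aren't matched

def matched_amount(donation: float, raised_so_far: float, donated_on: date) -> float:
    """Return how much of `donation` would be matched under the rules above.

    `raised_so_far` is the total already counted toward the $1.6M cap
    (roughly what the fundraiser progress bar shows).
    """
    if donated_on > DEADLINE:
        return 0.0
    remaining_cap = max(0.0, MATCH_CAP - raised_so_far)
    return min(donation, remaining_cap)

# Example: a $10k donation on Dec 15, with $1.5M already raised, still fits
# under the cap ($100k remaining), so it would be matched in full.
print(matched_amount(10_000, 1_500_000, date(2025, 12, 15)))  # -> 10000
```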
MIRI is running its first fundraiser in six years, targeting $6M. The first $1.6M raised will be matched 1:1 via an SFF grant. Fundraiser ends at midnight on Dec 31, 2025. Support our efforts to improve the conversation about superintelligence and help the world chart a viable path forward.
MIRI is a nonprofit with a goal of helping humanity make smart and sober decisions on the topic of smarter-than-human AI.
Our main focus from 2000 to ~2022 was on technical research to try to make it possible to build such AIs without catastrophic outcomes. More recently, we’ve pivoted to raising an alarm about how the race to superintelligent AI has put humanity on course for disaster.
In 2025, those efforts centered on Nate Soares and Eliezer Yudkowsky’s book (now a New York Times bestseller) If Anyone Builds It, Everyone Dies, with many public appearances by the authors; many conversations with policymakers; the release of an expansive online supplement to the book; and various technical governance publications, including a recent report with a draft of an international agreement of the kind that could actually address the danger of superintelligence.
Millions of people have now viewed interviews and appearances featuring Eliezer and/or Nate, and the possibility of rogue superintelligence and core ideas like “grown, not crafted” are increasingly part of the public discourse. But there is still a great deal to be done if the world is to respond to this issue effectively.
In 2026, we plan to expand our efforts, hire more people, and try a range of experiments to alert people to the danger of superintelligence and help them make a difference.
To support these efforts, we’ve set a fundraising target of $6M ($4.4M from donors plus 1:1 matching on the first $1.6M raised, thanks to a $1.6M matching grant), with a stretch target of $10M ($8.4M from donors plus $1.6M matching).
Donate here, or read on to learn more.
The Big Picture
As stated in If Anyone Builds It, Everyone Dies:
The leading AI labs are explicitly rushing to create superintelligence. It looks to us like the world needs to stop this race, and that this will require international coordination. MIRI houses two teams working towards that end: a communications team and a technical governance team.
Activities
Communications
If Anyone Builds It, Everyone Dies has been the main recent focus of the communications team. We spent substantial time and effort preparing for publication, executing the launch, and engaging with the public via interviews and media appearances.
The book made a pretty significant splash:
The end goal is not media coverage, but a world in which people understand the basic situation and are responding in a reasonable, adequate way. It seems early to confidently assess the book's impact, but we see promising signs.
The possibility of rogue superintelligence is now routinely mentioned in mainstream coverage of the AI industry. We’re finding in our own conversations with strangers and friends that people are generally much more aware of the issue, and taking it more seriously. Our sense is that as people hear about the problem through their own trusted channels, they are more receptive to concerns.
Our conversations with policymakers feel meaningfully more productive today than they did a year ago, and we have been told by various U.S. Members of Congress that the book had a valuable impact on their thinking. It remains to be seen how much this translates into action. And there is still a long way to go before world leaders start coordinating an international response to this suicide race.
Today, the MIRI comms team comprises roughly seven full-time employees (if we include Nate and Eliezer). In 2026, we’re planning to grow the team. For example:
We will be making a hiring announcement soon, with more detail about the comms team’s specific models and plans. We are presently unsure (in part due to funding constraints/budgetary questions!) whether we will be hiring one or two new comms team members, or many more.
Going into 2026, we expect to focus less on producing new content, and more on using our existing library of content to support third parties who are raising the alarm about superintelligence for their own audiences. We also expect to spend more time responding to news developments and taking advantage of opportunities to reach new audiences.
Governance
Our governance strategy primarily involves:
There's a ton of work still to be done. To date, the MIRI Technical Governance Team (TGT) has mainly focused on high-level questions such as "Would it even be possible to monitor AI compute relevant to frontier AI development?" and "What would an international halt to the superintelligence race look like?" We're only just beginning to transition into more concrete specifics, such as writing up A Tentative Draft of a Treaty, with Annotations, which we published on the book website to coincide with the book release, followed more recently by a report containing a draft international agreement.
We plan to push this a lot further, and work towards answering questions like:
We need to extend that earlier work into concrete, tractable, shovel-ready packages that can be handed directly to concerned politicians and leaders (whose ranks grow by the day).
To accelerate this work, MIRI is looking to support and hire individuals with relevant policy experience, writers capable of making dense technical concepts accessible and engaging, and self-motivated and competent researchers.[1]
We’re also keen to add additional effective spokespeople and ambassadors to the MIRI team, and to free up more hours for those spokespeople who are already proving effective. Thus far, the bulk of our engagement with policymakers and national security professionals has been done either by our CEO (Malo Bourgon), our President (Nate Soares), or the TGT researchers themselves. That work is paying dividends, but there’s room for a larger team to do much, much more.
In our conversations to date, we’ve already heard that folks in government and at think tanks are finding TGT’s write-ups insightful and useful, with some describing the work as top-of-its-class. TGT’s recent outputs and activities include:
The above isn’t an exhaustive description of what everyone at MIRI is doing; e.g., we continue to support a small amount of in-house technical alignment research.
As noted above, we expect to make hiring announcements in the coming weeks and months, outlining the roles we’re hoping to add to the team. But if your interest has already been piqued by the general descriptions above, you’re welcome to reach out to contact@intelligence.org. For more updates, you can subscribe to our newsletter or periodically check our careers pages (MIRI-wide, TGT-specific).
Fundraising
Our goal at MIRI is to have at least two years’ worth of reserves on hand. This enables us to plan more confidently: hire new staff, spin up teams and projects with long time horizons, and balance the need to fundraise with other organizational priorities. Thanks to the generous support we received in 2020 and 2021, we haven’t needed to run any fundraisers in the last six years.
We expect to hit December 31st having spent approximately $7.1M this year (similar to recent years[2]), and with $10M in reserves if we raise no additional funds.[3]
Going into 2026, our budget projections have a median of $8M[4], assuming some growth and some large projects, with large error bars driven by uncertainty about how much we grow and which large projects we take on. On the upper end of our projections, our expenses would hit upwards of $10M/yr.
Thus, our expected end-of-year reserves put us $6M shy of our two-year reserve target of $16M.
This year, we received a $1.6M matching grant from the Survival and Flourishing Fund, which means that the first $1.6M we receive in donations before December 31st will be matched 1:1. We will only receive the grant funds if it can be matched by donations.
Therefore, our fundraising target is $6M ($4.4M from donors plus 1:1 matching on the first $1.6M raised). This will put us in a good place going into 2026 and 2027, with a modest amount of room to grow.
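For those who want the arithmetic spelled out, here is a back-of-the-envelope restatement of the figures above (illustrative only, not a budget model):

```python
# Back-of-the-envelope restatement of the figures in this section (all in $K).
reserves_end_of_2025 = 10_000    # expected reserves on Dec 31 if we raise nothing
median_2026_budget = 8_000       # median budget projection for 2026

reserve_target = 2 * median_2026_budget      # two years of runway: $16M
gap = reserve_target - reserves_end_of_2025  # $6M shortfall, hence the target
match_cap = 1_600                            # SFF matches the first $1.6M raised
from_donors = gap - match_cap                # $4.4M needed directly from donors

print(reserve_target, gap, from_donors)  # 16000 6000 4400
```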
It’s an ambitious goal and will require a major increase in donor support, but this work strikes us as incredibly high-priority, and the next few years may be an especially important window of opportunity. A great deal has changed in the world over the past few years. We don’t know how many of our past funders will also support our comms and governance efforts, or how many new donors may step in to help. This fundraiser is therefore especially important for informing our future plans.
We also have a stretch target of $10M ($8.4M from donors plus the first $1.6M matched). This would allow us to move much more quickly on pursuing new hires and new projects, embarking on a wide variety of experiments while still maintaining two years of runway.
For more information or assistance on ways to donate, view our Donate page or contact development@intelligence.org.
The default outcome of the development of superintelligence is lethal, but the situation is not hopeless; superintelligence doesn't exist yet, and humanity has the ability to hit the brakes.
With your support, MIRI can continue fighting the good fight.
Donate Today
In addition to growing our team, we plan to do more mentoring of new talent who might go on to contribute to TGT's research agenda, or who might contribute to the field of technical governance more broadly.
Our yearly expenses in 2019–2024 ranged from $5.4M to $7.7M, with the high point in 2020 (when our team was at its largest), and the low point in 2022 (after scaling back).
It’s worth noting that despite the success of the book, book sales will not be a source of net income for us. As the authors noted prior to the book’s release, “unless the book dramatically exceeds our expectations, we won’t ever see a dime”. From MIRI’s perspective, the core function of the book is to try to raise an alarm and spur the world to action, not to make money; even with the book’s success to date, the costs to produce and promote the book have far exceeded any income.
Our projected expenses are roughly evenly split between Operations, Outreach, and Research, where our communications efforts fall under Outreach and our governance efforts largely fall under Research (with some falling under Outreach). Our median projection breaks down as follows: $2.6M for Operations ($1.3M people costs, $1.2M cost of doing business), $3.2M for Outreach ($2M people costs, $1.2M programs), and $2.3M for Research ($2.1M people costs, $0.2M programs). This projection includes roughly $0.6–1M in new people costs (measured in full-time equivalents, i.e., prorated to reflect that new hires won’t all start on January 1st).
Note that the above is an oversimplified summary; it's useful for high-level takeaways, but for the sake of brevity, I've left out a lot of caveats, details, and explanations.
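As a quick sanity check on the figures in this footnote (all copied from the projection above; sub-items don't always sum exactly to the listed category totals because of rounding), the categories add up to roughly the $8M median:

```python
# Median-projection figures from above, in $M (as listed; rounded).
budget = {
    "Operations": {"people": 1.3, "cost_of_doing_business": 1.2},  # listed total: 2.6
    "Outreach":   {"people": 2.0, "programs": 1.2},                # listed total: 3.2
    "Research":   {"people": 2.1, "programs": 0.2},                # listed total: 2.3
}

# Listed category totals: 2.6 + 3.2 + 2.3 = 8.1, consistent with the ~$8M median.
total = sum(sum(parts.values()) for parts in budget.values())
print(round(total, 1))  # ~8.0
```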