Our summer fundraiser is ongoing. In the meantime, we're writing a number of blog posts to explain what we're doing and why, and to answer some common questions. Previous posts in the series are listed at the above link.


I'm often asked whether donations to MIRI now are more important than donations later. Allow me to deliver an emphatic yes: I currently expect that donations to MIRI today are worth much more than donations to MIRI in five years. As things stand, I would very likely take $10M today over $20M in five years.

That's a bold statement, and there are a few different reasons for this. First and foremost, there is a decent chance that some very big funders will start entering the AI alignment field over the course of the next five years. It looks like the NSF may start to fund AI safety research, and Stuart Russell has already received some money from DARPA to work on value alignment. It's quite possible that in a few years' time significant public funding will be flowing into this field.

(It's also quite possible that it won't, or that the funding will go to all the wrong places, as was the case with funding for nanotechnology. But if I had to bet, I would bet that it's going to be much easier to find funding for AI alignment research in five years' time).

In other words, the funding bottleneck is loosening — but it isn't loose yet.

We don't presently have the funding to grow as fast as we could over the coming months, or to run all the important research programs we have planned. At our current funding level, the research team can grow at a steady pace — but we could get much more done over the course of the next few years if we had the money to grow as fast as is healthy.

Which brings me to the second reason why funding now is probably much more important than funding later: because growth now is much more valuable than growth later.

There's an idea picking up traction in the field of AI: instead of focusing only on increasing the capabilities of intelligent systems, it is important to also ensure that we know how to build beneficial intelligent systems. Support is growing for a new paradigm within AI that seriously considers the long-term effects of research programs, rather than just the immediate effects. Years down the line, these ideas may seem obvious, and the AI community's response to these challenges may be in full swing. Right now, however, there is relatively little consensus on how to approach these issues — which leaves room for researchers today to help determine the field's future direction.

People at MIRI have been thinking about these problems for a long time, and that puts us in an unusually good position to influence the field of AI and ensure that some of the growing concern is directed towards long-term issues in addition to shorter-term ones. We can, for example, help avert a scenario where all the attention and interest generated by Musk, Bostrom, and others gets channeled into short-term projects (e.g., making drones and driverless cars safer) without any consideration for long-term risks that are more vague and less well-understood.

It's likely that MIRI will scale up substantially at some point; but if that process begins in 2018 rather than 2015, it is plausible that we will have already missed out on a number of big opportunities.

The alignment research program within AI is just now getting started in earnest, and it may even be funding-saturated in a few years' time. But it's nowhere near funding-saturated today, and waiting five or ten years to begin seriously ramping up our growth would likely give us far fewer opportunities to shape the methodology and research agenda within this new AI paradigm. The projects MIRI takes on today can make a big difference years down the line, and supporting us today will drastically affect how much we can do quickly. Now matters.

I encourage you to donate to our ongoing fundraiser if you'd like to help us grow!


This post is cross-posted from the MIRI blog.


So, to my understanding, doing this in 2015 instead of 2018 is more or less exactly the sort of thing that gets talked about when people refer to a large-scale necessity to "get there first". This is what it looks like to push for the sort of first-mover advantage everyone knows MIRI needs to succeed.

It seems like a few people I've talked to missed that connection, even though they support the need for a first-mover advantage. They support a MIRI-influenced value alignment research community, but then they perceive you as asking for more money than you need! Making an effort to remind people more explicitly why MIRI needs to grow quickly may be valuable. Link the effect of 'fundraiser' to the cause of 'value learning first-mover'.

So if I understand you correctly, you have enough money to run, enough money to grow, but not enough money to grow even faster, and you call that a need. Is that a fair summary?

Aren't you getting a significant chunk of that Elon Musk money? If not, why not? If so, why do you not mention this?

Good questions! I don't see Nate using the word "need," but I can confirm that your summary's otherwise right: MIRI has the money to start growing, but we could scale up more quickly with more funds.

MIRI received two FLI grants: $250,000 for our main research program (spread over three years), and $49,320 for the AI Impacts project. (We're also collaborating on two other grants, with Owain Evans and Ramana Kumar.)

Nate probably left a lot of important stuff out because this is the fourth post in a series, linked from the main fundraiser post. The first three were:

  1. Grants and Fundraisers
  2. An Astounding Year
  3. Targets 1 and 2: Growing MIRI

Post 1 announces the FLI grants, the completion of the Singularity Summit sale, and recent large donations. These are collectively enough for MIRI to expand to a 5-person research team while maintaining one year's runway. Post 2 is a more general retrospective on the past year (though it also mentions the FLI grant). Post 3 details how MIRI could expand to a 10-person team (while maintaining a year's runway) with more funding, and talks about new projects it would be able to take on at that level.

[anonymous]

I donated 50 dollars last year and 100 dollars last week. Kurzweil could draw you a very reassuring graph! I'm considering a larger donation, but I'm wavering on it - I lack the IQ and training to judge the quality of MIRI's work, so I may be inclined to donate more once MIRI is more affiliated with mainstream AI people, but by then donations will have less impact. So I'm conflicted.

[This comment is no longer endorsed by its author]