(Crossposted to the Effective Altruism Forum)


Introduction

In effective altruism, people talk about the room for more funding (RFMF) of various organizations. RFMF is simply the maximum amount of money which can be donated to an organization and put to good use right now, where "right now" typically means the next (fiscal) year. Most of the time I see the phrase invoked, it's to talk about individual charities, for example one of GiveWell's top-recommended charities. If a charity has run out of room for more funding, effective donors will typically seek out the next best option to donate to.
Last year, the Future of Life Institute (FLI) made the first of its grants from the pool of money it received as donations from Elon Musk and the Open Philanthropy Project (Open Phil). Since then, I've heard a few people speculating about how much RFMF the AI safety community has in general. I don't think that's a sensible question to ask before we have a sense of what the 'AI safety' field actually is. Before, people were commenting only on the RFMF of individual charities; now they're commenting on entire fields as though they were well-defined. AI safety hasn't necessarily reached peak RFMF just because MIRI has a runway to operate at its current capacity for one more year, or because FLI made a limited number of grants this year.

Overview of Current Funding For Some Projects


The starting point I used to think about this issue came from Topher Hallquist, from his post explaining his 2015 donations:

I’m feeling pretty cautious right now about donating to organizations focused on existential risk, especially after Elon Musk’s $10 million donation to the Future of Life Institute. Musk’s donation doesn’t necessarily mean there’s no room for more funding, but it certainly does mean that room for more funding is harder to find than it used to be. Furthermore, it’s difficult to evaluate the effectiveness of efforts in this space, so I think there’s a strong case for waiting to see what comes of this infusion of cash before committing more money.


My friend Andrew and I were discussing this last week. In past years, the Machine Intelligence Research Institute (MIRI) has raised about $1 million (USD) per year, and it received more than that for its annual operations last year. Going into 2016, Nate Soares, Executive Director of MIRI, wrote the following:

Our successful summer fundraiser has helped determine how ambitious we’re making our plans; although we may still slow down or accelerate our growth based on our fundraising performance, our current plans assume a budget of roughly $1,825,000 per year [emphasis not added].


This seems sensible to me, as it's not too much more than what they raised last year, and it seems more, not less, money will be flowing into AI safety in the near future. However, Nate also had plans for how MIRI could have productively spent up to $6 million last year to grow the organization. So, far from believing it had all the funding it could use, MIRI was seeking more. Of course, others might argue MIRI or other AI safety organizations already receive enough funding relative to other priorities, but that is an argument for a different time.

Andrew and I also talked about how, had FLI had enough funding to grant money to all the promising applicants for its 2015 grants in AI safety research, millions more would have flowed into AI safety. What Topher wrote is true: for individuals outside FLI who aren't otherwise major donors, it may be exceedingly difficult to evaluate funding gaps in AI safety. While FLI received only $11 million to grant in 2015-16 ($6 million already granted in 2015, with $5 million more to be granted in the coming year), it could easily have granted more than twice that much, had it received the money.

To speak to other organizations: Niel Bowerman, Assistant Director at the Future of Humanity Institute (FHI), recently spoke about how FHI receives most of its funding exclusively for research, so bottlenecks like the operations he runs depend more on private donations, of which FHI could use more. Seán Ó hÉigeartaigh, Executive Director at the Centre for the Study of Existential Risk (CSER) at Cambridge University, recently stated in discussion that CSER and the Leverhulme Centre for the Future of Intelligence (CFI), which CSER is currently helping launch, face the same problem with their operations. Nick Bostrom, author of Superintelligence and Director of FHI, is in the course of launching the Strategic Artificial Intelligence Research Centre (SAIRC), which received $1.5 million (USD) in funding from FLI. SAIRC seems well funded for at least the rest of 2016.

 


The Big Picture
Above are the funding summaries for several organizations listed in Andrew Critch's 2015 map of the existential risk reduction ecosystem. There are organizations working on existential risks other than those from AI, but they aren't explicitly organized into a network the same way AI safety organizations are. So, in practice, the 'x-risk ecosystem' is mappable almost exclusively in terms of AI safety.

It seems to me the 'AI safety field', if defined just as the organizations and projects listed in Dr. Critch's ecosystem map, plus perhaps a few others closely related to them (e.g., AI Impacts), could have productively absorbed between $10 million and $25 million in 2016 alone. Of course, there are caveats rendering this a conservative estimate. First of all, the above is a narrow, contrived version of the AI safety "field", as plenty of research outside this network is popping up all the time. Second, I think the organizations and projects I listed above could themselves have thought of more uses for funding. Seeing as they're working on what is (presumably) the most important problem in the world, there is much that millions more could do for foundational research on the AGI containment/control problem, quite apart from safety research into narrow systems.
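As a rough sanity check on that $10-25 million range, here is a minimal back-of-the-envelope sketch, in Python, of how the figures quoted earlier in this post might be aggregated. The MIRI and FLI numbers are my reading of the sources cited above; the FHI/CSER/CFI operations figures, the "other projects" line, and the low-end FLI figure are placeholder assumptions of mine, not reported numbers.

```python
# Back-of-the-envelope aggregation of per-organization funding gaps for 2016,
# in millions of USD. Entries marked "assumption" are illustrative placeholders.
low_estimates = {
    "MIRI (planned budget)": 1.8,                        # ~$1,825,000/year, per Nate Soares
    "FLI grants (promising unfunded applicants)": 6.0,   # assumption: "millions more"
    "FHI/CSER/CFI operations gaps": 1.0,                 # assumption
    "Other projects (e.g., AI Impacts)": 0.5,            # assumption
}
high_estimates = {
    "MIRI (growth plans)": 6.0,                          # Nate's ~$6 million growth scenario
    "FLI grants (promising unfunded applicants)": 11.0,  # "more than twice" the $11M pool
    "FHI/CSER/CFI operations gaps": 5.0,                 # assumption
    "Other projects (e.g., AI Impacts)": 3.0,            # assumption
}

print(f"Low end:  ~${sum(low_estimates.values()):.0f} million")   # ~$9 million
print(f"High end: ~${sum(high_estimates.values()):.0f} million")  # ~$25 million
```

The point isn't the particular numbers, which could easily shift by a factor of two either way, but that even a crude aggregation of publicly stated budgets and growth plans lands in the low tens of millions per year, not in the hundreds of millions or billions.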


Too Much Variance in Estimates for RFMF in AI Safety

I've also heard people setting the benchmark for truly appropriate funding for AI safety in the ballpark of a trillion dollars. While in theory that may be true, on its face it currently seems absurd. I'm not saying there won't come a time, even in the next several years, when $1 trillion/year could be used effectively. I'm saying that if there isn't a roadmap for how to scale the productive use of ~$10 million/year in AI safety up to $100 million or $1 billion per year, then talking about $1 trillion/year isn't practical. I don't even think there will be more than $1 billion on the table per year in the near future.

This argument can be used to justify continued earning to give on the part of effective altruists: there is so much money that, e.g., MIRI could use, that it makes sense for everyone who isn't an AI researcher to earn to give. This might make sense if governments and universities give major funding to what they think is AI safety, spend 99% of it on robotic unemployment or something, miss the boat on the control problem, and MIRI gets only a pittance of the money flowing into the field. Even so, the idea that there is effectively a multi-trillion-dollar ceiling for effective funding of AI safety is unsound.

When estimates of RFMF for AI safety range from $5-10 million (the amount of funding AI safety received in 2015) to $1 trillion, I feel like anyone not already well within the AI safety community cannot reasonably estimate how much money the field can productively use in one year.
On the other hand, there are also people who think that AI safety doesn’t need to be a big priority, or is currently as big a priority as it needs to be, so money spent funding AI safety research and strategy would be better spent elsewhere.

All this stated, I myself don’t have a precise estimate of how much capacity for funding the whole AI safety field will have in, say, 2017.

Reasonable Assumptions Going Forward

What I'm confident saying right now is:

  1. The amount of money AI safety could've productively used in 2016 alone is within an order of magnitude of $10 million, and probably less than $25 million, based on what I currently know.
  2. The amount of total funding available will likely increase year over year for the next several years, and there could be quite dramatic rises. The Open Philanthropy Project, worth $10+ billion (USD), recently announced AI safety will be its top priority next year, although this may not necessarily translate into more major grants in the next 12 months. The White House recently announced it will be hosting workshops on the Future of Artificial Intelligence, including concerns over risk. Also, to quote Stuart Russell (HT Luke Muehlhauser): "Industry [has probably invested] more in the last 5 years than governments have invested since the beginning of the field [in the 1950s]." Companies like Facebook, Baidu, and Google are each investing heavily in AI research, including Google's purchase of DeepMind for $500 million in 2014. With an increasing number of universities and corporations investing money and talent in AI research, including AI safety, and now with major philanthropic foundations and governments paying attention to AI safety as well, it seems plausible the amount of funding for AI safety worldwide might balloon up to $100+ million in 2017 or 2018. However, this could just as easily not happen, and there's much uncertainty in projecting this.
  3. The field of AI safety will also grow year over year for the next several years, though I doubt the projects needing funding will grow as fast as the amount of funding available. This is because the rate at which institutions are willing to invest in growth depends not only on how much money they're receiving now, but on how much they can expect to receive in the future. Since those expectations are so uncertain, organizations are smartly conservative and hold their cards close to their chest. While OpenAI has pledged $1 billion for funding AI research in general, not just safety, over the next couple of decades, nobody knows whether such funding will be available to organizations out of Oxford or Berkeley like AI Impacts, MIRI, FHI, or CFI. However:


  • i) increased awareness and concern over AI safety will draw in more researchers.
  • ii) the promise or expectation of more money to come may draw in more researchers seeking funding.
  • iii) the expanding field and the increased funding available will create a feedback loop in which institutions in AI safety, such as MIRI, make contingency plans to expand faster if they're able to or need to.

Why This Matters

I don't mean to use the amount of funding AI safety received in 2015 or 2016 as an anchor to bias how much RFMF I think the field has. However, the more extreme lower and upper estimates I've encountered seem baseless, and either vastly underestimate or vastly overestimate how much the field of AI safety can productively grow each year. This is actually important to figure out.

80,000 Hours rates AI safety as perhaps the most important and neglected cause currently prioritized by the effective altruism movement, and consequently recommends how similarly concerned people can work on the issue. Some talented computer scientists who could do the most good working in AI safety might opt to earn to give in software engineering or data science if they conclude the bottleneck on AI safety isn't talent but funding. Alternatively, a small but critical organization that requires funding from value-aligned and consistent donors might fall through the cracks if too many people conclude that AI safety work in general is receiving sufficient funding, and choose to forgo donating to AI safety. Many of us could make individual decisions going either way, and it seems many of us could end up making the wrong choice. Assessments of these issues will practically inform decisions many of us make over the next few years, determining how much of our time and potential we use fruitfully, or waste.

Everything above just lays out how estimating room for more funding in AI safety overall may be harder than anticipated, and shows how high the variance might be. I invite you to contribute to this discussion, as it is only just starting. Please use the above info as a starting point to look into this more, or ask questions that will usefully clarify what we're thinking about. The best fora for further discussion seem to be the Effective Altruism Forum, LessWrong, or the AI Safety Discussion group on Facebook, where I initiated the conversation leading to this post.

Comments

I think that too much investment could result in more noise in the field. First of all, it would result in a large number of published materials, which could exceed the capacity of other researchers to read them; as a result, really interesting works would not get read. It would also attract more people into the field than there are actually clever and dedicated people. If we have 100 trained AI safety researchers, which is an overestimate, and we hire 1,000 people, then the real researchers will be diluted. In some fields, like nanotech, overinvestment has even resulted in the expulsion of the original researchers, because they prevented less educated ones from spending money as they wanted. But the most dangerous thing is the creation of many incompatible theories of friendliness, and even AIs based on them, which would result in AI wars and extinction.

Yeah, I read Eliezer's chapter "Artificial Intelligence as a Positive and Negative Factor in Global Risk" in Global Catastrophic Risks, and I was impressed with how far in advance he anticipated reactions to the rising popularity of AI safety, and what it might look like when the public finally switched from skepticism to genuine concern. Eliezer has also anticipated that even safety-conscious work on AI might increase AI risk.

The idea that some existing institutions in AI safety, perhaps MIRI, should expand much faster than others so they can keep up with, and evaluate, all the published material coming out is a neglected one.

But the most dangerous thing is the creation of many incompatible theories of friendliness, and even AIs based on them, which would result in AI wars and extinction.

I strongly disagree.

First, because there are multiple reasons that the creation of many distinct theories of friendliness would not be dangerous: The first one to get to superintelligence should be able to establish a monopoly on power, and then we wouldn't have to worry about the others. Even if that didn't happen, a reasonable decision theory should be able to cooperate with other agents with different reasonable decision theories, when it is in both of their interests to do so. And even if we end up with multiple friendly AIs that are not great at cooperation, it is a particularly easy problem to cooperate with agents that have similar goals (as is implied by all of them being friendly). And even if we end up with a "friendly AI" that is incapable of establishing a monopoly on power but that will cause a great deal of destruction when another similarly capable but differently designed agent comes into existence, even if both agents have broadly similar goals (I would not call this a successful friendly AI), convincing people not to create such AIs does not actually get much easier if the people planning to create the AI have not been thinking about how to make it friendly, so preventing people from developing different theories of friendliness still doesn't help.

But beyond all that, I would also say that not creating many incompatible theories of friendliness is dangerous. If there is only one that anyone is working on, it will likely be misguided, and by the time anyone notices, enough time may have been wasted that friendliness will have lost too much ground in the race against general AI.

Of course, I meant not only creating but also implementing different friendliness theories. Your objection about cooperation based on good decision theories also seems sound.

But from history we know that Christian countries had wars between them, and socialist countries also had mutual wars. Sometimes small differences led to sectarian violence, as between Shia and Sunni. So adopting a value system which promotes the future good of everybody doesn't prevent a state-agent from going to war with another agent with a similar positive value system.

For example, suppose we have two FAIs, and they both know that, for the best outcome, one of them should be switched off. But how would they decide which one gets switched off?

Also, an FAI may work fine until the creation of another AI, but it could have an instrumental goal to switch off all other AIs while not being switched off by any of them, so the two would go to war because of this flaw in the design, which only appears once we have two FAIs.

Just pointing out I upvoted Turchin's comment above, but I agree with your clarification here of the last part of his comment. Nothing I've read thus far raises concern about warring superintelligences.

I wonder if MIRI's General Staff or Advisors deal with issues like this.

Your last point was interesting. I tried making a few narrow comparisons with other fields that are important to people emotionally and physically, e.g., cancer research and poverty charities. On a cursory glance, things like quacks, deceit, and falsification seem present in those areas, so I suppose stuff like that's possible in AI safety.

Though I guess the people involved in AI safety would try much harder to lock out people like that, or publicly challenge people who have no clue what they're saying. However, it's possible that some group might emerge that promotes shaky ideas which gain traction.

Though I think the scrutiny of those in the field and their judgements would cut down things like that.

By the way, if OpenAI had been suggested before Musk, it would likely have been regarded as such a shaky idea.

Many people do regard OpenAI as a shaky idea.

Do you mean the whole field of AI would have regarded OpenAI as a shaky idea before Musk, or just safety-conscious AI researchers?

I was speaking about safety researchers.

In that case, yeah, it's still shaky, albeit less so than if Musk wasn't involved.