I'm trying to build up a picture of how "much" research is going into general AI capabilities, and how much is going into AI safety.

The ideal question I'd be asking is "how much progress (measured in important thoughts/ideas/tools) was made in 2018 that plausibly could lead to AGI, and how much progress was made that could plausibly lead to safe/aligned AI?"

I assume that question is nigh impossible to answer, so instead I'm asking these approximations:

a) how much money went into AI capabilities research in 2018

b) how much money went into AI alignment research in 2018

c) how many researchers (ideally "research hours" but I'll take what I can get) were focused on capabilities research in 2018

d) how many researchers were focused on AI safety in 2018

4 Answers

Some numbers related to c (how many capabilities researchers):

In 2018 about 8,500 people attended NeurIPS and about 4,000 people attended ICML. There are about 2,000 researchers who work at Google AI, and in December 2017 there were reports that about 700 total people work at DeepMind including about 400 with a PhD.

Turning this into a single estimate for "number of researchers" is tricky for the sorts of reasons that catherio gives. "Capabilities researcher" is a fuzzy category: it's not clear to what extent it should include people who are primarily working on applications of the current state of the art, or people who are advancing the state of the art in narrower subfields, rather than advancing general AI capabilities. Also, obviously only some fraction of the relevant researchers attended those conferences or work at those companies.

I'll suggest 10,000 people as a rough order-of-magnitude estimate. I'd be surprised if the number that came out of a more careful estimation process wasn't within a factor of ten of that.
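The rough combination behind that order-of-magnitude figure can be sketched as a back-of-envelope calculation. The headcounts are the ones quoted in this answer; the 50% overlap/attrition discount is purely an illustrative assumption, not anything the answer states:

```python
import math

# Back-of-envelope estimate of the number of AI capabilities
# researchers in 2018, using the figures quoted above.
neurips_2018 = 8500   # NeurIPS 2018 attendance
icml_2018 = 4000      # ICML 2018 attendance
google_ai = 2000      # researchers at Google AI
deepmind = 700        # total DeepMind headcount (Dec 2017)

raw_total = neurips_2018 + icml_2018 + google_ai + deepmind

# Many people appear in more than one pool (lab employees who also
# attend the conferences, people attending both conferences), and not
# every attendee is a researcher. Discounting by 50% is an assumption
# made up for this sketch.
estimate = raw_total * 0.5

# Round to the nearest power of ten for an order-of-magnitude figure.
order_of_magnitude = 10 ** round(math.log10(estimate))
print(raw_total, estimate, order_of_magnitude)  # 15200 7600.0 10000
```

With these inputs the rounded result lands on 10,000, consistent with the answer's estimate, though the discount factor could easily be off by a factor of a few in either direction.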

(This seems like a reasonable answer for the "number of capabilities researchers" part of the question. Still interested in answers for capabilities funding, safety researchers and safety funding)

I counted 37 researchers with a safety focus, plus MIRI's researchers, in September 2018. Most of them are aimed at AGI and are at least PhD level. I also counted 38 who do safety work at various levels of part-time. I can email the spreadsheet; you can also find it in 80k's safety Google group.

By my quick mental count, CHAI's Berkeley branch had something like the equivalent of 8 to 11 researchers focusing on AI alignment in 2018. It's kind of tricky to count because we had new PhD students coming in in August, as well as some interns over the summer (some of whom stayed on for longer periods).

Hmm. I notice that in the case of AI safety, it's probably possible to just literally count the researchers by hand. I assume that for "broader work on AI" it'd be necessary to consult some kind of survey that had already counted them, since there's just way too much going on.

I think this is probably not true for the average LW reader, or even the average person who's kind of interested in AI alignment, since many orgs are somewhat opaque about how many people work there and which team people are on. For example, my guess is that most people don't know how many interns CHAI takes, or how many new PhD students we get in a given year; similarly, I'm not even confident that I could name everybody on OpenAI's safety team without someone to catch my errors.

Seems correct to me.
Nod. I didn't mean you could count them trivially, but I hadn't even been thinking of the solution "someone from each org just mentions the approximate number of researchers and then you add them up" as a possibility.

I think the "number of AI safety researchers" question turned out to be at least partially answered by this website, although I haven't yet reviewed it that thoroughly.

2 comments

Two observations:

  • I'd expect that most "AI capabilities research" that goes on today isn't meaningfully moving us towards AGI at all, let alone aligned AGI. For example, applying reinforcement learning to hospital data. So "how much $ went to AI in 2018" would be a sloppy upper bound on "important thoughts/ideas/tools on the path to AGI".
  • There's a lot of non-capabilities non-AGI research targeted at "making the thing better for humanity, not more powerful". For example, interpretability work on models simpler than convnets, or removing bias from word embeddings. If by "AI safety" you mean "technical AGI alignment" or "reducing x-risk from advanced AI" this category definitely isn't that, but it also definitely isn't "AI capabilities" let alone "AGI capabilities".

Nod. Definitely open to better versions of the question that carve at more useful joints. (With a caveat that the question is more oriented towards "what are the easiest street lamps to look under" than "what is the best approximation")

So I guess my return question is: do you have suggestions on subfields to focus on, or exclude from, "AI capabilities research" that more reliably point to "AGI", and for which public data is likely to exist? (Or some other way to carve up the AI research space.)

It does seem good to have a separate category for "things like removing bias from word embeddings" that is separate from "Technical AGI alignment". (I think it's still useful to have a sense of how much effort humanity is putting into that, just as a rough pointer at where our overall priorities are)
