I'm trying to build up a picture of how "much" research is going into general AI capabilities, and how much is going into AI safety.
The ideal question I'd be asking is: how much progress (measured in important thoughts/ideas/tools) was made in 2018 that could plausibly lead to AGI, and how much progress was made that could plausibly lead to safe/aligned AI?
I assume that question is nigh-impossible to answer, so instead I'm asking these approximations:
a) How much money went into AI capabilities research in 2018?
b) How much money went into AI alignment research in 2018?
c) How many researchers (ideally "research hours," but I'll take what I can get) were focused on capabilities research in 2018?
d) How many researchers were focused on AI safety in 2018?