Robin Hanson has been arguing recently that we've been experiencing an AI boom that's not too
different from prior booms.
At the recent Foresight Vision
Weekend, he predicted [not exactly - see the comments] a 20%
decline in the number of Deepmind employees over the next year
(Foresight asked all speakers to make a 1-year prediction).
I want to partly agree and partly disagree.
I expect many companies are cluelessly looking into using AI in their
current business, without having a strategy for doing much more than
what simple statistics can do. I'm guessing that that mini-bubble has
already peaked.
I expect that hype directed at laymen has peaked, and will be lower for
at least a year. That indicates a bubble of some sort, but not
necessarily anything of much relevance to AI progress. It could reflect
increased secrecy among the leading AI developers. I suspect that some
of the recent publicity reflected a desire among both employers and
employees to show off their competence, in order to attract each other.
I suspect that the top companies have established their reputations well
enough that they can cut back a bit on this publicity, and focus more on
generating real-world value, where secrecy has clearer benefits
than it does with Go programs. OpenAI's attitude about disclosing
GPT-2 weakly hints at
such a trend.
I think that the shift in AI research from academia to industry is worth some
attention. I expect that industry feedback mechanisms are faster and
reflect somewhat more wisdom than academic feedback mechanisms. So I'm
guessing that any future AI boom/bust cycles will be shorter than prior
cycles have been, and activity levels will remain a bit closer to the
long-term trend.
VC funding often has boom/bust cycles. I saw a brief indication of a
bubble there (Rocket AI)
three years ago. But as far as I can tell, VC AI funding kept growing
for two years, then stabilized, with very little of the hype and
excitement that I'd expect from a bubble. I'm puzzled about where VC AI
funding is heading.
Then there's Microsoft's $1 billion investment in
OpenAI. That's well outside of
patterns that I'm familiar with. Something is happening here that leaves
me confused.
AI conference attendance shows
patterns that look
fairly consistent with boom and bust cycles. I'm guessing that many
attendees will figure out that they don't have the aptitude to do more
than routine AI-related work, and even if they continue in AI-related
careers, they won't get much out of regular conference attendance.
A much more important measure is the trend in compute used by AI
researchers. Over the past 7 years, that grew at a rate that was
clearly unsustainable.
In some sense, that almost guarantees slower progress in AI over the
next few years. But that doesn't tell us whether the next few years will
see moderate growth in compute used, or a decline. I'm willing to bet
that AI researchers will spend more on compute in 2024 than in 2019.
I invested in NVIDIA in 2017-18 for AI-related reasons, until its
price/earnings ratio got high enough to scare me away. It gets 24% of
its revenues from data centers, and a nontrivial fraction of that seems to be
AI-related. NVIDIA experienced a moderate bubble that ended in 2018,
followed by a slight decline in revenues. Oddly, that boom and decline
were driven by both gaming and data center revenues, and it eludes me
what would synchronize market cycles between those two markets.
What about Deepmind? It looks like one of the most competent AI
companies, and its money comes from a relatively competent parent. I'd
be surprised if the leading companies in an industry experienced
anywhere near as dramatic a bust as do those with below-average
competence. So I'll predict slowing growth, but not decline, for
Deepmind.
The robocar industry is an important example of AI progress that doesn't
look much like a bubble.
This is not really a central example of AI, but clearly depends on
having something AI-like. The software is much more general purpose than
AlphaGo, or anything from prior AI booms.
Where are we in the robocar boom? Close to a takeoff. Waymo has put a
few cars on the road with no driver already. Some of Tesla's customers
are already acting as if Tesla's software is safe enough to be a
driverless car. In an ideal world, the excessive(?) confidence of those
drivers would not be the right path to driverless cars, but if other
approaches are slow, consumer demand for Tesla's software will drive the
transition.
I expect robocars to produce sustained increases in AI-related revenues
over the next decade, but maybe that's not relevant to further
development of AI, except by generating a modest increase in investor
interest.
Some specific companies in this area might end up looking like bubbles,
but I can't identify them with enough confidence to sell them short.
Uber, Lyft, and Tesla might all be bubbles, but when I try to guess
whether that's the case, I pay approximately zero attention to AI
issues, and a good deal of attention to mundane issues such as risk of
robocar lawsuits, or competitors adopting a better business strategy.
I wish I saw a good way to bet directly that the robocar industry will
take off dramatically within a few years, but the good bets that I see
are only weakly related, mostly along the lines I mentioned in Peak
Fossil Fuels.
I'm considering a few more bets that are a bit more directly about
robocars, such as shorting auto insurance stocks, but the time doesn't
yet look quite right for that.
Finally, it would be valuable to know whether there's a bubble in AI
safety funding. This area has poor feedback mechanisms compared to
revenue-driven software, so I find it easier to imagine a longer-lasting
decline in funding here.
MIRI seems somewhat at risk for reduced
funding over the next few years. I see some risk due to the effects of
cryptocurrency and startup sources of wealth. I don't see the kinds of
discussions of AI risk that would energize more of the MIRI-oriented
donors than we've seen in the past few years.
I'm less clear on FHI's funding. I'm guessing there's more institutional
inertia among FHI's funders, but I can easily imagine that they're
sufficiently influenced by intellectual and/or social fads that FHI will
have some trouble several years from now.
And then there are the safety researchers at places like Deepmind and
OpenAI. I'm puzzled as to how much of that is PR-driven, and how much is
driven by genuine concerns about the risks, so I'm not making any
predictions there. Note that there are more AI safety organizations
than the ones I've mentioned, most of which I know less about. I'll
guess that their funding falls
within the same patterns as the ones I did mention.
In sum, I expect some AI-related growth to continue over the next 5
years or so, with a decent chance that enough AI-related areas will
experience a bust to complicate the picture. I'll say a 50% chance of an
ambiguous answer to the question of whether AI overall experiences a
decline, a 40% chance of a clear no, and a 10% chance of a clear yes.
To be clear, Foresight asked each speaker to offer a topic for participants to forecast on, related to our talks. This was the topic I offered. That is NOT the same as my making a prediction on that topic. Rather, it says that this question seemed an unusual combination of verifiable within a year and relevant to the chances on other topics I talked about.
At the recent Foresight Vision Weekend, he predicted a 20% decline in the number of Deepmind employees over the next year.
Wow. I'm wondering if Robin meant "Here's a prediction I assign a surprisingly high probability to, like I think it has a 10-15% probability of happening." But if Robin thinks that it's more likely than not, I will happily take a bet against that position.
I can imagine DM deciding that some very applied department is going to be discontinued, like healthcare, or something else kinda flashy. But by default I think they're doing pretty well. I think I assign less than 15% chance to it happening. And 20% of staff is just quite a substantial proportion of employees (that would be 170 people let go). So happy to take a bet against.
Foresight asked us to offer topics for participants to forecast on, related to our talks. This was the topic I offered. That is NOT the same as my making a prediction on that topic. Rather, it says that this question is an unusual combination of verifiable within a year and relevant to the chances on other topics I talked about.
Ah, that makes sense, thanks.
I think Robin implied at least a 50% chance, but I don't recall any clear statement to that effect.
Here is the Metaculus page for the prediction.
I can imagine DM deciding that some very applied department is going to be discontinued, like healthcare, or something else kinda flashy.
With Mustafa Suleyman, the cofounder most focused on applied work (and leading DeepMind Applied), leaving for Google, this seems like quite a plausible prediction. So a refocusing on being primarily a research company with fewer applied staff (an area that can soak up a lot of people), resulting in a 20% reduction of staff, probably wouldn't provide a lot of evidence (and is probably not what Robin had in mind). A reduction of research staff, on the other hand, would be very interesting.
On the contrary, I'd say a reduction in "applied" work and a re-focus toward research would be quite consistent with an "AI winter" scenario. There's always open-ended research somewhere; a big part of an AI "boom" narrative is trying to apply the method of the day to all sorts of areas (and the method of the day mostly failing to make economically meaningful headway in most areas).
To put it differently: the AI boom/bust narrative usually revolves around faddish ML algorithms (expert systems, SVMs, neural networks...). If people are cutting back on trying to apply the most recent faddish algorithms, and instead researching new algorithms, that sounds a lot like the typical AI winter story. On the other hand, if people are continuing to apply e.g. neural networks in new areas, and continuing to find that they work well enough to bring to market, then that would not sound like the AI winter story.
It would be really surprising to me if DeepMind cuts down staff. It's well-funded by Google, and Google has cash that it doesn't know how to spend.
Protein folding doesn't seem to be a problem where new insights are needed to solve it. It's very similar in nature to games like Go, because you can score your algorithm well.
Once you get protein folding, you can move on to protein activity and design all sorts of new enzymes to catalyze reactions, producing a lot of economic value.
Medical data mining is similar. We have articles about how there's much more information that can be drawn from heart-rate data than we previously assumed. Through its cooperation with the NHS, DeepMind has access to the necessary data.
DeepMind might continue to spend more time on research than on practical applications, but if they wanted to do practical applications, it doesn't seem to me like they would need much more capability.
Where are we in the robocar boom? Close to a takeoff. Waymo has put a few cars on the road with no driver already. Some of Tesla's customers are already acting as if Tesla's software is safe enough to be a driverless car.
Isn't this basically where we were three or four years ago? I feel like I've been hearing that self-driving cars are "close to a takeoff" for about five years now, with plenty of cars able to drive safely most of the time, but without any evidence that they're anywhere near handling the long tail of tricky situations.
I work for a company that, among other things, tracks trends in lots of industries and technologies, and makes near- and long-term forecasts of their evolution. Hype aside, I have seen no evidence of any change in independent assessments of when truly autonomous vehicles would see commercial adoption. Five years is really not a long time, when most people who don't work for car companies or hype-happy news agencies were already projecting timelines into the late 2020s/early 2030s. Whether you consider that "close" to a boom or not is a matter of semantics and perspective.
Seems like you guys might have (or be able to create) a dataset on who makes what kind of forecasts, and who tends to be accurate or hyped re them. Would be great if you could publish some simple stats from such a dataset.
I probably could, but could not share such without permission from marketing, which creates a high risk of bias.
When you say "no evidence of any change" in autonomous vehicle timelines, do you mean "no change" as in "we do seem to be getting closer at roughly the expected rate", or as in "we don't seem to be getting closer at all"?
Sorry for the month-plus delay, but I meant the first option. To a first approximation, people seem to still be estimating the same calendar year (say, "2035" or whatever) as they estimated in 2015.