[ Question ]

What are CAIS' boldest near/medium-term predictions?

by jacobjacob, 28th Mar 2019, 17 comments



Background and questions

Since Eric Drexler publicly released his “Comprehensive AI services model” (CAIS) there has been a series of analyses on LW, from rohinmshah, ricraz, PeterMcCluskey, and others.

Much of this discussion focuses on the implications of this model for safety strategy and resource allocation. In this question I want to focus on the empirical part of the model.

  • What are the boldest predictions the CAIS model makes about what the world will look like in <=10 years?

“Boldest” might be interpreted as those predictions to which CAIS gives a decent chance, but which have the lowest probability under other “worldviews” such as the Bostrom/Yudkowsky paradigm.

A prediction which all these worldviews agree on, but which is nonetheless quite bold, is less interesting for present purposes (for example, that we will see faster progress than places like mainstream academia expect).

Some other related questions:

  • If you disagree with Drexler, but expect there to be empirical evidence within the next 1-10 years that would change your mind, what is it?
  • If you expect there to be events in that timeframe causing you to go “I told you so, the world sure doesn’t look like CAIS”, what are they?

Clarifications and suggestions

I should clarify that answers can be about things that would change your mind about whether CAIS is safer than other approaches (see e.g. the Wei_Dai comment linked below).

But I suggest avoiding discussion of cruxes which are more theoretical than empirical (e.g. how decomposable high-level tasks are) unless you have a neat operationalisation for making them empirical (e.g. whether there will be evidence of large economies of scope among the most profitable automation services).

Also, it might be really hard to get this down to a single prediction, so it might be useful to pose a cluster of predictions with different operationalisations, and/or to use conditional predictions.



4 Answers

One clear difference between Drexler's worldview and MIRI's is that Drexler expects progress to continue along the path that recent ML research has outlined, whereas MIRI sees more need for fundamental insights.

So I'll guess that Drexler would predict maybe a 15% chance that AI research will shift away from deep learning and reinforcement learning within a decade, whereas MIRI might say something more like 25%.

I'll guess that MIRI would also predict a higher chance of an AI winter than Drexler would, at least for some definition of winter that focuses more on diminishing IQ-like returns to investment than on overall spending.

Wei_Dai writes:

A major problem in predicting CAIS safety is to understand the order in which various services are likely to arise, in particular whether risk-reducing services are likely to come before risk-increasing services. This seems to require a lot of work in delineating various kinds of services and how they depend on each other as well as on algorithmic advancements, conceptual insights, computing power, etc. (instead of treating them as largely interchangeable or thinking that safety-relevant services will be there when we need them). Since this analysis seems very hard to do much ahead of time, I think we'll have to put very wide error bars on any predictions of whether CAIS would be safe or unsafe, until very late in the game.

Ricraz writes:

I'm broadly sympathetic to the empirical claim that we'll develop AI services which can replace humans at most cognitively difficult jobs significantly before we develop any single superhuman AGI (one unified system that can do nearly all cognitive tasks as well as or better than any human).

I’d be interested in operationalising this further, and in hearing takes on how many years “significantly before” entails.

He also adds:

One plausible mechanism is that deep learning continues to succeed on tasks where there's lots of training data, but doesn't learn how to reason in general ways - e.g. it could learn from court documents how to imitate lawyers well enough to replace them in most cases, without being able to understand law in the way humans do. Self-driving cars are another pertinent example. If that pattern repeats across most human professions, we might see massive societal shifts well before AI becomes dangerous in the adversarial way that’s usually discussed in the context of AI safety.

If research into general-purpose systems stops producing impressive progress, and the application of ML in specialised domains becomes more profitable, we'd soon see much more investment in AI labs that are explicitly application-focused rather than basic-research focused.