It looks like people around here are now using the acronym TAI with the accompanying definition "transformative AI is AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution."

Is there some kind of consensus that this hasn't already happened?  

Because my current belief is that if Moore's law stopped tomorrow and there was absolutely 0 innovation in AI beyond what GANs and Transformers give us, the social implications are already of that magnitude; they're just not "evenly distributed".

Here's what I think a world where our current level of AI becomes evenly distributed looks like:

  • AI is built into every product imaginable and used for almost every task.
  • Most labor (including almost all physical labor) has been replaced by robots.  The jobs that remain consist of research and application of AI and robotics.
    • Note: jobs like entertainer, teacher, philosopher, historian, YouTube influencer, etc. still exist, but these are voluntary in the sense that they do not contribute to providing for the ongoing physical needs of humankind.
  • Universal Basic Income means the vast majority of people no longer need to work.
  • "Popular" entertainment is generated using AI and individualized to the taste of the viewer.  That is to say, human-scripted TV, movies, and video games still exist, but in the same way that plays exist in our current world.
  • Space travel becomes routine and humanity is a multi-planetary species (I'm not really sure we need AI for this one, but I bet people on Mars will be using robots to clean their solar farms and watching AI-generated content instead of waiting for the 30-minute delay to download media from Earth).
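The "30-minute delay" in the last bullet can be sanity-checked against light travel time. A minimal sketch, assuming rough Earth–Mars distance bounds of 0.37 AU (closest approach) and 2.67 AU (farthest), which are my approximations, not figures from the post:

```python
C = 299_792_458        # speed of light, m/s
AU = 1.495978707e11    # astronomical unit, metres

# Assumed approximate Earth-Mars distance range (closest and farthest approach)
for label, dist_au in [("closest approach", 0.37), ("farthest", 2.67)]:
    delay_min = dist_au * AU / C / 60
    print(f"{label}: one-way light delay ~ {delay_min:.1f} min")
```

This gives roughly 3 minutes one-way at closest approach and about 22 minutes at the farthest, so a 30-minute figure is plausible as a round-trip delay over much of the orbit.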

So for those who don't think TAI exists, is the claim:

  1. The story you've told requires innovations that do not yet exist.
  2. The story you've told doesn't count as TAI.
  3. Something else?

Specifically, "If Moore's law stopped tomorrow and there are no more 'breakthroughs' in AI --I'm not counting what an expert in 2021 would consider an obvious or incremental improvement or application-- what would a world where such technology was 'evenly distributed' look like, and how would it fall short of TAI?"

Edit: I thought I should add that I don't think the industrial revolution is "evenly distributed" yet either.  Let's posit the industrial age as ending with the introduction of the personal computer in 1976.  US GDP/capita was then $27,441.89 (in 2012 dollars), while world GDP/capita for 2019 was only $11,442.  And no country poorer than South Korea has yet reached that level.
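As a quick check of the gap those figures imply (a rough sketch using the dollar amounts quoted above; note they mix a 2012-dollar base and a nominal base, so treat the ratio as approximate):

```python
us_gdp_pc_1976 = 27441.89   # US GDP per capita, 1976, in 2012 dollars (from the post)
world_gdp_pc_2019 = 11442   # world GDP per capita, 2019 (from the post)

ratio = world_gdp_pc_2019 / us_gdp_pc_1976
print(f"World (2019) is at {ratio:.0%} of US (1976) GDP per capita")
```

By these numbers, the 2019 world average sits at roughly 42% of where the US already was in 1976.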


5 Answers

Quick answer without any references, so probably biased toward my internal model: I don't think we've reached TAI yet, because I believe that if you removed every application of AI in the world (to simplify the definition: every product of ML), the vast majority of people wouldn't see any difference, and some would probably see a positive difference (less attention manipulation on social media, for example).

Compare with removing every computing device, or removing electricity.

And taking as examples the AI we're making now, I expect that your first two points are wrong: people are already trying to build AI into everything, and it's basically always useless, or at least not that useful.

(An example of the disconnect between AI as thought about here or in research labs and AI in practical application: AFAIK, nobody knows how to make money with RL.)

The question of whether we have enough resources to scale to TAI right now is one I haven't thought about enough for a decent answer, but you can find discussions of it on LW.

This is the 3D printing hype all over again. Remember how every object in sight was going to be made in a 3D printer? How we won't ever need to go to a store again because we'll be able to just download the blueprint for every product from the internet and make it ourselves? How we're going to print our clothes, furniture, toys and appliances at home and it's only going to cost pennies of raw materials and electricity? Yeah right.

So let me throw down the exact opposite predictions for social implications if there was absolutely 0 innovation in AI:

  • AI continues to try to shoehorn itself into every product imaginable and mostly fails, because it's a solution desperately looking for a problem.
  • Almost no labor (big exception: self-driving) has been replaced by robots.  The robots that do exist are not ML-based.
  • Universal Basic Income doesn't see widespread adoption, and that has nothing to do with AI one way or the other.
  • <1% of YouTube views are of AI-generated content.
  • Space is literally the worst place to apply AI: the stakes couldn't be higher, the training data couldn't be sparser, and the tasks are so varied and complex that they stretch even the generalization capability of human intelligence.  It's the pinnacle of AI hubris to think AI will "revolutionize" every single field.

(I use ML and AI interchangeably because AI in the broad sense just means software at this point)

In fact, since I don't believe in slow take-off, I'll do one better: these are my predictions for what will actually happen right up until FOOM.

It's time for a reality check, not only for AI but for many other digital technologies as well (AR/MR, folding phones, 5G, IoT).  We wanted flying cars; instead we got AI-recommended 140 characters.

Do you think that with current technology we'll end up with a GWP growth rate of 10%+ per year? If not, then it probably doesn't count as transformative. If so, well, I guess I'd like to see more argument for that.
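For intuition on what a sustained 10%+ GWP growth rate means, here is a quick sketch of the compounding arithmetic (my addition for illustration, comparing against an assumed ~3% historical baseline, not a figure from the comment):

```python
import math

# Doubling time of an economy growing at rate r per year: ln(2) / ln(1 + r)
for r in [0.03, 0.10]:
    doubling_years = math.log(2) / math.log(1 + r)
    print(f"{r:.0%} growth -> economy doubles every {doubling_years:.1f} years")
```

At roughly 3% growth the world economy doubles about every 23 years; at 10% it doubles about every 7 years, which is the kind of discontinuity the "transformative" label is pointing at.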

You say "absolutely 0 innovation in AI" at the top but then say "no more 'breakthroughs' in AI --I'm not counting what an expert in 2021 would consider an obvious or incremental improvement or application" at the bottom. Even leaving aside that those two quotes are not equivalent, I think there's a lot of scope for disagreement and confusion here.

Any company or research group trying to do anything new with ML—industrial robotics, for example—will immediately discover that it doesn't work on the first try, so then they work at it, and probably have clever ideas along the way, and publish or patent them, and maybe eventually they get it to work, or maybe not.

Is that "absolutely 0 innovation"? No. Is that "obvious"? Maybe the application is "obvious", or maybe not, but the steps to get it to work are not obvious. Is that "incremental improvement"? Maybe, maybe not. Is it a "breakthrough"? Well, it depends on what "breakthrough" means. In 100 years, no one will be telling stories of how heroic industrial researcher Esme figured out how to get the drone to avoid hitting branches. But on the other hand, maybe lots of people before Esme were trying to get the drone to not hit branches, and they all failed until Esme succeeded.

If "absolutely 0 innovation" is to be taken literally, well, we don't have industrial robotics, and we don't have human-level movie scripts, etc., and we're not going to get them without innovation. If you mean something like "soon" or "by default", that's a different question.

In any case, my answer is that, however transformative the bullet points you list would be, they're not nearly as transformative as "an AI that can do literally every aspect of my job, and yours, but better and cheaper". That's what I. J. Good called "the last invention that man need ever make"—because the AI can take the initiative to come up with all future inventions, found all future companies, discover all future scientific truths, etc. etc. Think "very, very, very transformative". And I do think that, to get there, we need things that most people would call "breakthroughs", even if they have some continuity with existing ideas.

"Most labor (including almost all physical labor) has been replaced by robots.  The jobs that remain consist of research and application of AI and robotics."

This conclusion is still contested.  I generally agree with you that this is possible, but there is a huge gap between where we are now and actually reliable, real-time, economical-to-deploy robotics.  As far as I know, actual robotics using deep learning for commercial tasks is extremely rare; I have not heard of any, I've just seen OpenAI's and Google's demos.

It's sort of the difference between "have demoed a train that could run in a tunnel", "have dug a tunnel", "have a working subway line", and "the whole city is interconnected".

In the real-life example, those gaps spanned many decades.

https://en.wikipedia.org/wiki/Beach_Pneumatic_Transit [1869]

https://en.wikipedia.org/wiki/Tremont_Street_subway [1903]

https://en.wikipedia.org/wiki/IND_Sixth_Avenue_Line [1940] : approximately the completion date of the NYC system

Yeah, I definitely think we're very early in the transition.  I would still say it's extremely likely (>90%) even given no new "breakthroughs".  

The real-life commercial uses of AI+robotics are still pretty limited at this point.  Off the top of my head I can only think of Roomba, Tesla, Kiva and those security robots in malls.

Anecdotally, from the people I talk to, deep learning plus almost any application in science seems to yield immediate low-hanging fruit (one recent example being protein folding).  I think the limiting factor right ...

Gerald Monroe: Sure.  And the Kiva and Roomba examples: at a low level, both machines could work using pure non-deep-learning software.  2D SLAM is a "classic" technique at this point, and nothing in the way Kiva robots move in x-y grids requires deep learning to work.  Robots that, for example, do soft complex object picking are using DL, and are an example of a machine that actually needs it to work.  Ditto any autonomous car.  Yeah, Tesla is using DL for distance estimation.  Dunno about the mall robots.
1 comment

Meta-comment on transformative AI.

I'm not sure this terminology has reached fixation yet; it still seems provisional. For example, I haven't seen it bubble up to replace talk in AI safety writing that would otherwise discuss superintelligence or strong optimization pressure. It seems mostly geared toward folks talking about policy and explaining why policy is needed. So caveat that it's a bit of jargon (like lots of things we say around here) with a specific meaning that may make this question hard to answer: TAI is naturally going to be geared toward the stuff that's not here, or almost here, in order to get folks to take action, rather than toward the "boring" stuff we already live with and can see is not immediately transforming everything on the order of hours or days.