I think it's plausible that:
1) For many applications, getting narrow AI to do a task well enough to be valuable doesn't seem worth the effort, and likely isn't (especially when considering opportunity cost and alternative applications of AI).
2) Thus proto-AGI is actually not going to change the world that much.
3) But of course, AGI will (once/assuming it's cheap enough).

If correct, this could mean that people probably won't really be impressed by narrow AI at any point, and then all of a sudden we get AGI and everything changes rapidly.

I'm just sketching this out and probably didn't do the best job, but my questions are: Is this something people have seen argued? Is there a name for it? (Or want to propose one?)



2 Answers

Bogdan Ionut Cirstea

Sep 03, 2021


From https://mailchi.mp/b3dc916ac7e2/an-80-why-ai-risk-might-be-solved-without-additional-intervention-from-longtermists : 'On the point about lumpiness, my model is that there are only a few underlying factors (such as the ability to process culture) that allow humans to so quickly learn to do so many tasks, and almost all tasks require near-human levels of these factors to be done well. So, once AI capabilities on these factors reach approximately human level, we will "suddenly" start to see AIs beating humans on many tasks, resulting in a "lumpy" increase on the metric of "number of tasks on which AI is superhuman" (which seems to be the metric that people often use, though I don't like it, precisely because it seems like it wouldn't measure progress well until AI becomes near-human-level).'


Sep 03, 2021


The opposing theory would be "easy to port" narrow AI. Right now, the state-of-the-art algorithms are either locked away in DeepMind's version control or are hand-written Python code from OpenAI that is hard to port to a new task or to use to drive a real-time robot.

It reminds me of the situation in the late 90s, where creating an e-commerce website and store was a big undertaking, and a 3D-accelerated game was a big project. Now, of course, anyone can launch a store in hours; the hard part is getting customers or making a profit after Amazon/eBay take their cut. And there are lots of examples of a hobbyist game developer re-creating a simple game in lavishly detailed 3D in 24 hours or less by using an existing engine and assets from a library or store.

"Easy to port" narrow ai would be you subscribe to a bunch of reusable ai components- some for perception, some for planning, some for prediction, etc. Import some high level constructs to define your heuristics with. Pick one of many hardware platforms and ask another ai system to evaluate how many robotic arms and what kind of sensors will be optimal for a task.

And in a short amount of time you have a sellable automated robotic system able to do, for money, a task that hasn't been automated before. The same problem as with eBay/Amazon applies, of course: the lion's share of the revenue would go to the platform owners.
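To make the "reusable components" idea concrete, here is a minimal sketch of what composing such a system might look like. Everything in it is hypothetical: the classes, methods, and the `recommend_hardware` helper are invented for illustration and don't correspond to any real library or service.

```python
# Hypothetical sketch of "easy to port" narrow AI built from off-the-shelf
# components. None of these APIs exist; the names are invented to illustrate
# subscribing to reusable perception/planning services and composing them
# into a new automation task.

from dataclasses import dataclass


@dataclass
class TaskSpec:
    """High-level description of the task to automate."""
    description: str    # e.g. "pick ripe strawberries from a tray"
    workspace_m: tuple  # rough workspace dimensions in meters
    cycle_time_s: float # target seconds per item


class PerceptionComponent:
    """Stand-in for a subscribed, pretrained perception service."""
    def detect(self, camera_frame):
        # In the imagined ecosystem this call would hit a hosted model;
        # here it just returns a canned detection.
        return [{"label": "strawberry", "pose": (0.1, 0.2, 0.05)}]


class PlanningComponent:
    """Stand-in for a reusable motion/task planner."""
    def plan(self, detections, spec: TaskSpec):
        return [("move_to", d["pose"]) for d in detections] + [("grip",)]


def recommend_hardware(spec: TaskSpec):
    """Placeholder for the 'ask another AI what hardware is optimal' step."""
    # A real service might size arms and sensors from the spec;
    # we hard-code a plausible answer.
    return {"arms": 1, "sensors": ["rgb_camera", "depth_camera"]}


if __name__ == "__main__":
    spec = TaskSpec("pick ripe strawberries from a tray", (1.0, 0.5, 0.5), 4.0)
    print(recommend_hardware(spec))
    plan = PlanningComponent().plan(PerceptionComponent().detect(None), spec)
    print(plan)
```

The point of the sketch is the shape of the workflow, not the specifics: the task-specific work shrinks to writing a spec and wiring subscribed components together, much like building a store on an existing e-commerce platform.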

Interesting!  But I downvoted since it's a comment, not an answer.

3 comments

A related view is that less advanced/more narrow AI will be able to do a fair number of tasks, but not enough to create widespread technological unemployment until very late, when very advanced AI quite quickly causes lots of people to be unemployed.

One consideration is how long it will take for people to actually start using new AI systems (it tends to take some time for new technologies to be widely adopted). Some have speculated that this time lag may shorten as AI becomes more advanced (as AI becomes involved in the deployment of other AI systems).

Holden Karnofsky defends this view in his latest blog post.

'I think it’s too quick to think of technological unemployment as the next problem we’ll be dealing with, and wilder issues as being much further down the line. By the time (or even before) we have AI that can truly replace every facet of what low-skill humans do, the “wild sci-fi” AI impacts could be the bigger concern.'

I've certainly seen this argument before, and even advocated for it somewhat. I haven't seen a specific name for it though.

I do have some doubts about (1). There does seem to be quite a lot of scope for human-guided AI that performs substantially better than either a human or an AI alone. Even a 10-20% improvement on any of a wide range of tasks would be a lot, and I think we're going to see a lot more effort put into coordinating this sort of thing. It may not even look like AI; it may just look like better software tools, despite using ever more generalizable models with specific customization behind the scenes.