Instead, we’d like to train one model that can not only handle many separate tasks, but also draw upon and combine its existing skills to learn new tasks faster and more effectively. That way, what a model learns by training on one task – say, learning how aerial images can predict the elevation of a landscape – could help it learn another task – say, predicting how flood waters will flow through that terrain.

Seems like a model capable of generalisation.

Pathways could enable multimodal models that encompass vision, auditory, and language understanding simultaneously. So whether the model is processing the word “leopard,” the sound of someone saying “leopard,” or a video of a leopard running, the same response is activated internally: the concept of a leopard. The result is a model that’s more insightful and less prone to mistakes and biases.

And a multimodal one, too.
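The shared-representation idea in the quoted passage — the word "leopard," the sound of it, and a video of one all activating the same internal concept — can be sketched as a shared embedding space. This is a toy illustration only (Google has published no Pathways internals); the hand-written concept vectors stand in for what would really be learned encoders aligned by something like CLIP-style contrastive training:

```python
# Toy sketch of a shared multimodal embedding space (hypothetical; not
# Pathways' actual architecture). Each "encoder" maps its modality into
# the same vector space, so one internal representation serves all three.
import math

def _unit(v):
    """Normalize a vector to unit length so dot product = cosine similarity."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

# Stand-in concept vectors; a real system would learn these with separate
# text, audio, and vision networks trained so matching pairs align.
CONCEPTS = {"leopard": [1.0, 0.2, 0.0], "river": [0.0, 1.0, 0.3]}

def encode_text(word):          return _unit(CONCEPTS[word])
def encode_audio(spoken_word):  return _unit(CONCEPTS[spoken_word])
def encode_video(subject):      return _unit(CONCEPTS[subject])

def cosine(a, b):
    """Cosine similarity of two unit vectors."""
    return sum(x * y for x, y in zip(a, b))

# The word, the sound, and the video all land on the same concept vector,
# while a different concept ("river") lands elsewhere.
text_vec  = encode_text("leopard")
audio_vec = encode_audio("leopard")
video_vec = encode_video("leopard")
other_vec = encode_text("river")

print(cosine(text_vec, audio_vec))                                # ≈ 1.0
print(cosine(text_vec, video_vec) > cosine(text_vec, other_vec))  # True
```

The point of the design is that downstream reasoning only ever sees the shared vector, so skills learned from one modality transfer to the others for free.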

Anyone got more info or a demo of this? They seem to claim a lot but don't have anything to show yet; it's not clear to me why they would release such an abstract announcement. More to come, I guess?

That’s why we’re building Pathways. Pathways will enable a single AI system to generalize across thousands or millions of tasks, to understand different types of data, and to do so with remarkable efficiency – advancing us from the era of single-purpose models that merely recognize patterns to one in which more general-purpose intelligent systems reflect a deeper understanding of our world and can adapt to new needs.

Seems like they have an architecture but have yet to build on it. And they don't share any details of the architecture; maybe they consider it a memetic hazard?

This all sounds very concerning and checks many of the boxes of potential true AGI for me.

What are your thoughts?



Sounds like they're planning to build a multimodal transformer. Which isn't surprising, given that Facebook and OpenAI are working on this as well. Think of this as Google's version of GPT-4.

I'm firmly in the "GPT-N is not AGI" camp, but opinions vary regarding this particular point.
