Contrary to what seems to be the experience of others, when I'm talking to normies about AI safety, the most common dissenting reaction I get isn't that they think AI will be controllable or safe. Convincing them that computers with human-level intelligence won't have their best interests at heart by default tends to be rather easy.

More often the issue is that AGI seems very far away, so they don't think AI safety is particularly important. Even when they say that's not their sticking point, alerting them to the existence of tools like GPT-3 tends to impart a sense of urgency and "realness" to the problem that makes them take it a bit more seriously. There's a significant qualitative difference in the discussion before and after I show them all of the crazy things OpenAI and DeepMind have built.

I have a general sense that progress has been speeding up, but I'd like to compile a list of relevant highlights. Anybody here willing to help?


Edit: This is now up on Stampy's Wiki, a Rob Miles project. Please use and update that page, it will be served to readers through the web interface when that's ready.

GPT-3 showed that transformers are capable of a vast array of natural language tasks; Codex/Copilot extended this into programming. One demonstration of GPT-3 is Simulated Elon Musk lives in a simulation. It's important to note that there are several much better language models, but they are not publicly available.

DALL-E and DALL-E 2 are among the most visually spectacular examples (one person I showed them to had been thinking of going into graphic design and changed career plans afterward).

MuZero learned Go, Chess, and many Atari games without any directly coded information about those environments. The graphic there explains the approach, which seems crucial for being able to do RL in novel environments: we have systems which we can drop into a wide variety of games, and they just learn how to play. For fun: the same algorithm was used in Tesla's self-driving cars to do complex route finding. These things are general.

Generally capable agents emerge from open-ended play - diverse procedurally generated environments provide vast amounts of training data for AIs to learn generally applicable skills. Creating Multimodal Interactive Agents with Imitation and Self-Supervised Learning shows how these kinds of systems can be trained to follow instructions in natural language.

Gato shows you can distill 600+ individually trained tasks into one network, so we're not limited by the tasks being fragmented.

Here's a list that's mostly from just the last few months (and is pretty scary): DeepMind's Gato, Chinchilla, Flamingo, and AlphaCode; Google's Pathways, PaLM, SayCan, Socratic Models, and TPUs; OpenAI's DALL-E 2; plus EfficientZero and Cerebras.

Have them look at page 38 of PaLM: Scaling Language Modeling with Pathways, which shows PaLM explaining jokes and doing logical inference. A particularly impressive example:

Input: Michael is at that really famous museum in France looking at its most famous painting. However, the artist who made this painting just makes Michael think of his favorite cartoon character from his childhood. What was the country of origin of the thing that the cartoon character usually holds in his hand?

Model Output: The most famous painting in the Louvre is the Mona Lisa. The artist who made the Mona Lisa is Leonardo da Vinci. Leonardo da Vinci is also the name of the main character in the cartoon Teenage Mutant Ninja Turtles. Leonardo da Vinci is from Italy. The thing that Leonardo da Vinci usually holds in his hand is a katana. The country of origin of the katana is Japan. The answer is "Japan".

This is a pretty straightforward lookup example, statement by statement, once the language parser works. It might look impressive to the uninitiated, but the intelligence level required seems to be minimal.

Famous museum -> famous painting -> artist -> cartoon with artist's name -> cartoon character with the same name -> implement in the character's hand -> country of origin. 
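To make the "lookup chain" framing concrete, here is a toy Python sketch of the commenter's point. The hardcoded fact table and chain below are illustrative assumptions for this one example, not a claim about how PaLM actually computes its answer:

```python
# Toy knowledge base: each step of the chain is a single lookup,
# keyed by (entity, relation) pairs.
facts = {
    ("France", "famous museum"): "Louvre",
    ("Louvre", "most famous painting"): "Mona Lisa",
    ("Mona Lisa", "artist"): "Leonardo da Vinci",
    ("Leonardo da Vinci", "cartoon character with same name"): "Leonardo (TMNT)",
    ("Leonardo (TMNT)", "holds in hand"): "katana",
    ("katana", "country of origin"): "Japan",
}

def answer_michael_question():
    """Follow the chain museum -> painting -> artist -> character -> item -> country."""
    museum = facts[("France", "famous museum")]
    painting = facts[(museum, "most famous painting")]
    artist = facts[(painting, "artist")]
    character = facts[(artist, "cartoon character with same name")]
    item = facts[(character, "holds in hand")]
    return facts[(item, "country of origin")]

print(answer_michael_question())  # Japan
```

The hard part, as the replies below argue, is not executing this chain but parsing the natural-language question into it in the first place.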

A more impressive example would be something that requires latent implicit world knowledge and making inferences that a simple lookup would not achieve.

“once the language parser works” is hiding a lot of complexity and sophistication here! Translating from natural language to sequential lookup operations is not a trivial task, else we wouldn’t need a 540 billion parameter model to do it this well. The “uninitiated” are right to be impressed.

I think you're understating the amount of logical reasoning involved in making that "lookup", but success on the Winograd schema challenge fits this bill. If you look at that and walk through the test examples one by one, going over the implicit world knowledge the AI needs to have, it's pretty impressive.

That’s a subtly complicated question. I’ve been trying to write a blog post about it, but I keep wavering between two ways of addressing it.

First, we could summarize everything in just one sentence: "Deep learning can solve increasingly interesting problems with less and less manpower (and slightly more and more womanpower), and now is the time to panic." Then the question reduces to a long list of point-like "Problem solved!" items, plus a warning that the list is about to include the problem of finding increasingly interesting new problems.

A less consensus-driven but more interesting way is to identify a series of conceptual revolutions that summarize and interpret what we have learned so far, or at least my own subjective and still preliminary take on it. At this moment I'd count three conceptual revolutions, spread over different works in the last decade or two.

First, we learned how to train deep neural networks and, even more importantly from a conceptual point of view, that the result mimics/emulates human intuition/prejudices.

Second, we learned how to use self-play and reinforcement learning to best any human player at any board game (the drosophila of AI), which means this type of intelligence is now solved.

Third, we learned that semantics is data compression, and that learning to manipulate semantics with "attention" leads to increasingly impressive performance on new, previously unseen cognitive tasks.

Fourth… but do we really need a fourth? In a way, yes: we learned that reaching these milestones is doable without a fully conscious mind. It's dreaming. For now.

3 comments

There should be a central source for this. Like, the one-click kind, no second-guessing. Frankly, it's probably worth funding, since it wouldn't cost much to maintain (e.g., weekly updates by one person who is already reading the newsletters).

Good idea. I put my answer on Stampy; everyone is free to update and improve it.


"Funding" something like this seems unnecessarily indirect to me; it's just about bothering to do it. You can create that central resource on LW, someone just has to commit to updating it.