Contrary to what seems to be the experience of others, when I'm talking to normies about AI safety, the most common dissenting reaction I get isn't that they think AI will be controllable, or safe. Convincing them that computers with human-level intelligence won't have their best interests at heart by default tends to be rather easy.
More often the issue is that AGI seems very far away, and so they don't think AI safety is particularly important. Even when they say that's not their sticking point, alerting them to the existence of tools like GPT-3 tends to impart a sense of urgency and "realness" to the problem that makes them take it a bit more seriously. There's a significant qualitative difference in the discussion before and after I show them all of the crazy things OpenAI and DeepMind have built.
I have a general sense that progress has been speeding up, but I'd like to compile a list of relevant highlights. Anybody here willing to help?
This is a pretty straightforward lookup example, statement by statement, once the language parser works. It might look impressive to the uninitiated, but the intelligence level required seems to be minimal.
Famous museum -> famous painting -> artist -> cartoon with artist's name -> cartoon character with the same name -> implement in the character's hand -> country of origin.
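To make that concrete, here's a minimal sketch of what "statement-by-statement lookup" means here, assuming the relevant facts are already sitting in a toy, hard-coded knowledge base (the entities and relation names are just illustrative stand-ins):

```python
# Toy knowledge base: each fact is a (entity, relation) -> entity edge.
# These hard-coded entries stand in for whatever structured source a real
# lookup-based system would query.
knowledge = {
    ("most famous museum", "identity"): "Louvre",
    ("Louvre", "most famous painting"): "Mona Lisa",
    ("Mona Lisa", "artist"): "Leonardo da Vinci",
    ("Leonardo da Vinci", "cartoon namesake"): "Leonardo (TMNT)",
    ("Leonardo (TMNT)", "implement in hand"): "katana",
    ("katana", "country of origin"): "Japan",
}

def follow_chain(start: str, relations: list[str]) -> str:
    """Resolve each relation in turn, feeding every answer into the next lookup."""
    entity = start
    for relation in relations:
        entity = knowledge[(entity, relation)]
    return entity

# The question reduced to a list of hops:
print(follow_chain("most famous museum", [
    "identity",
    "most famous painting",
    "artist",
    "cartoon namesake",
    "implement in hand",
    "country of origin",
]))  # -> "Japan"
```

Once the question has been turned into that list of hops, the rest is mechanical.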
A more impressive example would be something that requires latent, implicit world knowledge and inferences that a simple lookup chain would not achieve.
“once the language parser works” is hiding a lot of complexity and sophistication here! Translating from natural language to sequential lookup operations is not a trivial task, else we wouldn’t need a 540 billion parameter model to do it this well. The “uninitiated” are right to be impressed.