I'm far from an expert on AI and alignment, but I really liked this TED talk and think it serves as a good introduction to the topic. Here are some highlights:

It starts with a demo of what ChatGPT is, or soon will be, capable of, and it's bound to shock plenty of noobs like me.

Then the OpenAI cofounder is pressed on the company's controversial, perhaps reckless, approach, and he explains why OpenAI has often been the first to release AI products despite safety concerns (particularly at 24:17). In short:

The cofounder says that OpenAI views programming ChatGPT like teaching a child, insofar as regular servings of positive and negative reinforcement work better than trying to create a master plan ahead of time. OpenAI currently wants the world's help feeding feedback into its "children," and although this view may change, Alan Turing suggested the approach in 1950 and OpenAI still doesn't see a better one.
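To make the feedback idea concrete, here is a toy sketch of turning thumbs-up/thumbs-down ratings into a training signal. This is only reinforcement-from-human-feedback in miniature; the candidate replies, reward values, and update rule are my own assumptions, not OpenAI's actual pipeline:

```python
import math
import random

# Toy stand-in for a language model: one preference score per candidate
# reply. (Real RLHF fine-tunes billions of weights, not a lookup table.)
scores = {"helpful reply": 0.0, "evasive reply": 0.0, "harmful reply": 0.0}
LEARNING_RATE = 0.5

def sample_reply():
    """Sample a reply with probability proportional to exp(score)."""
    replies = list(scores)
    weights = [math.exp(scores[r]) for r in replies]
    return random.choices(replies, weights=weights)[0]

def human_feedback(reply):
    """Hypothetical rater: +1 for the reply we want, -1 otherwise."""
    return 1.0 if reply == "helpful reply" else -1.0

# "Regular servings of positive and negative reinforcement": nudge the
# sampled reply's score up or down after each rating, with no master plan.
for _ in range(200):
    reply = sample_reply()
    scores[reply] += LEARNING_RATE * human_feedback(reply)

print(scores)  # "helpful reply" ends up with by far the highest score
```

After a couple hundred ratings, the toy model almost always produces the reply the raters approve of, without anyone ever writing down a rule that says so.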

Also (at 20:52), I think the OpenAI cofounder claims that the company is "starting to really get good at" predicting the emergent capabilities of AI. That sounds a little too good to be true, so I'm curious what y'all think.
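For what it's worth, my guess is that the "predicting" claim refers to scaling laws: on many metrics, loss falls as a smooth power law in compute, so you can fit the curve on small models and extrapolate to a much bigger run. A minimal sketch with synthetic data (the numbers are invented for illustration, nothing from the talk):

```python
import numpy as np

# Assume loss ~ a * compute^(-b); that is a straight line in log-log space,
# so a fit on cheap small-scale runs extrapolates to a 1000x larger run.
rng = np.random.default_rng(0)
compute = np.array([1e3, 1e4, 1e5, 1e6])  # small training runs (arbitrary units)
loss = 10.0 * compute**-0.1 * rng.normal(1.0, 0.02, size=compute.shape)

# Linear fit in log-log space; the slope is the (negative) exponent.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)

big_run = 1e9  # 1000x the largest experiment above
predicted_loss = np.exp(intercept) * big_run**slope
print(f"exponent: {slope:.3f}, predicted loss at 1000x scale: {predicted_loss:.3f}")
```

The caveat is that smooth loss curves are the well-behaved part; whether the same trick works for discrete emergent capabilities is exactly the part that sounds too good to be true.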

And please correct me if I'm wrong or missing something important (hopefully before I share this talk with my friends :)


My notes:

  • ChatGPT will be way more useful when integrated with other existing systems
  • "we can inspect how ChatGPT interacts with other systems" - the developer doth protest too much, methinks; yes, the API calls will be the only part of the entire system that is somehow transparent to an ordinary human (an ordinary human with programming skills, I mean); I also expect that as humans get used to ChatGPT, the API calls will be hidden again as a design decision (see the sketch after this list)
  • yes, it is amazing how with an intelligent machine you can have things that require some thinking done automatically and quickly (in hindsight, "agile programming" was invented for ChatGPT, not for humans, because human developers get annoyed when you keep fundamentally changing their requirements every week, but the AI does not mind if you do it once per minute, haha)
  • we are getting better at predicting how AI capabilities will change when a model is scaled 100 or 1000 times, so we can run experiments on small models and scale them up when needed
  • his idea of safety seems to be "learning new capabilities step by step and providing feedback" and "if we go full speed ahead, at least we do not create a capability overhang, which would be even more dangerous"
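Regarding the "inspect how ChatGPT interacts with other systems" point, here is a sketch of what that audit layer could look like in practice. The `call_tool` wrapper is entirely hypothetical (not a real OpenAI interface); the point is just that the tool-call boundary is the one place where a human can watch the traffic:

```python
import json
import datetime

AUDIT_LOG = []

def call_tool(tool_name, arguments):
    """Hypothetical wrapper: every call the model makes to an external
    system passes through here, so a human with programming skills can
    audit the traffic -- the only legible layer of the whole system."""
    AUDIT_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool_name,
        "arguments": arguments,
    })
    # ... dispatch to the real external system here ...
    return {"status": "ok"}

# The model (simulated here) decides to make a call; the human can see it.
call_tool("calendar.create_event", {"title": "demo", "when": "tomorrow 9am"})
print(json.dumps(AUDIT_LOG, indent=2))
```

And of course, nothing stops a product team from hiding this log behind a friendlier UI, which is the design decision I expect.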

Overall, a very good video! I am not really convinced about the safety part, but I am not sure what we can do about it anyway; the cat is already out of the bag.