This is a linkpost for https://www.youtube.com/watch?v=IeVY_Ag8JI8
Recently, Eric Schmidt gave a talk at Harvard called “Our AI Future: Hopes and Hurdles Ahead”. The entire talk is available here, but one part was particularly interesting to me (around the 1:11:00 mark): his views on AI safety and his trust in AI labs to stop scaling if recursive self-improvement starts happening. Emphasis my own.

“In the next few years we’ll get to recursive self-improvement [...] Some number of years after that, the computers are going to start talking to each other, probably in a language that we can’t understand, and collectively their intelligence [...] is going to rise very rapidly. My retort to that is, do you know what we’re going to do in that scenario? We’re going to unplug them all.”

This sounds reasonable, until you examine the hidden assumptions in the phrase “We’re going to unplug them all”. So his expectation-suggestion is that we will wait until we are one breakthrough away from uncontrolled takeoff, and then, once we are already in the middle of uncontrolled takeoff, attempt to stop it via legislation?