A thing I've been a bit surprised to see discussion taking at face value is "what arguments people made." One of the big degrees of freedom you have as a filmmaker is to ask a bunch of questions and then only include answers that fit your narrative.
i.e. it's at least plausible that the optimists or CEOs gave better arguments that got cut either because they didn't fit the narrative, or were too hard to understand or poorly phrased.
(Not saying this is necessarily true, just flagging that the question should be asked when you're evaluating the film.)
I would bet there were at least some pessimists who talked positively about having kids, and who were cut because he wanted to paint a streamlined narrative on that question.
Babies Are Awesome
The overarching personal journey is about Daniel having a son. The movie takes one very clear position, that we need to see taken more often, which is that getting married and having a family and babies and kids are all super awesome.

This turns into the first question he asks those he interviews: Would you have a child today, given the current state of AI? Many of those worried about AI killing everyone say no. They don’t try to dissuade anyone else, but we see Eliezer Yudkowsky saying he won’t do it in this timeline, we see Dario Amodei saying you should do what you would have done anyway, and a bit of ‘well let’s deal with this AI situation first and then we’ll see.’ Whereas basically all the optimists say today is the best time in history to have a kid, or to be born as a kid, the future is going to blow your mind.

On this issue, I am with the optimists. I’m not sure I’d say today is the best time ever to have a child, given the existential risks, but barring that it definitely is a great time, and the upside potential for those children has never been greater. Most importantly, I don’t think that you’ve made things worse if you do have children, and then humanity fails to make it. Children are inherently valuable, and are far better off with whatever time you can give them than not having existed at all.

People Are Worried About AI Killing Everyone
The first set of interviews outlines the danger. This is not a technical film. We get explanations that resonate with an ordinary dude.

We get Jeffrey Ladish explaining the basics of instrumental convergence, the idea that if you have a goal then power helps you achieve that goal, and you cannot fetch the coffee if you’re dead. That it’s not that the AI will hate us, it’s that it will see us like we see ants, and if you want to put a highway where the anthill is, that’s the ants’ problem. We get Connor Leahy talking about how creating smarter and more capable things than us is not a safe thing to be doing, and emphasizing that you do not need further justification for that. We get Eliezer Yudkowsky saying that if you share a planet with much smarter beings that don’t care about you and want other things, you should not like your chances. We get Ajeya Cotra explaining additional things, and so on.

Aside from that, we don’t get any talk of the ‘alignment problem,’ and I don’t think the word alignment even appears in the film that I can remember.

It is hard for me to know how much the arguments resonate. I am very much not the target audience. Overall I felt they were treated fairly, and the arguments were both strong and highly sufficient to carry the day. Yes, obviously we are in a lot of trouble here.

Freak Out
Daniel’s response is, quite understandably and correctly, to freak out. Then he asks, very explicitly, is there a way to be an optimist about this? Could he convince himself it will all work out?

It is hard to properly express how much I appreciated this being so explicit. The second section is not a quest for truth. It is a quest to stop freaking out, regardless of the underlying truth.

Other People Are Not Worried About AI Killing Everyone
The tech optimists and accelerationists are happy to oblige. They come bearing positive vibes and the promise of technology to solve all of our problems. Peter Diamandis starts us off by pointing out that technology has done great things for people throughout history. Beff Jezos promises even more of this to come, that the future will be awesome. People are always afraid of new tech, you see, but that’s a natural part of it, and the fear can be useful. That is almost entirely the argument. Tech was good before, so tech will be good now. The vibes, among this group, are excellent.

The careful observer will notice that this does not constitute much of an argument. Yes, it is Bayesian evidence that people previously worried and thought things were ending, but it is an extremely bad sign if this is all you have got. The fact that humans use technology and tools to make life better does not mean that creating sufficiently advanced artificial minds superior to our own is a safe thing to do, or likely to turn out well. It does not answer any of the cases made for existential risk or ‘doom.’

Indeed, when we flip back to the first group of worried people, they, especially Tristan Harris but also others, readily affirm that the promises and upsides are real and technology is awesome for humans. The problem is that none of that means we’re not all going to die, or provides a reason to think the existential risks aren’t there.

We even have, verbatim, someone saying the question is not whether we can survive AGI, the question is whether we can survive without AGI. He even directly cites a potential asteroid strike, with a straight face.

Note that Daniela Amodei, Dario’s sister and the President of Anthropic, appears in this section, rather than in the first section. She doesn’t actively dismiss AI existential risks, but she focuses almost entirely on the upside potential. Very curious.
As Robin Hanson points out, that does not mean there are not better arguments for existential risks being unlikely. But it seems that no one brought such arguments. Who needs arguments when you have vibes?

Aella left the movie mad at the optimists for not making any arguments. Whereas I’m not mad about that, because they’re not seriously claiming to make any arguments, so presenting their argumentless pitch provides key information about this fact. Doing this in a way those people endorse as fair lets outsiders see that there is no debate, as there are no good arguments on the ‘nothing to worry about’ side, although there are good arguments for higher chances of success than MIRI believes in.

Deepfaketown and Botpocalypse Soon
We then get a third group of interviews and worries, which is where we bring in Emily Bender and Timnit Gebru and company, and we talk about deepfake videos and inequality and power and water usage and all the other various boogeymen. This brings the vibes back to ‘oh no’ without digging into any of the particular claims.

Some of the concerns here are real, some are nonzero but essentially fake, and wisely the fake ones are not focused upon. The main focuses are deepfakes, which for now are contained but certainly are real and a problem, and inequality and the prospect of humans being unable to hold jobs. Given we have already covered actual existential risks, I will allow this, you do have to cover your bases.

Stopping The AI Race and A Narrow Path
Discussion now shifts into the dynamics of the AI Race. We see various people point out that racing to build more capable AI as fast as possible is bad. As Connor Leahy says, several projects racing for AGI at the same time is the worst possible situation, and, well, here we are.

Tristan Harris frames things as needing to chart between twin dangers. If we fully ‘let it rip’ then that definitely ends disastrously, with misuse cited as the central reason. I agree, but note that the movie did not properly justify this, and should have pointed out that if everyone has sufficiently advanced AI available then the AIs are effectively in charge, because everyone has to use their AI to compete for resources and run their life on their behalf, and so on. If we ‘shut it down,’ we miss out on AI’s promise indefinitely, and as many point out, including Demis Hassabis, this only works if you have everyone’s buy-in, including China, and this is not so easy. I was disappointed we didn’t get more on the fact that such buy-in is possible, but it felt reasonable to put this beyond scope.

Instead, we must chart, the movie says reasonably, a narrow path between these two options. You can’t go full speed or full stop. One place I find the arguments weak is ‘the lab with the least safety wins,’ since that assumes both that safety trades off with usefulness (that the alignment tax is large and positive, which so far it hasn’t been), and also that the participants are roughly equal.

CEOs Know Their Roles
Given this is all being run by ‘five guys,’ he then sets out to talk to the five CEOs of OpenAI, Anthropic, Google, xAI and Meta. The results are impressive and also kind of perfect.
- Sam Altman of OpenAI shows up soft-spoken, friendly but somber. They congratulate each other on starting families, and Altman acknowledges the whole thing is scary. His answer to how to make AI safe is iterative deployment and testing, and his reason why OpenAI can make it safe is they can use their lead. I don’t think it was fair, even then, for Altman to claim a lead over Anthropic, but unless he was going to break news Altman came off about as well as he could.
- Dario Amodei showed up his usual self as well. He acknowledged the situation, and noted the need for government help with coordination and safety.
- Demis Hassabis pointed out that coordination would need to be international, and emphasized some of his favorite AI upsides.
- Elon Musk said he would participate, but got too busy, and left us with nothing.
- Mark Zuckerberg declined to participate at all.
Did he grill the CEOs? No. He did not grill the CEOs. The questions were not all easy, but he kept it friendly, and asked questions he clearly needed to ask. I think this was the right approach in a spot like this, because he doesn’t have the chops necessary to ask the ‘hard hitting’ questions I would want to ask. Keep ‘em talking, and get them into as earnest a mode as you can rather than a combative one.

The Call To Action
I did appreciate the fake ending, on both the real and meta levels. I am curious what level of fake it was, whether he did consider ending things there or not.

The real ending is a standard audacity of hope, call your Congressman, seek an international treaty to solve this coordination failure and save the world, the future is up to us pitch that ends so many documentaries. In this case, yes, the world really does need saving. There is a call to action link.

Often one rolls one’s eyes here. I would not begrudge anyone doing the same. But in this case, the very thesis that the future is unwritten, and that humanity can choose a path other than ‘wreck everything and either tank civilization or hand things to the bad guys,’ is itself controversial; many effectively argue that you shouldn’t even try. Tyler Cowen, in response to this section, as an example, says explicitly that ‘in reality, for better or worse, the final decisions will continue to be made by the national security establishment.’ This implies that they were previously making the final decisions on such matters, or that they will in the future do so, and also that you cannot impact what decisions such folks make, that such folks can’t be instructed, and that they can’t take part in international cooperation. Well, the correct reply goes, not with that attitude.