Major news outlets have published articles about the Future of Life Institute's Open Letter. Time Magazine published an opinion piece by Eliezer Yudkowsky. Lex Fridman featured Yudkowsky on his podcast. Several US Members of Congress have spoken about the risks of AI. And a Fox News reporter asked what the US President is doing to combat AI x-risk at a White House Press Conference.


Starting an Open Thread to discuss this, and how best to capitalize on this sudden attention.



WH Press Conference:

Time Magazine: 

FLI Letter: 


The public early Covid-19 conversation (in like Feb-April 2020) seemed pretty hopeful to me -- decent arguments, slow but asymmetrically correct updating on some of those arguments, etc.  Later everything became politicized and stupid re: covid.

Right now I think there's some opportunity for real conversation re: AI.  I don't know what useful thing follows from that, but I do think it may not last, and that it's pretty cool.  I care more about the "an opening for real conversation" thing than for the changing overton window as such, although I think the former probably follows from the latter (first encounters are often more real somehow).

Sundar Pichai was on the Hard Fork podcast today and was asked directly by Kevin Roose and Casey Newton about the FLI letter as well as long-term AI risk. I pulled out some of Pichai's answers here:


The FLI letter proposal:

Kevin Roose

[...] What did you think of that letter, and what do you think of this idea of slowing down the development of big models for six months?

Sundar Pichai

Look, in this area, I think it’s important to hear concerns. I mean, there are many thoughtful people, people who have thought about AI for a long time. I remember talking to Elon eight years ago, and he was deeply concerned about AI safety then. And I think he has been consistently concerned.

And I think there is merit to be concerned about it. So I think while I may not agree with everything that’s there in the details of how you would go about it, I think the spirit of it is worth being out there. I think you’re going to hear more concerns like that.

This is going to need a lot of debate. No one knows all the answers. No one company can get it right. We have been very clear about responsible AI — one of the first companies to put out AI principles. We issue progress reports.

AI is too important an area not to regulate. It’s also too important an area not to regulate well. So I’m glad these conversations are underway. If you look at an area like genetics in the ‘70s, when the power of DNA and recombinant DNA came into being, there were things like the Asilomar Conference.

Paul Berg from Stanford organized it. And a bunch of the leading experts in the field got together and started thinking about voluntary frameworks as well. So I think all those are good ways to think about this.

Game theory:

Kevin Roose

And just one more thing on this letter calling for this six-month pause. Are you willing to entertain that idea? I know you haven’t committed to it, but is that something you think Google would do?

Sundar Pichai

So I think in the actual specifics of it, it’s not fully clear to me. How would you do something like that, right, today?

Kevin Roose

Well, you could send an email to your engineers and say, OK, we’re going to take a six-month break.

Sundar Pichai

No, no, no, but how would you do — but if others aren’t doing that. So what does that mean? I’m talking about the how would you effectively —

Kevin Roose

It’s sort of a collective action problem.

Sundar Pichai

To me at least there is no way to do this effectively without getting governments involved.



Long term AI x-risk:

Kevin Roose

Yeah, so if you had to give a question on the AGI or the more long-term concerns, what would you say is the chance that a more advanced AI could lead to the destruction of humanity?

Sundar Pichai

There is a spectrum of possibilities. And what you’re saying is in one of that possibility ranges, right? And so if you look at even the current debate about where AI is today or where LLMs are, you see people who are strongly opinionated on either side.

There are a set of people who believe these LLMs, they’re just not that powerful. They are statistical models which are —

Kevin Roose

They’re just fancy autocomplete.

Sundar Pichai

Yes, that’s one way of putting it, right. And there are people who are looking at this and saying, these are really powerful technologies. You can see emergent capabilities — and so on.

We could hit a wall two iterations down. I don’t think so, but that’s a possibility. They could really progress in a two-year time frame. And so we have to really make sure we are vigilant and working with it.

One of the things that gives me hope about AI, like climate change, is it affects everyone. And so these are both issues that have similar characteristics in the sense that you can’t unilaterally get safety in AI. By definition, it affects everyone. So that tells me the collective will come over time to tackle all of this responsibly.

So I’m optimistic about it because I think people will care and people will respond. But the right way to do that is by being concerned about it. So I would never — at least for me, I would never dismiss any of the concerns, and I’m glad people are taking it seriously. We will.

A reason for optimism:

Kevin Roose

I hear you saying that what gives you hope for the future when it comes to AI is that other people are concerned about it — that they’re looking at the risks and the challenges. So on one hand, you’re saying that people should be concerned about AI. On the other hand, you’re saying the fact that they are concerned about AI makes you less concerned. So which is —

Sundar Pichai

Sorry, I’m saying the fact that the way you get things wrong is by not worrying about it. So if you don’t worry about something, you’re just going to completely get surprised. So to me, it gives me hope that there is a lot of people — important people — who are very concerned, and rightfully so.

Am I concerned? Yes. Am I optimistic and excited about all the potential of this technology? Incredibly. I mean, we’ve been working on this for a long time. But I think the fact that so many people are concerned gives me hope that we will rise over time and tackle what we need to do.

The increased public attention towards AI Safety risk is probably a good thing. But when stuff like this gets lumped in with the rest of AI Safety, it feels like the public-facing slow-down-AI movement is going to be a grab-bag of AI Safety, AI Ethics, and AI... privacy(?). As such, I'm afraid the public discourse will devolve into "Whoa-there-Slow-AI" versus "GOGOGOGO" tribal warfare; given the track record of American politics, this seems likely, maybe even inevitable.

More importantly, though, I'm afraid this will translate into adversarial relations between AI capabilities organizations and AI Safety orgs (more generally, that capabilities teams will become less inclined to incorporate safety concerns into their products).

I'm not actually in an AI organization, so if someone who is has thoughts on whether this dynamic is happening or not, I would love to hear them.

Yeah, since the public currently doesn't have much of an opinion on it, trying to get the correct information out seems critical. I fear some absolutely useless legislation will get passed, and everyone will just forget about it once the shock-value of GPT wears off.

I'm currently thinking that if there are any political or PR resources available to orgs (AI-related or EA) now is the time to use them. Public interest is fickle, and currently most people don't seem to know what to think, and are looking for serious-seeming people to tell them whether or not to see this as a threat. If we fail to act, someone else will likely hijack the narrative, and push it in a useless or even negative direction. I don't know how far we can go, or how likely it is, but we can't assume we'll get another chance before the public falls back asleep or gets distracted (the US has an election next year, so most discourse will then likely become poisoned). This is especially important for those in the community who are viewed as "serious people" or "serious organizations" (lots of academic credentials, etc.)

I agree that AI absolutely needs to be regulated ASAP to mitigate the many potential harms that could arise from its use. So, even though the FLI letter is flimsy and vague, I appreciate its performance of concern.

Yudkowsky’s worry about runaway intelligence is, I think, an ungrounded distraction. It is ungrounded because Yudkowsky does not have a coherent theory of intentionality that makes sense of the idea of an algorithm gaining a capacity to engage in its own goal-directed activity. It is a distraction from the public discourse about the very real, immediate, tangible risks of harms caused by the AI systems we have today.

The independent red-teaming organization ARC Evals that OpenAI partnered with to evaluate GPT-4 seems to disagree with this. While they don't use the term "runaway intelligence", they have flagged similar dangerous capabilities that they think will possibly be in reach for the next models beyond GPT-4:

We think that, for systems more capable than Claude and GPT-4, we are now at the point where we need to check carefully that new models do not have sufficient capabilities to replicate autonomously or cause catastrophic harm – it’s no longer obvious that they won’t be able to.

Thanks for sharing the link to ARC. It seems to me the kinds of things they are testing for and worried about are analogous to the risks of self-driving cars: when you incorporate ML systems into a range of human activities, their behaviour is unpredictable and can be dangerous. I am glad ARC is doing the work they are doing. People are using unpredictable tools and ARC is investigating the risks. That's great.

I don't think these capabilities ARC is looking at are "similar" to runaway intelligence, as you suggest. They clearly do not require it. They are far more mundane (but dangerous nonetheless, as you rightly point out).

At one point in the ARC post, they hint vaguely at being motivated by Yudkowsky-like worries: "As AI systems improve, it is becoming increasingly difficult to rule out that models might be able to autonomously gain resources and evade human oversight – so rigorous evaluation is essential." They seem to be imagining a system giving itself goals, such that it is motivated to engage in tactical deception to carry them out — a behaviour we find in a range of problem-solving non-human animals. It strikes me as a worry that is extraneous to the good work ARC is doing. And the end of the quote is odd, since rigorous evaluation is clearly essential regardless of autonomous resource gains or oversight evasion.
