In July, Ben Garfinkel scrutinized the classic AI Risk arguments in a 158-minute interview with 80,000 Hours, which I strongly recommend.

I have formulated a reply and recorded 80 minutes of video as part of two presentations in the AISafety.com Reading Group:

196. Ben Garfinkel on Scrutinizing Classic AI Risk Arguments

197. Ben Garfinkel on Scrutinizing Classic AI Risk Arguments 2

I strongly recommend turning subtitles on. Also consider increasing the playback speed.


"I have made this longer than usual because I have not had time to make it shorter."
-Blaise Pascal

The podcast/interview format is less well suited to critical text analysis than a formal article or a LessWrong post, for three reasons:

  1. Lack of precision. Placing each qualifier carefully and deliberately while speaking is a difficult skill, and at several points I was uncertain whether I was parsing Ben's sentences correctly.

  2. Lack of references. The "Classic AI Risk Arguments" are expansive, and critical text analysis requires clear pointers to the specific arguments being criticized.

  3. Expansiveness. The interview presents a great many arguments, and many of them deserve formal answers. Unfortunately, this is a large task, and I hope you will forgive me for replying in the form of a video.

tl;dw: A number of the arguments Ben Garfinkel criticizes are in fact not present in "Superintelligence" or "The AI Foom Debate". (This summary is incomplete.)

6 comments:


Hi Soren,

I agree that podcasts/interviews have some major disadvantages, though they also have several advantages. 

Just wanted to link to Ben's written versions of some (but not all) of these arguments in case you haven't seen them. I don't know whether they address the specific things you're concerned about. We linked to these in the show notes and if we didn't explicitly flag that these existed during the episode, we should have.[1] 

  1. On Classic Arguments for AI Discontinuities
  2. Imagining the Future of AI: Some Incomplete but Overlong Notes
  3. Slide deck: Unpacking Classic AI Risk Arguments
  4. Slide deck: Potential Existential Risks from Artificial Intelligence

(1) and (3) are most relevant to the things we talked about on the podcast. My memory's hazy but I think (2) and (4) also have some relevant sections. 

Unfortunately, I probably won't have time to watch your videos, though I'd really like to.[2] If you happen to have any easy-to-write-down thoughts on how I could've made the interview better (for example, parts where I should've pushed back more), I'd find that helpful.

[1] JTBC, I think we should expect that most listeners are going to absorb whatever's said on the show and not do any additional reading.

[2] ETA: Oh - I just noticed that YouTube now has an 'open transcript' feature, which makes it possible I'll be able to get to this.

Hi Howie,

Thank you for reminding me of these four documents. I had seen them, but I dismissed them early in the process. That might have been a mistake, and I'll read them carefully now.

I think you did a great job in the interview. I describe one place where you could have pushed back more here: https://youtu.be/_kNvExbheNA?t=1376 You asked: "...Assume that among the things that these narrow AIs are really good at doing, one of them is programming AI...", and Ben Garfinkel gave a broad answer about "doing science".

On the documents:

Unfortunately, I read them nearly a year ago, so my memory's hazy. But (3) goes over most of the main arguments we talked about in the podcast step by step, though it's just slides, so you may have similar complaints about the lack of close analysis of the original texts.

(1) is a pretty detailed write-up of Ben's thoughts on discontinuities, sudden emergence, and explosive aftermath. To the extent that you were concerned about those bits in particular, I'd guess you'll find what you're looking for there.

Thanks! Agree that it would've been useful to push on that point some more.

I know Ben was writing up some additional parts of his argument at some point but I don't know whether finishing that up is still something he's working on.

Sorry about the unfortunate overlap between this post and Petrov Day! When the frontpage goes back up tomorrow, I will bump this post to make sure it gets some proper time on the frontpage.