Eliezer has been on a podcast tour of late. This appears to be the latest one on YouTube.

I feel like this one didn't go so well. It's also over two hours, so maybe don't watch this one if you have better things to do, but for those who did, I thought I'd make a space for discussion.

Most of it was Eliezer using Socratic questioning to get to the bottom of the host's intuition that AI isn't "really" intelligent, or merely imitates intelligence in fragments, and is therefore not dangerous. I'm not sure I understood the position; it didn't seem to be expressed very well.

Eliezer's goal in doing the interview seems to have been to figure out how to talk to a different audience. People in the know ask very different questions from those who are just learning about this.

niplav:

I predict the interviews would be more successful if Eliezer were more polite to the interviewers. That would include things like not interrupting the interviewer and not becoming too exasperated with him.

On the plus side, he summarized the interviewer's position back to him at the end, which felt collegial and cooperative.

I think this comes out of Eliezer trying to debug the interviewer's thinking while conducting the interview (a thing I can empathise with).

I managed about an hour of the discussion... Very painful -- like Communist Re-Education Camp painful.  From what I saw, EY was much more patient with the interviewer than I would have been...  Maybe I'm old school, but generally the interviewer tries to maximize the guest's time & engagement.  His rambling, near-incoherent excursions... Ouch!!

I'd hoped his first question to EY would be, "Why are you so convinced that an AGI will Kill Us All?"  Yes, the response would be 'old hat' on this forum, but probably not for this podcast's audience...