Ezra Klein of NYT put out a surprisingly sympathetic post on AI risk in the Sunday edition. It even quotes Paul Christiano and links back to LessWrong!
But what I'm actually here to talk about is the top reader-recommended comment on the article as of Sunday 11pm UTC:
I wonder how many of these AI researchers have children. What Ezra describes here is what I see every day with my teenager. Of course, no one understands teenagers, but that's not what I mean. I taught my daughter to play chess when she was very young. I consider myself a reasonably good player, and for many years (as I was teaching her), I had to hold myself back to let her win enough to gain confidence. But now that she is thirteen, I suddenly discovered that within a span of weeks, I no longer needed to handicap myself. The playing field was level. And then, gradually and then very suddenly, she leapt past my abilities. As with AI, I could understand the broad outlines of what she was doing--moving this knight or that rook to gain an advantage--but I had no clue how to defend against these attacks. And worse (for my game, at least), I would fall into traps where I thought I was pursuing a winning hand but was led into ambush after ambush. It was very humbling: I had had the upper hand for so long that it became second nature, and then suddenly, I went to losing every game.
As parents, we all want our children to surpass us. But with AI, these "summoners" are creating entities whose motives are not human. We seem to be at the cusp of where I was before my daughter overtook me: confident and complacent, not knowing what lay ahead. But, what we don't realize is that very soon we'll begin to lose every game against these AIs. Then, our turn in the sun will be over.
Generally NYT comments on AI risk are either dismissive, or just laden with general anxiety about tech. (Indeed, the second-most recommended comment is deeply dismissive, and the third is generic anxiety/frustration.) There's hopefully something to learn from commenter "Dwarf Planet" in terms of messaging.
If we want to know which arguments resonate with New York Times readers, we can actually use surveys, message testing, and focus groups to check; we don't need to guess! (Disclaimer: My company sells these services.)
Does RP have any results to share from these studies? What arguments seem to resonate with various groups?
At the time of writing, this comment is still the most recommended comment, with 910 recommendations. 2nd place has 877 recommendations:
3rd place has 790 recommendations:
4th place has 682 recommendations:
After that, 5th place has 529, 6th place has 390, and the rest have 350 or fewer.
I think it may be necessary to accept that, at first, there may need to be a stage of general AI wariness in public opinion before AI Safety and specific facets of the topic are explored. In a sense, the public has not yet fully digested the idea that 'AI is a serious risk', or perhaps even that 'AI will be transformative to human life' in the relatively near future. I don't think that phase can simply be skipped, and it will probably be useful to get as many people broadly on topic before the more specific messaging, because if they are not, they will reject your messaging immediately, perhaps becoming further entrenched in the process.
If this is the case, then right now sentiments along the lines of general anxiety about AI are not too bad, or at least they are better than dismissive sentiment.
Great post. This type of genuine comment (human-centered rather than logically abstract) seems like the best way to communicate the threat to non-technical people. I've tried talking about the problem with friends in the social sciences and haven't found a good way to convey how seriously I take it, or that there is currently no known way to prevent the problem.
One thing I notice is that it doesn't link to a plan of action, or tell you how you should feel. It just describes the scenario. Perhaps that's what's needed: less of the complete argument, and more breaking it down into digestible morsels.
The article also references Katja Grace and an AI Impacts survey. Ezra seems pretty plugged into this scene.
I can see an undertone of concern about large-scale job loss due to cognitive automation. It's a catastrophe to personally prepare for, and it's also a very important dynamic for forecasting and world modelling. But blurring the line between job automation and AI safety will cause some serious problems down the line, and that line was likely blurred for a sizeable proportion of the people who upvoted that comment (hopefully not a majority).
I still think it did a great job, but I was much more impressed with the twelve-page introduction to AI safety that Toby Ord wrote in The Precipice.
That comment is well-constructed. However, some may object to the analogy on the grounds that AI may not reach human-level intelligence outside of constrained tasks such as playing chess. While storytelling and emotional appeals can be effective forms of persuasion, what I appreciate about the analogy is that it is relatable, and it highlights the exponential development of AI.
A nice read; however, it does not present a valid argument. Is his time over because his daughter is better at chess than he is? This is the beginning of something, not an end.
I didn't read it as an argument so much as an emotionally compelling anecdote that excellently conveys this realization:
The final paragraph does seem to be making several arguments, or at least presuming multiple things that are not universally accepted as axioms.
Yeah, the author is definitely making some specific claims. I'm not sure if the comment's popularity stems primarily from its particular arguments or from its emotional sentiment. I was just pointing out what I personally appreciated about the comment.