Douglas Hofstadter changes his mind on Deep Learning & AI risk (June 2023)?
A podcast interview (posted 2023-06-29) with noted AI researcher Douglas Hofstadter discusses his career and current views on AI (via Edward Kmett, and since amplified by David Brooks). Hofstadter has previously energetically criticized GPT-2/3 models (and deep learning and compute-heavy GOFAI). These criticisms were widely circulated & cited, and apparently many people found Hofstadter a convincing & trustworthy authority when he was negative on deep learning capabilities & prospects, and so I found his most recent comments (which amplify things he has been saying in private since at least 2014) of considerable interest. This interview (EDIT: and earlier material, it turns out) appears to have gone under the radar, perhaps because it's a video, so below I excerpt from the second half, where he discusses DL progress & AI risk:

> Q: ...Which ideas from GEB are most relevant today?
>
> Douglas Hofstadter: ...In my book, *I Am a Strange Loop*, I tried to set forth what it is that really makes a self or a soul. I like to use the word "soul", not in the religious sense, but as a synonym for "I", a human "I", capital letter "I". So, what is it that makes a human being able to validly say "I"? What justifies the use of that word? When can a computer say "I" and we feel that there is a genuine "I" behind the scenes?
>
> I don't mean like when you call up the drugstore and the chatbot, or whatever you want to call it, on the phone says, "Tell me what you want. I know you want to talk to a human being, but first, in a few words, tell me what you want. I can understand full sentences." And then you say something and it says, "Do you want to refill a prescription?" And then when I say yes, it says, "Gotcha", meaning "I got you." So it acts as if there is an "I" there, but I don't have any sense whatsoever that there is an "I" there. It doesn't feel like an "I" to me; it feels like a very mechanical process.
>
> But in the case of more advanced thi
Depends on the field, at best. In the psychology Replication Crisis, this was one of the classic excuses for not publishing failures-to-replicate: "we did it right, so you must just have done it wrong; so it's good that you can't get published, and no one will cite you even if you do. You'd just pollute the literature and distract from our important success." Of course, it turns out that even if you involve the original experimenters in the follow-up to sign off with their magic touch, it doesn't replicate once you lock down the analysis and get a proper sample size.