Douglas Hofstadter changes his mind on Deep Learning & AI risk (June 2023)?
A podcast interview (posted 2023-06-29) with noted AI researcher Douglas Hofstadter discusses his career and current views on AI (via Edward Kmett), later amplified by David Brooks. Hofstadter has previously energetically criticized GPT-2/3 models (and deep learning and compute-heavy GOFAI). These criticisms were widely circulated & cited, and apparently many people found Hofstadter a convincing & trustworthy authority when he was negative on deep learning capabilities & prospects, so I found his most recent comments (which amplify things he has been saying in private since at least 2014) of considerable interest. This interview (EDIT: and earlier material, it turns out) appears to have gone under the radar, perhaps because it's a video, so below I excerpt from the second half, where he discusses DL progress & AI risk:

> **Q**: ...Which ideas from GEB are most relevant today?
>
> **Douglas Hofstadter**: ...In my book, *I Am a Strange Loop*, I tried to set forth what it is that really makes a self or a soul. I like to use the word "soul", not in the religious sense, but as a synonym for "I", a human "I", capital letter "I". So, what is it that makes a human being able to validly say "I"? What justifies the use of that word? When can a computer say "I" and we feel that there is a genuine "I" behind the scenes?
>
> I don't mean like when you call up the drugstore and the chatbot, or whatever you want to call it, on the phone says, "Tell me what you want. I know you want to talk to a human being, but first, in a few words, tell me what you want. I can understand full sentences." And then you say something and it says, "Do you want to refill a prescription?" And then when I say yes, it says, "Gotcha", meaning "I got you." So it acts as if there is an "I" there, but I don't have any sense whatsoever that there is an "I" there. It doesn't feel like an "I" to me; it feels like a very mechanical process.
>
> But in the case of more advanced thi
Can you point to 3 well-accepted examples of animals which do this - deliberately pass up prey at personal cost, where kin selection or inclusive fitness or other concerns cannot explain it, where the gain exists only at the species level, and where the behavior is hardwired into them despite the incentives for individuals to defect? If not, it seems unlikely that humans would be the first and only species to evolve such a mechanism.