Douglas Hofstadter changes his mind on Deep Learning & AI risk (June 2023)?
A podcast interview (posted 2023-06-29) with noted AI researcher Douglas Hofstadter discusses his career and current views on AI (via Edward Kmett, and amplified by David Brooks). Hofstadter has previously energetically criticized GPT-2/3 models (and deep learning and compute-heavy GOFAI more generally). These criticisms were widely circulated & cited, and apparently many people found Hofstadter a convincing & trustworthy authority when he was negative on deep learning capabilities & prospects, so I found his most recent comments (which amplify things he has been saying in private since at least 2014) of considerable interest. This interview (EDIT: and earlier material, it turns out) appears to have gone under the radar, perhaps because it's a video, so below I excerpt from the second half, where he discusses DL progress & AI risk:

> **Q:** ...Which ideas from GEB are most relevant today?
>
> **Douglas Hofstadter:** ...In my book, *I Am a Strange Loop*, I tried to set forth what it is that really makes a self or a soul. I like to use the word "soul", not in the religious sense, but as a synonym for "I", a human "I", capital letter "I". So, what is it that makes a human being able to validly say "I"? What justifies the use of that word? When can a computer say "I" and we feel that there is a genuine "I" behind the scenes?
>
> I don't mean like when you call up the drugstore and the chatbot, or whatever you want to call it, on the phone says, "Tell me what you want. I know you want to talk to a human being, but first, in a few words, tell me what you want. I can understand full sentences." And then you say something and it says, "Do you want to refill a prescription?" And then when I say yes, it says, "Gotcha", meaning "I got you." So it acts as if there is an "I" there, but I don't have any sense whatsoever that there is an "I" there. It doesn't feel like an "I" to me; it feels like a very mechanical process.
>
> But in the case of more advanced things...
I was thinking mostly along these lines: it sounds like you made money, but not nearly as much as you could have made had you instead invested in, or participated more directly in, DL scaling (even excluding the Anthropic opportunity), at a time when you didn't particularly need money; and you don't mention any major life improvements from it beyond the nebulous (and often purely positional/zero-sum). In the meantime, you made little progress on past issues of importance to you, like decision theory, and contributed neither to the DL discourse nor to the more exotic opportunities that were available 2020-2025 (e.g. instilling particular decision theories into LLMs by writing online during their most malleable years).