A podcast interview (posted 2023-06-29) with noted AI researcher Douglas Hofstadter discusses his career and current views on AI (via Edward Kmett, amplified by David Brooks).

Hofstadter has previously energetically criticized GPT-2/3 models (and deep learning and compute-heavy GOFAI). These criticisms were widely circulated & cited, and apparently many people found Hofstadter a convincing & trustworthy authority when he was negative on deep learning capabilities & prospects, and so I found his most-recent comments (which amplify things he has been saying in private since at least 2014) of considerable interest.

This interview (EDIT: and earlier material, it turns out) appears to have gone under the radar, perhaps because it's a video, so below I excerpt from the second half, where he discusses DL progress & AI risk:

    • Q: ...Which ideas from GEB are most relevant today?

    • Douglas Hofstadter: ...In my book, I Am a Strange Loop, I tried to set forth what it is that really makes a self or a soul. I like to use the word "soul", not in the religious sense, but as a synonym for "I", a human "I", capital letter "I." So, what is it that makes a human being able to validly say "I"? What justifies the use of that word? When can a computer say "I" and we feel that there is a genuine "I" behind the scenes?

      I don't mean like when you call up the drugstore and the chatbot, or whatever you want to call it, on the phone says, "Tell me what you want. I know you want to talk to a human being, but first, in a few words, tell me what you want. I can understand full sentences." And then you say something and it says, "Do you want to refill a prescription?" And then when I say yes, it says, "Gotcha", meaning "I got you." So it acts as if there is an "I" there, but I don't have any sense whatsoever that there is an "I" there. It doesn't feel like an "I" to me, it feels like a very mechanical process.

      But in the case of more advanced things like ChatGPT-3 or GPT-4, it feels like there is something more there that merits the word "I." The question is, when will we feel that those things actually deserve to be thought of as being full-fledged, or at least partly fledged, "I"s?

      I personally worry that this is happening right now. But it's not only happening right now. It's not just that certain things that are coming about are similar to human consciousness or human selves. They are also very different, and in one way, it is extremely frightening to me. They are extraordinarily much more knowledgeable and they are extraordinarily much faster. So that if I were to take an hour in doing something, the ChatGPT-4 might take one second, maybe not even a second, to do exactly the same thing.

      And that suggests that these entities, whatever you want to think of them, are going to be very soon, right now they still make so many mistakes that we can't call them more intelligent than us, but very soon they're going to be, they may very well be more intelligent than us and far more intelligent than us. And at that point, we will be receding into the background in some sense. We will have handed the baton over to our successors, for better or for worse.

      And I can understand that if this were to happen over a long period of time, like hundreds of years, that might be okay. But it's happening over a period of a few years. It's like a tidal wave that is washing over us at unprecedented and unimagined speeds. And to me, it's quite terrifying because it suggests that everything that I used to believe was the case is being overturned.

    • Q: What are some things specifically that terrify you? What are some issues that you're really...

    • D. Hofstadter: When I started out studying cognitive science and thinking about the mind and computation, you know, this was many years ago, around 1960, and I knew how computers worked and I knew how extraordinarily rigid they were. You made the slightest typing error and it completely ruined your program. Debugging was a very difficult art and you might have to run your program many times in order to just get the bugs out. And then when it ran, it would be very rigid and it might not do exactly what you wanted it to do because you hadn't told it exactly what you wanted to do correctly, and you had to change your program, and on and on.

      Computers were very rigid and I grew up with a certain feeling about what computers can or cannot do. And I thought that artificial intelligence, when I heard about it, was a very fascinating goal, which is to make rigid systems act fluid. But to me, that was a very long, remote goal. It seemed infinitely far away. It felt as if artificial intelligence was the art of trying to make very rigid systems behave as if they were fluid. And I felt that would take enormous amounts of time. I felt it would be hundreds of years before anything even remotely like a human mind would be asymptotically approaching the level of the human mind, but from beneath.

      I never imagined that computers would rival, let alone surpass, human intelligence. And in principle, I thought they could rival human intelligence. I didn't see any reason that they couldn't. But it seemed to me like it was a goal that was so far away, I wasn't worried about it. But when certain systems started appearing, maybe 20 years ago, they gave me pause. And then this started happening at an accelerating pace, where unreachable goals and things that computers shouldn't be able to do started toppling. The defeat of Garry Kasparov by Deep Blue, and then going on to Go systems, Go programs, well, systems that could defeat some of the best Go players in the world. And then systems got better and better at translation between languages, and then at producing intelligible responses to difficult questions in natural language, and even writing poetry.

      And my whole intellectual edifice, my system of beliefs... It's a very traumatic experience when some of your most core beliefs about the world start collapsing. And especially when you think that human beings are soon going to be eclipsed. It felt as if not only are my belief systems collapsing, but it feels as if the entire human race is going to be eclipsed and left in the dust soon. People ask me, "What do you mean by 'soon'?" And I don't know what I really mean. I don't have any way of knowing. But some part of me says 5 years, some part of me says 20 years, some part of me says, "I don't know, I have no idea." But the progress, the accelerating progress, has been so unexpected, so completely caught me off guard, not only myself but many, many people, that there is a certain kind of terror of an oncoming tsunami that is going to catch all humanity off guard.

      It's not clear whether that will mean the end of humanity in the sense of the systems we've created destroying us. It's not clear if that's the case, but it's certainly conceivable. If not, it also just renders humanity a very small phenomenon compared to something else that is far more intelligent and will become incomprehensible to us, as incomprehensible to us as we are to cockroaches.

    • Q: That's an interesting thought. [nervous laughter]

    • Hofstadter: Well, I don't think it's interesting. I think it's terrifying. I hate it. I think about it practically all the time, every single day. [Q: Wow.] And it overwhelms me and depresses me in a way that I haven't been depressed for a very long time.

    • Q: Wow, that's really intense. You have a unique perspective, so knowing you feel that way is very powerful.

    • Q: How have LLMs, large language models, impacted your view of how human thought and creativity works?

    • D H: Of course, it reinforces the idea that human creativity and so forth come from the brain's hardware. There is nothing else than the brain's hardware, which is neural nets. But one thing that has completely surprised me is that these LLMs and other systems like them are all feed-forward. It's like the firing of the neurons is going only in one direction. And I would never have thought that deep thinking could come out of a network that only goes in one direction, out of firing neurons in only one direction. And that doesn't make sense to me, but that just shows that I'm naive.

      It also makes me feel that maybe the human mind is not so mysterious and complex and impenetrably complex as I imagined it was when I was writing Gödel, Escher, Bach and writing I Am a Strange Loop. I felt at those times, quite a number of years ago, that as I say, we were very far away from reaching anything computational that could possibly rival us. It was getting more fluid, but I didn't think it was going to happen, you know, within a very short time.

      And so it makes me feel diminished. It makes me feel, in some sense, like a very imperfect, flawed structure compared with these computational systems that have, you know, a million times or a billion times more knowledge than I have and are a billion times faster. It makes me feel extremely inferior. And I don't want to say deserving of being eclipsed, but it almost feels that way, as if we, all we humans, unbeknownst to us, are soon going to be eclipsed, and rightly so, because we're so imperfect and so fallible. We forget things all the time, we confuse things all the time, we contradict ourselves all the time. You know, it may very well be that that just shows how limited we are.

    • Q: Wow. So let me keep going through the questions. Is there a time in our history as human beings when there was something analogous that terrified a lot of smart people?

    • D H: Fire.

    • Q: You didn't even hesitate, did you? So what can we learn from that?

    • D H: No, I don't know. Caution, but you know, we may have already gone too far. We may have already set the forest on fire. I mean, it seems to me that we've already done that. I don't think there's any way of going back.

      When I saw an interview with Geoff Hinton, who was probably the most central person in the development of all of these kinds of systems, he said something striking. He said he might regret his life's work. He said, "Part of me regrets all of my life's work." The interviewer then asked him how important these developments are. "Are they as important as the Industrial Revolution? Is there something analogous in history that terrified people?" Hinton thought for a second and he said, "Well, maybe as important as the wheel."

(YouTube transcript cleaned up by GPT-4 & checked against audio.)

Ben Amitay:

It is beautiful to see that many of our greatest minds are willing to Say Oops, even about their most famous works. It may not score that many winning-points, but it does restore quite a lot of dignity-points I think.

mishka:

But one thing that has completely surprised me is that these LLMs and other systems like them are all feed-forward. It's like the firing of the neurons is going only in one direction. And I would never have thought that deep thinking could come out of a network that only goes in one direction, out of firing neurons in only one direction. And that doesn't make sense to me, but that just shows that I'm naive.

I felt exactly the same, until I had read this June 2020 paper: Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention.

It turns out that using Transformers in the autoregressive mode (with output tokens being added back to the input by concatenating the previous input and the new output token, and sending the new versions of the input through the model again and again) results in them emulating dynamics of recurrent neural networks, and that clarifies things a lot...
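A minimal sketch of that autoregressive loop (with `toy_model` a hypothetical stand-in for a feed-forward pass, not a real Transformer): each forward pass is purely feed-forward, but feeding every output token back into the next input creates a recurrence over the growing context.

```python
def toy_model(context):
    """Hypothetical stand-in for a feed-forward pass: next token = sum mod 10."""
    return sum(context) % 10

def generate(model, prompt, n_steps):
    context = list(prompt)
    for _ in range(n_steps):          # the recurrence: one iteration per output token
        next_token = model(context)   # feed-forward pass over the whole context
        context.append(next_token)    # output concatenated back into the input
    return context

print(generate(toy_model, [3, 1, 4], 4))  # → [3, 1, 4, 8, 6, 2, 4]
```

The state that persists between iterations is just the token sequence itself, which is what makes the recurrence look so low-bandwidth compared to a classic RNN's hidden state.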

dr_s:

Yeah, there's obviously SOME recursion there but it's still surprising that such a relatively low bandwidth recursion can still work so well. It's more akin to me writing down my thoughts and then rereading them to gather my ideas than the kind of loops I imagine our neurons might have.

That said, who knows, maybe the loops in our brain are superfluous, or only useful for learning feedback purposes, and so a neural network trained by an external system doesn't need them.

It's more akin to me writing down my thoughts and then rereading them to gather my ideas than the kind of loops I imagine our neurons might have.

In a sense, that is what is happening when you think in words. It's called the phonological loop.

I think it seems that way, in your conscious thoughts, but actually there's a lot more inter-brain-region communication going on simultaneously. I think that without this, you'd see far worse human outputs. And I think once we add something like higher-bandwidth-recursive-thought into language models, we're going to see a capabilities jump.

It sounds a lot like what we do when we write (as opposed to talk). I recall Kurt Vonnegut once said something like (can't find cite sry) 

'The reason an author can sound intelligent is because they have the advantage of time. My brain is so slow, people have thought me stupid. But as a writer, I can think at my own speed.'

Think of it this way: how would it feel to chat with someone whose perception of time is 10X slower? Or 100X or 1000X - or, imagine playing chess where your clock was running orders of magnitude faster than your opponent's.

mishka:

Pondering this particular recursion, I noticed that it looks like things change only slightly from iteration to iteration of this autoregressive dynamic, because we just add one token each time.

The key property of those artificial recurrent architectures which successfully fight the vanishing gradient problem is that a single iteration of recurrence looks like Identity + epsilon (so, X -> X + deltaX for a small deltaX on each iteration, see, for example, this 2018 paper, Overcoming the vanishing gradient problem in plain recurrent networks which explains how this is the case for LSTMs and such, and explains how to achieve this for plain recurrent networks; for a brief explanation see my review of the first version of this paper, Understanding Recurrent Identity Networks).

So, I strongly suspect that it is also the case for the recurrence which is happening in Transformers used in the autoregressive mode (because the input is changing mildly from iteration to iteration).
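A tiny numeric sketch of that "Identity + epsilon" picture (the pull-toward-1.0 `delta` function here is invented, purely to make the iteration concrete): each step is X -> X + deltaX for a small deltaX, so the map stays close to the identity and the state drifts gradually rather than jumping.

```python
def step(state, delta, epsilon=0.01):
    # X -> X + deltaX: the identity path carries the state through,
    # and only a small correction is added per iteration
    return [x + epsilon * d for x, d in zip(state, delta(state))]

def run(state, delta, n_iters):
    for _ in range(n_iters):
        state = step(state, delta)
    return state

# hypothetical delta: pull each component toward 1.0
final = run([0.0, 2.0], lambda s: [1.0 - x for x in s], n_iters=100)
print(final)  # after 100 small steps, both components have drifted only partway toward 1.0
```

Because each update is near-identity, information from early iterations survives many steps, which is the same intuition behind why such recurrences avoid vanishing gradients.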

But I don't know to which extent this is also true for biological recurrent networks. On one hand, our perceptions seem to change smoothly with time, and that seems to be an argument for gradual change of the X -> X + deltaX nature in the biological case as well. But we don't understand the biological case all that well...


I think recurrence is actually quite important for LLMs. Cf. Janus' Simulator theory, which is now relatively well developed (see e.g. the original Simulators post, or brief notes I took on the recent status of that theory in a May 23, 2023 status update). The fact that this is an autoregressive simulation is playing the key role.

But we indeed don't know whether complexity of biological recurrences vs. relative simplicity of artificial recurrent networks matters much...

I'd speculate that our perceptions just seem to change smoothly because we encode second-order (or even third-order) dynamics in our tokens. From what I layman-understand of consciousness, I'd be surprised if it wasn't discrete.

Can you explain what you mean by second or third order dynamics? That sounds interesting. Do you mean e.g. the order of the differential equation or something else?

I just mean like, if we see an object move we have a qualia of position but also of velocity/vector and maybe acceleration. So when we see for instance a sphere rolling down an incline, we may have a discrete conscious "frame" where the marble has a velocity of 0 but a positive acceleration, so despite the fact that the next frame is discontinuous with the last one looking only at position, we perceive them as one smooth sequence because the predicted end position of the motion in the first frame is continuous with the start point in the second.

This seems to me the opposite of a low-bandwidth recursion. Having access to the entire context window of the previous iteration minus the first token, it should be pretty obvious that most of the relevant information encoded by the values of the nodes in that iteration could in principle be reconstructed, excepting the unlikely event that the first token turns out to be extremely important. And it would be pretty weird if much of that information wasn't actually reconstructed in some sense in the current iteration. An inefficient way to get information from one iteration to the next, if that is your only goal, but plausibly very high bandwidth.

excepting the unlikely event that first token turns out to be extremely important.

Which is why asking an LLM to give an answer that starts with "Yes" or "No" and then gives an explanation is the worst possible way to do it.

der:

This was thought provoking. While I believe what you said is currently true for the LLMs I've used, a sufficiently expensive decoding strategy would overcome it. Might be neat to try this for the specific case you describe. Ask it a question that it would answer correctly with a good prompt style, but use the bad prompt style (asking to give an answer that starts with Yes or No), and watch how the ratio of the cumulative probabilities of Yes* and No* sequences changes as you explore the token sequence tree.
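A toy sketch of that experiment (the `next_probs` table is an invented stand-in for an LLM's next-token distributions, not a real model; a real run would query logprobs): fully explored, the cumulative Yes*/No* masses reduce to the first-token probabilities, but a cheap, truncated exploration of the tree can estimate a different, even reversed, ratio.

```python
# hypothetical next-token distributions, keyed by the sequence so far;
# "<end>" marks a completed sequence
next_probs = {
    (): {"Yes": 0.45, "No": 0.55},
    ("Yes",): {"<end>": 0.8, "indeed": 0.2},
    ("Yes", "indeed"): {"<end>": 1.0},
    ("No",): {"<end>": 0.4, "way": 0.6},
    ("No", "way"): {"<end>": 1.0},
}

def full_mass(prefix):
    """Cumulative probability of all complete continuations of `prefix`."""
    if prefix[-1] == "<end>":
        return 1.0
    return sum(p * full_mass(prefix + (t,)) for t, p in next_probs[prefix].items())

def explored_mass(prefix):
    """Mass found by a cheap exploration: follow only the most likely child."""
    if prefix[-1] == "<end>":
        return 1.0
    t, p = max(next_probs[prefix].items(), key=lambda kv: kv[1])
    return p * explored_mass(prefix + (t,))

full_yes = next_probs[()]["Yes"] * full_mass(("Yes",))      # 0.45: equals first-token prob
full_no = next_probs[()]["No"] * full_mass(("No",))         # 0.55: No wins
part_yes = next_probs[()]["Yes"] * explored_mass(("Yes",))  # 0.36
part_no = next_probs[()]["No"] * explored_mass(("No",))     # 0.33: ranking reversed
print(full_yes, full_no, part_yes, part_no)
```

In this hand-built example the No-continuations spread their mass more thinly, so the greedy exploration under-counts them and flips the apparent answer, which is the kind of ratio shift one would watch for while expanding the tree.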

dr_s:

I'd say it's pretty low bandwidth compared to the wealth of information that must exist in the intermediate layers. Even just the distribution of logits gets collapsed into a single returned value. You could definitely send back more than just that, but the question is whether it's workable or if it just adds confusion.

It could also be that LLMs don't do it like we do it and simply offer a computationally sufficient platform.

The loops in our neurons can't be that great, otherwise I wouldn't benefit so much from writing down my thoughts and then rereading them. :P

(Not a serious disagreement with you, I think I agree overall)

In what sense do they emulate these dynamics?

mishka:

The formulas and a brief discussion are in Section 3.4 (page 5) of https://arxiv.org/abs/2006.16236

Thanks!

Further discussion on Twitter of feedforward vs recurrent.

Thanks!

That paper is one of many claiming some linear attention mechanism that's as good as full self-attention. In practice they're all sufficiently much worse that nobody uses them except the original authors in the original paper, usually not even the original authors in subsequent papers.

The one exception is FlashAttention, which is basically just a very fancy fused kernel for the same computation (actually the same, up to numerical error, unlike all these "linear attention" papers).

Being an autoregressive language model is like having a strange form of amnesia, where you forget everything you thought about so far as soon as you utter a new word, and you can remember only what you said before.

It turns out that using Transformers in the autoregressive mode (with output tokens being added back to the input by concatenating the previous input and the new output token, and sending the new versions of the input through the model again and again) results in them emulating dynamics of recurrent neural networks, and that clarifies things a lot...

I'll bite: could you dumb down the implications of the paper a little bit? What is the difference between a Transformer emulating an RNN and some pre-Transformer RNNs and/or not-RNNs?

My much more novice-level answer to Hofstadter's intuition would have been: it's not the feedforward firing, but it is the gradient descent training of the model on massive scale (both in data and in computation). But apparently you think that something RNN-like about the model structure itself is important?

mishka:

I think that gradient descent in computation is super-important (this is, apparently, the key mechanism responsible for the phenomenon of few-shot learning).

And, moreover, massive linear combinations of vectors ("artificial attention") seem to be super-important (the starting point in this sense was adding this kind of artificial attention mechanism to the RNN architecture in 2014).

But apparently you think that something RNN-like about the model structure itself is important?

Yes, this might be related to my personal history, which is that I have been focusing on whether one can express algorithms as neural machines, and whether one can meaningfully speak about continuously deformable programs.

And, then, for Turing completeness one would want both unlimited number of steps and unbounded memory, and there has been a rather involved debate on whether RNNs are more like Turing complete programs, or are they, in practice, only similar to finite automata. (It's a long topic, on which there is more to say.)

So, from this viewpoint, a machine with a fixed finite number of steps seems very limited.

But autoregressive Transformers are not machines with a fixed finite number of steps, they just commit to emitting a token after a fixed number of steps, but they can continue in an unbounded fashion, so they are very similar to RNNs in this sense.

dxu:

I’ll bite even further, and ask for the concept of “recurrence” itself to be dumbed down. What is “recurrence”, why is it important, and in what sense does e.g. a feedforward network hooked up to something like MCTS not qualify as relevantly “recurrent”?

mishka:

"Hooked up to something" might make a difference.

(To me one important aspect is whether computation is fundamentally limited to a fixed number of steps vs. having a potentially unbounded loop.

The autoregressive version is an interesting compromise: it's a fixed number of steps per token, but the answer can unfold in an unbounded fashion.

An interesting tidbit here is that for traditional RNNs it is one loop iteration per input token, but in autoregressive Transformers it is one loop iteration per output token.)

GeneSmith:

It's not clear whether that will mean the end of humanity in the sense of the systems we've created destroying us. It's not clear if that's the case, but it's certainly conceivable. If not, it also just renders humanity a very small phenomenon compared to something else that is far more intelligent and will become incomprehensible to us, as incomprehensible to us as we are to cockroaches.

Q: That's an interesting thought. [nervous laughter]

Hofstadter: Well, I don't think it's interesting. I think it's terrifying. I hate it. I think about it practically all the time, every single day. [Q: Wow.] And it overwhelms me and depresses me in a way that I haven't been depressed for a very long time.

I don't think I've ever seen a better description of how I feel about the coming creation of artificial superintelligence. I find myself returning over and over again to that post by benkuhn, "Staring into the abyss as a core life skill". I think that is going to become a necessary core life skill for almost everyone in the coming years.

It has been morbidly gratifying to see more and more people develop the same feelings about AI as I have had for about a year now. Like validation in the worst possible way. I think if people actually understood what was coming there would be a near total call to ban improvements in this technology and only allow advancement under very strict conditions. But almost no one has really thought through the consequences of making a general purpose replacement for human beings.

dr_s:

Yeah. I particularly hate the handwavium which makes this sound like it's super simple, just make the ASI, have it churn out labour for us, surely human society will adapt just nicely to the new state of things and chill. It's easy to say this if you think you're going to be the one in charge of the ASI because you're a CEO or big shot in some company (you may still be vastly overestimating your chances of controlling the ASI of course but that's just plain old hubris). But not so easy to believe it if instead you're more the kind of person who usually gets the short end of the stick. Like, as much as we may celebrate automation, it DOES often have hugely disruptive effects. There's a lot of pain hidden in that "some jobs are destroyed but others are created so it evens out". And that's not even a fraction of the pain that would be possible if ALL jobs were destroyed and never replaced and we would somehow have to find a way to deal with that.

And that's, again, just the most optimistic scenario. The more pessimistic one is more along the lines of "you're a megatherium and these strange new hairless apes with sticks have started flooding in from the north".

The clip of the most touching part of the interview: "Well, I don't think it's interesting. I think it's terrifying. I hate it. I think about it practically all the time, every single day. And it overwhelms me and depresses me in a way that I haven't been depressed for a very long time."

 

JoshuaFox:

At the time of Hofstadter's Singularity Summit talk, I wondered why he wasn't "getting with the program", and it became clear he was a mysterian: he believed -- without being a dualist -- that some things, like the mind, are ultimately, basically, essentially, impossible to understand or describe.

This 2023 interview shows that the new generation of AI has done more than change his mind about the potential of AI: it has struck at the core of his mysterianism:

the human mind is not so mysterious and complex and impenetrably complex as I imagined it was when I was writing Gödel, Escher, Bach and writing I Am a Strange Loop.

He was only a de facto mysterian: he thought the mind is so complicated that it may as well be mysterious (but of course he believed it's ultimately just physics). This position is updateable, and he clearly updated.

Gwern's comment makes it clear to me that Hofstadter has never been a mysterian. 

It has become slightly more plausible that Melanie Mitchell could come around. 

gwern:

But only slightly. It appears that Hofstadter's doubts have been building for a long time in private, even to the point of organizing informal conferences/meetings about it, to an extent that his op-eds don't convey (compare his comments in OP to his comments published in the Atlantic just a week before! They are so drastically different I was wondering if this was some sort of bizarre deepfake prank, but some cursory searching made it seem legit, no one like Mitchell was saying it was fake, and the text sounds like Hofstadter). On Twitter, John Teets helpfully notes that Mitchell has a 2019 book, Artificial Intelligence: A Guide for Thinking Humans, in which she records some private Hofstadter material I was unfamiliar with:

Prologue: Terrified ...The meeting, in May 2014, had been organized by Blaise Agüera y Arcas, a young computer scientist who had recently left a top position at Microsoft to help lead Google’s machine intelligence effort...The meeting was happening so that a group of select Google AI researchers could hear from and converse with Douglas Hofstadter, a legend in AI and the author of a famous book cryptically titled Gödel, Escher, Bach: an Eternal Golden Braid, or more succinctly, GEB (pronounced “gee-ee-bee”). If you’re a computer scientist, or a computer enthusiast, it’s likely you’ve heard of it, or read it, or tried to read it...Chess and the First Seed of Doubt: The group in the hard-to-locate conference room consisted of about 20 Google engineers (plus Douglas Hofstadter and myself), all of whom were members of various Google AI teams. The meeting started with the usual going around the room and having people introduce themselves. Several noted that their own careers in AI had been spurred by reading GEB at a young age. They were all excited and curious to hear what the legendary Hofstadter would say about AI.

Then Hofstadter got up to speak. “I have some remarks about AI research in general, and here at Google in particular.” His voice became passionate. “I am terrified. Terrified.”

Hofstadter went on. [2. In the following sections, quotations from Douglas Hofstadter are from a follow-up interview I did with him after the Google meeting; the quotations accurately capture the content and tone of his remarks to the Google group.] He described how, when he first started working on AI in the 1970s, it was an exciting prospect but seemed so far from being realized that there was no “danger on the horizon, no sense of it actually happening.” Creating machines with human-like intelligence was a profound intellectual adventure, a long-term research project whose fruition, it had been said, lay at least “one hundred Nobel prizes away.” [Jack Schwartz, quoted in G.-C. Rota, Indiscrete Thoughts (Boston: Birkhäuser, 1997), pg. 22.] Hofstadter believed AI was possible in principle: “The ‘enemy’ were people like John Searle, Hubert Dreyfus, and other skeptics, who were saying it was impossible. They did not understand that a brain is a hunk of matter that obeys physical law and the computer can simulate anything … the level of neurons, neurotransmitters, et cetera. In theory, it can be done.” Indeed, Hofstadter’s ideas about simulating intelligence at various levels---from neurons to consciousness---were discussed at length in GEB and had been the focus of his own research for decades. But in practice, until recently, it seemed to Hofstadter that general “human-level” AI had no chance of occurring in his (or even his children’s) lifetime, so he didn’t worry much about it.

Near the end of GEB, Hofstadter had listed “10 Questions and Speculations” about artificial intelligence. Here’s one of them: “Will there be chess programs that can beat anyone?” Hofstadter’s speculation was “no.” “There may be programs which can beat anyone at chess, but they will not be exclusively chess players. They will be programs of general intelligence.”

At the Google meeting in 2014, Hofstadter admitted that he had been “dead wrong.” The rapid improvement in chess programs in the 1980s and ’90s had sown the first seed of doubt in his appraisal of AI’s short-term prospects. Although the AI pioneer Herbert Simon had predicted in 1957 that a chess program would be world champion “within 10 years”, by the mid-1970s, when Hofstadter was writing GEB, the best computer chess programs played only at the level of a good (but not great) amateur. Hofstadter had befriended Eliot Hearst, a chess champion and psychology professor who had written extensively on how human chess experts differ from computer chess programs. Experiments showed that expert human players rely on quick recognition of patterns on the chessboard to decide on a move rather than the extensive brute-force look-ahead search that all chess programs use. During a game, the best human players can perceive a configuration of pieces as a particular “kind of position” that requires a certain “kind of strategy.” That is, these players can quickly recognize particular configurations and strategies as instances of higher-level concepts. Hearst argued that without such a general ability to perceive patterns and recognize abstract concepts, chess programs would never reach the level of the best humans. Hofstadter was persuaded by Hearst’s arguments.

However, in the 1980s and ’90s, computer chess saw a big jump in improvement, mostly due to the steep increase in computer speed. The best programs still played in a very unhuman way: performing extensive look-ahead to decide on the next move. By the mid-1990s, IBM’s Deep Blue machine, with specialized hardware for playing chess, had reached the Grandmaster level, and in 1997 the program defeated the reigning world chess champion, Garry Kasparov, in a 6-game match. Chess mastery, once seen as a pinnacle of human intelligence, had succumbed to a brute-force approach.

Music: The Bastion of Humanity... Hofstadter had been wrong about chess, but he still stood by the other speculations in GEB...Hofstadter described this speculation as “one of the most important parts of GEB---I would have staked my life on it.”

I sat down at my piano and I played one of EMI’s mazurkas “in the style of Chopin.” It didn’t sound exactly like Chopin, but it sounded enough like Chopin, and like coherent music, that I just felt deeply troubled.

Hofstadter then recounted a lecture he gave at the prestigious Eastman School of Music, in Rochester, New York. After describing EMI, Hofstadter had asked the Eastman audience---including several music theory and composition faculty---to guess which of two pieces a pianist played for them was a (little-known) mazurka by Chopin and which had been composed by EMI. As one audience member described later, “The first mazurka had grace and charm, but not ‘true-Chopin’ degrees of invention and large-scale fluidity … The second was clearly the genuine Chopin, with a lyrical melody; large-scale, graceful chromatic modulations; and a natural, balanced form.” [6. Quoted in D. R. Hofstadter, “Staring Emmy Straight in the Eye—and Doing My Best Not to Flinch,”† in Creativity, Cognition, and Knowledge, ed. T. Dartnall (Westport, Conn.: Praeger, 2002), 67–100.] Many of the faculty agreed and, to Hofstadter’s shock, voted EMI for the first piece and “real-Chopin” for the second piece. The correct answers were the reverse.

In the Google conference room, Hofstadter paused, peering into our faces. No one said a word. At last he went on. “I was terrified by EMI. Terrified. I hated it, and was extremely threatened by it. It was threatening to destroy what I most cherished about humanity. I think EMI was the most quintessential example of the fears that I have about artificial intelligence.”

Google and the Singularity: Hofstadter then spoke of his deep ambivalence about what Google itself was trying to accomplish in AI---self-driving cars, speech recognition, natural-language understanding, translation between languages, computer-generated art, music composition, and more. Hofstadter’s worries were underlined by Google’s embrace of Ray Kurzweil and his vision of the Singularity, in which AI, empowered by its ability to improve itself and learn on its own, will quickly reach, and then exceed, human-level intelligence. Google, it seemed, was doing everything it could to accelerate that vision. While Hofstadter strongly doubted the premise of the Singularity, he admitted that Kurzweil’s predictions still disturbed him. “I was terrified by the scenarios. Very skeptical, but at the same time, I thought, maybe their timescale is off, but maybe they’re right. We’ll be completely caught off guard. We’ll think nothing is happening and all of a sudden, before we know it, computers will be smarter than us.” If this actually happens, “we will be superseded. We will be relics. We will be left in the dust. Maybe this is going to happen, but I don’t want it to happen soon. I don’t want my children to be left in the dust.”

Hofstadter ended his talk with a direct reference to the very Google engineers in that room, all listening intently: “I find it very scary, very troubling, very sad, and I find it terrible, horrifying, bizarre, baffling, bewildering, that people are rushing ahead blindly and deliriously in creating these things.”

Why Is Hofstadter Terrified? I looked around the room. The audience appeared mystified, embarrassed even. To these Google AI researchers, none of this was the least bit terrifying. In fact, it was old news...Hofstadter’s terror was in response to something entirely different. It was not about AI becoming too smart, too invasive, too malicious, or even too useful. Instead, he was terrified that intelligence, creativity, emotions, and maybe even consciousness itself would be too easy to produce---that what he valued most in humanity would end up being nothing more than a “bag of tricks”, that a superficial set of brute-force algorithms could explain the human spirit.

As GEB made abundantly clear, Hofstadter firmly believes that the mind and all its characteristics emerge wholly from the physical substrate of the brain and the rest of the body, along with the body’s interaction with the physical world. There is nothing immaterial or incorporeal lurking there. The issue that worries him is really one of complexity. He fears that AI might show us that the human qualities we most value are disappointingly simple to mechanize. As Hofstadter explained to me after the meeting, here referring to Chopin, Bach, and other paragons of humanity, “If such minds of infinite subtlety and complexity and emotional depth could be trivialized by a small chip, it would destroy my sense of what humanity is about.”

...Several of the Google researchers predicted that general human-level AI would likely emerge within the next 30 years, in large part due to Google’s own advances on the brain-inspired method of “deep learning.”

I left the meeting scratching my head in confusion. I knew that Hofstadter had been troubled by some of Kurzweil’s Singularity writings, but I had never before appreciated the degree of his emotion and anxiety. I also had known that Google was pushing hard on AI research, but I was startled by the optimism several people there expressed about how soon AI would reach a general “human” level. My own view had been that AI had progressed a lot in some narrow areas but was still nowhere close to having the broad, general intelligence of humans, and it would not get there in a century, let alone 30 years. And I had thought that people who believed otherwise were vastly underestimating the complexity of human intelligence. I had read Kurzweil’s books and had found them largely ridiculous. However, listening to all the comments at the meeting, from people I respected and admired, forced me to critically examine my own views. While assuming that these AI researchers underestimated humans, had I in turn underestimated the power and promise of current-day AI?

...Other prominent thinkers were pushing back. Yes, they said, we should make sure that AI programs are safe and don’t risk harming humans, but any reports of near-term superhuman AI are greatly exaggerated. The entrepreneur and activist Mitchell Kapor advised, “Human intelligence is a marvelous, subtle, and poorly understood phenomenon. There is no danger of duplicating it anytime soon.” The roboticist (and former director of MIT’s AI Lab) Rodney Brooks agreed, stating that we “grossly overestimate the capabilities of machines---those of today and of the next few decades.” The psychologist and AI researcher Gary Marcus went so far as to assert that in the quest to create “strong AI”---that is, general human-level AI---“there has been almost no progress.”

I could go on and on with dueling quotations. In short, what I found is that the field of AI is in turmoil. Either a huge amount of progress has been made, or almost none at all. Either we are within spitting distance of “true” AI, or it is centuries away. AI will solve all our problems, put us all out of a job, destroy the human race, or cheapen our humanity. It’s either a noble quest or “summoning the demon.”

That is, whatever the snarky "don't worry, it can't happen" tone of his public writings about DL since ~2010, Hofstadter has been saying these things in private for at least a decade*, starting somewhere around Deep Blue (which clearly falsified a major prediction of his), with his worries about the scaling paradigm intensifying ever since. Only one of the two paradigms can be true, and Hofstadter has finally flipped to the other one, with ChatGPT-3.5 and then GPT-4 apparently being the straws that broke the camel's back. Mitchell, however, heard all of this firsthand long before this podcast and appears (publicly) to be completely immune to Hofstadter's concerns, so I wouldn't expect it to change her mind.

* I wonder what other experts & elites have different views on AI than their public statements would lead you to believe?

† Hofstadter's semi-mysterian/irreducible-complexity view:

So . . . chess-playing fell to computers? I don't feel particularly threatened or upset; after all, sheer computation had decades earlier fallen to computers as well. So a computer had outdone <a href="https://en.wikipedia.org/wiki/Daniel_Shanks">Daniel Shanks</a> in the calculation of digits of π—did it matter? Did that achievement in any way lower human dignity? Of course not! It simply taught us that calculation is more mechanical than we had realized. Likewise, <a href="https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)">Deep Blue</a> taught us that chess is more mechanical than we had realized. These lessons serve as interesting pieces of information about various domains of expertise, but to my mind they hardly seem to threaten the notion, which I then cherished and which I still cherish, that human intelligence is extraordinarily profound and mysterious.

It is not, I hasten to add, that I am a mystic who thinks that intelligence intrinsically resists implantation in physical entities. To the contrary, I look upon brains themselves as very complex machines, and, unlike <a href="https://en.wikipedia.org/wiki/John_Searle">John Searle</a> and <a href="https://en.wikipedia.org/wiki/Roger_Penrose">Roger Penrose</a>, I have always maintained that the precise nature of the physicochemical substrate of thinking and consciousness is irrelevant. I can imagine silicon-based thought as easily as I can imagine carbon-based thought; I can imagine ideas and meanings and emotions and a first-person awareness of the world (an "inner light", a "ghost in the machine") emerging from electronic circuitry as easily as from proteins and nucleic acids. I simply have always run on faith that when "genuine artificial intelligence" (sorry for the oxymoron) finally arises, it will do so precisely because the same degree of complexity and the same overall kind of abstract mental architecture will have come to exist in a new kind of hardware. What I do not expect, however, is that full human intelligence will emerge from something far simpler, architecturally speaking, than a human brain.

Yitz:

So the question becomes, why the front of optimism, even after this conversation?

gwern:

Also on Twitter, Experimental Learning highlights a 2022-11-01 podcast by Melanie Mitchell which confirms the book description, "Increments Podcast: #45---4 Central Fallacies of AI Research (with Melanie Mitchell)":

    • Q: ...Okay, so I want to respect your time and not go too long over an hour, but I'd love to ask you some slightly more personal questions. One about Douglas Hofstadter.

      You open up your book with a very interesting anecdote about him giving a talk at Google and basically telling all the Google Engineers that AI could be calamitous, but not in the ways that we typically hear about. It seems like he was more worried about the possibility that we would succeed in building AI and this would mean that current approaches worked and would sort of trivialize human intelligence in some sense. We would lose the magic of our thinking.

      I'm wondering if you could tell us a bit more about that and then, you seem to have different concerns. I'm wondering about your journey of deviating from his thinking.

    • Melanie Mitchell: Interestingly, Hofstadter's worry stems from him reading some of Ray Kurzweil's books about the Singularity. They range between very science fiction-like to somewhat compelling arguments about technology and its exponential increase in progress. Hofstadter was pretty worried that something like what Kurzweil was describing might actually happen. He kept saying he didn't want this to happen in the lifetime of his children. He didn't want the human race to be made irrelevant because these machines are now much smarter, much more creative than humans. He didn't think it was going to happen but he was kind of worried about it. He actually organized two different conferences about this topic and he invited Ray Kurzweil and a bunch of other people. It was kind of an early version of what you might call the current things that we see on AI future predictions and AI alignment kind of stuff.

      That was one of the things that really worried him and this would have been in the 1980s or 1990s. He started organizing these conferences in the 1990s. Kurzweil had already come up with the singularity idea. I was a lot less worried about the singularity scenario for some reason. I just didn't see AI going in that direction at all. Kurzweil's arguments were all about hardware and the exponential increase in hardware. But clearly, software is different from hardware and doesn't follow the same exponential rules. Our knowledge and our ideas about how intelligence works in the biological world were not increasing exponentially and so I didn't see us being able to replicate biological AI anytime soon.

      There was actually a program at DARPA in the 90s where they were trying to recreate the intelligence of a cat using neural networks and it was a total failure. [Mitchell seems to misremember here and be referring to a 2008 program where $5m [$7.2m ~2023] was spent on an IBM spiking-neural net chip hardware project, DARPA’s System of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project, Ananthanarayanan et al 2009] It really impressed me that that's actually super hard and we're just so far away from that. How can we think that in just like 20–30 years we're going to get to human-level intelligence? Hofstadter and I have had a lot of discussions about this. He's obviously disturbed by these large language models and their behavior. He's not sure what to think, like a lot of us. You see their behavior and it's amazing at least some of the time but it's also you realize we don't have intuitions about how to deal with statistical models at that scale.

      I don't think of myself as an AI skeptic necessarily. I work in AI. I think it's a very hard problem but I feel like a lot of times people who criticize current approaches are kind of labeled as overall AI skeptics. I even saw an article that called people like that AI deniers. I don't like being labeled as an AI skeptic or AI critic because I do think that AI is really interesting and is going to produce a lot of really interesting insights and results. I just don't think it's going to be as easy to achieve something like human-level intelligence.

    • Q: I have a couple of questions about analogies because that's a huge area of your thought that I'd just love to know more about. I know that with Hofstadter, your PhD thesis was on analogy making. I had never thought of analogies as being particularly insightful into human intelligence but I'm curious what your interest in analogies is and what we all stand to learn from the study of analogy.

    • M. Mitchell: I think most people have a narrow view of what analogy means. We all take IQ tests or SATs that have these single-word analogies and those are a lot less interesting. But I think analogy is much broader than that. It's when we notice some kind of abstract similarity between two different situations or two different events. If you've ever had somebody tell you a little story about something that happened in their life and you say "oh the same thing happened to me", you're making an analogy. It's not the same thing but it's reminding you of something that is abstractly similar.

      In science, analogy in scientific invention is paramount. There's a nice article I saw today about this recent result from DeepMind about matrix multiplication. The idea was that the researchers saw that matrix multiplication could be mapped into a kind of game-playing framework and therefore they had these reinforcement systems that could be applied to that framework. But it was that initial leap of analogy that allowed them to apply these AI systems.

      We also are constantly in our language making analogies. One of Hofstadter's examples is anytime there's a scandal we say "oh it's another Watergate" or we call it something-gate. That's an analogy. That kind of thing is just all over the place. I think it's a really important part of transfer learning which is sort of the Holy Grail of machine learning. It's about learning something in one domain and applying it to another domain. That's really about making analogies. It's about saying what is it that I've learned that's important here that I can now apply to this new situation.

    • Q: I have one final question. Do you have advice for someone who wants to both be engaged in academic research but also sort of keep their head above the water and write general articles and be able to engage with a general audience? I'm impressed by the ability to be writing NeurIPS papers but also writing books for a general audience and articles. I'm curious whether you have any advice for someone who would like to do that and how you sort of resist the pressure to get caught in the academic rabbit hole where it's just like always the next paper because there's always something new to do. How do you balance that with writing a New York Times op-ed or something?

    • Mitchell: I haven't really published papers in NeurIPS and all those places as much as many people who are prominent in AI and machine learning. I try to have time to think which is often hard if you're under pressure all the time to publish the next paper. It's a challenge because if you're in an academic position and you're on a tenure track, there's all kinds of pressure to publish and get citations for your publications and publish in top-tier venues. Some of the people I know who are really successful at writing for the popular audience often aren't publishing as much in academic venues.

      Publishing for a popular audience is hard because imagine trying to explain your research to someone in your family who isn't a technical person. It's pretty hard to do. Learning how to do that is like learning how to have a theory of mind of people. I think that's also the key to being a good teacher. It's about having a theory of mind of the students and sort of knowing what they don't know and making sure that you address that. It's a challenge and it takes practice.

    • Q: Thank you so much for coming on the podcast. This was a very wonderful and enlightening conversation. Where can our audience find more of your work?
    • M M: They can go to my webpage which is MelanieMitchell.me or they can follow me on Twitter at @MelMitchell1.
    • Q: We'll put links into the show notes. I just want everyone to explore your work and enjoy it as much as we have.
    • M: Great, well thank you very much. It's been a lot of fun.

(The DARPA cat example is a weird one. If I'd heard of it, I'd long since forgotten it, and I'm not sure why she'd put so much weight on it; $5m was a drop in the bucket then for chip development - $5m often doesn't even cover simple NREs when it comes to chip designing/fabbing - especially compared to Blue Brain, and it's not like one could train useful spiking networks in the first place. I hope she doesn't really put as much weight on that as a reason to dismiss DL scaling as she seems to.)

I think the conferences Mitchell refers to are the same ones mentioned by Chalmers 2010:

...With some exceptions: discussions by academics include Bostrom (1998; 2003), Hanson (2008), Hofstadter (2005), and Moravec (1988; 1998). Hofstadter organized symposia on the prospect of superintelligent machines at Indiana University in 1999 and at Stanford University in 2000, and more recently, Bostrom’s Future of Humanity Institute at the University of Oxford has organized a number of relevant activities.

(The 1 April 2000 conference was covered at length by Ellen Ullman; it's an interesting piece, if only for showing how far the AI zeitgeist then was from now in 2023.)

Another Mitchell followup: https://www.science.org/doi/10.1126/science.adj5957 tldr: argues that LLMs still aren't intelligent and explains away everything they do as dataset contamination, bad benchmarks and sloppy evaluation, or shallow heuristics.

I heard something like this might be true for Yann also; like, allegedly being more worried about extinction-risk-from-AI in private, but then publicly doing the same snarky tweets.

25Hour:

This seems doubtful to me; if Yann truly believed that AI was an imminent extinction risk, or even thought it was credible, what would he be hoping to do or gain by ridiculing people who are similarly worried?

It often crosses my mind that public discourse about AI safety might not be useful. Tell men that AGI is powerful and they'll start trying harder to acquire it. Tell legislators, and perhaps Yann thinks they'll just start an arms race, complicate the work, and not do much else.

I wonder if that's what he's thinking.

That's also my confusion, yes.

I could imagine someone suppressing their alignment fears temporarily, to work their way up to a position of power in a capabilities lab and then steer outcomes from there.

But that doesn't seem to work, since:

  • The top AI capabilities labs (OpenAI, DeepMind, Anthropic) are more vocal about capabilities. Meta AI is a follow-the-leader lab anyway.
  • I don't think "bringing up concerns later, instead of now" is a strategically great way to do this. I don't know a ton about the politics of historical programs for e.g. atomic weapons and bioweapons. But based on my cursory knowledge, I don't think "be worried in secret" is anything like a slam-dunk for those situations.
  • Yann, specifically, is already the Chief AI Person at Meta/Facebook! Unless Meta is really quick to fire people (or Yann is angling for Zuckerberg's position), what more career capital could he gain at this stage?
TLK:

I wonder how many AI experts hold back their thoughts because they remember what happened to Copernicus when he presented that the Earth was not the center of the universe. Thank you for your post. I’m new here and am, therefore, not permitted to upvote it, but I would, if I could.

But one thing that has completely surprised me is that these LLMs and other systems like them are all feed-forward. It's like the firing of the neurons is going only in one direction. And I would never have thought that deep thinking could come out of a network that only goes in one direction, out of firing neurons in only one direction. And that doesn't make sense to me, but that just shows that I'm naive.

What was the argument that being feed-forward limited the potential for deep thought in principle? It makes sense that multi-directional nets could do more with fewer neurons, but Hofstadter seemed to think there were things a feed-forward system fundamentally couldn't do.

He explained a bunch of his position on this in Gödel, Escher, Bach. If I remember correctly, it describes the limits of primitive recursive and general recursive functions in chapter XIII. The basic idea (again, if I remember) is that a proof system can reason about itself if and only if it is general recursive. Lots of what we see that makes humanity special compared to computers has to do with people having feelings, emotions, and self-concepts, and reflecting on past situations & thoughts: all things that really seem to require deep levels of recursion (this is a far shallower statement than what's actually written in the book). It's strange to us, then, that ChatGPT can mimic those same outputs with the only recursive element of its thought being that it can pass 16 bits to its next running.

with the only recursive element of its thought being that it can pass 16 bits to its next running

I would name activations for all previous tokens as the relevant "element of thought" here that gets passed, and this can be gigabytes.

From how the quote looks, I think his gripe is with the possibility of in-context learning, where human-like learning happens without anything about how the network works (neither its weights nor previous token states) being ostensibly updated.

From how the quote looks, I think his gripe is with the possibility of in-context learning, where human-like learning happens without anything about how the network works (neither its weights nor previous token states) being ostensibly updated.

I don't understand this. Something is being updated when humans or LLMs learn, no?

For every token, model activations are computed once when the token is encountered and then never explicitly revised -> "only [seems like it] goes in one direction"
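The point can be made concrete with a toy numpy sketch (not any real architecture, and the "attention" here is just a hypothetical averaging stand-in): each token's activation is computed once, from a frozen cache of the earlier activations, and is never revised afterwards; the growing cache is the only "element of thought" carried forward, yet it lets later tokens depend on the whole context.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 16, 8
E = rng.normal(size=(VOCAB, DIM))   # token embedding table
W = rng.normal(size=(DIM, VOCAB))   # output projection

def step(cache, token):
    """One feed-forward pass for a single new token.

    The new activation is computed once, reading (but never modifying)
    the cache of earlier activations -- a crude stand-in for a
    transformer's KV cache -- then appended and frozen forever.
    """
    context = np.mean(cache, axis=0) if cache else np.zeros(DIM)
    h = np.tanh(E[token] + context)   # toy "attention" over the past
    cache.append(h)                   # cached once, never revised
    return int(np.argmax(h @ W))      # greedy next-token choice

cache, tokens = [], [3]               # arbitrary one-token prompt
for _ in range(6):
    tokens.append(step(cache, tokens[-1]))
```

Information flows strictly one direction: later tokens read the cached activations of earlier ones, but nothing ever propagates backward to revise them, which is why in-context learning can look like learning happening without any visible update to the network.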

David Brooks's 2023-07-13 NYT column covers this post's excerpts and the original podcast, and includes quotes/paraphrases of Brooks's phonecall interview with Hofstadter about it. "‘Human Beings Are Soon Going to Be Eclipsed’":

...So I was startled this month to see the following headline in one of the A.I. newsletters I subscribe to: “Douglas Hofstadter Changes His Mind on Deep Learning & A.I. Risk.” [possibly AI Supremacy] I followed the link to a podcast and heard Hofstadter say: “It’s a very traumatic experience when some of your most core beliefs about the world start collapsing. And especially when you think that human beings are soon going to be eclipsed.”

Apparently, in the five years since 2018, ChatGPT and its peers have radically altered Hofstadter’s thinking. He continues: It “just renders humanity a very small phenomenon compared to something else that is far more intelligent and will become incomprehensible to us, as incomprehensible to us as we are to cockroaches.”

I called Hofstadter to ask him what was going on. He shared his genuine alarm about humanity’s future. He said that ChatGPT was “jumping through hoops I would never have imagined it could. It’s just scaring the daylights out of me.” He added: “Almost every moment of every day, I’m jittery. I find myself lucky if I can be distracted by something — reading or writing or drawing or talking with friends. But it’s very hard for me to find any peace.”

Hofstadter has long argued that intelligence is the ability to look at a complex situation and find its essence. “Putting your finger on the essence of a situation means ignoring vast amounts about the situation and summarizing the essence in a terse way,” he said. Humans mostly do this through analogy. If you tell me that you didn’t read my column, and I tell you I don’t care because I didn’t want you to read it anyway, you’re going to think, “That guy is just bloated with sour grapes.” You have this category in your head, “sour grapes.” You’re comparing my behavior with all the other behaviors you’ve witnessed. I match the sour grapes category. You’ve derived an essence to explain my emotional state.

Two years ago, Hofstadter says, A.I. could not reliably perform this kind of thinking. But now it is performing this kind of thinking all the time. And if it can perform these tasks in ways that make sense, Hofstadter says, then how can we say it lacks understanding, or that it’s not thinking?

And if A.I. can do all this kind of thinking, Hofstadter concludes, then it is developing consciousness. He has long argued that consciousness comes in degrees and that if there’s thinking, there’s consciousness. A bee has one level of consciousness, a dog a higher level, an infant a higher level, and an adult a higher level still. “We’re approaching the stage when we’re going to have a hard time saying that this machine is totally unconscious. We’re going to have to grant it some degree of consciousness, some degree of aliveness,” he says.

Normally, when tech executives tell me A.I. will soon achieve general, human level intelligence, I silently think to myself: “This person may know tech, but he doesn’t really know human intelligence. He doesn’t understand how complex, vast and deep the human mind really is.”

But Hofstadter does understand the human mind — as well as anybody. He’s a humanist down to his bones, with a reverence for the mystery of human consciousness, who has written movingly about love and the deep interpenetration of souls. So his words carry weight. They shook me.


But so far he has not fully converted me. I still see these things as inanimate tools. On our call I tried to briefly counter Hofstadter by arguing that the bots are not really thinking; they’re just piggybacking on human thought. Starting as babies, we humans begin to build models of the world, and those models are informed by hard experiences and joyful experiences, emotional loss and delight, moral triumphs and moral failures — the mess of human life. A lot of the ensuing wisdom is stored deep in the unconscious recesses of our minds, but some of it is turned into language.

A.I. is capable of synthesizing these linguistic expressions, which humans have put on the internet and, thus, into its training base. But, I’d still argue, the machine is not having anything like a human learning experience. It’s playing on the surface with language, but the emotion-drenched process of learning from actual experience and the hard-earned accumulation of what we call wisdom are absent.

In a piece for The New Yorker, the computer scientist Jaron Lanier argued that A.I. is best thought of as “an innovative form of social collaboration.” It mashes up the linguistic expressions of human minds in ways that are structured enough to be useful, but it is not, Lanier argues, “the invention of a new mind.”

I think I still believe this limitationist view. But I confess I believe it a lot less fervently than I did last week. Hofstadter is essentially asking, If A.I. cogently solves intellectual problems, then who are you to say it’s not thinking? Maybe it’s more than just a mash-up of human expressions. Maybe it’s synthesizing human thought in ways that are genuinely creative, that are genuinely producing new categories and new thoughts. Perhaps the kind of thinking done by a disembodied machine that mostly encounters the world through language is radically different from the kind of thinking done by an embodied human mind, contained in a person who moves about in the actual world, but it is an intelligence of some kind, operating in some ways vastly faster and superior to our own. Besides, Hofstadter points out, these artificial brains are not constrained by the factors that limit human brains — like having to fit inside a skull. And, he emphasizes, they are improving at an astounding rate, while human intelligence isn’t. It’s hard to dismiss that argument.

I don’t know about you, but this is what life has been like for me since ChatGPT 3 was released. I find myself surrounded by radical uncertainty — uncertainty not only about where humanity is going but about what being human is. As soon as I begin to think I’m beginning to understand what’s happening, something surprising happens — the machines perform a new task, an authority figure changes his or her mind. Beset by unknowns, I get defensive and assertive. I find myself clinging to the deepest core of my being — the vast, mostly hidden realm of the mind from which emotions emerge, from which inspiration flows, from which our desires pulse — the subjective part of the human spirit that makes each of us ineluctably who we are. I want to build a wall around this sacred region and say: “This is the essence of being human. It is never going to be replicated by machine.” But then some technologist whispers: “Nope, it’s just neural nets all the way down. There’s nothing special in there. There’s nothing about you that can’t be surpassed.”

Some of the technologists seem oddly sanguine as they talk this way. At least Hofstadter is enough of a humanist to be horrified.

It's not clear whether that will mean the end of humanity in the sense of the systems we've created destroying us. It's not clear if that's the case, but it's certainly conceivable. If not, it also just renders humanity a very small phenomenon compared to something else that is far more intelligent and will become incomprehensible to us, as incomprehensible to us as we are to cockroaches.

It's interesting that he seems so in despair over this now. To the extent that he's worried about existential/catastrophic risks, I wonder if he is unaware of efforts to mitigate those, or if he is aware but thinks they are hopeless (or at least not guaranteed to succeed, which -- fair enough). To the extent that he's more broadly worried about human obsolescence (or anyway something more metaphysical), well, there are people trying to slow/stop AI, and others trying to enhance human capabilities -- maybe he's pessimistic about those efforts, too.

I am working on human capability enhancement via genetics. I think it's quite plausible that we could create humans smarter than any that have ever lived within a decade. But even I think that digital intelligence wins in the end.

Like it just seems obvious to me. The only reason I'm even working in the field is because I think that enhanced humans could play an extremely critical role in the development of aligned AI. Of course this requires time for them to grow up and do research, which we are increasingly short of. But in case AGI takes longer than projected or we get our act together and implement a ban on AI capabilities improvements until alignment is solved, it still seems worth continuing the work to me.

Hofstadter's long-time associate/friend/co-author, Daniel Dennett, has discussed Hofstadter's change of heart in a recent (December 2023?) Theories of Everything podcast/interview: https://www.youtube.com/watch?v=bH553zzjQlI&t=7195s

I have not watched it myself but AI Safety Memes has excerpted it as follows:

Legendary scholar @DanielDennett agrees w/Douglas Hofstadter, is “very distressed” by AI

Hofstadter: "[AI] is terrifying. I hate it. I think about it practically all the time, every single day”

“[Humanity] is about to be eclipsed and left in the dust”

“An oncoming tsunami that is going to catch all of humanity off guard.”

DENNETT: [The alignment problem is extremely hard]

“It would be like someone saying ‘I know the solution to the problem of Israel in the Arab world, the Palestinians. It’s simple.’ No, it isn’t. No, it isn’t. And if you think it is, that’s almost self-disqualifying.

That is such a complex issue. You have to know so much, and appreciate so much, and set aside so many misconceptions and oversimplifications to make any sense of it. And I think the alignment problem is like that.

If someone tells you they’ve got the alignment problem solved, that’s two strikes against them.

They are wildly optimistic. They say “we know how to write control architectures that prevent them from doing X Y and Z.”

Oh really? These systems are huge. They’re gigantic software entities. Has Microsoft or anyone ever invented a program remotely that size that didn’t have bugs in it? No. No, absolutely not.”

“Programs can get out of control. There is no magic bullet that is going to make such huge systems transparent.”

HN comments.

Ben Goertzel:

  1. Fascinating account by AI hero Douglas Hofstadter about his struggle with the realization that human-level AGI is near and probably ASI also [LW link to here]
  2. Fascinating both because someone as old and stubborn and genius as Hofstadter changing his mind is not that common, and because I think he's changing his mind to an excessive extent here.
  3. I.e. yes he was wrong in his view that "Singularity is far" ... however I don't think he's right that transformers can get to human-level AGI without an infusion of a bunch of cognitive-science-based architecture and strange-loopy stuff.
  4. I.e. I think a bunch of the ideas Hofstadter spent his life working on are actually going to be critical in getting from LLMs onward to human-level AGI.
  5. It seems he was previously thinking that DNNs were useless and totally off in the wrong direction from AGI, so now he's taken aback by recent developments more-so than those of us who always thought DNNs were cool but not the whole picture
  6. So now that he sees he was wrong in his underestimation of DNNs, instead of saying "Hmm OK maybe they're part of the story, but maybe I can use my lifetime of AI/cognitive insights to fill in the rest of the story" he's sort of throwing his hands up....
  7. I am reminded (though it's not a precise analogy mapping) of Dostoevsky's observation that the atheist and the evangelist are clustered closely together compared to the agnostic...
