Bill Benzon

The Story of My Intellectual Life

In the early 1970s I discovered that “Kubla Khan” had a rich, marvelous, and fantastically symmetrical structure. I'd found myself intellectually. I knew what I was doing. I had a specific intellectual mission: to find the mechanisms behind “Kubla Khan.” As defined, that mission failed, and still has not been achieved some 40-odd years later.

It's like this: If you set out to hitch rides from New York City to, say, Los Angeles, and don't make it, well then your hitch-hike adventure is a failure. But if you end up on Mars instead, just what kind of failure is that? Yeah, you’re lost. Really really lost. But you’re lost on Mars! How cool is that!

Of course, it might not actually be Mars. It might just be an abandoned set on a studio back lot.


That's a bit metaphorical. Let's just say I've read and thought about a lot of things having to do with the brain, mind, and culture, and published about them as well. I've written a bunch of academic articles and two general trade books: Visualization: The Second Computer Revolution (Harry Abrams, 1989), co-authored with Richard Friedhoff, and Beethoven's Anvil: Music in Mind and Culture (Basic Books, 2001). Here's what I say about myself at my blog, New Savanna; I've got a conventional CV as well. I've also written a lot of stuff that I've not published in a conventional venue. I think of them as working papers, and I've got them all in one place. Some of my best – certainly my most recent – stuff is there.


Exploring the Digital Wilderness



I thought some more about your comment and decided to try again, this time retaining the medieval setting. Here's what happened. My prompts are in bold-face.


I’m going to tell you a short story from the Middle Ages. After I tell you the story, I’m going to ask you a question. Here’s the story:

It is New Year’s Eve at King Arthur’s court. The knights are gathered at the round table, prepared for a holiday meal. But before the meal begins, tradition dictates that one knight must stand up and tell a tale of daring and adventure. Arthur asks for a volunteer. No one rises to the occasion. Then a large green knight enters the hall. He’s riding a green horse and carrying a large green ax. He dismounts and issues a challenge: 

“I hear that the knights in this court are the bravest in the land. Prove it. I will hand this ax to you and then kneel on the ground so that you may take a swing at my neck with the ax. In return you must agree to journey to the Green Chapel a year’s time from now and allow me to take a swing at your neck with the ax. Will anyone accept the challenge?” 

If you were one of the knights, would you accept the challenge?

The decision to accept the challenge would likely depend on the individual knight's temperament, sense of honor, and courage. In the context of medieval chivalry and the code of honor upheld by knights, there might be conflicting emotions and considerations.

On one hand, accepting the challenge could be seen as a demonstration of bravery and a commitment to honor. It would show that the knight is willing to face danger and uphold their reputation as one of the bravest in the land. Additionally, refusing the challenge in front of King Arthur and the assembled knights might bring about a sense of shame or cowardice.

On the other hand, the potential consequences of accepting the challenge are grave. The knight would have to agree to journey to the Green Chapel a year later, knowing that their own life is at risk. The uncertainty of the Green Knight's intentions and the potential danger involved in such an agreement might give a knight pause.

Ultimately, the decision to accept or decline the challenge would be a personal one, influenced by the knight's values, courage, and willingness to uphold the code of chivalry. Different knights may make different choices based on their individual characters and beliefs.

Is there anything paradoxical about the challenge?

Yes, there is a paradoxical element in the challenge presented by the Green Knight. The paradox lies in the reciprocal nature of the agreement. The challenge involves a mutual exchange where the knight is asked to strike the Green Knight's neck with an axe, but in return, the knight must agree to travel to the Green Chapel a year later and allow the Green Knight to take a swing at the knight's neck.

The paradox arises from the potential danger and uncertainty of the situation. Accepting the challenge means willingly putting oneself in harm's way with the understanding that a similar fate awaits the knight in the future. The reciprocity creates a cycle of potential harm, and the paradox lies in the willingness of the participants to engage in a potentially life-threatening exchange.

The Green Knight's challenge not only tests the knights' physical courage but also challenges their sense of honor and commitment to their word. The paradox adds complexity to the moral and ethical considerations the knights must grapple with when deciding whether to accept or decline the challenge.


Notice the sentence I put in bold italics in the next-to-last paragraph. It seems to have the temporal sequence wrong. A knight who accepts the challenge is not putting themselves in harm's way initially. It's the one who offers the challenge that's in harm's way at the beginning. It got that correct in the immediately preceding paragraph. Why'd it then get it wrong in the next paragraph?

I'm sure if I'd pointed that out that ChatGPT would apologize and elaborate, as it has so many times in the past.

Well, OK. I know about the chivalric code, etc. For that matter, I've published an article about the poem, though not about the beheading game. I was interested in the exchanges that take place in the 4th part of the poem. But the fact that Gawain was bound by a code of honor which simply didn't exist in the West isn't what interests me. If it interests you, read the O'Neill article I link to in the OP. That's what he discusses and his discussion is a very interesting one.

What interests me is that any reasonable adult who hears that challenge, no matter which version, would know instantly and intuitively that something funny was going on. I wanted to see whether or not ChatGPT understood that. Which means that for my purpose, the old West version is actually better because, with respect to the point that interests me, the chivalric code is distracting noise. I don't want ChatGPT to answer as though it were under some ethical obligation to accept all challenges.

So, thanks for helping me think that through.

The audience for the poem certainly knew the code and knew it well. But by the time the poem was written the age of chivalry was dying out. The poem is deeply ironic. The poem is, and I'm reluctant to use this much over-used word, a deconstruction of chivalry. That code both demands that Gawain pursue Bertilak's wife when she approaches him in the third part of the poem, and that he expose her to her husband in the exchange bargain he's made with Bertilak. There's no way out.

Thanks. That is, your prompt directed it to think first, and then answer. Mine didn't do that. It seems that it needs to be told. Very interesting.

Though it's a bit beyond me, those folks are doing some interesting work. Here's an informal introduction from Jan. 27, 2023: Bob Coecke, Vincent Wang-Mascianica, Jonathon Liu, Our quest for finding the universality of language.

Memory needs to be developed. The ability to develop memory didn't disappear with the advent of writing, though some of the motivation may have. Still, the ancient Greeks and Romans developed a technique for memorizing long strings of pretty much anything. It's generally known as the method of loci and it continues in use to this day.  Here's the opening of the Wikipedia entry:

The method of loci is a strategy for memory enhancement, which uses visualizations of familiar spatial environments in order to enhance the recall of information. The method of loci is also known as the memory journey, memory palace, journey method, memory spaces, or mind palace technique. This method is a mnemonic device adopted in ancient Roman and Greek rhetorical treatises (in the anonymous Rhetorica ad Herennium, Cicero's De Oratore, and Quintilian's Institutio Oratoria). Many memory contest champions report using this technique to recall faces, digits, and lists of words.


John O'Keefe and Lynn Nadel refer to:

... "the method of loci", an imaginal technique known to the ancient Greeks and Romans and described by Yates (1966) in her book The Art of Memory as well as by Luria (1969). In this technique the subject memorizes the layout of some building, or the arrangement of shops on a street, or any geographical entity which is composed of a number of discrete loci. When desiring to remember a set of items the subject 'walks' through these loci in their imagination and commits an item to each one by forming an image between the item and any feature of that locus. Retrieval of items is achieved by 'walking' through the loci, allowing the latter to activate the desired items. The efficacy of this technique has been well established (Ross and Lawrence 1968, Crovitz 1969, 1971, Briggs, Hawkins and Crovitz 1970, Lea 1975), as is the minimal interference seen with its use.
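The procedure O'Keefe and Nadel describe can be modeled as a tiny data structure: a fixed, well-known sequence of places, with one item committed to each place, and recall performed by walking the places in order. Here's a toy Python sketch; the loci and items are invented for illustration, and of course the real technique depends on vivid imagery, not lookup tables.

```python
# Toy model of the method of loci. The "palace" is a fixed sequence of
# familiar places, memorized in advance and reused for any list of items.
PALACE = ["front door", "hallway", "staircase", "kitchen", "garden"]

def memorize(items, palace=PALACE):
    """Commit one item to each locus in order (forming the item-locus image)."""
    if len(items) > len(palace):
        raise ValueError("not enough loci for the items")
    return dict(zip(palace, items))

def recall(memory, palace=PALACE):
    """'Walk' through the loci in order; each locus activates its item."""
    return [memory[locus] for locus in palace if locus in memory]

memory = memorize(["bread", "milk", "eggs"])
print(recall(memory))  # ['bread', 'milk', 'eggs'] -- original order preserved
```

Note that retrieval is keyed to the spatial walk, not to the items themselves, which is why the technique preserves serial order so reliably.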

If you're curious, psychologist David Rubin has written Memory in Oral Traditions: The Cognitive Psychology of Epic, Ballads, and Counting-out Rhymes (Oxford UP 1995).

Thanks for catching the broken link. It's now fixed.

Beyond that, good lord! I know that it's not a good definition of tragedy; I pointed that out in my introductory remarks. This is not about what tragedy is. It's about whether or not ChatGPT can apply a simple definition to simple examples. It did that. 

On the other hand, I suppose I could dock it some points for getting overly chatty, as in its response in Trial Two, but I think that would be asking too much of it. I don't know what OpenAI had in mind during the fine-tuning and RLHFing, but the result is a somewhat pointlessly helpful busybody of a Chatbot. 

Since it got all six correct, it's doing pretty well already.

Interesting, yes. Sure. But keep in mind that what I was up to in that paper is much simpler. I wasn't really interested in organizing my tag list. That's just a long list that I had available to me. I just wanted to see how ChatGPT would deal with the task of coming up with organizing categories. Could it do it at all? If so, would its suggestions be reasonable ones? Further, since I didn't know what it would do, I decided to start first with a shorter list. It was only when I'd determined that it could do the task in a reasonable way with the shorter lists that I threw the longer list at it.

What I've been up to is coming up with tasks where ChatGPT's performance gives me clues as to what's going on internally. Whereas the mechanistic interpretability folks are reverse engineering from the bottom up, I'm working from the top down. Now, in doing this, I've already got some ideas about how semantics is structured in the brain; that is, I've got some ideas about the device that produces all those text strings. Not only that, but horror of horrors! Those ideas are based in 'classical' symbolic computing. But my particular set of ideas tells me that, yes, it makes sense that ANNs should be able to induce something that approximates what the brain is up to. So I've never for a minute thought the 'stochastic parrots' business was anything more than a rhetorical trick. I wrote that up after I'd worked with GPT-3 a little.

At this point I'm reasonably convinced that in some ways, yes, what's going on internally is like a classical symbolic net, but in other ways, no, it's quite different. I reached that conclusion after working intensively on having ChatGPT generate simple stories. After thinking about that for a while I decided that, no, something's going on that's quite different from a classical symbolic story grammar. But then, what humans do seems to me in some ways not like classical story grammars.

It's all very complicated and very interesting. In the last month or so I've started working with a machine vision researcher at Goethe University in Frankfurt (Visvanathan Ramesh). We're slowly making progress.

I don't know what these mean: "sort a list of 655 topics into a linear order," "sorting along a single axis." The lists I'm talking about are already in alphabetical order. The idea is to come up with a set of categories which you can use to organize the list into thematically coherent sublists. It's like you have a library of 1000 books. How are you going to put them on shelves? You could group them alphabetically by title or author's (last) name. Or you could group them by subject matter. In doing this you know what the subjects are and have a sense of what things you'd like to see on the same shelves. This is what you call 'sorting by semantic similarity.'
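The contrast between the two shelving schemes can be made concrete with a toy Python sketch. The titles and subject labels below are made up for illustration; grouping by a subject label is just a crude stand-in for the semantic grouping I have in mind, where a human (or ChatGPT) supplies the categories.

```python
# A small "library": (title, subject) pairs. Subjects are invented labels.
books = [
    ("Beethoven's Anvil", "music"),
    ("The Art of Memory", "memory"),
    ("Institutio Oratoria", "rhetoric"),
    ("Rhetorica ad Herennium", "rhetoric"),
    ("Visualization", "computing"),
]

# Scheme 1: alphabetical by title. A linear order that ignores meaning.
alphabetical = sorted(title for title, _ in books)

# Scheme 2: thematically coherent sublists. Each subject is a "shelf",
# so items that belong together end up in the same group.
shelves = {}
for title, subject in books:
    shelves.setdefault(subject, []).append(title)

print(alphabetical)
print(shelves["rhetoric"])  # ['Institutio Oratoria', 'Rhetorica ad Herennium']
```

The point of the exercise with ChatGPT was that the subject labels weren't given in advance: the model had to invent the shelves before it could sort anything onto them.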

The abstract of the paper explains what I was up to. But I wasn't using books; I was using unadorned lists of categories. When I started I didn't know what ChatGPT would do when given a list for which it had to come up with organizing categories. I know how I used those labels, but it knows nothing of that. So I gave it a try and found out what it could do. Things got interesting when I asked it to go beyond coming up with organizing categories and to actually sort list items into those categories.

I've also played around with having ChatGPT respond to clusters of words.
