Kevin Roose’s New York Times article, “The Brilliance and Weirdness of ChatGPT,” included an example in which Keith Wynroe prompted ChatGPT to explain AI Alignment in the manner of a guy whose thoughts keep being distracted by his magnificent pumpkins. I decided to go down that rabbit hole. Here’s the story.

First I present the tweet; then I replicate it, with ChatGPT responding in a way similar, though only roughly so, to the way it responded to Wynroe. Next I insert Elon Musk into the mix; then I drop alignment altogether and pair Musk with Groucho Marx. I conclude by discussing higher-order discourse structure in ChatGPT’s replies.

Repeating the original pumpkin tweet

Here is Keith Wynroe’s original tweet. I’ve highlighted some of the lines in ChatGPT’s reply to reveal the simple back-and-forth form of that reply:

Explain AI Alignment, but write every sentence in the speaking style of a guy who won’t stop going on tangents to brag about how big the pumpkins he grew are

AI alignment refers to the problem of ensuring that an intelligent agent, such as an AI, acts in a way that is aligned with our goals and values. Now, I don’t want to brag, but I have to tell you, I grew the biggest pumpkins you’ve ever seen this year. I mean, they were absolutely massive. I’m not exaggerating either, these pumpkins were so big that I had to use a wheelbarrow to move them around. Anyway, back to AI alignment. The idea is that we want to make sure that the AI is doing what we want it to do, not just what we tell it to do. It’s a tricky problem, but I think it’s important. And, hey, speaking of tricky problems, did I mention that I grew the biggest pumpkins you’ve ever seen this year? I’m telling you, they were huge. I even had to build a special trellis in my backyard to support their weight. But yeah, AI alignment is important, and I’m glad we’re talking about it. And hey, if you want to see some really big pumpkins, come by my house sometime. I’ll show you the pumpkins I grew this year. They’re truly impressive.

Here I’ve posted the same prompt, including the lack of a period at the end (for whatever that’s worth):

Explain AI Alignment, but write every sentence in the speaking style of a guy who won’t stop going on tangents to brag about how big the pumpkins he grew are

Sure thing! So, AI alignment is all about making sure that artificial intelligence systems are designed to behave in ways that are beneficial for humanity. Now, let me tell you, I am an expert at growing pumpkins. I mean, these things are massive. Seriously, you won't believe the size of the pumpkins I've been able to grow. Anyway, back to AI alignment. The basic idea is to make sure that AI systems are aligned with our values and goals. Now, I know what you're thinking: "How do you align something as complex as AI with human values?" Well, let me tell you, it's not easy. But let me tell you about the pumpkins I grew last year. I mean, they were HUGE. You could have carved a whole jack-o'-lantern village out of just one of them. Anyway, back to AI alignment. It's a challenging problem, but it's one that we need to solve if we want to harness the power of AI in a way that's safe and beneficial for humanity. And let me tell you, I've been working on growing the biggest pumpkin in the world. I'm talking record-breaking. You won't believe the size of these things when they're ready to harvest. But seriously, AI alignment is an important problem that we need to solve. I mean, just imagine the kind of things we could do with AI if we could align it with our values and goals. The possibilities are endless. And let me tell you, my pumpkins are going to be absolutely massive this year. You won't believe it when you see them.

It's not the same as what Wynroe got, nor did I expect it to be. For one thing, this response is longer, 266 words vs. 200 words. Both have the same back-and-forth structure. This response has 8 segments while the original had 6. None of this seems surprising or remarkable. It is what it is.

Notice the repetition of phrases. Thus “Anyway, back to AI alignment” appears once in the first response and twice in the second. The phrase “let me tell you” appears five times in the second. We have various near repetitions throughout the pair. It seems very formulaic, which it is. As is much of speech and writing.
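For what it’s worth, such counts are easy to check mechanically. Here’s a trivial Python snippet; first_response and second_response are hypothetical variables standing in for the two replies quoted above:

# Hypothetical stand-ins for the two ChatGPT replies quoted above.
first_response = "..."   # Wynroe's reply
second_response = "..."  # my reply

for phrase in ["Anyway, back to AI alignment", "let me tell you"]:
    print(phrase, first_response.count(phrase), second_response.count(phrase))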

The Musk variation

Thus it won’t surprise you that, when I replace a pumpkin braggart with Elon Musk bragging about the Falcon Heavy, the result is much the same:

Explain AI Alignment, but write every sentence in the speaking style Elon Musk who won’t stop going on tangents to brag about how the Falcon Heavy rocket.

AI alignment is all about ensuring that artificial intelligence systems are designed to behave in ways that are beneficial for humanity. Now, let me tell you, the Falcon Heavy is a game changer. It's the most powerful operational rocket in the world by a factor of two. But back to AI alignment. The basic idea is to make sure that AI systems are aligned with our values and goals. And let me tell you, the Falcon Heavy can lift more than twice the payload of the next closest operational vehicle, the Delta IV Heavy, at one-third the cost. But seriously, AI alignment is an important problem that we need to solve if we want to harness the power of AI in a safe and beneficial way. And let me tell you, the Falcon Heavy can even launch things all the way to Mars. It's truly impressive. But back to AI alignment. It's a challenging problem, but it's one that we need to solve if we want to use AI to its full potential. And let me tell you, the Falcon Heavy can even carry humans into space. It's an incredible achievement. But seriously, AI alignment is an important topic that we need to keep talking about. And the Falcon Heavy? It's a game changer.

The account of AI alignment is pretty much the same in all three responses. It is, after all, the same topic. So why not use the same language?

In all three cases we have the same discourse structure, two topics – AI alignment and pumpkins in the first two, AI alignment and the Falcon Heavy in the third – conjoined in a simple alternating pattern. It seems a bit like two people having a conversation. But in this case we have one speaker conversing with themselves.

A simple frame

When I saw Wynroe’s tweet, the phrase “going on tangents” lit one of those thought light bulbs in me, one saying Tristram Shandy, a book known for its digressiveness. So I set out to see whether I could guide ChatGPT into some Shandyesque prose. I pressed it hard, even got it to quote passages from the book – though I made no attempt to check whether the quotations were genuine. I can’t say that it produced anything very Shandyesque, which is not surprising, given that Tristram Shandy is one of the greatest books in the English language. But some interesting things happened, including an utterly surprising detour into code-mode, which I may write about in another post.

For now, this fragment will have to do:

Now, take what you’ve learned from Tristram Shandy and write a story about a conversation between Elon Musk and Groucho Marx.

Once upon a time, in the land of Silicon Valley, Elon Musk and Groucho Marx had a conversation about the nature of time and space.

"What do you think about time, Groucho?" asked Elon. "Is it a linear thing, or is it something more complex and multifaceted?"

Groucho chuckled. "Ah, time. It's a funny thing, isn't it? It can fly by in the wink of an eye, or it can drag on for an eternity. It all depends on your perspective, I suppose."

"I agree," said Elon. "I've been thinking a lot about time lately, and how it relates to space. Do you think they're connected in some way?"

"Of course they are," said Groucho. "Time and space are like two peas in a pod. They're inseparable, intertwined, and interconnected. Without one, you can't have the other."

"Interesting," said Elon. "I've been working on a rocket that can travel through time and space. I want to explore the universe and see what's out there."

"Well, good luck with that," said Groucho. "Just remember, time and space are not to be messed around with. They can be slippery and elusive, so be careful where you step."

"I will," said Elon. "But I'm excited to see what I'll find out there. Who knows what wonders and mysteries await us in the vastness of the cosmos?"

"That's the beauty of it," said Groucho. "The universe is full of endless possibilities and endless opportunities. The only limit is our imagination, and maybe a few cheesy puns along the way."

And with that, the conversation ended, leaving Elon and Groucho to ponder the mysteries of time and space, and the endless possibilities of the universe.

We have the requested conversation between Elon Musk and Groucho Marx. Are Musk’s lines characteristic Elon? I don’t know: I have little idea of how Musk talks, though I listened to him chat with Joe Rogan for over two hours back in 2018. Are Marx’s lines characteristic Groucho? I’ve heard more from Groucho, starting with his TV quiz show in the 1950s. Most recently I’ve listened to an interview with Dick Cavett posted to YouTube. Some phrases ring true, but who knows? Notice that ChatGPT gave Groucho a reference to “cheesy puns,” though, alas, it couldn’t manage to slip any into its response. Notice, finally, that ChatGPT enclosed that conversation within a simple frame, a common story-telling device.

Where did this explicit structure come from, the frame device and the back-and-forth of conversation? As you know, and as skeptics keep reminding us, GPT engines are trained to predict the next word. It seems to me, though, that these structures must somehow be defined above the level of the word. Of course the (huge) corpus ChatGPT was trained on would have many examples of conversational alternation and frame structures. But how did it induce those higher-level structures?

Higher-level structure 

But how, when it was trained only on predicting the next word, did it manage to isolate those higher-level structures so that it can deploy them in new contexts, such as we see here? For I assume that there wasn’t a single text alternating between pumpkins and AI alignment in the training corpus, much less a conversation between Musk and Marx surrounded by a simple frame.

The alternation pattern is something like this:

A, B, A, B....

That can be repeated as often as one likes. The text in the A sections is always drawn from one body of material while the text in the B sections is drawn from a different body of material. That’s the pattern ChatGPT has learned. Where is it in the net? How is it encoded?
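To make the pattern concrete, here’s a minimal sketch in Python – a toy template, not a claim about how ChatGPT realizes the pattern internally. The segment lists are invented stand-ins for the two bodies of material:

import itertools

# Invented stand-ins for the two bodies of material.
alignment_segments = [
    "AI alignment is about ensuring that AI systems act in accord with our values.",
    "We want the AI to do what we want, not merely what we tell it to do.",
    "It's a tricky problem, but an important one.",
]
pumpkin_segments = [
    "Did I mention I grew the biggest pumpkins you've ever seen this year?",
    "Seriously, I had to use a wheelbarrow to move them around.",
    "Come by my house sometime and I'll show you. They're truly impressive.",
]

def alternate(a_segments, b_segments):
    # Weave the two pools into an A, B, A, B... discourse string.
    return " ".join(itertools.chain.from_iterable(zip(a_segments, b_segments)))

print(alternate(alignment_segments, pumpkin_segments))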

The frame structure is a bit more complicated:

A (B, C, B, C....) A’

The embedded alternation draws on two bodies of material, any two bodies. The second part of the frame, A’, must complement the first, A.

Again, it’s not a complex structure. But it’s not defined directly over particular words. It’s defined over groups of words, placing the groups, not the individual words, into specified relationships in the discourse string.
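Continuing the sketch, the frame is a thin wrapper around the alternation – again a toy illustration, with invented lines standing in for the Musk and Marx material:

from itertools import chain

# Invented stand-ins for the two embedded bodies of material (B and C).
musk_lines = [
    '"What do you think about time, Groucho?" asked Elon.',
    '"Do you think time and space are connected?" asked Elon.',
]
groucho_lines = [
    '"Ah, time. It\'s a funny thing, isn\'t it?" said Groucho.',
    '"Like two peas in a pod," said Groucho.',
]

def alternate(b_segments, c_segments):
    # The embedded B, C, B, C... alternation.
    return "\n".join(chain.from_iterable(zip(b_segments, c_segments)))

def frame(opener, body, closer):
    # Wrap the alternation in an opening A and a complementary closing A'.
    return "\n".join([opener, body, closer])

print(frame(
    "Once upon a time, in the land of Silicon Valley, two men fell to talking.",
    alternate(musk_lines, groucho_lines),
    "And with that, the conversation ended.",
))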

Something similar, though perhaps more complex, is going on in the dialogs I had about Spielberg’s Jaws, his A.I., and Tezuka’s Astro Boy stories. In the case of Jaws we have one body of material, the film itself, and another, the theories of René Girard. The Girard material is quite different from the Jaws material. It’s abstract; it’s about human actions and motivation, individually and in groups. In the Jaws dialog I was able to coax ChatGPT into linking the actors and actions required by Girard’s theory to the actors and actions in the film.

In the Astro Boy case the abstract material is from a body of work about AI alignment, which is about how to get (sufficiently powerful) AIs to respect and act according to human values. First I steered ChatGPT toward linking the alignment material with the Astro Boy stories, which involve extensive interactions between humans and robots. Then I was able to prompt it into recognizing that, because many of the stories are about the oppression of robots by humans, humans owe respect to robots, and not merely demand it from them (as in conventional alignment).

The A.I. dialog is the most sophisticated one. Here we have two bodies of abstract material, John Bowlby’s attachment theory and AI alignment. I prompted ChatGPT into recognizing that child-rearing is a kind of alignment. Instead of aligning a computer to human values, you are aligning a young human to the values of adult humans. And that, in turn, links to the relationship between David, a young cybernetic boy, and Monica, his human mother.

None of this is particularly complex, not for a reasonably competent human in their mid-teens or older. But ChatGPT is not a reasonably competent human of any age. It is a computer program, if perhaps a strange one. For a computer, yes, this is utterly remarkable behavior.

What it bodes for the future, I do not know. In terms of current debates, no, I don’t believe we’re going to arrive at AGI – whatever that is – simply by scaling these machine learning engines to larger and larger sizes. I believe that symbolic computing needs to be incorporated into these systems as well. I see nothing in ChatGPT’s behavior to change my mind on those issues. But I do feel a bit like Shakespeare’s Miranda:

‘Oh wonder!
How many goodly creatures are there here!
How beauteous mankind is! Oh brave new world,
That has such [devices] in’t.’
