Bill Benzon

The Story of My Intellectual Life

In the early 1970s I discovered that “Kubla Khan” had a rich, marvelous, and fantastically symmetrical structure. I'd found myself intellectually. I knew what I was doing. I had a specific intellectual mission: to find the mechanisms behind “Kubla Khan.” As defined, that mission failed, and it still has not been achieved some 40-odd years later.

It's like this: If you set out to hitch rides from New York City to, say, Los Angeles, and don't make it, well then your hitch-hike adventure is a failure. But if you end up on Mars instead, just what kind of failure is that? Yeah, you’re lost. Really really lost. But you’re lost on Mars! How cool is that!

Of course, it might not actually be Mars. It might just be an abandoned set on a studio back lot.


That's a bit metaphorical. Let's just say I've read and thought about a lot of things having to do with the brain, mind, and culture, and published about them as well. I've written a bunch of academic articles and two general trade books, Visualization: The Second Computer Revolution (Harry Abrams, 1989), co-authored with Richard Friedhoff, and Beethoven's Anvil: Music in Mind and Culture (Basic Books, 2001). Here's what I say about myself at my blog, New Savanna. I've got a conventional CV at Academia.edu. I've also written a lot of stuff that I've not published in a conventional venue. I think of these as working papers. I've got them all at Academia.edu. Some of my best – certainly my most recent – stuff is there.

Sequences

Exploring the Digital Wilderness

Wiki Contributions

Comments

What one means by "memorize" is by no means self-evident. If you prompt ChatGPT with "To be, or not to be," it will return the whole soliloquy. Sometimes. Other times it will give you an opening chunk and then an explanation that that's the well-known soliloquy, etc. By poking around I discovered that I could elicit the soliloquy by giving it prompts consisting of syntactically coherent phrases, but if I gave it prompts that were not syntactically coherent, it didn't recognize the source, at least not until a bit more prompting. I've never found the idea that LLMs were just memorizing to be very plausible.

In any event, here's a bunch of experiments explicitly aimed at memorization, including the Hamlet soliloquy material: https://www.academia.edu/107318793/Discursive_Competence_in_ChatGPT_Part_2_Memory_for_Texts_Version_3
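For anyone who wants to try that kind of probe, here's a minimal sketch in Python using the OpenAI client library; the model name, the particular prompt fragments, and the framing instruction are my own illustrative choices, not taken from the experiments linked above.

# Probe whether a chat model completes a famous passage from coherent vs. scrambled fragments.
# Assumes the openai package is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

probes = [
    "To be, or not to be, that is the question:",  # syntactically coherent opening
    "be, or not to be, that is the",               # coherent mid-phrase fragment
    "question the is that be to not or",           # same words with the syntax scrambled
]

for fragment in probes:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": "Continue this text: " + fragment}],
    )
    print(fragment)
    print(response.choices[0].message.content[:200])  # first 200 characters of the reply
    print("---")

Comparing the coherent fragments against the scrambled one, and seeing whether the model names or continues the soliloquy, is the gist of the probe; the interesting cases are the ones where it takes extra prompting before it recognizes the source.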

I was assuming the relevant material appears in lots of places, widely spread. What I was curious about was a specific connection in the available data between the terms I used in my prompts and the levels of language. gwern's comment satisfies that concern.

By labeled data I simply mean that children's stories are likely to be identified as such in the data. Children's books are identified as children's books. Otherwise, how is the model to "know" what language is appropriate for children? Without some link between the language and a certain class of people, it's just more text. My prompt specifies 5-year-olds. How does the model connect that prompt with a specific kind of language?

Of course, but it does need to know what a definition is. There are certainly lots of dictionaries on the web. I'm willing to assume that some of them made it into the training data. And it needs to know that people of different ages use language at different levels of detail and abstraction. I think that requires labeled data, like children's stories labeled as such.

"Everyone" has known about holography since "forever." That's not the point of the article. Yevick's point is that there are two very different kinds of objects in the world and two very different kinds of computing regimes. One regime is well-suited for one kind of object while the other is well-suited for the other kind of object. Early AI tried to solve all problems with one kind of computing. Current AI is trying to solve all problems with a different kind of computing. If Yevick was right, then both approaches are inadequate. She may have been on to something and she may not have been. But as far as I know, no one has followed up on her insight. 

First I should say that I have little interest in the Frankenstein approach to AI, that is, AI as autonomous agents. I'm much more attracted to AI as intelligence augmentation (as advocated by Berkeley's Michael Jordan). For the most part I've been treating ChatGPT as an object of research, and so my interactions have been motivated by having it do things that give me clues about how it works, perhaps distant clues, but clues nonetheless. But I do other things with it, and on a few occasions I've gotten into a zone where some very interesting interactive story-telling comes about. ChatGPT's own story-telling abilities are rather pedestrian. I'm somewhat better, but the two of us, what fun we've had on occasion. Not sure how to reach that zone reliably, but I'm working on it.

Interesting. #4 looks like a hallucination.

Thanks.
