Bill Benzon

The Story of My Intellectual Life

In the early 1970s I discovered that “Kubla Khan” had a rich, marvelous, and fantastically symmetrical structure. I'd found myself intellectually. I knew what I was doing. I had a specific intellectual mission: to find the mechanisms behind “Kubla Khan.” As defined, that mission failed, and it still has not been achieved some 40-odd years later.

It's like this: If you set out to hitch rides from New York City to, say, Los Angeles, and don't make it, well then your hitch-hiking adventure is a failure. But if you end up on Mars instead, just what kind of failure is that? Yeah, you’re lost. Really, really lost. But you’re lost on Mars! How cool is that!

Of course, it might not actually be Mars. It might just be an abandoned set on a studio back lot.


Comments

Over at Marginal Revolution they're having a discussion about ChatGPT, and one person replied: "One of my kids just said to ask ChatGPT about what Bill Clinton has to do with Elden Ring. Huh?" So...

Do you have any idea whether your kid was merely curious or had a more specific motivation, like having heard that the question yields an interesting answer? In any event, as I'm curious about these things, I investigated. If you haven't used ChatGPT, you should know that once it has responded to a prompt, a button labeled "Try again" appears. If you press it, it erases the first response and provides another. I've enclosed my remarks in brackets. With that in mind:

What does Bill Clinton have to do with Elden Ring?
[It's pausing for an unusually long time before responding]

I'm sorry, but I'm not able to browse the internet or access current events, so I don't have any information on what Bill Clinton might have to do with Elden Ring. I'm a large language model trained by OpenAI, so my knowledge is limited to the text that I was trained on, which has a cutoff date of 2021. I can provide general information and answer questions within the scope of my training, but I'm not able to provide information on current events or specific individuals.

[2nd try, after a noticeably longer pause]

Bill Clinton has no connection to the video game Elden Ring.

[the 3rd try generated a pause so long that I reset the session, one of the options you have at any time] 

New Session:

What does Bill Clinton have to do with Elden Ring?

[using my smartphone stopwatch: 19 seconds]

I'm sorry, but I'm not familiar with any connection between Bill Clinton and Elden Ring. Elden Ring is a video game developed by FromSoftware and published by Bandai Namco Entertainment. It was released in January 2022, and it is set in a fantasy world created by Hidetaka Miyazaki, the creator of the popular video game series Dark Souls. I'm not aware of any connection between Bill Clinton and this game, as he is a former President of the United States and has no apparent connection to the video game industry.

[2nd try, 10 sec pause]

Bill Clinton has nothing to do with Elden Ring.

[3rd try: 32.35 sec pause]

There is no connection between Bill Clinton and Elden Ring. Elden Ring is a video game developed by Hidetaka Miyazaki and published by Bandai Namco, while Bill Clinton is a former President of the United States. 

At that point I concluded the experiment. It seems that under heavy load there may be long pauses before a response, or even an error message. FWIW, I did this at 8:36 AM Eastern Time on 12.9.2022. But I don't think we were dealing with an overload situation. It's the question itself; it seems rather obscure. Hence the varying response times and the fact that, in each session, response time increased on successive tries.

I haven't tried it, and repeating it could be tricky, given that ChatGPT is aware of the entire interaction. Some of those prompts occurred well into an interaction, like the first one, asking for puns from Groucho (Marx). On the other hand, the history-of-jazz query came at the beginning of a session.

  1. OK. The history of jazz prompt worked.

  2. I prompted it for some more lyrics to the elephant-fly song (BTW, from Disney's Dumbo). FWIW, here's the response (but without the line breaks):

I'm gonna go to that peanut stand Buy a bag and hold her hand We'll dance to the rhythm of a rubber band And watch that elephant soar Higher and higher than before We'll laugh and play and have some fun Until the day is done And when the night falls and the stars come out We'll lay beneath them and just dream about All the things we'll see and do When we're together, me and you

  1. The Charlie Parker prompt was accepted, but I got the standard "I am not able to browse the internet...."

  2. The Jelly Roll Morton prompt worked as well.

Yes. I haven't been able to get in all day. Others have had the same problem.

Very interesting. 

I have the general impression that, over the last two years, a number of enhancements to various LLMs have involved some version of having the LLM converse with itself or interact with another. One example: Antonia Creswell, Murray Shanahan, Faithful Reasoning Using Large Language Models. Abstract:

Although contemporary large language models (LMs) demonstrate impressive question-answering capabilities, their answers are typically the product of a single call to the model. This entails an unwelcome degree of opacity and compromises performance, especially on problems that are inherently multi-step. To address these limitations, we show how LMs can be made to perform faithful multi-step reasoning via a process whose causal structure mirrors the underlying logical structure of the problem. Our approach works by chaining together reasoning steps, where each step results from calls to two fine-tuned LMs, one for selection and one for inference, to produce a valid reasoning trace. Our method carries out a beam search through the space of reasoning traces to improve reasoning quality. We demonstrate the effectiveness of our model on multi-step logical deduction and scientific question-answering, showing that it outperforms baselines on final answer accuracy, and generates humanly interpretable reasoning traces whose validity can be checked by the user.
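The selection-inference loop the abstract describes can be sketched in a few lines. What follows is a hypothetical toy, not the paper's implementation: the two fine-tuned LMs are stubbed with rule-based functions, and the chain is greedy rather than the beam search over reasoning traces that the authors actually use.

```python
def select(context, question):
    # Stub for the "selection" LM: pick the facts that look relevant
    # to the question (here, crude keyword overlap).
    return [f for f in context if any(w in f for w in question.split())]


def infer(selected):
    # Stub for the "inference" LM: derive a new statement from the
    # selected facts (here, just conjoining them).
    return " and ".join(selected) if selected else ""


def reasoning_trace(context, question, steps=2):
    # Chain selection and inference: each step's conclusion is added to
    # the context so later steps can build on it, yielding a trace a
    # user could inspect step by step.
    trace = []
    for _ in range(steps):
        chosen = select(context, question)
        new_fact = infer(chosen)
        if not new_fact or new_fact in context:
            break
        trace.append((chosen, new_fact))
        context = context + [new_fact]
    return trace, context


facts = ["socrates is a man", "all men are mortal"]
trace, final = reasoning_trace(facts, "is socrates mortal")
```

The point of the structure, per the abstract, is that the causal chain of calls mirrors the logical structure of the problem, so each intermediate conclusion in `trace` can be checked by a human.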

I can't help remarking that the way humans acquire language is through dialog with others, and that we often carry on an inner dialog as well. We're always having thoughts, impulses, and desires that are out of "alignment" with social requirements, whether we're interacting with one or two interlocutors or speaking to or interacting within the context of society as a whole.

I'd like to offer a general observation about ontology. As far as I can tell, the concept entered computer science through Old School work on symbolic computation in AI. So you want to build a system that can represent all of human knowledge? OK. What are the primitive elements of such a system? What objects, events, and processes, along with the relations between them, do you need? That's your ontology. From there you generalize to any computing system: what are the primitives, and what can you construct from them?
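The bare bones of an ontology in that knowledge-representation sense can be sketched as a set of primitive categories and relations from which assertions are built. Everything below is invented for illustration; no real KR system is being quoted.

```python
# Hypothetical toy ontology: declare the primitive categories and
# relations up front, then only allow facts built from those primitives.

CATEGORIES = {"Object", "Event", "Process"}
RELATIONS = {"part_of", "causes", "instance_of"}

facts = set()


def assert_fact(subject, relation, obj):
    # Reject any assertion that uses a relation outside the ontology's
    # primitive vocabulary.
    if relation not in RELATIONS:
        raise ValueError(f"unknown relation: {relation}")
    facts.add((subject, relation, obj))


assert_fact("piston", "part_of", "engine")
assert_fact("spark", "causes", "combustion")
```

The design choice is the Old School one: the expressive power of the whole system is fixed by what you admit as primitives, which is exactly why choosing them well is hard.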

If you want to take a peek at the Old School literature, John Sowa offers one view. Note that this is not a general survey of the literature; it's one man's distillation of it. Sowa worked at IBM Research (at Armonk, I believe) for years.

I was interested in the problem, and still am, and did a variety of work. One of the things I did was write a short article on the "Ontology of Common Sense" for a Handbook of Metaphysics and Ontology, which you can find here:

The opening three paragraphs:

The ontology of common sense is the discipline which seeks to establish the categories which are used in everyday life to characterize objects and events. In everyday life steel bars and window panes are solid objects. For the scientist, the glass of the window pane is a liquid, and the solidity of both the window pane and the steel bar is illusory, since the space they occupy consists mostly of empty regions between the sub-atomic particles which constitute these objects. These facts, however, have no bearing on the ontological categories of common sense. Sub-atomic particles and solid liquids do not exist in the domain of common sense. Common sense employs different ontological categories from those used in the various specialized disciplines of science.

Similar examples of differences between common sense and scientific ontologies can be multiplied at will. The common sense world recognizes salt, which is defined in terms of its colour, shape, and, above all, taste. But the chemist deals with sodium chloride, a molecule consisting of sodium and chlorine atoms; taste has no existence in this world. To common sense, human beings are ontologically distinct from animals; we have language and reason, animals do not. To the biologist there is no such distinction; human beings are animals; language and reason evolved because they have survival value. Finally, consider the Morning Star and the Evening Star. Only by moving from the domain of common sense to the domain of astronomy can we assert that these stars are not stars at all, but simply different manifestations of the planet Venus.

In all of these cases the common sense world is organized in terms of one set of object categories, predicates, and events while the scientific accounts of the same phenomena are organized by different concepts. In his seminal discussion of natural kinds, Quine suggested that science evolves by replacing a biologically innate quality space, which gives rise to natural kinds (in our terms, the categories of a common sense ontology), with new quality spaces. However, Quine has little to say about just how scientific ontology evolved from common sense ontology.

I suspect that there's a lot of structure between raw sensory experience and common sense ontology and a lot more between that and the ontologies of various scientific disciplines. But, you know, I wouldn't be surprised if a skilled auto mechanic has their own ontology of cars, a lot of it primarily non-verbal and based on the feels and sounds of working on cars with your hands.

Here are my references, with brief annotations, which indicate something of the range of relevant work that's been done in the past:

Berlin, B., Breedlove, D., Raven, P. 1973. "General Principles of Classification and Nomenclature in Folk Biology," American Anthropologist, 75, 214 - 242. There's been quite a lot of work on folk taxonomy. In some ways it's parallel to the (more) formal taxonomies of modern biology. But there are differences as well.

Hayes, P. J. 1985. "The Second Naive Physics Manifesto," in Formal Theories of the Commonsense World, J. R. Hobbs and R. C. Moore, eds., Ablex Publishing Co., 1 - 36. A lot of work has been done in this area, including work on college students who may have ideas about Newtonian dynamics in their heads but play video games in a more Aristotelian way.

Keil, F. C. 1979. Semantic and Conceptual Development: An Ontological Perspective, Cambridge, Massachusetts and London, England: Harvard University Press. How children develop concepts.

Quine, W. V. 1969. "Natural Kinds," in Essays in Honor of Carl G. Hempel, Nicholas Rescher et al., eds., D. Reidel Publishing Co., 5 - 23. That is to say, are there natural kinds, or is it culture all the way down?

Rosch, E. et al. 1976. "Basic Objects in Natural Categories," Cognitive Psychology, 8, 382 - 439. A key text introducing something called prototype theory.

Sommers, F. 1963. "Types and Ontology," Philosophical Review, 72, 327 - 363. Do you know what philosophers mean by a category mistake? This is about the logic behind them.

If you think about how mere humans do things, we generate lots of tries, many or most of them dead ends or even dangers. We have to edit ourselves to get something really good. But then biological evolution is like that, isn't it?

I suppose that the dream of a super-intelligent AI is, among other things (perhaps), the dream of an engine that goes straight for the good stuff, never digressing, never making a false start, never even hinting at evil. I don't believe it. Alignment is messy, and always will be. And resistance is futile.

Hmmm... On Gibson, I'd read his last book, The Ecological Approach to Visual Perception (1979). I'd also look at his Wikipedia entry. You might also check out Donald Norman, a cognitive psychologist who adapted Gibson's ideas to industrial design, first at Apple and then as a private consultant.

On Gärdenfors, the two books are good. You should start with the 2000 book. But you might want to look at an article first: Peter Gärdenfors, "An Epigenetic Approach to Semantic Categories," IEEE Transactions on Cognitive and Developmental Systems, Volume 12, Issue 2, June 2020, 139 - 147. DOI: 10.1109/TCDS.2018.2833387 (sci-hub link: https://sci-hub.tw/10.1109/TCDS.2018.2833387). Here's a video of a recent talk, Peter Gärdenfors: Conceptual Spaces, Cognitive Semantics and Robotics: https://youtu.be/RAAuMT-K1vw

A couple of weeks ago I started blitzing my way through one of your posts on natural abstraction and, wham! it hit me: J.J. Gibson, ecological psychology. Are you familiar with that body of work? Gibson's idea was that the environment has affordances (he's the one who brought that word to prominence) which are natural "points of attachment" [my phrase] for perceptual processes. It seems to me that his affordances are the low-dimensional projections (or whatever) that are the loci of your natural abstractions. Gibson didn't have the kind of mathematical framework you're interested in, though I have the vague sense that some people who've been influenced by him have worked with complex dynamics.

And then there's the geometry of meaning Peter Gärdenfors has been developing: Conceptual Spaces, MIT 2000, and The Geometry of Meaning, MIT 2014. He argues that natural language semantics is organized into very low-dimensional conceptual spaces. It might have some clues about things to look for.
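One concrete piece of that geometry: Gärdenfors models a concept as a convex region of a low-dimensional quality space, and with one prototype point per category, classifying by nearest prototype carves the space into a Voronoi tessellation whose cells are convex. A toy sketch, with made-up coordinates standing in for real quality dimensions:

```python
import math

# Toy conceptual space: each category is represented by a prototype
# point in a 2-D quality space. The coordinates are invented purely
# for illustration.
prototypes = {
    "red":   (1.0, 0.0),
    "green": (0.0, 1.0),
    "blue":  (0.0, 0.0),
}


def categorize(point):
    """Assign a point in the quality space to its nearest prototype."""
    return min(prototypes, key=lambda c: math.dist(point, prototypes[c]))


print(categorize((0.9, 0.1)))  # prints "red": nearest to the red prototype
```

The convexity of the resulting regions is the interesting property: anything between two instances of a category is itself an instance, which is Gärdenfors's proposed constraint on what counts as a natural concept.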

I'm beginning to think, yes, it's easy enough to get ChatGPT to say things that are variously dumb, malicious, and silly. Though I haven't played that game (much), I'm reaching the conclusion that LLM Whac-A-Mole (モグラ退治) is a mug's game.

So what? That's just how it is. Any mind, or mind-like artifact (MLA), can be broken. That's just how minds, or MLAs, are.

Meanwhile, I've been having lots of fun playing a cooperative game with it: Give me a Girardian reading of Spielberg's Jaws. I'm writing an article about that, which should appear in 3 Quarks Daily this coming Monday.


So, think about it. How do human minds work? We all have thoughts and desires that we don't express to others, much less act on. ChatGPT is a rather "thin" creature, where to "think" it is to express it is to do it.

And how do human minds get "aligned"? It's a long process, one that, really, never ends, but is most intense for a person's first two decades. The process involves a lot of interaction with other people and is by no means perfect. If you want to create an artificial device with human powers of mentation, do you really think there's an easier way to achieve "alignment"? Do you really think that this "alignment" can be designed in?
