Cross posted from New Savanna.

Noam Chomsky was trotted out to write a dubious op-ed in The New York Times about large language models, and Scott Aaronson registered his displeasure at his blog, Shtetl-Optimized, in a post titled The False Promise of Chomskyism. A vigorous and sometimes-to-often insightful conversation ensued. I wrote four longish comments (so far). I’m reproducing two of them below, which are about meaning in LLMs.

Meaning in LLMs (there isn’t any) 
Comment #120 March 10th, 2023 at 2:26 pm

@Scott #85: Ah, that’s a relief. So:

But I think the important questions now shift, to ones like: how, exactly, does gradient descent on next-token prediction manage to converge on computational circuits that encode generative grammar, so well that GPT essentially never makes a grammatical error?

It’s not clear to me whether or not that’s important to linguistics generally, but it is certainly important for deep learning. My guess – and that’s all it is – is that if more people get working on the question, we can make good progress on answering it. It’s even possible that in, say, five years or so, people will no longer be saying LLMs are inscrutable black boxes. I’m not saying that we’ll fully understand what’s going on; only that we will understand a lot more than we do now and will be confident of making continuing progress.

Why do I believe that? I sense a stirring in the Force.

There’s that crazy-ass discussion at LessWrong that Eric Saund mentioned in #113. I mean, I wish that place weren’t so darned insular and insistent on doing everything themselves, but it is what it is. I don’t know whether you’ve seen Stephen Wolfram’s long article (and accompanying video), but it has some nice visualizations of the trajectory GPT-2 takes in completing sentences. Wolfram is thinking in terms of complex dynamics – he talks of “attractors” and “attractor basins” – and seems to be thinking of getting into it himself. I found a recent dissertation in Spain that’s about the need to interpret ANNs in terms of complex dynamics, which includes a review of an older literature on the subject. I think that’s going to be part of the story.

And a strange story it is. There is a very good reason why some people say that LLMs aren’t dealing with meaning despite the fact that they produce fluent prose on all kinds of subjects. If they aren’t dealing with meaning, then how can they produce the prose?

The fact is that the materials LLMs are trained on don’t themselves have any meaning.

How could I possibly say such a silly thing? They’re trained on texts just like any other texts. Of course they have meaning.

But texts do not in fact contain meaning within themselves. If they did, you’d be able to read texts in a foreign language and understand them perfectly. No, meaning exists in the heads of people who read texts. And that’s the only place meaning exists.

Words consist of word forms, which are physical, and meanings, which are mental. Word forms take the form of sound waves, graphical objects, physical gestures, and various other forms as well. In the digital world ASCII encoding is common. I believe that for machine learning purposes we use byte-pair encoding, whatever that is. The point is, there are no meanings there, anywhere. Just some physical signal.
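For what it’s worth, here’s a toy sketch of the byte-pair idea – my own illustration, not any production tokenizer. You repeatedly merge the most frequent adjacent pair of symbols, and what the engine finally sees is nothing but a list of integer IDs:

```python
# Toy byte-pair-encoding sketch (illustrative only, not a real tokenizer).
# The point: the output is just arbitrary integer IDs -- a physical signal, no meanings.
from collections import Counter

def toy_bpe(text: str, num_merges: int = 5):
    seq = list(text)  # start from individual characters
    for _ in range(num_merges):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]  # most frequent adjacent pair
        merged, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                merged.append(a + b)   # fuse the pair into one symbol
                i += 2
            else:
                merged.append(seq[i])
                i += 1
        seq = merged
    vocab = {sym: idx for idx, sym in enumerate(sorted(set(seq)))}
    return [vocab[s] for s in seq]

print(toy_bpe("the man rode the horse"))  # e.g. a short list of integers
```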

As a thought experiment, imagine that we transform every text string into a string of colored dots. We use a unique color for each word and are consistent across the whole collection of texts. What we have then is a bunch of one-dimensional visual objects. You can run all those colored strings through a transformer engine and end up with a model of the distribution of colored dots in dot-space. That model will be just like a language model. And it can be prompted in the same way, except that you have to use strings of colored dots.
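To make the thought experiment concrete, here’s a minimal sketch (the tiny corpus and the “colors” are made up for illustration). The transformation is just a consistent relabeling of the vocabulary, so every statistical regularity a language model learns from survives it untouched:

```python
# Map every word type to an arbitrary "color" token, consistently across the corpus.
# Co-occurrence statistics are preserved exactly; only the labels change.
import random

corpus = ["the man rode the horse", "the horse ate the hay"]

vocab = sorted({w for sent in corpus for w in sent.split()})
colors = [f"color_{i:03d}" for i in range(len(vocab))]
random.shuffle(colors)
word_to_color = dict(zip(vocab, colors))
color_to_word = {c: w for w, c in word_to_color.items()}

dot_corpus = [" ".join(word_to_color[w] for w in s.split()) for s in corpus]
print(dot_corpus)  # unreadable strings of "colored dots," statistically identical to the originals

# Reversing the mapping restores readable text -- the SHAZAM step.
print([" ".join(color_to_word[c] for c in s.split()) for s in dot_corpus])
```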

THAT’s what we have to understand.

As I’ve said, there’s no meaning in there anywhere. Just colored dots in a space of very high dimensionality.

And yet, if you replace those dots with the corresponding words...SHAZAM! You can read it. All of a sudden your brain induces meanings that were invisible when it was just strings of colored dots.

I spend a fair amount of time thinking about that in the paper I wrote when GPT-3 came out, GPT-3: Waterloo or Rubicon? Here be Dragons, though not in those terms. The central insight comes from Sydney Lamb, a first-generation computational linguist: If you conceive of language as existing in a relational network, then the meaning of a word is a function of its position in the network. I spend a bit of time unpacking that in the paper (particularly pp. 15–19) so there’s no point trying to summarize it here.

But if you think in those terms, then something like this

king – man + woman ≈ queen

is not startling. The fact is, when I first encountered that I WAS surprised for a second or two and then I thought, yeah, that makes sense. If you had asked me whether that sort of thing was possible before I had actually seen it done, I don’t know how I would have replied. But, given how I think about these things, I might have thought it possible.

In any event, it has happened, and I’m fine with it even if I can’t offer much more than sophisticated hand-waving and tap-dancing by way of explanation. I feel the same way about ChatGPT. I can’t explain it, but it is consistent with how I have come to think about the mind and cognition. I don’t see any reason why we can’t make good progress in figuring out what LLMs are up to. We just have to put our minds to the task and do the work.

B333 asks: how does meaning get in people’s heads anyway? 
Comment #135 March 10th, 2023 at 6:12 pm

@Bill Benzon 120

Ok, well if meaning isn’t in texts, but only in people’s heads, how does meaning get in people’s heads anyway? Mental events occur as physical processes in the brain, and one could well wonder how a physical process in the brain “means” or has the “content” of something external.

Language is highly patterned, and that pattern is an (imperfect) map of reality. “The man rode the horse” is a more likely sentence than “The horse rode the man” because humans actually ride horses, not vice versa. If we switched out words for colored dots those correspondences would still hold. So there is in fact an awful lot of information about reality encoded in raw text.

Meaning = intention + semanticity  
Comment #152 March 11th, 2023 at 7:58 am

@B333 #135: “...how does meaning get in people’s heads anyway?” From other people’s heads in various ways, one of which is language. The key concept is in your last sentence, “encoded.” For language to work, you have to know the code. If you can neither speak nor read Mandarin, that is, if you don’t know the code, then you have no access to meanings encoded in Mandarin.

Transformer engines don’t know the code of any of the languages deployed in the texts they train on. What they do is create a proxy for meaning by locating word forms at specific positions in a high-dimensional space. Given enough dimensions, those positions encode the relationality aspect of (word) meaning.

I have come to think of meaning as consisting of an intentional component and a semantic component. The semantic component in turn consists of a relational component and an adhesion component. (I discuss those three in an appendix to the dragons paper I linked in #120.)

Take this sentence: “John is absent today.” Spoken with one intonation pattern it means just what it says. But when you use a different intonation pattern, it functions as a question. The semanticity is the same in each case. This sentence: “That’s a bright idea.” With one intonation pattern it means just that. But if you use a different intonation pattern it means the idea is stupid.

Adhesion is what links a concept to the world. There are a lot of concepts about physical phenomena as apprehended by the senses. The adhesions of those concepts are thus specified by the sensory percepts. But there are a lot of concepts that are abstractly defined. You can’t see, hear, smell, taste or touch truth, beauty, love, or justice. But you can tell stories about all of them. Plato’s best-known dialog, Republic, is about justice.

And then we have salt, on the one hand, and NaCl on the other. Both are physical substances. Salt is defined by sensory impressions, with taste being the most important one. NaCl is abstractly defined in terms of a chemical theory that didn’t exist, I believe, until the 19th century. The notion of a molecule consisting of an atom of sodium and an atom of chlorine is quite abstract and took a long time and a lot of experimentation and observation to figure out. The observations had to be organized and disciplined by logic and mathematics. That’s a lot of conceptual machinery.

Note that not only are “salt” and “NaCl” defined differently, but they have different extensions in the world. NaCl is by definition a pure substance. Salt is not pure. It consists mostly of NaCl plus a variety of impurities. You pay more for salt that has just the right impurities and texture to make it artisanal.

Relationality is the relations that words have with one another. Pine, oak, maple, and palm are all kinds of trees. Trees grow and die. They can be chopped down and they can be burned. And so forth, through the whole vocabulary. These concepts have different kinds of relationships with one another – which have been well-studied in linguistics and in classical era symbolic models.

If each of those concepts is characterized by a vector with a sufficient number of components, they can be easily distinguished from one another in the vector space. And we can perform operations on them by working with vectors. Any number of techniques have been built on that insight going back to Gerard Salton’s work on document retrieval in the 1970s. Let’s say we have a collection of scientific articles. Let’s encode each abstract as a vector. One then queries the collection by issuing a natural language query which is also encoded as a vector. The query vector is then matched against the set of document vectors and the documents having the best matches are returned.
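Here’s a sketch in the spirit of that vector-space retrieval, using scikit-learn’s TF-IDF vectors and cosine similarity as stand-ins (my choices for illustration, not a description of Salton’s SMART system itself):

```python
# Encode abstracts and a query as vectors, then rank documents by cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Measurement of the Higgs boson mass in proton collisions.",
    "A grammar of Quechua verb morphology.",
    "Attractor dynamics in recurrent neural networks.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(abstracts)   # one vector per abstract

query_vector = vectorizer.transform(["mass of the Higgs boson"])
scores = cosine_similarity(query_vector, doc_vectors)[0]

for i in scores.argsort()[::-1]:                    # best matches first
    print(f"{scores[i]:.3f}  {abstracts[i]}")
```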

It turns out that if the vectors are large enough, you can produce a very convincing simulacrum of natural language. Welcome to the wonderful and potentially very useful world of contemporary LLMs.

[Caveat: from this point on I’m beginning to make this up off the top of my head. Sentence and discourse structure have been extensively studied, but I’m not attempting to do anything remotely resembling even the sketchiest of short accounts of that literature.]

Let’s go back to the idea of encoding the relational aspect of word meaning as points in a high-dimensional space. When we speak or write, we “take a walk” through that space and emit that path as a string, a one-dimensional list of tokens. The listener or reader then has to take in that one-dimensional list and map the tokens to the appropriate locations in relational semantic space. How is that possible?

Syntax is a big part of the story. The words in a sentence play different roles and so are easy to distinguish from one another. Various syntactic devices – word order, the uses of suffixes and prefixes, function words (articles and prepositions) – help us to assemble them in the right configuration so as to preserve the meaning.

Things are different above the sentence level. The proper ordering of sentences is a big part of it. If you take a perfectly coherent chunk of text and scramble the order of the sentences, it becomes unintelligible. There are more specific devices as well, such as conventions for pronominal reference.

A quantitative relationship between concepts, dimensions, and token strings

Now, it seems to me that we’d like to have a way of thinking about quantitative relationships [at this point my temperature parameter is moving higher and higher] between 1) Concepts: the number of distinct concepts in a vocabulary, 2) Dimensions: the number of dimensions of the vector space in which you embed those concepts, and 3) Token strings: the number of tokens an engine needs to train on in order to map the tokens to the proper positions (i.e., types) in the vector space, so that they are distinguished from one another and stand in the proper relationships.

What do I mean by “distinct concepts” & what about Descartes’ “clear and distinct ideas”? I don’t quite know. Can the relationality of words be resolved into orthogonal dimensions in vector space? I don’t know. But Peter Gärdenfors has been working on it and I’d recommend that people working on LLMs become familiar with his work: Conceptual Spaces: The Geometry of Thought (MIT 2000), The Geometry of Meaning: Semantics Based on Conceptual Spaces (MIT 2014). If you do a search on his name you’ll come up with a bunch of more recent papers.

And of course there is more to word meaning than what you’ll find in the dictionary, which is more or less what is captured in the vector space I’ve been describing to this point. Those “core” meanings are refined, modified, and extended in discourse. That gives us the distinction between semantic and episodic knowledge (which Eric Saund mentioned in #113). The language model has to deal with that as well. That means more parameters, lots more.

I have no idea what it’s going to take to figure out those relationships. But I don’t see why we can’t make substantial progress in a couple of years. Providing, of course, that people actually work on the problem.

Addendum: What about the adhesions of abstract concepts?
Added 3.12.23

Within the semanticity component of meaning I have distinguished between adhesion and relationality: “Adhesion is what links a concept to the world” and “relationality is the relations that words have with one another.” But what about the adhesion of words that are not directly defined in relation to the physical world? Since they are defined over other words, doesn’t their adhesion reduce to relationality?

Not really. Take David Hays’s standard example: Charity is when someone does something nice for someone else without thought of reward. Any story that satisfies the terms of that definition (“when someone does...reward”) is considered an act of charity. The adhesion of the definiendum, charity, is not with any of the words, either in the definiens or in any of the stories that satisfy the definiens, but with the pattern exhibited by the words. It’s the pattern that characterizes the connection to the world, not the individual words in stories or in the defining pattern.

Comments

I think this is just a 21st century version of dualism. There's two kinds of information, meaning and meaningless data. One can only exist in humans. Why? Humans are special. Why are humans special? Because we said so.

I said nothing about humans being special. This is an argument about LLMs, not about all possible artificial intelligences. Multi-modal devices have access to the physical world in a way that LLMs do not. That changes things.

I think that, particularly in the case of massively popular systems trained with RLHF (such as ChatGPT), these systems are "embodied" in cyberspace. They certainly are grounded in what humans want and don't want, which isn't the same as the physical world, but it's still a source of meaning.

Let me quote a passage from ChatGPT intimates a tantalizing future:

Here is my problem: I actually believe those old arguments about why machines can’t (possibly) think, the arguments from intention. That belief clashes with my appraisal of the behavior I’ve seen from ChatGPT in the last two months. Damn it! It looks like a duck.

How do I reconcile the two sides of that conflict? I don’t. Rather, I decide to hold the concept of thought, and a whole mess of allied concepts, in abeyance. I’m going to toss them between phenomenological brackets.

The people who insist that what these large language models are doing might as well be the work of thousands of drunken monkeys pounding on typewriters, they have no way of accounting for the coherent structure in ChatGPT’s output. Oh, they can pounce on its many mistakes – for it makes many – and say, “See, I told you, drunken monkeys!” But the AI boosters who insist that, yes, these guys can think, we’re on the way to AGI, they can’t tell us what’s going on either. All they can say is that the models are “opaque” – almost a term of art by now – so we don’t know what’s going on, but we’re working on it. And indeed they are.

In this context, “think” is just a label that tells us nothing about what humans are doing that machines are not. That denial does not point the way to knowledge about how to improve these systems – for they surely need improving. I conclude, then, for certain purposes, such as discriminating between human behavior and that of advanced artificial intelligence, the idea of thought has little intellectual value.

Let me be clear. I am not denying that people think; of course we do. Nor am I asserting that advanced AIs think. They (most likely) do not. But “to think” is an informal common-sense idea. It has no technical definition.[1] We are rapidly approaching an intellectual regime where the question of whether or not machines can think – reason, perceive, learn, feel, etc. – becomes a tractable technical issue. In this regime, common sense ideas about minds and mentation are at best of limited value. At worst, they are useless.

I take that as a sign that we are dealing with something new, really new. We have sailed into those waters where “Here be dragons” is written on the charts. It is time that we acknowledge that we don’t know what we’re doing, that the old ideas aren’t working very well, and get on with the business of creating new ones. Let us learn to fly with and talk with the dragons. It is time to be wild.

Perhaps we have to retire the concept of meaning as well. As far as I know it doesn't have a technical definition, so perhaps we shouldn't use it in technical conversations. What does "meaning" tell you about how LLMs work? Nothing. So what conceptual work is it doing? It seems to me it functions mostly as a way of keeping score in some ill-defined race to Mount AGI. But it doesn't help you run the race.


[1] Some years ago I constructed a definition of the informal concept, “to think,” within a cognitive network. See, William Benzon, Cognitive Networks and Literary Semantics, MLN 91: 1976, pp. 961-964. For a similar approach to the common-sense notion, see William Benzon, First Person: Neuro-Cognitive Notes on the Self in Life and in Fiction, PsyArt: A Hyperlink Journal for Psychological Study of the Arts, August 21, 2000, pp. 23-25, https://www.academia.edu/8331456/First_Person_Neuro-Cognitive_Notes_on_the_Self_in_Life_and_in_Fiction

[comment deleted]

I’ve looked at all those articles and read the abstracts, but not the articles themselves. It’s not obvious to me that they provide strong evidence against my view, which is more subtle than what you may have inferred from my post. 

I fully accept that words can give meaning to, can even be defined by, other words. That's the point of the distinction between relationality and adhesion as aspects of semanticity. The importance of the relational aspect of semanticity is central to my thinking, and is something I argue in some detail in GPT-3: Waterloo or Rubicon? Here be Dragons. At the same time I also believe that there is a significant set of words whose semanticity derives predominantly, perhaps even exclusively (I've not thought it through recently), from their adhesion to the physical world.

Without access to those adhesions the whole linguistic edifice is cut off from the world. Its relationality is fully intact and functioning. That’s what LLMs are running on. That they do so well on that basis is remarkable. But it’s not everything.

If you want to pursue this farther, I’ll make you a deal. I’ll read three of those articles if you read two of mine.

I’m particularly interested in the last two, convergence of language and vision, and language processing in humans and LLMs. You pick the third. For my two, read the Dragons piece I’ve linked to and an (old) article by David Hays, On "Alienation": An Essay in the Psycholinguistics of Science.  

In reply to B333's question, "...how does meaning get in people’s heads anyway?”, you state: From other people’s heads in various ways, one of which is language.

I feel you're dodging the question a bit.

Meaning has to have entered a subset of human minds at some point to be able to be communicated to other human minds. Could you hazard a guess as to how this could have happened, and why LLMs are barred from this process?

Human minds have life before language, they even have life before birth. But that's a side issue. The issue with LLMs is that they only have access to word forms. And language forms, by themselves, have no connection to the world. What LLMs can do is figure out the relationships between words as given in usage.

gjm:

A few thoughts.

1. We could do with more clarity on what constitutes "meaning". If you say "there is no meaning in LLMs" and someone else says "sure there is", for all I can tell this just indicates that you're using the word in different ways.

2. I think you want to say something like "meaning necessarily involves some sort of coupling with the actual external world, and a text on its own doesn't have that, especially one produced by an LLM". But I dispute that. LLMs are trained on a big corpus of text. That text is mostly written by humans. A lot of it is written by humans to describe the actual external world. (A lot more of it is written by humans to describe fictional versions of the external world; it seems to me that fiction has meaning in some reasonable sense, and I'm not sure how your notion of meaning deals with that.) I think that couples it enough to the actual external world for it to be reasonable to say that it has meaning, at least to some extent. (In some cases, the same sort of meaning as fiction has. In some cases, the same sort of meaning as it has when I say that cloudless daytime skies are usually blue, and for the same sort of reason. The coupling is maybe more indirect for the LLM's productions than for mine, at least in many cases, but why should that deprive it of meaning?)

3. You say that if we took all the world's texts and put them through some sort of transformation that consistently replaces words with coloured dots then there would be no meaning in there anywhere. I think that's utterly false.

In particular, imagine the following. We somehow procure a population of people basically similar to us, with their own civilization broadly like ours, but with no language in common with us and living in a different world so that they don't know any of the contingent details of ours. And we dump on them the entirety of human writing. Or, let's say for simplicity, the entirety of human writing-in-English. No pictures, no video, just text.

This is all in languages they don't know, encoded in writing systems they don't know. If I understand you correctly, you would want to say that with us out of the picture there's "no meaning" there. And yet I bet that these people would, after a while, be able to work out most of how the English language works, and many things about our world. They would have gained a lot of information from the big literature dump. Do you deny that? Or do you say that it's possible for a pile of writing with no meaning to transmit a huge amount of useful information? Or do you say that in the process that I might call "discovering the meaning of the texts" what actually happened was that meaning was created out of not-meaning, and it just happened to be roughly the meaning that we attach to those texts? Or what?

"Meaning" may well be a ruined word by this point, but I'm using it sorta' like certain philosophers use it. But I'm saying that we can analyze it into components. On of them is intention. Those philosophers tend to think that's all there is to meaning. I think they're wrong.

We've also got what I call semanticity, and it has two components, relationality and adhesion. LLMs are built on relationality. As far as I can tell, most of what you want for "meaning" is being done by relationality. The point of my colored dots transformation is that it removes adhesion while relationality remains. That's why LLMs work as well as they do. They are relational engines.

Note: In various discussions you'll read about inferential meaning and referential meaning. The former more or less corresponds to what I'm calling relationality and the latter more or less to adhesion.

gjm:

So, once again, my thought experiment. We take all the English-language text ever written, and we hand it to (hypothetical) humans who live in a different world and never learned English or anything like it.

Clearly that corpus has "relationality". Does it have "adhesion"?

If it does, then doesn't that mean your "colored dots transformation" doesn't destroy "adhesion"?

If it doesn't, then how does it come about that after those hypothetical humans have spent a while studying all those texts they are able to extract information about our world from them? (Or do you think they wouldn't be able to do that?)

It seems to me that "adhesion" -- which I think is what I called "coupling with the actual external world" -- comes in degrees, and that it can be transmitted indirectly. I can say things about the Higgs boson or the Quechua language without direct experience of either[1], and what I say can be meaningful and informative. For that to be so, what I say has to be grounded in actual contact with the world -- it has to be the case that I'd be saying different things if CERN's experiments had yielded different results -- but that can be true for LLMs just as it can for people.

[1] Except in whatever rather unhelpful sense everyone and everything in the universe has direct experience of the Higgs boson.

The processes by which real-world facts about the Higgs boson lead to bozos like me being able to say "the Higgs boson has a much larger rest mass than the proton", have some idea what that means, and almost certainly not be wrong, are (if I am understanding your terminology correctly) almost entirely "relational". An LLM can participate in similar processes and when it says the same things about the Higgs boson they don't necessarily mean less than when I do.

(With the present state of the art, the LLM understands much less about the Higgs boson than even a Higgs bozo like me. I am not claiming that things LLMs say should be treated just the same as things humans say. Perhaps at present LLMs' statements do mean less than humans' otherwise similar statements. But it's not simply because they're trained on text. A lot of things humans say that I don't think we'd want to declare meaningless are also the result of training-on-text.)

  1. I haven't the foggiest idea about your thought experiment. It's too complicated.
  2. Adhesion isn't something that word forms or texts or corpora possess. It's a process in some device, like a brain and nervous system, but it could be an artificial system as well. The same for relationality.
  3. Meaning isn't an ethereal substance that gets transported through etheric tubes embedded in language, though that's how we seem to talk about it. It needs to be understood as a property of specific processes and mechanisms.

gjm:

If my thought experiment is too complicated, then I think we have a problem; I don't expect it to be possible to talk usefully about this stuff with no thoughts that complicated or worse.

Of course meaning isn't an ethereal substance, and if I wrote something that suggests I think it is one then I apologize for my lack of clarity.

I took "adhesion" to be describing both a process (whereby utterances etc. get coupled to the real world) and the quality that things that have been shaped by that process have. I don't think anything I wrote depends on viewing it as quality rather than process.

What I'm trying to understand is what fundamental difference you see between the following two chains of processes.

  1. Scientists at CERN do complicated experiments whose results depend on the mass of the Higgs boson. They (and their computers) do the calculations necessary to estimate the mass of the Higgs boson. They write down the results, and information about those results propagates out among people whose own experience isn't so directly tied to the actual properties of the Higgs field as theirs. Eventually some several-steps-removed version of that information reaches me; I incorporate it into my (rather sketchy) understanding of fundamental physics, and later I say "the Higgs boson has a much larger rest mass than the proton".
  2. Scientists at CERN do complicated experiments whose results depend on the mass of the Higgs boson. They (and their computers) do the calculations necessary to estimate the mass of the Higgs boson. They write down the results, and information about those results propagates out among people whose own experience isn't so directly tied to the actual properties of the Higgs field as theirs. Eventually some several-steps-removed version of that information reaches an LLM's training data; it incorporates this into its model of what strings of words have high probability in the context of factual discussions of particle physics, and later in that context it emits the words "the Higgs boson has a much larger rest mass than the proton".

It seems to me that in both cases, the linkage to the real world comes at the start, via the equipment at CERN and the scientists working directly with that equipment and the data it produces. Everything after that is "relational". It's likely that I understand more about particle physics than today's LLMs (i.e., the mental structures I have into which e.g. the information about the Higgs boson gets incorporated are richer and more reflective of the actual world), but that isn't because my understanding is "adhesional" rather than "relational"; everything I know about particle physics I know very indirectly, and much of it is only at the level of having some ability to push words and symbols around in a hopefully-truth-preserving manner.

But -- if I'm understanding you right, which I might not be -- you reckon that something in chain-of-processes 2 means that when the LLM emits that statement about the Higgs boson, there is no "meaning" there (whatever exactly that, er, means), and I think you wouldn't say the same about chain-of-processes 1. But I don't understand why, and the relationality/adhesion distinction doesn't clarify it for me because it seems like that goes basically the same way in both cases.

"If my thought experiment is too complicated, then I think we have a problem; I don't expect it to be possible to talk usefully about this stuff with no thoughts that complicated or worse."

OK, then explain to me in some detail just how those people are going to figure out the meaning of all that text. They have no idea what any of it is. This piece of text may be a children's story, or a historical document, a collection of prayers, maybe a work of philosophy. But to them it's just a bunch of ...what? How are they going to identify the word for apple, assuming they have apples in their world? Can we assume they have apples in their world? For that matter, are we to assume that all this text is about a world like their own in every significant respect? Do they assume that? Why? How? What if their world doesn't have apple trees? How are they going to figure that out? What if their world has a kind of plant that doesn't exist in ours? How will they figure that out? How are they going to identify the word for Higgs boson?

How are they going to receive and organize this text, as an enormous pile of paper or as electronic files? Do the files present the text to them in visual symbols, or in ASCII? How do they organize and catalogue it? How many workers are assigned to the project? What kind of knowledge do they have? And on and on and on...

None of that seems simple to me. But if those folks are to make sense of all that text, all that needs to be specified.

gjm:

Ah, so "it's too complicated" meant "it describes a process involving many people doing difficult things" rather than "the thought experiment itself is too complicated"; fair enough.

Obviously I can only guess, never having been hundreds of people poring over millions of books trying to figure out what they mean. But I would think it would go something like this. (The following is long, but not nearly as long as the process would actually take; and handwavy, because as already mentioned I can only guess. But I think every part of it is pretty plausible.)

I am assuming that in the first instance what they have is printed books. Once they've figured out a crude model of the structure they will doubtless start digitizing, though obviously they are unlikely to end up with the ASCII code specifically. I would think that "a large subset of the writings of a whole other civilization" would be an incredibly appealing object of study, and there would be thousands of people on the job, mostly fairly loosely coordinated, with the majority being language experts of some sort but others getting involved out of curiosity or because a language expert thought "this looks like science, so let's ask some scientists what they make of it".

First, a superficial look at the texts. It would be pretty easy to figure out that the language is generally written in horizontal lines, each line left to right, lines running top to bottom; that it's mostly built out of a repertoire of ~30 symbols; that most likely the divisions imposed by whitespace are significant. It would be an obvious guess that those symbols correspond to sounds or modifications of sounds or something of the kind, and maybe with sufficient ingenuity it would be possible to make some reasonable guesses about which ones are which on the basis of which ones don't like to go together, but that isn't particularly important.

The things lying between whitespace -- call them "units" -- seem probably significant. (Maybe our hypothetical other-humans have languages with words. Quite likely. But maybe not. I'm using terminology that isn't too question-begging.) Across millions of books with (up to) millions of units in each, there's a somewhat limited repertoire of units; seems like the right sort of number for units to correspond broadly to concepts or things or something like that. Probably some of them are used to provide structure rather than to denote substantial concepts. Most likely the ones that are very short and very frequently repeated.

There's this weird thing where the units that are possible after one of those little dots are a bit different from the units that are possible in other contexts. The only difference seems to be in the first symbol. Ah, it's like there's a correspondence between these symbols and these symbols -- often one is just a bigger version of the other -- and you have to use the big ones after one of the little dots. Do the little dots maybe break things up into ... what? how long are the runs of units between little dots? Seems like maybe the right sort of length for each run to correspond to a proposition or a blargle[1] or a question or something.

[1] A term in the hypothetical-other-humans' linguistics that doesn't quite correspond to anything in our own.

OK, so we have units which maybe correspond to something like concepts. We have runs-of-units which maybe correspond to something like little ideas made out of these concepts. Let's take a look at short runs-of-units and see if we can work out anything about their structure. (And, more generally, at any patterns we can find in units and runs-of-units.) Well, there are these short units "I", "you", etc., and there's a large class of units that often have members of that set of short units in front of them. Most of these units also have the property that there are two versions, one with an "s" on the end and one without. Curiously, there's a different set of units with that same property, but they seem to play a different role because they seldom appear after I/you/... -- though they often appear after a/an/the... instead.

Probably some subset of the units correspond somehow to things and some other subset to happenings and some other subset to qualities. (Pretty much every actual-human language has something at least a bit like this division; it's reasonable to guess that our hypothetical other humans are at least considering it as a likely hypothesis.) Try using clustering-type algorithms to identify groups of units that tend to occur in similar contexts, pairs of groups where you often get something in group 1 immediately followed by something in group 2, etc. I would expect this to identify sets of units that have some relationship to what we call nouns, verbs, adjectives, etc., though of course at this point our researchers don't know which are which or even what exact categories they're looking for.

Separately, some researchers will have identified numbers (pretty easy to figure out from their use as e.g. page numbers) and others will have found some works of mathematics of various kinds (looking e.g. for tables or lists of numbers, and spotting things like primes, Fibonacci, etc.) It will not be difficult to identify decimal notation and fractions and exponents. Arithmetic operations and relations like "=" and "<" will be identified quickly, too. I don't know how different it's feasible for another advanced civilization's mathematics to be, but my feeling is that a lot is going to match up pretty well between them and us.

This is actually quite likely to be a source of language breakthroughs. Some mathematical text is very straightforward grammatically, and has mathematical formulae in places where there would normally be words. I think it's plausible e.g. that "number" would be identified quite easily -- identified as meaning "number" specifically (though there would be some doubt exactly what range of things it covers) but, more importantly right now, identified as being a noun. Similarly, things like "prime" and "odd" and "even" would be identified as (usually) being attached to "number" to qualify it. This could help start the process of categorizing units by broad grammatical function.

Once some mathematics is understood, science starts being a source of breakthroughs. Our researchers will spot formulae corresponding to e.g. inverse-square laws and make some plausible guesses at what sort of thing the surrounding text might be about. There will still be plenty of guessing to do, but correct guesses will make more things match up than incorrect ones. It will probably be possible to figure out things like "planet", "star", "year", "day", "rotate", "fall", "light", etc., and probably some other units of measurement. Probably also a lot of function-words -- in, on, by, the, and, etc. (getting them exactly right will be very difficult, but it'll be possible to get a sketchy idea of many of them; it will be fairly easy to identify which units are likely function-words because they will correspond broadly to the commonest ones).

(If there were pictures or diagrams, many things would be much easier, but I am assuming that that would come too close to establishing contact with the external world directly and that we should assume that for some weird reason those have all gone. Maybe the vast human library passed briefly through the hands of religious zealots who hold that all visual representations of the world are blasphemous.)

At this point the researchers have identified a lot of the function-words, a lot of technical terms from mathematics and science, and some other words that can be inferred from scientific uses. They might also have found out e.g. the ratio of the year and day lengths for our planet. This is probably enough words for the statistical analysis to do a pretty good job of indicating roughly what grammatical classes most words fall into (though of course some are in several, and the way English divides things up may not quite correspond to the divisions in the languages our researchers know). This will be enough to start chipping away at things that make contact with those science-adjacent words. E.g., if we have "year" and "planet" and "rotate" and "day" and "star" and so forth, eventually someone will come across something about seasons and climate and make sense of "summer" and "winter". ("Autumn"/"fall" and "spring" will be more difficult, especially as "fall" and "spring" both have other science-adjacent meanings to cause a bit of confusion.) Someone will probably recognize the equation for simple harmonic motion and get maybe "frequency" and "period" and then "vibration" and, after much head-scratching, "sound" and "air" and maybe even "ear" etc. They'll find Maxwell's equations (in one of their many possible forms), eventually find "light", and get some colour-words from discussions of the spectrum. Etc.

I think that somewhere around here the floodgates will start to open. One sentence after another will be found that's mostly already-known words, and plausible guesses will be made, and researchers will try a bunch of plausible guesses together and see whether other things suddenly make sense, and after a while they'll have identified another word, or another feature of English grammar, and each new discovery will make other discoveries a bit easier.

There will be things that are just impossible. For instance, without any diagrams etc. it may be quite hard to distinguish the world from its mirror image, so maybe left/right will be swapped and likewise east/west, clockwise/anticlockwise, north/south, etc. (Maybe if they find some discussions of parity violation in the weak nuclear interaction they can figure it all out.)

I recommend the following as an intuition pump. Take a look at the online puzzle-game called Redactle; the original version has gone offline because its creator got bored of it and the best version to look at is https://redactle-unlimited.com/. This shows you a random not-too-obscure Wikipedia article with all words other than a small repertoire of common ones censored and only their lengths visible; you can guess at words and when you guess a word present in the article every instance of it will be revealed; you win when you have revealed all words in the article title. Clearly this is not at all the exact same challenge as our hypothetical researchers face, but it's designed for a single person to play casually in a shortish amount of time rather than thousands taking years of hard work.

So, anyway, take a look at a few random Redactle games, and consider the following: It turns out that a skilled player can figure out the title without guessing any non-title words most of the time. When I started playing the game, this felt like a thing that a hypothetical superintelligent being ought to be able to do but that was obviously beyond real humans' capacity in all but the luckiest cases; now I can do it for about 80% of games.

Again, our hypothetical other-human researchers face a much more difficult challenge: they don't know the language, they don't have a few common words filled in for them at the start, they aren't just looking at text from a single source with somewhat-predictable form, the concrete things described are from another world and may not match what they know. On the other hand, there are thousands of them, they have millions of books' worth of text rather than a single Wikipedia article, and they can take a lot longer than anyone wants to spend on a game of Redactle.

Forgive me, but there's so much hand waving in there that, really, there's no reason why I should believe that it would work. You're saying it would work because you want it to. All I've learned is that you're a clever guy and have some interesting ideas. The mirror thing, for example. Martin Gardner devotes a chapter to it in his book The Ambidextrous Universe. He called it the Ozma Problem. But you said nothing that convinces me that these people could figure out all that language.

I'd always thought that the point of a thought experiment was to present a simple and idealized situation that would clarify your thinking, like Maxwell's demon, or Einstein's various thought experiments. This is nothing like that.

As for LLMs. All they've got to work from is word forms. Text. The text itself, as a physical object, whether ink on a page, or ASCII characters in a file, has no meaning in it. The meaning exists in the heads of the people who know the language, who understand the code. The computer has no access to what's inside people's heads. How, then, is it able to produce text that reads as meaningful to humans? No one, as far as I know, has a detailed answer to that question. The behavior of these systems came as a surprise to the people who built them. Sure, statistical patterns, but that in itself is not very helpful.

gjm:

Well, the thing would be a many-smart-people-for-many-years project. Obviously I'm not going to be able to tell you everything that would happen without a ton of handwaving. ("You're saying it would work because you want it to" is flatly false, so far as I can tell; you do not know my motivations and I would prefer you not to make confident assertions about them without evidence.)

In a hypothetical state of affairs where I'm right that it would be possible to figure out a lot of what the books were saying, what sort of answer would you expect me to be able to give, and how does it differ from the one I actually gave?

I agree that all LLMs have to work from is word forms.

I think what they have been able to do, given only that, is itself evidence that there is meaning in those word forms. Or, since I think we are both agreed that "meaning" is a problematic term, there is information in those word forms about the actual world.

(I think you think that that isn't actually so; that the text itself contains no information about the real world without human minds to interpret it. However, while I have given only handwavy reasons to disagree with that, it seems to me that you have given no reasons at all to agree with it, merely asserted it several times with a variety of phrasings. The nearest thing to an argument seems to be this: "But texts do not in fact contain meaning within themselves. If they did, you’d be able to read texts in a foreign language and understand them perfectly." ... which seems to me to be missing a vital step, where you explain how to get from "texts contain meaning" to "anyone looking at a text will understand its meaning even if they don't know the language it's in".)

I did not invent the observation about parity independently; I have read Gardner's book. (I'm not entirely sure that the point of your comment about the book isn't to suggest that I'm trying to pass someone else's ideas off as my own. I am not; if you want to know how original I think something I write is, ask and I'll tell you.)

The nearest thing to an argument seems to be this: "But texts do not in fact contain meaning within themselves. If they did, you’d be able to read texts in a foreign language and understand them perfectly." ... which seems to me to be missing a vital step, where you explain how to get from "texts contain meaning" to "anyone looking at a text will understand its meaning even if they don't know the language it's in." 

This is confused. Who's saying "texts contain meaning"? It's not me. 

Perhaps you can take a look at the Wikipedia entry for the conduit metaphor. It explains why the idea of texts 'containing' meaning is incoherent.

gjm:

No one is saying "texts contain meaning" (though I think something a bit like it is true), but you were saying that if texts contained meaning then we'd all be able to understand texts in languages we don't know, and I'm saying that that seems Just Plain Wrong to me; I don't see how you get from "texts contain meaning" (which you claim implies that we should all be able to understand everything) to "we should all be able to understand everything" (which is the thing you claim is implied). Those things seem to me to have nothing to do with one another.

I think texts have meaning. "Contain" is imprecise. I agree that the "conduit" metaphor can be misleading. If that Wikipedia page shows that it's incoherent, as opposed to merely being a metaphor, then I am not sure where. I think that maybe when you say "if texts contain meaning then ..." you mean something like "if it were literally the case that meaning is some sort of substance physically contained within texts then ..."; something like that might be true, but it feels strawman-ish to me; who actually believes that "texts contain meaning" in any sense literal enough to justify that conclusion?

Again, "meaning" has too many meanings, so let me be more explicit about what I think about what texts do and don't have/contain/express/etc.

  • For most purposes, "text T has meaning M" cashes out as something like "the intended audience, on reading text T, will come to understand M".
  • Obviously this is audience-dependent and context-dependent.
  • In some circumstances, the same string of characters can convey very different meanings to different audiences, because it happens to mean very different things in different languages, or because of irony, or whatever.
  • Even for a particular audience in a particular context, "the" meaning of a text can be uncertain or vague.
  • When we ask about the meaning of a text, there are at least two related but different things we may be asking about. We may be considering the text as having a particular author and asking what they, specifically, intended on this occasion. Or we may be considering it more abstractly and asking what it means, which is roughly equivalent to asking what a typical person saying that thing would typically mean.
  • Because text can be ambiguous, "the" meaning of a text is really more like a probability distribution for the author's intention (meaning the intention of the actual author for the first kind of meaning, and of something like all possible authors for the second kind).
  • How broad and uncertain that probability distribution is depends on both the text and its context.
  • Longer texts typically have the effect of narrowing the distribution for the meaning of any given part of the text -- each part of the text provides context for the rest of it. (Trivial example: if I say "That was great" then I may or may not be being sarcastic. If I say "That was great. Now we're completely screwed." then the second sentence makes it clear that the first sentence was sarcastic. Not all examples are so clear-cut.)
  • Greater ignorance on the part of the audience broadens the distribution. In an extreme case, a single sentence typically conveys approximately zero information to someone who doesn't know the language it's in.
  • Extra text can outweigh ignorance. (Trivial example: if I make up a word and say something using my made-up word, you probably won't get much information from that. But if I first say what the word means, it's about as good as if I'd avoided the neologism altogether. Again, not all examples are so clear-cut.)
  • I think that a very large amount of extra text can be enough to outweigh even the ignorance of knowing nothing about the language the text is in; if you have millions of books' worth of text, I think that typically for at least some parts of that large body of text the probability distribution will be very narrow.
  • In cases where the probability distribution is very narrow, I think it is reasonable to say things like "the meaning of text T is M" even though of course it's always possible for someone to write T with a very different intention.
  • In such cases, I think it is reasonable to say this even if in some useful sense there was no original author at all (e.g., the text was produced by an LLM and we do not want to consider it an agent with intentions). This has to be the second kind of "meaning", which as mentioned above can if you like be cashed out in terms of the first kind: "text T means M" means "across all cases where an actual author says T, it's almost always the case that they intended M".

You may of course disagree with any of that, think that I haven't given good enough reasons to believe any of it, etc., but I hope you can agree that I am not claiming or assuming that meaning is some sort of substance physically contained within text :-).

...who actually believes that "texts contain meaning" in any sense literal enough to justify that conclusion?

Once it's pointed out that "texts contain meaning" is a metaphor, no one believes it, but they continue to believe it. So, stop relying on the metaphor. Why do you insist that it can be salvaged? It can't.

gjm:

It feels to me as if you are determined to think the worst of those who aren't agreeing with you, here. (B: Obviously gjm thinks that texts contain meaning in the same sort of way as a bag of rice contains rice! G: No, of course I don't think that, that would be stupid. B: Well, obviously you did think that until I pointed it out, whereas now you incoherently continue to believe it while saying you don't!) Why not consider the possibility that someone might be unconvinced by your arguments for reasons other than stupidity and pigheadedness?

Anyway. I'm not sure in what sense I'm "insisting that it can be salvaged". If you think that the term "meaning" is unsalvageable, or that talking about meaning being "in" things is a disastrous idea because it encourages confusion ... well, maybe don't write an article and give it a title about "meaning in LLMs"?

Maybe I'm misunderstanding what you mean by "insisting that it can be salvaged". What specific thing are you saying I'm insisting on?

[EDITED to add:] I thought of a way in which the paragraph before last might be unfair: perhaps you think it's perfectly OK to talk about meaning being "in" minds but not to talk about meaning being "in" texts, in which case asking whether it is "in" LLMs would be reasonable since LLMs are more like minds than they are like texts. And then part of your argument is: they aren't mind-like enough, especially in terms of relationships to the external world, for meaning to be "in" them. Fair enough. Personally, I think that all talk of meaning being "in" things is metaphorical, that it's more precise to think of "mean" as a verb and "meaning" specifically as its present participle, that "meaning" in this sense is a thing that people (and maybe other person-like things) do from time to time -- but also that there's a perfectly reasonable derived notion of "meaning" as applied to the utterances and other actions that people carry out as part of the process of "meaning", so that it is not nonsense to say that a text "means" something, either as produced by a particular person or in the abstract. (And so far as I can currently tell none of this depends on holding on to the literally-false implications of any dubious metaphor.)

It feels to me as if you are determined to think the worst of those who aren't agreeing with you, here.

The feeling is mutual. 

Believe it or not I pretty much knew everything you've said in this exchange long before I made the post. I've thought this through with considerable care. In the OP I linked to a document where I make my case with considerably more care, GPT-3: Waterloo or Rubicon? Here be Dragons

What I think is that LLMs are new kinds of things and that we cannot rely on existing concepts in trying to understand them. Common-sense terms such as "meaning" are particularly problematic, as are "think" and "understand." We need to come up with new concepts and new terms. That is difficult. As a practical matter we often need to keep using the old terms. 

The term "meaning" has been given a quasi-technical .... meaning? Really? Can we use that word at all? That word is understood by a certain diffuse community of thinkers in a way that pretty much excludes LLMs from having it, nor do they have "understanding" nor do they "think." That's a reasonable usage. That usage is what I have in mind.

Some in that community, however, are also saying that LLMs are "stochastic parrots", though it's not at all clear just what exactly they're talking about. But they clearly want to dismiss them. I think that's a mistake. A big mistake.

They aren't human, they certainly don't relate to the world in the way humans do, and they produce long strings of very convincing text. They're doing a very good imitation of understanding, of thinking, of having meaning. But that phrasing, "good imitation of...", is just a work-around. What is it that they ARE doing? As far as I can tell, no one knows. But talking about "meaning" with respect to LLMs, where the term is implicitly understood to be identical to "meaning" with respect to humans -- no, that's not getting us anywhere.

gjm:

We are in agreement on pretty much everything in this last comment.

Your Waterloo/Rubicon article does the same thing as your shorter post here: flatly asserts that since GPT-3 "has access only to those strings", therefore "there is no meaning there. Only entanglement", without troubling to argue for that position. (Maybe you think it's obvious; maybe you just aren't interested in engaging with people who disagree with enough of your premises for it to be false; maybe you consider that the arguments are all available elsewhere and don't want to repeat them.)

Your article correctly points out that "Those words in the corpus were generated by people conveying knowledge of, attempting to make sense of, the world. Those strings are coupled with the world, albeit asynchronously". I agree that this is important. But I claim it is also important that the LLM is coupled, via the corpus, to the world, and hence its output is coupled, via the LLM, to the world. The details of those couplings matter in deciding whether it's reasonable to say that the LLM's output "means" something; we are not entitled to claim that "there is no meaning there" without getting to grips with that.

In the present discussion you are (sensibly and intelligently) saying that terms like "meaning", "think", and "understand" are problematic, that the way in which we have learned to use them was formed in a world where we were the only things meaning, thinking, and understanding, and that it's not clear how best to apply those terms to new entities like LLMs that are somewhat but far from exactly like us, and that maybe we should avoid them altogether when talking about such entities. All very good. But in the Waterloo/Rubicon article, and also in the OP here, you are not quite so cautious; "there is no meaning there", you say; "from first principles it is clear that GPT-3 lacks understanding and access to meaning"; what it has is "simulacra of understanding". I think more-cautious-BB is wiser than less-cautious-BB: it is not at all clear (and I think it is probably false) that LLMs are unable in principle to learn to behave in ways externally indistinguishable from the ways humans behave when "meaning" and "understanding" and "thinking", and if they do then I think asking whether they mean/understand/think will be like asking whether aeroplanes fly or whether submarines swim. (We happen to have chosen opposite answers in those two cases, and clearly it doesn't matter at all.)

But I claim it is also important that the LLM is coupled, via the corpus, to the world, and hence its output is coupled, via the LLM, to the world.

What? The corpus is coupled to the world through the people who wrote the various texts and who read and interpret them. Moreover that sentence seems circular. You say, "its output is coupled..." What is the antecedent of "its"? It would seem to be the LLM. So we have something like, "The output of the LLM is coupled, via the LLM, to the world." 

I'm tired of hearing about airplanes (and birds) and submarines (and fish). In all cases we understand more or less the mechanics involved. We can make detailed comparisons and talk about similarities and differences. We can't do that with humans and LLMs. 

gjm:

It goes: world <-> people <-> corpus <-> LLM <-> LLM's output.

There is no circularity in "The output of the LLM is coupled, via the LLM, to the world" (which is indeed what I meant).

I agree that we don't understand LLMs nearly as well as we do planes and submarines, nor human minds nearly as well as the locomotory mechanics of birds and fish. But even if we had never managed to work out how birds fly, and even if planes had been bestowed upon us by a friendly wizard and we had no idea how they worked either, it would be reasonable for us to say that planes fly even though they do it by means very different to birds.

Um, err, at this point, unless someone actually reads the LLM's output, that output goes nowhere. It's not connected to anything.

So, what is it you care about? Because at this point this conversation strikes me as just pointless thrashing about with words.

gjm:

I care about many things, but one that's important here is that I care about understanding the world. For instance, I am curious about the capabilities (present and future) of AI systems. You say that from first principles we can tell that LLMs trained on text can't actually mean anything they "say", that they can have only a simulacrum of understanding, etc. So I am curious about (1) whether this claim tells us anything about what they can actually do as opposed to what words you choose to use to describe them, and (2) if so whether it's correct.

Another thing I care about is clarity of thought and communication (mine and, to a lesser but decidedly nonzero extent, other people's). So to whatever extent your thoughts on "meaning", "understanding", etc., are about that more than they're about what LLMs can actually do, I am still interested, because when thinking and talking about LLMs I would prefer not to use language in a systematically misleading way. (At present, I would generally avoid saying that an LLM "means" whatever strings of text it emits, because I don't think there's anything sufficiently mind-like in there, but I would not avoid saying that in many cases those strings of text "mean" something, for reasons I've already sketched above. My impression is that you agree about the first of those and disagree about the second. So presumably at least one of us is wrong, and if we can figure out who then that person can improve their thinking and/or communication a bit.)

Back to the actual discussion, such as it is. People do in fact typically read the output of LLMs, but there isn't much information flow back to the LLMs after that, so that process is less relevant to the question of whether and how and how much LLMs' output is coupled to the actual world. That coupling happens via the chain I described two comments up from this one: world -> people -> corpus -> LLM weights -> LLM output. It's rather indirect, but the same goes for a lot of what humans say.

Several comments back, I asked what key difference you see between that chain (which in your view leaves no scope for the LLM's output to "mean" anything) and one that goes world -> person 1 -> writing -> person 2 -> person 2's words (which in your view permits person 2's words to mean things even when person 2 is talking about something they know about only indirectly via other people's writing). It seems to me that if the problem is a lack of "adhesion" -- contact with the real world -- then that afflicts person 2 in this scenario in the same way as it afflicts the LLM in the other scenario: both are emitting words whose only connection to the real world is an indirect one via other people's writing. I assume you reckon I'm missing some important point here; what is it?

That's a bunch of stuff, more than I can deal with at the moment. 

On the meaning of "meaning," it's a mess; people in various disciplines have been arguing about it for three-quarters of a century or more at this point. You might want to take a look at a longish comment I posted above, if you haven't already. It's a passage from another article, where I make the point that terms like "think" don't really tell us much at all. What matters to me at this point are the physical mechanisms, and those terms don't convey much about those mechanisms.

On LLMs, GPT-4 now has plug-ins. I recently saw a YouTube video about the Wolfram Alpha plug-in. You ask GPT-4 a question; it decides to query Wolfram Alpha and sends a message. Alpha does something and sends the result back to GPT-4, which presents the result to you. So now we have Alpha interpreting messages from GPT-4 and GPT-4 interpreting messages from Alpha. How reliable is that circuit? Does it give the human user what they want? How does "meaning" work in that circuit?
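To make the shape of that circuit a little more concrete, here is a minimal sketch of a tool-calling loop. The functions ask_llm and query_wolfram_alpha are hypothetical stand-ins, not the actual plug-in API; the point is only the round trip in which each side has to interpret the other's messages.

```python
# A minimal sketch of the GPT-4 / Wolfram Alpha round trip described above.
# ask_llm() and query_wolfram_alpha() are invented stubs, not real APIs;
# only the shape of the circuit is meant to be accurate.

def ask_llm(prompt: str) -> dict:
    """Pretend LLM call: returns either a final answer or a tool request."""
    if "Tool result:" in prompt:
        # The model now has the tool's output and can answer directly.
        result = prompt.split("Tool result:")[-1].strip()
        return {"type": "answer", "text": f"Based on Wolfram Alpha: {result}"}
    if "integral" in prompt.lower():
        # The model decides the question calls for the external tool.
        return {"type": "tool_request", "query": "integrate x^2 from 0 to 3"}
    return {"type": "answer", "text": "Here is my answer: ..."}

def query_wolfram_alpha(query: str) -> str:
    """Pretend external tool: in reality this would hit Wolfram Alpha's API."""
    return "9"  # canned result for the example query

def answer_user(question: str) -> str:
    # Step 1: the LLM reads the user's question.
    response = ask_llm(question)
    # Step 2: if it wants the tool, it emits a query string for Alpha.
    if response["type"] == "tool_request":
        tool_result = query_wolfram_alpha(response["query"])
        # Step 3: Alpha's result is fed back to the LLM for interpretation.
        response = ask_llm(f"{question}\nTool result: {tool_result}")
    # Step 4: the LLM's final text is presented to the human user.
    return response["text"]

print(answer_user("What is the integral of x^2 from 0 to 3?"))
```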

I first encountered the whole business of meaning in philosophy and literary criticism. So, you read Dickens' A Tale of Two Cities or Frank Herbert's Dune, whatever. It's easy to say those texts have meaning. But where does that meaning come from? When you read those texts, the meaning comes from you. When I read them, it comes from me. What about the meanings the authors put into them? You can see where I'm going with this. Meaning is not like wine, which can be poured from one glass to another and remain the same. Well, literary critics argued about that one for decades. The issue's never really been settled. It's just been dropped, more or less.

ChatGPT produces text, lots of it. When you read one of those texts, where does the meaning come from? Let's ask a different question. People are now using output from LLMs as a medium for interacting with one another. How is that working out? Where can LLM text be useful and where not? What's the difference? Those strike me as rather open-ended questions for which we do not have answers at the moment.

And so on....

gjm:

I think it's clear that when you read a book the meaning is a product of both you and the book, because if instead you read a different book you'd arrive at a different meaning, and different people reading the same book get to-some-extent-similar meanings from it. So "the meaning comes from you" / "the meaning comes from me" is too simple. It seems to me that generally you get more-similar meanings when you keep the book the same and change the reader than when you keep the reader the same and change the book, though of course it depends on how big a change you make in either case, so I would say more of the meaning is in the text than in the reader. (For the avoidance of doubt: no, I do not believe that there's some literal meaning-stuff that we could distil from books and readers and measure. "In" there is a metaphor. Obviously.)

I agree that there are many questions to which we don't have answers, and that more specific and concrete questions may be more illuminating than very broad and vague ones like "does the text emitted by an LLM have meaning?".

I don't know how well the GPT/Wolfram|Alpha integration works (I seem to remember reading somewhere that it's very flaky, but maybe they've made it better), but I suggest that to whatever extent it successfully results in users getting information that's correct on account of Alpha's databases having been filled with data derived from how the world actually is, and its algorithms having been designed to match how mathematics actually works, that's an indication that in some useful sense some kind of meaning is being (yes, metaphorically) transmitted.

I've just posted something at my home blog, New Savanna, in which I consider the idea that 

...the question of whether or not the language produced by LLMs is meaningful is up to us. Do you trust it? Do WE trust it? Why or why not? 

That's the position I'm considering. If you understand "WE" to mean society as a whole, then the answer is that the question is under discussion and is undetermined. But some individuals do seem to trust the text from certain LLMs, at least under certain circumstances. For the most part I trust the output of ChatGPT and GPT-4, though I have considerably less experience with GPT-4 than with ChatGPT. I know that both systems make mistakes of various kinds, including what is called "hallucination." It's not clear to me that that differentiates them from ordinary humans, who make mistakes and often say things without foundation in reality.

I'm a (very old) programmer with a non-technical interest in AI, so I'm not sure how much I can contribute, but I do have some thoughts on some of this.
It seems to me that the underlying units of intelligence are concepts. How do I define concepts? It has to be in terms of other related concepts: they are the nodes in a connected graph of concepts. Concepts are ultimately 'grounded' on some pattern of sensory input; abstract concepts are different only in the sense that they do not depend directly on sensory input. It seems to me that gaining 'understanding' is the process of building this graph of related concepts. We only understand concepts in terms of other concepts, in a similar way to a dictionary defining a word in terms of other words, or collections of words. A concept only has meaning when it references other concepts.

When a concept is common to many intelligent systems (human brains or LLMs), it can be represented by a token (or word). Communities of intelligent systems evolve languages (sets of these tokens) to represent and communicate these concepts. When communities get separated and the intelligent systems cannot communicate easily, the tokens in the languages evolve over time and the languages diverge, even though the common concepts often remain the same. Sections of a community often develop extensions to languages (e.g. jargon, TLAs) to communicate concepts which are often only understood within that section.
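For what it's worth, here is a toy sketch of the kind of concept graph described above. The node names and the grounded/abstract distinction are illustrative assumptions, not anything taken from a real system:

```python
# A toy concept graph: nodes are concepts, edges are relations between them.
# "Grounded" concepts carry a tag naming a pattern of sensory input; abstract
# concepts are reachable only through their links to other concepts.

from collections import defaultdict

class ConceptGraph:
    def __init__(self):
        self.edges = defaultdict(set)   # concept -> related concepts
        self.grounding = {}             # concept -> sensory pattern (if any)

    def relate(self, a: str, b: str):
        """Add an undirected relation between two concepts."""
        self.edges[a].add(b)
        self.edges[b].add(a)

    def ground(self, concept: str, sensory_pattern: str):
        """Mark a concept as grounded in some pattern of sensory input."""
        self.grounding[concept] = sensory_pattern

    def is_abstract(self, concept: str) -> bool:
        """Abstract concepts have no direct sensory grounding of their own."""
        return concept not in self.grounding

g = ConceptGraph()
g.ground("red", "retinal response to long-wavelength light")
g.ground("apple", "round, graspable, sweet-smelling object")
g.relate("red", "apple")
g.relate("apple", "fruit")      # 'fruit' is abstract: defined only via relations
print(g.is_abstract("fruit"))   # True
print(g.is_abstract("red"))     # False
```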
My (very basic) understanding of LLMs is that they are pre-trained to predict the next word in a stream of output text by identifying patterns in the input data that correlate strongly with that output. It does seem to me that, by iterating through multiple layers of inputs and adjusting the weights on each layer, a neural network could detect the significant underlying connections between concepts within the data, and that higher-level (more abstract) concepts could be detected which are based on lower-level (more grounded) concepts. These conceptual connections could be considered a model of the real world, capable of making intelligent predictions, which could then be selectively pruned and refined using reinforcement learning.
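And a bare-bones illustration of the next-token objective just described -- a simple successor-count table rather than a layered neural network, so it captures only the shape of the task, not the weight adjustment or the reinforcement-learning refinement:

```python
# A bare-bones next-token predictor: count which token follows which in a
# training text, then predict the most frequent successor. Real LLMs replace
# the count table with a deep network trained by gradient descent, but the
# objective -- predict the next token from preceding context -- is the same.

from collections import Counter, defaultdict

def train(text: str):
    tokens = text.lower().split()
    successors = defaultdict(Counter)
    for current, nxt in zip(tokens, tokens[1:]):
        successors[current][nxt] += 1
    return successors

def predict_next(successors, token: str) -> str:
    if token not in successors:
        return "<unknown>"
    return successors[token].most_common(1)[0][0]

model = train("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))   # 'cat' -- the most frequent successor
print(predict_next(model, "cat"))   # 'sat' or 'slept' (tied counts)
```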
 

"Concepts are ultimately 'grounded' on some pattern of sensory input." – I call that adhesion. Your graph of related concepts gives us relationality. LLMs work off of relationality.

My views on this are based on work I did on computational semantics back in the ancient days. That work was based on cognitive or semantic networks, a very common approach at the time.



@Bill Benzon 
Hi Bill Benzon! You are the first person I've come across who has reached the same conclusion I did. LLMs cannot semantically understand the meaning of words. I believe this is because semantic understanding of words and concepts is a form of qualia, and computers cannot feel qualia. We feel the semantic meaning of words as a sensation, as qualia, which requires a consciousness. Only a consciousness can feel qualia.
Meaning has three components:
1) a structural-relationship component (how a word structurally relates to other words);
2) intention & adhesion (only a consciousness can have an intention and understand its adhesion to the real world);
3) a qualia component (the part you pointed to when you said meaning takes place 'in the minds' of people).
Semantic understanding of meaning of words and concepts is a form of qualia. It is felt in the minds of people as qualia.

An algorithm cannot feel qualia, and can only encode the first component, the structural relationships between words. This is why I think that an algorithm cannot converge on a language model -- any model it derives would be purely structural and would be missing the essential qualiatic nature of the meaning of the language and concepts it hopes to describe.

Everything that an LLM outputs is meaningless: a string of symbols strung together in the most probabilistically likely way. It feels nothing and understands nothing. "Hollow" or "dead" is a good way to describe its output. Everything it outputs is dead, hollow of the essential qualia that ought to inhabit the concept or word.

The idea that words such as “good” or “love” are vectors parameterized by arrays of rational numbers, acting on a Euclidean space in N dimensions (which is how Large Language Models operate), is preposterous. As if I could look at the weights of the neural network, find the word “love”, and say “oh, I understand what love is now, it is this array of rational numbers 0.44, 0.223, …” -- this is ridiculous. You cannot map the word “love” down from Qualitic space to a quantitative representation and expect it to be isomorphic, or even homomorphic, to the real thing. As soon as you map it down to numbers, you have lost information. That’s not love. I don’t know what it is -- something ugly, foreign, and alien. There is no correspondence between the two. Love is not a numerical vector. It is not a datapoint. Love is Qualia; I feel it, it is real, it is nontrivial.
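For concreteness, this is roughly what such a vector representation looks like in code. The vectors below are invented for illustration and are not taken from any actual model; the point is that, to the model, a word is nothing but its numerical position relative to other words.

```python
# Toy word-embedding table: each word maps to a short vector of floats.
# The numbers are made up; real models use hundreds or thousands of
# dimensions learned from data.

import math

embeddings = {
    "love": [0.44, 0.223, -0.10, 0.87],
    "hate": [-0.41, 0.19, -0.05, 0.80],
    "rice": [0.02, -0.63, 0.55, 0.11],
}

def cosine_similarity(u, v):
    """Angle-based similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# All the model "knows" about a word is how its vector relates to the others.
print(cosine_similarity(embeddings["love"], embeddings["hate"]))  # relatively high
print(cosine_similarity(embeddings["love"], embeddings["rice"]))  # lower
```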

AI is disgusting. It is the ultimate form of nihilism because it makes a mockery out of consciousness, art, writing, language, and meaning. As if consciousness’s creative intellect were reduced to nothing more than 1s and 0s. There is AI art, but art is supposed to be created by a human consciousness to convey their personal experience. But AI has no experience. AI is just a meaningless statistical structure. A meaningless statistical structure without a soul. When you view AI art you are looking at something hollow. There is no experience inside the art. No pain. No joy. No suffering. Nothing at all. No intention. There is no experience it is trying to convey. When you view AI art, you are looking at empty, hollow nothingness.

Everything that AI outputs is just dead echoes of things that real people have previously said in its training data. 

-Morph