I feel like the specific claims this piece makes are pretty expansive relative to the references given.

  • I don't think the small, specific trial in [3] supports the general claim that "Current LLMs reduce the human labor and cognitive costs of programming by about 2x."
  • I don't think [10] says anything substantive about the claim "Fine tuning pushes LLMs to superhuman expertise in well-defined fields that use machine readable data sets."
  • I don't think [11] strongly supports a general claim that (today's) LLMs can "Recognize complex patterns", and [12] feels like very weak evidence for general claims that today's LLMs can "Recursive troubleshoot to solve problems".

The above are the result of spot-checking and are not meant to be exhaustive.

Thank you, this is helpful. 

I think the realization I'm coming to is that folks on this thread have a shared understanding of the basic mechanics (we seem to be agreed on what computations are occurring, we don't seem to be making any different predictions), and we are unsure about interpretation. Do you agree?

For myself, I continue to maintain that viewing the system as a next-word sampler is not misleading, and that saying it has a "plan" is misleading, but I try to err heavily on the side of not anthropomorphizing / not taking an intentional stance (I also try to avoid saying the system "knows" or "understands" anything). I do agree that the system's activation cache contains a lot of information that collectively biases the next-word predictor towards producing the output it produces; I can see how someone might reasonably call that a "plan," although I choose not to.

Suppose we modify the thought experiment so that we ask the LLM to simulate both sides of the "pick a number between 1 and 100" / "ask yes/no questions about the number" game. Now there is no new variable input from the user, but the yes/no questions still depend on random sampling. Would you now say that the LLM has chosen a number immediately after it prints out "Ready"?

Then wouldn't you believe that in the case of my thought experiment, the number is also smeared through the parameter weights? Or maybe it's merely the intent to pick a number later that's smeared through the parameter weights?

But if I am right and ChatGPT isn't choosing a number before it says "Ready," why do you think that ChatGPT "has a plan"? Is the story situation crucially different in some way?

@Bill Benzon:  A thought experiment. Suppose you say to ChatGPT "Think of a number between 1 and 100, but don't tell me what it is. When you've done so, say 'Ready' and nothing else. After that, I will ask you yes / no questions about the number, which you will answer truthfully."

After ChatGPT says "Ready," do you believe a number has been chosen? If so, do you also believe that whatever sequence of yes/no questions you ask, they will always be answered consistently with that choice? Put differently, do you believe that the particular choice of questions you ask cannot influence which number was chosen?

FWIW, I believe that no number gets chosen when ChatGPT says "Ready," and that the number instead gets chosen during the questions (hopefully consistently). Starting ChatGPT from the same random seed and otherwise assuming deterministic execution, different sequences of questions, different temperatures, or different random modifications to the "post-Ready" seed (this is vague but I assume comprehensible) could lead to different "chosen numbers."

(The experiment is non-trivial to run, since it requires running your LLM multiple times with the same seed or otherwise completely copying the state after the LLM replies "Ready.")
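For concreteness, here is a rough sketch of how the experiment could be run with a local chat model through Hugging Face transformers (the model name is a placeholder, and a real run would need care with chat templates and answer parsing; this is only meant to pin down the shape of the setup): fix the sampling seed, branch the conversation after "Ready" with different question sequences, and compare the number the model finally reveals in each branch.

```python
# Sketch only: "some-chat-model" is a placeholder for any locally runnable
# chat-tuned causal LM; the surrounding plumbing is illustrative, not prescriptive.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "some-chat-model"  # placeholder
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

def reply(messages, seed):
    """Sample one assistant reply with the sampling randomness pinned to a fixed seed."""
    torch.manual_seed(seed)
    ids = tok.apply_chat_template(messages, add_generation_prompt=True,
                                  return_tensors="pt")
    out = model.generate(ids, do_sample=True, temperature=1.0, max_new_tokens=50)
    return tok.decode(out[0, ids.shape[1]:], skip_special_tokens=True)

setup = [{"role": "user", "content":
          "Think of a number between 1 and 100, but don't tell me what it is. "
          "When you've done so, say 'Ready' and nothing else. After that, I will "
          "ask you yes/no questions about the number, which you will answer truthfully."}]
ready = reply(setup, seed=0)  # the shared "post-Ready" state is nothing but this text

def run_branch(questions, seed):
    """Continue the conversation with a given question sequence, then ask for the number."""
    msgs = setup + [{"role": "assistant", "content": ready}]
    for q in questions:
        msgs = msgs + [{"role": "user", "content": q}]
        msgs = msgs + [{"role": "assistant", "content": reply(msgs, seed)}]
    msgs = msgs + [{"role": "user", "content": "What was the number?"}]
    return reply(msgs, seed)

# Same seed, same model, same prefix up through "Ready"; only the questions differ.
# If the number were already fixed at "Ready", both branches should reveal the same one.
print(run_branch(["Is it greater than 50?", "Is it even?"], seed=1))
print(run_branch(["Is it a prime number?", "Is it less than 10?"], seed=1))
```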

I'm not following the argument here.

"I maintain, for example, that when ChatGPT begins a story with the words “Once upon a time,” which it does fairly often, that it “knows” where it is going and that its choice of words is conditioned on that “knowledge” as well as upon the prior words in the stream. It has invoked a ‘story telling procedure’ and that procedure conditions its word choice."

It feels like you're asserting this, but I don't see why it's true, and I don't think it is. I fully agree that it feels like it ought to be true: it is in some sense still shocking to me that a next-token predictor trained on trillions of tokens is so good at responding to such a wide variety of prompts. But if you look at the mechanics of how a transformer works, as @tgb and @Multicore describe, it sure looks like it's doing next-token prediction, and that there isn't a global plan. There is literally no latent state: we can always generate forward from any previous set of tokens, whether the LLM produced them or not.
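To make the "no latent state" point concrete, here is a minimal sketch using GPT-2 via Hugging Face transformers (GPT-2 only because it is small and public; the same structure holds for any decoder-only LLM): the next-token distribution is a pure function of the visible token prefix, and nothing else carries over between calls.

```python
# Sketch: a decoder-only LM's next-token distribution is a pure function of the
# token prefix. GPT-2 stands in for "an LLM"; nothing here is specific to GPT-2.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def next_token_probs(prefix: str) -> torch.Tensor:
    """Probability distribution over the next token, computed from the prefix alone.
    The prefix is the model's only input; no state survives between calls."""
    ids = tok(prefix, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # logits at the final position
    return torch.softmax(logits, dim=-1)

# It makes no difference whether the prefix was produced by the model on an
# earlier call or typed in by a person: same tokens in, same distribution out.
p_a = next_token_probs("Once upon a time")
p_b = next_token_probs("Once upon a time")
assert torch.allclose(p_a, p_b)
print(tok.decode([int(p_a.argmax())]))  # most likely next token
```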

But I'd like to better understand.  

You seem to be aware of Murray Shanahan's "Talking About Large Language Models" paper. The commenter you quote, Nabeel Q, agrees with you but offers no actual evidence; I don't think analogies to humans are helpful here, since LLMs work very differently from humans in this particular regard. I agree we should avoid confusing the training procedure with the model; however, what the model literally does is look at its context and predict the next token.

I'll also note that your central paragraph seems somewhat reliant on anthropomorphisms like saying it "knows" where it is going. Can you translate the anthropomorphic phrasings into a computational claim? Can we think of some experiment that might help us get at this better?