Douglas Summers-Stay


The case for becoming a black-box investigator of language models

Here's a fun paper I wrote along these lines. I took an old whitepaper of McCarthy's from 1976 in which he introduces the idea of natural language understanding and proposes a set of questions about a news article that such a system should be able to answer. I posed the questions to GPT-3, looked at what it got right and wrong, and guessed at why.
What Can a Generative Language Model Answer About a Passage? 

History of counting to three?

And from Levana Or, The Doctrine of Education by Jean Paul, 1848: "Another parental delay, that of punishment, is of use for children of the second five years (quinquennium.) Parents and teachers would more frequently punish according to the line of exact justice, if, after every fault in a child, they would only count four and twenty, or their buttons, or their fingers. They would thereby let the deceiving present round themselves, as well as round the children, escape, and the cold still empire of clearness would remain behind."

History of counting to three?

Here is an example from The Friend magazine, January 1853: "'Do you hear me, sir!' asked the captain. 'I give you whilst I count ten to start. I do not wish to shoot you, Wilson, but if you do not move before I count ten I'll drive this ball through you-- as I hope to reach port, I will.' Raising his pistol until it covered the boatswain's breast, the captain commenced counting in a clear and audible tone. Intense excitement was depicted on the faces of the men, and some anxiety was shown by the quick glances cast by the chief mate and steward, first at the captain and then at the crew. Wilson, with his eyes fixed on the captain's face and his arms loosely folded across his breast, stood perfectly quiet, as if he were an indifferent spectator. 'Eight! Nine!' said the captain. 'There is but one left, Wilson; with it I fire if you do not start.'"

interpreting GPT: the logit lens

Could you try a prompt that tells it to end a sentence with a particular word, and see how that word casts its influence back over the sentence? I know that this works with GPT-3, but I didn't really understand how it could.
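For readers unfamiliar with the technique being asked about: the logit lens decodes each intermediate layer's hidden state with the model's final unembedding matrix, so you can watch the predicted token evolve with depth. Here is a toy numpy sketch of that operation (the vocabulary, dimensions, and hidden states are all made up for illustration; a real implementation would also apply the model's final layer norm first):

```python
import numpy as np

# Toy logit-lens sketch (hypothetical numbers, not a real model):
# decode each layer's hidden state with the shared unembedding
# matrix and see which token it currently "predicts".

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "mat", "end"]
d_model = 8

# Shared unembedding matrix: hidden state -> vocab logits.
W_U = rng.normal(size=(d_model, len(vocab)))

# Pretend hidden states for one position after each of 4 layers.
hidden_states = [rng.normal(size=d_model) for _ in range(4)]

def logit_lens(h):
    """Project an intermediate hidden state straight to vocab logits."""
    logits = h @ W_U
    return vocab[int(np.argmax(logits))]

for layer, h in enumerate(hidden_states):
    print(f"layer {layer}: top token = {logit_lens(h)}")
```

Running this lens at every layer for a prompt like the one described would show how early in the network the required final word starts dominating the predictions.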

Can you get AGI from a Transformer?

Regarding "thinking a problem over": I have seen examples where, on questions GPT-3 can't answer correctly off the bat, it answers correctly when the prompt encourages a kind of talking through the problem, so that its own generations bias its later generations in such a way that it comes to the right conclusion in the end. This may undercut your argument that the limited number of layers prevents certain kinds of problem solving that need more thought.
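The mechanism behind this is just autoregressive conditioning: every generated token is appended to the context, so intermediate reasoning steps become inputs to later predictions, effectively extending the computation beyond a single forward pass. A minimal sketch (the `next_token` lookup table here is a made-up stand-in for a real language model):

```python
# Sketch of why "talking a problem through" can help an autoregressive
# model: each generated token re-enters the context, so written-out
# intermediate steps condition later predictions. The toy_model table
# is hypothetical; a real model would compute next_token with a
# forward pass over the whole context.

toy_model = {
    (): "think:",
    ("think:",): "3+4=7,",
    ("think:", "3+4=7,"): "7+2=9,",
    ("think:", "3+4=7,", "7+2=9,"): "answer=9",
}

def next_token(context):
    return toy_model[tuple(context)]

context = []
while True:
    tok = next_token(context)
    context.append(tok)  # the model's own output becomes part of its input
    if tok.startswith("answer"):
        break

print(" ".join(context))  # -> think: 3+4=7, 7+2=9, answer=9
```

Each forward pass is still depth-limited, but the chain of generated steps lets the model carry partial results forward in the text itself.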

GPT-3: a disappointing paper

I'm sure there will be many papers to come about GPT-3. This one is already 70 pages long, and must have come not long after the training of the model was finished, so a lot of your questions probably haven't been figured out yet. I'd love to read some speculation on how, exactly, the few-shot learning works. Take the word scrambles, for instance. The unscrambled word will be represented by one or two tokens. The scrambled word will be represented by maybe five or six much less frequent tokens composed of a letter or two each. Neither set of tokens contains any information about what letters make up the word. Did it see enough word scrambles on the web to pick up the association between every set of tokens and all tokenizations of rearranged letters? That seems unlikely. So how is it doing it? Also, what is going on inside when it solves a math problem?
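The asymmetry being described can be shown with a toy tokenizer (the vocabulary below is invented for illustration, and greedy longest-match segmentation is only a crude stand-in for GPT-3's actual BPE merges): the common word compresses to one token, while the scrambled version shatters into many short fragments, and neither encoding exposes individual letters directly.

```python
# Toy illustration of the tokenization puzzle above. The vocabulary
# is hypothetical, not GPT-3's real BPE vocabulary.

mock_vocab = {"understand", "under", "stand", "un", "de", "rs", "ta", "nd",
              "u", "n", "d", "e", "r", "s", "t", "a"}

def greedy_tokenize(word):
    """Greedy longest-match segmentation, a crude stand-in for BPE."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in mock_vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # unknown-character fallback
            i += 1
    return tokens

print(greedy_tokenize("understand"))  # common word: a single token
print(greedy_tokenize("tsdnurande"))  # scrambled: many short fragments
```

So to unscramble, the model would somehow have to relate one opaque token to a long sequence of different opaque tokens, which is what makes the capability puzzling.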

Does GPT-2 Understand Anything?

Yeah, you're right. It seems like we both have a similar picture of what GPT-2 can and can't do, and are just using the word "understand" differently.

Does GPT-2 Understand Anything?

So would you say that GPT-2 has Comprehension of "recycling" but not Comprehension of "in favor of" and "against", because it doesn't show even the basic understanding that the latter pair are opposites?

Something like that, yes. I would say that the concept "recycling" is correctly linked to "the environment" by an "improves" relation, and that it Comprehends "recycling" and "the environment" pretty well. But some texts say that the "improves" relation is positive, and some texts say it is negative ("doesn't really improve") and so GPT-2 holds both contradictory beliefs about the relation simultaneously. Unlike humans, it doesn't try to maintain consistency in what it expresses, and doesn't express uncertainty properly. So we see what looks like waffling between contradictory strongly held opinions in the same sentence or paragraph.

As for whether this vocabulary is appropriate for discussing such an inhuman contraption, or whether it is too misleading to use, especially when talking to non-experts, I don't really know. I'm trying to go beyond descriptions like "GPT-2 doesn't understand what it is saying" or "GPT-2 understands what it is saying" to a more nuanced picture of what capabilities and internal conceptual structures are actually present and absent.

Does GPT-2 Understand Anything?

One way we might choose to draw these distinctions is using the technical vocabulary that teachers have developed. Reasoning about something is more than mere Comprehension: it would be called Application, Analysis or Synthesis, depending on how the reasoning is used.

GPT-2 actually can do a little bit of deductive reasoning, but it is not very good at it.
