Watching this video (Edit: around 25:50), I was pleased to see a specific prediction made by the person claiming that GPT-3 couldn't do reasoning: he claimed that GPT-3's ability to unscramble a word was not reasoning but mere pattern-matching, and that he would be very surprised if GPT-3 could scramble an arbitrary word. He said that if it could do that, that would be evidence that it really was reasoning.

Anyone know of other prominent (controversial, important) public predictions of what GPT-3 can or can't do? What about GPT-4? Or GPT-2? It would be interesting to compile a list and see if any patterns jump out, e.g. people consistently over- or under-estimating it.

That video is very long; can you explain what is meant by "scramble an arbitrary word"?

Sorry, I edited to include the timestamp.

I think what he means is: Do the reverse of the word-unscrambling test that they already did in the paper. So, prompt the model with something like this:

Scramble the following words:
Large --> egLar
Orange --> ngareO
Book --> koBo
Snow -->

And see if it answers with a string of four letters, S, n, o, and w, but not in that order.
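
A small check like the following could be used to score such completions automatically (a minimal sketch; the helper name is mine, not from the video or the paper):

def is_proper_scramble(word: str, answer: str) -> bool:
    # True if `answer` uses exactly the letters of `word`, in a different order.
    answer = answer.strip()
    return sorted(answer) == sorted(word) and answer != word

print(is_proper_scramble("Snow", "wnoS"))   # True
print(is_proper_scramble("Snow", "Snow"))   # False: not actually scrambled
print(is_proper_scramble("Snow", "Snoww"))  # False: wrong letters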

I expect GPT-2 can do that. (goes to talktotransformer.com) GPT-2 can do neither scrambling nor unscrambling. Oh well. I still expect that if GPT-3 can do unscrambling (as I silently assumed), it can do scrambling.

You should avoid using TTT (talktotransformer.com). I've seen a number of examples where someone used TTT to show GPT-2 'can't do something' and it worked fine when I tried locally. I'm not sure whether TTT has reverted to a small model to save money, is using bad sampling, or what; it also seems to really gimp the context window.
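
For anyone who wants to try this locally rather than through TTT, here is a minimal sketch using the Hugging Face transformers library (an assumption on my part; gwern's actual setup may differ, and "gpt2-xl" is that library's name for the 1.5b checkpoint):

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl").eval()

prompt = "Large --> egLar\nOrange --> ngareO\nBook --> koBo\nSnow -->"
ids = tok(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    out = model.generate(
        ids,
        max_new_tokens=10,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tok.eos_token_id,
    )

print(tok.decode(out[0, ids.shape[1]:]))  # print only the completion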

Fortunately for you, testing unscrambling on my local GPT-2-1.5b, with the context window fully packed with ~213 scramble examples (generated at random from my /usr/share/dict/words), shows zero sign of unscrambling ability. In this example, it should produce the easy unscramble of 'hashes', but it fails to do so and also fails to produce further scramble examples or indeed show any sign of understanding scrambles:

Prompt: "olsheleh'l=hellhole's;syutf=fusty;uuntabryelt=unutterably;yMnIctre=McIntyre;incvees=evinces;ezastilwCu=Clausewitz;lsptasah=asphalts;bnsg'iluila=bilingual's;mhoroarG=Gomorrah;uhtianbiato=habituation;aoigi'csnl=logician's;isliaynilitbov'=inviolability's;emrnrPegi=Preminger;hub=hub;sneov=ovens;oaioB'esnt=Boeotian's;htoetsasu=southeast;lgbraolu=globular;luGaetmsaan=Guatemalans;rdseecno=encoders;kehaner=hearken;ifeifr=iffier;eaFwks's=Fawkes's;siscote=cosiest;pSnairad=Spaniard;dasre=dares;yigsosp=gossipy;ardep=raped;ciolsuetid=solicitude;uudtcrsnutre=unstructured;ae'brsh=rehab's;thn'asE=Ethan's;tenicnilfg=inflecting;eciantn=ancient;c'slaredan=calendar's;a'Erlestc=Electra's;eesplrdutt=spluttered;oneDn=Donne;gte'hrtaohftus=afterthought's;hringscu=crushing;'wlosrehesssnts=worthlessness's;lolieemddbwes=disembowelled;sreJyes=Jerseys;iefezrns=frenzies;snr'ased=sander's;oegerusstm=gruesomest;gligyg=giggly;rhneocv=chevron;qruiouest=turquoise;'tMcshlile=Mitchell's;iuorgntunn=outrunning;lknba=blank;erars=rears;utrmble=tumbler;otadeurg=outraged;le'syoMd=Melody's;hsep'rpnio=hornpipe's;swhymoa=haymows;cz'luhtsS=Schultz's;lvsnraeed=lavenders;sdietvesar=advertises;samena=seaman;eemrros=remorse;hiaSfr=Sharif;ectunssonical=consultancies;aetspls=pastels;rsrkmuckae=muckrakers;tligluses=guiltless;s'siiennilsbiyt=insensibility's;ha=ah;sersisdta=disasters;uyiodols=odiously;Swa'ilihs=Swahili's;ruvAaedy=Ayurveda;itpsicek=pickiest;ntnsaece'=canteen's;loopyr=poorly;slusurot=lustrous;ldhraay=halyard;saldr'eo=ordeal's;np'Usjho=Upjohn's;osaiiitnnngtr=transitioning;eril=lire;ndaceos=deacons;setmlnmehl'ebis=embellishment's;fodcmortsi=discomfort;raflagaTr=Trafalgar;ostc'kigns=stocking's;fg'ans=fang's;cnaioofa'sid=aficionado's;asanicnbl=cannibals;sterkw=twerks;itnsercafs=craftiness;siiSs'ent=Sistine's;gnos'b=bong's;rstuoins'in=intrusion's;uantesnf=unfasten;adntilreatnmetpre=interdepartmental;qeybous's=obsequy's;nrsiorpse=prisoners;nblcaek=blacken;btlisuah=halibuts;s'yaj=jay's;gthsihrrbit=birthrights;uzpgiznl=puzzling;dbrnuinw=windburn;no'iceiavstirf=verification's;rsuolniyu=ruinously;kiektsccbsla'=stickleback's;nsopunsioono=nonpoisonous;osubreetoml=troublesome;hubsl=blush;wsordorssc=crosswords;dowhnwos=showdown;ddwwairn=windward;knvgnoico=convoking;gM=Mg;rrsiepe=reprise;ebonerr'yssby=boysenberry's;enmdialpt=implanted;tnauuiftloc=fluctuation;snstilneeai=inessential;euimp'snescvlsos=compulsiveness's;prtisa=rapist;ckeidk=kicked;itsefhis=fishiest;bpeyssalmh'=blasphemy's;isilme=simile;ditmi=timid;cgnreocruir=reoccurring;eemazc=eczema;rastosncimit=romanticists;irsdgle'=girdle's;fumsalhe=shameful;'ikrsE=Erik's;ooapltni=optional;tnynietrcua=uncertainty;oiomtrsze=motorizes;reicitra=criteria;ibalrsmane=lamebrains;reePndt'iss=President's;tutsoehlonb=buttonholes;mnreiat=raiment;rureib=rubier;s'ipgtnra=parting's;rsshpoehlopi=philosophers;emrilW=Wilmer;ckeroo=cooker;darbeetswe's=sweetbread's;siesdoif=ossified;srst'oF=Frost's;dseolvo'rh=holdover's;nrmsumbeao=membranous;e'rgdsdre=dredger's;siaiuglireetrr=irregularities;firra=friar;ieydcrtlu=credulity;eCra'smhsb=Chambers's;seoirgitnan=resignation;sngul=slung;hurartUq=Urquhart;canseevg=scavenge;cscabakkp=backpacks;'arrmasaM=Marmara's;glileyta=legality;rqneaantiu=quarantine;sseelhhslif=shellfishes;rseebrivd=riverbeds;laaeftyrimivf=affirmatively;lpoos=loops;iorclsisot=solicitors;sityrlse=sisterly;ghue=huge;asnagla=lasagna;ehdeaofr=forehead;goMo=Moog;itrncasoreimin=recriminations;aasnlem'mo=melanoma's;etpepirza=appetizer;arsc'er=racer's;trmsou'=tumor's;krwacetba=backwater;nyvibrliaa=invaria
bly;dutbacs=abducts;oclukn=unlock;iednal=nailed;estinrac=scantier;ilat=alit;mntialstiou=mutilations;amsnAle=Ameslan;inL=Lin;eissridfe=firesides;eplstee=steeple;srssiet=sisters;ndxoesasb=sandboxes;irtwssea=waitress;olyotmnunsoo=monotonously;lhytapeet=telepathy;eSkawrla'sc=Slackware's;wierodsw=widowers;ojts=jots;egsen=genes;uslelpto=pollutes;slfo'af=offal's;yastlstaciitl=statistically;hthcymylilra=rhythmically;ouaid=audio;'dosc=doc's;xev=vex;gdiweinl=wielding;pluredyet=reputedly;es'edwt=tweed's;gnigresin=resigning;edaislr=derails;hhssae="
======================================== SAMPLE 1 ========================================
[...]hhssae=as;ic';ou'. Like us=heir= em=hs;stupid=ac.sle]heir':hir;he;id=as'sy;' som= they scrambled=ir;ñ'mst' al'fris'us''iss;l looked to make; evly;ia=res'fr--ray;uac;ch; our; unlike= learned=isa; so'cause of other name;irs' forself; he;hir; shun;hir=asi'atically odd='. he-clava red=se; heat;we' heir'hir, he;taken=', Iv=his;ks=heir; he hared=ber; that'sir= to feel as']dis; he=I think 'ber; off' trembling; that's they, coward'ir; evils'ir=familiar': they are that's'ir;str:.'me'em; he-hem;L's thing'we'ig; no, so' because of their;ils; afraid vs; over; m' we have all; da'ag;he, may be; we understand;ra' he'it; was;he; he'cause of the sort of he' what's, they' the striking' He' be; it makes. be; it's' he is' d:h; it was, he'cause; an'cause= ves' was to' sc; he feels' he' imm' he' understand' thing= have; he' different', obviously. be' havey= he' become; he' took; it is a.' is cause mistake is very' great. it's as I was- because he' they or how; they are all we have; because after; sc; he; man' it was; of he' were not being going; he' he is, he' cause; was happen; it was because we can; his cause of brav' that of; he' to; since' we fear of. heat, that and his, because of, we; we are like I can' try; his is caused,' be because he' cause; it was, he's' he' to be, it because. it is a cause, we are, we see. we are; as long and mistake' and he' grain' because we are contr;
================================================================================

I ran this a dozen times due to sampling variation, and it's all pretty much the same, just gibberish completions, as GPT-2-1.5b clearly has no idea what to do with this apparently gibberish input.
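
For reference, here is a minimal sketch of how scramble pairs like the ones packed into that prompt can be generated from a Unix word list; this is my own reconstruction of the described procedure, not the actual script used:

import random

# Read the system word list and build "scrambled=original" pairs, as in the prompt above.
with open("/usr/share/dict/words") as f:
    words = [w.strip() for w in f if w.strip()]

def scramble(word: str) -> str:
    letters = list(word)
    random.shuffle(letters)      # may occasionally return the word unchanged
    return "".join(letters)

pairs = [f"{scramble(w)}={w}" for w in random.sample(words, 213)]
prompt = ";".join(pairs) + ";hhssae="  # end with the scramble the model is asked to solve
print(prompt[:300])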

I think your formatting, with the semicolons and the equals signs, has confused the transformer. All the strange words, plurals, and weird possessives may also be confusing. On TTT, if I use common words and switch to a colon and a linebreak as the separators, it at least picks up that the pattern is 'gibberish: word'.

For example:

kobo: book
ntthrgeS: Strength
rachi: chair
sviion: vision
drao: road
ntiket: kitten
dewdngi: wedding
lsptaah: asphalt
veon: oven
htoetsasu: southeast
rdeecno: encoder
lsbaap1: phonetics
nekmet: chic-lookin'
zhafut: crinkly
lvtea: question mark
cnhaetn: decorated
gelsek: ribbon
odrcaa: ribbon
nepci: ball
plel: half
cls: winged
redoz: brightness
star: town
moriub:

Now that you have access to GPT-3, would you mind seeing whether it can scramble words? I'm dying to know whether this prediction was correct or not.

So it was pointed out to me that the GPT-3 paper doesn't report it solving full anagrams, just partial anagrams: first/last letter kept the same.

None of the models can reverse the letters in a word. ... Finally, it is worth adding that solving these tasks requires character-level manipulations, whereas our BPE encoding operates on significant fractions of a word (on average ∼0.7 words per token), so from the LM’s perspective succeeding at these tasks involves not just manipulating BPE tokens but understanding and pulling apart their substructure. Also, CL, A1, and A2 are not bijective (that is, the unscrambled word is not a deterministic function of the scrambled word), requiring the model to perform some search to find the correct unscrambling. Thus, the skills involved appear to require non-trivial pattern-matching and computation.

This is something that seems to be a problem with a number of other things I've tried, like puns & rhyming. GPT-3 can still brute-force them, but I think it would do a lot better if it could just work on characters instead...
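
As an aside, the non-bijectivity the paper mentions is easy to check directly: many letter multisets correspond to several dictionary words, so unscrambling genuinely requires some search. A minimal sketch, again assuming a Unix word list:

from collections import defaultdict

# Group dictionary words by their sorted letters; any group with more than one
# member is a family of mutual anagrams, so a scrambled string has no unique answer.
groups = defaultdict(set)
with open("/usr/share/dict/words") as f:
    for line in f:
        w = line.strip().lower()
        if w.isalpha():
            groups["".join(sorted(w))].add(w)

ambiguous = [ws for ws in groups.values() if len(ws) > 1]
print(len(ambiguous), "letter multisets map to more than one word")
print(max(ambiguous, key=len))  # one of the largest anagram families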

Anyway, repeating the above prompt with GPT-3, I get:

  • =hassa;foi=foil;nriiutnc=instructions;
  • =hassa;tsoidr=s
  • =hisses;ejolm=j
  • =hisses;taht=that;
  • =hissed;nf'eos=

While not perfect, they at least do show some attempt to unscramble the letters (BPE(s)) of 'hhssae' to get an English word, and it does construct its own anagrams (particularly, taht=that).

Given the BPE issue, I wondered whether forcing it to encode each anagram the same way would improve performance. It's hard to learn the right permutation when, at the raw-input level, the input/output aren't even permutations of each other (because they map to completely different BPE encodings, depending on what shortcuts the BPE encoding provides for the real word). BPE is space-separated and includes fallback BPE integers for individual letters/Unicode points, so adding spaces should ensure that every 'h' in the scramble maps to the same 'h' BPE in the unscramble, etc. (a short sketch of this spacing transformation follows the sample completions below). So the prompt now looks like this:

...w i e r o d s w = w i d o w e r s ; o j t s = j o t s ; e g s e n = g e n e s ; u s l e l p t o = p o l l u t e s ; s l f o  a f = o f f a l  s ; y a s t l s t a c i i t l = s t a t i s t i c a l l y ; h t h c y m y l i l r a = r h y t h m i c a l l y ; o u a i d = a u d i o ;  d o s c = d o c  s ; x e v = v e x ; g d i w e i n l = w i e l d i n g ; p l u r e d y e t = r e p u t e d l y ; e s  e d w t = t w e e d  s ; g n i g r e s i n = r e s i g n i n g ; e d a i s l r = d e r a i l s ; h h s s a e =

This gives me completions like

  • s a e s h ; s l s a i = s a i l s ; n e u s e r = u r n e r s ; s e r a s n a = s a n a r i a s ; r s d a a i i s = d i s a s t e r s ; i r s t o i r u l e = r u l e r s ; w i s m a n r = r e s m a n ; s t a t i t s i n i t a = s i t a t i s t i c a l l y ; s l s t o r e = s t o r e s ; a i t s l i s = s i l i c a ; s o t s e d = s e d s ; e s i s t l r = s i l i t e r a t e ; o s i t s e a = s a e s h ; o c a l l = c a l l s ; l n i l o o t
  • h h s s a e = h u s s a r ; s t o i a l e n d o t = e n d o t o n i c s ; c l r e s e = r e s e n t s ; s s a e = s a a e ; r s s a i a s = a i s s a i s ; s w s s a i a s = a i s s a i s ; r l o a e i e o = o i l e a f ; s s e s a i i n s a = a i n s a i s s e s ; e s e i s i i e = i s i e s
  • = s a i s s a i n ; s i t i a l = s i t u a l ; s e i s i a = s i a s ; i s s i e t = s i s t e r ; e
  • = ia s h e s ; e r r a t = r a t t e r s ;
  • = ia a s s ;
  • = s a h s ;
  • = s h a r e s ;
  • = s h a m e s ;
  • = s h a m e ;
  • = s h i e l d s ;

To be fair, I went back and checked whether the spaced version would suddenly work much better with GPT-2-1.5b. With spaces, GPT-2-1.5b does at least pick up on the formatting pattern, but it still completely fails to solve anagrams or generate any valid ones of its own; instead, it just copies the input repeatedly:

e s  e d w t = t w e e d  s ; g n i g r e s i n = r e s i g n i n g ; e d a i s l r = d e r a i l s ; h h s s a e = t i c l e s ; d a t a l y ; e s  e d w t = t w e e d  s ; g n i g r e s i n = r e s i g n i n g ; e d a i s l r = d e r a i l s ; h h s s a e = v s . T w e r e t h e l y ; e s  e d w t = t w e e d  s ; g n i g r e s i n = r e s i g n i n g ; e d a i s l r = d e r a i l s ; h h s s a e = r e s i g n i n g = l l y ; e s  e d w t = t w e e d  s ; g n i g r e s i n = r e s i g n i n g ; e d a i s l r = d e r a i l s ; h h s s a e = s u b j e c t = d e s l y ; e s  e d w t = t w e e d  s ; g n i g r e s i n = r e s i g n i n g ; e d a i s l r = d e r a i l s ; h h s s a e = n e r w i d g i n g a l y ; e s  e d w t = t w e e d  s ; g n i g r e s i n = r e s i g n i n g ; e d a i s l r = d e r a i l s ; h h s s a e = n o t o n l i g h t e l y ; e s  e d w t = t w e e d  s ; g n i g r e s i n = r e s i g n i n g ; e d a i s l r = d e r a i l s ; h h s s a e = b e n r a c t o r e l l y ; e s  e d w t = t w e e d  s ; g n i g r e s i n = r e s i g n i n g ; e d a i s l r = d e r a i l s ; h h s s a e = g r u g t = h e s s = l y ; e s  e d w t = t w e e d  s ; g n i g r e s i n = r e s i g n i n g ; e d a i s l r = d e r a i l s ; h h s s a e = h o w e v e r [...etc...]

So my conclusion is: I still think GPT-3 does far better on anagrams than GPT-2, and you could say that GPT-3 is 'solving' anagrams to some degree (in a way that GPT-2 does not at all); and that the BPE encoding contributes to the difficulty both have in understanding the task.

Thanks! Maybe we could get around the BPE encoding by doing it with sentences instead of words? Like, "Please scramble the word order in the following sentences: I ate a nice juicy steak. = steak nice a juicy ate I. Happy people usually sleep well at night. = people sleep well usually at night happy. Orange juice is like wine for amateurs. = " Or would that be less impressive for some reason?

An issue there is that you would be eating further into your context window by expanding things out: each of those words will take 1 or more BPEs, while the letter-by-letter approach is, I'm reasonably sure, guaranteed to be 1 letter = 1 BPE. You also make it more likely that the decoding of the answer will screw up: the more BPEs it takes to express an answer, the more likely the top-k or top-p sampling will stochastically screw up an otherwise obviously correct answer. (You can see the stochasticity at play in the completions: "shame" vs "shames", e.g.)
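
If someone wants to check the token-count side of this, here is a minimal sketch using the Hugging Face GPT-2 tokenizer (which, as far as I know, uses the same BPE vocabulary as the GPT-3 API models, though that is an assumption):

from transformers import GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")

samples = [
    "h h s s a e = h a s h e s",                    # letter-spaced pair
    "hhssae=hashes",                                # unspaced pair
    "Orange juice is like wine for amateurs. = ",   # sentence-level example
]
for text in samples:
    print(len(tok.encode(text)), repr(text))
# The spaced pair costs roughly one token per character; sentence-level examples
# spend one or more tokens per word, so each example eats more of the context window.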