In case it's useful, I have a threat model writeup here: https://www.danieldewey.net/risk/case.html. (I haven't linked it from many places, and I'm trying to spread it around when I see the chance.)
If I wanted to explain these results, I think I would say something like:
GPT-3 has been trained to predict what the next token would be if the prompt appeared in its dataset (text from the internet). So, if GPT-3 has learned well, it will "talk as if symbols are grounded" when it predicts that the internet-text would "talk as if symbols are grounded" following the given prompt, and not if not.
It's hard to use this explanation to predict what GPT-3 will do on edge cases, but this would lead me to expect that GPT-3 will more often "talk as if symbols are grounded" when the prompt is a common prose format (e.g. stories, articles, forum posts), and less often when the prompt is most similar to non-symbol-groundy things in the dataset (e.g. poetry) or not that similar to anything in the dataset.
I think your examples here broadly fit that explanation, though it feels like a shaky just-so story:
I don't see how to test this theory, but it seems like it has to be kind of tautologically correct -- predicting the next token is what GPT-3 was trained to do, right?
To find out how adept GPT-3 is at continuing prompts that depend on common knowledge about everyday objects, or object permanence, or logical reasoning, maybe you could create prompts that are as close as possible to what appears in the dataset, then see if it fails those prompts more often than average? I don't think there's a lot we can conclude from unusual-looking prompts.
I'm curious what you think of this -- maybe it misses the point of your post?
(I'm not sure exactly what you mean when you say "symbol grounding", but I'm taking it to mean something like "the words describe objects that have common-sense properties, and future words will continue this pattern".)
Thank you for writing this! I usually have to find a few different angles to look at a paper from before I feel like I understand it, and this kind of thing is super helpful.
I do think both of those cases fit into the framework fine (unless I'm misunderstanding what you have in mind):
In other words, if we imagine a model misbehaving in the wild, I think it'll usually either be the case that (1) it behaved that way during training but we didn't notice the badness (evaluation breakdown), or (2) we didn't train it on a similar enough situation (high-level distribution shift).
As we move further away from standard DL training practices, we could see failure modes that don't fit into these two categories -- e.g. there could be some bad fixed-point behaviors in amplification that aren't productively thought of as "evaluation breakdown" or "high-level distribution shift." But these two categories do seem like the most obvious ways that current DL practice could produce systematically harmful behavior, and I think they take up a pretty large part of the space of possible failures.
(ETA: I want to reiterate that these two problems are restatements of earlier thinking, esp. by Paul and Evan, and not ideas I'm claiming are new at all; I'm using my own terms for them because "inner" and "outer" alignment have different meanings for different people.)
I'm really enjoying Project Hail Mary, the new book from The Martian author Andy Weir, and I think other LW readers might as well.
Avoid spoilers harder than you normally would -- there are a lot of spoilers online that are easy to hit by accident.
Why you might like it:
Thanks for the info -- I'm reading through your posts now! I'm sorry your experience was / still is so terrible. Knock on wood, I'm not having as bad a time so far -- I wonder if the most recent booster helped me, or if it's just luck (different strain, different immune system, etc.).
Especially good to know how easy it was to pass to your spouse -- I'll do my best to take that into account.
(I strongly agree w/ your post on Paxlovid, by the way -- it was a game changer for how bad my symptoms were, I'm very glad I could get it.)