Daniel Dewey


More GPT-3 and symbol grounding

If I wanted to explain these results, I think I would say something like:

GPT-3 has been trained to predict what the next token would be if the prompt appeared in its dataset (text from the internet). So, if GPT-3 has learned well, it will "talk as if symbols are grounded" when it predicts that internet text would "talk as if symbols are grounded" following the given prompt, and not otherwise.

It's hard to use this explanation to predict what GPT-3 will do on edge cases, but this would lead me to expect that GPT-3 will more often "talk as if symbols are grounded" when the prompt is a common prose format (e.g. stories, articles, forum posts), and less often when the prompt is most similar to non-symbol-groundy things in the dataset (e.g. poetry) or not that similar to anything in the dataset.
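The prediction-matching story above can be illustrated with a toy next-token predictor: a model trained only to match the empirical distribution of continuations will "talk as if symbols are grounded" exactly as often as its training text does, and is unconstrained on contexts it never saw. Everything here (the bigram counts, the tiny corpus) is a hypothetical stand-in for GPT-3's transformer and dataset, just to make the intuition concrete:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count next-token frequencies: a crude stand-in for a learned
    conditional distribution P(next token | context)."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequent continuation seen in training.
    The model 'grounds symbols' only insofar as its training
    text did so after similar contexts."""
    if token not in counts:
        return None  # nothing similar in the dataset: behavior is unconstrained
    return counts[token].most_common(1)[0][0]

corpus = "the worker went to work . the worker went home ."
model = train_bigram(corpus)
print(predict_next(model, "worker"))  # "went" -- the only continuation seen
print(predict_next(model, "zombie"))  # None -- no similar context in training
```

The point of the sketch: on in-distribution contexts the prediction is pinned down by the training text; on out-of-distribution contexts (weird repetition, poem-like prompts) the training objective alone tells you little about what comes out.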

I think your examples here broadly fit that explanation, though it feels like a shaky just-so story:

  • If I saw the first "undead worker" prompt on a webpage, I would think "hm, normal stories don't have this kind of weird repetition -- is this a poem or a metaphor or something? A joke?" I wouldn't put 97% on her continuing to go to work, but I wouldn't be surprised if she did -- maybe 30%-50%? 
  • The second undead worker prompt looks a lot more like a normal kind of story, so I'm not that surprised that GPT's continuation looked more symbol-groundy -- if I saw that text on the internet, I would still think there was a reasonable chance that it's some kind of joke, but not as high a chance as for the first prompt.
  • IDK about the map thing -- this looks like a case where GPT just hadn't seen enough training text in the general vicinity of the prompt to do very well? It's definitely interesting that it figured out the command format, but didn't seem to figure out the layout of the situation.

I don't see how to test this theory, but it seems like it has to be kind of tautologically correct -- predicting next token is what GPT-3 was trained to do, right?

Maybe to find out how adept GPT-3 is at continuing prompts that depend on common knowledge about common objects, or object permanence, or logical reasoning, you could create prompts that are as close as possible to what appears in the dataset, then see if it fails those prompts more than average? I don't think there's a lot we can conclude from unusual-looking prompts.
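The proposed test could be run as a simple comparison of failure rates on two prompt sets: one set of in-distribution prompts that hinge on common-sense knowledge or object permanence, and a baseline set of ordinary in-distribution prompts. The `toy_oracle` grader and the prompt lists below are purely hypothetical placeholders; a real version would call the GPT-3 API and grade continuations by hand or with a rubric:

```python
# Sketch of the proposed test: compare failure rates on in-distribution
# prompts that require common-sense/object-permanence reasoning vs. a
# baseline set of ordinary prompts.

def failure_rate(prompts, answers_correctly):
    """Fraction of prompts the model fails, given a grading function."""
    failures = sum(1 for p in prompts if not answers_correctly(p))
    return failures / len(prompts)

# Hypothetical stand-in grader, for illustration only: pretend the model
# fails exactly the prompts tagged as needing object permanence.
def toy_oracle(prompt):
    return "object permanence" not in prompt

common_sense_prompts = [
    "She put the keys in the drawer. Later, the keys were in the (object permanence)",
    "He dropped the glass and it (object permanence)",
]
baseline_prompts = [
    "The capital of France is",
    "Two plus two is",
]

print(failure_rate(common_sense_prompts, toy_oracle))  # 1.0
print(failure_rate(baseline_prompts, toy_oracle))      # 0.0
```

If the common-sense set fails noticeably more often than the baseline while both stay close to the training distribution, that would be evidence about GPT-3's grasp of these concepts that unusual-looking prompts can't give us.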

I'm curious what you think of this -- maybe it misses the point of your post?

(I'm not sure exactly what you mean when you say "symbol grounding", but I'm taking it to mean something like "the words describe objects that have common-sense properties, and future words will continue this pattern".)

ELK First Round Contest Winners

Nice work everyone! Very excited to see the results.

EfficientZero: How It Works

Thank you for writing this! I usually have to find a few different angles to look at a paper from before I feel like I understand it, and this kind of thing is super helpful.

My understanding of the alignment problem

Thanks, Richard!

I do think both of those cases fit into the framework fine (unless I'm misunderstanding what you have in mind):

  • In the first case, we're training a model in an environment. As it gets more capable, it reaches a point where it can find new, harmful behaviors in some set of situations. Our worries are now that (1) we can't recognize that behavior as harmful, or (2) we don't visit those situations during training, but they do in fact come up in practice (distribution shift). If we say "but the version of the model we had yesterday, before all this additional training, didn't behave badly in this situation!", that just seems like sloppy training work -- it's not clear why we should expect the behavior of an earlier version of a model to bind a later version.
  • In the second case, it sounds like you're imagining us watching evolution and thinking "let's evolve humans that are reproductively fit, but aren't dangerous to other species." We train the humans a lot in the ancestral environment, and see that they don't hurt other species much. But then, the humans change the environment a lot, and in the new situations they create, they hurt other species a lot. In this case, I think it's pretty clear that the distribution has shifted. We might wish we'd done something earlier to certify that humans wouldn't hurt animals a lot under any circumstance, or we'd deployed humans in some sandbox so we could keep the high-level distribution of situations the same, or dealt with high-level distribution shift some other way.

In other words, if we imagine a model misbehaving in the wild, I think it'll usually either be the case that (1) it behaved that way during training but we didn't notice the badness (evaluation breakdown), or (2) we didn't train it on a similar enough situation (high-level distribution shift).

As we move further away from standard DL training practices, we could see failure modes that don't fit into these two categories -- e.g. there could be some bad fixed-point behaviors in amplification that aren't productively thought of as "evaluation breakdown" or "high-level distribution shift."  But these two categories do seem like the most obvious ways that current DL practice could produce systematically harmful behavior, and I think they take up a pretty large part of the space of possible failures.

(ETA: I want to reiterate that these two problems are restatements of earlier thinking, esp. by Paul and Evan, and not ideas I'm claiming are new at all; I'm using my own terms for them because "inner" and "outer" alignment mean different things to different people.)

Daniel Dewey's Shortform

I'm really enjoying Project Hail Mary, the new book from The Martian author Andy Weir, and I think other LW readers might as well. 

Avoid spoilers harder than you normally would -- there are a lot of spoilers online that are easy to hit by accident.

Why you might like it:

  • Lots of figuring things out on the fly; math, science, and general hypothesizing / problem-solving exercises. Fun to stop and try to figure out on your own, or just relax and watch the character solve them.
  • Requires a lot less physics knowledge than some similar books to "play along at home" (e.g. I could do most of what I've seen so far, vs. something like Egan's Orthogonal, which needs more physics than I know).
  • Nice escapism about how Earth responds to global threats :)