For example, if I want to experiment with a steering technique, it would be useful to have a language model that is small and capable, but not so heavily finetuned that it becomes inflexible. (Or, ideally, a model that has both a finetuned and a non-finetuned variant.)

I've seen some people use GPT-2. Is that recommended? Are there any alternatives?

2 Answers

LawrenceC


If you care about having both the instruction-finetuned variant and the base model, I think I'd go with one of the smaller LLaMAs (7B/13B). Importantly, they fit comfortably on a single 40/80 GB A100, which saves a lot of hassle. There are also a bajillion fine-tuned versions of them if you want to experiment.
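
For concreteness, here is a minimal sketch of loading a base/fine-tuned pair with the Hugging Face transformers API. The model IDs are assumptions: `huggyllama/llama-7b` is an unofficial mirror of the LLaMA-7B base weights, and the fine-tuned ID is a placeholder for whichever instruction-tuned variant you pick.

```python
# Minimal sketch (assumptions noted): load a LLaMA base model and a
# fine-tuned variant side by side with Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_ID = "huggyllama/llama-7b"          # assumed mirror of the LLaMA-7B base weights
TUNED_ID = "your-org/llama-7b-instruct"  # placeholder: any instruction-tuned variant

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)

# float16 keeps a 7B model well within a single 40/80 GB A100;
# device_map="auto" requires the accelerate package to be installed.
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.float16, device_map="auto"
)
tuned_model = AutoModelForCausalLM.from_pretrained(
    TUNED_ID, torch_dtype=torch.float16, device_map="auto"
)
```

Having both models loaded this way makes it straightforward to compare the effect of a steering intervention on the base model versus the fine-tuned one.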

Tao Lin


Pythia is meant for this: it's a suite of base models (70M up to 12B parameters) trained on the same data in the same order, with intermediate training checkpoints publicly released for exactly this kind of research.
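
A minimal sketch of loading a Pythia model, again assuming the standard Hugging Face transformers API; the size (1.4B) and the `step3000` revision are just illustrative choices.

```python
# Minimal sketch: load a Pythia checkpoint, and optionally an earlier
# training step, with Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "EleutherAI/pythia-1.4b"  # sizes range from 70m up to 12b

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.float16)

# Pythia publishes intermediate training checkpoints as git revisions
# (e.g. revision="step3000"), useful for training-dynamics experiments.
early_model = AutoModelForCausalLM.from_pretrained(MODEL_ID, revision="step3000")
```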

Aren't the larger Pythias pretty undertrained?