* Click here to open a live research preview where you can try interventions using this SAE. This is a follow-up to a previous post on finding interpretable and steerable features in CLIP. Motivation Modern image diffusion models often use CLIP to condition generation. Put simply, users use...
We trained an SAE to find sparse features in CLIP image embeddings and found many meaningful, interpretable, and steerable features. Steering image diffusion with these features works surprisingly well, yielding predictable, high-quality generations. You can see the feature library here. We also have an intervention playground you can try....
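As a rough sketch of what steering with an SAE feature might look like: the idea is to add a scaled feature direction (e.g., a decoder column of the trained SAE) to a CLIP image embedding before it conditions the diffusion model. The helper name, the embedding dimension, and the scaling scheme below are all illustrative assumptions, not the post's actual implementation.

```python
import numpy as np

def steer_embedding(embedding, feature_direction, alpha=3.0):
    """Add a scaled SAE feature direction to a CLIP image embedding.

    Hypothetical helper: `feature_direction` would come from the trained
    SAE's decoder; `alpha` controls steering strength. We normalize the
    direction so `alpha` is comparable across features.
    """
    direction = feature_direction / np.linalg.norm(feature_direction)
    return embedding + alpha * direction

# Toy usage with random vectors standing in for real CLIP embeddings.
rng = np.random.default_rng(0)
emb = rng.normal(size=768)   # 768 is a common CLIP embedding width (assumed)
feat = rng.normal(size=768)
steered = steer_embedding(emb, feat, alpha=3.0)
```

The steered embedding would then be passed to the diffusion model in place of the original conditioning vector.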
By using GPT-3 to evaluate how well steering vectors perform, we can automatically score a machine-generated set of steering vectors. We also find that combining steering vectors that succeed in different ways yields a better and more general steering vector than any of the originals. Introduction Steering...
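One simple way to combine steering vectors that succeed in different ways is a (weighted) average, renormalized to unit length. This is a minimal sketch under that assumption; the post does not specify the combination rule, and the function name and weighting scheme here are hypothetical.

```python
import numpy as np

def combine_steering_vectors(vectors, weights=None):
    """Combine several steering vectors into one general direction.

    Assumed approach: take a weighted average of the input vectors
    (weights could come from automatic GPT-based scores), then
    renormalize so the result has unit norm.
    """
    vectors = np.asarray(vectors, dtype=float)
    if weights is None:
        weights = np.ones(len(vectors))
    combined = np.average(vectors, axis=0, weights=weights)
    return combined / np.linalg.norm(combined)

# Toy usage: two orthogonal candidate vectors combine into a unit
# vector along their bisector.
combined = combine_steering_vectors([[1.0, 0.0], [0.0, 1.0]])
```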
Voice input has dramatically increased my writing output as well as the quality of my ideas. But it's not as easy as turning on voice dictation and writing an essay from start to finish. There is a particular method of doing voice input that I...
A recently published study found that the AstraZeneca vaccine does not protect against the "South African variant" of SARS-CoV-2, which is a worrying outcome, since the authors believe a specifically targeted new vaccine probably cannot be manufactured before the end of 2021. > CONCLUSIONS > A...