
I started this essay last year and procrastinated on completing it for a long time, until the recent GPT-3 announcement gave me the motivation to finish it up.

If you are familiar with my book, you will notice some of the same ideas, expressed with different emphasis. I congratulate myself a bit on predicting some of the key aspects of the GPT-3 breakthrough (data annotation doesn't scale; instead learn highly complex interior models from raw data).

I would appreciate constructive feedback and signal-boosting.

Sorry if it's a stupid question, but what's the difference (if any) between "developing interior models" and "self-supervised learning" as described/advocated by Yann LeCun?

Not a stupid question; this issue is actually addressed in the essay, in the section on interior modeling vs. unsupervised learning. The latter is very vague and general, while the former is much more specific and also intrinsically difficult. The difficulty and precision of the objective make it much better as a goal for a research community.

Your reply seems to treat "unsupervised learning" and "self-supervised learning" as synonyms, but I don't think they are; self-supervised learning is more specific. Things like clustering, dimensionality reduction, and feature extraction are not examples of self-supervised learning, as far as I know. Predicting the next few seconds of a video, predicting the next word of a text, or deleting a word from the middle of a sentence and training your model to guess it would all be examples of self-supervised learning.
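
To make the distinction concrete, here is a minimal, purely illustrative sketch of the self-supervised recipe: the "label" is carved out of the raw data itself, with no human annotation. Everything here (the toy corpus, the bigram counter) is my own hypothetical example, not anything from the essay or from LeCun's work:

```python
# Toy illustration of self-supervised learning: the target word is
# taken from the raw data itself, so no human labeling is needed.
# We "train" bigram counts on raw text, then mask a word out of a
# sentence and try to recover it from its left neighbor.

from collections import Counter, defaultdict

# Hypothetical raw corpus; any unannotated text would do.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# "Training": for each word, count which words follow it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def guess_masked(left_word):
    """Predict the masked word from its left neighbor alone."""
    candidates = following.get(left_word)
    return candidates.most_common(1)[0][0] if candidates else None

# Self-supervised task: delete a word, then try to recover it.
sentence = "the dog sat on the mat".split()
mask_index = 2                    # hide "sat"
target = sentence[mask_index]     # the label comes from the data itself
prediction = guess_masked(sentence[mask_index - 1])
print(f"masked: {target!r}  predicted: {prediction!r}")
```

A real self-supervised system would replace the bigram counter with a large neural model and predict from far richer context, but the defining feature is the same: the supervision signal is manufactured from the raw data, which is exactly what clustering or dimensionality reduction (classic unsupervised methods) don't do.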