Review
The prior is not the gears. The distribution of authors a transformer learns to imitate doesn't necessarily determine the collection of features of situations it learns to track, the features that pre-training refines and fine-tuning assembles into agency. Some features classify common kinds of authors, but the features of human cognition found in any rarer kind of author are also represented to some degree, there to be learned and to become available as ingredients for fine-tuning. If a sufficiently powerful LLM learns to predict Carlsen, it doesn't matter that most chess players are worse than Carlsen at chess: the features are there to be found.
There has been a lot of talk about how GPT-4 and its ilk represent “a mirror of the human soul” or a kind of distillation of our nature. The assumption here is that mankind’s corpus captures our essence and that large language models (LLMs) are grokking this essence as they “read” the sum total of our writing. There are many reasons why this is a bad assumption, but here are two that I find interesting.
While many of the noblest minds ever to grace the planet are counted among the ranks of writers, we shouldn’t imagine that these outliers represent the norm (beware availability bias). Comparing all those who have ever taken up the quill or pen (the scholars, the journalists, the propagandists, the second-rate novelists) to all those who haven’t (at least not in any way that made it into the LLM corpus), I think we would find certain types of people overrepresented in the former group: the loudmouths, the blowhards, the know-it-alls, the self-aggrandizers, the attention-seekers, the fanatics (but of course none of these labels apply to yours truly). Consider the people who spend the most time bloviating into the digital aether: are these the wisest and most level-headed among us?
If you met someone who embodied the “soul of the writer” you would probably think they were an asshole (…so don’t be surprised when an LLM acts like one).
Assholes, maybe, but at least your average writer is more intelligent than the average person, right? Sure, if you believe that symbolic reasoning is the highest form of human intelligence.
What about the intelligence of the body—the cunning of the muscle, the intuition of the gut, the wisdom of the loins? Conscious cognition is only the tip of the iceberg, a small cap on a much vaster intellect that permeates our entire being.
This argument is known as Moravec’s paradox. A more explicit formulation: the high-level reasoning showcased in our writing takes comparatively little computation, while the low-level perceptual and sensorimotor skills we take for granted, honed by millions of years of evolution, demand enormous computational resources.
So here is another reason to be worried about artificial intelligence: it represents the concentrated extract of the “dumbest” (i.e., the least evolutionarily optimized) part of our minds. In creating LLMs, we’ve essentially taken a buggy prototype, scaled it up, and rushed it to market.
If AI is to be our downfall, then creating it may just turn out to be the stupidest thing we know-it-alls ever did.