How do LLMs and the scaling laws make you update in this way? They make me update in the opposite direction. For example, I also believe that the human body is optimized for tool use and scaling, precisely because of the gene-culture coevolution that Henrich describes. Without culture, this optimization would not have occurred. Our bodies are cultural artifacts.
Cultural learning is an integral part of the scaling laws. The scaling laws show that indefinitely scaling the number of parameters in a model doesn't work on its own; the training data has to scale as well. The implication is that the data is a kind of cultural artifact, and the quality of that artifact determines the capabilities of the resulting model. LLMs work because of the accumulated culture that goes into them. This is no less true for “thinking” models like o1 and o3, because the way they think is heavily shaped by the training data. Thinking models do so well because the training data makes thinking possible at all, not because their thinking is somehow beyond that data: they can think because the culture they absorbed includes many examples of thinking. Moreover, Reinforcement Learning (RL) determines the capabilities of thinking models far less than Supervised Learning (SL) does, because, firstly, less compute is spent on RL than on SL, and, secondly, RL is much less sample-efficient than SL.
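To make the parameters-versus-data point concrete, here is a minimal sketch using the Chinchilla-style parametric fit L(N, D) = E + A/N^α + B/D^β from Hoffmann et al. (2022). The constants are the approximate published fits and are only illustrative:

```python
# Chinchilla-style parametric scaling law: L(N, D) = E + A/N**alpha + B/D**beta
# (Hoffmann et al., 2022). Constants below are approximate published fits,
# included purely for illustration.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for a model with n_params parameters
    trained on n_tokens tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Holding the dataset fixed while scaling parameters runs into the B/D**beta
# floor; scaling the data alongside the model (roughly D ~ 20N at the
# compute-optimum) keeps improving the loss.
D_FIXED = 300e9  # 300B tokens, held constant
for n in (1e9, 1e10, 1e11, 1e12):
    print(f"N={n:.0e}  fixed data: {predicted_loss(n, D_FIXED):.3f}"
          f"  data scaled with N: {predicted_loss(n, 20 * n):.3f}")
```

The exact constants don't matter here; the point is just that scaling parameters without scaling the data hits a floor, which is the sense in which the (cultural) training corpus, not the parameter count alone, carries the capability.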
Current LLMs can only do sequential reasoning of any kind by adjusting their activations, not their weights, and this is probably not enough to derive and internalize new concepts à la C.
For me this is the key bit which makes me update towards your thesis.
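To spell out the distinction the quoted passage relies on, here is a minimal PyTorch sketch with a hypothetical stand-in model (not from the post, and not any particular LLM): the forward pass that carries in-context reasoning only produces new activations, while the weights that would have to change for a new concept to be internalized stay fixed unless an explicit training step is taken.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 16)      # stand-in for a frozen LLM
context = torch.randn(1, 16)   # stand-in for a long chain-of-thought prompt

weights_before = model.weight.detach().clone()
with torch.no_grad():
    _activations = model(context)  # in-context "reasoning": forward pass only
assert torch.equal(weights_before, model.weight)  # weights unchanged

# Only an explicit optimization step (SL or RL fine-tuning) moves the weights,
# i.e., internalizes anything beyond the current context.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss = model(context).pow(2).mean()
loss.backward()
optimizer.step()
assert not torch.equal(weights_before, model.weight)
```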
This is indeed an interesting sociological breakdown of the “movement”, for lack of a better word.
I think the injection of the author’s beliefs about whether or not short timelines are correct is distracting from the central point. For example, the author states the following.
there is no good argument for when [AGI] might be built.
This is a bad argument against worrying about short timelines, bordering on intellectual dishonesty. Building anti-asteroid defenses is a good idea even if you don’t know that one is going to hit us within the next year.
The argument that it’s better to have AGI appear sooner rather than later because institutions are slowly breaking down is an interesting one. It’s also nakedly accelerationist, which is strangely inconsistent with the argument that AGI is not coming soon, and in my opinion very naïve.
Besides that, I think it’s generally a good take on the state of the movement, i.e., like pretty much any social movement, it has a serious problem with coherence and collateral damage, and it’s not clear whether there’s any positive effect.
Ah, I see now. Thank you! I remember reading this discussion before and agree with your viewpoint that he is still directionally correct.
he apparently faked some of his evidence
Would be happy to hear more about this. Got any links? A quick Google search doesn’t turn up anything.
You talk about personhood in a moral and technical sense, which is important, but I think the legal and economic senses of personhood also need to be taken into account. Let me try to explain.
I work for a company where there’s a lot of white-collar busywork going on. I’ve come to realize that the value of this busywork is not so much the work itself (indeed a lot of it is done by fresh graduates and interns with little to no experience), but the fact that the company can bear responsibility for the work due to its somehow good reputation (something something respectability cascades), i.e., “Nobody ever got fired for hiring them”. There is not a lot of incentive to automate any of this work, even though I can personally attest that there is a lot of low-hanging fruit. (A respected senior colleague of mine plainly stated to me, privately, that most of it is bullshit jobs.)
By my estimation, “bearing responsibility” in the legal and economic sense means that an entity can be punished, where being punished means that something happens which disincentivizes it and other entities from doing the same. (For what it’s worth, I think much of our moral and ethical intuitions about personhood can be derived from this definition.) AI cannot function as a person of any legal or economic consequence (and by extension, moral or ethical consequence) if it cannot be punished in that sense or learn from such punishment. I assume it will be able to eventually, but until then most of these bullshit jobs will stay virtually untouchable, because someone needs to bear responsibility. How does one punish an API? Currently, we only really punish the person serving the API or the person using it.
There are two ways I see to overcome this. One is that AI systems eventually become drop-in replacements for human agents in the sense that they can bear responsibility and be punished as described above. With current systems this is clearly not (yet) the case.
The other way is that the combination of cost, speed and quality becomes too good to ignore, i.e., that we get to a point where we can say “Nobody ever got fired for using AI” (on a task-by-task basis). This depends on the trade-offs that we’re willing to make between the different aspects of using AI for a given task, such as cost, speed, quality, reliability and interpretability. This is already driving use of AI for some tasks where the trade-off is good enough, while for others it’s not nearly good enough or still too risky to try.
Reminds me of mimetic desire:
Man is the creature who does not know what to desire, and he turns to others in order to make up his mind. We desire what others desire because we imitate their desires.
However, I only subscribe to this theory insofar as it is supported by Joseph Henrich's work, e.g., The Secret of Our Success, in which Henrich provides evidence that imitation (including imitation of desires) is the basis of human-level intelligence. (If you’re curious how that works, I highly recommend Scott Alexander’s book review.)
But we already knew that some people think AGI is near and others think it's farther away!
And what do you conclude based on that?
I would say that as those early benchmarks ("can beat anyone at chess", etc.) are achieved without producing what "feels like" AGI, people are forced to make their intuitions concrete, or anyway reckon with their old bad operationalizations of AGI.
The relation between the real world and our intuition is an interesting topic. When people’s intuitions are violated (e.g., the Turing test is passed but it doesn’t “feel like” AGI), there’s a temptation to try to make the real world fit the intuition, when it is more productive to accept that the intuition is wrong. That is, maybe achieving AGI doesn’t feel like you expect. But that can be a fine line to walk. In any case, privileging an intuitive map above the actual territory is about as close as you can get to a “cardinal sin” for someone who claims to be rational. (To be clear, I’m not saying you are doing that.)
They spend more time thinking about the concrete details of the trip, not because they know the trip is happening soon, but because some of them think it is. Disagreement about, and attention to, concrete details is driven by only some people saying that the current situation looks like, or is starting to look like, the event occurring according to their interpretation. If the disagreement had surfaced at the beginning, they would soon have started using different words.
In the New York example, someone saying “Guys, we should really buy those Broadway tickets; the trip to New York is next month already” might prompt the response “What? I thought we were going the month after!”, hence disagreement. If this detail had been discussed earlier, there might have been a “February trip” and a “March trip” to disambiguate the trip(s) to New York.
In the case of AGI, some people’s alarm bells are currently going off, prompting others to say that more capabilities are required. What seems to have happened is that people at some point latched on to the concept of AGI, assuming that their interpretation was virtually the same as everyone else’s precisely because the term lacked a definition. Again, if they had disagreed about the definition from the start, they would have used a different word altogether. Now that some people are claiming that AGI is here, or will be soon, it turns out that the interpretations do in fact differ. The most obnoxious cases are when people walk back their own past interpretation once it is about to be satisfied, on the basis of some deeper, undefined intuition (or, in the case of OpenAI and Microsoft, ulterior motives). This, of course, is also known as “moving the goalposts”.
Once upon a time, not that long ago, AGI was interpreted by many as “it can beat anyone at chess”, “it can beat anyone at go” or “it can pass the Turing test”. We are there now, according to those interpretations.
Whether or not AGI exists depends only marginally on any one person’s interpretation. Words are a communicative tool and therefore depend on others’ interpretations. That is, the meanings of words don’t fall out of the sky; they don’t pass through a membrane from another reality. Instead, we define meaning collectively (and often unconsciously). For example, “What is intelligence?” is a question of how that word is in practice interpreted by other people. “How should it be interpreted (according to me personally)?” is a valid but different question.
What do you make of feral children like Genie? While there are not many counterfactuals to cultural learning—probably mostly because depriving children of cultural learning is considered highly immoral—feral children do provide strong evidence that humans that are deprived of cultural learning do not come close to being functional adults. Additionally, it seems obvious that people who do not receive certain training, e.g., those who do not learn math or who do not learn carpentry, generally have low capability in that domain.
You mean to say that the human body was virtually “finished evolving” 200,000 years ago, thereby laying the groundwork for cultural optimization, which took over from that point? Henrich’s thesis of gene-culture coevolution contrasts with this view, and I find it much more likely to be true. For example, the former thesis posits that humans lost a massive amount of muscle strength (relative to, say, chimpanzees) over many generations, and only once that process had been virtually “completed” started to compensate by throwing rocks or making spears when hunting other animals, which requires much less muscle strength than direct engagement. This raises the question: how did our ancestors survive during the period when muscle strength had already significantly decreased but tool use did not yet exist? Henrich’s thesis answers this by saying that such a period did not exist; throwing rocks came first, which provided the evolutionary incentive for our ancestors to expend less energy on growing muscles (since throwing rocks suffices for survival and requires less muscle strength). The subsequent invention of spears provided further incentive for muscles to grow even weaker.
There are many more examples like the one above. Perhaps the most important one is that as the amount of culture grows (including things like rudimentary language and music), a larger brain has an advantage because it can learn more, and learn it more quickly (as also evidenced by the LLM scaling laws). Without culture, this evolutionary incentive for larger brains is much weaker. The incentive for larger brains in turn leads to a number of other peculiarities specific to humans, such as premature birth, painful birth and fontanelles.