To give another example of this broad pattern: In the cluster of disciplines including ethology, anthropology, linguistics, and evolutionary biology, there was once an ardent game of argumentative tennis between [those trying to propose a thing, or a cluster of things, that makes humans a different type of animal] and [those trying to show that some animals already have that thing to some meaningful degree, so that the difference between humans and other animals is "quantitative, not qualitative"].
[The below is slightly narrativized, largely recounting the debates, and memories of debates, that I picked up from my professors when studying cognitive science years ago.]
One example is (the cluster of cognitive capacities required for) tool use. It was proposed that a distinctive thing about humans, and maybe a crucial enabler of their takeoff, is that they can pick up a piece of the environment and use it on another piece of the environment, on their own body, or on some other type of animal. Or, more importantly, that they can conceive of a plan to do this and then execute the plan competently. Then it turns out that chimps use tools, and later on that lots of other animals also use tools to various degrees. So one side of the debate "retreats" to the position that some animals can pick up a thing and use it as a tool, but only humans (have the think-oomph needed to) deliberately craft their own tools. When this is also shown to occur in animals (e.g., New Caledonian crows, woodpecker finches, sometimes chimps), they "retreat" again, now to the position that humans are distinguished by the capacity to use tools to craft other tools ("meta-tooling"). This also happens to occur in animals, but super-sporadically. So those who want a stronger position might "retreat" once again to "what's important is the capacity to instantiate an open-ended 'in-tool-igence' explosion: tools/technologies recombining and interacting in ways that produce novel tools/technologies and opportunities/niches for the development of further tools/technologies".
(To be clear and explicit, it's not tool use per se that's the point. The point is that humans are both continuous with nature and also, like, obviously, a different kind of beast, so it is worthwhile to figure out the plausibly relevant cognitive capacities upstream of (ancestral) humans being a different kind of beast.)
Another example: language. Charles Hockett proposed a list of features (first 13, later 16) "that characterize human language and set it apart from animal communication":
Hockett originally believed there to be 13 design features. While primate communication utilizes the first 9 features, Hockett believed that the final 4 features (displacement, productivity, cultural transmission, and duality) were reserved for humans. Hockett later added prevarication, reflexiveness, and learnability to the list as uniquely human characteristics. He asserted that even the most basic human languages possess these 16 features.
Later on, it turned out that some of the features proposed as distinctly human are not completely absent in animals either, e.g., learning and cultural transmission. One could argue that this implies animal communication is closer to human language than previously thought. That's true, but it also turns out that, in hindsight, it was wrong to focus too much on those particular features of language.
The general pattern is something like this: we have X and Y. Y is clearly different from X, because it involves A, B, and C. Someone figures out how to patch X so that it also has A, B, and C, and then proclaims that this makes it essentially a Y. But then someone else says, "OK, I admit that your patched X has A, B, and C, but now that you've applied those patches, I can see through the cracks in them, and the thing that strikes me most about the difference between X and Y is not A, B, or C, but some other thing D, which was not salient to me until you showed me an ABC-patched X, because my attention was fully drawn to A, B, and C."
A meta-example of this is the "featherless biped" way of pointing at the concept of a human. Someone shows you a featherless chicken, so you "retreat" to "featherless biped with flat nails". After a few rounds like this and/or with a bit of reflection, you realize that you're not going to find the right explication of the concept by stacking all the features that appear to separate examples from non-examples.
To give a maybe-example in the opposite direction: If Europeans 600 years ago had theorized about what distinguishes civilization (vaguely: "large", "well-organized" human societies) from "savagery", they might have singled out writing as a necessary feature. The Inca (or the continuity of pre-Columbian Andean civilizations more generally) were a counterexample: a large, well-organized society that didn't use any writing. Someone might then insist that the Inca civilization was not really a civilization because it lacked writing, but the right conclusion is that you can build a well-organized polity of a few million people without writing (although it's more difficult).
Crosspost from my blog.
[Previously: "Views on when AGI comes and on strategy to reduce existential risk", "Do confident short timelines make sense?"]
[Whenever discussing when AGI will come, it bears repeating: If anyone builds AGI, everyone dies; no one knows when AGI will be made, whether soon or late; a bunch of people and orgs are trying to make it; and they should stop and be stopped.]
Arguments for fast AGI progress
Many arguments about "when will AGI come" focus on reasons to think progress will continue quickly, such as:
Intuition pumps for being close to AGI
These are all valid and truly worrying. But to say anything specific and confident about when (in clock time) AGI will come, we'd also have to know how fast progress is being made in an absolute sense: specifically, as measured by "how much of what you need to make AGI do you have".
There are various intuition pumps / analogies that people use to inform their sense of how far AI research has come. For example:
I believe these are poor intuition pumps for understanding when AGI will come, because they do not evoke the sense that there is some unknown, probably-large blob of complexity that one has to possess in order to make AGI. They paper over differences in how an AI system does what it does.
Synthetic life as an intuition pump
Intuition pumps can only go so far. Each domain has its own central complexities, and there's no good reason the world has to present a deeply correct analogy for the development of AGI; in fact, I'm not aware of one. That said, as long as we're doing intuition pumps, I want to propose another one for timelines, drawn from progress on a very complex task: synthetic life. We use the analogy:
Specifically, we can treat the general task of making AGI (which, to be clear, is a ~maximally bad thing to do) as analogous to:
Some things I like about this analogy