Sometimes people frame debates about AI progress metaphorically in terms of “continuity”. They talk about “continuous” progress vs “discontinuous” progress. The former is described as occurring without sudden, unprecedentedly rapid change: it may involve rapid change, but that change is preceded by slightly less rapid change. (This is said to be safer, more predictable, easier to react to, etc.) Discontinuous progress is the opposite. And indeed, these terms do map to different camps in the AI x-risk debate; Christiano and Yudkowsky may be taken as typical members of each. But I think “continuous progress” is bad terminology that obscures the core positions of each camp. Instead, I’d advocate for “historical progress” vs “ahistorical progress”.
First, note that the actual crux under contention in (this part of) the debate on AI progress is “to what extent will future AI progress be like past AI progress?” This feeds into decision-relevant questions like “will we know when we’re close to dangerous capabilities?”, “can we respond to misaligned AI in time, before it becomes too capable?” and “how much do pre-ASI alignment failures teach us about post-ASI alignment failures?”
If someone new to the debate hears the first question, the response “AI progress is (dis)continuous” is not clarifying. Here are some examples illustrating how (dis)continuity doesn’t bear much on whether future AI progress will be like past AI progress.
Continuous functions include sigmoids, which are notoriously hard to extrapolate unless you’ve got data points on both sides of the inflection.
Continuous functions also include straight lines, which chug along uniformly even when extrapolating them to extremes seems unreasonable.
Discontinuous functions include the rounding function, whose advance is trivial to predict once you’ve got a few data points.
Discontinuous functions also include step functions, which are wholly unpredictable: data from one side of the jump tells you nothing about when it arrives or how big it is.
So we see that when we have some data points about AI progress and want to fit them to a curve, it doesn’t much matter whether the curve is continuous or discontinuous; what matters is how well the data we have constrains the part of the curve we haven’t seen yet.
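To make the sigmoid case concrete, here’s a minimal sketch (all parameter values are made up for illustration) of two sigmoids with wildly different ceilings that nearly coincide before their inflection points, and so are nearly indistinguishable from early data alone:

```python
import numpy as np

def sigmoid(x, ceiling, midpoint, rate):
    """Logistic curve: ceiling / (1 + exp(-rate * (x - midpoint)))."""
    return ceiling / (1.0 + np.exp(-rate * (x - midpoint)))

# Two sigmoids with very different ceilings, tuned so their
# pre-inflection segments nearly coincide.
x_early = np.linspace(0, 2, 50)  # "observed" data, well before either inflection
low = sigmoid(x_early, 2.0, 5.0, 1.0)
high = sigmoid(x_early, 20.0, 7.3, 1.0)
print("max gap on observed range:", np.max(np.abs(low - high)))  # ~0.005

# Extrapolate past the inflection points and the two stories
# diverge by an order of magnitude.
x_late = np.linspace(8, 12, 50)
gap = np.max(np.abs(sigmoid(x_late, 2.0, 5.0, 1.0)
                    - sigmoid(x_late, 20.0, 7.3, 1.0)))
print("max gap when extrapolated:", gap)  # ~18
```

Both curves fit the observed range almost equally well, so nothing in the early data tells you whether the ceiling is 2 or 20.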
But how did we even wind up with these terms? I think it’s because Paul Christiano used the term “continuous takeoff” to describe his model in his post on Takeoff Speeds, and “discontinuous” to describe Yudkowsky’s. These are the two prototypical models of AI progress. Note that Christiano was focused on pre-ASI improvements, as he thought that was where their main disagreement was located.
The core of Paul Christiano’s model is that “Before we have an incredibly intelligent AI, we will probably have a slightly worse AI.” In later works, he seems to describe his view as progress happening in steady steps once a lot of resources are being poured into making it happen. Whereas Yudkowsky’s model has more room for big insights that dramatically improve AI systems: insights which are hard to predict ahead of time, even if in retrospect they seem to follow a trend.
You might model Yudkowsky world as a sequence of lumpy insights: a Poisson point process with a low rate, where each jump corresponds to very different capabilities. Whereas in Christiano world, progress happens in regular, steady chunks. You can model this as a Poisson process too, but with a much higher rate and jumps corresponding to only slightly different capabilities.
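Here’s a rough simulation sketch of the two worlds. All rates and jump sizes are invented illustrative values, chosen so that both worlds reach the same expected capability over the horizon; only the lumpiness differs:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

def compound_poisson_path(rate, jump_mean, horizon=10.0):
    """Sample one path of cumulative capability: advances arrive as a
    Poisson process with the given rate; each advance's size is
    exponentially distributed with the given mean."""
    n_jumps = rng.poisson(rate * horizon)
    times = np.sort(rng.uniform(0, horizon, n_jumps))
    sizes = rng.exponential(jump_mean, n_jumps)
    return times, np.cumsum(sizes)

fig, (ax_c, ax_y) = plt.subplots(1, 2, sharey=True, figsize=(8, 3))

# Christiano world: frequent, small advances (high rate, small jumps).
t, cap = compound_poisson_path(rate=20.0, jump_mean=0.5)
ax_c.step(t, cap, where="post")
ax_c.set_title("Christiano world")

# Yudkowsky world: rare, lumpy insights (low rate, big jumps).
t, cap = compound_poisson_path(rate=2.0, jump_mean=5.0)
ax_y.step(t, cap, where="post")
ax_y.set_title("Yudkowsky world")

for ax in (ax_c, ax_y):
    ax.set_xlabel("time")
ax_c.set_ylabel("capability")
plt.tight_layout()
plt.show()
```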
Notice how the left figure, depicting Christiano world, looks more “continuous” and the right figure, depicting Yudkowsky world, looks more “discontinuous”, even though both are discrete. Little wonder Christiano called one “continuous” and the other “discontinuous”.
But these two pictures are just one way things could go, and progress can easily be continuous but Yudkowskian, or discontinuous but Christiano-ian (Christian?).
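A quick illustration of both crossed cases, again with made-up curves: a sum of steep sigmoids is perfectly continuous yet lumpy and hard to extrapolate, while the floor of a straight line is discontinuous yet perfectly regular:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 1000)

# Continuous but Yudkowskian: a sum of steep sigmoids is continuous
# everywhere, yet capability arrives in a few hard-to-predict surges.
lumpy = sum(3.0 / (1.0 + np.exp(-8.0 * (x - c))) for c in (2.0, 5.5, 8.0))

# Discontinuous but Christiano-ian: the floor of a line jumps, but
# every jump is the same size at regular intervals, so it's trivially
# extrapolated from a few data points.
regular = np.floor(x)

fig, (ax_l, ax_r) = plt.subplots(1, 2, sharey=True, figsize=(8, 3))
ax_l.plot(x, lumpy)
ax_l.set_title("continuous, Yudkowskian")
ax_r.plot(x, regular)
ax_r.set_title("discontinuous, Christiano-ian")
plt.show()
```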
Which is why I think we should use different terms to describe these two views about AI progress. My current favourites are “historical” vs. “ahistorical” progress. Or perhaps “regular” vs. “irregular”. I’d even settle for “smooth” vs. “rough”. But not “continuous”! And if you disagree, please share your ideas.