Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Alternate titles: Deconfusing Take-off; Taboo “Fast” and “Slow” Take-off.

This post has two main purposes: 

  1. To suggest that we taboo the terms "fast" and "slow" take-off in favour of "gradual" and "sudden" take-off, respectively (in order to avoid confusion with “short” and “long” timelines). I also encourage people to discuss take-off "dynamics" instead of take-off "speeds". 
  2. To highlight a point made by Buck S. at EAG London and by Paul C. here: For every choice of "AGI difficulty", gradual take-off implies shorter timelines. 

Although these points have been made before, I expect some people will be deconfused by this post.
 

To clarify terms:

Short vs long timelines is the question of when (i.e. by which year) we will develop AGI/TAI/Superintelligence.

Paul Christiano operationalizes a gradual take-off as: 

There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles. (Similarly, we’ll see an 8 year doubling before a 2 year doubling, etc.)

Sudden take-off is just the negation of this statement. 

Instead of considering economic output, we can also characterize take-off in terms of AI capabilities. For some measure of AI capabilities, we could say a gradual take-off is a situation in which:

There will be a complete 4 year interval in which AI capabilities double, before the first 1 year interval in which AI capabilities double. (Similarly, we’ll see an 8 year doubling before a 2 year doubling, etc.)
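To make the operationalization concrete, here is a minimal sketch of how one might check the doubling-interval condition on an annual series (of world output, or of some capability index). The function names are mine, and the criterion leaves some edge cases open (e.g. whether the slow doubling must finish before the fast one starts), so treat this as one possible reading rather than the definition:

```python
def doubling_intervals(years, values):
    """For each year, find the first later year by which the series has at
    least doubled, and yield that (start_year, end_year) pair."""
    for i, (start, v0) in enumerate(zip(years, values)):
        for end, v1 in zip(years[i + 1:], values[i + 1:]):
            if v1 >= 2 * v0:
                yield start, end
                break

def looks_gradual(years, values):
    """One reading of the criterion: some complete doubling taking <= 4 years
    ends before the first doubling taking <= 1 year begins."""
    intervals = list(doubling_intervals(years, values))
    fast_starts = [start for start, end in intervals if end - start <= 1]
    if not fast_starts:
        return True  # no 1-year doubling observed yet, so nothing is violated
    first_fast_start = min(fast_starts)
    return any(end <= first_fast_start
               for start, end in intervals if end - start <= 4)
```

On historical annual data this returns True trivially, since no one-year doubling has ever occurred; the test only becomes informative once fast doublings start appearing.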

Daniel Kokotajlo also argues against GDP as a metric for AI timelines and take-off dynamics, pointing to other factors that we actually care about (e.g. warning-shots, multi-polarity, etc).

Take-off is therefore not a question about timelines – it’s a question about how capable (or economically useful) AI systems are before they reach some threshold level sufficient for explosive growth (e.g. the level of intelligence needed for recursive self-improvement, or some discontinuous phase-change in capabilities). 

Here is a graph of gradual vs sudden take-off (for completeness I’ve included a “no take-off” curve):
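A rough sketch of how curves like these could be drawn; the functional forms and parameters below are illustrative assumptions of mine, chosen only to reproduce the qualitative shapes:

```python
# Illustrative only: exponential forms chosen so that the gradual curve makes
# substantial progress early and crosses the threshold before the sudden curve,
# which stays low for a long time and then shoots up.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 500)
threshold = 1.0  # capability level sufficient for explosive growth

gradual = 0.05 * np.exp(0.45 * t)           # steady, accelerating progress
sudden = 0.05 + 0.001 * np.exp(0.85 * t)    # little progress, then a late spike
no_takeoff = 0.6 * (1 - np.exp(-0.3 * t))   # progress that levels off below the threshold

plt.plot(t, gradual, label="gradual take-off")
plt.plot(t, sudden, label="sudden take-off")
plt.plot(t, no_takeoff, label="no take-off")
plt.axhline(threshold, linestyle="--", color="grey", label="explosive-growth threshold")
plt.xlabel("time")
plt.ylabel("AI capability")
plt.ylim(0, 1.5)
plt.legend()
plt.show()
```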

As you can see, take-off (crossing the threshold) happens at an earlier time in the gradual scenario. 

In a gradual take-off world, AI systems become increasingly capable and useful before reaching the threshold for explosive growth. In this world, pre-AGI systems can fully automate or rapidly speed up many tasks. They provide lots of economic value and transform the world in many ways. One effect of these systems is to speed up AI research – implying we get to AGI sooner than in a sudden take-off world (conditioned on a given level of “AGI difficulty”). Furthermore, the increased value and usefulness of these systems causes more funding, talent, and compute to be invested in AI, leading to further improvements (leading to more funding/talent/compute…). In addition, AI replaces an increasing number of jobs, freeing up more people to work on AI (as researchers, engineers, overseers, data-generators, etc.). The interplay of all of these factors leads to a virtuous circle which ultimately causes an increasing rate of AI capability growth. 
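As a toy illustration of that virtuous circle (every number below is an arbitrary assumption; the point is only the qualitative behaviour, in which progress feeds back into faster progress):

```python
# Toy "virtuous circle": AI capability adds to effective research effort,
# which in turn speeds up capability growth. All parameters are made up.
capability = 0.01    # current AI capability, in arbitrary units
threshold = 1.0      # level at which explosive growth kicks in
years = 0
while capability < threshold:
    research_effort = 1.0 + 5.0 * capability     # AI automates some research
    capability *= 1.0 + 0.05 * research_effort   # more effort, faster growth
    years += 1
print(years)  # years to reach the threshold in this toy world
```

Holding research_effort fixed at 1.0 in the same loop takes noticeably longer to reach the threshold, which is the shape of the argument: pre-threshold usefulness pulls the threshold crossing earlier.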

In a sudden take-off world, AIs haven’t been very valuable up until some threshold of capability (at which point something like recursive self-improvement kicks in). Therefore they haven’t changed the world much, and in particular haven’t made AI research go faster, so it has probably taken us longer to get to AGI. 

In some very loose sense, you can think of the area under the take-off curve (cut off at the point where the curve crosses the threshold) as having to be similar in both scenarios, if the y-axis corresponds to the rate at which AI research improves: rate of improvement × time (the x-axis) equals the amount of progress made towards AGI / the explosive-growth threshold / TAI, and that amount is the same in both scenarios. Hence sudden take-off, which has a lower curve, has to wait longer until that progress is made. (Thanks to Rob Kirk for this picture.)
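Loosely formalized (treating $r(t)$ as the rate of AI research progress in each scenario and $P$ as the fixed amount of progress needed to reach the threshold, both of which are simplifying assumptions):

$$\int_0^{T_{\text{gradual}}} r_{\text{gradual}}(t)\,dt \;=\; \int_0^{T_{\text{sudden}}} r_{\text{sudden}}(t)\,dt \;=\; P,$$

so if $r_{\text{sudden}}(t) \le r_{\text{gradual}}(t)$ throughout the pre-threshold period, then $T_{\text{sudden}} \ge T_{\text{gradual}}$: the sudden-take-off world reaches the threshold later.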

Is it inconsistent, then, to think both that take-off will be gradual and timelines will be long? No – people who hold this view probably do so because they think that marginal improvements in AI capabilities are hard. This belief implies both a gradual take-off and long timelines. (Thanks to Sammy Martin for this point.)

Similarly, it is not inconsistent to think we will have a sudden take-off soon. This view would stem from a belief that the threshold level of capabilities needed for explosive growth is very low, which would imply both that we hit explosive growth before AIs are useful enough to be integrated into the economy (i.e. a sudden take-off) and that we get AGI on short timelines. 

For every choice of AGI difficulty, conditioning on gradual take-off implies shorter timelines.

 

Acks. Thanks to Rob Kirk, Sammy Martin, Matt MacDermott, and Daniel Kokotajlo for helpful comments.

13 comments

The actual claim is “For every choice of AGI difficulty, conditioning on gradual take-off implies shorter timelines.” I’m OK with that claim.

However, the title is [was—it was changed in response to this comment! :) ] “Gradual take-off implies shorter timelines”, and I think that’s very misleading.

Also, the claim towards the top is “All else being equal, gradual take-off implies shorter timelines.”, and I think that’s somewhat misleading.

I think we both agree that it’s perfectly consistent for Person A to expect FOOM next week, and it’s perfectly consistent for Person B to expect a gradual takeoff between 2150 and 2200. You would say “Person A thinks AGI is much easier than Person B”, so that’s not “all else being equal”.

But I would say: “all else being equal” is kinda a hard thing to think about, and maybe poorly defined in general. After all, any two scenarios wherein one is fast-takeoff and the other is slow-takeoff will differ in a great many ways. Which of those ways is an “else” that would violate the “all else being equal” assumption? For example, are we going to hold “total research effort” fixed, or are we going to hold “total human research effort” fixed, or are we going to hold “resource allocation decision criteria” fixed? Well maybe there’s a sensible way to answer that, or maybe not, but regardless, I think it’s not intuitive.

So I think “Holding AGI difficulty fixed…” is much better than “All else being equal…”, and much much better than the post title which omits that caveat altogether.

I don't think the "actual claim" is necessarily true. You need more assumptions than a fixed difficulty of AGI, assumptions that I don't think everyone would agree with. I walk through two examples in my comment: one that implies "Gradual take-off implies shorter timelines" and one that implies "Gradual take-off implies longer timelines."

I agree and will edit my post. Thanks!

I think the confusion stems from the word "gradual." It seems there's wide agreement that we have two dimensions of the AI timelines problem:

  • Growth rate pattern. This could be linear, polynomial, exponential, or hyperbolic, and perhaps include a shift between two or more trends. Choice of coefficients matters too. Linear growth with a high slope could reach the threshold faster than exponential growth with small coefficients.
  • AI difficulty

Paul's description of "gradualism" as stated actually does make a statement about both dimensions, though it's also somewhat ambiguous:

There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles. (Similarly, we’ll see an 8 year doubling before a 2 year doubling, etc.)

US GDP last doubled from 1998 to 2021, a 23-year interval. Paul's statement seems to imply that a doubling can occur in no less than 25% of the time of its predecessor. Under this assumption, we'd have at least 3 doublings left, over the course of at least 8 years, before doublings can occur in less than a year. This still allows AI takeoff by 2030; it just suggests that we'll be getting a warning shot over the next few years.
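As a toy version of that calculation under the commenter's 25% rule (the rule itself is one reading of Paul's criterion, and the exact counts depend on how you apply it):

```python
# Minimum duration of each successive doubling if no doubling can take less
# than a quarter of the time of the one before it, starting from the 23-year
# 1998-2021 doubling. Illustrative only.
d = 23.0
while d >= 1.0:
    d /= 4.0
    print(round(d, 2))   # prints 5.75, 1.44, 0.36
```

That is, under this reading we would still see a few progressively faster doublings, spread over several years, before a sub-year doubling becomes possible.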

Another way to frame this, then, is that "For any choice of AI difficulty, faster pre-takeoff growth rates imply shorter timelines."

This statement is literally and obviously true, and avoids the counterintuitive aspect introduced by the word "gradual."

Another way to frame this, then, is that "For any choice of AI difficulty, faster pre-takeoff growth rates imply shorter timelines."

I agree. Notably, that sounds more like a conceptual and almost trivial claim.

I think that the original claims sound deeper than they are because they slide between a true but trivial interpretation and a non-trivial interpretation that may not be generally true.

I agree with this post that the accelerative forces of gradual take-off (e.g., "economic value... more funding... freeing up people to work on AI...") are important and not everyone considers them when thinking through timelines.

However, I think the specific argument that "Gradual take-off implies shorter timelines" requires a prior belief that not everyone shares, such as a prior that an AGI of difficulty D will occur in the same year in both timelines. I don't think such a prior is implied by "conditioned on a given level of “AGI difficulty”". Here are two example priors, one that leads to "Gradual take-off implies shorter timelines" and one that leads to the opposite. The first sentence of each is most important.

Gradual take-off implies shorter timelines
Step 1: (Prior) Set AGI of difficulty D to occur at the same year Y in the gradual and sudden take-off timelines.
Step 2: Notice that the gradual take-off timeline has AIs of difficulties like 0.5D sooner, which would make AGI occur sooner than Y because of the accelerative forces of "economic value... more funding... freeing up people to work on AI..." etc. Therefore, move AGI occurrence in gradual take-off from Y to some year before Y, such as 0.5Y.

=> AGI occurs at 0.5Y in the gradual timeline and Y in the sudden timeline.

Gradual take-off implies longer timelines
Step 1: (Prior) Set AI of difficulty 0.5D to occur at the same year Y in the gradual and sudden take-off timelines. To fill in AGI of difficulty D in each timeline, suppose that both are superlinear but sudden AGI arrives at exactly Y and gradual AGI arrives at 1.5Y.
Step 2: Notice that the gradual take-off timeline has AIs of difficulties like 0.25D sooner, which would make AGI occur sooner than Y because of the accelerative forces of "economic value... more funding... freeing up people to work on AI..." etc. Therefore, move 0.5D AI occurrence in gradual take-off from Y to some year before Y, such as Y/2, and move AGI occurrence in gradual take-off correspondingly from 1.5Y to 1.25Y.

=> AGI occurs at 1.25Y in the gradual timeline and Y in the sudden timeline.

By the way, this is separate from Stefan_Schubert's critique that very short timelines are possible with sudden take-off but not with gradual take-off. I personally think that can be considered a counterexample if we treat the impossibility of gradual take-off as "long", but not really a counterexample if we just consider the shortness comparison to be indeterminate, because there are no very short gradual timelines.

For every choice of AGI difficulty, conditioning on gradual take-off implies shorter timelines.

What would you say about the following argument?

  • Suppose that we get AGI tomorrow because of a fast take-off. If so timelines will be extremely short.
  • If we instead suppose that take-off will be gradual, then it seems impossible for timelines to be that short.
  • So in this scenario - this choice of AGI difficulty - conditioning on gradual take-off doesn't seem to imply shorter timelines.
  • So that's a counterexample to the claim that for every choice of AGI difficulty, conditioning on gradual take-off implies shorter timelines.

I'm not sure whether it does justice to your reasoning, but if so, I'd be interested to learn where it goes wrong.

  • Suppose that we get AGI tomorrow because of a fast take-off. If so timelines will be extremely short.
  • If we instead suppose that take-off will be gradual, then it seems impossible for timelines to be that short.
  • So in this scenario - this choice of AGI difficulty - conditioning on gradual take-off doesn't seem to imply shorter timelines.

Those were two different scenarios with two different amounts of AGI difficulty! In the first scenario, we have enough knowledge to build AGI today; in the second we don't have enough knowledge to build AGI today (and that is part of why the takeoff will be gradual).

Thanks.

My argument involved scenarios with fast take-off and short time-lines. There is a clarificatory part of the post that discusses the converse case, of a gradual take-off and long time-lines:

Is it inconsistent, then, to think both that take-off will be gradual and timelines will be long? No – people who hold this view probably do so because they think that marginal improvements in AI capabilities are hard. This belief implies both a gradual take-off and long timelines.

Maybe a related clarification could be made about the fast take-off/short time-line combination.

However, this claim also confuses me a bit:

No – people who hold this view probably do so because they think that marginal improvements in AI capabilities are hard. This belief implies both a gradual take-off and long timelines.

The main claim in the post is that gradual take-off implies shorter time-lines. But here the author seems to say that according to the view "that marginal improvements in AI capabilities are hard", gradual take-off and longer timelines correlate. And the author seems to suggest that that's a plausible view (though empirically it may be false). I'm not quite sure how to interpret this combination of claims.

I agree with Rohin's comment above.

Maybe a related clarification could be made about the fast take-off/short time-line combination.

Right. I guess the view here is that the threshold level of capabilities needed for explosive growth is very low, which would imply that we hit explosive growth before AIs are useful enough to be integrated into the economy, i.e. a sudden take-off.  

The main claim in the post is that gradual take-off implies shorter time-lines. But here the author seems to say that according to the view "that marginal improvements in AI capabilities are hard", gradual take-off and longer timelines correlate. And the author seems to suggest that that's a plausible view (though empirically it may be false). I'm not quite sure how to interpret this combination of claims.

If "marginal improvements in AI capabilities are hard" then we must have a gradual take-off and timelines are probably "long" by the community's standards. In such a world, you simply can't have a sudden take-off, so a gradual take-off still happens on shorter timelines than a sudden take-off (i.e. sooner than never).

I realise I have used two different meanings of "long timelines": 1) "long" by people's standards; 2) "longer" than in the counterfactual take-off scenario. Sorry for the confusion!  

I think there's a problem with linking together AI research progress and economic output. The AI research -> economic value pipeline takes some time: research models are not necessarily trivially adapted to generating economic value, and businesses need to figure out how to actually use AIs in their workflows. This means that in a gradual take-off world, where (unintuitively) AI research happens faster pre-take-off, the mechanisms that convert AI progress into value might not kick off soon enough for us to observe them. For instance, DeepMind might be making so much progress so fast that it doesn't make sense to stop and monetize their current models, because they know they'll get better ones very soon. We can already notice this from the fact that there is so much economic low-hanging fruit remaining for current AI to pluck. It's in the sudden take-off world, where pre-take-off progress is slower, that it makes sense to invest resources in monetizing your current models, because you expect them to last some time before being deprecated. 

Yeah. I much prefer the take-off definitions which use capabilities rather than GDP (or something more holistic, like Daniel's post).

Even holding difficulty constant, a gradual takeoff will not result in a shorter timeline if investment is saturated in the sudden timeline (i.e. no additional resources could be effectively deployed).