One common definition of a slow AGI takeoff is:

There is a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles.

(For example, this Metaculus question)
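
In code, a minimal sketch of this operationalization (my own framing; it reads "before" as the 4-year doubling finishing no later than the start of the first 1-year doubling):

```python
# Sketch of the slow-takeoff criterion above (my framing, not from the post):
# given annual world-output figures, is there a complete 4-year doubling that
# finishes before the first 1-year doubling starts?

def is_slow_takeoff(output):
    """output: list of annual world-output values, one entry per year."""
    for i in range(len(output) - 1):
        if output[i + 1] >= 2 * output[i]:              # first 1-year doubling starts at year i
            return any(output[j + 4] >= 2 * output[j]   # a 4-year doubling ending by year i
                       for j in range(i - 3))
    return None  # no 1-year doubling anywhere in the series
```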

But this might not happen even if AGI develops slowly.

For illustration, divide the economy into the part driven by AI and the part driven by other stuff. I imagine a "slow" takeoff looking like this, where AI progress accelerates faster than the rest of the economy and eventually takes over:

But in this world, AI doesn't have a major effect on the economy until it's just about to reach the transformative level. The takeoff might be slow in terms of technological progress, but it's fast in terms of GDP.
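
As a toy illustration (made-up numbers, not a forecast: the AI-driven part grows 10x per year from a tiny base, the rest grows 3% per year), AI barely registers in GDP until a couple of years before it dominates:

```python
# Toy numbers, not a forecast: AI-driven output grows 10x/year from a tiny base,
# the rest of the economy grows 3%/year.
rest, ai = 100.0, 0.0001   # trillions of dollars (starting values are made up)
for year in range(9):
    gdp = rest + ai
    print(f"year {year}: GDP ${gdp:,.1f}T, AI share {ai / gdp:.2%}")
    rest *= 1.03
    ai *= 10
```

GDP growth looks like business as usual through about year 5, and then AI goes from under 10% of output to about 90% within two years.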

When I hear the hypothesis that world GDP doubles in 4 years before it doubles in 1 year, I imagine a curve that looks like this:

Which just doesn't really make sense.

I'm not saying a slow takeoff will definitely look fast. I'm not saying that believing in a slow economic takeoff requires drawing a silly graph like the second one above. But I do think it's a little harder to draw a plausible picture where AI progress shows up in GDP well before it becomes superintelligent.

Comments

When I hear the hypothesis that world GDP doubles in 4 years before it doubles in 1 year, I imagine a curve that looks like this:

I don't think that's the right curve to imagine.

If AI is a perfect substitute for humans, then you would have (output) = (AI output) + (human output). If AI output triples every year, then the first time you will have a doubling of the economy in 1 year is when AI goes from 100% of human output to 300% of human output. Over the preceding 4 years you will have the growth of AI from ~0% of human output to ~100% of human output, and on top of that you would have had human growth, so you would have had more than a doubling of the economy.

On the perfect substitutes model, the question is roughly whether AI output is growing more or less than 3x per year.
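
A minimal sketch of that arithmetic (illustrative assumptions, not from the comment: human output grows 3%/year, AI output grows by a constant factor per year from a negligible base, monthly time steps):

```python
# Perfect-substitutes toy model: output = human output + AI output.
# Illustrative assumptions: human output grows 3%/year; AI output grows by a
# constant factor per year from a negligible base; monthly time steps.

def doubling_check(ai_growth, human_growth=1.03, ai_start=1e-9,
                   steps_per_year=12, years=40):
    """Find when the first 1-year GDP doubling starts, and how much GDP grew
    over the 4 years immediately before that."""
    gdp = [human_growth ** (i / steps_per_year)
           + ai_start * ai_growth ** (i / steps_per_year)
           for i in range(years * steps_per_year + 1)]
    one, four = steps_per_year, 4 * steps_per_year
    for i in range(four, len(gdp) - one):   # first doubling is well past year 4 here
        if gdp[i + one] >= 2 * gdp[i]:
            return i / steps_per_year, gdp[i] / gdp[i - four]
    return None, None

for growth in (3, 4, 10):
    year, prior = doubling_check(growth)
    print(f"AI output x{growth}/year: first 1-year doubling starts ~year {year:.1f}; "
          f"growth over the preceding 4 years: {prior:.2f}x")
```

At 3x/year the preceding four years more than double (a slow takeoff by the operationalization above), while at 4x or 10x they don't, so the boundary in this toy model does sit at roughly 3x.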

When I wrote this post I gave a 30% chance to fast takeoff according to the 1-year-before-4-year operationalization. I would now give that more like a 40-50% chance. However, almost all of my fast takeoff probability is now concentrated on worlds that are quite close to the proposed boundary. My probability on scenarios like the "teleportation" discussed by Rob Bensinger here has continued to fall and is now <10%, though it depends on exactly how you operationalize them.

I think right now AGI economic output is growing more quickly than 3x/year. In reality there are a number of features that I think will push us to a significantly slower takeoff than this model would imply:

  • Complementarity between humans and AIs. I see plausible arguments for low complementarity owing to big advantages from full automation, but it seems pretty clear there will be some complementarity, i.e. that output will be larger than (AI output) + (human output). Today there is obviously massive complementarity. Even modest amounts of complementarity significantly slow down takeoff. I believe there is a significant chance (perhaps 30%?) that complementarity from horizon length alone is sufficient to drive an unambiguously slow takeoff.
  • Capital will grow more slowly even as AI becomes available, and this will significantly slow down takeoff. Most importantly, building new computers and experimenting with large AI training runs are themselves significant inputs into AI progress, and doubling the supply of labor will not double R&D output. Other drags like stepping-on-toes effects (where 2x as many people can't do the same work 2x as fast) will also tend to slow down the rate of takeoff. I don't think any of these effects individually gives you slow takeoff, but they have significant effects on how fast AI progress needs to be in order to generate fast takeoff.
  • The current pace of growth in AI is driven partly by rapid increases in investment. I think it is plausible that we will build TAI at a time when growth is slower (because we are already investing tens or hundreds of billions and it becomes difficult to scale investment more quickly).

On the other hand there are complicating factors that push towards a faster takeoff:

  • It seems very possible that a small minority of humans work on AI R&D while almost all AI systems do (largely because AI labor is more flexible and the actual wages in AI R&D are exploding). Even if we build TAI in 10-20 years, it is quite plausible that there are only millions of human-equivalents working on AI. If 50% of AI output is reinvested in further AI R&D, then you would start to see significant increases in the rate of AI progress at the point when AI output was equivalent to millions of humans rather than billions of humans. Using actual wages could attenuate this somewhat since I would expect wages in AI to be at least 10x larger than typical and perhaps 100x, but things are still likely too sticky to get to a large fraction of GDP paid out to AI companies prior to transformative AI.
  • Right now the economic returns to increasing the number of AIs are very small. But we ultimately expect those returns to be at least as good as (and probably significantly better than) the returns to increasing the quantity of human labor. It seems plausible that compute-equivalent is a more stable way of measuring AI progress, and therefore that economic-value-added will accelerate significantly in the run-up. Driven by this dynamic, you could imagine even faster increases in AI investment as we approach TAI than we see today, which is especially possible if timelines to transformative AI are short.

Overall it seems fairly likely that there will be some ways of measuring output for which we have a fast takeoff and plausible ways for which we have a slow takeoff, basically depending on how you value AI R&D relative to other kinds of cognitive output. A natural way to do so is normalizing human cognitive output to be equal in different domains. I think that would be a fair, though not maximally charitable, operationalization of what I said; more charitable would be the valuations assigned by alien accountants looking at Earth and assessing net present values, and on that definition I think it's fairly unlikely that we get a fast takeoff.

I think probably the biggest question is how large AGI revenue gets prior to transformative AI. I would guess right now it is in the billions and maybe growing by 2-4x/year. If it gets up to tens of trillions then I think it is very likely you will have a slow takeoff according to this operationalization, but if it only gets up to tens or hundreds of billions then it will depend strongly on the valuation of speculative investments in AGI. (Right now total valuations of AGI as a technology are probably in the low hundreds of billions.)
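
As rough arithmetic (my own illustrative endpoints: roughly $5B of AGI revenue today, $20T as the "tens of trillions" target):

```python
# Rough arithmetic, with made-up endpoints: years for AGI revenue to grow from
# $5B to $20T at a constant annual growth factor.
from math import log

start, target = 5e9, 20e12
for growth in (2, 3, 4):
    years = log(target / start) / log(growth)
    print(f"{growth}x/year: ~{years:.0f} years")
```

So at the 2-4x/year growth rates above, reaching tens of trillions takes very roughly 6-12 more years.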

But I do think it's a little harder to draw a plausible picture where AI progress shows up in GDP well before it becomes superintelligent.

I don't understand this. It seems extremely easy to imagine a world where AGI systems add trillions rather than billions of dollars of value well before becoming superintelligent. I feel like we can just list potential applications that add up to trillions, and we can match that by extrapolating current growth rates for 5-10 years. I think the main reason to find slow takeoff hard to imagine is if you have a hard time imagining transformative AI in the 2030s rather than the 2020s, but that's my modal outcome and so it's not very hard for me to imagine.

Thanks for the reply. If I'm understanding correctly, and leaving aside the various complications you bring up, are you describing a potential slow-growth curve that (to a rough approximation) looks like this:

  • economic value of AI grows 2x per year (you said >3x, but 2x is easier because it lines up with the "GDP doubles in 1 year" criterion)
  • GDP first doubles in 1 year in (say) 2033
  • that means AI takes GDP from (roughly) $100T to $200T in 2033
  • extrapolating backward, AI is worth $9B this year, and will be worth $18B next year

This story sounds plausible to me, and it basically fits the slow-takeoff operationalization.

Complementarity between humans and AIs. I see plausible arguments for low complementarity owing to big advantages from full automation, but it seems pretty clear there will be some complementarity, i.e. that output will be larger than (AI output) + (human output). Today there is obviously massive complementarity. Even modest amounts of complementarity significantly slow down takeoff. I believe there is a significant chance (perhaps 30%?) that complementarity from horizon length alone is sufficient to drive an unambiguously slow takeoff.

This is a big crux: I believe complementarity is very low, low enough that in practice it can be ignored.

And I think Amdahl's law severely suppresses complementarity; this is a crux, in that if I changed my mind about it, then I would think slow takeoff is likely.
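
For reference, the standard Amdahl's law formula being appealed to here (the numbers below are illustrative, not the commenter's): if a fraction p of the work is sped up by a factor s and the rest is not sped up at all, the overall speedup is 1 / ((1 - p) + p / s), which is capped at 1 / (1 - p) no matter how large s gets.

```python
# Amdahl's law: overall speedup when a fraction p of the work is sped up by s
# and the remaining (1 - p) is not sped up at all. Illustrative numbers only.
def amdahl(p, s):
    return 1 / ((1 - p) + p / s)

for p in (0.5, 0.9, 0.99):
    for s in (10, 1000):
        print(f"speed up {p:.0%} of the work by {s}x -> "
              f"overall {amdahl(p, s):.1f}x (cap {1 / (1 - p):.0f}x)")
```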