For a long time, when I heard "slow takeoff", I assumed it meant "takeoff that takes longer calendar time than fast takeoff" (i.e. what is now more often referred to as "short timelines" vs "long timelines"). I think Paul Christiano popularized the term, and it so happened he expected both longer timelines and a smoother/continuous takeoff.

I think it's at least somewhat confusing to use the term "slow" to mean "smooth/continuous", because that's not what "slow" particularly means most of the time.

I think it's even more actively confusing because "smooth/continuous" takeoff not only could be faster in calendar time, but I'd weakly expect it to be on average, since smooth takeoff means that AI resources at a given time are feeding into more AI resources, whereas sharp/discontinuous takeoff would tend to mean "AI tech doesn't get seriously applied towards AI development until towards the end."

I don't think this is academic[1].

I think this has caused a ton of wasted time arguing past each other on LessWrong, and if "slow/fast" is the terminology that policymakers are hearing as they start to tune into the AI situation, it is predictably going to cause them confusion, at least waste their time, and quite likely lead many of them to approach the situation through misleading strategic frames that conflate smoothness and timelines.

Way back in Arguments about fast takeoff, I argued that this was a bad term, and proposed "smooth" and "sharp" takeoff as better terms. I'd also be fine with "hard" and "soft" takeoff. I think "Hard/Soft" have somewhat more historical use, and maybe are less likely to get misheard as "short", so maybe use those.[2]

I am annoyed that 7 years later people are still using "slow" to mean "maybe faster than 'fast'." This is stupid. Please stop. I think smooth/sharp and hard/soft are both fairly intuitive (at the very least, more intuitive than slow/fast, and people who are already familiar with the technical meaning of slow/fast will figure it out).

I would be fine with "continuous" and "discontinuous", but, realistically, I do not expect people to stick to those because they are too many syllables. 

Please, for the love of god, do not keep using a term that people will predictably misread as implying longer timelines. I expect this to have real-world consequences. If someone wants to operationalize a bet about it having significant real-world consequences I would bet money on it.

Curves
The graph I posted in response to Arguments about fast takeoff 
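(The graph image itself isn't reproduced in this text version. As a rough stand-in, here is a minimal toy sketch, with made-up numbers of my own, of the kind of comparison the curves are making: a smooth takeoff that compounds early can cross a given capability level sooner in calendar time than a sharp takeoff that hugs the baseline and then jumps.)

```python
# Toy illustration (not the original figure; all numbers arbitrary): a smooth,
# compounding takeoff vs. a sharp takeoff that stays near baseline until a late
# threshold and then explodes. The smooth curve can cross a given capability
# level sooner in calendar time, even though it never looks "fast" locally.
import numpy as np

years = np.linspace(0, 10, 1001)

# Smooth takeoff: AI progress feeds back into AI progress from early on.
smooth = np.exp(0.8 * years)

# Sharp takeoff: roughly flat until year 7, then very rapid growth.
sharp = np.where(years < 7, 1.0 + 0.1 * years, 1.7 * np.exp(3.0 * (years - 7)))

threshold = 100.0  # arbitrary "transformative" capability level
smooth_cross = years[np.argmax(smooth >= threshold)]
sharp_cross = years[np.argmax(sharp >= threshold)]
print(f"smooth curve crosses the threshold at ~year {smooth_cross:.1f}")
print(f"sharp curve crosses the threshold at ~year {sharp_cross:.1f}")
```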

 

  1. ^

    a term that ironically means "pointlessly pedantic."

  2. ^

    The last time I tried to write this post, 3 years ago, I got stuck on whether to argue for smooth/sharp or hard/soft, and then I didn't end up posting it at all. I regret that.

69 comments
Raemon (pinned by Ben Pace):

Okay since I didn't successfully get buy-in for a particular term before writing this post, here's a poll to agree/disagree vote on. (I'm not including Fast/Slow as an option by default but you can submit other options here and if you really want to fight for preserving it, seems fine).


exponential / explosive

I just wanted to add that I proposed this because many other possible terms (like "smooth") might have positive connotations.

Raemon:

Smooth/Sharp takeoff

Max H:

Economic / atomic takeoff

Raemon:

Smooth/Lumpy Takeoff

x / y / z
where x is time to transformative AI, y is time to fully human-level AGI, and z is time to superhuman ASI

Predictable/Unpredictable takeoff

Raemon:

Long/short takeoff

Long duration/short duration takeoff

Continuous/Discontinuous takeoff

Raemon:

Soft/Hard Takeoff

Steep / Shallow

Iterative/Sudden

hare/tortoise takeoff

Gradual/Sharp

Gradual/Sudden

Fast/Slow takeoff

Gradual/Abrupt

I don't love "smooth" vs "sharp" because these words don't naturally point at what seems to me to be the key concept: the duration from the first AI capable of being transformatively useful to the first system which is very qualitatively generally superhuman[1]. You can have "smooth" takeoff driven by purely scaling things up where this duration is short or nonexistent.

I also care a lot about the duration from AIs which are capable enough to 3x R&D labor to AIs which are capable enough to strictly dominate (and thus obsolete) top human scientists but which aren't necessarily much smarter. (I also care some about the duration between a bunch of different milestones, and I'm not sure that my operationalizations of the milestones are the best ones.)

Paul originally operationalized this as seeing an economic doubling over 4 years prior to a doubling within a year, but I'd prefer for now to talk about qualitative level of capabilities rather than also entangling questions about how AI will affect the world[2].

So, I'm tempted by "long duration" vs "short duration" takeoff, though this is pretty clumsy.


Really, there are a bunch of different distinctions we care about with respect to takeoff and the progress of AI capabilities:

  • As discussed above, the duration from the first transformatively useful AIs to AIs which are generally superhuman. (And between very useful AIs to top human scientist level AIs.)
  • The duration from huge impacts in the world from AI (e.g. much higher GDP growth) to very superhuman AIs. This is like the above, but also folding in economic effects and other effects on the world at large which could come apart from AI capabilities even if there is a long duration takeoff in terms of capabilities.
  • Software only singularity. How much the singularity is downstream of AIs working on hardware (and energy) vs just software. (Or if something well described as a singularity even happens.)
  • Smoothness of AI progress vs jumpiness. As in, is progress driven by a larger number of smaller innovations and/or continuous scale-ups, rather than being substantially driven by a small number of innovations and/or large phase changes that emerge with scale.
  • Predictability of AI progress. Even if AI progress is smooth in the sense of the prior bullet, it may not follow a very predictable trend if the rate of innovations or scaling varies a lot.
  • Tunability of AI capability. Is it possible to get a full sweep of models which continuously interpolates over a range of capabilities?[3]

Of course, these properties are quite correlated. For instance, if the relevant durations for the first bullet are very short, then I also don't expect economic impacts until AIs are much smarter. And, if the singularity requires AIs working on increasing available hardware (software only doesn't work or doesn't go very far), then you expect more economic impact and more delay.


  1. One could think that there will be no delay between these points, though I personally think this is unlikely. ↩︎

  2. In short timelines, with a software only intelligence explosion, and with relevant actors not intentionally slowing down, I think I don't expect huge global GDP growth (e.g. 25% annualized global GDP growth rate) prior to very superhuman AI. I'm not very confident in this, but I think both inference availability and takeoff duration point to this. ↩︎

  3. This is a very weak property, though I think some people are skeptical of this. ↩︎

A thing that didn't appear on your list, and which I think is pretty important (cruxy for a lot of discussions; closest to what Hanson meant in the FOOM debate), is "human-relative discontinuity/speed". Here the question is something like: "how much faster does AI get smarter, compared to humans?". There's conceptual confusion / talking past each other in part because one aspect of the debate is:

  • how much locking force there is between AI and humans (e.g. humans can learn from AIs teaching them, can learn from AI's internals, can use AIs, and humans share ideas with other humans about AI (this was what Hanson argued))

and the other aspect is

  • how fast does an intelligence explosion go, by the stars (sidereal).

If you think there's not much coupling, then sidereal speed is the crux about whether takeoff will look discontinuous. But if you think there's a lot of coupling, then you might think something else is a crux about continuity, e.g. "how big are the biggest atomic jumps in capability".

What does this cash out to in terms of what terms you think make sense?

Not sure I understand your question. If you mean just what I think is the case about FOOM:

  • Obviously, there's no strong reason humans will stay coupled with an AGI. The AGI's thoughts will be highly alien--that's kinda the point.
  • Obviously, new ways of thinking recursively beget powerful new ways of thinking. This is obvious from the history of thinking and from introspection. And obviously this goes faster and faster. And obviously will go much faster in an AGI.
  • Therefore, from our perspective, there will be a fast-and-sharp FOOM.
  • But I don't really know what to think about Christiano-slow takeoff.
    • I.e. a 4-year GDP doubling before a 1-year GDP doubling.
    • I think Christiano agrees that there will later be a sharp/fast/discontinuous(??) FOOM, but he thinks things will get really weird and fast before that point. To me this is vaguely in the genre of trying to predict whether you can usefully get nuclear power out of a pile without setting off a massive explosion, when you've only heard conceptually about the idea of nuclear decay. But I imagine Christiano actually did some BOTECs to get the numbers "4" and "1".
    • If I were to guess at where I'd disagree with Christiano: Maybe he thinks that in the slow part of the slow takeoff, humans can make a bunch of progress on aligning / interfacing with / getting work out of AI stuff, to such an extent that from those future humans' perspectives, the fast part of the slow takeoff will actually be slow, in the relative sense. In other words, if the fast part came today, it would be fast, but if it came later, it would be slow, because we'd be able to keep up. Whereas I think aligning/interfacing, in the part where it counts, is crazy hard, and doesn't especially have to be coupled with nascent-AGI-driven capabilities advances. A lot of Christiano's work has (explicitly) a strategy-stealing flavor: if capability X exists, then we / an aligned thingy should be able to steal the way to do X and do it alignedly. If you think you can do that, then it makes sense to think that our understanding will be coupled with AGI's understanding.

I meant ‘do you think it’s good, bad, or neutral that people use the phrase ‘slow’/‘fast’ takeoff? And, if bad, what do you wish people did instead in those sentences?’

Depends on context; I guess by raw biomass, it's bad because those phrases would probably indicate that people aren't really thinking and they should taboo those phrases and ask why they wanted to discuss them? But if that's the case and they haven't already done that, maybe there's a more important underlying problem, such as Sinclair's razor.

I think "long duration" is way too many syllables, and I have similar problems with this naming schema as with Fast/Slow, but if you were going to go with this naming schema, I think just saying "short takeoff" and "long takeoff" seems about as clear ("duration" comes implied IMO).

I don't love "smooth" vs "sharp" because these words don't naturally point at what seems to me to be the key concept: the duration from the first AI capable of being transformatively useful to the first system which is very qualitatively generally superhuman[1]. You can have "smooth" takeoff driven by purely scaling things up where this duration is short or nonexistent.

I'm not sure I buy the distinction mattering?

Here's a few worlds:

  1. Smooth takeoff to superintelligence via scaling the whole way, no RSI
  2. Smooth takeoff to superintelligence via a mix of scaling, algorithmic advance, RSI, etc
  3. smoothish looking takeoff via scaling (like we currently see) but then suddenly the shape of the curve changes dramatically due to RSI or similar
  4. smoothish looking takeoff via scaling like we see, and then RSI is the mechanism by which the curve continues, but not very quickly (maybe this implies the curve actively levels off S-curve style before eventually picking up again)
  5. alt-world where we weren't even seeing similar types of smoothly advancing AI, and then there's abrupt RSI takeoff in days or months
  6. alt-world where we weren't seeing similar smooth scaling AI, and then RSI is the thing that initiates our current level of growth

At least with the previous way I'd been thinking about things, for the worlds above that look smooth, I feel like "yep, that was a smooth takeoff."

Or, okay, I thought about it a bit more and maybe agree that "time between first transformatively-useful AI and superintelligence" is a key variable. But, I also think that variable is captured by saying "smooth takeoff/long timelines?" (which is approximately what people are currently saying?)

Hmm, I updated towards being less confident while thinking about this.

But, I also think that variable is captured by saying "smooth takeoff/long timelines?" (which is approximately what people are currently saying?)

You can have smooth and short takeoff with long timelines. E.g., imagine that scaling works all the way to ASI but requires a ton of bare-metal flop (e.g. 1e34), implying longer timelines, and early transformative AI requires almost as much flop (e.g. 3e33), such that these events are only 1 year apart.

I think we're pretty likely to see a smooth and short takeoff with ASI prior to 2029. Now, imagine that you were making this exact prediction up through 2029 in 2000. From the perspective in 2000, you are exactly predicting smooth and short takeoff with long timelines!

So, I think this is actually a pretty natural prediction.

For instance, you get this prediction if you think that a scalable paradigm will be found in the future and will scale up to ASI and on this scalable paradigm the delay between ASI and transformative AI will be short (either because the flop difference is small or because flop scaling will be pretty rapid at the relevant point because it is still pretty cheap, perhaps <$100 billion).
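(A rough back-of-the-envelope on why those two flop figures translate into roughly a one-year gap. The ~3.5x/year growth rate for frontier training compute is my own assumed parameter, not something stated in the comment:)

```python
# Back-of-the-envelope for the "smooth and short takeoff with long timelines" example.
# Assumed (not from the comment): frontier training compute grows ~3.5x per year.
import math

tai_flop = 3e33        # flop for early transformative AI (example figure from the comment)
asi_flop = 1e34        # flop for ASI (example figure from the comment)
growth_per_year = 3.5  # assumed annual multiplier on frontier training compute

gap_years = math.log(asi_flop / tai_flop) / math.log(growth_per_year)
print(f"gap between transformative AI and ASI: ~{gap_years:.1f} years")
# Whenever 3e33 flop becomes affordable (possibly far in the future -> long timelines),
# 1e34 flop follows about a year later -> short takeoff.
```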

I agree with the spirit of what you are saying, but I want to register a desire for "long timelines" to mean ">50 years" or "after 2100". In public discourse, hearing Yann LeCun say something like "I have long timelines, by which I mean, no crazy event in the next 5 years" - that's simply not what people think when they think long timelines, outside of the AI sphere.

Long takeoff and short takeoff sound strange to me. Maybe because they are too close to long timelines and short timelines.

Yeah I think the similarity of takeoff and timelines is maybe the real problem.

Like if "takeoff" weren’t two syllables starting with T, I might be happy with ‘short/long’ being the prefix for both.

Back in the day when this discourse first took off, I remember meeting a lot of people who told me something in the vein of 'Yudkowsky's position that AI will just FOOM like that is silly, not credible, sci-fi. You should really be listening to this Paul guy. His continuous slow takeoff perspective is much more sensible'.

I remember pointing out to them that "Paul slow" according to Christiano was 'like the industrial revolution but 100x faster'... 

Depending on who you ask, the industrial revolution lasted ~150 years - so that's 1.5 years. That's pretty much what we are seeing. This was just status quo bias and respectability bias.

Actually, ofc the discourse was doubly silly. FOOM can still happen even in a 'slow takeoff' scenario. It's a 'slow' straight line until some threshold is hit, and then it absolutely explodes. [although imho people equivocate between different versions of FOOM in confusing ways]

Whether or not a certain takeoff will happen only partially has to do with predictable features like scaling compute. Part of it may be predicted by straight-line numerology, but a lot of it comes down to harder-to-predict or even impossible-to-predict features of the future like algorithmic & architectural changes, government intervention, etc. That was the reasonable position in 2020, anyway. Today we have enough bits to have a pretty good guess as to when, where, and how superintelligence will happen.

Akash:

Today we have enough bits to have a pretty good guess as to when, where, and how superintelligence will happen.

@Alexander Gietelink Oldenziel can you expand on this? What's your current model of when/where/how superintelligence will happen?

Hi @Akash, here are my current best guesses on the arrival and nature of superintelligence:

Timeframe: probably around the end of the decade, 2028-2032, but possibly even earlier. In the unlikely but not impossible case that AGI has not been achieved by 2035, timelines lengthen again.

Players will be the usual suspects: OpenAI, Anthropic, DeepMind, xAI, Meta, Baidu, etc. Compute spend will be in the hundreds of billions, maybe more. The number of players will be exceedingly small; likely some or all of the labs will join forces.

Likely there will be soft nationalization of AGI under the US government, under pressure of AI safety concerns, national security, and the economics of giant GPU clusters. China will have joined the race to AGI, and will likely build AGI systems not too long after the US (conditional on no Crazy Things happening in the interim). There will likely be various international agreements that will be ignored to varying degrees.

AGI will be similar in some respects to current-day LLMs in having a pre-training phase on large internet data, but importantly will differ in being a continually 'open-loop' trained RL agent on top of an LLM chassis. The key will be doing efficient long-horizon RL on thoughts on top of current-day pretrained transformers.

An important difference between current-day LLMs and future superintelligent AGI is that reactions to prompts (like solve cancer, the Riemann hypothesis, self-replicating drones) can take many hours, weeks, or months. So directing the AGI in what questions to think about will plausibly be very important.

After AGI is achieved (in the US), the future seems murky (to me). Seems hard to forecast how the public and decision-making elites will react exactly.

Likely the US government will try to control its use to a significant degree. It's very plausible that the best AI will not be fully accessible to the public.

National security interests likely to become increasingly important. A lot of compute likely to go into making AGI think about health & medicine.

Other countries (chief among them China) will want to build their own. Very large and powerful language models will still be available to the public.

Yeah, one of OpenAI's goals is to make AI models think for a very long time and get better answers the more they think, limited only by available computing power, so this will almost certainly be an important subgoal:
 


An important difference between current-day LLMs and future superintelligent AGI is that reactions to prompts (like solve cancer, the Riemann hypothesis, self-replicating drones) can take many hours, weeks, or months

And o1, while not totally like the system proposed, is a big conceptual advance toward making this a reality.

Just pointing out that you should transition to LessWrong Docs when trying to call out someone by tagging them.

I'll do that right now:

@Akash 

Response to Alexander Oldenziel's models?

'like the industrial revolution but 100x faster'

Actually, much more extreme than the industrial revolution because at the end of the singularity you'll probably discover a huge fraction of all technology and be able to quickly multiply energy production by a billion.

That said, I think Paul expects a timeframe longer than 1.5 years, at least he did historically, maybe this has changed with updates over the last few years. (Perhaps more like 5-10 years.)

I think a problem with all the proposed terms is that they are all binaries, and one bit of information is far too little to characterize takeoff: 

  • One person's "slow" is >10 years, another's is >6 months. 
  • The beginning and end points are super unclear; some people might want to put the end point near the limits of intelligence, some people might want to put the beginning points at >2x AI R&D speed, some at 10x, etc.
  • In general, a good description of takeoff should characterize capabilities at each point on the curve.  

So I don't really think that any of the binaries are all that useful for thinking or communicating about takeoff. I don't have a great ontology for thinking about takeoff myself to suggest instead, but in communication I generally just try to define a start and end point and then say quantitatively how long this might take. One of the central ones I really care about is the time between wakeup and takeover capable AIs.

wakeup = "the first period in time when AIs are sufficiently capable that senior government people wake up to incoming AGI and ASI" 

takeover capable AIs = "the first time there is a set of AI systems that are coordinating together and could take over the world if they wanted to" 

The reason to think about this period is that (kind of by construction) it's the time where unprecedented government actions that matter could happen. And so when planning for that sort of thing this length of time really matters. 

Of course, the start and end times I think about are both fairly vague. They also aren't purely a function of AI capabilities; they depend on stuff like "who is in government" and "how capable our institutions are at fighting a rogue AGI". Also, many people believe that we will never get takeover capable AIs, even at superintelligence.

I support replacing binary terms with quantitative terms.

I think in most cases it might make sense to give the unit you expect to measure it in. “Days-long takeoff”. “Months-long takeoff”. “Years-long takeoff”. “Decades-long takeoff”.

Minutes long takeoff...

 

[By comparison, I forget the reference but there is a paper estimating how quickly a computer virus could destroy most of the Internet. About 15 minutes, if I recall correctly.]

(This bit isn't serious) "i mean, a days-long takeoff leaves you with loads of time for the hypersonic missiles to destroy all of Meta's datacenters."

serious answer that is agnostic as to how you are responding:

only if you know the takeoff is happening

Fwiw, with both slow/fast and smooth/sharp, I feel fine thinking of it as a continuum. Takeoffs and timelines can be slower or faster and compared on that axis.

I agree that if you are just treating those as booleans you're gonna get confused, but the words seem about as scalar a shorthand as one could hope for without literally switching entirely to more explicit quantification.

IMO all of the "smooth/sharp" and "soft/hard" stuff is too abstract. When I concretely picture what the differences between them are, the aspect that stands out most is whether the takeoff will be concentrated within a single AI/project/company/country or distributed across many AIs/projects/companies/countries.

This is of course closely related to debates about slow/fast takeoff (as well as to the original Hanson/Yudkowsky debates). But using this distinction instead of any version of the slow/fast distinction has a few benefits:

  1. If someone asks "why should I care about slow/fast takeoff?" a lot of the answers will end up appealing to the concentrated/distributed power thing. E.g. you might say "if takeoff is fast that means that there will be a few key points of leverage".
  2. Being more concrete, I think it will provoke better debates (e.g. how would a single AI lab concretely end up outcompeting everyone else?)
  3. This framing naturally concentrates the mind on an aspect of risk (concentration of power) that is concerning from both a misuse and a misalignment perspective.
ryan_b:

The post anchors on the Christiano vs Eliezer models of takeoff, but am I right that the goal more generally is to disentangle the shape of progress from the timeline for progress? I strongly support disentangling dimensions of the problem. I have spoken against using p(doom) for similar reasons.

IMO, soft/smooth/gradual still convey wrong impressions. They still sound like "slow takeoff", they sound like the progress would be steady enough that normal people would have time to orient to what's happening, keep track, and exert control. As you're pointing out, that's not necessarily the case at all: from a normal person's perspective, this scenario may very much look very sharp and abrupt.

The main difference in this classification seems to be whether AI progress occurs "externally", as part of economic and R&D ecosystems, or "internally", as part of an opaque self-improvement process within a (set of) AI system(s). (Though IMO there's a mostly smooth continuum of scenarios, and I don't know that there's a meaningful distinction/clustering at all.)

From this perspective, even continuous vs. discontinuous don't really cleave reality at the joints. The self-improvement is still "continuous" (or, more accurately, incremental) in the hard-takeoff/RSI case, from the AI's own perspective. It's just that ~nothing besides the AI itself is relevant to the process.

Just "external" vs. "internal" takeoff, maybe? "Economic" vs. "unilateral"?

I do agree with that, although I don't know that I feel the need to micromanage the implicature of the term that much. 

I think it's good to try to find terms that don't have misleading connotations, but also good not to fight too hard to control the exact political implications of a term, partly because there's not a clear cutoff between being clear and being actively manipulative (and not obvious to other people which you're being, esp. if they disagree with you about the implications), and partly because there's a bit of a red queen race of trying to get terms into common parlance that benefit your agenda, and, like, let's just not.

Fast/slow just felt actively misleading.

I think the terms you propose here are interesting but a bit too opinionated about the mechanism involved. I'm not that confident those particular mechanisms will turn out to be decisive, and don't think the mechanism is actually that cruxy for what the term implies in terms of strategy.

If I did want to try to give it the connotations that actually feel right to me, I might say "rolling*" as the "smooth" option. I don't have a great "fast" one.

*although someone just said they found "rolling" unintuitive so shrug.

IMO, soft/smooth/gradual still convey wrong impressions. They still sound like "slow takeoff", they sound like the progress would be steady enough that normal people would have time to orient to what's happening, keep track, and exert control.

That is exactly the meaning that I'd thought was standard for "soft takeoff" (and which I assumed was synonymous with "slow takeoff"), e.g. as I wrote in 2012:

Bugaj and Goertzel (2007) consider three kinds of AGI scenarios: capped intelligence, soft takeoff, and hard takeoff. In a capped intelligence scenario, all AGIs are prevented from exceeding a predetermined level of intelligence and remain at a level roughly comparable with humans. In a soft takeoff scenario, AGIs become far more powerful than humans, but on a timescale which permits ongoing human interaction during the ascent. Time is not of the essence, and learning proceeds at a relatively human-like pace. In a hard takeoff scenario, an AGI will undergo an extraordinarily fast increase in power, taking effective control of the world within a few years or less. [Footnote: Bugaj and Goertzel defined hard takeoff to refer to a period of months or less. We have chosen a somewhat longer time period, as even a few years might easily turn out to be too little time for society to properly react.] In this scenario, there is little time for error correction or a gradual tuning of the AGI’s goals.

(B&G didn't actually invent soft/hard takeoff, but it was the most formal-looking cite we could find.)

Assuming this is the important distinction, I like something like “isolated”/“integrated” better than either of those.

I think it’s even more actively confusing because “smooth/continuous” takeoff not only could be faster in calendar time

We're talking about two different things here: take-off velocity, and timelines. All 4 possibilities are on the table - slow takeoff/long timelines, fast takeoff/long timelines, slow takeoff/short timelines, fast takeoff/short timelines.

A smooth takeoff might actually take longer in calendar time if incremental progress doesn’t lead to exponential gains until later stages.

Honestly I'm surprised people are conflating timelines and takeoff speeds.

I agree. I look at the red/blue/purple curves and I think "obviously the red curve is slower than the blue curve", because it is not as steep and neither is its derivative. The purple curve is later than the red curve, but it is not slower. If we were talking about driving from LA to NY starting on Monday vs flying there on Friday, I think it would be weird to say that flying is slower because you get there later. I guess maybe it's more like when people say "the pizza will get here faster if we order it now"? So "get here faster" means "get here sooner"?

Of course, if people are routinely confused by fast/slow, I am on board with using different terminology, but I'm a little worried that there's an underlying problem where people are confused about the referents, and using different words won't help much.

If a friend calls me and says "how soon can you be in NY?" and I respond with "well, the fastest flight gets there at 5PM" and "the slowest flight gets me there at 2PM", my friend sure will be confused and sure is not expecting me to talk about the literal relative speeds of the plane. 

In-general, I think in the context of timeline discussions, people almost always ask "how soon will AI happen?" and I think a reasonable assumption given that context is that "fast" means "sooner" and "slow" means "later".

I agree that in the context of an explicit "how soon" question, the colloquial use of fast/slow often means sooner/later. In contexts where you care about actual speed, like you're trying to get an ice cream cake to a party and you don't want it to melt, it's totally reasonable to say "well, the train is faster than driving, but driving would get me there at 2pm and the train wouldn't get me there until 5pm". I think takeoff speed is more like the ice cream cake thing than the flight to NY thing.

That said, I think you're right that if there's a discussion about timelines in a "how soon" context, then someone starts talking about fast vs slow takeoff, I can totally see how someone would get confused when "fast" doesn't mean "soon". So I think you've updated me toward the terminology being bad.

For specifically discussing the takeoff models in the original Yudkowsky / Christiano discussion, what about:

Economic vs. atomic takeoff

Economic takeoff because Paul's model implies rapid and transformative economic growth prior to the point at which AIs can just take over completely. Whereas Eliezer's model is that rapid economic growth prior to takeover is not particularly necessary - a sufficiently capable AI could act quickly or amass resources while keeping a low profile, such that from the perspective of almost all humanity, takeover is extremely sudden.

Note: "atomic" here doesn't necessarily mean "nanobots" - the goal of the term is to connote that an AI does something physically transformative, e.g. releasing a super virus, hacking / melting all uncontrolled GPUs, constructing a Dyson sphere, etc. A distinguishing feature of Eliezer's model is that those kinds of things could happen prior to the underlying AI capabilities that enable them having more widespread economic effects.

IIUC, both Eliezer and Paul agree that you get atomic takeoff of some kind eventually, so one of the main disagreements between Paul and Eliezer could be framed as their answer to the question: "Will economic takeoff precede atomic takeoff?" (Paul says probably yes, Eliezer says maybe.)


Separately, an issue I have with smooth / gradual vs. sharp / abrupt (the current top-voted terms) is that they've become a bit overloaded and conflated with a bunch of stuff related to recent AI progress, namely scaling laws and incremental / iterative improvements to chatbots and agents. IMO, these aren't actually closely related nor particularly suggestive of Christiano-style takeoff - if anything it seems more like the opposite:

  • Scaling laws and the current pace of algorithmic improvement imply that labs can continue improving the underlying cognitive abilities of AI systems faster than those systems can actually be deployed into the world to generate useful economic growth. e.g. o1 is already "PhD level" in many domains, but doesn't seem to be on pace to replace a significant amount of human labor or knowledge work before it is obsoleted by Opus 3.5 or whatever.
  • Smooth scaling of underlying cognition doesn't imply smooth takeoff. Predictable, steady improvements on a benchmark via larger models or more compute don't tell you which point on the graph you get something economically or technologically transformative.
kave:

I'm not sure I'm understanding your setup (I only skimmed the post). Are you using takeoff to mean something like "takeoff from now" or "takeoff from [some specific event that is now in the past]"? If I look at your graph at the end, it looks to me like "Paul Slow" is a faster timeline but a longer takeoff (Paul Slow's takeoff beginning near the beginning of the graph, and Fast takeoff beginning around the intersection of the two blue lines).

Please, for the love of god, do not keep using a term that people will predictably misread as implying longer timelines. I expect this to have real-world consequences. If someone wants to operationalize a bet about it having significant real-world consequences I would bet money on it.

 

I would be willing to take the other side of this bet depending on the operationalization.  Certainly I would take the "no" side of the bet for "will Joe Biden (or the next president) use the words 'slow takeoff' in a speech to describe the current trajectory of AI"

The relevant question is ‘Will a policy wonk inform Joe Biden (or any other major decisionmaker), having either read a report with ‘slow takeoff’ and gotten confused,

or read a report by someone who read a report by someone who was confused?’ (The latter is the one that seems very likely to me.)

There's no way I'd bet on "will someone talk to someone who talked to someone who was once confused about something" because that's not what I think "real world impact" means.

 

At a minimum it would have to be in an official report signed by the office of the president, or something that has the force of law, like an executive order.

I added an option for x/y/z. Mine would thus be something like 1.5/1/0.5 years from a mid-2024 perspective. Much more cumbersome, but very specific!

For comparison, I have a researcher friend whose expectations are more like 4/2/0.1. Different-shaped lines indeed!

Some people I talk to seem to have expectations like 20/80/never. That's also a very different belief. I'm just not sure words are precise enough for this situation of multiple line slopes, which is why I suggest using numbers.

I was finding it a bit challenging to unpack what you're saying here. I think, after a reread, that you're using ‘slow’ and ‘fast’ in the way I would use ‘soon’ and ‘far away’ (i.e. referring to how far from the present it will occur). Is this read about correct?

Something like iterative/cliff, with fast and slow expressing time scales

Can you sort the poll options by popularity?

alas not easily

ZY:

Nice post pointing this out! Relatedly, for misused/overloaded terms - I think I have seen this getting more common recently (including overloaded terms that mean something else in the wider academic community or society; and, self-reflecting, I sometimes do this too and need to improve on it).

Takeoff speed could be measured by e.g. the time between the first mass casualty incident that kills thousands of people vs the first mass casualty incident that kills hundreds of millions.

COVID-19 killed, idk, tens of millions worldwide rather than hundreds of millions.

 

But consider that an example of a (biological) virus takeoff on the order of months.

 

So the question for AGI takeoff: death rate growing more rapidly than the COVID-19 pandemic, or slower?

If you’re trying to change the vocabulary you should have settled on an option.

I know, but last time I worried about that I ended up not writing the post at all, and it seemed better to make sure I published anything at all.

(edit: made a poll so as not to fully abdicate responsibility for this problem tho)