Okay, since I didn't successfully get buy-in for a particular term before writing this post, here's a poll to agree/disagree-vote on. (I'm not including Fast/Slow as an option by default, but you can submit other options here, and if you really want to fight for preserving it, that seems fine.)
I don't love "smooth" vs "sharp" because these words don't naturally point at what seems to me to be the key concept: the duration from the first AI capable of being transformatively useful to the first system which is very qualitatively generally superhuman[1]. You can have a "smooth" takeoff driven purely by scaling things up where this duration is short or nonexistent.
I also care a lot about the duration from AIs which are capable enough to 3x R&D labor to AIs which are capable enough to strictly dominate (and thus obsolete) top human scientists but which aren't necessarily much smarter. (I also care some about the durations between a bunch of different milestones, and I'm not sure that my operationalizations of the milestones are the best ones.)
Paul originally operationalized this as seeing an economic doubling over 4 years prior to a doubling within a year, but I'd prefer for now to talk about the qualitative level of capabilities rather than also entangling questions about how AI will affect the world[2].
So, I'm tempted by "long duration" vs "short duration" takeoff, though this is pretty clumsy.
Really, there are a bunch of different distinctions we care about with respect to takeoff...
Back in the day when this discourse first took off, I remember meeting a lot of people who told me something in the vein of "Yudkowsky's position that AI will just FOOM like that is silly, not credible, sci-fi. You should really be listening to this Paul guy. His continuous slow takeoff perspective is much more sensible."
I remember pointing out to them that "Paul slow" according to Christiano was 'like the industrial revolution but 100x faster'...
Depending on who you ask, the industrial revolution lasted ~150 years, so 100x faster works out to ~1.5 years. That's pretty much what we are seeing. This was just status quo bias and respectability bias.
Actually, of course, the discourse was doubly silly. FOOM can still happen even in a 'slow takeoff' scenario: it's a 'slow' straight line until some threshold is hit, at which point it absolutely explodes. [Although IMHO people equivocate between different versions of FOOM in confusing ways.]
Whether or not a certain takeoff will happen has only partially to do with predictable features like scaling compute. Part of it may be predicted by straight-line numerology, but a lot of it comes down to harder-to-predict or even impossible-to-predict features about the ...
Today we have enough bits to make a pretty good guess as to when, where, and how superintelligence will happen.
@Alexander Gietelink Oldenziel can you expand on this? What's your current model of when/where/how superintelligence will happen?
I think a problem with all the proposed terms is that they are all binaries, and one bit of information is far too little to characterize takeoff:
So I don't really think that any of the binaries are all that useful for thinking or communicating about takeoff. I don't have a great ontology for thinking about takeoff myself to suggest instead, but in communication I generally just try to define a start point and an end point and then say quantitatively how long the gap between them might be (see the toy sketch after the definitions below). One of the central gaps I really care about is the time between wakeup and takeover capable AIs.
wakeup = "the first period in time when AIs are sufficiently capable that senior government people wake up to incoming AGI and ASI"
takeover capable AIs = "the first time there is a set of AI systems that are co...
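A minimal sketch of what this could look like in practice, in Python. The milestone names follow the definitions above; the dates are purely hypothetical placeholders for illustration, not forecasts:

```python
from datetime import date

# Toy illustration (not a forecast): represent takeoff as named milestones
# with explicit dates, then report the durations between them directly,
# instead of compressing everything into one "fast/slow" bit.
milestones = {
    "wakeup": date(2027, 1, 1),            # hypothetical date, illustration only
    "takeover_capable": date(2029, 6, 1),  # hypothetical date, illustration only
}

def gap_in_years(start: str, end: str) -> float:
    """Duration between two named milestones, in years."""
    return (milestones[end] - milestones[start]).days / 365.25

print(f"wakeup -> takeover capable: {gap_in_years('wakeup', 'takeover_capable'):.1f} years")
```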
IMO all of the "smooth/sharp" and "soft/hard" stuff is too abstract. When I concretely picture what the differences between them are, the aspect that stands out most is whether the takeoff will be concentrated within a single AI/project/company/country or distributed across many AIs/projects/companies/countries.
This is of course closely related to debates about slow/fast takeoff (as well as to the original Hanson/Yudkowsky debates). But using this distinction instead of any version of the slow/fast distinction has a few benefits:
The post anchors on the Christiano vs Eliezer models of takeoff, but am I right that the goal more generally is to disentangle the shape of progress from the timeline for progress? I strongly support disentangling dimensions of the problem. I have spoken against using p(doom) for similar reasons.
IMO, soft/smooth/gradual still convey the wrong impressions. They still sound like "slow takeoff"; they sound like the progress would be steady enough that normal people would have time to orient to what's happening, keep track, and exert control. As you're pointing out, that's not necessarily the case at all: from a normal person's perspective, this scenario may look very sharp and abrupt.
The main difference in this classification seems to be whether AI progress occurs "externally", as part of economic and R&D ecosystems, or "internally", as par...
I do agree with that, although I don't know that I feel the need to micromanage the implicature of the term that much.
I think it's good to try to find terms that don't have misleading connotations, but also good not to fight too hard to control the exact political implications of a term, partly because there's not a clear cutoff between being clear and being actively manipulative (and not obvious to other people which you're being, esp. if they disagree with you about the implications), and partly because there's a bit of a red queen race of trying to get terms into common parlance that benefit your agenda, and, like, let's just not.
Fast/slow just felt actively misleading.
I think the terms you propose here are interesting but a bit too opinionated about the mechanism involved. I'm not that confident those particular mechanisms will turn out to be decisive, and don't think the mechanism is actually that cruxy for what the term implies in terms of strategy.
If I did want to try to give it the connotations that actually feel right to me, I might say "rolling*" as the "smooth" option. I don't have a great "fast" one.
*although someone just said they found "rolling" unintuitive so shrug.
I think it’s even more actively confusing because “smooth/continuous” takeoff not only could be faster in calendar time
We're talking about two different things here: take-off velocity, and timelines. All 4 possibilities are on the table - slow takeoff/long timelines, fast takeoff/long timelines, slow takeoff/short timelines, fast takeoff/short timelines.
A smooth takeoff might actually take longer in calendar time if incremental progress doesn’t lead to exponential gains until later stages.
Honestly I'm surprised people are conflating timelines and takeoff speeds.
I don't love it and I don't know if it's possible to have better dynamics, but I feel like certain terms and positions end up having a lot of worldview [lossily?] "compressed" into them. Short/long timelines is one of them, and fast/slow takeoff might be the next big one, where my read is that "slow takeoff" served as a reason for optimism because there's time to fix things as AIs get gradually more powerful.
But to the extent the term could mean any number of things, or is naively read to mean something other than what the originator meant by it, that is bad, and kudos to...
I was finding it a bit challenging to unpack what you're saying here. I think, after a reread, that you're using 'slow' and 'fast' the way I would use 'soon' and 'far away' (i.e., referring to how far from the present it will occur). Is this reading about correct?
For specifically discussing the takeoff models in the original Yudkowsky / Christiano discussion, what about:
Economic vs. atomic takeoff
Economic takeoff because Paul's model implies rapid and transformative economic growth prior to the point at which AIs can just take over completely. Whereas Eliezer's model is that rapid economic growth prior to takeover is not particularly necessary: a sufficiently capable AI could act quickly or amass resources while keeping a low profile, such that from the perspective of almost all humanity, takeover is extremely sudden.
I'm not sure I'm understanding your setup (I only skimmed the post). Are you using takeoff to mean something like "takeoff from now" or "takeoff from [some specific event that is now in the past]"? If I look at your graph at the end, it looks to me like "Paul Slow" is a faster timeline but a longer takeoff (Paul Slow's takeoff beginning near the beginning of the graph, and Fast takeoff beginning around the intersection of the two blue lines).
Please, for the love of god, do not keep using a term that people will predictably misread as implying longer timelines. I expect this to have real-world consequences. If someone wants to operationalize a bet about it having significant real-world consequences I would bet money on it.
I would be willing to take the other side of this bet depending on the operationalization. Certainly I would take the "no" side of the bet for "will Joe Biden (or the next president) use the words 'slow takeoff' in a speech to describe the current trajectory of AI"
I added an option for x/y/z. Mine would thus be something like 1.5/1/0.5 years from a mid-2024 perspective. Much more cumbersome, but very specific!
For comparison, I have a researcher friend whose expectations are more like 4/2/0.1. Different-shaped lines indeed!
Some people I talk to seem to have expectations like 20/80/never. That's also a very different belief. I'm just not sure words are precise enough for this situation of multiple line slopes, which is why I suggest using numbers.
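As a toy illustration of the numbers-over-words suggestion, the x/y/z triples can be treated as explicit data. The names and values below are just the examples from this thread, with None standing in for "never":

```python
# Toy comparison (illustrative only): each forecast is a triple of durations,
# in years, between three successive takeoff milestones, as in the x/y/z
# notation above. None stands in for "never".
forecasts = {
    "me (mid-2024)": (1.5, 1.0, 0.5),
    "researcher friend": (4.0, 2.0, 0.1),
    "some people I talk to": (20.0, 80.0, None),
}

for who, gaps in forecasts.items():
    shown = "/".join("never" if g is None else str(g) for g in gaps)
    total = None if None in gaps else sum(gaps)
    print(f"{who}: {shown} (total: {'undefined' if total is None else total})")
```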
Nice post pointing this out! Relatedly, on misused/overloaded terms: I think I have seen this getting more common recently (including overloaded terms that mean something else in the wider academic community or society; and, self-reflecting, I sometimes do this too and need to improve on it).
Takeoff speed could be measured by e.g. the time between the first mass casualty incident that kills thousands of people vs the first mass casualty incident that kills hundreds of millions.
For a long time, when I heard "slow takeoff", I assumed it meant "takeoff that takes longer calendar time than fast takeoff" (i.e. what is now more often referred to as "short timelines" vs "long timelines"). I think Paul Christiano popularized the term, and it so happened that he expected both longer timelines and a smoother/more continuous takeoff.
I think it's at least somewhat confusing to use the term "slow" to mean "smooth/continuous", because that's not what "slow" particularly means most of the time.
I think it's even more actively confusing because "smooth/continuous" takeoff not only could be faster in calendar time, but, I'd weakly expect this on average, since smooth takeoff means that AI resources at a given time are feeding into more AI resources, whereas sharp/discontinuous takeoff would tend to mean "AI tech doesn't get seriously applied towards AI development until towards the end."
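A toy numerical sketch of this point, with entirely made-up parameters: the "smooth" trajectory compounds from the start (AI capability feeds back into AI progress), while the "sharp" one stays roughly flat until a threshold and then explodes. With these arbitrary numbers, the smooth curve reaches a given capability level much sooner in calendar time:

```python
# Toy model (illustrative only, parameters made up): compare how long it takes
# a "smooth" (compounding-from-the-start) vs "sharp" (flat-then-explosive)
# trajectory to reach a target capability level.

def years_to_reach(target: float, smooth: bool, dt: float = 0.01) -> float:
    c, t = 1.0, 0.0  # starting capability and elapsed years
    while c < target:
        if smooth:
            c += 0.5 * c * dt  # compounding growth from the start
        else:
            # roughly flat progress until a threshold, then explosive growth
            c += (0.1 if c < 10 else 5.0 * c) * dt
        t += dt
    return t

for label, smooth in [("smooth", True), ("sharp", False)]:
    print(f"{label}: reaches capability 100 after ~{years_to_reach(100, smooth):.1f} years")
```

With these parameters the smooth curve hits the target in roughly 9 years versus roughly 90 for the sharp one, which is the sense in which "smooth" can be faster in calendar time.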
I don't think this is academic[1].
I think this has wasted a ton of time on LessWrong with people arguing past each other, and if "slow/fast" is the terminology that policymakers are hearing as they start to tune into the AI situation, it is predictably going to cause them confusion, at least waste their time, and quite likely lead many of them to approach the situation through misleading strategic frames that conflate smoothness and timelines.
Way back in Arguments about fast takeoff, I argued that this was a bad term, and proposed that "smooth" and "sharp" takeoff were better terms. I'd also be fine with "hard" and "soft" takeoff. I think "Hard/Soft" have somewhat more historical use, and are maybe less likely to get misheard as "short", so maybe use those.[2]
I am annoyed that 7 years later people are still using "slow" to mean "maybe faster than 'fast'." This is stupid. Please stop. I think smooth/sharp and hard/soft are both fairly intuitive (at the very least, more intuitive than slow/fast, and people who are already familiar with the technical meaning of slow/fast will figure it out).
I would be fine with "continuous" and "discontinuous", but, realistically, I do not expect people to stick to those because they are too many syllables.
Please, for the love of god, do not keep using a term that people will predictably misread as implying longer timelines. I expect this to have real-world consequences. If someone wants to operationalize a bet about it having significant real-world consequences I would bet money on it.
a term that ironically means "pointlessly pedantic."
the last time I tried to write this post, 3 years ago, I got stuck on whether to argue for smooth/sharp or hard/soft and then I didn't end up posting it at all and I regret that.