I expect "slow takeoff," which we could operationalize as the economy doubling over some 4 year interval before it doubles over any 1 year interval. Lots of people in the AI safety community have strongly opposing views, and it seems like a really important and intriguing disagreement. I feel like I don't really understand the fast takeoff view.
(Below is a short post copied from Facebook. The link contains a more substantive discussion. See also: AI Impacts on the same topic.)
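Since the operationalization is precise, it can help to see it spelled out. Doubling over a 4-year interval corresponds to roughly 2^(1/4) − 1 ≈ 19% annual growth, while a 1-year doubling means 100% growth in a single year. Here is a minimal sketch of checking the criterion against a hypothetical annual GDP series; the function names and toy growth numbers are my own illustration, not anything from the post:

```python
# Minimal sketch of the "slow takeoff" criterion: given an annual GDP
# series, check whether output first doubles over some 4-year window
# before it ever doubles over any 1-year window. Toy numbers only.

def first_doubling_year(gdp, window):
    """Return the first index t at which gdp[t] >= 2 * gdp[t - window]."""
    for t in range(window, len(gdp)):
        if gdp[t] >= 2 * gdp[t - window]:
            return t
    return None

def is_slow_takeoff(gdp):
    four_year = first_doubling_year(gdp, 4)
    one_year = first_doubling_year(gdp, 1)
    # Slow takeoff: a 4-year doubling happens, and it happens strictly
    # before any 1-year doubling (which may never happen at all).
    return four_year is not None and (one_year is None or four_year < one_year)

# A hypothetical series: ~19% annual growth (doubling every 4 years)
# that later accelerates past 100% annual growth.
gdp = [1.0]
for year in range(12):
    rate = 0.19 if year < 8 else 1.1
    gdp.append(gdp[-1] * (1 + rate))

print(is_slow_takeoff(gdp))  # True: the 4-year doubling comes first
```

On this reading, "slow" does not mean growth stays modest; it means the acceleration is gradual enough that a 4-year doubling is completed before any 1-year doubling, which is the point several commenters push on below.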
I believe that the disagreement is mostly about what happens before we build powerful AGI. I think that weaker AI systems will already have radically transformed the world, while I believe fast takeoff proponents think there are factors that make weak AI systems radically less useful. This is strategically relevant because I'm imagining AGI strategies playing out in a world where everything is already going crazy, while other people are imagining AGI strategies playing out in a world that looks kind of like 2018 except that someone is about to get a decisive strategic advantage.
Here is my current take on the state of the argument:
The basic case for slow takeoff is: "it's easier to build a crappier version of something" + "a crappier AGI would have almost as big an impact." This basic argument seems to have a great historical track record, with nuclear weapons as the biggest exception.
On the other side there are a bunch of arguments for fast takeoff, explaining why the case for slow takeoff doesn't work. If those arguments were anywhere near as strong as the arguments for "nukes will be discontinuous" I'd be pretty persuaded, but I don't yet find any of them convincing.
I think the best argument is the historical analogy to humans vs. chimps. If the "crappier AGI" were like a chimp, then it wouldn't be very useful and we'd probably see a fast takeoff. I think this is a weak analogy, because the discontinuous progress during evolution occurred on a metric that evolution wasn't really optimizing: groups of humans can radically outcompete groups of chimps, but (a) that's almost a flukey side-effect of the individual benefits that evolution is actually selecting on, and (b) because evolution optimizes myopically, it doesn't bother to optimize chimps for things like "ability to make scientific progress" even if in fact that would ultimately improve chimp fitness. When we build AGI we will be optimizing the chimp-equivalent-AI for usefulness, and it will look nothing like an actual chimp (in fact it would almost certainly be enough to get a decisive strategic advantage if introduced to the world of 2018).
In the linked post I discuss a bunch of other arguments:

- People won't be trying to build AGI (I don't believe it).
- AGI depends on some secret sauce (why?).
- AGI will improve radically after crossing some universality threshold (I think we'll cross it way before AGI is transformative).
- Understanding is inherently discontinuous (why?).
- AGI will be much faster to deploy than AI (but a crappier AGI will have an intermediate deployment time).
- AGI will recursively improve itself (but the crappier AGI will recursively improve itself more slowly).
- Scaling up a trained model will introduce a discontinuity (but before that someone will train a crappier model).
I think that I don't yet understand the core arguments/intuitions for fast takeoff, and in particular I suspect that they aren't on my list or aren't articulated correctly. I am very interested in getting a clearer understanding of the arguments or intuitions in favor of fast takeoff, and of where the relevant intuitions come from / why we should trust them.

I like this post, but I think it's somewhat misleading to call your scenario a "slow takeoff". To my mind, a "slow takeoff" evokes an image from non-singularitarian science fiction where you have human-level robots running around and they've been running around for decades if not centuries: that is, a very slow and gradual development that gives society and institutions plenty of time to adapt. But your version is clearly not this, since you are talking on the timescale of a few years, and note yourself that time will be of the essence even with a "slow" takeoff.
You also note that much of the safety community seems to believe in a fast takeoff, in disagreement with you. I don't know whether you're including me there, but I've previously talked about a takeoff on the scale of a few years being a fast one, since to me a "fast takeoff" is one where there's little time for existing institutions to prepare and respond adequately, and a few years still seems short enough to meet that criterion.
I'd prefer to use some term like "moderate takeoff" for the scenario that you're talking about.
[edit: sorry if I seem like I'm piling on with the terminology, I first wrote this comment and only then read the other comments and saw that they ~all brought up the same thing.]
A thought: you've been using the phrase "slow takeoff" to distinguish your model from the MIRI-ish model, but I think the relevant phrase is more like "smooth takeoff vs. sharp takeoff" (where in a sharp takeoff the shape of the curve changes at some point).
But your other comment + Robby's have me convinced that the key disagreement doesn't have anything to do with smooth vs. sharp takeoff either. It just happens to be a point of disagreement without being an important one.