Here are my largest disagreements with AI 2027.
1. I think the timelines are plausible but solidly on the shorter end; the exact AI 2027 timeline to fully automating AI R&D is around my 12th percentile outcome. So the timeline is plausible to me (in fact, similarly plausible to my views at the time of writing), but substantially faster than my median scenario (which would be something like the early 2030s).
2. I think that the AI behaviour after the AIs are superhuman is a little wonky and, in particular, undersells how crazy wildly superhuman AI will be. I expect the takeoff to be extremely fast once we get AIs that are better than the best humans at everything; i.e., within a few months of AIs that are broadly superhuman, we have AIs that are wildly superhuman. I think wildly superhuman AIs would be somewhat more transformative, more quickly, than AI 2027 depicts. The exact dynamics aren't possible to predict, but I expect craziness along the lines of: (i) nanotechnology, leading to things like the biosphere being consumed by tiny self-replicating robots that double at speeds similar to the fastest biological doubling times, from hours (amoebas) to months (rabbits); and (ii) extremely superhuman persuasion and political maneuvering, sufficient to let the AI steer policy to a substantially greater extent than it does in AI 2027. In AI 2027, the AI gained enough political power to prevent humans from interfering with the ongoing intelligence and industrial explosion (which humans were basically on track to permit anyway), whereas my best guess is that the AI would gain enough political power to do de facto whatever it wanted, and would therefore consolidate power faster (rather than keeping up the charade of humans being in charge for several years). I also think there are many unknown unknowns downstream of ASI which are really hard to account for in a scenario like AI 2027, but which are nonetheless likely to change the picture a lot.
3. I t