I think the author of this review is (maybe even adversarially) misreading "OpenBrain" as an alias used to refer specifically to OpenAI. AI 2027 quite easily lends itself to such an interpretation by casual readers, though. And to well-informed readers, the decision to assume that in the very near future one of the frontier US labs will pull so far ahead of the others as to make them less relevant competitors than Chinese actors definitely jumps out.
The author also seems not to realize that most of OpenAI's costs are unrelated to inference?
Thanks for sharing. I read through a few pages of it but didn't find anything that seemed good to me. I'm curious which bits of it you resonate with.
I find it most compelling in the second half. I agree with the author on several points: the order of skill acquisition seems strange; humans seem useful for far too long after programming is automated; the government behaves in ways I don't expect it to; it is weirdly flattering to Trump and JD Vance, in a manner that doesn't match my estimate of their competence; things are too normal, in ways that make me feel the endgame of the scenario is optimized for respectability over accuracy; and, more generally, it feels more fanfic-y the closer it gets to the singularity.
I finally got around to reading AI 2027 more closely a couple of weeks ago. I have been thinking a lot about it, about scenario-based planning in general (which I had a mild interest in years ago), and about this particular form of highly specific, highly salient, media-blitzed single scenario. I appreciate that the author actually read it while attempting to keep track of state and coherence.
I am less allergic to conflict theorists than I used to be, as I often find some signal. There are many things I disagree with in the post but I consider it worth reading.
OK, thanks, that's the half I didn't read. I should finish it.
The specific points you mention seem reasonable to me; I'm not saying I agree with them, but they seem like they could be right.
Re: flattering Vance and Trump: I think a lot of people are under the mistaken impression that AI 2027 depicts competent government action. On the contrary, even in the slowdown ending to a large extent (and definitely in the race ending), it depicts an incompetent, bumbling government being captured and bowled over by corporate lobbyists and CEOs. At least that's our opinion as authors; sorry that didn't come through in the text...
Re: normalcy: that's another critique that I resonate with. In truth the arrival of superintelligence is probably going to be INSANE and CRAZY and WORLDSHAKING in a bunch of ways that we haven't yet imagined, but, alas, precisely because we haven't yet imagined them, we couldn't put them in AI 2027. But maybe we should have done more to emphasize that there will likely be lots of black swans / curveballs / etc. that overall make the post-ASI situation feel even less normal than AI 2027 describes.
Also, like you said, this critic did actually read the whole thing carefully while attempting to keep track of state and coherence, which is commendable and valuable and rare.
I really, really wish that other people would try their hand at writing their own scenario forecasts. I wish I had emphasized that more. If critics had alternative scenarios, then we could make more apples-to-apples comparisons. Besides, I genuinely think it's a really helpful thinking tool and basically everyone should do it at least once every few years.
Regarding the bumbling: I agree you depict some capture and incompetence, but the executive branch also displays a degree of foresight that seems implausible to me. However, I am so angry about the stupidity of the tariffs against my country that this is possibly clouding my judgement.
It looks like SE Gyges' response is actually based on a series of misunderstandings. I have prepared a draft response, similar to your coverage of Vitalik's post. If you'd like, I can publish it or send it to you via private message.
Thanks! I am sick today and probably won't get to this till next week, so I encourage you to just post the response as your own opinion rather than as something I've endorsed. But I imagine I'd agree with most of it, since I disagree with most of SE Gyges' critiques.
I found this an interesting critique of AI 2027. Even for those reflexively turned off by the psychoanalysis and political critique, I would still recommend reading the whole thing. Though I have similar timelines, mostly for vibes-based reasons, there are a few things about the 2027 scenario itself that struck me as absurd or pointless, and the author covers most of them and more.