I've seen the Doom vs Boom GDP forecast a few times and every time I have to ask... Would GDP per capita not go asymptotically up in the extinction case? The numerator would increase as AI generates more domestic product but the denominator would decrease as we are extincified. So the "singularity: extinction" line should match and then exceed the "singularity: benign" line before becoming undefined.
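The comment's arithmetic can be made explicit. A minimal sketch, with hypothetical symbols chosen here for illustration ($Y(t)$ for GDP, $N(t)$ for population, $T$ for the extinction time):

```latex
% GDP per capita as a ratio of output to population
\text{GDP per capita}(t) = \frac{Y(t)}{N(t)}
% If output stays bounded below by some Y_0 > 0 while population
% goes to zero as t approaches the extinction time T, the ratio
% diverges, and is undefined once N(T) = 0:
\lim_{t \to T^{-}} \frac{Y(t)}{N(t)} = \infty,
\qquad \frac{Y(T)}{N(T)} = \frac{Y(T)}{0} \ \text{(undefined)}
```

This assumes AI-generated output still counts toward domestic product as the population crashes, which is exactly the premise the comment is questioning.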
[This can be ignored -- already answered well enough. Kind of a nit, but the graph seems a bit confusing. How can GDP per capita continue on its trend, presumably with the 2.1% growth boost from the AI gains, while population is crashing? Or does the trend GDP just presume the implied population changes that were historically embedded in the data series?]
But I would like to know more about the model for the extinction plot. I was wondering about your views on the path between what you guys call the singularity and when extinction is supposed to occur. I think this is the first time I've seen that presented. (But I don't rule out that it's been in a lot of places I've just never looked.)
This article is more 'pop' than most I put on LessWrong, but I think that there's an important reality to wake up to, which I'm still feeling poorly-calibrated on, regarding the influence of public and political discourse around AI. I expect others here might have insights there and/or benefit from getting more in contact with that.
When we say "best humans still outperform", we should remember that machines have certain types of advantages over humans, such as never getting drunk or tired. To meaningfully outdo a human, the machine has to outdo a non-malfunctioning human. Outdoing the human because it never gets tired may be useful in a practical sense, but it doesn't show that the machine is smarter than the human. Furthermore, you shouldn't be comparing the AI against an average human anyway unless the comparison is done against an average AI (if you can even come up with a definition of 'average AI' that can't be gerrymandered).
(And no, you don't get to ignore AI hallucinations, because hallucination is inextricably part of the AI's reasoning process. There's no such thing as 'the AI is in a non-hallucinatory state right now' in the way that a human won't always be tired.)
A few years ago I was tickled by an article headline in a serious academic journal:
Remarkable! Man bites dog! It had become newsworthy, it was worth checking (and, I perceive, worth a little self-congratulatory celebration) that there remained any domain where mere man could still hope to contend with the machines — at least, the best humans still could! (Could you?)
A message from the future
That was 2023. I think what stood out to me at the time[1] was that this was in some sense early. Not early in the story of AI — although ChatGPT and StableDiffusion, each less than a year old, had captured the public attention in a way which earlier AI hadn’t, these were merely the latest in a long lineage of gradual developments — but an early sign of a reckoning, an attitude shift in how humanity would grapple with these new machine capabilities we were conjuring fitfully into being.
I’d already been worrying for years that things might get out of hand with AI (and had even started writing about it). I was hardly the first![2] But this had felt almost like a perversely secret concern (how can people not see what’s coming?? — but they didn’t), one which humanity at large appeared destined to ignore until either it was too late… or, if we somehow played it right, until a splendid apotheosis of world peace, unlimited bounty, health and longevity delivered by machine intellect. (In fact I think those remain real prospects, and it’s absolutely in our hands to determine which outcomes we get.)
What this headline implicitly spoke of, the subtextual worldview shift betrayed by the phrasing — “Best humans still outperform” — was that we had woken up and viscerally felt the reality that even the ‘best’ humans might genuinely need to watch their backs. The machines were coming. It was no longer (had never been) a joke or a fairy tale.
This headline, seemingly from a near future in which it was taken for granted that machines, in general, dominated human capabilities, showed what was coming. Headlines like it are now commonplace — perhaps more common than those (now almost boring!) headlines adding to the litany of tasks AI now outcompetes human experts at.
The world changed
The world changed. Not because the world had actually yet changed (much), but because humanity, in our limited and faltering foresight, had noticed that, soon, it might. That murky perception of the future, humanity’s near-unique hallmark and blessing, memetically reverberated and has worked its way into our collective discourse.
In this way, I’m incredibly grateful to the ‘ChatGPT moment’. Rather than implicitly relying on a plucky band of vaguely foresighted but ultimately underpowered ‘sci-fi weirdos’, humanity as a whole is entering the conversation. We’re all stakeholders in the trajectory of this world-transforming sphere of technology, and all kinds of people are beginning to act like it: people with skillsets and perspectives which we’ll need and which had been lacking in earlier debates. Law theorists, philosophers, engineers, anthropologists, economists, statespeople. It’s a thickly textured problem. It’ll need more than people like me (aspiring polymath though I may be) to solve it!
These cultural conversation shifts are fickle but surely incredibly consequential. 2025 felt like another shift, to me, and 2026 so far — with AI producing genuine national security implications and at the centre of dirty political manoeuvring — seems to suggest that both the training wheels and the gloves are off, as Dean Ball recently put it. It’s a little scary: powerful and not altogether friendly forces have turned their eye to the potential potency of emerging tech, and they[3] may wrestle for it, even under the risk that they destroy much in the process or that the tech spills entirely out of their control.
The world, changed
We can be doing better! People can get curious, find out what’s what, consider stakes and what realistic paths we might prefer. Don’t make the mistake of ‘nowsight’ bias — today’s AI are the least capable there will ever be! Take seriously where things might go, and notice if the conversation seems to miss something important that you understand well: it’s still early and the ‘experts’ are mainly that by virtue of noticing the importance of AI a little sooner than everyone else[4]. Let’s also grab the new tech building blocks we have and bootstrap the way we do foresight, collective intelligence, and coordination.
Don’t mistake me for naively assuming machines will blast through every bottleneck in short order. There are a lot of adaptability, dexterity, and generality bottlenecks between here and self-sufficient machines. Perhaps I’ll write something about that soon.
[1] (I intended to blog about it at the time, but… you know how it is with drafts.)
[2] Quoth Turing, some time in the 1950s:
Even Turing was not the first to perceive that thinking machines could pose takeover hazards.
[3] I’m not only (or even mainly) talking about countries.
[4] I’ve been bemused several times recently upon being referred to as an ‘expert’, that mythical breed.