It's rather striking to me how many things that used to be science fiction are now large companies (often owned by Elon Musk).
I think there should be a name for the fallacy of assuming that "a science fiction author once wrote about that" implies "it will never happen, or at least not during my lifetime". It's more specific than a simple failure of imagination: it's failing to note that many science fiction authors work quite hard to write about things that are, fairly plausibly, likely to happen sooner or later; that's actually one of the basic ground rules of the genre, though some authors play more fast and loose with it than others.
AI is humanity’s gambit to save a dying world.
I notice that I'm confused. Imagine that Yudkowsky et al. succeeded in banning capabilities-related research until alignment is solved[1]. Then how exactly would the world be most likely to die?
Additionally, I think the position of AI doomers is close to that of ethicists and sociologists. Indeed, if the doomers are right, then mankind is spending its resources on a doomsday machine. Ethicists would point out that those resources could've been spent on something actually useful, and sociologists would add that AI could disempower at least the poor and/or cause human cognition to degrade; neither result is actually useful. It is the accelerationists who talk about AI-related benefits.
With the extra condition that mankind is also smart enough to ensure that a solution to alignment isn't a pseudosolution flawed in ways that nobody noticed, that naysayers have something close to a liberum veto, etc.
I'm being dramatic by calling it a dying world, but everyone's worried about the future. Climate change, running out of fossil fuels, social problems, the deaths of the people alive today, etc. Likely not an actual extinction of humanity, but hard times for sure. AI going well could be the easy way out of all that, if it's actually as big as we think it might be. I think the accelerationists would not be as keen if the world were otherwise stable.
Another way of saying it is that our current reckless trajectory only makes sense if you view it as a gambit to stave off the rest of the things that are coming for us (which are mostly the fruits of our past and current recklessness). I'm sympathetic to the thought, even if it might also kill us.
Re: doomers and ethicists agreeing: The position that the authors of The AI Con take is that the doomers feed the mythos of AI as godlike by treating it as a doomsday machine. This almost-reverence fuels the ambitions and excitement of the accelerationists, while also reducing enthusiasm for tackling the more mundane challenges.
Yudkowsky still wants resources to go towards solving alignment, and if AI is a dud, that wouldn't be necessary. I view the potential animosity between ethicists and doomers as primarily a fight over attention/funding. Ethicists see themselves as being choked out by people working towards solving fictional problems, and that creates resentment and dismissal. And doomers do often think focusing on the mundane harms is a waste of time. Ideally the perspectives would be coherent/compatible, and finding that bridge, or at least holding space for both, is the aim of this post.
our current reckless trajectory only makes sense if you view it as a gambit to stave off the rest of the things that are coming for us
At this point, I think the AI race is driven by competitive dynamics. AI looks like a path to profit and power, and if you don't reach for it, someone else will. For those involved, this removes the need to even ask whether to do it: it's a foregone conclusion that someone will. The only thing I see even putting a dent in this competitive dynamic is if something happens that terrifies even people like Musk, Trump, and Xi, something terrifying enough that they would put aside their differences and truly organize a halt to the race.
You're definitely challenging a key piece of my perspective here, and I've thought a good bit about how to respond. What I've come up with is this: I think all of us are involved. The labs don't exist in a vacuum, and the opinion of the public does have an impact. So I think looking at scopes of agency larger than the individual is a helpful thing to do.
In this piece I'm describing the choice that is getting made on behalf of humanity, from the lens of humanity. Because it really does affect all of us. But that's also why I take a hands-off kind of approach here, because it's not necessarily my role to say or know what I think humanity should be doing. I'm just an ignorant grain of sand.
This piece is a study in contrast and coherence between different narratives for understanding AI, namely the perspectives of AI Doomers, AI Accelerationists, and AI Ethicists.
I have substantial background with the AI Doomer narrative, and I’m quite sympathetic to it. I name it here as a narrative rather than the plain truth, as some might. This is not an attempt to undermine it (or support it), but rather a means to place it in context with competing narratives.
The inspiration for this piece is a book called The AI Con, which I saw shelved in a library and was intrigued by. The central premise of the book is that the doomer and accelerationist narratives are two sides of the same coin. Both treat advanced AI as fundamentally godlike: the all-encompassing technology on the horizon, the centrally important thing happening in the world... the only difference being what the consequences of that will be.
The authors of The AI Con, unsurprisingly, do not believe that AI will become godlike. They dismiss the possibility as ridiculous, sci-fi, nerd fantasy, etc.
I think this is a mistake, of course. AI might very well become godlike, or at least, I can’t dismiss the possibility as easily as these authors can. But I also see the importance of paying attention to the rest of the world. So, I’m sympathetic to the perspective they present, which I refer to as the AI Ethics narrative.
The AI Ethics narrative focuses on the impact of the AI industry on the broader world. For example: the energy required to run data centers, how it will be generated, and the effects on our climate; the raw materials needed to build chips, including rare earth metals, and the impacts of extracting them from the ground. These are costs that we don’t immediately see.
Similarly, socioeconomic costs abound. The technological development of AI is yet another means for the rich to get richer, and for power to consolidate more fully into the hands of technocrats and wealthy investors.
Another cost is in attention. If AI is treated as the be-all and end-all, then not much attention and energy is left for understanding and responding to the vast number of other problems in the world. While we are focused on the dream of superintelligence, the rest of our world may rot from the effects of such single-minded enthrallment.
For these reasons and more, the AI Ethics narrative (as presented in The AI Con) views the AI Doomer and Accelerationist narratives as fundamentally flawed: they don’t take into account the sheer costs that this “sci-fi pursuit” imposes on the rest of the world. That energy should be spent on responding to the wide variety of crises we already have.
I can’t comment on which of the narratives is correct, or most worthy. Instead, I’d like to simply hold space for them all. The way that I do this is by understanding the current thrust toward advanced AI as Humanity’s Gambit.
We currently face many problems. Climate change, geopolitical instability with nuclear weapons, pandemics, wealth inequality, aging populations. There are many things that are worthy of attention and care.
Focusing wholeheartedly on making AI go well, or go fast, is a risk. Maybe it will become godlike and help us solve all of our problems. The ones that we are too fractious as a species to solve ourselves, the ones we sometimes treat as hopeless. But it may also backfire, kill us all, and build a future without us.
Or perhaps, as the AI Ethicists argue, it will simply fizzle, or rather, reveal itself to have always been a silly dream. It will never live up to what we imagine it might be, and when the dust clears, we’ll be looking around at a world whose problems have greatly worsened while we were distracting ourselves with dreams of superintelligence.
I believe that regardless of which perspective a person takes – how they primarily relate to AI – they benefit by acknowledging the others.
Of course the doomer narrative is incomplete without the acknowledgement that hey, perhaps it will actually go okay. And accelerationists should appreciate that, well, maybe it won’t, and maybe we really are driving towards the end.
But adherents of each should also recognize that there are many other problems in the world too. And by placing such focus on AI, we may be making them worse. We don’t actually know that AI will become godlike, and we should acknowledge that.
Likewise, the strength of the AI Ethics perspective is in its contextuality, how it incorporates a wide range of factors: the messy interplay between everything, the externalities we would rather ignore. But that contextuality, when fully embraced, should also include the possibility that AI will be different from anything that has come before.
AI is humanity’s gambit to save a dying world. Maybe it will help us. Maybe it will kill us. Maybe it will be a flashy waste of time, and we’ll have to deal with everything else on our own after all.
Is it a good gambit to take?
I don’t know.
I don’t think anyone does.
We sure do seem to be going for it anyway.