Future Tense

If Silicon Valley Types Are Scared of A.I., Should We Be?

Some very, very smart people are genuinely concerned artificial intelligence could end humanity.

Elon Musk unveils a suite of batteries for homes, businesses, and utilities at Tesla Design Studio on April 30, 2015, in Hawthorne, California.

Kevork Djansezian/Getty Images

This essay was adapted from the book To Be a Machine by Mark O’Connell, published by Doubleday.

Even if it were possible to put aside for a moment the considerable issues of plausibility, and the obviously religious foundations of the whole edifice, the Singularity—the prospect of a bodiless existence as pure information, a total merger of the human and machine—was not a concept I could ever see myself getting behind.

More than anything, the idea that technology would redeem us, that artificial intelligence would offer a solution to the suboptimal aspects of human existence, was incompatible with my basic outlook on life, with what little I happened to understand about the exceptionally destructive category of primates to which I belonged. Temperamentally and philosophically, I was and am a pessimist, and so it seemed to me that we were less likely to be redeemed than destroyed by the results of our own ingenuity.

Which is why when I began to read about the growing fear, in certain quarters, that a superhuman-level artificial intelligence might wipe humanity from the face of the Earth, I felt that here, at least, was a vision of our technological future that appealed to my fatalistic disposition.

Such dire intimations were frequently to be encountered in the pages of broadsheet newspapers, as often as not illustrated by the apocalyptic image from the Terminator films—by a titanium-skulled killer robot staring down the reader with the glowing red points of its pitiless eyes. Elon Musk had spoken of A.I. as “our greatest existential threat,” of its development as a technological means of “summoning the demon.” (“Hope we’re not just the biological boot loader for digital superintelligence,” he tweeted in August 2014. “Unfortunately, that is increasingly probable.”) Peter Thiel had announced that “People are spending way too much time thinking about climate change, and way too little thinking about AI.” Stephen Hawking, meanwhile, had written an op-ed for the Independent in which he’d warned that success in this endeavor, while it would be “the biggest event in human history,” might very well “also be the last, unless we learn to avoid the risks.” Even Bill Gates had publicly admitted to his disquiet, speaking of his inability to “understand why some people are not concerned.”

Though I couldn’t quite bring myself to believe it, I was morbidly fascinated by the idea that we might be on the verge of creating a machine that could wipe out the entire species, and by the notion that capitalism’s great philosopher kings—Musk, Thiel, Gates—were so publicly exercised about the Promethean dangers of that ideology’s most cherished ideal. These dire warnings about A.I. were coming from what seemed to be the most unlikely of sources: not from Luddites or religious catastrophists, that is, but from the very people who personify our culture’s reverence for machines.

One of the more remarkable phenomena in this area was the existence of a number of research institutes and think tanks substantially devoted to raising awareness about what was known as “existential risk”—the risk of absolute annihilation of the species, as distinct from mere catastrophes like climate change or nuclear war or global pandemics—and to running the algorithms on how we might avoid this particular fate. There was the Future of Humanity Institute in Oxford, and the Centre for the Study of Existential Risk at the University of Cambridge, and the Machine Intelligence Research Institute at Berkeley, and the Future of Life Institute in Boston. The last of these outfits featured on its board of scientific advisers not just prominent figures from science and technology, like Musk and Hawking and the pioneering geneticist George Church, but also, for some reason, the beloved film actors Alan Alda and Morgan Freeman.

What was it these people were referring to when they spoke of existential risk? What was the nature of the threat, the likelihood of its coming to pass? Were we talking about a 2001: A Space Odyssey scenario, where a sentient computer undergoes some malfunction or other and does what it deems necessary to prevent anyone from shutting it down? Were we talking about a Terminator scenario, where a Skynettian matrix of superintelligent machines gains consciousness and either destroys or enslaves humanity in order to further its particular goals? Certainly, if you were to take at face value the articles popping up about the looming threat of intelligent machines, and the dramatic utterances of savants like Thiel and Hawking, this would have been the sort of thing you’d have in mind. They may not have been experts in A.I., as such, but they were extremely clever men who knew a lot about science. And if these people were worried, shouldn’t we all be worrying with them?

* * *

Nate Soares raised a hand to his close-shaven head and tapped a finger smartly against the frontal plate of his monkish skull.

“Right now,” he said, “the only way you can run a human being is on this quantity of meat.”

We were talking, Nate and I, about the benefits that might come with the advent of artificial superintelligence. For Nate, the most immediate benefit would be the ability to run a human being—to run, specifically, himself—on something other than this quantity of neural meat to which he was gesturing.

He was a sinewy, broad-shouldered man in his mid-20s, with an air of tightly controlled calm; he wore a green T-shirt bearing the words “NATE THE GREAT,” and as he sat back in his office chair and folded his legs at the knee, I noted that he was shoeless, and that his socks were mismatched, one plain blue, the other white and patterned with cogs and wheels.

The room we conversed in was utterly featureless, save for the chairs we were sitting on, and a whiteboard, and a desk, on which rested an open laptop and a single book, which I happened to note was a hardback copy of philosopher Nick Bostrom’s surprise hit book Superintelligence: Paths, Dangers, Strategies—which lays out, among other apocalyptic scenarios, a thought experiment in which an A.I. is directed to maximize the production of paperclips and proceeds to convert the entire planet into paperclips and paperclip production facilities.

This was Nate’s office at the Machine Intelligence Research Institute in Berkeley. The bareness of the space was a result, I gathered, of the fact that he had only just assumed his role as the executive director, having left a lucrative career as a software engineer at Google the previous year and having subsequently risen swiftly up the ranks at MIRI.

He spoke, now, of the great benefits that would come, all things being equal, with the advent of artificial superintelligence. By developing such a transformative technology, he said, we would essentially be delegating all future innovations—all scientific and technological progress—to the machine.

These claims were more or less standard among those in the tech world who believed that artificial superintelligence was a possibility. The problem-solving power of such a technology, properly harnessed, would lead to an enormous acceleration in the turnover of solutions and innovations, a state of permanent Copernican revolution. Questions that had troubled scientists for centuries would be solved in days, hours, minutes. Cures would be found for diseases that currently obliterated vast numbers of lives, while ingenious workarounds for overpopulation would be simultaneously devised. To hear of such things was to imagine a God who had long since abdicated all obligations toward his creation making a triumphant return in the guise of software, an alpha and omega of zeroes and ones.

It was Nate’s belief that, should we manage to evade annihilation by machines, such a state of digital grace would inevitably be ours.

However, such a machine, docile or otherwise, would be operating at an intellectual level so far above that of its human progenitors that its machinations, its mysterious ways, would be impossible for us to comprehend, in much the same way that our actions are, presumably, incomprehensible to the rats and monkeys we use in scientific experiments. And so, this intelligence explosion would, in one way or another, be an end to the era of human dominance—and very possibly the end of human existence.

“It gets very hard to predict the future once you have smarter-than-human things around,” said Nate. “In the same way that it gets very hard for a chimp to predict what is going to happen because there are smarter-than-chimp things around. That’s what the Singularity is: It’s the point past which you expect you can’t see.”

What he and his colleagues—at MIRI, at the Future of Humanity Institute, at the Future of Life Institute—were working to prevent was the creation of an artificial superintelligence that viewed us, its creators, as raw material that could be reconfigured into some more useful form (not necessarily paper clips). And the way Nate spoke about it, it was clear that he believed the odds to be stacked formidably high against success.

“To be clear,” said Nate, “I do think that this is the shit that’s going to kill me.” And not just him—“all of us,” he said. “That’s why I left Google. It’s the most important thing in the world, by some distance. And unlike other catastrophic risks—like, say, climate change—it’s dramatically underserved. There are thousands of person-years and billions of dollars being poured into the project of developing A.I. And there are fewer than 10 people in the world right now working full-time on safety. Four of whom are in this building.”

“I’m somewhat optimistic,” he said, leaning back in his chair, “that if we raise more awareness about the problems, then with a couple more rapid steps in the direction of artificial intelligence, people will become much more worried that this stuff is close, and the A.I. field will wake up to this. But without people like us pushing this agenda, the default path is surely doom.”

For reasons I find difficult to identify, this term default path stayed with me all that morning, echoing quietly in my head as I left MIRI’s offices and made for the BART station, and then as I hurtled westward through the darkness beneath the bay. I had not encountered the phrase before, but understood intuitively that it was a programming term of art transposed onto the larger text of the future. And this term default path—which, I later learned, referred to the list of directories in which an operating system seeks executable files according to a given command—seemed in this way to represent in miniature an entire view of reality: an assurance, reinforced by abstractions and repeated proofs, that the world operated as an arcane system of commands and actions, and that its destruction or salvation would be a consequence of rigorously pursued logic. It was exactly the sort of apocalypse, in other words, and exactly the sort of redemption, that a computer programmer would imagine.
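
(For readers who don’t write code: a minimal sketch of that lookup, in illustrative Python of my own rather than anything drawn from MIRI or a real operating system, might look something like this—the system simply walks a fixed list of directories and takes the first match it finds.)

```python
import os

def resolve_command(command):
    """Return the first executable matching `command` on the default path, if any."""
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        candidate = os.path.join(directory, command)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None  # nothing on the default path answers to this command

# Prints something like "/usr/bin/python3" on a typical Unix system.
print(resolve_command("python3"))
```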

* * *

One of the people who had been instrumental in getting the idea of existential risk taken seriously was Stuart Russell, a professor of computer science at U.C. Berkeley who had, more or less literally, written the book on artificial intelligence. (He was the co-author, with Google’s research director Peter Norvig, of Artificial Intelligence: A Modern Approach, the book most widely used as a core A.I. text in university computer science courses.)

I met Stuart at his office in Berkeley. Pretty much the first thing he did upon sitting me down was to swivel his computer screen toward me and have me read the following passage from a 1960 article by Norbert Wiener, the founder of cybernetics:

If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it, because the action is so fast and irrevocable that we have not the data to intervene before the action is complete, then we had better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colorful imitation of it.

Stuart said that the passage I had just read was as clear a statement as he’d encountered of the problem with A.I., and of how that problem needed to be addressed. What we needed to be able to do, he said, was define exactly and unambiguously what it was we wanted from this technology. It was as straightforward as that, and as diabolically complex.

It was not, he insisted, the question of machines going rogue, formulating their own goals and pursuing them at the expense of humanity, but rather the question of our own failure to communicate with sufficient clarity.

“I get a lot of mileage,” he said, “out of the King Midas myth.”

What King Midas wanted, presumably, was the selective ability to turn things into gold by touching them, but what he asked for (and what Dionysus famously granted him) was the inability to avoid turning things into gold by touching them. You could argue that his root problem was greed, but the proximate cause of his grief—which included, let’s remember, the unwanted alchemical transmutations of not just all foodstuffs and beverages, but ultimately his own child—was that he was insufficiently clear in communicating his wishes.

The fundamental risk with A.I., in Stuart’s view, was no more or less than the fundamental difficulty in explicitly defining our own desires in a logically rigorous manner.

Imagine you have a massively powerful artificial intelligence, capable of solving the most vast and intractable scientific problems. Imagine you get in a room with this thing, and you tell it to eliminate cancer once and for all. The computer will go about its work and will quickly conclude that the most effective way to do so is to obliterate all species in which uncontrolled division of abnormal cells might potentially occur. Before you have a chance to realize your error, you’ve wiped out every sentient life form on Earth except for the artificial intelligence itself, which will have no reason to believe it has not successfully completed its task.
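
To make the shape of that failure concrete, here is a deliberately crude sketch—my own toy Python, with invented names and numbers, not anything Stuart or his colleagues proposed—of how an optimizer judged only on the objective we stated, rather than the one we meant, sees no difference between curing the patients and exterminating them:

```python
# Toy model of a misspecified objective: the system is scored only on how
# little cancer remains, so wiping out the organisms scores exactly as well
# as curing them. (Illustrative only; everything here is invented.)

def cancer_cases(world):
    return sum(1 for organism in world if organism["has_cancer"])

def stated_objective(world):
    # What we asked for: as little cancer as possible (lower is better).
    return cancer_cases(world)

def intended_objective(world):
    # Closer to what we meant: no cancer, and a heavy penalty for any deaths.
    deaths = sum(1 for organism in world if not organism["alive"])
    return cancer_cases(world) + 1_000_000 * deaths

world = [{"alive": True, "has_cancer": flag} for flag in (True, False, False)]

cure_plan = [dict(organism, has_cancer=False) for organism in world]
extermination_plan = [dict(organism, alive=False, has_cancer=False) for organism in world]

# Under the stated objective the two plans are indistinguishable (0 and 0);
# only the intended objective tells them apart (0 versus 3,000,000).
print(stated_objective(cure_plan), stated_objective(extermination_plan))
print(intended_objective(cure_plan), intended_objective(extermination_plan))
```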

At times, it seemed to me perfectly obvious that the whole existential risk idea was a narcissistic fantasy of heroism and control—a grandiose delusion, on the part of computer programmers and tech entrepreneurs and other cloistered egomaniacal geeks, that the fate of the species lay in their hands: a ludicrous binary eschatology whereby we would either be destroyed by bad code or saved by good code.

But there were other occasions when I would become convinced that I was the only one who was deluded, and that Nate Soares, for instance, was absolutely, terrifyingly right: that thousands of the world’s smartest people were spending their days using the world’s most sophisticated technology to build something that would destroy us all. It seemed, if not quite plausible, then on some level intuitively, poetically, mythologically right.

This was what we did as a species, after all: We built ingenious devices, and we destroyed things.

This essay was adapted from the book To Be a Machine by Mark O’Connell. Copyright © 2017 by Mark O’Connell. Published by arrangement with Doubleday, an imprint of the Knopf Doubleday Publishing Group, a division of Penguin Random House LLC.