Yudkowsky and Soares' Book Is Underwhelming
When someone says “computers will be intelligent”, what do they mean? Anyone can make any claim. Look: I can say “teleportation will be invented soon”. But what is the interest in that unless I have evidence? You open If Anyone Builds It, Everyone Dies eager to hear an argument, and as you progress, your interest gradually ebbs away as you realise you won’t find one.

Eliezer Yudkowsky and Nate Soares’ book is not a review of research on AI, nor is it an analysis of the principles behind thinking and of what it would mean to “reason”, “innovate” and so on. It is about as interesting as a tale of what might happen if wizards or all-powerful aliens came to Earth to bother us. Indeed, you could produce such a story by changing a few key words (swap in “goblins” for “AIs” and “incantation” for “computation”) and many parts would read quite fluently.

When I open one of the many books on the imminent genius of AI, what I am looking for is an argument. Yudkowsky and Soares believe machines will be intelligent, but, like others, they present no evidence. I want evidence. Am I being too demanding? In a book presented as factual or scientific (or at least science-adjacent), the authors might, for example, have addressed the principles on which AI operates, considered what it might be capable of on the basis of those principles, and asked whether we would label its behaviour, or potential behaviour, “intelligent”. But Yudkowsky and Soares attempt no such assessment, nor do they direct readers to research where answers to these important questions might be found. They simply insist that AI will be intelligent without making a case.

In relatively substantial sections of the book, they do point to instances of AI being put to use, such as its success in predicting protein structure, its chess play, and its text-production capacities. But they leave obvious questions unanswered: what is AI doing in such instances, and are those tasks analogous to wide