We're pleased to announce the release of "Smarter Than Us: The Rise of Machine Intelligence", commissioned by MIRI and written by Oxford University's Stuart Armstrong. It is available in EPUB, MOBI, and PDF formats, and from the Amazon and Apple ebook stores.

What happens when machines become smarter than humans? Forget lumbering Terminators. The power of an artificial intelligence (AI) comes from its intelligence, not physical strength and laser guns. Humans steer the future not because we’re the strongest or the fastest but because we’re the smartest. When machines become smarter than humans, we’ll be handing them the steering wheel. What promises—and perils—will these powerful machines present? This new book navigates these questions with clarity and wit.

Can we instruct AIs to steer the future as we desire? What goals should we program into them? It turns out this question is difficult to answer! Philosophers have tried for thousands of years to define an ideal world, but there remains no consensus. The prospect of goal-driven, smarter-than-human AI gives moral philosophy a new urgency. The future could be filled with joy, art, compassion, and beings living worthwhile and wonderful lives—but only if we’re able to precisely define what a “good” world is, and skilled enough to describe it perfectly to a computer program.

AIs, like all computer programs, will do what we say—which is not necessarily what we mean. Getting the two to match requires encoding the entire system of human values for an AI: explaining them to a mind that is alien to us, defining every ambiguous term, clarifying every edge case. Moreover, our values are fragile: in some cases, if we mis-define a single piece of the puzzle—say, consciousness—we end up with roughly 0% of the value we intended to reap, instead of 99% of the value.

Though an understanding of the problem is only beginning to spread, researchers from fields ranging from philosophy to computer science to economics are working together to conceive and test solutions. Are we up to the challenge?

Special thanks to all those at the FHI, MIRI and Less Wrong who helped with this work, and those who voted on the name!


57 comments

What, if anything, should regular LessWrong readers expect to get from reading this book?

A collection of the usual arguments in a (hopefully) concise, discursive, and easy to follow format. Plus a short story of the battle between a Terminator and a true AI...

I didn't get anything new. I did get the knowledge that there existed a book about existential risk from AI accessible to very lay people.

I see you've released it under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported license.

Are you encouraging widespread gratis distribution, or are you keeping that option low-key with the intention of selling more from intelligence.org and Amazon?

Mostly the former.


You should contact Luke about what MIRI's distribution strategy is.

[This comment is no longer endorsed by its author]

Kudos, Stuart. I'm not sold on all of MIRI's arguments, but this is the briefest cogent summary I've seen, and is now definitely the thing I'd hand to people who asked me questions about MIRI.

For years I've been saying that MIRI needs some basic intro texts that people can be pointed to when they first hear about these ideas. Ideally, there should be multiple such texts: blurb, short article, short book, and weighty tome, say 200, 2,000, 20,000, and 200,000 words respectively.

In recent years, this has started to happen.

Stuart Armstrong's book is the short-book-length overview we need -- perfect. Nick Bostrom's forthcoming Superintelligence may be the long-book-length overview.

Luke Muehlhauser's Facing the Singularity and James Barrat's Our Final Invention can fill this role to some extent.

Several good intro articles have been written over the last two years. I'm not sure which is the article I'd point people to -- MIRI should probably choose one and push it as the basic overview article, as they have done for Stuart's book.

I'll be sending people to Stuart's book from now on.


By the way, are there any plans to eventually publish some of these MIRI-related ebooks in print form? Paper books tend to convey greater credibility than ebook-only releases.

We don't plan to put in the extra effort to make a paper copy available. But later this year, Superintelligence will be available, and it will be both hardback and emblazoned with the word "Oxford."

How much extra effort are you imagining it would be to make paper copies available of any given book? You already have cover art. Although checking the "Look Inside" on Amazon suggests that this one has not been well-proofread, let alone prettily typeset, I have somebody willing to do those things for free (well, for copies of the books) and I'm not a charitable organization - MIRI won't ping its volunteers or throw fifty bucks at somebody to finagle CreateSpace?

Amazon's 'Look Inside' shows the Kindle version, using whatever typesetting Amazon chooses; the PDF is better typeset. The main cost is researching the different options for making it available as a paperback and then verifying that research, which probably costs several hours of staff time, and our operations staff are currently doing higher-value work. If somebody I trust has already analyzed the options recently, and found the best choice or shown that it doesn't matter much, then it should only take Alex 1-2 hours of his time to make the paperback available, which is probably worth it.

This sounds like bad instrumental rationality. Suppose your current option is "don't publish it in paperback at all," and you are presented with a publishing option of a certain quality that you would take if you knew it were the best available. Then the fact that there may be better options you haven't explored should never return your best choice to "don't publish it in paperback at all." Your only viable candidates should be "publish using a possibly suboptimal option" and "do verified research about what the best option is, and then do that."

As they say, "The perfect is the enemy of the good."

Sure, but I'm not even sure at this stage that publishing a paperback version with CreateSpace is a better use of 2 hours of Alex's time than the other stuff he's doing. Are there hidden gotchas which make publishing worse than not-publishing even if it was totally free? (I've encountered many examples of this while running MIRI.) Will it actually take 5 hours of time rather than 2? I don't know the answers to these questions, and this isn't a priority. Deciding whether to publish a paperback copy of Smarter Than Us is, like, the 20th most important decision I'll make this week. I'm not even sure that explaining all the different considerations I'm weighing for such a minor decision is worth the time I've spent typing these sentences. Anyway, I don't mean to be rude and I understand why you and Alicorn are engaging me about this, it's just that the decision is more complicated and less important (relative to all the invisible-to-LWers things we're doing) than you might realize, and I don't have time to explain it all. Again: if somebody can save us time on the initial research to figure out what's a good idea, it might become competitive with the other things Alex is doing with his MIRI time.

I'm not clear on what Alex in particular has to do with this. Aren't there people with lower opportunity cost you could go "hey, investigate self-publishing options" to? They are marketed to publishing-non-experts and while they don't require zero skill, perhaps it doesn't call for your scarcest and thinnest-spread people. Are you sure you don't want to ask me any questions about my experience self-publishing with Createspace...?

Alex is just the one who would work with the files and CreateSpace, not necessarily the one who has to do the research about which company to publish through.

Another thing Alex is doing, btw, is finding a scalable way to outsource "general internet research" projects, without needing to find new contractor hours, validate them, sign a contract, etc. There was some service that looked awesome that I encountered 6 months ago when we had less money to spend on such things but now I can't find it.

EDIT: Oh, and yes, I'd be happy to hear of your own experiences with (and judgements about) CreateSpace.

I have been happy with Createspace. It produces cheap-for-trade-quality sleek paperbacks, faithfully renders my cover art, is relatively easy to interact with in all the ways I haven't chosen to delegate (and easy enough in those other ways that the delegate-ee is willing to work for one signed copy each of the books in question and a frontmatter acknowledgment and nothing else), and doesn't cost any money up until I actually tell them to send me a book. I will happily show you three different volumes I have had Createspaced if you would like to see a physical copy and arrange to be near mine.

Another thing Alex is doing, btw, is finding a scalable way to outsource "general internet research" projects, without needing to find new contractor hours, validate them, sign a contract, etc. There was some service that looked awesome that I encountered 6 months ago when we had less money to spend on such things but now I can't find it.

Maybe https://www.fancyhands.com/ ?

I think it was a different one, but that's the best match I've found so far, so maybe it is indeed FancyHands.

I've had a few people also asking about physical copies, btw.

it will be both hardback and emblazoned with the word "Oxford."

Sounds expensive...

I second this question, not on credibility grounds but because I prefer reading physical paper books rather than ebooks.

FYI, Smarter Than Us is now available in print form. :-)

Ben Goertzel's review in H+ Magazine. Excerpt:

The booklet is clearly written -- very lucid and articulate, and pleasantly lacking the copious use of insider vocabulary that marks much of the writing of the MIRI community. It's worth reading as an elegant representation of a certain perspective on the future of AGI, humanity and the world.

Having said that, though, I also have to add that I find some of the core ideas in the book highly unrealistic.

The title of this article summarizes one of my main disagreements. Armstrong seriously seems to believe that doing analytical philosophy (specifically, moral philosophy aimed at formalizing and clarifying human values so they can be used to structure AGI value systems) is likely to save the world.

I really doubt it!

My response in the comment section:

What I expect from formal "analytic philosophy" methods:

1) A useful decomposition of the issue into problems and subproblems (e.g. AI goal stability, AI agency, reduced impact, correct physical models of the universe, correct models of fuzzy human concepts such as human beings, convergence or divergence of goals, etc.)

2) Full or partial solutions to some of the subproblems, ideally of general applicability (so they can be added easily to any AI design).

3) A good understanding of the remaining holes.

and lastly:

4) Exposing the implicit assumptions in proposed (non-analytic) solutions to the AI risk problem, so that the naive approaches can be discarded and the better approaches improved.

Ben expanded his original article by editing a reply to your points into the end.

Sigh... I'll have to get round to addressing that point (though I've already addressed it several times already).

Bug report:

I bought the download package. When I got the email, the link was invalid (that's the error message I got when I pasted it into the browser). The same goes for the "You can view your order details" link.

However, luckily I still had the payment thank-you tab open and could reload it to get a download link. That one worked.

Passed your bug report on to MIRI, thanks!

No probs. I also replied to the email. But I figured that if anybody else tries to download in the meantime, at least they'll know what they can do to fix it. :)

Can I ask why this book is not provided for free, given that it is essentially promotional material for MIRI, commissioned by MIRI and published by MIRI?


It seems that it is released under a Creative Commons licence.
So the price is just the cost of hosting it on Amazon?

It is provided as “pay-what-you-want” package, and is under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported license.

Even if this qualifies as "promotional material", it's not exactly uncommon to charge money for promotional material. For example, politicians who are trying to get elected will charge money for their autobiographies.

I suppose that most people who buy an autobiography have already decided to vote for that politician.
