In brief:
I've just released the book Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World

  • It's an engaging introduction to the main issues and arguments about AI safety and risk. Clarity and accessibility were prioritized. There are blurbs of support from Max Tegmark, Will MacAskill, Roman Yampolskiy and others.  
  • The main argument is that AI capabilities are increasing rapidly, and that we may not be able to fully align or control advanced AI systems, which creates risk. There is great uncertainty, so we should be prudent and act now to ensure AI is developed safely. The book tries to be hopeful.
  • Why does it exist? There are lots of useful posts, blogs, podcasts, and articles on AI safety, but there was no up-to-date book entirely dedicated to AI safety written for those with no prior exposure to the issue (including those with no science background).
  • This book is meant to fill that gap and could serve as useful outreach or introductory material.
  • If you have already been following the AI safety issue, there likely isn't much here that is new for you. So this might be best seen as something useful for friends, relatives, some policymakers, or others just learning about the issue. (Although you may still like the framing.)
  • It's available on numerous Amazon marketplaces. The audiobook is now available (edit), with a hardcover option to follow.
  • It was a hard journey. I hope it is of value to the community. 

trevor (5mo):

Strong upvoted. I myself still don't know what form public outreach should take; the billionaires we've had so far (Jaan, Dustin, etc.) were the cute, cuddly, friendly billionaires, and there are probably some seriously mean motherfuckers in the greater ecosystem.

However, I was really impressed by the decisions behind WWOTF, and by MIRI's policies before and after MIRI's shift. I still have strong sign uncertainty for the scenario where this book succeeds and gets something like 10 million people thinking about AI safety. We really don't know what that world would look like; e.g., it could end up as the same death game as right now but with more players.

But one way or another, it is probably highly valuable and optimized reference material for getting an intuitive sense of how to explain AI safety to people, similar to Scott Alexander's Superintelligence FAQ (which Raemon endorsed as #1) or the top ~10% of entries in the AI safety arguments competition.

Is someone planning to review this book? Seems worthwhile.