Strong upvoted. I myself still don't know what form public outreach should take; the billionaires we've had so far (Jaan, Dustin, etc.) were the cute and cuddly and friendly billionaires, and there are probably some seriously mean motherfuckers in the greater ecosystem.
However, I was really impressed by the decisions behind WWOTF, and by MIRI's policies before and after its shift. I still have strong sign uncertainty for the scenario where this book succeeds and gets something like 10 million people thinking about AI safety. We really don't know what that world would look like; e.g. it could end up as the same death game as right now, just with more players.
But one way or another, it is probably highly valuable and well-optimized reference material for getting an intuitive sense of how to explain AI safety to people, similar to Scott Alexander's Superintelligence FAQ (which Raemon endorsed as #1) or the top ~10% of entries in the AI safety arguments competition.
The book is much better than I expected and deserves more attention. See my full review on my blog.
Concisely,
I've just released the book Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World.
There are lots of useful posts, blogs, podcasts, and articles on AI safety, but there was no up-to-date book dedicated entirely to the issue and written for readers with no prior exposure to it, including those with no science background.