This post is provided as a convenient place to discuss the new book, The AI Does Not Hate You by Tom Chivers, which covers LessWrong and the rationalist community.
The AI Does Not Hate You: Superintelligence, Rationality and the Race to Save the World
This is a book about AI and AI risk. But, more importantly, it's also about a community of people who are trying to think rationally about intelligence, the places that these thoughts are taking them, and what insight they can and can't give us about the future of the human race over the next few years. It explains why these people are worried, why they might be right, and why they might be wrong. It is a book about the cutting edge of our thinking on intelligence and rationality right now, by the people who stay up all night worrying about it.
Note that the book is available on Kindle only to people with UK/European Amazon accounts. I was able to order a physical copy in the US, but I haven't received a shipping notification yet.
I was pretty pleased with it, and recommended it to my parents. (Like Ajeya, I've had some difficulty giving them the full picture since I stopped working in industry.) There's a sentence on rationalists and small talk that I read out loud to several people in the office, all of whom thought it fit pretty well.
One correction: he refers several times to UCLA Berkeley, when it should just be UC Berkeley. (UCLA refers to the University of California at Los Angeles, a different university in the same UC system as Berkeley.)
One of the things that I'm sad about is that the book makes no mention of LW 2.0 / the revival. (The last reference I could find was to something in early 2018, but much of the book relates to stuff happening in 2017.) We announced the transition in June 2017, but how much it had succeeded might not have been obvious then (or it may have been the sort of thing that didn't get advertised to Chivers by his in-person contacts), and so there's a chapter on the diaspora which says there's no central hub. Which is still somewhat true--I don't think LW is as much of a central hub as I want it to be--but is not true to the same extent that it was in 2016, say.
Ghenlezo review: https://www.reddit.com/r/slatestarcodex/comments/c52l9w/some_notes_on_the_ai_does_not_hate_you/
Just finished the book today, I'm somewhat impressed by how it came out given the suspicion many people had.
The author managed to take the AI arguments seriously while also striking a balance between writing an honest account of his interactions with the community, keeping it interesting for the typical reader, and avoiding lazy potshots against nerds.
My only wish is that there were a section on the practical aspects of rationality, but that topic was widely neglected even by many of the hardcore fans, so it's hardly a fair critique of a book about AI safety.
Yes, matches my own thoughts on the book. Might write up some further thoughts if I get the chance.
Scott Aaronson has now written a review.
I'd like to know more about the "dark sides" part of the book.
You mean Part 7 ("The Dark Sides"), or the ways in which the book is bad?
I thought Part 7 was well-done, overall; he asks if we're a cult (and decides "no" after talking about the question in a sensible way), has a chapter on "you can't psychoanalyze your way to the truth", and talks about feminism and neoreactionaries in a way that's basically sensible.
Some community gossip shows up, but in a way that seems almost totally fair and respects the privacy of the people involved. My one complaint, as someone responsible for the LessWrong brand, is that he refers to one piece of community gossip as 'the LessWrong baby' and discusses a comment thread in which people are unkind to the mother*, while that comment thread actually happened on SlateStarCodex. But this is mostly the fault of the person he interviewed in that chapter, I think, who introduced that term, and it's likely a sensible attempt to avoid naming the actual humans involved, which is what I've done whenever I want to refer to the gossip.
*I'm deliberately not naming the people involved, as they aren't named in the book either, and I suspect it should stay that way. If you already know the story you know the search terms, and if you don't it's not really relevant.
Yeah, I meant part 7. What did he say about feminism and neoreaction?
Not very much--the feminism chapter is 6 pages, and the neoreaction chapter is 5 pages. Both read like "look, you might have heard rumors that they're bad because of X, but here's the more nuanced version," and basically give the sort of defense that Scott Alexander would give. About feminism, he mostly brings up Scott Aaronson's Comment #171 and Scott Alexander's response to the response, Scott Alexander's explanation of why there are so few female computer programmers (because of the distribution of interests varying by sex), and the overreaction to James Damore. On neoreaction, he brings up Moldbug's posts on Overcoming Bias, More Right, and Michael Anissimov, and says 'comment sections are the worst' and 'if you're all about taking ideas seriously and discussing them civilly, people who have no other discussion partners will seek you out.'