This is good and I approve of it.
A few random notes and nitpicks:
The domain seems to have expired, so I bought it and got it working again.
(Epistemic status: Not fully baked. Posting this because I haven't seen anyone else say it[1], and if I try to get it perfect I probably won't manage to post it at all, but it's likely that this is wrong in at least one important respect.)
For the past week or so I've been privately bemoaning to friends that the state of the discourse around IABIED (and therefore on the AI safety questions that it's about) has seemed unusually cursed on all sides, with arguments going in circles and it being disappointingly hard to figure out what the key disagreements are and what I should believe conditional on what.
I think one possible cause of this (not necessarily the most important) is that IABIED is sort of two different things: it's both a collection of arguments to be considered on the merits and an attempt to influence the global AI discourse in a particular object-level direction. People coming at it from these two perspectives seem to be talking past each other, and specifically in ways that lead each side to question the other's competence and good faith.
If you're looking at IABIED as an argumentative disputation under rationalist debate norms, then it leaves a fair amount to be desired.[2] A number of key assumptions are at least arguably left implicit; you can argue that the arguments are clear enough, by some arbitrary standard, but it would have been better to make them even clearer. And while it's not possible to address every possible counterargument, the book should try hard to address the smartest counterarguments to its position, not just those held by the greatest number of not-necessarily-informed people. People should not hesitate to point out these weaknesses, because poking holes in each other's arguments is how we reach the truth. The worst part, though, is that when you point this out, proponents don't eagerly accept feedback and try to modulate their messaging to point more precisely at the truth; instead, they argue that they should be held to a lower epistemic standard and/or that the hole-pokers should have a higher bar for hole-poking. This is really, really not a good look! If you behaved like that on LessWrong or the EA Forum, people would update some amount towards the proposition that you're full of shit and they shouldn't trust you. And since a published book is more formal and higher-exposure than a forum post, you should be more epistemically careful. Opponents are therefore liable to conclude that proponents have turned their brains off and are just doing tribal yelling, with a thin veneer of verbal sophistication applied on top for the sake of social convention.
If you're looking at IABIED as an information op, then it's doing a pretty good job balancing a significant and frankly kind of unfair number of constraints on what a book has to do and how it has to work. In particular, it bends extremely far over backwards to accommodate the notoriously nitpicky epistemic culture of rationalists and EAs, despite these not being the most important audiences. Further hedging is counterproductive, because in order to be useful, the book needs to make its point forcefully enough to overcome readers' bias towards inaction. The world is in trouble because most actors really, really want to believe that the situation doesn't require them to do anything costly. If you tell them a bunch of nuanced hedgey things, those biases will act on your message in their brains and turn it into something like "there's a bunch of expert disagreement, we don't know things for sure, but probably whatever you were going to do anyway is fine". Note that this is not about "truth vs. propaganda"; basically every serious person agrees that some kind of costly action is or will be required, so if you say that the book overstates its case, or other things that people will predictably read as "the world's not on fire", they will thereby end up with a less accurate picture of the world, according to what you yourself believe. And yet opponents insist upon doing precisely this! If you actually believe that inaction is appropriate, then so be it, but we know perfectly well that most of you don't believe that and are directionally supportive of making AI governance and policy more pro-safety. So saying things that will predictably soothe people further asleep is just a massive own-goal by your own values; there's no rationalist virtue in speaking to the audience that you feel ought to exist instead of the one that actually does. Proponents are therefore liable to conclude that opponents either just don't care about the real-world stakes, or are so dangerously naive as to be a liability to their own side.
Though it's likely someone did and I just didn't see it.
I've been traveling, haven't made it all the way through the book yet, and am largely going by the reviews. I'm hoping to finish it this week, and if the book's content turns out to be relevantly different from what I'm currently expecting, I'll come back and post a correction.
We're in the room now and can let people in.
You don't think the GitHub thing is about reducing server load? That would be my guess.
This is addressed in the FAQ linked at the top of the page. TL;DR: The author insists that the gist of the story is true, but acknowledges that he glossed over a lot of intermediate debugging steps, including accounting for the return time.
Does that logic apply to crawlers that don't try to post or vote, as in the public-opinion-research use case? The reason to block those is just that they drain your resources, so sophisticated measures to feed them fake data would be counterproductive.
I didn't downvote (I'm just now seeing this for the first time), but the above comment left me confused about why you believe a number of things:
Also, I think a lot of us don't take claims like "I've been researching this matter professionally for years" seriously because they're too vaguely worded; you might want to be a bit more specific about what kind of work you've done.
For people in Boston, I made a straw poll to gauge community sentiment on this question: https://forms.gle/5BJEG5fJWTza14eL9
I think that either omitting the don't-read-the-citations-aloud stage direction, or making it easier to follow (with a uniform italic-text-is-silent convention), would be fine, and I don't have a strong opinion as to which is better. But before Boston made the change I'm now suggesting, people would inconsistently read the citations aloud or not, which was confusing and distracting.