Contra Collier on IABIED
Clara Collier recently reviewed If Anyone Builds It, Everyone Dies in Asterisk Magazine. I’ve been a reader of Asterisk since the beginning and had high hopes for her review. And perhaps it was those high hopes that led me to find the review disappointing. Collier says “details matter,” and I absolutely agree. As a fellow rationalist, I’ve been happy to have nerds from across the internet criticizing the book and getting into object-level fights about everything from scaling laws to neuron speeds. While they don’t capture my perspective, I thought Scott Alexander’s and Peter Wildeford’s reviews did a reasonable job of poking at their disagreements with the source material without losing track of the big picture. But I did not feel like Collier’s review got the details or the big picture right.

Maybe I’m missing something important. Part of my motive for writing this “rebuttal” is to push back where I think Collier gets things wrong, but part of it stems from a hope that by writing down my thoughts, someone will be able to show me what I’m missing. (Maybe Collier will respond, and we can try to converge?)

I’ll get into a mess of random nitpicking at the end of this essay, but I want to start by addressing two main critiques from Collier’s review that I think are pretty important:

* “The idea of an intelligence explosion [is] a key plank of the MIRI story”
* “If one believes that AI progress will be slow and continuous, or even relatively fast and continuous, it follows that we’ll have more than one shot at the goal.”

FOOM

Collier writes:

> [The MIRI worldview says there will be] a feedback loop where AIs rapidly improve their own capabilities, yielding smarter agents, which are even better at AI research, and so on, and so forth — escalating uncontrollably until it yields a single AI agent which exceeds all humans, collectively, in all mental abilities. This process is called an intelligence explosion, or, colloquially, FOOM (rhymes with “doom”).
My sense is that the point of the book was to convince people that it's important to take AI x-risk seriously (as BB does). I don't really think it was intended to get people to think its title thesis is clearly true.
Some things are hard to judge.