It's not only a feint. You don't want to go to war, and you hope that the treaty will prevent war from happening, but you are prepared to go to war if the treaty is violated. This is the standard way treaties work.
The relevant way in which it's analogous is that a head of state can't build [dangerous AI / nuclear weapons] without risking war (or sanctions, etc.).
My preferred mechanism, and I think MIRI's, would be an international treaty in which every country implements AI restrictions within its own borders. That means a head of state can't build dangerous AI without risking war. It's analogous to nuclear non-proliferation treaties.
I don't think I would call it low risk, but my guess is it's less risky than the default path of "let anyone build ASI with no regulations".
This seems like a good thing to advocate for. I'm disappointed that they don't make any mention of extinction risk, but I think establishing red lines would be a step in the right direction.
A lukewarmer who believes in, say, a 30% chance of dystopia just isn't on the same page as an extremist who believes doom is 98% certain. They are not going to support nuking data centers.
Eliezer doesn't support nuking data centers, either. He supports an international treaty that, like all serious international treaties, is backed by a credible threat of violence.
(I suppose someone with a 98% P(doom) might hypothetically support unconditionally nuking data centers, but that is not Eliezer's actual position. I assume it's not the position of anyone at MIRI, but I can only speak for Eliezer because he's written a lot about this publicly.)
I feel like every time I write a comment, I have to add a caveat about how I'm not as doomy as MIRI and I somewhat disagree with their predictions. But like, I don't actually think that matters. If you think there's a 5% or 20% chance of extinction from ASI, you should be sounding the alarm just as loudly as MIRI is! Or maybe 75% as loudly or something. But not 20% as loudly—how much you should care about raising concern for ASI is not a linear function of your P(doom).
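To make the non-linearity concrete, here's a toy expected-loss calculation. The numbers are made up purely for illustration (they're not anyone's actual estimates), but they show why the case for sounding the alarm saturates well below high P(doom):

```python
# Toy illustration (made-up numbers): why "how loudly to sound the alarm"
# shouldn't scale linearly with P(doom).
#
# Treat extinction as a loss of ~8 billion lives (ignoring the long-term
# future, which only strengthens the point), and compare the expected loss
# at different P(doom) values to an ordinary "very big problem" baseline.

EXTINCTION_LOSS = 8e9    # lives lost if ASI causes extinction (assumed)
BASELINE_PROBLEM = 1e7   # lives lost to a typical "huge" global problem (assumed)

for p_doom in [0.05, 0.20, 0.75, 0.98]:
    expected_loss = p_doom * EXTINCTION_LOSS
    ratio = expected_loss / BASELINE_PROBLEM
    print(f"P(doom) = {p_doom:.0%}: expected loss ~ {expected_loss:.1e} lives "
          f"(~{ratio:.0f}x the baseline problem)")

# Even at 5%, the expected loss already dwarfs the baseline, so the case for
# raising concern is already near its maximum; going from 5% to 98% multiplies
# the expected loss by ~20x but barely changes what you should be doing.
```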
I'm glad you wrote this; I very much feel the same way but wasn't sure how to put it. It feels like many reviewers—the ones who agree that AI x-risk is a big deal, but spent 90% of the review criticizing the book—are treating this like an abstract philosophical debate. ASI risk is a real thing that has a serious chance of causing the extinction of humanity.
Like, I don't want to say you're not allowed to disagree. So I'm not sure how to express my thoughts. But I think it's crazy to believe AI x-risk is a massive problem, and then spend most of your words talking about how the problem is being overstated by this particular group of people.
I disagree with this position, but if I held it, I would be saying somewhat similar things to Zach (even having read the book).
I wouldn't. I roughly agree with Zach's background position (i.e. I'm quite uncertain about the likelihood of extinction conditional on YOLO-ing the current paradigm*) but I still think his conclusions are wild. Quoting Zach:
First, it leaves room for AI's transformative benefits. Tech has doubled life expectancy, slashed extreme poverty, and eliminated diseases over the past two centuries. AI could accelerate these trends dramatically.
The tradeoff isn't between solving scarcity at a high risk of extinction vs. never getting either of those things. It's between solving scarcity now at a high risk of extinction, vs. solving scarcity later at a much lower risk.
Second, focusing exclusively on extinction scenarios blinds us to other serious AI risks: authoritarian power grabs, democratic disruption through misinformation, mass surveillance, economic displacement, new forms of inequity. These deserve attention too.
Slowing down / pausing AI development gives us more time to work on all of those problems. Racing to build ASI means not only are we risking extinction from misalignment, but we're also facing a high risk of other bad outcomes: for example, ASI being developed so quickly that governments don't have time to get a handle on what's happening, and we end up with Sam Altman as permanent world dictator. (I don't think that particular outcome is that likely; it's just an example.)
*although I think my conditional P(doom) is considerably higher than his
I find it very strange that Collier claims that international compute monitoring would “tank the global economy.” What is the mechanism for this, exactly?
I am continually confused at how often people make this claim. We already monitor lots of things. Monitoring GPUs and restricting sales would be no harder than doing the same for guns or prescription medication. Easier, maybe, because manufacturing is highly centralized and you'd need a lot of GPUs to do something dangerous.
Polls suggest that most normal people expect AGI to be bad for them and don't want it. This is more speculative, but I think the typical expectation is something like "AGI will put me out of a job; billionaires will get even richer and I'll get nothing."