Superintelligence Strategy: A Pragmatic Path to… Doom?
I've been reading through the Superintelligence Strategy paper by Dan Hendrycks, Eric Schmidt, and Alexandr Wang. To me, it sounds like the authors are calling the current regime (MAIM, aka a "Hot War") the "default" (which it probably is, tbh), while calling a peaceful, diplomatic moratorium strategy "aspirational, but not a viable plan". E.g.:

> MAIM Is the Default Regime: ... "Espionage, sabotage, blackmail, hackers, overt cyberattacks, targeting nearby power plants, kinetic attacks, threatening non-AI assets"...
>
> -- Superintelligence Strategy

This sounds like a violent "Hot War"?

> Moratorium Strategy: "proposes halting AI development—either immediately or once certain hazardous capabilities, such as hacking or autonomous operations, are detected… aspirational, but not a viable plan"
>
> -- Superintelligence Strategy

This sounds like a non-violent, peaceful, diplomatic, treaty-based solution... aka a "Cold War"?

Is it just me, or, as major thought leaders in the AI/AGI/ASI space, shouldn't the authors of this paper:

1. Realize that the current paradigm leads to a "Hot War", even if their recommended "solutions" are adopted?
2. Then actually, strongly advocate for a diplomatic and peaceful "Cold War" paradigm? E.g., plan to completely pause when experts agree that the risk of extinction exceeds "three in a million (a “6σ” threshold)—anything higher was too risky". That's a threshold most AI researchers would (likely) agree we have flown waaay past by now (5-20% p(doom) estimates seem more common); see the quick numbers check below.

Instead of (strongly) advocating for a diplomatic and peaceful solution, they just call the Moratorium/Pause strategy "aspirational, but not viable".

The paper's MAIM (Mutual Assured AI Malfunction) framework suggests stability is maintained through the threat of mutually disabling AI systems. However, the proposed "solutions" for making MAIM "more stable" also seem pretty scary/destructive. E.g.:

> “How to Maintain a MAIM Regime:
>
> … MAIM requires that …
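Quick numbers check on that threshold: a minimal sketch in C, assuming the "6σ" parenthetical uses the industrial Six Sigma convention (a 1.5σ shift, leaving an effective 4.5σ one-tailed Gaussian tail of roughly 3.4 per million, i.e. "three in a million"); a strict one-tailed 6σ tail would be closer to one in a billion.

```c
#include <math.h>
#include <stdio.h>

/* One-tailed Gaussian tail probability: P(Z > z) = erfc(z / sqrt(2)) / 2 */
static double gauss_tail(double z) { return 0.5 * erfc(z / sqrt(2.0)); }

int main(void) {
    double strict = gauss_tail(6.0);    /* ~9.9e-10: about 1 in a billion */
    double six_sigma = gauss_tail(4.5); /* ~3.4e-6: the Six Sigma convention's
                                           "3.4 defects per million" */
    double p_doom = 0.05;               /* low end of the 5-20% range above */

    printf("strict one-tailed 6-sigma tail:   %.2e\n", strict);
    printf("Six Sigma convention (4.5-sigma): %.2e\n", six_sigma);
    printf("5%% p(doom) is ~%.0fx the three-in-a-million bar\n",
           p_doom / 3e-6);
    return 0;
}
```

Either way you read "6σ", a 5-20% p(doom) sits four or more orders of magnitude above that bar.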
"With Big Sleep, we’ve demonstrated how we can find vulnerabilities that defenders don’t yet know about. In this case, we found a vulnerability that the attackers knew about and had every intention of using, and we were able to detect and report it for patching before they could exploit it."
-- https://cloud.google.com/blog/products/identity-security/cloud-ciso-perspectives-our-big-sleep-agent-makes-big-leap
"Today, we're excited to share the first real-world vulnerability discovered by the Big Sleep agent: an exploitable stack buffer underflow in SQLite, a widely used open source database engine. We discovered the vulnerability and reported it to the developers in early October, who fixed it on the same day. Fortunately, we found this issue before it appeared in an official release, so SQLite users were not impacted."
-- https://googleprojectzero.blogspot.com/2024/10/from-naptime-to-big-sleep.html
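For anyone wondering what a "stack buffer underflow" is: here's a purely illustrative sketch (my own hypothetical example, not the actual SQLite bug). Code trusts an index that can legitimately be -1 (a sentinel value) and uses it on a stack array, so the write lands before the buffer's start.

```c
#include <stdio.h>

/* Purely illustrative -- NOT the SQLite bug Big Sleep found. It just shows
 * the vulnerability class: an index that can legitimately be -1 (a sentinel)
 * is used on a stack array without a lower-bound check, so the access lands
 * *below* the buffer: a stack buffer underflow. */

#define SENTINEL_IDX (-1)  /* hypothetical "special row" marker */

/* Hypothetical lookup that can return the sentinel for special keys. */
static int column_index(int key) {
    return (key == 0) ? SENTINEL_IDX : key % 4;
}

int main(void) {
    int flags[4] = {0, 0, 0, 0};
    int idx = column_index(0);  /* returns -1; nothing checks for it */
    flags[idx] = 1;             /* out-of-bounds write below flags[0]: UB */
    printf("wrote flags[%d]\n", idx);
    return 0;
}
```

The missing lower-bound check is the entire bug, and it compiles cleanly, which is part of why this class keeps surfacing even in mature, heavily fuzzed codebases.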