I think the points here are good, but it would be much better as a post if it was more respectful of Ball's position, attempting to understand it instead of just attacking it. (Especially the conclusion.)
I agree that he's not thinking about superintelligence, and I think the actual argument is about how much intelligence, even superintelligence, translates into the ability to do useful work. Being really smart and working really hard simply isn't enough to do things that are actually implausibly difficult. And if so, the question is whether the things that cause existential risk are implausibly difficult. (In biorisk, the answer may be yes, though it's very unclear. But for exfiltration, persuasion, and scheming, the answer is pretty clearly no.)
This is the kind of rhetoric Dean supports and praises: https://x.com/deanwball/status/2026325817291104728
"This instinct seems to infect the far left across lots of domains: immigration, crime fighting, and the national debt to name a few. You can tell they’re just sort of yearning to submit our society to outside forces: mobs, international councils, or communist China. ... They don’t believe in order, except brutal order under their heels." – blaming resistance to AI datacenters on far left lunatics.
This new post is also not exactly free of mocking language:
"“the AI safety community” is that artificial superintelligence will be able to “do anything.” Now, most people in this world are much too smart to say literally these words, and so it might be fairer to put my criticism this way: “many people in ‘the AI safety community’ are way too willing to resort to extreme levels of hand-waviness when it comes to the supposed capabilities of superintelligent AI.” The tautological pattern of the AI safetyist mind is easy enough to recognize once you encounter it a few times: “Well of course superintelligence will be able to do that. After all, it’s superintelligence. And because superintelligence will obviously be able to do that, you must agree with me that banning superintelligence is an urgent necessity.”"
So I feel like he should be able to handle my tone here, but I will possibly adjust it a bit.
To answer briefly here: My understanding of Dean's position is that he totally rules out the possibility of AI wiping out humanity, mainly based on this "superintelligence is not omnipotent" argument. He specifically seems to believe that superintelligence won't ever gain the capability to do so. This is the "superintelligence is going to be weak" view. But it is pretty apparent to me that much less than superintelligence is sufficient to kill us. I don't believe that AI strictly needs to do something along the lines of "exfiltration, persuasion, and scheming"; there are many ways for it to win. Clearly such ways exist; takeover is not impossible purely because ASI isn't omnipotent.
My understanding of Dean's position is that he totally rules out the possibility of AI wiping out humanity mainly based on this "superintelligence is not omnipotent" argument.
He said something like that in the past, but has updated greatly: since then he has said that AI causing human extinction is only "highly unlikely", and even more recently that "ai present catastrophic risks" and that "alignment may become a more central issue for me again depending on how well alignment seems to work for smarter-than-human widely deployed ai".
But again, overall I agree with your points - I just think it's better not to be insulting about it, and give people like Dean who are engaging in good faith the benefit of the doubt.
Thanks for picking out these quotes from him. However, I do think they pretty much support my model of his views.
To be clear: I understand that he believes AI could pose catastrophic risks, but he's probably thinking about serious catastrophic events here – like money being lost, or some people dying.
That's fair, I'll think about softening this piece. Though I don't think he is engaging very well with other people here. For example, the way he talks about the "doomers" is clearly mocking too:
"One common assumption (though less prevalent with time) among many people in “the AI safety community” is that artificial superintelligence will be able to “do anything.” Now, most people in this world are much too smart to say literally these words, and so it might be fairer to put my criticism this way: “many people in ‘the AI safety community’ are way too willing to resort to extreme levels of hand-waviness when it comes to the supposed capabilities of superintelligent AI.” The tautological pattern of the AI safetyist mind is easy enough to recognize once you encounter it a few times: “Well of course superintelligence will be able to do that. After all, it’s superintelligence. And because superintelligence will obviously be able to do that, you must agree with me that banning superintelligence is an urgent necessity.”"
– from his text "2023" (that's the title; he just posted it)
Also:
it is pretty apparent to me that much less than superintelligence is sufficient to kill us.
That seems very far from obvious, and I'd call it a straw man of your position if you hadn't literally said it.
Do you mean "something less than a very strong superintelligence is sufficient" or do you mean "sufficient to do something that probably could kill us, if humans don't pay much attention, and not with anything like certainty"?
I mean there are paths where non-superintelligence kills us. For example, it looks plausible that we will just hand AI control over the military and give it direct access to bio labs.
I don't think that either of those guarantees that most of humanity dies, much less everyone. Especially the latter, given what is actually possible.
I don't know, but say it spawns a new pathogen each week, each very contagious and deadly. Then it spreads pathogens that cause mass crop death. Then come AI drones picking off larger groups of survivors. Then ground robots and small airborne drones. Then climate change of +10°C. One after the other. What's impossible here?
The study seems to be about what experts predict to be possible, not what is actually possible, afaict?
Is it your position that it is not obvious that a new species can causally drive another species extinct without being orders of magnitude more intelligent? Because the earth has had millions of existence proofs of that in the history of life.
Note that Dean's argument is also a fully general argument that humans cannot cause human extinction. That alone should be sufficient reason to reject it.
Personally I'm somewhat sceptical of AI-doom - but even I must admit, both Ball's "steps that require capital" and his "interfacing with hard-to-predict complex systems" seem like very odd things to propose as insurmountable barriers to AI-doom: one of the things we're using AI for right now is to help us interface with and make predictions about complex systems, and if they weren't capable of generating revenue we probably wouldn't have built them in the first place.
Dangerous capabilities well short of superintelligence are followed by overwhelmingly catastrophic capabilities 20 years later. But superintelligence being impossible, or its sudden emergence (in a matter of years) being impossible, is a position that makes whatever happens 20 years after a more modest milestone less relevant, because whatever happens a few years down the line (such as the state of alignment and control) is shaped by what's done before that, and there's time to figure things out.
Gradual disempowerment is a relevant argument within that framing. Disputing the framing involves arguments that sudden superintelligence is possible, or that eventual superintelligence is a phase change that preceding work won't prepare for (rather than an arbitrary point in a gradual process not distinct from all other points). Disputing the framing is more difficult, but accepting this framing makes people much more tolerant of continuing unbounded development of increasingly capable AI at the pace that the technology itself is asking for. So these two models of AI danger are not very aligned on policy.
In response to “2023 Or, Why I am Not a Doomer” by Dean W. Ball.
Dean Ball is a pretty big voice in AI policy – over 19k subscribers on his newsletter, and a former Senior Policy Advisor for AI at the Trump White House – so why does he reject the idea that AI poses an existential danger to humanity? In short, he holds the common view that superintelligence (ASI) simply won’t be that powerful. I strongly disagree, and I think he makes a couple of invalid leaps to arrive at that view.
Better Than Us Is Enough
His main flawed move is to imply that AI must be omnipotent and omniscient to wipe us out, and then to explain why that won’t be the case. He states: “one common assumption… among many people in ‘the AI safety community’ is that artificial superintelligence will be able to ‘do anything.’” He then argues that “intelligence is neither omniscience nor omnipotence,” and that even a misaligned AI with “no [..] safeguards to hinder it” would “still fail” because taking over the world “involves too many steps that require capital, interfacing with hard-to-predict complex systems.” But omnipotence or omniscience was never the requirement; the AI just needs to be smarter and more capable than us – humans.
Think Forward
Importantly, it doesn’t actually take superintelligence to wipe out or disempower humanity. To see this, I simply need to think forward to the not-so-distant future. Imagine you get a tiger cub. Think forward to what the tiger will look like in a year and ask yourself: could it kill me then? Now do this with AI. Imagine a future with a billion robots: AI running the military, AI doing basically all jobs with perhaps some level of human oversight, AI running the media, biolabs, political and military decision-making, and critical infrastructure. That metaphorical tiger could kill us. Ball himself imagines a future where AI is “embedded into much of the critical infrastructure and large organizations in America, such that it is challenging to imagine what life would be like if Claude ‘turned off.’”
Ball also discusses scenarios in which superintelligence has almost outlandish abilities, making scientific breakthroughs without much experimentation. He focuses on Yudkowsky’s claim that “a sufficiently superintelligent AI system would be able to infer not just the theory of gravity, but of relativity” from a few frames of a falling apple, or that it could “bootstrap molecular nanoengineering.” Ball may be correct that these specific claims are wrong, but they are not load-bearing parts of any story for why AI might become dangerous. You don’t need to infer relativity from first principles to engineer a bioweapon. Notably, Yudkowsky himself has given other scenarios that do not require the AI to make scientific breakthroughs without experiments (see IABIED, chapter 2).
If your response is “but there will be many AIs and there will be monitoring, so we’ll be safe,” then you’ve shifted to a different (and very flawed) argument.[1] The point is that AI will clearly be able to take over in the future if we haven’t aligned it well by then. In reality, it probably won’t even take largely automating all jobs and tasks, since it’s enough to achieve some combination of: securing power, enabling actions in the physical world, and getting rid of or sidelining humans. And once it reaches a critical capability level, the AI has to act fast, because competing AI projects represent future rival agents.
An Old Argument, Made Worse
The core of Ball’s case, that the world is simply too complex and chaotic for any intelligence to control, is not a new argument. Robin Hanson made a similar case in his 2008 Foom Debate with Yudkowsky: innovation is too distributed across many actors for any single AI to race ahead of all competitors fast enough to dominate. But Hanson correctly understood that this is an argument about the speed and distribution of AI takeoff, not an argument against existential risk. Ball takes Hanson’s position and corrupts it by treating it as an outright refutation of existential risk from AI.
The “many AIs and monitors” defense is pretty weak: unaligned AIs can cooperate with each other; monitoring can be evaded; there’s simply too much to monitor; the AI doing the monitoring for us could itself be jailbroken or could cooperate with the systems it’s supposed to watch; and AIs can hide their reasoning through methods like steganography.