I haven't personally heard much recent discussion about this, which is strange considering that startups like Anduril and Palantir are developing systems for military use, OpenAI recently deleted a clause prohibiting the use of its products in the military sector, and governments are also working on AI-piloted drones, rockets, information systems (hello, Skynet and AM), and so on.

And the most recent and perhaps most chilling use comes from Israel's invasion of Gaza, where the Israeli army has marked tens of thousands of Gazans as suspects for assassination using the Lavender AI targeting system, with little human oversight and a permissive policy for casualties.

So how does all of this affect your p(doom)? What are your general thoughts on it, and how do we counter it?

Relevant links:

https://www.972mag.com/lavender-ai-israeli-army-gaza/

https://www.wired.com/story/anduril-roadrunner-drone/

https://www.bloomberg.com/news/articles/2024-01-10/palantir-supplying-israel-with-new-tools-since-hamas-war-started


johnswentworth

Apr 06, 2024


It doesn't.

How do the militarisation of AI and so-called slaughterbots not affect your p(doom) at all? Plus, I mean, we are clearly teaching AI how to kill, giving it more power and direct access to important systems, weapons, and information.

johnswentworth · 21d
... man, now that the post has been downvoted a bunch I feel bad for leaving such a snarky answer. It's a perfectly reasonable question, folks!

Overcompressed actual answer: core pieces of a standard doom-argument involve things like "killing all the humans will be very easy for a moderately-generally-smarter-than-human AI" and "killing all the humans (either as a subgoal or a side-effect of other things) is convergently instrumentally useful for the vast majority of terminal objectives". A standard doom counterargument usually doesn't dispute those two pieces (though there are of course exceptions); a standard doom counterargument usually argues that we'll have ample opportunity to iterate, and therefore it doesn't matter that the vast majority of terminal objectives instrumentally incentivize killing humans, we'll iterate until we find ways to avoid that sort of thing. The standard core disagreement is then mostly about the extent to which we'll be able to iterate, or will in fact iterate in ways which actually help. In particular, cruxy subquestions tend to include:

* How visible will "bad behavior" be early on? Will there be "warning shots"? Will we have ways to detect unwanted internal structures?
* How sharply/suddenly will capabilities increase?
* Insofar as problems are visible, will labs and/or governments actually respond in useful ways?

Militarization isn't very centrally relevant to any of these; it's mostly relevant to things which are mostly not in doubt anyways, at least in the medium-to-long term.
Thane Ruthenis · 21d
I'd say one of the main reasons is that military-AI technology isn't being optimized towards the things we're afraid of. We're concerned about generally intelligent entities capable of e.g. automated R&D, social manipulation, and long-term scheming. Military-AI technology, last I checked, was mostly about teaching drones and missiles to fly straight, recognize camouflaged tanks, and shoot designated targets while not shooting non-designated targets. And while this may still result in a generally capable superintelligence in the limit (since "which targets would my commanders want me to shoot?" can be phrased as a very open-ended problem), it's not a particularly efficient way to approach that limit at all. Militaries, so far, just aren't really pushing in the directions where doom lies, while the AGI labs are doing their best to beeline there.

The proliferation of drone armies that could be easily co-opted by a hostile superintelligence... it doesn't have no impact on p(doom), but it's approximately a rounding error. A hostile superintelligence doesn't need extant drone armies; it could build its own, and co-opt humans in the meantime.

ryan_greenblatt

Apr 06, 2024


(Large scale) robot armies moderately increase my P(doom). And the same for large amounts of robots more generally.

The main mechanism is via making (violent) AI takeover relatively easier. (Though I think there is also a weak positive case for robot armies in that they might make relatively less smart AIs more useful for defense earlier which might mean you don't need to build AIs which are as powerful to defuse various concerns.)

Usage of AIs in other ways (e.g. targeting) doesn't have much direct effect, particularly if these systems are narrow, but might set problematic precedents. It's also some evidence of higher doom, but not in a way where intervening on the variable would reduce doom.
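A toy numerical sketch of that evidence-versus-intervention distinction, with entirely made-up numbers and an assumed single hidden cause ("recklessness") driving both military-AI deployment and doom:

```python
# Toy model of "evidence of doom" vs. "intervening on the variable".
# Assumption (illustrative only): a single hidden cause R ("how reckless
# civilization is") drives both M ("large-scale robot/military-AI deployment")
# and D ("doom"). All probabilities below are made up.

P_R = {"reckless": 0.5, "careful": 0.5}          # prior over the hidden cause
P_M_given_R = {"reckless": 0.9, "careful": 0.2}  # P(deployment | R)
P_D_given_R = {"reckless": 0.6, "careful": 0.1}  # P(doom | R); D depends only on R here

# Baseline P(doom): marginalize over the hidden cause.
p_doom = sum(P_R[r] * P_D_given_R[r] for r in P_R)

# Observing deployment is *evidence* about R, so it shifts P(doom | M=1).
p_m = sum(P_R[r] * P_M_given_R[r] for r in P_R)
p_doom_given_m = sum(P_R[r] * P_M_given_R[r] * P_D_given_R[r] for r in P_R) / p_m

# Intervening (forcing M=0) cuts the arrow into M but leaves R untouched,
# so in this toy model P(doom | do(M=0)) equals the baseline.
p_doom_do_no_m = p_doom

print(f"P(doom)              = {p_doom:.2f}")          # 0.35
print(f"P(doom | M observed) = {p_doom_given_m:.2f}")  # ~0.51: evidence of higher doom
print(f"P(doom | do(M = 0))  = {p_doom_do_no_m:.2f}")  # 0.35: intervening doesn't reduce doom
```

Observing the deployment raises the posterior on the hidden cause and hence on doom, while forcing the deployment variable to zero leaves the hidden cause, and therefore the doom probability, unchanged.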

Dagon

Apr 06, 2024


Ehn. Kind of irrelevant to p(doom). War and violent conflict are disturbing, but not all that much more so with tool-level AI.

Especially in conflicts where the "victims" aren't particularly peaceful themselves, it's hard to see AI as anything but targeting assistance, which may reduce indiscriminate/large-scale killing.

I'm being heavily downvoted here, but what exactly did I say wrong? In fact, I believe I said nothing wrong.

It does worsen the situation: Israeli military forces are mass murdering Palestinian civilians based on AI decisions, with operators just rubber-stamping the actions.

Here is the +972 Mag Report: https://www.972mag.com/lavender-ai-israeli-army-gaza/

I highly advise you to read it, as it goes into greater detail about how the system actually works internally.

Dagon · 21d
I can only speak for myself, but I downvoted for leaning very heavily on a current political conflict, because it's notoriously difficult to reason about generalities due to the mindkilling effect of taking sides. The fact that I seem to be on a different side than you (though there ain't no side that's fully in the right - the whole idea of ethnic and religious hatred is really intractable) is only secondary.

I regret engaging on that level. I should have stuck with my main reaction that "individual human conflict is no more likely to lead to AI doom than nuclear doom". It didn't change the overall probability IMO.

Noosphere89

Apr 13, 2024


I basically agree with John Wentworth here that it doesn't affect p(doom) at all, but one thing I will say is that it kind of undermines the credibility of claims that humans will make decisions/be accountable once AI gets very useful.

More generally, one takeaway I see from the military's use of AI is that there are strong pressures to let them operate on their own, and this is going to be surprisingly important in the future.

Brendan Long

Apr 07, 2024


While military robots might be bad for other reasons, I don't really see the path from this to doom. If AI powered weaponry doesn't work as expected, it might kill some people, but it can't repair or replicate itself or make long-term plans, so it's not really an extinction risk.

This AI-powered weaponry can always be hacked or modified, perhaps even talked to; all of this allows it to be used in more than a single way. You can't hack a bullet, but you can hack an AI-powered ship. So individually these systems might not be dangerous, but they don't exist in isolation.

Also, the militarisation of AI might create systems that are designed to be dangerous, amoral, and without any proper oversight. This opens us up to a flood of potential dangers, some of which are hard to even predict now.

FeepingCreature · 21d
If military AI is dangerous, it's not because it's military. If a military robot can wield a gun, a civilian robot can certainly acquire one as well. The military may create AI systems that are designed to be amoral, but it will not want systems that overinterpret orders or violate the chain of command. Here as everywhere, if intentional misuse is even possible at all, alignment is critical and unintentional takeoff remains the dominant risk.

In seminal AI safety work Terminator, the Skynet system successfully triggers a world war because it is a military AI in command of the US nuclear arsenal, and thus has the authority to launch ICBMs. This, ironically given how it is usually ridiculed, gets AI risks quite right but grievously misjudges the state of computer security. If Skynet were running on Amazon AWS instead of a military server cluster, it would only be marginally delayed from reaching the same outcome.

The prompting is not the hard part of operating an AI. If you can talk an AI ship into going rogue, a civilian AI can talk it into going rogue. This situation is inherently brimming with doom; it is latently doomed in multiple ways; the military training and direct access to guns merely remove small roadbumps. All the risk materialized at once, when you created an AI that had the cognitive capability to conceive of and implement plans that used a military vessel for its own goals. Whether the AI was specifically trained on this task is, in this case, really not the primary source of danger.

"My AI ship has gone rogue and is shelling the US coastline."
"I hope you learnt a lesson here."
"Yes. I will not put the AI on the ship next time."
"You may be missing the problem here--"
Justausername · 21d
Yes, a civilian robot can acquire a gun, but that still makes it safer than a military robot that already has a whole arsenal of military gadgets and weapons right away. It would have to do additional work to acquire one, and it is still better to have it do more work and face more roadblocks rather than fewer.

I think we are mainly speculating on what the military might want. It might want a button that instantly kills all its enemies with one push, but it might not get that (or it might, who knows now). I personally do not think they will rank a more efficient AI (efficient at murdering humans) below a less efficient but more controllable AI. They would want to have an edge over the enemy. Always. And if it means sacrificing some controllability or anything else, they might just do that. But they might not even get that; they might get an uncontrollable and error-prone AI and no better. Militaries aren't gods; they don't always get what they want. And someone up top might decide "To hell with it, it's good enough" and that will be it.

And to your ship analogy: it's one thing to talk a civilian AI vessel into going rogue, and a different thing entirely to talk a frigate or nuclear submarine into going rogue. The risks are different. One has control over a simple vessel, the other has control over a whole arsenal. My point is that the second increases risk substantially and should be avoided as far as possible for security reasons.

I also think it still increases the danger if an AI is trained without any moral guidance or any possibility of moral guardrails, and is instead trained to murder people and efficiently put humans in harm's way. Current AI systems have something akin to Anthropic's AI constitution, which tries to put in place some moral guardrails and respect for human life and human rights; I don't think that AIs trained for the military are going to have the same principles applied to them in the slightest.
Brendan Long · 21d
When people talk about p(doom) they generally mean the extinction risk directly from AI going rogue. The way I see it, that extinction-level risk is mostly from self-replicating AI, and an AI that can design and build silicon chips (or whatever equivalent) can also build guns, while an AI designed to operate a gun doesn't seem any more likely to be good at building silicon chips.

I do worry that AI in direct control of nuclear weapons would be an extinction risk, but for standard software engineering reasons (all software is terrible), not for AI-safety reasons. The good news is that I don't really think there's any good reason to put nuclear weapons directly in the hands of AI. The practical nuclear deterrent is submarines, and they don't need particularly fast reactions to be effective.