When killer robots are outlawed, only rogue nations will have massive drone armies.

An ideal outcome here would be if counter-drones have an advantage over drones, but it's hard to see how that could obtain when counter-counter-drones should be in a symmetrical position relative to counter-drones. A second-best outcome would be no asymmetric advantage for guerilla drone warfare, in which case the wealthiest nation clearly wins via numerical drone superiority combined with excellent enemy-drone detection.

...you know, at some point the U.S. military is going to pay someone $10 million to conclude what I just wrote and they're going to get it half-wrong. Sigh.


When killer robots are outlawed, only rogue nations will have massive drone armies.

That's not necessarily a huge issue. If all the major powers agree to not have automated killing drones, and a few minor rogue states (say, Iran) ignore that and develop their own killer drones, then (at least in the near term) that probably won't give them a big enough advantage over semi-autonomous drones controlled by major nations to be a big deal; an Iranian automated drone army probably still isn't a match for the American military, the American military has too m...

-1 · Epiphany · 7y

This took effort to parse. I think what you're saying is:

* If we're going to have killer drones, there needs to be something to check their power. Example: counter-drones.
* If we're going to have counter-drones, we need to check the power of the counter-drones. Example: counter-counter-drones.
* If counter-counter-drones can dominate the original drones, then counter-drones probably aren't strong enough to check and balance the original drones. (Either because the counter-counter-drones will become the new original drones, or because the counter-drones would be intentionally less powerful than the original drones so that the counter-counter-drones could counter them, making the counter-drones useless.)

(I want everyone to understand, so I'm writing it all out - let me know if I'm right.)

And you propose "no asymmetrical advantage of guerilla drone warfare... etc", which isn't clear to me because I can interpret multiple meanings:

* Trash the drones vs. counter-drones vs. counter-counter-drones idea?
* Make sure drones don't have an advantage at guerilla drone warfare?
* Decide who wins wars based on who has more drones and drone defenses instead of actually physically battling?

What did your statement mean? I think if we're going to check the power of killing drones, we need to start by defining the sides using a completely different distinction than "drone / counter-drone".

Reading this gave me a different idea for checking and balancing killer robots and advanced weapons [http://lesswrong.com/r/discussion/lw/hpb/nearterm_risk_killer_robots_a_threat_to_freedom/95xa]. I can see some potential cons to it, but I think it might be better than the alternatives. I'm curious about what pros and cons you would think of.
0 · atucker · 7y

I think that if you used an EMP as a stationary counter-drone, you would have an advantage over drones, in that most drones need some sort of power/control in order to keep flying; counter-drones would be less portable, but more durable, than drones.

Near-Term Risk: Killer Robots a Threat to Freedom and Democracy

by Epiphany · 14th Jun 2013 · 105 comments



A new TED talk by Daniel Suarez, author of Daemon, just came out, explaining how autonomous combat drones with a capability called "lethal autonomy" pose a threat to democracy.  Lethal autonomy is what it sounds like - the ability of a robot to kill a human without requiring a human to make the decision.

He explains that a human decision-maker is not a necessity for combat drones to function.  This has potentially catastrophic consequences: it would allow a small number of people to concentrate a very large amount of power, undermining the checks and balances both between governments and their people and between different branches of government.  According to Suarez, about 70 countries have begun developing remotely piloted drones (like Predator drones), the precursors to killer robots with lethal autonomy.

Daniel Suarez: The kill decision shouldn't belong to a robot

One thing he didn't mention in this video is that there's a difference in obedience levels between human soldiers and combat drones.  Drones are completely obedient, but humans can revolt.  Because they can rebel, human soldiers provide some obstacles that limit the power a would-be tyrant could otherwise obtain.  Drones won't provide this type of protection whatsoever.  Obviously, relying on human decision making is not perfect.  Someone like Hitler can manage to convince people to make poor ethical choices - but still, they need to be convinced, and that requirement may play a major role in protecting us.  Consider this - it's unthinkable that today's American soldiers might suddenly decide this evening to follow a tyrannical leader whose goal is to seize total power and murder all who oppose him.  It is not, however, unthinkable at all that the same tyrant, if empowered by an army of combat drones, could successfully launch such an attack without risking a mutiny.  The range of power grabs that a tyrant with a sufficiently powerful robot army could get away with is essentially unlimited.

Something else he didn't mention is that because we can optimize technologies more easily than we can optimize humans, it may be possible to produce killer robots faster and more cheaply than we can raise, train, and pay human soldiers.  Considering the salaries and benefits paid to soldiers and the roughly 18-year wait for human development, an overwhelmingly large army of killer robots could plausibly be built more quickly than a human army and with fewer resources.

Suarez's solution is to push for legislation that makes producing robots with lethal autonomy illegal.  There are, obviously, pros and cons to this method.  Another method (explored in Daemon) is for the people to have 3-D printers, so that they can produce comparable weapons which would then check and balance their government's power.  This method has pros and cons as well.  I came up with a third method, which is here.  I think it's better than the alternatives, but I would like more feedback.

As far as I know, no organization, not even MIRI (I checked), is dedicated to preventing the potential political disasters caused by near-term tool AI (MIRI is interested in the existential risks posed by AGI).  That means it's up to us - the people - to develop our understanding of this subject and spread the word to others.  Of all the forums on the internet, LessWrong is one of the most knowledgeable when it comes to artificial intelligence, so it's a logical place to fire up a discussion on this.  I searched LessWrong for terms like "checks and balances" and "Daemon" and I don't see evidence that we've had a group discussion on this issue.  I'm starting by proposing and exploring some possible solutions to this problem, along with some pros and cons of each.

To keep things organized, let's put each potential solution, pro, and con into a separate comment.
