Why does this not apply to rifles? / Again, why isn't this isomorphic to "Human equipped with weapon X" versus "unarmed human"?

Killer robots pose a threat to democracy that rifles do not. Please see "Near-Term Risk: Killer Robots a Threat to Freedom and Democracy" and the TED Talk linked therein, "Daniel Suarez: The kill decision shouldn't belong to a robot". You might also like to check out his book "Daemon" and its sequel.

Once more: Why are "Killer Robots" different from "machine guns" in this sentence?

Machine guns are wielded by humans, and humans can make better ethical decisions than robots currently can.

humans can make better ethical decisions than robots currently can.

This is not obvious. Many's the innocent who has been killed by some tense soldier with his finger on the trigger of a loaded weapon, who didn't make an ethical decision at all. He just reacted to movement out of the corner of his eye. If an ethical decision was made, it was not at the point of killing, but at the point of deploying the soldier, with that armament and training, to that area - and that decision will not be made by the robots themselves for some time to come.


[LINKS] Killer Robots and Theories of Truth

by fowlertm · 1 min read · 30th Jun 2013 · 8 comments



Peter at the Conscious Entities blog wrote an essay on the problems with using autonomous robots for combat, and attempted to articulate some general principles that would allow them to be used ethically.  He says:

In essence I think there are four broad reasons why hypothetically we might think it right to be wary of killer robots: first, because they work well; second because in other ways they don’t work well, third because they open up new scope for crime, and fourth because they might be inherently unethical.

Unpacking this a little: autonomous robots will change the character of war and make it easier for many actors to wage, can be expected to malfunction in very serious ways in especially complex and open-ended situations, might be re-purposed for crime, and for various reasons make the ethics surrounding war even more dubious.

He even takes a stab at laying out restrictive principles that would help mitigate some of the danger of using autonomous robots:

P1. Killer Robots should not be produced or used in a way that allows them to fall into the hands of people who will use them unethically.

P2. Killer Robots should not be used for any mission which you would not be prepared to assign to a human soldier if a human soldier were capable of executing it.

P3. Killer Robots should not be used for any mission in unpredictable circumstances or where the application of background understanding may be required.

P4. Killer Robots should not be equipped with capacities that go beyond the immediate mission; they should be subject to built-in time limits and capable of being shut down remotely.
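
Of the four, P4 is the only principle that translates more or less directly into an engineering requirement. Purely as an illustration (the class and method names below are my own invention, not drawn from Peter's essay or from any real system), a minimal sketch of a controller that fails closed on a built-in time limit or a remote shutdown signal might look like this:

```python
import threading
import time

class MissionController:
    """Illustrative only: enforces P4-style limits (a built-in mission
    deadline and a remote shutdown channel). Not modeled on any real system."""

    def __init__(self, mission_duration_s: float):
        self._deadline = time.monotonic() + mission_duration_s
        self._shutdown = threading.Event()  # set via the operator's remote link

    def remote_shutdown(self) -> None:
        """Called over the (hypothetical) remote link to abort the mission."""
        self._shutdown.set()

    def authorized(self) -> bool:
        """Gate every action through this check; it fails closed once the
        deadline passes or a shutdown signal arrives."""
        return not self._shutdown.is_set() and time.monotonic() < self._deadline

    def run(self, step) -> None:
        """Re-check authorization before each step, not once at launch,
        so expiry or shutdown takes effect mid-mission."""
        while self.authorized():
            step()
        # Falling out of the loop is the safe state: no capacity
        # persists beyond the mission window.
```

The design choice worth noting is that authorization is re-checked before every action rather than once at launch, so the time limit and the kill switch bind mid-mission, and the default on expiry is inaction.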

He is a non-expert in the field, and I (also a non-expert) find his analysis capable and thorough, though I spotted some possible flaws.  I mention it here at LessWrong because, while we may be decades away from superintelligent AI, work in AI risk and machine ethics is going to become especially important very soon as drones, robots, and other non-human combatants become more prevalent on battlefields all over the world.

Switching gears a bit, Massimo Pigliucci of Rationally Speaking fame lays out some common theories of truth and the problems facing each one.  If you've never heard of Charles Sanders Peirce and wouldn't know a verificationist account of truth if it hit you in the face, Massimo's article could be a good place to start getting some familiarity.  It seems relevant because there has been some work on epistemology in these parts recently.  And, as Massimo says:

...it turns out that it is not exactly straightforward to claim that science makes progress toward the truth about the natural world, because it is not clear that we have a good theory of truth to rely on; moreover, there are different conceptions of truth, some of which likely represent the best we can do to justify our intuitive sense that science does indeed make progress, but others that may constitute a better basis to judge progress (understood in a different fashion) in other fields — such as mathematics, logic, and of course, philosophy.

This matters for anyone who wants to know how things are, but it is even more urgent for anyone who would create a truth-seeking artificial mind.
