Thanks for your comments; I'm inclined to agree with most of what you've said.

I am glad to know that my comments have made a difference and that they were welcome. I think LessWrong could benefit a lot from The Power of Reinforcement, so I am glad to see someone doing this.

the only solution is to make these autonomous technologies as absolutely safe as possible.

Actually, I don't think that approach will work in this scenario. When it comes to killer robots, militaries will make them as dangerous as possible (while keeping them controllable, of course). But the biggest problem isn't that they'll shoot innocent people - that's a problem, but there's a worse one: we may soon live in an age where anyone can decide to make themselves an army. "Safe killer robots" is an oxymoron. What's needed is a solution that's genuinely out of the box.

[LINKS] Killer Robots and Theories of Truth

by fowlertm · 1 min read · 30th Jun 2013 · 8 comments



Peter at the Conscious Entities blog wrote an essay on the problems with using autonomous robots in combat, attempting to articulate some general principles that would allow them to be used ethically.  He says:

In essence I think there are four broad reasons why hypothetically we might think it right to be wary of killer robots: first, because they work well; second because in other ways they don’t work well, third because they open up new scope for crime, and fourth because they might be inherently unethical.

Unpacking this a little: autonomous robots will change the character of war and make it easier for many actors to wage; they can be expected to malfunction in very serious ways in especially complex and open-ended situations; they might be repurposed for crime; and for various reasons they make the ethics surrounding war even more dubious.

He even takes a stab at laying out restrictive principles that could help mitigate some of the danger in using autonomous robots:

P1. Killer Robots should not be produced or used in a way that allows them to fall into the hands of people who will use them unethically.

P2. Killer Robots should not be used for any mission which you would not be prepared to assign to a human soldier if a human soldier were capable of executing it.

P3. Killer Robots should not be used for any mission in unpredictable circumstances or where the application of background understanding may be required.

P4. Killer Robots should not be equipped with capacities that go beyond the immediate mission; they should be subject to built-in time limits and capable of being shut down remotely.

Though he is a non-expert in the field, I (also a non-expert) find his analysis capable and thorough, despite spotting some possible flaws.  I mention it here at LessWrong because, while we may be decades away from superintelligent AI, work in AI risk and machine ethics is going to become especially important very soon, as drones, robots, and other non-human combatants become more prevalent on battlefields all over the world.

Switching gears a bit, Massimo Pigliucci of Rationally Speaking fame lays out some common theories of truth and the problems facing each one.  If you've never heard of Charles Sanders Peirce and wouldn't know a verificationist account of truth if it hit you in the face, Massimo's article could be a good place to start getting some familiarity.  It seems relevant because there has been some work on epistemology in these parts recently.  And, as Massimo says:

...it turns out that it is not exactly straightforward to claim that science makes progress toward the truth about the natural world, because it is not clear that we have a good theory of truth to rely on; moreover, there are different conceptions of truth, some of which likely represent the best we can do to justify our intuitive sense that science does indeed make progress, but others that may constitute a better basis to judge progress (understood in a different fashion) in other fields — such as mathematics, logic, and of course, philosophy.

This matters for anyone who wants to know how things are, but is even more urgent for one who would create a truth-seeking artificial mind.  
