While continuing to endorse the necessity of developing a legally binding instrument with respect to LAWS, in 2018 China also proposed a somewhat narrow definition of LAWS comprising five essential features:

The first is lethality, which means sufficient payload (charge) for it to be lethal. The second is autonomy, which means absence of human intervention and control during the entire process of executing a task. Third is impossibility of termination, meaning that once activated there is no way of terminating the device. Fourth is indiscriminate effects, meaning that the device will execute the task of killing and maiming regardless of conditions, scenarios, or targets. Fifth is evolution, meaning that through interaction with the environment the device can learn autonomously and expand its functions and capabilities in a way that exceeds human expectations.

[...] China’s restrictive definition sets an extremely high threshold in regard to the kinds of technologies that may be eligible for legal regulation.

Chinese delegates to the GGE that we interviewed, however, do not view this definition as “narrow.” For them, rapid technological advancements could soon make weapons with zero human oversight a reality, especially in technologically advanced countries like the USA.

Military UAVs with autonomous targeting already exist. Turkey has probably already used one in combat (a Kargu-2 loitering munition, according to a 2021 UN report on Libya), and Israel has also been developing autonomous weapons such as the Harpy loitering munition. The Switchblade 600 isn't autonomous, but US defense contractors are interested in adding autonomous targeting to such products and seem to be lobbying against US regulations that would restrict it. The Chinese government seems to be strongly in favor of autonomous weapons, but also strongly against literal Terminators a la the movies.

Thus far, the focus has been on autonomous targeting for small, short-range electric UAVs. But autonomy is easier to implement on larger UAVs, which can carry more sensors and onboard computing, while datalinks become harder to maintain over long distances, especially if satellites were targeted in, e.g., a war over Taiwan between the US and a China-Russia alliance.

2 comments

Third is impossibility of termination, meaning that once activated there is no way of terminating the device.

So the law won't apply to anything that anybody would've actually tried to make, short of prepotent superintelligence. :/

qjh:

Is the fifth requirement not a little vague, in the context of agents with external memory and/or few-shot learning?