Assuming this is true, that you're talking about a distinction without a difference, do you have any suggestions or proposals? Full pause on AI for military use? Restrictions on particular uses in the chain at all? Restrictions on using AIs above a particular capacity threshold? Or are we just doomed because we can't make relevant distinctions?
Do AI companies truly have red lines on autonomy in weapons systems and other areas if they cannot clearly define them? If not, they are either deceiving themselves or using their public statements over the last week to obscure operational reality. There are massive semantic and legal differences between a fully autonomous weapons system and a 99% autonomous one; in practice, there is often no real difference.
The debate over autonomous weapons systems (AWS) has gone on for at least a decade, so this is not new ground for anyone following it. States have taken part mainly at the international level through the UN Group of Governmental Experts on Lethal Autonomous Weapons Systems. To this end, international organisations have toiled throughout this time to provide pro-bono legal work for defence departments across the world in defining a level of autonomy within a weapons system that is compatible with international and humanitarian law, settling roughly on having a 'human in the loop' of the kill chain as the defining feature of legal compatibility and adherence to Article 36 weapons reviews. The US has codified its use of autonomy in weapons systems in DODD 3000.09, which requires "appropriate levels of human judgment over the use of force". Technology has clearly developed in parallel to these lengthy bouts of definitional work, and the international debate has failed to acknowledge the rapid development and use of AI-based Decision Support Systems (DSS). In a world saturated by data and fought at the speed of AI, the human in the loop is likely to become a bit player within the greater network they reside in. So while there are legal guardrails for the use of autonomy, it currently looks like combat effectiveness is being increased not through fully autonomous weapons systems, but through the automated systems sitting behind conventional weapons within existing force structures.
Read between the lines of any statement which says something like 'we will not let our systems be used for fully autonomous weapons systems, but we permit their legal use elsewhere' and steelman it. It begins to look something like: 'our systems can be used to identify a potential target using surveillance and reconnaissance, analyse data, build models, confirm targets, track and monitor for situational awareness across more targets than humans could manage, and build an automated AI DSS pipeline within the kill chain, as long as a human agrees to push the button'. How much autonomy in the war machine is acceptable, and how much are you comfortable with? Focusing on 'fully autonomous' becomes a sleight of hand as these systems become more automated and AI-embedded, and this should not go unnoticed within current debates.
I post this with the following caveat: defence and national security are public goods, and are non-rivalrous at the state level, but this is reversed when viewed between states in the international system. The discrepancy for AI companies appears when they are global in nature and wish to define the future for all of humanity, and in fact may do so, yet are responsible only to a narrow set of democratic institutions which are not global. I make no claim as to whether it is right or wrong for AI companies to contribute to the security of a state in this way, but simply ask that they and their employees are honest about the situation, and that their employees admit and understand that they work for a defence contractor in the name of the state. For many, this may involve admitting that your role is to identify the most effective ways to improve the security of the state, aiming to do the "most good" possible in relation to national security, and that your work is therefore rivalrous at the international level.