AI-Enabled Kill Webs and the Slippery Slope towards Autonomous Weapons Systems
Laura Nolan is a computer programmer who resigned from Google over Project Maven. She is now a member of the International Committee for Robot Arms Control (ICRAC), a founding member of the Campaign to Stop Killer Robots. #KeepCtrl.
Last month, the United Nations Convention on Certain Conventional Weapons (CCW) met to discuss Lethal Autonomous Weapons Systems (LAWS), sometimes described as ‘killer robots’. LAWS are systems that can select targets and apply force autonomously, without meaningful human control. The CCW has been working on this issue since 2014, but hasn’t yet agreed on any action to address the concerns these weapons raise.
As the Campaign to Stop Killer Robots put it at the end of 2019: “Diplomacy is moving forward at a snail’s pace, but pressure is building on states to launch negotiations on a new treaty on fully autonomous weapons without delay. There is increasing recognition that weapons systems that would select and engage targets on the basis of sensor processing and that do not allow for meaningful human control cross the threshold of acceptability and must be prohibited.”
Last month, the US Army concluded Project Convergence, a three-week exercise aimed at testing and developing an “artificial intelligence (AI) enabled kill web”: a network that links computerised automated target-selection systems with weapons systems. The Army has boasted of being able to detect and fire on a target, from ground-based artillery or a drone, within 20 seconds. Soldiers can fire missiles from drones via a smartphone app, and reconnaissance data from drones is combined with other sources so that ‘the network’ can present threats on a digital map.
The Campaign to Stop Killer Robots aims to preserve meaningful human control over the use of force, and the technologies being tested as part of Project Convergence illustrate many of our concerns. Can an operator make a sound decision about whether to strike a newly detected target in under 20 seconds, or are they just hitting an ‘I-believe button’ to rubber-stamp the system’s recommendation, delegating the real decision-making authority to software? In a sociotechnical system that is explicitly optimised to reduce the time from detecting a potential threat to destroying it, an individual may not be rewarded for exercising vigilance. The idea of authorising attacks through the very limited user interface of a smartphone is also troubling.
Nowhere in the public reporting about Project Convergence is there any discussion of human factors in the design of software interfaces, of what training users receive on how the targeting systems work (and on those systems’ shortcomings), or of how to ensure operators have sufficient context and time to make decisions. That is consistent with the Defense Innovation Board (DIB) AI Principles, published last year, which also omit any mention of human factors, computer interfaces, or how to deal with the likelihood of automation bias (the tendency for humans to favour suggestions from automated decision-making systems).
The DIB AI Principles do include reliability: ‘DoD AI systems should have an explicit, well-defined domain of use, and the safety, security, and robustness of such systems should be tested and assured across their entire life cycle within that domain of use.’ This principle contrasts with the approach taken for Project Convergence, which uses ‘plug-and-play interface[s] […] to get a new pod to work on the drone for the first time, without laborious technical integration or time-consuming safety recertifications’, and a network system ‘that significantly improves the warfighting capability of our maneuver brigades, but […] was not fielded to do the things we’re doing’.
The sorts of cobbled-together systems being used in Project Convergence may not be quite what many of us think of as autonomous weapons systems: targeting is automated, but the use of force is separated from target selection, and there is a human decision-maker in the loop (although we cannot be sure that the decision-maker always has sufficient time and context).
However, these systems are, at the very least, a significant step along a slippery slope towards fully autonomous weapons. Arthur Holland Michel calls these kinds of tools ‘Lethality Enabling Systems’ and notes that “in the absence of standards on such matters, not to mention protocols for algorithmic accountability, there is no good way to assess whether a bad algorithmically enabled killing came down to poor data, human error, or a deliberate act of aggression against a protected group. A well-intentioned military actor could be led astray by a deviant algorithm and not know it; but just as easily, an actor with darker motives might use algorithms as a convenient veil for an intentionally insidious decision.”
These kinds of concerns are exactly why the Campaign to Stop Killer Robots calls for the retention of meaningful human control over the use of force as a general obligation, rather than seeking to regulate any specific technology. And the time is now: this article has discussed the United States’ Project Convergence as a timely and well-reported example, but many major military powers are exploring AI-based targeting, and the arms industry is building systems with ever-increasing levels of autonomy.
This is 2020. We’ve seen the worst global pandemic in a hundred years. We’re in a climate crisis, and forests across the globe have burned at a record rate. We’re facing economic downturns, the exposure of long-standing systemic inequalities, and food crises. With every crisis that 2020 has thrown our way, however, people around the world have stepped up to play their part for humanity. So while 2020 makes for grim accounting, we can still choose a future that won’t add ‘AI-enabled kill webs’ and autonomous weapons to the list. Technological development is not slowing down, but there is still time to act, if we act fast.
To find out more about killer robots and what you can do, visit: www.stopkillerrobots.org