Why tech workers should oppose #KillerRobots

Stop Killer Robots
Aug 27, 2019

Laura Nolan is a computer programmer who resigned from Google over Project Maven. She is now a member of the International Committee for Robot Arms Control (ICRAC), a founding member of the Campaign to Stop Killer Robots. #TechAgainstKillerRobots.

There are many ethical, political and legal reasons to oppose autonomous weapons, which can strike without a direct human decision to attack. Those are all good reasons for concern, but I am a software engineer, and I also oppose their development and use on technological grounds.

Laura Nolan, hand outstretched with the Campaign to Stop Killer Robots logo on it, looks into the camera. Photo: Clare Conboy.

Software has bugs, and testing cannot find them all

All software has bugs. Testing cannot find and eliminate them all. This is a consequence of the fact that software has state, which changes over time. Every additional variable that a program uses multiplies the number of states it can be in (to say nothing of state in the operating system). We kludge this by testing systems from a newly-started, predictable state, but this does not mean that we understand all the ways that a program can behave.
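
To make that state explosion concrete, here is a toy sketch (hypothetical code, not drawn from any real system) of a controller whose behaviour depends on a handful of boolean flags. With eight flags there are already 256 reachable states, and the bug hides in exactly one of them; with 64 flags, exhaustive testing is simply impossible.

```python
# Toy sketch (hypothetical): a controller that depends on n boolean flags
# has 2**n reachable states, and a bug may hide in just one of them.
from itertools import product

def buggy_controller(flags):
    """Returns the wrong action in exactly one of its 2**len(flags) states,
    the kind of latent bug that spot-testing rarely finds."""
    if flags == (True, False, True, True, False, True, False, True):
        return "FIRE"  # latent bug: wrong action in one rare state
    return "HOLD"

n = 8
states = list(product([False, True], repeat=n))
print(f"{n} flags -> {len(states)} reachable states")
bad = [s for s in states if buggy_controller(s) == "FIRE"]
print(f"states with wrong behaviour: {len(bad)} of {len(states)}")
# With 64 flags there are 2**64 (about 1.8e19) states; exhaustive testing
# is no longer feasible, so some states simply go untested.
print(f"64 flags -> {2**64} reachable states")
```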

Methodologies do exist for implementing safety-critical software systems. However, research on how to build safety-critical autonomous systems is in its infancy, even as the commercial focus on self-driving vehicles has grown over the past decade. The problem may well be unsolvable.

Autonomous weapons could be hacked

No computing system has ever been built that cannot be hacked. Even air-gapped systems, which are never connected to the Internet, have been hacked. Remotely operated drones have already been hacked. With autonomous weapons, the problem is worse: because there is nobody directly controlling the weapon, attackers may be able to change its behaviour without anyone immediately being aware.

AI is overhyped and fragile

Autonomous weapons are not synonymous with AI (artificial intelligence), but it is highly likely that many autonomous weapons will incorporate AI techniques for target identification, particularly object recognition. Unfortunately, object recognition by computer vision can be fooled, and so can lidar-based perception.
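
One published way to fool an image classifier is the fast gradient sign method (FGSM). The sketch below, in PyTorch, shows the core idea; the model, input and epsilon value are placeholders, and this illustrates the general technique rather than an attack on any specific system.

```python
# Sketch of the fast gradient sign method (FGSM), a published technique
# for fooling image classifiers. `model`, `image` and `label` are
# placeholders; assumes PyTorch. Illustrative only.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return a copy of `image` perturbed so that `model` is likely to
    misclassify it, while the change is too small for a person to notice."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

The perturbation is bounded by epsilon per pixel, which is why the altered image looks identical to a human observer even as the classifier's output changes.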

AIs are unpredictable and tend to fare poorly when human beings try to fool them, or when they are used in environments other than those for which they were trained. Warfare is an arena characterised by deception and constant change in tactics, so AI and decision-making in battle are likely a poor match.
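
As a toy illustration of that brittleness (synthetic data, hypothetical numbers), the sketch below trains a simple classifier in one set of conditions and then evaluates it after the environment shifts. Accuracy collapses even though nothing about the model changed.

```python
# Toy demonstration (synthetic data): a classifier that performs well in
# the conditions it was trained for degrades badly once conditions change.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Two point clouds, one per class; `shift` moves the whole scene."""
    class0 = rng.normal(loc=-1.0 + shift, scale=1.0, size=(n, 2))
    class1 = rng.normal(loc=+1.0 + shift, scale=1.0, size=(n, 2))
    X = np.vstack([class0, class1])
    y = np.array([0] * n + [1] * n)
    return X, y

X_train, y_train = make_data(500)             # the "training environment"
X_shift, y_shift = make_data(500, shift=2.0)  # conditions have changed

model = LogisticRegression().fit(X_train, y_train)
print("accuracy in training conditions:", model.score(X_train, y_train))
print("accuracy after the environment shifts:", model.score(X_shift, y_shift))
# Well above chance on the training distribution; close to chance after
# the shift, because the learned decision boundary no longer applies.
```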

A vintage-style robot with the Campaign to Stop Killer Robots logo on its chest. Photo: Ralf Schlesener

Automation bias

Automation bias is a well-known phenomenon in which human beings tend to believe and favour computer-generated suggestions. Military systems that do incorporate human decision-making may still suffer from this problem: operator overconfidence in the Patriot missile system has been cited as a factor in several friendly-fire incidents. Because of automation bias, even automated systems that require a human to give a final OK to an attack can be a problem.

The use of AI in decision-making systems only compounds this problem, because modern forms of AI cannot provide any human-understandable reasoning for their decisions. Another new field, explainable AI (or XAI), aims to solve this problem, but it has yet to yield real progress despite enormous interest. As with research into safety-critical autonomous systems, it is entirely possible that this area will never bear fruit.

We could build a killer robot, but we cannot build one responsibly

We could build robots that can kill today. We cannot build one that is safe, that cannot be hacked, that works predictably in most or all situations, that is free of errors, and that can reliably manage the complexities of international law and the laws of war. That is why a treaty banning their development and use is urgently needed.

If this article resonates with you as a technologist, check out the Campaign to Stop Killer Robots resources for technology workers at www.stopkillerrobots.org/tech, or join me in using #TechAgainstKillerRobots.

Group photo of the coalition of 113 non-governmental organisations in 57 countries working to preemptively ban killer robots. www.stopkillerrobots.org. Photo: Clare Conboy

