The Reality: Autonomous Weapons Would Be Used Against Humans

Stop Killer Robots
4 min read · Oct 24, 2024


by Gillian Flude

The United Nations (UN) Disarmament Week, observed from 24–30 October, seeks to promote awareness and better understanding of disarmament issues and their cross-cutting importance. Young people have an important role to play in the promotion of disarmament as a method to protect people from harm and to build safer and more peaceful societies.

In honour of this important week, members of the Stop Killer Robots Youth Network were asked to share their thoughts on the themes of disarmament and autonomous weapons systems. Disclaimer: The blogs in this series do not necessarily constitute the opinions of Stop Killer Robots, nor should they be considered the opinions and views of all Stop Killer Robots members.

When people think about autonomous weapons, they might picture a battlefield where killer robots fight killer robots, and conclude that doesn't sound like a bad idea. However, this is not how autonomous weapons would actually be used. The reality is that these weapons would be used against humans. The major military powers that have the means to produce these weapons would use them against populations that do not, including in domestic policing. This would create an asymmetrical environment in which one side can remove human decision-making from killing, lowering the threshold for going to war, while the other side cannot.

Conflict today is no longer a clear, open battlefield between two easily distinguishable militaries. It has evolved as human development has evolved, and has become increasingly urban. This urban nature of conflict carries a high civilian toll: data clearly shows that when explosive weapons are used in populated areas, 90% of the victims are civilians. Based on current conflict trends, and looking at the conflicts of the last few decades, we can expect that autonomous weapons would be used in urban areas where civilians are concentrated, putting civilians directly in harm's way.

It is not only the urban character of conflict that makes a killer-robot-versus-killer-robot battlefield unlikely. Most conflicts today are also asymmetrical and/or intra-state: they take place within borders, between a state and a non-state armed group, or between states with vastly unequal military capacities. Taking this reality of warfare into consideration reveals that autonomous weapons would most likely not be possessed by both sides. While a state might be able to develop autonomous weapons, a non-state armed group likely could not; while a large military power might be able to develop them, a smaller state likely could not. What would this result in? Asymmetrical warfare in which one side can kill without meaningful human control while the other is at the mercy of autonomous weapons. If one side has access to autonomous weapons and the other does not, the threshold for going to war is also lowered, since that side no longer has to risk its own soldiers' lives. These are extremely dangerous and unethical weapons to introduce into warfare.

Now, let's consider autonomous weapons outside of warfare, because these weapons could also be used in domestic policing and law enforcement. They would threaten a number of rights, including a citizen's rights to safety and liberty. To see how close this is to reality, consider the widespread use of facial recognition technology in policing, which has been consistently criticized for its biases, racial profiling, and breaches of privacy. If systems that collect our data are already being used by law enforcement, it is not hard to imagine that data being fed into autonomous weapons systems. With the risk of autonomous weapons being used by domestic law enforcement, these weapons quickly feel much closer to home for those living outside conflict areas, and far less like a distant battlefield of fighting robots.

Killer robots versus killer robots on an open battlefield is rooted in fiction. The reality is that warfare is increasingly urban rather than large military versus large military, and that domestic policing is also vulnerable to the dangers of autonomous weapons. When we consider what autonomous weapons would truly mean in practice, it is clear that they are unethical and cross a clear moral line into what artificial intelligence should not be allowed to do. The next time the idea of robots killing each other doesn't sound so bad, remember that in practice it would be robots killing humans. Which is definitely a bad idea.

Gillian has been working for Mines Action Canada since completing her undergraduate degree in Conflict Studies and Human Rights at the University of Ottawa. She first got involved with Mines Action Canada through a part-time contract with Stop Killer Robots. Since then, she has remained concerned about artificial intelligence crossing a clear moral boundary into autonomous weapons. Gillian recently joined the Stop Killer Robots Youth Network and looks forward to engaging with youth who are also passionate about campaigning against killer robots.

Written by Stop Killer Robots

With growing digital dehumanisation, the Stop Killer Robots campaign works to ensure human control in the use of force. www.stopkillerrobots.org