The accountability, dehumanisation & humanitarian dilemmas of autonomous weapons systems

by Valeria Madrigal Vargas

Stop Killer Robots
6 min read · Oct 25, 2023

The United Nations (UN) Disarmament Week, observed from 24–30 October, seeks to promote awareness and better understanding of disarmament issues and their cross-cutting importance. Young people have an important role to play in the promotion of disarmament as a method to protect people from harm and to build safer and more peaceful societies.

In honour of this important week, members of the Stop Killer Robots Youth Network were asked to share their thoughts on the themes of disarmament and autonomous weapons systems. Disclaimer: The blogs in this series do not necessarily constitute the opinions of Stop Killer Robots. Nor should they be considered the opinions and views of all Stop Killer Robots members.

In a world of rapid and ever-changing technological advances, a question arises: where do we draw the line on the use of Artificial Intelligence (AI) in armed conflict? This year the First Committee on Disarmament and International Security of the United Nations General Assembly will address exactly that as it discusses the pathway forward for prohibitions and regulations of autonomous weapons systems (AWS). Disarmament Week is therefore an opportunity to advocate against AWS in armed conflict and to call on states to take decisive action. With that in mind, this blog offers a simple explanation of autonomous weapons systems and the human control dilemma they raise. It then turns to the humanitarian repercussions of AWS, covering international humanitarian law, the lack of accountability, and the imperative of establishing regulations and prohibitions on autonomous weapons systems.

One way to explain AWS is to picture walking a dog without a leash. No matter the dog’s previous training, or the number of times one has done so in other environments, there is still an unavoidable loss of control. Autonomous weapons systems are comparable to dogs in that they fall back on their previous knowledge or programming, but may react in unpredictable ways when placed in unforeseen environments. It is exactly that lack of control that constitutes the main challenge of using autonomy and AI in weapons systems.

In order to understand the problem of control over autonomous weapons systems, it is vital first to understand the difference between an automatic and an autonomous weapon system. Christof Heyns, then UN Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, addressed this distinction in report A/HRC/23/47 (2013), in which he explains,

“Autonomous” needs to be distinguished from “automatic” or “automated.” Automatic systems, such as household appliances, operate within a structured and predictable environment. Autonomous systems can function in an open environment, under unstructured and dynamic circumstances. As such their actions (like those of humans) may ultimately be unpredictable, especially in situations as chaotic as armed conflict, and even more so when they interact with other autonomous systems. (Heyns, C. 2013. p. 8).

In addition, the control dilemma arises from a lack of human intervention in the management, deployment and supervision of the system. Human Rights Watch distinguishes three levels of control: human-in-the-loop, where a human makes the decision for the weapons system to select and strike a target; human-on-the-loop, where the system executes its own decisions but a human supervisor can override them; and human-out-of-the-loop, referring to systems capable of selecting targets and launching attacks without any human intervention whatsoever. There are serious risks in giving an autonomous system the capability to choose and engage a target with minimal or no human involvement. Without human intervention, autonomous weapons systems lack the ability to assess complex situations case by case or to weigh the emotional and contextual factors surrounding them. Given these challenges, one could argue that the lack of control over the actions of an autonomous weapons system could lead to violations of the laws of war, and to states or individuals escaping accountability for those violations. There are, consequently, three main ethical issues surrounding the use of AWS that underscore the urgency of prohibitions and regulations on this type of offensive weaponry by all states.

The first of these issues concerns the humanitarian consequences of their use in war, and compliance with International Humanitarian Law (IHL), or the laws of war. IHL establishes fundamental principles that every armed conflict must respect: proportionality, necessity and distinction. Proportionality requires that the scale of an attack not be excessive in relation to the military objective of the operation. Necessity requires that every attack be directed at achieving a concrete military goal, and entails the prohibition of superfluous injury and unnecessary suffering. Lastly, distinction places responsibility on the attacker to distinguish between civilians and combatants in conflict zones and to protect civilian persons and objects. Measured against these three principles of IHL, autonomous weapons systems without human control are not capable of assessing the proportionality or necessity of an attack or, most worryingly, of distinguishing without fail between civilians and combatants.

Secondly, another issue that should motivate states to push for prohibitions and regulations of autonomous weapons systems is the lack of accountability when these systems are used in armed conflict. Picture an offensive human-out-of-the-loop weapons system being used to attack a civilian refugee camp. Who should international law hold accountable? The person who activated the system, even if they were just following orders? The one who gave the order, even if they had no control whatsoever over the software programmed into the machine? Or perhaps the system’s developer, despite having merely sold the product without knowing its future use? Moreover, given what was established above about the unpredictability of autonomous systems, who could be held accountable when the system carries out actions the user never planned?

Lastly, despite the consequences that armed conflicts have on civilians, one must recognize that states may resort to war under the conditions of jus ad bellum. For that reason, IHL and the Law of The Hague set the ground rules for preserving humanity in armed conflict and in the transitional justice that follows it. When discussing autonomous weapons systems, one must consider the dehumanising implications their use could have on war. Autonomous weapons completely separate soldiers from their actions and trivialise the act of hurting others or taking a life: it is easier to press a button from a safe control room than to pull a trigger in the middle of a battlefield. This could lead to an excessive and unnecessary use of force.

The International Committee of the Red Cross has already issued a statement urging states to negotiate a treaty. Civil society organizations advocating for the same goal have spent years working to convince governments of the importance of addressing this issue in international forums. Regional organizations such as the Caribbean Community (CARICOM) have agreed to work toward a legally binding instrument containing prohibitions and regulations of autonomous weapons systems. All of this work has led actors in the international community to one central conclusion: autonomous weapons systems pose a dangerous, unpredictable and immeasurable threat not only to states but to humanity as a whole.

Valeria Madrigal is a college student majoring in International Relations and Criminology. In the past, she was an intern for the Ministry of Foreign Affairs and Worship in Costa Rica. Currently, Valeria is a collaborator for the National Society of the Red Cross in Costa Rica, specifically in the Doctrine and Protection Department, where she works as a researcher in Disarmament, International Security and International Humanitarian Law. She also interns at the IOM National Office.
