Towards Human Rights: Broadening the perception of risks posed by autonomous weapons systems

Stop Killer Robots
Sep 17, 2024


by Sai Bourothu with Gugu Dube, researchers at Automated Decision Research, the research and monitoring team at Stop Killer Robots.

Autonomous weapons systems (AWS) are weapons systems that can ‘select targets and apply force without human intervention.’ Autonomous weapons systems are not a particular type or new form of weapon: it is the functionality of a weapons system, rather than its form, that poses the threat. The Convention on Certain Conventional Weapons (CCW) is the United Nations forum where the issue of AWS has now been discussed for more than a decade through the Group of Governmental Experts on emerging technologies in the area of lethal autonomous weapons systems (GGE on LAWS). The CCW is a key instrument of international humanitarian law (IHL), and its purpose is to ‘ban or restrict the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately’. IHL, also known as the law of armed conflict, aims to ‘mitigate the impact of armed conflicts by safeguarding individuals not involved in hostilities, as well as those who are no longer participating in hostilities, and by limiting the means and methods of warfare.’

The CCW’s proclaimed ability to deal with new developments in armed conflicts and weapons technologies has led to efforts to address autonomous weapons being directed to this forum, including by the many states who seek to negotiate a new legally binding instrument with prohibitions and regulations on AWS. However, restricting the conversation on AWS to IHL alone hinders meaningful consideration of the threats and risks these systems pose under other bodies of international law, specifically International Human Rights Law (IHRL), which applies outside of armed conflict. Further, AWS often rely on dual-use technologies, such as image recognition, that are not only deployed in the civilian sector but actively draw on civilians and populated areas for the data used to train their algorithms. The manifold impacts of AWS therefore cannot be addressed solely by legal frameworks that envisage a detached and demarcated war zone.

It must be acknowledged that technologies initially considered ‘military’ often impact civilians through their use in border security and law enforcement, among other avenues.

Human Rights Council conference room at the UN in Geneva, Switzerland | Credit: UN Photo/Jean Marc Ferré

The impacts that AWS can have on human rights have been highlighted by numerous international human rights bodies. In 2022, at its fifty-first session, the Human Rights Council adopted a resolution on the human rights implications of new and emerging technologies in the military domain. The resolution recognizes that automated decision-making in the military domain can

‘contribute to or facilitate the commission of human rights violations and abuses, as well as violations of international humanitarian law’ by reproducing and exacerbating ‘existing patterns of structural discrimination, marginalization, social inequalities, stereotypes and bias and create unpredictability of outcomes.’

Fundamentally, the resolution notes that ‘human rights derive from the inherent dignity of the human person’ and stresses the imperative of human control remaining central to the use of force. In the same resolution, the Human Rights Council also requested its Advisory Committee to prepare a study examining the human rights impacts of new and emerging technologies in the military domain.

These initiatives reflect an accelerating momentum to highlight the human rights impacts of AWS, and regional bodies have added further nuance to these conversations. The 2019 report of the UN Working Group of Experts on People of African Descent (A/HRC/42/59) highlights issues of bias and discrimination in algorithmic or automated decision-making, stating that ‘algorithms incorporated the biases that existed as a result of historical injustices and the values of the programmers. Those biases then translated into algorithms that exhibited racial discrimination.’ Algorithmic decision-making rests on training a system on large data sets, from which it derives patterns that are then used to aid decisions.

Inherently, this contributes to digital dehumanisation by reducing humans to data and diluting our right to dignity. It is exacerbated by issues of bias and discrimination, which entrench existing societal patterns of inequality and prejudice.

Instances of automated decision-making infringing people’s right to be free from discrimination abound. Facial recognition technologies used by law enforcement agencies have misidentified people of color, and racial and cultural biases have led to migrant communities and people of color being tagged as more likely to commit fraud. Similarly, public service delivery governed by opaque automated decisions has denied people basic amenities such as food supplies and healthcare.
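To make the underlying mechanism concrete, the short sketch below is a purely illustrative example with synthetic data (not drawn from the report or from any real system): a model trained on records shaped by past discrimination learns to reproduce that discrimination in its own predictions.

```python
# Illustrative sketch only: synthetic data showing how historical bias in
# training records propagates into an automated decision system.
# Assumes numpy and scikit-learn are installed; no real data is used.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 10_000

# A protected attribute (0 or 1) that is irrelevant to the true outcome,
# plus one genuinely relevant feature ("merit").
group = rng.integers(0, 2, size=n)
merit = rng.normal(size=n)

# Biased historical labels: past decisions penalised group 1, so the
# recorded outcomes reflect prejudice, not the relevant feature alone.
past_decisions = (merit - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

# A model trained on these records cannot know the labels were unfair;
# it simply learns the pattern, protected attribute included.
X = np.column_stack([merit, group])
model = LogisticRegression().fit(X, past_decisions)

# Two people with identical merit, differing only in group membership,
# now receive very different scores: the bias has been automated.
identical_merit = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(identical_merit)[:, 1])
```

The same dynamic applies whatever the domain: if the training data encode discriminatory patterns, a system optimised to fit that data will encode them too, and at scale.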

Human dignity is at the core of human rights discourse, as well as being central to movements surrounding disarmament. A human being is a sum of varied and complex identities, social locations and experiences, which cannot simply be reduced to data points used to make decisions of life and death. It is important to note that ‘purporting to be able to distinguish between combatants and civilians, between active combatants and those hors de combat [out of action due to injury], or between civilians and civilians directly participating in hostilities, on the basis of data acquired by sensors and processed and classified by algorithms raises serious legal, ethical and moral concerns, including concerns around the violation of human dignity and of dehumanisation.’

The African Commission on Human and Peoples’ Rights (ACHPR) submitted a comprehensive document in response to the United Nations Secretary-General’s (UNSG) call for the views of states and observers on the ‘humanitarian, legal, security, technological and ethical perspectives’ on AWS, for his report on autonomous weapons systems. In this submission, the ACHPR notes that ‘allowing machines the power over life and death may be inconsistent with the right to dignity.’ The Special Rapporteur on extrajudicial, summary or arbitrary executions, Morris Tidball-Binz, also made a submission to the UNSG, urging states to ‘ensure that human rights implications of AWS, with particular emphasis given to the right to life and human dignity’ are considered in all international efforts to control AWS.

The use of autonomous weapons systems, and of automated decision-making in the use of force, is undeniably an affront to human dignity; it also puts at risk the right to life, the ‘supreme right’, without which a person is unable to experience any other right.

The ACHPR, in its submission, underlines that the obligation of states ‘to uphold the right to life raises critical ethical, legal, and practical considerations’ in the use of AWS. It further notes that states deploying AWS must grapple with the possibility of ‘errors, biases, and other risks’ compromising the right to life, and that states should be held responsible for such occurrences. The ACHPR stresses that delegating life-and-death decisions to machines or algorithms could contribute to the arbitrary deprivation of the right to life, as a machine is unable to grapple with complex considerations such as ‘appropriateness, justice, predictability, reasonableness, necessity, and proportionality.’

The far-reaching implications of autonomous weapons systems for humanity as a whole, and the specific threats such systems pose to human rights, peace and security, demand that new international law on AWS be urgently negotiated. As AWS threaten these hard-earned rights, it is crucial that the conversation on prohibiting and regulating these weapons systems and mitigating their impacts takes cognizance of the critical role of human rights, and that international human rights law be central to any international discussions on AWS and to the negotiation of a legally binding instrument.


Stop Killer Robots

With growing digital dehumanisation, the Stop Killer Robots campaign works to ensure human control in the use of force. www.stopkillerrobots.org