Regulating AI & autonomous weapons — preventing a human rights disaster

Stop Killer Robots
5 min read · Oct 27, 2023


by Tara Osler

[Photo: People dressed in boiler suits with STOP KILLER ROBOTS written on the back stand in front of the "Broken Chair" across from the United Nations in Geneva]

The United Nations (UN) Disarmament Week, observed from 24–30 October, seeks to promote awareness and better understanding of disarmament issues and their cross-cutting importance. Young people have an important role to play in the promotion of disarmament as a method to protect people from harm and to build safer and more peaceful societies.

In honour of this important week, members of the Stop Killer Robots Youth Network were asked to share their thoughts on the themes of disarmament and autonomous weapons systems. Disclaimer: The blogs in this series do not necessarily represent the opinions of Stop Killer Robots, nor should they be taken as the views of all Stop Killer Robots members.

The rate at which technology has advanced in recent decades has drastically outpaced the development of corresponding legislation, making this area of law a bona fide "Wild West." Legislative bodies around the world are struggling to catch up: though some states have made public shows of support for restrictions on autonomous weapons, no state has passed legislation combatting the weaponization of artificial intelligence (AI). Many states have yet to create legal frameworks around AI even as a general topic, if they address it at all.

The path to a new treaty prohibiting and regulating autonomous weapons is strewn with obstacles, including the lack of coherent AI regulation across states and intergovernmental organizations like the European Union (EU). In light of UN Disarmament Week, it is crucial to understand the international legal context in which a treaty prohibiting and regulating weaponized AI could come into being.

As stated, one of the major barriers to a cohesive treaty on autonomous weapons is the lack of coherent global standards for the regulation of AI. Between 2016 and 2022, 31 states passed legislation focusing on AI. Of these, few actually regulate AI; some focus instead on funding for government use of AI programs, or on provisions for workforce training on new technology. The number of states that have turned a legal eye to protecting their citizens from the dangers of AI is small. A 2019 report found that no state body other than the EU had created an ethical or legislative framework on AI, and even the EU's regulations fall short of adequately protecting civilians from AI weaponization. Since then, only one state has joined its ranks: China, which has one law in effect governing the use of algorithmic recommendation systems (like those used in e-commerce advertising). No member state of the Organisation for Economic Co-operation and Development (OECD) has enacted legally binding regulations either, though the Canadian government is currently considering potential legislation. Sixty-one states have created non-binding frameworks, though not all are available for public consultation.

The scarcity of binding regulation on AI worldwide paints a concerning picture: while the EU and China have taken steps to protect their citizens, the majority of the global population remains vulnerable to AI risks. This lack of coherence does not bode well for international cooperation on autonomous weapons. How can states agree on a binding weapons convention when their domestic policies lag so far behind?

Understanding the context in which autonomous weapons could be regulated also requires knowledge of more general human rights-focused technology regulation. As with AI regulation, human rights law in the context of technology is extremely limited. Current technological threats to human rights can be seen in personal data usage and in digital discrimination (an issue that can be exacerbated by human biases coded into AI systems). Personal data privacy is an area where legislation is a global patchwork: though EU citizens are protected by the General Data Protection Regulation (GDPR), only 48% of the countries classified by the UN as "least developed" have enacted legislation on the subject. Given that AI systems are trained on digital data, the protection of personal information is a major concern in the development of weaponized AI. If personal data cannot be protected from comparatively benign harms like advertising algorithms, what is to prevent it from being used to train autonomous weapons?

Likewise, the lack of regulation on other digital human rights issues, such as digital discrimination, demonstrates the "Wild West" state of international legislation. Though the EU continues to lead the pack with the Digital Services Act (DSA), aimed at protecting vulnerable groups from discriminatory algorithms and online hate speech, there is no unified international framework outside of Europe that can provide coherent governance on digital human rights. Some civilians are protected, but not all. Borders are increasingly porous in the online world; if there are holes in data protection, is anyone truly protected?

The dearth of legislation regulating technology like AI is concerning: has the proverbial horse of autonomous weapons already left the legislative barn? Not yet. While weapons systems with varying levels of autonomy continue to be developed, autonomous weapons systems designed to target humans, or that inherently lack meaningful human control, are not yet a reality; and the risks beyond such "killer robot" systems are also considerable, as they represent just one piece of the military AI puzzle. There is still time to prevent weaponized AI from becoming a human rights disaster. Unlike existing technologies whose development outpaced regulation, the global community has an opportunity to get ahead of weaponized AI and regulate its creation before it is deployed against vulnerable populations. The time is now to establish clear regulations and binding legislation on the military applications of AI, starting with a treaty ensuring meaningful human control over the use of force.

Tara Osler (she/elle) is a second-year BCL/JD candidate at the McGill University Faculty of Law. Having worked previously for the UNDP, she currently focuses on international law and is an executive editor of the Inter Gentes Journal of International Law and Legal Pluralism. Tara has also been involved with the Campaign to Stop Killer Robots since 2020 when she represented Canada at the Global Youth Conference on Fully Autonomous Weapons.

