Imaginations of Autonomy: On Humans, AI-based Weapon Systems and Responsibility at Machine Speed

Stop Killer Robots
Jun 24, 2024 · 9 min read

by Ishmael Bhila, Swati Malik, Adriano Nogueira Drumond Lopes, Gabriel Udoh

Photo of the conference by Larissa Lenze

Introduction

From the 22nd to the 24th of May 2024, leading scholars studying autonomous weapons systems gathered in the small German city of Paderborn to discuss what autonomy in weapons systems and human-machine interaction with AI-based weapons systems mean, particularly in relation to the concept of responsibility. This conference, organized by Jutta Weber and Jens Hälterlein at Paderborn University (Germany), was the inaugural conference of the interdisciplinary competence network ‘Meaningful Human Control — Autonomous Weapon Systems between Regulation and Reflexion’. It brought together scholars from various disciplines and representatives of non-governmental organisations (NGOs) to explore the intricate dynamics between artificial intelligence, military technology, and human accountability.

As autonomous weapon systems (AWS) progressively exhibit advanced automation capabilities, the line between machine and human agency is likely to become increasingly blurred. These systems, however autonomous, still rely heavily on human-made infrastructures such as big data and machine learning algorithms, and on human initiation and intervention. This integration raises pressing questions about the limits of machine autonomy and the enduring necessity of human oversight in AWS decision-making processes. The fundamental question of responsibility remains, however, as accountability cannot be transferred to machines but must reside with human beings, even as avenues of state, organizational, and corporate responsibility continue to be explored. The conference aimed to delve into these complexities, highlighting the multifaceted nature of AWS and the inherent challenges in ensuring meaningful human control.

It turns out that the notion of responsible AI and the purported autonomy of weapon systems are not merely technical achievements but are deeply rooted in technoscientific and political imaginations. These imaginations have material consequences: they reinforce promises of rapid, precision warfare and influence global politics by promoting military solutions over diplomatic and peaceful conflict resolution. This conference therefore sought to foster interdisciplinary dialogue incorporating diverse global perspectives, while encouraging particular insights from the Global South, to better understand the distributed and situational agency within military human-machine configurations and to challenge the prevailing discourses on the potential and promises of AWS with a more nuanced and reflective approach.

The conference’s programme was structured around four themes, each exploring a different aspect of AWS and their broader socio-political context. Each theme featured presentations and paper submissions from leading experts, setting the stage for a comprehensive exploration of the challenges and opportunities presented by AWS.

1. Autonomy, Subjectivities, and Inequalities

One of the running themes across many of the presentations was how autonomy in weapons not only reflects but also accentuates and creates systems of oppression and inequality. Lucy Suchman, in her keynote, argued for strategic autonomies that would challenge systems of oppression, asking how the concept of autonomy is mobilised, by whom, and to what effect. For Suchman, AWS feed into processes of erasure and ungovernable injury, which should be open to resistance. In the same vein, Shona Illingworth argued that armed drone practices inflict physiological and psychological harm on those subjected to them, harm that is long-lasting. Illingworth’s presentation critiqued the neocolonial aspects of drone warfare, arguing that “neocolonialism is concerned with data extraction” through “practices of scale and processes of abstraction” present in military exercises, which lead to the dehumanisation of those on the receiving end of armed drone operations. Elke Schwarz’s presentation likewise decried how algorithmic killing has become a practice of productivity rather than of any ‘meaningful’ humanity. Jutta Weber’s presentation, which focused on data science and decision-making, noted that technology is not neutral, asking: if “technologies are about worldmaking practices, what kind of world are we creating?” Similarly, the idea of collateral damage and the ‘otherisation of grievable bodies’ in algorithmic war practices was emphasised by Ishmael Bhila, who argued that autonomy in weapons is rendering invisible those bodies considered ‘disposable’ in security assistance in Africa. Further, Christoph Marischka argued strongly against data-driven warfare, contending that it has led to war crimes through flawed data and high-tech weapons that do not necessarily achieve clear goals. Marischka suggested that understanding the entire war machinery as a mix of technology and human decisions can help analyse how autonomy is used in these systems and whose interests they really serve. Finally, Gabriel Udoh posited that integrating African perspectives into global AWS debates is crucial for developing inclusive and culturally sensitive frameworks.

2. Decision-Making (and Targeting) Systems beyond Autonomous Weapon Systems

Discussions under this theme centred on the limits of autonomy in weapon systems, and on the argument that these limits should be considered beyond the weapon systems themselves, extending to other decision-making and targeting apparatus in armed conflicts. For Christoph Marischka, the ongoing conflict in Gaza drives home this point. According to his presentation and paper, systems like Habsora and Lavender, both of which contribute to the decision-making apparatus of the Israeli “war machine”, are not currently covered by existing regulations, and this raises concerns. Jutta Weber noted how wars have moved beyond mere scientific epistemologies of causality to issues of correlation, further regarding war as an experiment in testing and tinkering that accepts deaths and destruction as collateral damage. Shona Illingworth argued that there is a need to establish a right to live that goes beyond what is currently recognised: a right to live under free skies. Her paper showed the psychological (and physical) impacts of drone strikes, threats from above, that remain with populations long after the airstrikes have ceased. Shiri Krebs presented on the cognitive components of drone warfare and how they, in the case of vision and sound, can affect human understanding and perception in strikes. Erik Reichborn-Kjennerud argued that a new epistemological logic of tinkering is evolving, gaining momentum at a breathtaking pace and producing massive effects in military targeting. Tinkering, or experimental warfare, which builds on the productivity of failure, is part of a new technoscientific regime. According to Reichborn-Kjennerud, data mines the unknown, exploring all manner of (often highly unlikely) possibilities by relying on correlation instead of causality. These new epistemic practices, material infrastructures, and data logics build on ideas of a world in constant becoming and emergence. This computational epistemic regime, encompassing machine learning, data mining, and neural networks, rests on abductive reasoning and correlationism and abandons the ideas of causality and reproducibility that we know from traditional science.

3. Imaginaries of War in the Age of Algorithms

A recurrent theme during the conference was the imaginaries of warfare in an era increasingly dominated by advanced technologies and smart systems. How autonomy is imagined and negotiated, both in public discourse and in the media, has become a central issue, reflecting how algorithmic warfare is planned, designed, and operationalised. In this sense, Linda Rupert argued that the imaginaries of ‘future battlefields’ are built on expectations about technological possibilities, the legitimisation of current weapons systems, and the production of desirable visions, and that these elements influence geopolitical leitmotifs and predict or construct geopolitical futures. For Jens Hälterlein, these imaginaries extend beyond what is understood as ‘human’: the ideas that animate the development of swarming technologies, a branch of autonomous systems designed in imitation of collective animal and insect behaviour, lie beyond the usual understanding of autonomy and challenge the concept of meaningful human control. Lucy Suchman examined the continuing logics of a closed-world imaginary in the face of irremediable openness, and the related possibilities for thinking of autonomy otherwise. Because of their attachment to self-directed agency, where the goals are surveillance, mapping, categorising, and enumeration, military imaginaries take for granted the necessary reductions, erasures, and betrayals of datafication. The technological imaginaries of autonomous technology are thus treated variously as relations to be severed or cybernetic circuits to be controlled through systems engineering. In the same context, Elke Schwarz stated that humanity should recover what it means to be human: the speed at which technology operates is incompatible with the deliberate pace required for ethical reasoning, and moral agency should remain a distinctly human attribute. She further argued that the implementation of ethics in technological entities remains impossible. Under this theme, Christoph Ernst and Thomas C. Bächle also explored the role of imaginaries and imagination in the construction of warfare, drawing on societal elements such as the influence of video games on the development of AWS, especially in the design of graphical interfaces for compliance with Meaningful Human Control (MHC). Finally, Marijn Hoijtink addressed a broader shift in how warfare is thought of, waged, and lived through. Central to this shift, Hoijtink pointed to the ambiguous figure of ‘the platform’ as a model, metaphor, or concrete technology informing warfare today. She also noted a strong push for technologies, practices, and platforms associated with datafication, surveillance, and new forms of prediction onto the battlefield, a push criticised for enabling pervasive surveillance and shrinking human control in an extremely technologically enabled conflict zone.

4. The Urgent Need for Regulation

The conference also involved discussions on the regulation of AWS through the concept of meaningful human control and that concept’s ramifications for responsibility attribution and the safeguarding of accountability. The concept was discussed in terms of its potential to contribute to the prevention of unlawful killings and the mitigation of risks associated with autonomous decision-making in warfare. It was also reviewed as a potential safeguard for human dignity, which it could preserve by ensuring that critical decisions about life and death remain under direct human oversight. Overall, the need to regulate AWS was discussed through the prism of MHC and in reference to accountability and liability, legal compliance, and the prevention of autonomous escalation and unintended consequences. In this context, several presentations focused on how, if at all, meaningful human control could be achieved through global governance processes. Ingvild Bode’s presentation analysed the key challenges in the global governance of AWS. Bode provided a comprehensive overview of current efforts to regulate AWS, starting with the ‘two-tier’ approach aimed at establishing prohibitions on the one hand and regulations on the other. Bode’s analysis also traced the different processes through which autonomy in weapons is being considered at the international level, distinguishing between the institutionalisation of norms (soft law) and the development of ‘hard’ law (conventions, treaties, etc.) through UN governance processes [via fora such as the United Nations General Assembly (UNGA) and the United Nations Convention on Certain Conventional Weapons (UNCCW)] on the one hand, and cross-regional processes [the Belén Communiqué, Responsible Artificial Intelligence in the Military Domain (REAIM), the Caribbean Community (CARICOM) Declaration on Autonomous Weapons Systems, the Freetown Communiqué on Autonomous Weapons Systems by the Economic Community of West African States (ECOWAS), the Philippines conference on Indo-Pacific Perspectives on Autonomous Weapons Systems, and the Vienna Conference on Autonomous Weapons Systems] on the other. In the same spirit, Daniele Amoroso presented a framework for implementing the concept of MHC in global governance efforts. In his submission, Amoroso developed a principled approach in which human control serves as a fail-safe actor, an accountability attractor, and a moral agency enactor, to ensure that meaningful human control is achieved in algorithmic warfare. Susanne Beck questioned the concept’s utility in ensuring individual criminal responsibility and pondered whether the necessity of ensuring meaningful human control may lead to individuals becoming scapegoats in the process of responsibility attribution. In closing, Gabriel Udoh’s presentation contrasted Western and African perspectives on autonomy in the context of AWS. Western views on autonomy emphasise individualism and independence; African perspectives, in contrast, are rooted in communal values, emphasising interconnectedness and collective decision-making. Key African philosophies such as Ubuntu, Ukama, communalism, and spirituality redefine autonomy as interdependent and collective. Udoh stressed the importance of community consent and ethical considerations in deploying and regulating AWS.

Way Forward

AWS are viewed as a remedy for human shortcomings in warfare, yet various presenters at this conference showed how these systems are inseparable from their human origins. As such, we ought to be more critical of the ideologies, practices, and technologies that animate and increasingly technologise conflict in contemporary worldmaking. There is an urgent need for critical perspectives on autonomy in weapons that investigate not only its geopolitical impacts but also the practices of human-machine interaction in interface-based decision-making systems, victim-centred approaches, and a comprehensive understanding of how worldmaking practices and socio-technical imaginaries influence contemporary and future warfare. We hope that the work of this conference will influence further developments in the study of AWS from different perspectives, with strong engagement, including pronounced Global South contributions, on understandings of autonomy in weapons and its impact.


Written by Stop Killer Robots

With growing digital dehumanisation, the Stop Killer Robots campaign works to ensure human control in the use of force. www.stopkillerrobots.org