Investigating the Kill Cloud: A Conference on Information Warfare, Autonomous Weapons, and AI
by Jennifer Menninger and Ishmael Bhila
Jennifer Menninger is a member of the Women’s International League for Peace and Freedom (WILPF). Ishmael Bhila is a Doctoral Researcher at Paderborn University and a Research Fellow at the University of Siegen. Both are members of the Stop Killer Robots Campaign. In this article, they document the proceedings of the “Investigating the Kill Cloud” conference that took place in Berlin, Germany from the 29th of November to the 1st of December 2024. In doing so, they map out avenues for multistakeholder and multidisciplinary collaboration to address the dehumanising effects of the “kill cloud”.
In an era where Artificial Intelligence is reshaping the landscape of warfare, the “Investigating the Kill Cloud” conference held in Berlin from the 29th of November to the 1st of December 2024 served as a crucial venue for discussing the ethical, social, and political implications and realities of AI and data-driven technologies in modern combat. Organized by the Disruption Network Lab and the “Meaningful Human Control: Between Regulation and Reflexion” (MEHUCO) subproject ‘Swarm Technologies: Control and Autonomy in Complex Weapons Systems’ at Paderborn University, the conference brought together experts, whistleblowers, and activists to delve into the complexities of what has been termed the “Kill Cloud.” We were among the experts who contributed during an internal roundtable on the third day, sharing insights about the Stop Killer Robots campaign’s activities and discussing potential collaborative projects with other specialists in the field.
This article serves two purposes: first, to summarise the various ideas that came out of the “Investigating the Kill Cloud” conference; and second, to map a way forward for fruitful collaborations among civil society, whistleblowers, academia, other practitioners, and the victims of the excesses and injustices inflicted through the use of algorithms and autonomy in weapons systems. As members of the Stop Killer Robots Campaign, our motivation is to achieve a world where humans are not reduced to “ones and zeros”, where human dignity takes primacy over militarism and the military logic that reduces human lives to statistics.
The conference was therefore an important step towards creating spaces in which all those concerned about the unrestricted automation of warfare can deliberate. Given the excesses of the “kill cloud”, the urgency of new international law to govern the increasing automation of warfare has never been clearer.
Understanding the Kill Cloud
The term “Kill Cloud,” introduced by drone whistleblowers Cian Westmoreland and Lisa Ling, encapsulates the intricate networked systems that underpin contemporary warfare. These systems, which integrate AI, surveillance, and automated decision-making, raise profound ethical dilemmas, particularly in the context of ongoing global conflicts, for example in Gaza and Ukraine. The conference aimed to unpack these complexities, fostering a critical dialogue on the implications of such technologies.
Keynote speakers, including Lisa Ling (Whistleblower, former Technical Sergeant, US Air Force Drone Surveillance Programme), Jack Poulson (Executive Director of Tech Inquiry), Naomi Colvin (Whistleblower Advocate and UK/Ireland/Belgium Programme Director at Blueprint for Free Speech), and Joana Moll (Artist and Researcher, Professor of Networks at the Academy of Media Arts Cologne), provided critical insights into the ethical and social dimensions of automated warfare during a keynote panel, moderated by Tatiana Bazzichelli (Director, Disruption Network Institute). These insights were a result of long-term research by the Disruption Network Institute’s associated fellows who presented the results documented in their research papers. The research papers, available for public access, highlight the urgent need for scrutiny and accountability in the face of rapidly advancing military technologies.
Disarming the Kill Cloud
The panel “Disarming the Kill Cloud,” moderated by Jutta Weber (Professor for Media, Culture & Society at Paderborn University), critically examined the implications of military AI-driven human-machine systems. Lucy Suchman (Professor Emerita at Lancaster University) emphasized the normalization of data in facilitating targeted assassinations, urging participants to scrutinise the production of data in order to delegitimize these operations. Erik Reichborn-Kjennerud (Senior Research Fellow at the Norwegian Institute of International Affairs) introduced the concept of martial epistemology, illustrating how military — rather than scientific — knowledge shapes our understanding of violence and targeting practices. Marijn Hoijtink (Associate Professor in International Relations and Principal Investigator of PLATFORM WARS at the University of Antwerp) discussed the role of digital platforms in modern warfare, revealing how they reinforce the logic of targeting through data integration and analysis. Elke Schwarz (Associate Professor at Queen Mary University London) raised ethical concerns about prioritizing ‘know-how’ over ‘know-what’ in military AI, warning that this shift undermines accountability and ethical reasoning in warfare.
The Airspace Tribunal
Shona Illingworth (Artist and Professor at the University of Kent) and Anthony Downey (Professor of Visual Culture at Birmingham City University) led a particularly thought-provoking discussion on the pressing issues surrounding aerial hyper-surveillance. They highlighted the pervasive use of Unmanned Combat Aerial Vehicles (UCAVs) and the psychological impact of indefinite aerial surveillance on global populations. The Airspace Tribunal, which they established in 2018, advocates for a new human right to live free from physical and psychological threats from above, emphasizing the need for accountability in the face of technological advancements.
Automated Surveillance and Targeted Killing in Gaza
A panel featuring Matt Mahmoudi (Researcher & Advisor on AI & Human Rights at Amnesty Tech and Assistant Professor in Digital Humanities at Cambridge University), Sophia Goodfriend (Post-Doctoral Fellow at Harvard Kennedy School’s Middle East Initiative and Journalist at +972 Magazine), and Khalil Dewan (PhD Nomos Fellow in Law at SOAS University), moderated by Matthias Monroy (Editor of Bürgerrechte, Polizei/CILIP and nd.Der Tag), tackled the alarming intersection of automated surveillance and targeted killings in Gaza. Mahmoudi described how Gaza has transformed into a testing ground for violent technologies, with AI enhancing military operations and imposing extensive surveillance on Palestinians. Goodfriend unpacked the origins of Israel’s AI-powered targeting systems, revealing how they are built on a vast, unlawful surveillance infrastructure. Dewan focused on the experiences of those subjected to targeted killings, raising critical questions about accountability and the evolving definitions of legitimate targets in modern conflict. This discussion underscored the urgent need to reevaluate the ethical frameworks guiding military operations in the age of AI.
Fostering Collaboration for Impact
The conference concluded with a roundtable aimed at fostering collaboration among whistleblowers, researchers, artists, journalists, human rights defenders, and digital culture experts. This interdisciplinary approach provided a platform for participants to plan subsequent research phases, public events, and advocacy efforts. The discussions emphasized the importance of integrating diverse perspectives to enhance the impact of their work. In these discussions, we (Ishmael Bhila and Jennifer Menninger) highlighted the critical role of advocacy in shaping the discourse around autonomous weapons and military AI. Our insights, along with those of other participants, underscored a collective commitment to ensuring that the ethical implications of these technologies are thoroughly examined and addressed.
Way Forward: Urgent Need for Action
The “Investigating the Kill Cloud” conference exposed the realities of information warfare, from its impact on populations in Gaza through massive surveillance techniques and unethical decision support systems, to the physiological and psychological harm inflicted on unsuspecting civilians in Afghanistan, Ethiopia, and elsewhere. The conference showed how the scientific and technological practices that underpin modern warfare are fraught with mistakes, experimentation, and an unacceptable normalisation of the treatment of innocent lives as collateral damage. It also showed how the increased involvement of tech companies in the development and use of autonomous weapons systems and related technologies has not only transformed war, targeting, and the taking of human lives into a business, even for start-up companies, but has also increased the number of actors who are immune from the consequences of war and exempt from legal repercussions despite causing unimaginable harm to humans in far-away places.
The challenges posed by the “kill cloud” require urgent action. First, all these problems are occurring in an international legal vacuum. There is an urgent need for a legally binding instrument that stipulates what is prohibited and what should be regulated.
The targeting of (the wrong) humans has become a normalised tragedy against which humanity should urgently intervene. This is why a new legally binding instrument should ban the targeting of humans by autonomous systems.
Second, with the discussions on autonomous weapons systems having gone on in the United Nations Convention on Certain Conventional Weapons (CCW) for more than a decade now without an outcome in sight, it has become ever more imperative to move the discussions to a more inclusive forum. This is necessitated by the fact that most of those who have been affected by drone warfare, surveillance, and the automation of violence in countries like Syria, Afghanistan, Somalia, Mali, Niger, Burkina Faso, Ethiopia, and others have barely had the opportunity to voice their concerns in the CCW. Third, the realities of algorithmic warfare, especially as they pertain to its impact on real lives — something the situations in Gaza and Ukraine have exposed as real and not just theoretical — cannot be tackled without deliberate cooperation.
The conference brought together experts, computer scientists, activists, artists, and whistleblowers, all of whom are key if the ongoing injustices are to be addressed. The Stop Killer Robots Campaign has worked with academics, politicians, diplomats, students, roboticists, physicists, and the United Nations, among others, to call for a legally binding instrument on autonomous weapons systems. These collaborations must broaden their scope to reveal ongoing issues, much as whistleblowers have done; critically examine the systemic, structural, and epistemological foundations of these technologies; and propose and implement effective solutions to address a situation that is increasingly getting out of control.
Disclaimer: The views in this article are the authors’ and do not necessarily represent the position of the Stop Killer Robots Campaign, the Disruption Network Lab, or the MEHUCO research network.
Watch video recordings of the conference panels and read associated research papers.