Charting a new course for military AI: going beyond militaristic narratives and rosy depictions of warfare

by Federico Mantellassi

Stop Killer Robots
4 min read · Oct 24, 2023
An unmanned aerial vehicle overlaid with a blue, white, and yellow filter

The United Nations (UN) Disarmament Week, observed from 24–30 October, seeks to promote awareness and better understanding of disarmament issues and their cross-cutting importance. Young people have an important role to play in the promotion of disarmament as a method to protect people from harm and to build safer and more peaceful societies.

In honour of this important week, members of the Stop Killer Robots Youth Network were asked to share their thoughts on the themes of disarmament and autonomous weapons systems. Disclaimer: The blogs in this series do not necessarily constitute the opinions of Stop Killer Robots. Nor should they be considered the opinions and views of all Stop Killer Robots members.

Two fallacies have taken hold of the discussion surrounding military artificial intelligence (AI) and are greatly hampering progress toward a legally binding instrument on autonomous weapons systems (AWS). The first is the pervasive “AI race” narrative. Under this militaristic conception of global AI development, states must race ahead with the most disruptive AI applications lest they fall behind their competitors. The second is the “rosy” depiction of war that military AI enthusiasts conjure up when touting the technology’s benefits, portraying it as a panacea for armed forces. As a key moment to raise pressing disarmament issues, the UN Disarmament Week presents a unique opportunity to “myth-bust” the fallacies standing in the way of rules, norms and laws around the development and deployment of autonomous systems on battlefields.

The “AI race”: first to the bottom?

In 2021, the US National Security Commission on Artificial Intelligence advised the US government not to support a global prohibition on autonomous weapon systems. As justification, the report pointed to — among other reasons — the fear of falling behind competitors. Similarly, Palantir CEO Alex Karp recently warned: “Our adversaries will not pause to indulge in theatrical debates about the merits of developing technologies with critical military and national security applications. They will proceed.” Justifications such as these have now become commonplace in various AI fora, be they diplomatic, corporate, or military.

This narrative, however, stems from a militaristic understanding of technological development and international security, one in which technology serves the state’s military interests, and one that ignores the fact that a large majority of states support some form of legally binding instrument on AWS. Viewing military AI development through the framework of a race risks producing a “race to the bottom” among a minority of states who, for the sake of national security and out of fear of “falling behind”, erode ethical and legal norms. It is vital to reframe the issue as one of human security, focused on minimising human suffering rather than maximising military efficiency. Doing so will help pave the road to a legally binding instrument on AWS by making it evident that our collective responsibility lies in prioritizing humanitarian values over competitive militarization.

Calling it what it is.

Palantir’s recent demo of its “Artificial Intelligence Platform” portrayed an idealised version of warfare: one in which commanders have a complete picture of events, abundant data and resources, and, most importantly, a willing and passive adversary. This is consistent with a broader tendency to depict technology as a panacea for militaries. These portrayals of warfare are dangerous. By playing to the military’s existential desire to do away with the “fog of war”, they risk overselling what AWS could deliver. The promise of machine efficiency, certainty, and superior analytical capabilities might prove hard to ignore. War, however, is inherently human. It is ethically loaded, uncertain, adversarial, gritty, dirty, bloody, emotional, unstructured, and non-linear, and it requires abductive reasoning. The nature of war means that the military domain poses key structural challenges to the linear, inductive, structured environments to which AI is best suited. Depicting warfare as anything but what it truly is will only drive armed forces to adopt solutions that not only carry substantial risk but are also catastrophically brittle and ill-suited to the realities of warfare. For decision-making and the application of force, the consequences are too high to accept.

Conclusion

Reports from Ukraine suggest we are perilously close to “crossing the Rubicon” with regard to AWS. With an uptick in conflicts raging around the globe, the disarmament community has a moral responsibility to alleviate the already appalling suffering endured by affected populations. We have agency over how technologies are developed and deployed. Turning away from the militaristic narratives fuelling a race to the bottom in the development of disruptive AI-enabled capabilities, and depicting warfare realistically, can begin to give us back some of that agency. Ultimately, it can help us build stronger foundations for international cooperation and agreement on the regulation of autonomous weapons systems.

Federico Mantellassi is a Research and Project Officer at the Geneva Centre for Security Policy where he has worked since 2018. Federico’s research and writing focuses on how emerging technologies such as AI and neurotechnologies impact international security, warfare and geopolitics, as well as on the societal and ethical implications of their development and use. Federico is also the project coordinator of the GCSP’s Polymath Initiative.


Stop Killer Robots

With growing digital dehumanisation, the Stop Killer Robots campaign works to ensure human control in the use of force. www.stopkillerrobots.org