Five Reasons Why Millennials Need to Ban Killer Robots

Authored by Caroline Fox, who interned at Mines Action Canada as a Communications Assistant for the Campaign to Stop Killer Robots from September to December 2019 while undertaking graduate studies at the University of Ottawa.

A millennial sits in the corner of a room, reading, headphones on, surrounded by millennial furnishings, including houseplants.
Photo by Austin Distel.

The vision of Millennials as the future leaders on world issues is an unsettling notion for older generations. Labelled as lazy, unmotivated, and late because of the long Starbucks line, the Millennial generation has its own prejudices working against it as it enters the workforce.

However, despite our sceptics, we're seeing the rise of a Millennial-run workforce, and soon, Millennial international leaders. This demographic change will largely determine which issues gain the most attention. Simply put, we are the future decision-makers of our nations, and what we say goes.

One of the greatest challenges faced by all international leaders is ensuring the safety and security of their citizens. This includes policing local communities, securing national borders, and deciding whether to participate in international conflicts.

Increasingly, a minority of countries have begun funding the development of fully autonomous weapons, also known as 'killer robots': systems built to select targets and kill free from human control. Killer robots pose significant risks to humanity. Here are five reasons why Millennials need to ban killer robots.

1. Killer robots calculate — they don’t think

Killer robots’ actions will be guided by numbers. Fully autonomous weapons, at their most basic level, will be machines running processes. Unlike a human making decisions in real time, a robot engaged in policing or war acts on pre-established algorithms, kinda like your family’s desktop that constantly needs updating. These algorithms help the machine’s sensor systems examine its surroundings and determine whether or not a target profile is in its midst.

But there is no reflective mechanism. Killer robots would run on a “sensor-analysis-force process” whereby the machine collects data on an external source, analyzes it against its internal programming, and applies force if the conditions to do so are met (read more at www.article36.org). However, how machine-learning systems analyze data and reach their conclusions often remains opaque even to their developers, adding to the complexity of this issue. This brings into question the ethics behind the code that programs a system to kill.
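To make that “sensor-analysis-force” idea concrete, here is a minimal, purely illustrative sketch of such a loop. Every name, profile, and sensor reading below is hypothetical; no real weapons system is claimed to work this way.

```python
# Purely illustrative sketch of a "sensor-analysis-force" loop.
# All names, profiles, and data are hypothetical inventions for this article.

def matches_profile(reading: dict, profile: dict) -> bool:
    """Analysis step: does the sensor reading match the pre-programmed profile?"""
    return all(reading.get(key) == value for key, value in profile.items())

def sensor_analysis_force(sensor_readings, target_profile):
    """Collect data, analyze it against internal programming, apply force if matched."""
    decisions = []
    for reading in sensor_readings:
        if matches_profile(reading, target_profile):
            decisions.append(("apply force", reading))  # no human review, no reflection
        else:
            decisions.append(("ignore", reading))
    return decisions

# Hypothetical sensor data: the machine only sees attributes, never people.
readings = [
    {"uniform": "black", "armed": True},
    {"uniform": "black", "armed": False},
]
profile = {"uniform": "black", "armed": True}
print(sensor_analysis_force(readings, profile))
```

Notice what is missing: the machine can only ask “do the conditions match?”, never “should I?”.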

2. Algorithms behind killer robots are inherently racist

We’ve all heard it before — Millennials are too sensitive. The driving force behind this misunderstanding is the degree to which younger generations label normalized behaviour as wrong.

Our approach isn’t perfect. After all, we are also the generation that ate Tide Pods. However, it goes without saying that Millennials strive for equality among vulnerable groups that face systemic discrimination, including women, people of colour, LGBTQ+ communities, minimum-wage workers, and different religious groups.

Killer robots challenge these efforts through the pre-established algorithms that identify the kill target. These algorithms are laden with the biases of those who dictate the production of the machinery. A robot, unlike a human, cannot know who a specific person is or what they are doing. Rather, it scans for identification factors that match the criteria programmed into its operating system. This programming will be forced to rely on characteristics like skin colour, bone structure, facial markings, location, or patterns of behaviour. This opens the door to failure when a robot has to distinguish between soldiers and civilians who look similar but have entirely different motives.

3. Killer robots make mistakes, too

We’ve all been there, Millennials. You go to open your laptop in your morning class, only to find out that the system has crashed. Your schoolwork is gone, and life as you know it is over.

Hardware mishaps are the “uncommon common”: they only happen once in a while, but inevitably, they will find you. We know this because hardware isn’t perfect, despite the specialized training of the engineers behind even the most sophisticated MacBook. Yet in comparison to a killer robot, a MacBook’s system is quite simple. The danger with fully autonomous weapons is whether any roboticist, engineer, or programmer can truly be held accountable for the final product. If we haven’t figured out how to build a basic laptop that never fails, how can we expect flawless execution from a machine in situations where the stakes are far higher than a student’s lost schoolwork?

4. Killer robots promote instability

One of the basic arguments about engaging in war centres on worth. The value of political and economic gain is central to this debate. We know all about personal gain, Millennials! It’s sorta like when adults call us selfish.

The case for killer robots rests on the idea that citizens will no longer have to engage in physical warfare, making war more precise and minimizing or eliminating human casualties from the equation. But this isn’t entirely true.

There are many questions about killer robots that even the most sophisticated engineers, roboticists, and lawmakers have yet to answer. There is very little evidence to suggest autonomous weapons will improve civilian protection. In fact, civilians in war-torn countries are likely to be at greater risk of being mislabelled as targets due to similarities in appearance with combatants. Consider, for example, ISIS combat uniforms, which involve black face masks that cover all but the eyes. This poses risks for nearby women wearing black niqabs or burkas, and it puts children in the vicinity in danger.

Civilians of countries deploying killer robots also have cause for concern. If fully autonomous weapons are developed and used, they could theoretically be used in, or even replace, domestic policing. This puts vulnerable communities that already face over-policing or police violence at risk of being monitored by a robotic police force. How do we expect a fully autonomous weapon to respond to a 911 call that gives little description of the perpetrator’s identity? Can we expect a robot to handle an unforeseen attack when it must identify the target in real time?

These types of questions go unanswered. In a nutshell, we’re developing machinery blindly.
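The mislabelling risk can be shown with a toy example. Assume, hypothetically, a target filter keyed only on appearance criteria (the criteria and records below are invented for illustration, not drawn from any real system):

```python
# Hypothetical illustration of the mislabelling risk: a filter keyed only on
# appearance cannot tell a combatant from a civilian who looks similar.

TARGET_CRITERIA = {"face_covered": True, "clothing_colour": "black"}  # invented criteria

def is_flagged(person: dict) -> bool:
    """Return True if every programmed criterion matches the person's appearance."""
    return all(person.get(key) == value for key, value in TARGET_CRITERIA.items())

combatant = {"face_covered": True, "clothing_colour": "black", "role": "combatant"}
civilian = {"face_covered": True, "clothing_colour": "black", "role": "civilian in a niqab"}

print(is_flagged(combatant))  # True
print(is_flagged(civilian))   # True: indistinguishable to the machine
```

The “role” field is exactly what matters morally, and exactly what appearance-based criteria never see.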

5. Killer robots devalue life

It’s no secret that the challenges facing Millennials’ long-term wellbeing are unique compared to the global concerns of past generations. This isn’t to knock the impacts of past world tragedies. After all, no Millennial would cope well with a massive economic depression. That type of suffering just isn’t the content our Instagram followers are looking for.

However, unlike our predecessors, Millennials face a global challenge that is entirely out of any one individual’s hands: climate change. Without collective commitment from all actors, from individual day-to-day habits to the large-scale production practices of multinational corporations, natural disasters will eventually destroy us. The end of humankind will be impersonal: a mere result of the accumulation of harmful fossil fuels, toxic waste, and utter disrespect for the habitats of different species.

In this light, we begin to see many parallels between climate change and killer robots. Climate change, like a robot, does not recognize a human being as an intrinsic entity; a human is a small cog in the larger system it destroys, and it does not consider the fates of bystanders in its path. Similarly, killer robots do not recognize a target as a human being, but as an object to engage. Their mission does not register a kill as a significant event, only the completion of a task. What it boils down to, Millennials, is whether we want soldiers, whether machine or human, to understand the significance of a life. We should expect the same diligence from any agent given the authority by the state to end a person’s life as we expect from police or military personnel.

It’s pretty simple, Millennials. Killer robots pose more risks than rewards. Technology’s role was never to replace us, but to assist us in day-to-day tasks. We need to look at the circumstances and ask ourselves if this is the type of world we want to live in.

Change is within our reach. Let’s just make sure we show up on time to see it happen.

Visit stopkillerrobots.org for more on killer robots and what you can do about them. #TeamHuman

Graphic with Caroline Fox holding her hand out to the camera with ‘Stop Killer Robots’ on her hand.

We are a coalition of non-governmental organizations working to preemptively ban fully autonomous weapons (#KillerRobots) www.stopkillerrobots.org