Part 2: Issues surrounding LAWS

“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow” – Open Letter on Autonomous Weapons, Future of Life Institute

The development of Lethal Autonomous Weapons Systems (LAWS or ‘killer robots’) will redefine warfare. The decision to apply (potentially lethal) force will no longer be in the hands of humans. Robots will take over the front line, and soldiers may be removed from the battlefield completely.

Given the unprecedented nature of such a development, many wide-ranging issues arise. This part discusses the most pressing of them.


  1. International law

The development of LAWS raises questions about their ability to abide by two core principles of international humanitarian law: distinction and proportionality.

The principle of distinction is fundamental to the legal protection of civilians. It requires parties to a conflict to distinguish between combatants and civilians on the battlefield, setting a legal standard against the targeting of non-combatants.

The second principle is proportionality – whether the expected civilian harm of an attack outweighs its anticipated military advantage. Military actors must first assess the nature and level of a threat, and then judge the appropriate response to it in complex and evolving situations.

Both principles require judgement calls that demand situational awareness and are often subjective, ambiguous and nuanced. The concern is that robots will be fundamentally unable to process and respond effectively to such situations, increasing the risk of disproportionate harm or the erroneous targeting of civilians.


  2. Moral imperative

The lack of human checks on the infliction of violence is a key concern. Robots lack human emotions such as compassion and empathy, and the emotional weight and psychological burden of harming another human being act as checks on violent action. Automating the decision to inflict violence removes that moment of moral deliberation.

Without human morality serving as a check on the infliction of violence, LAWS could be abused in diabolical ways. Consider the potential for LAWS to end up in the hands of terrorists or dictators – they could indeed become ‘the Kalashnikovs of tomorrow’. Lethal, morally detached weapons in the hands of despots and criminals is a frightening prospect.

Furthermore, delegating life-and-death decisions to robots undermines human dignity. Robots are incapable of understanding the value of a human life or the significance of its loss. Ceding human control over that decision could undermine the value of life itself.


  3. Accountability

Who will be held accountable for the unlawful actions of robots? Great uncertainty surrounds this issue. Are robots independent actors? If not, will the commander, programmer or manufacturer be legally responsible? Questions of intention and the foreseeability of harm must be considered.

This uncertainty over where accountability lies (if anywhere) raises further problems. If human actors are not held accountable for harm caused by robots, there will be no legal deterrent against future violations. Accountability for unlawful actions also dignifies victims by recognising the wrongs done to them and punishing those who inflicted the harm.


  4. Increasing likelihood of warfare

The development of LAWS will leave humans increasingly disconnected and distant from the battlefield. Indeed, replacing humans with robots on the battlefield will make going to war easier.

Substituting robots for humans will reduce military casualties, which is undoubtedly a positive. However, the human cost of warfare acts as a disincentive to go to war. Without that disincentive, leaders will be more likely to resort to warfare, which could destabilise international security.


  5. Arms race

Following the development of LAWS to its logical conclusion, we end up in a world where armed forces are composed entirely of machines. Where every actor responds to the same situation in a programmed manner, warfare effectively becomes a simulation: even before any action is taken, the result follows an inevitable path from the circumstances presented.

This might in fact reduce warfare, since both parties would know who will win without having to fight it out. But reaching that point also means an AI arms race is unavoidable: in such a world, whoever has the largest and most sophisticated LAWS capability wins by default.

If any major military power were to begin developing LAWS, other countries would rush to scale up their own investments to avoid falling behind, leading to a global arms race.


In summary, the key issues are:

  • LAWS will likely be unable to abide by principles of international humanitarian law
  • The lack of moral checks will undermine the dignity and value of human life
  • Uncertainty over who will be held responsible for robots’ actions
  • Lower human cost will increase the likelihood of conflict
  • The potential for a global arms race

So, with all these issues and uncertainties surrounding LAWS, why develop them at all? And how should their development be regulated? Click here to read part 3 of this series.

Part 1: Introduction to LAWS

“Machines have long served as instruments of war…now, there is a real threat that humans would relinquish their control and delegate life-and-death decisions to machines” – Bonnie Docherty, senior arms division researcher at Human Rights Watch

Artificial Intelligence is the next big thing, and its continued expansion into all aspects of everyday life seems inevitable. However, one conversation missing from mainstream public discourse, despite its grave implications, is the military potential of AI.

Lethal autonomous weapons systems (LAWS) [a.k.a. ‘killer robots’] are machines that will be able to independently select and engage targets without any human intervention.

LAWS do not yet exist; however, there is a clear shift towards greater autonomy for machines in armed conflicts, with many precursors already being deployed. Experts predict that LAWS could be developed within 20 to 30 years.

Currently, human judgement underlies the use of machines in a military capacity. Human beings act as wilful agents that make judgements to control the application of force.

Machines in this context serve in one of two broad capacities:

  1. Information generation, gathering and analysis – to inform humans to make better decisions
  2. Application of force – to execute actions based on human decisions

Importantly, the machine itself has no role in the decision to apply force.

[Diagram: full human control – humans make all decisions to apply force]

This decision-making structure even applies to advanced remotely operated vehicles such as drones. Drones still require human pilots who decide what to target and when to fire.

An extension of this model shifts from full human control towards human oversight of automated machines. These systems can select targets and deliver force automatically under criteria set by human operators. The role of humans is reduced to supervising and, if necessary, overriding the robots’ actions.

[Diagram: human oversight – automated systems select and engage targets under human supervision]

Such machines already exist in the form of defense systems, including Israel’s Iron Dome and the US Phalanx, which are programmed to respond automatically to incoming threats.

By introducing machines with the capacity to make decisions without human intervention, such systems have already redefined the role of human involvement in warfare. However, their supervisory design means they still operate within the realm of ‘meaningful human control’.

LAWS would go a step further by eliminating human intervention entirely. Once deployed, robots would be completely independent in their decision-making.

[Diagram: full autonomy – LAWS select and engage targets without any human intervention]

So, what are the key issues surrounding LAWS? Click here to read part 2 of this series.