Part 2: Issues surrounding LAWS

“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow”
– Open Letter on Autonomous Weapons, Future of Life Institute

The development of Lethal Autonomous Weapons Systems (LAWS or ‘killer robots’) will redefine warfare. The decision to apply (potentially lethal) force will no longer be in the hands of humans. Robots will take over the front line, and soldiers may be removed from the battlefield completely.

Given the unprecedented nature of such a development, a wide range of issues arises. Here is a discussion of the most pressing ones.

 

  1. International law

The development of LAWS raises questions about their ability to abide by two core principles of international humanitarian law: distinction and proportionality.

The principle of distinction is fundamental to the legal protection of civilians. It requires parties to a conflict to distinguish between soldiers and civilians on the battlefield, setting a legal standard against the targeting of non-combatants.

The second principle is proportionality: whether the expected civilian harm of an attack outweighs its anticipated military advantage. Military actors must first assess the nature and level of a threat, and then judge the appropriate response in complex and evolving situations.

These issues require judgement calls that demand situational awareness and are often highly subjective, ambiguous and nuanced. The concern is that robots will be fundamentally unable to process and respond effectively to such situations, increasing the risk of disproportionate harm or the erroneous targeting of civilians.

 

  2. Moral imperative

The lack of human checks on inflicting violence is a key concern. Robots will lack human emotions such as compassion and empathy. The emotional weight and psychological burden of harming another human being act as checks on violent action. Automating the decision to inflict violence removes that moment of moral deliberation.

Without human morality serving as a check on the infliction of violence, LAWS could be abused in diabolical ways. Consider the potential for LAWS to end up in the hands of terrorists or dictators – LAWS could become ‘the Kalashnikovs of tomorrow’. Lethal, morally detached weapons in the hands of despots and criminals is a frightening prospect.

Furthermore, relegating life-and-death decisions to robots would undermine human dignity. Robots are incapable of understanding the value of a human life or the significance of its loss. Ceding human control over such decisions could undermine the value of life itself.

 

  3. Accountability

Who will be held accountable for the unlawful actions of robots? There is great uncertainty around this issue. Are robots independent actors? If not, will the commander, programmer or manufacturer be legally responsible? Questions of intention and foreseeability of harm need to be considered.

This uncertainty over where accountability lies (if anywhere) raises further issues. If human actors are not held accountable for harm caused by robots, there will be no legal deterrent against future violations. Accountability for unlawful actions also dignifies victims by recognising wrongs and punishing those who inflicted the harm.

 

  4. Increasing likelihood of warfare

The development of LAWS will lead to humans becoming increasingly disconnected and distant from the battlefield. Indeed, replacing humans with robots on the battlefield will make going to war easier.

Substituting robots for humans will decrease military casualties, which is undoubtedly positive. However, the human cost of warfare acts as a disincentive to go to war. Without that disincentive, leaders will be more likely to resort to warfare, which could destabilise international security.

 

  5. Arms race

If we follow the development of LAWS to its logical conclusion, we end up in a world where armed forces are composed entirely of machines. Where every actor responds to the same circumstances in a programmed manner, warfare effectively becomes a simulation: even before any action is taken, the outcome follows inevitably from the circumstances presented.

This may in fact reduce warfare, since both parties would know who would win without having to fight it out. However, it also means an AI arms race is unavoidable: in such a world, having the largest and most sophisticated LAWS capability means winning by default.

If any major military power were to begin developing LAWS, other countries would scale up their own investments in a rush to avoid falling behind, leading to a global arms race.

 

In summary, the key issues are:

  • LAWS will likely be unable to abide by the international humanitarian law principles of distinction and proportionality
  • The lack of moral checks will undermine the dignity and value of human life
  • It is uncertain who will be held responsible for robots’ actions
  • A lower human cost of war will increase the likelihood of conflict
  • Development will likely trigger a global arms race

So, with all these issues and uncertainties surrounding LAWS, why develop them at all? And how should their development be regulated? Click here to read part 3 of this series.
