“Machines have long served as instruments of war…now, there is a real threat that humans would relinquish their control and delegate life-and-death decisions to machines” – Bonnie Docherty, senior arms division researcher at Human Rights Watch
Artificial Intelligence is the next big thing and its continued expansion into all aspects of everyday life seems inevitable. However, one conversation that is missing from mainstream public discourse, despite its grave implications, is the military potential of AI.
Lethal autonomous weapons systems (LAWS) [a.k.a. ‘killer robots’] are machines that would be able to independently select and engage targets without any human intervention.
LAWS do not yet exist; however, there is a clear shift towards greater autonomy for machines in armed conflicts, with many precursors already being deployed. Experts predict that LAWS could be developed within 20 to 30 years.
Currently, human judgement underlies the use of machines in a military capacity. Human beings act as wilful agents that make judgements to control the application of force.
Machines in this context are broadly defined by one of two capacities:
- Information generation, gathering and analysis – to inform humans to make better decisions
- Application of force – to execute actions based on human decisions
Importantly, the machine itself has no role in the decision to apply force.
This decision-making structure even applies to advanced remotely operated vehicles such as drones. Drones still require human pilots who decide what to target and when to fire.
An extension of this model leads to a shift from full human control towards human oversight of automated machines. These systems are able to select targets and deliver force automatically under criteria set by human operators. The role of humans is reduced to supervising and overriding the robots’ actions.
Examples of such machines already exist in the form of defense systems. These include Israel’s Iron Dome and the US Phalanx, which are programmed to automatically respond to incoming threats.
By introducing machines with the capacity to make decisions without human intervention, such systems have already redefined the role of human involvement in warfare. However, because humans retain a supervisory capacity, these systems still operate within the realm of ‘meaningful human control’.
LAWS would go a step beyond by completely eliminating the role of human intervention. Robots, once deployed, would be completely independent in their decision-making.
So, what are the key issues surrounding LAWS? Click here to read part 2 of this series.