Artificial intelligence (AI) is already used in a range of military applications, from logistics to navigation to HR, but the most debated and the most legally and ethically contentious application is, of course, the use of AI in weapons systems. A primary source of concern, hope, and hype around autonomous and AI-enabled weapons systems (AWS) is the extent to which they can function outside of traditional human control.

For over a decade, an international debate has been under way over what human control of AWS must look like. Many argue that weapons that can function autonomously, especially when it comes to selecting and engaging human targets, are inherently illegal and unethical: only humans can be held legally responsible and accountable, and only humans can make ethical decisions.

Yet, even as these legal debates have unfolded, militaries and weapons manufacturers have rapidly added increasingly autonomous features to existing weapons systems while also developing new, AI-enabled weapons. Given the international consensus that only humans can be held responsible and accountable for weapons systems, the question of what human control or oversight looks like becomes ever more pressing.

Even when a system does what it is supposed to do, there are concerns about human control and human authority over its actions. The question becomes especially acute, however, when considering what might happen if an AWS launches an attack against the wrong target, against civilians, or against people or infrastructure protected by International Humanitarian Law (IHL).

The challenge of human control vs. autonomy

Defining and ensuring human control over an AWS is especially tricky because autonomy exists on a spectrum, with total human control at one end and full autonomy at the other.

For example, a knife is fully under human control, whereas with a gun, once the trigger is pulled, humans no longer have control over the bullet. Drones may be a mix of algorithms and remote piloting or may fly and function largely autonomously, and airplanes can practically fly themselves. The IAI Harpy, a loitering munition that has been in service for many years, is essentially an autonomous system, though technically its sensors cannot target humans directly. A growing number of systems can now function autonomously and could target humans.

Many people discuss AWS as if they were a new category of weapons system; in fact, an AWS is any weapons system built with any number of autonomous or AI-enabled capabilities. As a result, rather than representing an easily defined new category of weapon, AWS encompass a huge range of weaponry, sitting across most of the controlled-to-autonomous spectrum.

Examples of technologies that can increase autonomous capabilities in AWS include sensors for obstacle avoidance, GPS navigation, sensors for target identification, algorithms for target identification and facial recognition, algorithms that help systems fly smoothly even when remotely controlled, and communication technology to relay information to humans or to enable system-to-system communication, such as swarm technology.

With the adoption of such autonomous capabilities, weapons can be deployed more readily in locations that humans cannot access and in numbers far greater than the number of soldiers on the ground. How can human soldiers and commanders possibly maintain control of systems that are deployed beyond communication range or in numbers too great for human oversight?

One answer is to consider the humans involved beyond just the time of use.

The role of humans across the weapon’s lifecycle

Conventional weapons systems have taught us to look at the time of use. When something goes wrong with a conventional weapon, the problem is likely due to user error, a faulty system, or malicious intent. However, if something goes wrong with an AWS, because it can function so far outside of a commander’s or soldier’s control, determining who is responsible by looking only at the time of use will too often prove to be a futile task.

Instead, with AWS, it is more critical than ever to look at the full lifecycle of the weapon system.

For an autonomous system, the time of use is when the system is in its most autonomous state. Looking back across the timeline of development, however, human control increases. Using the IEEE-SA Lifecycle Framework as an example, humans have the most control during political decision-making, ideation, and early research and development. As the system is developed, human control shifts to responsibilities around testing and assurance of the system.

Human control and responsibility also include providing users and commanders with the training and preparation necessary to use the system correctly and to recognize signs that something is going wrong with it. Processes like Human Readiness Levels offer one method for building these practices into the design and development stages and for ensuring that the AWS does not move forward in its lifecycle until these assurances have been met.
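
To make the idea of lifecycle gating concrete, the following is a minimal sketch in Python. It is purely illustrative: the stage names, assurance labels, and the LifecycleGate class are hypothetical and are not drawn from the IEEE-SA Lifecycle Framework or the Human Readiness Levels process themselves.

```python
from dataclasses import dataclass, field

# Hypothetical lifecycle stages, invented for this sketch only.
STAGES = [
    "ideation",
    "research_and_development",
    "testing_and_assurance",
    "training_and_fielding",
    "use",
]

@dataclass
class LifecycleGate:
    """Blocks a system from advancing to the next lifecycle stage until every
    required assurance for its current stage has been recorded."""
    required: dict                                  # stage -> set of required sign-offs
    completed: dict = field(default_factory=dict)   # stage -> set of recorded sign-offs

    def record(self, stage, assurance):
        """Record a completed assurance (e.g., a signed-off review) for a stage."""
        self.completed.setdefault(stage, set()).add(assurance)

    def may_advance(self, stage):
        """True only when no required assurance for this stage is still missing."""
        missing = self.required.get(stage, set()) - self.completed.get(stage, set())
        return not missing

# Example: the system cannot leave testing until both a human-readiness review
# and an IHL legal review (hypothetical labels) have been signed off.
gate = LifecycleGate(required={
    "testing_and_assurance": {"human_readiness_review", "ihl_legal_review"},
})
gate.record("testing_and_assurance", "human_readiness_review")
print(gate.may_advance("testing_and_assurance"))  # False: legal review still missing
```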

This approach is not without precedent: a number of lessons can be learned from the highly automated and autonomous aerospace industry. For the purposes of this essay, the most important lesson is that, when a plane crashes, the response is consistently to determine whether the problem was due to user error or technical error, and then to look back through logs and records to identify who was responsible, whether for insufficient training of the user or for the technical components that caused the malfunction.

Human control over AWS is an understandably contentious topic, and in order to meet legal requirements and ethical guidelines, some level of control must be maintained. This may be nearly impossible to achieve when looking only at the time of use, but it may be possible to define and establish human control—and more importantly, human responsibility and accountability—when the full lifecycle is taken into account.
