The increasing use by the United States and other countries of unmanned aerial vehicles—drone aircraft—capable of lethal force has made the debate over the ethics, law, and regulation of autonomous weapons systems more urgent.

Today’s weapons-carrying drones are not “autonomous”—capable of acting in real-time through their own programmed decision-making—but instead are controlled in real-time by humans. Technological advances and perceptions of strategic and operational advantage, however, are propelling research and development (R&D) toward genuinely autonomous weapons systems. The design and engineering underlying this R&D necessarily include not only technical and material considerations, but also assumptions about how, where, and why such systems will be used—and these include fundamental legal and ethical considerations about what constitutes a lawful weapon and its lawful use.


Debates over the legitimacy of particular weapons or their legitimate use go back to the beginnings of the laws and ethics of war. In some cases, legal prohibitions on the weapon system as such eroded, as happened with submarines and airplanes, and what survived was a set of legal rules for the use of the new weapon. In other cases, such as the ban on the use of poison gas, legal prohibitions took hold.

Where in this long history of new weapons and their ethical and legal regulation will autonomous robotic weapons fit? What are the features of autonomous robotic weapons that raise ethical and legal concerns? And how should they be addressed, as a matter of law and policy?  

One answer to these questions is: wait and see. It is too early to know where the technology will go, so the debate over ethical and legal principles for robotic autonomous weapons should be deferred until a system is at hand. A second answer is: ban it all—the use of these systems, their production, and even efforts at technological development—because such systems can never be good enough to replace human judgment.

Both these views are short-sighted and mistaken. We shouldn’t wait—this is the time, before technologies and weapons development have become “hardened” along a particular path whose design architecture is difficult to change, to take account of the law and ethics that ought to inform and govern autonomous weapons systems. This is also the time—before ethical and legal understandings of autonomous weapon systems become hardened in the eyes of key constituents of the international system—to propose and defend a framework for evaluating them that simultaneously advances strategic and moral interests.

An Algorithm for Moral Decision-Making?

The basic legal and ethical principles governing the introduction of any new weapon are distinction (or discrimination) and proportionality. Distinction says that for a weapon to be lawful, it must be capable of being aimed at lawful targets, in a way that discriminates between military targets and civilians and civilian objects. Proportionality says that even if a weapon meets the test of distinction, any use of it must also involve an evaluation that sets the anticipated military gain against the anticipated harm to civilian persons or objects; the harm to civilians must not be excessive relative to the expected military gain.

Some leading roboticists have been working on creating algorithms or artificial intelligence systems for autonomous weapons that can take these two fundamental principles into account. Difficult as this may seem to any experienced law-of-war lawyer, these are the fundamental conditions that the ethically designed and programmed autonomous weapons system would have to satisfy and therefore what a programming development effort must take into account.

If this is the optimistic vision of the robot soldier of, say, decades from now, it is already subject to four main categories of objection. The first is a general empirical skepticism that machine programming and artificial intelligence could ever achieve the requisite intuition, cognition, and judgment to satisfy the fundamental ethical and legal principles of distinction and proportionality. This skepticism is essentially factual, a question of how technology will evolve over decades.

While mindful of the risks of over-reliance on technological fixes, we do not want to rule out such possibilities—including the development of technologies of war that might reduce risks to civilians by making targeting more precise and controlled (especially compared to human-soldier failings that may be exacerbated by fear, vengeance, or other emotions). Articulation of the tests of lawfulness that autonomous systems must ultimately meet helps channel technological development toward the law of war’s protective ends.

A second objection is a moral one, and says that it is simply wrong per se to take the human agent, possessed of a conscience, entirely out of the firing loop. This is a difficult argument to address, since it stops with a moral principle that one either accepts or does not. Moreover, it raises a further question as to what constitutes the tipping point into impermissible autonomy given that the automation of weapons functions is likely to occur in incremental steps.

A third objection holds that autonomous weapons systems that remove the human being from the firing loop are unacceptable because they undermine the possibility of holding anyone accountable for what, if done by a human soldier, might be a war crime. If the decision to fire is taken by a machine, who should be held responsible for mistakes: The soldier acting along with it? The commander who chose to employ it? The designer who programmed it in the first place? Post-hoc judicial accountability in war is just one of many mechanisms for promoting and enforcing compliance with the laws of war, though, and devotion to individual criminal liability as the presumptive mechanism of accountability risks blocking the development of machine systems that would, if successful, reduce actual harms to civilians on or near the battlefield.

A final objection to autonomous weapons systems is that removing human soldiers from risk and reducing harm to civilians through greater precision diminishes the disincentive to resort to armed force in foreign crises. As a moral matter, this objection is subject to a troubling counter-objection: that this would entail forgoing easily obtained protections for civilians or soldiers in war for fear that, without in effect holding these humans hostage, political leaders would resort to war more than they ought. And in application, this concern is not special to autonomous weapons, since precisely the same objection is already raised with respect to remotely piloted drones or any other technological development that reduces the human and non-human costs of military operations.

While we find these objections unpersuasive, they all face in any case a practical difficulty: the incremental way autonomous weapon systems will develop. These objections are often voiced as though there is likely to be some determinable line between the human-controlled system and the machine-controlled one. It seems far more likely, however, that the evolution of weapons technology will be gradual, slowly and indistinctly eroding the role of the human in the firing loop.

Drone aircraft, for example, might over time be controlled more and more through automated pre-programming in order to match the beyond-human speed of the counter-systems. The incremental changes that gradually reduce the role of the drone’s human controller may extend to its targeting decisions, too.

Consider, too, the efforts to protect peacekeepers facing the threat of snipers or ambush in an urban environment: small mobile robots with weapons could act as roving scouts for the human soldiers, with “intermediate” automation—the robot might be pre-programmed to look for certain enemy-weapon signatures and to alert a human operator of the threat, who then decides whether or not to pull the trigger. In the next iteration, the system might be set with the human being not giving an affirmative command, but instead merely deciding whether to override and veto a machine-initiated attack. There are many different possible configurations by which machine autonomy can incrementally be introduced.

Covert or special operations will involve their own evolution toward incrementally autonomous systems. Imagine a future version of the Osama bin Laden compound raid, in which tiny surveillance drones equipped with facial recognition technology help identify the target earlier. It is not a large step to weaponize such systems, and then perhaps to go the next step of allowing them to act autonomously, perhaps initially with a remote human observer as a failsafe but with very little time to override programmed commands.

At some point in the not-distant future, another state or entity—say, China or Russia—might design, build, deploy, and sell an autonomous weapon system for battlefield use that it says meets the ethical and legal requirements for an autonomous weapon because it will be programmed to target only something—person or position—that is firing a weapon and is positively identified as hostile rather than friendly. A next-generation system might add mobility of its own, shifting from static defense to a mobile robot able not just to repel an attack but to give chase and hunt its prey. Such a system might start out with human controllers in the loop, but over time might gradually be automated not just with respect to where the mobile robots will go, but with respect to when they will fire their weapons (perhaps to deal with the possibility that a communications link back to human controllers could be jammed or hacked).

The Drawbacks of a Multilateral Solution

Besides the security and war-fighting implications, the U.S. government might have grave legal and ethical concerns about legally deficient foreign systems offered for sale on the international arms markets: picture a fully autonomous weapon system that has no way of accounting for the presence of civilians and no programming that would permit it to undertake a proportionality calculation. The United States would then find itself facing a weapon system on the battlefield that confers significant advantages on its user, but which the United States would not deploy itself because it does not believe the weapon is lawful.

In part because it is easier and faster for states that are competitively engaged with the United States to deploy systems that are, in the U.S. view, ethically and legally deficient, the United States does have a strong interest in seeing that development and deployment of autonomous battlefield robots are regulated, legally and ethically. Moreover, it would be reckless for the United States to pursue its own development of autonomous capabilities without a strategy—including a role for normative constraints—for responding to their military use by other states or actors.

These observations—and alarm at a possible arms race around these emerging and future weapons—lead many to argue that the solution lies in some form of multilateral treaty. A proposed treaty might be prohibitory, along the lines of the Ottawa landmines convention, or it might delineate acceptable uses of autonomous systems. Human Rights Watch recently called for negotiation of a sweeping multilateral treaty banning outright the use, production, and even development of “fully autonomous weapons” programmed to select and engage targets without human intervention.

Ambitions for multilateral treaty regulation in this context are misguided for several reasons. First, limitations on autonomous military technologies, although quite likely to find wide superficial acceptance among non-fighting states and some non-governmental groups and actors, will have little traction among those whose actions most matter in the real world. Even states and groups inclined to support treaty prohibitions or limitations will find it difficult to reach agreement on scope or definitions because lethal autonomy will be introduced incrementally. And, of course, there are the general challenges of compliance, including the collective action problems of failure and defection that afflict all such treaty regimes.

There are also serious humanitarian costs to prohibition, given the possibility that autonomous weapons systems could in the long run be more discriminating than, and ethically preferable to, the alternatives. Blanket prohibition precludes the possibility of such benefits. This is particularly so if prohibitions extend even to the development of components or technologies that might incrementally lead to much greater humanitarian protection in war.

Nevertheless, the dangers associated with evolving autonomous robotic weapons are very real, and the United States has a serious interest in guiding the development of international norms in this context. By “international norms” here, we do not mean only new binding legal rules—whether treaty rules or customary international law—but instead widely held expectations about legally or ethically appropriate conduct, whether formally binding or not. Such norms are important to the United States for guiding and constraining its internal practices; helping earn and sustain internal legal legitimacy and ethical buy-in from the officers and lawyers who would actually use such systems in the field; establishing common standards among the United States and its partners and allies to promote cooperation and joint operations; and raising the political and diplomatic costs to adversaries of developing, selling, or using autonomous lethal systems that run afoul of these standards.

A better approach than a global treaty for addressing these systems is the gradual development of internal state norms and best practices that, once worked out, debated, and applied to the United States’ own weapons development process, can be carried outward to discussions with others in the world. This requires a long-term, sustained effort combining internal ethical and legal scrutiny—including principles, policies, and processes—and external diplomacy.

We already know the core principles from a fundamental law-of-war framework: distinction and proportionality. A system must be capable of being aimed at lawful targets; but how good must that capability be in any particular circumstance? Proportionality requires that any use of a weapon must take into account collateral harm to civilians; this rules out systems that simply identify and aim at other weapons without taking civilians into account, but what is the standard of care for an autonomous lethal system in any particular circumstance?

The Wisdom of Tradition

These questions move from overarching ethical and legal principles to processes that make sure principles are concretely taken into account—not just down the road at the deployment stage but much earlier, during the R&D stage. It will not work to go forward with design and only afterwards, seeing the technology, decide what changes need to be made in order to make the system’s decision-making conform to legal requirements. By then, it may be too late.

The United States must develop a set of principles to regulate and govern its own advanced weapons systems for another reason: in order to assess the systems of other states. This requires that the United States work to bring along its partners and allies—including NATO members and technologically advanced Asian allies—by developing common understandings of norms and best practices as the technology evolves in often small steps. Just as development of autonomous weapon systems will be incremental, so too will development of norms about acceptable systems and uses be incremental.

To be sure, this proposal risks papering over enormous practical and policy difficulties. The natural instinct of the U.S. national security community—likewise that of any other major state power—will be to discuss little or nothing, for fear of revealing capabilities or programming details to adversaries, or inviting industrial espionage and reverse engineering of systems. Furthermore, one might reasonably question whether broad principles such as distinction and proportionality can meaningfully be applied and discussed publicly with respect to technological systems distinguishable only in terms of digital ones and zeroes buried deep in programmed computer code.

These concerns are real, but there are at least two mitigating solutions. First, the United States will need to resist its own impulses toward secrecy and reticence with respect to military technologies, recognizing that the interests those tendencies serve are counterbalanced here by interests in shaping the normative terrain on which it and others will operate militarily as technology quickly evolves.

Of course, there are limits to transparency here, on account of both secrecy concerns and the practical limits of persuading skeptical audiences about the internal and undisclosed decision-making capacities of rapidly evolving robotic systems. But the legitimacy of such inevitably controversial systems in the public and international view matters too. It is better that the United States work to set the global standard by actively explaining its compliance with it than to let other states or groups set it—whether they be those who would impose unrealistic, ineffective or dangerous prohibitions or those who would prefer few or no constraints at all.

A second part of the solution is to emphasize the internal processes by which the United States considers, develops, and tests its weapon systems. Even when the United States cannot disclose publicly the details of its automated systems and their internal programming, it should be quite open about its vetting procedures, including the standards and metrics it uses, both at the research and development stage and at the deployment stage. The U.S. Defense Department recently took an important step along these lines in promulgating a directive that clarifies its policy with regard to autonomy in weapon systems. This is the right approach.

Quite apart from the view that all these weapons and even related development should be banned outright, one might object to any of these proposed solutions on the ground that the United States should not unnecessarily constrain itself in advance to a set of normative commitments, given vast uncertainties about the technology and future security environment. This objection, however, fails to appreciate that, while significant deployment of highly autonomous systems may be far off, R&D decisions are already upon us. Moreover, shaping international norms is a long-term process, and unless the United States and its allies accept some risk in starting it now, they may lose the opportunity to do so later.

In the end, this is a rather traditional approach—relying on the gradual evolution and adaptation of longstanding law-of-war principles—to regulate what seems to many like a revolutionary technological and ethical predicament. Some view these developments as a crisis for the laws of war. To the contrary, provided we start now to incorporate ethical and legal norms into weapons design, the incremental movement from automation to genuine machine autonomy can be made to serve the ends of law on the battlefield.
