With each new drone strike by the United States military, anger over the program mounts. On Friday, in one of the most significant U.S. strikes, a drone killed Pakistani Taliban leader Hakimullah Mehsud in the lawless North Waziristan region bordering Afghanistan. Coming as Pakistan is preparing for peace talks with the Taliban, the attack on this major terrorist stirred outrage in Pakistan and was denounced by the country's interior minister, Chaudhry Nisar Ali Khan, who said the U.S. had "murdered the hope and progress for peace in the region."

Recent reports from Amnesty International and Human Rights Watch have also challenged the legality of drone strikes. The protests reflect a general unease in many quarters with the increasingly computerized nature of waging war. Looking well beyond today's drones, a coalition of nongovernmental organizations—the Campaign to Stop Killer Robots—is lobbying for an international treaty to ban the development and use of "fully autonomous weapons."

Computerized weapons capable of killing people sound like something from a dystopian film. So it's understandable that some, alarmed by the moral challenges such weapons present, would support a ban as the safest policy. In fact, a ban is unnecessary and dangerous.

No country has publicly revealed plans to use fully autonomous weapons specifically designed to target humans, including drone-launched missiles. However, technologically advanced militaries have long used near-autonomous weapons for targeting other machines. The U.S. Navy's highly automated Aegis Combat System, for example, dates to the 1970s and defends against multiple incoming high-speed threats. Without such automated defenses, a ship would be helpless against a swarm of missiles. Israel's Iron Dome missile-defense system similarly responds to threats faster than human reaction times permit.

Contrary to what some critics of autonomous weapons claim, there won't be an abrupt shift from human control to machine control in the coming years. Rather, the change will be incremental: Detecting, analyzing and firing on targets will become increasingly automated, and the contexts in which such force is used will expand. As the machines become increasingly adept, the role of humans will gradually shift from full command to partial command to oversight, and so on.

This evolution is inevitable as sensors, computer analytics and machine learning improve; as states demand greater protection for their military personnel; and as similar technologies in civilian life prove that they are capable of complex tasks, such as driving cars or performing surgery, with greater safety than human operators.

But critics like the Campaign to Stop Killer Robots believe that governments must stop this process. They argue that artificial intelligence will never be capable of meeting the requirements of international law, which distinguishes between combatants and noncombatants and has rules to limit collateral damage. As a moral matter, critics do not believe that decisions to kill should ever be delegated to machines. As a practical matter, they believe that these systems may operate in unpredictable, ruthless ways.

Yet a ban is unlikely to work, especially in constraining states or actors most inclined to abuse these weapons. Those actors will not respect such an agreement, and the technological elements of highly automated weapons will proliferate.

Moreover, because the automation of weapons will happen gradually, it would be nearly impossible to design or enforce such a ban. Because the same system might be operable with or without effective human control or oversight, the line between legal weapons and illegal autonomous ones will not be clear-cut.

If the goal is to reduce suffering and protect human lives, a ban could prove counterproductive. In addition to the self-protective advantages for military forces that use them, autonomous machines may reduce risks to civilians by improving the precision of targeting decisions and better controlling decisions to fire. We know that humans are limited in their capacity to make sound decisions on the battlefield: Anger, panic and fatigue all contribute to mistakes or violations of rules. Autonomous weapons systems have the potential to address these human shortcomings.

No one can say with certainty how much automated capabilities might gradually reduce the harm of warfare, but it would be wrong not to pursue such gains, and it would be especially pernicious to ban research into such technologies.

That said, autonomous weapons warrant careful regulation. Each step toward automation needs to be reviewed carefully to ensure that the weapon complies with the laws of war in its design and permissible uses. Drawing on long-standing international legal rules requiring that weapons be capable of being used in a discriminating manner that limits collateral damage, the U.S. should set very high standards for the legal and ethical assessment of any research and development programs in this area. Standards should also be set for how these systems are to be used and in what combat environments.

If the past decade of the U.S. drone program has taught us anything, it's that it is crucial to engage the public about new types of weapons and the legal constraints on their design and use. The U.S. government's lack of early transparency about its drone program has made it difficult to defend, even when the alternatives would be less humane. Washington must recognize the strategic imperative to demonstrate new weapons' adherence to high legal and ethical standards.

This approach will not work if the U.S. goes it alone. America should gather a coalition of like-minded partners to adapt existing international legal standards and develop best practices for applying them to autonomous weapons. The British government, for example, has declared its opposition to a treaty ban on autonomous weapons but is urging responsible states to develop common standards for the weapons' use within the laws of war.

Autonomous weapons are not inherently unlawful or unethical. If we adapt legal and ethical norms to address robotic weapons, they can be used responsibly and effectively on the battlefield.

Mr. Anderson is a law professor at American University and a senior fellow of the Brookings Institution. Mr. Waxman is a professor at Columbia Law School and a fellow at the Council on Foreign Relations. Both are members of the Hoover Institution Task Force on National Security and Law.