Introduction: Weapons, Ethics, and the Burden of Law
Throughout history, the creation of new weapons has unsettled the assumptions of statesmen, commanders, and jurists alike. The crossbow was once denounced as inhumane, aircraft were initially considered too dangerous for use in war, nuclear weapons forced the creation of entirely new doctrines of deterrence, and chemical agents demanded the strengthening of international treaty law. At every turn, the legal frameworks governing war—the Hague Conventions, the Geneva Conventions, and the customary law of armed conflict—have struggled to adapt to technologies of particular military significance. Artificial intelligence (AI) now presents a comparable challenge. Autonomous systems capable of selecting and engaging targets raise questions about long-standing rules of accountability and about whether AI can comply with core international humanitarian law (IHL) obligations, such as distinction and proportionality.
Central to concerns about AI-enabled autonomous weapon systems (AWS) is the issue of human control: the degree of human judgment required to guide and, potentially, constrain the performance of an AWS in a combat environment. But there are many ways to exercise human control beyond the role of operators tasked with supervising a deployed AWS. Legal advisors, and legal actors more generally, are an important element of human control over military AI. They are tasked with evaluating the performance of an AWS against the legal obligations of a particular nation to ensure the new weapon system can function in compliance with IHL. For AWS, this is a challenging task.
The legal community has faced similar dilemmas before, and the mixed record of regulating expanding bullets and chemical weapons offers valuable lessons for AI regulation. Importantly, this does not mean AI weapons are technically comparable to expanding bullets or chemical weapons—far from it. Nonetheless, it is useful to examine how regulation evolved in response to these earlier capabilities and to draw lessons for regulating AI-enabled warfare.
Expanding Bullets: Early Humanitarian Law in Practice
A precedent for prohibition, but fragile without enforcement
The late nineteenth century brought a debate over “expanding” bullets (also called “dum-dum” bullets), which flatten and expand on impact, causing horrific wounds. In 1899, the Hague Peace Conference prohibited their use on humanitarian grounds. This was one of the first times states codified a restriction not on grounds of military necessity but on the principle of limiting unnecessary suffering, a significant milestone in the evolution of IHL. Practice, however, did not always match principle. Many colonial powers argued that the prohibition on expanding bullets applied only to conflicts among European nations, while loopholes and weak enforcement mechanisms undermined compliance. The takeaway from the expanding bullets experience is that the mere existence of a legal instrument is not enough: for new weapons, the law requires not only codification but also credible means of verification and application across all contexts of war.
Chemical Weapons: The Power of Legal Taboo
From partial ban to comprehensive regime
World War I seared chemical weapons into the collective memory. The gas clouds over Ypres revealed the inadequacy of existing law and motivated the 1925 Geneva Protocol, which prohibited their use but left issues such as stockpiling and transfer unaddressed. Many states signed, but with significant reservations. Only later, with the 1993 Chemical Weapons Convention, did international law achieve greater reach, banning production, possession, and transfer, and empowering an international organization—the Organisation for the Prohibition of Chemical Weapons (OPCW)—with intrusive verification powers. In this case, the law evolved into something stronger: not just a formal treaty, but a taboo reinforced by monitoring institutions and near-universal condemnation. This trajectory demonstrates that effective regulation of new weapons depends not only on legal rules but also on the institutional capacity to enforce them and the normative power to delegitimize their use.
The Legal Obstacles to Regulation
Ambiguity, dual-use potential, strategic rivalry, and verification challenges all undermine the law.
The difficulty of regulating AI under international law lies in four interlocking problems. First is definitional ambiguity. Unlike expanding bullets or chemical agents, autonomous weapon systems have no universally accepted definition, and states remain divided over what qualifies. Second is the problem of dual use—algorithms developed for benign civilian purposes can be repurposed for war, making categorical bans difficult. Third, strategic competition discourages restraint. Major powers fear that restrictions will leave them vulnerable in an AI arms race. Finally, there is the problem of verification. Whereas chemical stockpiles could be counted and destroyed under OPCW supervision, algorithms leave few physical traces and often lack transparency.
There is also the issue of mission-critical technologies. Although the regulation of expanding bullets and chemical weapons achieved a degree of success, many other technologies throughout history have failed to move the legal needle toward enhanced regulation because of their mission-critical character. Attempts to restrict submarine warfare and aerial bombardment in the early twentieth century, for instance, collapsed once those capabilities proved indispensable to major militaries. When a technology becomes central to military power, efforts to prohibit or restrain it usually fail.
Conclusion: Law Must Anticipate, Not Follow, Catastrophe
The lesson of history is clear: waiting for disaster leaves law behind.
Artificial intelligence presents a complex challenge for international law. Unlike chemical agents, which could be clearly identified and banned, AI is not a single substance or weapon but a suite of dual-use technologies, many of them developed in civilian industries.
The regulation of expanding bullets shows how fragile humanitarian law can be when enforcement is weak. The Chemical Weapons Convention demonstrates how law can be strengthened through robust institutions and the establishment of international taboos. AI weapons fall somewhere in between: too diffuse to ban outright, too consequential to ignore. The international community thus faces a choice: adapt the law of armed conflict proactively, pursue a middle ground of policy-based regulation, or wait for the first catastrophe involving AI weapons to trigger political will.
Lena Trabucco is an Assistant Professor at the Centre for Military Studies at the University of Copenhagen. She is also a Nonresident Research Fellow at the Stockton Center for International Law at the US Naval War College and a Research Fellow with the Tech, Law, and Security program at American University.