You walk into your shower and find a spider. You are not an arachnologist. You do, however, know that any one of the following scenarios is possible:
The spider is real and harmless. The spider is real and venomous.
Your next-door neighbor, who dislikes your noisy dog, has turned her personal surveillance spider (purchased from “Drones ‘R Us” for $49.95) loose and is monitoring it on her iPhone from her seat at a sports bar downtown. The pictures of you, undressed, are now being relayed on several screens during the break of an NFL game, to the mirth of the entire neighborhood.
Your business competitor has sent his drone assassin spider, which he purchased from a bankrupt military contractor, to take you out. Upon spotting you with its sensors, and before you have any time to weigh your options, the spider shoots an infinitesimal needle into a vein in your left leg and takes a blood sample.
As you beat a retreat out of the shower, your blood sample is being run on your competitor’s smartphone for a DNA match. The match is made against a DNA sample of you that is already on file at EVER.com (Everything about Everybody), an international DNA database (with access available for $179.99).
Once the match is confirmed (a matter of seconds), the assassin spider outruns you with incredible speed into your bedroom, pausing only long enough to dart another needle, this time containing a lethal dose of a synthetically produced, undetectable poison, into your bloodstream. Your assassin, who is on a summer vacation in Provence, then withdraws his spider under the crack of your bedroom door and out of the house, and presses its self-destruct button. No trace of the spider or the poison it carried will ever be found by law enforcement authorities.
Smaller, Cheaper Weapons & DIY Drones
This is the future. According to some uncertain estimates, insect-sized drones will become operational by 2030. These drones will be able not only to conduct surveillance, but also to act on it with lethal effect. Over time, it is likely that miniaturized weapons platforms will evolve to be able to carry not merely the quantum of lethal material needed to execute individuals, but also weapons of mass destruction sufficient to kill thousands. Political scientist James Fearon has even speculated that at some more distant point in time, individuals will be able to carry something akin to a nuclear device in their pockets.
Assessing the full potential of technology as it expands (and shrinks) requires a scientific expertise beyond my ken. The spider in the shower is merely an inkling of what probably lies in store. But even a cursory glance at ongoing projects tells us that the mind-bending speed at which robotics and nanobotics are developing means that a whole range of weapons is growing smaller, cheaper, and easier to produce, operate, and deploy from great distances. If the mise-en-scène above seems unduly alarmist or too futuristic, consider the following: Drones the size of a cereal box are already widely available, can be controlled by an untrained user with an iPhone, cost roughly $300, and come equipped with cameras. Palm-sized drones are commercially available as toys (such as the Hexbug), although they are not quite insect-sized and their sensory input is limited to primitive perception of light and sound.
True minidrones are still in the developmental stages, but the technology is progressing quickly. The technological challenges seem to be not in making the minidrones fly, but in making them do so for long periods of time while also carrying some payload (surveillance or lethal capacity). The flagship effort in this area appears to be the Micro Autonomous Systems and Technology (MAST) Collaborative Technological Alliance, which is funded by the U.S. Army and led by BAE Systems and U.C. Berkeley, among others. The Alliance’s most recent creations are the Octoroach and the BOLT (Bipedal Ornithopter for Locomotion Transitioning). The Octoroach is an extremely small robot with a camera and radio transmitter that can cover up to 100 meters on the ground, and the BOLT is a winged robot designed to increase speed and range on the ground.
Scientists at Cornell University, meanwhile, recently developed a hand-sized drone that uses flapping wings to hover in flight, although its stability is still quite limited and battery weight remains a problem. A highly significant element of the Cornell effort, however, is that the wing components were made with a 3-D printer. This heralds a not-too-distant future in which a person at home can simply download the design of a drone, print many of the component parts, assemble them with a camera, transmitter, battery, etc., and build themselves a fully functioning, insect-sized surveillance drone.
Crawling minidrones have clearly passed the feasibility threshold and merely await improvements in range and speed to attain utility on the battlefield and viability in the private sector. Swarms of minidrones are also being developed to operate with a unified goal in diffuse command and control structures. Robotics researchers at the University of Pennsylvania recently released a video of what they call “nano quadrotors”—flying mini-helicopter robots that engage in complex movements and pattern formation.
A still more futuristic technology is that of nanobots or nanodrones. The technology for manufacturing microscopic robots has been around for a few years, but recent research has advanced to the point of microscopic robots that can assemble themselves and even perform basic tasks. The robotics industry, both governmental and private, is also exerting great efforts to enhance the autonomous capabilities of robots, that is, to be able to program a robot to perform complex tasks with only a few initial commands and no continuous control. Human testing for a microrobot that can be injected into the eye to perform certain surgical tasks is now on the horizon. Similar developments have been made toward nanobots that will clear blocked arteries and perform other procedures.
Now, situate the robotics technology alongside other technological and scientific advancements—the Internet, telecommunications, and biological engineering—all of which empower individuals to do both good and terrible things to others. From here, it is not hard to conceptualize a world rife with miniature, possibly molecule-sized, means of inflicting harm on others, from great distances and under clandestine conditions.
When invisible remote weapons become ubiquitous, neither national boundaries nor the lock on our front door will guarantee us an effective line of defense. As the means to inflict violence from afar become more widely available, both individual threat and individual vulnerability increase to a hitherto unknown degree. When the risk of being detected or held accountable diminishes, inhibitions regarding violence decrease. Whether political or criminal, violence of every kind becomes easier to inflict and harder to prevent or account for. Ultimately, modern technology makes individuals at once vulnerable and threatening to all other individuals to unprecedented degrees: we are all vulnerable—and all menacing.
The Future of War
In this essay I take on some of the possible ramifications of these technological advances for the potential incidence of violence and its future effects on the existing legal and political order. I first consider the special features of new weapons technologies that, in my mind, are likely to make violence more possible and more attractive; these are proliferation, remoteness, and concealment. All in all, I argue that technology has found a way to create “perfect weapons”—altogether distant, invisible, and untraceable, essentially generating a more leveled playing field among individuals, groups, and states.
I then reflect on the implications of this development for the traditional legal and political categories—national and international, private and public, citizen and alien, war and crime—that still serve as the basis for much of existing regulation of violence, and argue that these juxtapositions are becoming increasingly vague and inapplicable as rationales for regulation of new threats. Finally, I venture to imagine some broader themes of the future defense against the threat of new weapons, both on the international level (a move to global policing) and on the domestic level (privatization of defense).
I argue that as threats increasingly ignore conventional boundaries or nationalities and become more individualized, the traditional division of labor between government and citizens and between domestic and international becomes impractical. National defense will require a different mix of unilateralism and international cooperation. Personal defense will have to rely more on diffuse, private, person-to-person mechanisms of protection, as well as concede more power to the government. The very concept of state sovereignty—what it means domestically and what it means externally—would have to be reimagined, given the new strategic environment.
Several preliminary observations are in order. The first is a caveat: To a significant degree, my essay focuses on the technological threat side of the equation. It does not envisage the full line of possible complementary defenses. This discrepancy inevitably produces a somewhat myopic picture of the consequences of new technology. Most technological innovations relating to weapons, biology, or the cyber world are closely followed by apprehensions of Armageddon—so much so that in some cases, there have been preemptive efforts to ban the use of these technologies.
Take, for instance, the 1899 treaty to ban balloon air warfare, which is still in force for its signatories (among them the United States). Genetic and biological engineering, which can save and improve lives on a mass scale, has also been met with criticism about the impropriety of “Man playing God” and predictions of the end of the human race as we know it. And yet the world still stands, despite the existence of destructive capabilities that can blow up Planet Earth hundreds of times over. In fact, some credit the very existence of nuclear weapons and the accompanying nuclear arms race for a reduction in overall violence.
Still, history has shown that offensive capabilities, at least for a time, usually outrun defensive capabilities. In the robotic context especially, as costs fall, availability grows and the global threat grows with it. Even if defensive technologies catch up with present threats and many of the concerns raised in this essay could be set aside, it remains useful to think about how defense against the micro-world should evolve in comparison with more traditional modes of defense. Moreover, it is unclear whether meeting threats with equal threats—as in the case of nuclear weapons—would yield a similar outcome of mutual deterrence when it comes to the personalized weapons I imagine here. For all these reasons, I find it appropriate, for the purposes of this essay and with the caveat described above in mind, to focus on the threat/offense side of the equation.
A second observation is a variation on the first, and has to do with the different roles and functions of technology in general and robots specifically. Like all machines, robots can be put to good or bad use, and the “good” or “bad” often depends on where one stands. On today’s battlefields, robots serve many functions, ranging from sweeping for Improvised Explosive Devices (IEDs), to medical evacuation, supply chain management, surveillance, targeting, and more. In this essay, I focus on technology’s “life-taking,” rather than “life-saving,” functions, while keeping in mind that the same systems can be put to use to save victims of atrocities just as easily as they can be deployed for more pernicious purposes.
From among all types of robots available, I focus mostly on robots that are miniaturized and potentially invisible, features that make detection and accountability a great deal more difficult. This is what also situates the “spiders” within the broader technological developments of the Internet, bioengineering, and the like, which together constitute an environment in which the threat is largely invisible. Again, although their full scientific and operational potential is as yet unknown, I assume that some version of miniaturized drones—whether independently operated or deployed off of larger structures that carry them to their target—is a real possibility.
Socialization has always been essential for survival. The Internet, media, telecommunications, travel, and commerce have all made the world smaller and strengthened global interconnectedness and socialization. In his recent eloquent and wide-ranging book, Harvard psychologist Steven Pinker argues that our present society is the least violent in recorded history, in part, because technology, trade, and globalization have made us more reasoned, and in turn, more averse to violence. Notwithstanding Pinker’s powerful account, the very same technology that brings people closer—computers, telecommunications, robotics, biological engineering, and so on—now also threatens to enable people to do infinitely greater harm to each other.
Whether advanced technology really threatens single-handedly to reverse our trajectory toward a less violent world is impossible to predict, precisely because the threat that derives from technological advances is only one manifestation of a sea-change in social and cultural relations which technology as a whole brings about.
Put differently, threats derive from the combination of capabilities and motivations to engage in harmful activities. Technology influences both capabilities and motivations, but not in a single trajectory; the technology to inflict harm is countered by the technology to prevent or correct against it, and motivations to inflict harm are constantly shaped and reshaped by the local and global environments. Any assessment of the future level of threat brought about by new weapons technologies must thus engage with predictions about the growth of capabilities to inflict violence on the one hand, and the growth of or decline in motivations to inflict violent harm, on the other.
As I earlier noted, I am focusing here on the capabilities to inflict harm and their effects on the possible motivations to harm, but only in the narrow context in which some underlying motivation to injure others exists. Of course, if political, ideological, or personal motivations for harm are significantly reduced, for instance, because interconnectivity and interdependence diminish violent hatred, demonization of others, or the attractiveness of violence as the means of promoting ideological goals, we have less to worry about.
With all their uncertainties, however, it seems to me that the lethal capabilities of miniaturized technologies will spread well before the supposedly pacifying social forces, whether the Internet or reason, have operated to turn much more of our world into a peaceful place. Even if they do not, those features of new weapons technologies that enhance the capability—and perhaps, therefore, the motivation—for violence warrant special attention.
Violence, of course, does not require fancy weapons. Even in our present age, it took mostly machetes for Hutus to kill 800,000 Tutsis and moderate Hutus in the course of only 100 days. Individuals everywhere are already vulnerable to the proliferation of weapons. According to some estimates, there are 90 guns for every 100 people in the United States, and more than 800 million firearms worldwide. Between 10,000 and 20,000 people are killed annually in gun-related homicides in the United States. These homicides account for two-thirds of all murder cases. When people want to kill other people, they can.
And yet, three features of new weapons technology—proliferation, remoteness, and concealment—make violence more likely. All three are already present to some degree in existing weapons, but new technology integrates and intensifies them in a way that significantly enhances their potential threat. In what follows I discuss these three features in greater detail.
Editor's Note: The remainder of this article, which is part of Hoover’s online essay series Emerging Threats, is available here. Emerging Threats grows out of the work of the Hoover Institution’s Koret-Taube Task Force on National Security and Law. The essays reflect the task force’s determination to seek out and publish thoughtful and timely writings by leading scholars, policy analysts, and journalists on emerging national security threats and the daunting legal challenges they present.
Gabriella Blum is the Rita E. Hauser Professor of Human Rights and International Humanitarian Law at Harvard Law School (HLS) and the codirector of the HLS-Brookings Project on Law and Security. Previously, she was a senior legal adviser in the Israel Defense Forces and a strategic adviser in the Israeli National Security Council. She is the author of Islands of Agreement: Managing Enduring Armed Rivalries (Harvard University Press, 2007) and Laws, Outlaws, and Terrorists: Lessons from the War on Terrorism (with Philip Heymann) (MIT Press, 2010).