Imagine that the Gulf oil spill had taken place as a consequence of a premeditated attack, rather than an accident. The damage is the same as it was; the oil flowed in the same volume. The only difference is volition: In this dark fantasy, someone meant to do it.

In this counterfactual scenario, we would immediately recognize the event not merely as a disaster but as the most successful assault on the United States since September 11, 2001. We would notice something else as well: The United States government—despite its ability to project military force anywhere in the world—lacked the capacity to defend the country effectively and swiftly against this particular attack. That capacity, rather, lay in the hands of a private corporation, one of a select group of corporations that have proven enormously innovative in offshore oil drilling. Only these corporations have the technological and logistical capacity to defend the country against the national security events their very innovations can now bring about.

Technology is socially beneficial. But it also makes us vulnerable. (Illustration by Barbara Kelley)

This rather startling conclusion represents a profound challenge to a country whose constitution vests the power to defend the nation in a unitary presidency. Over the past several decades—in a trend that is sure to accelerate rapidly with continued innovation—we have developed a category of non-military technologies outside of government hands and control that, when misused, can threaten extreme harms of various types, harms which the executive lacks clear power, authority, or even the simple capability to prevent.

The trend is not futuristic. It is already well under way across a number of technological platforms—most prominently the life sciences and networked computer technology. The technologies in question are widely proliferated. They are getting cheaper by the day. We are becoming ever more dependent upon them not just for growth, jobs, and development but for health, agriculture, communications, and even culture.

Yet these same technologies—and these same dependencies—make us enormously vulnerable. Whereas once only states could contemplate killing huge numbers of civilians with a devastating drug-resistant illness or taking down another country’s power grids, we must now contemplate the possibility that ever-smaller groupings of people can undertake what are traditionally understood as acts of war. The last few years have seen the migration of the destructive power of states to global non-state actors. And we can expect that migration to continue, ultimately giving to every individual with modest education and a certain level of technical proficiency the power to bring about catastrophic damage.

The technologies that pose the greatest concern here are, perhaps unsurprisingly, also the same technologies that offer the greatest promise in general. The concern and the promise emanate from the same source: These are technologies of mass empowerment, and delivering enormous new capacity to large numbers of individuals creates the certainty that some of those individuals will use that capacity to do evil. These technologies magnify the cost of accidents, and they both facilitate the activities of those with ill intent and magnify the consequences of their behaviors.

Technologies of mass empowerment—of which biotechnology and globally-networked computers are the paradigmatic examples—have certain common characteristics. First, they are widely disseminated and depend on readily available training and materials. Unlike nuclear technologies, they did not develop principally in classified settings at government-run labs with the government controlling access to the key materials. Rather, they developed publicly, in open dialogue, with non-military purposes in mind. In the cyber arena, attacks have grown up alongside the platform. And in the biotech arena, a public literature now exists to teach bad guys how to do horrific things—and the materials, unlike highly enriched uranium, are neither scarce nor expensive.

Second, the destructive technologies are closely connected to the socially beneficial innovations that give rise to them. Research on how to use genetics to cure and prevent disease can, in the wrong hands, be used to cause and spread disease. A paper on how to shield computers against viruses necessarily involves analysis that one could use to write stronger viruses. Defensive research in this space will potentially empower the bad guys too.

Third, the use of these technologies blurs the distinction between foreign and domestic threats and, indeed, makes attribution of any attack extremely difficult. Large numbers of cyber attacks already take place with attribution impossible or long delayed. In the case of the anthrax attacks in the wake of September 11, attribution took seven years and remains to this day contested. Indeed, often in these cases, a targeted entity will not be able to determine whether its attacker was another state, some political group, a criminal group, or a lone gunman.


It is important to emphasize that cyber- and bio-security problems are examples of a class, not by any means an exhaustive list of members of that class. We should assume that the class of technologies will grow as the pace of innovation grows. Technologies will continue to develop in the civilian sector that magnify the power of individuals; those technologies will compound one another; and the magnitude of the damage we can thus reasonably expect individuals to be capable of bringing about will grow as well. It was once unthinkable that an individual might kill dozens of people with a single machine gun or that a single company with a single oil well could despoil the Gulf coast. The more technology develops and the more dependent on it we become, the more such catastrophes will be not merely conceivable but inevitable.

It rather understates the matter to say that current governance of technologies of mass empowerment is hopelessly inadequate to the task of preventing the disasters one might reasonably fear from them. This is not chiefly a function of the fact that changing governance in a fashion that carries real costs in the absence of some dramatic precipitating event is always difficult. It principally reflects the fact that the ideal governance approach is far from obvious. Nobody quite knows how to attack the problem or even whether effective governance is possible. Even if one could, for example, classify all of the relevant now-public literature related to biosecurity and slap strict controls on the technologies in question, who would want to? The biotechnology revolution is a wonderful thing, and it has depended precisely on the open culture which has created the vulnerabilities I have been describing. In any event, this cat isn’t going back in the bag. Too many people have too deep an understanding of how genetic engineering works.

The lack of promising options gives rise to what I suspect will be the most profound impact of this class of technologies on our law, one that touches the very structural arrangements of power in American life. That is, it stands to bring about a substantial erosion of the government’s monopoly on security policy, putting in diffuse and private hands for the first time responsibility for protecting the nation.

There are people who would write that sentence with joy in their hearts. I am not one of them. My views on executive capacity are unapologetically Hamiltonian. The Constitutional assumption that the political branches, particularly the executive branch, are both responsible for national security and have the tools necessary to fulfill that responsibility is a comforting one, the destabilization of which I find scary.

That said, I’m not sure how the presumption of executive responsibility for national security holds in the face of the rapid development of these technologies. This point is perhaps most vivid in the cyber arena, where huge amounts of telecommunications traffic into and out of the United States—including government traffic—now take place over privately-owned lines and the government thus quite literally does not control the channels through which attacks can occur. But it’s also true in the biotechnology sphere. Because the revolution has taken place largely in private, not government, hands, the government employs only a fraction of the capable individuals. And the capacity to respond to or to prevent an attack is therefore as diffuse as the capacity to launch one.

This point is crucial and provides the most promising ray of hope in a generally bleak picture. The advent of technologies of mass empowerment has given enormous numbers of people the capacity to do great harm, but it has also given enormous numbers of people and organizations the capacity to work to prevent that harm. The proliferation of defensive capability has been as rapid as the proliferation of offensive capability—only exponentially more so since the good guys so vastly outnumber the bad guys.

The individual scientist had no ability to prevent the Soviet Union from launching a nuclear attack against the United States or invading Western Europe. But the individual scientist and engineer has an enormous role in bio- and cyber- security—from driving the further innovations that can wipe out infectious diseases, to developing security applications that will make the bad guys' jobs harder, to spotting the security implications of new research, to reporting on colleagues engaged in suspicious activities out of sight of the authorities. The question then becomes how to incentivize people and companies to defend the platforms on which they work.

This question pulls the mind towards themes and ideas eloquently articulated by legal scholars such as James Boyle and Lawrence Lessig in debates over intellectual property. A major current of this body of thought involves the protection of legal space for communities of various sorts to use and borrow one another’s ideas and work in collaborative efforts to build things. The world has seen amazing demonstrations of what large groups of people can do when they pool expertise—even with very limited coordination. The most famous example is Wikipedia, but this is far from the only one. It is an interesting fact, highly salient for our purposes here, that open source software tends to be more stable and secure than proprietary code.

Given that security will be, to borrow a term from the software development lexicon, a more distributed application than it has been in the past, we ought to start thinking about it as such. Collectivized individual security arrangements are nothing new. We see them in neighborhood expectations that people will put locks on their doors and keep an eye out for suspicious loiterers. We see them as well in private innovations—from burglar alarms to security camera operations to inexpensive fake security cameras. These are all part of a non-coordinated distributed security application for residential neighborhoods and businesses. The question—and it’s one we had better start thinking about—is how to incentivize this sort of combination of security arrangements in the new platforms we are creating.


This is an impossibly big question for a short essay—or even for a hundred long books. Yet there are, I suspect, certain high-altitude principles that can and should govern platform security in general, and to which a technology-independent focus on the security of platforms tends to lead. I’d like to mention three; each applies differently to different platforms, but together they seem to me to provide some basic building blocks of governance for this difficult space.

The first is that a party that negligently introduces vulnerabilities onto a platform should be at least partly liable for the damages that result. It may seem difficult to object to this point, stated this simply and in the abstract. And applied, say, in the biosecurity arena, it is something of a no-brainer. The principle, however, is deeply controversial with respect to software manufacturers and Internet service providers. In fact, the general state of liability law for those whose insecure products expose others to damage is exceedingly protective. This will simply have to change. The users of software—the banks, the critical infrastructure operators, and the individual users—should not bear all of the risk associated with the platform. If those who introduce new vulnerabilities into the system bore some of that risk, they would have a powerful incentive to make their products more secure.

The second broad principle is that platforms have no privacy rights. Individuals have privacy rights and may have them in their use of platforms, but the platforms themselves are the sometimes-literal and sometimes-metaphorical analogues of public spaces and commons that authorities should patrol. Other examples of platform surveillance include scanning physical mail for anthrax and airline security screening. The legitimacy of such surveillance hinges on several interrelated factors. The most important is that the surveillance does not target any particular individual and is not conducted for investigative purposes. It, rather, examines in a non-discriminatory fashion all platform users and focuses only on conduct that threatens the use of the platform itself.

The distinction between individual surveillance and platform surveillance seems to me key to developing secure platforms over time. We need to develop a comfort level with certain programmatic surveillance of new technological platforms. A system of sensors that scans Internet traffic in real time for malware but does not otherwise examine the content of communications, for example, is much more similar to than different from the anthrax scanning of physical mail. As long as it does not focus on any individual or group and is operated to protect the network, not as part of any investigation, the analogy seems quite close indeed.
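To make the distinction concrete, here is a minimal sketch of what such a sensor might look like: it checks traffic only against a list of known malware signatures and does nothing else with the content. The signature list, the function names, and the packet source are all hypothetical placeholders for illustration, not a description of any deployed system.

```python
# Illustrative sketch only: a signature-based sensor that inspects traffic
# solely for known-malware byte patterns and records nothing else.
# The signatures below are hypothetical placeholders.

KNOWN_MALWARE_SIGNATURES = [
    b"\x4d\x5a\x90\x00EVIL",   # hypothetical byte pattern from a known exploit
    b"DROP TABLE users;--",     # hypothetical injection payload
]

def scan_packet(payload: bytes) -> bool:
    """Return True if the payload matches a known malware signature.

    The sensor never stores, parses, or interprets the communication itself;
    it answers only the yes/no question: does this traffic carry a known
    threat to the platform?
    """
    return any(sig in payload for sig in KNOWN_MALWARE_SIGNATURES)

def sensor_loop(packet_stream):
    """Flag matching packets for the network operator; ignore everything else."""
    for payload in packet_stream:
        if scan_packet(payload):
            yield "ALERT: known malware signature detected"
        # Non-matching traffic is neither logged nor examined further.
```

The point of the sketch is the narrowness of the question the sensor asks: a yes-or-no match against known threats, with no retention or interpretation of the underlying communication—which is what makes the analogy to anthrax scanning of physical mail plausible.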

Yet such ideas, when even suggested, are treated with horror by many in the business and civil liberties communities, who regard such surveillance as per se threatening to privacy. Our analysis of such programs should focus more narrowly than that. What are the chances of false positives and what are the consequences of them? Is the system prone to abuse, and are the protections against abuse adequate? Is the system treating all users alike or is it targeting individuals or disfavored groups for special scrutiny? If these questions can be answered adequately, such systems should be acceptable.

The third broad principle is perhaps the most challenging: Certain companies by dint of their businesses, their technological capabilities, and their control over certain infrastructure will acquire affirmative national security obligations that have traditionally resided with the state. We can see this point vividly in the Gulf oil disaster. But the point, as I have argued, goes well beyond oil drilling. If we accept that the traditional state monopoly on security policy will erode as more and more private entities develop functions and capabilities essential to security yet absent from government, it follows that the law will and should come to oblige these companies to take responsibility for certain security functions. We already see this happening to a degree in the surveillance arena, but it will not stop there.

Consider, first, the previously-mentioned fact that the federal government literally does not own or control the channels through which cyber attacks take place. The trunk lines into the country, rather, are private, and the domestic routing infrastructure is private as well. This is the rough equivalent of letting a group of private entities both monitor the border and manage the interstate highway system and all local roads. A foreign intruder can—at least in a digital sense—invade the country and steal or sabotage valuable property without coming into substantial contact with government-controlled defenses. If our digital border is patrolled not by the Department of Homeland Security or the military but by Verizon and AT&T, what obligations to national security do these companies then incur? The answer cannot be and will not be that they incur none.

I close with a subtler example, one that involves both the cyber- and bio- platforms at once. Imagine a deranged graduate student who decides to release a pathogenic organism. Such a person would likely begin by running a search in one of several scientific research databases for published papers on the pathogen’s genome. The companies operating such databases will thus have in their users’ search histories a unique window into who is conducting the research about which government should be most concerned.

Imagine now that authorities produced an algorithm to prospectively identify highly dangerous work in biotechnology. The algorithm would flag any account that requested certain patterns of information highly suggestive of non-innocent biotechnology research. And imagine that authorities asked database companies and, more broadly, search engines to deploy this algorithm and notify the FBI whenever a particular group of searches triggered an alarm.
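A minimal sketch may help fix the idea. The toy rule below flags an account only when its queries touch several sensitive topics at once; the topic list, the threshold, and the account model are all hypothetical placeholders, not a claim about how any real screening algorithm would or should work.

```python
# Illustrative sketch only: flag accounts whose query histories combine
# several sensitive research topics. All values here are hypothetical.

SENSITIVE_TOPICS = {
    "pathogen genome assembly",
    "aerosolized delivery",
    "antibiotic resistance insertion",
}
ALERT_THRESHOLD = 3  # flag only when several sensitive topics co-occur

def should_flag(account_queries: set[str]) -> bool:
    """Flag an account whose recent queries touch several sensitive topics at once.

    No single query is suspicious by itself; the signal is the pattern,
    which is why the rule counts distinct topics rather than keywords.
    """
    hits = SENSITIVE_TOPICS & account_queries
    return len(hits) >= ALERT_THRESHOLD

# An account researching one topic is not flagged; one combining all three
# would be referred for human review.
print(should_flag({"pathogen genome assembly"}))             # False
print(should_flag(SENSITIVE_TOPICS | {"unrelated query"}))   # True
```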

In some sense, of course, this is merely a species of platform surveillance of the type I have discussed already, but there’s a twist: Government would be coordinating the surveillance but not conducting it. It would be the database and search engine companies who would be policing the platform. And that raises the critical question of what responsibility, if any, these companies have to act affirmatively in the interests of national security.

In a traditional environment, the answer to that question is easy. They have none. That is the government’s job. Their job is to maximize value for their shareholders. Yet in an environment of radically distributed responsibility for collective security, a more communitarian ethos seems necessary. By storing users’ search histories, these companies have amassed the largest dataset anyone has ever collected on what people around the world are thinking. That fact may give them a role to play in identifying those people plotting to do grotesque and terrible things. Over time, the law will need to evolve to require that companies take on the national security responsibility their businesses enable.
