In January 2010, Google released disturbing news: It had been the subject of a “highly sophisticated and targeted attack” that had originated in China, resulting in the “theft of intellectual property” from Google. The attack seemed to be targeted at Chinese human rights activists. And Google was not alone—at least twenty other major companies spanning sectors including Internet, finance, and the chemical industry were also targeted. At its core, the attack apparently attempted to corrupt some of Google’s source code.
Though it is notoriously difficult to confirm responsibility for cyber intrusions, there seems to be little doubt that official Chinese authorities were behind the attack. Indeed, one of the classified State Department cables released by WikiLeaks reports that the operation was authorized by the Politburo Standing Committee, which is roughly equivalent in authority to America’s National Security Council. The intrusion seems, therefore, to be of a piece with China’s notorious efforts to maintain control over its citizens’ Internet access and is, in many ways, unsurprising.
Far more surprising, however, was Google’s next step: it turned to the National Security Agency (NSA) for help. Google sought to take advantage of the NSA’s expertise in “information assurance.” In other words, it wanted the NSA to help it evaluate the vulnerabilities in its hardware and software and assess the intruders’ level of capability. Using NSA expertise, Google would have a better understanding of how its systems were penetrated. Google, in turn, would share with the NSA any information it had about the precise nature of the malware code that was used to corrupt its system.
This cooperation agreement between Google and the NSA is notable for a number of reasons. First, Google turned for assistance to the NSA, not to the Department of Homeland Security (DHS), which has nominal responsibility for assisting in the protection of private-sector infrastructure. Second, and more fundamentally transformative, Google looked to anyone in government at all for assistance.
Google’s business model is controversial in Silicon Valley. But whatever one thinks of its commercial approach, nobody doubts its technical expertise. Google—along with other major cyber actors such as Facebook and PayPal, service providers like Verizon, and software manufacturers like Microsoft—is at the forefront of cutting-edge cyber innovations. Yet even with that deep and sophisticated base of knowledge, Google was impelled to seek governmental assistance.
Informally, private-sector leaders in the information technology and telecommunications industries often say they need nothing from the government. Indeed, their refrain is that government involvement will stifle innovation rather than foster security. As we shall see, that argument has great appeal. Yet one of the most sophisticated players in the entire domain, Google, turned to the government for help. What does that say about the desirability of public/private cooperation in cybersecurity?
From the government’s perspective, the need for robust and effective cooperation seems self-evident. It is commonplace to note that private entities own and operate 85–90 percent of the cyber infrastructure—though no authoritative source for this figure can be found. Most government cyber traffic travels on non-governmental cyber systems. And those systems, in turn, are used to control or communicate with a host of other critical infrastructures—the transportation system, the electric grid, the financial markets, and the like. Thus, core national security functions of command, control, communications, and continuity are all dependent, to greater or lesser degrees, on the resilience of the private-sector networks. As a result, it would seem that the federal government must be deeply concerned with private-sector cybersecurity.
Yet public and private actors often do not coordinate well. The challenge for the federal government is how to integrate its efforts with those of the private sector. To date the results have been less than stellar, at least in part because of private-sector resistance to the concept.
The Internet is a unique place. Unlike most human phenomena, it is not bounded in space and has no physical borders. Its structure lies outside our common experience: though most lay observers think of it as something like the telephone network, it is actually quite different. While telephone networks are "hub and spoke" systems, with the intelligent operations at the central switching points, the Internet is truly a "world wide web" of interconnected servers, where the intelligent operations occur at the edges (in our mobile devices and laptops running various "apps"). At its core, the Internet's packet-switching protocol is fundamentally quite dumb.
This fundamental architecture of the Internet gives rise to two factors that are, in effect, built into the system. The first is the problem of anonymity. Given the vastness of the web, it is quite possible for those who seek to do harm to do so at a distance, cloaked in the veil of anonymity. While that veil can be pulled aside, doing so requires a very great investment of time and resources, making malfeasant actors immune, for all practical purposes, from swift and sure response or retaliation. The second factor is the difficulty of distinction. Any successful cyber attack or intrusion requires “a vulnerability, access to that vulnerability, and a payload to be executed.”
But in practice the first two parts of that equation (identifying a vulnerability and gaining access to it) are the same no matter what the payload that is to be delivered. Thus, for those attempting a defense, it is virtually impossible to distinguish ex ante between different types of intrusions because they all look the same on the front end: cyber espionage, where the intrusion is a payload that seeks to hide itself and exfiltrate classified data; cyber theft, where the object is stealing unclassified financial data; and a full-scale cyber attack, where the payload left behind may lie dormant for years before it is activated and causes grave cyber damage. The difference arises only when the effects are felt. The closest kinetic world analogy would be something like never being able to tell whether the plane flying across your border was a friendly commercial aircraft, a spy plane, or a bomber.
Taken together, these two factors make cyber systems highly vulnerable to attack. Indeed, many observers contend that effective cybersecurity is more a dream than a reality: cyber attacks routinely defeat cyber defenses, and that pattern is likely to continue for the foreseeable future. In short, life in the cyber domain is thought of as Hobbesian in nature, often "solitary, poor, nasty, brutish and short." But how accurate is this portrayal? What, if anything, can we say about the delivery of cybersecurity as an empirical matter? How effective are our efforts?
Sadly, though the question is a vital one, there is little data to drive effective policymaking...
Paul Rosenzweig, Esq., is the founder of Red Branch Law & Consulting, PLLC. Rosenzweig formerly served as deputy assistant secretary for policy in the Department of Homeland Security and twice as acting assistant secretary for international affairs. Rosenzweig is a professorial lecturer in Law at George Washington University and a visiting fellow at the Heritage Foundation. He serves as a senior editor of the Journal of National Security Law & Policy.
Rosenzweig is a cum laude graduate of the University of Chicago Law School. He is the coauthor (with James Jay Carafano) of the book Winning the Long War: Lessons from the Cold War for Defeating Terrorism and Preserving Freedom and author of the forthcoming book Cyberwarfare: How Conflicts in Cyberspace Are Challenging America and Changing the World.