Recent investigations into the September 11 intelligence failure and misestimates of Iraqi weapons of mass destruction have discovered an important new trend. Traditionally, secrecy has been vital to effective intelligence. But now secrecy is causing some of our most significant intelligence failures.

Investigators examining the September 11 terrorist attacks found that intelligence organizations were often unable to share information with intelligence users and thus could not provide effective warning. In other cases, intelligence organizations could not share information with each other and thus were unable to work effectively together.

Sometimes personnel lacked the appropriate clearances. But often intelligence personnel simply assumed they shouldn’t share information with others because of security rules—that is, rules designed to protect secrets. Investigators have also found cases in which intelligence organizations could not assign enough analysts to a problem because people lacked the required clearances. Even worse, sometimes personnel did have the required clearances, but officials refused them anyway because they applied their own overly conservative judgment of who could join their team without putting intelligence sources at risk.

Misguided or poorly designed procedures to protect secrets have been discovered on the technical front, too. Analysts often can’t move data from one computer to another because their networks are operated by different organizations and each uses a different security standard. Often intelligence organizations can’t use new technologies because security regulations prohibit it or because of difficulties in getting a particular piece of hardware or software certified.

Preliminary investigations into the intelligence estimates of the Iraqi WMD program suggest some of the results of these policies. For example, CIA analysts assessing reports from the field sometimes believed they were reading information from several sources that corroborated each other, when in fact the reports all came from a single source—and were wrong. The analysts did not know they were making a mistake because security rules designed to protect secrets—“compartmentation” and “need to know”—prevented them from knowing the identity of the source.

One reason these problems have persisted for so long is that too few experts from within the intelligence community have complained. Most critics of secrecy in the intelligence community are outsiders, mainly people who oppose secrecy in principle. The American Civil Liberties Union is likely to challenge the rules intelligence organizations use to protect secrets, not the Association of Former Intelligence Officers.

Yet deciding how to use secrecy to protect sensitive information—and how to ensure access to information, even when it is classified, for those who must have it—is one of the most important judgments intelligence officers make. The intelligence profession should be concerned about how we currently handle secrets.

The whole purpose of intelligence is to give us an information advantage over our adversaries. Secrecy protects this advantage by keeping our opponents from knowing what we know. But poorly designed systems for protecting secrecy can give away any advantage we gain when they prevent us from using our intelligence effectively.

 

The New Environment for Secrecy

It was easier to protect secrets during the Cold War. The Soviet target rarely changed dramatically or quickly. Also, the intelligence personnel working on a particular problem did not change much either. So it was possible to implement secrecy through formal rules, codified like military specifications.

Not so today. Now threats like terrorists and WMD traffickers are everywhere and take many different forms. As a result, the information and expertise needed to monitor these threats can change frequently and are also highly varied. Today effective warning often means getting information in front of as many people as possible so as to improve the odds that someone will see a telltale pattern. As a result, the number of people who need to see a given piece of intelligence information has, on average, grown, and the specific people who need to see it are likely to change more rapidly. (See “Spying in the Post–September 11 World,” Hoover Digest, fall 2003.)

Methods for protecting secrets need to be as agile as the intelligence organizations they support. Unfortunately, our traditional methods for protecting secrets are not keeping up. Indeed, as the recent investigations have shown, these approaches are now preventing us from sharing information effectively and getting it to the people who need it. This makes us vulnerable.

A big part of the problem is that our current approach to secrecy is really the result of a patchwork of rules developed over many years. There was no systematic plan, and the way in which it developed has put would-be critics in a poor position to change it.

The real growth of secrecy in government began around the turn of the twentieth century. As more immigrants from eastern and southern Europe began to arrive, xenophobic fears took off, and members of Congress sought a process to determine their loyalty. The result was the Espionage Act of 1917, which made the disclosure of certain official information a crime.

After that came a handful of statutes that provide the legal basis of our modern system for secrecy and the security systems used to protect secrets. Most were designed to give specific officials authority to control information; for example, the National Security Act of 1947 made the Director of Central Intelligence responsible for protecting intelligence sources and methods. Others, like the Atomic Energy Act of 1954 and the Intelligence Identities Protection Act of 1982, were designed to protect specific kinds of information—usually after a controversy in which the information had been compromised.

Yet most secrecy rules today are based on executive orders—the accumulation of regulations that the executive branch has issued over the years. The courts have often ruled that the executive branch has an inherent right to restrict certain information, and many agencies developed their own procedures for classifying technical and operational information and allowing personnel access to it.

Periodically officials have tried to make this patchwork more effective. Some efforts have succeeded, some have failed, and some have made the situation worse. The last major effort was in 1995, when the Clinton administration tried to consolidate the various executive branch rules protecting information. The result was Executive Order 12958, “Classified National Security Information.”

The current Bush administration made a few—but significant—changes in EO 12958 when it issued its own EO 13292 in March 2003. Some of these modifications reflected concerns that terrorists might use government documents to plan attacks. But the new order retained the basic structure of the old one—as well as the problems that often make security a hindrance to effective intelligence.

For instance, although EO 12958 tried to impose uniformity, it allowed organizations to continue imposing additional rules or to ignore rules of other agencies. To be sure, there is some honest disagreement about best practices. For example, the Central Intelligence Agency and National Security Agency use polygraph examinations in their clearance process; the State Department does not. The Defense Department uses the polygraph for some positions requiring a clearance. But where CIA examiners ask questions about a subject’s lifestyle, Defense Department examiners limit their questions to those concerning security. The result is that two people from different agencies can each have undergone a special background investigation and hold a top-secret clearance, yet neither agency will recognize the other’s clearance. Agencies are even more reluctant to recognize the credentials of contractors and consultants hired by other agencies. All this turns organizational barriers into information barriers that prevent organizations from sharing information or exchanging personnel—which is what has happened in several of the recent intelligence failures.

 

The Theory Gap

One underlying problem is that there is no theory—that is, a clear and widely accepted set of general principles—that tells us how to use secrecy, how much to use, when to use it, and how best to protect a secret. “Theory” may suggest “ivory tower,” but in reality theories are essential to sound policy. They explain the logical relationships among the factors that policies try to influence. Theories describe how to reconcile two goals that are both desirable but in tension: for example, the dilemma that secrecy can provide an information advantage over an adversary, but security rules almost always make it harder to use the information being protected. Unfortunately, no one has a good understanding of the exact trade-off—let alone how to strike a balance.

Consider, for example, the definitions of basic terms to be used in our system for protecting secrets. Executive Orders 12958 and 13292 define three categories of data: Top Secret is information that, if disclosed, would cause “exceptionally grave damage to the national security.” Secret is information that, if disclosed, would cause “serious damage to the national security.” And Confidential is information that, if disclosed, would cause “damage to the national security.” These definitions are hopelessly vague. What is the difference between “damage,” “serious damage,” and “exceptionally grave damage”? Is “exceptionally grave” twice as bad as “serious”? Ten times?

This may seem like nitpicking, but without a meaningful measure, it is impossible to make sound policy. We don’t need a precise metric, but we at least ought to understand the nature of the trade-off when we try to reconcile the two goals. A theory of secrecy would enable one to address questions such as:

• What is the trade-off between withholding a piece of information and sharing it?

• What is the “latency” of a secret—that is, how long can one expect to keep it secret?

• How long does a secret remain valuable?

• What are the alternatives to secrecy—that is, what is the best mix of concealment, deception, and disclosure to achieve an information advantage over our adversaries?

• What is the relationship between the secrecy required for national security and the freedom of information required for democracy to function? (See “Democracies and Their Spies,” Hoover Digest, winter 2003.)

Without this kind of understanding of how secrecy works, our policies are really just a conglomeration of rules and traditions, most of which were adopted many years ago and many of which are poorly suited for current conditions. It becomes impossible to debate whether we have too much secrecy or too little, and it’s no wonder debates over secrecy and security usually become contests in which civil libertarians and security advocates have no common referents on which to base a productive discussion.

For example, computer security specialists use a procedure called Octave (Operationally Critical Threat, Asset, and Vulnerability Evaluation) to assess the vulnerability of an information network. The first step in Octave is to identify all the vulnerabilities of a network—that is, the ways in which an attacker could penetrate a system. The second step is to identify the risks of each vulnerability—that is, the potential damage that could be caused by exploiting it. Developing a computer security plan is then a process of addressing all of the vulnerabilities until the overall risk level is within a specified range. Octave follows the same approach as most security protocols, and it is great at identifying where an organization will get the most “bang for the buck” in improving security. But Octave—again, like most security protocols—never requires anyone to ask how a security measure affects the ability of the organization to do its job. So a computer security manager could use Octave to design a network that even the best hacker in the world couldn’t break into—but that is a hopeless slog for any analyst who actually needs to use it.
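The logic of this kind of risk-reduction loop can be sketched in a few lines of code. This is only an illustration of the general approach described above, not the actual Octave methodology; the vulnerability names, risk scores, and threshold are all invented for the example. Note what the loop optimizes for and what it never asks about: usability appears nowhere.

```python
# Hypothetical sketch of a risk-reduction loop in the spirit of the
# procedure described above. All names and numbers are invented.

vulnerabilities = {
    "unpatched server": 9.0,
    "weak passwords": 6.5,
    "open wireless access point": 8.0,
    "unencrypted backups": 4.0,
}

RISK_THRESHOLD = 10.0  # acceptable total residual risk (arbitrary)

def plan_mitigations(vulns, threshold):
    """Mitigate the highest-risk vulnerabilities first until total
    residual risk falls within the specified range. Nothing here ever
    asks how a mitigation affects the people who use the system."""
    residual = dict(vulns)
    plan = []
    for name in sorted(residual, key=residual.get, reverse=True):
        if sum(residual.values()) <= threshold:
            break
        plan.append(name)
        residual[name] = 0.0  # assume the mitigation eliminates the risk
    return plan, sum(residual.values())

plan, remaining = plan_mitigations(vulnerabilities, RISK_THRESHOLD)
```

Run on the sample data, the loop mitigates the three highest-scoring vulnerabilities and stops once residual risk is under the threshold. The point of the sketch is structural: the objective function contains only risk, so any cost to the analysts who must use the resulting system is invisible to the procedure.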

In fairness, the best security specialists do try to work closely with their users. But that just illustrates the problem—our basic approach is flawed, and it’s only smart, persistent people who make it work as well as it does. As we’ve recently seen, the more likely result is that the balance between security and usability gets knocked out of kilter. This is true not only for computer security but also for physical security and operational security.

Admittedly, there are powerful political and bureaucratic incentives at work that discourage anyone from saying no to a proposed security measure. Officials rarely get credit for making information easier to use, but they always pay a heavy price when a security lapse occurs on their watch.

Recall the reaction of Congress, administration officials, the press, and just about everyone else in 1986, when spies were discovered in several intelligence organizations. Recall the reaction in 1994, when CIA employee Aldrich Ames was arrested. Everyone seemed to accuse intelligence organizations of being lax, and, after these cases, no amount of vigilance was too much.

Yet this atmosphere later contributed to the September 11 intelligence failure. Officials were so reluctant to question any proposed security measure that they were unable to adopt—or even consider—measures that might have made it easier to share data.

Part of the reason we lack a good theory of secrecy is that there are so many gaps in our empirical understanding of the problem. It is remarkable, given all the resources that are devoted to protecting secrets (not to mention the importance of many of the secrets themselves), how much basic information is often lacking or deficient. Examples include

• The exact number of people who hold a given level of clearance. In principle this figure should be knowable, but in practice there are enough disconnects between databases that we really don’t know this statistic. Without knowing this, one cannot know the risk a piece of information is exposed to when it is classified.

• The extent and effect of informal restrictions (e.g., “sensitive but not classified,” “official use only,” and the tendency of officials to add their own ad hoc restrictions).

• The reasons that agencies disagree on specific security practices and technologies and the scientific merit of their disagreements—without this information, it is impossible to reconcile their differences.

• The security methods used by intelligence organizations in other countries and companies in the private sector—and, just as important, the methods they use to share information more effectively.

 

The Ideal Security System

An optimal system for protecting secrets would likely have four basic features.

First, it should consider how much time and money we are willing to spend to protect secrets. One problem with the current approach is that it implicitly assumes we can protect an infinite amount of information. In reality, of course, our resources are limited. Perhaps the most fundamental mistake one can make in security is to believe that some piece of information is protected when, in fact, it is not. We have to match the number of secrets to the resources available to protect them.

Second, the rule in protecting secrets should be risk control, not risk exclusion. This is anathema to many intelligence officials. No one wants to admit that they are going to lose some secrets. Yet this is exactly what top intelligence officials must make clear to Congress and the White House.

Third, we need government-wide standards. There is no reason why anyone should think an organizational boundary equals a secrecy requirement. If there are disagreements about the best way to clear personnel or protect information, we need to settle them. If agency officials disagree, the White House should knock heads together.

And, fourth, we need both transparency and checks and balances. Secrecy should be like other political issues. Congress needs to ensure that the executive branch uses secrecy legitimately and effectively, and Congress should be able to weigh in on secrecy policies. Instead of relying on a hodgepodge of executive orders and statutes, we need a single statute that defines the criteria for when information should be kept secret and how to do so. Drafting this kind of statute will be a daunting undertaking. For one thing, the legislative and executive branches would both have to give up some of their current powers. Congress would have to subject staff—and perhaps even members—to security clearances and rules. The White House would have to give up using security—and perhaps executive privilege—for leverage in political disputes with the Hill. Agencies would have to give up turf. We would probably need an independent body to adjudicate disagreements.

But we ought to be able to establish a better system than the current one—a system that all parties could trust. At a time when everyone is wondering how to make intelligence work better, we need to address the root issues—like how to use and protect secrets better and in a manner consistent with American democracy.
