The failure of American intelligence agencies to detect the 9/11 terrorist plot and later to discover that Saddam Hussein no longer had weapons of mass destruction has incited a drumbeat of criticism and led to a reorganization of the intelligence system that may leave the CIA a shell of its former self and a graveyard of ruined careers.
Before we go too far in our efforts to “reform” the system, we should remember that although constructive criticism has great value, obtuse criticism leading to imprudent change may make the nation less safe.
Two clichés about our intelligence system are fast becoming dogma. The first is that intelligence failed in the 9/11 and Iraqi WMD cases because the entire intelligence system is “broken.” Usually when we think of something as broken we assume that it can be fixed or replaced and, either way, that the problem can be put behind us; our watch is broken, so we fix or replace it, and the problem is solved.
But the intelligence system cannot be fixed like a broken watch (although it can be improved) because the conditions that cause it to fail are inherent in the nature of intelligence. Those conditions are numerous: Intelligence seeks information about people—usually foreigners having their own language and a mentality that may be so alien as to be unfathomable by us—who are assiduously concealing it.
Effective intelligence requires secrecy (particularly as to sources), which is compromised by the widespread sharing of intelligence data—yet without that sharing, it may be impossible to assemble the data into a meaningful mosaic. Intelligence is collected and analyzed in a political context that may warp intelligence analysis. Working conditions in intelligence are bad because of the unavoidable preoccupation with secrecy and security, the disdain of a democratic society for spies, and the asymmetry of failure and success in intelligence operations.
What’s more, congressional oversight is erratic. Congress cut intelligence budgets in the 1990s just as intelligence challenges were mounting. One of the intelligence community’s severest critics, James Bamford, acknowledges that “the real problem [with U.S. intelligence] is simply the nature of the post–Cold War world.
“During the half-century when Moscow sat fixed at the center of a giant bull’s-eye of intelligence targets, prioritization was easy. . . . When the Soviet Union collapsed, the giant bull’s-eye disappeared and was replaced by a shooting gallery with black silhouette targets popping up everywhere—in back, in front, behind rocks, under bushes. The public, the press, and the Congress were requiring the intelligence community to see everywhere at all times, which was not only impossible but also irrational.”
The impression that the intelligence system can be “fixed”—implying that all intelligence failures are avoidable merely by the exercise of due care—leads to overselling intelligence as an element of national defense. To think that changes in organization, practices, and personnel can make intelligence a fail-safe enterprise is a dangerous illusion, encouraging under-investment in other, often more costly, means of defense, such as tightening our porous borders, screening foreign visitors more carefully, and stocking vaccines against possible bioterror attacks.
The second cliché is that American intelligence services are excessively “risk averse.” Risk aversion is an inevitable rather than an accidental tendency of civil servants; we don’t want them engaging in risky behavior, as if they were speculators in the commodity markets. The tendency toward risk aversion is exacerbated in the intelligence arena by the asymmetry of failure and success: Failures are vivid, frightening, unforgettable, whereas successes are taken for granted (when they are known, which often they are not). “Nothing happened” is the standard intelligence success.
Consider the criticism that the CIA is too cautious about recruiting as case officers Americans of Middle Eastern origin, especially if they are first-generation Americans with relatives still living in the Middle East, or that the CIA is too reluctant to share secret information with other agencies that might need it, such as local police departments.
In both cases, if the agency took the risks its critics urge it to take, there would be embarrassing failures—case officers who turned out to be moles or leakers, secrecy that was compromised. These failures would be denounced as scandals; the official who had signed off on the recruit who turned bad, or the person who shared information with an agency that subsequently leaked it, would be disgraced.
Yet if failure were avoided and risk taking in recruitment and sharing improved the agency’s performance, the improvement would be gradual and diffuse, and little or no credit would accrue to the official who had taken the risks. So it is best from a career standpoint to play it safe—and the drumbeat of criticisms of the intelligence agencies as risk averse will, ironically, make them play even safer by underscoring the career repercussions of an intelligence failure.