A disastrous “bolt from the blue” attack kills thousands; enraged politicians and pundits point fingers; committees gravely recommend changes; a massive reorganization of the nation’s security and intelligence organs follows. Sound familiar? It’s the chain of events that followed not only the attacks of September 11, but also those of December 7, 1941 in America and October 1973 in Israel; you can find a similar pattern at work in the shocking fall of British Singapore to the Japanese in 1942 and even in the Roman Senate’s reaction to the surprising irruption of Hannibal into the Italian peninsula.

Yet to listen to today’s conventional wisdom — we can make intelligence much better, and doing so will make us much safer — one would think that the problem of surprise attacks and intelligence failures had never before been addressed by governments. After 9/11, political leaders and opinion-makers on both sides of the aisle clamored for sweeping and immediate reform: The Democrats banged the table for the full-fledged adoption of the 9/11 Commission’s recommendations; congressional Republicans and their think tank allies demanded a more “forward-leaning” human intelligence service; and chin-stroking centrists argued the nation should put more dollars into intelligence collection, steering clear of the sharper-edged policies that earn us the world’s opprobrium. Then came the intelligence missteps associated with the Iraq wmd program, at which point the demand for a better intelligence system grew strong enough to overwhelm the natural inertia of the status quo.

Six years after September 11, this wave of reformist zeal has finally crested — for the time being at least — leaving an opening for us to take stock of what all the sound and fury has left behind. Some good ideas have been proposed and implemented, and some bad ones as well; none, however, is likely to make our intelligence dramatically better or the U.S. dramatically safer. Examining why this is so, both theoretically and practically, reveals a more nuanced picture of what intelligence can do, how it can be improved, and how it fits into a smoothly running national security system.

Some needed reforms

First, the good news: There hasn’t been a mad rush to string up the unlucky along with the few incompetent, as there was in the wake of Pearl Harbor. The narrative innards of the 9/11 report on the September 11 attacks and the wmd Commission report on Iraq War intelligence errors did not purport to chronicle sorry tales of stupidity, incompetence, and failure.1 Neither report tried to pin the blame for what had gone wrong on one administration or party. Even the more strident congressional reports on 9/11 and the Iraq wmd fiasco avoided too much mud-slinging. Instead, they all told very complicated stories and drew somewhat mundane lessons: There was insufficient cooperation between the foreign intelligence apparatus and the fbi; cia analysts feared embarrassment if they were shown to have underestimated Iraq’s nuclear program for a second time; and, more generally, the behavior and intentions of al Qaeda and Iraq were, when judged fairly, extremely hard to decipher. Both reports focused on analysis as the most troubled area of intelligence performance and directed their recommendations toward rectifying its faults. Ultimately, both reports implied that the American intelligence system had serious flaws but that the difficulties of understanding the threats of terrorism and weapons of mass destruction are not susceptible to easy fixes.

The tone of the reforms that followed, however, was set by the recommendations of the 9/11 Commission Report, released in late summer 2004. It was these, backed by an aggressive public relations campaign by the commissioners, that were largely written into law with the passage of the Intelligence Reform and Terrorism Prevention Act of 2004. (The wmd Commission recommendations, released in spring 2005, focused chiefly on developing a leadership program for the director of national intelligence and other intelligence community leaders in light of the new law.)

The 9/11 Commission’s sweeping recommendations were not, unfortunately, nursed to maturity in the same painstakingly careful manner as the narrative. They were drafted in the last few weeks of the Commission’s work, as Chairmen Thomas Kean and Lee Hamilton relate in their somewhat immodestly titled Without Precedent: The Inside Story of the 9/11 Commission. And they did not follow logically from the rather more complicated story told in the report’s narrative.2 But the finer details were lost amid the perfect storm generated by the release of the report in the months before the bitterly fought 2004 election. John Kerry quickly took up the Commission’s recommendations and began casting aspersions on the president for not showing sufficient reverence for their wisdom. The Bush administration was initially resistant to the wholesale adoption of the Commission’s advice. But the parade of politicians and pundits calling for immediate action proved too much to resist, and so the administration made the best of the situation. Thus the Intelligence Reform Act, which became law in December 2004.

The fact that the act’s big-ticket reforms did not necessarily follow from the lessons of the 9/11 attack didn’t mean that it was bad for the nation’s intelligence apparatus overall. Needed reforms were pushed through: the creation of the position of director of national intelligence,3 a single official vested with considerable powers over the budget and personnel of the entire intelligence community; the separation of the functions of the leader of the intelligence community from those of the head of the cia, long a burr under the saddle of the other intelligence agencies; the appointment of a program manager to develop an information-sharing “environment” across the government (an aim admittedly in some tension with the act’s centralizing tendencies); the beginning of a long-overdue integration of intelligence work’s foreign and domestic spheres; the encouragement of language training for intelligence officers; the promotion of a “joint” culture across intelligence elements; and others.

But the act also canonized certain deeply problematic approaches to intelligence reform, which will need to be corrected if reform is going to prove a fruitful rather than misguided enterprise. The habits of a new regime tend to persist, honored by subsequent generations, unless checked by other forces in the system or the hard shock of adverse events. The U.S. intelligence community has witnessed this phenomenon already and suffered from it. Early waves of cia officers idolized the buccaneers of the Office of Strategic Services, who prized intrepidity, bravery, and action over diligence, security, and care. Only after repeatedly having its hat handed to it by the kgb in the 1950s, in Ukraine, Albania, and elsewhere, did the cia begin to emphasize the drier, but more apposite, work of careful espionage. Likewise, the massive push for intelligence reform in the wake of 9/11 and Iraq is likely to set the parameters for the intelligence community in the coming years. It is therefore imperative that the mistaken emphases and presumptions of the act be identified and corrected.

Deeper problems

The governing thrust of the act was a centralization of control. Often championed as a solution to governmental failure by blue ribbon commissions, centralization is a blade that can cut both ways. In some areas of the intelligence community, centralization could represent a major improvement — for instance, in the development of complex, long-term technical projects. The appointment of a single, final decision-maker with a broad perspective can bring a more rational approach to the development and management of major technical initiatives. This move may well yield considerable savings and a more appropriate technical infrastructure for the intelligence community. In other fields, and particularly that of operations, centralization was carefully limited, and so its influence will, for good or ill (probably more for good), be less marked.

The area where centralized decision-making has the potential to do severe damage is analysis — the main target of reformist zeal in the wake of the various commission reports, which chastised intelligence community analysis for “lack of imagination” and “failing to connect the dots” in dealing with both 9/11 and Iraq. The act shifted the prestigious National Intelligence Council into the dni’s office and gave the dni and his deputies primary responsibility for improving the quality of analysis. This has resulted in some very commendable developments, particularly by widening the once-exclusive channel between the cia and the White House. A wider variety of analytic products now makes its way to the Oval Office and other top decision-makers. For instance, the President’s Daily Brief now includes submissions from intelligence components other than the cia. Separately, the creation of the National Counterterrorism Center, lambasted by critics as an undue centralizing step, has actually resulted in an increase in serious competitive analysis while reducing duplication and confused warning. Generally, there is a wider range of analytic voices informing policy today than there was before the Intelligence Reform Act, and that is a good thing.

That said, there are reasons to be worried. This centralizing tendency could, particularly under the pressure of another intelligence “failure” (always a possibility), encourage the intelligence community’s leadership to keep a tight leash on analysts and to insist on knowing exactly what is going to the president and senior leadership — an understandable temptation, since they are likely to be held personally accountable for it. (Ask George Tenet.) Such a temptation must be resisted. Centralized analysis funneled through a narrow leadership cordon will gravitate toward the lowest common denominator; it will avoid risky but potentially spot-on judgments, shun possibly illuminating informed speculation, and generally hew closely to a safe middle line. It will continue to focus on the near term, on beating cnn rather than on providing valuable long-term strategic estimates. Such analysis will be of little use to its consumers. For the issues central to their responsibilities, policymakers do not need safe estimates ginned up through political science models; they need the thought-through estimates of experienced analysts who are knee-deep in both classified and unclassified material.

But there are deeper, systemic problems reaching to the very discipline of analysis. Indeed, analysis in the intelligence community needs recalibration if it is to prove useful in a world in which politics and society, intentions and behavior, and radicalization and reform are often of greater importance than the traditional pastimes of counting warheads and tanks. The cia’s analytic corps, in particular, is saddled with a dedication to political science as a discipline, multiple layers of review, and an institutional interest in not veering too far off the reservation, all of which have left it ill-suited to the analytic challenges of the new era.

Unfortunately, the cia’s ardor for political science goes deep. The U.S. intelligence community’s analytic arms grew to maturity during the years of highest confidence in academic social science, and many of the top analysts in the early years of the cia and its affiliated arms seized on the power of social science to tame the unpredictable. Sherman Kent, author of the influential Strategic Intelligence for American World Policy (Princeton University Press, 1949) and longtime head of the Board of National Estimates (roughly equivalent to today’s National Intelligence Council), focused on grounding cia intelligence analysis in empirical social science, aggregating the data that came in from human and technical sources worldwide, and building up a coherent model that would allow cia analysts to deliver viable predictions to their policymaking “consumers.”

Kent’s way of thinking has persisted in the intelligence world, developing into a “tradecraft” of analysis that emphasizes a social science method of models and empiricism. Young analysts are taught to pore over reams of intelligence reports, sift for patterns, construct analyses based on the models of political science, and submit their “products” for review by more senior analysts, who are presumably better schooled in the process. Intuition, contextual details, and history (not to mention humor) tend to get second billing and are generally filtered out as they pass up the chain. The theory behind the approach is that political science, like the natural sciences, provides the techniques for turning masses of data into systematic knowledge and, most preciously, predictions. This is why the cia sees relatively little problem in having its analytic workhorses be recently minted graduates of doctoral, master’s, and even undergraduate institutions. If the model works, any bright young fellow can manage it.

There are serious problems with this approach, principally when it is applied to political and social phenomena. Critics of the intelligence apparatus have correctly pointed to the limits of the empiricist mode of knowing (though they have often veered too far in the opposite direction, overemphasizing ideology as an explanatory factor in political behavior). Meanwhile, government policymakers of all political stripes discount as a matter of course the social scientific jargon they receive from the big intelligence agencies, opting instead for direct access to original reporting, personal contact with individuals (inside or outside government) steeped in a particular region or issue — think, for example, of the long line of senior officials seeking the counsel of Bernard Lewis — or other methods of gathering knowledge. New entrants to policy jobs routinely remark on how disappointed they are that the purported “mighty Wurlitzer” of the intelligence agencies seems to give them only warmed-over political science papers.

It should come as no surprise, however, that the cia has not discovered the alchemist’s key to reducing human phenomena to scientific categories. It can hardly be faulted for that, after all. The impulse of social science to predict, understand, and categorize is fallacious at the deepest level, discounting the complexity of human beings and especially of human interaction. We are actors with responsive and intelligent wills, the intermingling of which invariably frustrates the attempt to predict our behavior beyond the most general terms. Human beings cannot and will not be planned for; we will react to and frustrate any attempt to bind us into neat predictive categories.

What this means for intelligence is that models of understanding founded on a presumed symmetry between human behavior and the behavior of natural phenomena are fatally flawed. Just as the great planning initiatives of the mid-twentieth century, the era of John Maynard Keynes, John Kenneth Galbraith, Walt Rostow, and the Great Society, failed because they underestimated the inherent complexity of human relations, so too will an analytic model fail if it is built on fitting facts into strict predictive frameworks, as in a chemical experiment. Ronald Reagan no doubt gleaned as much from talking with and reading Alexander Solzhenitsyn as he did from all the intelligence analysis on life in Soviet Russia presented to him daily. An intelligent reader of Vaclav Havel and Adam Michnik would have seen more deeply into the existential predicament of the Eastern Bloc than one confined to the political science literature. Likewise, Bill Clinton may have learned as much about the mentalities of the peoples of the Balkans from reading Robert Kaplan’s Balkan Ghosts as from the great number of bloodless government reports he received during the 1990s. Nor, from an earlier era, can one ignore Hitler’s intuitive understanding of French weakness, which proved more predictive than his staff’s insistence that the correlation of forces favored the Third Republic over the Third Reich. Indeed, in some sense political leadership is precisely the mastery of such intuition and practical wisdom about others’ situations.

When training the new generation of analysts, therefore, the intelligence community should focus not on achieving the hopeless twentieth-century dream of taming human life through predictive social science, but rather on the murkier but more realistic categories of practical wisdom and intuition. History, biography, culture, religion, social life and expectations — these should be critical parts of any political analyst’s field of regard. This is particularly the case when analyzing political and social developments, including the strategies of foreign leaders; these are the issues that have become ever more salient in a world of terrorists, rogue states, and asymmetric threats.

This is not to discount the absolute importance of empiricism in certain fields of intelligence analysis, particularly the crucial areas of military balances and capabilities, which predominated during the Cold War. In such matters, empiricism is to be preferred over intuition. Counting Soviet icbms did not require considerable political understanding — Solzhenitsyn was of little help on this problem. The social science model, though deeply flawed, was well-suited for some of the biggest challenges of the Cold War intelligence system. Understanding the Kremlin’s mindset was only one of the intelligence community’s duties, after all. It was also charged with keeping tabs on the Soviets’ strategic and conventional forces, a military task that lent itself to a social science approach. It therefore made sense that great energy was invested in determining the intelligence community’s judgment about the numbers and dispositions of Soviet nuclear weapons. So too, today, the intelligence community must provide rigorous and empirically grounded analysis on a range of capabilities issues, from the state of the North Korean or Iranian nuclear programs to China’s delivery systems and so on. It is vitally important that such technical know-how flourish within the intelligence community, since it is only within the community that access to the fullest range of data is available.

But in today’s world the intelligence challenges have become much more political in nature. How will “rogue states” behave? What motivates radical terrorists? What is the drift in Muslim or Iraqi public opinion? These are questions of intention and of human decisions. They are not questions that can be answered satisfactorily by being run through a computer. They require a depth of knowledge, a humility about our ability to understand and predict, and a holy fear of the power of contingency.

It is precisely in the nexus between capabilities and intentions that the toughest intelligence challenges are to be found. A well-ordered analytic infrastructure in the intelligence community would, therefore, redress the excessive tilt towards empiricism by “breathing with both lungs,” empiricist and intuitive-historical. To simplify, hard empirical and technical analysis would provide the foundation from which historical-intuitive estimating would proceed to render judgments about intentions, political development, and so forth. This would stand in marked contrast to the empiricist-dominated, capabilities-based assessment model that was handed down from the Cold War and that, for instance, prevailed in the analysis on Iraq’s wmd programs in the years before 2003. The intelligence community would provide policymakers both with the hard facts and with reasoned, intelligent, and subtle estimates on what those hard facts actually mean.

Institutionally, this would mean a reduction in bureaucratic overhang, particularly at the cia and the other big intelligence agencies, such as the Defense Intelligence Agency. Interesting and incisive products are unlikely to emerge from a succession of reviews that tend to chip away at the sharp-edged analytic judgments that need to be encouraged. As in the academic arena, “peer review” should take the place of review by increasingly disconnected senior analytic managers. cia and dia might take a cue from an old rival, the State Department’s Bureau of Intelligence and Research, whose analysts are incentivized to become true experts, individuals whom senior officials can call on for a thorough understanding of a given country or topic. cia could also become more receptive to recruiting analysts laterally from positions in academia or think tankery, a step that could also have the practical effect of bringing in individuals trained to develop intellectual work in a less bureaucratic but highly competitive atmosphere. In such a system, a strengthened tendency to take interesting positions designed to establish one’s professional reputation would be counterbalanced by a terror of not having one’s facts straight — as in (at least parts of) academia.

With reforms such as these in analysis, coupled with a more national and rational perspective on program development and management, a push for further integration of domestic and foreign intelligence, and an incentivization of information sharing rather than hoarding, the U.S. intelligence system might realistically improve its performance significantly.

No hidden key

But “significantly” means only something roughly on the order of 5 percent to 10 percent — a massive improvement when the cost and importance of intelligence are taken into account, but certainly no grand “transformation.” This is where the second, deeper, and more intractable fallacy of intelligence reform comes in: the hope for a “transformation” in intelligence — a new, radically different, improved level of performance, which will allow the intelligence community to bear a much heavier burden in the uncertain future of wmd proliferation and catastrophic terrorism.

Such a grand “transformation” seems to be precisely what many, particularly those outside of the intelligence community, hope for. Intelligence, it is said, is fundamentally “broken” and needs to be “fixed,” made “better.” Of course, politicians routinely use such language — intelligence is hardly unique in this respect. What is different, however, is the extent to which hopes for a radical improvement in the quality and quantity of intelligence serve as grounds for opposing other security policies that officials would rather avoid. The promise of “better” intelligence is presented as grounds for dispensing with the government’s interrogation policies, for instance, for a blanket denial of the possibility of preventive or preemptive war, for opposition to domestic wiretapping, for a more sanguine attitude towards the brinksmanship of rogue states, and so forth. If, as is hoped, far more and better intelligence can be derived from the traditional methods of collection — human and technical — then there is no need to make hard decisions on these other, more difficult matters. Intelligence reform becomes an excuse for inaction.

Intelligence cannot bear this burden. Conceptually, as described above, intelligence is the art of prediction about human beings, and this art is in se a supremely inexact one. There is no hidden key that will remove contingency and surprise from calculations about American security. Kim Jong-Il, the Iranian leadership, Hu Jintao, and even our allies will continue to behave in ways that defy many of our predictions, in part because they will take our rational predictive models into account when they decide how to behave. Equally important, as the controversial theorist of intelligence Donald Rumsfeld has pointed out, there will always be unknown unknowns — phenomena we do not see coming at all. As Richard Posner speculates in his book Catastrophe, who is to say that the next great terrorist attack will not come from an obscure Aum Shinrikyo-like cult rather than al Qaeda? The range of possibilities is infinite, while the mental energy and man-hours of our analysts are finite, and so we will always be groping in a great deal of darkness. Furthermore, as a practical matter, the intelligence that we need is almost always extremely hard to come by; it is protected by our targets for a reason. It is worth considering, for instance, that the difficulties the U.S. appears to be experiencing in gathering information on the Iraqi insurgency are not unprecedented; the U.S. had similarly little success in understanding the Viet Cong.

This profound limitation means that hard decisions on security policy cannot be avoided. The best intelligence performance cannot, as John Keegan demonstrated in his book Intelligence in War, change the fact that wars (including the war against al Qaeda and its ilk) are ultimately won by a combination of force and will. Indeed, American strategic planners should be well aware of this fact. After all, it wasn’t our intelligence services that outlasted the Soviet Union during the Cold War — it was our willingness to use overwhelming force, manifested in our nuclear weapons combined with our impressive conventional forces. If anything, our intelligence services were beaten in the espionage field by the Soviets and their myrmidons, such as the Cubans. The prospect of assured destruction — a devastating retaliatory attack against Soviet value and force targets; in plain terms, the mass annihilation of the Soviet leadership and population — was enough to dissuade the Russian bear from embarking on the century’s third land war in Europe.

So too it is today. Our security is ultimately guaranteed by the strength of our capabilities backed by our will to use them — in particular our credible nuclear and conventional deterrent and retaliatory capabilities — not by our intelligence services or fancy diplomatic footwork, important as these may be. Better intelligence will not, for instance, solve the problem of how to deal with proliferators of wmd. Even if the U.S. and its allies could track all possible wmd shipments — itself an extremely tall order — we would still have to choose whether to use force against them and thereby risk escalation. Had the intelligence on Iraq been correct, for instance, it would not have changed the core decision facing the country: whether to go to war against a possible proliferator and regional threat. Likewise, better understanding of terrorist or insurgent groups will not by itself enable us to defeat them. Such groups must, once known and understood, still be broken through force.

The burden intelligence can bear is a lighter but still important one. Intelligence is a secondary part of a solid security system, a mechanism to provide warning, reduce accidents and misunderstanding, and give leaders a clearer picture from which to make decisions. Thus intelligence cannot relieve us of the need to face squarely the tough security policy questions that embroil us today. No one should point to better intelligence as an excuse for shutting down Guantanamo or enjoining the president’s Terrorist Surveillance Program. Whether right or not, proponents of these moves need to be clear that less controversial intelligence methods will not make up the ground lost by cancelling these and similar initiatives.

This is not to say that intelligence could not lend support to a less “forward-leaning” strategy than has been adopted in the past five years. Indeed, if the U.S. returned to a strategy of deterrence, intelligence could play a very visible and useful, if secondary, role.4 In such a system, intelligence would be embedded in a strategy that would emphasize overwhelming force and the will to use it in appropriate circumstances. Such a posture, recognizing the new world in which weapons of mass destruction will inevitably disseminate outwards to new states and downwards to sub-state actors, would provide for the safety of this country by developing “red lines” beyond which states would face the real possibility or even certainty of destructive American action. It would shift the burden of proof and liability for wmd use against the United States and its allies to those states that proliferate or fail to demonstrate that they are not proliferating. Within such a system, intelligence would have clear and important roles: keeping close tabs on those state and even sub-state programs not subject to transparent inspection by the U.S. and the international community, tracking possible proliferation moves, and analyzing any wmd that are used in order to determine their origin and enable a devastating American response. Intelligence would then serve its natural roles of watchdog and forensic expert, giving our adversaries reason to fear that their hostile actions would not pass unnoticed. Though such a system would mark a significant change from the policies of the past five years, there is much to recommend it.

Eternal vigilance

The reordering of the nation’s intelligence apparatus is one of the most significant national security reforms in recent history. When coupled with the “transformation” at the Department of Defense, it is part of a major shift in the U.S. national security apparatus designed to address the new threats posed by the post-Cold War world. As with shifts in security structures in generations past, the underlying principles of these reforms and their application will be of great consequence. One need only look back at the opposing lessons that the French (and, following them, American) and German militaries drew and applied in the interwar years, and their respective successes on the battlefield in the Second World War, to see how important these reforms can be for the fate of nations.

Intelligence reform must first and foremost maintain a sense of proportion, of the limited possibilities of intelligence. Contingency and surprise are irremovable elements of any social system, and the international system is perhaps the finest example of this truth. Intelligence is most useful to its consumers, then, when it can marry secret information to deep expertise in thoughtful estimating — not when it pretends to a faux scientific certainty that neither the subject matter nor the available facts will permit. The Butler Report, the British counterpart to the American wmd Commission, opens with a wise quotation from Clausewitz: “Much of the intelligence that we receive in war is contradictory, even more of it is plain wrong, and most of it is fairly dubious. What one can require of an officer, under these circumstances, is a certain degree of discrimination, which can only be gained from knowledge of men and affairs and from good judgment. The law of probability must be his guide.” If the American intelligence reform effort takes this wisdom as its guiding principle, then there is reason to think that our intelligence establishment will be able to meet the great challenges of the new world of threats that we face. But it will never overcome them entirely, for the intelligence community’s mantra should be not that the truth will make us free, as the cia’s is, but that the price of liberty is eternal vigilance.

1 Disclosure: The author was a staff member on the Commission on the Intelligence Capabilities of the United States Regarding Weapons of Mass Destruction (“wmd Commission”) and focused particularly on the Iraq intelligence errors.

2 Richard Posner analyzed this disconnect in an incisive deconstruction of the commission report in the New York Times Book Review (August 29, 2004). He has expanded these arguments in his subsequent books Preventing Surprise Attacks (Rowman & Littlefield, 2005) and Uncertain Shield: The U.S. Intelligence System in the Throes of Reform (Rowman & Littlefield, 2006).

3 Disclosure: The author worked as a staff member in the Office of the Director of National Intelligence in 2005–06.

4 For a fuller description of the author’s view of such a strategy, see Elbridge Colby, “Restoring Deterrence,” Orbis (Summer 2007), 413–428.
