The Cuban missile crisis marks its 50th anniversary this year as the most studied event of the nuclear age. Scholars and policymakers alike have been dissecting virtually every aspect of that terrifying nuclear showdown. Digging through documents in Soviet and American archives, and attending conferences from Havana to Harvard, generations of researchers have labored to distill what happened in 1962 — all with an eye toward improving U.S. foreign policy.

Yet after half a century, we have learned the wrong intelligence lessons from the crisis. In some sense, this result should not be surprising. Typically, learning is envisioned as a straight-line trajectory where time only makes things better. But time often makes things worse. Organizations (and individuals) frequently forget what they should remember and remember what they should forget.

One of the most widely accepted lessons of that frightening time — that the discovery of Soviet missiles in Cuba constituted a stunning American intelligence success — needs to be challenged. An equally stunning intelligence-warning failure has been downplayed in Cuban missile crisis scholarship since the 1960s. Shifting the analytic lens from intelligence success to failure, moreover, reveals surprising and important organizational deficiencies at work. Ever since Graham Allison penned Essence of Decision in 1971, a great deal of research has focused on the pitfalls of individual perception and cognition as well as organizational weaknesses in the policymaking process. Surprisingly little work, however, has examined the crucial role of organizational weaknesses in intelligence analysis. Many of these same problems still afflict U.S. intelligence agencies today.

The pre-crisis estimates of 1962

The empirical record of U.S. intelligence assessments leading up to the crisis is rich. We now know that between January and October 1962, when Soviet nuclear missile sites were ultimately discovered, the cia’s estimates office produced four National Intelligence Estimates (nies) and Special National Intelligence Estimates (snies) about Castro’s communist regime, its relationship with the Soviet Bloc, its activities in spreading communism throughout Latin America, and potential threats to the United States.

These were not just any intelligence reports. nies and snies were — and still are — the gold standard of intelligence products, the most authoritative, pooled judgments of intelligence professionals from agencies across the U.S. government. Sherman Kent, the legendary godfather of cia analysis who ran the cia’s estimates office at the time, described the process as an “estimating machine,” where intelligence units in the State Department, military services, and cia would research and write initial materials; a special cia estimates staff would write a draft report; an interagency committee would conduct “a painstaking” review; and a “full-dress” version of the estimate would go down “an assembly line of eight or more stations” before being approved for dissemination.1

The four pre-crisis estimates of 1962 reveal that U.S. intelligence officials were gravely worried about the political fallout of a hostile communist regime so close to American shores and the possibility of communist dominoes in Latin America. But they were not especially worried about the risk of a military threat from Cuba or its Soviet patron. The first estimate, released January 17, 1962 (snie 80–62), was a big-think piece that assessed threats to the United States from the Caribbean region over the next 20 years. Although the estimate considered it “very likely” that communism across the region would “grow in size” during the coming decade, it concluded that “the establishment of . . . Soviet bases is unlikely for some time to come” because “their military and psychological value, in Soviet eyes, would probably not be great enough to override the risks involved” (emphasis mine). The time horizon is important, suggesting confidence that Khrushchev would be unwilling to risk establishing a Cuban base for at least several years. Indeed, the estimate later noted that its judgment about Soviet bases “might not extend over the entire period under review.” Considering that the review period stretched 20 years into the future, this is quite a statement; the intelligence community’s long-term estimate anticipated no short- or even medium-term crisis brewing.

The second estimate was issued March 21, 1962 (nie 85–62), and had a narrower time horizon and scope: analyzing “the situation in Cuba and the relationships of the Castro regime with both the Soviet Bloc and Latin American Republics” over the coming year. Again, the estimate heavily discounts the possibility that the Soviet Union would defend Cuba or establish offensive military capabilities there. The estimate notes that despite Castro’s vigorous efforts to secure a security guarantee, the Soviet Bloc “has avoided any explicit military commitment to defend Cuba.” Later, the assessment uses much stronger estimative language, stating that “the ussr would almost certainly not intervene directly with its own forces” were Castro’s regime overthrown by internal or external forces and that although the Soviets would respond to an overthrow with strong political action, the ussr “would almost certainly never intend to hazard its own safety for the sake of Cuba” (all emphases mine). These terms are not just thrown around. Carefully chosen and reviewed in the estimates process, they are meant to convey a high degree of certainty. In fact, Khrushchev made the decision to deploy nuclear missiles to Cuba about ten weeks later, between May and June of 1962. Massive numbers of Soviet troops were roaming about the island by fall (although total numbers were not known for the next 25 years), and nuclear missiles began arriving in early September.

The third estimate was disseminated on August 1, 1962 (nie 85–2–62), just a couple of weeks after intelligence began indicating a major Soviet arms buildup in Cuba. Although it acknowledges that Soviet Bloc “military advisors and instructors” were believed to be in Cuba, along with “Bloc-supplied arms and equipment,” the estimate once again notes that the Soviet Union “has avoided any formal commitment to protect and defend the regime in all contingencies.” The assessment further states, “We believe it unlikely that the Bloc will provide Cuba with the capability to undertake major independent military operations” or that “the Bloc will station in Cuba Bloc combat units of any description, at least for the period of this estimate.”

The fourth and crucial estimate before the crisis was released September 19, 1962 (snie 85–3–62). This time, however, the situation was vastly changed: Starting in mid-July, a stream of intelligence reporting from both technical and human sources began indicating a massive arms buildup. This reporting increased dramatically in August and September, and the estimate’s heading reflected these developments. Whereas the March and August estimates were blandly titled, “The Situation and Prospects in Cuba,” the Special National Intelligence Estimate of September 19th carried a more ominous title: “The Military Buildup in Cuba.” The estimate notes that between mid-July and early September, approximately 70 ships had delivered Soviet weaponry and construction equipment. That number was three to four times greater than total Soviet shipments for the entire first half of 1962. Indeed, so concerned was President Kennedy by the new intelligence that he made explicit public warnings on September 4th and again on September 13th that if the Soviets placed offensive weapons in Cuba, “the gravest issues would arise,” a warning understood to imply potential nuclear confrontation.

Nevertheless, this crucial intelligence estimate still concluded that “Soviet policy remains fundamentally unaltered.” For the fourth time in nine months, a national intelligence estimate asserted that Soviet activities in Cuba were meant to deter an American attack there and sustain a vital ideological victory for the communist cause. Engrossed by the political threat of a strengthened communist regime in the Western hemisphere, the estimate considered but ultimately dismissed the possibility of a major offensive Soviet base. “The establishment on Cuban soil of Soviet nuclear striking forces which could be used against the U.S. would be incompatible with Soviet policy as we presently estimate it,” the estimate starkly concluded. The estimate justified this judgment at some length, noting that the Soviets had never placed any such weapons even in Soviet satellite countries before,2 that missiles would pose significant command and control problems, that they would require “a conspicuously larger number of Soviet personnel” in Cuba, and that the Soviets would “almost certainly” know that such a move would provoke “a dangerous U.S. reaction.”

Two years later, Sherman Kent categorically concluded that the September 19th estimate’s judgments about Soviet intentions turned out to be wrong; Khrushchev had “zig[ged] violently out of the track of ‘normal,’” and U.S. intelligence agencies had missed it. More recently, declassified Soviet archives reveal that the intelligence community’s mistakes were not confined to misjudging Khrushchev’s intentions. They also included erroneous conclusions about what Kent terms “indisputable facts.” For example, the estimate confidently asserts that Soviet military personnel increased from 350 to 4,000 during 1962, and that “conspicuously larger” numbers of Soviet personnel would have to be present to indicate a potential nuclear missile site. It turns out that conspicuously larger numbers of Soviet military forces actually were in Cuba at the time — we just didn’t know it. Soviet forces numbered 41,900, a figure ten times higher than the September estimate. cia estimators assumed this key indicator of a Soviet strategic missile base would be easy to see. Indeed, it would be “conspicuous.” Instead, U.S. intelligence officials were unaware of the full size of the Soviet troop deployment for the next 25 years.

The intelligence success narrative

For decades, scholars and practitioners have been reluctant to call the pre-crisis intelligence estimates of 1962 a strategic warning failure. Instead, the intelligence narrative of the Cuban missile crisis has taken two contradictory forms. One argues that intelligence warning succeeded; the other admits that no accurate warning was possible and that U.S. intelligence estimators did the best that anyone could.

The first success narrative conflates causes and outcomes. Because the crisis ended without nuclear confrontation and dealt Khrushchev a major political defeat, there is a natural tendency to conclude that U.S. intelligence warning worked well. As James Blight and David Welch note, “American intelligence did positively identify Soviet missiles prior to their becoming operational, which permitted the Kennedy administration to seize the initiative in attempting to secure their removal.”3 Raymond Garthoff echoes these sentiments, writing, “Intelligence did do its job.”4

Yet two arguments suggest otherwise. First, although it is clear that U.S. intelligence officials discovered Soviet missiles in Cuba days before they became operational, it is equally clear that they utterly failed to anticipate the presence of Soviet missiles in Cuba every day before then. As Cynthia Grabo notes in her classic work Anticipating Surprise: Analysis for Strategic Warning, the essence of warning is not presenting a list of iron-clad facts but anticipating and preventing the looming and murky danger of strategic surprise. To be sure, for most of 1962, there were no nuclear missiles in Cuba to be found.5  Still, none of the intelligence estimates sounded the alarm to be on the lookout for such a possibility; indicated specifically what factors, other than large numbers of troops, might conceivably change the assessment of Khrushchev’s intentions; or urged policymakers to take seriously the idea that the Soviets could be up to something more. Quite the contrary. All four of the estimates had a distinctly reassuring quality to them, highlighting inferences and evidence in ways that suggested policymakers need not worry about a Soviet offensive base in Cuba. Rather than inoculating the Kennedy administration against the horrors of a possible Soviet missile surprise in Cuba, the estimates made the surprise all the more sudden, shocking, and total.

Second, the contingency of history also cautions against finding intelligence warning success in chancy, happy outcomes. In the case of the Cuban missile crisis, each passing decade brings new and frightening evidence of how Kennedy’s “seizing the initiative” after seeing those u2 photographs of missile sites nearly led to nuclear disaster, not American victory. Transcripts of Kennedy’s secret Excomm meetings reveal that had the president made his decision on the first day of the crisis rather than the seventh, the United States would have launched an air strike against Soviet missiles in Cuba that could very well have triggered thermonuclear war. Scott Sagan has chronicled numerous instances during the crisis where mistakes (an American u2 pilot who accidentally flew into Soviet airspace, prompting the scramble of American f102-a interceptors armed with Falcon nuclear air-to-air missiles) or routine military procedures (including a previously scheduled test tape of a Soviet missile attack that ran during the crisis and was mistakenly identified as a real incoming strike) nearly spiraled out of control. In 2002, scholars unearthed terrifying new evidence that one Soviet submarine captain actually did order preparations to launch a nuclear-tipped torpedo off the American coast. On October 27, bombarded by U.S. Navy depth charges and running out of air, the Soviet captain gave the order to prepare a nuclear weapon for firing. “We’re going to blast them now! We will die, but we will sink them all. We will not disgrace our navy,” the Soviet intelligence report quotes the captain as saying. But in the heat of the moment, another submarine officer, Vasili Arkhipov, convinced him to await further instructions from Moscow.6 In short, the mounting evidence of narrow misses during the crisis suggests that luck played a pivotal role, and that the outcome could easily have been tragic. One wonders whether observers would feel quite the same about the performance of U.S. intelligence agencies had Soviet ss-4 nuclear missiles landed in South Florida.

The second variant of the success narrative maintains that U.S. intelligence estimators may have been wrong, but they did the best that anyone could. Sherman Kent himself wrote in 1964, “By definition, estimating is an excursion out beyond established fact into the unknown.” An estimator, he notes, will undoubtedly be wrong from time to time. “To recognize this as inevitable,” however, “does not mean that we estimators are reconciled to our inadequacy; it only means we fully realize that we are engaged in a hazardous occupation.” In this particular case, Kent admits to being dead wrong but then claims no one could possibly have predicted Khrushchev’s irrational behavior. “No estimating process,” he concludes, “can be expected to divine exactly when the enemy is about to make a dramatically wrong decision.” With a few exceptions, examinations of the Cuban missile crisis have picked up this theme.

Blaming the adversary for his unpredictable behavior is an odd argument, to say the least. The logic suggests, for example, that U.S. intelligence agencies should also blame the Chinese for their surprise entry into the Korean War, the Indians and Pakistanis for their unexpected 1998 nuclear tests, and Iranian President Mahmoud Ahmadinejad for his strange letters and on-again/off-again nuclear saber rattling (can’t these people act more predictably?). This argument also contradicts one of the most important maxims of intelligence warning: Good warning analysis does not discount anomalies, it targets them. Grabo’s primer, which has been required reading for warning analysts for years, notes, “While not all anomalies lead to crises, all crises are made up of anomalies.” By this measure, the Cuban missile crisis seems a textbook case of anomaly leading to crisis. The Soviets had never taken such risks before. Nor had they ever provided such an extraordinary level of military aid to Cuba. But starting in the spring of 1962, ships were sailing, and by summer, crates of weapons — lots of them — were being unloaded. Something different was definitely afoot, and U.S. intelligence officials knew it. Yet their estimates confronted these anomalies and declared them more of the same.

The benefits of calling a failure a failure

Calling something a success or failure is not simply an exercise in semantics. The categorization itself directs researchers to examine different questions, mount different arguments, or, as Allison put it so many years ago, fish in different conceptual ponds. In this case, viewing the Cuban missile crisis as an intelligence warning failure naturally shifts the explanatory lens from “showing why warning was so hard” to “identifying what went so wrong.”

Doing so reveals significant research gaps. Seeking to explain “why warning was so hard,” intelligence research on the crisis has focused primarily on cognitive psychology and the pitfalls inherent in human cognition. Organizational explanations, by contrast, have remained an under-tilled area. While much has been made of bureaucratic politics in presidential decision-making, little has been done to examine the silent but deadly role of organizational weaknesses in intelligence during the Cuban missile crisis. But more recent analyses of the September 11 terrorist attacks and the faulty estimates of Iraq’s weapons of mass destruction suggest that organizational weaknesses in intelligence can have devastating effects. And regarding the Cuban missile crisis, there are lingering questions surrounding such weaknesses.

Why did estimators miss the signals of Khrushchev’s true intentions?

Signals and noise have been a major part of every intelligence post-mortem since Pearl Harbor. Roberta Wohlstetter, who coined the terms, observed that intelligence warning requires analysts to separate “signals,” or clues that point to an adversary’s future action, from a background that is filled with “noise,” or intelligence indicators that turn out to be irrelevant, confusing, or just plain wrong. After the fact, of course, the signals are obvious. “Like the detective-story reader who turns to the last page first,” Wohlstetter writes, “we find it easy to pick out the clues.”7 Detecting the right signals before disaster strikes, however, is another matter.

Wohlstetter’s important insight warns against the perils of hindsight bias. But it has also generated analytic pathologies of its own, focusing our sights more on the ratio of signals to noise and the analytic techniques to improve individual perception than on the organizational forces that cause signals to get noticed or missed. Each time an intelligence surprise occurs, commissions, congressional committees, and scholars are quick to ask, “How many signals were there? How much noise existed? What analytic mistakes were made?” The answer is always the same: too few signals, too much noise, too many erroneous assumptions or inferences. Rarely, however, do we examine the silent organizational structures and processes that determine whether signals get noticed or ignored, amplified or dispersed. We have missed the crucial role of organizations.

A brief comparison of the Cuban missile crisis and the September 11 terrorist attacks illustrates the point. Immediately after the Cuban missile crisis, the steady refrain was that intelligence noise was tremendous while signals were scarce. cia Director John McCone wrote that his agency received 3,500 human intelligence reports from agents and Cuban refugees (who were debriefed at a special cia center in Opa Locka, Florida) claiming to spot Soviet missiles on the island before the crisis. Nearly all were wildly off the mark. According to the President’s Foreign Intelligence Advisory Board, just 35 of these reports turned out to be signals indicating the actual Soviet deployment. McCone finds even fewer, writing, “only eight in retrospect were considered as reasonably valid indicators of the deployment of offensive missiles to Cuba.”8  And Sherman Kent, who was responsible for the pre-crisis estimates, contends that at most, only three of these signals “should have stopped the clock.”

McCone, Kent, Wohlstetter, Garthoff, and others argue forcefully that these were terrible odds for detecting signals of Khrushchev’s nuclear gambit. But were they really? Looking back, the numbers actually look pretty darn good. Intelligence officials working in the weeks and months before the September 11 terrorist attacks would gladly have traded signals-to-noise ratios with their Cuban missile crisis counterparts. In 1962, there were just 5,000 computers worldwide, no fax machines, and no cell phones. By 2001, the National Security Agency was intercepting about 200 million e-mails, cell phone calls, and other signals a day. Although processing technology had improved dramatically, it was nowhere near enough. The collection backlogs at nsa alone were so enormous that less than 1 percent of the intake was ever decoded or processed. Against this astounding noise level, signal detection remained about the same as it was in 1962. I found that in the two years before 9/11, U.S. intelligence officials picked up a grand total of 23 signals that al-Qaeda was planning a major attack on the U.S. homeland.

As the comparison suggests, quantifying signals and noise tells part of the warning story, but not the most important part. In both cases, the crucial warning problem was not the precise number of signals; whether there were three or 30 or even 300 signals made little difference in the end. Instead, the crucial problem had to do with organizational deficiencies that ensured every signal, once detected, would eventually get lost in the bureaucracy. Chief among these organizational deficiencies was structural fragmentation — jurisdictional divisions within and across intelligence agencies that dispersed and isolated signals in different places.

Seven weeks before 9/11, for example, three of the fbi’s 56 U.S. field offices independently uncovered what turned out to be three key signals. In Phoenix, Special Agent Kenneth Williams identified a pattern of jihadists attending U.S. flight schools and wrote a memo urging that flight schools be contacted, specific individuals be investigated, and other intelligence agencies, including the cia, be notified. In Minneapolis, fbi agents arrested Zacarias Moussaoui, a suspicious extremist who wanted to fly 747s and paid $6,000 in cash to use a flight simulator but lacked all of the usual credentials. He became the only person convicted in the U.S. for his connection to the attacks. Third and finally, the fbi’s New York field office began searching for Khalid al-Mihdhar and Nawaf al-Hazmi, two suspected al-Qaeda operatives who ultimately hijacked and crashed American Airlines Flight 77 into the Pentagon.

Yet because the fbi field office structure was highly decentralized, none of the agents working these cases knew about the others. And because a gaping divide separated domestic and foreign intelligence agencies, the cia and the rest of the U.S. intelligence community never seized on these or other fbi leads in time, either. Instead, the Phoenix memo gathered dust, alerting no one. Moussaoui’s belongings (which included additional leads to the 9/11 plot) sat unopened for weeks as Minneapolis agents tried to obtain a search warrant — unaware of the Phoenix memo or the existence of another terrorist in fbi custody who could have identified Moussaoui from al-Qaeda’s training camps. An fbi agent went searching blindly for al-Mihdhar and al-Hazmi in New York hotels, unaware that the Bureau’s San Diego field office had an informant who knew both terrorists. In these cases, and 20 others, someone somewhere in the intelligence bureaucracy noticed something important. These and other signals were not drowned out by the noise. They were found, and then subsequently lost in the bowels of the bureaucracy.

Even a cursory look at the Cuban missile crisis suggests that structural fragmentation appears to have played a similar role then, isolating and weakening signals rather than concentrating and amplifying them. In 1962, just as in 2001, the Central Intelligence Agency was central in name only. Created just fifteen years earlier, the cia had been hobbled from birth by existing intelligence agencies in the State, Justice, War, and Navy Departments, all of which vigorously protected their own missions, budgets, and power. The cia, in fact, did not control the intelligence budgets or activities of the Defense Intelligence Agency, the National Security Agency, or any of the military intelligence services, all of which reported to the secretary of defense. What’s more, the Bay of Pigs debacle of April 1961 made a weak cia even weaker. Kennedy’s own distrust was so great that he sacked cia Director Allen W. Dulles and replaced him with a man none of his inner circle trusted: John McCone, a Republican businessman with staunch anticommunist leanings and no professional intelligence experience.

This structure meant that intelligence reporting and analysis of the Cuban situation were handled by half a dozen different agencies with different missions, specialties, incentives, levels of security clearances, access to information, interpretations of the findings, and no common boss to knock bureaucratic heads together short of the president. The Defense Intelligence Agency photographed deck cargoes of Soviet ships en route from the Soviet Union. The Navy conducted air reconnaissance of ships entering and leaving Cuba. The cia ran human agents in Cuba, but jointly operated a special Cuban refugee debriefing center in Florida with the military. The State Department handled diplomatic dispatches. The National Security Agency intercepted communications indicating Soviet ship movements, radio transmissions in Cuba, and other signals intelligence. At first the cia, and then the Strategic Air Command, manned u2 reconnaissance flights over Cuba. Estimates, finally, were technically produced by the cia’s Office of National Estimates but coordinated and approved by an interagency group called the U.S. Intelligence Board. In short, in 1962, as in 2001, there were many bureaucratic players and no one firmly in charge of them all.

Although a more thorough organizational analysis of how every signal was processed through the bureaucracy lies beyond the scope of this essay, initial evidence does suggest that organizational fragmentation existed, and that it had the effect of delaying action and hindering signal consolidation. For example, retrospective examinations by both the cia and the President’s Foreign Intelligence Advisory Board (pfiab) found that there was “rigid compartmentation” between aerial imagery collectors and cia analysts and that this structural divide kept the cia from disseminating reports and information about the possibility of offensive Soviet weapons in Cuba before the October 14th discovery of missile installations. The pfiab report on the crisis, which was completed in February 1963, finds that before October 14, cia analysts did not publish any information indicating a potential offensive buildup in Cuba in the president’s daily intelligence checklist, the most important current intelligence product. The reason: The agency’s rules required that any report that could be verified by photographic evidence first had to be sent to the National Photographic Interpretation Center (npic), a separate cia unit located in the Directorate of Science and Technology. For cia analysts housed inside the agency’s Directorate of Intelligence, this was the bureaucratic equivalent of Lower Slobovia. What’s more, there was no systematic process to inform analysts about the status of their aerial verification requests to npic, so requests could languish or simply disappear without any further action. Without any idea whether further action would ever be taken, analysts simply withheld information from their written products. The pfiab found that analysts mistakenly interpreted the verification rule as an outright ban on publishing all reports of offensive Soviet weapons without definitive photographic proof.

This same rigid bureaucratic division between analysis and photographic collection created a filter that appears to have hindered initial signal detection as well. According to the pfiab chronology, a September 9th report from Castro’s personal pilot claimed that there were “many mobile ramps for intermediate range rockets,” an item subsequently deemed significant. At the time, however, it was given only “routine” precedence because the cia analyst who saw it was charged with identifying information relevant for aerial surveillance, and thought the information was “too general” to be of targeting use.

In short, preliminary evidence suggests that the same organizational barriers operating on 9/11 were also at work during the missile crisis. Indeed, given the long and sordid history of intelligence coordination problems, it seems unlikely that Cuban intelligence reporting constituted a shining exception where intelligence warning signals were collected, assessed, and disseminated by a well-oiled coordination machine. Instead, in both cases, bureaucratic jurisdictions and standard operating procedures ended up creating invisible fault lines within and across intelligence agencies that kept signals from converging. Structural fragmentation made it likely that signals would get lost, even after they had been found.

Why were all four of the pre-crisis estimates so consistent, even in the face of alarming new evidence of a Soviet military buildup?

The four pre-crisis intelligence estimates of 1962 raise a second perplexing question: Why were these formal intelligence products so consistent even when intelligence reporting showed a dramatic uptick in Soviet military deployments to Cuba? Or more precisely, why did that final September 19th special estimate draw old conclusions about Khrushchev’s intentions despite new evidence that the Soviets were sending weapons and personnel in unprecedented numbers at unprecedented rates in August and September?

Recall that the estimate clearly indicated conditions on the ground had changed since the previous estimate, which was published on August 1, 1962. The September 19th estimate begins by defining its task as assessing “the strategic and political significance of the recent military buildup in Cuba and the possible future development of additional military capabilities there.” And it devotes substantial attention to discussing the precise nature of the buildup, declaring as fact that “In July the Soviets began a rapid effort to strengthen Cuban defenses against air attack and major seaborne invasion.” Notably, there are few estimative caveats in this section such as “we judge,” or “we assess,” or “it is likely.” Instead, the estimate states as a point of fact three developments: that “the bulk of the material delivered” to Cuba is related to the installation of twelve sa-2 surface-to-air missile sites on the western part of the island; that new shipments also include tanks, self-propelled guns, other ground force equipment, and eight “Komar” class guided missile patrol boats to augment Cuban defenses; and that there has been a “substantial increase in the number of Soviet military personnel from about 350 early this year to the current level of about 4,000.” The estimate is more speculative and less certain about other aspects of the buildup: when existing sam sites would be operational, the possible installation of additional surface-to-air missile sites on the eastern half of the island, the number of mig-21 interceptors deployed, future deliveries and missile capabilities of Komar class patrol boats, and the identification of recent crates, large boxes, and vans, which were believed to contain electronics and communications gear. Although the estimate notes that mig fighters could be used for offensive purposes, it concludes, “Nevertheless, the pattern of Soviet military aid to date appears clearly designed to strengthen the defense of the island.”

Other than the mig discussion, the estimate confines its assessment of possible offensive weapons — including a submarine or strategic missile base — to a different section titled “Possibilities for expansion of the Soviet buildup.” This report structure had the effect of sharply distinguishing present intelligence reporting about the military buildup from speculation about future possibilities. According to the estimate, intelligence about the buildup clearly showed the Soviets adopting a defensive posture, just as earlier assessments had concluded. The estimate does ponder the future, noting, “The ussr could derive considerable military advantage from the establishment of Soviet medium and intermediate range ballistic missiles in Cuba, or from the establishment of a Soviet submarine base there.” However, it concludes that “Either development . . . would be incompatible with Soviet practice to date and with Soviet policy as we presently estimate it.” In other words, earlier judgments about Soviet objectives and intentions still held.

In the immediate aftermath of the crisis, Kent took a great deal of criticism for the September 19th estimate. Nearly all of it centered on analytic misjudgments, particularly mirror imaging, or the tendency for analysts to believe an adversary will behave as they themselves would. And as noted above, more recent scholarly work also focuses on problems of perception and cognition. According to this work, American estimators failed to see the world or weigh the costs and benefits of the missile deployment through Soviet eyes.

But mirror imaging was not the only problem hindering the estimates process. Organizational pressures were also driving strongly toward conformity and consistency across the four Cuba estimates. These reports were not the product of a single mind, a single view, or even a single agency. They were collective reports that required interagency coordination and consensus. And that organizational fact of life tilted the whole estimating machine toward consistency over time. Why? Because consistency was what policymaking customers expected to find. Presidential advisors did not need to be convinced that the world essentially looked the same today as it did last month. But they did need to be convinced that the world looked different. Where consistency was a given, inconsistency had to be explained, justified, and defended. Changing a previous estimate required taking a fresh look, marshaling both new and old facts, and laying out what had shifted, and why. That, in turn, meant overcoming immense bureaucratic inertia — convincing every intelligence agency involved in the estimating process that what it said or assessed or wrote or agreed to the last time should be discarded or modified this time. Changing an earlier estimate did not just take more work inside each agency. It took more work negotiating new agreement across them. Generating interagency consensus on a new estimate that said “we have changed our collective minds” was invariably harder than producing a report that said “once again, we agree with what we wrote last time.” In short, organizational dynamics naturally gave consistency the upper hand.

Political considerations exacerbated these problems. By political considerations, I do not mean to suggest that estimators bent their judgments to curry favor or told policymakers what they wanted to hear. Instead, my point is that switching course on an analytic judgment is always harder when the political stakes for the country and the administration are known to be high. In these situations, any new estimate that revises earlier judgments can be seized on, however unjustifiably, as proof that earlier estimates were wrong.

The political atmosphere surrounding the Cuba estimates was intense. The Cold War stakes had never been greater and the cia had already caused Kennedy a devastating defeat in the Bay of Pigs invasion just eighteen months earlier. Now, with midterm congressional elections just weeks away, the pressure to “get Cuba right” was tremendous. In this environment, an intelligence estimate that gave serious consideration to a new, more ominous reading of the Soviet buildup would almost certainly have been read as an indictment of earlier, less alarming estimates. And it would have contradicted earlier public assurances by the president himself, as well as his closest advisors, that the Soviet buildup was purely defensive in nature. Such considerations may not have been in the foreground of the estimates process, but it is hard to imagine that they were not in the background. At that precise moment, on that particular topic, consistency was a safe and prudent course while inconsistency carried substantial risks, both for the intelligence community and the president.

Why didn’t anyone offer dissenting views in the intelligence estimates?

The above discussion helps illuminate why the estimates were consistent even when confronting dramatically new facts. It does not, however, explain why the estimates failed to contain any dissenting views. Footnotes were ordinarily used to register dissenting opinions in estimates on other subjects written during the same period. Why, then, were they not used in the pre-crisis estimates of 1962, particularly the September 19th assessment?

The usual explanation is that no strong dissenting opinions existed. As Wohlstetter writes, “let us remember that the intelligence community was not alone. It had plenty of support from Soviet experts, inside and outside the Government. At any rate, no articulate expert now claims the role of Cassandra.” But there was at least one: cia Director John McCone, who suspected Soviet missiles from the start. McCone was a paranoid anticommunist who always seemed to find signs of aggressive Soviet behavior, and was often wrong. This time, however, his hunch proved correct.

McCone was no wallflower. In fact, the historical record shows that he forcefully advocated his hypothesis about Soviet missiles with senior Kennedy advisors on several occasions, starting in August 1962. And after sam sites were discovered, he sent a series of cables to Washington from his European honeymoon, again strenuously asserting his hypothesis (the sam sites, he believed, had to be guarding something important) and requesting additional reconnaissance. The cia director was not afraid to make his case or make it often. The question, then, is why he never did so in the national intelligence estimates.

Some argue that McCone refrained from foisting his opinions or judgments on the estimates process, and conclude that this was a good thing. “dci McCone deserves credit for allowing snie 85–3–62 to contain conclusions that clearly contradicted his views,” writes James Wirtz. “If McCone had interfered in the snie in a heavy-handed way . . . analysts would have objected to what inevitably would have been viewed as politicization of their estimate.”9 Organization theory, however, suggests a very different possibility: that the estimating machine may have been working so smoothly, it failed utterly.

The key idea is a phenomenon called structural secrecy. Briefly put, the notion is that all organizations specialize to increase efficiency, and specialization turns out to be a double-edged sword. On the one hand, dividing labor into subunits enables experts to tackle specialized tasks in specialized ways. On the other hand, however, specialization generates organizational structures and standard operating procedures that filter out information and keep an organization from learning. Standard ways of writing reports, assembly line production processes, and rigid communication channels — all of these things help managers work across sub-units efficiently. But they also keep ideas that do not fit into the normal formats and channels from getting heard. Reports, for example, are written in certain ways, with certain types of information, for certain purposes, and certain audiences. This setup is designed to create a standard product precisely by weeding out nonstandard ideas and approaches. Organizations are filled with these kinds of standard formats and operating procedures. The trouble is that the more that things get done the same way each time, the harder it is to do things differently. The entire system becomes a well-oiled machine that, by its very existence, keeps alternative ideas or ways of operating from getting through. Information that could be valuable to the organization remains hidden. Organizational structure creates its own kind of secrecy.

The estimates process in the Cuban missile crisis seemed ripe for structural secrecy problems. It was highly specialized, with multiple units, offices, and agencies collecting and analyzing different pieces of the Cuba intelligence puzzle. It was also highly routinized. Kent himself describes the estimates process as a “machine,” with specific stations, regularized processes, and an “assembly line” production. The process was well-honed, and the product was highly standardized. Notably, one of the key features of the estimating machine was its evidentiary standard for revising earlier estimates or voicing dissenting views. Kent writes extensively about what it would have taken to revise the September 19th estimate or offer a dramatically different, stronger view of the buildup and concludes that the evidence was simply not there. “These pre-October 14 data almost certainly would not, indeed should not, have caused the kind of shift of language in the key paragraphs that would have sounded the tocsin,” he writes. The same was true of footnotes, which were ordinarily used for airing disagreements about evidence.

In other words, the estimating process was all about data: collecting it, interpreting it, distilling it, and assessing what it meant. The machine started with evidence and ended with judgments. The cia director’s approach never fit into this standard operating procedure. Indeed, McCone had it backwards. He did not have evidence in search of a judgment. He had a hypothesis in search of evidence. And there was no place in the National Intelligence Estimates or Special National Intelligence Estimates for such things. No wonder McCone never tried to inject himself into those reports. Instead, he worked within the estimating machine, requesting additional photographic reconnaissance to get the proof he needed. And while he waited for the estimating gears to grind, he made his case — in meeting after meeting, cable after cable — to Kennedy and his top advisors. Structural secrecy led the estimating machine to run smoothly into failure.

Lessons for today

Fifty years after the Cuban missile crisis, intelligence warning is still plagued by many of the same challenges. Evidence misleads. Enemies deceive. Analysts misjudge. Failures result. The September 11 attacks and the faulty estimates of Iraq’s weapons of mass destruction are potent reminders that intelligence warning remains, as Kent put it 48 years ago, “a hazardous occupation.”

And yet, some of the most powerful barriers to effective intelligence warning remain relatively unexplored. Intelligence, at its core, is a collective enterprise. Organizations are not passive players, where individuals do all of the hard thinking and make all of the tough calls. Instead, organizations powerfully influence whether signals get amplified or weakened, whether analysis looks backward at continuity or leans forward toward disjuncture, and whether dissent gets highlighted or hidden.

1 Sherman Kent, “The Cuban Missile Crisis: A Crucial Estimate Relived,” originally published in Studies in Intelligence (Central Intelligence Agency, Spring 1964).

2 We know now that the Soviets had, in fact, deployed nuclear missiles to East Germany briefly in 1959 and that some U.S. intelligence officials suspected as much before the Cuban missile crisis broke. A January 4, 1961, memo from Hugh S. Cumming to the secretary of state, “Deployment of Soviet Medium Range Missiles in East Germany,” notes that a “special intelligence working group has recently prepared a report” which concluded that “as many as 200 mrbm’s [medium range ballistic missiles] may have been moved into East Germany between 1958 and the fall of 1960.” Yet this working group and its judgments never made it into the September 19, 1962, Cuba Special National Intelligence Estimate. Nor did the possibility of a precedent-setting Soviet nuclear missile deployment to a satellite country appear to reach the president. In the October 22, 1962, ExComm meeting, President Kennedy told his colleagues that no Soviet Eastern European satellite had nuclear weapons and that “this would be the first time the Soviet Union had moved these weapons outside their own” territory. Why the East German special intelligence report seems to have been unknown or disregarded by the estimating machine and its policymaking customers remains unclear.

3 James G. Blight and David A. Welch, “What Can Intelligence Tell Us about the Cuban Missile Crisis, and What Can the Cuban Missile Crisis Tell Us about Intelligence?” Intelligence and National Security 13:3 (1998).

4 Raymond L. Garthoff, “U.S. Intelligence in the Cuban Missile Crisis,” Intelligence and National Security 13:3 (1998).

5 Nor did Khrushchev give any indications that something was afoot. The Soviets mounted a substantial denial and deception program to keep the deployment secret.

6 Marion Lloyd, “Soviets Close to Using A-Bomb in 1962 Crisis, Forum Is Told,” Boston Globe (October 13, 2002).

7 Roberta Wohlstetter, “Cuba and Pearl Harbor: Hindsight and Foresight,” Foreign Affairs 43 (1964–65).

8 John McCone, Memorandum for the President (February 28, 1963). Reprinted in Mary S. McAuliffe, ed., cia Documents on the Cuban Missile Crisis (cia History Staff, October 1992).

9 James J. Wirtz, “Organizing for Crisis Intelligence: Lessons from the Cuban Missile Crisis,” Intelligence and National Security 13:3 (1998).
