Q&A: Harold Trinkunas, Herbert S. Lin, And Benjamin Loehrke On Presidential Leadership And Nuclear Decision Making In The New Information Ecosystem

Wednesday, May 27, 2020
Hoover Institution, Stanford University

In this interview, Herbert S. Lin, Harold A. Trinkunas, and Benjamin Loehrke discuss Three Tweets to Midnight: The Effects of the Global Information Ecosystem on the Risk of Nuclear Conflict, their newly released book from Hoover Institution Press.

The three scholars, all experts in national security policy, describe how social media and the oversaturation of content from conflicting sources potentially impact leaders’ decisions on the use of nuclear weapons; how small-scale conflicts can escalate through social media; how information operations can erode states’ defenses against adversaries; and what can be done to defuse conflict in this new information ecosystem. They also discuss how this media landscape is affecting leaders’ ability to contain and combat the COVID-19 pandemic.

What are the origins of this collaboration?

Benjamin Loehrke: In spring 2017, we were all in the conference room at the Center for International Security and Cooperation [CISAC] discussing the question: could a tweet lead to the outbreak of nuclear war? At the time, President Trump was ratcheting up his rhetoric on Twitter against North Korean leader Kim Jong-un about the communist country’s nuclear arsenal. The war of words certainly felt destabilizing. Since the late Cold War, scholars have built our understanding of the psychology of crisis decision making and of how miscommunication could lead to deterrence failure. But most of those studies were from another era. In the age of social media, the volume and speed of information during a crisis seemed like it could intensify known problems—or create new ones. From there, we just had this curiosity about how this might affect crisis stability today.

Harold, Herb, and I decided to explore this further in a workshop in October 2017, hosted by the Stanley Center for Peace and Security. It remains one of the more interesting workshops I have ever hosted or been a part of. We came into it having general assumptions about the subject matter and came out of it with a greater appreciation of the scale of the challenge. This wasn’t just about how a single tweet could be misinterpreted and lead to war. There is a whole information ecosystem that shapes the context and psychology of decisions during crises. And social media is radically disrupting that ecosystem.

The Hoover Institution, CISAC, and the Stanley Center soon partnered to commission a series of papers. We hosted a follow-up workshop at the Hoover Institution involving forty of the nation’s top experts in the areas of psychology, information warfare, cybersecurity, international relations, and nuclear strategy. And that partnership led to the publication of Three Tweets to Midnight, in which many of these experts wrote contributing chapters.

Herbert S. Lin: Most nuclear deterrence literature was written in the 1980s and is based on the idea of rational actors. As it turns out, that happens to be around the time when the intellectual community was starting to produce scholarship and literature about social cognition, which is essentially the psychology that underlies behavioral economics. In the last three decades, the study of social cognition has grown considerably (buttressed by Nobel Prizes awarded to Herb Simon, Daniel Kahneman, and Richard Thaler), and it is increasingly apparent that the psychology of non-rational behavior affects all kinds of decision making.

At CISAC, Amy Zegart (who is also a senior fellow at the Hoover Institution) and I run a faculty working group on information warfare. That group focuses on how the information ecosystem is affected by new media and the psychological vulnerabilities that people have in processing information. CISAC and Hoover have been at the forefront of this study, which has added additional perspective to this publication.

How can the new information ecosystem be defined? Who are the senders and receivers of information, and what are the platforms being used?

Lin: Before the global internet age, information was disseminated in one direction: large television, print, and radio networks broadcast to the general public. Since then, internet connectivity has created an environment of bidirectional creation and promulgation of content. The information ecosystem is no longer defined by communication from one to many, but from many to many.

In the past, media organizations held themselves to high ethical standards and generally tried to be scrupulous in their editorial decisions, even if those standards were sometimes honored more in the breach than in the observance. Today, massive streams of user-generated content on social networks such as Twitter make it more difficult to discern what is true and what is false, and responsible intermediaries are becoming scarcer. People are given an effectively infinite number of choices from an infinite number of sources. They thus fall back on their own selective judgment in deciding whom to accept information from.

Users frequently select sources based on their Facebook newsfeed, which is generated by an algorithm that surfaces news based on their preferences and the activity of their friends. Most social media companies make money from advertising revenues, so they have strong incentives to serve content that keeps users engaged on their platforms, where they can be shown ads. That's a very big change in the environment itself.

Trinkunas: The information that is selected algorithmically is based on what appeals to the end user regardless of quality or truthfulness or need. It is generated for the sole purpose of increasing dwell time on the platform.

The other important aspect about this new environment is how traditional media utilizes social media to promulgate news and makes editorial decisions based on topics that are trending with users.

On the other end of the media spectrum, ordinary people have access to digital tools that can produce content that appears to represent a high-quality source. Oftentimes it is difficult to discern misinformation from accurate reporting based on superficial cues such as how professional a web site appears.

What kinds of dynamics are leaders facing in today’s information ecosystem?

Trinkunas: In the chapter by Rose McDermott of Brown University, she discusses the “post-truth” environment. Unlike academics, most people don’t receive information from scientifically evaluated or peer-reviewed sources. This new information ecosystem often favors the dissemination of content that is partisan, emotionally charged, or faith-based. McDermott also argues that users are overwhelmed by the vast amount of information they receive each day, and tend to reach conclusions using heuristics reinforced by preconceived biases instead of factual evidence.

Loehrke: That reliance on mental shortcuts, while a normal response to stress, has obvious hazards for national security decisions. Stress degrades a leader’s cognitive performance. Using these heuristics, for example, he or she would more readily reject information that doesn’t fit beliefs held before the crisis started. He or she is also more likely to misinterpret adversary communications. 

A leader’s mental bandwidth will be throttled during a crisis. That leader will likely be barraged by outside information. The gap between leaders’ bandwidth and the available information—truths and untruths alike—seems to be widening in today’s information ecosystem. This is a worrying proposition for the quality of deliberation and decision making during international crises.

Lin: In Three Tweets to Midnight, we discuss what might have happened if Twitter had existed during the Cuban missile crisis of October 1962. If President Kennedy had been subject to the same public pressures that leaders today face because of social media, would he have been able to make the deliberate and bold decision to impose a naval quarantine on Cuba and compel the Soviet Union to remove its nuclear weapons, rather than invading Cuba with ground troops?

Trinkunas: Traditionally leaders have received information vetted by bureaucracies composed of intelligence services, militaries, and diplomatic corps. Today, leaders have access to unvetted information entering through their smartphones, and it is impacting how they make decisions. 

In one of the chapters, we document how leaders can be manipulated by other users on social media.

For example, the Labour Party’s former leftist leader Jeremy Corbyn became a target of deliberate disinformation during the 2017 United Kingdom parliamentary elections. Moderate party officials narrowly directed advertisements to Corbyn, his closest aides, and left-wing activists via their Facebook newsfeeds, giving them the false impression that their preferred messaging had been widely distributed to voters, rather than the more moderate messages that these officials believed would play better with the electorate. Modern social media platforms enable such fine segmentation of audiences, down to fifty users or fewer, that similar information campaigns could plausibly be targeted at other political leaders, particularly ones who rely on an active social media presence to advance their ambitions.

When do antagonisms in this new information environment have the potential to escalate into a violent conflict?

Trinkunas: I will give a non-nuclear example and a nuclear example, both of which are addressed in the book.

The non-nuclear example is when Saudi Arabia and the United Arab Emirates launched a de facto blockade against Qatar in 2017. Saudi Arabia and the UAE then took to Twitter to justify their hostile actions, generating a fake narrative that made it seem as if Qatar had taken the hostile first step. Even though this was a small-scale, low-intensity conflict, it is an example of how social media is being used to justify escalation of a conflict between states.

One nuclear example occurred in 2016 when the Pakistani defense minister, after reading false reports that Israel had accused Pakistan of supporting ISIS, reminded the Israelis by tweet that Pakistan was also a nuclear-weapons state. Another example is how, early in his administration, President Trump repeatedly signaled on Twitter that the US was prepared to launch a retaliatory strike against North Korea, should the communist country’s leader decide to follow through on threats to use nuclear weapons.

These examples show how escalatory language on social media can increase tensions between states.

Loehrke: One of the other interesting aspects of this book is from a chapter written by Kate Starbird of the University of Washington, in which she gives a quantitative analysis about how Russia is using social media campaigns targeted to certain political constituencies in order to undermine support and erode cohesion of the North Atlantic Treaty Organization [NATO].

That itself isn’t a crisis. However, if Russia succeeds in these types of low-level information operations for long enough, it could strain allies’ political commitments to defend one another during a conflict. The strength of those commitments is essential to NATO, its deterrence and defense capabilities, and to European security overall.

Trinkunas: There are a couple of important points in Starbird’s chapter. One is that the Russians are not just sending one message to Europeans, they are sending many messages. It is hard to tell if it is just an effort to cause confusion or if they are testing to see which messages get traction and eventually go viral via social media. However, the Russians don’t have to do all the work themselves. Russian agents can generate content that appeals to activist communities among NATO countries, who can then spread the message more authentically across their respective political landscapes.

Starbird finds that there is a common anti-NATO sentiment, shared by the international left that condemned NATO's intervention in Libya and US intervention in Syria, for example, and right-wing alternative media, which is critical of some NATO countries for insufficient sharing of the financial burden for their common defense. So, there is a crossover phenomenon where the far left and the far right share anti-NATO media content among themselves. This behavior is almost counterintuitive, because the right is traditionally considered hawkish and the left dovish. What Starbird shows through quantitative analysis of millions of tweets are the actual patterns by which people accept and share information among themselves.

Does social media impact a decision maker’s ability to defuse a nuclear crisis?

Trinkunas: Kelly Greenhill’s chapter covers this topic very well. There is a significant amount of literature in social science about the role of “signals” and “commitments” in escalating or deescalating crises. Signals are rhetorical or kinetic actions taken by a government to communicate resolve to an adversary. Commitments are the “red lines” that, if crossed by an adversary, may result in the escalation of conflict.

Herein lies the dilemma. If one side backs down, it might lose face and public support. Greenhill explains that it is more complicated than the traditional literature suggests: leaders have an incentive to use what she calls “extra-factual information,” or EFI, that is, information that is either unverified or unverifiable at the time of transmission using secure standards of evidence.

EFI is designed to attract greater domestic support or to escalate signaling during a crisis. But it also alarms opponents and makes them less likely to de-escalate.

Loehrke: There is a clever argument in a chapter by Jeffrey Lewis in which he posits that if a state starts pushing disinformation to build domestic support for a certain policy, there is a chance of blowback. Elites may come to believe their own information operations and pursue real policies to contend with their own false narratives.

He cites two cases within the US-Russia relationship. One involves Russia accusing the United States of basing tactical nuclear weapons in Romania. The second concerns Russia’s so-called escalate-to-de-escalate strategy, which alleges that Russian nuclear doctrine envisions using nuclear weapons early in a conflict to force NATO to back down and end a war on terms favorable to Moscow. Misrepresentations of both in the popular discourse have become significant enough to influence US and Russian policy making. Lewis is not conclusive in his argument, but he raises an interesting question about whether leaders can back away from the mistruths they tell, and what happens when the lies take on lives of their own.

How is the current information ecosystem affecting political decision making during the COVID-19 crisis?

Trinkunas: Several elements in the book are clearly playing out right now. One is the post-truth environment, in which people process information based on faith, prior belief, or emotional reactions rather than empirical evidence. We are also seeing that the question of how the virus is transmitted has become very polarized. There’s a lot of polling in the United States showing a pronounced left-right divide over belief in the severity of the crisis and over its policy response. I wouldn’t be surprised if we polled people in countries around the world and found a very similar phenomenon. On top of this, China, through its sophisticated propaganda efforts, is trying to change the narrative about who is culpable for the spread of the disease.

What can be done to defuse conflict in this new information environment?

Trinkunas: The book suggests that a regulatory approach to social media and a change in the incentive structure to discourage the transmission and display of low-quality information may be necessary. Social media companies might actually welcome such policies as long as they apply to all platforms and all users equally, so no one company bears the burden or suffers the brunt of reputational damage.

When it comes specifically to the nuclear issue, leaders need sufficient time and access to highly vetted information before making such crucial decisions. One of the recommendations we make in this book is for the US to shift the posture of its nuclear arsenal toward its submarine-based fleet and de-emphasize its land-based components, as the latter are emplaced in fixed locations and are thus more vulnerable to an adversary’s first strike. Under the current posture, a leader has only thirty minutes or so to decide whether to fire missiles before the land-based leg of the strategic triad is seriously degraded. That time pressure is likely to force a reliance on heuristic thinking, limiting the leader’s ability to make sound strategic decisions about whether to escalate or de-escalate a conflict.

The book also recommends changing the information structures around leaders so they have access to specialists who can help them discern the veracity of the information received, and if they are being targeted by adversaries.

Lin: Everything Harold said is correct. Intelligence agencies and bureaucracies should make efforts to evaluate, synthesize, and present information in ways that account for the new information environment. On the other hand, it is also true that the leader should pay attention to properly vetted information. Otherwise, the bureaucracy’s information-gathering process will bend to the leader’s preferences.

Loehrke: Currently in the world of nuclear policy, the prospects for arms control and disarmament are bleak. Arms race dynamics are back. Expert and policy communities are falling into Cold War habits, with discussions about numbers, capabilities, and rigid interpretations of deterrence strategies. That is why I have appreciated working on this book. It reminded me that nuclear policy—the strategies, weapons systems, and existential threats it comprises—rests on the assumption that decision makers can cautiously interpret information, deliberate, communicate, and navigate crises in ways that avoid the use of nuclear weapons. Cold War history shows that to be a frightening proposition. And today’s information ecosystem does not make it easier. So we need to think differently about the psychology of nuclear crises, how countries manage their deterrent relationships, and what further efforts can be made to avoid the use of nuclear weapons.