The information and communications revolution has complicated governance everywhere. It has broken down traditional borders: people can communicate, organize, and act both with their fellow citizens and across country boundaries. The age-old challenge of governing over diversity grows more difficult by the day.

Prior to the spread of the internet, as Niall Ferguson writes, “only the elite could network globally.” Now nearly two and a half billion people do so on social media platforms. At the advent of this networked age, most observers pictured a better-educated, more knowledgeable populace enlightened by the democratization of information. The public would have up-to-the-minute access to information and the ability to communicate globally. Voters would have more ways to learn about candidates and engage in political speech than ever before.

That Panglossian view of the internet proved simplistic. The spread of information and new means of communication—particularly social media and other network platforms—created new vulnerabilities in democratic states. Joseph Nye explains that while authoritarian regimes can manipulate or even control information flows, democracies, in their commitment to transparency, find themselves on the defensive.

Foreign actors can manipulate information, particularly in the cyber domain, to undermine trust in institutions, sow domestic discord, encourage partisanship, or otherwise complicate electoral and governance processes, as the Russians demonstrated in their interference with the 2016 election. But such behavior is not the sole province of foreign entities. Private citizens and corporations alike have the power to influence election outcomes in new and powerful ways.

If the 2016 presidential election showed the American public the potential of network platforms as tools of political manipulation, it also taught us how thorny the problem is. Russia’s interference in the election achieved an important goal: it helped undermine faith in the American electoral and political process and damaged this country’s democratic reputation.

The papers prepared for this program address two separate but related issues: 1) the domestic problem of managing the highly powerful network platforms and 2) the international problem of information warfare enabled by these new communications technologies. The former demands a reconsideration of U.S. policy at home: the status quo of “self-non-regulation” by the network platforms has proved wanting. The latter requires both U.S. policy corrections and multinational engagement: as Joseph Nye argues, we can improve our resilience to foreign information campaigns and our capacity to deter them while also engaging in diplomacy to define new rules of the road.

What follows is not a definitive statement of what the United States must do to address these two facets of the governance challenge. Instead, we endorse certain recommendations regarding information warfare and propose a set of potential corrections—informed by the roundtable discussion of these papers—to the domestic problem so well defined by our colleague Niall Ferguson.

Cyber Information Warfare

Information warfare is an old form of competition, and one practiced by friends and foes alike. As Joseph Nye explains, the British cut Germany’s overseas communications cables at the outset of World War I, but they also fed the United States the Zimmermann Telegram to encourage U.S. engagement later in the war. Old as the practice may be, though, new technologies have made information operations faster, more effective, and cheaper.

Russia’s interference in the 2016 election comes to mind again: 126 million Americans saw posts generated by the St. Petersburg-based Internet Research Agency. Whereas the Soviet Union’s Operation Infektion conspiracy about AIDS took four years to spread into mainstream media, the recent Comet Pizza conspiracy theory spread across the country in a matter of hours. And the cost of creating a Facebook post or generating other online content is microscopic compared to that of traditional human intelligence operations.

In his paper, Joseph Nye draws the distinction between sharp and soft power. Soft power, he writes, rests on persuasion, while sharp power involves deception or coercion. Soft power is exercised openly, sharp power covertly.

The openness of the American system makes it more vulnerable to sharp power than more closed systems are, and states and non-state actors alike have a host of tools available for disrupting democratic processes: manipulation of voters through fake news, targeting candidates anonymously or under false names, creation of inauthentic groups to generate conflict, and sowing of chaos and disruption. Russia, China, North Korea, Iran, and others all wield these tools against the United States.

While Russia proved adept at exercising sharp power, it and other authoritarian states, including China, are less adept at soft power. Russia’s actions in the 2016 election, for example, fall under the umbrella of sharp power. The Russian news channel RT, on the other hand, generally engages in the above-board exercise of soft power. American soft power analogs would be Radio Free Europe and Voice of America, which were powerful tools of information warfare during the Cold War.

New technologies—chiefly social media and other network platforms—are fertile ground for the exercise of cyber information warfare. They can promote polarization and spread fake stories. The business of Facebook, YouTube, or Twitter is to maximize the attention of their users. The more time and attention, the more advertisements seen, the more ad money for the platforms. False stories, outrage, and emotion capture attention far better than sober-minded articles or videos. It is unsurprising, then, that YouTube’s algorithms, for example, tend to suggest videos that push viewers toward more extreme ends of the political spectrum. You may begin at the center, but the suggested content will push you to the extreme.
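To see how this drift requires no editorial intent, consider a minimal sketch—a toy model we offer purely for illustration, not a description of any platform’s actual (and proprietary) recommender—in which content carries a one-dimensional “extremeness” score and extreme content holds attention slightly better on average:

```python
import random

def engagement(extremeness: float) -> float:
    """Assumed engagement model (hypothetical coefficients): outrage and
    extremity hold attention slightly better on average."""
    return 0.4 + 0.6 * extremeness + random.gauss(0, 0.05)

def recommend(position: float, candidates: int = 10) -> float:
    """Greedy attention maximizer: from a handful of videos near the
    viewer's current position, suggest the most engaging one."""
    nearby = [min(1.0, max(0.0, position + random.gauss(0, 0.1)))
              for _ in range(candidates)]
    return max(nearby, key=engagement)

position = 0.0  # the viewer starts at the political center (0 = center, 1 = extreme)
for _ in range(30):
    position = recommend(position)
print(f"Extremeness after 30 recommendations: {position:.2f}")  # drifts toward 1.0
```

Under these invented assumptions, even a modest engagement edge for extreme content, compounded over a session of recommendations, walks the viewer step by step away from the center—attention maximization alone does the work.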

As modern technology, including artificial intelligence, makes the manipulation of images and videos easier, bad actors can create fake or altered content that is increasingly difficult to distinguish from the authentic. Introduced late in a campaign, such altered content could spread quickly enough—before Facebook’s operators, say, could take it down—to influence the outcome of an election.

Russia learned to weaponize social media and use it as a tool against the United States. The Internet Research Agency and similar operators created fake accounts that catalyzed polarization in American society and amplified extreme voices on both sides of the aisle.

How to Protect Our Democracy from Foreign Interference?

The openness of the United States may be a vulnerability, but it is also a great value. We must be careful not to sacrifice it. In other words, the U.S. government should not try to stop transparent information campaigns—legitimate exercises of soft power—in its effort to secure itself against illicit interference. Nor should it look to technology companies alone to solve the challenge. Facebook, which did not see the problem coming in 2016, has since taken steps to address it, hiring new employees and applying artificial intelligence to find and remove hate speech, bots, and false accounts. But the enormous quantity of content, the entanglement of foreign- and domestically generated content, and the mix of human and bot actors complicate the problem; the vast majority of Russian posts during the 2016 election amplified existing content created by Americans. Moreover, just as AI can help monitor and police content, it can also be used to generate new, harder-to-identify false content. The technical contest between the network platforms and foreign agents is likely to remain a cat-and-mouse game.

Joseph Nye proposes a threefold approach, which we believe is wise: the United States should look to increase resilience and strengthen deterrence at home, while engaging in diplomacy with foreign powers.

Resilience: The United States must take steps to harden its electoral and political systems against cyber information warfare.

The U.S. government and non-governmental institutions, such as the academy, could upgrade the security of U.S. election infrastructure by training local election officials and improving basic cyber hygiene, such as using two-factor authentication. Given how much political campaigns and electoral offices rely on interns, volunteers, and other part-time workers, it may be difficult to train everyone, but even some training and better resilience would make a difference.

We should also encourage higher standards in software by revising liability laws and fostering the cyber insurance industry, as the number of points of vulnerability to cyber intrusion expands exponentially with the internet of things. Election laws could also be changed to require candidates to put their names on online political ads, just as they do for television ads, and to ban the use of bots by political parties or campaigns. As in other areas of cybersecurity, improved information sharing between government and industry would contribute as well.

Deterrence: Deterrence can be established in four ways: through the threat of punishment, denial by defense (resilience), entanglement, and the establishment of normative taboos. Effective punishment would, of course, depend on reliable attribution. In nuclear strategy, the aim of deterrence is total prevention of nuclear attack, achieved by maintaining an assured ability to retaliate with a devastating strike. Deterrence of cyber information warfare, at the other end of the spectrum of conflict, need not be perfect to be useful, but it ought to raise the cost for malevolent actors. Cyber intrusion could be treated as we treat crime: when seeking to deter criminal activity, the certainty of getting caught matters more than the severity of the punishment, so better, faster attribution and action will be key. The Trump administration’s September 2018 executive order promising sanctions in response to election interference is a step in the right direction. Entanglement complements punishment: if an attack on the United States hurts the attacker as well, the incentive for malicious behavior shrinks. Deterrence will not solve the problem, but it could increase the cost and difficulty of attacks, allowing defensive measures to be better focused on those that still occur.

Diplomacy: This arena is not conducive to arms control. A Twitter account is inherently “dual-use”: a tool of disinformation or a means of innocuous social networking; the key variable is the user and the user’s intentions. Instead, as Joseph Nye proposes, we ought to establish rules of the road to limit certain malicious behavior.

We are not proposing a treaty but a set of agreements or understandings, which will depend on the values of the involved parties. Just as the United States and the Soviet Union came to the 1972 Incidents at Sea agreement to reduce the risk of inadvertent crises, so too could the United States and Russia conceivably commit not to interfere covertly in elections, while allowing overt broadcasting and transparent information. Each side could unilaterally propose and share its own expectations of conduct, tracking and communicating how the cyber behaviors it observes over time do or do not comply with those expectations. The United States does not have to act alone here. It could work with its allies and partners—fellow liberal democracies—to coordinate collective action: sharing defensive recommendations, shoring up electoral and political processes, and collaborating on diplomatic agreements.

In other words, work to establish upper bounds on cyber information activities—thereby allowing U.S. officials and others to focus their resilience-building efforts on a narrower range of challenges—and prepare for prompt retaliation against activities that exceed those bounds.

What to Do About Network Platforms?

As the United States addresses cyber information warfare, it ought also to consider the preeminent and uncontested power of network platforms. Manipulation of information to disrupt our electoral process demands a response, but the information challenge to governance extends beyond cyber information war. The technologies themselves, and our relationship to them, must be confronted.

Niall Ferguson ably describes the status quo: eight technology companies—including Facebook, Alphabet, and Tencent—dominate global internet commerce and advertising. They are near monopolies and immensely profitable. Network platforms have become a “public good,” not just commercial enterprises, trading on the attention of the public. But they are contaminated with fake news and extreme views, some incited by our nation’s adversaries. And network platforms, such as Twitter, have transformed governance in the United States.

They may be public goods, but these platforms are essentially self-regulated—or, more accurately, self-non-regulated. What regulation exists gives the network platforms significant leeway. Under U.S. federal law, they are generally not regarded as publishers, nor are they liable for the content they host or the content they remove.

It is unsurprising, then, that companies curate and customize content on their platforms. As described above, they seek to maximize user attention and have done so to great effect—the average American spends 5.5 hours per day on digital media. Alongside this come fake news and polarizing content, which spread more quickly and attract more attention than sober-minded alternatives.

With their vast networks of users and grasp of user attention, U.S. internet platforms became a key battleground of the 2016 election—the one on which the winning campaign focused most intently.

Regulation, Firewalls, and Other Proposals

If the status quo is unacceptable, what should be done to change it? Two foreign models for managing internet platforms suggest what not to do:

Europe has adopted a tax, regulate, and fine model. As Ferguson writes, Europe “seeks, at one and the same time, to live off the network platforms, by taxing and fining them, and to delegate the power of public censorship to them.” China, on the other hand, zigged when the West zagged, adopting “internet sovereignty” in contrast to the U.S.-led internet freedom agenda. It built the Great Firewall and fostered its own domestic industry through total protectionism (for more discussion, see “China in an Emerging World” in this series).

Neither foreign model appeals. In the wake of the 2016 election, U.S. network platforms responded—within their own largely laissez-faire business environment—by pledging more strenuous self-regulation at the firm level. Facebook, for example, now requires disclosure of who pays for political ads, uses artificial intelligence to detect bad content and bad actors, removes certain foreign government accounts, and has reduced access to user data. These measures are measurably helpful but remain essentially reactive. It is hard to have confidence that they will anticipate the next threat.

How to Manage the Network Platforms?

While we do not know the precise solution to these problems, let us consider a few options, drawn from the papers included herein and the roundtable discussion of them. It is easy to focus on the new problems generated by these platforms while taking for granted the informational value and personal satisfaction they also generate. We therefore wish to redress the more damaging effects of these technologies while continuing to take advantage of their promise.

Niall Ferguson’s paper recommends that the U.S. government make network platforms liable for content on their products—essentially scrapping the 1996 Telecommunications Act provisions protecting them—while also imposing First Amendment obligations on them. That is, do not allow the platforms themselves to decide what speech is acceptable by their own rules. His approach would give users and competitors recourse to challenge companies in the courts.

Ferguson’s diagnosis of the fundamental flaw at the heart of the current regulatory framework rings true, and he rightly focuses on how network platforms have come to dominate the public square. But as discussants noted, there are certain internal contradictions. Asking companies to monitor content for which they could be held liable—a task that would necessarily rely on AI—would likely complicate their ability to host everything permitted by the First Amendment. And what of anonymity, which has been so crucial to the internet freedom agenda? How do we protect anonymity while also enforcing liability? Perhaps a first step would entail banning content generated by bots or other nonhumans.

Alternative but unappealing regulatory steps exist as well. Revisiting net neutrality rules to empower internet service providers to monitor content offers little promise: there is scant reason to believe ISPs would do a better job, given their own profit incentives. Antitrust efforts intended to break up the platform companies would also likely be of limited utility: they would be slow, of questionable effect, and set against the natural “winner takes all” dynamic of network platforms. Moreover, we must remember that, historically, regulation tends to cement the dominance of the largest players, stifling innovation and competition.

The government and the public could also ask more of the tech industry. We could press companies to be more transparent about their criteria for managing content on their platforms, while also establishing recourse for challenging network platforms’ actions. Along the same lines, companies could be ordered to make some portion of their algorithms available for public review, or for review by a select court in the vein of the FISA court.

More generally, both the public and U.S. government officials ought to be cognizant of the immense political power internet companies hold. As Robert Epstein has documented, they can shift election outcomes and public opinion through slight manipulation of search results or suggestions, content feeds, and other user interfaces. So-called “dark patterns” are a well-documented aspect of digital interaction design outside the political arena and an emerging threat within it: almost imperceptible tweaks to underlying algorithms can swing voters toward a single candidate. As we consider whether network platforms ought to continue to self-regulate, we would do well to recall their power.
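The arithmetic behind that concern is straightforward. The sketch below—a back-of-the-envelope model with invented numbers, not Epstein’s methodology or data—shows how a ranking bias too small for any individual user to notice can still move vote totals at scale:

```python
def shifted_votes(undecided_voters: int,
                  ranking_bias: float,
                  persuasion_rate: float = 0.02) -> int:
    """Assumed model: results favoring one candidate exceed an even
    50/50 split by `ranking_bias`, and that extra favorable exposure
    persuades a small fraction of undecided voters (hypothetical rate)."""
    extra_exposure = ranking_bias  # e.g., 52% favorable results vs. 50%
    return int(undecided_voters * extra_exposure * persuasion_rate)

# Ten million undecided voters and a two-point tilt in what they see:
print(shifted_votes(10_000_000, ranking_bias=0.02))  # -> 4000 votes shifted
```

Under these invented assumptions, a nudge invisible to any one user shifts thousands of votes—enough to matter in a close contest—which is precisely why the self-regulation question deserves scrutiny.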

Conclusion

Social media and network platforms have come to dominate the public square, enabling broader and more complex social networks and political organizations than ever before. Information has always been an extremely valuable asset—once costly to obtain and share, now essentially free to all strata of society. But the spread of internet platforms comes at a cost. Malicious actors can engage in information warfare more quickly, decisively, and cheaply than ever before, and fake news, disinformation, and polarizing political speech proliferate. Whereas social media were once seen as tools to disrupt non-tech-savvy authoritarians, we are increasingly aware of how they can be manipulated to distort democratic elections as well.

Democracy—both elections and the process of governance—depends on transparency and the spread of open, trustworthy information. That commitment to openness is a vulnerability, but it is also a great virtue, one deserving of protection. When considering what can be done to redress the information challenge to governance, then, we must commit, first and foremost, to doing no harm to our democracy.

Fortunately, the United States can counter cyber information warfare without curtailing its own openness—indeed while strengthening that core value. It can, as Joseph Nye proposes, pursue a strategy of improving resilience and deterrence, while engaging in diplomacy to establish rules of the road governing interference in elections. Moreover, it is worth noting that the information battleground is not static. Russia caught the United States unprepared in 2016, but it was punished for doing so, primarily in the form of sanctions. And the U.S. government and network platforms have raised the costs of engaging in such behavior, though there is much work to be done.

In the United States, some may look to the past, when symbols of public trust—Walter Cronkite and Huntley and Brinkley being the canonical examples—gave us the news. Those days have passed, but the importance of trust and reputation remains. Internet companies would be wise to regain the trust of the people through careful stewardship of their platforms, priority for accuracy over attention, and willing public-private engagement. The public has an important role to play as well. As both creators and consumers of the content that populates network platforms, individuals can refrain from relying on social media for “news” and be discerning in what they share. The government and the companies do not bear sole responsibility for addressing this challenge.

Finally, what happens after an election? The papers in this program and the discussion of them focused on ways to safeguard and improve the electoral process, but the challenge of governing once in office is also formidable. How do these new means of communication affect the capacity of political officials to govern over diversity? We will continue to address this subject in the course of our project.
