In this episode, ex-Facebook security head Alex Stamos discusses cybersecurity in relation to disinformation and elections, the emerging Internet of Things (IoT) ecosystem, the breaches we don’t read about, and the challenges of securing social networking services.

About the guest: 

Alex Stamos is a cybersecurity expert, business leader, and entrepreneur working to improve the security and safety of the Internet through his teaching and research at Stanford University. Stamos is an adjunct professor at Stanford’s Freeman Spogli Institute, a William J. Perry Fellow at the Center for International Security and Cooperation, and a visiting scholar at the Hoover Institution. Prior to joining Stanford, Alex served as the chief security officer of Facebook.

KEY EXCERPTS FROM THE ALEX STAMOS INTERVIEW

(The text below has been condensed and edited for clarity.)

John Villasenor: You were the Chief Security Officer for Facebook, a company with billions of people using its services. That's an almost incomprehensible responsibility. Can you tell us a little bit about the scale of the challenges involved and the classes of security threats that you had to address while in that position?

Alex Stamos: It is a big responsibility, and when you're in a position like that it's easy to forget, and you have to remind yourself, why you're doing the work. There are really two classes of risks that the security team at Facebook works on. The first is the traditional information security risks: people breaking into Facebook's corporate network, breaking into the production network, or finding flaws in the products Facebook ships with the goal of stealing data, altering data, perhaps attacking individuals, or in some cases stealing money or intellectual property from the company. That's the traditional CISO role. But the security and safety teams there have another really important role, which is understanding how Facebook's products, which, as you said, have something like 2.5 billion people using them, can be misused and cause harm. That's a much more open-ended, and therefore more difficult, set of challenges to tackle.

It's become kind of a stereotype to say something like this, but you do have to create a situation where security and safety become everybody's problem. You can have a central security team that provides oversight, guidance, and support, but what you really need to do is build security and safety capabilities, knowledge, and responsibility throughout the organization. That work started before I got there, and a big focus of my three years at Facebook was how we could, from the central security team, support the creation of those capabilities throughout the company.

A company like Facebook is a huge target, but you also get economies of scale: a relatively small number of people can come up with protections that cost very little at the margin to apply. If you build one secure framework for building a product, then when you go from a billion users to two billion users, the amount of product security work does not change. Those economies of scale can be positive.
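To make that economy-of-scale point concrete, here is a minimal Python sketch of what a "secure framework" can mean in practice: a rendering helper that escapes user input by default, so the protection is written once and applies automatically to every caller. This is a generic illustration with a hypothetical helper name, not Facebook's actual framework.

```python
# Hypothetical illustration of a secure-by-default framework helper.
# This is a generic sketch, not Facebook's actual code.
import html

def render_comment(author: str, body: str) -> str:
    """Build a comment snippet with all user input escaped by default."""
    safe_author = html.escape(author)
    safe_body = html.escape(body)
    return f"<div class='comment'><b>{safe_author}</b>: {safe_body}</div>"

# The escaping happens once, inside the framework. Whether the product
# serves one user or two billion, engineers call the same helper, and an
# entire class of XSS mistakes is ruled out at no marginal cost.
print(render_comment("mallory", "<script>alert('xss')</script>"))
```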

The companies at that size—the Googles and the Facebooks and the Amazons—face the same problems as everybody else: trying to train your engineers, give them good frameworks, and find and patch flaws quickly. There's no magic solution here other than continuously investing in a good baseline and reducing the number of mistakes that individual engineers can make. But one of the advantages you get at a Facebook or a Google, perhaps at an Amazon as well, is that you get to start from scratch. These companies did not exist 20 years ago. Unlike a Microsoft or a Yahoo!, they do not have to build upon software decisions made in the 1990s. The other benefit is that when you're running a small number of incredibly large services, you can build all of those services on a shared code base and shared infrastructure. That is one of the things I learned from being at a Yahoo! versus a Facebook: seeing how web architectures designed in the late '90s and early 2000s compare with a much more modern architecture.

John Villasenor: Cybersecurity is pretty much a daily feature of the news these days. News stories tend to focus on major breaches, which are of course a problem, but they aren't the only problem. What are some examples of cybersecurity issues that we don't read about as much in the news but that you think are important?

Alex Stamos: These big breaches are important and we do need to talk about them, but the everyday insecurity people face, the difficulty normal people have in safely using the technology built into their lives, is something we don't talk about, and that's really unfortunate. It is important when 50 million, a hundred million, 200 million accounts are stolen or taken over. But every single day millions of people have their electronic lives turned upside down, sometimes with really negative consequences like the loss of a huge amount of their life savings. That is something we never talk about.

The number one source of account takeover in the world is almost certainly the reuse of passwords, the fact that people reuse passwords over and over again. The password paradigm comes out of 1970s timeshare architectures and is completely inappropriate for 2018, but we still use it because, just as nobody used to get fired for buying IBM, nobody gets fired for putting a password into a web form. We've collectively accepted that it's a horrible form of authentication with huge downsides, but because everybody does it, it's not a scandal that you use it. As a result, one of the things we had to deal with at Facebook is that people reuse their Facebook passwords everywhere, those passwords get stolen, and the company had to build a risk-based authentication system that catches somewhere between half a million and a million logins per day where the bad guys come in with a correct username and password.
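As an illustration of the idea, here is a minimal Python sketch of risk-based authentication, in which a login with a correct password can still be challenged when other signals look wrong. The signal names, weights, and threshold are hypothetical; Facebook's actual system is far richer than this.

```python
# A minimal sketch of risk-based authentication with hypothetical signals.
# Real systems (including Facebook's) use far richer models than this.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    password_correct: bool
    known_device: bool             # device previously seen for this account
    usual_country: bool            # geolocation matches recent history
    password_in_breach_list: bool  # credential appears in a public dump

def assess(attempt: LoginAttempt) -> str:
    """Even a correct password is not enough when risk signals pile up."""
    if not attempt.password_correct:
        return "deny"
    risk = 0
    if not attempt.known_device:
        risk += 2
    if not attempt.usual_country:
        risk += 2
    if attempt.password_in_breach_list:
        risk += 3
    # Above a threshold, demand step-up verification (a code, 2FA, etc.)
    return "challenge" if risk >= 3 else "allow"

# A login with the right password, but from a new device in an unusual
# country using a breached credential, gets challenged rather than allowed.
print(assess(LoginAttempt(password_correct=True, known_device=False,
                          usual_country=False, password_in_breach_list=True)))
```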

And we're only seeing the ones who came to Facebook. For those people, almost by definition, a huge chunk of their lives is being taken over: their bank accounts, all of their other social media accounts, their email accounts. You can say, yes, there was a breach of 10 million accounts today, but that's only a couple of weeks of account takeovers on just one of the big services. We don't talk about that, and we've just kind of internalized the idea that it's an acceptable thing.

John Villasenor: The issue of social media as a means for people outside the United States to try to influence elections has been a major topic of discussion in recent years. What are some of the steps we can take to address this challenge in the future?

Alex Stamos: The social media piece is really one part of it. If you look at the 2016 election, there were three areas of concern. The first is the creation of disinformation that exists completely within social media, and that's where there's been a lot of focus in the case of Russia. A lot of that came out of the Internet Research Agency and a variety of other private organizations in Russia organized under a big umbrella project, about which we now know much more than we did just a year ago, based upon indictments from the special counsel and from various U.S. attorneys. The second category is the hack-and-leak work, which is really less about spreading things that aren't true and more about controlling the flow of narratives. That's the work the GRU did in 2016, when they broke into the DNC, broke into John Podesta's emails, and then used the stolen emails to control the press narrative in the last couple of months of the election and turn it toward the idea that Hillary Clinton had rigged the Democratic primary.

The third category is direct attacks against election infrastructure, of which there were some hints of Russian interest in 2016 but no definite use to cause any damage. The first category is mostly the responsibility of the social media platforms, and I think a lot of the protections that have been put in place have been good. The most important thing that's happened is the creation of ad transparency, whether or not there's foreign interference. We're hurtling toward a future where campaigns and parties represent themselves completely differently to every single voter online. We just don't want our electorate chopped into tinier and tinier micro-targeted segments. When you talk about the Cambridge Analytica scandal, part of it is the theft of information through Facebook APIs, but the big part of the scandal that hasn't been addressed is that there's a huge market for services like Cambridge Analytica's, because as a candidate you can go buy these huge databases and use them to target 15, 20, 30 people at once.

That is something that has not been addressed enough. We have transparency, but we don't have limits around targeting yet, and I think that's something we've got to fix. There has to be a trade-off here: at a certain group size you still have to be honest about who you are, and the larger the group, the more likely it is that one of the people you target is not an aggressive supporter of yours or does not completely agree with what you said, so if you push a message to them that is radical or extreme or a lie, it will get called out. When somebody runs a TV ad, they are speaking to tens of thousands of people, and they can't control who those people are, so they have to take responsibility for what they say. If you run an ad to 100 people, the odds of getting called out used to be pretty low.

Now there is transparency from Google and Facebook, but there isn't from the thousands of other companies that are part of the ad ecosystem. That is where we can have some reasonable legislation: take the standards that the largest companies have imposed on themselves and make them a legal standard that applies, at least, to advertising funded by direct campaign contributions from the candidates and the campaigns.
