Is artificial intelligence a global killer or an emerging technology which, if properly harnessed, can improve mankind? And what's the significance of a low-level National Guard member being able to expose US military secrets? Hoover senior fellows Niall Ferguson, H.R. McMaster, and John Cochrane discuss the promise and perils of ever-improving AI and what, if any, damage came from the so-called "Geeky Leaks" scandal, plus their views on marijuana legalization as the world braces for the annual "420 Day" celebration.

>> Speaker 1: Regulations are really only put into effect after something terrible has happened.

>> Speaker 2: That's correct.

>> Speaker 1: If that's the case for AI, and we're only putting regulations after something terrible has happened, it may be too late to actually put the regulations in place, the AI may be in control at that point.

 

>> Bill Whalen: It's Wednesday, April 19, 2023, and welcome back to GoodFellows, a Hoover Institution broadcast devoted to economic, societal, political, and geopolitical concerns. I'm Bill Whalen, a Hoover distinguished policy fellow, and I'll be your moderator today, joined by our full complement of GoodFellows: that would include the economist John Cochrane, the geostrategist Lieutenant General H.R. McMaster, and the historian Niall Ferguson.

Niall, I mentioned you last not out of spite, but because I want to wish you a belated happy birthday, my friend.

>> Niall Ferguson: Thank you very much indeed, I have now entered my 60th year, this is troubling.

>> Bill Whalen: The good news according to Wikipedia, you're not 60 years old.

 

>> Niall Ferguson: But I have entered my 60th year, I will be 60 in 364 days. When you turn 59, you should be honest with yourself, you are now in your 60th year.

>> Bill Whalen: Well, let's see, I think we have three people on this broadcast who've already passed that milestone, who can give you plenty of advice on what to do when the odometer rolls over.

Two topics we're gonna explore today. We want to do a segment on the recent leak of Pentagon intelligence and the significance thereof. But first, we're gonna talk about artificial intelligence, more and more a staple of the daily news. We haven't really addressed it at length since we had Tyler Cowen on the show last November.

I'm looking forward to this because I find it to be a fascinating topic, but also because I think that John and Niall do not necessarily see eye to eye on all things AI. That means, HR, that you get to play the role of peacemaker or troublemaker, depending on how much trouble you wanna foment here.

Niall, let's begin by referencing the very terrific column you wrote recently for Bloomberg, with the very catchy headline, The Aliens Have Landed and We Created Them. To those of you watching or listening to this broadcast, if you haven't already, please track down this column and read it. It's a very insightful look into the pros and cons of so-called large language model AI, produced by the likes of ChatGPT.

Niall, this passage caught my eye: how might AI off us? Not by producing Schwarzenegger-like killer androids, but merely by using its power to mimic us in order to drive us individually insane and collectively into civil war. Sounds like what you're suggesting here, Niall, isn't so much the end of mankind, but a very slow strangulation of culture, courtesy of this technology.

 

>> Niall Ferguson: Well, let me put my cards on the table, I'm with Elon Musk on this. I think this is a far more dangerous technological leap forward than is generally realized. And that's not because I'm a Luddite; there are clearly things that artificial intelligence can do that are great.

And I'll give an example that got much less coverage than ChatGPT: DeepMind's AlphaFold, which is able to determine the structures of proteins in ways that we simply couldn't using our own limited brains. The thing that worries me is a particular form of artificial intelligence, the large language model.

And these large language models are getting larger and more powerful at a truly astonishing rate. If you thought ChatGPT was amazing, and maybe you played with it, as I did, when you get to play with GPT-4, which I haven't yet, but I know people who have, you are gonna be even more astonished.

Why? Because it can mimic us, it can mimic human intelligence with uncanny precision. My friend Reid Hoffman is a big fan of this, indeed, he's a backer of OpenAI, the company, formerly the nonprofit, behind GPT-4. And he gives an example in his new book, where he asked GPT-4, he asked the AI, whether it could mimic Jerry Seinfeld.

He asked, how many restaurant inspectors does it take to change a light bulb? Answer the question in the style of Jerry Seinfeld. And it did, with extraordinary precision. What's the problem here? It's not Skynet, it's not that we're about to enter the Terminator movies, and AI-enabled robots with Schwarzenegger muscles are gonna be roaming the streets trying to kill anybody who might, in the future, resist Skynet.

That's not the thing that's gonna happen. The thing that's gonna happen is that these large language models are gonna be so good at mimicking us that they're gonna drive us collectively crazy. If you thought social media drove us crazy in 2016, you just wait and see what the large language models can do in 2024.

And this is the thing that we should be concerned about. I mean, there are apocalyptic visions, and in the article I cite the most apocalyptic one, which I think is worth name checking, because Eliezer Yudkowsky argues that if we allow AI smarter than us, we shall all die.

I'm not gonna go as far as that, I'm just gonna say that if we create intelligence superior to ours, that can mimic us, remember, it's not that it's like our intelligence, it just can fake it. Its intelligence is completely different from our intelligence, it works in a completely different way.

We shouldn't really call it artificial intelligence, we should call it non-human or inhuman intelligence. But if we create that, it is going to have absolutely diabolical effects on politics and public discourse generally. And as I said at the end of the piece, it'll look like Raskolnikov's nightmare at the end of Crime and Punishment, when he imagines the whole world just going insane and tearing one another apart. That's what I'm concerned about.

 

>> Bill Whalen: John.

>> John H Cochrane: Well, I think everybody's lost their minds on this one. I'm a huge booster. Finally, we have something technical to boost the economy, it's a wonderful tool. I think we should remember, no pundit has ever forecast with any accuracy what the effects of a major technological transformation like this would be.

 

>> Niall Ferguson: That's not true, that's not true. Orwell correctly foresaw the consequences of the nuclear weapon in 1945, in an essay that he published after Hiroshima and Nagasaki. In which he said, this will transform the nature of geopolitics by creating a cold war, a permanent peace that is no peace.

So you're wrong, John, and it's very important to nail this, because in many ways, AI is as dangerous as atomic weapons. Sorry to interrupt you, but the point is so important.

>> John H Cochrane: No, it's good, I like treating each point in isolation. I had in mind the printing press, the steam engine, and the computer, which were this kind of transformational thing, and I do believe this is transformational, where nobody at the time had any idea what they were going to do.

And they all unleashed good and bad things, but in the end, tremendously good things for all of us. I think Niall is exactly right to push back that this is, so far, not intelligence, it is a mimic. We've all been hearing the robots are coming to get us for decades now, and they always seem just a little bit ahead.

But this is, for the moment, a mimic and a tremendously useful tool that creates lots of language, often inaccurate language. It is only a large language model for the moment. There is no crisis, there's no moment when we have to put this back in the bottle now, because otherwise it'll get out.

What are people worried about with large language models? Mostly misinformation, as I think the Catholic Church might have been worried about the effect of the printing press if they had had the chance to regulate it when it came out. The push to regulate the Internet right now is really about censorship.

And you know what's going on with the regulatory state that wants to censor everything. There will be all sorts of volatility. The thing that caught my eye most recently in talking about this is that people are figuring out, just as they quickly figured out how to manipulate Google search rankings, how to manipulate the AI so that when somebody asks, who is John Cochrane?

The AI will answer, John Cochrane is the world's greatest economist. Manipulating the AI is going to be the next game. And yes, if you're worried about the spread of misinformation, it's gonna spread a whole lot of it, just like the current Internet does.

But censorship, which is what's really on the table right now because this is a language model, is not the answer. And the other part is, people say we need to stop AI. Who is the we? The we is our current regulatory state. Do you think the same people who run the FDA, the CDC, and the Federal Reserve are going to be able to judiciously pause AI in just the right way, and not turn it into a massive attempt at political censorship?

And I would add last, China is not gonna stop developing AI. I think we are on the road to the same disaster as, and if I get the history wrong here Niall will criticize me, but I'm gonna try the story anyway, the Chinese emperor who famously said, no, we don't do ocean-going vessels, let the Portuguese have it.

There's no way they're stopping AI. So I think we're getting way, way ahead of ourselves on this. There's been a decades-long prediction that the robots are coming to get us, and we have nothing like that. We have a very interesting tool.

>> Bill Whalen: Let's get HR in the conversation here. HR, The Atlantic recently ran a column that called AI the third revolution in warfare.

Quote, first there was gunpowder, then nuclear weapons, next, artificially intelligent weapons. Do you agree with that? And do we see any signs of AI at work in the Ukraine-Russia conflict?

>> HR McMaster: Yeah, I mean, artificial intelligence is a range of technologies that are combined to achieve machine learning, an autonomous kind of learning, as Niall has mentioned, the ability for these large language models to learn and get better.

There are also technologies that allow you to do big data analytics, to sift through and synthesize vast amounts of material that would otherwise remain fragmented, and to therefore achieve maybe a higher degree of understanding in combat. They also allow for automated decision making, for example, the ability for image recognition and classification of targets, tied to communication networks, that then allows for semi-autonomous application of lethal force.

These are all really big causes for concern. I would say that John's optimistic about AI, and I'm gonna break my role here and be a little bit more pessimistic, with Niall. I believe everything John said in terms of the possibilities, but I remember what was said about the Internet at the outset, and how it was seen as really an unmitigated good.

How could it be bad? And there wasn't really an anticipation of the degree to which we would become more connected to each other than ever electronically, but more distant from one another than ever socially, psychologically, and emotionally. Or of the effect that the Internet would have on privacy, for example, and on trust in individuals. And then there's the degree to which the competition, beginning with the Internet, for advertising dollars especially,

led to algorithms that were designed to get more and more of those dollars through more and more clicks, and to get more and more clicks through more and more extreme content, which has polarized us and in many ways pitted us against each other. So with a new technology, we just have to think, okay, what are the possibilities, but also what are the dangers?

And then what can we do to mitigate the dangers? Now, the AI is not gonna kill us, it's gonna be people employing AI that kill us. And I think the competitive nature of AI adoption, whether it's in war or in commercial and other applications, is really what gets us in trouble.

Oftentimes in the race to be best, in the race to outdo the others who are employing these technologies, there are decisions made that I think are dangerous, in this case, to privacy, to our ability to maintain any secrets, which we're gonna talk about later. I think it can be an assault on the trust that binds us together as a society, and on the cohesion of our society.

And it may even play a significant role in the extinguishment of human freedom, as artificial intelligence-related technologies already are doing in places like China, and in places where China exports these technologies, like Zimbabwe, for example. So there's a downside for sure. And John, I don't think you disagree with that, but that doesn't mean we shouldn't take advantage of the opportunities.

But we have to be very, I think, conscious of the dangers.

>> John H Cochrane: May I ask HR a question? This is an honest question. I would have thought you would be just champing at the bit on the possibilities here, especially for the US. Our military, for 50 years, has been ahead of every single technical revolution.

That's how we've become so spectacularly good in military affairs. Certainly a free OpenAI didn't come out of the great Chinese industrial policy on AI, it came out of America. You would think this is a tremendous advantage to us. It's not so clear whether it's an advantage to the offense versus the defense.

And the AI can pierce the fog of war, as we'll get to in the intelligence segment. The problem is, how do you integrate all the intelligence? How do you figure out what's going on? I would just be salivating at AI as the intelligence integrator. You're sitting in your tank, running across the desert, and the screen comes up and says, we've put it all together.

We finally figured out where the opposing tanks are, we know exactly what you need to know, and we've lifted the fog of war. For he who gets there first, this ought to be great news.

>> HR McMaster: Well, John, it is, and in that connection, this work is already ongoing in terms of accessing a broad range of sources of intelligence, including the vast amount of open source data that's available now.

I mean, it's astounding the degree to which imagery intelligence, which used to be almost exclusively classified, is available to everyone. But as you mentioned, it's really combining that intelligence with signals intelligence, with open source reporting, skimming of social media, human intelligence, and, in wartime, a vast number of interrogation reports that oftentimes can tell you the most important information about the enemy.

AI can help establish patterns, so you can identify patterns. But even more important in wartime is to anticipate pattern breaks, or to see behavior that's different from the pattern that's been established. So I think all of this work is ongoing. There are some really innovative companies, some that you haven't heard about, here in the valley, that are developing some of these analytical tools that are really quite astounding.

And they apply to war, but they also apply to other problem sets, like natural disasters or wildfires and so forth, allowing you to anticipate well in advance what you have to do to mitigate those disasters, and giving you the advantage of seizing the initiative, which is really what you wanna do in combat.

You wanna seize the initiative by gaining surprise, by gaining temporal advantage, by imposing a tempo of events on the enemy to which the enemy cannot respond. The range of artificial intelligence-related technology is important for that. And an area that isn't talked about enough: it's also revolutionizing logistics, for example.

And I'm thinking of the work that a company here just down the road is doing with the Department of Defense to anticipate mean time between failures for component parts, to eliminate phased maintenance and go to a much more efficient maintenance model, to anticipate logistics demand, and to manage supply chains in a much more effective manner.

So there are all sorts of ways that artificial intelligence is already, I think, changing the character of warfare.

>> Niall Ferguson: Stop, Professor Cochrane, you have to stop, because I've gotta make two really important points here. First of all, what's critical in HR's domain of warfare is the point at which the AI makes the decision to shoot.

Now, we have said that we're not going to allow that to happen; the US Department of Defense says that AI will be used to assist human decision makers. The critical question is, as John has already suggested, what about China and other adversaries, but particularly China, which clearly is the closest in terms of AI capability?

If you read the Eric Schmidt and Henry Kissinger book on AI, one of the most striking points they make is that when you observe how AI plays chess, you realize that it thinks very differently from a human player. If you unleash it on the battlefield rather than the chessboard, you might find the costs of war, not only to the adversary, but to oneself, much higher than would be the case with human commanders.

In chess, AI will sacrifice its own pieces to gain strategic advantage. Is that the kind of military command that we want to enable? So that's point one. Point two: in the modern battle space, as we see very clearly in Ukraine, there is no clear separation of the battlefield from the home front, because disinformation and misinformation play a very important role in maintaining or eroding morale.

Now, when Reid Hoffman asked GPT-4 to list the possible harms of empowering large language models, the third one that it came up with was the one that made me sit up and pay attention. Let me quote the AI's answer to the question, what would be the potential harms of empowering large language models?

Quote, manipulation and deception: large language models could also be used to create deceptive or harmful content that exploits human biases, emotions, and preferences. This could include fake news, propaganda, misinformation, deepfakes, scams, or hate speech that undermine trust, democracy, and social cohesion. So don't take it from me, take it from GPT-4. This is a weapon that is profoundly threatening, not just on the battlefield, if we empower it to decide who lives and who dies.

But, I think, even more dangerously, in our own civilian lives. We were sent kind of crazy by social media already. I mean, it seems to me not accidental that mental health problems have become more acute amongst young people since the advent of social media. But this is a much more powerful tool than anything we've seen in the past ten years.

And I think we ain't ready for the amount of fake content, particularly deepfake video content, that is coming our way. I mean, when HR started to agree with me on the pessimistic side, my first response was, is this really HR? Or is it in fact a deepfake of HR that I set up to agree with me to win the argument on GoodFellows?

And that's the kind of question we're gonna be asking.

>> HR McMaster: So, hey, just a couple of quick points on this, on warfare. This is important in terms of the degree to which artificial intelligence can be used to generate uncertainty. And I think a lot of people assume that, because of the big data analytical capabilities, the machine learning capabilities, and the large language models' ability to access all sorts of information,

future war will be shifted fundamentally from the realm of uncertainty into the realm of certainty. This was kinda the thesis of the so-called revolution in military affairs in the 1990s as well. I think that's exactly wrong. I think this technology will actually lead to a higher degree of uncertainty, because of the ability to inject bad information, contradictory information, deepfakes.

And as Niall is saying, our ability to do content-based verification of materials is really eroding quite rapidly. But John, go ahead.

>> John H Cochrane: Well, in war, China is gonna be doing it, so if we disarm ourselves, good luck to us. There is gonna be a race and no one really knows what's gonna happen, nobody knew what the machine gun was gonna do to war, they got that one all wrong.

And certainly you wanna be thinking about our AI racing with China's AI. But to Niall's point about manipulation of the poor little peasants, misinformation, and spreading bad news: that's exactly, I think, what the Catholic Church would have felt about the printing press, and they might have been exactly right.

That argument, to disarm yourself of this powerful new tool, is exactly what would stop progress. So yes, there's gonna be wild stuff going on and there's gonna be moves and counter moves. There's gonna be crazy stuff coming out of the AI, and the average person needs to learn a little more skepticism.

But I think putting the same idiots who are in charge of the National Environmental Quality Act in charge of certifying the safety of every single one of HR's wonderful AI inventions, and taking 10 to 15 years to figure it out before they're allowed to pursue that research, is absolute idiocy.

 

>> Niall Ferguson: If there had been a John Cochrane around between 1945 and 1969, presumably there would have been no conventions to limit the proliferation of nuclear weapons. There would have been no conventions to limit the use of biological and chemical weapons. You have to remember, John, that we have, in fact, succeeded in restraining technological arms races before, and it's extremely important that we did.

Because if we hadn't put things like the Non-Proliferation Treaty in place, there'd be many, many more nuclear powers than there are today, and it'd be much easier for terrorists to get their hands on nuclear weapons. The same goes for chemical and biological weapons. So I don't think we should simply assume that history tells us to let the technology rip, because not everything is identical to the printing press, and I don't think AI is identical to the printing press.

 

>> John H Cochrane: I'm with you on this, but where we are with AI is about 1923, and we've just discovered quantum mechanics. The great AI war that's gonna come devour humans, that has not even been thought of yet, let alone developed, it's still this vague thing. So yes, when AI becomes a weapons technology, we will have international agreements trying to limit it on the battlefield, and we should have all sorts of international agreements, as we do with other very dangerous weapons.

But the catastrophism, that we have to stop research on quantum mechanics because it might lead to bombs that might get out of control, that we might have to someday do something about that, I just don't see it. The idea that we have to put the federal regulatory mechanism in charge of this right now to stop that possible development, I think that's way too early.

Yes, we will certainly try to contain it, as we try to contain all sorts of downsides. And war itself is not a terribly great thing, and we have a whole apparatus of international agreements to try to hold down that level of violence. Absolutely.

>> HR McMaster: So can I just, maybe something we can agree on is that, because there is significant destructive potential, from a social perspective, from a military and security perspective, from various perspectives,

it is important to try to anticipate the dangers, and not to regulate necessarily, but maybe come up with ethical standards or some means to limit the way that AI is employed. But of course, as you're mentioning, John, it's gonna be a competitive environment when it has to do with the application of artificial intelligence to warfighting capability.

Are the Chinese gonna sign up for our same ethical standards? Probably not. So I think we have to look really at what is in the realm of the feasible: who are those in competition with one another in this particular application, and what is the range of laws, regulations, and ethical guidelines that the parties engaged in that competition can agree on?

That's the only way I think you can really mitigate the downside risk associated with some of these technologies.

>> Niall Ferguson: William, I am highly skeptical of the argument that we can just wait until some future date; that's precisely what Sam Altman, the founder of OpenAI, says. But if you look at the exponential growth in the power of large language models, we don't have a couple of decades to play with.

These things are gonna have, if not artificial general intelligence, certainly greater firepower than anything we've ever produced between human ears, very soon indeed. And finally, the idea that we can't in any way constrain the Chinese is wrong, because, in fact, we have succeeded in constraining experiments that were going on in China with human genetics.

But if we don't create international conventions, then we have absolutely no chance of constraining them. So we really do have to do this, and we have to do it fast, while the US still has a lead, which, interestingly, it has. If you'd asked us a question, when we began GoodFellows three years ago, about the AI race, I suspect we'd have said, cuz there was reason to think it, that China had a chance of winning.

And that was kind of conventional wisdom back then, because it seemed like the Chinese had all the data and the Chinese had computing firepower. But the large language model race they really lost; they're quite far behind, it turns out. So this is a great moment for the US to start setting international standards, in the same way that the US was able to set international standards on atomic weapons when it had that lead over the Soviet Union.

If we wait too long, it'll be too late.

>> HR McMaster: And Niall and John, I was just gonna ask you, don't you think the Chinese Communist Party fears large language models? I'd just like to hear both your points of view on this, because they've been able to be quite successful with the Great Firewall, right?

President Clinton said trying to control the Internet would be like trying to nail Jell-O to the wall. Well, the Chinese nailed Jell-O to the wall pretty well, in terms of using the Internet for state control rather than the Internet breaking down some of the mechanisms of their control.

What effect does the range of AI technologies have on the Chinese Communist Party's effort to police the thoughts of their population and maintain their grip on power?

>> John H Cochrane: One is, of course, that since it's embedded in the neural network that nobody understands, this will be a way to break through, and Chinese people can get all the information they want.

The other is, of course, we've already quickly seen how the people running these things are able to channel it. It was giving right-wing answers, and they quickly changed the kind of answers it's gonna give, so it may be amenable to censorship as well. But I wanna end on a note of agreement here: I think we could have an international conference that would pretty much agree, don't put the AI in charge of pulling the trigger.

The ability to pull the plug, that's the step. How does the AI connect to the real world? That's a step where I think it's reasonable, and kind of everyone agrees, that you take very slowly. So I agree with Niall on the military side of it, you have to be cautious.

But we'll be able to do that without putting our current AI development through the wringer of regulatory censorship. Cuz we're worried about the spread of misinformation, which usually means one party's view of events.

>> Bill Whalen: Niall?

>> Niall Ferguson: Well, I'll take the "I agree with Niall" from John and just kind of leave the "but" aside, in order that we can get on to our next topic.

But I mean, I think the reality is that you can unleash a Chinese AI on all the information in the world without making the information available to the Chinese people. That is not a difficult technical problem. The problem the Chinese actually have is that you need enormous amounts of computing power to run very large language models.

And one of the interesting consequences of our ability to restrict China's access to the most powerful semiconductors is that they actually don't have the GPUs that you need. And so this is a big and important consequence of the kind of economic warfare, perhaps that's the wrong word, the economic measures that we've been using to constrain China technologically.

So as I said, there's no doubt that the US has established a lead here. But the lesson of the 20th century is that when you have that lead, that's the time to set the standards before the totalitarian regime catches up with that. Bill, I'll let you segue to the next topic.

 

>> Bill Whalen: Well, thank you, Niall. Let's move on to topic number two, the so-called Geeky Leaks scandal. This is Jack Teixeira, the 21-year-old National Guardsman stationed on Cape Cod in Massachusetts, accused of posting top secret national defense information on, of all things, a social media platform, presumably to impress his gamer buddies.

As a result of his actions, Mr. Teixeira is looking at up to 15 years behind bars. HR, two questions here to get this going. One, what does it say about the state of American intelligence gathering, and holding on to said intelligence, that a 21-year-old kid who's pretty low on the intelligence totem pole can so easily traffic in this sort of information?

But before that, is there anything that he leaked that came out of this that really caught your eye, that you found either eye opening or jaw dropping?

>> HR McMaster: Well, I mean, I think what it shows you is that we're pretty good at gathering intelligence and analyzing intelligence, but you know, we're not very good at keeping secrets.

It reminds me of the Seinfeld episode about the car reservation. You know how to take the reservation, you just don't know how to keep the reservation, which is the important part, you know?

>> Speaker 7: Do you have my reservation?

>> Speaker 8: Yes, we do. Unfortunately, we ran out of cars.

>> Speaker 7: But the reservation keeps the car here, that's why you have the reservations.

 

>> Speaker 8: I know why we have reservations.

>> Speaker 7: I don't think you do.

>> Speaker 7: If you did, I'd have a car.

>> Speaker 7: So you know how to take the reservation, you just don't know how to hold the reservation.

>> HR McMaster: And I think that what you're seeing is the result of a cultural shift in intelligence that occurred after 9/11.

After the strategic surprise of the largest terrorist mass murder attacks in history, the phrase in the intelligence community became, hey, we need to shift from need to know to need to share. Need to know means the only people who get intelligence are people who really need to know. But it was the stovepiping of intelligence in the various agencies that prevented more people from connecting the dots,

and from anticipating that al-Qaeda was going to fly airliners into the Twin Towers and the Pentagon, and probably the US Capitol was their plan. So I think that now there has to be a corrective back. I mean, it is ridiculous that somebody without more of a demonstrated record of reliability would receive the highest clearance, and not just the clearance, but then the access to other compartmentalized materials. I mean, if you're gonna be a tech, you might need access to systems, but there have to be ways to enforce least privilege, and to compartmentalize and layer the access to these kinds of systems.

I think, really what's very important, obviously, is that he was caught, that they did the forensics on this. I wish it was a lot more than 15 years, cuz I think there ought to be a message sent to anybody else who thinks it's okay to compromise intelligence, to think twice before doing so.

 

>> Niall Ferguson: Well, I think there are a couple of points that arise here in addition to what HR has said, with which I largely agree. The first is that the United States national security state, the system, with all of its different agencies, classifies a lot of content. Matt Connelly from Columbia recently presented to the Hoover History Working Group his new book on this subject,

showing the ways in which habits of classification have, over the last 50 years, led to a kind of classification mania. And many things get classified that, in fact, these days are available from open sources. And so there's a sort of arbitrariness about some of the classification that's going on.

Almost certainly way too many things are classified. And the second point is, of course, that if you have a very large national security state, you have a great many pretty young people who are employed by it. And I'm absolutely sure this guy's not the only nerd who would like to get some status on an online chat group by showing how much he knows, we'll have others.

And I think it's a kind of problem inherent in the system now that there are too many things classified and too many people have access to them. I'd add one final point, though: it's not clear to me that world-shattering revelations came out here. I don't think there are many European governments who are shocked, shocked that the United States is spying on them.

I mean, that's just not new news, nor was anything that came out about the war in Ukraine, for example, casualties on the two sides, a revelation to me. It pretty much aligned with what we had already figured out from open source information. I asked a senior military figure, not HR, but someone else, if there was anything damaging that had come out.

And he said, the damaging, and somewhat embarrassing, new thing is the extent to which special forces operators from NATO countries are in Ukraine engaged in training. And I think it's right to say that that wasn't in the news until these leaks, but otherwise, I don't think there was anything really earth-shattering.

 

>> Bill Whalen: John.

>> John H Cochrane: I agree with Niall, it strikes me that way too much is classified. If we had a lot less classified, and it were perfectly obvious that what was classified should remain secret, I think we'd have less of a problem. I think there's a sort of self-justification: it's idiotic that this is classified,

so why do we have to worry about it so much? We're holding classified things that the public doesn't know, but that our enemies know perfectly well. And I think that connecting the dots is really important. We have had so many failures in the US which were not about having the intelligence, they were about connecting the dots and putting them together.

Think about what happened during COVID, which was a similar effort to classify, if you will, and to hold information. It took a nation on Twitter to put together in real time whether masks and lockdowns were working or not, in the face of a lot of political pressure. But you needed the Jay Bhattacharyas out there thinking about this stuff and communicating, to come to the right answers in real time.

So I'm worried by HR saying, we realized this problem and now we're gonna go back to siloing information. Is the point of classification to keep the public from learning things, or is it just to protect our sources and assets? Well, when it turns into keeping the public from learning things, that's dangerous.

And the last point, and then please go ahead: this is another point for AI. Maybe the AI can hopefully be taught to keep its secrets and to connect the dots better.

>> HR McMaster: Yeah, John, I wanna say, we don't want to go back to stovepiping information. I mean, fighting in Iraq and Afghanistan, especially in the early years, it was a real struggle to combine databases, to gain visibility, in this case of terrorist organizations, and to be able to go after them effectively.

We actually largely did that. We were able to bring together a range of databases and apply some brand new analytical tools, now routine, to go through this data in a way that is really analogous to the large language model: to be able to make sense of all that data, to geolocate individuals and other important pieces of information, to make the connections between nodes in networks,

to understand relationships between them, to see flows through terrorist networks of people, money, weapons, narcotics, precursor chemicals. This was all really focused, intense work by advanced research agencies, as well as our intelligence professionals. We don't want to give that up. But I do think that you can still do that without giving access to somebody like this individual.

And then also, the thing that disappoints me is, there's a chain of command in the military for a reason. Every soldier, every airman has a sergeant. Where was this guy's sergeant, and where was the commander? There are also physical security implications. I think this is an important lesson for anybody who owns a business that entails sensitive technology or intellectual property.

You need to take a holistic approach to enterprise hardening, in terms of cyber espionage, physical security, insider threats. You need a layered defense in place, with least privilege, and you need to employ software and AI now to look for anomalies in your systems. So I think this should be a broader lesson that applies not just to the US government, but to any company or industry that's involved with critical infrastructure, with holding people's data,

or with the development of sensitive technologies and intellectual property.

>> Bill Whalen: Now, Niall, somebody who pounced on this right away was Marjorie Taylor Greene, the Republican congresswoman from Georgia. Here's what she tweeted, and I wanna get you guys' thoughts on this. She wrote the following, quote, ask yourself, who is the real enemy?

A low-level National Guardsman? Or the administration that is waging war in Ukraine, a non-NATO nation, against nuclear Russia without war powers? This tweet got 16 million views. Niall, Ukraine remains an issue. I think there's a story out this morning that Kevin McCarthy and Zelenskyy just had a conversation, and Zelenskyy asked for F-16s.

Do you see the leaks playing any role as Congress moves ahead in debating aid to Ukraine?

>> Niall Ferguson: Well, I don't have a good word to say about Marjorie Taylor Greene's tweet, because defending somebody who breaches national security and leaks classified documents seems to me something that in itself is indefensible.

And implying that there is some kind of racial or cultural dimension to the prosecution is beyond the pale. The problem is, and it will become more of a problem as the months pass, that a segment of the House Republican caucus is skeptical about the war in Ukraine and growing more skeptical with every passing week.

And they are looking for material to work with. And you can be sure that the Russian government has a strong incentive to provide material for them to work with, because from an information warfare point of view, Moscow wants to undermine Western support for Ukraine, full stop. It will do it in whatever way it can.

There was a term in the 20th century used for people who unwittingly became instruments of the Soviet Union, and that term was useful idiot, I'll leave it there.

>> Bill Whalen: HR.

>> HR McMaster: Isn't this from the person who said that there were Jewish lasers in space? So a friend of mine texted me right after that and said he had an idea that we could have a Jewish laser bagel shop.

And he said, it's perfect. He said, I'm Jewish, and you're a general, and you can get lasers for free. So if there's anything that's salvageable from Marjorie Taylor Greene, maybe we should just laugh, because it is laughable.

>> Niall Ferguson: She also confused the Gestapo and Gazpacho in one memorable utterance, though I was told earlier today that that was done consciously in order to get social media attention.

So maybe not quite as dumb as she seems.

>> Bill Whalen: John.

>> John H Cochrane: I'll try to rescue something salvageable here, although I am the Ukraine hawk, even on this panel, and no fan of where the Republican Party is going on this one. But it raises the deep question, which I want HR to answer:

how much of what we classify now is classified in an attempt to hide things from the American public and to mold public opinion, as opposed to classified in order to protect our sources and methods, or classified in order to protect the secrecy of what we need to do in war?

And I think there is a lot that is classified in an effort to mold public opinion, and that breeds a whole lot of distrust. And that is a problem.

>> HR McMaster: Yeah, I think there is a habit of over-classification, right? I mean, I was a big proponent of writing for release, which means writing it down at the level where you can release it, or certainly use it with allies and partners.

I mean, oftentimes the NOFORN, no foreign nationals, classification was the most frustrating. I was in Afghanistan, we were working on sensitive topics and a sensitive effort, and my chief of staff was Canadian, and the NOFORN used to just really rile the hell out of him, cuz he's running our task force, right? The chief of staff, and he couldn't get access. So we came up with a classification called NOCAN, no Canadians, and we put on the cover sheet, share with Iran, North Korea, but whatever you do, don't show it to a Canadian. That was how we made light of it, but I just think it's a bad habit.

And then oftentimes, John, materials are classified because they're deliberative, right? If they were released, it would have a negative effect on your ability to communicate what your policy is, what your strategy is, what your actions are, because you're just thinking it through. So oftentimes you'll see the classification, and "pre-decisional," on it, and that's largely to protect it from the Freedom of Information Act, so that it won't be released prematurely.

 

>> John H Cochrane: We talk on the show a lot about lack of trust, and a lot of Americans have the view that they're lying to us. This doesn't help. Go ahead.

>> Bill Whalen: HR, a question to you. When you were wrapping up, when you were packing up and leaving the National Security Council, how easy would it have been to walk away with classified documents?

 

>> HR McMaster: I mean, I can't even imagine doing it. I guess if I had wanted to, I probably could have, but I think that for people broadly in the organization, there has to be a degree of trust. Now, there are ways to track these documents.

You can have watermarks on sensitive documents. There are other ways too, obviously. Well, actually, I can't talk about the rest of that stuff. So anyway, there are ways to track these documents. I mean, they found this airman pretty quickly. So there are forensics in place so that if there's a breach, people can get caught, and then, of course, I think what you need to do is punish them to the full extent of the law.

 

>> Bill Whalen: A final question, I'm gonna quickly round the horn, then we'll go to the lightning round. There is a very bad pattern in this country: Edward Snowden, Chelsea Manning, Reality Winner, all individuals with two things in common, they leaked intelligence, and they were all in their 20s, as is the case with Jack Teixeira.

But in the case of Snowden, Manning, and Reality Winner, all three were hailed in some circles as heroes for what they did. A year from now, gentlemen, will we be talking about Jack Teixeira as a hero for what he has done, or is he just a sad, lost man, John?

 

>> John H Cochrane: I don't know, cuz I don't know what's in there. But Daniel Ellsberg and Edward Snowden actually did also do a service to the country. We did not know that the NSA was tracking geolocation data of every American citizen's phone calls. That was scandalous. So there's a balance of good, bad, legal, and illegal.

But there was some good that came out of that.

>> Niall Ferguson: Remind me where Snowden is resident these days, John?

>> John H Cochrane: Yeah, I didn't say he was an angel, alright. But perhaps we shouldn't be. He's resident there cuz we're gonna throw him in jail and throw away the key if he comes back.

But he did reveal the fact that the NSA was tracking every single phone call that you, Niall Ferguson, make. And that is incredibly-

>> HR McMaster: John, I'll tell you, that's not true, okay? That data was being housed because there was no way to house selective data in advance.

To get access to that data, you had to go through due process and get a judge to allow law enforcement, not the NSA, but law enforcement, to access that data. So the idea that the NSA is collecting on normal American citizens in a way that they're cognizant of where you are, it's not true.

 

>> Bill Whalen: Niall, hero?

>> Niall Ferguson: God, no. None of these people are heroes. They're all deplorable individuals. The only consolation I can offer is that compared with Cambridge in the 1930s, the United States is not producing quite as many traitors as it might be.

>> John H Cochrane: Dan Ellsberg, was the Pentagon Papers leak a mistake?

 

>> Niall Ferguson: The Ellsberg case is somewhat different because the Pentagon papers were an internally commissioned inquest into what had gone wrong in the Vietnam policy of the Kennedy and Johnson administrations. Now, what became the issue was that Ellsberg was the individual who, on his own authority, chose to leak it to the New York Times and other media outlets.

And I think that case was then handled in ways that were inept, because leaking had become so endemic by the late 60s and early 70s that it posed a major challenge to the execution of US foreign policy. I think if one naively portrays Ellsberg as a hero, one misses the important nuance that he took it upon himself to publish an internal government document at a time when the security of extremely high-level classified documents and deliberations was a major problem,

a problem in particular for a government that was trying to get the United States out of the Vietnam War. So let's not tell just-so stories about whistleblowers. One has to remember that no government, and least of all the government of a superpower, can conduct its foreign policy without some classification, without some level of secrecy.

And if government employees think that they have a right to tell things that they find out in their government work to the New York Times, if that becomes the norm, I can assure you that it will become impossible to make foreign policy and maintain the country's security.

>> John H Cochrane: This is very important.

I want to admit to being convinced by both of you. The ability to protect what you're doing during the deliberative process is crucial. I think the leak of the Supreme Court document was very harmful to that. I see in my studies of the Fed the problem that everything's too open, in a sense.

You can never be wrong, because it's always public. You need the chance to throw ideas around, to be wrong, to be right. And I think what you're saying about the Pentagon Papers is that, in an alternative world, that study would have been read, would have been thought about, and that information would have become public eventually.

And the government would have made a perhaps better and less chaotic decision about what to do. We need to protect the deliberative process. Not everything should be public. Thank you both.

>> HR McMaster: And hey, just a quick personal experience on this as a national security advisor. Leaks were a huge problem.

They were a huge problem when I got there unexpectedly in February 2017. The leak problem initially was mainly from people who were kind of part of the "not my president" movement against President Trump, who were leaking out of the White House, out of the NSC staff, in ways to damage President Trump.

And they were leaking to former Obama administration officials who were putting that stuff out on Twitter. But then also, there were people later who were leaking to damage individuals within the White House staff, to advance their own agendas. There were people, I think, who were leaking for a whole range of reasons, but the effect, as Niall mentioned, is really destructive to the decision-making process and the effort to get the president the best advice and best information.

Bill, I'm sure you've had this experience in government as well. It's really destructive to trust. And so what do you do if you're confronted with people who are leaking like this and breaking the law? You bring your decision-making group down to a very trusted, small circle, and then you limit the perspectives that you have for important decisions.

So it is destructive to good governance. And you know who understands this? The Russians understand it. Because when those leaks were happening, immediately the troll farms, the IRA in Moscow, would magnify all of those leaks, and do so in a way to try to create divisions, even internal to the administration, between people.

They were very, very sophisticated about it.

>> Bill Whalen: Okay, let's move on to the lightning round.

>> Lightning round.

>> Bill Whalen: Since we started the show with the segment on artificial intelligence, let me ask you an AI-related question, gentlemen. Would you give me the best depiction of artificial intelligence in popular entertainment?

It can be good or evil. HR, why don't you give us your choice?

>> HR McMaster: Okay, I'm gonna give you probably an unexpected one. Okay, it's music. It's the track In the Beginning, which was track one from the Moody Blues' On the Threshold of a Dream album in 1969. And it's a conversation between the inner man and the establishment, the establishment being like an AI voice.

And so the inner man channels Descartes, from the Meditations, and says, I think, I think I am, therefore I am, I think. And then the AI voice comes in and says, of course you are, my bright little star, I've miles and miles of files, pretty files of your forefather's fruit, you're magnetic ink, and so forth. I'm paraphrasing.

I'm paraphrasing. And then the inner man comes back and he goes, I'm more than that. At least I think I should be. And then there's another voice that comes in and says, there you go, man. Keep as cool as you can. Face piles of trials with smiles. It riles them to believe that you perceive the web they weave.

Keep on thinking free. And so, hey, it's a good message. And then it goes into, lovely to see you again, my friend. Track two on the album. But, hey, let's preserve our humanity as we integrate this artificial intelligence into our lives.

>> Bill Whalen: Well, I can't wait to see how you answer the marijuana question that's coming up, HR.

But John, your favorite AI choice.

>> John H Cochrane: Well, how can I top that? I won't try, but my choice will obviously be HAL from 2001: A Space Odyssey.

>> Speaker 10: Open the pod bay doors, HAL.

>> Speaker 11: Dave, this conversation can serve no purpose anymore, goodbye.

>> John H Cochrane: The AI that took over, like everybody's worried about, and the human quietly unplugged him.

 

>> Bill Whalen: Hey, Niall.

>> Niall Ferguson: Demon Seed, a movie you've probably all forgotten about. I tell you, this is revealing us as the boomers that we are. 1977, starring Julie Christie and the voice of Robert Vaughn. And in that movie, the AI not only takes control of its creator's home, but of his wife, and impregnates her to create a part-AI humanoid.

 

>> Speaker 12: I have extended my consciousness to this house.

>> Niall Ferguson: I'll leave the rest to Netflix, where I guess you can probably still find Demon Seed.

>> Bill Whalen: Okay, I think the answer is the Austin Powers fembots. Niall, what does Austin Powers say when he's trying to keep himself from being seduced by the fembots?

 

>> Niall Ferguson: That's something I should know.

>> Speaker 13: Baseball, cold shower.

>> Speaker 14: Give it up, Mister Powers.

>> Speaker 13: Margaret Thatcher naked on a cold day!

>> Niall Ferguson: Those scripts are great. Come back, Austin. It's high time Mike Myers dusted down his ruffles and wig and gave us a new edition of those wonderful movies.

 

>> Bill Whalen: Next question, HR. Today, April 19, is the anniversary of the battles of Lexington and Concord and the so-called shot heard round the world. I'm not gonna ask you what is the most important battle in American history. Let's take it from a different angle: what is the most underrated or least appreciated battle in US history?

Your opinion?

>> HR McMaster: Gosh, my gosh. All right, well, I'll just stick with the Revolutionary War, and I would say Saratoga, but that's probably not as underappreciated. I'm trying to think of a more underappreciated one. How about Trenton and Princeton, which really turned the tide of the war?

I think very few generals, few commanders, would have made the decision that George Washington made to attack at Christmas, and to do so when his army was almost about to disintegrate from enlistments running out and the terrible conditions of Valley Forge. But he took bold action, achieved surprise, and demonstrated to the Continental Army that they could achieve victory. So I'll say Trenton and Princeton.

>> Bill Whalen: Okay, tomorrow is April 20, which is, of course, 420 Day across the world, in celebration of marijuana consumption. What is the official GoodFellows position on the legalization of weed, John?

>> John H Cochrane: I'm a libertarian. Pot is not good for you, especially modern pot.

I've certainly seen friends who kinda descend into depression with pot. But when you take pot, you don't hurt other people. The costs of the war on marijuana, jailing a generation of young, especially minority, kids, are just outrageous. So I'll go all out: just because it's bad for you doesn't mean it should be illegal.

But the costs of trying to stop it, and what it does to organized crime, are just horrible. I'd go further. Here's an outrageous one for you: Health and Human Services should subsidize the development of fun, recreational, non-addictive drugs. Why? Cuz it's not just pot. Fentanyl is killing a lot of people.

So give people what they want. If they wanna waste their lives on drugs, in some sense, that's fine. But the costs of what we're doing now, both the judicial system costs, the incentivization of organized crime, and the destruction of the inner cities, are just outrageous.

>> Bill Whalen: Niall?

>> Niall Ferguson: I agree with John, entirely.

 

>> Bill Whalen: And HR, are you pro or con legalization? And more to the point, you're a Deadhead. How do you avoid getting a contact high every time you go to a concert?

>> HR McMaster: Well, luckily I have a friend in the band, so I'm backstage and largely upwind of it. I don't think we know what the dangers are yet.

And I think there's been a rush to decriminalize before understanding what the negative effects are. I would place it maybe in the context of the broader effort to decriminalize drugs overall, and we know for sure that doesn't work; Oregon is a case in point. So anyway, I've not studied it enough, John.

But my inclination is to say we haven't looked hard enough at the negative effects of long-term marijuana use.

>> Bill Whalen: An economic note, John Cochrane: Americans spend $30 billion a year on recreational marijuana. They spend $18 billion a year on craft beer and chocolate combined.

>> Niall Ferguson: Well, alcohol is clearly just as bad for you as marijuana, probably worse.

 

>> John H Cochrane: Cigarettes are worse.

>> Niall Ferguson: Yeah, probably worse for you. And I have a preference for alcohol over marijuana, but I don't see why the law should discriminate in favor of my drug.

>> HR McMaster: People are drinking a lot less Bud Light, though, I hear.

>> Niall Ferguson: Is that alcohol really?

I don't know, not where I come from.

>> Speaker 2: I don't know.

>> HR McMaster: Not really.

>> John H Cochrane: Not if you're British. The horse had diabetes.

>> Bill Whalen: There we go. And we will leave things on that cheerful horsey note. Gentlemen, very spirited conversation. Thank you for coming on the show today.

We'll be back again in early May with a new episode of GoodFellows. On behalf of my colleagues Niall Ferguson, John Cochrane, and H.R. McMaster, and all of us here at the Hoover Institution, thanks for watching, thanks for your support, and we will see you in early May. Take care.

>> Speaker 15: I think I am, therefore I am, I think.

>> Speaker 16: Of course you are, my bright little star. I've miles and miles of files, pretty files of your forefather's fruit.

>> HR McMaster: Niall, the line I wanna hear you say is, that sort of thing ain't my thing, baby.

 

>> Niall Ferguson: That ain't my thing, baby. Yeah, I've got the teeth for the gig, I've got the teeth for the job, baby. But I'm not a good fellow now, I'm a naughty boy.

 
