On the 78th anniversary of the only wartime use of nuclear weapons, is the human race at another moral crossroads, fearing what artificial intelligence (AI) breakthroughs might unleash? Fei-Fei Li, co-director of Stanford University’s Institute for Human-Centered AI, joins Hoover senior fellows Niall Ferguson, John Cochrane, and H.R. McMaster to discuss AI’s promise and peril, followed by the three “GoodFellows” revisiting Harry Truman’s decision to drop the bombs in 1945. Just as crucial to mankind’s future: they debate the likely winner in an as-yet-unscheduled MMA bout pitting Facebook’s Mark Zuckerberg against X’s Elon Musk.  

>> Robert J Oppenheimer: I remembered the line from the Hindu scripture, the Bhagavad Gita. Now I am become death, the destroyer of worlds. I suppose we all thought that.

>> Bill Whalen: It's Tuesday, August 8, 2023, and welcome back to GoodFellows, a Hoover Institution broadcast exploring social, economic, political, and geopolitical concerns. I'm Bill Whalen.

I'm a Hoover distinguished policy fellow, back as your moderator, and I'd like to thank my colleagues for picking up my slack in my absence. I'm here with my three friends, the GoodFellows, as we call them: the historian Niall Ferguson, the economist John Cochrane, and the geostrategist General HR McMaster, Hoover Institution senior fellows all.

We're covering two topics in today's GoodFellows. The first part will be on AI, and the second will be on A-bombs. Specifically, we're gonna talk about artificial intelligence. And this being the 78th anniversary of the dropping of atomic weapons on Japan, we thought it a good time to pose to our two historians whether or not Harry Truman made the right call.

So joining us today to talk about AI is Doctor Fei-Fei Li. Doctor Li is the inaugural Sequoia Professor in the computer science department at Stanford University, and she's co-director of Stanford's Human-Centered AI Institute. She served as the director of Stanford's AI Lab from 2013 to 2018.

It was during that time that she took a one-year sabbatical to serve as a Google vice president and chief scientist of artificial intelligence and machine learning at Google Cloud. Doctor Li joins us today to talk about artificial intelligence, its promises and perils. Fei-Fei Li, welcome to GoodFellows.

 

>> Fei-Fei Li: Thank you, Bill, and thank you, everybody. Very excited to be here.

>> Bill Whalen: So this is a complicated topic, and I'm gonna begin with, I hope, not too long-winded of a question, but here goes. Doctor Li, a year ago, the CEO of Google told the World Economic Forum, and I quote, AI is one of the most profound things we're working on as humanity.

It's more profound than fire or electricity. This past March, Eliezer Yudkowsky, as you know, a leading artificial intelligence safety researcher, wrote the following as he called for an AI moratorium, quote: If somebody builds a too-powerful AI under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.

So, Doctor Li, let's start the conversation this way. First, why the hullabaloo all of a sudden over AI? This is a technology, as I understand it, that's been researched and developed for decades now. But why all the hullabaloo at this moment? And secondly, as a scientist, in your opinion, will AI profoundly change the world for the better?

Or is Mr. Yudkowsky right? Is it destined to end the human species and all biological life on the planet as we know it? Your take?

>> Fei-Fei Li: Great question. And how many hours do we have? So first of all, I do want to just clarify one perception.

To the public, this is all of a sudden; to AI scientists like me, this is inevitable. This is a profound technology. And when I say inevitable, I mean its profound impact, because AI is a very nascent science, but it's the science that really gets to the core of what an intelligent agent can do.

And humans are intelligent agents, and a lot of who we are and what we do is considered quite advanced, or the most advanced, in the animal kingdom. And AI, in silicon form, is starting to perform those incredible abilities, like having a conversation, or understanding massive amounts of patterns, or recommending your next Amazon shopping list.

So it is profound, and I think, well, first of all, the public awoke to this partially because of the ChatGPT moment last fall, right? Suddenly, in our hands, at our fingertips, the entire global humanity has an AI tool that can have quite an intelligent conversation with us.

And I think that is even more profound than seeing, on TV, AI beating a Go master, because that is still a degree removed. Most of us don't play Go, and we don't need Go to survive or work. But having deep conversations, knowing, understanding things, learning things, getting knowledge, writing, composing essays, it's just profound.

And that's why the public is taking such a deep interest. But this technology is much more horizontal than just a chatbot. It really can touch every single vertical industry. I personally work in healthcare and also robotics. We can dive deep into that.

>> John H Cochrane: I'd like to ask a technical question, and I'm jumping in first because I think the technical side is important background before we get to the policy.

It's important to understand what it is and what it isn't before we make these speculations about whether the robots will be taking over. As I understand it, the basic idea is that large language models evolved from: predict the next word given the last 30,000 words. And for example, with the visual stuff that you've worked on, you take 100,000 pictures where people say, that's a duck, that's a dog, that's a chimp.

And you run a big nonlinear regression on the pixels, and you figure out some way of saying, that's a dog, that's a chimp, and so forth. Now, phrased that way, this is not intelligence. This is very good prediction of what humans will say. But that's as far as it goes.

Now, I want you to go beyond and say, how does that become intelligence? Because we're already seeing emergent properties, that this is somehow better than just that. Of course, we want to get what the best human says, not what the average human says. But I gather it's also starting to get better than the training data.

So is that true? Or is the way to understand this that it's merely very useful, I mean, if you can get what the best human says in response to a prompt, that is an exchange of human information, but not itself yet intelligence? Or do you see actual intelligence, actual logic, emerging from this basic idea of just predicting the next word?

 

>> Fei-Fei Li: John, this is such a profound question, and frankly, we need a philosopher here. And frankly, even a philosopher here would just be in a debate, right, like, what is-

>> John H Cochrane: I just want capabilities, not the full philosophical question of what intelligence is. But should we understand it as: fill in the next word that the average human, or even the best human, would use? Or is it somehow better than that?

>> Fei-Fei Li: Okay, so it is what we call a sequential prediction model. When you're talking about the LLM, which stands for large language model, by the way, the entirety of AI goes beyond large language models.

We can talk about other things, but let's just focus on what's most exciting, what a lot of people call generative AI, LLMs. And yes, it's largely a sequential model that has learned so much that it can predict the next word to say. And because of this tremendous power of learning, from both the huge amount of Internet-scale data and the model being so huge, billions and billions and trillions of parameters,

it has the ability to predict the next word and compose the next sentence, the next paragraph, very well. Whether you call it intelligence or not, it can do this very well in many circumstances. And this is why it is so profound.

And you ask me if it's an average human's ability or a best human's ability. If the best human is Einstein putting out the equation of special relativity, no, it's not that best human's capability. But if the best human is Einstein going to buy an apple and asking how much the apple is, it actually is Einstein's ability. So it depends. Why do we see that?

Because, first of all, of the training data it learns from. There's not enough training data to give to today's AI so that the AI can create the special relativity equation. But there is enough training data to give to today's AI to talk about an apple and how to purchase it. Recently, Stanford HAI's digital economy lab put out one very interesting study that looked at contact center workers using ChatGPT to help their productivity and the quality of their work. By and large, on average, it's a 14% improvement in the quality of work.

But if you go into the data, you'll notice the bulk of the improvement goes to novice workers, those who are not good at the job yet. The best workers' productivity or quality of work did not get improved by ChatGPT.

>> John H Cochrane: But that's still giving human capabilities to other humans, which is great, not intelligence on its own. Although I am hearing sort of emergent intelligence and logic in the algorithms. But I'm taking too much of this, it's you guys' turn.

 

>> Niall Ferguson: I've actually read a book that was co-authored by GPT-4, which is a more powerful large language model. This is Reid Hoffman-

>> Fei-Fei Li: Reid Hoffman's, yep.

>> Niall Ferguson: Which is co-authored with GPT-4. And it is an impressive display of mimicry, because what GPT-4 can do is generate language in imitation of almost anybody that you ask it to imitate.

It's impressive that it can come up with a passage of text that sounds like Seinfeld, which is one of the most striking examples in the book. The book does a good job of surveying all the different good and bad implications of this kind of AI in particular. But there's a strange omission, and I want to ask you about that.

It doesn't say much about politics. Now, I have a theory, which is that any innovation attracts first the nerds, that's kind of people like you, Fei-Fei, with all due respect, the-

>> Fei-Fei Li: Yeah, I totally admit that.

>> Niall Ferguson: Like-minded nerds. And then the next people who come along are the crooks.

And then after the crooks, you get the campaign operatives. And I don't understand why, after all we've seen in the last, let's say, four US elections, we're not preparing ourselves for the abuse of AI in the 2024 election. Is that something that your center's thinking about? And if so, should we be worried?

 

>> Fei-Fei Li: Okay, your comment and questions are loaded, and I lose sleep over this. So I know I'm a nerd, and I'm a very proud nerd, but I'm a nerd who loses sleep over AI's profound political and social impact, including on democracy and elections. And frankly, I don't think I'm the only nerd losing sleep over this; a lot of our friends on campus are talking about it.

And so the short answer is, Niall, yes. But this is also why Stanford HAI was established. I don't think nerds can solve all of the world's problems. We need to work with historians and-

>> Fei-Fei Li: Senior fellows from Hoover and policy thinkers like all of you to work together cuz-

 

>> HR McMaster: Even economists. Even economists.

>> Fei-Fei Li: Yes, economists, even economists.

>> Fei-Fei Li: No, seriously, I think this goes beyond the technical problem, yeah.

>> HR McMaster: If I can, just maybe a follow-on to this as well. I mean, we're talking about some of the potential nefarious uses of the technology.

And of course, technology is pretty neutral. I think we've learned from the Internet that it really comes down to the motivations of the people who are using these tools, who are using the technology. There's been this idea that we should declare a moratorium on AI research, but of course, everybody would have to sign up for that if it were even a useful idea.

And everybody would include maybe those who don't have our best interests at heart. This could be criminals, as Niall mentioned. It could be adversarial powers like the Chinese Communist Party. So what are your thoughts on the most significant threats we ought to be concerned about? What are you most concerned about besides what we've talked about already, political interference?

And what can be done? What do you think is realistic in terms of actions we can take together to maximize the benefits of the technology while protecting against the downsides and the dangers?

>> Fei-Fei Li: Yeah, HR, great question. So, because this question is so complex, I'll try to answer it in three concentric rings.

One is my biggest concern at the individual level. The second is my biggest concern at the community and society level, say the American society level. And the third is at the level of humanity on a global scale, because that way we can at least rein in the answers. And then we should also talk about: should we pause, or can we pause?

So my concern at the individual level, which is the least talked about in media today and which I think is the most profound, is actually human dignity. Because if you look at our civilization, at the invention and usage of tools, and at the entire evolution of human civilization, whether political, economic, or technical, there is a profound sense of increasing human dignity, individual dignity.

We use tools and governance and all kinds of vehicles to increase it, satisfying basic needs and then emotional ones. AI is such a funny technology; it really touches the core of who we are as humans, our own agency. And I do worry that if we don't use the technology right, we'll have unintended consequences.

I'll give you an example. I work in healthcare, and I especially care about the aging population, which is an increasing population. It's taking up so many GDP resources, and we want every aging person to live healthily and with dignity. And there are so many AI solutions we can apply, for example smart sensing and monitoring, potential robots, chatbots.

But in my ten years of work, every time you work with the medical teams on thinking about how to make patients safer and healthier, the most profound pushback or question sometimes boils down to: am I gonna lose dignity, or have my dignity reduced, along the way?

Sometimes to the point, and this is my personal story with my aging parents, of: I'd rather not comply with medication if I feel I still hold my agency. This is a really interesting struggle with technology. Where do we balance that? And when you have an onslaught of smart conversation engines, smart monitors, smart cars, smart fridges, smart everything, where are we taking humanity?

So this is one layer we don't talk about enough, and I think about it a lot. The second layer is society. We already talked about democracy; for a society like ours, that is one of the concerns I lose sleep over. Another concern I lose sleep over is definitely jobs. I don't lose sleep in the sense that suddenly all truck drivers will be losing their jobs.

I just watched Oppenheimer, I'm sure you guys already watched it. I missed the IMAX version, but I watched it. This is a technology that's bigger than the atomic bomb in the sense that it is so horizontal, from healthcare to environment to agriculture to climate to the financial industry to everything.

And we as humanity are so entangled, and yet we're so complex. There is the data piece, there is the deployment piece, there is the competition piece. How is this gonna disrupt the global order?

>> Niall Ferguson: Fei-Fei, I have a question about this individual dimension. Kissinger, in the book he co-wrote with Eric Schmidt, and in an article he wrote several years ago now for the Atlantic, says that AI might transport the individual back to a kind of pre-Enlightenment state of bewilderment, maybe even a pre-scientific-revolution state of bewilderment, where the average person doesn't really know why stuff is happening.

Because the AI is arriving at recommendations, maybe even arriving at decisions, in ways that the ordinary person doesn't understand. Do you think there's any truth in that? I found it a very interesting idea when I first came across it, and I sometimes feel a bit like that medieval peasant when I'm getting nudged by some AI to do something.

 

>> Fei-Fei Li: Yeah, so first of all, I know that HR, you started this whole series of questions with concerns, but I do wanna make sure that I also deliver a sense of hope. So even on the individual side, that bewilderment is possible, depending on how you use it.

Let's face it, if the only way we teach children math is to do everything on a calculator, and then a super calculator, they're going to be bewildered by math. But at least for the however-many decades since calculators were invented, we haven't completely failed. I use the word completely because I'm sure we have opinions about math education. So it really depends on how we use the tool.

I think the flip side of bewilderment is actually empowerment.

>> John H Cochrane: Sorry, I just wanted to disagree violently with Niall. When cars came in, people said that people who were used to horses would get bewildered. We've done this hundreds of times before. This is gonna be tremendously empowering, because, for example, the call center people now have access to the best of human abilities.

People with low skills are going to do great with this. It's like people who didn't have good handwriting once typewriters were invented: great, they can do stuff they couldn't do before. Who's got to worry? It's us, because now GPT-4 can, say, deliver a speech on geopolitics in the style of HR McMaster, without his speaking fee, and bingo.

So we're-

>> Fei-Fei Li: Actually, I'll push back on that, because HR's next geopolitical speech will be synthesizing new information in ways that ChatGPT, or whatever GPT, can never be trained fast enough to really do as well as HR.

>> John H Cochrane: And HR is going to keep his speeches off the training data if he's speaking well.

 

>> Fei-Fei Li: Yeah, that is actually a regulatory question: who owns the data, right? We're talking about the creator economy being disrupted precisely because we need to worry about that. But again, I agree with John, and I disagree with Niall, in the sense that the empowerment is critical. I do want to say one last thing.

I'm an educator, both in my own house and at Stanford, and I think GPT technology is one of the best things that has happened to education in a long time, maybe ever. Because now every curriculum, especially K-12 but including college, should really be doing some soul-searching about what we are producing in these little humans.

Are we producing just little ChatGPTs that can regurgitate and memorize facts, or are we really producing critically thinking, creative humans? GPT demonstrates it's not that hard to create a little agent that can regurgitate and pass the LSAT and MCAT with flying colors. Now what happens to our K-12 education?

 

>> John H Cochrane: Just more basically, every kid in the world can now have access to great teachers, even artificial ones.

>> John H Cochrane: Can we go on to society? Because I'm just chomping at the bit on the society one.

>> HR McMaster: right?

>> Niall Ferguson: Well, yes, I was cueing up the next concentric circle. If I've lost the argument on individual alienation and bewilderment, let's see how America will cope with AI as a society. Go on, Fei-Fei, tell us the good stuff, and admit that there'll be some bad stuff.

 

>> Fei-Fei Li: The good stuff is, the world I live in is a still intensely innovative culture, and I sit in the corner of the world, or the campus, where I see so many entrepreneurs and young technologists taking this technology to empower teachers, doctors, the elderly, and all that.

But at the society level, I am also concerned, and we talked about this, I'm sure: the jobs, the democracy, the, you know, the-

>> John H Cochrane: Go ahead, wait, I'm chomping at the bit on this one, because I hate the word we. Who's this we, boss? We means some agency of the federal government, and then I'm really, no. We thoughtfully channel the way industry develops?

We have never been able to thoughtfully channel the way industry develops. And especially with large language models, where regulation, where a pause leads, is censorship. And I think we're on the heels of astonishing revelations about what Internet censorship has been doing to us in the past. Government control of where this goes, of where information goes, in a democratic society, that's not how you preserve democracy.

That's how you destroy democracy. If you're worried about misinformation flooding the next election, how about 51 human ex-national security people writing letters saying, no, the Hunter Biden laptop is Russian disinformation, which they knew to be false, and then the censoring of news stories about it? That was certainly disinformation that had an effect on an election.

We don't need to worry about chatbots, we've got plenty of it already. And for the agencies that are now trying to control information, that's exactly the appeal of controlling AI. So the whole we-pause-and-regulate-this-thing strikes me as exactly counterproductive to the free flow of information and to development; we don't even know what this thing's gonna do.

The new excitement is how to apply large language models and image processing in the hundred different ways you mentioned. So regulating it ahead of time, I think, is exactly negative for democracy as well as for progress.

>> HR McMaster: Well, Fei-Fei, what about that? I mean, I think what John is saying is that the cure could be worse than the disease, right?

With the effects of artificial intelligence technologies, large language models in particular, deepfakes, and the degree to which content verification is at risk, maybe trying to control all that could make things even worse. I'm thinking of the effect that social media has had on us, which I think we would all agree has been mainly negative, in terms of the algorithms that show people more and more extreme content, to get more and more clicks and more and more advertising dollars, and that drive us apart from one another.

What do you think is an appropriate remedy for what you see as the greatest downside of AI?

>> Fei-Fei Li: Yeah, so, first of all, pragmatically, we've got historians here: has humanity ever paused a fire- or electricity-level technology, that kind of scientific progress? I don't think so.

>> Niall Ferguson: Yes, we have. Remember the printing press? Sorry, this was a question for a historian.

 

>> Niall Ferguson: Back off.

>> John H Cochrane: This is where you historians count. I'm just cheering you on, go for it.

>> Niall Ferguson: No, the printing press was prohibited in the Ottoman Empire for centuries.

>> Fei-Fei Li: For how long?

>> Niall Ferguson: For centuries.

>> Fei-Fei Li: Well, but did it stop worldwide?

>> Niall Ferguson: No, of course. This is the point: the prohibition option doesn't exist at a global level, but we've certainly-

 

>> Fei-Fei Li: No, it doesn't.

>> Niall Ferguson: And in the same way, more successfully, we prohibited the development and certainly the use of biological and chemical weapons during the Cold War. I think that's an important and relevant precedent here.

>> Fei-Fei Li: I think we harnessed it. I don't think we ever stopped biological and chemical research. I'm not saying we completely harnessed it, but we developed regulatory measures.

We developed professional norms, so that the harnessing happened. I think it's really important to recognize that the fundamental quest for innovation and the expansion of knowledge is in humanity's DNA. But along the way, how do we recognize the danger? How do we harness it? That is really critical, especially for a technology like AI. I guess I'm starting to see John is on my team.

 

>> John H Cochrane: No, because I'm agreeing with Niall, too.

>> Fei-Fei Li: There are hundreds and thousands of possible positive examples of this.

>> John H Cochrane: You have the ability here to see bad stuff develop and then come to international agreements. You don't have to say stop, because we don't know.

>> Fei-Fei Li: But we still haven't answered HR's question, which is: what do we do?

I think there are many things we need to do, especially having seen the consequences, or unintended consequences, of the previous decades of technology. For example, every vertical industry, especially critical ones like healthcare, finance, and transportation, already has a regulatory framework. I think it's so important that we urgently channel the newest wave of technology, driven by AI, through some of these regulatory frameworks.

I don't mean they're perfect, I really don't, and I'm not-

>> John H Cochrane: A very benevolent view of how regulation works in our political system.

>> Fei-Fei Li: Okay, but it is important that we protect human lives. It is important that our patients know what kind of AI is involved. If they go to get an x-ray and it tells them they have a lung nodule, it is important to know that this lung nodule reading, whether it's by humans or AI, is safe, at least professional grade.

There are things we cannot allow. If we train our doctors and make them pass all the exams, we cannot allow an AI agent to just suddenly read x-rays and declare diseases. So whether you like our regulatory framework or not, it actually is a layer of protection, and we need that. And that's an easy example.

 

>> Niall Ferguson: Sorry, Fei-Fei, I have a question that's designed to segue to your third circle, the global one, and to bring HR in on the all-important question of weaponization. Because a very important point we need to underline is that large language models are not the most potent form of AI.

What I'm more impressed by is what's happening in medical science. I think AlphaFold was a bigger deal than ChatGPT, even if the media disagreed. But if I understand it correctly, there were some careful restrictions placed on what could be generated by AlphaFold, and there have to be restrictions on, for example, the ways in which AI develops potentially lethal toxins.

This is critical, because if there's one thing that scares me about all this, it's the potential for radical innovation in chemical and biological warfare. Talk a bit about that, and then maybe HR, you could chip in, because this is something that I'm sure crossed your desk.

>> HR McMaster: Now, Fei-Fei, let me just make a comment on this quickly, cuz, Niall, this is really an important question.

My job, before I unexpectedly became the national security advisor, was to design the future army. And we came up with a requirement that the future force have the capability to develop and test a vaccine within 24 hours. Because we had quite in mind that we would, because of artificial intelligence-related technologies, potentially face a biological threat in the future.

Maybe even biological threats that are designed, right, to go after a particular ethnicity. And so I think it's really important for us to develop countermeasures. That's what I would say about all of these evil uses: what's important is to conduct research immediately on what the countermeasures to these nefarious uses could be.

But Fei-Fei, I think it's a great question. I think that the biological aspect of this has not received the amount of attention it should have.

>> Fei-Fei Li: Yes, actually, I think this is an excellent question. First of all, let me expand it beyond the biological, even though I know that sounds so real and dangerous.

Scientific discovery is about to go through another huge blossoming because of AI, and that's a positive way to put it. Whether we're discovering new materials, the latest quantum advances, as well as fusion technology advances, they are all largely due to AI. If you look at Google's latest quantum papers, or if you talk to our colleagues at the Lawrence lab that got us the fusion results, machine learning and AI played a huge role.

So whether we're talking about biology and synthetic biology, or new chemical compounds, or new drug discoveries and vaccines, AI is gonna play a huge role in scientific discovery. That's just a fact, but as HR said, it's a neutral fact.

Now, it can be used in many ways, and again, this is where regulatory measures, international agreements, and all these tools of containment need to be discussed. Because it is profound; it can do things faster and at bigger scale than humans.

>> John H Cochrane: HR, why haven't we had more mischief in this area?

Look at the Wuhan lab, look at anthrax. It's been possible for state and non-state actors to do a lot of damage with biological stuff, even with old-fashioned methods. And I'm kinda puzzled that it hasn't happened yet.

>> HR McMaster: Yep, well, I think one of the reasons is that we're acting against it every day from an intelligence perspective. And in the case of ISIS, which we know, and this has become public, was trying to develop biological agents and to deliver them in novel ways, we acted by physically destroying that organization in the Euphrates river valley in Syria.

So I think there is a broad range of reasons why it hasn't happened. But I think what Fei-Fei is outlining, and maybe you could talk more about this, Fei-Fei, is how much more easily this know-how and capability can be distributed with this new technology. This knowledge of how to design, in this case, biological agents, for example viruses, could be much more transferable than in the past.

>> Fei-Fei Li: Yeah, so HR, let's get a little nuanced, because I think when people talk about AI today, there is a little bit of humans filling the mental gap, and we need to be a little more nuanced.

What is easily transferable are the algorithms, cuz they're in software form, they tend to be published in the open scientific community, and they are replicable in an instant. But when the rubber meets the road, when AI is developed or especially applied, we do have other things in the mix.

First of all, AI is built upon compute, which means AI is built on chips. You cannot do AI with a bunch of bricks, and you cannot compute AI with plutonium. So there is the hardware for compute. And then there are also all the biological concerns: a bunch of algorithms is not gonna cook up a chemical compound.

You need the enzymes, the molecules, so there is more to this. Also, a bunch of algorithms is not gonna give you fusion; you also need machines, engineering, and all that. So AI is horizontal, it's embedded, but when we have specific concerns, we need to look at the fuller picture to think about regulatory measures.

 

>> John H Cochrane: I just want to emphasize something I think Fei-Fei said. Anything that can cause this much damage is also gonna be able to cure cancer, get ahead of fungi and bacteria, dramatically improve human lifespan, maybe figure out what amyloid plaques are and what to do about them.

And given that 99% of the researchers in the world are going in that direction, if it can do those bad things, which we do need to control, goodness gracious, what good things it'll bring us. So was that sort of your point, except louder?

>> Fei-Fei Li: Yes, I totally agree, it's an empowering and enhancing technology as well as a destructive technology.

 

>> Bill Whalen: Doctor Li, we're gonna have to cut it off at that point. I feel like we left a lot on the table. Maybe you'd like to come back again and we can pick up where we left off, and maybe you can invite John Cochrane over to the computer science department and have a brown bag lunch to talk about this.

>> Fei-Fei Li: Yeah, always. It's so much fun to talk to you guys.

>> Bill Whalen: Fei-Fei Li, thanks for joining us today.

>> Fei-Fei Li: Thank you, bye guys.

>> Niall Ferguson: Bye.

>> Bill Whalen: Okay, gentlemen, our second segment today. On Sunday I went to the movies to see Oppenheimer. Three hours, and it was a very good movie.

I'm not sure if it lived up to the hype, but I'm just kind of a curmudgeon when it comes to movies. Anyway, the timing was good, I thought, because Sunday, August 6, was the 78th anniversary of the dropping of the bomb on Hiroshima, and August 9 is the 78th anniversary of the dropping on Nagasaki. A very simple question for the two historians and one amateur historian on this call.

Did Harry Truman make the right call? HR, why don't you take it?

>> HR McMaster: This is a well-worn debate that historians and others have been over many times, and I think it comes down to two fundamental questions. Could the war have been ended at a lower cost without dropping the bomb?

And there are all these sort of specious arguments that Japan was on the verge of surrender. But Truman famously said that an invasion of Japan would be an Okinawa from one end of Japan to the other. And of course, Okinawa was a fight in which 123,000 people perished. So I think it comes down to putting yourself in Truman's shoes.

The belief was that Japan was not going to surrender without the use of the atomic weapons. And I think there were far fewer casualties with the use of the atomic weapon than there would have been without it, and that includes Japanese civilian casualties as well.

So I think he made the right decision from the perspective of the time. And we forget some things: the Japanese, if we had invaded the mainland, had planned to execute all the prisoners of war, and so forth. So it was a brutal, horrible war, and that should highlight to us, okay?

We need to do everything we can to prevent war overall, because that's when you have the prospect of using the most destructive weapons, weapons whose use is difficult even to imagine today. We wouldn't be talking about potential escalation to a nuclear weapon around the Ukraine war if that war hadn't begun at all.

So I think we ought to just try to deter conflict, obviously, and to do so through strength.

>> Bill Whalen: Neil?

>> Niall Ferguson: I agree. I wrote a book called The War of the World a few years ago, and part of my goal was to make Western readers think more about the war in the Pacific.

The reason the war in the Pacific was so horrific, from the vantage point of American and other combatants, was the extreme reluctance of the Japanese to surrender. The ethos of the Japanese army was not to surrender, and this absence of surrender explains the very high casualties in battles like Okinawa.

If there had been a conventional invasion of Japan, as HR rightly says, the casualties would have been mind-blowingly high. What the atomic bomb did was to force surrender on Japanese culture, to force the emperor and the military to accept surrender. I don't think anything else would have done that, so it was the right decision.

But let's not forget another important decision, which was taken by Franklin Roosevelt, namely to develop the bomb. And we must remember that that was a race, a really extraordinary arms race, which had to be won. Because if German scientists had succeeded in winning that race, the war would have had a different outcome.

That is the thought that I'm left with. By the way, I didn't go to see Oppenheimer, because I'm such a books person that I just decided to read the book itself: American Prometheus, Kai Bird and Martin Sherwin's magnum opus. I recommend reading the book before going to see the movie; it's a really powerful work of historical biography.

 

>> Bill Whalen: Niall, there's also a terrific BBC series done, I think, in 1980, a seven-part series with Sam Waterston as Oppenheimer.

>> John H Cochrane: Let me jump in. As far as the end of World War II, you guys got it right. In fact, it's funny, we weren't really in a race, cuz the Germans put their efforts and their money into V1 and V2 rockets and really weren't that serious, it turns out, about a bomb.

We didn't know that at the time. The interesting question is, and here's where amateur historians can do what real historians would never do, which is to think about the counterfactuals: what would have happened if we had not dropped the bombs on Hiroshima and Nagasaki? And what would have happened without, I forget his name, the New Yorker guy who wrote the beautiful and very influential article on just how horrific-

 

>> HR McMaster: John Hersey, and then he wrote the novel, won the Nobel Prize for that.

>> John H Cochrane: But we learned, with these two small bombs, just how horrific atomic bombs are. Now, what would have happened had those bombs not been dropped? The bad thing is, I think it's fairly sure that the US and the Soviet Union would have gone to war.

You can correct me, Niall: have any two great powers contesting for the world ever before not gone to war for 70 years, or whatever it is, in the meantime? And it would have been a nuclear and thermonuclear war if we had not had that example before us. Nuclear weapons have actually been one of the greatest things ever in history, cuz we're all so scared of them that, as long as you had faintly rational great powers at the helm, the rules have been: you don't lose a war.

We won't invade and take over and get rid of a government. We can go at the edges, we can have a cold war, but we will not have a major confrontation. And that kept a peace for an unbelievably long time, and economically, the greatest increase in human wellbeing ever, from 1950 till today, came only under that umbrella of peace.

Now, had we not seen nuclear weapons used, we might have abundant nuclear power right now, so cheap that it wouldn't be worth metering, cuz people confused nuclear power and nuclear bombs; that's a downside as well. And that order is unraveling: we now have many smaller states, and many not-so-certifiably-rational states, with nuclear weapons.

And we are now, in Ukraine, seeing those rules broken: the threat of nuclear weapons used to justify an invasion of another country, not just the rule that these are reserved to stop an invasion of my country. So that's an interesting counterfactual, which maybe my historians can be persuaded to be amateurs for a little while and think about: where would the world be today if we had not set those weapons off, leaving aside the question of how to end World War II?

>> Niall Ferguson: John, you'll be glad to hear that some professional historians do use counterfactuals, although it's still a minority pursuit. Most historians don't regard them as legitimate. But I did a book many years ago called Virtual History, the whole point of which was to make the what-ifs explicit.

The tricky one that you've raised is the Cold War, because it may just be a matter of luck rather than deterrence that we didn't have another nuclear conflict. They came very close to using nuclear weapons over Cuba in 1962. And I must say, the more I think about the Cold War and the more I delve into its history, the more I'm struck by how lucky it was that nuclear weapons weren't used again in anger.

But I take your point: the demonstration effect of the bombing of Hiroshima and Nagasaki has lasted to the present day, and created a taboo which certainly played at least some part in restraining decision makers during the Cuban missile crisis. HR?

>> John H Cochrane: Well, I would just add that that Russian sub commander who received the launch order near Cuba must surely have been thinking of just what a horrible thing it would be. And maybe, let's hold off a minute.

And maybe let's hold off a minute.

>> HR McMaster: Right, and it wasn't a launch order, it was launch authority that he had. That was the big surprise: he was not given an order. But I think about the whole history of the development of nuclear weapons, including so-called tactical nuclear weapons and intermediate-range weapons.

And the interplay between, for example, the Jupiter missiles in Turkey, which we removed in exchange for an end to the Cuban missile crisis. And then, of course, you have the Soviets, who developed the SS-20s, these intermediate-range missiles, which were used maybe to coerce Europe, right, to maybe accomplish their objectives through the use of nuclear weapons.

And say to the United States, hey, you can either respond and it'd be Armageddon, or you can sue for peace on our terms. It's very much the dynamic that Putin has brought back in terms of the use of these tactics, and the development and fielding of these tactical nuclear weapons.

And it's sad, because obviously Secretary George Shultz was the one who helped put together the INF Treaty that eliminated a whole class of weapons. But that class of weapons is back. And as you mentioned, John, nuclear proliferation is a really grave concern. I don't think there's enough attention given to this.

Maybe this is something we can take on in future GoodFellows, but the proliferation of all sorts of horrible destructive weapons is something we're maybe not paying enough attention to.

>> Niall Ferguson: It can't be emphasized enough how rapidly China is building up its nuclear arsenal right now, and this is one of the arms races to which we pay almost no attention.

But it's vitally important. HR, you don't even need to agree with me on that, I know you agree.

>> HR McMaster: I agree.

>> John H Cochrane: That is the question: what are the rules of the game? If it's because, so that the US will not invade and take over China, then that's been the rules of the game forever.

If the rules of the game are gonna shift to this is how we fight wars, then maybe we should remember how horrible it is. I'll add, for the economist's piece: deterrence fails as game theory. It just doesn't make any sense as game theory. The only reason it works is cuz everybody is equally horrified about where we're going and sort of agrees to play by these rules of the game.

 

>> HR McMaster: Well, and as you know, there's a whole history of nuclear philosophy about the use of nuclear weapons under Mao, which is quite disturbing in its lack of concern about it. Then you have what is a 400% increase in their strategic forces, their land-based strategic forces, which they're embarking on now.

And I believe that the Chinese might be trying to develop a first-strike capability, right? That would mean they could accomplish their objectives by destroying the United States if they wanted to, but certainly by using the threat of it to keep us at bay, so that China could have its way with countries in the Indo-Pacific region, for example, create exclusionary areas of primacy, and have its way with Taiwan.

 

>> John H Cochrane: But here, the rules of the game are the important thing. The Russians had that ability for decades and didn't use it; we used it only for, we're not gonna invade them. So are the Chinese gonna adopt the new Russian model of, we rattle nuclear weapons in order to project our power and invade other countries? Or is it simply the defensive, no, the US won't invade China? Then it's okay.

>> Bill Whalen: Okay, next question: we have not had a nuclear weapon used for hostile purposes in 78 years. Will we see a nuclear weapon used for hostile purposes in the next 78 years? Niall?

>> Niall Ferguson: Well, the probability just goes up the more countries acquire them, and I worry that it's only a matter of time before this happens.

It may not be the war in Ukraine; it might equally well be a conflict in the Middle East or the Far East. But I fear we won't be so lucky in the next two generations.

>> Bill Whalen: John.

>> John H Cochrane: I unfortunately have to agree, with my amateur historian hat on; it's hard to put that one back in the bottle.

And I also agree it's not likely from a major state, where we know where they are and can target them in response. Although there's the question: would you really murder a whole bunch of civilians just because you got hit by one? But on a small scale, the Iranians could well blow up something in Israel.

That's an example of how this sort of thing could happen.

>> Bill Whalen: HR, I'll give you the last word.

>> HR McMaster: Yeah, I think the chances are high. I think the non-proliferation regime is already unraveling in Northeast Asia and in the Middle East. I think it's just a matter of time, if there isn't a change in the situation with Iran, before the Emiratis and the Saudis purchase a weapon, obtain a weapon from the Pakistanis.

And as John already alluded to, I think there's a chance that jihadist terrorist organizations, for example, who are in many ways impossible to deter, I would say, could get their hands on a nuclear device. Especially if you see what people have been predicting for a long time: the collapse of security in Pakistan, a nuclear-armed but very unstable country.

So, yeah, I'm worried about it. That's why we have to be vigilant, and we have certain task forces in the US government that work on this. I think they need more resources and might need to be scaled up. And then we really need to enter into a discussion with the Chinese and the Russians, maybe at some point in the future.

But there needs to be a new non-proliferation regime established, and there need to be multi-party talks on arms control, and at least confidence-building measures to prevent mistakes from happening. But of course, the Chinese don't want to talk at all right now on any of this.

>> John H Cochrane: You guys who know more than me: what we're painting is a picture of one or two Hiroshima-Nagasaki-style things going off, kind of as an act of terrorism, with some plausible deniability.

We're not talking about a major nuclear exchange between large industrial powers like us and China. I am hoping that the chances of that remain small for the next 70 years, but I'm curious what you think.

>> Niall Ferguson: Don't be so sure you can assume that, John. In the case of a conflict over Taiwan, as Jim Stavridis has shown in his recent book, there would be a high probability of escalation to the nuclear domain, in a way that wasn't true in the 1990s, when China simply didn't have that capability. That's why China's nuclear arsenal matters so much. And the danger of a first strike is really key, as HR says. And we sometimes forget the extent to which mutually assured destruction as a notion was a kind of intellectual rationalization of the decision not to do anti-ballistic missile defense.

And so the rationalization was, well, we'll just all be vulnerable, both the US and the Soviets, and then there won't be World War III. I don't think we should assume that that calculus applies in the US-China Cold War. In that sense, we're in very new and dangerous terrain.

 

>> John H Cochrane: But what you're suggesting is not a first strike that wipes out Washington, DC, Chicago, and San Francisco. What I think you're suggesting is tactical nuclear weapons that sink all of our aircraft carriers, perhaps take out our bases around the ocean. And then China says, okay, that was bang for the buck, what are you gonna do?

Are you gonna murder civilians in Beijing over this? No, tough luck, you lose Taiwan. And that's kinda how the genie gets out of the bottle, in your view, correct?

>> HR McMaster: Well, I think it's important to pay attention to the proliferation of missiles as well. I mean, I think pretty soon everybody on earth is gonna really be in the position that Londoners were in under the V1 and V2 threat in '43 and '44.

And so what Niall mentioned, missile defense, I think is immensely important, especially missile defense that can take out missiles in the boost phase. Because with hypersonic weapons, once they reach hypersonic speeds and are maneuverable, missile defense becomes extremely problematic. So missile defense, and maintaining our deterrent, the nuclear triad, is immensely important.

Our undersea nuclear capability is vital, I think, in this connection, in terms of maintaining our deterrent capabilities. So is modernization of the force, which we put into place in the 2018 Nuclear Posture Review; there's an unclassified version of it that I think is worth reading. And what was also significant about that Nuclear Posture Review is that we identified attacks against the United States that we might perceive as a precursor to a first strike.

And therefore we would respond commensurately, which was meant to deter massive attacks on our communications, and our space-based communications and surveillance infrastructure, for example. We would see those potentially as a precursor to a first strike, and then at least we would have the option of responding with nuclear force to an attack like that.

 

>> Bill Whalen: All right, very good. Now, the moment you've been waiting for: on to the Lightning Round.

>> Lightning round.

>> Bill Whalen: First question is for Niall and HR. Gentlemen, the US government last fall released new rules banning American companies from exporting to China the technology, software, and equipment used in producing computing chips and supercomputers.

Beijing last month, citing national security concerns, said it would limit exports of products made of gallium and germanium; these are metals used in chip manufacturing. HR, Niall, where is this headed, and how does it fit into the Cold War II narrative?

>> HR McMaster: Well, hey, it's just the beginning, right?

I think what you're gonna see now are restrictions on Chinese access to cloud computing, which they can use for many of those same applications. And you're going to see additional export controls, because it's quite clear that China has not only weaponized its market and real estate model against us in a way that puts us at a profound commercial and economic disadvantage, but is also applying these technologies to gain differential advantages militarily. And I think the more we learn about the nature of Chinese companies, and the degree to which they have to act as an extension of the Chinese Communist Party and the People's Liberation Army, the more we're gonna be compelled to take these measures.

And of course, China's gonna respond. So we are in a downward cycle here that I think in many ways is long overdue. The question is, will we be able to make our supply chains resilient enough that we're not placed at a profound disadvantage as this competition continues?

 

>> Bill Whalen: Neil.

>> Niall Ferguson: Well, the challenge is, how do you stop this turning into full-scale decoupling? Jake Sullivan has used the term de-risking, suggesting that there can be a more targeted approach to this kind of policy. I don't think the Chinese are buying it, and that's why I think HR is dead right.

This will escalate, and as it escalates, the economic ties between the US and China are gonna fray even further.

>> John H Cochrane: I get to go in on this one, too.

>> Bill Whalen: You brought it up, so go ahead.

>> John H Cochrane: We're saying China can't buy chips. We don't know how to make chips; TSMC knows how to make chips.

Fundamentally, you're telling China it can't import stuff from Taiwan, and boy, is that a potent problem. This didn't work out so well with Japan in 1938, as we've pointed out before, so I would worry very much about whether we know what we're doing. And a China that's dependent on us for crucial imports wouldn't be the worst thing for us either; two can play this game.

So I'm very, very reluctant, especially since the national-security-choke-China thing soon turns into protectionism, we can't buy T-shirts from them. This can go way too far; it can get cloaked in national security way too quickly.

>> Bill Whalen: Let's stick with the professional economist. John, Fitch Ratings last week lowered the US credit rating from AAA to AA+.

Put on your grumpy economist's hat and tell us: big deal, little deal, or no deal?

>> John H Cochrane: I'll advertise my Grumpy Economist blog post on the subject: small deal. I mean, S&P did the same thing over ten years ago, and we've borrowed God knows how many more trillions of dollars since.

It was the right thing to do, for two reasons. In the debt ceiling negotiations, the US proved that paying interest and principal on the debt is not the most important thing to our politicians. Formal default is more likely than I think most people recognize, and rightly so: we have no budget process, they said.

We don't have governance, they said. What they did not say, and should add, is that the likelihood of inflating away the debt is larger than we had thought ten years ago. We have already defaulted in economic terms on about 10 to 15% of the debt by inflating it away.

And bond rating agencies ought to think about inflation as a form of default as well.

>> Bill Whalen: Niall, on this date in 1974, Richard Nixon resigned. He saw the writing on the wall: the House was gonna impeach him, the Senate would convict him. What is the likelihood, Niall, of a House impeachment inquiry?

Not an actual impeachment, an impeachment inquiry into Hunter Biden and the Biden family's finances?

>> Niall Ferguson: I think it's very likely, but then I think every president will be impeached from here on; it's becoming an integral part of the partisan process. But I think the public has, with some justification, become more and more aware of this story.

And to go back to something John said earlier about censorship: people are asking themselves why, for the last couple of years, it was only in the New York Post that they could read about the Hunter Biden question. Now you can't even keep it there; it's even in the New York Times.

I think the road to impeachment is fairly clear now for House Republicans, yep, it's coming.

>> Bill Whalen: Mark Zuckerberg is reportedly training for an MMA fight with Mr. Musk. He is so serious about this that he's put an octagon in his backyard, and he reportedly eats 4,000 calories a day at McDonald's.

Not exactly Niall Ferguson's fare, I think. HR, would you pay to watch this fight? Would you be willing to host a GoodFellows viewing party?

>> HR McMaster: Sure, I mean, I'll help him train, I'm just right around the corner from him. But I think he should switch to protein shakes and lay off the McDonald's if he's serious about it.

But also, I think this is evidence that AI has already taken over and we're living in a South Park episode. That's what's happened to all of us.

>> Speaker 7: Boxing is a man sport. There's nothing in the world more man than boxing. It is man at his most man.

 

>> Bill Whalen: Okay, Niall, who do you have in that fight, Zuckerberg or Musk?

>> Niall Ferguson: This is the supreme illustration of the decadence of the tech lords, that it should come to this. It's kind of painful even to think about it. Look, I don't like watching well-trained athletes fight.

I kind of lost my appetite for boxing years ago, and watching amateurs fight? Spare me. But I think after the great tech bust, which eventually the Federal Reserve will engineer, or once the AI bubble deflates, we'll look back and say, you know, peak Silicon Valley was that fight. That was the moment they jumped the shark.

 

>> Bill Whalen: John Cochrane, you're one of the healthiest-living people I know. What would happen to you if you consumed the Zuckerberg diet? And here it is: 20 McNuggets, a quarter pounder, large fries, an Oreo McFlurry, an apple pie, and, according to the story, cheeseburgers for later.

>> John H Cochrane: I could not even begin to get through it.

 

>> John H Cochrane: I'm more kind-hearted about this. There's a long history of CEOs of big companies doing crazy publicity stunts, and talking about a cage match, look at all the wonderful free publicity both of their brands have gotten out of it. So this isn't the end of Western civilization; there's plenty of that all around us, don't worry about this one.

 

>> Bill Whalen: Okay, let's close on a more serious note here, and this would be a David Brooks column from last week in the New York Times, with the headline, What if We Were the Bad Guys Here? And here's Brooks' premise: he wants to take the media narrative that Trump and Trumpism are products of those uneducated, unenlightened sorts who fear change and evolution.

And what Brooks does is turn that on its head. What he says is that while the left nobly marches under the banner of progress, the left needs to look at itself and consider that what drives Trumpism is what he calls the modern meritocracy. And I know John Cochrane hates that term; we'll get to that in a minute.

This is the idea that essentially the progressive elites are in a very closed education system that is a revolving door, that they talk in condescending ways, and that this is what brought along Donald Trump. And the column has this rather interesting conclusion, quote: we can condemn the Trumpian populists until the cows come home, but the real question is, when will we stop behaving in ways that make Trumpism inevitable?

What do you think?

>> Niall Ferguson: Charles Murray wrote the book about this years ago, Coming Apart, in which he correctly identified the drastic social polarization going on in the United States, a part of which was the emergence of a quasi-caste of Ivy League-educated elites living in the same zip codes and gaming admissions to make sure that their not-so-talented kids could also go to Harvard and Yale.

There are two words that sum all this up: legacy admissions. Nothing is more shocking to me as an immigrant than the phenomenon of legacy admissions. And as long as that kind of thing happens, the elites will have it coming. So I agree with this, but Charles Murray said it more than a decade ago.

 

>> Bill Whalen: What do you think, HR?

>> HR McMaster: Yeah, I agree with Niall. This essay was itself, though, I think, a little bit condescending as well. I mean, I didn't like the use of the term the less educated, where the less educated were those people who don't use words like Latinx or cisgender.

Well, if that's what you get out of education, I think it's probably better to be less educated. And I will take the form of genius that you often see in army mechanics over the intellect that you see at times at US universities. So one of the things that was striking to me is that the essay was trying to condemn this sort of elite mindset.

But at the same time, I think it was captured by it a little bit, and didn't recognize the degree to which we need Americans who do work that doesn't require a college degree, and that they are, in many ways, the backbone of our society.

>> Bill Whalen: Okay, John Cochrane, modern meritocracy, go.

 

>> John H Cochrane: Yeah, I think it is funny that our progressive friends on the left also bleat about democracy, until actually people they don't like vote, and vote for policies that they don't like, and then all of a sudden they start condescending to them. Yeah, the use of the word meritocracy here bothered me, cuz meritocracy is rule by those with merit, or advancement of those with merit.

Whereas, of course, what's happened is an elitocracy, a narrowing of who can go to the Harvards and Yales, for instance. And these badges are the routes to the elite, certainly in politics, less so in business, cuz business is more competitive, but a lot of things are less competitive in the US.

So, in fact, it's like the 1900s in the UK, maybe, where you had to be born to the right family and go to the right schools. That's not meritocracy; that is the formation of a very strange elite that speaks a very strange language. True meritocracy is what we would like.

And listening to those army mechanics, I love that. Instead of the first 2,000 people in the phone book, we should be ruled by army mechanics.

>> HR McMaster: Absolutely.

>> John H Cochrane: There's a genuine skill and a common sense out there, which is why democracy is great. Cuz when people get a chance to vote, they sometimes say no to your crazy plans.

 

>> HR McMaster: And, hey, Bill, if I can just recommend another book. Niall mentioned a book from years ago that is relevant to this; I think Christopher Lasch's The Revolt of the Elites actually stands the test of time as well.

>> Bill Whalen: Okay, final question, gentlemen. Niall, what happens to Trumpian populists if there's no Donald Trump?

 

>> Niall Ferguson: I think you can have Trumpism without Trump. And indeed, it might be easier in some ways. I think Trump's determination to remain in the political game is causing a case of arrested development in American politics. Both parties would benefit from a generational shift in their candidates in 2024.

And if Trump were to step aside, and if Joe Biden were to step aside, we would actually have an altogether better election. It might be quite a spectacle to see Ron DeSantis or Tim Scott take on, say, Gavin Newsom; that's an election I could kind of handle. The prospect of a rerun of 2020 makes me feel almost sick with a combination of boredom and senior fatigue, if that's a permitted term.

 

>> Bill Whalen: That's a permitted term, and we're gonna end on that rather depressing note. A good conversation, gentlemen; good to see you again after the summer break, and thanks for doing the show in my absence. And that's it for this GoodFellows. But fear not, we'll be back soon with a new show, and the best way to know when we're coming on the air, rather than sending us concerned emails, is to subscribe to our show.

And if you wouldn't mind, say a few nice things about us, rate us, give us a push in those tricky, tricky algorithms. On behalf of my colleagues, Niall Ferguson, John Cochrane, and HR McMaster, and all of us here at the Hoover Institution, we hope you enjoyed today's show. We'll see you soon; till then, take care.

 

>> Speaker 8: If you enjoyed this show and are interested in watching more content featuring HR McMaster, watch Battlegrounds, also available at hoover.org.

 
