Hoover Fellows Dr. Elizabeth Economy and Dr. Amy Zegart discuss the "DeepSeek moment," when China's DeepSeek AI model surprised U.S. markets by replicating OpenAI's performance with fewer resources and an open-source approach. The two explore the strategic implications of open versus closed AI models, with Zegart arguing that the U.S. should embrace more open research approaches rather than closed models. They highlight how China is successfully replicating America's historical innovation model of investing heavily in long-term basic science, while the U.S. has reduced federal R&D spending. The two scholars conclude with policy recommendations, including fixing K-12 math education, creating national compute infrastructure for universities, and strengthening partnerships with allies, while emphasizing the importance of including academia in what should be "public-private-academic partnerships."

Recorded on July 2, 2025.

WATCH THE EPISODE

>> Elizabeth Economy: Welcome to China Considered, a podcast that brings fresh insight and informed discussion to one of the most consequential issues of our time, how China's changing and changing the world. I'm Liz Economy, Hargrove Senior Fellow and Co-director of the Program on the US, China, and the World at the Hoover Institution at Stanford University.

 

Today I have with me my good friend and colleague Dr. Amy Zegart. She's the Morris Arnold and Nona Jean Cox Senior Fellow here at Hoover and also teaches in Stanford's Political Science department. She's a specialist on all things technology and national security. And today, we're gonna talk about one of the seminal issues of our time, the DeepSeek moment.

 

Welcome, Amy. It's great to have you here.

>> Amy Zegart: It is so nice to be with you, Liz. No place I'd rather be.

>> Elizabeth Economy: Okay, so we're gonna talk about DeepSeek. Let's just start with where were you and what was your reaction when you first heard about DeepSeek? What about this DeepSeek moment?

 

 

>> Amy Zegart: So DeepSeek, as you know, created a deep freak. And so in the beginning, I was watching the press coverage and thinking, this can't possibly be the full story. And a couple weeks after, I was in a meeting at the AI Institute at Stanford with a bunch of computer scientists and others around the table.

 

And it was a remarkable conversation because what I realized was, first of all, a lot of the media coverage was wrong. And secondly, DeepSeek was not a surprise to any of the faculty in AI at Stanford. It was a surprise to markets, it was a surprise to investors.

 

It was a surprise to the CIA. It was not a surprise to people in computer science because they've been following DeepSeek for a couple of years.

>> Elizabeth Economy: So what made DeepSeek? What made the DeepSeek moment? Why was this such a big deal, even if it wasn't a surprise to everyone? And I agree with you, because I happened to be at a meeting in Southern California with a lot of sort of representatives from AI companies here in the United States.

 

And while I was surprised, and some of the other China folks around the table were surprised, the AI company folks were not at all surprised. Exactly what you said. They'd been tracking DeepSeek for a while, but clearly, for the majority of Americans and for the US government, it was a big surprise.

 

What made this so significant?

>> Amy Zegart: So I think it's important that it was a surprise, number one. So the fact that our government was taken by surprise should concern all of us. So, as you know, I'm really worried about the broader category of strategic technical surprise. How do we anticipate what's happening in technology, and when could it confer an advantage to the Chinese that we might not be able to catch up on?

 

And so this was a near miss in terms of that. But your question, why was this a big deal? And it is a big deal. And I think there are really four reasons why it's a big deal. The first is the DeepSeek moment said, it's game on. The thinking, you know China much better than I do.

 

I think the thinking in Beijing, certainly the thinking in Washington, was the US is ahead, really far ahead in large language models. And it turns out that's not the case. The second part of this DeepSeek moment, the aha part, was it clarified what race we're in. So we often talk in technology about how we're in a race with China.

 

Well, what's the race? I think until DeepSeek, most people thought the race was to invent. Who's gonna invent the latest model first? Well, what DeepSeek revealed was the race is to adopt, not invent. So DeepSeek didn't invent. It wasn't a major engineering milestone. But what they did was basically replicate the performance of the best OpenAI model without access to anything that OpenAI did.

 

So they knew that this thing existed and they could recreate it with less money, less compute, fewer frontier capabilities than the best companies in the United States. So it's a race to have AI platforms adopted globally. And DeepSeek is cheaper, it's free in the third world. And so the US companies are racing to be first to invent.

 

And maybe that's not the race we should be worried about. The third thing is, I think, and we can get into this, that it seems like a very nerdy debate, but it's a really important debate about open models versus closed models. And what DeepSeek did was really suggest the future is open.

 

And by that I mean they published what they did, their weights were open, which enables other people to modify, accelerate, and replicate what they're doing. Most US companies, Meta excepted, take a closed-model approach. They don't publish what they do. They don't share this information. They think they're creating a moat.

 

I think what DeepSeek is suggesting is that is a failing strategy. So for the United States, this is a big deal. And then last, I think it reveals a lot about talent. And I know we're gonna talk more about talent, but who are these DeepSeek researchers that created this really important moment?

 

The answer is they were not people who were trained in the United States.

>> Elizabeth Economy: Right, and you've done some really novel research into that last point that you just made. And I do wanna talk about it, but let me just pick up on the third point about the sort of open versus sort of protected models, right, where one side shares and the other doesn't.

 

I mean, it would seem to me then that if DeepSeek basically has laid bare everything they did, how they got there, why couldn't it be the case that some very young, talented AI researchers and computer scientists here in the United States could take that and just replicate it and improve upon it, and then we're back in the game again?

 

 

>> Amy Zegart: Well, I think that's already happening, right? Researchers are already going to town on the DeepSeek model, and this is learning from learning. So the Chinese are learning from the Americans, the Americans are learning from the Chinese. But I think we're in a bizarro world where the US used to be in favor of open research.

 

Universities publish things openly. We think we accelerate innovation faster that way. We have open international collaborations. And now it's the Americans that are saying, actually, we don't want anyone to know what we're doing. We're gonna operate in secret. We won't tell; our employees can't say what they're doing.

 

We're not publishing what we're finding. And the Chinese are saying that they're open and accelerating research. And so within computer science departments, where are researchers turning to use models to develop their capabilities in AI? China. Why? Cuz the models are open and they can play with them.

>> Elizabeth Economy: Okay, we have Meta.

 

So I'm just gonna push a little bit on this point so I really understand it. We do have Meta, which is open, right? And which does share. Isn't it okay to have some of our companies approaching this one way and some approaching another way? And I mean, shouldn't we be a little protective of our IP?

 

I mean, I think across the full sort of range of technologies and over time, where the United States has been all too willing to share, or sometimes China has just appropriated the technology, it feels as though there's a reason behind what these companies are doing.

>> Amy Zegart: Absolutely and you bring up a good point.

 

And we should distinguish: there are different levels of openness. So DeepSeek did not reveal its data. We don't know what data it trained on, and there's all sorts of consternation about, well, really, did they just train on data from other models in the United States? They released the model weights, not the data they used.

 

That's a big difference for people in the field. And there is this difference that Meta is the only major company that has sort of an open weight model. As you know well, there are real concerns, legitimate concerns about, number one, how do we make sure these models don't do things that are really unsafe for everybody?

 

And the horse leaves the barn and we can't get the horse back in the barn. And that's a really important set of concerns. And number two, what you raise is why would we just let all this technology and IP walk out the door for the world? Like, shouldn't we protect it, like patents, etc.

 

But the DeepSeek moment really changed my mind on a lot of these things. I think, as you know better than anyone, Liz, this is a technology that naturally diffuses. You can't keep it to yourself. It's always going to get out because it's not like nuclear material where you can control it.

 

And much of the capability to develop these models is right inside people's heads. It's the ultimate portable weapon. So this technology isn't born classified. You can't keep it. And so if that's the case, then shouldn't we be racing faster to be ahead and set the standards and understand the guardrails and try to set norms?

 

So my thinking really has changed because of DeepSeek. I used to think much more that we should lean more toward pausing, protecting, and now I actually think that is a misguided strategy. And I'm really concerned that US companies are so focused on beating their American competitors, they're going to lose to their Chinese competitors.

 

 

>> Elizabeth Economy: Well, that is a really important point. I mean, I would guess that the DeepSeek moment has sort of profoundly, I think, shaped your thinking. My guess is that it's probably shaped the thinking of some people at OpenAI and Anthropic and other places about what they need to be doing differently, but maybe not.

 

And I think one of the big issues that you touched on in the fourth point was the talent issue and DeepSeek. The sort of the narrative around DeepSeek was that its model was developed by a very small team of young scientists and developers, virtually all of whom, as you said, were educated exclusively in Chinese universities.

 

I think your research shows part of that's true, part of that's not true, but has huge implications, which I wanna get into. But talk a little bit about what you found in your research, and how you went about it, because I think really it's a unique lens into this company.

 

 

>> Amy Zegart: So let me just take a step back and say, why do we care about where the talent comes from? I think that knowledge power is far more important in today's geopolitical competition than it's ever been before. So, yes, military hard power matters. Yes, the soft power of our values still matters. But increasingly, because technology is so central to economic competition and security competition, knowledge power, the ability to innovate, is really important.

 

And one key component of that is where the talent goes. So, you know, there are often these Pentagon maps that show how many nuclear missiles this country has or how many tanks that country has. I would like to understand that map for AI researchers around the world.

 

How many cutting-edge AI researchers are in Beijing versus in Silicon Valley? So that's the idea behind it. And then, so what we did was I asked this amazing research assistant, who you now have hired full time, named Everson Johnston: can you just figure out what you can find about all the people who are listed on these DeepSeek papers?

 

And there wasn't just one paper, which is what the media focused on, there were five. Let's look at all five papers this company has ever produced. Gather up all the information you can about the 223 people that were listed as contributors in some way. She did an amazing job.

 

She used AI to study AI. So she developed her own AI scraping tools to look across the Internet at everything she could find about the authors. And of the 223 total authors that contributed to any of those papers, she found incredible data about 201. So a lot.

>> Elizabeth Economy: Yeah.

 

 

>> Amy Zegart: And then what we did is we looked at where did these people live, where did they train, where did they work, and where did they go to school. And what can we infer from the patterns that we found? We found a couple of really interesting things. One is that 98% of them had significant training in China.

 

This is not the story we often hear, which is China sends all of its best and brightest to the United States, we give them all the great things that they learn, and then they go back to China. What we also found, and to me this is the most disconcerting thing: more than half of those researchers at DeepSeek have trained nowhere outside of China.

 

They have spent their entire life inside China, which tells me China has an incredible and robust domestic talent pipeline. We do not have that kind of pipeline in anywhere near the numbers that China does. So we are asymmetrically vulnerable if you think about this as a supply chain of talent: we rely much more on foreign talent in STEM than China increasingly does, so they can grow their own in a way that we can't, given their large population as well.

 

So this suggests that we need to compete for global talent in a much more vigorous way than we've done, not just grow our own. We have to do all of the above. I would say two more things that we found. One is that of the 49 DeepSeek researchers that spent time in the United States, most of them only came for a year.

 

They didn't spend a lot of time here. They spent one year here, and they went to 65 different institutions in 26 different states. This is very widespread, which is unusual. I would have expected a concentration in CS departments at Stanford and Cal and MIT, right, the three top departments.

 

That's not what we found. Whether this is deliberate or not, we don't know. But it's an interesting pattern, and a short one, of people coming to the United States and then leaving.

>> Elizabeth Economy: Well, can I just stop you there for a second? I mean, presumably not all of them can get into Stanford, MIT, and Caltech, right?

 

So it's maybe not surprising that some went elsewhere. I mean, were they all studying computer science?

>> Amy Zegart: No.

>> Elizabeth Economy: Were they not.

>> Amy Zegart: So we found some of them were in medical programs, some of them were in bioengineering programs. So a much more widespread set of programs that they went to, not just computer science departments.

 

 

>> Elizabeth Economy: Right, and I think that was one of the things that the founder of DeepSeek mentioned, right, when he was talking about his strategy: that he was bringing together people from sort of disparate fields. This was not simply a group of computer scientists. So I think that's reflective.

 

What you found, I think, is reflective of some element of his strategy and who he brought in.

>> Amy Zegart: And so I'm glad you raised that point, Liz, because the mantra has been they're copying US companies. DeepSeek didn't copy US companies. DeepSeek copied US universities, right? And that's the model of bringing people together across fields and having a lot of young people involved, right?

 

And so the last thing that we found was they may be young, the DeepSeek talent, but they ain't green, right? So you look at citation metrics, right? So Alexandr Wang, who was just hired by Meta, right, an AI superstar, is 28 years old.

>> Elizabeth Economy: Yeah.

>> Amy Zegart: He's young, but he is not green.

 

Well, DeepSeek showed a similar pattern. So we looked at the citation counts of the researchers, and we compared the median and the average citation counts, to the extent we could, with OpenAI in one of their papers. And what we found was this was a group that was actually pretty highly cited.

 

They had a lot of research under their belt. They were young, but they were not inexperienced. And in fact, the sort of median level of citation was higher relative to the average than at OpenAI. What does all that mean? It means they relied on fewer superstars than the OpenAI paper we looked at, and their average bench was better.

>> Elizabeth Economy: That is important. I also thought, if I recall correctly, though, that it didn't skew quite as young as we might have thought, right? I mean, I have to say, when I read about it, just from the press reports, I was envisioning this, you know, small garage-type environment with 15 or 20 young people, ages 18 to 25.

 

But it seemed to me that a number of the most senior people were actually quite a bit older than that, is that fair?

>> Amy Zegart: I think so, yes. I mean, I have to look specifically at the data, but I also think there's a tendency, as you know, in Washington to think 25 is really young, but in the tech field, 25 is not young.

 

 

>> Elizabeth Economy: Fair enough, fair enough, fair enough. It's true, it's true. All right. So you see sort of a different model that they've developed: homegrown talent. Obviously, one of the takeaways, which you've already mentioned, for the US is that we're gonna have to work harder and run faster to continue to attract more talent from outside the United States.

 

At the same time, we should probably be doing a better job of growing our own talent here. Talk to me about sort of what you would see as the lessons that the United States should take away from how DeepSeek has approached this, if there are any. Maybe it's just too different and we can't really adapt our model to their model, but maybe there are some takeaways.

 

 

>> Amy Zegart: I actually think the biggest takeaway is we need to remember our own model. So what China is doing is replicating the best of the US model, and we are headed in exactly the opposite direction. What is China doing right now? You can talk about this better than I can.

 

It is an S&T, science and technology, strategy that says we wanna build in fundamental research, we wanna empower our research universities, we wanna invest in big, hairy basic research questions, not just things that have a commercial application today. And we're gonna play the long game. That is exactly the US model that led us to be the innovation superpower of the world since World War II.

 

But what are we doing? We're reducing funding to universities, we're reducing support for that fundamental research that asks big, hairy questions, and we're putting more money into applications. So it's wonderful that venture capitalists are investing in all sorts of companies, but those are designed to create a return on investment today or soon.

 

Whereas basic research funded by the Federal government is patient capital designed across generations to develop insights that can then lead to breakthroughs. The example I always use is Google. We all use Google, but very few people realize that Google really emerged from Federal funding of university research in something called digital libraries.

 

The Chinese model is actually the American model, and we've lost sight of the American model. Just one final thing, which is if we look at R&D as a percentage of GDP, the Federal government spending on research and development, it's a third of what it was at its peak in the 60s.

 

And China is spending at a rate that is six times faster than we are. So they're gonna eclipse us within a few years.

>> Elizabeth Economy: Yeah, I think they're already at 2.7% of GDP and they're increasing by 0.7% per year or something. And we're at, I don't know, 3.5% maybe still, I don't know depending on what the current administration is doing.

 

So what do you think explains this? It seems kind of obvious that we have this model that has led us to this point. What do you think explains the sort of decision by the current administration to change course at this moment when, as you've outlined, we're kind of in the race of our life?

 

 

>> Amy Zegart: I don't know. I'm not inside the brains of the folks in the administration. I think some of it, frankly, is the own goals made by universities. This is a tough moment and a moment of reckoning for higher education on a lot of fronts. We've made ourselves opportune targets.

 

The complaints about universities having a monoculture are true. The concerns about not being able to have alternative points of view, those are true. And so universities have not done what they should do to create the environment and fulfill the mission that we say we want. That's, I think, point one.

 

I think point two is we have done a poor job of explaining the innovation model that the United States has that's made us so great. I am amazed at how little understanding there is of the role of basic research in Silicon Valley. I have had major tech executives say to me, what does it matter if we fund fundamental research in universities?

 

So I think there's an excitement about venture capital, and I love venture capital as much as the next person, but all investment is not created equal. And all research is not created equal. And so I think universities need to do a much better job of articulating what it is that we bring to the table.

 

One of the big new initiatives we're doing in the Tech Policy Accelerator at Hoover is what we're calling Lab to Launch, which is exactly that. It's gathering the data and telling the story so that taxpayers understand the return on investment of this fundamental research, because we're not just drinking lattes and sitting around in the summer.

 

But amazingly, my own relatives think that's what we do. And so we need to do a much better job of actually gathering the fact base and presenting it in a compelling way of what it is that we do in universities and the relationship to the private sector, how those things have to go hand in hand.

 

 

>> Elizabeth Economy: Yeah, I think I remember reading at one point that of the top 100 sort of big technological inventions that have come out of the United States, like the Internet, for example, at least half of them were funded initially, at least in part, with government support.

 

So it shouldn't be that difficult to look back through history. But I think it's great that you're trying to have more contemporaneous examples of how university basic research has led to or continues to lead to sort of breakthrough inventions that have important applications for bettering our society, bettering our economy.

 

Do you have one in mind that you could share something that you've just been working on in this Lab to Launch?

>> Amy Zegart: So there are a few, I think, examples that most people don't realize. So I talked about Google. The one that I spoke about when we went to Washington, which is a personal one for me, is the MRI machine, right?

 

So most of us have had an MRI fortunately or unfortunately, at some time or another. The MRI saved my dad's life, right? Gave my father more than 20 years of longer life, the ultimate gift because it detected the kidney cancer that was killing him. He got that MRI in 2001.

 

The machine was really commercialized in the 1970s. The technology that enabled that machine to be commercialized was started in the 1940s, when my father was born. So that's a very sort of personal example of one person's life extended by this harebrained fundamental research that no one could imagine was going to then lead to this breakthrough machine that we all sort of take for granted today.

 

So that's one. Cryptography, which protects us to the extent that our data is protected on the Internet, stemmed from decades of research in pure math with no sense that it might be applicable today. The COVID mRNA vaccine: decades of research in universities before the baton was passed to the pharmaceutical industry.

 

I do not want people listening to come away saying that I don't think the private sector has an incredibly important role to play. I do, but I think the two go hand in hand. And I'd just say one other thing, Liz, which is that emerging technologies often don't emerge, right?

 

And so part of the bargain with universities is that a lot of the stuff we're working on doesn't pan out. That's all part of the innovation ecosystem. Most emerging technologies don't emerge. So we have to be patient about looking at paths that don't actually become pathways to success.

 

That's how this business works.

>> Elizabeth Economy: I think that's really smart. So when you went to Washington, can I just ask, how was it received by the policymakers? Do you think that you sort of informed and elevated the thinking? I don't know whether in Congress or in the administration.

 

 

>> Amy Zegart: So I'd like to say the answer is yes, but there's a little bit of selection bias, right? The folks that most wanna meet with us in Washington are those who think this is important. So I will say we went with colleagues of mine in science and engineering, some of whom hadn't done this before, and they were very pleasantly surprised at the bipartisan support that we got when we went around Congress and, when we went to the executive branch, the nonpartisan support.

 

So I think we met with Senator Rounds and Senator Booker. Senator Rounds is the chair of the AI Caucus; Cory Booker is a Stanford alum. And they were singing from the same songbook. This is a Paul Revere moment; we need to come together on this. So I think there's a lot more bipartisan support for these technological issues, particularly given the competition with China today, than most people might think.

 

 

>> Elizabeth Economy: When I was thinking about the field of AI in particular, I was just realizing that just last year, when you look at the most recent Nobel Prizes that were awarded in both physics and chemistry, they were both tied to advances in AI, and three were awarded to US scientists, one to a scientist from the UK, and one from Canada.

 

So maybe our system still has some sort of positive elements to it. Although, I will say that at least one of the scientists was 91 years old, and so clearly a product of a more traditional period of development. But do you see that there's reason for hope here?

 

 

>> Amy Zegart: Absolutely. So there's magic in Silicon Valley and in the United States. How many foreign leaders come to this area and say, I wanna create a Silicon Valley in my country, and it can't happen? And I think there are key elements to that magic. One is we wanna be a place where the world's best and brightest wanna continue to come.

 

I hope that that mood in Washington changes. But I think people want to live in freedom, I think fundamentally, and I think they want to be in a place that rewards hard work. And historically, the United States has been that, and I don't think it's too far gone for us to have that back.

 

I also think we have in research universities this thirst for exploration, and that is its own kind of secret sauce that we have here. Not for the country necessarily; we do it because we do it, right? It's not top-down driven. It's not because the Chinese Communist Party is telling us to do it.

 

We let people do their thing and we're more likely to have creative, innovative pathways because we do that. And then of course, you and I are around young people in the university. Whenever I look at the future and I talk to my students, I can't help but be inspired.

 

And especially when we think about tech and how much of our technological breakthroughs are happening by younger and younger people, it gives me great hope actually for the future, that it's not too far gone. We just have to get out of our own way.

>> Elizabeth Economy: Yeah. So I mean, the administration, potentially to its credit, has put out a number of initiatives that seem as though they're designed to advance AI education and diffusion.

 

They have this AI education task force, for example. I mean, if you look at what this administration is doing in the space, do you see things that they're doing that suggest that they're committed in a serious way to advancing sort of frontier research in the US and diffusion of AI?

 

I mean, it's maybe a different kind, maybe different from what the Biden administration was doing. But do you sense that there is a real drive within the Trump administration in this area?

>> Amy Zegart: I do, and I think it's important to look beyond just the Trump administration. We talk about the decline in investment in fundamental research.

 

We look at the Federal government not supporting universities. This is a long time coming. This spans across administrations, Democratic and Republican, for many, many years. So our innovation ecosystem has been eroding for a very long time. That erosion is accelerating under the current administration but it certainly didn't start with the current administration.

 

And then you look at sort of the narrative and some of the initiatives coming out of the Trump administration, there's a lot to be excited about, right? The framing of AI opportunity, not just AI safety, I think is right. We need to look at the opportunity, not just the risks.

 

And the Biden administration was, understandably, really concerned about the risks and started with sort of a risk-first approach. But things have changed, and the DeepSeek moment is one of them. And we need to lean more forward into the opportunity side. And that started with the Vice President's speech about AI opportunity.

 

You mentioned the AI executive order on education. Education is one of the most promising areas to me for AI, because AI is the ultimate patient tutor, right? Whereas a human gets frustrated when a student asks for the fifth time, I don't get it, how do you do that math problem?

 

AI will not get mad at you.

>> Elizabeth Economy: It'll never get tired of explaining it, that's true.

>> Amy Zegart: And it'll come up with a different way for as long as it takes. So I think there's tremendous opportunity to enhance what is an abysmal education record in our country in K-12, if we are smart about adopting AI.

 

So I'm encouraged by many things that the administration is doing. I'm discouraged by some things that the administration is doing, but I think that's often the case. It's a complicated terrain, and people have legitimately different views about how to navigate it.

>> Elizabeth Economy: Yeah. I mean, I think as long as there's some funding, right?

 

Some actual funding behind these initiatives, I think that gives me hope. I think if we're gonna rely on the private sector to pick up all of what had been some significant government funding for basic research, that's gonna be problematic. But let me give you a chance then to offer two or three suggestions to the administration. If you had the opportunity to design the sort of AI strategy for the United States, perhaps incorporating some of the talent element of this, what would be your sort of top two or three recommendations?

 

 

>> Amy Zegart: This is my queen-of-the-world moment, if I could-

>> Elizabeth Economy: Totally, I do think you are queen of the world.

>> Amy Zegart: I think you're queen of the world.

>> Elizabeth Economy: We're on the same page.

>> Amy Zegart: We can do more together.

>> Elizabeth Economy: Go meet the queen, Queen Amy. Go to it.

 

 

>> Amy Zegart: So starting on the talent side, I think K-12 education is a national security crisis. Now, lots of people have talked about K-12 for a long time. I think that crisis is urgent. We rank 34th in the world in math according to the latest test, and we're going down.

 

And if you look at the top performers in math, as a percentage of our population we have half the share of top performers that Canada has, and one-third the share that the South Koreans have. If we want to educate the workforce of tomorrow, it's got to be done today.

 

So, number one, focus on K-12 education. And I often joke to folks who are in the education space, the only thing worse than teaching to the test is not teaching to the test. So we need some standards, we need some metrics, and we need to measure performance against them.

 

So that's number one. Number two, capacity building. So the two most important enablers for AI and other areas of scientific innovation are talent and compute power. And right now, compute is dominated by a handful of companies. So the example I always give is, last year, Princeton University had to dip into its endowment to buy 300 advanced Nvidia chips.

 

Meta, the same year, bought 350,000 of the same chips. So I'm not saying Princeton has to compete with Meta, but we need to make national compute a critical infrastructure for more organizations and universities to be able to do frontier science.

>> Elizabeth Economy: Go ahead, sorry.

>> Amy Zegart: There's a bill in Congress, a bipartisan bill, to do just that, but it has stalled.

 

It was proposed in the Trump administration, it was supported in the Biden administration. It has still not become law. Compute is like the highway system of the 1950s. It has economic and national security implications that are enormous. So I would double down on making compute available for more organizations in the United States.

 

And the last piece, I would do allies and partners, right? We have this amazing set of allies and partners. We can do much more together with our talent, with our compute, with our data, than we can do alone. And so I would fast track more programs to harness the data, talent and compute capabilities of our allies and partners.

 

 

>> Elizabeth Economy: So that sounds like a recipe for success, but I can't resist just asking you, on the compute side of things: number one, does DeepSeek tell us that you can do more with less advanced chips? Does that open up some opportunities? And number two, this does sound like an area where a company like Nvidia could do some public-private partnership with universities, right, to make some of those advanced chips available to universities at a reduced cost, perhaps.

 

I don't know, what do you think about those two thoughts?

>> Amy Zegart: I would say yes and yes. So one of the interesting things about DeepSeek, when you talk to sort of PhD students in computer science, they were so excited. Why were they so excited? Exactly the point you raised, so the minimum viable compute to be able to do really cool research in this area was less than people thought it was.

 

It reduces that gap between the hyperscalers and what others on the frontier can do, but that gap ain't nothing. It's still a pretty big gap. So it's exciting that the gap has narrowed in terms of being able to compete on a wider scale. Public-private partnerships are gonna be really important for these companies, and it's not just Nvidia.

 

I think companies in general in this moment need to be thinking much differently. They shouldn't be just reacting to the moment. They are shaping the international order. They are shaping the geopolitical battleground. And they need to think about being proactive and strategic, not just for their shareholders, but for the nation.

 

And so I think what you suggested is one component of that but it's not just Nvidia. I think many companies need to think about how can we contribute in a way to build scale in our nation to compete against China.

>> Elizabeth Economy: Yeah, I will say, when I was at the Commerce Department in the Biden administration, Secretary Raimondo was very proactive about reaching out to companies to get them to work with the administration on various initiatives, like digital skilling, for example, globally, right?

 

So to partner in ways that helped to reinforce the positive message about the United States on the global stage and to help realize US national objectives on things like supply chains, for example, on critical minerals. So I think there were many ways in which private companies, in fact, did step up.

 

But I think what you're suggesting is that they should be developing their own visions and be more proactive about it, not necessarily wait for the US government to come to them and say, please partner with us on these initiatives that help realize our sort of broader national economic and security objectives.

 

 

>> Amy Zegart: I mean, I think Secretary Raimondo, your former boss, did a fantastic job at bringing the private sector together. But we also often leave out the third P, well, actually, it's not even a P: public-private-academic partnership. So it's not just industry and government, it's industry, government, and academia.

 

We can't leave out the university piece of this. And I think that's where we really need to focus more, is all three of those legs of the stool need to be supported and come together.

>> Elizabeth Economy: Okay, Amy, I can't thank you enough, not only for your sort of excellent analytical presentation and helping us understand the DeepSeek moment.

 

But also just for your optimism and your ideas for what we need to do to sort of move this forward and move together, I think, really, as a nation forward to advance ourselves in this really important strategic arena of Artificial Intelligence. So thank you for all the work that you're doing.

 

 

>> Amy Zegart: Thank you for having me. I'm just trying to keep up with you and all the great work you're doing.

>> Elizabeth Economy: All right, if you enjoyed this podcast and want to hear more recent discourse and debate on China, I encourage you to subscribe to China Considered via The Hoover Institution YouTube channel or podcast platform of your choice.

 

Next on China Considered, I interviewed George Washington University Professor David Shambaugh on his fascinating new book about how China lost the United States.

 

 


ABOUT THE SPEAKERS

Amy Zegart is the Morris Arnold and Nona Jean Cox Senior Fellow and the Director of the Technology Policy Accelerator (TPA) at the Hoover Institution. She is also a Professor of Political Science (by courtesy) at Stanford University, and a Senior Fellow at Stanford's Human-Centered Artificial Intelligence Institute and the Freeman Spogli Institute for International Studies. The author of five books, she specializes in U.S. intelligence, emerging technologies and national security, grand strategy, and global political risk management.

Zegart's award-winning research includes the leading academic study of intelligence failures before 9/11: Spying Blind: The CIA, the FBI, and the Origins of 9/11. Her most recent book is the bestseller Spies, Lies, and Algorithms: The History and Future of American Intelligence (Princeton, 2022), which was nominated by Princeton University Press for the Pulitzer Prize. Her op-eds and essays have appeared in Foreign Affairs, Politico, the New York Times, the Washington Post, and the Wall Street Journal.

Elizabeth Economy is the Hargrove Senior Fellow and co-director of the Program on the US, China, and the World at the Hoover Institution. From 2021-2023, she took leave from Hoover to serve as the senior advisor for China to the US Secretary of Commerce. Before joining Hoover, she was the C.V. Starr Senior Fellow and director, Asia Studies at the Council on Foreign Relations. She is the author of four books on China, including most recently The World According to China (Polity, 2021), and the co-editor of two volumes. She serves on the boards of the National Endowment for Democracy and the National Committee on US-China Relations. She is a member of the Aspen Strategy Group and Council on Foreign Relations and serves as a book reviewer for Foreign Affairs.  


ABOUT THE SERIES

China Considered with Elizabeth Economy is a Hoover Institution podcast series that features in-depth conversations with leading political figures, scholars, and activists from around the world. The series explores the ideas, events, and forces shaping China’s future and its global relationships, offering high-level expertise, clear-eyed analysis, and valuable insights to demystify China’s evolving dynamics and what they may mean for ordinary citizens and key decision makers across societies, governments, and the private sector.
