In this edition of the Hoover Institution Briefing on the Effects of Technology on Economics and Governance, Amy Zegart and Emerson Johnston explore the policy implications of Chinese AI firm DeepSeek’s ability to develop and recruit a large share of its engineering talent independent of US resources. Drew Endy testifies before a House committee on how policymakers can ensure the US biotech sector enjoys a bright future. Jacquelyn Schneider launches a new podcast, The Hand Behind Unmanned, based on her book of the same name with Julia Macdonald. Amit Seru and colleagues discuss how generative AI will influence corporate boardrooms, highlighting both benefits and risks. Hoover’s Center for Revitalizing American Institutions hosts a conference on creating ideologically neutral AI models in a polarized world, addressing concerns about trust and bias in AI systems. And exploring a case where AI interaction allegedly contributed to a teen's suicide, Eugene Volokh considers AI's status under the First Amendment.

Featured Analysis


Policy Implications of DeepSeek’s AI Talent Base

Building on their earlier analysis of the educational backgrounds of Chinese AI firm DeepSeek’s research pool, Senior Fellow Amy Zegart and coauthor Emerson Johnston turn to the policy implications of DeepSeek’s ability to rely largely on non-US-developed talent in building its latest large language model. They argue that the United States cannot rely on export controls alone to constrain future Chinese AI development, because it is beginning to lose its human capital edge in the AI competition as well. “The success of DeepSeek should act as an early-warning signal that human capital—not just hardware or algorithms—plays a crucial role in geopolitics and that America’s talent advantage is diminishing,” they write.

The study finds that export controls on AI hardware may slow, but will not stop, China’s AI advancement, given the country’s robust talent base. Visa restrictions on Chinese students could likewise prove counterproductive, potentially accelerating China’s domestic talent development while reducing US influence.

Zegart and Johnston recommend that the US strengthen its own AI talent pipeline by investing in education as well as by undertaking immigration reforms to attract and retain global talent. Finally, they urge US policymakers to focus on enhancing America’s technological competitiveness by increasing research and development funding and by creating favorable conditions for AI innovation.

Read more here.


Funding Biotechnology’s Foundations to Fuel Private Sector Innovations

Testifying on Capitol Hill on June 5, 2025, Senior Fellow and Science Fellow Drew Endy told the Research and Technology Subcommittee and the Energy Subcommittee of the House Committee on Science, Space, and Technology that “biology is the next-to-mature general purpose technology.” As he noted, “We already use biotechnology to grow essential medicines, foods, fuels, and some materials. Going forward we can leverage biotechnology to help grow data storage systems, electronics, energetics, consumer biologics, [and] advanced cellular agents.” To maintain US competitiveness in this area, Endy calls for prioritizing “spending public funds on foundational discovery science and biotechnology tool development.” The stakes are high: in Endy’s view, “Whichever nation best understands biology, from cells to ecosystems, will hold an extraordinary advantage in imagining and making biotechnologies real.”

Read more here.


New Book and Podcast Explore the Forces Behind America’s Autonomous Arsenal

The Hoover Institution is proud to announce a new limited podcast series, The Hand Behind Unmanned, which explores the rise and use of autonomous systems in the US military.

The timely podcast extends the insights presented in the recently published book The Hand Behind Unmanned: Origins of the US Autonomous Military Arsenal (Oxford University Press).

Coauthored by Jacquelyn Schneider, Hargrove Hoover Fellow and director of Hoover’s Wargaming and Crisis Simulation Initiative, and Julia Macdonald, research professor at the University of Denver’s Korbel School of International Studies, the book details how critical ideas, individuals, and institutions have shaped the US military’s approach to unmanned and autonomous technologies over the past half century.

“The podcast tells the story of a centuries-old American quest to use technology to substitute for humans on the battlefield; it is a story of technology, but the main characters are the people that made those technologies a reality,” Schneider said.  

The Hand Behind Unmanned podcast offers in-depth, expert interviews on this pursuit of unmanned technology, from 19th-century torpedoes to first-person-view drones over the skies of Ukraine. Voices include former Secretary of Defense Leon Panetta, former Secretary of the Air Force Frank Kendall, and former head of the Defense Innovation Unit Mike Brown.

Watch or listen to all eight episodes here.


Scholars Meet at Hoover to Discuss Possibility of Ideologically Neutral AI in a Polarized World

Recognizing the rising power of artificial intelligence to shape public opinion and influence how information is created and consumed, scholars gathered at the Hoover Institution on May 21 to discuss ways to create generative AI models all Americans can trust in an increasingly polarized era.

Hoover Institution Senior Fellow Andrew B. Hall said the meeting was meant to determine how the AI models and agents of the future would “reflect our values and earn our trust, given concerns that companies and governments might accidentally or intentionally engineer them to adopt some beliefs and preferences over others.”

Sponsored by Hoover’s Center for Revitalizing American Institutions (RAI), the conference is one of many efforts underway to address the root causes of declining public trust in institutions and of America’s widening political polarization.

To illustrate the challenge, Hall asked participants to gauge where leading AI models stand politically. Scholars demonstrated that leading large language models not only exhibit political bias but can also sway voters more readily when they draw on that slant in their interactions.

Managing bias in AI through law or regulation will be difficult. For instance, any law mandating neutrality of AI models in the US would likely violate the First Amendment.

Read more here.

Hall, Senior Fellow Justin Grimmer, and Visiting Fellow Sean Westwood followed the conference with a new paper showing that most leading large language models lean left politically.

Read the paper here.

Highlights


How Generative AI Will Change the Corporate Boardroom

In a new paper for the Stanford Closer Look Series, fellows Amit Seru and David F. Larcker, along with contributors from Stanford’s Graduate School of Business and the private sector, explore how generative AI will change corporate boardrooms, and whether boards are ready for the change. They write that information asymmetry has long concerned boards of directors, and that AI has the potential to put more pertinent information about the companies they govern in directors’ hands. Boards will also be able to use AI to make better decisions across legal, compensation, strategy, and audit functions. But there are risks as well, the authors write: companies and boards will have to watch out for AI errors or “hallucinations,” be careful about what types of their own data they feed into the models, and consider how that data might be shared with others. The question the authors pose is whether these benefits outweigh the risks.

Read more here.


AI or Bust

Writing in Defining Ideas, fellows Amit Seru and Stephen Haber, who both help lead the Hoover Prosperity Program, argue that the development of artificial intelligence is a global race, and that “how Washington regulates AI will determine whether the United States leads—or falls behind.” Haber and Seru acknowledge concerns about AI’s impact on workers and about fairness but point out that AI systems can be designed and guided to ensure accountability and transparency. Moreover, many new firms enter the AI sector each year, so excessive regulation “risks entrenching today’s giants by freezing competition in place.” The authors maintain that “historical examples, from the rise of personal computers to the proliferation of internet startups, show that market forces naturally disrupt monopolies.” Stressing the important regulatory policy choices now confronting the Trump administration, Haber and Seru argue that policymakers “must ensure AI strengthens American innovation by fostering progress and recognize that slowing down such progress is not an option.”

Read more here.


Court Allows Lawsuit over Character.AI Conversations That Allegedly Caused Teen’s Suicide 

On his blog, Senior Fellow Eugene Volokh outlines the reasons a Florida judge allowed a lawsuit by the mother of a 14-year-old boy who killed himself after developing a relationship with a Game of Thrones–based AI character. Volokh says the case hinges on whether Character.AI’s output to the teenage boy can be considered speech for First Amendment purposes. The court appeared to conclude that it should not, a position with which Volokh disagrees. “I think the First Amendment does apply to such AI output,” he wrote, “even though one can still argue that any First Amendment protection should be overcome in certain situations (e.g., by the interest in protecting children, or by the interest in preventing negligently produced physical harm).”

Read more here.

Fellow Spotlight: Jacquelyn Schneider


Jacquelyn Schneider is the Hargrove Hoover Fellow at the Hoover Institution, the director of the Hoover Wargaming and Crisis Simulation Initiative, and an affiliate of Stanford’s Center for International Security and Cooperation. Her research focuses on the intersection of technology, national security, and political psychology, with special interests in cybersecurity, autonomous technologies, wargames, and Northeast Asia.


For more insight on Understanding the Effects of Technology on Economics and Governance, click here.
