Hoover Institution (Washington, DC)— As the power of AI grows and its use in everyday American life expands, AI firms will have to ensure the technology meets people’s expectations, and policymakers will have to determine how to regulate it effectively.

These demands are what the Hoover Institution’s Center for Revitalizing American Institutions (RAI), in partnership with Pepperdine University’s School of Public Policy and Stanford University’s Deliberative Democracy Lab (DDL), sought to explore by hosting From People to Policymakers: A Symposium on Perspectives on AI on December 9, 2025.

The event brought together fifty attendees from academia, industry, and civil society, including two members of Congress, for a full-day discussion of the public’s attitudes toward artificial intelligence.

Attendees discussed how the rapidly advancing capabilities of AI, and its growing use in workplaces, schools, and other facets of civic life, will affect existing challenges facing the United States, including low trust in core institutions, political polarization, and a possible widening of wealth inequality. Another key focus was how companies can create opportunities to incorporate public feedback into their AI development.

Representatives from Meta and Canadian AI firm Cohere, US Reps. Jay Obernolte and Ro Khanna of California, and scholars from across the country explored a range of issues: how to ensure fairness when AI is used to vet employment applications, how the US public views the use and governance of AI, the use of AI in schools, and how AI can make government services more responsive and efficient.

Presentations about the fairness of AI demonstrated that there are risks in using AI to augment the hiring process. One industry leader’s research indicates that hiring systems based on large language models (LLMs), especially when used for hiring-related retrieval tasks, can exhibit notable biases that lead to discriminatory outcomes in real-world contexts.

Furthermore, in multilingual environments, AI agents tend to slow down and their overall work quality suffers. One study presented at the gathering demonstrated a new method for evaluating AI performance across eleven languages; it showed poorer overall performance and accuracy when AI agents were asked to translate their work from English into other languages. The method could be used to evaluate future LLMs’ performance in languages other than English.

Some attendees stressed the need for a greater effort to harness AI agents to speed up the processes of government, but in a way that does not introduce new inequities. One suggested application was using AI agents to accelerate the processing of unemployment claims and other requests for state benefits.

In a conversation with Hoover Senior Fellow Larry Diamond, Representatives Khanna and Obernolte discussed the importance of building public trust in artificial intelligence, addressing its misuse, and managing workforce transitions to the technology. They also highlighted AI’s potential to drive innovation, economic growth, and new opportunities for American workers.

Rep. Khanna stressed that US AI policy must ensure AI enhances human capability and improves worker productivity instead of serving only as a tool for labor substitution and ensuing job displacement. He also said policymakers will need to contend with a US public that is more distrustful of AI than other societies around the globe, and suggested there is work to be done to improve AI’s reputation, especially among youth and working-class Americans. 

Meanwhile, Rep. Obernolte said there is a need to codify the Center for AI Standards and Innovation (CAISI) guidelines and equip the center with the resources needed to prioritize innovation and US global competitiveness in AI. He also wants to ensure law enforcement is equipped to address illegal uses of AI under existing criminal laws, rather than through new ones drafted specifically for AI. Rep. Obernolte also stressed that regulation should come through individual sectors, in a hub-and-spoke model, as opposed to a central regulatory authority, as is the case in the European Union.

The event concluded with a panel that included Hoover Institution Distinguished Visiting Fellow Daniel Lipinski and focused on the principles that ought to guide the future of AI. The Center for Democracy & Technology shared research indicating that 50 percent of students use AI, and more often for personal reasons than for schoolwork. Reported uses include AI for companionship (42 percent of respondents) and romantic interactions (20 percent), rates that rise when schools introduce AI. Parents want informed consent and the ability to opt out regarding their child’s AI use, yet they are often excluded from such decisions. Seventy-five percent of caregivers say they don’t know how to use AI, and guidance is scarce (about half received any) but valued (90 percent found it helpful).

In conclusion, participants said Americans must be given the opportunity to engage more heavily with policymakers and AI firms about the future direction of AI use and deployment in the country. This can be achieved through wider public consultation, more discussion about AI in schools, and efforts by government to ensure no one is left behind.
