The Hoover Institution hosted Productivity Gains And Labor Pains: What Will AI Do To Jobs? on Tuesday, March 17, 2026 from 5:00-7:00 pm PT in the Hauck Auditorium, David and Joan Traitel Building.

Artificial intelligence is transforming our economy faster than any technology in modern times. AI will fundamentally change how work gets done and what skills are required to do it. That change carries enormous promise, but it is critical that leaders act to ensure that we are prepared for the transition. AI is not waiting. This was a timely and thought-provoking discussion about how AI is reshaping the workplace and what leaders need to do in response. 

Session 1: Augmentation or Automation? What AI Means for Work

Featuring Erik Brynjolfsson, Karin Kimbrough, and Peter McCrory - in conversation with Steven Davis. 

Erik Brynjolfsson is a professor, author, and inventor. At Stanford, he is a professor at the Institute for Human-Centered AI (HAI) and director of the Digital Economy Lab, with positions at the Stanford Institute for Economic Policy Research, the Economics Department, and the Graduate School of Business. His research and speaking focus on the economics of AI and digital technologies, including their effects on productivity, business strategy, and the future of work. Brynjolfsson is a best-selling author of several books including The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. Brynjolfsson holds five patents and is the cofounder of Workhelix Inc., which helps companies identify opportunities for generative AI.

Steven J. Davis is the Thomas W. and Susan B. Ford Senior Fellow and director of research at the Hoover Institution, and a senior fellow at the Stanford Institute for Economic Policy Research. He is a visiting scholar at the Federal Reserve Bank of Atlanta and cofounder of the Economic Policy Uncertainty project, the Survey of Business Uncertainty, the US Survey of Working Arrangements and Attitudes, and the Global Survey of Working Arrangements. He cohosts the Hoover podcast Economics, Applied and cofounded the Asian Monetary Policy Forum in Singapore.

Karin Kimbrough is the chief economist for LinkedIn, where she leads a team devoted to surfacing thought leadership on the world of work. Prior to joining LinkedIn in 2020, she served as the assistant treasurer for Google and the head of macroeconomic policy at Bank of America. In addition, Kimbrough worked at the Federal Reserve Bank of New York in the Markets Group for nearly a decade and as an economist at Morgan Stanley in London and New York. She serves on the board of directors for the National Bureau of Economic Research and the Federal Reserve Bank of San Francisco. She holds a bachelor’s degree from Stanford University, a master’s from Harvard University, and a PhD in economics from the University of Oxford.

Peter McCrory is head of economics at Anthropic, where he leads the Economic Research team. Prior to joining Anthropic, McCrory was the head of labor research at LinkedIn’s Economic Graph Research Institute, after having worked on an applied science and market design team. Earlier in his career, McCrory was a US economist at J.P. Morgan and a research associate at the Federal Reserve Bank of St. Louis. He holds a PhD in economics from the University of California, Berkeley.

- Welcome to the Hoover Institution, for what we know will be a very stimulating and insightful discussion on one of the transformative technologies of our time: artificial intelligence. But of course, not just artificial intelligence, because we are in the midst of a technological revolution of epic proportions. I am delighted that you are all here, because here at Hoover, we think of our ability to convene people from the academic community, of course, but also from the private sector, and indeed from government experience, to talk about how we, as a country, as a world, as human beings, face this transformative moment. Every year, in collaboration with the Stanford School of Engineering and the Institute for Human-Centered AI, Hoover publishes the Stanford Emerging Technology Review, in which we give an in-depth overview of developments in 10 key emerging technologies. The work is done by the scientists who are still in the labs, still pressing the frontiers. And on occasion, I'll ask something like I asked Fei-Fei Li: can you tell us about what's coming over the horizon? And she said, you mean in six weeks? That speed of change is one of the things that we will address in our discussions today. Now, in that review, each chapter lists the key policy areas that we think policymakers should be paying particular attention to, and the impact of artificial intelligence on employment is at the top of the list. That is the central question we are asking today. Before we start with our panel, I would like to invite to the stage Rishi Sunak. Rishi is a Hoover Institution Distinguished Visiting Fellow, but that's not the really important point. He's the 57th prime minister of the United Kingdom, a great ally, and from a country that has been in the lead in both AI innovation and AI policy. So if you would join me, Rishi, at the podium.

- Well, many thanks, and a warm, warm welcome to you all. A special thanks to the people whooping in the middle, Ella. I can't think of a better place to have this event than here. If you go to the top of Hoover Tower and look out across Silicon Valley and the Bay, you are seeing where the next great transformational technology of our time is being born. AI is, of course, a general-purpose technology, which means it's going to have the ability to change every aspect of our economy, our societies, all our daily lives. And that's something very special that we get to live through. But every time I'm here, I'm reminded that change is coming far faster than our politics realizes. A conservative estimate is that AI is going to have twice the impact of the Industrial Revolution in just half the time. But the conversations that are happening here are very different from those that are happening in Westminster, and I think in DC as well. And we really need to bridge that gap. There's nowhere better to do that than the Hoover Institution and Stanford University. Now, I discovered that myself studying here 20 years ago for my MBA, spending time with people like Fei-Fei Li, whom Condi mentioned. It was then that I realized just how transformative technology was going to be during my career. And I took that experience and learning into government. It informed how I delivered AI policy for the UK, and it also led me to host the world's first leaders' summit on AI at Bletchley Park a few years ago. Since I've been back at Hoover as a visiting fellow and spending time here, I'm delighted that Stanford is continuing to be at the forefront. The Stanford Emerging Technology Review that Condi mentioned is an absolutely brilliant primer for world leaders on what's happening at the frontier. And when I see my former colleagues, I say to them, well, we can't ensure that all of you get to study at Stanford, sadly.
But the next best thing we can do is give you a copy of the Stanford Emerging Technology Review to at least half get you up to speed. It doesn't come with the sunshine, but it's certainly a great way to get going. And I very much hope that today's event can similarly play a role in bridging that gap: talking about AI's impact on employment, how and when, and then figuring out what we as policymakers need to do about it to make sure it works to the benefit of everyone in society. It's crucial that we get this right, because all big societal changes, however excited we might be about them sitting here in Silicon Valley, require the consent of the population. People need to be brought along with these changes, because if they're not, you inevitably get a backlash, which can make things much worse. So if we truly want AI to be an idea that advances freedom, that transforms prospects for economic growth, that changes our society for the better, we have to have these discussions. We have to make sure that policymakers are educated and informed so that they can make the right decisions. That's what today is all about. And I'm absolutely delighted, not just that you are all here, but that we've assembled a brilliant set of people to talk about this a little later on. Secretary Rice is going to chair a panel with James Manyika, who leads Technology and Society at Google and Alphabet, and Secretary Raimondo, who will be familiar to many of you and who served in the Biden administration, both good friends of mine, and they'll talk a little about the policy implications. But I think even more exciting is what you're going to hear from the colleagues of mine on my right. We've got a great set of people who are seeing firsthand what is happening and can give us a framework for how to think about it, not just today but into the future.
We've got Steven Davis, the director of research at the Hoover Institution. We've got Erik Brynjolfsson, who's a professor and director of the Stanford Digital Economy Lab. We've got Karin Kimbrough, the chief economist at LinkedIn. And we have Peter McCrory, head of economics at Anthropic. I know, like me, you are all excited to hear from them. So without any further ado, thank you for being here. Please join me in welcoming our panel.

- So thank you, Rishi. Thank you, Condi. They've already done a beautiful job of setting up this panel, so I'm just going to jump right in and ask these folks questions about what AI means for jobs and productivity. Will it be a world of automation and displacement, or a world of huge productivity booms and better, more enjoyable, more interesting jobs for all of us? So Erik, let me start with you. You have a recent study in one of the top economics journals, a very detailed study of, correct me if I'm wrong, giving customer support workers an AI-powered assistant,

- Right.

- To help them do a better job. It's an extensive study, but I wonder if you can just tell us maybe two key findings from it, and also what larger lessons you draw from that study about the broader impact, or potential impact, of generative AI on the labor market.

- Sure. So this was one of the first companies that started using LLMs, large language models, to help customer support folks. The nice thing about it was they did a phased rollout, so we were able to get causal estimates: we could see people who had access to the technology and other people, at the same time, who didn't. The first main finding was that we just saw an enormous boost in productivity, a double-digit gain, typically about 14%, within just a few months. And it wasn't just one measure of productivity; we had a dozen different key performance indicators, customer satisfaction, average handle time. Also, interestingly, even the workers themselves seemed to be happier, and they were less likely to quit when they were using it. So whether it was the stockholders, the customers, or the workers, it wasn't moving benefits from one to the other; they were all benefiting from access to the technology. The other very interesting finding, I thought, was that if you looked at which groups were affected the most, it was quite uneven. The people who were newest to the firm, and also distinctly the people who had previously not been doing their job very well, had the biggest gains, 30, 40% productivity gains. But if you looked at the quintile, the 20% who had been there the longest or who had been doing the job well, they actually had close to zero gains, indistinguishable from zero, and the other groups were in between. It was a roughly monotonic relationship. Trying to understand why that was, part of it had to do with how the LLMs worked. They looked at millions and millions of these conversations, these transcripts, and figured out which ones were associated with a happy customer, a successful outcome, and which ones weren't so good.
They took the good answers and made those available to, and suggested them to, the customer support people. By the way, I should clarify: the customer support people didn't just mimic what the LLMs were saying; they would use this to help answer the questions better. But think about it: if you are a new worker, this is a godsend. Like, oh wow, this is a good way to answer this complicated question. If you've been there a long time and you really know how to answer it, the LLM is basically telling you back what you've been saying all along, and not surprisingly, that didn't lead to a big boost. I wouldn't say that always and everywhere LLMs have this leveling effect, and there's been research in other places where they didn't. But I do think we're seeing more and more evidence of the productivity effect, double-digit productivity gains, in a lot of specific applications. It's going to take a while for those spot benefits in customer service, coding, sales, certain management applications to become more widespread throughout the economy. And as that happens, to come back to one of your earlier questions, I think we will see a world of much higher productivity. I'm on record as betting that productivity is going to be substantially higher than what the Congressional Budget Office and others are predicting, because I see what's happening on the ground, and it's just a matter of time before that spreads. As to what's going to happen in terms of equality and inequality, that's something I'm less confident about. I think it was a happy outcome in this particular case, but I'm sure we'll talk more about some of the labor market implications more broadly.
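The phased rollout Brynjolfsson describes is, in essence, a difference-in-differences comparison: changes for workers who received the assistant, net of contemporaneous changes for those who had not yet received it. A minimal sketch of that estimator, with all numbers invented purely for illustration:

```python
# Classic 2x2 difference-in-differences: the treatment effect is the change
# for the treated group minus the change for the (not-yet-treated) control
# group over the same period, which nets out common time trends.

def did_estimate(treated_before, treated_after, control_before, control_after):
    def mean(xs):
        return sum(xs) / len(xs)
    return (mean(treated_after) - mean(treated_before)) - (
        mean(control_after) - mean(control_before)
    )

# Hypothetical resolutions-per-hour for two worker groups, before/after rollout
treated_before = [2.0, 2.2, 1.9]
treated_after  = [2.4, 2.6, 2.3]    # assistant enabled
control_before = [2.1, 2.0, 2.2]
control_after  = [2.15, 2.05, 2.25]  # not yet enabled

effect = did_estimate(treated_before, treated_after, control_before, control_after)
print(round(effect, 3))  # productivity gain net of the common time trend
```

The real study estimates this within a regression framework with many controls; the sketch only shows the core logic of comparing changes across the staggered-rollout groups.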

- We will come back to the potential downsides and disruptions. But that particular study you just summarized is remarkably positive in terms of its outlook. It was productivity enhancing, but it also brought up productivity the most for the least experienced, least able workers. So there's definitely an upside.

- There's an existence proof there that that can happen,

- Right? So that's all to the good. Now, Karin, I want to turn to you. You have a tremendous window into the labor market as chief economist at LinkedIn, and as somebody who's been working on labor economics for 40 years, I'm frankly a little bit envious of the data that you have access to.

- Yeah, amazing.

- So, you know, LinkedIn is the platform for professional networking, not just in the United States but around most of the world. That's where you sit, and from where you sit, I want you to tell us what you see. I'm going to ask you to try to look ahead over the next 12 to 18 months at the emerging effects of artificial intelligence technologies, gen AI but perhaps other AI technologies too, on the labor market: on the kinds of jobs that are diminishing and the kinds of new jobs that are emerging. And are we really at an inflection point? I want to ask you that too. So is the next 18 months likely to look a lot different than the past couple of years?

- Yeah. Thanks again for having me here; it's really a pleasure. I love talking about our data. It's overwhelming, because there's so much of it, with 1.3 billion members on the platform. But let me tell you what we see right now and where we think it's going. Now, I know you wrote, you've got canaries in the coal mine, that's

- Another paper

- We're gonna talk about that, I'm sure.

- We're coming to that one.

- Yep, yep. I know. I tend to say green shoots: I'm looking across the landscape of the workplace, asking where I see green shoots, where AI is taking root. And this is what I see. One, for example, there are now 1.3 million new jobs on our platform that didn't exist three or four years ago, and almost all of them are AI related. So there's an intense increase in demand by employers for AI-related work. They're looking for AI technically skilled people who can build LLMs, interpret them, and integrate them. And they're also looking for people who are AI literate. AI literacy is the idea that I can use any of these general tools for my purpose in my job; I don't have to be an engineer or a mathematician or a physicist. So, as an example, I mentioned 1.3 million new AI-related jobs, whether AI integrator, AI engineer, whatever it may be. We also see a doubling in people who are trying to read the room and add AI-literacy-type skills to their profile. It's not for me to say whether or not they're proficient, but people are reacting; they're moving where the market is, because increasingly there are engineering jobs that are AI engineering jobs rather than general software coding jobs, as we talked about. And we see that for young people, the types of jobs they're going to are very different than where they were going five or ten years ago. For those we call new grads, or new career entrants, who are finding it a struggle, we absolutely see this in our data. There are jobs where people are being hired 40% slower than five years ago; I mean, there just aren't openings there, and hardly anyone is getting hired into those roles. I'll give you examples of the most extreme ones, which are kind of obvious: medical scribe, copywriter, legal associate.
There are a lot of these jobs where people just aren't being hired at all, or are barely being hired. But there are, of course, flows of new grads going into new roles that we're seeing. A lot of these roles are looking for a combination of AI literacy or technical skills and human skills. We call them the new-collar jobs, but it's basically that combination. Think of jobs where I want you to be adept at being very productive with your AI tool, but I also want you to be able to go talk to a customer, engage with them, and keep them warm, as we say at our company. So they're looking for a combination of human skills, problem solving, conflict negotiation, communication, all of those classic human skills that we're not expecting AI to do, balanced with AI literacy. That's where the jobs are rising, and these jobs are often in sales, business development, operations, anything human facing. I will tell you, very sad to me: postdoc researcher is one of the slowest-hiring jobs right now. I know. All right, the other thing I wanted to quickly touch on: we're seeing jobs being created, we're seeing some jobs going away, and we're also seeing jobs being redefined. As we all know, a job is basically this: you have skills, you apply them to tasks, and the collection of tasks is your job. What we're seeing is that certain types of skills are increasingly permeating many roles, as opposed to being a job in themselves. I'll give you an example. We don't see as many data scientist jobs, but we do see a lot more jobs that are looking for data science skills. Do I know that that job is going to go away? Absolutely not. I still hire data scientists, and my son wants to be a data scientist, so I'm encouraging him. But the truth is, that's a skill in hot demand.
As a role, though, it's not necessarily expanding on its own. It's almost telling you that roles are going to be redefined completely to pull in lots of different skills. So we're seeing that sort of skill shuffling happening. And then, you asked me one last question, which was where things are going to be in 18 months, and I'll give you two other little thoughts. I don't think we're there yet. I don't think we're at that inflection point. What's happening at the tip of the spear is large companies with deep pockets that can spend on AI, invest in it, and go through this change management with their entire workforce to get them upskilled and ready. But the truth is, most people are not there yet. When we look at the members on our platform, I can tell you less than 1% are considered AI technically experienced. So companies know they need to upskill; they can't compete with each other to hire this very narrow set of expertise. They need to upskill their own workforce, and that's going to take some time. That's why I'm saying 12 to 18 months, even for the largest.

- So I'm going to summarize briefly. I heard you talk at the beginning about green shoots.

- Green shoots.

- I like that. I'm going to use that. We'll come back to

- That canary in the coal mine,

- Skill reshuffling, redefining the nature of particular jobs. I like that too. And at the end you made what I think is a very important point that I want to underscore: adopting these technologies is often a challenging undertaking for large enterprises and requires significant costs. It's not just, oh, I'm going to start using this. Organizations have a certain way of doing things, and now they're going to do things a different way, and that's challenging. So we'll come back to this. Let me turn to you, Peter. You're another guy with a job that I envy, as chief economist at Anthropic, so you're right in the center of a different part of the ecosystem. As I understand it, you're an advocate of better measurement of AI-powered structural transformation in the economy. That's kind of a big term, so I want you to explain what you mean by it, why you think it's important that we measure it better, and how we can do that.

- So when I joined Anthropic, I would have said that recent advances in AI and large language models arguably represented the most consequential technology of the last 30 years, if not the past century. Being at the forefront of this technology, its innovation and how fast it's improving, and seeing how a firm that is developing the tools is deploying them in my job as an economist and in every other role at the company, I've dropped the word "arguably" and I've dropped the 30 years. I think this is the most consequential technology, and for a number of reasons. It's a general-purpose technology; every occupation is poised for some degree of transformation. In our data, when we first started tracking how people and businesses are using these tools, a year ago a third of jobs had Claude being used for at least a quarter of the tasks done in those jobs. That number has risen to about 50%, so Claude is being used for tasks in half of all jobs in the US economy. It's also the case that adoption has been very fast by a number of measures: businesses and individuals are adopting these tools much more quickly than the internet, much faster than other general-purpose technologies in the past. So already you're on the precipice of a lot of uncertainty. Then you add to that that the capabilities of these models are improving very rapidly. There's a famous chart from METR documenting that the duration of tasks these models can complete reliably is doubling roughly every four to seven months, and that gets us to weeks, even months, of autonomous work within the next few years. And it's not just that. To borrow a phrase I learned from Jonathan Haskel, an economist in London, it represents an innovation in the method of innovation.
So AI has this potential to make us better and more innovative at the things we already do, and to help us uncover new ways of doing things. Put that into your favorite model of the economy, and over the long run this can generate an acceleration in productivity growth, which can be quite consequential. So there is an enormous backdrop of uncertainty. We're arguing about what impact AI is already having in the labor market and the extent to which it's already showing up in the productivity statistics, and there's a lot of uncertainty over what the next one to two years holds. I think that's the main motivation.
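The doubling claim McCrory cites is simple compound growth: a task horizon that doubles every d months grows by a factor of 2^(t/d) after t months. A back-of-the-envelope sketch, where the starting one-hour horizon and the three-year window are illustrative assumptions, not figures from the METR chart:

```python
# If the reliably-completable task horizon doubles every `doubling_months`,
# a starting horizon h0 grows to h0 * 2**(t / doubling_months) after t months.

def horizon_hours(h0_hours, doubling_months, months_ahead):
    return h0_hours * 2 ** (months_ahead / doubling_months)

h0 = 1.0  # assume models reliably complete ~1-hour tasks today (illustrative)
for d in (4, 7):  # the "every four to seven months" range
    h = horizon_hours(h0, d, 36)  # three years out
    print(f"doubling every {d} months -> {h:.0f} hours (~{h / 40:.1f} 40-hour weeks)")
```

With the faster doubling time the horizon reaches hundreds of hours, i.e. weeks of autonomous work; with the slower one, roughly a week. That spread is exactly why the four-versus-seven-month uncertainty matters so much.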

- Okay.

- Now, how do we actually measure that? What we are trying to do at Anthropic is use privacy-preserving methods to see how people and businesses are actually adopting these tools. There have been a number of research papers trying to articulate where large language models could be useful, maybe for customer support, maybe for software engineering, for any number of tasks that knowledge workers do. But for the reasons you've alluded to, there are barriers and bottlenecks to actual adoption. So even as adoption goes very fast, we need to understand the determinants of adoption and where it is unfolding. This can also give us the tools for navigating the transition to powerful and pervasive AI.

- But let me just press you a bit on what you mean by adoption.

- Yep.

- So I just released a study with a dozen other authors, some at Stanford, that surveys senior business executives in four countries around the world. We asked them about AI usage in their firms, but also in their own roles. Perhaps not surprisingly, two-thirds of them say they use AI tools in their own role as CEO or CFO. But then when you ask them how many hours a week they use it, it's an average of about one and a half hours. So I take the point that the adoption rate, in some extensive-margin sense, is quite high, but the actual usage intensity is another question. Can you elaborate on that a little more and say something about where it is used intensively and where people are just dabbling with it?

- Yeah. I think there are a number of ways you can tackle this. We put out a report in September where we laid out the geography of adoption. An important lesson of general-purpose technologies over the course of the 20th century is that there is oftentimes a lot of geographic concentration. We see this dimension in our data: high-income economies have disproportionate rates of adoption of Claude, and within the US a small number of states represent the lion's share of overall usage. This might shape the economic implications; to the extent that productivity gains concentrate in already rich countries, it might exacerbate regional differences in income. So there are some open questions about the geographic dimension of adoption.

- I see. And... go ahead.

- Oh, sorry. And then I would say, what we do in our data is look at the specific tasks. You can ask the question not of how many firms or individuals are using Claude, but of which specific tasks they're using it for, and how those relate to the tasks these models might potentially be useful for. There you see a large gap. For coding- and software-engineering-related jobs, by one measure, a famous measure put out by some researchers back in 2023, maybe 94% of tasks within those jobs have potential exposure. On our platform, we see adoption in automated ways for work-related purposes in only around 30% of those tasks. So even on the intensive margin, there's a big gap between what these tools might be useful for and what they're actually being used for in the real world.

- Okay. I'm going to come back later and ask you what accounts for that gap, but I think that's an important point to understand. Erik, I want to go back to you. You told us a story about one particular setting where the introduction of gen AI had very positive productivity effects, especially bringing up the bottom end. We heard from Karin about green shoots, but as she already previewed, you also have a paper called Canaries in the Coal Mine, which doesn't sound quite as upbeat as green shoots. So tell us about the Canaries paper

- Sure.

- And what you learned from it.

- Sure. This is a paper with Bharat Chandar and Ruyu Chen, a couple of amazing folks at the Digital Economy Lab. We used data from ADP, the world's largest payroll processor. There are all these news stories about people being laid off or hired, this and that, so we tried to see if there were any broad patterns in the data. When Ruyu and Bharat first looked through it at the top level, we actually didn't see much, and we were almost going to write a paper saying, okay, there's not much happening there. But then we decided to slice it a little more carefully into different subgroups. There are a couple of ways you can look at it. One is to rank every occupation by how exposed it is to AI. It's actually kind of hard to do that at the occupation level directly, but every occupation, as Karin previewed, is a bundle of tasks. Radiologists, for instance, do 26 different tasks according to O*NET. If you go through all the occupations and look at the individual tasks, it's much easier. You can say, okay, writing a memo is something an AI, a large language model, can help a lot with; lifting a box, not so much. Anthropic uses this O*NET categorization as well. Then you can see that some jobs are going to be more exposed on average and some less exposed, although all of them have some mixture of highly exposed and less exposed tasks. When we did that, we didn't see a whole lot; we started seeing some inklings that maybe the more exposed tasks were having an effect. Then we sliced it one more way, which is by age. We looked at the youngest workers, age 22 to 26, and all the different groups going up. Now, something popped out very visibly.
If you looked at the most exposed occupations, and at the youngest age group, 22 to 26, you saw a very noticeable decline in employment. In some occupations like coding, it was about 20%; in customer service call centers, I think it was about 14%. On average, for that 20% of most exposed occupations, it was about an 11% decline for the youngest workers when we first released the paper in August. Since then, every month we get more data, and I rush to see what the new data say. It's been steadily increasing; now it's about a 17% decline for that age group in the most exposed occupations. But I don't want to leave it there. Those were the canaries.

- Those are the canaries, the really young people.

- Those are a worrying early sign that maybe something is happening and is growing. But I want to say there are three other groups that actually had growing, or at least flat, employment, so I want to mention that too. One was the older workers, for whom we didn't see much of an effect. So within coding, for more senior coders we didn't see a big decline; it was the young coders. Secondly, if you go to the other end, the people who were unexposed, like home health aides, there wasn't a lot that LLMs could do for them, and they actually had growing employment. So there was less employment in the exposed occupations and more in the unexposed. And finally, and I think this is most interesting, thanks to Peter and his team, one of the ways they categorized usage was by folks who were using AI mainly to automate their work versus mainly to augment it, that is, to learn a new skill and be able to do their job better. It turns out that second group, the ones using it to augment, also had somewhat growing employment. It wasn't as big a group as the automate group, unfortunately, but it was a group that was getting both more productivity and more employment. So there are some canaries, and there are also some green shoots. The broader, more dramatic effect was the canaries, but I did want to point out these other three groups that were benefiting.
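The task-level exposure ranking Brynjolfsson describes can be sketched in a few lines: rate each O*NET-style task as exposed or not, then score an occupation by the share of its tasks that are exposed. The task lists and binary ratings below are invented for illustration; the actual papers use richer, continuous exposure measures:

```python
# Hypothetical occupation -> {task: exposed?} ratings (1 = an LLM can help
# a lot with this task, 0 = not much). All entries are illustrative.
occupations = {
    "customer_support": {
        "answer written queries": 1,
        "update account records": 1,
        "escalate complex cases": 0,
    },
    "home_health_aide": {
        "assist with bathing": 0,
        "prepare meals": 0,
        "log visit notes": 1,
    },
}

def exposure_share(tasks):
    """Occupation-level exposure = share of its tasks rated as exposed."""
    return sum(tasks.values()) / len(tasks)

scores = {occ: exposure_share(tasks) for occ, tasks in occupations.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # occupations ordered from most to least exposed
```

Ranking occupations this way, and then splitting employment trends by exposure quintile and worker age, is the slicing that surfaced the decline for the youngest workers in the most exposed occupations.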

- Great, thank you. You know, I can't resist pointing out that in describing the substance of his paper, Erik also revealed one of the secrets of a successful researcher, which he definitely is: if you write a paper that says, well, we didn't find much there, it's not going to get published. He found something important and got it published in one of the top journals of the profession.

- To be fair, we actually did have a first version, Bharat has it, you can go back and look at the record, in which we didn't find much. But then we kept looking; we just wanted to see what was there. If there had been nothing, we definitely would have reported that.

- Of course, of course. I wasn't meaning to suggest otherwise. No, this was a compliment to the creativity of your research style.

- One other thing, because it's interesting to juxtapose these two papers. If you were listening carefully, you noticed that the young group in the first one actually kind of benefited, and in the second, maybe not so much. At first I thought, okay, maybe there's something different about the settings. But as I thought about it a little more, it may be part of the same story: both groups are just being much more affected because AI is much more relevant for some of those kinds of tasks. What do LLMs do? They read all the books and everything on the internet, and there's a lot of book learning and knowledge in there. What there's not a lot of in LLMs is tacit knowledge that's never been written down, never been figured out. Senior workers tend to have a bit more of that, so there's a bit more potential for impact on people at the early career stage.

- Thank you. Thank you. So, Karin, I want to turn to you again, and I want to think back to Condi's and Rishi's opening remarks about the extraordinary pace of change in this area. There's no doubt that's happening on the technology side, but as Karin already mentioned, translating that into implementation at the organizational level is challenging. I'd like you to elaborate on that. In particular, to what extent does it have to do with, as I suggested earlier, the cost of changing organizational practices? To what extent is it about the workforce lacking the right skills? To what extent are there policy impediments to adoption we should think about? Or is the technology maybe just not as great as it's cracked up to be? Where do you come down on those, and what light can you shed on the impediments to rapid adoption?

- Yeah, I think you touched on a lot of them. I spend a lot of time talking to CHROs, people who are thinking about their hiring plans, their overall head count, and what they want to do. They almost always say, no, we intend to keep hiring young grads; it's not that they want to cut off the pipeline, but they're looking for certain sets of skills they can't find. One of their biggest problems, and this is not just the human resource leaders but leaders across the C-suite in these companies, is the change management required at a process level, across every single function they oversee, to implement and integrate AI. It's getting people to use it. It's creating the psychological safety so that people are willing to try it and fail and realize, I actually spent six hours trying to get AI to do something I could have just done myself with pen and paper, or in Excel, in an hour. But they're learning as they're doing. So creating the space where people can learn, modeling how to use AI and which tools are safe to use, that change management process is inordinately expensive for the C-suite. But they all understand the promise of what AI can deliver in terms of productivity. Many of them tell me, we don't see it right now, but we're expecting to see it in 12 to 18 months. They know it's a process. So that's one impediment. The other one I think a lot about is what we call upskilling, acquiring the skills that you need. As a country, we're really very inefficient at it. It's really hard to get people to learn new skills. The truth is, and I'll say this and be a little spicy as a very proud Stanford grad, universities generally aren't at the forefront of creating the coursework.
They're looking to see what business needs and then creating the course, so people who are studying things aren't necessarily ready for whatever it is businesses want. We need to do a better job of helping educators educate, and of making sure we can upskill people in their jobs rather than waiting until they're displaced. So there's an upskilling impediment and a change management challenge. And the last thing I'd mention is that I think there's a bit of an enthusiasm gap as well. When we survey, and maybe you've seen something similar in your work, Steve, the C-suite is rapidly trying to take up AI and figure out how to implement it. Workers can also see the promise of it, but they're a lot more cautious. Many leaders are navigating the message of, I don't want to antagonize my talented workforce by making them think I'm going to replace them with AI; I want them to understand this is a tool to empower them and give them more agency. People are very worried about this, and leaders need to figure out how to message it. So there's a lot of space between what we're all dreaming AI is going to deliver in terms of productivity and where we are. And the last thing I'll say before handing it back to you, Steve, is that when we look at adoption rates, they're still pretty low, in my opinion. It's going to take a while for adoption to become really pervasive. There are some super users, but I think it's going to take much longer than people predict when you read those slightly breathless headlines. When we look at adoption in terms of who's hiring AI talent and where that talent resides, it's in tech, finance, and education, but not in a whole range of industries where adoption is still very, very low. So it's going to take some time.
And then there's an industry differential, and also a firm-size differential. Large firms are getting better and better at implementing AI training, creating courses to train their staff, encouraging them to use it, and giving them guardrails: this is safe, this is not safe to do with the data. Small firms aren't there yet; unless they're AI startups, they're not there yet. So there's a whole long tail in the distribution of small firms that I think will take a while, though they will get there. That's why I'm a little more cautious than the view that it's here now and it's going to displace all these jobs. I don't believe it. I think we're always...

- Okay. So it's, it's gonna take a while.

- I think it's gonna take a while

- For a variety of reasons. Peter, I wanted to ask you a related question. You already pointed out that there's a big gap between what these tools can theoretically do and what people on the ground are actually doing with them. It's similar to the question I asked Karin, but I want to get your perspective on the table: what is holding up the movement from where we are to what could be done? How do you see that?

- So maybe I'll focus first on how businesses are adopting Claude. In our report in September, we took a first look at how businesses are embedding Claude into various actions and software they were building through the API. With the API, the company provides context to Claude, Claude does something, and it produces an output that then feeds into some downstream operation. Maybe you're using Claude to help streamline your HR process, or you're building an application that goes direct to the consumer. One of the interesting things we see is that automating the most complex tasks, like automated biological research, relies on disproportionately more contextual information.

- Hmm.

- So to complete that very sophisticated task, one the model is capable of not just in a theoretical sense but in an actual, grounded sense, you need the relevant information. What that illustrates is what we might not be seeing in our data: large companies whose data infrastructure is dispersed, and where there are bureaucratic barriers to getting the relevant information to the model at the right time. There's also the importance of developing new organizational workflows. If you want to develop a sales strategy, and it relies on knowledge your coworker has about the company or the industry where you're trying to make that strategy work, and the model doesn't have that tacit knowledge, then it doesn't matter how strong and capable the model is; it won't be able to complete the task at hand. So I think this interaction between how readily available the relevant information is and the model's capabilities helps explain a large portion of enterprise adoption, as well as individual adoption. Three to four in ten conversations on Claude are coding-related tasks. Claude is known to be a very strong software engineering model, and coding is also a domain where all of the contextual information is, in principle, contained in the folder you're operating out of. As long as you equip the model with the tools to navigate and find the right information, it can be very effective. There are also standard barriers to adoption: experimentation and awareness. And you need digital infrastructure that can facilitate access to these models; that's a key determinant of global adoption patterns. But within the US, within advanced economies, I think it's more about how well suited your local workforce is to the capabilities the tools provide.
So states that have more software engineers as a share of overall employment tend to have higher usage per capita. DC, for example, has a very large share of knowledge workers, and DC is where we have the highest usage per capita on our platform within the US.

- I see. You made what I took to be a very interesting point about how to think about these models. They may be theoretically capable of something, but whatever the domain of the potential application, they have to have access to data suitable for that particular application so that the models can be trained. Sometimes the data is out there but dispersed in ways that are not readily accessible, if I understood you correctly. In other applications, maybe the data isn't even there and has to be developed from scratch; that kind of tacit knowledge is tacit because it's not written down in a codified, systematic way. So I think that's a really important insight. What are some areas where you think we're on the cusp, because the data is already there and it's really just a matter of organizing it and ingesting it into the models so we can make use of it?

- I'll give another interesting angle on this question. In our report that we put out in January, we introduced what we called economic primitives: very simple ways to understand what's happening within a conversation. When someone is using Claude for coding or for writing a marketing report, one of the things we analyzed is this: we asked Claude to estimate how many years of formal education someone would need to understand the prompt the human is providing, and how many years of formal education you would need to understand Claude's response. It turns out that across tasks, across states, across countries, there's an extremely high correlation between the knowledge the human is providing to the model and the sophistication of the task the model is completing. So even though, writ large, Claude is being used for higher, more sophisticated tasks that typically require more years of schooling, it depends at present, to some degree, on human expertise. Further reinforcing this point about the role of human expertise in how these models are being deployed is the fact that the most complex tasks are also where Claude tends to struggle the most: the measured success rate is lower for these harder tasks. So this points in the direction that you need human expertise to know how to get Claude to develop a machine learning model in the first place, but you also need that expertise to evaluate the quality of the work.

- Yeah, this is a key thing. I must confess that at the behest of my younger co-authors on a recent study, I tried this tool called Refine. We'd already written the paper, and I was a bit skeptical, but I tried it. And you know what, it came back with several ideas. Three of them I had already thought of and were already on the agenda; two were just wrong; but it had one idea I had not thought of that was pretty good. I've been doing this stuff for decades, so I was impressed. And so I see that, to make effective use of these tools, you yourself, as the user in a complex setting, need to have a lot of domain-specific knowledge to throw out the bad stuff and keep the good stuff.

- Yeah. I feel like I borrowed this framing from Erik, so maybe you can correct me if I get it wrong, but I like to think of jobs in terms of, I think you said it this way, asking a question, then an implementation step, then an evaluation step. The implementation aspect of jobs, I think, is increasingly becoming saturated. I feel that in my own work as an economist: downloading data, running a statistical analysis, Claude can just do it, and I can ask Claude to complete the task. I need to know the right methodology and the right way to frame the question, and I need to know how to evaluate the quality of Claude's work. But that aspect of pure implementation is also where I suspect the greatest near-term risk of displacement lies. Data entry workers, technical writers, jobs that pull together dispersed information and synthesize it in some straightforward way: that's something these models are increasingly capable of implementing wholesale.

- Okay. Do you want to jump in here, Erik?

- Well, I think that's exactly right. I like that framing, and I would say it's happening within jobs as well. In coding, more and more people are becoming chief executives of a fleet of agents, and their job is increasingly to define what the question is and then evaluate the result. We're seeing that in lots of other applications as well. And this is key to your question, Steve, about what we can do to speed up the diffusion. We need to get better at asking the right questions to aim these incredibly powerful tools, and that requires not just an understanding of the technical capabilities but also of the problems that customers have, or the problems we have in our societies. So there's a liberal arts and technical confluence: people who can see both the needs and the capabilities and are positioned to bring them together. It's a very entrepreneurial thing, and I think going forward there will be more and more opportunity for that kind of entrepreneurial approach, and that will close, well, I don't know if it will close, but it will catch up on the gap of implementation. I can't say close, because I know you all are going to keep advancing the frontier as well, but at least we can try to catch up a little. That's where I think most of the real challenges and opportunities lie in the coming five to ten years: addressing this capabilities overhang by understanding a little better where the opportunities are.

- Go ahead, Karin.

- You made me think of something; I want to jump in and piggyback on that. One of the things we're seeing in our data lately, in the last two to three years, is a really big rise in entrepreneurship on the platform, in particular people who now have a title or role of founder. I think a lot of this is related to this technological wave of AI innovation that allows people to set up their own shop and do things in a pretty effective way, because they can gather a lot of data and they don't have to be experts in everything. They just need to be experts in asking the right questions, challenging, pushing back, digging deep. I talk to journalists a lot who ask, is AI going to take my job? And I say, not if you ask good questions; I don't think so. Be pushy. So I'm more optimistic, and you can tell I'm the optimist here, with green shoots and all that, but there has been a 60% increase in the number of people doing entrepreneurial-type work and showcasing it on LinkedIn. I think that speaks partly to this moment of technological innovation, and partly to this idea of what AI can unlock.

- Right? So AI is an entrepreneurship engine. Yeah. A bit of a catalyst, an innovation engine.

- Yeah,

- Yeah, exactly. So I want to turn now to a more open-ended question, and I'm going to give each of you a chance to take it on. It's back to the underlying anxiety and tension. Before hearing you speak, I was thinking about it as automation and displacement versus augmentation and a wonderful productivity world, but now I'll just summarize it as a race between green shoots and canaries. I want to get each of you on record, as much as you're willing: going forward, are the green shoots going to win out, or are the canaries? What can we do to promote the green shoots, and perhaps also to address the canaries that might arise? Who wants to go first on this one?

- I mean, I'm happy to take a stab, and then you can counter.

- Go ahead.

- So the interesting part for me is what the transition looks like for someone who feels they're over-rotated on tasks and skills that AI can replicate. How easily do they pivot, reskill, upskill, whatever you want to call it? How easily do they move to a new role? What is the labor mobility that allows them to say, I was working for a long time in manufacturing, but I feel like I need to move to a different industry, and can they do it? So I guess I'm leaving you with aspirational questions we should think about. One, how easily can people transition, what is the labor mobility, and can we preserve it? Two, how easily can we retrain, upskill, and support people? And three, how prolonged is the transition? Is it going to take a decade, or two years? I think two years would be quite painful, frankly. So I'll leave you there. I'm optimistic, but very clear-eyed that there are a lot of challenges. Whether the canaries or the green shoots win out, I think, is up to us and to policymakers.

- Okay. Do you want to go next?

- Sure. We just put out a research note a week and a half ago where we introduce a framework, a monitoring toolkit, for thinking about whether, if AI does lead to displacement, we should at the very least see it show up as a rise in unemployment for the workers where Claude is being used in the most automated ways. It's an example of a null result: nothing yet has emerged. But given the uncertainty, we think it's really important to put this nowcasting framework in place, or, as someone put it in more accessible terms, how do you tell the time? What's the right way to understand how AI is affecting the economy right now, and how do you track that over time? The harder question is how you see the future, how you do the forecasting. Will there be a rise in unemployment in the next one to two years for the most exposed workers? Or will it be like the history of skill-biased technical change, where there's a lot of inequality, job transformation, and job destruction under the surface? I don't have a good handle there. We start from a position of humility: it's really hard to get this right, and there are examples of getting it wrong at the outset with a number of other shocks. What we're working really hard on right now is both the nowcasting, telling the time, and the forecasting exercise: how do we assign likelihoods to different possible futures, and how do we update those probabilities based on how we see the tools being used in real time?

- Did you put your nowcast and your forecast in the public domain so people can evaluate their performance?

- Exactly.

- Okay. Well, that's great. Erik, let's hear your thoughts on canaries versus green shoots.

- Canaries versus green shoots. So the first thing is, I think this is fundamentally not a prediction question; it's a design question.

- Yes.

- Because depending on the institutions and the choices we make, we can have either of those outcomes, or outcomes that are worse or better than those. So we need to think carefully about how we design this. Thanks to people, many of them within 30 miles of this auditorium, we now have more powerful tools than we've ever had in history, and by definition that means we have more ability to change the world. So we need to think about how to use those tools intentionally. There are three things, and we've got a panel of policymakers coming up, that I think would help. The first is that these technologies are changing very rapidly while our skills, our institutions, and our companies are changing much, much more slowly. You've done some of the best work in this area, on the falling dynamism in the US economy. I was kind of shocked when I first saw that, because you hear a lot about Silicon Valley. But if you look more broadly at the US economy, and I suspect it's worse in other economies, there's been less new company formation, fewer people moving between companies, and declines in all the other metrics of dynamism. That is a terrible mismatch with rapidly changing technology. If we want the productivity, and we want to get it in a graceful way, we don't want to slow down the change in companies; we want to speed it up. The worst thing a policymaker could do is try to freeze in place the existing jobs; you need to embrace the change and the dynamism. There's a whole host of policies, which I won't try to list now, that could boost dynamism and flexibility. The second thing, to the point about nowcasting and forecasting:
Right now we just don't have nearly as much visibility into what's happening in the economy as we should have. Sadly, the statistical agencies' budgets have been cut, and they're struggling even to produce what they did before, when they should be producing more. We're flying into some very turbulent times, and we're flying blind. We need to do a much better job of getting visibility, and thank you to both of your companies for providing more and more data; we're going to have to rely more and more on private-sector, real-time, large-scale data to do that. The more visibility we have, the better policymakers, executives, and individual workers can make decisions, because they know what's happening. So there's a lot more we can do to improve that. Thirdly, there are some policies in place right now, you mentioned augmenting versus automating, that bias us toward automating and replacing work, toward using technology to substitute for people when it could often be boosting them instead. There are tax policies that do that: if you're an entrepreneur who wants to start a billion-dollar company with a lot of human workers, your enterprise is going to pay a lot more taxes than one without a lot of human workers. Is that the direction we want to steer? I'm not so sure we want to penalize the people who are keeping humans in the loop. Or take a CFO; I was just talking to a CFO a little earlier, and there's a bias toward measurable cost cutting by firing people. She literally told me that's something we can measure about how AI is affecting the company. So we incentivize people based on how much cost, and how much employment, they can cut. It's great to be more efficient, but in most cases the more sustainable advantage comes from increasing the top line.
How can you create new products and services? How can you create better customer service? How can you increase total revenues? Those are harder to measure in a lot of ways, I agree, and harder to invent, but ultimately they're more sustainable. Even researchers, I think, have a bit of a bias. We often use humans as a benchmark and say, okay, how can we make a machine that does that? That's, in a way, a lazy way to do research, just mimicking what a human is doing. A more creative and ultimately more valuable approach is to ask how we can combine humans and machines to do something new that has never been done before. Then you have to think a little harder, but ultimately I think that's more rewarding. All of those are things we can do to steer us toward a world where we have not just more productivity, which I think is very likely, but productivity whose wealth is widely shared, which is not at all guaranteed.

- Okay. Yeah, I like your framing of this as a design problem rather than a prediction problem; I think that's a constructive way to approach the issue. You've already kind of taken my last question, and I'm

- Going to give the others, well, we've got two,

- Well, we're going to hear from Karin and Peter. But my last question is: what insights or advice about policy do we want to offer to the four distinguished public- and private-sector policymakers we have coming up next? Karin and Peter, briefly, if you have thoughts on that.

- Sure. I think I may have said it already, but the most important thing is finding a way to do a better job of training a workforce that probably needs to be constantly trained and retrained, because, as Peter said, these tools are very powerful and iterating very quickly. The more people can retrain and pivot around the economy, the better it will be for dynamism.

- Great. Peter, last thoughts?

- I'll say two things. It's incredibly important to have more information about adoption: official statistics, scaling up our public data sets. We can't rely only on private companies to provide data; we're going to do our part at Anthropic, but we have only a small window into the economy as a whole. The second thing I would say is that there's a range of uncertainty. There are no-regret policies that, no matter what the next one to two years look like, you should pursue to begin with. And I think it's really useful to think in state-dependent terms: what would you do in a scenario where the labor share of income is declining pretty swiftly, or in a world where unemployment is spiking rapidly? Having a state-dependent response thought out ahead of time, to anticipate that uncertainty and navigate it effectively, is very valuable, I think.

- Great. I thought this was an outstanding conversation. I want to thank all three of you. I learned a lot, and I'm sure the audience did as well.


Session 2: What Should Leaders Do? Getting AI Policy Right

Featuring James Manyika, Gina Raimondo, and Rishi Sunak - in conversation with Condoleezza Rice. 

James Manyika is the President of Research, Labs, Technology and Society at Google-Alphabet. He focuses on technology and society in areas ranging from AI and computing infrastructure to the future of work, the digital economy, and sustainability that have potential for broad impact on society. He is senior partner emeritus of McKinsey & Company and is chair and director emeritus of the McKinsey Global Institute. He was appointed by President Obama to serve as vice chair of the Global Development Council at the White House, and by previous commerce secretaries to the Digital Economy Board of Advisors and the National Innovation Board. He is vice chair of the president’s National AI Advisory Committee. Manyika is a member of the National Academies of Sciences, Engineering and Medicine’s Committee on Responsible Computing and serves on the boards of research institutes at MIT, Harvard, Stanford, Oxford, and the University of Toronto. He has been elected a fellow of the American Academy of Arts and Sciences; a distinguished fellow of Stanford’s AI Institute; a distinguished fellow in ethics and AI at Oxford; a visiting fellow at All Souls College, Oxford; and a fellow of DeepMind. A Rhodes Scholar, Manyika received a DPhil, MSc, and MA from Oxford in AI, robotics, and computation and a BSc in electrical engineering from the University of Zimbabwe. 

Gina M. Raimondo is a distinguished fellow at the Council on Foreign Relations. From March 2021 through January 2025 she was the US secretary of commerce. As secretary, she focused on making America more competitive by driving job creation, fostering innovation, and strengthening national security. Under her leadership, the Department of Commerce made historic investments in internet access, manufacturing, workforce training, and climate readiness through initiatives such as the Bipartisan Infrastructure Law, the CHIPS Act, and the Inflation Reduction Act. She oversaw $39 billion in incentives for microchip manufacturing, $40 billion in broadband grants, and significant investments in climate resilience and minority business support. Raimondo has also led efforts on AI development, launching the US AI Safety Institute and establishing the International Network of AI Safety Institutes. As the seventy-fifth governor of Rhode Island and its first woman governor, she focused on economic development, infrastructure, and education. Raimondo served as the chair of the Democratic Governors Association and was Rhode Island’s general treasurer before becoming governor. She is a graduate of Harvard, Oxford (Rhodes Scholar), and Yale Law School.

Condoleezza Rice is the Tad and Dianne Taube Director and the Barbara Stephenson Senior Fellow on Public Policy at the Hoover Institution. She is also the Denning Professor in Global Business and the Economy at Stanford’s Graduate School of Business and a founding partner of international strategic consulting firm Rice, Hadley, Gates & Manuel LLC. Rice served as the sixty-sixth US secretary of state (2005–9) and as national security advisor (2001–5) in the George W. Bush administration. She previously served on President George H. W. Bush’s National Security Council staff and as Stanford University’s provost. She has been on the Stanford faculty since 1981 and has won two of the university’s highest teaching honors. Rice is a fellow of the American Academy of Arts and Sciences and has been awarded more than fifteen honorary doctorates. 

Rishi Sunak serves as a Conservative member of Parliament for Richmond and Northallerton and the William C. Edwards Distinguished Visiting Fellow at the Hoover Institution at Stanford University. Sunak was prime minister of the United Kingdom from 2022 to 2024. His tenure was marked by a return to economic stability; the UK's taking a global leadership position in new technologies such as artificial intelligence; and its increasing defense spending so the country was better equipped to stand up to the new axis of authoritarian states. Sunak was chancellor of the exchequer from 2020 to 2022. As chancellor, he led the country's economic response to COVID-19. His business support measures, including, for the first time in the UK, a furlough scheme, saved millions of jobs, preventing mass unemployment. Before entering politics, Sunak had a career in finance. He worked at Goldman Sachs and the Children's Investment Fund, and cofounded an investment firm supporting British businesses.

- I am going to introduce the next panel, which is even more outstanding than the panel we just heard from. You've already heard from Condi, and she's been introduced. I'll just remind everybody that among the many hats she wears, she's also the former provost at Stanford, and she's my current boss, so I'm trying to be careful. And Rishi was already introduced as well, but I wanna remind everybody, 'cause this didn't come up, that he's also a sitting member of Parliament for Richmond and Northallerton. In addition, we have Gina Raimondo on the next panel. She's the 40th US Secretary of Commerce and the 75th governor of Rhode Island. I went to Brown to get my PhD, so cheers for Rhode Island. And James Manyika, Senior Vice President of Technology and Society at Google-Alphabet and visiting professor of practice at the University of Oxford. Welcome everyone. I'm looking forward to the conversation.

- Well, thank you very much, Steve. Thank you for the handoff in a couple of ways: the introductions, but also the challenge at the end of the panel to talk about what is required of leaders in this period. And I'm actually gonna start this conversation with James, because James, you and your colleagues are the culprits for all of this. You are the ones pushing the frontiers and giving us wildly new technological breakthroughs, and I think we greatly appreciate it. But then leaders have to try and make sense of it all. So let me start by just asking you: what would you say to the leaders who we're going to talk to in a moment here about this technology, about AI, about jobs, about work? What would you say to leaders?

- Well, first of all, it's a pleasure to be back at Stanford. Good to see everybody. Thank you all for coming. I think I would say three things, three different kinds of things. Let me go through them. First, I'd want leaders to understand, and it's already been said, that the pace of technological development is relentless. The part that's often missed is the scope of the innovations. It's easy to have this conversation just be about chatbots or LLMs, but what's happening is fairly broad. We're seeing the kinds of capabilities expand: we went from systems that were just about language and text, to now every modality, to now agent capabilities. So the range of capabilities and the scope of them are expanding. In many respects, we're having to keep inventing new benchmarks for what these systems can do. Every time new benchmarks come up, we kind of max out on them, and then we add a few more and come up with new benchmarks. That's worth keeping in mind, and it's also worth looking at what those benchmarks are measuring in terms of capabilities. Staying with this notion of scope, it's important to also think about the fact that many related areas are making progress. AI and robotics: robotics is now starting to make progress because of AI. You know, Secretary, you came to see us in our quantum lab; quantum computing is making progress thanks to AI. Fei-Fei was here and could be talking about world models. So the range of things being touched by this technology is actually quite extraordinary. It's also worth keeping in mind, when I say scope, that the whole stack is making progress.
It's not just the models; it's the applications, it's the infrastructure, it's the chips, whether you're looking at GPUs from Nvidia or TPUs designed at Google. So even the stack is making progress. I'll say one last thing about that part, which has been alluded to already: it's important to keep in mind that as impressive as that progress is, it's very jagged. There are some things that will blow your mind, that, wow, these systems can do this. And there are some things that will surprisingly shock you at how badly they do them; visual is one example. So the jaggedness is important for policy makers to be aware of. It's not just uniform progress everywhere. The second point I'd want policy makers to understand is that the beneficial possibilities from this technology are actually quite broad, and they're increasingly becoming quite real, again, in an uneven way. This technology will empower people to do extraordinary things. I was just in India, the Prime Minister and I were both in India, and what people are now able to do with languages is remarkable. We're working on Indic languages; we have a moonshot to get to a thousand languages, we're now at 250, and some day we'll get to 7,000. So what this technology can do to empower people is extraordinary. I'll skip the economy, 'cause it was touched on in the previous panel. The other part of the impact is what this technology is doing to advance science. In the last two years, five of my colleagues got Nobel Prizes for their work in science. Take AlphaFold itself, which is one of the things that got a Nobel Prize: more than three and a half million biologists in over 190 countries are using it, working on all sorts of things.
Just this week we expanded beyond single proteins; we've now added complex proteins in 20 of the most studied organisms on the planet, including people. And a lot of that work on complex proteins is now looking at the priority pathogens that the World Health Organization has prioritized. So what's going on in science, in materials science, in high-energy physics as a result of this is pretty broad. And then the final area that's important is how this technology is starting to help us work on societal challenges that are here today. I'll mention two quick examples from just this week. First, we just completed a year-long study, the Prime Minister is here, we did this with the NHS, on early breast cancer detection. It was the largest such study, over a year with 175,000 patients, showing how this technology can advance early detection as well as reduce missed detection. It's published in Nature Cancer. So you're starting to see these things become quite real. The other example: we've been working on crisis prediction, and we now do flood forecasting using AI for over 150 countries, where more than 2 billion people live. Just this week we've added flash floods, because that work previously covered just river flooding. So in all these areas, the second thing I'd want is for policy makers to understand the increasing possibilities that are becoming real. The third and final thing, and then I'll stop: I also think it's important for policy makers and leaders to understand the risks and complexities of this technology. And there are many; you can cover different categories. There are still a lot of performance and safety risks with this technology.
Whether it's inaccuracies or hallucinations or any number of these things: no one is designing these systems to do that, they're just not good enough yet, and there's still a lot more work to do. Then you've got the risks that come from misapplication and misuse of this technology, and we can imagine any number of them, from deepfakes on. You can imagine the range of things that people might misuse this technology for, whether individuals or companies or countries or terrorists or any number of actors. Again, because I was in India and we were both at the AI summit, there's one kind of risk that was highlighted there which I think is always quite interesting. You often hear this from the global south: that in addition to thinking about misuses and misapplications, we should also think about missed uses as a

- Risk.

- And so I think it's important to think about this as another kind of risk. And the final thing worth thinking about are the complexities, where there's something lost and something gained. Think about how this technology rewires things: rewires education, rewires work, rewires our institutions. It's worth remembering that a lot of the societal arrangements we've made are very contingent on the technology of the day. Take universities, think about education: if we all knew that there was an available technology that could help somebody write an essay or do their homework, I don't think we'd design a system around take-home homework. We'd probably redesign it to be quite different. But that made sense at a time when those technologies didn't exist. So many societal arrangements are just contingent on the technology of the day, and we need to think through how we do those readjustments, where clearly some things will be lost, but some things could be gained. We need to think about all of that.

- Fantastic. So Gina, you have been both a governor and a cabinet officer in the US government, and those are leadership positions. Very often as a leader, when something like what James has been describing comes along, one can feel that it's being done to you rather than something you can affect. So how, as a leader, do you think about the obligations, responsibilities, and possibilities of this moment?

- Thank you. Condi invited me to spend the week here at Hoover, and I have to say it's been fantastic. It is such a special place, and I want to thank you for that. This is a tough one. As a former governor and Secretary of Commerce, I look at this through a few different lenses. One is the lens of how we maintain US competitiveness. There was much discussion on the prior panel about productivity; for a long time we haven't seen the productivity growth that we need in this country to compete. A new technology that enhances our productivity is exciting. We know for sure our adversaries, most principally China, are investing massively in AI, and even more so in robotics and quantum. And it is a race; we're foolish to think it's not a race. This was my obsession as Secretary of Commerce: how do we balance the puts and takes to make sure that at the end of the day the United States is as competitive and as safe as possible? On the flip side, what if we unleash AI in a way that leads to mass unemployment, or even not mass unemployment, maybe just clusters of unemployment? Frankly, the thing that worries me the most is persistent unemployment. Even if it's only five or ten million people, which is a drop in the bucket relative to some of the numbers you see, let's say five or ten million people concentrated, and they are unemployed for more than a year: that is unbelievably disruptive to our country. We have never seen in our lifetimes that level of labor market disruption, and it will weaken our country. China would love nothing more than that outcome; China would love nothing more than that kind of destabilized economics, society, and politics. And then the inevitable result is massive regulation, which will slow things down. So we have to think about that.
And then finally there's slowing down AI, which, I still live in DC, is what you hear most. The ideas out there, in my word, are incredibly lazy and uncreative. The things you hear about all the time: universal basic income, which I don't think is a good idea, work is more than money, we can talk about that; stopping or slowing down innovation in some way, which I think is not good for anybody; and then letting it rip and letting the chips fall where they may, which I think is difficult. So as I think about it, for all the reasons James said and all the reasons we know, standing against innovation is not the right move. It's not what's made America great, and it will not allow us to achieve what you just described in terms of healthcare advances and productivity enhancements. You can disagree with me, but I don't think the AI companies are the culprit, actually. They're inventing an exciting new technology that, when we get through the transition, will lead us to a better, healthier, more productive, prosperous economy. So I don't think you're the culprit, and I'm not just saying that because you're sitting here and I like you. The culprit, if anything, to use your framing, is the system of education, workforce training, and transition support that we have in this country, which is old-fashioned, too slow, not flexible, and not employer-led. And so as a policymaker I would say: how do I harness the benefit of AI but make sure we have systems that allow us to bring everybody along, and not result in pockets of persistent unemployment? Now, I think that is possible. As I sit here, I don't know exactly the solution, but I'm ready to get to work to figure it out.
Now, I'm a former governor, so I think the answer is working with governors in states to pilot new forms of transition support and new forms of employer-led job training. The prior panel talked about what we know for sure. We know for sure we're moving into a period of greater labor market churn. We know for sure the half-life of skills is shorter. And we know for sure we don't have an education system or an employment transition system to support that. So let's get to work and try to experiment with some new approaches, so that we can push forward with AI but not let the bottom fall out from under Americans.

- Yeah. I just wanna follow up on the question about the states in this regard, because I believe at last count we had some thirty-plus federal training programs, none of which seem to work.

- Correct.

- Well,

- That's a little strong, but

- Mostly right.

- Well, mostly

- Right. I'll make the provocative statement.

- Yeah, yeah. Fair.

- None of which seemed to work. I used to do what was called trade adjustment assistance. With TAA, when we would do a trade agreement, you had to have a plan for retraining the displaced workers. And so, you know, our federal system isn't particularly good at this. Do you think the answer is to push it down to the states, because what it takes in Rhode Island is different than in California, and what it takes in Texas is different again?

- I do, but I will say, if you put any governor up here and ask that question, the governor's gonna say, give it to the states. I was a two-term governor, and I put a lot of effort into workforce development, higher education reform, and K through 12, so this is something I can speak to with some authority. And the Secretary isn't wrong: the country spends hundreds of billions of dollars a year, at least, on a patchwork of deeply fragmented, mostly ineffective worker training efforts. In fact, I would also say we spend a massive amount of money on public higher education, much of which, not at Stanford, but much of which isn't really meeting the needs of this economy. And you don't have to believe me, just look at the facts: 40 percent of people don't graduate; they wind up without a degree and with a lot of debt. Most of this workforce money, by the way, is run through governors; the federal money is run through governors. There are many reasons it hasn't worked, and I've seen all of them up close. It's hugely fragmented, it's supply-oriented, and it is not driven by the demands of industry. And industry, to be fair, hasn't engaged the way it needs to, right? I don't think you're the culprits, but I do think businesses in America have to change the way they operate and get much more engaged in the business of continual worker training if we're gonna get through this. I think it ought to be in the states; every labor market in every state is a little bit different. But if I could wave a magic wand, I would push for some outcome-based funding of higher education and worker training initiatives, because right now literally every drop of money that goes into higher education and worker training is attendance-based: the student shows up, the institution gets paid. You get what you incentivize, and we incentivize attendance. So yes, I think it should go to the states.
Yes, I think governors are in the best position to start innovating. And the last thing I'll say is, it's not just worker training, it's transition support. Right now, every drop of workforce training money goes to people who are already unemployed. Because of AI, we have predictive capabilities; let's get to the business of working with people before they're unemployed, and let's experiment, like TAA had, with wage insurance and transition insurance, helping people to get from one job to the next.

- Yeah. So Rishi, you did quite a lot in Great Britain around these issues of training and how one educates. I might just say, we all have had our problems with federal, or with national, programs. The problem is that people are gonna be punished even more by the rapidity with which technology is moving, and so we don't have the opportunity to keep making the same mistakes over and over. I know you were very involved in some reform efforts when you were prime minister, so can you talk some about that?

- Yeah. Maybe taking a step back, and thinking back to how James anchored us and the long list of to-dos that you gave Gina and me to think about at the beginning of this. The first thing is that it's an incredibly fortunate time, actually, to be leading a country or an organization, because of this technology. If you think about what Western European, I'd say Western, leaders are grappling with, it's a feeling amongst their people that there hasn't been enough growth, right? They haven't seen their living standards rise quickly enough over the past several years. Combined with that is a sense that things are not working as they should; government is not able to deliver the change that people expect. And this is common to multiple countries, and it's certainly a pervasive feeling in the UK. And here we are with a technology that has fallen into your lap as a policymaker, as a leader, and which, if properly used and harnessed, can actually answer both of those major challenges for you. So the first thing is: it's actually a lucky time. If I was in office today, I would be thinking, that's great news. What that then leads you to is the need to prioritize, to make this the thing that you focus on. And that leap simply hasn't happened, I think, in most capitals around Europe, and probably similarly here, where this should be the most important thing. Most CEOs have got it. I spend a lot of time talking to CEOs of big companies, and they get that this is the thing they need to spend their time on. They need to figure out how it's gonna disrupt their business, how they can take advantage of it, how it's gonna change their industry. And they are allocating huge amounts of their personal time as CEOs to figuring out those questions. Has that happened in the political capitals around the world?
It hasn't, and it needs to; otherwise we're not gonna get the benefits of this. So that's the first thing to grasp: the urgency of the moment and the need for this to be prioritized. Then you get into, touching on what Gina said, what is holding us back. Everyone in this room probably is pretty excited and optimistic about AI. James and I were recently in India for the AI summit, and in India, I think the stats are that 92 percent of the population is positive and excited about AI and thinks it's gonna be great for them and the country, right? You ask people the same question in the UK or the US, and it's less than half of that. AI is less popular than almost anything else you can think of, including politicians from all parties, as Gina and I both know.

- It's nice not to be last.

- Yeah. And that is a real problem, right? The overriding emotional feeling most people out there, not sitting here in Silicon Valley, have about AI is one of anxiety, of fear, of uncertainty. And so the primary job of a leader, of a policymaker, is actually to be much more candid with their countries about the change that's coming. Then they need to bring some clarity, some vision, to the benefits that are gonna accrue to their citizens. And then, crucially, and this gets to what Gina was saying, reassurance: reassurance that the transition they are about to embark on is gonna be one that's good for them and good for their families, and that they can have confidence they will have security throughout it. That has got to be the overriding focus, 'cause if we don't get that right, we're gonna get a backlash. Then all the benefits James talked about, all the things we know this incredible technology can do, will be regulated against, will be campaigned against. And you can already see it: you've got populists left and right coming at AI from both sides. So the task is to get ahead of those challenges and bring people with you on the journey, so that we can get to all the opportunities. Now, specifically on this training point, and I know we will dive into some of the employment impacts in a second, I think one of the crucial things we can't do is be complacent about these impacts. There is a little bit of a tendency, not unfairly, to say, you know what, these predictions that technology is gonna lead to less labor demand have a long history and a very poor track record, right? And that is the case.
I mean, Keynes, the famous British economist, wrote a very notable essay in 1930 called "Economic Possibilities for Our Grandchildren." He did this great projection of productivity growth and concluded that, because of all this technology-based productivity, everyone was only gonna work three days a week a few decades hence. And Keynes was right on many things, and in fact he was right on the productivity growth part of his analysis, but he was wrong in not seeing that people would want new things they didn't even know about, so new jobs would be created. And that's indeed what happened. So people take a lot of comfort from the history here: over time, technology has led to more prosperity and more jobs, and it's kind of worked out okay. Now, James made a really interesting point at the beginning. I come from an investment background, and there was a very famous British investor called John Templeton who said that the most dangerous words in the English language are "this time it's different." And I generally would tend to agree with him. But I worry, as someone who is a free-market, pro-innovation techno-optimist, that this time it may be different, because, as James said, the pace of what is happening and the breadth of what this technology can be applied to make me worry about the short-term impacts. And that gets to this question about transition and re-skilling. One very practical thing we did when I was in government is we changed our student finance system, right?
My thesis, alongside colleagues in the government, was that the old model, where you provide taxpayer-subsidized finance for people to go and get a university degree, three years in the UK, four years here, and then off you go and start your career, is not the right model for the future. And we said, look, if we're gonna provide tax-advantaged student finance to educate and skill people, which is absolutely right and great public policy, it's wrong to limit that to a one-off three-year degree. So what we changed it to is a lifelong learning account. The idea is: this is the cost of the three-year tuition that you would get on these terms subsidized by the government; we're gonna give you that account with that amount of money in it, and we're gonna allow you to dip in and out of it over your entire career. So you will still get the same preferential student finance, but now you could use it for a one-year course at the beginning of your career, save the rest, and then go and do another one-year course later on when you change careers, or again a third or a fourth. It just gets drawn down on a modular basis. And that's just about to be rolled out; it's taken a few years to completely reorient the system. If, as Gina described, we're gonna be in a world where we need to help people transition mid-career and be more flexible about their re-skilling, this very practical step of changing our student finance system to support that type of education is, I think, an important thing to do. And I know, James, you've talked about this as well.

- Yes, I have. I should also say, I wanted to commend the Prime Minister on something which I would hope most leaders would embrace: the Prime Minister was actually the first to convene a global summit on AI safety. The reason I point to that, Prime Minister, is because in some ways both things are true: there's extraordinary opportunity, and there are extraordinary complexities and risks. I sometimes worry that our political leaders are often on one side or the other of that, and act as if the other side doesn't exist. Both things are true, so I would hope that most policy makers and leaders think about solving for both. There's a productive tension in that, and I think the Prime Minister exemplified it when he was prime minister.

- I was gonna ask you two, who have had these leadership roles: an important leadership role these days is actually in the private sector, because governments can't do it alone. The private sector is really a major player in the space we tend to think of as governance. So you get to ask now: what would you like to have the private sector do to support leaders with these kinds of challenges? It's always, what can Washington do, what can London do. But in fact the private sector's got a big role to play in this. So why don't we start with you, Rishi: what would you say to the private sector?

- You know, I'm gonna actually start with a very simple one, which the previous panelists touched on, and that is about helping us understand what is happening. That might seem an odd thing to have to say. When I was Chancellor of the Exchequer, when I was finance minister during COVID, we were faced with this extraordinary situation: the economy was undergoing the most dramatic disruption it had pretty much ever encountered, and, as we talk about the pace of AI, this was happening day by day in front of our eyes. And the official statistics that you rely on simply aren't useful, right? When you are trying to make policy as you're shutting down an economy, and people are reacting to interventions telling them what to do, all of that official stuff you get is far too slow, and it's not gonna help you make those decisions. It was a brilliant opportunity to change our approach and work with the private sector to provide information. It seems obvious, but it didn't exist, right? Gina will know this from her job: you sit in these offices and you have all these very official statistics that your officials use to brief you and base policy on. But what we did during COVID was figure out, well, we need to get the Google Maps travel data, because we can really measure economic activity that way. We need to get the OpenTable reservation data to see how the hospitality industry is recovering. We can get all the credit card data day by day, and we need to use that. So we built a completely different dashboard inside the Treasury to help us think about the right economic policy. And listening to the previous panelists, it's clear that we need to do a version of that again, because the official statistics, the knowledge and information that exists in government to understand what is happening, I don't think are there today.
But it's sitting in James's company, it's sitting in LinkedIn, it's sitting in the Anthropic data, and we just need a much clearer effort amongst all the private sector companies to understand that they have an obligation and responsibility to package that up and work with each other, so that policy makers like Gina and myself and our colleagues have the best possible information on which to base some pretty far-reaching policy choices ahead of us, and understand the pace of change that is happening. So that would be my, I think, reasonably uncontroversial ask.

- I agree. You were actually Secretary of Commerce, and you had a reputation for working with the private sector very well. So,

- I agree fully with that. As Secretary of Commerce, I oversaw the Census Bureau, so all of that data, and the Bureau of Economic Analysis, which is all the GDP data. So I have a sense of government data quality and speed, and I concur completely. And it's not just from one company. I think it would be incredible if all the AI labs could get together and share that data, so that we can make decisions about usage, capability, et cetera. In addition to that, I would have two asks. First, as a policymaker: government is by definition reactionary; government reacts to the people, their anxieties, their needs, their whims, et cetera. The level of anxiety among Americans right now about AI is high and getting higher daily. I'm doing some work in AI; I am talking to governors about how to get ahead of it. I'm encouraging them to become AI-ready states, you know, make the changes in their state so they can be AI ready. Many of them, who are my friends and were my colleagues, say, okay, Gina, I'll work with you, but can we not use the word AI, because my constituents hate it so much. So my number one ask would be: could you, not you specifically, but could you CEOs stop talking about the 20,000 people you're laying off because of AI, and making grand announcements about how there's gonna be no white collar work in three years? Because, by the way, you can't be sure of that. And could you try to paint a clearer, more concrete picture today of the new job opportunities that you think AI will create? Because today all people are confronted with is: I'm gonna lose my job, there are no jobs for my kids. That is today. That is anxiety. So what are the jobs that will be created? Paint that picture in concrete terms. Because if we don't turn the temperature down in this country, if we don't somehow speak to people's anxiety, we're not gonna make good policy decisions, because politicians are reacting to that anxiety.
And then my second ask is: get in the game and make investments to create a workforce system and transition system that will get this country through it. I also am optimistic about what AI will create in some number of years. We are in the first inning of AI. In the eighth or ninth inning, I feel great about it; I'm worried about the middle. So put your shoulder to the wheel with policymakers, and quite frankly your wallet, to help figure this out, and change the way you do business. No one wants to hire a retrained worker. You try to go to a company and say, I've been recently retrained by a government program; it's not gonna go well. So we need businesses, and principally, I would say, AI companies. Yes, share your data, but help us recreate a modern system in America that is effective at transition and workforce, and change the way you recruit, hire, retrain, and redeploy, so that the average American does have a chance.

- Condi, can I? Yes, please. I was gonna come to James before he replies, 'cause Gina just spurred me on something. I think one thing that is crystal clear, and you and I have talked about this in the past, James, is that we need to build AI literacy across our population, right? We've talked a little bit about our education systems and how to reform them, but you have to remember that if you fast forward to our 2030 workforce, both here and in the UK, 80% of that workforce is already in work, right? And so, as we heard from Karin earlier: what's the fastest growing skill tag on LinkedIn? It's AI literacy, across all domains. And so we need to build this general AI literacy in our populations. I think Jensen said it first, and said it well: you may not lose your job to AI, but you are definitely gonna lose your job to someone who's really good at using AI. And one thing where I think, James, your company and others are being helpful, but I'd love to get your reflections on it, is partnering with governments. This is what we've done in the UK, together with you and others: providing this AI literacy, this AI fluency training and upskilling to our existing workforce. And it would be interesting to hear from you, 'cause you must have done this in different countries: who is getting this right? I think Singapore is an example that people in the audience may be familiar with, who do this really well, that we could all learn from. But when we've got an in-work population, we need to build general fluency with AI tools. Actually, someone put it very nicely to me. They said AI fluency is the kind of driving license of the modern workforce, right? We need everyone to have that.
You are well placed to help us do that.

- So, James, to you, because now not only have I called you the culprit, but I've set them at you to solve their problems in the private sector. How do you think about it?

- Not that you aren't doing that, but, you know, are we doing it well enough or quickly enough? Or who's doing it, you know, particularly well?

- Well, I think a few responses. I certainly, my colleagues and I, take the challenge that you're posing. In fact, the two of you in particular have raised challenges to us over the last couple of years, and we've tried to take some of our cues from that, and there's a lot more to do. But maybe I can describe some of the things we're trying to do. On the skills front, one of the things I'm actually very proud that Google has done over the last decade, around the world, is the Google Career Certificates, which have now trained over a hundred million people around the world, 13 million in America. And now we're trying to reimagine what that looks like in this AI moment, 'cause a lot of those skills over the last 10 years were mostly things like coding and various digital skills and so forth. But we know that that picture's changing, so we're trying to reimagine that. And we've made a commitment. On the literacy point, which, Minister, you've raised with me and my colleagues in the past, and Secretary, when you were Secretary, you raised with us too, we've made a billion-dollar commitment over the next three years to literacy. And we recently added a piece to that, which is to do AI literacy for all teachers in America, all six million of them, K through 12; it's a program we just announced a couple of weeks ago. So there's a lot to do on skills and literacy. But in some ways the literacy part is a little easier. What's harder is the skills question, which is: what are those skills? I think that's a real question. There's certainly no lack of appetite from us to work with government and policymakers everywhere to think through that question, but it's a real question. Let me mention a few other things we're trying to do.
One of the things we're very cognizant of, at least at Google, is the fact that this technology is so energy intensive. First of all, we're working very hard to make that transparent: we publish papers on it, and we're trying to improve efficiency. But the other thing, and we made a few commitments on this in the last few weeks, is to not have this be a burden for Americans. So we're bringing our own power, we're investing in our own power, so that this doesn't become a burden for citizens wherever they are as this technology uses energy. We're also investing, probably more than most, in alternative sources of energy: in nuclear, in small modular reactors, in hydro, and even in geothermal for data centers. So we take this moment to also effect some of the transitions that are fundamentally important. These are some of the things we're doing, but it's not enough. One of the challenges we see, and maybe this goes back to your comment, Secretary, about states: we have to find more ways to work with states, 'cause the unevenness you see around the country is really quite striking. When you go to Arkansas, to California, to Maryland, it's so uneven. So how do we make sure that these opportunities and these investments show up in many more places, in many more communities? That's something we have to work on. Let me say one more quick thing, and this is more on the opportunity side. Some of the most compelling opportunities from AI for society are things you might call AI in the public interest: a whole bunch of applications and solutions that normal commercial mechanisms won't get you to. So we're trying to do some of that.
Even the work on languages: I can promise you, doing a thousand languages does not make commercial sense. At some level you could argue, okay, how many of the people who speak all those languages actually have money? But I think there are many of these solutions and applications that are gonna benefit society, where we have to go beyond purely commercial interests if they are truly to benefit society and actually help improve it. For us, I'm lucky that we're at a company where I think most of us take the founders' mission very seriously, which is the idea of organizing the world's information and making it universally accessible and useful for everybody. So those are some of the things we're doing, and I think we need more of us in the private sector doing those things to bring this technology to benefit everybody.

- Thank you for that. But let me ask the two of you. Before you were leaders, because you're in a democracy, you had to be politicians. And sometimes politicians don't worry about the literacy piece. And I think one of the concerning things, and Gina, you mentioned this anxiety about AI among Americans at this point, and I suspect you have some of this in Great Britain as well: it's gonna take my job, it's gonna raise my electricity prices, and at its very, very worst, it's going to come at me like Tron. I watch Hollywood pretty carefully, and it's very interesting that the first AI-powered robot of the modern era is actually a killer robot. Might've been nice if it was a robot that dressed your grandmother instead. But my point is that our politics is already dominated by hyperbole and by going to our corners; the loudest voices get all of the clicks, and being disagreeable is better than compromise. So our political lake, if you will, is already filled with all of that, in ways that are gonna make it very difficult to have a reasonable conversation about something this transformative. And we will have elections, and we will have debates. And it's already starting on the left and the right: populism, and AI is your enemy. How do you think about the ability to have a reasonable conversation? We've just had a very nuanced discussion with the last panel about what actually is happening to the job market and how we might think about it: tasks versus jobs, how we might think about skilling. How are we gonna have that discussion in the charged political environments in which we find ourselves these days?

- It is hard to do nuance as a politician, having tried, sometimes successfully, sometimes very unsuccessfully. So you just put your finger on the thing that worries me most about AI, which is that in America now, we have never been more divided. The level of political violence is real and growing. I have friends who are running for reelection who say, and they've been doing this for 20 years, they won't do a parade; it's too dangerous. You said everything: it's hard to have a real discussion, the political environment is difficult, and it's hard to make any good decisions. We are frustrated with stagnant wages, a cooling labor market, increased prices; populism on the right and the left is on the rise. So in other words, it's already tenuous. And now we're gonna add the labor market disruption of AI. I sometimes think about AI in comparison to what we call the China shock, when jobs left US manufacturing to go to China. Now, if we're honest with each other, and this is something a politician would never say, globalization wasn't all bad, right? Wages went up in America, productivity increased, we became a services economy juggernaut. But, like I said before, we failed miserably at that transition. Three or four million people lost their jobs, and they were concentrated in a handful of states, including mine, Rhode Island. And we are still paying the price for that. The people who were left out were justifiably pissed off, and that's what's been playing out in our democracy. If we just add this AI disruption to that tinderbox, I don't know that we can handle it, truthfully. And I am Ms. Practical, but I don't know if we can handle it. Which is why, when you were saying it's not in your economic interest to do the thousand languages:
when I talk to CEOs, as I do often, maybe they don't think it is in their immediate short-term self-interest to lean into some solutions to get us through this AI transition. Except that it really is, because of the consequences of getting this wrong, layered upon everything else; it's not like we have a stable, highly functioning political system right now. So when I think about it, I think this is, for America, an all-hands-on-deck moment. Someone in the last panel said, and other people have said, the future is ours to write. I don't believe mass unemployment is inevitable. I really don't. That's our choice. And so as I look to 2028, I can only imagine how terrible that debate is gonna be around AI policy, unless we start planting the seeds right now, in different states around the country, of new systems: like I've already said, transition support, different kinds of training, employer-led training, commitments by companies to do redeployment instead of mass layoffs. Let's start proving out that there are solutions. Because if we don't do that, it's gonna be a very ugly debate stage, with each candidate trying to outdo the next in outlawing AI. And that's not gonna end well.

- You know, Condi, you were talking about AI and the killer robots. It reminded me of a conversation I had with one of my girls. I have two young girls, and they're starting to use these chatbots to help them. My elder daughter is obsessively polite when she talks to the AI: please, thank you, that was so lovely, et cetera. And I said to her, you know, it's lovely, but you don't need to be that polite. You can save time, it's not a real person, and by the way, it costs James a lot in compute, and he'd rather not have to process those tokens. And she said, you know what, Daddy, if AI takes over the world, I want to have been nice to the AI. And I thought it was good advice. But look, on this broader point of how we manage this: you said kind of left and right converging, and it was interesting, we were chatting about that. Bernie Sanders, and this is more Gina's domain, Bernie Sanders and Steve Bannon probably don't agree on very much, but they agree on AI, it seems, and that AI is not a good thing, to simplify. So I think this is a problem. And you are also right: politics does seem to be in a space which rewards painting in big primary colors. And Gina and I, I experience the same as you: where is the space for nuance in that debate, or in that type of political discourse? But look, I remain an optimist on this. I think there is a vacuum. Yes, people are anxious, but that's because they want leadership, right? And I think there is an opportunity for people to lead.
And my critique is, I don't quite see that people are stepping up to the plate and putting this as the central issue where they can lead. And that means, and I talked about it right at the beginning, speaking with candor to your citizens about the change that is coming: not pretending that things aren't gonna change, that we can stop it, or putting our heads in the sand. You have to be candid about the change that's coming. And I think you need to provide a vision of the benefits it is gonna bring to people and their families and their lives. And then you need this reassurance about how we're all gonna manage the transition in a way that's good for people's families. And actually, if we could point to people who were doing that and it wasn't working, then I would be more pessimistic. But I'm not sure that we've actually seen that yet in the way that we need to. So I think that if we can do that, I'm optimistic that we can make this a very positive thing for our countries. And we absolutely need to, for all the reasons that we've articulated. But I'd also say, look, rhetoric aside, and politicians are good at that, show, don't tell, right? If you think about it, in most people's day-to-day lives, they're not worried about a lot of these other things, but they interact with the state lots, right? There are lots of day-to-day things that people are interacting with the state, the government, on: the education system, the healthcare system, the welfare system, getting a new driving license, getting a passport, whatever it is, your local council with the bin collections, have you paid your local property taxes. There are a gazillion things that governments need to prioritize making better, faster, and more accurate using AI.
And if people can see that their day-to-day lives are improving as a result of this technology that James and his colleagues are developing, then that is gonna help us, I think, more than all the rhetoric in getting people to the right place. And that's where we need urgency, in the public sector, right? That is within the government's control. Are they seeing that their kid is learning faster at school? Are teachers feeling that this has reduced the burden on their day-to-day lives, and that everyone is having a much better time educationally? Are people seeing healthcare improve for them, or the costs coming down on the billing? Are they getting diagnosed earlier? Are we starting to see some of the promising new treatments that Demis and Isomorphic Labs are helping us develop? Shout out for the UK there. But I think there's a lot of low-hanging fruit: just making our day-to-day interactions with government better. And show, don't tell, is how we're gonna start to change people's minds on this. And as I said, I remain an optimist. You can't spend time with James and his colleagues, you can't spend time here at Stanford and in this part of the world, and come away not feeling inspired and excited about the future. And I'm completely on the side of those who think the future is ours to design, right? We should not be fatalistic about this. Yes, there are challenges; yes, there are things that we're gonna have to manage. But if we get this right, this is gonna be one of the best times to have been alive. This is gonna be one of the best times to have been in government, you know? And our job is to seize that opportunity and grab it with both hands. And I'll maybe leave with this one final thought. In World War II, Winston Churchill received a note from Alan Turing and his colleagues at Bletchley Park, which is where I held the first AI summit, for obvious reasons.
And this note came to Winston Churchill, and it said: we think we can crack the Enigma code that the Nazis are using, but we need all these extra resources. And he read this very compelling note, put a big red stamp on it, and sent it back to his officials. And that stamp said: Action this day. And thankfully there was action that day, and the rest is history. I think we need a bit of that attitude, right? We need action this day from policymakers, from leaders, to take this thing on, get all the benefits from it, and make sure that we get our countries to a better place. And that's what I think we can hopefully take away from today.

- Well, what better way to close our conversation than a positive quote.

- We need to have action, though, absent the crisis. It's easy to have action this day in the middle of a crisis; my worry is, let's not sit here and wait for the crisis before we have the action. President Clinton, who was a mentor of mine when I was governor, would say, you know, Gina, the thing in shortest supply in politics is good ideas. I agree fully with that. So we need some new ideas here. We have a higher ed system designed for the World War II, 1950s economy. We have unemployment insurance that was created a hundred years ago, in the Great Depression, and hasn't been touched since. We have, as Condi said, hundreds of billions of dollars of taxpayer money spent on mostly ineffective workforce training programs. And we have companies that are doing hiring, recruiting, and retraining basically the way they've been doing it for a long time. So let's come up with some ideas. Let's be practical; that's also in short supply in politics, right? What would you do if you had a problem? Figure out a few ideas, try them, show that they work, take action this day. I think it's possible. And if we do that, I also agree it'll be a bright, prosperous, healthier future.

- Action this day, and all hands on deck. I think those are the takeaways. Will you join me in thanking our wonderful panel?

