Jon Hartley and Jay Bhattacharya discuss Jay Bhattacharya’s vision for the National Institutes of Health (NIH), running the NIH as an innovation accelerator, replication in the sciences, measuring scientist productivity, and the new NIH policy reducing animal testing.

Recorded on August 27, 2025.

- This is The Capitalism and Freedom in the 21st Century podcast, an official podcast of the Hoover Institution Economic Policy Working Group, where we talk about economics, markets, and public policy. I'm Jon Hartley, your host. Today my guest is Jay Bhattacharya, the current director of the National Institutes of Health, the NIH. He was previously Professor of Medicine, Economics, and Health Research Policy at Stanford University until March 2025, when he went emeritus to join the National Institutes of Health. Welcome, Jay. Welcome back.

- Thanks, Jon. I still remember your help packing up my office in those last days. Those were kind of sad and bittersweet in some ways. It was very special.

- Very special. And I miss you walking around in your Stanford sweatshirt.

- I had to put a tie on for this job.

- Well, great. I know you bought a bunch of new suits and ties, is my understanding from your interview with Peter Robinson. Maybe it's a new look, or a new permanent look. I'm super excited to talk with you. You've since moved across the country to Washington, DC. I know NIH is based in Bethesda and you're living in the DC area. I'm curious, first, how has this transition been? And can you explain to us a little bit about what NIH does? I think a lot of people are familiar with how it funds a lot of scientific research, but can you explain at a more granular level what NIH is doing, what's going on in the halls of NIH on any given day?

- Sure. All these questions at once, Jon. So let's start with the first one. The transition has been shocking. It's a very different job to be a professor than to be the director of the NIH, and personally it's been very dislocating. I had been a professor at Stanford for 25 years, almost 40 years at Stanford if you include all the time I spent there on my education. So it's been quite the transition. But the NIH is an incredible organization. It's more than a century old, and if you look at almost every single advance in biomedicine that we take for granted today, almost every single medicine that we use, all the knowledge we have in biomedicine, there's some role the NIH has played in it. To lead an institution with that incredible history is really quite humbling. It's faced a lot of challenges in the past few years, and in some ways it's funny that I am the NIH director, because one of the previous directors of the NIH, Francis Collins, didn't like me very much at the time, I guess. We've since broken bread together. I was a big critic of the NIH during the pandemic, but at the same time I have a great deal of admiration for its work. I've been NIH funded my entire career as a researcher, and I was a reviewer for NIH grants for almost two decades. I love the NIH. So to have the ability to lead it, to help it address the problems it created for itself during the pandemic, to restore public trust in it, and to have it lead into the next century of innovation in biomedicine: that's my job.

- That's terrific. So I'm curious: you've laid out a vision for NIH. Tell us, what is your vision for NIH, and where did the Biden NIH go wrong?

- Okay. So I was thinking about how to do this job, and as I said, I view my job as addressing a crisis of public trust in the NIH. There's a Pew survey, I think from 2024, that found that one in four Americans think scientists do not do things that are in the interest of the wellbeing of the people. Just think about that. The NIH relies on essentially broad public support, and it's enjoyed pretty bipartisan support. Both Democrats and Republicans have supported it pretty substantially. In fact, I just saw a paper suggesting that Republicans have supported it more than Democrats have, in terms of votes for funding increases and so on over the last two decades. So what you have here is an institution that's broadly loved, and yet the pandemic ruptured trust in it. In terms of creating a vision for it, rather than focusing on what went wrong, I want the reforms of the NIH to restore the NIH to its path toward its mission, which is to support research that advances the health and longevity of the American people. Now, if you look at the state of American health, it's not good, Jon. Right? In a dozen years, we've had no improvement in life expectancy in this country. That is absolutely shocking. When I was a grad student, I remember reading about the rise in life expectancy in the United States and elsewhere and just marveling at it. I'm a naturalized citizen; I came to the US when I was three, and I was born in India. I think the average life expectancy of an Indian boy born in 1968, the year I was born, was something like 48 or 50. And now I'm 57.
So what you saw was a steady rise in life expectancy in the developing world and the developed world, and I thought that would go on forever. Now that we're on the other end of a dozen years of no increase in life expectancy in one of the richest countries on earth, the United States, we're in a crisis, right? In a way, the NIH has not done its job, which is to support research that advances life expectancy and health. We have chronic disease crises all over the place. Just look at the rates of obesity, of type 2 diabetes, of cancer incidence, of psychological maladies including depression and anxiety at scale. There are huge problems in the health status of our country. We are not a healthy country, Jon. So that's the first element of my vision: I want the NIH to focus on and fund research that reverses those trends. I think the Make America Healthy Again movement is an amazing opportunity for us to refocus the activities of the NIH toward things that actually do advance the health of the public. Now, we've seen in the past dozen years huge advances in knowledge about genetics and other areas of biomedicine. But if those don't translate over to better health, then what purpose do they serve? Knowledge for knowledge's sake is not worth funding, and it will not continue to garner the support of the American people if that's all it does. So item one: we have to make America healthy again, and the NIH is going to play an enormously important part in that. Okay, more questions? Because I can keep going; I have five items.

- No, keep going. Keep going.

- Item two. We have to solve a longstanding problem in biomedicine, one that's been known for a while. My colleague John Ioannidis at Stanford, who I think is the most highly cited living scientist on earth, just an incredible man, wrote a paper in 2005 with a title that goes something like "Why Most Published Research Findings Are False." Absolutely mind-blowing title for a paper. It's five pages long. Any young graduate student looking to make a splash should emulate this paper, although you can't use that title, because he already thought of it. And it's very convincing. Basically, the assertion is that science is hard, and the standards we have for publishing in science do not guarantee that the things that are published are true. Worse, we have publishing standards that guarantee that a large part of what is actually true will not get published. So you have both problems in biomedical publishing, false positives and false negatives, where large chunks of what's published, when independent teams finally get around to looking at them, don't find the same thing. A professor in medical school once told me, "Jay, half of what you're learning is just false." Jon, half of what you're learning is just false. It's as true for econ as it is, I think, for biomedicine. And the reason why is that economics is hard and medicine is hard. Science is hard. As a scientist, you can convince yourself that you know the truth. You do your analyses, but very often you make hidden assumptions that you didn't realize you were making. There are unexamined assumptions baked into what you do, and you end up convincing yourself that you're right.
And if you're a good writer, you can be very convincing, even to peer reviewers. And the statistical standards we use are not strong enough to guarantee that false ideas won't get published with statistical evidence suggesting that they're

- True.

- So what you have, then, is a replication crisis caused by the fact that science is hard. You build on top of that a reward structure in science that says: if I can publish my papers in a top journal, then I am an excellent scientist, and I'll climb the social status ladder. And that creates incentives to write papers that are, in some cases, fraudulent, right? Because all that matters is that you publish a hot paper in a top journal. If you can get away with it, you might cut some corners here, cut some corners there.

- There are the Francesca Gino and Dan Ariely sort of cases, where it's outright fraud: somebody actually, totally made up their data. But I guess there's the more pernicious thing, which is just running, say, a thousand regressions and only reporting the one that worked. That's the p-hacking thing. In reality we know that some hypotheses are naturally going to turn up a false positive from time to time. And so if we end up only publishing those sorts of studies, we end up in a place where we publish things that just accidentally happened to look true. We found those small instances and hunted them all down, even though they're not really that robust.
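To see the scale of the p-hacking problem Jon describes, here is a minimal, purely illustrative simulation (nothing NIH-specific; sample sizes and seed are arbitrary choices). It runs 1,000 "studies" in which the true effect is exactly zero, and at the conventional p < 0.05 threshold roughly 5% of them "work" by chance, so a researcher who runs a thousand regressions can nearly always find one to report.

```python
import random
import statistics

random.seed(0)

def two_sample_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

# 1,000 "studies" where the true effect is zero:
# both groups are drawn from the same distribution.
n_tests, n_obs = 1000, 50
false_positives = 0
for _ in range(n_tests):
    a = [random.gauss(0, 1) for _ in range(n_obs)]
    b = [random.gauss(0, 1) for _ in range(n_obs)]
    if abs(two_sample_t(a, b)) > 1.96:  # roughly p < 0.05
        false_positives += 1

print(false_positives)  # typically around 50, i.e. ~5% of null tests "work"
```

Reporting only the significant handful of these, while discarding the rest, is exactly the selective-reporting mechanism discussed above.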

- Yeah. So think about the scope of the Ioannidis allegation, which is essentially that much of what's published is not true, alongside the potential for fraud. I actually think the first is much more important, much more pervasive. Most scientists aren't committing fraud; the vast majority are not. We're just telling ourselves that the world works a certain way, convincing ourselves, and publishing it, but the world doesn't work that way. Science is just hard. Now, they're linked problems, because if you have a system that rewards publication in top journals as a measure of social status and advancement in science, you're going to get the fraud. But the harder problem is sifting between true and false in what gets published, and the replication crisis is a symptom of that. There have been dozens and dozens of demonstrations of this replication crisis now, in field after field. Psychology was probably the first big field to have these crises, but now you're seeing them in field after field, certainly biomedicine. In fact, when drug developers decide whether to invest in research in a certain area, they'll conduct their own private replication efforts to check whether the papers they're relying on are true before they make the millions of dollars of investment in subsequent research. Right? In a sense, that's a violation of the basic idea of how science operated once upon a time, which is on trust. If you have a published paper, I would trust that you've reported accurately what you've done.
But if I can't rely on that, it makes the whole advance of science difficult. There are enormous consequences to having a biomedical literature that is not reliable, and I want to solve that problem. I think the NIH has it in its capacity to do so. In my view, there are two related issues. One is that there's very little return to doing replication work. It's often something you hand off to a graduate student as a practice exercise. You can't get it published anywhere, because it's seen as non-original. So that's one problem: almost no returns to actually doing replication and publicizing it. The second problem is that you never get any return for doing the kinds of pro-social things as a scientist that allow replication to happen. In fact, it's the other way around. If someone approaches me and says, "Jay, I want to replicate your paper from 1996," my first paper, well, I'm going to say, "Why do you hate me, Jon? What's wrong with me?" Rather than what I should be thinking, which is that it's a great honor that you're revisiting my not very seminal work from 1996. It gives it more importance, right? And in fact, pro-social actions like making my data publicly available, making my code publicly available, and describing my protocols so that you don't even have to approach me, you can just try to replicate the work based on what I've written, those are all measurable things that are good for science, even if you don't do replication yourself. They're the activities of science that allow replication to happen. And yet we don't reward them.
We measure other things. I'm sure you've gone to Google Scholar; you can see the measures they have, like the number of citations you have, or the number of papers, or the H index, God forbid. As an aside, the H index is a terrible measure of scientific productivity. You know how it works, right? An H index of K means you have at least K papers with at least K citations each. So imagine Francis Crick and James Watson had published just their one big paper in their entire life, the double helix structure of DNA. It gets a million citations and fundamentally transforms how everyone thinks about biology. If you had just that one paper in your life, you'd think you had a successful scientific career, right?

- Right. But an H index of one, I guess, in this case.

- Yes, because in this hypothetical they don't have a second paper with at least two citations. They do have one paper with at least one citation, a million citations mind you, but their H index is one. And they'd have the same H index as someone with a million papers, each with one citation, in the Journal of Irreproducible Results or something. That would also be an H index of one. It's a terrible measure. But notice what those measures are actually measuring: volume, how many papers you have, and influence. Are volume and influence the only real measures of productivity for scientists that we want? It's like measuring a baseball player by stolen bases without also counting times caught stealing, or by home runs without also counting strikeouts. Right? You need a more complete set of measures of the things we actually want. And as economists all know, if we can measure it, we can reward it. You can change cultures around measurement. Right. Let's stick with baseball. Oh, sorry Jon, go ahead.
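The definition given above, the largest K such that you have at least K papers with at least K citations each, is simple to compute, and a toy sketch makes the Watson-and-Crick point concrete (the citation counts below are illustrative, not real):

```python
def h_index(citations):
    """H index: the largest h such that the author has
    at least h papers with at least h citations each."""
    citations = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(citations, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# One transformative paper with a million citations...
print(h_index([1_000_000]))       # 1
# ...scores the same as a million papers with one citation each.
print(h_index([1] * 1_000_000))   # 1
# A middling but steady record easily beats both.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Both degenerate cases collapse to an H index of one, which is exactly the failure mode described in the conversation: the metric conflates volume and influence while capturing neither well.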

- No, I was just saying, I wonder if this has shifted. Maybe there's a reason for it, in the sense that there are just fewer low-hanging-fruit scientific ideas nowadays. There's this whole idea that research productivity is slowing down because we're running out of good ideas, and maybe at some level in academia people are running out of ideas. There are fewer Nobel Prizes given out for incredible singular achievements, whether it's the double helix or Black-Scholes, just concrete single papers. Now it's a career, and it's a literature, or changing a literature. Take the Acemoglu-Johnson-Robinson paper. A lot of people have argued the paper isn't really replicable, but it changed the direction of the literature, and it was the first paper to use applied tools to ask whether institutions matter for growth, and economic growth is a big question. I think it's a brilliant research team. But it's interesting how maybe this is all part of a trend where people are trying to really promote themselves when there aren't as many very concrete findings as there were, say, a couple of generations ago.

- I just don't agree with that, frankly. This is the third part of my vision. I don't believe the world is bereft of opportunities for ideas that fundamentally advance our understanding of the way the world works. If I look at our knowledge of the way the physical world works, I think we're still in our infancy of understanding it, right? There are vast fields of knowledge still to be gained, vast ways of thinking we just haven't discovered yet. What we have is an incentives problem, not an opportunities problem, in my view. We haven't given people enough incentives. That's the third part of my vision: to transform biomedicine so that people want to find those new ideas. We've become stuck in a rut, the Peter Thiel stagnation kind of idea, and I want to fix that. But before we move off replication, I just want to close the loop on it. If you solve those two problems, if you reward people who want to do replication and give them a place to publish it, and if you measure the pro-social activities that encourage replication by scientists, you're going to have replication essentially become the standard of truth in science. It's an epistemological revolution. Rather than asking whether something is published in the QJE or the AER, or in the New England Journal of Medicine or Cell or Nature, as the standard of whether an idea is true or false, what will become the standard of truth is: is the idea replicated? When other people look at the same thing, do they find the same thing you do?
And importance, obviously, will remain a part of this. But if you have important ideas that are replicated, those are the ideas people can be more confident are true. It won't just be that you're published in a top journal. Right now, what we have are authority-based measures of truth.

- An epistemological revolution with replication says that replication is the standard of truth. Imagine you search PubMed in biomedicine, a paper comes up, and there's a replication button. You click it, and all the relevant replication efforts for that paper pop up alongside it, with an AI summary of what they find and links to each replication effort, so you can assess it yourself. It won't matter then whether it's published in a top journal; even top-journal papers won't necessarily replicate.

- Absolutely. Well, I'm totally with you that we need replication, and I've written some papers on how much p-hacking there is in certain RCTs, and on the degree to which pre-registration, for example, reduces the amount of p-hacking. But I'm curious what tools we can use to properly create these incentives. Are we going to have more replication journals? Are we going to have more meta-science-type studies being rewarded? I'm curious because I think there's still this problem, which is that we have this tenure system that generally rewards people for these papers, and it takes time to write a paper, and it takes time to replicate a paper. At some level you still have these tyrannies of whatever the metrics are: in economics, it's the top five, how many top-five publications you have. Maybe you need five or so to get tenure in a top department; similar with finance, where there's a clear top three journals. How do you break the tyranny of the top five, or how do you get the top five to actually care about replication? And how do you get people to be incentivized to do replications, to the point where that could actually help someone get tenure? I'm curious.

- Let me just stick to biomedicine, because that's the job at the NIH. Fixing economics is another question, probably beyond me, Jon, frankly. In biomedicine, the NIH has tremendous power to solve this problem. So first, we can and will start awarding researchers large grants to do independent replication. That will be their job. We'll find the new John Ioannidises of the world, have them compete for grants, and give them large grants. Those are signals to their institutions that these are excellent scientists, and they'll start to gain promotion and tenure, whereas now they don't, simply because the NIH awarding a large grant is a strong signal that this is an excellent scientist, right? Second, the NIH can stand up a replication journal. In fact, we'll do that. We already have the Journal of the National Cancer Institute, so it's not something we're not used to doing. Essentially, we'll have a repository where you can put your replication work, and we can link it to PubMed, which is another NIH product, so you can have that replication button in the search results, ask whether a paper replicates, and get that summary. Those are all things I can just have the NIH do as director, and in fact I'm going to have the NIH do them. We're working on standing up a new office inside the Office of the Director that will coordinate those replication activities across all of NIH. And then

- This is the Office of Research Economics, Planning and Analysis, or OREPA, is that right?

- Yeah, that's the current acronym for the office. But that would be its activity: to coordinate those replication activities with the aim of inducing a culture change across all of NIH. Essentially, an epistemological revolution is what it's really aiming at. And then the third thing is that we can start to develop metrics. There's a whole field of the science of science. My colleague and former student Mikko Packalen and I have written a whole bunch of papers together applying econometric methods to science. And people like Pierre Azoulay, Bruce Weinberg, Donna Ginther, a whole host of fantastic economists, have devoted their attention to this field. We can develop those pro-social metrics; we can expand the set of metrics. Right now we're in a kind of, okay, you're too young for this, Jon, but once upon a time there was this fantastic thing called sabermetrics, the science of baseball research. And there was a guy named Bill James.

- Bill James, I know Bill James. I've met him. I used to work, I spent a year as a football analyst for the Dallas Cowboys. Long story short.

- Are you serious? Okay, that's fantastic.

- He's still going, I think.

- Is he still? I think he works with the Red Sox, I'm not sure. Anyway, yes.

- I think that's right, or he was for a long time. He might have parted ways in recent years, I think. But yeah.

- Well, in the 1980s I used to get his Baseball Abstracts. Apparently he was a night watchman; he would sit there writing books, cranky books about how no one understood baseball because they didn't have the right set of statistics. And I would read him and go, man, he's right. He had this point, which anyone who knows baseball realizes very quickly, that a baseball hitter's main job is to not get out. And yet the standard baseball statistics back then didn't value walks. Well, if you get a walk, you draw four balls and you get to first base. That means you didn't get out. A player with the ability to draw walks is a good baseball player, right? So he built these statistics where he would re-rank baseball players based on a broader set of productivity metrics, and very different players ended up at the top. Wade Boggs turned out to be a big hero of his, and of mine, because he was a Red Sox fan and I'm a Red Sox fan. So we went from that to Moneyball, remember that movie? It was about the Oakland A's, who grabbed all of these statistics, transformed how they made their personnel decisions, and became a small-market team that would win a lot of games every year by taking advantage of these new statistical methods. Right now in science, we are in the Bill James age. There are all these science-of-science ideas floating around, with metrics bubbling up. We are not yet in the Moneyball age. We have to start measuring productivity using the kinds of things we actually want scientists to do, right? Pro-social behavior by scientists. For what fraction of your papers have you shared your data? For what fraction have you shared your code?
That would be the economics version. In biomedicine it might be tissue samples, or it might be how well you write your methods sections, so that people don't have to come ask you how you did it, what your secret sauce was; they can replicate it directly from the description. If you have those metrics, people will start to do these things. And then, if you have this epistemological revolution where what matters is whether your ideas are replicable, whether you're right, whether you have important ideas that actually are replicated, then all the incentives for fraud will drop away. It won't matter how many top-five papers you have; it will matter whether the ideas you have, wherever they're published, are important and replicated. Right? It just changes the whole incentive structure of science. And I think, at least for biomedicine, the NIH can accomplish this.
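The Bill James point, that batting average ignores walks even though a walk means the hitter avoided an out, is easy to see side by side with on-base percentage. A simplified sketch (the two hitters and their numbers are made up for illustration; OBP here omits hit-by-pitch and sacrifice flies):

```python
def batting_average(hits, at_bats):
    """Hits per at-bat. Walks are not at-bats, so they simply vanish."""
    return hits / at_bats

def on_base_percentage(hits, walks, at_bats):
    """Simplified OBP: credits every plate appearance that avoids an out
    (ignoring hit-by-pitch and sacrifice flies for brevity)."""
    return (hits + walks) / (at_bats + walks)

# Two hypothetical hitters with identical batting averages...
a_avg = batting_average(150, 500)         # .300
b_avg = batting_average(150, 500)         # .300
# ...but very different on-base skill once walks are counted.
a_obp = on_base_percentage(150, 20, 500)  # ~.327
b_obp = on_base_percentage(150, 90, 500)  # ~.407
print(round(a_obp, 3), round(b_obp, 3))
```

The same logic motivates the broader scientist-productivity metrics discussed above: a measure that omits a valuable behavior (drawing walks, sharing data and code) will systematically misrank the people who do it.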

- Yeah, absolutely. Well, one thing, for example, that I think would be super interesting to look at, and maybe something you're already doing: the public, and researchers outside the NIH, haven't really had the data needed to measure, for example, how effective NIH grants are. To do that, from an econometric and methodological approach, you'd need the applications for NIH grants, not just the successful ones but also those that were turned away and rejected. Then you could do some sort of analysis: if there was some ranking, you could look at the marginal applications that were just accepted versus the ones that just weren't.

- We're going to have that. That's what this new office I'm developing is going to do. We'll have great econometricians in there, and we'll make the data available. There are some restrictions, because you don't want scientists to send their proposals in and fear that their ideas will get stolen, so there will be some protections. But to the extent that it's feasible, we'll make those data, the raw data, available, so a broader set of scientists can start to make their own metrics as well. I want this field to just blossom, because it's the key to solving all the crazy instances of scientific fraud. You ask yourself, how come it's happening? Why are these prominent people being found to have committed fraud? It's not fundamentally, let me put this carefully, an issue of individual moral failure, repeated individual moral failure. It's a system failure. We have created the wrong incentives in science. If we create a different set of incentives around replication, all the incentives to commit fraud will go away. Because why would you commit fraud if you're not going to get credit for it? Who cares if you publish in a top journal? If you committed fraud, no one's going to be able to replicate your work. You won't get credit for it.

- Exactly. It all comes back to truth. I mean, that's...

- Yeah, it's an epistemological revolution

- That has to be the priority. The truth isn't secondary. There has to be truth before justice, or before whatever other concept. People get into research for all kinds of reasons: maybe they wanna help, maybe some researchers have activist intentions, wanting to shape policy or their field, or to commercialize some set of drugs, whatever it may be. Whatever those incentives are, truth has to come before

- Well, that's the thing. Once you establish that some idea has some likelihood of being true, then you can start to discuss: is it important for public policy? Is it important for the development of drugs? Is it important for changing how patients should behave, or what advice we give patients? Truth is a necessary condition for the effectiveness of science. And that's why the second part of my vision is solving this replication crisis. It's fundamental. We think of science as a hugely productive thing, but it's only productive to the extent that it actually gives people incentives to find true things, true things about the way the physical world works. Absolutely. Okay, let's go to my third point. We spent a long time on this one, but it was important, right? So

- Replication crisis is a big one.

- Okay, so you've already set up the third one, which is the stagnation problem, right? So a few years back, Chad Jones, a fantastic economist, at Stanford of course, and some of his colleagues wrote a paper where they calculated the number of papers in breast cancer research per advance in survival for breast cancer patients, right? And what you find is that if you go back to the fifties and sixties, the number of papers written per improvement in survival of breast cancer patients was actually pretty low. And then there's this flattening of the curve, where now every single month of life-expectancy increase for breast cancer patients involves literally thousands and thousands of papers. Per dollar we spend on biomedicine, we're getting less advance than we used to get. Right? And you laid it out well, actually. You said, well, what if we've run through all the easy ideas and we're just on the flat of the curve into perpetuity, and we're never gonna make any new advances, because everything is just really, really hard? You know, I was reading the other day a very famous empirical physicist from around the turn of the twentieth century who said exactly that about physics. He'd seen a generation of tremendous advance in physics: electromagnetism had been worked out, and of course there was Isaac Newton, so mechanics had been worked out, huge advances in physics. And he said, well, what if we're done with physics and all the rest is just footnotes, you know, finding the 500th decimal of Planck's constant or something. Right? And of course, just a few years after that, there were enormous revolutions in physics.
You know, relativity and quantum theory, just huge advances. You don't know that those advances are around the bend unless you keep knocking at the door, unless you keep trying new ideas out. And, to bring it back to economics, you need to create an incentive structure where those new ideas are allowed to be tried out.

- To complete your analogy: I believe George Bernard Shaw and Einstein won the Nobel Prize in the same year, or were at the same Nobel Prize gala, and I believe George Bernard Shaw was congratulating Einstein on overturning Newton. And he said he looked forward to the day when someone overturns what Einstein thought. So, I guess, it's knocking at that door.

- Yeah. Again, let's leave aside physics, because I don't know anything about physics, but let's stay with biomedicine. I did some work with Mikko Packalen a few years back where we looked at how old the ideas in the published biomedical literature are, how old the newest ideas supported by the NIH are. Okay, so first of all, how do you measure how old ideas are? It's actually weirdly simple. You take PubMed, which has essentially all the papers in biomedicine, and you take all the papers published in 1940. You then do the same thing for 1941, get rid of all the synonyms, and subtract off all the 1940 ideas. What you're left with are the new ideas that were introduced into medicine in 1941. It comes straight out of a complicated, large-scale computer analysis. And then you do it for '42, '43, '44, '45, and what you have is a history of biomedicine: every year, you see all the new ideas that were introduced in that year. Then you go back to the papers and ask, how old are the newest ideas in each paper? A paper that's on the cutting edge will be working on ideas that are zero years old at the time the paper is published; papers working on older ideas will have, you know, 10- or 15-year-old ideas as their newest. Now, if you go back and ask how old the ideas were in papers supported by the NIH in the 1980s, those papers were working on ideas that were zero, one, two years old. Like really the bleeding edge. If you look at the 2010s, what you see is papers working on ideas that are seven, eight years old. We've become way too conservative in biomedicine, way too afraid to try new ideas out. In a sense, we punish failure too much. It's very different than Silicon Valley, right?
So in Silicon Valley, you're a portfolio manager, you have 50 projects. You've got your Stanford MBA, so you know you just need to diversify the portfolio. And the key thing is, it doesn't matter if 49 of those projects fail; if the 50th is Google or something, that's a very successful portfolio. In biomedicine, and at the NIH in particular, it's become much more conservative, in the sense that if an institute director at the NIH funds a portfolio of 50 projects, and 49 of them fail and the 50th solves type 2 diabetes or something, people are gonna complain: why did they fund 49 projects that failed? That's a problem. The result of that kind of reward structure, and I remember being told this very early on in my research career, is: Jay, be sure to hit the bunt singles. You can swing for the fences every once in a while, but you gotta hit the bunt singles, get a lot of papers out. And the problem is that we need people to be willing to try new ideas out, to invest in them, to take risks on them. I mean, Silicon Valley doesn't punish people who fail all that much, right? If they've failed productively, they'll get another chance. We have limited liability and all these sorts of things that give you that. Yeah. But in biomedicine it's the other way, Jon. Look, if you're a postdoc and you don't get a paper out, you're done. You're not gonna get that next postdoc, right? We punish failure too much in biomedicine, and we essentially don't give early-career researchers, who are the font of new ideas, enough ability to try their new ideas out while they're still young. And that's new.
By the way, in the 1980s, the NIH funded researchers with large grants such that their first large grant came in their mid-thirties; now it's in their mid-forties. We basically make it much harder to establish a career in biomedicine nowadays than before. And I think that's the root of the stagnation problem that Chad Jones and his colleagues documented so well. There's another Stanford economist who was on that paper, but anyway, the point is that the stagnation problem is, again, an incentive problem. And we have to change the incentives in how we fund research to allow that kind of innovation to actually
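The idea-age measurement described above can be sketched in a few lines. This is a toy illustration, not the actual pipeline, which runs over all of PubMed with large-scale synonym normalization; the corpus, terms, and years below are invented:

```python
def first_appearance_years(corpus):
    """corpus: {year: [set_of_terms_per_paper, ...]}, terms already
    normalized (synonyms collapsed -- assumed done upstream).
    Returns the year each term first appears in the literature."""
    first_seen = {}
    for year in sorted(corpus):
        for paper in corpus[year]:
            for term in paper:
                first_seen.setdefault(term, year)
    return first_seen

def newest_idea_age(paper_terms, pub_year, first_seen):
    """Age of the newest idea in a paper: publication year minus the
    debut year of its most recently introduced term. A cutting-edge
    paper scores 0; one built only on old ideas scores high."""
    debut = max(first_seen[t] for t in paper_terms)
    return pub_year - debut

# Toy corpus: "termB" debuts in 1941 here (purely illustrative).
corpus = {
    1940: [{"termA", "termC"}],
    1941: [{"termA", "termB"}],
    1950: [{"termB", "termC"}],
}
first_seen = first_appearance_years(corpus)
print(newest_idea_age({"termB", "termC"}, 1950, first_seen))  # 9
```

Averaging `newest_idea_age` over all papers in a given year, or over the NIH-funded subset, gives the trend line described in the conversation: near zero in the 1980s, seven or eight by the 2010s.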

- Happen. So it's more like: how do you make the NIH kind of like a venture capital accelerator, in a sense.

- Yeah, exactly. Exactly. Okay, so this is something I've just recently been working on, and I don't know if this is the only innovation we need, but I think it's part of it. The way the NIH is structured, there are 27 institutes, centers, and offices, many of which are focused on disease areas. The head of each is an institute director, a world-class scientist, and they're responsible for the portfolio of investments the NIH makes in their area. The emphasis in many of those institutes is: we have a great peer review system where you send in a proposal, it gets scored, and they'll look at the top n percent of scored grants, you know, the top 10 percent, and fund those grants, because we have this great peer review system, again with world-class scientists doing the peer review, evaluating each grant. The problem is, and I've been a peer reviewer, I can tell you the emphasis is on methods. It's so easy to say to a new idea, well, no one's tried that, it can't work. Really, really easy to kill a new idea that way. Reviewers are supposed to score innovation, and they do, but they don't really emphasize innovation in deciding what gets the best scores. And peer review will tend to focus on areas that are currently hot rather than things that are promising but not yet hot. Right? The institutes have their own strategic plans that they put out every few years about where the most promising ideas in their fields are. So what I've done is make a change in how the institutes are going to choose their grants. They'll still have peer review; the scientific scores and the peer review are absolutely the fundamental bedrock of how the proposals are gonna be evaluated.
But suppose there are 10 grants that are basically looking at the same idea and are part of the strategic plan, and one or two grants that didn't score quite as well because the peer reviewers didn't really know about the idea, because it's a new idea. The institute directors will have the capacity to pick their portfolio to match the strategic plan. Maybe they'll take one or two grants from the 10 in the hot area and one or two grants from the less hot but very promising area. And they're gonna be evaluated not based on any single individual grant working, but rather on the portfolio as a whole. Does it actually advance knowledge in ways that improve the health of the population? Right? Does it make America healthy again? Does it change biomedicine? Does it produce fundamental changes in basic science that advance biomedical knowledge? Things like that, for the portfolio as a whole, rather than grant by grant. We gotta punish failure less. We also have to think of ways to advance the careers of early-career researchers much more, and that's stuff I also think we can do much better than we have.
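The selection mechanism described here, fund the top-scored proposals as before but reserve a few discretionary slots for strong proposals in strategic but not-yet-hot areas, can be sketched as follows. All proposal names, scores, area labels, and slot counts are invented for illustration:

```python
def select_portfolio(proposals, n_top, n_discretionary, strategic_areas):
    """proposals: list of dicts with 'id', 'score' (lower percentile =
    better), and 'area'. Fund the n_top best-scored proposals, then fill
    n_discretionary slots with the best remaining proposals that fall in
    the institute's strategic-plan areas."""
    ranked = sorted(proposals, key=lambda p: p["score"])
    funded = ranked[:n_top]
    # Discretionary picks: best remaining proposals in strategic areas
    # that peer review may have undervalued.
    remaining = [p for p in ranked[n_top:] if p["area"] in strategic_areas]
    funded += remaining[:n_discretionary]
    return funded

proposals = [
    {"id": "A", "score": 3, "area": "hot-topic"},
    {"id": "B", "score": 8, "area": "hot-topic"},
    {"id": "C", "score": 15, "area": "promising-but-cold"},
    {"id": "D", "score": 40, "area": "hot-topic"},
]
picked = select_portfolio(proposals, n_top=2, n_discretionary=1,
                          strategic_areas={"promising-but-cold"})
print([p["id"] for p in picked])  # ['A', 'B', 'C']
```

Under pure top-percentile funding, proposal C would never be funded; the discretionary slot is what lets the director match the portfolio to the strategic plan.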

- Absolutely. Well, it's funny, I think you and Peter Thiel were in the same class, or maybe a year apart, at Stanford. So given how much time you spent at Stanford, both in all your schooling and your many years as a professor, I think you probably understand the VC mentality better than anyone else. So

- Peter and I were friends. I think I met him when I was 19. He, you know, founded the Stanford Review. He was in my dorm. He was poor then, so at one point I think I lent him pizza money. I think he still owes me. Peter, if you're listening, you don't have to pay me back. I forgive the debt.

- You gotta think about how much interest there could be there, you know? What the return on that is.

- No, we were friends, and we're still friends now. But yeah, I mean, he has probably been the most prominent person articulating this idea about scientific stagnation, decrying it and calling for changes in how we think about science so that we can get more real advances. And I think he's right. But I don't think the problem is any lack of opportunities in science. I think the ideas are there. We just need to set the incentives for people to find them.

- Hmm, that's fascinating. It's amazing. For many years, until I think just recently, maybe the past couple years since the dawn of all these generative AI tools going online, there was kind of a refrain for like 10 years during the 2010s. We had these books like Robert Gordon's in economics, The Rise and Fall of American Growth I think is the title, and just this idea that we're running out of ideas, and that's why we're becoming less productive. We had these great ideas in the early 20th century, whether it was commercial air travel, or air conditioning, which lets a lot of people work in much warmer climates, a huge deal for the developing world and the southern United States. You had all these household appliances, the automobile in the early 20th century, and the internet at the close of the 20th century. And I guess a lot of people look back on the first couple decades of the 21st century and say, compared to all those incredible innovations, all we got were social media apps. And those might be some sort of negative-productivity innovation, perhaps. But where are all these incredible innovations that are gonna increase productivity? It's interesting now. We obviously had mobile phones and smartphones and so forth, which allow us to do certain things. But I think with the release of generative AI tools, the conversation around that has changed, at least.
And it's more, I guess, that people are open to this idea that generative AI tools, and their innovative successors, could potentially be hugely transformative. I'm curious if you have any thoughts on generative AI and how it intersects with the NIH, with what the NIH is funding and thinking about.

- I mean, I think AI tools are probably a counterexample to the hypothesis you were putting forward about the slowdown in ideas. That was just an unexpected leap in the ability of machines to look like they're reasoning, although I don't think they're actually reasoning. But they really are a big step forward. And I think that in health, it's gonna transform the way we do drug discovery. It already has: AlphaFold and its ability to predict protein structures has pushed drug development leaps and bounds forward. It's gonna change things as mundane as this: you go to a doctor, and the doctor's staring at the screen the whole time rather than looking at you, because they're entering all the billing codes they need to bill. Imagine AI sitting there listening and filling in all the records for the billing, so the doctor can just review it very quickly afterwards and instead spend their time looking at you. Right? Imagine a tool that helps the doctor make sure they've thought through every single diagnosis, or that gives suggestions to radiologists or cardiologists by comparing against other patients that are similar to you. You're still gonna need doctors. You still need the human touch; when people are sick, they really do need human beings there caring for them, especially smart human beings thinking about their problems in concrete ways. But AI has the possibility of transforming so much in biomedicine, and we at the NIH are absolutely investing in that.
You know, maybe I'm wrong; maybe it won't be as productive as I think it will be. But in my view, it's already proved to be quite productive, and it will be. I think if we invest more, we'll think of better ways to make that productivity happen, but also to make sure that it doesn't hallucinate and hurt patients and things like that.

- That's fascinating. Okay, I think that was number four. Is that right?

- That was three, actually. That's still under three, the stagnation. Let me do four. I'm almost done, I promise, Jon. Four is: we have to make sure that we don't harm people or take existential risks on behalf of human populations in the research we do. Okay, let me motivate this. This is gonna be controversial to some people, not controversial to others. It is quite possible that COVID was caused by scientific research that we supported, that the Chinese conducted, that we were part of. There was a big debate in the 2010s over a research paradigm called gain-of-function research, dangerous gain-of-function research. Let me tell you the research paradigm. The idea was that we could prevent all pandemics. It's a very utopian idea. Here's what you do: you fund research to go out into the wild places, find the viruses, find the pathogens, bring them back to city centers, and manipulate them so that they're more transmissible among humans, only in the Petri dish. The reason you do that is so you can distinguish between pathogens you bring back from the wild that are very easily manipulable, only a few evolutionary steps away from making the leap into human populations, versus other pathogens that are evolutionarily very far from being able to make that leap. Well, then you can focus on the pathogens that are likely to make the leap, that are close in evolutionary space, and prepare vaccines and all this other stuff in advance, so that when one makes the leap, you're ready. That's the paradigm. The first time someone actually accomplished this, taking avian flu in 2010 and '11 and making it more transmissible, and this was NIH-funded research, the scientific community said: this is ridiculous, this is too dangerous.
If there's a lab leak, you have the chance of causing a massive pandemic that will kill millions of people. We shouldn't be doing this kind of research. Right? Let's go back to physics and Enrico Fermi. When he launched the nuclear age with the first nuclear chain reaction, on the squash court at the University of Chicago of all places, he did a calculation first: will this chain reaction consume the earth, burn the earth, or can we make it stop if we want to? And the calculation, before he did the experiment, was that the probability of consuming the earth was zero. And so then he did the experiment that would launch the nuclear age. We need to do that with biomedicine. President Trump's executive order on dangerous gain-of-function research actually allows us to have that kind of risk-based paradigm for thinking about regulating biomedicine. The vast majority of biomedical research has no risk of causing this kind of pandemic, right? But we need a regulatory structure that gives scientists and institutions incentives to subject any experiment that does have the potential for catastrophic risk to this regulation, where other, independent eyes can decide: should we do this work? A scientist alone, or a single institution alone, should not independently be able to make decisions that risk vast harm to literally billions of people, simply for scientific curiosity or other reasons. So the gain-of-function executive order by President Trump essentially allows us to have a regulatory framework that will reduce that risk to near zero. The White House is still working on that, and we're helping. This is something I think is tremendously important. It's important to me because the way you restore trust is by telling people: we're working on your behalf.
We're not trying to do crazy things that will risk your family's health and wellbeing. Quite the opposite. And so to me, this kind of attention to biosecurity, to the riskiness of biomedical research, is very, very important. It's a necessary condition for restoring trust.

- Are there any efforts to ensure that that's happening, say, in China? Obviously they may not be as careful as the US with these sorts of things, and there's only so much you can do from a diplomatic standpoint. China has broken many agreements, they don't respect US intellectual property; there are all sorts of issues. Human rights has, I think, been front and center in a lot of these conversations with China, along with defense issues, Taiwan, and so forth. But have these sorts of conversations come up in any of those discussions? I also know, with AI, one of the criticisms I often hear about unregulated AI is that there's some possibility someone could use a chatbot tool in, say, a random foreign country to help build some sort of bioweapon that they could use against the United States or peaceful people. Do either of those things concern you, or

- They do

- Actually justify these things?

- They absolutely concern me. I think President Trump has set a huge example for the whole world by taking this kind of dangerous gain-of-function research seriously and regulating it so that we essentially make a commitment not to do it. We're telling the whole world: you shouldn't be doing it either. You shouldn't be taking risks that risk harm to every human being on the planet just for scientific curiosity or whatever gains you think you're gonna get from it. Even if, say, you think you're doing bioweapons research with it, which is banned by the 1972 Biological Weapons Convention, there's no way to guarantee that you won't harm your own population in the context of that research. We kind of need a strengthened bioweapons convention, essentially a dangerous gain-of-function convention that's international in scope, that says this is not research that's worth doing. It's not going to help anybody. It's not in any country's interest to do it, because if you cause catastrophic harm to human populations, you will also harm your own population in doing it. There's no nationalistic interest in doing it. This is the kind of thing that really should be subject to international treaties. But it's really amazing that President Trump is the one who actually took the step and said: look, we're going to unilaterally not do this, because it's not in America's interest to do it. It's frankly not in anybody's interest to do this kind of dangerous research.

- Absolutely. Yeah, being five years out from COVID, it still looms large in the memories of many, to say the least. Okay, so number five. Have we

- Got the last one. And I promise I'll let you go, Jon. You probably had no idea I'd keep you so long. But no,

- No.

- Number five is free speech and academic freedom. And Jon, you probably know this; actually, we talked about it in the previous podcast we did together. COVID was a very difficult time for scientists to express their ideas, especially if they disagreed with the predominant pro-lockdown, pro-mask-mandate, pro-vaccine-mandate ideas. And it made scientific progress during the COVID era much, much harder than it should have been. Science depends on free speech to thrive, right? If you disagree with me, Jon, you absolutely should be able to say that you disagree with me and explain why. I taught at Stanford for 25 years, and some of my proudest moments were when a student would tell me I'm wrong, and they would be right, and we'd write a paper together, hiding the fact that I was wrong before, of course, except from the world. That kind of open correction of each other is the heart and soul of science, and restrictions on free speech and academic freedom are anathema to scientific progress. Free speech is absolutely required for progress in science, and the NIH needs to be a catalyst for this kind of free speech throughout the country. At the NIH itself, when I came in, I found out they had a policy. We have a whole bunch of intramural scientists, amazing biologists, who do a whole host of different kinds of research internally inside the NIH and publish it in the scientific literature. I found out there was a policy at many of these institutes where the scientists had to seek substantive permission from their supervisors before they were allowed to send their papers out for review by journals. That's not academic freedom.
You can't make science advance if you have those kinds of restrictions, where you're worried about what your supervisors think about your science. Obviously you want good science, but you don't want your supervisors to tell you: well, I don't like this result, you can't send it out. So I've put in a policy where any intramural researcher can just send their paper out for scientific review without asking any permission. I fully expect there'll be papers published at the NIH that I don't agree with, that I don't even like, but those researchers at the NIH should absolutely have the opportunity to publish those papers. I wanna lead by example, but I also wanna encourage this more broadly. And this is something the Trump administration has done that's caused a lot of angst in the scientific community and the academic community at large, but I think it ultimately, in the long run, is a good thing: pushing universities to adopt policies consonant with this idea of scientific freedom, of academic freedom, essentially as a way to say, look, if you don't do this, then you're not really a great research partner with us. Right? If you don't have policies that allow the scientists at your institutions to have their ideas and not suppress them, that allow free speech at universities, then really, universities that don't have those kinds of free-speech environments are not good environments to do science. If you're always looking over your shoulder wondering whether the paper you're writing is gonna get you canceled, well, you're not gonna write that paper, right? And you cannot have this sort of political apparatus overseeing hiring decisions. Now, it's caused a lot of angst.
I know researchers will say: look, my field is fine, why are you blaming me for problems in these other fields? But the university as a whole needs to have a culture that advances science, that has free speech at its core. Because if you don't, then you can't actually trust that the scientific work out of those universities is really true. Right? It's only in that free-speech culture that you get excellent science.

- Absolutely. I couldn't agree with you more. And it's very interesting to think about the culture at universities right now and how it's shifting, or maybe not shifting. But certainly, President Trump's done a lot to change some of that focus.

- Well, one of his very first executive orders was to reestablish free speech in this country. I was part of a lawsuit against the Biden administration, because the Biden administration had a systematic policy, Jon, to suppress free speech, right? They would go to social media companies and order them, essentially, to take down vast pages of even true scientific ideas. Like, they went to Facebook and told them to take down private pages where vaccine-injured people would just talk to each other.

- Right?

- They ordered Twitter around too. And this lawsuit, this Missouri v. Biden lawsuit, uncovered vast evidence of a whole-of-government approach to suppress speech on scientific matters contrary to what the Biden administration thought was good or true or just. And they were often wrong. They pressured social media companies to suppress. I was actually blacklisted at Twitter the day I joined, in 2021. And, you know, the Supreme Court ultimately ruled that I and my colleagues didn't have standing to sue, because we didn't have an email from Biden's folks to Facebook or somewhere that said "censor Jay." What we did have was emails that said: censor the kinds of ideas that I was espousing. That masking toddlers was a bad idea. That opening schools was a good idea. That vaccine mandates made no sense, given that the vaccine didn't stop the spread of COVID. They suppressed ideas at scale, and the Supreme Court said that's fine, the government can do that, as long as they don't name a single individual.

- Yeah.

- Right. Now the main thing protecting free speech in this country is President Trump's executive order saying the government is not going to do that. I've been watching this brouhaha over some late-night comedian, what's his name?

- Jimmy Kimmel.

- Jimmy Kimmel, right.

- Over Charlie Kirk.

- Yeah. What you had there was a guy who was gone for three or four days because, you're right, he essentially made a false statement about what Charlie Kirk actually believed, in the wake of his death. And a lot of people got upset and were writing to their local TV stations saying, why are you having this guy lie about Charlie Kirk, someone who has just been assassinated for his political beliefs and his free speech beliefs? And he was pulled off the air for about four days by ABC. Somehow that's a free speech issue. But it wasn't the government telling ABC to pull him off the air; it was individual people writing to their local TV stations. You can see this because, after he was restored, dozens and dozens of the local channels are not going to show him again. I think that's consistent with free speech. That's speech responding to what people are willing and interested to hear. I don't have a right, when I put a tweet up on Twitter, to have millions of people look at it, but I do have a right to put whatever I want up on Twitter. What you shouldn't have is the government essentially telling people what they can say. And if you think about the difference between Jimmy Kimmel and what happened during the pandemic under the Biden administration, take the worst case. Say you're listening to this and you believe the government did tell ABC to take Kimmel off. It didn't, but say you believe that. At least there, ABC can say: Kimmel, you're being taken off; the government is telling us to take you off.
But what the government did during the Biden administration was say, take all these ideas down, and they could do it anonymously; I wouldn't even know. That is vast power to suppress speech, and that's what the Biden administration did, and got away with, frankly. And as a result, millions of people are worse off. Their kids lost years of schooling they should have had. People lost their jobs over vaccine mandates. They couldn't visit their loved ones in the hospital as they were dying. All this because the honest scientific debate that should have happened during the pandemic did not happen, thanks to these speech restrictions. We can't have that, for lots of reasons. But certainly we can't have scientific progress unless we establish an environment where speech is absolutely free in the sense I've been talking about. Absolutely.

- I have one last question for you, Jay, and it's about animal testing. My understanding is that animal testing has actually lessened. I know there were terrible instances of animal testing on beagles going on, I think under the NIH's purview, and some of this has stopped. My sense is that your position is that we should avoid animal testing wherever possible, unless there's no viable alternative model. Tell us a little bit about what's going on there. I think there are some people out there who will be very interested and excited to hear that animal testing has been reduced under your tenure.

- Yeah, it's actually interesting. PETA, the People for the Ethical Treatment of Animals, sent me flowers when I put in place the policy I'm about to tell you about. Although there are some groups that still don't like me, because they want zero animal use. So first let me set the stage. Animal use has been a mainstay of biomedical research for essentially centuries. And you can understand why. Imagine you're trying out a new drug for the very first time. You can't give it to humans to start. You just cannot, because you don't know the dose, you don't know whether it's lethal. You need to have some sense of the safety before you give it to a first human; you have to know a lot about the new drug or molecule first. And then there are animal systems that tell you something about human physiology. The first knowledge of how our circulatory system works came from the analysis of animals, you know, William Harvey. So you can understand why animal use is part of biomedical research. At the same time, a lot of animal models are used thoughtlessly, in a very simple sense: people are used to using some animal model, and they don't ask whether the thing you learn from it can be translated over to humans, to something that's useful for humans. And in fact, occasionally you'll have animal models that mislead you. You have a rat model or a mouse model of Alzheimer's disease; you can cure the Alzheimer's disease in the mouse, but the treatment that works in the mouse does not translate over to humans.
Well then, why are you even working with that model, if the knowledge you gain from it doesn't translate over to humans? So that's the new policy I put in place, thanks to a woman who had been working on this at the NIH for more than a decade, Nicole Kleinstreuer, who taught me about it. There are now huge advances in alternatives to animal models. In many cases you can use these things called organoids, which are essentially tissue cells on a chip. You can use in silico methods, meaning AI methods, and a whole host of other approaches that sometimes replace the animal models people traditionally use, and do better at predicting what will happen in humans when you give them a new drug. And so that's the new policy. First, you cannot have unethical treatment of animals. You can't have beagles that are tortured. That's just a given. That's never going to happen, and if it does happen, we're going to put the hammer down on you. And second, if you do use animals for research, you have to justify it by showing that the knowledge you're gaining can translate over to human applications, and that there aren't alternative methods that could do it better than the animal model itself. The goal is the mission, which is to advance human health through research, to advance longevity through research. And the happy byproduct is that we're not doing research using animal models that aren't necessary for advancing human health.

- Well, that's fantastic. I'm sure there are many beagles, animals, and people who thank you for it. Jay, I really want to thank you for coming on. This has been an amazing conversation. I really enjoyed it, and it's so great to see you, to hear how you're doing at the NIH, and to hear about your new vision for it. It's a real honor to have you on.

- Thank you, Jon. I'm looking forward to coming back and visiting Stanford someday, when I finally get, you know, out of here.

- We'll look forward to having you. All right, this is the Capitalism and Freedom in the 21st Century podcast, an official podcast of the Hoover Institution Economic Policy Working Group, where we talk about economics, markets, and public policy. I'm Jon Hartley, your host. Thanks for joining us.


ABOUT THE SPEAKERS:

Jayanta "Jay" Bhattacharya, M.D., Ph.D., took the helm as 18th director of the National Institutes of Health, the nation’s medical research agency, on April 1, 2025. President Trump nominated Dr. Bhattacharya for the position on Nov. 26, 2024, and the U.S. Senate confirmed him on March 25, 2025.

Dr. Bhattacharya, a renowned doctor, researcher, and health economist, previously held a tenured professorship in the medical school at Stanford University in California. His research focused on population aging and chronic disease, particularly on the health and well-being of vulnerable populations. He has published over 170 research papers in peer-reviewed journals in medicine, epidemiology, health policy, economics, statistics, science policy, and public health, as well as a leading textbook on health economics.

During the pandemic, Dr. Bhattacharya coauthored the Great Barrington Declaration, which called for opening schools and lifting lockdowns while better protecting older populations who were most vulnerable to the disease.

Dr. Bhattacharya held numerous additional appointments at Stanford University, including courtesy appointments at the Stanford Institute for Economic Policy Research, the Stanford Freeman Spogli Institute and Stanford’s Hoover Institution, and the Economics department. Previously, he conducted research at the National Bureau of Economic Research and the SPHERE Institute, a policy research firm. Before joining Stanford, he was an economist at the RAND Corporation and worked as a visiting economics professor at the University of California, Los Angeles.

Dr. Bhattacharya is a longtime NIH grantee and has served as a standing member of multiple NIH review committees. He earned his bachelor’s and master’s degrees in economics from Stanford University. He then completed medical school and earned a Ph.D. in economics from Stanford University.

Jon Hartley is currently a Policy Fellow at the Hoover Institution, an economics PhD Candidate at Stanford University, a Research Fellow at the UT-Austin Civitas Institute, a Senior Fellow at the Foundation for Research on Equal Opportunity (FREOPP), a Senior Fellow at the Macdonald-Laurier Institute, and an Affiliated Scholar at the Mercatus Center. Jon also is the host of the Capitalism and Freedom in the 21st Century Podcast, an official podcast of the Hoover Institution, a member of the Canadian Group of Economists, and the chair of the Economic Club of Miami.

Jon has previously worked at Goldman Sachs Asset Management as a Fixed Income Portfolio Construction and Risk Management Associate and as a Quantitative Investment Strategies Client Portfolio Management Senior Analyst, and in various policy/governmental roles at the World Bank, IMF, Committee on Capital Markets Regulation, U.S. Congress Joint Economic Committee, the Federal Reserve Bank of New York, the Federal Reserve Bank of Chicago, and the Bank of Canada.

Jon has also been a regular economics contributor for National Review Online, Forbes, and The Huffington Post, and has contributed to The Wall Street Journal, The New York Times, USA Today, Globe and Mail, National Post, and Toronto Star, among other outlets. Jon has also appeared on CNBC, Fox Business, Fox News, Bloomberg, and NBC, and was named to the 2017 Forbes 30 Under 30 Law & Policy list and the 2017 Wharton 40 Under 40 list, and was previously a World Economic Forum Global Shaper.

ABOUT THE SERIES:

Each episode of Capitalism and Freedom in the 21st Century, a video podcast series and the official podcast of the Hoover Economic Policy Working Group, focuses on getting into the weeds of economics, finance, and public policy on important current topics through one-on-one interviews. Host Jon Hartley asks guests about their main ideas and contributions to academic research and policy. The podcast is titled after Milton Friedman's famous 1962 bestselling book Capitalism and Freedom, which, after 60 years, remains prescient in its focus on topics now at the forefront of economic debates, such as monetary policy and inflation, fiscal policy, occupational licensing, education vouchers, income share agreements, the distribution of income, and negative income taxes, among many others.

For more information, visit: capitalismandfreedom.substack.com/
