John Etchemendy and Fei-Fei Li are the codirectors of Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), founded in 2019 to “advance AI research, education, policy and practice to improve the human condition.” In this interview, they delve into the origins of the technology, its promise, and its potential threats. They also discuss what AI should be used for, where it should not be deployed, and why we as a society should—cautiously—embrace it.

To view the full transcript of this episode, read below:

Peter Robinson: The year was 1956, and the place was Dartmouth College. In a research proposal, a math professor used a term that was then entirely new and entirely fanciful, artificial intelligence. There's nothing fanciful about AI anymore. The directors of the Stanford Institute for Human Centered Artificial Intelligence, John Etchemendy and Fei-Fei Li, on "Uncommon Knowledge," now. Welcome to "Uncommon Knowledge." I'm Peter Robinson. Philosopher John Etchemendy served from 2000 to 2017 as Provost here at Stanford University. Dr. Etchemendy received his undergraduate degree from the University of Nevada before earning his doctorate in philosophy at Stanford. He earned that doctorate in 1983 and became a member of the Stanford Philosophy Department the very next year. He's the author of a number of books, including the 1990 volume, "The Concept of Logical Consequence." Since stepping down as Provost, Dr. Etchemendy has held a number of positions at Stanford, including, and for our purposes today this is the relevant position, Co-director of the Stanford Institute for Human-Centered Artificial Intelligence. Born in Beijing, Dr. Fei-Fei Li moved to this country at the age of 15. She received her undergraduate degree from Princeton, and a doctorate in electrical engineering from the California Institute of Technology, Dr. Li is the founder once again of the Stanford Institute for Human-Centered Artificial Intelligence. Dr. Li's memoir published just last year, "The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI." John Etchemendy and Fei-Fei Li, thank you for making the time to join me.

Fei-Fei Li: Thank you for inviting us.

John Etchemendy: Pleasure, Peter.

Peter Robinson: I would say that I'm going to ask a dumb question, but I'm actually going to ask a question that is right at the top of my form. What is artificial intelligence? I have seen the term a hundred times a day for, what, several years now? I have yet to find a succinct and satisfying explanation. Let's see. Well, let's go to the philosopher. Here's a man who's professionally rigorous, but here's a woman who actually knows the answer.

John Etchemendy: Yeah, actually knows the answer. So no, let Fei-Fei answer, and then I will give you a different answer.

Peter Robinson: Oh, really? Alright.

Fei-Fei Li: Okay. Peter used the word succinct, and I'm sweating here. Because artificial intelligence today is already a collection of methods and tools that summarizes the overall area of computer science that has to do with data, pattern recognition, and decision making in, you know, natural language, in images, in videos, in robotics, in speech. So it's really a collection. At the heart of artificial intelligence is statistical modeling, such as machine learning, using computer programs. But today, artificial intelligence truly is an umbrella term that covers many things that we're starting to feel familiar with, for example, language intelligence, language modeling, or speech, or vision.

Peter Robinson: John, you and I both knew John McCarthy.

John Etchemendy: Right.

Peter Robinson: Who came to Stanford after he wrote that, used the term, coined the term artificial intelligence. Now the late John McCarthy, and I confess to you who knew him, as I did, that I'm a little suspicious of the term because I knew John, and John liked to be provocative. And I'm thinking to myself, "Wait a moment, we're still dealing with ones and zeros. Computers are calculating machines." Artificial intelligence is a marketing term.

John Etchemendy: So no, it's not really a marketing term. So I will give you an answer that's more like what John would've given. All right, and that is, it's the subfield of computer science that attempts to create machines that can accomplish tasks that seem to require intelligence. So, you know, early artificial intelligence systems were ones that played chess, or even checkers, you know, very, very simple things. Now John, who, as you know if you knew him, was ambitious, and he thought that in a summer conference at Dartmouth, they could solve most of the problems.

Peter Robinson: Alright, let me name a couple of very famous events. What I'm looking for here... I'll name the events. In 1997, a computer defeats Garry Kasparov at chess. Big moment. For the first time, Deep Blue, an IBM project, defeats a human being at chess, and not just a human being, but Garry Kasparov, who by some measures is one of the half dozen greatest chess players who ever lived. And as best I can tell, computer scientists said, "Yawn." Things are getting faster, but still. And then we have, in 2015, a computer defeats Go expert Fan Hui. And the following year it defeats grandmaster Lee Sedol. I'm not at all sure I'm pronouncing that correctly,

Fei-Fei Li: Lee Sedol, yeah.

Peter Robinson: In a five game match. And people say, "Whoa, something just happened this time." So what I'm looking for here is something that a layman like me can latch onto and say, "Here's the discontinuity. Here's where we entered a new moment. Here's artificial intelligence." Am I looking for something that doesn't exist?

John Etchemendy: No, no, I think you're not. So, the difference between Deep Blue and--

Peter Robinson: Which played chess.

John Etchemendy: Which played chess. Deep Blue was written using traditional programming techniques. And what Deep Blue did is it would, for each move, for each position of the board, it would look down to all the possible--

Peter Robinson: Every conceivable decision tree?

John Etchemendy: Every decision tree to a certain depth. I mean, obviously you can't go all the way. And it would have ways of weighing which ones are best. And so then it would say, "No, this is the best move for me at this time." That's why in some sense, it was not theoretically very interesting. AlphaGo...
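What John describes is, in spirit, a depth-limited game-tree (minimax) search: look ahead a fixed number of moves, score the resulting positions with a heuristic, and back the scores up the tree. A minimal sketch follows, with an invented toy game standing in for chess; the move generator and evaluation function here are illustrative assumptions, not Deep Blue's actual ones.

```python
def moves(position):
    """Hypothetical move generator for a toy game where 'positions' are numbers."""
    return [position + 1, position - 1, -position]

def evaluate(position):
    """Hypothetical heuristic: positions closer to 10 are better for the maximizer."""
    return -abs(10 - position)

def minimax(position, depth, maximizing):
    # At the depth limit, fall back on the heuristic evaluation --
    # "obviously you can't go all the way," so you stop and weigh.
    if depth == 0:
        return evaluate(position)
    # Otherwise, recurse one level deeper, alternating sides.
    scores = [minimax(p, depth - 1, not maximizing) for p in moves(position)]
    return max(scores) if maximizing else min(scores)

def best_move(position, depth):
    # "This is the best move for me at this time": pick the successor
    # position whose backed-up score is highest for the side to move.
    return max(moves(position), key=lambda p: minimax(p, depth - 1, False))
```

Deep Blue's actual search was vastly more elaborate (alpha-beta pruning, hand-tuned evaluation, specialized hardware), but the shape of the computation is the one sketched here.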

Peter Robinson: AlphaGo, which was a Google project.

John Etchemendy: Was a Google project. This uses deep learning. It's a neural net. It's not explicit programming. We don't know, you know? We don't go into it with an idea of here's the algorithm we're going to use, do this, and then do this, and do this. So it was actually quite a surprise, particularly AlphaGo,

Fei-Fei Li: Not to me, but sure.

John Etchemendy: No, no, no.

Fei-Fei Li: To the public, yes.

John Etchemendy: Yeah, to the public.

Fei-Fei Li: Yeah.

Peter Robinson: But our colleague... I'm going at this one more time because I really want to understand this. I really do. Our colleague here at Stanford, Z.X. Shen, who must be known to both of you, a physicist here at Stanford, and he said to me, "Peter, what you need to understand about the moment when a computer defeated Go..." Go, which is a much more complicated game, at least in the decision space, much, much bigger, so to speak, than chess. There are more pieces, more squares. Alright. And Z.X. said to me that "Whereas chess just did more quickly what a committee of grandmasters would've decided on, the computer in Go was creative. It was pursuing strategies that human beings had never pursued before." Is there something to that?

Fei-Fei Li: Yeah, so there is a famous move.

Peter Robinson: Fei-Fei's getting impatient with me. I'm asking such, go ahead.

Fei-Fei Li: No, no, you're asking such good questions. So in the second game, I think it was the second game of the five games, there was a move. I think it was move 37.

John Etchemendy: Move 37.

Fei-Fei Li: Move 37. The computer program made a move that really surprised every single Go master. Not only Lee Sedol himself, but everybody who was watching.

- That's a very surprising move.

- I thought it was a mistake.

Fei-Fei Li: In fact, even after analyzing how that move came about, the human masters would say, "This is completely unexpected." And what happens is that the computer, like John says, has the learning ability and the inference ability to think about patterns or to decide on certain moves, even outside the familiar domain of knowledge of trained human masters in this particular case.

Peter Robinson: Okay, may I?

John Etchemendy: So Peter, let me expand on that.

Peter Robinson: Go ahead, yes, yes, yes.

John Etchemendy: The thing is, these deep neural nets are supremely good pattern recognition systems, but the patterns they recognize, the patterns they learn to recognize, are not necessarily exactly the patterns that humans recognize. So it was seeing something about that position, and because of the patterns it recognized on the board, it made a move that made no sense from a human standpoint. In fact, all of the lessons in how to play Go tell you never to make a move that close to the edge that quickly. And so everybody thought it had made a mistake, and then it proceeded to win. And I think the way to understand that is it's just seeing patterns that we don't see.

Fei-Fei Li: It's computing patterns that are not traditionally human, and it has the capacity to compute.

Peter Robinson: Okay, we're already entering this territory, but I am trying really hard to tease out the... wait a moment, these are still just machines running zeros and ones, bigger and bigger memory, faster and faster ability to calculate, but we're still dealing with machines that run zeros and ones. That's one strand. And the other strand is, as you well know, "2001: A Space Odyssey," where the computer takes over the ship.

- Open the pod bay doors, Hal.

- [Hal] I'm sorry Dave, I'm afraid I can't do that.

Peter Robinson: Okay, we'll come to this soon enough. Fei-Fei Li, in your memoir, "The Worlds I See," quote: "I believe our civilization stands on the cusp of a technological revolution with the power to reshape life as we know it." Revolution? Reshape life as we know it? Now, you are a man whose whole academic training is in rigor. Are you going to let her get away with this kind of wild overstatement?

John Etchemendy: No, I don't think it's an overstatement.

Peter Robinson: Oh.

John Etchemendy: I think she's right.

Fei-Fei Li: He told me to write the book.

John Etchemendy: Mind you, Peter, it's a technology that is extremely powerful, that will allow us, and is allowing us, to get computers to do things we never could have programmed them to do. And it will change everything, but, as a lot of people have said, it's like electricity or it's like the steam revolution. It's not something necessarily to be afraid of. It's not that it's going to suddenly take over the world. That's not what Fei-Fei was saying.

Fei-Fei Li: Well, right.

Peter Robinson: Okay.

Fei-Fei Li: It's a powerful tool that will revolutionize industries and the way we live. But the word revolution doesn't mean that it's a conscious being. It's just a powerful tool that changes things.

Peter Robinson: I would find that reassuring if a few pages later, Fei-Fei had not gone on to write...

Fei-Fei Li: Oh no.

Peter Robinson: "There's no separating the beauty of science from something like, say, the Manhattan Project." Close quote. Nuclear science, we can produce abundant energy, but it can also produce weapons of indescribable horror. "AI has boogeymen of its own, whether it's killer robots, widespread surveillance, or even just automating all 8 billion of us out of our jobs." Now, we could devote an entire program to each of those boogeymen, and maybe at some point we should. But now that you have scared me, even in the act of reassuring me, and in fact it throws me that you are so eager to reassure me that I think maybe I really should be even more scared than I am. Let me just go write down, here's the killer robots. Let me quote the late Henry Kissinger. I'm just gonna put these up and let you... You may calm me down, if you can. Henry Kissinger, "If you imagine a war between China and the United States, you have artificial intelligence weapons. Nobody has tested these things on a broad scale and nobody can tell exactly what will happen when AI fighter planes on both sides interact." "So you are then..." I'm quoting Henry Kissinger, who is not a fool, after all. "So you are then in a world of potentially total destructiveness." Close quote. Fei-Fei?

Fei-Fei Li: So like I said, I'm not denying how powerful these tools are. I mean, humanity before AI has already created tools and technology that are very destructive, could be very destructive. We talk about Manhattan Project, right? But that doesn't mean that we should collectively decide to use this tool in this destructive way.

Peter Robinson: All right, now we come to this, right.

John Etchemendy: Okay, Peter, you know, think back before you even had heard about artificial intelligence.

Peter Robinson: Which was, what, five years ago maybe? This is all happening so fast.

John Etchemendy: Just five years ago or 10 years ago.

Peter Robinson: Right.

John Etchemendy: Remember the tragic incident where an Iranian passenger plane was shot down flying over the Persian Gulf by an Aegis system.

Peter Robinson: Yes, yes.

John Etchemendy: Right?

Peter Robinson: One of our ships.

John Etchemendy: One of our ships. An automation, an automated system, because it had to be automated. In order to be fast.

Peter Robinson: Humans can't react that fast.

John Etchemendy: Yeah, exactly. And in this case, for reasons that I think are quite understandable once you understand the incident, it did something that was horrible. That's not different in kind from what you can do with AI, right? So we as creators of these devices, or as users of AI, have to be vigilant about what kind of use we put them to. And when we decide to put them to one particular use, and there may be uses, you know, the military has many good uses for them, we have to be vigilant about their doing what we intend them to do, rather than doing things that we don't intend them to do.

Peter Robinson: So you are announcing a great theme and that theme is that what Dr. Fei-Fei Li has invented makes the discipline to which you have dedicated your life, philosophy, even more important, not less so.

Fei-Fei Li: Yeah, that's why we're codirectors.

Peter Robinson: The power of this makes the human being more important, not less so. Am I being glib or is that onto something?

John Etchemendy: So lemme tell you a story. So Fei-Fei used to live next door to me or close to next door to me. And I was talking to--

Peter Robinson: I'm not sure whether that would make me feel more safe or more exposed.

John Etchemendy: And I was talking to her. I was still Provost at the time, and she said to me, "You and John Hennessy started a lot of institutes that brought technology into other parts of the university. We need to start an institute that brings philosophy, and ethics, and the social sciences into AI because AI is too dangerous to leave it to the computer scientists alone." Nothing wrong with computer science.

Peter Robinson: There are many stories about how hard it was to persuade him when he was Provost, yet you succeeded. Can I, just one more boogeyman briefly?

Fei-Fei Li: Yeah

Peter Robinson: And we'll return to that theme that you just gave us there; we'll get back to the Stanford Institute. I'm quoting you again. This is from your memoir. "The prospect of just automating all 8 billion of us out of our jobs." That's the phrase you used. Well, it turns out that it took me mere seconds, using my AI-enabled search algorithm, search device, to find a Goldman Sachs study from last year predicting that in the United States and Europe, some two thirds of all jobs could be automated, at least to some degree. So why shouldn't we all be terrified? Henry Kissinger, world apocalypse? Alright, maybe that's a bit too much. But my job.

Fei-Fei Li: So, I think job change is real. Job change is real with every single technological advance that humanity, that human civilization, has faced. You know, that is real. And that's not to be taken lightly. We also have to be careful with the word job. Job tends to describe a holistic profession that a person attaches to his or her income.

Peter Robinson: And often identity really.

Fei-Fei Li: Identity with. But there is also, within every job, pretty much within every job, there are so many tasks. You know, it's hard to imagine there's one job that has only one singular task, right? Like being a professor, being a scholar, being a doctor, being a cook. All these jobs have multiple tasks. What we are seeing is technology changing how some of these tasks can be done. And it's true. As it changes these tasks, some part of them could be automated. It's starting to change how the jobs are, and eventually it's gonna impact jobs. So this is gonna be a gradual process. And it's very important we stay on top of this. This is why the Human-Centered AI Institute was founded, as these questions are profound. They're, by definition, multidisciplinary. You know, computer scientists alone cannot do all the economic analysis. But economists, not understanding what these computer science programs do, would not by themselves understand the shift of the jobs.

Peter Robinson: Okay, John, may I tell you? Go ahead.

John Etchemendy: But let me just point something out. The Goldman Sachs study said that such and such percentage of jobs will be automated, or can be automated, at least in part.

Peter Robinson: Yes.

John Etchemendy: Okay. Now, what they're saying is that a certain number of the tasks that go into a particular job--

Peter Robinson: Filing, research.

John Etchemendy: Exactly.

Peter Robinson: Right.

John Etchemendy: So, Peter, you said it only took me a few seconds to go to the computer and find that article. Guess what? That's one of the tasks that would've taken you a lot of time.

Peter Robinson: Yes, it would have.

John Etchemendy: So part of your job has been automated.

Peter Robinson: Okay, now let me tell you a story.

Fei-Fei Li: But also empowered.

John Etchemendy: Empowered.

Peter Robinson: Empowered. Okay, fine. Thank you, thank you, thank you. You're making me feel good. Now, let me tell you a story. All three of us live in California, which means all three of us probably have some friends down in Hollywood. And I have a friend who was involved in the writer's strike.

Fei-Fei Li: Yeah.

Peter Robinson: Okay, and here's the problem. To run a sitcom, you used to run a writers' room. And the writers' room would employ seven, a dozen. On "The Simpsons," the cartoon show, they had a couple of writers' rooms running. They were employing 20. And these were the last kind of person you'd imagine a computer could replace, because they were well-educated and witty and quick with words. And you think of computers as just running calculations, maybe spreadsheets, maybe someday they can eliminate accountants, but writers, Hollywood writers? And it turns out, and my friend illustrated this for me by doing the artificial intelligence thing, where he gave it a prompt: "Draft a skit for 'Saturday Night Live' in which Joe Biden and Donald Trump are playing beer pong." 15 seconds. Now, professionals could have tightened it up, but it was pretty funny, and it was instantaneous. And you know what that means? That means you don't need four or five of the seven writers. You need a senior writer to supply the intelligence, and you need maybe one other writer or two other writers to tighten it up or redraft it. It is upon us. And your artificial intelligence is going to get bad press when it starts eliminating the jobs of the chattering classes. And that has already begun. Tell me I'm wrong.

John Etchemendy: Do you know before the agricultural revolution, something like 80, 90% of all the people in the United States were employed on farms?

Peter Robinson: Right.

John Etchemendy: Now, it's down to 2% or 3%. And those same farms, that same land is far more productive. Now, would you say that your life or anybody's life now was worse off than it was say in the 1890s when everybody was working on the farm? No. So yes, you're right. It will change jobs. It will make some jobs easier. It will allow us to do things that we could not do before. And yes, it will allow fewer--

Peter Robinson: There will be disruptions?

John Etchemendy: Allow fewer people to do more of what they were doing before. And consequently there will be fewer people in that line of work.

Fei-Fei Li: Yeah.

John Etchemendy: That's true.

Peter Robinson: That is true.

Fei-Fei Li: I also wanna just point out two things. One is that jobs are always changing, and that change is always painful. And as computer scientists, as philosophers, also as citizens of the world, we should be empathetic about that. And nobody's saying we should just ignore that pain of change. This is why we're studying this. We're trying to talk to policy makers. We're educating the population. In the meantime, I think we should give more credit to human creativity in the face of AI. I started to use this example that's not even AI. Think about the advances, speaking of Hollywood, in graphics technology, CGI and all that, right?

Peter Robinson: The video gaming industry or just?

Fei-Fei Li: No, just animations and all that, right? One of many of our, including our children's, favorite animation series is by Studio Ghibli. You know, "Princess Mononoke," "My Neighbor Totoro," "Spirited Away." All of these were made during a period when computer graphics technology was far more advanced than these hand-drawn animations. Yet the beauty, the creativity, the emotion, the uniqueness in these films continue to inspire and entertain humanity. So I think we need to still have that pride, and also give the credit to humans. Let's not forget our creativity, emotion, and intelligence are unique. They're not gonna be taken away by technology.

Peter Robinson: Thank you. I feel slightly reassured. I'm still nervous about my job, but I feel slightly reassured. But you mentioned the government a moment ago, which leads us to how we should regulate AI. Let me give you two quotations. I'll begin. I'm coming to the quotation from the two of you. But I'm going to start with a recent article in the Wall Street Journal by Senator Ted Cruz of Texas and former Senator Phil Gramm, also of Texas. Quote, "The Clinton administration took a hands-off approach to regulating the early internet. In so doing it unleashed extraordinary economic growth and prosperity. The Biden administration, by contrast, is impeding innovation in artificial intelligence with aggressive regulation." Close quote. That's them. This is you. Also a recent article in the Wall Street Journal, John Etchemendy and Fei-Fei Li. Quote, "President Biden has signed an executive order on artificial intelligence that demonstrates his administration's commitment to harness and govern the technology. President Biden has set the stage and now it is time for Congress to act." Cruz and Gramm, less regulation. Etchemendy and Li, the Biden administration has done well; now Congress needs to give us even more.

John Etchemendy: No.

Peter Robinson: Alright, John.

John Etchemendy: So, no, I don't agree with that. So I believe regulating any kind of technology is very difficult, and you have to be careful not to regulate too soon or not to regulate too late. Let me give you another example. You talked about the internet, and it's true. The government really was quite hands off and that's good. That's good.

Peter Robinson: It worked out.

John Etchemendy: It worked out. But now let's also think about social media. Social media has not worked out exactly the way we wanted, right? We originally believed that we were going to enter a golden age in which--

Peter Robinson: Friendship, comedy.

John Etchemendy: Yeah, well, and everybody would have a voice and, you know, we could all live together, kumbaya and so forth. And that's not what happened.

Peter Robinson: No. Jonathan Haidt has a new book out on the particular pathologies among young people from all of these social media. And it's not just an argument. It's an argument, but it's based on lots of data.

John Etchemendy: Yeah, so it seems to me that I'm in favor of very light-handed and informed regulation to try to put up sort of bumpers or... I don't know what the analogy is.

Fei-Fei Li: Guardrails.

John Etchemendy: Guardrails for the technology. I am not for heavy handed top-down regulation that stifles innovation.

Peter Robinson: Okay, here's another. Let me get onto this. I'm sure you'll be able to adapt your answer to this question too.

Fei-Fei Li: Okay.

Peter Robinson: I'm continuing your Wall Street Journal piece. "Big tech companies can't be left to govern themselves." Around here, Silicon Valley, those are fighting words. "Academic institutions should play a leading role in providing trustworthy assessments and benchmarking of these advanced technologies. We encourage an investment in human capital to bring more talent to the field of AI with academia and the government." Close quote. Okay, now, it is mandatory for me to say this, so please forgive me, my fellow Stanford employees. Apart from anything else, why should academic institutions be trusted? Half the country has lost faith in academic institutions. DEI, the whole woke agenda, antisemitism on campus. We've got a recent Gallup poll showing the proportion of Americans who expressed a great deal or quite a lot of confidence in higher education this year came in at just 36%. And that is down in the last eight years from 57%. You are asking us to trust you at the very moment when we believe we have good reason to knock it off.

Fei-Fei Li: Yeah.

Peter Robinson: Trust you? Okay, Fei-Fei.

Fei-Fei Li: So, I'll start with the first half of the answer. I'm sure John has a lot to say. I do wanna make sure, especially as we're here wearing the hats of co-directors of HAI... When we talk about the relationship between government and technology, we tend to use the word regulation. I really, really wanna double click. I wanna use the word policy.

Peter Robinson: Double click, all right.

Fei-Fei Li: And policy and regulation are related, but not the same. When John and I wrote that Wall Street Journal opinion piece, we really were focusing on the piece of policy that is to resource public-sector AI, to resource academia. Because we believe that AI is such a powerful technology and science, and academia and the public sector still have a role to play to create public goods. And public goods are curiosity-driven knowledge exploration, are cures for cancers, are, you know, the maps of biodiversity of our globe, are, you know, discovery of nano-materials that we haven't seen before, are different ways of expressing in theater, in writing, in music. These are public goods. And when we are collaborating with the government on policy, we're focusing on that. So I really wanna make sure... on regulation, we all have personal opinions, but there's more to policy than regulation.

Peter Robinson: John.

John Etchemendy: So, yeah, look.

Peter Robinson: Let me make one last run at you.

John Etchemendy: Okay.

Peter Robinson: And my theory here... although I'm asking questions that, I'm quite sure, make you want to take me out and swat me around at this point, John, this is serious. You've got the Stanford Institute for Human-Centered Artificial Intelligence. And that's because you really think this is important.

Fei-Fei Li: Yeah.

Peter Robinson: But we live in a democracy, and you're gonna have to convince a whole lot of people. So let me take one more run at you, and then hand it back to you, John. Your article in the Wall Street Journal, again. Let me repeat this. "We encourage an investment in human capital to bring more talent to the field of AI with academia and the government." Close quote. That means money. An investment means money, and it means taxpayers' money. Here's what Cruz and Gramm say in the Wall Street Journal: "The Biden regulatory policy on AI has everything to do with special interest rent seeking." Stanford faculty make well above the national average income. We are sitting at a university with an endowment of tens of billions of dollars. John, why is your article in the Wall Street Journal not the very kind of rent seeking that Senator Cruz and Senator Gramm are describing? Are you kidding?

John Etchemendy: Peter, let's take another example. So one of the greatest policy decisions that this country has ever made was when Vannevar Bush, advisor at the time to President Truman, convinced--

Peter Robinson: He stayed on through Eisenhower, as I recall. So it's important to know he's bipartisan.

John Etchemendy: Exactly. No, no, it was not a partisan issue at all. But convinced Truman to set up the NSF for funding.

Peter Robinson: National Science Foundation.

John Etchemendy: Right, for funding curiosity-based research, advanced research at the universities. And then not to cut... Not to, you know, say that companies don't have any role. Not to say that government has no role. They both have roles, but they're different roles. And companies tend to be better at development, better at producing products and tapping into things that, within a year or two or three, can be a product that will be useful. Scientists at universities don't have that constraint. They don't have to worry about when is this going to be--

Peter Robinson: Commercial.

John Etchemendy: Commercial. Right. And, that has I think, had such an incalculable effect on the prosperity of this country, on the fact that we are the leader in every technology field. It's not an accident that we're the leader in every technology field. We didn't use to be.

Peter Robinson: And does it affect your argument, if I add it also enabled us or contributed to a victory in the Cold War?

John Etchemendy: Yeah.

Peter Robinson: The weapon systems that came out of universities. All right.

John Etchemendy: Well no, absolutely. And, you know, President Reagan, Star Wars.

Peter Robinson: In other words, it ended up being a defensive kind of good. You could argue from all kinds of points of view that it was a good ROI for taxpayers' money.

John Etchemendy: Yeah.

Peter Robinson: Alright.

John Etchemendy: So we're not arguing for higher salaries for faculty or anything of that sort. But we think, particularly in AI, it's gotten to the point where scientists at universities can no longer play in the game because of the cost of the computing and the inaccessibility of the data. That's why you see all of these developments coming out of companies.

Peter Robinson: Okay.

John Etchemendy: That's great. Those are great developments. But we need to have people who are exploring these technologies without looking at the product, without being driven by the profit motive. And then eventually, hopefully, they will develop discoveries. They will make discoveries that will then be commercializable.

Peter Robinson: Okay, I noticed in your book, Fei-Fei, I was very struck that you said, oh, I think it was about a decade ago, 2015 I think, that you noticed that you were beginning to lose colleagues to the private sector.

Fei-Fei Li: Yeah.

Peter Robinson: Presumably, because they just pay so phenomenally well around here in Silicon Valley. But then there's also the point that to make progress in AI, you need an enormous amount of computational power.

Fei-Fei Li: Yep.

Peter Robinson: And assembling all those ones and zeros is extremely expensive.

Fei-Fei Li: Exactly.

Peter Robinson: So, ChatGPT, what is the parent company?

Fei-Fei Li: OpenAI.

Peter Robinson: OpenAI got started with an initial investment of a billion dollars. Friends and family capital of a billion dollars is a lot of money, even around here. Okay, that's the point you're making.

Fei-Fei Li: Yes.

Peter Robinson: Alright. It feels to me as though every one of these topics is worth a day-long seminar. Actually, I think they are, but.

John Etchemendy: And by the way, this has happened before where the science has become so expensive that university level research and researchers could no longer afford to do the science. It happened in high energy physics. You know, high energy physics used to mean you had a Van de Graaff generator in your office, and that was your accelerator and you know.

Peter Robinson: You could do what you needed to do.

John Etchemendy: Exactly.

Peter Robinson: Right.

John Etchemendy: And then it no longer was... You know, the energy levels went higher and higher. And what happened? Well, the federal government stepped in and said, "We're going to help. We're gonna build an accelerator." The Stanford Linear Accelerator.

Peter Robinson: Stanford Linear Accelerator.

John Etchemendy: Exactly.

Peter Robinson: Sandia Labs, Lawrence Livermore, all these are at least in part Federal establishments.

Fei-Fei Li: CERN.

Peter Robinson: CERN, which is European, right.

John Etchemendy: Well, FermiLab. So the first accelerator was SLAC, the Stanford Linear Accelerator Center, then FermiLab, and so on and so forth. Now CERN is late, actually late in the game, and it's a European consortium. But the thing is, we could not continue the science without the help of the government and government funding.

Fei-Fei Li: Well, there's another example: in addition to high energy physics, there's bio, right, especially with genetic sequencing and high-throughput genomics. Biotech is also changing. And now you see a new wave of biology labs that are heavily funded by a combination of government and philanthropy and all that, which stepped in to, you know, supplement the traditional university model. And so we're now here with AI and computer science.

Peter Robinson: Okay, we have to do another show on that one alone, I think. The singularity. Oh good. This is good, reassuring. You both, are you rolling your eyes? Wonderful. I feel better about this already. Good. Ray Kurzweil. You know exactly where this is going. Ray Kurzweil writes a book in 2005. This gets everybody's attention and still scares lots of people to death, including me. The book is called "The Singularity Is Near," and Kurzweil predicts a singularity that will involve, and I'm quoting him, "the merger of human technology with human intelligence." He's not saying the tech will mimic more and more closely human intelligence. He is saying they will merge. "I set the date for the singularity, representing a profound and disruptive transformation in human capability, as 2045." Okay. That's the first quotation. Here's the second. And this comes from the Stanford course catalog's description of the Philosophy of Artificial Intelligence, a freshman seminar that was taught last quarter, as I recall, by one John Etchemendy. Here, from the description: "Is it really possible for an artificial system to achieve genuine intelligence, thoughts, consciousness, emotions? What would that mean?" John, is it possible? What would it mean?

John Etchemendy: I think the answer is actually no.

Peter Robinson: Thank goodness. You kept me waiting for a moment there.

John Etchemendy: I think, you know, the fantasies that Ray Kurzweil and others have been spinning up, I guess that's the way to put it, stem from a lack of understanding of how the human being really works, of how crucial biology is to the way we work, the way we are motivated, how we get desires, how we get goals, how we become humans, become people. And what AI has done so far, AI is capturing what you might think of as the information processing piece of what we do. So part of what we do is information processing.

Peter Robinson: So it's got the right frontal cortex, but hasn't got the left frontal cortex yet?

John Etchemendy: Yeah, it's an oversimplification, but yes.

Peter Robinson: Imagine that on television, all right.

John Etchemendy: So I actually think it is, first of all, the date, 2045, is insane. That will not happen. And secondly, it's not even clear to me that we will ever get that.

Fei-Fei Li: Wait, I can't believe I'm saying this. In his defense, I don't think he's saying that 2045 is the day that the machines become conscious beings like humans. It's more an inflection point of the power of the technology that, you know, is disrupting the society.

Peter Robinson: Well in that sense, he's late. He's late. We're already there, aren't we?

Fei-Fei Li: Exactly. That's what I'm saying.

John Etchemendy: I think you're being overly generous. I think that what he means by the singularity is the date at which we create an artificial intelligence system that can improve itself, and then get into a cycle, a recursive cycle, where it becomes a super intelligence.

Peter Robinson: Yes.

John Etchemendy: And I deny that.

Peter Robinson: He's playing the 2001 Space Odyssey game here.

John Etchemendy: Yeah.

Peter Robinson: Different question, but related question. In some ways, this is a more serious question I think, although that's serious too. Here's the late Henry Kissinger again. Quote, "We live in a world which has no philosophy. There is no dominant philosophical view. So the technologists can run wild. They can develop world changing things. And there's nobody to say, 'We've got to integrate this into something.'" Alright, I'm going to put it crudely again. But in China a century ago, we still had Confucian thought dominant, at least among the educated classes, on my very thin understanding of Chinese history. In this country, until the day before yesterday, we still spoke without irony of the Judeo-Christian tradition, which involved certain concepts about morality, what it meant to be human. It assumed a belief in God, but it turned out you could actually get pretty far along even if you didn't believe in God, okay. And Kissinger is now saying it's all fallen apart. There is no dominant philosophy. This is a serious problem, is it not? There's nothing to integrate AI into. You take his point? It's up to the two of you.

John Etchemendy: Do you wanna claim that?

Fei-Fei Li: You're the philosopher.

John Etchemendy: You're the Buddhist,

Fei-Fei Li: You're the philosopher. I think this is a great... First of all, thank you for that quote. I hadn't read that quote from Henry Kissinger. I mean, this is why we founded the Human-Centered AI Institute. These are the fundamental questions that our generation needs to figure out.

Peter Robinson: So that's not just a question. That's the question.

Fei-Fei Li: It was one of the fundamental questions. It's also one of the fundamental questions that illustrates why universities are still relevant today, right?

John Etchemendy: And Peter, you know, one of the things that Henry Kissinger says in that quote is that there is no dominant philosophy.

Peter Robinson: Yes.

John Etchemendy: There's no one dominant philosophy like the Judeo-Christian tradition, which used to be the dominant tradition.

Peter Robinson: A different conversation in Paris in the 12th century for example, the University of Paris.

John Etchemendy: In order to take values into account, when you're creating an AI system, you don't need a dominant tradition. I mean, what you need for example, for most ethical traditions is the Golden Rule.

Fei-Fei Li: Goes back to Confucius.

Peter Robinson: Okay, so we can still get along with each other, even when it comes to deep, deep questions of value, such as this. Do we still have enough common ground?

John Etchemendy: I believe so.

Peter Robinson: I have yet another sigh of relief. Okay, let's talk a little bit, we're talking a little bit about a lot of things here, but, so it is. Let us speak of many things, as it is written in Alice in Wonderland. The Stanford Institute. The Stanford Institute for Human-Centered Artificial Intelligence, of which you are co-directors. And I just have two questions, and respond as you'd like. Can you give me some taste, some feel for what you're doing now, and, in some ways more important, but more elusive, where you'd like to be in just five years, say? Everything in this field is moving so fast. My impulse is to say 10 years because it's a rounder number. It's too far off in this field. Fei-Fei?

Fei-Fei Li: I think what really has happened the past five years by Stanford HAI, among many things--

Peter Robinson: I just wanna make sure everybody's following you. H-A-I, Stanford HAI, is the way it's known on this campus.

Fei-Fei Li: Yes.

Peter Robinson: Alright, go ahead.

Fei-Fei Li: Yeah, is that we have put a stake in the ground for Stanford, as well as for everybody, that this is an interdisciplinary study, that AI, artificial intelligence, is a science of its own. It's a powerful tool. And what happens is that you can welcome so many disciplines to cross-pollinate around the topic of AI, or use the tools of AI to make other sciences happen or to explore other new ideas. And that concept, of making this an interdisciplinary and multidisciplinary field, is what I think Stanford HAI brought to Stanford and also, hopefully, to the world. Because like you said, computer science is kind of a new field. You know, the late John McCarthy coined the term, you know, in the '50s. Now, it's moving so fast, everybody feels it's just a niche computer science field making its way into the future. But we are saying, no, look broader. There are so many disciplines that can be brought in here.

Peter Robinson: Who competes with the Stanford Institute in human-centered design? Is there such an institute at Harvard or Oxford or Beijing? I just don't know what those-

John Etchemendy: So, in the five years since, since we launched, there have been a number of similar institutes that have been created at other universities. We don't see that as competition in any way, shape, or form.

Peter Robinson: If these arguments you've been making are valid, then we need them.

Fei-Fei Li: Yeah, we see that as a movement.

John Etchemendy: We need that. And part of what we want to do, and part of what I think we've succeeded to a certain extent in doing, is communicating this vision of the importance of keeping the human and human values at the center when we are developing this technology, when we are applying this technology. And we want to communicate that to the world. We want other centers that adopt a similar standpoint. And importantly, one of the things that Fei-Fei didn't mention is that one of the things we try to do is educate, for example, legislators, so that they understand what this technology is, what it can do, what it can't do.

Peter Robinson: So you are traveling to Washington or the very generous trustees of this institution are bringing congressional staff?

John Etchemendy: Both.

Peter Robinson: Both? Both are happening. All right, so Fei-Fei, first of all, did you teach that course in Stanford HAI or was the course located in the philosophy department or crosslisted? I'm just trying to get a feel for what's actually taking place there now.

John Etchemendy: Yeah, I actually taught it in the confines of the HAI building.

Peter Robinson: Okay, so it's an HAI.

John Etchemendy: No, it's a philosophy course.

Fei-Fei Li: It's listed as a philosophy course, but it's totally HAI.

Peter Robinson: He's the former provost. He's an interdisciplinary walking wonder.

Fei-Fei Li: Yeah.

Peter Robinson: And your work in AI assisted healthcare.

Fei-Fei Li: Yep.

Peter Robinson: Is that taking place in HAI or is it at the medical school.

Fei-Fei Li: That's the beauty. It's taking place in HAI, computer science department. The medical school even has collaborators from the law school, from the political science department. So that's the beauty. It's deeply interdisciplinary.

Peter Robinson: If I were the provost, I'd say this is starting to sound like something that's about to run amok. Doesn't that sound a little too interdisciplinary, John? Don't we need to define things a little bit here?

John Etchemendy: Let me say something. So Steve Denning, who was the chair of our board of trustees for many years, and has been a long, long time supporter of the university in many, many ways. In fact, we are the Denning Co-directors of Stanford HAI. Steve saw this five, six years ago. He said, "AI is going to impact every department at this university, and we need to have an institute that makes sure that that happens the right way, that that impact does not run amok."

Peter Robinson: Where would you like to be in five years? What's a course you'd like to be teaching in five years? What's a special project?

Fei-Fei Li: I would like to teach a course, freshman seminar, called The Greatest Discoveries by AI.

Peter Robinson: Oh, really? Okay. A last question. I have one last question, but that does not mean that each of you has to hold yourself to one last answer, because it's a kind of open-ended question. I have a theory, but all I do is wander around this campus. The two of you are deeply embedded here, and you ran the place for 17 years. So you'll know more than I will, including, you may know that my theory is wrong, but I'm going to trot it out, modest though it may be, even so. Milton Friedman, the late Milton Friedman, who when I first arrived here was a colleague at the Hoover Institution. In fact, by some miracle his office was on the same hallway as mine. And I used to stop in on him from time to time. He told me that he went into economics because he grew up during the Depression. And the overriding question in the country at that time was, how do we satisfy our material needs? There were millions of people without jobs. There really were people who had trouble feeding their families. Alright. I think of my own generation, which is more or less John's generation. You come much later, Fei-Fei.

Fei-Fei Li: Thank you.

Peter Robinson: And for us, I don't know what kind of discussions you had in the dorm room, but when I was in college, there were bull sessions about the Cold War. Were the Russians going? The Cold War was real to our generation. That was the overriding question. How can we defend our way of life? How can we defend our fundamental principles? Alright, here's my theory. For current students, they've grown up in a period of unimaginable prosperity. Material needs are just not the problem. They have also grown up during a period of relative peace. The Cold War ended. You could put it differently. The Soviet Union declared itself defunct in 1991. Cold War's over at that moment, at the latest. The overriding question for these kids today is meaning. What is it all for? Why are we here? What does it mean to be human? What's the difference between us and the machines? And if my little theory is correct, then by some miracle, this technological marvel that you have produced will lead to a new flowering of the humanities. Do you go for that, John?

John Etchemendy: Do I go for it? I would go for it if it were going to happen. Did I put that in a slightly sloppy way? Well, no, I think it would be wonderful. It's something to hope for. Now, I'm going to be the cynic. So far what I see in students, or Stanford students, is more and more focus on technology, on technical learning.

Peter Robinson: Computer science is still the biggest major at this university.

Fei-Fei Li: Yep.

John Etchemendy: Yeah. And we have tried, at HAI, we have actually started a program called Embedded EthiCS where the CS at the end of ethics is capitalized, so it's computer science.

Peter Robinson: That'll catch the kids' attention.

John Etchemendy: No, we don't have to catch their attention. What we do is virtually all of the courses in computer science, the introductory courses, have ethics components built in. So a problem set. So you have a problem set this week, and that'll have a whole bunch of, you know, very difficult math problems, computer science problems. And then it will have a very difficult ethical challenge. And it'll say, here's the situation. You are programming a computer, programming an AI system, and here's the dilemma. Now discuss, right? What are you gonna do? So we're trying to bring, I mean this is what Fei-Fei wanted. We're trying to bring ethics.

John Etchemendy: This is new, yeah, relatively new, the last couple of years.

Peter Robinson: Okay.

John Etchemendy: Two, three years. We're trying to bring attention to ethics into the computer science curriculum. And partly that's because they're not, I mean, students tend to follow the path of least resistance.

Peter Robinson: Well, they also, let's put it again, if I'm saying things crudely again and again, but someone must say it. They follow the money. So as long as this valley that surrounds us rewards brilliant young kids from Stanford with CS degrees as richly as it does, and it is amazingly richly, they'll go get CS degrees, right?

Fei-Fei Li: Well, I do think it's a little crude. I think money is one surrogate measure of what is advancing in our time. You know, technology right now truly is one of the biggest drivers of the changes of our civilization. When you talk about what this generation of students talks about, I was just thinking that 400 years ago, you know, when the scientific revolution was happening, what was the conversation in the dorms? Of course it was all young men in Cambridge or Oxford. But that must also have been a very exciting and interesting time. Of course there wasn't internet and social media to propel the travel of knowledge. But imagine there was. You know, the blossoming of discovery and of our understanding of the physical world. Right now, we're in that kind of great era of technological blossoming. It's a digital revolution. So the conversations in the dorm, I think, are a blend of the meaning of who we are as humans, as well as our relationship to these technologies we're building. And so it's a...

Peter Robinson: So properly taught, technology can subsume or embed philosophy, literature?

Fei-Fei Li: Of course, it can inspire. And also think about it: what follows the scientific revolution is a great period of change, of political, social, economic change. Right? And we're seeing that.

Peter Robinson: Not all for the better.

Fei-Fei Li: Right, and I'm not saying it's necessarily for the better, but we haven't even peaked in the digital revolution, and we're already seeing the political and socioeconomic changes. So this is, again, back to Stanford HAI: when we founded it five years ago, we believed all this was happening, and this is an institute where these kinds of conversations, ideas, debates should be taking place, and education programs should be happening. And that's part of the reason we did this.

Peter Robinson: Go ahead.

John Etchemendy: Let me tell you. Yeah, so as you pointed out, I just finished teaching a course called Philosophy of Artificial Intelligence,

Peter Robinson: About which I found out too late. I would've asked permission to audit your course, John.

John Etchemendy: No, no, you're too old. So, about half of the students were computer science students or planned to be computer science majors. Another quarter planned to be symbolic systems majors, which is a major that is related to computer science. And then there was a smattering of others. And every one of them at the end of the course, and I'm not saying this to brag, every one of them said, "This is the best course we've ever taken." And why did they say that? It inspired. It made them think. It gave them a framework for thinking, a framework for trying to address some of these problems, some of the worries that you've brought out today, and how do we think about them and how do we, you know, not just become panicked because of some science fiction movie that we've seen or because we read Ray Kurzweil.

Peter Robinson: Maybe it's just as well, I didn't take the course. I'm sure John would've given me a C- at best. Alright.

John Etchemendy: Yeah, grade inflation. So, it's clear that these kids, the students, are looking for the opening to think through these things and to understand how to address ethical questions, how to address hard philosophical questions. And that's what they got out of the course.

Fei-Fei Li: And that's a way of looking for meaning in this time.

Peter Robinson: Yes, it is. Dr. Fei-Fei Li and Dr. John Etchemendy, both of the Stanford Institute for Human-Centered Artificial Intelligence. Thank you.

Fei-Fei Li: Thank you Peter.

John Etchemendy: Thank you Peter.

Peter Robinson: For "Uncommon Knowledge," and the Hoover Institution, and Fox Nation, I'm Peter Robinson.
