While the concept of robots supplanting humans may seem the stuff of science fiction, it is in fact advancing rapidly in all sorts of real-world applications – healthcare, manufacturing, even warfare. Allison Okamura, a Hoover science fellow, Stanford University engineering professor and contributor to this year’s Stanford Emerging Technology Review (SETR), discusses robotics’ growth in present-day and future societies. Among the topics discussed: how the 10 science and technology reports within the SETR review are interwoven; the integration of robotics into everyday life; a “100,000-year data gap” and massive shortage of training data for physical robot manipulation; Elon Musk’s new Optimus Gen 3 model and the feasibility of robotic workforces; the public’s comfort level with autonomous technology (would you take a Waymo to the airport?); what the future may hold (better robotic “brains” and “bodies” and soft shape-changing fabrics, greater intelligence and physical autonomy, improvements in robotic hands and humanoids’ dexterous manipulation).
Recorded on February 4, 2026.
- First Lady Melania Trump recently had a warning for humanity. In her words, quote, "the robots are here." She's right, the robots are here. But as you're about to learn from a Stanford engineering professor who's also a pioneer in the field of robotics, that's not necessarily a bad thing. We'll find out why in this episode of Matters of Policy and Politics, and how robotics factors into the realm of emerging technology. It's Wednesday, February 4th, 2026. You're listening to Matters of Policy and Politics, a podcast devoted to the discussion of policy research here at the Hoover Institution, as well as issues of local, national, and geopolitical concern. I'm Bill Whalen. I'm the Hoover Institution's Virginia Hobbs Carpenter Distinguished Policy Fellow in Journalism, but not the only fellow who's doing podcasting these days. I recommend you go to our website, one link in particular, which is hoover.org/podcast, and check out our options, our menu; we have a remarkable series of podcasts to offer. That includes the audio version of the GoodFellows broadcast that I have the great honor of doing with the remarkable Niall Ferguson, John Cochrane, and H.R. McMaster. So definitely check that out. Now, today I'm sitting here in my office on the campus of Stanford University, located at the northern end of Silicon Valley. For those who are not aware of Stanford, it is a relatively young university by American standards. Stanford opened its doors in 1891, which is 25 years after Cornell, the youngest of the Ivy League colleges, and about 255 years after Harvard came into existence. What about Stanford? Well, if you're familiar with Stanford, you know the Cardinal red. You know we have one of the truly strange mascots in college sports. But you also know that Stanford is synonymous with technology: tech developed at Stanford in the 1930s laid the foundation for the likes of Hewlett-Packard.
Go into Stanford's dorm rooms and you'll find them chock-full of young men and women who are planning great innovations for tomorrow; they're basically startup incubators. Now, the Hoover Institution likewise has an abiding interest in technology, and in fact Hoover fellows lend their names and insights to a product called the Stanford Emerging Technology Review, or SETR for short. What is SETR? It's the first-ever collaboration between the Hoover Institution, Stanford University's School of Engineering, and Stanford's Institute for Human-Centered Artificial Intelligence. Its goal: transforming technology education for decision makers in both the public and private sectors so that the United States can seize opportunities, mitigate risks, and ensure the American innovation ecosystem continues to thrive. Joining us today to discuss her role in this endeavor, the third such emerging technology review, it's my honor to welcome to the podcast Dr. Allison Okamura. Allison Okamura is a science fellow at the Hoover Institution, as well as the Richard W. Weiland Professor of Engineering at Stanford University in the mechanical engineering department, with a courtesy appointment in computer science. She's a deputy director of the Wu Tsai Neurosciences Institute at Stanford, a founding member and executive committee member of the Stanford Robotics Center, and director of graduate studies in mechanical engineering at Stanford. Allison, thanks for coming on the podcast. It's exhausting just reading your resume. My goodness.
- Thanks for having me.
- So let's talk a little bit about you before we get into SETR. I'm very curious as to how one finds their pathway into technology, and robotics in particular. I was gonna make a joke about you being Hoover's science officer, or Mr. Spock, if you will, but how did you drift into robotics? As a young girl, were you fascinated by science, or did you just stumble upon it one day?
- Yeah, so I was always interested in science and math, and in particular physics, but I actually had no interest in robotics specifically growing up. That's something I didn't really get into until I was an undergraduate student at UC Berkeley and realized that robotics encompassed the kind of physics that I was interested in, and the fact that it was extremely tangible. It also allows you to be creative, like a designer.
- So per your resume, Allison: you did a bachelor's in mechanical engineering at UC Berkeley, right?
- That's right.
- Master's and PhD in mechanical engineering at Stanford, right?
- Yes, California native here.
- Okay. So I did a little homework here. In 2024, Allison, women made up about 14% of the engineering workforce in America. As you were going through these various academic disciplines, how many women were in the classroom?
- Yeah, I've definitely seen it change over the years. As an undergraduate, you know, it might be just a couple in a class of 50 or so, and that has changed markedly over the years. It does sort of depend on your discipline; some fields have fewer women, some have more, even within engineering. The other thing that has changed is that we have a lot more women faculty members as well in mechanical engineering, which historically has had fewer women faculty. Probably about 30% or so of our department is now women. So that's been a great change in terms of all of the different types of activities that go on in the field.
- And what would you advise for young women in terms of getting them interested in STEM earlier than college, in the K-12 process?
- Yeah, I mean, a lot of the belief is that's where the pipeline starts, right? Even though I wasn't interested explicitly in robotics when I was growing up, I definitely had an amazing family that encouraged me to, you know, do whatever I was interested in. And I think a lot of young women are getting that message now, and the opportunities are out there.
- Alright, let's talk a little bit about the Emerging Technology Review. Whereas we just talked about the challenge of getting women into science, it is not a challenge to get women involved in SETR. Three co-chairs. One is Amy Zegart, who is a Hoover senior fellow. She's one of my favorite people to interview on these broadcasts because she just has a fascinating portfolio: intelligence, emerging tech, national security. She wrote a great book about spies, of all things, not long ago.
- Yes, I had her sign my copy.
- There you go. She's a rock star. Our second co-chair, that's Jennifer Widom, who is your boss.
- I guess she is, you know, in academia we like to pretend that we don't have bosses, but yeah, she's the dean of the School of Engineering.
- Okay. And then the third co-chair is a promising young scholar named Condoleezza Rice, who we expect great things of.
- Right.
- So here we are in 2026. This is the third such Emerging Technology Review. And here's the question, let's talk about it, Allison, in terms of this point in history: a great-powers competition is going on, which takes many forms. There's a military competition, there is an economic competition, but there's also a science competition, which has to do with AI and the question of whether free nations or autocracies dominate that emerging technology.
- Right. And I'd say this is the reason why I got involved in SETR, the Stanford Emerging Technology Review: because the social, economic, and political implications of technology are just more important than ever. For myself, you know, coming in as just a regular engineering professor, I didn't do anything related to policy or thinking about these issues until quite recently. And so the Stanford Emerging Technology Review drew me in and got me involved in Hoover's Technology Policy Accelerator.
- Alright, now if you read the report itself, you'll find that there are actually 10 science and tech reports within the larger review. Those categories are, and I'll read them one through ten: artificial intelligence, biotechnology/synthetic biology, cryptography/computer science, energy technologies, materials science, neuroscience, quantum technologies, semiconductors, space, and robotics. So, question, Allison: are these the 10 core issues at the heart of emerging technology, or did they just cut off after 10?
- Yeah, so obviously, as a roboticist, I think robotics really is one of those top 10 areas, but those areas are not totally static. We've changed them a bit from year to year in terms of what we think is most important for the conversation right now. There are a lot of different ways to slice and dice technology and put it into categories. The feeling with the set that gets picked every year, I think, is that, number one, there's a faculty member behind it who really wants to drive and push forward and explain what's going on in these fields; that it's one that policymakers have a lot of questions about; and also that they're things that are rapidly changing. So I expect to see the top 10 picks that we do from year to year evolve as we go along.
- And how does robotics complement these? Could you pair robotics with one or two of these specific disciplines?
- Oh yeah. Robotics is a very broad, interdisciplinary field. So for example, robotics in space is one, right? In order for us to go far away, we need some aspects of autonomy; we can't necessarily always have communications. And if we want to inhabit the moon, the thought is we'll need a lot of robotic technologies in order to scope things out and set things up. So space is clearly one. I don't think it's in this year's report, but fusion was a topic in one of the earlier reports, and that's a case where they wanted robotic technologies to be able to go in and reset fusion experiments in order to turn around those studies much more quickly. There are also other fields which implicate robotics. Neuroscience impacts AI, which impacts robotics. Materials science, which has also been a topic, affects robots because it affects what kinds of materials we can use to create robot structures; not everything is traditional rigid metal. So a lot of these areas interlink, and that's one of the things the Stanford Emerging Technology Review tries to do: show where these interdependencies are.
- Now I'm gonna warn you, I'm a child of the sixties, Allison, and when I think robots, I think Rosie on The Jetsons, for example, or I think Robot B-9. And if our listeners know who Robot B-9 is, they've watched way too much television. That's the odd-looking robot that was on Lost in Space, remember? "Danger, Will Robinson!"
- Yep.
- Alright, so danger, Allison: I'm now gonna ask you some rather pedestrian questions about robotics, so please bear with me. First question: what is a robot, exactly? How do we define a robot?
- Yeah, you know, that's not as simple a question as you might think; people disagree. The way I think about a robot is as a physical system that connects information to action. A robot connects information to action by taking in sensing about the world or about its own state, doing some kind of computation on it, and then it has to have a physical body in order to create an action. You'll sometimes hear people refer to pure pieces of code as robots or bots, but to me that's not actually a robot.
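Okamura's definition, a physical system that connects information to action, maps naturally onto the classic sense-compute-act loop. The sketch below is purely illustrative (it is not from the conversation, and all names and numbers in it are invented): a one-dimensional "robot" senses its position error, computes a simple proportional command, and acts on the world.

```python
# Illustrative sketch of a sense-compute-act loop, matching the definition
# of a robot as a system that connects information to action.
# All names and values here are hypothetical, chosen only for clarity.

def sense(world):
    """Take in information about the world: here, the distance to a target."""
    return world["target"] - world["position"]

def compute(error, gain=0.5):
    """Do some computation on the sensed information (a proportional controller)."""
    return gain * error

def act(world, command):
    """Use the 'physical body' to create an action: move the position."""
    world["position"] += command

world = {"position": 0.0, "target": 10.0}
for _ in range(20):  # repeat the sense-compute-act cycle
    act(world, compute(sense(world)))

print(round(world["position"], 2))  # the loop drives the robot toward the target
```

By this definition, a program with no `act` step, pure code with no body, would be a "bot" rather than a robot.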
- Okay. Second pedestrian question: the history of robotics. Is there a godfather or a godmother of robotics?
- Oh, well, there are probably many. Maybe because it's striking, since at the time there were very few women, there's probably a godmother of robotics, who would be Ruzena Bajcsy. She did a lot of work combining robotics with computer vision, using what we think of as computational sensing: how do you take sensors that are more complex, draw information about the world, and use that to drive robots? And the godfather? Well, there are probably a lot more in that category, so I won't name names.
- And in terms of the timeline of robotics, is it a more recent science? Can we go back in ancient times and find the Greeks and Romans talking about robots? Or is this more of a product of modern science and modern science fiction?
- Science fiction? Yeah, that's a great question. Well, if you go by my definition, which requires some computation, I would say modern robotics is somewhere around 60 to 70 years old. But automata, which were purely mechanical systems, like player pianos and things like that, which don't have any intelligence and can't interact with the world, those have captured people's imaginations for hundreds of years. So I think the idea of automata has captured human interest for a very, very long time. But it's only since the advent of modern computing that you can actually have robots take in information about the world, process it, and then make an action.
- Okay, you're showing a lot of patience with me, but here we go. Here's another one: what is a soft robot? That sounds almost oxymoronic.
- So you've hit on a personal research interest of mine, which is soft robots. And yeah, this is hard for people to picture, because mostly
- You think metal and steel
- Yeah, exactly. Metal and steel: large robots that can move around car doors in factories, or most robots that you see in the movies, right? C-3PO and R2-D2. They're made of metal and very rigid. So this newer generation of robots, known as soft robots, they have all the same properties of robots I mentioned before, but the materials are soft and flexible. For example, if you wanted a catheter that could go into the body, or something that could be used for minimally invasive surgery, you don't want a big, rigid, stiff thing. You would like something that can be sort of a continuum and be flexible, and then still have all the same abilities to be smart and do physical work on the world. So that's been an area of robotics which has been growing in the last couple of decades, in parallel to this new generation of artificial intelligence.
- And finally, what is haptic technology? H-A-P-T-I-C haptic technology?
- Yeah, the word haptic. I'll give you an SAT-prep way of thinking about it: haptic is to touch as optic is to vision.
- Haptic is anything having to do with the sense of touch. For haptic technology for robots, there are two types. One is giving robots a haptic sense, allowing robots to feel distributed forces and be able to touch things. The other is haptic technology to provide feedback for humans. These are technologies that allow you to feel virtual environments and have more immersion than just a visual experience in a virtual world. That same haptic technology can also be used when you teleoperate robots. Teleoperation is when a human sits and usually manipulates some fancy joysticks to control a remote robot, and ideally that human teleoperator should feel what the robot feels in order to do a good job with whatever remote task they're doing. So there's the type of haptics that gives robots their own sense of touch, and then there's the type of haptics that gives humans touch feedback, like an artificial sense of touch.
- Is it as good as human touch? Is it the same?
- Oh, unfortunately, neither is. You know, if you watch computer graphics in modern movies, it's hard to tell the difference now between something that's been generated by computer graphics and something you might see in real life. Touch, on the other hand, has been a lot more challenging; we haven't passed that test. It's been very difficult to get robots to have enough sensing capability to come anywhere close to rivaling human abilities. And if you try to display an artificial sense of touch to a human, it's still pretty easy to tell the difference between that artificial touch feedback and touching real objects.
- So it sounds like we're still years away from robotic dentistry.
- Well, this is the open question, and I would say there's a big debate going on in the field right now. There are folks designing autonomous robots and saying that you don't need a sense of touch; all you need is vision, and also proprioception, which is a person's or a robot's sense of where its own body is in space. There are folks who are betting a lot on the idea that you can control autonomous robots without even needing a sense of touch. I believe, and I think dentistry is a great example of this, that there are going to be a lot of tasks where it is too risky to have a robot, let alone a human, try to do the task with no sense of touch. But that's something I would say is an open debate in the field.
- It ties into something else I wanna get into in a few minutes, and that is the public's comfort with the technology. A lot of people are just terrified of going to the dentist, and if you're already uncomfortable with the dentist, I'm just not sure how you'd feel sitting in that chair when in comes R2-D2 to work on you. I'm not sure a lot of people could handle that situation.
- Yeah. Well, there is a whole subfield of robotics called social robotics, which is trying to understand how you make robots palatable to humans: how do you make them seem as capable as they are? And, maybe more dangerously, sometimes they might seem more capable than they actually are. People look a lot today at humanoid robots, and that humanoid form factor, as my colleague Rodney Brooks likes to say, kind of makes a promise about what the robot can do. But these days most humanoid robots don't deliver on that promise. And when the way a robot looks makes promises it can't keep, in terms of its performance or its safety, I think that is really dangerous for our field, because if people lose trust, it will be very difficult to get it back.
- Let's talk, Allison, about what you wrote for SETR. There are three takeaways here, and I'm gonna read each takeaway and ask you to explain why you wrote what you wrote. Here's takeaway number one, and I quote: "Artificial intelligence holds significant potential to advance complex robotic systems, but the speed of future advances will depend on the availability of high-quality training data and the systematic integration of data-rich foundation models, simulated interactions between robots and their environment, and understanding of the real physical world."
- Yeah, right. So this is alluding to the current trend in making autonomous robots, which is to have the intelligence of these robots driven by models created from collected data. And for this, let me make a comparison to the large language models that are used in modern chatbots. If you go to ChatGPT and type in a query, it uses incredibly large statistical models, based on a huge corpus of data recorded from lots and lots of text, in order to make a prediction about how it should respond. Now imagine this for a robot. We need to have the same kind of data set from which to draw if we're going to use that same approach. But now that data has to be about motions in the world, and actions, and what results from those actions. It has to be data that's physical and visual and, hopefully, also touch-based in nature. And we're not already sitting on a huge pile of data like that. There are a lot of videos on the internet, but of course that's nothing compared to the amount of text that we already had in the world going into technologies like ChatGPT. So for robots to have the same sort of physical capability that ChatGPT has in terms of conversational ability, we need to collect more data, and some roboticists call this the "100,000-year data gap," because they predict that's how long it would take to collect all the data needed to make robots as physically smart as chatbots are linguistically smart. And there are potentially some ways around this other than just taking the data from physical robots and teleoperated robots. We mentioned simulation in that takeaway, because you could potentially have virtual environments where you simulate physical interactions, and Jensen Huang of Nvidia has said that this is going to solve the data problem for robotics.
I'm a bit skeptical about that, because making perfect virtual environments that capture all the nuances of physical interaction in the real world is just technologically really difficult to do. So it may help us and speed us up, but we'll need to take more data, and we'll need better simulations. There's a lot we're going to need to do to get to autonomous robots at the level that people are expecting now that they've seen what these chatbots can do.
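The data-driven approach described above, learning a robot's behavior from recorded demonstrations the way chatbots learn from recorded text, can be illustrated with a deliberately tiny sketch. Everything below is hypothetical: a fictional teleoperator whose rule is action = 0.8 × observation generates a handful of demonstrations, and a one-parameter "policy" is fit to them by gradient descent.

```python
# Toy illustration (not from the SETR report) of learning a robot policy
# purely from demonstration data. The demonstrations come from a
# hypothetical teleoperator whose rule is action = 0.8 * observation.

demos = [(obs, 0.8 * obs) for obs in [0.5, 1.0, 2.0, 3.5, 5.0]]

w = 0.0                      # the single learnable parameter of the policy
for _ in range(200):         # gradient descent on mean squared error
    grad = sum(2 * (w * obs - act) * obs for obs, act in demos) / len(demos)
    w -= 0.01 * grad

print(round(w, 3))  # recovers roughly 0.8 from the demonstrations alone
```

The sketch also hints at why the data gap bites: five demonstrations suffice here only because the underlying rule is one number, while real manipulation policies have billions of parameters and need correspondingly vast amounts of physical interaction data.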
- Are there certain things, Allison, that a robot will never be able to do that a human can? I'm thinking, for example, of the fight-or-flight response.
- That is an interesting question. If you go down to any individual specific task, I think it's quite likely we can have robots that learn from humans, learn from demonstrations, in order to mimic that task.
- Yeah,
- I think the challenging thing will be to string together all of the myriad complex types of activities that we do, both from the perspective of the intelligence and also the bodies. Part of the reason we're interested in soft robotics is that biological systems, humans, are soft. We have some hard components, but we're mostly soft, and robots really won't be able to do the same things that humans can do unless they're somewhat soft and compliant and adaptable as well. So in general, maybe someday robots will be able to do almost everything people can do. But stringing all of those tasks together with one robotic system and one robot brain, I'm not sure if that will ever happen.
- Okay. Your second takeaway in your essay, quote: "Humanoid robots show promise for specialized industrial and healthcare roles, although widespread adoption of them faces challenges linked to their cost, technical complexity, energy efficiency, safety, and training data quality."
- Yes. So humanoids, there's a lot of hype right now, right?
- Let's pick this apart bit by bit. So first of all, define: what are specialized industrial and healthcare roles?
- Right. So when people imagine a humanoid robot, they imagine a robot that can do everything a human can. But typically what is really needed in industry is, for example, an assembly task. You have multiple components of a system, say a car; you need to be able to move those pieces together and connect them. Maybe a little bit of dexterity might be required for some subtask, but ultimately you have a pre-planned activity where you know you need to fit some parts together in a particular way. That's a typical industrial task, and it's something that robot arms can already do. If you have a humanoid robot, it might be able to do that in an environment that is not purely designed for robots. A big argument for humanoids is that if you have a scenario that's already been designed for humans, say a human has to sidle in between some parts in order to get to a particular location, and the humanoid has that same form factor as the human, then the idea is that the humanoid will be able to do that too. But because humanoids are not as flexible and diverse as humans right now, I think they'll be limited to a smaller subset of tasks, which is the point we were making.
- So I'm envisioning a team of robots on an assembly line putting together a car or a television or something like that.
- That's right. I think the difference between a humanoid robot in that scenario and the classic robot arm, a kind of disembodied arm, which is currently used in these types of tasks, is that a humanoid robot would also have legs that let it move around and get its body to a different position. And if you have legs rather than wheels, you might be able to turn sideways and sidle through tight areas. So it would have more flexibility than today's stationary, fixed robot arms. But what I'm saying is that they're still not at the same level of flexibility that humans have.
- Okay. And now the healthcare side of things; you talk about healthcare roles.
- Yeah. So in healthcare there are a bunch of interesting topics and potential applications. Let me start with surgery, because surgery is a place where robots really have made inroads, but not autonomous robots. These are robots that are teleoperated. As I mentioned, in teleoperation a human, in this case the surgeon, sits down and manipulates some fancy joysticks, and those joysticks send commands to a robot, which might have much smaller hands, essentially, than human hands, and which can go into a patient's body in a very minimally invasive fashion. Going in through a very small hole reduces recovery time and reduces the chance of infection; there are a lot of advantages. Those types of teleoperated robots are not super intelligent; they mostly follow the direct commands of the human operator. It's almost more like they make the surgeon superhuman, giving them superhuman capabilities, or maybe even subhuman capabilities in terms of size. And so for humanoids, you have to ask: would you want a humanoid to do surgery when we already have surgical robots with the advantage of a scale that's smaller than humans, so that they can have that minimally invasive property? For humanoids to do a surgical task probably doesn't make a lot of sense. But then there are other medical tasks. For example, with growing elderly populations, especially in some countries, people are interested in how autonomous robots can help older adults age in place and stay in the home, rather than having to go into a nursing home or pay the huge expense of a human caregiver. Of course, robots are also very expensive, but the thinking is that eventually the economics will work out. In those cases it might make sense to have a humanoid robot in the home, both for reasons of flexibility and social acceptance. But we'll definitely need to be able to bring them down in cost.
And that means addressing supply chain issues. We need to have better safety checks than what we have in robots today. There's a lot of work that needs to be done.
- So if I wanted a robot to take care of my ailing father or mother, how much would, let's call it the Allison 3000, how much would the Allison 3000 go for these days?
- Yeah, well, there's one local company here in Palo Alto that is saying you can get their robot for about $20,000. At this point, though, it's not an autonomous robot; it would be teleoperated. I don't think they're quite on delivery yet; the hope is to have delivery by the end of the year. But there are also still open questions about to what extent that robot will actually be able to, you know, help with the tasks that really need to be done to help an older adult in the home.
- Is having that robot, Allison, like having another EV? Does the robot have to go in the garage at night and get charged?
- Oh, the robot definitely needs to get charged. And this is another one of the challenges: unfortunately, it probably doesn't just have to get charged at night. It might have to get charged every hour, depending on what it's doing. So battery life for robotics is another challenge. And this is also true for quadrupeds, for example, which are of interest to the military. There was a quadruped in a Washington, D.C. parade last year which, you know, had to get swapped out and charged pretty often. So some of these things are pretty far from real-world application because of that practical challenge.
- I'm also curious, Allison, about liability issues. Let's go back to healthcare. For example, if I go to Stanford Hospital and I need surgery, and there's going to be a robot involved in the surgery, do they put a form in front of me at any point saying, oh, by the way, are you cool with having a robot cut you open?
- Yeah. So currently there is robot-assisted surgery, and hundreds of thousands of these procedures are done a year in the US alone, and even more around the world. And definitely patients have to sign off on whatever particular procedure is being implemented. But the key thing to note is that most of what's being done in the clinic today is not at all autonomous. There's a human surgeon basically behind the wheel, driving the robot. I think an interesting aspect of this has been how interested and willing people are to use these robots, because you can actually see some tangible benefits: because the robot's fingers and hands can be much smaller than the surgeon's, and therefore more accurate and less invasive, people really do see the advantages of using these robotic technologies. On the other hand, there's an interesting burden on the part of the surgical robot manufacturer. If something does actually go wrong with the surgery, which it sometimes does, who is to blame? Who's at fault? At the moment, since we don't have autonomy, you can say that either the robot manufacturer or the surgeon did something wrong. And if something goes wrong, patients would much prefer to sue the Fortune 500 company that made the robot than the friendly human surgeon they met. And yet the surgical robotics companies that are well established today have survived this very well, because they have a kind of robot version of a black box, like a black box for an airplane, where everything that was done is recorded, and they're able to show, based on that data, what was actually done, and show that the robot didn't mechanically fail.
I think the challenge is going to be that when we have more autonomy coming into the operating room, being able to show where the fault lies if something went wrong will become much more challenging.
- Now, Allison, could a robot do my job? And here I hope Secretary Rice, my boss, is not listening. But let's be honest: a lot of my job is taking information, putting it together, and then figuring out a way to ask you about it. So there's a bit of creativity involved, but ultimately it's processing information. So one would think a robot could come along. We'd have to work on the voice; maybe a robot would have a better voice than me, maybe give it George Clooney's voice, I don't know. But it seems to me that a robot could possibly become a podcast host.
- So, you know, we're sort of separating out the physical aspect of the robot from the intelligence part. There are already a lot of AI-generated news reports and such, and at the moment, for me at least, they're pretty distinguishable from things that are produced by humans. But I agree that that's not always going to be the case. I think the difference is going to be intentionality, right? So while maybe an AI system can run a podcast, and the receiver of that podcast might not know or notice the difference, you, Bill, running this podcast, you have an idea of what you wanna talk about and how you wanna say it and what kinds of things you wanna dig into, right?
- And?
- The robot is not going to understand what you wanted out of that podcast. So while maybe a robot could create a product that could be digested by the public, there's no way it could be the same as what it would be if you made it, because it can't tell what it is that you are trying to get out of the interview.
- They really have to work on the vocal aspect of it, Allison. Maybe I'm watching low-end AI, but so often you go onto Instagram or Facebook or some app and you see something that's clearly AI, and the voice always talks like this: it is clearly a very strange computer thing driven at you. So they need to work on inflections and how people actually talk.
- Yeah, it's the voice. But even the text, too; it can get harder to tell on the receiving end. But if you are a creator and you asked a robot, you know, do a podcast for me in the way that I would do it, you would notice the difference. It would not be to your satisfaction.
- Question for you, Allison: when's the last time you were in Fremont, California?
- Oh, just last week.
- Yeah. Okay. Maybe you know what I'm getting at here. Fremont.
- Yes, I drove right by it.
- Yes. So for those not familiar with California geography, Fremont, California is on the other side of San Francisco Bay, about 22, 23 miles as the crow flies from the Stanford campus. Fremont happens to be the home of Tesla's manufacturing plant. And two Wednesdays ago, Tesla began production of its Optimus Gen 3 humanoid robot. This is Elon Musk targeting up to 1 million units a year from this site alone. About Optimus Gen 3: it's built as a general-purpose factory helper. It stands about five feet, seven inches and weighs about 125 pounds, so it's not a menacing figure. It's powered by a 2.3-kilowatt-hour battery that can deliver 10 to 12 hours of work on a single charge, meaning it can work a shift. So what is Elon Musk up to here, Allison?
- So Elon Musk, as well as other companies, is interested in essentially creating this new market for humanoid robots to replace human workers. And the idea is that they would be able to work longer, they would be cheaper, not take as much pay, and potentially also be more accurate. And that's basically the vision, right? That if you have a factory or a setup that's already designed for humans, you would swap out a human worker for a humanoid. And that's the bet that they're making. And some industry analysts that I've read believe in that vision. I'm personally quite skeptical, because I think we have a choice here about what we want from our robots. It's, first of all, very difficult to create, as we talked about, a humanoid that has all the capabilities of a person. And I think people will be disappointed in the performance of these robots, at least in the near future. So I think it'll take a long time to get there. Another issue is going to be where the hardware and the components are going to come from. Robots are really a component-based technology. You need to buy motors, you need to buy sensors, you need to buy structural materials. And at the moment, many of those components are having very significant supply chain issues. And the cost of these robots is going to skyrocket if we require that all those components are made in the USA, or even made in what are considered allied countries. And that's gonna change this cost-benefit ratio. So I believe what is probably the better economic decision for many factories would be, instead of trying to just replace a human worker with a humanoid, to redesign and have multiple purpose-specific robots rather than one general-purpose robot. And that will give you a lot more flexibility in terms of what components you can use.
- We look at Musk and we think of him as a game changer. He builds the Tesla car and we think, wow, that really changes the auto industry. He does SpaceX, and he's now changing how we go into outer space. Is he fundamentally changing robotics, or is he just going with the flow?
- I would say in robotics, in terms of ideas, it's going with the flow. Humanoid robots and autonomous vehicles, right? These have been research topics for many decades, and there are already multiple companies that have humanoid robots with more capabilities than what has been released in the videos I've seen, at least from Musk and company. But what he has that a lot of these other groups don't have is, I guess, a whole ecosystem of products, right? Tesla cars can collect a lot of data, for example. There's already a greater understanding of the supply chain issues that are gonna be challenging for all these humanoid companies; this is something that already has to be dealt with for Tesla vehicles. So he definitely has some advantages, but they're not in terms of the basic ideas and understanding of the humanoid technology. There are a lot of great front-runner companies in this area.
- Now let's go to your third takeaway from your writing for SETR. And you write, quote: robots may be useful for improving the US manufacturing base, reducing supply chain vulnerabilities, delivering elder care, enhancing food production, tackling the housing shortage, improving energy sustainability, and performing almost any task involving physical presence. So what is it that robots can't do? You've given them a lot of credit there.
- Yeah. Well, I think the things that robots can't do are some of the things I was mentioning about being able to multitask, right? Being able to switch between a wide variety of tasks, because (a) they don't have the bodies for it, and (b) they also don't yet have the brains for it. And in some ways these days, with the fast development of LLMs and potentially improving foundation models, those same types of models for robotics, the brains are getting there faster, but the bodies, like I said, are still lagging behind. It's hard for robots to get the same kind of power and ability that humans have, given the energy we put in versus what we get out. So for a lot of these types of tasks, when we mention them, you might picture those tasks being done in the same way that humans do them. The trick to using robots is figuring out how those tasks can be done differently by robots. How do you use the things that robots are really good at, being precise, using certain sensors which might be different from the ones that humans use, in order to achieve those tasks? So it's not only about taking robots and slotting them into the existing way of doing things, but rethinking how those things are done in the first place. And that's where the strength of robots will come in.
- Okay. So how would robots affect food production? Because when I hear food production, I think, okay, they work in the fields.
- Yep. So many different aspects of food production. Let me take one, which is being able to harvest fruit when it's ready. This is something that many startup companies have tried to tackle. One thing that robots can do is very directly target weeds, and do this not just during the daytime but also at night, so do it 24/7; be able to search for and target things that need to be removed from the workspace. They can use additional sensors beyond what humans have to determine the readiness of fruit to be picked. And they can also gather more material at once, lift heavy loads, for example, in ways that humans can't. But the truth is that these robots probably aren't gonna be operating completely by themselves, right? They're gonna be collaborating with humans. And the trick in these tasks is figuring out what the things are that humans are really good at, and how you get robots to assist them with the things that are hard for humans to do.
- And you mentioned energy. What is the connection to energy?
- I mentioned fusion earlier. So one challenge with environments where you're trying to do fusion is that you can't have a human anywhere near there; that's not an environment that's hospitable for people. And so in this case, you probably don't want soft robots, actually; you probably want robots made of very robust materials that can go in and work in these spaces where humans can't go. So this is definitely looking forward into the future, but whenever there's a place that's too hot or chemically dangerous for human operators, this is somewhere robots could go. And they might not even need to be completely autonomous robots, right? These could be robots that are teleoperated and driven by human operators, if we think the robots are not ready or safe to be operated autonomously.
- China, Allison. So China is developing robots that do rather benign things, like working the fields and loading dishwashers, I saw as an application. But they're also developing motion-controlled combat robots that mimic a soldier's combat moves in real time. So this kind of gets us back to the idea of splitting the atom back in the 1940s: is this ultimately gonna be a peaceful or a belligerent thing? Are we gonna see robots take a more forward position in the future of warfare?
- I mean, I think it's already happening, right? You look at Ukraine and Russia and the use of drones, for example.
- Right?
- So already there's this capability to put more machines into places where you don't want to send humans. And so inherently you are kind of switching from humans on the ground or in the air to machines. I think the big question that folks have is to what extent these systems will be autonomous, right?
- And?
- If they're not autonomous, then that means they're teleoperated by a human, and then you're kind of limited to one-to-one: one human drives one robot. You could maybe go to what we call a supervisory control scenario, where you have one human operating several robots, right? But especially when you're thinking about things like drones, there's a potential to have swarms, right? Tens, hundreds, maybe even thousands of robots. And in that case you want them to have a bit of a hive mind, so that you don't have to have one human controlling each robot. And I think that's the place where people are really trying to understand the implications for autonomy and decision making. Because once you go towards autonomy, since you can't have a human controlling every single one of those robots, there may be some kill decisions being made by those autonomous systems. And a lot of folks find that very problematic. But it's going to be a tricky line to draw where we go from autonomy that's just there to enhance human control to autonomy that actually makes life-and-death decisions.
- Okay. And finally, Allison, five very scary words: the robots are taking over. And you're smiling and laughing, but pop culture has told us for decades now: build a robot, and it's gonna turn on you eventually.
- Yeah, I mean, I think that's up to us.
- I mean, I am joking about it, but it's part of the narrative with artificial intelligence. At what point do you develop an intelligence that turns on you?
- Yeah, so again, we kind of have to split between the intelligence part and what robots with bodies can actually do. And I think people are rightfully concerned in that an AI that is purely information-based maybe can send commands and such, but ultimately can't physically do something that would harm someone. But once that intelligence is connected to a robot, there is the potential for unanticipated physical action in the world. I think, though, that we have the control to design and put safeguards on these systems if we want to have them. I think the question and the danger is going to be bad actors that create robots with behaviors that others don't want, or find unethical, or find dangerous. So the robots are coming, but ultimately they are of our own design, and we can't absolve ourselves of that responsibility. We can't say, oh yes, AI is just happening, the robots are just happening. Ultimately that's our decision. And I think our biggest challenge is going to be: if we have adversaries that are using robots in ways that we find unethical or inappropriate, are we going to do the same in order to combat that, or are we gonna find another path?
- Question for you: if you ordered a car to take you to the airport, would you be less likely to get into a Waymo, or less likely to get into a car driven by a robot?
- Oh, well, in my view it's actually one and the same. To me, the Waymo is a robot, right? It's an autonomous car; it is driving autonomously. If you took a humanoid robot and sat it in the front seat and had it drive that Waymo, I'd probably be a little less likely to get in, because I think right now humanoids are not at the same place of safety and reliability as a Waymo. So I'd go for the Waymo; I don't want a humanoid middleman driving the car for me. But ultimately I see the Waymo car as a robot itself as well.
- Okay. And final question, I promise, Allison. So here we are at a free-market think tank as we talk about developing science, technology, and robotics. What is the role of government, in terms of both investment and oversight?
- Yeah, I'd say both encouraging the technology and then regulating it are super important roles. I feel that we need a lot more investment in robotics in the US, both at the basic research level, because that's the seed corn that is going to provide us with leadership in the field 20, 50 years from now, and also in terms of helping our companies both acquire and use robots to improve manufacturing, as well as develop the robots themselves. It's gonna be difficult for us to adopt robotics in this country to speed up our manufacturing capabilities if we're not actually making the robots here as well. And then on the regulation side, that's the difficult thing. I definitely have colleagues who say we should just regulate the applications, we should not regulate the basic technologies. But I would say I'm quite conflicted there. I think it's a challenge. We're always gonna have regulations on applications, but for some basic technologies, it's hard for researchers like myself to fully understand the dual-use implications. And that's a big reason why I'm involved with Hoover and the Technology Policy Accelerator: to try to tease apart when it is appropriate to apply regulations to the basic technologies versus the applications.
- Alright. We are still months away from the 2027 Emerging Technology Review. Hopefully there is robotics in it, and hopefully you're writing it as well. What is the next breakthrough in robotics? What is the next big thing people are looking for?
- I would say two things. One would be: are there ways to get around this, what I refer to as the 100,000-year data gap in robotics? What is the next way to think about AI for robots? Rather than trying to imitate what was done for large language models, there are hopefully going to be other approaches to get robots to have more intelligence and more autonomy in their physical sense. And I think we're seeing a lot of people pushing, and I hope to see some breakthroughs in that area in the coming year. So that's on the robot brain side. The other side is robot bodies. One topic I'm particularly interested in is robotic fabrics: can we see a next generation of soft robotic fabrics that change their shape and help us rethink what a robot normally would be? But let me add one other thing that maybe is a little more realistic for actually having an impact in the next year, and that might be robot hands and dexterous manipulation. You see a lot of humanoids, but they don't really have this dexterous manipulation capability. So maybe something a little more grounded is better robot hands.
- Well Allison, you did a great job of explaining this to me better than I think a robot could. Anything else you'd like to add before we sign off?
- No, I think we're good.
- Okay. Well look, I appreciate your time. Congratulations again on the report. It's a fantastic review as it always is, thanks to the great work you do at Hoover.
- Great, thank you.
- You've been listening to Matters of Policy and Politics, a podcast devoted to the discussion of policy research here at the Hoover Institution, as well as issues of local, national, and geopolitical concern. If you enjoyed this podcast, please don't forget to rate, review, and subscribe to our show. And if you wouldn't mind, spread the word; tell your friends about us. The Hoover Institution is on Facebook, Instagram, and X. Its X handle is @HooverInst. Allison, are you on social media?
- I am. I'm on LinkedIn: Allison Okamura, easy to find.
- Okay. Now, Hoover, by the way, is also on Substack; it's called Freedom Frequency. You can also track Allison down through Stanford Engineering, and their handle on X is @StanfordENG. I also recommend you sign up for the Hoover Daily Report, which keeps you updated on what Allison and her Hoover colleagues are up to, and is delivered to your inbox on weekdays. The Stanford Emerging Technology Review, by the way, you can track down if you go to the following website: setr.stanford.edu. That's it for this podcast. We'll be back a week from now with a new episode of Matters of Policy and Politics; we're gonna be talking about, of all things, Valentine's Day: the economics of love, marriage, and family. So you don't wanna miss that. For the Hoover Institution, this is Bill Whalen. Till next time, take care, and thanks for listening.
- This podcast is a production of the Hoover Institution, where we generate and promote ideas advancing freedom. For more information about our work, to hear more of our podcasts or view our video content, please visit hoover.org.
ABOUT THE SPEAKERS
Allison Okamura is a science fellow at the Hoover Institution. She is the Richard W. Weiland Professor of Engineering at Stanford University in the Mechanical Engineering Department and has a courtesy appointment in Computer Science. Dr. Okamura has more than thirty years of experience in research, teaching, and development of human-centered robotics, including medical robots, soft robots, and wearable robots. She is also a contributor and member of the faculty council of the Stanford Emerging Technology Review.
Follow Allison Okamura:
- LinkedIn: @allisonokamura
- Facebook: @allison.okamura
Bill Whalen, the Virginia Hobbs Carpenter Distinguished Policy Fellow in Journalism and a Hoover Institution research fellow since 1999, writes and comments on campaigns, elections and governance with an emphasis on California and America’s political landscapes. Whalen writes on politics and current events for various national publications, as well as Hoover’s California On Your Mind web channel.
Whalen hosts Hoover’s Matters of Policy & Politics podcast and serves as the moderator of Hoover’s GoodFellows broadcast exploring history, economics, and geopolitical dynamics.
RELATED SOURCES
- Learn more about the Stanford Emerging Technology Review.
- Learn more about the Technology Policy Accelerator.
ABOUT THE SERIES
Matters of Policy & Politics, a podcast from the Hoover Institution, examines the direction of federal, state, and local leadership and elections, with an occasional examination of national security and geopolitical concerns, all featuring insightful analysis provided by Hoover Institution scholars and guests.
To join our newsletter and be the first to tune into the next episode, visit Matters of Policy & Politics.