About Uncommon Knowledge

For more than a decade the Hoover Institution has been producing Uncommon Knowledge with Peter Robinson, a series hosted by Hoover fellow Peter Robinson as an outlet for political leaders, scholars, journalists, and today’s big thinkers to share their views with the world. Guests have included famous figures such as Paul Ryan, Henry Kissinger, Antonin Scalia, Rupert Murdoch, Newt Gingrich, and Christopher Hitchens, along with Hoover fellows such as Condoleezza Rice and George Shultz.

“Uncommon Knowledge takes fascinating, accomplished guests, then sits them down with me to talk about the issues of the day,” says Robinson, an author and former speechwriter for President Reagan. “Unhurried, civil, thoughtful, and informed conversation: that’s what we produce. And there isn’t all that much of it around these days.”

The show started life as a television series in 1997 and is now distributed exclusively on the web over a growing network of the largest political websites and channels. For the latest updates and episodes, follow Uncommon Knowledge on Facebook and Twitter.

Recorded on December 7, 2001

FUTURE SHOCK: High Technology and the Human Prospect

Computers more intelligent than humans? Self-replicating molecular robots? Virtual immortality? These may sound like science fiction, but some reputable computer scientists are predicting they will happen within the next several decades. What will our world be like if and when our machines surpass us in intelligence? Do the advances in biotechnology, robotics, and nanotechnology, which make intelligent machines possible, pose dangers of their own? Should we embrace such a future or try to stop it?


Peter Robinson: Today on Uncommon Knowledge: if you think your computer makes you feel stupid now, just wait!

Announcer: Funding for this program is provided by the John M. Olin Foundation and the Starr Foundation.

[Music]

Peter Robinson: Welcome to Uncommon Knowledge. I'm Peter Robinson. Our show today: technology and the future of humankind.

Computers that are much smarter than we are, robots capable of self-replication, virtual immortality. All of that may sound like science fiction, but it's exactly what reputable computer scientists are predicting--and not for the distant future, but for just a few decades from today.

When our machines are smarter and more adaptable than we are ourselves, where will that leave us?

Joining us today, two guests. Bill Joy is Chief Scientist and co-founder of Sun Microsystems. Ray Kurzweil is an award-winning inventor and author of the bestselling book The Age of Spiritual Machines.

Title: Future Shock

Peter Robinson: A statement by the moderator of a panel at which you both spoke recently; I'm quoting him: "Today's human researchers, drawing on emerging research in areas such as artificial life, artificial intelligence, nanotechnology, virtual reality, and on and on and on, are striving, perhaps unwittingly, to render themselves obsolete." So let's establish right at the beginning: do you really believe, do you really wish to contend, that in the course of this century human beings will become, in some sense, obsolete?

Bill Joy: It may be an unwitting consequence, certainly not the direct intent of most of the people working in the field.

Peter Robinson: But that doesn't strike you as an outlandish statement at all?

Bill Joy: There's enough credibility to it that we ought to think about it.

Peter Robinson: Ray?

Ray Kurzweil: Without putting too fine a point on it, it depends on what you mean by human. I think human…

Peter Robinson: You've already got me concerned.

Ray Kurzweil: Human beings will change, and we're already in the early stages of putting computers in our clothing and in our pockets. Some people have them in their brains. A deaf friend of mine has a cochlear implant; he was profoundly deaf, and he's getting a new model now that has a thousand points of frequency resolution, so he'll actually be able to hear music for the first time. There are people with Parkinson's disease in whom the small cluster of biological neurons destroyed by that disease has had its functionality replaced by a neural implant. There are dozens of different implants, primarily for people with profound disabilities and medical issues. But in the decades ahead we'll be putting them in our bodies and brains for other reasons, to enhance our normal functionality, and we'll be able to do it without surgery: we'll introduce them by sending tiny intelligent robots, which I call nanobots, in through our bloodstream. So we'll be changing--we'll be merging with our technology. And an important point is that the non-biological portion of our intelligence is growing exponentially. Our biological intelligence isn't growing. It's…

Peter Robinson: Brains are brains.

Ray Kurzweil: It's fixed, and it's a pretty fixed architecture. It has a certain amount of plasticity, but we'll be able to expand our experiences and ultimately our intelligence through this intimate merger. So we'll change the nature of what it is to be human, but the positive side of it is that we can make ourselves more human.

Peter Robinson: You published The Age of Spiritual Machines in 1999, predicting what you've just said. Bill, you wrote a ten-thousand-word article in Wired magazine that appeared in the year 2000, meditating on and expanding on some of Ray's predictions and saying that there's plenty in what's to come for humans to worry about. Your book became a big bestseller. Your article produced an enormous number of emails, letters, and news reports--a huge response to both. Were you surprised by the response your book and your article received?

Bill Joy: Yeah, I was surprised by the extent of the response in particular, and by its depth. I think it touched a chord; people realized that things are happening that challenge our ethical standards. There are many things that we can do, can imagine doing, or that people propose to do that we find troubling in some way and also exciting at the same time.

Peter Robinson: Ray Kurzweil's book sparked the outcry that we've been discussing, so let's take a closer look at his argument.

Title: Ghost in the Machine

Peter Robinson: Ray, your book, The Age of Spiritual Machines, rests on a quite specific premise. You call it the Law of Accelerating Returns. Now a lot of people are familiar with Moore's Law which holds, I'll let you state it…

Ray Kurzweil: Well, Moore's Law is a specific paradigm for improving computational power: we shrink transistors by fifty percent every two years, so we can put twice as many on a chip, and they run twice as fast--a quadrupling of computer power. The Law of Accelerating Returns broadens that, first of all in terms of computers: Moore's Law is just one way of improving computational power, and the next paradigm will be going into the third dimension. But moreover, this type of exponential growth is a property of all information-based technologies.
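To make the arithmetic behind that quadrupling concrete (this simply restates the figures Kurzweil gives, not an additional claim): halving a transistor's linear dimensions every two years doubles how many fit on a chip and roughly doubles their switching speed, so computational power per chip grows fourfold per two-year cycle:

$P(t) \approx P_0 \cdot 2^{t/2} \cdot 2^{t/2} = P_0 \cdot 4^{t/2}$, with $t$ in years.

Over a decade, five such cycles compound to roughly $4^5 \approx 1000$ times the computational power.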

Peter Robinson: That's an arresting point to me, because I, as a layman, had understood that everybody was expecting, perhaps sometime in this decade, that Moore's Law would begin to bump up against physical limits. Once you etch circuits a molecule or an atom apart, you just can't cram any more in. But you're saying no, we'll transcend it somehow.

Ray Kurzweil: That's the key point. I mean, each paradigm comes to an end. They were shrinking vacuum tubes; they couldn't make them any smaller and keep the vacuum. Then a whole different approach came along--transistors, which are not just small vacuum tubes. And we will get to the point where the key features are a few atoms in width, and we won't be able to shrink transistors on an integrated circuit any further. But integrated circuits are flat, and we live in a three-dimensional world. We might as well use the third dimension; our brains are organized in three dimensions.

Peter Robinson: Okay, you write, I'm quoting from your book, "It is now 2009, a one thousand dollar computer can perform about a trillion calculations per second. It is now 2019, a one thousand dollar computing device is now approximately equal to the computational ability of the human brain. It is now 2029," I'm continuing to quote you, "a one thousand dollar unit of computation has the computing capacity of approximately one thousand human brains. There is a growing discussion about the legal rights of computers, machines claim to be conscious and these claims are largely accepted." Is consciousness merely a product or artifact of computational ability?

Ray Kurzweil: Well, not at all. The power of the computer to match the computational ability of the brain is the hardware side. There's also the software side: understanding the methods of operation, because otherwise we'll just have very fast computation of our spreadsheets. The primary source of that is understanding how the human brain works. And there's a grand project underway, on which we've made more progress than people realize, of reverse engineering the human brain--understanding its principles of operation. There are many different approaches to that. We're modeling the hundreds of different types of neurons; we have very detailed mathematical models. We're actually scanning the human brain and seeing the wiring of the interconnections, and a couple dozen of the several hundred regions of the brain have actually been reverse engineered in some detail and re-implemented. There are other sources of the software of intelligence as well, such as our own experimentation in artificial intelligence, but one major source is going to be understanding what goes on in the human brain and then being able to replicate that.
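As an aside, the growth rate implied by the figures Robinson quoted from the book can be checked from the quote alone: going from one human brain's worth of computation for a thousand dollars in 2019 to a thousand brains' worth at the same price in 2029 is a factor of a thousand in ten years,

$1000 \approx 2^{10}$, i.e., one doubling of price-performance per year,

which is faster than the classic Moore's Law pace of a doubling every eighteen to twenty-four months. That acceleration of the doubling rate itself is what the Law of Accelerating Returns asserts.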

Peter Robinson: Here's what I find provocative about the book, and what I'd like to press you on a little bit: you say machines will claim to be conscious and these claims will be accepted. And so the question I have is, do you draw any sharp or distinct line between humans and machines? Is there something distinctively human about us that machines can never have, or is it all a blur, a continuum?

Ray Kurzweil: Well, I didn't really answer that question. I think it's a deeper philosophical issue as to what consciousness is. We accept that other people are conscious, but it becomes controversial when we go outside of human experience. There's a controversy about animals, and we will have a controversy about machines. But my point is, twenty-eight years from now you could have someone on your program who has a humanlike visual appearance and will convince you that they're human--at least they'll seem that way. But some philosophers will say no, they're not squirting human neurotransmitters, therefore they can't be conscious. I don't think there really is a scientific test--a machine you could slide an entity into, where a green light goes on saying "this is conscious"--that doesn't carry some philosophical assumptions with it.

Peter Robinson: Trying to define consciousness and humanity itself, talk about challenging philosophical questions. Where do Bill and Ray stand on those?

Title: I Am Somebody

Peter Robinson: Pope John Paul II--I want to read this and see what you guys think of it: "If the human body takes its origin from preexistent living matter, the spiritual soul is immediately created by God. Consequently, theories which consider the mind as emerging from the forces of matter, or as a mere epiphenomenon of matter, are incompatible with the truth about man." So there you have a stark philosophical statement that there is a spiritual component separate from mere matter. Do you buy it? Does that strike you as persuasive?

Bill Joy: Well, I see a strong spirituality in my children that I don't think I have impressed on them; I think it arises naturally. And certainly, you know, our spirituality can come from our historicity. I would note that intelligent machines are unlikely to be much like us. They're not likely to have…

Peter Robinson: Even in the year 2029, even far out, relatively far out?

Bill Joy: Well, they're not likely to have a sexual nature; they can reproduce asexually or in some other way. I'm not clear that they necessarily have an individual mind--there's no reason they can't share things over a LAN--and I also think that they can share experience in what you'd call a Lamarckian, as opposed to a Darwinian, way. So the natural…

Peter Robinson: Hold on. You've got to explain that for this layman.

Bill Joy: Well, you normally don't expect that things you learn in your lifetime will be passed to your children the way gene transmission works. There's this indirect way of evolution through selection, but not a sharing or passing of experience or acquired…

[Talking at same time]

Peter Robinson: They have to acquire or learn it for themselves, as we say about our kids. Right?

Bill Joy: Right. So culture is a mechanism of transmission, but you don't directly pass experience. If you know French, your son or daughter still has to learn it directly--but that wouldn't be true of machines. So I think the natural life form in a robotic or machine substrate is likely to be more different from us than ants or wasps are. And so I think it's a bit romantic to assume--if we could actually create the computational power to create a species of intelligent machines--that we would be the natural life form there. Our emotional, spiritual, sexual, individual nature is not the natural life form in that environment, and so we wouldn't necessarily survive there for long, even if we could be transplanted into that space.

Peter Robinson: He's making me feel better. Your book creates a tremendous anxiety on the part of the reader for some clear line which, as you've said, you don't draw--between us and them, between the machines and us.

Ray Kurzweil: I mean, the question comes up: will these future machines be very humanlike, or will they be very alien? And the answer is both. We'll have alien forms of intelligence that are not at all human, because they don't need to be, but we will also have machines that are very humanlike, if for no other reason than to communicate with us humans, because we like to communicate in a humanlike way. And we will have machines that act human. Moreover, we will be enhancing our own biological intelligence through intimate connection with machine intelligence--I see that actually as a primary application. We'll be able to, for example, shut down the signals coming from our real senses and replace them with signals coming from a virtual environment and…

Peter Robinson: If the prospect of intelligent machines is several decades into our future, the technology that will create them will also bring us other more immediate concerns.

Title: Deadly Information

Peter Robinson: You write that the NBC weapons--nuclear, biological, and chemical weapons--that proved the cause of so much concern during the twentieth century will be largely displaced as a matter of concern in the twenty-first by the GNR technologies: genetics, nanotechnology, and robotics.

Bill Joy: I'm not saying that the nuclear, biological and chemical weapons aren't going to be a continuing concern. In fact, we see concern today.

Peter Robinson: Sure.

Bill Joy: But biological--what I'm distinguishing is things that occur in the natural world, or are made with non-information-based techniques. The Russian bioweapons program, for example, largely would just take a disease, grow it in an antibiotic, and try to find an antibiotic-resistant version of it. With genetics, you can do things with a much deeper understanding of what's going on: you have a gene sequence, you can use some information model of how the genes work or what the genes do, and start cutting and pasting things together. And this has already been used quite successfully to do a number of amazing things. So that whole field of genetic engineering and the related sciences is going to bring us enormous benefit--there are many genetically based diseases we hope to come up with cures for, for example. Nanotechnology is basically nano--you know, a billionth--it's basically anything done at the atomic scale. What the human body has is machinery for taking information at this atomic scale and manufacturing things with it, using the ribosome: it has molecular machinery for making certain things at that scale. Now, it doesn't make them out of an arbitrary set of elements; it makes them out of organic building blocks.

Peter Robinson: Right.

Bill Joy: What nanotechnology really is--a simple way to think about it is that you use the entire periodic table: you can manufacture anything at that scale, not just the particular subset of things that the biological world tends to manufacture. And the robotics is what we've been talking about. These three fields can only exist with information technology.

Peter Robinson: We understand that there are all kinds of good things that can come from these technologies, but that piece you wrote in Wired was a kind of warning, an alarm going off. What is it that you want to warn against?

Bill Joy: Well, the difficulty with these three fields is that the designs are basically information. A genetic design for a modified smallpox would be more like a computer file than a vial containing some substance: it can be pure information that you could then put back into physical form using some machine. So you can view that information as being as dangerous as the physical substance itself. But as a society we haven't figured out how to collectively control information, and so we see biologists putting the gene sequences of pathogens up on the web. Those are just as dangerous as the vials of the material that we would…

[Talking at same time]

Peter Robinson: So ten or fifteen years from now a Saddam Hussein with what would then be a standard sort of PC, could do a lot of damage?

Bill Joy: Certainly there are some people today who could probably do such damage. It's still the case that much of the machinery to do this, and the knowledge of how to do it, is relatively limited--not everybody in the world knows this--but the knowledge and the ability are expanding at a very rapid pace.

Peter Robinson: So far here at this table, we've talked about what a bad human being could do with some of these information technologies but there's a lot in your article about what the information technologies could begin to do, so to speak, on their own. Am I reading that correctly or is this just a layman's free-floating anxiety?

Ray Kurzweil: What Bill was talking about is that all these technologies are self-replicating.

Peter Robinson: Right.

Ray Kurzweil: Certainly pathogens, whether viruses, bacteria, or cancer cells, do their damage by self-replication. The same specter exists in the nanotechnology field, where you could have non-biological cancers, for example. The question is what to do about it, and the dilemma is that the technologies that are beneficial are the same ones that are harmful. That's the problem, because otherwise you could say, well, let's keep the good things and just relinquish the bad.

Peter Robinson: All right, I'm convinced of the dangers of these technologies but what should be done about it?

Title: Don't Go There

Peter Robinson: You've got a lot of people lathered up by your notion of relinquishment. Can you explain what you meant by that?

Bill Joy: Well what I said more precisely is we should relinquish the stuff that we consider too dangerous.

Peter Robinson: Okay, and that would be…

Bill Joy: And, first of all, I didn't say, for example, relinquish nanotechnology. I said we should relinquish the stuff that we judge to be too dangerous. And the "we" in that case would hopefully be the set of people who are doing the work, because I don't think anyone else is necessarily capable of judging from the outside--which means, for example, the biologists and the nanotechnologists taking responsibility. But you have to take responsibility in a non-naïve way. I think we were all very, very naïve about these things; the kind of 9/10 thinking was that no one would use biological weapons. That was the standard answer: it just wasn't thinkable; an army would never do it because it might turn around and hit them. Well, that's a failure of imagination, because there are people who would use them who aren't armies--crazy people, you know. And so to go along…

Peter Robinson: So Osama bin Laden, although he used old-fashioned technology--airplanes and jet fuel--in fact gave great impetus to your argument and to your concerns?

Bill Joy: Well we don't know who the person who did the anthrax letters is…

Peter Robinson: Right.

Bill Joy: …but, in fact, they're of much more concern, I think, than Osama bin Laden. Osama bin Laden is an outlier because he has lots of money and he had a state base to do it from. The single--potentially single actor who did the anthrax letters is probably much more like the thing in the future that we're going to face because, you know, it's someone coming out of nowhere, we have no intelligence about and it's very difficult to catch.

Peter Robinson: But now, relinquishment--you've got to explain this to me a little bit more, because you say to me, sitting here in Silicon Valley, that the people doing the technical work will be the ones who themselves decide to relinquish it. And I look up and down this valley: there was a lot of money made during the boom, and no doubt there will be again as the economy corrects itself--in other words, the financial incentives to pursue every possibility are enormous. So are you hoping to create not only good technologists but saints? As a practical matter, how would you expect your regime to work, Bill?

Bill Joy: First of all, if "work" is defined as eliminating all risk, then it's never going to work. What we can do is hope to bring risk and reward into better balance. Somebody who's using a new technology in a way that carries a catastrophic risk, but who gets no economic feedback to make them think about that risk, is proceeding essentially without any sanity check. So, for example, we went and genetically modified all the corn in the world, and we said it would never affect the wild corn in Mexico. Well, it did. Well…

Peter Robinson: That's happened now?

Bill Joy: Yeah. Too bad. But there's no one to take responsibility. If there had been a catastrophe, who knows what would have happened.

Peter Robinson: So the government…

Bill Joy: So who takes responsibility? The company that does it can just go bankrupt in our financial system if there's a big accident. And what ends up happening is we all get stuck with the collective costs of a cleanup.

Peter Robinson: So what do you make of this? As a practical matter, his regime of relinquishment?

Ray Kurzweil: I would agree with what I call fine-grained relinquishment. I think it's a good idea to raise the level of concern about these issues; certainly 9/11 has shown that we have hundreds of vulnerabilities, which is quite mind-boggling, and these new technologies are far more powerful and will create far more danger. I do think we have to be very concerned about it, but I think the answer is not broad relinquishment; rather, there are specific developments that would be too dangerous and that we should not pursue. I think the answer is a set of ethical standards for responsible practitioners.

Peter Robinson: Established by whom?

Ray Kurzweil: Well, I think it's a collaboration between the technologists and society. I mean, they say war is too important to leave to the generals; I think technology is potentially too important to leave to the technologists. It's got to be a whole social discussion.

Peter Robinson: Like a professional board, like the American Medical Association or do you want a…

Ray Kurzweil: This is not a new idea. I mean, if you go into biotechnology today, there are very detailed ethical standards and they're backed up by legal regulation, which is quite active in the medical field.

Peter Robinson: Okay.

Ray Kurzweil: Perhaps not as active in areas like corn. But, I mean, we do have to have regulation. We have to have ethical standards. And we have to, I think, put a lot more effort into developing technological safeguards and immune systems and countervailing technology that…

Bill Joy: The problem we have is that the benefit flows to those who exploit the technologies and take the risks, while the cost of the cleanup and the defense largely becomes a governmental function.

Peter Robinson: So does Bill believe that government needs to control technology development or are there other solutions?

Title: Risky Business

Peter Robinson: You read through the responses to your article, and what you get from a lot of quarters is: ultimately, he's calling for coercion. Meaning, if people relinquish, it works fine; if somebody won't relinquish, then the government has to step in. So how do you respond to that?

Bill Joy: Well, I actually wrote an op-ed about a year and a half ago in the Post where I made some specific suggestions. I suggested that scientists take a Hippocratic oath--that was suggested by Hans Bethe--and I think that would be constructive.

Peter Robinson: "First do no harm…"

Bill Joy: Yeah, it's part of developing a notion of taking responsibility for consequences. Joshua Lederberg, who spent a lot of time worrying about bioweapons, suggested that we bring back technology assessment as an intergovernmental function, so we think about some of these things a little more before we do them. I think that's a good step. There was a list of five things; I'm not going to go through them all, but the third one was to say that if, through a technology assessment, we discover that certain technologies are very dangerous, then we ought to try to provide economic feedback to those who would use them, so that they balance the risk and the reward. One way you can do that is to force people to carry catastrophic-risk insurance: if you can't get the insurance, you don't use the technology. Today, something that's very risky but has no unit cost looks very attractive, because the company using that technology isn't actually paying for the risk.
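A rough illustration of the economic feedback Joy describes (the numbers here are hypothetical, not from the conversation): if deploying a technology carries a small annual probability $p$ of a catastrophe costing $D$, an actuarially fair premium adds roughly the expected loss to the deployer's costs,

$\text{effective cost} = \text{unit cost} + p \cdot D$

so a technology with negligible unit cost but, say, a one-in-ten-thousand annual chance of a ten-billion-dollar catastrophe ($p = 10^{-4}$, $D = \$10^{10}$) would carry a million-dollar annual premium. The risk then shows up on the deployer's balance sheet instead of being socialized.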

Peter Robinson: So you could apply, say--and this may be a crude parallel--the notion that one way of approaching the problem of pollution is to make polluters pay for it. So a factory that pollutes has to--you kind of…

[Talking at same time]

Peter Robinson: But that's a way of making the market respond itself. Right?

Bill Joy: What we could do today: we have this very, very time-consuming procedure for introducing new drugs. Instead, we could make the companies that want to introduce them pay for the risk in some way.

Ray Kurzweil: I do agree with Bill here that we have to put a much higher priority on the dangers and risks. But I think they're manageable. We can take some comfort from how well we've managed a risk in Bill's own area: software viruses. Not that they're as dangerous as some of the other things we've been talking about, but the defenses have progressed along with the evolution of the offensive use of software viruses, and we've kept them relatively at bay. If we can do half as well in some of these other areas, that'll be beneficial.

Peter Robinson: Bill, let me ask you one last question. We've got Thomas Malthus arguing in the eighteenth century that population grows faster than food supplies, so human populations are constantly going to crash through starvation. You've got the Club of Rome in the 1970s predicting that certain resources would become scarce. In fact, new technologies were developed, new deposits of oil and minerals were found, and the prices of the commodities they were worried about twenty and thirty years ago are lower today than they were then. So there seems to be this kind of allure for the expert mind in predicting catastrophe. And that's a line of argument for pooh-poohing you. How do you say: no, don't pooh-pooh me, this is different?

Bill Joy: What I'm really saying is that the technologies now emerging are sufficiently powerful that they can be used to redesign ourselves and the world. And I don't think there's much dispute about that: using biotechnology, we can reengineer our species if we so choose; using nanotechnology, we can do all sorts of amazing things if we so choose; and we will eventually be able to make intelligent machines--some people may argue about whether that's thirty years away or a hundred. Those technologies are sufficiently powerful that we can reinvent the world and then reinvent ourselves. And all I'm saying is that rather than letting whatever happens happen, we ought to think about what kind of world we want to have. If we have the power to invent it, we ought to take some time and have a discussion about what kind of world we want. And the first part of that is to have a discussion about how much risk we want to take, because these technologies are very risky--and no one, even their proponents, really denies that anymore.

Peter Robinson: Bill Joy and Ray Kurzweil, thank you very much.

Peter Robinson: I'm Peter Robinson for Uncommon Knowledge. Thanks for joining us.