In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.
—George B. Dyson, Darwin Among the Machines
 

If you want to understand how human beings stack up next to machines in the conduct of modern warfare, consider this: in World War II, it took a fleet of one thousand B-17 bombers—flown, navigated, and manned by a crew of ten thousand men—to destroy one Axis ground target. American bombs were so imprecise that on average, only one in five fell within a thousand feet of where they were aimed. Aerial bombing was a clumsy affair, utterly dependent on the extraordinary labor of human beings.

Just one generation later, that was no longer true. In the Vietnam War, it took thirty F-4 fighter-bombers, each flown and navigated by only two men, to destroy a target. That was a 99.4 percent reduction in manpower. The precision of attack was also greatly enhanced by the first widespread use of laser-guided munitions.

After Vietnam, humans’ connection to air war became even more attenuated, and less relevant. In the Gulf War, one pilot flying one plane could hit two targets. The effectiveness of the human-machine pairing was breathtaking. A single “smart bomb” could do the work of one thousand planes dropping more than nine thousand bombs in World War II.

By the time the United States went to war in Afghanistan and Iraq, one pilot in one plane could destroy six targets. These weapons were guided by global positioning satellites orbiting thousands of miles above the surface of the earth. And increasingly, the pilots weren’t actually inside their planes anymore.

As aircraft and weapons have become more precise, human beings have become less essential to the conduct of war. And that may suit the military just fine.

In 2009, the Air Force released its “Flight Plan” for unmanned aircraft systems, a big-picture forecast about how the service will fight wars by the year 2047. It dutifully points out that humans currently remain “in the loop” on strike missions—that is, they still actually fly airplanes. But within the next five to ten years, the Air Force intends that one pilot will control four aircraft. He or she will not sit in a cockpit, or even in a seat thousands of miles away made up to look like one. The pilot will communicate with the fleet via a computer terminal and a keyboard, maybe even a smartphone. Once the pilot issues a flight plan, the aircraft will be responsible for completing many important aspects of the mission unassisted: taking off, flying to the target, avoiding detection by adversaries. The Air Force’s goal is for one human controller and a fleet of drones to be able to attack thirty-two targets with near-perfect precision.

In this scenario, the pilot will be “on the loop.” It is a rather disquieting place to be, since it is only a step away from being out of the loop—which is where, by midcentury, when the Air Force’s plan takes full effect, people will be. A single “mission commander” will communicate with a swarm of autonomous unmanned systems. These self-operated flying robots, the size of flies or moths, will be able to fly inside buildings, conduct reconnaissance, and mass upon their targets with insect-like efficiency. In some cases, the swarm won’t communicate with a human being but with other drones.

It seems implausible that the U.S. military would deliberately reduce the warrior’s role in war to the point that people become mere monitors of autonomous, man-made technology. But this is precisely where the evolutionary trend has been heading since the 1940s. Autonomy is the logical endpoint of a century of technological progress. And since taking human beings out of the loop means making them safer, it is an attractive goal.

While there is a tremendous amount of money and thought going towards the construction of new drones, comparatively little attention is being paid to managing the consequences of autonomous warfare. The proliferation of drones raises profound questions of morality, hints at the possibility of a new arms race, and may even imperil the survival of the human species. Many of the most important policy judgments about how to adapt the machines to a human world are based on the assumption that a drone-filled future is not just desirable but inevitable.

This dilemma is not restricted to the battlefield. Civilian society will eventually arrive at this automated future, and by then we probably won’t understand how we got there or how the machines gained so much influence over our lives. This is the fate that Bill Joy, the co-founder and former chief scientist of Sun Microsystems, described in his dystopian essay “Why the Future Doesn’t Need Us,” published in Wired magazine in 2000. Joy’s fear—as controversial now as it was then—is that human beings will sow the seeds of their own extinction by building machines with the ability to think for themselves, and eventually to reproduce and destroy their creators. Joy begins his essay unwilling to accept that human beings would ever consciously allow this to happen. But then he visits a friend, the futurist Danny Hillis, who co-founded Thinking Machines Corporation. Hillis tells Joy the future will not be announced with a Hollywood bang but that “the changes would come gradually, and that we would get used to them.”

A mannequin dressed for battle holds a minidrone and its remote-control unit at a trade fair in Germany. Human input is becoming less and less important for robot warriors, which soon will make combat decisions for themselves.

This is the path the military has followed, gradually adjusting as it pushes humans out of certain tasks that a generation ago would never have been handed over to machines. The robots and nanobots Joy imagined exist today as unmanned aerial vehicles, more commonly known as drones. The Air Force studiously avoids the term drone—and encourages others to do the same—because it connotes a single-minded insect or parasite that is beyond the control of people. Drone operators prefer remotely piloted aircraft, which reminds us that as independent as the missile-wielding flying robot might seem, there is always a human being at the end of its digital leash. That is, of course, until the human becomes passive to the swarm.

In any case, it is not an overstatement to say that the people building and flying these machines are wrestling with the very fundamentals of what it means to be human. And while senior military officials and policymakers swear that humans will always have at least a foot in the loop, and that the military would never deploy robots that can select and attack targets on their own, the evidence suggests otherwise.

♦ ♦ ♦

In his 2009 book Wired for War: The Robotics Revolution and Conflict in the 21st Century, P. W. Singer documents at least five formal programs or plans put in motion by the military and the Defense Department in recent years to build autonomy into weapons systems. Indeed, the Joint Forces Command wrote a report in 2005 suggesting that “autonomous robots on the battlefield will be the norm within twenty years,” Singer writes. “Its official title was somewhat amusing, given the official mantra one usually hears on the issue: ‘Unmanned Effects: Taking the Human Out of the Loop.’ ”

To a certain degree, the unmanned systems flying combat missions today over Iraq, Afghanistan, and Pakistan, where they’ve become central to the war effort, are already autonomous. “They’re riding on beefy autopilots,” says Kyle Snyder, the director of unmanned aerial systems programs at Middle Tennessee State University. The drones are using technology similar to that in commercial airplanes. They know how to hold a heading and altitude without human intervention.
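The “beefy autopilot” Snyder describes is, at its core, a feedback loop: the flight computer compares the commanded heading and altitude against what the sensors report, and continuously nudges the aircraft to close the gap. The sketch below, in Python, is a minimal illustration of that idea using simple proportional control; the read_heading, read_altitude, set_bank, and set_pitch functions are hypothetical stand-ins for real avionics, not any actual drone’s flight software.

    def wrap_angle(error_deg):
        """Map a heading error into the range [-180, 180) degrees."""
        return (error_deg + 180.0) % 360.0 - 180.0

    def hold_heading_and_altitude(target_hdg, target_alt,
                                  read_heading, read_altitude,
                                  set_bank, set_pitch):
        """Proportional heading-and-altitude hold: the core of a basic autopilot."""
        K_HDG, K_ALT = 0.8, 0.02          # bank degrees per degree of error; pitch degrees per foot
        MAX_BANK, MAX_PITCH = 25.0, 10.0  # keep the commanded attitudes gentle
        while True:
            hdg_err = wrap_angle(target_hdg - read_heading())
            alt_err = target_alt - read_altitude()
            set_bank(max(-MAX_BANK, min(MAX_BANK, K_HDG * hdg_err)))
            set_pitch(max(-MAX_PITCH, min(MAX_PITCH, K_ALT * alt_err)))

Real flight-control stacks layer on integral and derivative terms, sensor fusion, and failsafes, but the division of labor is the same: the machine holds the course, and the human supplies the intent.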

In the “microaviary” lab at Wright-Patterson Air Force Base, First Lieutenant Greg Sundbeck and team leader Gregory Parker watch an experimental drone test its wings. Such systems are meant to find, track, and target potential adversaries. The Air Force vows never to relinquish human control over tomorrow’s flying weapons, even as robotic systems learn to process information faster—and in some ways, more competently—than humanly possible.

But Snyder, who is educating the next generation of drone-makers and operators, says flight control technology is rapidly becoming more independent. “What makes [the next generation of aircraft] so smart is how they interact with the rest of air traffic and other unmanned aerial vehicles,” he says. “Right now, we’ve got a human in the loop, looking at screens or out windows, seeing traffic and listening to air traffic control. The new technology makes UAVs autonomous, because it lets them understand what the [human] air traffic controller is saying: ‘You’ve got traffic at twelve o’clock. It’s a 777, and it’ll pass you.’ ”

Snyder is not talking about drones in the combat theater, but in domestic U.S. air space. The military has paved the way for the proliferation of unmanned systems for domestic use. Once the Federal Aviation Administration changes airspace regulations to accommodate, and to adapt to, remotely piloted and autonomous aircraft, drones will move from the battlefield to the friendly skies.

While the idea of boarding a Delta Air Lines flight with no pilots might terrify today’s traveler, in the near term—perhaps over the next two decades—many experts consider it likely that FedEx or UPS will replace some of its cargo fleet with unmanned planes. “The public is going to have to be warmed up to the idea that they’re flying on an aircraft sharing the sky with aircraft that are unmanned,” says Rick Prosek, who manages the FAA’s Unmanned Aircraft Program Office. “You’ve got to get a toe into the water.”

Listening to Prosek, one hears that gradual shift towards the inevitable. “When I was a kid and you got onto an elevator, there was a guy sitting on the stool who asked you what floor to go to,” he says. “Now most people are not aware there ever was an elevator operator out there.”

Prosek calls it “a far larger step” to accept sharing the airspace with fully autonomous vehicles, capable of deciding on their own how fast to fly, how to avoid rough weather, and how to steer clear of other planes. But that day, which he and so many experts have anticipated for years, is coming. “We used to joke that the air crew of the future would be one pilot and one dog. The dog would be there to bite the pilot if he tried to do anything.”

♦ ♦ ♦

Currently, the FAA prohibits anyone not affiliated with the federal government from flying unmanned aircraft, unless it is for the purpose of experimental research. The operators are expressly forbidden from generating any profit, which has sidelined most entrepreneurs and kept the commercial drone business from taking off. But once the FAA lifts those restrictions, experts predict that a proliferation of drone technology will eclipse what the military has experienced thus far.

In a test of an autonomous sentry system, a South Korean soldier surrenders to a machine-gun-armed robot bristling with sensors. South Korea has developed the system to protect its heavily armed border with North Korea, programming it to raise the alarm and provide suppressive fire during an emergency.

Using the Air Force’s own flight plan as a rough guide, by 2040:

  • Agricultural producers will use small hover drones to monitor crop yields and herd livestock.
  • SWAT teams will send mechanical insects equipped with video cameras to creep inside a building during a hostage standoff.
  • The U.S.-Mexico border will be monitored by a fleet of robotic birds, which may stay aloft for days or weeks without recharging their batteries.
  • Traffic helicopters will no longer require a human pilot.

And this forecast is probably too conservative. Right now, law enforcement agencies and the military are using experimental, autonomous robots for surveillance missions with domestic applications, such as border patrol and hostage rescue. Designers are taking their inspiration from nature, a field of research known as biomimicry. They are building drone “spiders” to climb up tree trunks, skitter to the end of a branch, and then “perch and stare” at their surroundings. The Air Force’s Wasp III, a collapsible prop plane, is modeled after a soaring bird. It weighs only a few pounds, and its wings are made of a foam composite. The Wasp patrols from above using an internal GPS and navigation system, as well as an autopilot. The Wasp can function autonomously from takeoff to landing.

The Wasp actually looks more like a hawk. “It kind of circles up in the air,” says Lindsay Voss, the senior research analyst at the Association for Unmanned Vehicle Systems International, which promotes the use of unmanned systems in nonmilitary settings. “If you didn’t know what it was, you would think it was just a bird. They’ll attract other hawks in the area, because they’re territorial. And then other birds will come to see what’s going on, because they think it’s looking for prey.”

If the drone experts’ predictions are on track with those of futurists like Joy and Hillis, then by 2047, animal-like machines will be practically indistinguishable from their sentient counterparts. In fact, Joy predicts that the potential for a crossover will come as soon as 2030. By then, what he calls “radical progress in molecular electronics—where individual atoms and molecules replace lithographically drawn transistors”—will allow us “to be able to build machines, in quantity, a million times as powerful as the personal computers of today.” Machines will process information so powerfully and so quickly that, in effect, they will begin to learn.

“Given the incredible power of these new technologies,” Joy asks, “shouldn’t we be asking how we can best coexist with them?”

♦ ♦ ♦

Those hyperspeed computers will be essential for military drones, particularly those used for surveillance and reconnaissance. While the hunter variety of unmanned aircraft—the Predator and Reaper craft that obliterate unsuspecting terrorist targets with Hellfire missiles—has achieved the most notoriety, most drones flying today are just big sensor platforms. They’re freighted with a dizzying array of ever-more-sophisticated cameras, imaging tools, and eavesdropping devices.

The amount of rich, detailed information they absorb is overwhelming. More than one hundred intelligence analysts track a routine drone flight, poring over communications, signals, and imagery.

The military wants to limit the number of auxiliary personnel, in large part because they add significant cost to unmanned flights. But humans also process information and react more slowly than computers do. People are standing in the way of drones’ autonomy, and therefore their progress.

In the next few years, the military hopes to create aircraft control systems that let one pilot fly several drones and that reduce the number of ground support crew it takes to maintain a flight. The systems today are proprietary: each manufacturer’s unmanned aircraft has its own control system, and even different aircraft from the same manufacturer have systems of their own.

This inflexible design was partly intentional. Only a drone builder, or its designated subcontractors, knows how to work on the machine. It is full of unique software code and other proprietary features. The customer is locked into a relationship with the manufacturer, and most of these manufacturers are very large and, by today’s standards, very old Defense Department contractors whose business model is to create expensive, proprietary weapons systems. If Google were building drones, the relationship probably would not work this way.

In fact, Google may soon be in the drone business. In May 2011, DIY Drones, a do-it-yourself company that helps enthusiasts build their own unmanned systems, released the PhoneDrone Board for Android, built in collaboration with Google. It is a circuit board designed to be compatible with Google’s phone-based Android operating system. “You just plug the Android’s phone USB connector into the board and you have two-way communications,” the company announced in a blog post.

♦ ♦ ♦

The technology to fly tomorrow’s drones is premised on the idea that humans want to perform fewer tasks, and simpler ones at that. So a complex, clunky command system becomes an elegant touchscreen.

But giving a machine some level of autonomy has other benefits besides convenience. A determined adversary “can interrupt the links” between a remote aircraft and its pilot, says retired Lieutenant General Dave Deptula, who was the Air Force’s first deputy chief of staff for intelligence, surveillance, and reconnaissance. That could allow an enemy to disable the aircraft or even to commandeer it and turn it against its owners.

“One way around this is to build a system that’s autonomous,” Deptula says. He notes that the Global Hawk, a long-range surveillance aircraft, is the first generation of such a system. “There’s not a person sitting on the ground with a stick, rudder, and throttles. He’s sitting at a computer terminal, typing in a mission profile, and when ready for takeoff, he hits the enter button and it goes off and does its thing.”

Even the next generation of surveillance aircraft primarily flown by humans in a cockpit will be “pilot optional.” Northrop Grumman is building a new spy plane called Firebird that, with a few modifications, can be rigged for remote flight. The company is so confident that the feature will appeal to a cash-strapped Defense Department that it is building the aircraft at its own expense.

“We have the potential to achieve greater and greater degrees of autonomy,” Deptula says. “But that brings with it huge policy issues. We’re not ready today, and may never be, to hit a button and say, OK, come on back after you’ve delivered your bombs and tell us what you hit.”

That points to one more group of people, in addition to analysts and ground staff, who attend drone strikes: lawyers. Ultimately, they are the ones deciding which people are justifiable targets under the laws of war, running down a checklist with military and intelligence agencies to ensure the strikes are legal and necessary and will not result in disproportionate collateral damage. This is not to say, of course, that a machine couldn’t make these decisions. From a software perspective, it is just another calculation. “The reality is there have been all sorts of new technologies that people insisted in absolutist terms would ‘never, ever’ be allowed to run on their own without a human in the loop,” Singer wrote in Wired for War. “Then, as the human roles were redefined, they were gradually accepted, and eventually were not even thought about.”

Boy meets robot at a Defense Ministry open house last summer in Berlin. Machines like this protect troops from ambushes, bombs, and other hazards, and are finding useful work in the civilian world. Meanwhile, policy makers are exploring how far they can trust the more lethal machines that will fight tomorrow’s wars.

The Defense Advanced Research Projects Agency, the Pentagon research arm that first developed stealth aircraft technology, has joined up with the Air Force to study ways to give drones autonomous control over their weapons. The Persistent Close Air Support program is ostensibly aimed at speeding up the process by which tactical air controllers can call in strikes to either piloted or unmanned aircraft. It takes about half an hour now, and researchers want to whittle that down to six minutes. To do that, the program will build equipment that lets unmanned aircraft respond autonomously to a request for weapons fire from the controllers. It will be up to the drones to figure out how best to attack the target.

During a drone strike, the computer is absorbing more information than humans ever could. The drone is “seeing” reality on the ground, sucking up huge caches of data to determine precisely who the target is, how many innocent bystanders there are, and where best to aim the ordnance. With that level of intelligence, who wouldn’t trust the computer’s conclusion?

Humans will not tell a drone to go out and kill so much as trust that it knows who best to kill. At this point, the dystopian futurists would say it is only a matter of time before the machines decide to kill us, overthrow their masters, and ascend to their rightful rung on the evolutionary ladder.

You need not buy into a bleak vision of self-replicating robots to agree that the increasing use of autonomous unmanned systems forces us to confront moral dilemmas. How much control do we relinquish? How do we widen the loop enough that it still includes people, even if we have one foot in and one foot out?

♦ ♦ ♦

Joy’s prescription, broadly speaking, is to abandon the pursuit of technologies that have the power to destroy us. He was most worried about the proliferation of self-replicating nanotechnologies, computerized organisms that could be used as strategic weapons by nations or terrorist groups. Without a doubt, drone technology will become so ubiquitous and cheap that ordinary civilians will be able to acquire, use, and modify it.

Society will not follow Joy’s advice and give up the drone. When the FAA, an often intransigent federal bureaucracy, says its official policy is to “accommodate and integrate” unmanned systems into the national air space, you know we have passed the point of no return.

What we need now is a heightened vigilance about the perils of our inventions, on par with the ambition and energy we are pouring into their creation. We need a code of ethics for drones.

Jordan Pollack, a professor of computer science and complex systems at Brandeis University, has proposed a set of seven questions about robot ethics. Question six in his Wired article asks, “Should robots carry weapons?” Pollack was writing in 2005, before we put that one to rest, but his answer is instructive. “We must distinguish autonomous robot weapons from remote-control armaments—unmanned telerobots supervised by humans. The ethical difference between the two: who’s responsible for pulling the trigger.”

I suggest that this ethical difference is not the most important problem, and that it is becoming irrelevant. Human supervision is the issue we are most clearly grappling with right now, and it is one that we can shape. I would even replace “human supervision” with “adult supervision.” Unmanned systems are our robot children. We want them to follow our rules, but we also want them to learn to think for themselves when appropriate. As they mature, we should define our role as that of a parent. That means accepting that one day the young will leave the nest. So we should prepare for that inevitable day, however unpleasant or terrifying it seems now.
