In recent years, there has been increasing concern about the effects of artificial intelligence and robots on humans. Some people have worried that humans will be marginalized to the point of being put out of work. Why hire a human when a much cheaper robot can do the job without being distracted? Of course, we can never be sure about the future. But a look at technological revolutions in the past should make us more optimistic than pessimistic about the fate of human labor in the age of AI.

In the past, the introduction of more and more machinery made people more and more productive. And since real incomes—wages and salaries—are closely tied to productivity, machinery caused people's real incomes to increase. The same will be true of robots, whether we define robots narrowly as human-looking machines that move purposefully on a factory floor or more broadly as machines that involve artificial intelligence. The fear of robots is similar to the fear of automation that was common only a few decades ago—and just as bogus.

In 1930, British economist John Maynard Keynes, reflecting on the progress of technology, predicted that his generation's grandchildren would have a 15-hour workweek. Assuming a generation is 30 years, his grandchildren, two generations on, should have had that 15-hour workweek by 1990. Did we? Not even close. Twenty-seven years after 1990, we still don't. But why don't we? Where did Keynes go wrong?

It wasn't in his assumption about increasing productivity. Rather, Keynes was probably assuming that people would work only enough to get the same standard of living they had in 1930. If that was his assumption, then he was quite accurate in predicting our productivity per hour. In the four score and seven years since Keynes made his prediction, our productivity has doubled and doubled again. We could easily have the standard of living people had then if we worked 15-hour weeks now.

MIT labor economist David Autor estimated that an average U.S. worker in 2015 could achieve his 1915 counterpart’s real income by “working about 17 weeks per year.” Seventeen weeks per year at 40 work hours per week is 680 hours per year. Spread over a 50-week work year, that’s 13.6 hours per week. And that overstates the workweek required for a 1930 standard of living for two reasons. First, the quality of almost everything we buy that is not produced by government has increased. Second, we can buy things that were simply unavailable then. Cell phones, anyone?
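The arithmetic behind that 13.6-hour figure can be checked directly; the 17-weeks estimate is Autor's, and the 40-hour week and 50-week work year are the assumptions stated above:

```python
# Check the workweek arithmetic behind Autor's estimate.
weeks_of_work = 17     # Autor: weeks per year needed for a 1915 real income
hours_per_week = 40    # standard full-time workweek
work_year_weeks = 50   # assumed 50-week work year

hours_per_year = weeks_of_work * hours_per_week        # 680 hours
equivalent_week = hours_per_year / work_year_weeks     # 13.6 hours per week

print(hours_per_year, equivalent_week)  # 680 13.6
```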

Why don't we work 14-hour weeks? The answer, briefly, is that we want more. We are acquisitive people. Consider cars. Those few families that had cars in Keynes's day usually had only one. Even 30 years later, when I was growing up, my father had one old Ford. And we were not poor: Dad's income was probably just below the median income in Canada. Now, many families have two or three cars. We could do without televisions and smartphones, but we don't want to. We could settle for being like most Brits or Americans in Keynes's time, never traveling more than 200 miles from home. But we've heard about places called Las Vegas, Disneyland, and Florida—and we want to go there. Also, antibiotics and other life-saving medicines come in awfully handy—but they cost money to get. The reality is that we want more, and we will always want more.

Fortunately, there's a way to satisfy this yearning—with technology. Specifically, far from taking away our livelihoods, robots will actually give us more by increasing real output and real GDP. That's the whole point. If they didn't increase output, we wouldn't value them. The key to economic growth is increased productivity—producing more output with more efficient means. The usual way to do that is to increase the amount of capital per worker: more capital per worker makes workers more productive.

This is not to say that there are no downsides to increased automation. But the downsides are likely to be small. We can get a clue about the effect of robots on productivity, wellbeing, and jobs by looking at the history of automation.  

In 1760, Richard Arkwright’s cotton-spinning machinery was introduced. At the time, England had 5,200 spinners using spinning wheels and 2,700 weavers, for a total employment of 7,900. But by 1796, after Arkwright’s invention had been well integrated into the production process, the number of spinners and weavers was 320,000, an increase of over 3,900 percent.
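The percentage increase quoted above follows from the employment figures in the text:

```python
# Growth in spinning-and-weaving employment in England, 1760 to 1796.
before = 5_200 + 2_700   # spinners plus weavers in 1760 = 7,900
after = 320_000          # spinners and weavers in 1796

pct_increase = (after - before) / before * 100
print(round(pct_increase))  # about 3,951, i.e. "over 3,900 percent"
```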

Why? The answer is, in economists' jargon, high elasticity of demand. The invention crushed costs, and so prices for textiles fell a lot. Clothing was no longer a luxury. The lower prices caused many people to buy more clothing more often. Overall output soared.

But consider another case in which the opposite happened: farming. In 1900, farm workers made up 41 percent of the U.S. labor force. By 2000, that was down to 2 percent. It's true that the U.S. labor force had grown substantially during the century, from 27.6 million to 142.6 million. But even in absolute terms, the number of farmers fell, from 11.3 million to 2.9 million. Virtually all of that decline in farm employment was due to technology. Compare pictures of farm machinery now to pictures of farm machinery then. The latter were known as horses. Farming became an order of magnitude more productive. So jobs were destroyed in farming. But were jobs destroyed on net? No. The number of jobs rose in line with the labor force.
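A quick check of the farm-labor shares above, computed as farm workers divided by the total labor force:

```python
# U.S. farm employment as a share of the labor force, 1900 vs. 2000.
farm_1900, labor_1900 = 11.3, 27.6     # millions
farm_2000, labor_2000 = 2.9, 142.6     # millions

share_1900 = farm_1900 / labor_1900 * 100   # about 41 percent
share_2000 = farm_2000 / labor_2000 * 100   # about 2 percent
print(round(share_1900), round(share_2000))  # 41 2
```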

As noted above, people sometimes worry that robots will become too human-like and replace us altogether. But Autor has pointed out that journalists and even experts “tend to overstate the extent of machine substitution for human labor.” In doing so, they “ignore the strong complementarities between automation and labor that increase productivity, raise earnings, and augment demand for labor.” Automation, he argued credibly, raises the value of tasks that we workers uniquely supply.

If you’re still worried that robots will be too human-like, consider what happened to men’s jobs when women, who not only are human-like but also are actual humans, increasingly entered the labor force. Men’s jobs didn’t decline; they increased. In 1950, before the large entry of women into the U.S. labor force, 43.8 million men and 18.4 million women were employed. By 2015, women’s employment had skyrocketed to 78.0 million, while men’s employment, far from shrinking, almost doubled to 84.4 million.
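The "almost doubled" claim can be verified from the employment figures just cited:

```python
# U.S. employment by sex, 1950 vs. 2015 (millions), from the figures above.
men_1950, women_1950 = 43.8, 18.4
men_2015, women_2015 = 84.4, 78.0

men_growth = men_2015 / men_1950       # about 1.93x: "almost doubled"
women_growth = women_2015 / women_1950 # about 4.24x
print(round(men_growth, 2), round(women_growth, 2))  # 1.93 4.24
```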

The simple fact is that the amount of work to be done in the economy is unlimited. What's limited is the number of humans, which is why the late population economist Julian Simon called humans, in a book of the same name, "the ultimate resource." There's a story—perhaps apocryphal but no less insightful for that—about an American engineer visiting China in the 1960s, when the Chinese government was building a dam. The American, noting the large number of workers digging with shovels, told his Chinese host that the digging could be done more quickly if the Chinese used steam shovels. "Oh," answered the host, "but then there would be fewer jobs." "I didn't realize that was the goal," answered the American, "but if your goal is jobs, you might consider replacing the shovels with spoons."

What this story illustrates is that although jobs are important for creating value, if we can create the same amount of value with less input, it’s wise to do so. Who, for example, wouldn’t want an innovation that allowed them to do their current job and be paid just as much, while working half the time? This is not a fantasy. Pay is closely tied to productivity. The hypothetical innovation would destroy “half a job.” And we would love it. We would use that freed-up time for leisure, or, more likely given our unlimited wants, for doing other work that gives us pecuniary rewards. That is the story of economic growth.

But won't innovations such as self-driving vehicles replace a large percentage of the approximately 3.5 million truck drivers? Yes. But there are two things to note. First, the average age of a truck driver is now about forty-nine. So, with the innovation taking at least a few years to occur and then, most likely, occurring gradually, many of the displaced truckers would have been retiring anyway. Second, and much more important, the vast majority of those displaced truckers will find other work, just as the vast majority of displaced farmers early last century found other work. Do we know what the work will be? No, and we can't know, just as we couldn't know in 1900 what jobs those who left the farms would get. But they got them.

As robots become more common and displace some people from their current jobs, is there any role for government policy? Yes. The government should not make it harder for people to find new jobs. So, for example, it should start tearing down barriers such as licensing laws that make it difficult for people to enter as many as 800 occupations in the United States. And the higher minimum wages that many people are advocating now would make it even harder for less skilled people to find new jobs and acquire skills. Indeed, higher minimum wages would artificially hasten the move to robots.


Non-economists have two biases that distort their understanding of economics. One is what my co-blogger, George Mason University economist Bryan Caplan, calls the pessimism bias. People find it easier to be pessimistic than optimistic, both about the future and about current affairs in the world. Therefore, it's reasonable to "adjust up" their predictions. If people are pessimistic about robots, we should be more optimistic.

Here's the other bias. Non-economists tend to put undue weight on what the nineteenth-century economist Frédéric Bastiat called "the seen," and too little weight on the "unseen." That matters here. It's much easier to point to jobs that have been destroyed by robots than to the ones that have been or will be created. If we knew what industries would be created in the future, we could invest accordingly and become incredibly wealthy. Unfortunately, we don't know. What we can be highly confident of, though, is that there will be jobs.

Think of it this way. Every time people predicted that automation and machinery would destroy jobs overall, they were wrong. Are they really likely to be right this time around?

And there's more good news. If you disagree with me and are confident about your disagreement, you can make some real money. Another George Mason University economist, Donald Boudreaux, recently offered to bet on the issue. Here's what he wrote to a job pessimist:

I will bet you $10,000, straight up, that in not one of the next 20 years will the annual U.S. labor-force participation rate, as measured by the U.S. Bureau of Labor Statistics, fall below 58.1 percent—which is the lowest rate on record at the Bureau of Labor Statistics.  (The labor-force participation rate hit this post-WWII low in December 1954.  And because the unemployment rate does not count unemployed workers who are so discouraged that they’ve stopped looking for jobs, the labor-force participation rate is more likely than is the unemployment rate to capture any long-term job-destroying effects of technology.)

If you strongly believe the opposite, take Boudreaux up on his bet. 
