The late William A. Niskanen, a member of President Reagan’s Council of Economic Advisers, began a 1997 book chapter titled “R&D and Economic Growth: Cautionary Thoughts” with this quote from President Bill Clinton about his proposed federal budget:
American history clearly demonstrates the importance of American leadership in science and technology to the future of our nation. Investments in science and technology drive economic growth, generate new knowledge, create new jobs, build new industries, ensure sustained national security, and improve our quality of life.
Niskanen then commented, “I wonder to which American history President Clinton was referring.” His point was that Clinton’s claim had no connection to what really happened in America and what really accounted for economic growth. Possibly the reason this chapter has received so little attention is that it appeared in a book rather than an economics journal. That’s too bad, because Niskanen made some devastating points.
I would bet that more people agree with Clinton than with Niskanen. But they shouldn’t. Niskanen made a strong argument and, in the process, also showed that the case for large government investment in research and development is weaker than is often thought.
Niskanen pointed out that the Clinton view has been around since long before Clinton. Indeed, one of the first people to articulate it was Francis Bacon, in his 1605 book The Advancement of Learning. Based on the book, Niskanen formulated a chain of reasoning that he labeled “Bacon’s chain.”
Bacon’s chain, according to Niskanen, goes as follows (I’ve edited his statement a little):
Government financing is necessary to provide the adequate level of basic research. Basic research is necessary to provide the scientific foundation for advanced technology. Finally, advanced technology accounts for a large part of economic growth.
By sifting through the evidence, Niskanen throws strong doubt on each of these three links in Bacon’s chain.
From advanced technology to economic growth
Consider the links in reverse order. The economist in the mid-twentieth century who got the most credit for his model of economic growth and, indeed, won the 1987 Nobel Prize for his effort, was MIT’s Robert Solow. Solow tried to explain economic growth by considering two obvious contributors: increases in the amount of capital and increases in the amount of labor. Capital, by the way, does not mean money. It is things like dams, buildings, and plant and equipment. Solow found that capital and labor explained only about 50 percent of economic growth. That meant that there was a large residual, approximately 50 percent, that was not explained. This residual, quite naturally, came to be called the “Solow residual.”
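The growth-accounting arithmetic behind the residual can be sketched in a few lines. The numbers below are purely illustrative, not Solow’s actual estimates, and the 0.3 capital share is a conventional textbook assumption:

```python
# Growth accounting: output growth is decomposed into the contributions
# of capital and labor; whatever is left over is the "Solow residual."
# All numbers here are illustrative, not Solow's actual data.

def solow_residual(g_output, g_capital, g_labor, capital_share=0.3):
    """Portion of output growth not explained by input growth."""
    labor_share = 1.0 - capital_share
    explained = capital_share * g_capital + labor_share * g_labor
    return g_output - explained

# Example: 3.2% output growth, 3% capital growth, 1% labor growth.
residual = solow_residual(0.032, 0.03, 0.01)
print(f"residual: {residual:.3f}")                   # 0.016
print(f"share unexplained: {residual / 0.032:.0%}")  # 50%
```

With these illustrative inputs, half of measured growth goes unexplained, which is the kind of residual at issue here; note that the arithmetic says nothing about what causes the unexplained portion.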
The odds are high that a mix of factors accounted for the residual. But many otherwise well-informed economists, noticing the Solow residual, claimed that improvements in technology accounted for half of economic growth. This was pure speculation on their part, with little to no grounding in economic theory or economic data. To see how unjustified the attribution is, consider that one factor economists know contributes strongly to economic growth is reductions in tariffs and other trade barriers. In a 2019 study for the Peterson Institute for International Economics, Dartmouth’s Douglas A. Irwin, arguably the top international trade economist in the world, surveyed a large literature on the effects of reducing trade barriers on economic growth. He concluded:
[A] variety of studies using different measures of policy have found that economic growth is roughly 1.0–1.5 percentage points higher than a benchmark after trade reform. Several studies suggest that this gain cumulated to about 10–20 percent higher income after a decade.
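The cumulation Irwin describes is ordinary compound-growth arithmetic: an extra 1.0–1.5 percentage points of annual growth, sustained for a decade, compounds to an income gain in roughly his 10–20 percent range. A quick check:

```python
# Compound an extra 1.0-1.5 percentage points of annual growth over ten years.
for extra in (0.010, 0.015):
    gain = (1 + extra) ** 10 - 1
    print(f"{extra:.1%} extra per year -> {gain:.1%} higher income after a decade")
```

The two cases work out to about 10.5 percent and 16.1 percent, both inside the range Irwin reports.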
Yet reductions in trade barriers are not the same as technological change. As Niskanen put it, this residual “is a measure of what we do not understand about economic growth, not a direct measure of the effects of technology.”
From basic research to advanced technology
What about the middle link in the Bacon chain: the dependence of advanced technology on basic research? It turns out that there’s almost no dependence. Niskanen pointed to multiple studies that made that point. I’ll highlight two. In 1969, the Department of Defense produced “Project Hindsight.” Thirteen teams of scientists and engineers were tasked with identifying “the key research events that contributed to twenty weapon systems.” They found seven hundred such events. How many came from basic research? Two.
In a 1991 study, University of Pennsylvania economist Edwin Mansfield, whose specialty was studying technology, reported the results of a survey he conducted of seventy-six firms in seven manufacturing industries. His goal, wrote Niskanen, was to “determine the share of the firms’ new products and processes that could not have been developed without academic research conducted within the prior fifteen years.” Only 11 percent of new products and 9 percent of new processes, Mansfield found, “could not have been developed, without substantial delay, in the absence of recent academic research.” Moreover, the products and processes that depended on academic research, pointed out Niskanen, “accounted for only 3 percent of sales and 1 percent of the industry savings attributable to technological innovation.”
In short, strong evidence should make one doubt the claim that basic research is crucial for advances in technology.
Since reading Niskanen’s chapter over twenty years ago, I’ve come across other work preceding his that backs his point that technology doesn’t typically rely on basic research. In his 1983 book The Tower and the Bridge: The New Art of Structural Engineering, David P. Billington explained why:
There is a fundamental difference between science and technology. Engineering or technology is the making of things that did not previously exist, whereas science is the discovering of things that have long existed. Technological results are forms that exist only because people want to make them, whereas scientific results are formulations of what exists independently of human intentions. Technology deals with the artificial, science with the natural.
Is government support needed for basic research?
A view that most economists hold strongly is that government must fund a large part of basic research. The argument for government funding is that basic research is what economists call a “public good.” That term has a very specific meaning in economics. A public good has two characteristics: (1) non-rivalry in consumption and (2) a prohibitive cost of excluding non-payers from using it. Non-rivalry in consumption means that my use of it doesn’t affect your use of it. But the key characteristic that leads economists to advocate government subsidies is the second one: if you can’t prevent non-payers from using it, goes the argument, then you have little incentive to produce the good in the first place. People will free-ride, and with enough free-riders, there are not enough payers. If that happens, the good won’t be produced or, at least, too little of the good will be produced.
The problem is that the argument proves too much: it predicts that private for-profit firms will not produce public goods even though an incredibly successful example of such a good has been staring us in the face for over a century. When I taught public goods in my microeconomics class, I always used this example because it perfectly satisfies both criteria. I refer to private AM radio broadcasting. First, my listening to a radio station in no way impedes your listening, thus satisfying the non-rivalry in consumption criterion. Second, before the advent of scrambling technology, there was no low-cost way to prevent non-payers from listening. Radio station owners didn’t even try to charge for listening. Instead, they charged for advertising.
What would motivate for-profit firms to fund basic research? Niskanen discussed this issue too. One motivator is a patent. Because there is a public good element to basic research, one way to exclude non-payers is to grant patents to the firms that come up with the research. Sometimes, of course, it’s difficult to patent basic research. Even in such cases, though, there are free-market solutions. If a firm invests in basic research and manages to keep the results to itself for some time, it can have what economists call a “first-mover advantage.” That is, it can use this research in cutting-edge products and charge a premium for those products. Indeed, Niskanen referenced two studies that found gains to firms that invested in basic science. A 1980 study of sixteen oil and chemical firms by Edwin Mansfield found that investment in basic science enhanced firms’ productivity growth. Harvard economist Zvi Griliches found, in a 1986 study of 911 firms, that investment in basic science enhanced firms’ profits.
Interestingly, the federal government does not account for a large percentage of R&D funding. In 2017, for example, business spent $400.1 billion on R&D funding, higher education accounted for $71.3 billion, the federal government spent $52.6 billion, and nonfederal government and other nonprofits spent $24.0 billion. The business number could be exaggerated because of the special tax treatment of expenditures that are labeled R&D. New rules that took effect in 2022 under the 2017 Tax Cuts and Jobs Act curbed this tax treatment. Presumably, though, even with the prior tax treatment, legitimate business spending on R&D swamped the spending by other entities.
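A quick arithmetic check on the 2017 figures quoted above shows just how small the federal share was:

```python
# 2017 U.S. R&D funding by source, in billions of dollars (figures from the text).
funding = {
    "business": 400.1,
    "higher education": 71.3,
    "federal government": 52.6,
    "nonfederal government and other nonprofits": 24.0,
}
total = sum(funding.values())
print(f"total: ${total:.1f} billion")
for source, amount in funding.items():
    print(f"{source}: {amount / total:.1%} of total R&D funding")
```

On these numbers, business supplied about 73 percent of the $548 billion total, while the federal government supplied under 10 percent.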
The ideas that government must fund most basic research, that such research is crucial for advanced technology, and that advanced technology is a strong contributor to economic growth have all become clichés. Not only President Clinton, but also many others, assume that these ideas are true without bothering to examine them.
William Niskanen was not one of those people. More than almost any other economist of his era, he marched to the beat of his own drummer. In doing so, he often challenged accepted views. And in this case, his challenge hit the mark. (Disclosure: he, along with Martin Feldstein, was my boss for two years at the Council of Economic Advisers.)