Hoover Daily Report

Government Forecasters Might as Well Use a Ouija Board

via Wall Street Journal
Thursday, October 16, 2014

Government economic forecasts receive a great deal of attention and are used to make a case for or against legislation or public policies. How good are the forecasts? The answer: not very. Forecasting is an inexact science at best, and the trust that Congress and the public invest in these estimates is not warranted.

The Congressional Budget Office and a group simply known as “Troika”—the administration’s Council of Economic Advisers, Office of Management and Budget and Treasury Department—put out annual economic forecasts. As chairman of the Council of Economic Advisers from 2006-09, I headed Troika. The agencies are staffed by capable and cautious career economists who do not claim that their forecasts are accurate, only that they are unbiased. Unfortunately, these caveats often fall on deaf ears.

The CBO and the administration (through Troika) put out annual forecasts of economic variables including gross domestic product, unemployment rates, inflation and interest rates. Real GDP growth is perhaps the most important for a variety of reasons, not the least of which is estimating how economic growth will affect government revenues and program costs. Yet the forecasting error by the CBO and the administration is very large.

My analysis of 1999-2013 reveals that the CBO’s real GDP growth forecasts for the next year were off, on average, by 1.7 percentage points, either too high or too low. Administration forecasts were off by a slightly larger 1.8 percentage points on average, also too high or too low. Given that the average growth rate during this period was only 2.1%, errors of this magnitude are substantial.

Perhaps most damning: History is a better predictor of annual growth than government forecasts. Simply assuming that GDP growth will be 3.1% in each year—the average annual rate for the 30 years that preceded the study period—results in an average forecast error of 1.5 percentage points.
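The comparison above amounts to computing a mean absolute forecast error for each forecaster and for a naive constant baseline. A minimal sketch of that calculation, using hypothetical growth and forecast numbers (not the actual CBO or Troika figures):

```python
# Sketch of the mean-absolute-error comparison described in the text.
# All numbers below are hypothetical, for illustration only.

def mean_abs_error(forecasts, actuals):
    """Average absolute gap between forecast and realized growth, in points."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

# Hypothetical realized real GDP growth rates (percent) for five years.
actual = [3.0, 2.5, 1.6, -0.3, 2.2]

# Hypothetical agency forecasts for the same years.
agency = [2.0, 4.0, 3.5, 2.5, 0.5]

# Naive baseline: predict the long-run historical average every year.
naive = [3.1] * len(actual)

print(mean_abs_error(agency, actual))  # agency forecast error
print(mean_abs_error(naive, actual))   # naive-baseline error
```

In this made-up example, as in the article's data, the constant historical-average guess beats the year-by-year forecasts: the baseline has no information about any particular year, but it also cannot be badly wrong in the way an aggressive forecast can.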

Troika is associated with a presidential administration, while the CBO is generally regarded as a nonpartisan agency that serves Congress. The CBO’s estimates might therefore be the least political. Although the CBO and Troika do not differ much in their average forecast error, administration forecasts over the entire period studied ran 0.7 percentage points higher than the CBO’s.

The CBO is also charged with estimating the costs of proposed legislation. As with GDP, and despite its professionalism, the task is daunting and the numbers should be read with caution. Large transfer programs and tax-change legislation provide important examples.

The food stamp program, which dates to 1939, became the Supplemental Nutrition Assistance Program or SNAP in 2008. In 2007, the CBO estimated that in 2013 SNAP would cost under $40 billion. The actual cost was more than $83 billion. The CBO failed to anticipate the effect of the severe recession. Yet even in 2009, during the recession, the CBO underestimated 2013 costs by more than 20%.

Sometimes the CBO overestimates costs. In 2003, the CBO estimated that the new Medicare Part D program would cost around $100 billion by 2013; actual spending was $50 billion. The CBO significantly overestimated how many seniors would enroll, and the market-based plans cost less than estimated because of the competition that Part D enabled.

The estimated costs of the Affordable Care Act are also sensitive to enrollment because participation affects costs and nonparticipation affects penalties collected. Changes in estimates from year to year can be large. The 2012 and 2013 CBO estimates of the 2015 budgetary impact of the ACA differ by 35%, or about $30 billion.

Occasionally, these cost estimates receive unwarranted credence because their methodology is obscure. One good example involves the American Recovery and Reinvestment Act of 2009, aka the stimulus.

In 2009, before the American Recovery and Reinvestment Act passed, the CBO forecast the effect of the $840 billion stimulus plan on GDP over the next few years. In 2014 the CBO issued a report stating that GDP increased by almost exactly the amount it had projected. Was the CBO’s forecast that good? No, the CBO simply used almost the same model—not actual data—to estimate the effect of the stimulus in 2014 as it had in 2009.

The same approach is used to forecast jobs “created or saved” by the stimulus. Government economists generally assume a mechanical link between forecasted GDP growth and forecasted job growth. This means that estimating job effects is subject to the same qualification as estimated GDP effects. Both are based on models, not actual experience. The CBO described the methodology in its report, but those who reported or trumpeted the CBO’s stimulus numbers were generally unaware of how GDP or jobs numbers were generated.

Forecasting is inherently difficult and almost always inaccurate. When basing decisions on forecasts, even those issued by government agencies, it is important to remember that there may be less than meets the eye.

Mr. Lazear, who was chairman of the President’s Council of Economic Advisers from 2006-09, is a professor at Stanford University’s Graduate School of Business and a Hoover Institution fellow.