For well over a century, with ever-expanding scale and scope, the United States government has been generating statistics that might illuminate the plight of society’s poorest and most vulnerable elements. From the beginning, the express objective of such efforts has always been to abet purposeful action to protect the weak, better the condition of the needy, and progressively enhance the general weal.

America’s official quest to describe the circumstances of the disadvantaged in quantitative terms began in the 1870s and the 1880s, with the Massachusetts Bureau of Statistics of Labor and the U.S. Bureau of Labor Statistics, and the initial efforts to compile systematic information on the cost of living, wages, and employment conditions for urban working households in the United States.1 U.S. statistical capabilities for describing the material well-being of the nation’s population through numbers have developed greatly since then.

Today the United States government regularly compiles hundreds upon hundreds of social and economic indicators that bear on poverty or progress on the domestic scene. Within that now-vast compendium, however, one number on deprivation and need in modern America is unquestionably more important than any of the others — and has been so regarded for the past four decades. This is what is commonly known as the “poverty rate” (the informal locution for the much more technical mouthful “the incidence of poverty as estimated against the federal poverty measure”).

First unveiled in early 1965, shortly after the launch of the Johnson administration’s “War on Poverty,” the poverty rate is a measure identifying households with incomes falling below an official “poverty threshold” (levels based on that household’s size and composition, devised to be fixed and unchanging over time). Almost immediately, this calculated federal poverty measure was accorded a special significance in the national conversation on the U.S. poverty situation and in policymakers’ responses to the problem.

Just months after its debut — in May 1965 — the War on Poverty’s new Office of Economic Opportunity (oeo) adopted the measure as its informal working definition of poverty. By August 1969, the Bureau of the Budget had stipulated that the poverty thresholds used in calculating American poverty rates would constitute the federal government’s official statistical definition of poverty. It has remained so ever since.2

The authority and credibility that the official poverty rate (opr) enjoys as an especially telling indicator of American domestic want are revealed in its unique official treatment. The opr is regularly calculated not only for the country as a whole, but for every locality down to the county level and beyond — on to the level of the school district. (It is even available at the level of census tracts: enumerative designations that demarcate the nation into subdivisions of as few as one thousand residents.)

Furthermore, U.S. government antipoverty spending has come to be calibrated against, and made contingent upon, this particular measure. Everywhere in America today, eligibility for means-tested public benefits depends on the relationship between a household’s income and the apposite poverty threshold. In Fiscal Year 2002 (the latest period for which such figures are readily available), perhaps $300 billion in public funds were allocated directly against the criterion of the “poverty guideline” (the Department of Health and Human Services’ version of poverty thresholds).3 The poverty rate currently also conditions many billions of dollars of additional public spending not directly earmarked for anti-poverty programs: for example, as a component in the complex formulae through which community grants (what used to be called “revenue sharing”) dispense funds to local communities.

Given its unparalleled importance — both as a touchstone for informed public discussion and as a direct instrument for public policy — the reliability of the official poverty rate as an indicator of material deprivation is a critical question. How accurately — and consistently — does the opr reflect changing patterns of material hardship in modern America, or changes in the living standards of the U.S. “poverty population”? How faithfully, in other words, does our nation’s poverty rate describe trends and patterns in the condition that most Americans would think of as poverty?

Although our official poverty rate is now by and large taken for granted, having become widely regarded with the passage of time as a “natural” method for calibrating the prevalence of material deprivation in American society, the measure itself was originally an ad hoc improvisation — and arguably a fairly idiosyncratic one — and in practical terms appears to be a problematic descriptor of poverty trends and levels in modern America. For one thing, its reported results do not track well with other indicators that would ordinarily be expected to bear directly on living conditions across the nation. In fact, over the past three decades, the relationship between the opr and these other indicators has been perversely discordant.

While the official poverty rate suggests that the proportion of the American population living below a fixed “poverty line” has stagnated — or increased — over the past three decades, data on U.S. expenditure patterns document a substantial and continuing increase in consumption levels for the entire country — including the strata with the lowest reported income levels. And while the poverty threshold was devised to measure a fixed and unchanging degree of material deprivation (i.e., an “absolute” level of poverty) over time, an abundance of data on the actual living conditions of low-income families and “poverty households” contradicts that key presumption — demonstrating instead that the material circumstances of persons officially defined as poor have improved broadly and appreciably over the past four decades.

In short, America’s most relied-upon metric for charting a course in our national effort to reduce and eliminate poverty appears to offer unreliable, and indeed increasingly misleading, soundings on where we are today, where we have come, and where we seem to be headed.

 

History of a calculation

The current conception of the U.S. federal poverty measure was first introduced to the American public in January 1965 in a landmark study by Mollie Orshansky, an economist at the Social Security Administration.4 Drawing upon her own earlier work, in which she had experimented with household income thresholds for distinguishing American children living in poverty conditions, Orshansky proposed a countrywide annual income criterion for identifying households in poverty, based on money income requirements set “essentially on the amount of income remaining after allowance for an adequate diet at minimum cost.”

As devised, Orshansky’s “poverty thresholds” were established as scalar multiples of the annual cost of a nutritionally adequate — but humble — household diet. For the base food budget in calculating poverty thresholds, Orshansky chose the U.S. Department of Agriculture (usda) “economy food plan” (known today as the “thrifty food plan”) — the lower of the two such budgets usda prepared for nonfarm families of modest means (one specifically proposed by usda “for temporary or emergency use when funds are low”).

The selection of a particular poverty-level food budget immediately raised the question of the appropriate multiplier — the factor that would convert that budget into an overall “poverty line” demarcating total annual income for the officially poor. The answer to that question was by no means obvious. While the cost of the economy food plan could be justified in terms of sheer empirical exigency — people must eat to survive; food costs money — the choice of a food budget multiplier was a much more subjective affair.

From the pioneering work of the Prussian statistician Ernst Engel in the 1850s onward, a century of household budget studies around the world had demonstrated that food did not account for a fixed percentage of household expenditures — but rather that the share of food in total spending steadily and predictably declined as household income levels increased. In impoverished low-income countries, 60 percent or more of the household budget was allocated for food — while on the other hand, a much smaller fraction of total income went to food in the richest countries in the postwar era. What was the correct proportion to use in constructing a U.S. poverty threshold?

In the event, Orshansky suggested a multiplier of roughly three times the minimum food budget for poverty-level incomes. While readily noting that her proposed multiplier was “normative,” Orshansky also argued that her coefficient had a solid grounding, and indeed reflected the norms of contemporary U.S. living standards.

A usda national food consumption survey, conducted in the U.S. in the spring of 1955 (the most recent such survey available at the time of Orshansky’s study), indicated that American nonfarm families of two or more were devoting an average of roughly one-third of their after-tax money incomes to food. Orshansky seized upon this three-to-one relationship as the general guideline for the poverty line she computed, and accordingly established her poverty threshold as a sort of multiplicative product of two elements: on the one hand, a nutritionally adequate (but stringent) food budget suggestive of poverty conditions; on the other, the then-conventional three-to-one ratio of total after-tax income to food spending for “Main Street” Americans.

But not all households were accorded a poverty threshold of exactly three times their corresponding “economy food plan” budget. Orshansky tailored those thresholds further, to account for variations in household size and composition and the presumed impact of these demographic factors on what is known as the Engel coefficient. (Larger poor households, for example, were posited to allocate a higher share of their income to food than smaller ones, and senior citizens living alone were presumed to require a larger share of their budgets for nonfood necessities than younger one-person households.) In her calculations, Orshansky drew upon usda economy food plan budgets, Bureau of Labor Statistics (bls) expenditure surveys, and 1960 census returns from the U.S. Census Bureau, crafting detailed weightings for estimated food needs in diverse household structures, and then adjusting the Engel coefficients actually observed for such households in bls expenditure surveys in accordance with judgments about the role that economies of scale — or sheer deprivation — had played in influencing those outcomes.
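
In arithmetic terms, the construction reduces to scaling a food budget by the inverse of an assumed Engel coefficient. The following minimal sketch (in Python, with purely hypothetical dollar figures and food shares, not Orshansky’s actual budget data) illustrates the mechanics:

```python
# A minimal sketch of the Orshansky construction, using purely hypothetical
# numbers. The threshold is the cost of the economy food plan scaled up by
# the inverse of an (adjusted) Engel coefficient: a multiplier of roughly
# three when food is assumed to absorb about one-third of after-tax income.

def poverty_threshold(annual_food_cost: float, engel_coefficient: float) -> float:
    """Annual income threshold implied by a food budget and an assumed food share."""
    return annual_food_cost / engel_coefficient

# Illustrative only: a household whose economy food plan costs $1,000 a year,
# with the conventional one-third food share, gets a $3,000 threshold.
print(poverty_threshold(1_000, 1 / 3))   # 3000.0

# A larger household presumed to devote a higher share of income to food gets
# a smaller multiplier (and a senior living alone, a larger one).
print(poverty_threshold(1_400, 0.38))    # about 3684
```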

The usda economy food plan offered separate budgets for 19 types of household configurations. For her part, Orshansky created poverty thresholds for 62 separate types of nonfarm households — 58 varieties of different sorts of families, and an additional four for persons living alone (differentiated by age and gender). She also estimated the 62 corresponding poverty thresholds for the U.S. farm population, for a nationwide total of 124 U.S. poverty thresholds.

Using these poverty thresholds (all initially benchmarked against the 1964 usda economy food plan), Orshansky calculated the total population below the poverty line for the United States as a whole — and for regional and demographic subgroups within the country — for calendar year 1963, relying upon Census Bureau data on pretax money income for that same year. (The statistical distinction between pretax income — the figures used for determining whether a family fell below the poverty threshold — and after-tax money income — the criterion against which those same poverty thresholds had been originally constructed — was finessed through a presumption that the poor would not be paying out much, or anything, in taxes.)

Although Orshansky’s study did not actually use the term poverty rate — it talked instead about the incidence of poverty — the poverty rate quickly came to mean the proportion of persons or families below the poverty line in the apposite reference group, and has been so understood ever since.

The schema and framework for estimating official poverty rates in the United States today are basically the same as in 1965. Annual oprs are still determined on the basis of poverty thresholds maintained and updated by the U.S. Census Bureau (currently calculated for “only” 48 family subtypes); official poverty status is still contingent upon whether a household’s measured annual pretax money income exceeds or falls below that stipulated threshold. While a number of minor revisions have been introduced (such as the elimination of Orshansky’s farm/nonfarm differentials, and also of her differentials between male- and female-headed households), the original Orshansky approach of computing poverty rates on the basis of poverty thresholds and annual household income levels remains entirely intact.

The most significant change in the original poverty thresholds is their annual upward adjustment to compensate for changes in general price levels. In 1969, the Bureau of the Budget directed that the poverty line would thenceforth be pegged against the Consumer Price Index (cpi) and ruled that the cpi deflator would also be used to establish official “poverty thresholds” back to 1963, the base year for Orshansky’s original study. (cpi-scaled adjustments were subsequently utilized to calculate poverty thresholds, and thus official poverty rates, back as far as 1959 — i.e., the year against which the household money incomes in the 1960 census were reported.)

To this writing, official U.S. poverty thresholds continue to be updated annually in accordance with changes in the cpi — and with cpi changes alone. Implicit in this decision is the important presumption that America’s official poverty rate should be a measure of absolute poverty rather than relative poverty. Whereas a relative measure might take some account of general improvements of living standards in assessing material deprivation, the determination to hold poverty thresholds constant over time, adjusting only for inflation, is to insist upon an absolute conception of poverty: a standard of deprivation held as constant over time as the index problem will permit.
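
In practice, the annual update is a simple rescaling. A minimal sketch (with hypothetical index values, not actual cpi figures) of how a base-year threshold is carried forward:

```python
# A minimal sketch of the annual updating rule described above, with
# hypothetical index values rather than actual cpi figures: each threshold
# is carried forward by the ratio of the current year's price index to the
# base year's, holding the standard fixed in inflation-adjusted terms.

def update_threshold(base_threshold: float, cpi_base: float, cpi_now: float) -> float:
    """Restate a base-year poverty threshold in current-year dollars."""
    return base_threshold * (cpi_now / cpi_base)

# Illustrative only: a $3,000 threshold set in a base year with an index of
# 100, restated for a year in which the index stands at 180.
print(update_threshold(3_000, 100.0, 180.0))   # 5400.0
```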

In her seminal 1965 study, Orshansky acknowledged more than once that her measure of poverty was “admittedly arbitrary” — although she also vigorously defended it as “not unreasonable.” Though she did not dwell on the point, a considerable degree of the apparent arbitrariness in this poverty measure was conditioned by the imperative of fashioning a serviceable and regularly updateable index from the limited data sources then readily at hand.

Whatever the intellectual merits of representing material deprivation in terms of a nationwide standard of annual reported pretax money income — a variety of objections to which practice could be drawn from basic tenets of microeconomics — the singular virtue of such a poverty indicator was that the Census Bureau was already producing detailed and continuous data of just this sort through its p–60 (i.e., Consumer Income) Series of “Current Population Reports.”

By the same token, Orshansky’s poverty thresholds were open to criticism on a number of conceptual and empirical grounds, as she herself recognized. But those constructs also happened to represent concoctions — arguably quite insightful and ingenious ones — based on the somewhat haphazard ingredients then at hand in the statistical larders of usda, the Census Bureau, and the bls.

As of 2005, the U.S. official poverty rate is the single longest-standing official index for assessing deprivation and material need in any contemporary country. That fact alone makes it unique. But America’s opr is unique in another sense, as well. For although a multitude of governments and international institutions have pursued quantitative efforts in poverty research over the past two decades, and have even fashioned particular national and international poverty indices, none has elected to replicate the Orshansky approach to counting the poor. This curious fact is not often remarked upon by U.S. statistical authorities — but it is not only worth bearing in mind, it is also worth pondering as one evaluates the U.S. poverty rate and its long-term performance.

 

Stark numbers

Estimates of the official poverty rate for the United States are available from the year 1959 onward. For the total population of the U.S., the opr declined by nearly half over this period, from 22.4 percent in 1959 to 12.7 percent in 2004, and dropped by roughly similar proportions for America’s families, from 20.8 percent to 11.0 percent. Measured progress against poverty was more pronounced for older Americans (the opr for persons 65 and older fell from 35.2 percent to 9.8 percent) but more limited for children under 18 (27.3 percent vs. 17.8 percent). For African Americans, the official poverty rate declined by almost three-fifths — by over 30 percentage points — between 1959 and 2004, but in 2004 remained more than twice as high as the rate for whites.

One may note that most of the reported reduction in overall U.S. poverty, according to this federal poverty measure, occurred at the very beginning of the series — that is to say, during the first decade for which numbers are available. Between 1959 and 1968, the opr for the total population of the United States fell from 22.4 percent to 12.8 percent, or by more than a point per year. In 2004, by contrast, the U.S. poverty rate was only imperceptibly lower than it had been in 1968 — and actually slightly higher than it had been back in 1969.

Indeed, to judge by the official poverty rate, the United States has suffered a generation and more of stagnation — or even retrogression — in its quest to reduce poverty. Figure 1 illustrates the situation. For the entire U.S. population, the lowest opr yet recorded was for the year 1973, when the index bottomed at 11.1 percent. Over the subsequent three decades, the opr nationwide has remained steadily above 11.1 percent, often substantially; in 2004, the rate reported was 12.7 percent.

This long-term rise in the official poverty rate for the U.S. as a whole was not a statistical artifact — an arithmetic consequence of averaging in some particularly grim trends for some smaller subpopulation within the nation. To the contrary, long-term increases in oprs were characteristic for the overwhelming majority of the U.S. public during the period in question. Between 1973 and 2004, the official poverty rate did decline for older Americans as a whole (16.3 percent vs. 9.8 percent) and for persons living alone (25.6 percent vs. 20.5 percent); it also declined for African Americans overall (31.4 percent vs. 24.7 percent). But for the rest of the country, the official poverty rate was in general higher at the start of the new century than it had been in the early 1970s. Measured poverty rates, for example, were higher in 2004 than they had been in 1973 for children under 18 (14.4 percent in 1973 vs. 17.8 percent in 2004) and for people of working ages, i.e., 18 to 64 (8.3 percent vs. 11.3 percent). The nationwide opr for U.S. families likewise rose over those years (from 9.7 percent to 11.0 percent). Outside of the South, where the opr registered a slight decline (from 15.3 percent to 14.1 percent), poverty rates were higher in every region of America in 2004 than in 1973. Overall poverty rates for non-Hispanic whites — so-called Anglos — were also higher than they had been in 1973 (7.5 percent vs. 8.6 percent). No less striking, the overall poverty rate for Hispanic Americans was exactly the same in 2004 as in 1973 (21.9 percent) — implying that the circumstances of this diverse but often socially disadvantaged ethnic minority had not improved at all over the course of three full decades.

Taken on their face, these stark numbers would seem to be a cause for dismay, if not outright alarm. To go by the official poverty rate, modern America has failed stunningly to lift the more vulnerable elements of society out of deprivation — out from below the income line, according to the author of the federal poverty measure, where “everyday living implied choosing between an adequate diet of the most economical sort and some other necessity because there was not money enough to have both.” This statistical portrait of an apparent long-term rise in absolute poverty in the contemporary United States evokes the specter of profound economic, social, and political dysfunction in a highly affluent capitalist democracy. (It is a picture that conforms disturbingly well with some of the Marxian and neo-Marxist critiques of industrial and global capitalism, which accused such systems of inherently generating “immiserating growth.”) All the more troubling is the near-total failure of social policy implied by such numbers, for despite the War on Poverty and all subsequent governmental antipoverty initiatives, official poverty rates for the nation have mainly moved in the wrong direction over the past three decades.

 

Other measures

Although the official poverty rate is accorded a special official status as an index of poverty conditions in modern America, it is by no means the only available indicator that might provide insight on poverty conditions and material deprivation in the country. Many other indices bearing upon poverty are readily available, and their trends can be compared with the reported opr. Curiously, the official poverty rate does not seem to exhibit the normal and customary relationship with any of these other poverty proxies.

Table 1 illustrates the problem. It contrasts results for the years 1973 and 2001 for the official poverty rate and several other indicators widely recognized as bearing directly upon the risk of poverty in any modern urbanized society. (The choice of these two specific end-years is admittedly and deliberately selective — but it is a selection that highlights the underlying contradictions discussed below.)

In the period between 1973 and 2001, for example, per capita income in the United States rose very significantly in real (inflation-adjusted) terms: by roughly 60 percent, according to estimates from the Census Bureau’s cps series. Other official U.S. data, incidentally, suggest the gains over those years may have been even more substantial: The National Income and Product Accounts from the Bureau of Economic Analysis (bea), for example, estimate an increase in per capita output of about 67 percent for 1973–2001.5

By the same token, the measured rate of unemployment for persons 16 and older was somewhat lower in 2001 (4.7 percent) than it had been in 1973 (4.9 percent). Alternative measures of the availability of remunerative employment also indicated that a higher fraction of the American population was gainfully occupied in 2001 than in 1973: Labor force participation rates for those 16 and older, for instance, were over six points higher in 2001 (66.9 percent) than they had been in 1973 (60.5 percent), and the employment-to-population ratio for the 16-plus group was almost seven points higher in 2001 (63.7 percent) than in 1973 (56.9 percent).

As for educational attainment, America’s working-age adults clearly had completed more years of schooling in 2001 than in 1973. In 1973, nearly 40 percent of U.S. adults 25 or older had no high school degree; by 2001, the corresponding fraction was under 16 percent. Among youths and young adults, the profile for access to schooling also improved between 1973 and 2001, if less dramatically: Whereas the ratio of net enrollment in high school for children 14–17 years of age had been 91.0 percent in 1973, it was a projected 94.8 percent for 2001.

Then there are the trends in spending by government at the federal, state, and local levels on means-tested benefit programs: that is to say, public antipoverty outlays. Between Fiscal Year 1973 and Fiscal Year 2001, real spending on such programs more than tripled, leaping from $153 billion to $484 billion (in constant 2002 dollars), or by over 150 percent on a per capita inflation-adjusted basis. One can make arguments for excluding the health and medical care component from the measure of antipoverty program spending; doing the sums, nonhealth antipoverty spending would still rise in constant 2002 terms from $109 billion in 1973 to $231 billion in 2001, or by 57 percent per capita.6 These data, one must emphasize, account for just the government’s share of anti-poverty programs: Private charitable donations provide additional resources for meeting the needs of America’s poor, and those resources are considerable. In the year 2001, total private philanthropic giving was estimated at $239 billion — in real terms, 156 percent more than in 1973; and in real per capita terms, an increase of over 90 percent. Although we cannot know the exact proportion of these private funds earmarked for poverty alleviation, it seems safe to say that antipoverty spending by both the public and the private sectors increased very significantly on a real per capita basis between 1973 and 2001.

As it was constructed, the official poverty rate was meant to measure only pretax money incomes; in-kind benefits, such as food or housing, would be excluded from this calculus automatically, and by design. Given the prevailing perceptions that cash aid accounts for only a small fraction of U.S. antipoverty spending — and the common belief that means-tested cash aid has been substantially reduced in the United States since the “welfare reform” laws of 1996 — one might assume that antipoverty spending ought not to have too much of an influence on long-term trends in the official poverty rate. Yet cash transfers through official antipoverty policies are by no means trivial today, nor has the rise over the past three decades in such spending been insignificant. In 2001, government-provided cash aid programs for the poor dispensed over $100 billion — 81 percent more in real terms than in 1973, and nearly 35 percent more on a real per capita basis. If we were to factor in private-sector cash aid, total anti-poverty transfers for 2001 would be that much higher.

Per capita income, unemployment, educational attainment, and anti-poverty spending are factors that would each be expected to exert independent and important influence on the prevalence of poverty in a modern industrialized society — any modern industrialized society. When trends for all four of these measures move conjointly in the direction favoring poverty reduction, there would ordinarily be a strong expectation that the prevalence of measured poverty would decline as well (so long, of course, as poverty was being measured against an absolute rather than a relative benchmark). Yet curiously, the official poverty rate for the United States population was higher for 2001 (11.7 percent) than for 1973 (11.1 percent).

Needless to say, this is a discordant and counterintuitive result that demands explanation. Further examination, unfortunately, reveals that the paradoxical relationship between the poverty rate and these other indicators of material deprivation in Table 1, while perverse, is not at all anomalous. To the contrary: For the period since 1973, the U.S. poverty rate has ceased to correspond with these other broad measures of poverty and progress in any common-sense fashion. Instead, the poverty rate seems to have become possessed of a strange but deeply structural perversity: It continues to maintain a predictable relationship with these other indicators, but that relationship is by and large precisely the opposite of what one would normally expect of a poverty indicator.

The curious behavior of the official poverty rate in relation to these four other important measures bearing on material deprivation is underscored by simple econometrics, through regression equations in which these other measures are utilized in an attempt to “predict” the poverty rate for a 30-year period (1972–2002). Under ordinary circumstances, we would expect unemployment and poverty to be positively associated (the higher the unemployment level, the higher the poverty level), while per capita income, educational attainment, and anti-poverty spending should all correlate negatively with any absolute measure of poverty. Between 1972 and 2002, however, the official poverty rate happens to correlate positively with increases in per capita income — and the statistical association is a strong one. Indeed, controlling for changes in unemployment levels, a rise in real U.S. per capita income of $1,000 (in 2002 dollars) would be predicted to push up the official poverty rate for the entire population by over half a percentage point.

If we exclude per capita income from the tableau, the other three measures — unemployment, education, and anti-poverty spending — can in tandem do a very good job of predicting changes in the poverty rate, together explaining over 90 percent of the variation in the poverty rate during the period in question. But the relationships between the poverty rate and these other variables are perverse: The poverty rate falls when unemployment rises; and when education or anti-poverty spending rise, the poverty rate rises too. And if we use all four measures to try to predict the poverty rate, the common-sense (i.e., negative) correlation between per capita income and poverty at last emerges, and that relationship is statistically strong — yet strong relations between the poverty rate and the other three measures also emerge, and all of those are perverse. Those relationships, in fact, imply that an eight-point jump in the unemployment rate would reduce the official poverty rate by a point, while a ten-point drop in the percentage of adults without high school degrees would raise it by a point! No less striking: A nationwide increase in means-tested public spending of $1,000 per capita (in 2002 dollars) would be predicted to make the official poverty rate rise — by over three percentage points.
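
The form of these regressions is easy to reproduce. The sketch below (in Python, with synthetic placeholder series standing in for the actual 1972–2002 data) shows only the estimating equation, not the author’s results:

```python
# A minimal sketch of the regression exercise described above. The series
# here are SYNTHETIC placeholders; a replicating researcher would substitute
# the actual 1972-2002 annual data (official poverty rate, unemployment rate,
# real per capita income, share of adults without high school degrees, and
# real per capita means-tested spending).
import numpy as np

rng = np.random.default_rng(0)
n = 31  # annual observations, 1972-2002

unemployment = rng.uniform(4, 10, n)         # percent (placeholder)
income = np.linspace(18, 32, n)              # real per capita income, $000s (placeholder)
no_hs = np.linspace(40, 16, n)               # % adults without high school degrees (placeholder)
spending = np.linspace(0.5, 1.7, n)          # real per capita antipoverty outlays, $000s (placeholder)
poverty = 11 + 0.5 * rng.standard_normal(n)  # official poverty rate, percent (placeholder)

# OLS: poverty_t = b0 + b1*unemployment_t + b2*income_t + b3*no_hs_t + b4*spending_t + e_t
X = np.column_stack([np.ones(n), unemployment, income, no_hs, spending])
coefficients, *_ = np.linalg.lstsq(X, poverty, rcond=None)
print(dict(zip(["const", "unemployment", "income", "no_hs", "spending"],
               coefficients.round(3))))
```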

Clearly, something is badly amiss here. And unless someone can offer a plausible hypothesis for why U.S. data series on per capita incomes, unemployment rates, adult educational attainment, and anti-poverty spending should be collectively flawed and deeply biased for the post–1973 period, the simplest explanation for these jarring results would be that the officially measured poverty rate happens to offer a highly misleading, or even dysfunctional, measure of material deprivation and has, moreover, been doing so for some considerable period of time.

 

A major discrepancy

Over the years a number of criticisms have been lodged against the official poverty rate, among them:

  • The opr takes no account of regional differences in U.S. price levels.

  • It embraces an inappropriate deflator for its inter-temporal adjustments in price levels.

  • It takes no account in “money income” of either personal taxes paid or capital gains reaped — quantities that have been on the rise over the past generation.

  • It is biased because it makes no imputation for the implicit rental “income” homeowners enjoy through occupying their own properties.

  • It is biased because it takes no account of the noncash benefits that households consume (including means-tested public benefits and such private services as employer-provided health insurance).

The Census Bureau has attempted to deal with most of these objections. A series of Census Bureau studies, in fact, have calculated “alternative poverty estimates” for the United States using both a different price index (cpi-u-rs, whose calculated tempo of increase has been somewhat slower than that of the cpi-u), and a variety of more inclusive measures of “income” — and all the associated permutations of the two.7 (The Census Bureau has not been able to calculate separate poverty thresholds for different regions of the United States, due mainly to a lack of the necessary detailed data on local price levels.)

There is, however, an additional problem with the official poverty rate — one possibly more significant than any of the criticisms just mentioned. This is its implicit assumption that a poverty-level household’s annually reported money income will equate to the level of its annual expenditures.

The original Orshansky methodology estimated “poverty thresholds” to designate consumption levels consonant with poverty status, and matched these against annually reported household incomes — but it made no effort to determine the actual consumption levels of those low-income households. Instead, it posited an identity between reported money income and expenditures for these families. To this date, the method by which the official poverty rate is calculated continues to presume an identity between measured annual money incomes and annual expenditure levels for low-income households. Yet this presumption is dubious in theory, and it is confuted empirically by virtually all available data on spending patterns for America’s poorer strata.

From the standpoint of economic theory, a corpus of literature extending back to the early postwar period, and including the contributions of at least two Nobel laureates in economics (Milton Friedman and Franco Modigliani), has outlined the entirely logical reasons for expecting expenditures to exceed income for consumers who end up in the lowest income strata in any given year. Both the “permanent income hypothesis” and the “life-cycle income hypothesis” tell us that families and individuals base their household budgets not just on the fortunes (and uncertainties) of a single year, but instead against a longer life-course horizon — stabilizing their long-term living standards (and smoothing their consumption trajectory) against the vagaries of short-term income fluctuations. Such behavior naturally suggests that the marginal propensity to consume will tend to be disproportionately high for lower-income households — and for the perhaps considerable number of households whose expected “permanent income” exceeds their current income (that is, whose “transitory income” for the year is negative), current consumption will likewise exceed current income if financial arrangements permit.8
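
A stylized numerical illustration of that logic (the incomes and marginal propensities to consume below are hypothetical, chosen only to make the arithmetic visible):

```python
# A minimal sketch of consumption smoothing under the permanent income
# hypothesis. The propensities and dollar figures are illustrative
# assumptions, not estimates from the literature cited above.

def consumption(permanent_income: float, transitory_income: float,
                mpc_permanent: float = 0.9, mpc_transitory: float = 0.2) -> float:
    """Spending responds strongly to permanent income, weakly to one-year shocks."""
    return mpc_permanent * permanent_income + mpc_transitory * transitory_income

# A household expecting $40,000 of permanent income that suffers a bad year
# (a -$15,000 transitory shock, so current income is only $25,000):
current_income = 40_000 - 15_000
spending = consumption(40_000, -15_000)
print(spending, round(spending / current_income, 2))   # 33000.0 spent: 1.32x income
```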

From the standpoint of empirics, U.S. survey data document a by now major discrepancy between reported annual expenditure levels and reported annual income levels for poorer households in the United States — a disproportion that seems to have been widening steadily over the decades since the official poverty rate was first devised. These trends are evident from the Consumer Expenditure (ce) Survey, produced by the Bureau of Labor Statistics (bls). Unlike the Census Bureau’s p–60 series on money incomes of U.S. households, which has been prepared continuously since the late 1940s, the bls ce surveys have until recently been episodic, taking place about once a decade between the end of World War ii and the start of the 1980s. From 1984 onwards, the ce survey has been published annually. Like the p–60 series, this one in principle measures pretax money income of households, but it also cross-references reported annual income against a detailed breakdown of reported out-of-pocket expenditures (net of reimbursement).

In the four decades between 1960–61 and 2002, according to ce surveys, real per household expenditures in the United States rose overall by roughly 65 percent — but since average household size declined over those years from 3.2 persons to 2.5 persons, unweighted real per capita expenditures rose by about 111 percent.9 Over that same period, real expenditures rose substantially for lower-income Americans as well: In 2002, constant-dollar expenditures for the poorest fifth (lowest income quintile) of U.S. households were 77 percent higher than they had been for the poorest fourth (lowest quartile) in 1960–61; between 1972–73 and 2002, real expenditures for the lowest quintile of households increased by 57 percent. Given changes in household size, unweighted per capita expenditure levels were 130 percent higher in real terms for the poorest fifth of U.S. households in 2002 than they had been for the poorest fourth in 1960–61 — and for the lowest income quintile were about 43 percent higher in 2002 than in 1972–73.

It is striking that real levels of household expenditures for the poorest fifth of U.S. households should have risen by over half during a period in which the official poverty rate itself rose (from 11.5 percent of the population in 1972–73 to 12.1 percent in 2002) — and during which, according to the same ce survey data, real incomes for the poorest fifth of U.S. households reportedly fell. The contradiction is explained, in proximate terms, by a dramatic increase in the ratio of expenditures to income for poorer U.S. households. Whereas the ratio of expenditures to pretax income remained fairly stable for U.S. households overall between 1960–61 and 2002 (rising from 81 percent to 86 percent), that same reported ratio has skyrocketed for poorer Americans since the advent of the official poverty rate. In 1960–61, the lowest income quartile of U.S. households reportedly spent about 12 percent more than their annual pretax income. By 1972–73, however, the poorest fifth of households were spending nearly 40 percent more than their annual income — and by 2002 were spending well over double their reported annual income. (See Table 2.)

Statisticians and economists at the bls caution that theirs is an expenditure survey, rather than an income-and-expenditure survey, and explicitly recommend that “for users interested only in income information, data published by the Census Bureau of the U.S. Department of Commerce may be a better source of information.” Substituting Census Bureau estimates for pretax money income for the poorest quintile, however, does not vitiate the apparently widening gap between incomes and expenditures for poorer American households. Comparing ce survey data on expenditures and Census Bureau data on money incomes, we find reported expenditures for the lowest fifth of households 24 percent higher than pretax income in 1972–73, but over 90 percent higher in 2002. Furthermore, the gap between money incomes for the poorest fifth (as reported by the Census Bureau) and expenditure levels for the poorest fifth (as reported by the bls) appears to have widened gradually over the 1980s and 1990s.

It is worth noting that virtually all of the 13.6 percent of the U.S. population in the lowest income quintile of the ce surveys in 2002 would have counted as officially poor under contemporary poverty thresholds and bls soundings on their annual income. Yet paradoxically, as of 2002 the average expenditure level for this poorest fifth of U.S. households was 50 percent above the official poverty threshold for a two-person family — even though the average household size for those in the lowest quintile was less than two persons (1.7). Furthermore, since the ce surveys report only out-of-pocket expenditures (excluding unreimbursed employer and government noncash benefits), actual levels of consumption of goods and services for low-income households may be higher still than these nominal results suggest.

In the early 1960s — the period whose data Orshansky relied upon in devising her original poverty rate — a surfeit of reported expenditures over reported pretax income among low-income households was already evident in national consumer expenditure surveys, but that discrepancy was relatively modest: about 12 percent for the lowest income quartile. By the turn of the century, that reported discrepancy was truly enormous: It had risen to almost 130 percent for the lowest income quintile of U.S. households. The arguably unexpected but in any case continuing and now-extreme divergence between reported income and reported expenditure levels for low-income households represents a critical blind spot for the official American poverty indices. With reported pretax income levels an ever poorer predictor of true household consumption levels, the official poverty rate — contingent as it is on income rather than consumption numbers — would correspondingly appear to be an increasingly biased estimator of the actual prevalence of deprivation among United States households.

 

Temporary poverty

The stark and increasing mismatch between reported annual incomes and reported annual expenditures for low-income households in contemporary America may go far in helping to explain why the official poverty rate — predicated as it is on reported annual money income — seems so very out of keeping with other data series bearing on the incidence of material deprivation in modern America. But how is this widening gap to be explained? How did the reported surfeit of expenditures over pretax income for low-income households in America in ce surveys vault from about 12 percent in the early 1960s to almost 130 percent in 2002?

One hypothesis for the growing discrepancy between income levels and expenditure levels for poorer Americans might be that low-income Americans are “overspending” — i.e., going ever deeper into debt. By the reasoning of this surmise, the apparently widening gap between income and expenditures reported for poorer Americans, far from being an artifact, would represent an all-too-genuine phenomenon: an unsustainable binge that must eventually end, with ominous consequences for future living standards of the vulnerable and the disadvantaged.

On its face, this hypothesis might seem plausible. In the event, however, it appears to be confuted by data on the net worth of poorer American households. If expenditures for lower-income households were being financed through a steady draw-down of assets or accumulation of debt, we would expect the net worth of poor Americans to decline steadily over time in absolute terms. No such trend is evident from the two government data sources that attempt to estimate the net worth of poorer Americans: the Census Bureau’s Survey of Income and Program Participation (sipp) and the Federal Reserve Board’s Survey of Consumer Finances (scf).10 To be sure, poorer American households do appear to have very modest means by comparison with the rest of contemporary America. At the turn of the century, according to both sipp and scf, the median net worth for U.S. households in the bottom income quintile was less than $8,000 (in 2001 dollars). But available data do not suggest that the median net worth of poorer households is declining steadily over time. sipp data report that the median net worth of poorer U.S. households dipped between the mid-1980s and the early 1990s, but then rose back close to its earlier level by the turn of the century; scf data corroborate a steady rise in median net worth for poorer households over the 1990s.

A slightly more sophisticated version of the same spend-down thesis might propose that net worth was holding for low-income households only because fixed-value liabilities were being accumulated against nominal (and potentially transient) increases in assets values. (We might term this a “second-order overspending” hypothesis.) Available data argue against this conjecture as well. The scf provides estimates not only of mean net worth, but also of mean assets and liabilities for the poorest fifth of U.S. households. Between 1989 and 2001, the estimated mean value of those assets appreciated much more substantially than mean liabilities ($24,000 versus $8,000, in constant 2001 dollars). Consequently, the mean net worth of the poorest fifth of U.S. households was estimated to rise in real terms by roughly half over those same years, from about $34,000 to over $52,000 (in constant 2001 dollars). Poorer U.S. households, taken as a whole, may have been “spending down” a portion of their appreciating asset values — but only a portion of those gains.

If the growing statistical discrepancy between incomes and expenditures for poorer Americans cannot be explained by a growing indebtedness of lower-income households, how, then, can we account for it? Three partial explanations come immediately to mind.

  • Changes in ce survey methods and practices. The growing mismatch between reported income and reported expenditures for lower-income households could in part be an artifact of changes in the ce survey itself. The University of Texas’s Daniel Slesnick, a trenchant student of U.S. poverty data, has noted that the correlation between reported income and reported expenditures on the ce surveys as a whole dropped substantially between the early 1960s and the 1980s,11 but Harvard’s Christopher Jencks has counseled against imputing too much significance to the apparent change. Jencks observes that the ce survey currently entails fewer built-in checks and safeguards than in the past: Whereas inconsistent or curious responses would be likely to invite re-interviews — and emendations — in the 1960–61 survey, similarly suspicious data might simply be entered into the official database in more recent surveys.12 Neither Slesnick nor Jencks, however, offers us an indication of the actual quantitative impact of these alterations in the conduct of the ce survey.

  • Income underreporting. A second potential problem, related to the first, might be a tendency over time toward increased misreporting of income. As already mentioned, the bls staff responsible for the ce surveys carefully note that users should place more confidence in their expenditure estimates than their income estimates, especially for the lowest reported income deciles. (The ce staff seems especially concerned by the relatively large number of respondents who report extremely low or even negative incomes but healthy spending patterns.) As a possible corrective for the survey’s income underreporting, ce researchers have proposed the ranking of households by outlays rather than income. Ranking households by current outlays rather than income radically changes the ratio of outlays to income for the bottom quintile: In the 1992 ce survey, for instance, that ratio drops from 2.05 (income-ranked) to a mere 0.67 (outlay-ranked).13

This innovative exercise casts an interesting additional light on U.S. expenditure patterns — but as a corrective for income underreporting, it has some problems of its own. For one thing, an outlays-based ranking of household incomes and expenditures produces the entirely anomalous result that America’s greatest “savers” are the quintile of households with the very lowest incomes (with a pretax income-to-outlays ratio of 1.50), while the greatest dis-savers are the very top quintile (with a ratio of only 0.88).

Accounting for the growing discrepancy between reported income and reported expenditures in the ce survey, moreover, would require some evidence of increased misreporting of incomes for the lowest quintile of households. In actuality, the discrepancies between ce and Census Bureau estimates for pretax money incomes have been diminishing over the past two decades. ce estimates for the lowest quintile’s money incomes were 44 percent below Census Bureau estimates in 1984 — whereas the difference was 17 percent in 2002. The gradual reconciliation of ce and Census Bureau estimates would not argue for increasing misreporting unless the Census Bureau were itself the main source of the problem.

  • Increased year-to-year income variability. One possible explanation for a secular rise in the expenditure-to-income ratio for households in the lowest annual income quintile would be a long-term increase in year-to-year variations in household income. If U.S. consumer behavior comports with the “permanent income” hypothesis, and if the stochastic year-to-year variability (i.e., transitory variance) in American income patterns were to increase, then we would expect, all other things being equal, that the ratio of reported annual expenditures to reported annual incomes would increase.

This ratio would be expected to rise because intensified transitory variance would mean that, at any given time, a higher proportion of effectively nonpoor households would be experiencing a “low income year” — and since their consumption levels would be conditioned by their “permanent income” expectations, they would still be spending like nonpoor households, even if they were temporarily classified as poor households by the criterion of current income. The greater the proportion of “temporary poor” in the total poverty population, the greater the discrepancy between observed income levels and observed expenditure levels should be within the poverty population.
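
That mechanism can be made concrete with a small simulation. The sketch below (in Python, with entirely hypothetical income parameters) shows that as the variance of one-year shocks grows, the expenditure-to-income ratio of the bottom annual-income quintile rises, even though no household changes its smoothing behavior:

```python
# A minimal simulation of the mechanism just described. All parameters are
# illustrative assumptions: permanent incomes are drawn from a lognormal
# distribution, one-year transitory shocks are normal, and households smooth
# consumption (strong response to permanent income, weak response to shocks).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def bottom_quintile_ratio(transitory_sd: float) -> float:
    permanent = rng.lognormal(mean=10.6, sigma=0.5, size=n)  # "permanent" income
    transitory = rng.normal(0.0, transitory_sd, size=n)      # one-year shock
    current = permanent + transitory                         # reported annual income
    spending = 0.9 * permanent + 0.2 * transitory            # smoothed consumption
    poor = current <= np.quantile(current, 0.20)             # bottom quintile by current income
    return spending[poor].sum() / current[poor].sum()

# As transitory variance rises, more "temporarily poor" households land in
# the bottom quintile, and that quintile's spending-to-income ratio climbs.
for sd in (2_000, 6_000, 12_000):
    print(sd, round(bottom_quintile_ratio(sd), 2))
```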

If poverty is defined in terms of a particular income threshold, it should be readily apparent that poverty status is not a fixed, long-term condition for the overwhelming majority of Americans who are ever designated as poor. Quite the contrary: Since American society and the U.S. economy are characterized by tremendous and incessant mobility, long-term poverty status appears to be the lot of only a tiny minority of the people counted as poor by the official U.S. poverty metric.

The Census Bureau’s longitudinal Survey of Income and Program Participation (sipp) documents this central fact. For the calendar year 1999, nearly 20 percent of the noninstitutionalized American population was estimated to have experienced two or more months in which their household income fell below the poverty threshold. And at some point during the four years 1996–1999, fully 34 percent of the surveyed population spent two months or more below the poverty line. On the other hand, just 2 percent of the population spent all 48 months of 1996–99 below the poverty line. The long-term poor (or “permanent poor”), in other words, accounted for barely one-tenth of those who passed through officially designated poverty at some point in 1999, and less than 6 percent of those who were counted as poor at any point between the start of 1996 and the end of 1999. (See Figure 2.)

As might be expected, the incidence of chronic or long-term poverty varies according to ethnicity, age, household composition, and location. Whereas just 1 percent of the non-Hispanic white population is estimated to have spent all of 1996–99 below the poverty line, the rate was over 5 percent for both African-Americans and Hispanic-Americans; long-term poverty rates of over 5 percent also typified female-headed households and persons living alone. Yet even for the groups with the highest measured rates of long-term poverty, these permanent poor accounted for a very small fraction of the “ever poor”: Fewer than a sixth of the Hispanics counted as poor at any time during 1999, for example, had been below the poverty line throughout 1996–99.

Given the high proportion of the temporarily poor within the overall population of those counted as poor, it should not be surprising that reported expenditures would exceed reported income among America’s lower-income strata, as they apparently do today. But while the dynamics illustrated by the sipp data speak to high, steady, and rapid rates of transition into and out of poverty status for American households in the late 1990s, those data do not indicate whether or not the longer-term trend in year-to-year household income variability has been increasing.

More extended longitudinal data series would be required for such calculations — and fortunately, such databases are currently available. One of these is the Panel Study of Income Dynamics (psid), an ongoing in-depth socioeconomic survey that commenced in 1968 and currently follows some 7,000 sample families. Several researchers have attempted to estimate longer-term trends for transitory variance in U.S. household income based on these data. Their findings all point to a single general pattern: one of secular, and quite significant, increases in such variability between the early 1970s and the beginning of the twenty-first century.

Although the concept of transitory income — and thus variance in transitory income — is clear enough in theory, the task of computing transitory variance is not straightforward in practice, owing to the nature of the observational problem; consequently, a variety of techniques has been advanced for decomposing “permanent variance” and “transitory variance” within the spectrum of overall income differences within a given population.
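
To fix ideas, the crudest such decomposition treats each household’s long-run average as its permanent component and the year-to-year deviations from that average as transitory. The sketch below (in Python, on a synthetic panel) implements only this bare-bones error-components split; it is far simpler than the techniques discussed next:

```python
# A minimal sketch of a permanent/transitory variance decomposition on panel
# income data. This bare-bones error-components split is for illustration
# only; it is NOT the Moffitt-Gottschalk procedure, which models far richer
# income dynamics.
import numpy as np

def decompose_variance(incomes: np.ndarray) -> tuple[float, float]:
    """incomes: (households, years) array of, e.g., log annual income."""
    household_means = incomes.mean(axis=1)            # each household's long-run average
    within = incomes - household_means[:, None]       # year-to-year deviations
    return household_means.var(), within.var()        # (permanent, transitory)

# Synthetic panel built as y_it = mu_i + e_it, so the true answer is known:
rng = np.random.default_rng(2)
mu = rng.normal(10.5, 0.6, size=(5_000, 1))           # permanent component (variance 0.36)
e = rng.normal(0.0, 0.3, size=(5_000, 20))            # transitory shocks (variance 0.09)
print(decompose_variance(mu + e))                     # roughly (0.36, 0.09)
```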

One recent approach to decomposing the two was developed by Johns Hopkins University’s Robert A. Moffitt and Boston College’s Peter Gottschalk, who applied their method to psid household earnings data. Relying on this same technique, Yale University’s Jacob S. Hacker calculated that the year-to-year variability of pretax income for U.S. families rose dramatically over the last quarter of the twentieth century, more than doubling between 1973 and 1998. By those calculations, transitory variance (or what Hacker labels “income instability”) rose quite steadily over the course of the 1970s and 1980s, then spiked upward in the early 1990s — dropping off in the mid-to-late 1990s, but nevertheless remaining in 1998 well above the average level of the 1973–90 period.

Further work by Hacker updated those calculations to cover the 1973–2000 period, and changed the metric from pretax family income to post-tax, post-transfer family income (arguably a more representative measure for permanent income). Those computations also indicated a substantial long-term rise in transitory variance for U.S. household income. Like his initial findings, these updated calculations report a curious and unexplained spike in transitory variance for the year 1993 — but even excluding that observation, there is an unmistakable secular increase in measured year-to-year variability over this period.

Further analysis of the psid survey corroborated Hacker’s findings and expanded on them. For a special series of articles on economic insecurity in the United States today for the Los Angeles Times,14 Moffitt was commissioned to supervise an additional breakdown of trends in transitory variance in U.S. family income over the 1970–2000 period. Utilizing Moffitt-Gottschalk techniques, he and two graduate students calculated, among other things, the changes in transitory income variance for families at different rungs on the income ladder, and the absolute change in transitory variance for median-income households in the United States.

According to those calculations, inflation-adjusted variations in annual U.S. family income registered a steady and consequential climb over the 1970–2000 period. For a median-income American household — a family in the very middle of the overall income distribution — the maximum expected random volatility in year-to-year income more than doubled over these years, rising from about $6,000 in 1970 to nearly $13,500 in 2000 (in constant 2003 dollars).15 (See Figure 3.) Since inflation-adjusted median family income (in the psid data series) rose by just 28 percent over those same years, maximum random annual volatility in relation to annual income rose significantly — from about 16 percent in 1970 to about 27 percent in 2000.

The correspondence between income shocks and family income levels, moreover, was not uniform across the income spectrum. Moffitt calculated what statisticians call the “coefficient of variation” (the standard deviation of a sample as a proportion of its mean) for families at three separate positions in the income scale: the twentieth percentile (designated as the working poor), the fiftieth percentile (labeled the middle class), and the ninetieth percentile (upper income). In 1970, the coefficient of variation was lowest for the highest of these income groupings, and highest for the lowest income grouping; proportional income variability was about twice as high for families at the 20 percent mark in the overall income distribution as for those at the 90 percent threshold.

Between 1970 and 2000, the coefficient of variation rose for families at all three spots in the overall U.S. income distribution — but it was measured as rising especially sharply for those bordering the bottom income quintile. Whereas proportional income variability increased by about three-fifths for the upper-income grouping, and by about three-fourths for the middle-class grouping, it fully doubled for the working poor families at the boundary between the bottom income quintile and the second income quintile in the overall income distribution.
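
For concreteness, the statistic works as follows (a sketch with two hypothetical families; the dollar figures are invented to roughly echo the proportions just cited, and are not taken from the psid):

```python
# A minimal sketch of the coefficient-of-variation comparison described
# above: the year-to-year standard deviation of a family's annual income,
# expressed as a proportion of its mean annual income. Both families below
# are hypothetical, with invented income levels and shock sizes.
import numpy as np

def coefficient_of_variation(annual_incomes: np.ndarray) -> float:
    return annual_incomes.std() / annual_incomes.mean()

rng = np.random.default_rng(3)
# A hypothetical "working poor" family: $25,000 mean income, $6,000 shocks.
working_poor = 25_000 + rng.normal(0, 6_000, size=30)
# A hypothetical "upper income" family: $120,000 mean income, $14,000 shocks.
upper_income = 120_000 + rng.normal(0, 14_000, size=30)

# Proportional income variability comes out roughly twice as high for the
# poorer family, echoing the 1970 pattern described in the text.
print(round(coefficient_of_variation(working_poor), 2))
print(round(coefficient_of_variation(upper_income), 2))
```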

The long-term increase in proportional income variability for American households evident within the psid data series — and the disproportionate increase in such variability for Americans at the lower rungs of the income ladder — are highly suggestive. If corroborated through other longitudinal data series (such as the Census Bureau’s sipp), these would qualify as truly major socioeconomic trends for contemporary America. Yet the finding is so robust within the psid data that it merits immediate discussion, even before exploring other longitudinal series.

Certainly the measured long-term increases in transitory income variance reflected in the psid would be consistent with the by now generally accepted finding that dispersion in overall household earnings and in overall household income both increased during the last quarter of the twentieth century in the United States.16 The causes of increased U.S. earnings and income dispersion, and the relative contributions of different socioeconomic factors to it, remain matters of extensive ongoing research and active debate among informed specialists.

The social consequences of increased income inequality, and the policy implications of those trends, are likewise matters of widespread interest and continuing, intense dispute. For our limited purposes here, it may suffice to underscore a single statistical consequence of the measured rise in U.S. income inequality. If (as psid data strongly suggest) the proportional variation in American annual household income has been on the rise over the past generation — and if, moreover, such increases have been especially pronounced at the lower quintiles of the overall income distribution (as psid data also strongly suggest) — then we would correspondingly expect a rise, possibly even a sharp rise, in the discrepancy between reported annual income and reported annual expenditures for households in the bottom quintile of the income distribution.

Clearly more research is warranted here. For now, however, we may note that the curious divergence between reported income and expenditure patterns that has been recorded in consumer expenditure surveys for the period since the early 1970s appears to be matched by a simultaneous reported rise in transitory income variance for U.S. families in the psid survey — and with a particularly marked increase in proportionate year-to-year variations for families on the borderline of the bottom income quintile.

 

Incontestably better off

By indexing annual changes in nominal poverty thresholds against the Consumer Price Index, the official poverty rate for the U.S. is, in principle, devised to track over time a set of fixed and constant household income standards for distinguishing the poor from the nonpoor. While there are conceptual justifications for both absolute and relative measures of poverty, the incontestable fact is that the opr was intended to be an absolute measure — one that would identify people living in conditions determined by a specific and unchanging budget constraint.
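The mechanics of that indexing can be stated in a few lines. The Python sketch below carries a base-year threshold forward by the ratio of consumer price indexes; the threshold and index values are approximate, illustrative figures rather than official ones.

    # Minimal sketch of CPI indexing for an absolute poverty threshold.
    # All numbers are approximate and illustrative, not official values.

    def update_threshold(base_threshold: float, base_cpi: float, current_cpi: float) -> float:
        """Re-express a base-year threshold in current-year dollars."""
        return base_threshold * (current_cpi / base_cpi)

    threshold_1965 = 3_100.0          # illustrative threshold for a four-person family
    cpi_1965, cpi_2000 = 31.5, 172.2  # approximate cpi-u annual averages (1982-84 = 100)

    print(f"${update_threshold(threshold_1965, cpi_1965, cpi_2000):,.0f}")
    # roughly $16,900 in 2000 dollars: the same real budget constraint, restated
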

It is thus constructed — and thus interpreted: contemporary specialists on poverty in the United States widely understand the poverty line to demarcate the population whose absolute material circumstances have not improved since the advent of the War on Poverty. This understanding is implicit in the comments of economist Sheldon Danziger, a leading authority on America’s poverty problem, upon the release in 2004 of official poverty numbers that were higher than those reported in the mid-to-late 1970s: “We have had a generation with basically no progress against poverty. . . . The economic growth is not trickling down to the poor.”17

The notion that the official poverty rate tracks a fixed and unchanging material condition, however, is contradicted by a wide array of physical and biometric indicators. These data demonstrate steady and basically uninterrupted improvements in the material conditions and consumption levels of Americans in the lowest income strata over the past four decades.

Mollie Orshansky intended her original standard for counting the poor to designate an income level below which “everyday living implied choosing between an adequate diet of the most economical sort and some other necessity because there was not money enough to have both.” In purely material terms, today’s American poverty population is incontestably better off than were Orshansky’s original poor back in 1965.

To track the changing material circumstances of America’s low-income population, we will follow trends in four areas: 1) food and nutrition; 2) housing; 3) transportation; and 4) health and medical care. From the early 1960s through the beginning of the twenty-first century, American consumers, poor and nonpoor alike, devoted the great majority of their personal expenditures to these four categories of goods and services. Between 1960–61 and 2002, food, housing, transport, and health/medical care together accounted for about 70 percent of mean U.S. household expenditures, and for about 80 percent of the expenditures of households in the lowest income quintile. And while the composition of these allocations shifted over the decades, their total claim within overall expenditures remained remarkably stable. Let us examine each in turn.

Food and nutrition. In the early 1960s — the years for which the poverty rate was first devised — undernourishment and hunger were unmistakably in evidence in the United States. Indeed, food scarcity was evident in the expenditure patterns of American consumers: In the 1960–61 consumer expenditure survey, for example, the marginal propensity of consumers to spend income on food rose between the lowest and the next-lowest income groupings. With an income elasticity for food of more than 1.0, this poorest grouping of Americans — accounting for about 1 percent of the households surveyed — constituted a population for which foodstuffs were “luxury goods.” In no subsequent consumer expenditure survey for the United States, however, is it possible to identify subcategories of the population with income elasticities of expenditure for foodstuffs in excess of 1.0.
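The elasticity test invoked here is straightforward to illustrate. The Python sketch below applies the standard midpoint (arc) formula to two hypothetical income groupings; the dollar figures are invented, not taken from the 1960–61 survey, and serve only to show how an elasticity above 1.0 marks food as a luxury good for the lower grouping.

    # Illustrative midpoint (arc) elasticity of food expenditure with
    # respect to income. All figures are hypothetical.

    def arc_elasticity(q1, q2, i1, i2):
        """Percent change in food spending per percent change in income,
        using midpoint averages so the result is direction-independent."""
        dq = (q2 - q1) / ((q1 + q2) / 2)
        di = (i2 - i1) / ((i1 + i2) / 2)
        return dq / di

    # Hypothetical annual figures for the two lowest income groupings:
    income_low, food_low = 900.0, 400.0
    income_next, food_next = 1_400.0, 680.0

    e = arc_elasticity(food_low, food_next, income_low, income_next)
    print(f"income elasticity of food expenditure: {e:.2f}")  # 1.19 here, i.e. > 1.0
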

Biometric assessments of nutritional status amplify and extend the evidence from consumer expenditure surveys. Health survey data collected by the National Center for Health Statistics (nchs) of the U.S. Centers for Disease Control and Prevention (cdc) make the point. Between the early 1960s and the end of the century, for example, the proportion of the adult population 20 to 74 years of age assessed “probabilistically” as underweight from weight-for-height readings (i.e., with a measured body mass index under 18.5) dropped by half, from 4.0 percent to 1.9 percent.18 The main nutritional problem to emerge over those years in the anthropometric data was obesity, the prevalence of which (as predicted by weight-for-height data) soared from 13 percent in 1960–62 to 31 percent in 1999–2002.
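For concreteness, the weight-for-height screens can be written out as a short Python classifier. The underweight cutoff (bmi below 18.5) is the one given in the text; the obesity cutoff (bmi of 30 or above) is the standard cdc convention, which we assume is the one behind the prevalence figures above. As note 18 stresses, both are probabilistic screens rather than direct measures of body composition.

    # Weight-for-height screening: bmi = weight (kg) / height (m) squared.
    # Underweight cutoff (< 18.5) as in the text; obesity cutoff (>= 30)
    # is the standard cdc convention, assumed here.

    def bmi(weight_kg: float, height_m: float) -> float:
        return weight_kg / height_m ** 2

    def classify(weight_kg: float, height_m: float) -> str:
        b = bmi(weight_kg, height_m)
        if b < 18.5:
            return "underweight"
        if b >= 30.0:
            return "obese"
        return "neither"

    print(classify(50.0, 1.70))   # bmi ~17.3 -> underweight
    print(classify(95.0, 1.75))   # bmi ~31.0 -> obese
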

For purely biological reasons, a society’s most nutritionally vulnerable groups are typically infants and children. Anthropometric and biometric data suggest that nutritional risks to American children have declined almost continuously over the past three decades. Even for low-income children — i.e., those who qualified for means-tested public health benefits — those nutritional risks look to have been declining progressively. According to the cdc’s Pediatric Nutrition Surveillance System, for example, the percentage of low-income children under five years of age categorized as underweight (in terms of bmi for age) dropped from 8 percent in 1973 to 5 percent in 2003; since the cutoff for “underweight” was defined probabilistically as the fifth percentile on normed pediatric growth charts, the 2003 reading of 5 percent matches the share expected in a healthy reference population — consistent, that is, with an excess underweight prevalence of zero. Similarly, the proportion of medically examined low-income children presenting height-for-age below the expected fifth percentile level on pediatric growth charts declined from 9 percent in 1975 to 6 percent in 2003. Blood work for these same children suggested a gradually declining risk of anemia, to judge by the drop in the proportion identified as having a low hemoglobin count.

Housing and home appliances. Statistical information on U.S. housing conditions and home appurtenances is available today from three main sources: 1) the decennial census of population and housing; 2) the Census Bureau’s American Housing Survey (ahs), conducted in 1984 and every few years thereafter; and 3) the Department of Energy’s Residential Energy Consumption Survey (recs), initially conducted in 1978 and currently re-collected every four years. Since 1970, the decennial census has cross-classified household housing conditions by official poverty status; ahs and recs also track poverty status and its correlates in their surveys.

Basic trends in housing conditions for poverty households and officially nonpoor households are highlighted in Table 3. In terms of simple floor space, the homes of the officially poor were more spacious at the dawn of the new century than they had been three decades earlier. In 1970, almost 27 percent of poverty-level households were officially considered overcrowded (the criterion being an average of more than one person per room). By 2001, according to the ahs, just 6 percent of poor households were “overcrowded” — a lower proportion than for nonpoor households as recently as 1970. Between 1980 and 2001, moreover, per capita heated floor space in the homes of the officially poor appears to have increased substantially — to go by official data, by as much as 27 percent or perhaps even more.19 By 2001, the fraction of poverty-level households lacking some plumbing facilities was reportedly down to 2.6 percent — a lower share than for nonpoor households in 1970.

Trends in furnishings and appurtenances for American households similarly record the steady spread of desirable consumer appliances to poor and nonpoor households alike. From 1970 to the present, poorer households’ access to or possession of modern conveniences has been unmistakably increasing. For many of these items — including telephones, television sets, central air conditioning, and microwave ovens — prevalence in poverty-level households as of 2001 exceeded availability in the typical U.S. household as of 1980, or in nonpoor households as of 1970. By the same token, the proportion of households lacking air conditioning was lower among the officially poor in 2001 than among the general public in 1980. By 2001, over half of all poverty-level households had cable television and two or more television sets. Moreover, by 2001 one in four officially poor households had a personal computer, one in six had internet access, and three out of four had at least one vcr or dvd player — devices unavailable even to the affluent a generation earlier.

These data cannot tell us much about the quality of either the housing spaces that poverty-level households inhabit or the appurtenances furnished therein. They say nothing, furthermore, about nonphysical factors that bear directly on the quality of life in such housing units — most obvious among these being crime. These data do, however, strongly support the proposition that physical housing conditions have been gradually improving not only for America at large but for the officially poor as well. In any given year, a gap in physical housing conditions separates the officially poor from the nonpoor — but the data for today’s poor appear similar to those for the nonpoor a few decades earlier.

Transportation. At the time of the 1972–73 consumer expenditure survey, almost three-fifths of the households in the lowest income quintile had no car. Since the official poverty rate for families in those years was only about 10 percent, we may suppose that the proportion of poverty-level households without motor vehicles at that time was somewhat higher still. By 2003, however, over three-fifths of U.S. poverty-level households had one car or more — and nearly three in four had some sort of motor vehicle. (The distinction is pertinent, owing to the popularity and proliferation, from the late 1970s onward, of suvs, light trucks, and other motor vehicles not classified as cars.)

By 2003, quite a few poverty-level households had multiple motor vehicles: Fourteen percent had two or more cars, and 7 percent had two or more trucks. In 2003, to be sure, vehicle ownership was more limited among the officially poor than among the general public; for the country as a whole, fewer than 9 percent of households reported being without any motor transport whatever. The increase in motor vehicle ownership among officially poor households has followed the general rise for the American public — albeit with a very considerable lag. As of 2003, auto ownership rates for poverty-level households mirrored ownership rates for U.S. families in general in the early 1950s; for all forms of motor transport, U.S. poverty households’ ownership levels in 2003 matched overall U.S. families’ auto ownership levels from the early 1960s; and poverty households’ ownership levels for two or more motor vehicles paralleled those of the general U.S. public in the late 1950s or early 1960s.

Health and medical care. nchs data can be used to illuminate two separate aspects of health status and medical care in modern America: outcomes and service utilization. The most critical datum for health status is arguably mortality: All other health indicators are subsidiary to survival. The single most intuitively clear mortality indicator may be expectation of life. Unfortunately, however, available data do not permit the construction of “life tables” and attendant survival schedules by official poverty status. But mortality data are available for adults by their educational attainment — and this proxy affords us a glimpse at some of the socioeconomic differences in death rates in contemporary America.

Perhaps not surprisingly, adults without a high school diploma had significantly higher age-standardized death rates than the general population: In 2002, the differential was over 50 percent among both men and women. Despite the relative magnitude of this disparity, however, in absolute terms death rates in 2002 for this educationally disadvantaged group were lower than they had been among the general public some years earlier. The overall age-standardized death rate for women 25 to 64 years of age in 1970, for example, was slightly higher than the 2002 rate for their counterparts who had not completed high school. Among adult men, death rates for the general public in 1970 were about 10 percent higher than among high-school dropouts in 2002.

For babies and infants, the single most important measure of health status is surely the infant mortality rate. Between 1970 and 2002, the infant mortality rate in the United States fell by nearly two-thirds, from 20 per 1,000 live births to 7 per 1,000. The infant mortality rate continued its almost uninterrupted annual declines after 1973, when officially measured poverty rates for U.S. children began to rise. The contrast is particularly striking for white babies. Between 1974 and 2001, their infant mortality rates fell by three-fifths, from 14.8 per 1,000 to 5.8 per 1,000; yet over those same years, the official poverty rate for white children rose from 11.2 percent to 13.4 percent. (See Figure 4.)

These survival gains were achieved not only in the face of purportedly worsening poverty status, but also despite unfavorable trends in biological risk. In 2001, the proportion of white babies born at high-risk “low birth weight” (below 2,500 grams) was actually somewhat higher than in 1974. Yet despite this adverse trend in the incidence of low birth weight, infant mortality rates improved dramatically. Since the inherent biological disparities in mortality risk between low-birth-weight and other newborns did not diminish over this period, the reasonable inference is that medical and health care interventions — changes in the quality and availability of services — accounted for most of the improvement. And since low-birth-weight infants are disproportionately born to mothers from disadvantaged socioeconomic backgrounds, a further reasonable inference is that these improvements in the quality and availability of medical care extended to America’s poorer strata, not just the well-to-do.

One particularly revealing indicator of health status and health care availability is dental health. From at least the nineteenth century, with its path-breaking reform-movement studies of the English working classes, the condition of a population’s teeth has been recognized as a telling reflection of social well-being. Dental health is also an informative proxy for health care access because dentistry is still widely regarded as an optional medical service. Between the early 1970s and the late 1990s, the share of the U.S. adult population with untreated dental cavities is estimated to have dropped by nearly half, from 48 percent to 26 percent. Of officially poor adults, fully two-fifths still had untreated cavities in the 1999–2000 nchs survey — but since nearly two-thirds of poverty-level adults had untreated cavities in the 1971–74 surveys, this represented a considerable advance over circumstances a generation earlier. For older Americans, the loss of all natural teeth was long a likely outcome in later life — but a majority of Americans 65 and older can now expect to avoid that fate. According to nchs health examination surveys, the fraction of edentulous senior citizens declined from about 50 percent in 1960–62 to about 30 percent in 2000. (No data are available here on trends for poverty-level seniors.)

Such improvements in dental conditions are suggestive of improved dental care. Time series data on dental visits are not immediately available, but data for recent years could be consistent with increased use of dentistry by the official poverty population. By 2002, nearly half of poverty-level adults aged 18 to 64 and nearly two-thirds of poverty-level children 6 to 17 years of age were reportedly making at least one dental visit a year. Such rates would look comparable to the ones reported for the general population in the early 1960s.

Trends in utilization of health care for the poor are further illustrated by the circumstances of children under 18 — more particularly by the proportion reporting no medical visits over the year preceding their health interview survey. (See Figure 5.) While the percentage of children without an annual medical visit is always higher among the poor than among the nonpoor, steady and substantial declines are reported for both groups. The proportion of children without a reported annual medical visit, in fact, was significantly lower for the poverty population in 2002 (12.1 percent) than it had been for the nonpoverty population 20 years earlier (17.6 percent). Figure 5 cannot address the question of preexisting health needs — it could be that pediatric medical problems were on the rise during this period. These data thus do not conclusively demonstrate that “access” or “availability” of health and medical care have been improving. But they are strongly suggestive of this possibility — all the more so in conjunction with the salutary trends in health status outcomes.

To summarize the evidence from physical and biometric indicators: Low-income and poverty-level households today are better fed and less threatened by undernourishment than they were a generation ago. Their homes are larger, better equipped with plumbing and kitchen facilities, and more capaciously furnished with modern conveniences. They are much more likely to own a car (or a light truck, or another type of motor vehicle) now than 30 years earlier. By almost every indicator apart from obesity, their health status is considerably more favorable today than at the start of the War on Poverty. Their utilization of health and medical services has steadily increased over recent decades.

All of this is in one sense reassuring. These data underscore the basic fact that low-income Americans have been participating in what Orshansky termed “America’s parade of progress.” Orshansky had worried that the poor in modern America might be watching that parade and “wait[ing] for their turn — a turn that does not come”; fortunately, her apprehension has proved to be unfounded.

To state this much is not to assert that material progress for America’s poverty population has been satisfactory, much less optimal. Nor is it to deny the importance of relative as opposed to absolute deprivation in the phenomenon of poverty as the poor themselves experience it. Those are serious questions that merit serious discussion, but they are distinct from the focus of this study — i.e., the reliability of the official poverty rate per se as an indicator of material deprivation.

As we have seen, the U.S. federal poverty measure is premised on the assumption that official poverty thresholds provide an absolute poverty standard — a fixed inter-temporal resource constraint. Such a standard implies that the general material conditions of the poverty population should remain more or less invariant over time. Yet quite clearly, the material condition of the poverty population in modern America has not been invariant over time — it has been steadily improving. The opr thus fails — one is tempted to say it fails spectacularly — to measure what it purports to be tracking over time. As an indicator of a condition originally defined in 1965, the official poverty rate seems to have become an ever less faithful and reliable measure with each passing year.

 

Biases and flaws

In some quarters, criticism of the various shortcomings of America’s official poverty rate will be taken as evidence of indifference to the plight of America’s disadvantaged and poor. Such an inference is illogical at best. Proponents of more effective antipoverty policies should be in the very front ranks of those advocating more accurate information on America’s poverty problem. Without such information, effective policy action will be impeded; under the influence of misleading information, policies will be needlessly costly — and ineffective.

The official poverty rate is incapable of representing what it was devised to portray: namely, a constant level of absolute need in American society. The biases and flaws in the poverty rate are so severe that it has depicted a great period of general improvements in living standards — three decades from 1973 onward — as a time of increasing prevalence of absolute poverty. We would discard a statistical measure that claimed life expectancy was falling during a time of ever-increasing longevity, or one that asserted our national finances were balanced in a period of rising budget deficits.

Central as the “poverty rate” has become to antipoverty policy — indeed, precisely because of its central role in such policies — the official poverty rate should likewise be discarded in favor of a more accurate index, or set of indices, for describing material deprivation in modern America.

The task of devising a better statistical lodestar for our nation’s antipoverty efforts is by now far overdue. Properly pursued, it is an initiative that would rightly tax both our formidable government statistical apparatus and our finest specialists in the relevant disciplines. But such exertions would also stand to benefit the common weal in as yet incalculable ways.

1Joseph P. Goldberg and William T. Moye, The First Hundred Years of the Bureau of Labor Statistics, Bureau of Labor Statistics Bulletin 2235 (September 1985).

2For informative background on the origin and evolution of the poverty rate, see Gordon M. Fisher, “The Development of the Orshansky Poverty Thresholds and Their Subsequent History as the Official U.S. Poverty Measure,” U.S. Census Bureau Poverty Measurement Working Papers (May 1992, partially revised September 1997).

3Douglas J. Besharov and Peter Germanis, “Reconsidering the Federal Poverty Measure: Project Description,” http://www.welfareacademy.org (June 14, 2004), 5. Poverty guidelines are based on poverty thresholds but differ from them in that they are updated more promptly to reflect intervening changes in price levels, and they follow a slightly simplified schema for determining household eligibility, with fewer categories for family size and composition than are found in the Census Bureau’s poverty threshold tables.

4Mollie Orshansky, “Counting the Poor: Another Look at the Poverty Profile,” Social Security Bulletin 28:1 (January 1965).

5Real per capita gdp estimates derived from nipa bea Tables and mid-year population for 1973 and 2001 as reported in Statistical Abstract of the United States 2004–2005, Table No. 2.

6Derived from Vee Burke, Cash and Noncash Benefits for Persons with Limited Income: Eligibility Rules, Recipient and Expenditure Data, fy2000–fy2002, Congressional Research Service Report rl32233 (November 25, 2003), Table 5, and Statistical Abstract of the United States 2004–2005, Table 2.

7See Joe Dalaker, Alternative Poverty Estimates in the United States: 2003, U.S. Census Bureau Series p–60:227 (June 2005).

8The concept of transitory income can be traced back at least as far as Milton Friedman and Simon S. Kuznets, Income from Independent Professional Practice (National Bureau of Economic Research, 1945), Chapter 7, where the term itself was perhaps coined. Consumer behavior theory would suggest that annual incomes would equate to annual expenditures in the lowest income strata only where those low income levels were in fact consonant with a household’s expectations of its long-term financial outlook — or where institutional barriers prevented the household from financing additional near-term consumption.

9 We use unweighted per capita consumption here rather than a weighted adjustment because the former measure is more straightforward. There are good arguments for the latter, insofar as we might expect consumption needs of children and the elderly to be lower than those of working-age adults — but since there are no generally accepted differentials for such weightings, we opt here for transparency over sophistication.

10scf appears to offer a more comprehensive inventory than sipp of the various components of household wealth. For a detailed comparison and evaluation, see John L. Czajka, Jonathan E. Jacobson, and Scott Cody, Survey Estimates of Wealth: A Comparative Analysis and Review of the Survey of Income and Program Participation (Mathematica Policy Research Inc., August 22, 2003).

11Daniel T. Slesnick, Consumption and Social Welfare: Living Standards and Their Distribution in the United States (Cambridge University Press, 2001).

12Christopher Jencks, comments at the Second Seminar on Reconsidering the Federal Poverty Measure (September 14, 2004), as reported in the summary report by Douglas J. Besharov and Gordon Green, http://www.welfareacademy.org (October 18, 2004), 10.

13John M. Rogers and Maureen B. Gray, “ce Data: Quintiles of Income Versus Quintiles of Outlays,” Monthly Labor Review (December 1994), 32–37. As defined by bls, “current outlays” is a slightly more comprehensive measure of spending than “total expenditures.”

14Peter G. Gosselin, “The New Deal: If America Is Richer, Why Are Its Families So Much Less Secure?” (three-part series, Los Angeles Times, October 12–December 30, 2004).

15Technically speaking, Moffitt’s measure in Figure 3 uses the statistical benchmark of a single standard deviation to establish what the Los Angeles Times series refers to as the “maximum fluctuation in annual household income for 68 percent of U.S. families.” That is to say, assuming the year-to-year variations in family income conform to the bell-shaped normal distribution, this calculation delineates the band within which just over two-thirds of observed stochastic variations in income for median-income families would be expected to fall.
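The “just over two-thirds” benchmark can be verified directly from the normal distribution, as in this short Python check:

    # Share of a standard normal distribution lying within one
    # standard deviation of the mean.
    from math import erf, sqrt

    def std_normal_cdf(x: float) -> float:
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    share_within_1sd = std_normal_cdf(1.0) - std_normal_cdf(-1.0)
    print(f"{share_within_1sd:.4f}")  # ~0.6827, i.e. just over two-thirds
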

16For the official U.S. data on these trends, see Arthur F. Jones Jr. and Daniel H. Weinberg, The Changing Shape of the Nation’s Income Distribution, 1947–1998, U.S. Census Bureau Series p–60:204 (June 2000) and, for some updated data for 1967–2001, the U.S. Census Bureau’s “Historical Income Inequality Tables.”

17David Leonhardt, “More Americans Were Uninsured and Poor in 2003, Census Finds,” New York Times (August 27, 2004).

18Weight-for-height designations of obesity should be regarded as probabilistic because they do not actually measure or estimate a given individual’s proportion of body fat (as is done clinically through skin-fold tests and the like).

19recs 2001 (upon whose figures the calculation above was based) places the mean heated floor space for poverty households at 472 square feet per person; the ahs 2001, for its part, indicates a median value of 739 square feet per person for poverty households, although this total appears to include both heated and unheated floor space and pertains only to the 55 percent of poverty-level households in single, detached, and/or mobile/manufactured homes. (American Housing Survey 2001, Table 23.)
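The recs-based arithmetic also yields an implied 1980 baseline, a derived figure rather than one reported in the survey, as this one-line Python check shows:

    # Implied 1980 baseline, assuming the recs-based 2001 figure of 472
    # square feet per person and the text's "as much as 27 percent" increase.
    sqft_2001 = 472.0
    print(f"implied 1980 heated floor space: {sqft_2001 / 1.27:.0f} sq ft per person")  # ~372
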
