"Never consider money when treating patients.” This improbable edict was endlessly reiterated to young doctors during my early years in medicine.
Now, four decades later, that mantra from the past has metamorphosed into the currently fashionable requirement to “always consider money when treating patients.”
Non-physicians are likely to find either premise dubious, if only because absolutes deserve to be greeted with skepticism. Had a fat-cat surgeon with his chauffeured Rolls-Royce waiting in the no-parking area in front of the hospital (we did have the occasional such example) been the one to advise disregarding money, we would have laughed. But our mentors were anything but such caricatures. The most selfless, hard-working, and decent professors all sought to impress upon us that there was a “right” and a “wrong” way to practice medicine from a purely scientific point of view; the injection of cost considerations could only pervert proper decision making.
Respect for our teachers’ consummate skills as physicians led us to learn medicine from them and not probe too deeply into the financial philosophy they espoused. Nevertheless, a few of us remained uneasy about the impracticality of practicing anything in a vacuum. But it was embarrassing to discuss such a renegade concern, so we largely kept silent.
In retrospect, the late 1960s and early 1970s were still flush times in America. As health economist Victor Fuchs has pointed out, increasing productivity covered rising health-care costs from the end of World War II until 1978. After that, productivity increases no longer kept up with costs, and employer interest in controlling health insurance premiums led to the sponsorship of managed care. Employer-paid health insurance was itself a legacy of World War II: “fringe benefits” were excluded from wartime price controls, so ever more comprehensive health coverage became a way to attract and retain workers in the face of labor shortages. Medicine had been largely insulated from these changes because the jobs of prescribing care and arranging for its financing were kept separate.
It would be easy to conclude that the morality of medicine at the time was heavily influenced by economic factors. Internal medicine, with its unusually comprehensive physical examinations, was popularized. Doctors felt that the assembly of a physiological database would help them understand their patients scientifically as well as personally. If testing the blood level of vitamin B12 or obtaining an X-ray of the lungs looked even marginally useful, it was ordered. Freedom from financial constraints was intellectually stimulating. Once doctors entered practice, the profession was personally remunerative as well.
Was this wasteful? The answer depends, to a significant degree, on personal assessment of risk. If one believes that unearthing even an occasional case of diabetes, kidney disease, or breast cancer that might otherwise have gone undetected is worth a great deal of money spent on screening, the answer is no. This is not very different from deciding how much insurance of other types we choose to buy. Deciding where to draw the line is, of course, easier when money is plentiful. Now that health-care costs have come to exceed 14 percent of the gross national product and are no longer fully funded by productivity increases, there is intense preoccupation with how much benefit (“bang for the medical buck”) we are getting for every health dollar expended.
It also follows that easy money leads easily to abuse. No doubt, money was spent for both appropriate and inappropriate reasons in the plush years. There were five-day checkups in the hospital that were paid for by insurance. These ranged from executives who wanted “everything checked” to dependent parents hospitalized to be “looked over” while the family vacationed in Mexico or Hawaii. Doctors who were inclined to fight the world (usually a small minority) balked at such requests. Others went along with common practice. The availability of insurance money blunted the ethical issues. (It would surely have been different if personal money were involved.) Unfortunately, the past lack of controls has given rise to another medical myth about money—that there is so much fat in the system that one can always reduce financing without ever injuring the muscles and sinews of health care.
Costs were low then as well. Not long ago, a patient seeking to amuse me sent in a copy of her bill for a complex laceration that I had sutured in the emergency room of a California hospital in the 1960s. My fee was $15 (with no HMO or PPO reduction), and the hospital charged only $10 for use of the facilities and all supplies. At those prices, even if corrected by a factor of 3 or 4 for inflation, we had a lot of leeway to do as we wished.
The ethics of doing more are easier to defend than those of doing less. Consequently, cost-benefit and cost-effectiveness ratios and the subtle differences between them are now introduced early in the medical school curriculum. The current rationale is that resources are finite and that waste in one area deprives people elsewhere. (This assumes that money “saved” by reducing reimbursement to hospitals, for example, would automatically be applied to better prenatal care.) In only 25 years, a focus on money in medicine has been transformed from a sin into a virtue.
Medicare now demands that fees be calculated to the penny even as pennies have become inconsequential elsewhere in the economy. Likewise, HMOs bargain for pennies of difference among fees with the expectation that every service will be provided at the lowest possible cost. In California, particularly, there has been a severe conflict between the high cost of providing services and intense across-the-board downward pressure on fees.
The result has been not only financially painful to the providers of care but also frequently economically illogical. Physicians are expected to do what no supermarket has succeeded in doing: supplying every item at the lowest possible price. In practice, this means that simple services such as diagnosing a bladder infection are rendered infinitely more time-consuming and consumer-unfriendly when laboratory services are contracted out to the lowest bidder by HMOs, forcing patients to go elsewhere for the most ordinary of tests—say, a urinalysis—and to wait as phone calls go back and forth to get a prescription filled. Pennies are saved for the insurance carrier regardless of costs to everyone else. Such bottom-dollar contracting turns medical care into a mere commodity. Some physicians now do certain tests free of charge and at their own expense just to avoid dealing with the cumbersome red tape. But this is hardly sound policy.
Constant awareness of cost and reimbursement issues has had the paradoxical effect of exposing just how much cross-subsidization has existed within medical practices. Traditionally, wealthier patients paid more and the poor were treated for less or even nothing. Such a “Robin Hood” approach to health care was deemed demeaning to those who could not afford care and was rendered immoral, if not illegal, by the enabling legislation for Medicare and Medicaid in 1965. The result is fixed below-market rates for some patients and above-market rates for others.
Physicians are being badgered into constantly considering the costs to “the system” rather than to individual patients. Not only does this confuse individual patient welfare with the budgets of insurers, but it inevitably makes physicians think of costs to themselves. Unreimbursed care for research on complex cases, accepting difficult or time-consuming patients for fixed rates that are too low to cover even office overhead, or providing services that are convenient for patients but economic losers for doctors even when reimbursed (lab work by internists, for example) become hard to justify in a money-talks-loudest climate. In my own experience, such work has never been worth the effort involved in strictly monetary terms, but it was always considered part of the job until the focus shifted from total care to item-by-item, bottom-dollar medicine.
Financial pressures have led primary physicians to turn their patients over to “hospitalists”—general physicians who work only in the hospital—and to emergency room doctors after hours. Care that was formerly comprehensive has become fragmented. The jury is out on whether this has improved overall quality and economics or done the reverse. But there is no longer any shame when a doctor sheds low-paying or inconvenient services.
Changes in medical practice have been obscured because the principal locus of competition has shifted from among doctors to among insurance carriers and health plans. This may be one of the major unintended consequences of the movement toward managed care. Commenting on this, a highly successful publisher I treat observed, “A caricature of a market-based system is being imposed on medicine. The real world of business seeks stability and long-term productivity and not chasing marginal savings. Secure, long-term relationships make things work. Banks lend money based on a stable customer base. This is the real cement that holds the economy together.”
After a decade of contrary brainwashing by managed care, it is these words that induced me to furnish this perspective from within medicine. Obsessing over money does not seem to be any more appropriate as a guiding principle for health care than ignoring it. A search for balance should continue between these extremes.