The business scandals that roiled the first two years of President George W. Bush’s administration, not to mention the long boom that led up to them, are best understood as an emblem of society’s attempt to come to terms with a new information product: not the personal computer or the internet, but the “stock price” itself.

Start with Enron, a natural gas company that, until the late 1990s, had been basically a $20 stock. Then its management began talking about creating virtual markets for all kinds of new-age commodities on the internet. Through these markets that Enron would run, advertisers could buy and sell time on media networks. Telecom carriers could buy and sell unused bandwidth. In a period of months, the gas company’s stock price rose from the mid-20s to over $90 a share, as the new Enron vision played into the millennial expectations created by the internet revolution. It didn’t matter that not one investor or analyst or fund manager could penetrate the company’s published reports. By now, investors were valuing Enron based on the prospects of industries that had yet to be invented.

Jeff Skilling, the company’s chief executive, could reasonably say, as he did shortly before the company made its dive into bankruptcy, that the stock price should be twice as high. After all, if the market was willing to value the stock based on management’s promises of future innovation, management could always make more and grander promises about what the future held.

Or take Sunbeam: The struggling maker of small appliances hired cost cutter Al “Chainsaw” Dunlap in 1997. Suddenly a lackluster stock was lofted 300 percent over a period of weeks as investors anticipated a repeat of Dunlap’s previous cost-cutting performance at Scott Paper. There, within months of his taking over, the company had been brutally downsized and sold for a fat premium to competitor Kimberly-Clark. Then a wrinkle emerged: Sunbeam’s stock price had already risen so high on these expectations that it became apparent that none of the likely acquirers found it worth buying. Dunlap, having brutally downsized Sunbeam, was stuck having to run the company, a job for which he was poorly suited. Not long after, amid evidence that the company had engaged in accounting fraud in a losing battle to justify its towering valuation, Sunbeam plummeted into bankruptcy.

Or take Amazon.com, the web-based purveyor of books, music, toys, and the like. Though a mere start-up, the company soon after birth rocketed to a huge, and precarious, stock market value of $20 billion. A company’s value is usually considered to represent an estimate of the present value of its future profits, but here the company was selling for a multiple of its annual sales, never mind any profits. That presented a problem: Amazon needed more money to finance its growth, but was undoubtedly worried about breaking the spell over its share price. Amazon didn’t want to take a chance of spooking the over-eager day traders who were chasing its few publicly traded shares regardless of price. Instead, it sold “convertibles.” These are bonds that pay interest but can also be, at the company’s option, converted into shares if the company’s underlying stock price rises to a stated level. Amazon’s bonds were convertible when the share price hit $234, a level that seemed plausible in early 1999, when the stock had already rocketed to $200. The equity conversion option allowed Amazon to get away with paying a lower interest rate than a risky start-up otherwise would have, and it could avoid having to dip into its meager cash flow for interest payments at all if the stock rose to the conversion price.
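A rough arithmetic sketch helps show why such a security appealed to both sides. The calculation below is purely illustrative: it keeps the $234 conversion level cited above but invents a $1,000 face value per bond and ignores coupon payments and the issuer’s call terms, which would shape the real payoff.

```python
# Illustrative payoff on one convertible bond (hypothetical terms, not the
# actual indenture): the holder keeps a $1,000 debt claim on the downside
# and shares in the stock's upside once it clears the $234 conversion price.

def convertible_value(face_value, conversion_price, stock_price):
    shares_if_converted = face_value / conversion_price    # conversion ratio
    conversion_value = shares_if_converted * stock_price   # value as equity
    # The bond is worth the better of its debt claim or the equity it converts into.
    return max(face_value, conversion_value)

for price in (100, 234, 300):
    print(f"stock at ${price}: bond worth ${convertible_value(1000, 234, price):,.0f}")
# stock at $100: bond worth $1,000
# stock at $300: bond worth $1,282
```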

The customers for such complex securities aren’t day traders or small investors but sophisticated mutual and hedge funds. Amazon was offering these disciplined investors a way to play Amazon’s extraordinary share price while protecting themselves on the downside if the bubble created by other, less sophisticated investors burst. And plenty of takers materialized: demand was so great that the company boosted the offering from $500 million to $1.25 billion — money that stayed in the bank to finance the company’s expansion during the tech drought that followed.

All three companies were emblematic of the stock market boom of the late 1990s — a boom that, needless to add, has left a bad taste in many investors’ mouths. Washington has responded by holding hearings, and a new law, Sarbanes-Oxley, has spelled out the duties of accountants, corporate managements, and boards of directors in more elaborate detail than before. It also imposes more severe penalties on those who fall short. Prosecutions are pending against genuine frauds, including executives at Enron and WorldCom. Yet none of this comes close to touching the source of investor dissatisfaction, the sharp rise and fall in share prices.

How can it? Inaccurate corporate reporting and other kinds of fraud played, if anything, a tiny role in the stock market drama. Share prices during the period grew much faster than reported profits — which would seem to let corporate reporting off the hook. Internet companies, of course, rose to unprecedented heights even though many reported no profits or revenues at all. In classic fashion, investors encountered the enemy, and it was . . . themselves.

 

Measuring the future

The examples of Enron, Sunbeam, and Amazon demonstrate the peculiar challenge of managing a business when such extraordinary (even unrealistic) expectations are priced into a stock. Amazon, of course, managed to find its way through this thicket without resorting to accounting fraud, though its shareholders perhaps have done worst of all, at least in terms of the distance between its peak share price and post-bubble low. All three are emblematic, in their way, of a corporate sector that had been forced to become more accustomed to risk-taking than before, and more willing to write off bad experiences, even accounting fraud, quickly and move on in search of fresher game. Behind it all, corporate management has become obliged to adopt as its main guiding star a stock price increasingly set by the speculative judgments of millions of investors.

How did this come about? In the early 1980s, there was a sea change in how companies were managed. Corporate raiders and leveraged buyout firms had noticed that large U.S. companies had low stock valuations, not least because they were managed to eschew risk, which meant to eschew debt, or leverage.

Leverage signifies that a company is betting that its growth opportunities are sufficient to meet a high interest payment burden while producing growing profits for shareholders. The great insight of the early 1980s was that American business had leveraged itself to suit the risk tolerance of managers, whose jobs, livelihoods, and status were wrapped up in their companies. As a result, managers did not leverage their companies enough to satisfy diversified shareholders, who are more willing to assume higher risk for higher gain with respect to any single company in their portfolios.
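A toy calculation, with numbers invented purely for illustration, shows what is at stake in that difference of risk appetite: financing the same assets partly with debt raises the return on equity in good years and deepens the loss in bad ones.

```python
# Hypothetical firm with $100 of assets earning operating profit of $12 in a
# good year or $2 in a bad one. Compare all-equity financing with a structure
# that replaces half the equity with debt costing 8 percent.

def return_on_equity(operating_profit, debt, interest_rate=0.08, assets=100.0):
    equity = assets - debt
    profit_to_shareholders = operating_profit - debt * interest_rate
    return profit_to_shareholders / equity

for label, profit in (("good year", 12.0), ("bad year", 2.0)):
    print(f"{label}: unlevered {return_on_equity(profit, 0.0):.0%}, "
          f"levered {return_on_equity(profit, 50.0):.0%}")
# good year: unlevered 12%, levered 16%
# bad year: unlevered 2%, levered -4%
```

A diversified shareholder can shrug off the occasional bad year in one holding; a manager whose livelihood rides on a single company cannot, which is why managements left to their own devices chose the safer, less levered structure.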

That difference in risk appetite, taken to a logical conclusion, helped produce the quicksilver corporate order that characterized the late 1990s. Companies came to be judged more on their opportunities than on their past performance. This was especially true of companies that typified the opportunities afforded by technology and intellectual capital, or that reflected innately speculative ventures like entertainment, fashion, or radically new consumer-business models.

Critics have grown alarmed in retrospect over the dot-com failures and the palpable disaster of telecom overinvestment. What will eventually become noticeable, though, is how little real damage these episodes have done. Even after the setback of recent years, the stock market is up 700 percent from 1982, when the revolution toward higher risk-taking began. In those days, corporate America was being written off as hopelessly bloated and bureaucratic, trailing the Japanese and Germans. Today nobody doubts, even with the scandals of the past year, that U.S. business is the most dynamic and innovative in the world — or that Americans have benefited from it.

Even a company like General Electric, 124 years old and having had only nine chairmen in its history, is alive today to brag about its pedigree because its last chief, Jack Welch, a self-proclaimed radical from the “lunatic fringe,” questioned every assumption, tore down every wall, and waged guerrilla war against his own company’s bureaucracy. In ways that the rest of the world would find strange, such has become the prevailing ethos of our risk- and change-loving corporate sector.

When asking what the high-risk corporate economy has wrought, it’s appropriate to look beyond the scandals that recently engulfed companies like Enron, WorldCom, Tyco, and Adelphia. These are hardly emblematic of the totality of U.S. corporate performance. By the same token, though, what of the several trillion dollars in wealth destroyed in the market correction that followed the dot-com bubble? Can we really afford such “dynamism” if the cost is so devastating to so many investors?

There is no reason to doubt that some investors were really hurt: those who committed the cardinal sin of putting all their eggs in one basket, once it turned out to be the wrong basket. But look more closely: A disproportionate part of that destroyed wealth resided in a few name-brand tech companies that can be counted on two hands. Cisco alone saw half a trillion dollars of market capitalization wiped out, and yet Cisco not only remains a thriving business, the essential global company in internet routers, but also had $21 billion in cash on its balance sheet from its ongoing business. Microsoft, Intel, Oracle, Nortel Networks, Lucent, JDS Uniphase, Juniper Networks, and Sun Microsystems accounted for $1.5 trillion in lost wealth among them — yet these companies are still in business, still technology leaders.

Moreover, the paper wealth that investors reveled in was so short-lived — much of it created and destroyed in a few months in 1999 and early 2000 — that the owners could hardly have adjusted their life expectations and financial obligations. The high-risk economy giveth and it taketh away. This was better understood by ordinary Americans than by the punditry. It no doubt explains why consumer spending and housing prices were not hit by the much-predicted “reverse wealth effect.” The words “creative destruction” gained a currency that their author, the mid-twentieth-century economist Joseph Schumpeter, never would have imagined. Investors were plainly able to put their wild ride into perspective. A replay of the 1929 crash, after which a typical saving household put its cash in a mattress and refused to revisit the equity markets for a generation, has not materialized.

Indeed, for all the surface turmoil, the high-risk economy has a subterranean stability that must surprise anyone who remembers the stagflationary 1970s. Inflation remains quiescent. Unemployment, at 6 percent, is hardly dire. The longest economic expansion in the nation’s history ended with a downturn so short and shallow that it barely qualifies as a recession. The bad news is that the good news has clearly not been good enough to satisfy the political culture. Some have posited a kind of resentment of prosperity, focusing on the unequal and somewhat random distribution of the very large prizes of the 1990s (though it was also a time when the real wages earned by average workers increased by more than they had in 20 years). But there also has been a significant dissatisfaction with our market institutions and a clamor for reform. This has focused on two concerns in particular, accounting and executive pay.

 

The accountants

Anecdotally, the mid- and late 1990s were notable for a succession of accounting scandals. Companies not quite well-known enough to make the evening news but prominent in the investment community, such as Oxford Health, Cendant, Warnaco, the aforementioned Sunbeam, and several others, were forced to restate revenues and profits for prior years.

Oxford Health was an HMO that predominantly served wealthy, healthy yuppies in the New York City area, charging them a relatively high premium for allowing them more choices of doctors than a typical HMO. When Oxford tried to expand its successful business model to a poorer and sicker Medicare population, this happy balance fell apart. Whether Oxford had intentionally underestimated its medical costs or was merely a victim of its own well-documented computer snafus is still debated, but the company’s meltdown made it the Enron of 1997.

Next along was Cendant, a franchiser of hotel chains (Ramada Inn, Travelodge, and Howard Johnson), car rental agencies, and real estate agencies, which merged with a company called CUC International, part of whose business was selling memberships in discount shopping clubs. CUC was later found to have systematically overestimated the revenues to be gained from new members. Though a criminal trial is still pending, a refusal to see how the rise of the public internet was undermining its business model apparently played a role.

By far, the largest category of accounting restatements in the 1990s comprised episodes like this, involving “revenue recognition.” Accounting affords a great deal of flexibility, especially in estimating future revenues to be derived as a result of a willingness to incur a present cost. Among the other companies laid low, in one way or another, were Aurora Foods, Lucent, and Xerox, not to mention countless internet startups, energy firms, and telecom firms that booked revenues from transactions that in retrospect were seen to be more window dressing than reality.

The accounting firm of Arthur Andersen (later to be ruined over its audits of Enron) produced a study that showed the number of restatements by public companies doubled from 1997 to 2000. Arthur Levitt, who chaired the Securities and Exchange Commission in the Clinton administration, pointedly warned that “wishful thinking may be winning the day over faithful representation” in corporate financial reporting. Not long after came the collapses of Enron and WorldCom. Up went a cry against the accounting profession itself. These companies’ books had been audited by reputable accounting firms. Why weren’t investors warned that management was playing fast and loose with the numbers? Why didn’t accountants stop it?

In this, politicians and investors were enacting a ritual as old as the SEC, which since the 1930s has required every public company to hire a certified public accountant to audit its books. Since this mandate was laid down, the cycle of market boom and bust has inevitably been followed by a cycle of recrimination against accountants. The accountant’s job — or so the public believes — is to make sure companies are telling the truth. In the 1960s, Equity Funding Corp., various conglomerates, and the computer-leasing industry produced a succession of book-cooking scandals. In the 1970s, it was Penn Central. In the 1980s, it was the savings and loan industry. Almost by definition, the failure of a public company is always unexpected — if investors had seen it coming, they would already have yanked their money and the company would have failed sooner. Invariably, then, the accountants end up with egg on their faces.

The principle was first named in 1974 by Carl D. Liggio, who went on to become general counsel of Arthur Young and Co. He called it the “expectations gap,” a term that has been recycled every time the accounting profession finds itself in the crosshairs. Succinctly put, investors believe the accountants are supposed to stop fraud. But accountants have resolutely refused to accept that responsibility. Their job, they say, is not to suspect that management is lying but simply to make sure that the data — whose accuracy is management’s responsibility — are presented in a manner consistent with accounting convention.

In reality, the sources of accounting fraud have always been easy to explain. Accounting fraud is a product of business failure. The uncomfortable truth is that a company facing difficulties is making a rational decision when it tries to conceal its condition from investors, creditors, and employees, who otherwise might turn fear of failure into a self-fulfilling prophecy.

The 1990s were a fertile time for creating new companies and testing new business models. It was therefore also a fertile decade for business failures, and for accounting fraud. At the start of the decade, some 7,500 public companies were in existence. By 2000, the number had risen to 15,000. Today, there are thousands fewer, and some inevitably became examples of full-blown accounting scandals as they disappeared or were reorganized.

Cynically speaking, this is what you might expect from speculative companies taking a flyer on experimental new business models. The “expectations gap” has become as much an investor’s excuse as an accountant’s excuse. Nobody invested in Enron, after all, because they admired Arthur Andersen, its accountant. To repeat a point, its stock price was happily boosted to $90 a share by many astute investors and fund managers who knew they couldn’t make heads or tails of the company’s books. Investors, in the long run, get the accounting they deserve. Though forgotten now, Enron’s unraveling began when investors woke up to the inscrutability of its books and pressed for more clarity, before any breath of scandal had touched the company. It wasn’t the SEC or prosecutors who uncovered the Enron scam but the market itself.

 

Stock options

To the public and serious analysts alike, however, a new and worrisome concern emerged from the corporate ructions of the late 1990s. This was the belief that managements themselves had become systematically corrupted by the use of stock options as the primary means of executive compensation. Though options had existed since the beginning of corporate capitalism, their vast expansion was a landmark phenomenon of the headier corporate world of the 1990s. Unlike accounting scandals, this was truly something new under the sun.

Eighty percent or more of the average chief executive’s pay takes the form of options. Options allow an executive to buy a fixed number of the company’s shares during a fixed period at a price set the day the options are issued. If the stock goes up sharply during his or her tenure, huge windfalls are not only possible — during the long bull market of the 1980s and 90s, they became commonplace. Michael Eisner of Disney was calculated to have made, on paper, $300 million in the mid-1990s. Sanford Weill, head of Citigroup, was calculated to have received $150 million in 2000.
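The mechanics reduce to a one-line calculation. The grant size and strike price below are invented for illustration; the point is only that the paper value is zero below the strike and grows dollar for dollar with the share price above it.

```python
# Hypothetical executive option grant: 1,000,000 options struck at $40,
# the market price on the (imagined) grant date.

def option_paper_value(num_options, strike, share_price):
    return num_options * max(share_price - strike, 0.0)

for share_price in (35, 40, 80, 120):
    value = option_paper_value(1_000_000, 40.0, share_price)
    print(f"share price ${share_price}: paper value ${value:,.0f}")
# $35 and $40 yield nothing; $120 yields $80,000,000.
```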

This is not to say executives were merely innocent beneficiaries of a rising market. Any serious executive knows that 90 percent of the factors affecting his share price are out of his control. Management can’t control the market. But management is uniquely positioned to determine how the benefits of the corporation’s net cash flow are distributed. Assume, as many people do, that low inflation and low interest rates were primarily responsible for the rise in share prices in the 1990s. Even if a company’s earnings remained drearily unchanged from year to year, its implicit market value would rise to reflect the increased stability in the value of money. Management decides how much of the actual value will be captured by employees (in the form of rising pay, perks, or self-interested empire building) or by shareholders in the form of maintained or improved profit generation. Stock options have been rightly described by one corporate compensation expert as “neither compensation nor benefits” but as a “behavior-changing employee communication plan.” Options are indeed a powerful form of behavior modification, making the stock price a central consideration in every decision a CEO makes. In this way, the rise of stock-based compensation has been an important force in pushing companies to take more risk to meet the risk appetite of public investors.
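The claim that flat earnings can justify a higher market value is simple present-value arithmetic. The sketch below treats a company as a perpetuity of unchanged profits; the $10-per-year profit stream and the discount rates are invented for illustration.

```python
# Value of an unchanging $10-per-year profit stream at different discount
# rates: as rates fall (stable money, low inflation), the same earnings are
# worth more.

def perpetuity_value(annual_profit, discount_rate):
    return annual_profit / discount_rate

for rate in (0.10, 0.07, 0.05):
    print(f"discount rate {rate:.0%}: value {perpetuity_value(10.0, rate):.0f}")
# 10% -> 100, 7% -> 143, 5% -> 200
```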

Stock options have been especially favored in those industries that have driven much of the recent growth of the U.S. economy — those based on technology and intellectual property. Notable about such companies is the fact that the assets accountants traditionally pore over — buildings, machines, real estate — tell little about what their future earnings will be. Though such companies are obliged to file the same voluminous accounting reports as an old-line manufacturer, often there isn’t much that can be intelligently measured except cash coming in and cash going out.

Because of the difficulty of monitoring what goes on inside the black box of such a company, harnessing the CEO to a large carrot was, in the minds of many investors, particularly useful in these industries. Since investors have no independent way to assure themselves of what such a company is doing, incentives are expected to carry much of the weight. No better example could be found than Steve Jobs of Apple Computer, a company whose precarious survival depends on pulling rabbits out of its hat every few years to capture the imagination of computer users who could easily buy a cheaper, more convenient substitute in the Microsoft-Intel world. In 2001, Mr. Jobs, who was paid a salary of $1 a year, was granted options on stock with a face value of $870 million, plus a $90 million Gulfstream jet.

The Apple chief’s compensation package immediately featured in every media account of runaway executive pay. But investors were betting on a CEO and on the power of stock options to simplify — even to do away with — the problem of corporate governance. That problem can be stated simply: How do you know that a company to which you’ve entrusted your savings will use them wisely to generate a profit? How do you maintain a comfort level, especially when the company is dependent on a visionary chief to keep pulling rabbits from a hat?

Like all solutions that seem to offer a once-and-for-all escape from complexity, stock options were bound to disappoint. For one thing, investors are dependent on management to provide much of the information the market uses to set the share price. That opens the door for management to manipulate the price by putting out false information. At the same time, the share price is supposed to be a signal to the ceo about the market’s collective judgment about the validity and progress of his strategies. That opens the door to corporate decisions being influenced by uninformed or “irrationally exuberant” investors who deflect a company’s share price from what wiser heads would consider a sober value.

These concerns are legitimate, proving at least that stock options are not the magic solution to all corporate governance problems. But then history suggests they were never meant to be. Modern use of stock options was adopted to solve a limited problem, not to become the all-purpose tool for aligning management and shareholder interests that some have claimed.

Those who have followed the matter closely find it no coincidence that the popularity of management stock options began just as courts and legislatures were erecting obstacles to hostile takeovers, which had been the market’s way of disciplining truant managements in the 1980s. In particular, corporate reformers championed stock-based pay to offset the boardroom “poison pills” that numerous companies were adopting. Abhorrent in principle to corporate governance mavens, these devices greatly expand the freedom of management to reject an unsolicited bid even if it might be in the interest of shareholders. The solution? Wave large stock incentives in front of managements so they will have a strong motive to entertain a bid at the expense of their own job security. There’s even some evidence that the pill has benefited shareholders by forcing bidders to pay a higher price for takeover targets. Law professors Marcel Kahan and Edward Rock, writing in the University of Chicago Law Review last summer, trace this history and conclude that stock options “had the effect of transforming the pill, a potentially pernicious governance tool, into a device that is plausibly in shareholders’ interest.”

There can be little doubt that stock options work as advertised when managements are faced with a takeover offer. How about other kinds of decision-making? The economics profession has yet to reach a consensus on whether options have been good for corporate performance overall, but in the absence of a learned verdict, the media have come up with their own. Amid the recent scandals, stock options are universally put forward as the motive for what critics see as “pump and dump” behavior by corporate executives — that is, misleading the market with false information to drive up the share price so management can cash out its options at a profit.

Unfortunately, these accusations rely heavily on distorted hindsight, and even more on the assumption that any time a CEO sells a share of stock and later the price goes down, he must have been up to no good. Yet the landmark scandals of the past decade tell a different story. Enron’s Ken Lay, WorldCom’s Bernie Ebbers, and Tyco’s Dennis Kozlowski — a rogues’ gallery of disgraced executives — all had one thing in common: They apparently went to unusual lengths not to sell stock in their companies, even as their companies were unraveling. Though pump-and-dump villains have media appeal, these scandals fit a less cinematic but more believable mold. They were cases of classic myopia, complicated by personal neuroticism, in which executives managed to fool themselves about the wisdom and propriety of their actions until they blew up in their faces.

Stock options may not be completely exculpated. How to compensate and incentivize management properly without producing unintended consequences is a puzzle that no doubt will never be completely solved. But these scandals arise not from one narrow innovation in corporate compensation but from the age-old lure of wealth, adulation, and success. In the recent scandals, investors played their role too, choosing with some deliberation to roll the dice with “visionary” entrepreneurs rather than to inquire carefully into whether the numbers added up.

 

Fortune-telling

If there’s a larger question, it’s this: Why does the abandonment of investor skepticism so often coincide with periods of unusual growth and inventiveness in the economy? The answer may be that distinguishing a good business idea from a bad one is often impossible. Foretelling the future is hardly easy under the best of circumstances, and sometimes an investor may be sensible not even to try. Digging through piles of a company’s past records, after all, doesn’t tell you much about its future in a time when new industries are being invented. The alternative, throwing money indiscriminately at companies, may actually be a realistic option.

Yes, that means throwing money at charlatans and crooks as well as real inventors and entrepreneurs, but these categories frequently overlap. John Maynard Keynes, who understood these matters as well as anybody, maintained that “animal spirits” were the economy’s driving force. If investors always opted for the flinty conservatism that scolds frequently urge on them, we might not have such a creative economy. Not that the bubble was as irrational as frequently portrayed. Dramatic, transformative new technologies are, by definition, a challenge to assess. Investors in internet companies had to judge not only how their old-technology competitors would respond, but also whether intellectual property law and other kinds of property rights would allow the first movers to lock up certain hugely valuable opportunities, much the way Microsoft’s DOS operating system became the foundation for a near-monopoly on the basic software for half a billion personal computers currently installed around the world.

In any case, those investors who violated the basic canon of diversification and lost more money than they could afford on internet speculation can hardly say they didn’t understand the risk they were taking. Nor has their “irrational exuberance” been without dividends for the rest of us. After all, large amounts were lost investing in dozens of personal computer makers in the early 1980s, before the consolidation of the Intel-Windows duopoly, or in the auto industry in the 1920s, before it settled into the oligopoly that it would remain for decades. The same has been true of many industries that came into the world to give us the standard of living we enjoy today.

What’s more, for all the talk that the game is rigged against the “little guy,” the industry dynamics that really matter play themselves out in plain sight. All investors have about the same chance to buy the winners and sell the losers. The fact that some will necessarily end up losers, as we are learning again in the wake of the dot-coms, certainly presents challenges to the body politic. But the mere existence of investors who are mad because they lost money is hardly an indictment of capitalism. Losers are an unavoidable consequence of the system that has made the country rich.
