Are you a technoptimist or a depressimist? This is the question I have been pondering after a weekend hanging out with some of the superstars of Silicon Valley.

I had never previously appreciated the immense gap that now exists between technological optimism on the one hand and economic pessimism on the other. Silicon Valley sees a bright and beautiful future ahead. Wall Street and Washington see only storm clouds. The geeks think we’re on the verge of the Singularity. The wonks retort that we’re in the middle of a Depression.

Let’s start with the technoptimists. I listened with fascination as a panel of tech titans debated this question: “Will science and technology produce more dramatic changes in solving the world’s major problems over the next twenty-five years than have been produced over the past twenty-five years?”

They all thought so. We heard a description of what Google’s Project Glass, the Internet-enabled spectacles, can already do. (For example, the spectacles can be used to check if another speaker is lying.) Next up: a search engine inside the brain itself. We heard that within the next twenty-five years, it will be possible to take thousand-mile journeys by being fired through tubes. We also heard that biotechnology will deliver genetic “photocopies” of human organs that need replacing. And we were promised genetically engineered bugs, capable of excreting clean fuel. The only note of pessimism came from an eminent neuroscientist who conceded that a major breakthrough in the prevention of brain degeneration was unlikely in the next quarter century.

WONDROUS TECHNOLOGY ISN’T NEW

For a historian, all this technoptimism is hard to swallow. The harsh reality, as far as I can see, is that the next twenty-five years (from now until 2038) are highly unlikely to see more dramatic changes than science and technology produced in the past twenty-five (1987–2012).

For a start, the end of the Cold War and the Asian economic miracle provided one-off, nonrepeatable stimuli to the process of innovation in the form of a massive reduction in labor costs and therefore the price of hardware, not to mention all those ex-Soviet PhDs who could finally do something useful. The information-technology revolution that began in the 1980s was important in terms of its productivity impact inside the United States—though this shouldn’t be exaggerated—but we are surely now in the realm of diminishing returns (the symptoms of which are deflation plus underemployment due partly to automation of unskilled work).

The breakthroughs in medical science we can expect as a result of the successful mapping of the human genome probably will result in further extensions of the average lifespan. But if we make no commensurate advances in neuroscience—if we succeed only in protracting the life of the body, but not the mind—we will simply increase the number of dependent elderly.

My pessimism is supported by a simple historical observation. The achievements of the past twenty-five years were actually not that big a deal compared with what we did in the preceding twenty-five years, 1961–86 (for example, landing astronauts on the moon). And the twenty-five years before that, 1935–60, were even more impressive (e.g., splitting the atom). In the words of Peter Thiel, perhaps the lone skeptic within a hundred miles of Palo Alto: in our youth we were promised flying cars. What did we get? One hundred and forty characters.

Moreover, technoptimists have to explain why the rapid scientific and technological progress of those earlier periods coincided with massive conflict between armed ideologies. (Which was the most scientifically advanced society in 1932? Germany.)

So let me offer some simple lessons of history: more and faster information is not good in itself. Knowledge is not always the cure. And network effects are not always positive.

In many ways, the discussion I’ve just described followed logically from a widely reported spat between Thiel and Eric Schmidt at the Brainstorm Tech conference in Aspen, where Schmidt took the technoptimistic line and Thiel responded with a classic depressimistic question: why, if information technology is so great, have median wages stagnated in the forty years since 1973, whereas in the previous forty years, between 1932 and 1972, they went up by a factor of six?

By the same token, there was great technological progress during the 1930s. But it did not end the Depression. That took a world war. So could something comparably grim happen in our own time? Don’t rule it out. Let’s remind ourselves of the sequence of events: economic depression, crisis of democracy, road to war.

Talk to anyone who manages money these days and you will hear a doleful litany: the global economic slowdown, the persistence of unemployment, widening inequality, the problem of excessive debt, the declining effectiveness of monetary policy. Recently Ray Dalio—founder of the mega–hedge fund Bridgewater—spoke of a “dangerous dynamic . . . making a self-reinforcing global decline more likely.” With good reason, Dalio frets about the dangers of a “debt implosion” or currency breakup in Europe.

In the 1930s, economic disaster undermined weak democracies all over the world. The equivalent phenomenon in our own time is the seeming inability of many Western politicians to get re-elected. That, however, is no more than what you’d expect in a time of depression. More troubling is the evidence that our basic faith in democracy is being corroded.

I have heard a politician admit that the generous benefits that have been promised to retired public workers are in danger of bankrupting the country. I have heard a leading entrepreneur complain that the revolving door leading from the Pentagon to defense contractors is a subtle form of corruption. And I have heard more than one reputable academic assert that the Chinese one-party system offers real advantages over our own antiquated system of democracy.

This is certainly the Chinese view. Viewed from Beijing, Western “participatory democracy” is defective in at least three ways. It is anti-intellectual (politicians are condemned if they are too “professorial”). It is shortsighted, to the detriment of future generations. And, if democracy is applied in multiethnic societies, it can lead to discrimination and even violence against minorities.

Sadly, not all of this is wrong. Democracy works best with constituency-based, bicameral parliaments under the rule of law, and works less well with proportional representation and referendums. That is one reason Europe is in such a mess. Democracy is chronically shortsighted, especially if there are major elections every two years. With our increasing lifespans (life expectancy was just over fifty when the U.S. Constitution was written, compared with seventy-eight today), a case can surely be made for longer terms in office (say, 50 percent longer) and therefore less-frequent elections.

As for the problem of corruption, it is all too real. But it takes two forms: the power of cash-rich vested interests as exemplified by the lobbyists on K Street, and the growing share of public-sector employees and welfare recipients relative to direct taxpayers in the electorate. If anything, it is the second of these that has been pushing the Western world ever deeper into debt over the past decade.

DANGEROUS AND UNKNOWN

In the 1930s script, democratic decay is followed by conflict. I am not one of those who expect Europe’s monetary meltdown to end in war. Europeans are too old, disarmed, and pacifist for there to be more than a few desultory urban riots. But I am much less confident about peace to Europe’s south and east. North Africa and the Middle East now have the ingredients in place for a really big war: economic volatility, ethnic tension, a youthful population, and an empire in decline—in this case the American empire.

Weary of warfare and waking up to the fossil-fuel riches made accessible by fracking, the United States is rapidly winding up four decades of hegemony in the Middle East. No one knows who or what will fill the vacuum. A nuclear Iran? A neo-Ottoman Turkey? Arab Islamists led by the Muslim Brotherhood? Whoever emerges on top is unlikely to get there without bloodshed.

It’s a dangerous world. Ask anyone who works in the world of intelligence to list the biggest threats we face, and the list is likely to include bioterrorism, cyberwar, and nuclear proliferation. What these have in common, of course, is the way modern technology can empower radicalized (or just plain crazy) individuals and groups.

I wish I were a technoptimist. It must be heartwarming to believe that Facebook is ushering in a happy-clappy world where everybody “friends” everybody else and we all surf the Net in peace (insert smiley face). But I’m afraid history makes me a depressimist. And no, there’s not an app—or a gene—that can cure that.