The atmosphere is about 0.8 degrees Celsius warmer than it was in 1850. Given that the atmospheric concentration of carbon dioxide has risen 40 percent since 1750 and that CO2 is a greenhouse gas, a reasonable hypothesis is that the increase in CO2 has caused, and is causing, global warming.

But a hypothesis is just that. We have virtually no ability to run controlled experiments, such as raising and lowering CO2 levels in the atmosphere and measuring the resulting change in temperatures. What else can we do? We can build elaborate computer models that use physics to calculate how energy flows into, through, and out of our planet’s land, water, and atmosphere. Indeed, such models have been created and are frequently used today to make dire predictions about the fate of our Earth.

The problem is that these models have serious limitations that drastically reduce their value in making predictions and in guiding policy. Three major problems stand out. Each is described below, and each one alone is enough to cast doubt on the predictions. All three together deal a devastating blow to the forecasts of the current models.

  1. Measurement Error

Imagine that you’re timing a high school track athlete running 400 meters at the beginning of the school year, and you measure 56 seconds with your handheld stopwatch that reads to ±0.01 seconds. Imagine also that your reaction time is ±0.2 seconds. With your equipment, you can measure an improvement to 53 seconds by the end of the year. The difference between the two times is far larger than the resolution of the stopwatch combined with your imperfect reaction time, allowing you to conclude that the runner is indeed now faster. To get an idea of this runner’s improvement, you calculate a trend of 0.1 seconds per week (3 seconds in 30 weeks). But if you try to retest this runner after half a week, trying to measure the expected 0.05-second improvement, you will run into a problem. Can you measure such a small difference with the instrumentation at hand? No. There’s no point in even trying because you’ll have no way of discovering if the runner is faster: the size of what you are trying to measure is smaller than the size of the errors in your measurements.
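To make the arithmetic concrete, here is a minimal Python sketch using only the numbers from the example above; the variable names are ours, chosen for illustration.

```python
# Toy arithmetic for the stopwatch example (illustrative only).
season_improvement_s = 56.0 - 53.0      # 3.0 seconds of improvement over the year
weeks = 30
trend_per_week_s = season_improvement_s / weeks     # 0.1 seconds per week

expected_half_week_gain_s = trend_per_week_s * 0.5  # 0.05 seconds
reaction_error_s = 0.2                              # reaction-time error, +/- seconds
# (the stopwatch's own 0.01-second resolution is negligible by comparison)

print(f"Expected improvement after half a week: {expected_half_week_gain_s:.2f} s")
print(f"Measurement error:                  +/- {reaction_error_s:.2f} s")
print("Signal smaller than error:", expected_half_week_gain_s < reaction_error_s)
```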

Scientists present measurement error by describing the range around their measurements. They might, for example, say that a temperature is 20˚C ±0.5˚C. The temperature is probably 20.0˚C, but it could reasonably be as high as 20.5˚C or as low as 19.5˚C.

Now consider the temperatures that are recorded by weather stations around the world.

Patrick Frank is a scientist at the Stanford Synchrotron Radiation Lightsource (SSRL), part of the SLAC National Accelerator Laboratory at Stanford University. Frank has published papers that explain how the errors in temperatures recorded by weather stations have been incorrectly handled. Temperature readings, he finds, have errors over twice as large as generally recognized. Based on this, Frank stated, in a 2011 article in Energy & Environment, “…the 1856–2004 global surface air temperature anomaly with its 95% confidence interval is 0.8˚C ± 0.98˚C.” The error bars are wider than the measured increase. It looks as if there’s an upward temperature trend, but we can’t tell definitively. We cannot reject the hypothesis that the world’s temperature has not changed at all.
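As a quick check of what those error bars imply, the short sketch below plugs in Frank’s quoted figures and asks whether the 95% confidence interval includes zero. It is an illustration of the arithmetic, not a reanalysis of his data.

```python
# Frank's figures as quoted above: a 0.8 C anomaly with a 95% CI of +/- 0.98 C.
anomaly_c = 0.8
ci_half_width_c = 0.98

lower, upper = anomaly_c - ci_half_width_c, anomaly_c + ci_half_width_c
print(f"95% confidence interval: ({lower:+.2f} C, {upper:+.2f} C)")
# The interval spans zero, so "no change at all" cannot be ruled out.
print("Interval includes zero:", lower <= 0.0 <= upper)
```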

  2. The Sun’s Energy

Climate models are used to assess the CO2-global warming hypothesis and to quantify the human-caused CO2 “fingerprint.”

How big is the human-caused CO2 fingerprint compared to other uncertainties in our climate model? For tracking energy flows in our model, we use watts per square meter (Wm⁻²). The sun’s energy that reaches the Earth’s atmosphere provides 342 Wm⁻²—an average of day and night, poles and equator—keeping it warm enough for us to thrive. The estimated extra energy from excess CO2—the annual anthropogenic greenhouse gas contribution—is far smaller, according to Frank, at 0.036 Wm⁻², or 0.01 percent of the sun’s energy. If our estimate of the sun’s energy were off by more than 0.01 percent, that error would swamp the estimated extra energy from excess CO2. Unfortunately, the sun isn’t the only uncertainty we need to consider.
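The comparison is easy to verify. The sketch below recomputes the ratio from the two figures quoted above; the numbers are Frank’s, and the code is purely illustrative.

```python
# Comparing the estimated annual CO2 contribution to incoming solar energy,
# using the figures quoted above (both in W/m^2).
solar_input = 342.0   # average solar energy reaching the atmosphere
co2_forcing = 0.036   # Frank's estimate of the annual anthropogenic contribution

share = co2_forcing / solar_input
print(f"CO2 contribution as a share of solar input: {share:.2%}")  # ~0.01%
```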

  3. Cloud Errors

Clouds reflect incoming solar radiation and also trap outgoing radiation. A world entirely encompassed by clouds would have dramatically different atmospheric temperatures than one devoid of clouds. But modeling clouds and their effects has proven difficult. The Intergovernmental Panel on Climate Change (IPCC), the established global authority on climate change, acknowledges this in its most recent Assessment Report, from 2013:

The simulation of clouds in climate models remains challenging. There is very high confidence that uncertainties in cloud processes explain much of the spread in modelled climate sensitivity. [bold and italics in original]

What is the net effect of cloudiness? Clouds lead to a cooler atmosphere by reducing the sun’s net energy by approximately 28 Wm⁻². Without clouds, more energy would reach the ground and our atmosphere would be much warmer.

Why are clouds hard to model? They are amorphous; they reside at different altitudes and are layered on top of each other, making them hard to discern; they aren’t solid; they come in many different types; and scientists don’t fully understand how they form. As a result, clouds are modeled poorly. This contributes an average uncertainty of ±4.0 Wm⁻² to the atmospheric thermal energy budget of a simulated atmosphere during a projection of global temperature. This thermal uncertainty is about 110 times as large as the estimated annual extra energy from excess CO2. If our climate model’s calculation of clouds were off by just 0.9 percent—0.036 is 0.9 percent of 4.0—that error would swamp the estimated extra energy from excess CO2. The total combined errors in our climate model are estimated to be about 150 Wm⁻², which is over 4,000 times as large as the estimated annual extra energy from higher CO2 concentrations. Can we isolate such a faint signal?
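These ratios follow directly from the quoted figures, as the short sketch below shows; again, the numbers come from the text above and the code is only illustrative.

```python
# Recomputing the ratios quoted in the text (all figures in W/m^2).
co2_forcing = 0.036        # estimated annual extra energy from excess CO2
cloud_uncertainty = 4.0    # average cloud-related uncertainty
total_model_error = 150.0  # rough total of combined model errors

print(f"Cloud uncertainty vs. CO2 signal: {cloud_uncertainty / co2_forcing:.0f}x")  # ~110x
print(f"CO2 signal as a share of clouds:  {co2_forcing / cloud_uncertainty:.1%}")   # 0.9%
print(f"Total error vs. CO2 signal:       {total_model_error / co2_forcing:.0f}x")  # ~4,000x
```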

In our track athlete example, this is equivalent to having a reaction time error of ±0.2 seconds while trying to measure a time difference of 0.00005 seconds between any two runs. How can such a slight difference in time be measured with such overwhelming error bars? How can the faint CO2 signal possibly be detected by climate models with such gigantic errors?
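For the curious, the 0.00005-second figure comes from scaling the stopwatch error by the same signal-to-error ratio, roughly as follows (an illustrative sketch, not part of the original analysis):

```python
# Scaling the stopwatch error by the climate model's signal-to-error ratio.
co2_forcing = 0.036        # W/m^2
total_model_error = 150.0  # W/m^2
reaction_error_s = 0.2     # stopwatch reaction-time error, in seconds

equivalent_difference_s = reaction_error_s * (co2_forcing / total_model_error)
print(f"Equivalent time difference: {equivalent_difference_s:.5f} s")  # ~0.00005 s
```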

Other Complications

Even the relationship between CO2 concentrations and temperature is complicated.

The glacial record shows geological periods with rising CO2 and global cooling and periods with low levels of atmospheric CO2 and global warming. Indeed, according to a 2001 article in Climate Research by astrophysicist and geoscientist Willie Soon and his colleagues, “atmospheric CO2 tends to follow rather than lead temperature and biosphere changes.”

A large proportion of the warming in the 20th century occurred in the first half of the century, when the amount of anthropogenic CO2 in the air was one quarter of the amount there now. The rate of warming then was very similar to the recent rate of warming. We can’t have it both ways. The current warming can’t be unambiguously attributed to anthropogenic CO2 emissions if an earlier period experienced the same type of warming without the offending emissions.

Climate Model Secret Sauce

It turns out that climate models aren’t “plug and chug.” Numerous inputs are not the direct result of scientific studies; researchers need to “discover” them through parameter adjustment, or tuning, as it is called. If a climate model uses a grid of 25×25-kilometer boxes to divide the atmosphere and oceans into manageable chunks, storm clouds and low marine clouds off the California coast will be too small to model directly. Instead, according to a 2016 Science article by journalist Paul Voosen, modelers need to tune for cloud formation in each key grid cell based on temperature, atmospheric stability, humidity, and the presence of mountains. Modelers continue tuning climate models until they match a known 20th-century temperature or precipitation record. At that point, we have to ask whether these models are more subjective than objective. If a model shows a decline in Arctic sea ice, for instance—and we know that Arctic sea ice has, in fact, declined—is the model telling us something new or just regurgitating its adjustments?
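To see why tuning raises the subjectivity worry, consider a deliberately simplified toy, which is not how any real climate model is built: a trivial “model” with one free cloud-like parameter is fitted until it matches a made-up historical record. The record, the model, and the parameter here are all invented for illustration.

```python
# Toy illustration of tuning (NOT a real climate model). The "historical"
# record, the model, and the parameter are all invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2000)
historical = 0.005 * (years - 1900) + rng.normal(0.0, 0.1, years.size)  # made-up record

def toy_model(cloud_param):
    # The "physics" is just a linear trend scaled by one tunable parameter.
    return cloud_param * (years - 1900)

# Tune: pick the parameter value that best matches the historical record.
candidates = np.linspace(0.0, 0.02, 2001)
errors = [np.mean((toy_model(p) - historical) ** 2) for p in candidates]
tuned = candidates[int(np.argmin(errors))]

print(f"Tuned parameter: {tuned:.4f}")
# The tuned model reproduces the record it was fitted to, by construction.
# That agreement, on its own, says little about the underlying physics.
```

By construction the tuned toy matches the record it was tuned to; the match itself is not evidence that the model captures anything real, which is exactly the question raised above.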

Climate Model Errors

Before we put too much credence in any climate model, we need to assess its predictions. The following points highlight some of the difficulties of current models.

Vancouver, British Columbia, warmed by a full degree in the first 20 years of the 20th century, then cooled by two degrees over the next 40 years, and then warmed through the end of the century, ending almost where it started. None of the six climate models tested by the IPCC reproduced this pattern. Further, according to scientist Patrick Frank in a 2015 article in Energy & Environment, the projected temperature trends of the models, which all employed the same theories and historical data, were as far apart as 2.5˚C.

According to a 2002 article by climate scientists Vitaly Semenov and Lennart Bengtsson in Climate Dynamics, climate models have done a poor job of matching known global rainfall totals and patterns.

Climate models have been subjected to “perfect model tests,” in which they were used to project a reference climate and then, with some minor tweaks to initial conditions, to recreate temperatures in that same reference climate. This is basically asking a model to do the same thing twice, a task for which it should be ideally suited. In these tests, Frank found, the results in the first year correlated very well between the two runs, but years 2–9 showed such poor correlation that the results could have been random. Failing a perfect model test shows that the results aren’t stable and suggests a fundamental inability of the models to predict the climate.
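A toy analogue of this kind of test, not Frank’s actual procedure, can be built from any chaotic system: run it twice with a tiny difference in the initial state and watch how quickly the two runs stop agreeing. The logistic map below stands in for a climate model purely for illustration.

```python
# Toy analogue of a "perfect model test" (not Frank's actual procedure):
# run a chaotic system twice with a tiny tweak to the initial state and
# see how quickly the two runs stop agreeing.
import numpy as np

def logistic_run(x0, steps):
    xs, x = np.empty(steps), x0
    for i in range(steps):
        x = 3.9 * x * (1.0 - x)  # chaotic logistic map stands in for a model
        xs[i] = x
    return xs

steps_per_year, years = 52, 9
run_a = logistic_run(0.4, steps_per_year * years)
run_b = logistic_run(0.4 + 1e-12, steps_per_year * years)  # minor tweak

for year in range(years):
    sl = slice(year * steps_per_year, (year + 1) * steps_per_year)
    corr = np.corrcoef(run_a[sl], run_b[sl])[0, 1]
    print(f"Year {year + 1}: correlation between runs = {corr:+.2f}")
# Typically the first year tracks closely; later years look essentially random.
```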

The ultimate test for a climate model is the accuracy of its predictions. But the models predicted that there would be much greater warming between 1998 and 2014 than actually happened. If the models were doing a good job, their predictions would cluster symmetrically around the actual measured temperatures. That was not the case here; a mere 2.4 percent of the predictions undershot actual temperatures and 97.6 percent overshot, according to Cato Institute climatologist Patrick Michaels, former MIT meteorologist Richard Lindzen, and Cato Institute climate researcher Chip Knappenberger. Climate models as a group have been “running hot,” predicting about 2.2 times as much warming as actually occurred over 1998–2014. Of course, this doesn’t mean that no warming is occurring, but, rather, that the models’ forecasts were exaggerated.

Conclusions

If someone with a handheld stopwatch tells you that a runner cut his time by 0.00005 seconds, you should be skeptical. If someone with a climate model tells you that a 0.036 Wm⁻² CO2 signal can be detected within an environment of 150 Wm⁻² error, you should be just as skeptical.

As Willie Soon and his coauthors found, “Our current lack of understanding of the Earth’s climate system does not allow us to determine reliably the magnitude of climate change that will be caused by anthropogenic CO2 emissions, let alone whether this change will be for better or for worse.”
