Climate Change & Tropospheric Temperature Trends

Part II: A Critical Examination of Skeptic Claims

Current Revision Level

Rev. 1.3:    Jan. 21, 2009


I would like to thank the following for taking the time out of their already busy schedules to offer badly needed comments and suggestions regarding the content of this paper. Without their contributions, it would not have been possible. Thank you!

Dian Seidel    (NOAA Air Resources Laboratory, Silver Spring, MD)
Kevin Trenberth    (National Center for Atmospheric Research / Climate and Global Dynamics, Boulder, CO)
Jerry Mahlmann    (National Center for Atmospheric Research / Climate and Global Dynamics, Boulder, CO)
Rasmus Benestad    (Norwegian Meteorological Institute, Oslo, Norway; Contributing Editor for
Gavin Schmidt    (Goddard Institute for Space Studies, New York, NY; Contributing Editor for
William Connolley    (British Antarctic Survey, Cambridge, U.K.; Contributing Editor for
David Parker    (U.K. Met Office, Bracknell, Berkshire, U.K.)


Shortly after the dawn of the nuclear age, Albert Einstein observed that for the first time in human history, the extinction of our own race and the destruction of the biosphere had been brought within the realm of technical possibility. He was, of course, referring to the advent of nuclear weapons and their proliferation. But in the years since, his words have proven to be more prophetic than he imagined. The industrial revolution has given us the ability to globally alter the very processes that sustain all life on our home planet, including our own. With each passing day, it seems, we are learning of new and unforeseen long-term changes that our activities are imposing on the biosphere, and increasingly, these are proving to be damaging. For the first time in our history, we are tinkering with the very umbilical cord that supports us, and doing so in ways that we do not fully understand and that many of us do not want to address. Nowhere is this more evident than in regard to global warming – or more appropriately, climate change.

Climate change is in many ways unlike any other dilemma the human race has ever faced. First, to a greater extent than any other environmental problem, it is truly global in nature. In this sense, global warming is not a misnomer. Though there is much debate about how severe its impacts will be and how they will play out in various global regions, few scientists today believe that the consequences will be even acceptable, much less positive – and they will be borne by all of us. Second, unlike other forms of pollution or environmental degradation, the response times associated with climate change are long compared with the time scales of the human activities that are forcing it. It is often forgotten that climate change is not merely an atmospheric response to natural and/or anthropogenic forcing. The biosphere is a coupled system – atmosphere, ocean, and continental land – and it is as a system that it responds to being forced. The latent heat retention and transport capability of the world’s oceans is huge with respect to the atmosphere and continental land masses. As a result, when “kicked” the system will respond much like a bowling ball being pulled by a slinky – it will be some time before we actually see motion, and once the ball is rolling it will take an equal or longer time to stop 1. This presents an utterly unique and historically unprecedented moral dilemma. Significant sectors of the global economy have been built upon greenhouse gas emitting technologies and land use practices that are having global consequences. These, however, are concentrated mainly in the developed world, where decades and even centuries of growth have inculcated worldviews that demand lifestyles dependent on them. At the time of this writing, for instance, the United States comprises roughly 5 percent of the world’s population, yet it consumes over a third of its natural resources and generates some 20 percent of its pollution – including greenhouse gases.
Few Americans do not have at least one automobile, and fewer still drive hybrids or other alternatives that would mitigate some of the impact. The cost of shifting our economies away from these practices will be significant – lifestyle changes, major economic adjustments, and, though few want to admit it, a rethinking of the natural resource consumption levels of the world’s richest nations, their impact on its poorest nations, and the roadblock this presents to allowing all nations and peoples to achieve a happy and productive existence. Thus, the costs of mitigating climate change impacts are likely to be enormous, and the burden of responsibility will fall mainly on those who are contributing most to the problem – the world’s developed nations, their industries that have created unprecedented wealth from greenhouse gas emitting technologies, and the First World consumer base that has grown dependent on them. Yet to avoid the most damaging impacts, these sacrifices will need to be made early rather than late in the process, and the payoff will not be obvious for decades to come. Delaying action until human-caused global change is unavoidably obvious will almost certainly be too late.

It is difficult to imagine an environmental or moral dilemma tailored more perfectly to the most primal of human weaknesses – denial, rationalization, the passing of blame to others. So it should come as no surprise that the last 10 to 15 years have seen an unprecedented growth in challenges to the mainstream scientific consensus on global warming. Nor should it surprise us that almost without exception, these have come not from the mainstream scientific community, but from polluting and extraction industries and ultra-conservative special interests (who typically have worldviews rooted in free market values and the sanctity of business interests). Numerous front groups and think tanks funded by these interests have appeared in recent years seeking to disprove global warming, or at least to divert public policy away from mitigation efforts. Typically, these groups employ scientific consultants, all of whom are drawn from the same pool of one to two dozen scientists who are well known for their contrarian views. None of them do original peer-reviewed research in climate science or any other field, and apart from one or two notable exceptions, little has been published by their consultants that seriously challenges the current consensus. Most of their efforts have been devoted to extensive public relations programs, ad campaigns, “educational” forums, and lobbying efforts at the state and federal level, where they have enjoyed wide support from the Bush administration. The lion’s share of their funding has come from industry coalitions (chiefly the fossil fuel, auto, mining, and coal fired power industries), and from ultra-conservative foundations and religious groups such as the John Mellon and Sarah Scaife Foundations, the Olin Foundation, the Coors empire, the Unification Church (the “Moonies”), and many others (Beder, 1998; 1999; Gelbspan, 1998; 2004).

So how is this relevant to troposphere temperatures? Given the maturity of surface station records of the last century, most global warming skeptics will now admit that the earth’s surface has warmed in recent years to at least some extent. Their main points of contention are a) whether this warming has an anthropogenic component, and b) whether the consequences of future greenhouse gas emissions are likely to be severe enough to justify mitigation efforts. If the observed warming is natural and not historically unprecedented, there is little we can do about it. If the likely impacts of warming over the next century are relatively mild (or even beneficial, as some have argued), there is nothing to worry about. Either way, costly mitigation efforts and a technological shift away from fossil fuels will be unnecessary – a position that has obvious appeal to industry. Ultimately, this boils down to determining how much of the observed warming is due to anthropogenic greenhouse gases and land use, and what consequences can be expected from the status quo during the next century.

AOGCM’s have played a key role in the search for answers to these questions. The IPCC (2001) conducted a review of the best of these models, as evaluated by the Coupled Model Intercomparison Project, or CMIP (Meehl et al. 2000). An overview of how these models performed when compared against surface air temperature (Jones et al., 1999), precipitation (Xie and Arkin, 1996), and sea level pressure (ERA-15 reanalysis) is given collectively in Figures 15 and 16. All evaluated models were forced with a combination of natural and anthropogenic forcings, including greenhouse gas emissions. Volcanic and ENSO effects are included in most. Figure 41 is a Taylor diagram, in which standard deviation is plotted as the radial distance from the origin and correlation with observation as the azimuthal angle. The point marked “Observed” corresponds to observation. The farther any given point is from the Observed point, the larger its overall RMS error in whatever variable is being measured. It can be seen that there have been issues with how well these models have replicated global precipitation and sea level pressure. But the best of them have done a good job of replicating the last century’s surface temperature evolution. When anthropogenic greenhouse gas emissions and land use activities are removed from these models, this agreement with observation is lost. This strongly supports both the reliability of these models in reproducing overall global surface temperature trends, and the reality of anthropogenic impacts on climate change. When these models are forced beyond the present and into the next century, all show significant warming that is appreciably curbed (but not removed) only by drastic greenhouse gas reductions.
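
The geometry behind this distance reading can be made concrete. On a Taylor diagram, the centered RMS difference between a model and observation obeys a law-of-cosines relation involving the two standard deviations and their correlation. The sketch below verifies that identity on illustrative synthetic series (not actual CMIP output):

```python
import numpy as np

# Illustrative "observed" and "modeled" series (synthetic, not CMIP data).
rng = np.random.default_rng(0)
obs = rng.standard_normal(500)
model = 0.8 * obs + 0.3 * rng.standard_normal(500)

# Centered statistics used on a Taylor diagram.
o = obs - obs.mean()
m = model - model.mean()
sigma_o, sigma_m = o.std(), m.std()
r = np.corrcoef(o, m)[0, 1]

# Centered RMS difference, computed directly...
rms = np.sqrt(np.mean((m - o) ** 2))
# ...and via the law-of-cosines identity the diagram encodes:
#   E'^2 = sigma_m^2 + sigma_o^2 - 2 * sigma_o * sigma_m * r
rms_identity = np.sqrt(sigma_m**2 + sigma_o**2 - 2 * sigma_o * sigma_m * r)

print(rms, rms_identity)  # the two agree to floating-point precision
```

Because the identity holds exactly, the distance from the “Observed” point on the diagram encodes the centered RMS error directly.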

Skeptic Arguments

Because of their ability to demonstrate anthropogenic impacts on global climate, and their predictions for the future, discrediting AOGCM’s has been a primary objective of global warming skeptics and their benefactors. Most of these models predict strong surface-troposphere coupling and show the latter warming at least as fast as the former. Their inability to comfortably reproduce the discrepancies that have been observed is an obvious weak point, and considerable effort has been expended by contrarians in attacking it. Selected portions of the MSU and radiosonde records have been the weapons of choice. The bulk of their literature has been published in popular books, press releases and editorials, and online. Most of it is little more than a superficial rehash of the same few claims. As there is a virtual avalanche of these publications today, and one is very much like another, I will not attempt to address all of them but will select a few representative examples that highlight the most common skeptic arguments.

1)   The MSU record is the only reliable satellite analysis.

Skeptics argue that the satellite record is the only one accurate enough to determine global temperature trends. The surface record, they say, is plagued by urban heat island effects, poor data quality, and a host of other issues that are avoided in MSU products. Robinson et al. (1998) 2 are fairly typical of the skeptic literature on this point. They argue that,

“Since 1979, lower-tropospheric temperature measurements have also been made by means of microwave sounding units (MSU’s) on orbiting satellites. Figure 6 shows the average global tropospheric satellite measurements – the most reliable measurements, and the most relevant to the question of climate change.”

(Robinson et al., 1998)

The Figure 6 they refer to presents a time series of monthly global troposphere temperature anomalies based on UAH Version C (Christy et al., 1998), though their citation is to a letter in Nature that briefly discusses the accuracy of MSU detectors (Christy and Braswell, 1997). They do not directly cite UAH Version C – which included a discussion of the recently discovered spurious cooling introduced by POES orbital decay (Wentz and Schabel, 1998) and specifically notes that a correction for it was not included in that dataset. They go on to say,

“Disregarding uncertainties in surface measurements and giving equal weight to reported atmospheric and surface data and to 10 and 19 year averages, the mean global trend is minus 0.07 ºC per decade. In North America, the atmospheric and surface records partly agree. Even there, however, the atmospheric trend is minus 0.01 per decade, while the surface trend is plus 0.07 ºC per decade. The satellite record, with uniform and better sampling, is much more reliable.”

(Robinson et al., 1998)

We’re told that the MSU record has “uniform and better sampling, [and] is much more reliable”, yet no details are given. We aren’t shown any specifics regarding the problems claimed for either record, or any comparisons that might allow us to judge the relative uncertainties in each. Skeptic claims about flaws in the surface record are beyond the scope of this paper and have been dealt with in detail elsewhere (NRC, 2000; IPCC, 2001). As for the MSU record, it is not at all clear that its uncertainties are any less problematic. The one clear advantage of the MSU record is that it is truly global, whereas the surface and radiosonde records are not. Beyond that, its uncertainties are legion – impacts of sampling error on the evaluation of diurnal drift (UAH record) and synthetic channel signals like 2LT and TLT, imperfect characterization of various instrument non-linearities and calibration issues, the shortened service life of some NOAA POES spacecraft (most notably, NOAA-09), uncertainties in the characterization of the Instrument Body Effect, potential signal contamination from surface and stratosphere emissions, and various complications surrounding merge calculation methodologies. These were covered in detail in Part I of this paper. Some of these errors are less severe than others, and efforts to correct for them have resulted in genuinely low confidence intervals compared to the surface and radiosonde records. Others, however, are more significant. The differing merge methodologies of the RSS and UAH products alone account for at least 65 percent of the difference between their trends. When smoothing philosophies are included as well, the difference is larger still. The anomalously large values derived by the UAH team for the NOAA-09 target factor are a particular point of concern.
Yet the RSS analysis can comfortably accommodate the best characterized AOGCM results, which shows that differing data reduction philosophies may even be able to explain most of the discrepancy between the records (Santer et al., 2003). Then there is the fact that MSU products measure bulk layer temperatures rather than altitude specific ones, and the layers measured imperfectly represent the lower and middle troposphere temperatures that are needed for AOGCM comparisons. In particular, stratospheric noise on Channel 2 accounts for as much as 20 percent of its signal. The TLT product avoids much of this pollution, but the differencing method it is based on more than doubles its sampling noise compared to Channel 2 (NRC, 2000; Mears et al., 2003c). Furthermore, because the stratosphere is cooling faster than the troposphere is warming, there will be an even larger impact on the trends themselves. It now appears that stratospheric contamination may contribute up to 0.08 deg. K/decade to the Channel 2 trend (Fu et al., 2004). This is nearly half of the expected trend if the surface and upper atmosphere are strongly coupled, and more than half of the observed RSS MSU Channel 2 trend. These results can hardly be considered less problematic than those of other products.
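
To see how a cooling stratosphere can bleed into a bulk-layer trend, and how a Fu et al. (2004)-style linear combination of Channels 2 and 4 can remove it, consider a deliberately simplified two-layer sketch. The weights and trends below are round illustrative numbers, not the published coefficients:

```python
# Illustrative sketch of stratospheric contamination of the Channel 2 trend,
# and its removal via a linear combination with Channel 4. All numbers are
# hypothetical round values chosen for clarity, not published results.

trop_trend = 0.15    # assumed free-troposphere trend, deg K/decade
strat_trend = -0.40  # assumed lower-stratosphere trend, deg K/decade

# Channel 2 sees the troposphere plus a ~15-20% stratospheric contribution.
w_strat = 0.15
ch2_trend = (1 - w_strat) * trop_trend + w_strat * strat_trend

# Treat Channel 4 as purely stratospheric in this two-layer toy model.
ch4_trend = strat_trend

# Solving the two-layer system for the tropospheric trend yields weights that
# cancel the stratospheric term: T_trop = (T2 - w_strat * T4) / (1 - w_strat).
recovered = (ch2_trend - w_strat * ch4_trend) / (1 - w_strat)

print(ch2_trend)   # biased low by the cooling stratosphere
print(recovered)   # recovers the assumed tropospheric trend exactly
```

The toy model makes the qualitative point in the text concrete: even a modest stratospheric weight drags the bulk Channel 2 trend well below the true tropospheric trend when the stratosphere is cooling strongly.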

Though Robinson and his co-authors avoided any specifics regarding issues with MSU datasets, others have attempted to dispel concerns about them. For instance, in 1998, shortly after Wentz and Schabel (1998) published their paper on spurious cooling due to POES orbital decay, it came under immediate attack from numerous industry front groups. Typical of the bullets fired was an August 1998 press release from the California-based National Center for Public Policy Research, in which it was claimed that,

“A study released yesterday suggesting that satellite data showing a drop in the earth's temperature over the past 18 years is wrong is fatally flawed. The study thus has no impact on the ongoing global warming debate.

The study, written by Frank J. Wentz and Matthias Schabel, claims that because NASA's orbiting satellites can lose altitude as they circle the globe, temperature data collected by these satellites has been inaccurate. Wentz and Schabel further suggest that, with these altitude drops factored in, the temperature of the planet has warmed 0.13 degrees per decade rather than declined by 0.09 degrees per decade.”

(NCPPR, Aug. 13, 1998)

NCPPR goes on to argue that Wentz and Schabel ignored “false warming caused by other factors” and concluded that,

"The Wentz/Schabel study is fatally flawed and is thus of little use in the current global warming debate," said David Ridenour, Vice President of The National Center for Public Policy Research. "The bottom line is that satellite data -- which has consistently shown no warming trend -- remains the most reliable means of measuring the earth's temperature. Satellites cover 99% of the surface of the planet. By contrast, reliable ground temperature data over the past 100 years covers just 18.4% of the planet."

(NCPPR, Aug. 13, 1998)

Not surprisingly, the argument has been carefully cherry-picked. The “false warming” factors they are referring to include updated corrections for diurnal drift and IBE. These two effects collectively add up to 0.07 deg. K/decade of spurious warming in MSU2R (the lower troposphere synthetic channel used in Versions B and C that is analogous to 2LT used in later versions). The corresponding spurious cooling of MSU2R from POES orbital decay described by Wentz and Schabel was about 0.10 deg. K/decade. The net effect of the two is an increase in observed MSU2R warming from 0.03 deg. K/decade to 0.06 deg. K/decade (Christy et al., 2000). A corresponding loss of warming was observed on MSU2 (from 0.08 deg. K/decade to 0.04 deg. K/decade), but the end result was still a warming troposphere. The confidence interval on these values includes zero, but the most likely result is warming. The statement about the Wentz/Schabel study being “fatally flawed” is flat out incorrect. Both corrections are needed in MSU products. When the NCPPR published this press release, UAH Version C was the extant UAH MSU analysis product. Version C noted the orbital decay issue but did not correct for it, as the discovery was made after the final pre-publication paper had gone to galley print. Because the corrections for diurnal drift and IBE (i.e. “false warming”) that were included are actually smaller than the orbital decay correction, Wentz and Schabel were actually closer to the truth than UAH Version C. Later versions of UAH and RSS products corrected for both errors, and as we have seen, the evolution in MSU observed trends has been upward ever since. Though UAH Version D had not been published at the time of this press release, all the information necessary for this comparison had been (Wentz and Schabel, 1998; Christy et al., 1998). Ridenour and the NCPPR simply did not do their homework.
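
The arithmetic behind these offsetting corrections can be checked directly from the figures quoted above:

```python
# Net effect of the Version D-era corrections on the MSU2R trend, using the
# values quoted in the text (Christy et al., 2000), in deg K/decade.

version_c_trend = 0.03     # UAH Version C MSU2R trend
orbital_decay = -0.10      # spurious cooling from POES orbital decay
diurnal_and_ibe = +0.07    # spurious warming from diurnal drift + IBE

# Removing each spurious signal means subtracting it from the raw trend:
# the decay correction adds warming, the diurnal/IBE correction removes it.
version_d_trend = version_c_trend - orbital_decay - diurnal_and_ibe

print(round(version_d_trend, 2))  # → 0.06, the corrected warming in the text
```

The orbital decay correction (+0.10) outweighs the diurnal drift and IBE corrections (−0.07), which is exactly why the net trend rises rather than falls, contrary to the NCPPR's "fatally flawed" claim.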

Note that the previous statements all implicitly assume that the surface and tropospheric records are measuring the same things. They are not. We have already seen in Part I that there is likely to be at least some decoupling of the two in the tropics and extra-tropics (Trenberth and Stepaniak, 2003). The differing impacts on each from volcanic eruptions and ENSO’s were well known at the time Robinson et al. and the NCPPR published their statements. Furthermore, though AOGCM’s have assumed a strong degree of surface-troposphere coupling over the long term, the short-term forcings and evolution of surface and tropospheric temperatures are vastly different and may show considerable variation both regionally and temporally. So in regard to evaluations of global warming, there is no justification for giving the two “equal weight” on decadal time scales even in principle. The concluding statement contrasts the global coverage of the MSU record with “reliable ground temperature data” from the last century, which we are told covers “just 18.4% of the planet.” But only a few paragraphs before they stated that,

“The satellite measurements showing no global warming have been corroborated by weather balloon measurements. If this satellite data were significantly off, satellite and weather balloon measurements should have diverged…”

(NCPPR, Aug. 13, 1998)

The fact that the NCPPR would make an argument like this is revealing. “Weather balloons” is a reference to radiosondes. At the time these comments were published, the most commonly cited radiosonde product for MSU comparisons was the Angell 63 station network (Angell, 1988). With data from literally thousands of in situ thermometers globally distributed on all continents, and many sea surface records as well, the surface record has far better coverage than Angell 63 or any other radiosonde product since (IPCC, 2001; Seidel et al., 2003; 2004). The NCPPR has chosen to compare the MSU record (which dates from 1979) with the surface record dating back a century. No doubt this was done because the global surface network was considerably smaller in the late 19th century, and they could take advantage of that for their “comparison”. It is evident that any valid comparison of the two must consider the period of common record – the record since 1979 – during which the surface record has been far more complete. Yet again, the “comparison” has been carefully set up to give the desired results.

2)   Climate Scientists prefer RSS only because it agrees with models.

To no one’s surprise, the skeptic literature draws heavily from the MSU record, and UAH products in particular as they show the least amount of tropospheric warming. Most references are to UAH Versions D and 5.0 (Christy et al., 2000; 2003), though even today a few skeptics still cite older Versions 3. In recent years, confidence in UAH upper air products has been waning, and it is now likely that a majority of mainstream climate scientists do not believe their TLT and TMT trends (Trenberth, 2004).

When RSS Version 1.0 was first made public in early 2003 it attracted immediate attention. RSS Version 1.0 was the first new MSU analysis product to examine MSU data with the same level of detail and thoroughness as the pioneering UAH products. Like those products, it addressed all currently known sources of error, improving on the characterization of some of them, and incorporated more recent data than the extant UAH product at that time (Version D – Version 5.0 was published later that year). But unlike UAH products, it yielded satellite era troposphere temperature trends that were noticeably higher, and roughly consistent with those of Prabhakara et al. (2000). RSS published their full analysis product later that same year in Journal of Climate (Mears et al., 2003). Skeptics have every reason to be terrified of this analysis. Not only is it well characterized on every level, it yields results that are consistent with the predictions of state-of-the-art AOGCM’s. As such, they wasted no time in going after both with considerable vitriol. Within days, skeptic forums worldwide were claiming that the RSS results were fatally flawed. Without exception, the criticisms offered little specific content as to exactly what was flawed – and for the obvious reason: there was nothing specific to offer. In the absence of any valid methodological criticisms, they turned to external comparisons. Other than the claim that RSS products lacked “verification” from the radiosonde record (a claim that will be examined shortly), the most common attack was that RSS had cooked their analysis to justify the surface record and AOGCM predictions. To this end, they had a specific target to aim at.

Earlier that year, a team led by Ben Santer of Lawrence Livermore National Laboratory that included Carl Mears, Frank Wentz, and Matthias Schabel of RSS compared four runs of the Dept. of Energy’s Parallel Climate Model (PCM) with MSU data from UAH and RSS. PCM, which is described in Washington et al. (2000), is a coupled land, ocean, atmosphere, and sea-ice model that does not use flux corrections at interfaces. The atmospheric and land components are taken from NCAR’s Version 3 Community Climate Model (CCM3) and Land Surface Model (LSM). CCM3 is the same atmospheric model that RSS used to characterize their diurnal correction. The reliability of CCM3 for diurnal behavior has already been seen (Figure 3). The ocean and sea-ice components are taken from the Los Alamos National Laboratory Parallel Ocean Program (POP) and a sea-ice model from the Naval Postgraduate School. In PCM, these various components are tied together with a flux coupler that uses interpolations between the component model grids in a manner similar to that used in the NCAR Climate System Model (CSM). Grid resolution varies from ½ deg. at the equator to 2/3 deg. near the North Atlantic. The atmospheric component (CCM3) uses 32 vertical layers from the surface to the top of the atmosphere. In various experiments PCM has very reliably reproduced observed global surface temperature behavior (see Figures 15 and 16), produced stable, well characterized results for a broad range of forcings, and done an excellent job of capturing ENSO and volcanic effects as well.

Santer’s team ran four realizations of the “ALL” PCM experiment which makes use of well-mixed greenhouse gases (including anthropogenic greenhouse gas emissions), tropospheric and stratospheric ozone, direct scattering and radiative effects of sulfate and volcanic aerosols, and solar forcing (Ammann et al., 2003; Meehl et al., 2003). All used identical forcings but differing start times. Simulated MSU temperatures were derived from global model results by applying MSU Channel 2 and 4 weighting functions to the PCM output across its 32 vertical layers, and these were then compared with UAH and RSS analysis products. The goal was to see if an anthropogenic fingerprint on global tropospheric temperature trends could be detected in either of the two MSU products. First, the model was “fingerprinted” using standard techniques (Hasselmann, 1979; Santer et al., 1995) to see if observational uncertainties had a significant impact on PCM’s consistency. Internal climate noise estimates (which are necessary for fingerprint detection experiments) were obtained from PCM and the ECHAM/OPYC model of the Max-Planck Institute for Meteorology. The anthropogenic fingerprint on climate change was taken to be the first Empirical Orthogonal Function (EOF), Φ, of the mean of the four ALL runs of PCM. Then, increasing expressions of Φ were sought in UAH and RSS analyses in an attempt to determine the length of time necessary for it to be detected at a 5 percent statistical significance level in both observational records (Santer et al., 2003).
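
The fingerprint extraction step can be illustrated in miniature. A leading EOF is simply the first right singular vector of the mean-removed data matrix; the sketch below applies this to synthetic data standing in for the PCM ensemble mean (the pattern, grid size, and noise level are all invented for illustration):

```python
import numpy as np

# Minimal sketch of extracting a leading EOF (the "fingerprint" pattern) from
# a time-by-space anomaly field, using synthetic data in place of PCM output.
rng = np.random.default_rng(1)

n_time, n_space = 240, 50                          # e.g. months x grid points
pattern = np.sin(np.linspace(0, np.pi, n_space))   # an imposed spatial pattern
signal = np.linspace(0, 1, n_time)[:, None] * pattern[None, :]
field = signal + 0.1 * rng.standard_normal((n_time, n_space))

# Remove the time mean at each grid point, then take the SVD. The first right
# singular vector is the leading EOF; its squared singular value fraction is
# the variance it explains.
anom = field - field.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
eof1 = vt[0]
explained = s[0] ** 2 / np.sum(s ** 2)

# The leading EOF should recover the imposed pattern (up to an arbitrary sign).
corr = np.corrcoef(eof1, pattern)[0, 1]
print(explained, abs(corr))
```

With a coherent signal growing over time, the first EOF dominates the variance and closely matches the imposed spatial pattern, which is the sense in which Φ characterizes the forced response in the PCM runs.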

They found that a clear MSU Channel 2 anthropogenic fingerprint was consistently found only in the RSS dataset. This is not surprising, as the RSS team found consistently warmer Channel 2 trends than UAH. What is more noteworthy is that this was true only for the mean-included comparisons. When the means were removed from both datasets, the fingerprint was clearly visible at the 5 percent level in 6 out of 8 cases for the RSS and UAH analyses – a consequence of the fact that PCM captures the observed equator to pole temperature and trend gradients quite well, and these are in turn manifested in Φ. The team concluded that the main difference in the ability of the RSS and UAH products to capture the fingerprint was due to the large global mean and trend differences between the two, and that these were in turn likely to be due to uncertainties in how each was analyzed. Santer’s team correctly concluded that,

“Our findings show that claimed inconsistencies between model predictions and satellite tropospheric temperature data (and between the latter and surface data) may be an artifact of data uncertainties.”

(Santer et al., 2003)

This is, of course, exactly what we would expect. Nearly two thirds of the trend discrepancy between the UAH and RSS analyses is related to the differing methods each team used to characterize IBE and do their merge calculations, and to a lesser extent, their differing methods of smoothing and diurnal drift correction. Since detection of the anthropogenic fingerprint in PCM, as characterized by Φ, depends on this difference, it would not be surprising if the difference between detection and non-detection is the result of data and/or data processing uncertainties. The fact that the mean-removed analyses of both teams do capture the fingerprint demonstrates the ability of PCM and its component models to capture real tropospheric and surface effects.
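
The projection-and-detection logic at work here can likewise be sketched. Under simplifying assumptions (a fixed fingerprint pattern, white noise standing in for control-run variability, and invented amplitudes), detection amounts to asking whether the trend of the data projected onto the fingerprint exceeds what unforced noise produces:

```python
import numpy as np

# Hedged sketch of the projection step in a fingerprint detection test. All
# fields here are synthetic illustration, not the actual PCM/ECHAM setup.
rng = np.random.default_rng(2)

n_time, n_space = 240, 50
fingerprint = np.sin(np.linspace(0, np.pi, n_space))
fingerprint /= np.linalg.norm(fingerprint)

# Synthetic "observations": the fingerprint growing linearly in time + noise.
t = np.arange(n_time)
obs = (0.002 * t)[:, None] * fingerprint[None, :] \
      + 0.05 * rng.standard_normal((n_time, n_space))

# Projecting each time step onto the fingerprint gives one signal time
# series; the trend of that series is the detection variable.
proj = obs @ fingerprint
obs_trend = np.polyfit(t, proj, 1)[0]

# Null distribution: trends of the same projection applied to pure noise,
# standing in for unforced control-run variability.
null_trends = []
for _ in range(500):
    noise = 0.05 * rng.standard_normal((n_time, n_space))
    null_trends.append(np.polyfit(t, noise @ fingerprint, 1)[0])
null = np.array(null_trends)

# Detection at ~5% significance: the observed projection trend exceeds the
# 95th percentile of the unforced trends.
detected = obs_trend > np.percentile(null, 95)
print(obs_trend, detected)
```

This is why the mean and trend differences between the UAH and RSS products matter so much: the detection variable is itself a trend, so systematic offsets between the datasets feed straight into whether the signal clears the noise threshold.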

Yet we would never gather any of this from the skeptic press. In May of 2003, shortly after Santer et al. (2003) was published, the Greening Earth Society 4 attacked the RSS analysis and the work of Santer’s team in one of their “Virtual Climate Alerts”. It was typical of a wide range of skeptic publications that came out shortly thereafter, and that have continued to appear ever since. In it, they stated that,

“As has been known for years, there is a major discrepancy between tropospheric (earth’s atmosphere at an altitude from 5,000 to 30,000 feet) temperatures as measured by satellite-based instruments and projections of those temperatures by climate models. The former find only a tiny warming trend while the models predict something four times larger.

Most scientists, upon recognizing such a discrepancy, would ask themselves what is wrong with the model. Good science, as elementary school students are taught, begins with a hypothesis. In this case, "hypothesis" is another term for "model." The model, or hypothesis, is tested against what is observable in the real world and, if the two differ, the hypothesis (or model) is altered to fit the facts, then retested. Science is not about changing facts to fit hypotheses, unless you happen to be part of a team of researchers led by Ben Santer at the Lawrence Livermore National Laboratory.

Santer, et al report in Science on what we feel compelled to call "an interesting exercise in mathematical philosophy." Specifically, they used a climate model to determine which of two competing datasets is more correct: John Christy’s satellite record or an altered, warmer version of that record which never has been published in peer-reviewed literature…

The observed trend in the lower atmosphere in the Wentz./Schabel dataset is reported to be 0.1ºC per decade greater than the UAH data, so it "more closely matches" the observations from the surface and the climate model projections.

What Santer et al chose to do is compare the temperature projections for the surface and atmosphere from the global climate model developed by the National Center for Atmospheric Research (NCAR) with the satellite temperature history devised by Wentz and Schabel and with that of Christy and Spencer. How could it be a "surprise" that the warmer of the two datasets (Wentz/Schabel) provides a better match with climate model projections?

Santer et al proclaim, "Our findings show that claimed inconsistencies between model predictions and satellite tropospheric temperature data (and between the latter and the surface) may be an artifact of data uncertainties." What they fail to mention is that the climate model they used is one that projects the slowest rate of warming for the next hundred years (all climate models are not created equal). If that model is perturbed with data capturing the observed slowing in per-capita carbon dioxide emissions, the global warming it projects between now and 2100 is very close to the low values espoused by the climate skeptics – something around 1.6°C.”

(GES, May 6, 2003)

Nearly every word of these statements is either false or misleading. “Model” is not another term for “hypothesis”. PCM and other extant AOGCMs have been meticulously constructed from known climatological principles and repeatedly tested against observed trends. While none is perfect, the ability of the best of them (including PCM) to replicate observed surface temperature trends has already been demonstrated (Washington et al., 2000; IPCC, 2001; see Figures 15 and 16). Nor is it true that PCM projects the slowest rate of warming for the next hundred years. Like any other AOGCM, the warming rates projected by PCM (which was developed by the DOE, not NCAR – only the CCM3 component is from NCAR) depend on how it is forced, and any of a wide range of responses is possible. GES states that it yields a warming of “something around 1.6 deg. C” over the next century when forced with “the observed slowing in per-capita carbon dioxide emissions”. But the slowing of CO2 emissions tells us nothing about the existing baseline rates of emission used for this prediction, or how either relates to what is forecast for the next century under the various scenarios studied by the IPCC. Naturally, GES carefully avoided providing a proper citation for this statement, so it cannot be put into context or checked for accuracy.

Nor is it true that Santer’s team “used a climate model to determine which of two competing datasets is more correct”. They compared simulated MSU Channel 2 observations from PCM (which are representative of middle troposphere temperatures and trends – not “temperature projections for the surface and atmosphere”) with the corresponding records from UAH Version 5.0 and RSS Version 1.0 to see whether an anthropogenic fingerprint on global warming could be detected in either. They concluded that the ability of the two datasets to do so was driven by their differing methods of analysis and/or data uncertainties – which, of course, is correct. An anthropogenic fingerprint, as characterized by the first empirical orthogonal function in the PCM runs Santer et al. used, can be detected in UAH Version 5.0, but only after removing global mean values from the dataset – and as was shown in Part I, these differ from RSS largely because of methodology. Yet taking the comments of GES at face value, we would conclude that Santer’s team deliberately fabricated a purely theoretical AOGCM run and then cooked up an MSU analysis to agree with it! The attempt to discredit the RSS dataset by claiming that it “never has been published in peer-reviewed literature…” was a particularly low blow. In fact, RSS Version 1.0 had already been in review for several months on the day this GES Virtual Climate Alert was published, and was submitted to the Journal of Climate for its second round of review only 3 days later (on May 9, 2003). It was published in the Journal of Climate later that year (Mears et al., 2003).

In passing, one other observation needs to be made here – one that is particularly important for any paper that offers a critical review of climate change “skeptics”. The same week GES published their screed, John Christy of the UAH team offered a similar criticism of the Santer et al. paper in testimony before the U.S. House of Representatives (Christy, 2003). While his remarks suffer from the same shortcomings on this point, it is instructive to compare their content and tone with those of GES and other similar front groups. In the tradition of his team’s demonstrated commitment to excellence, Christy’s comments were measured, professional, and completely lacking in the ad hominem attacks that have become a staple for these groups. They were also human – in addition to his scientific testimony, he shared his own experiences as a missionary in Africa in relation to international global warming mitigation efforts. Not only were his remarks more thoughtful than those of other skeptics, they demonstrated that right or wrong, his concerns are rooted in his own direct experience with people he cares about, as well as in science. GES, on the other hand, is being paid to have theirs 4. This should serve as a reminder that not all climate change “skeptics” are alike – a fact that is all too easily forgotten when dealing with contentious subjects like the upper-air record 16.

At times, the attempts to elevate the status of UAH products over those of other teams borders on the absurd. In a recent editorial at Tech Central Station, astrophysicist and industry consultant Sallie Baliunas 5 stated that,

“The best analysis of air temperature over the last 25 years is based on measurements made from satellites and checked with information from weather balloons. That work, conducted by J. Christy and R. Spencer at the University of Alabama at Huntsville (UAH), shows a small global warming trend…

Three new analyses of troposphere temperatures have appeared in the publications Science and Journal of Oceanic and Atmospheric Technology. They all start with the same set of measurements made from satellites, but find different results. Because not one but a series of satellites has collected the data, corrections need to be made to the measurements from each instrument to produce a precise record of temperature going back over two decades. How to find the best result?”

(Baliunas, 2003)

Interspersed among these comments, there was some very general background on surface temperatures and climate models, and glowing praise for UAH analysis methods. The reference to Journal of Oceanic and Atmospheric Technology was for UAH Version 5.0. The other two papers from Science she mentions are Santer et al. (2003b) and Vinnikov and Grody (2003). She makes no mention of Prabhakara et al. (2000) and mentions RSS only in passing, saying that their results were “just appearing in Journal of Climate.” In fact, they were already published (Mears et al., 2003), details of their methods and results had been published over 9 months before (Mears et al., 2003b), and their work had been investigated in a separate study published 6 months before (Santer et al., 2003) that she makes no mention of. After this generalized, and selective, introduction, she concludes that,

“The remaining two studies consider the same satellite measurements and find results consistent with computer-based forecasts of globally-averaged human warming. But those two studies also produce contradictory results, indicating the small temperature trend from UAH is the most reliable.”

(Baliunas, 2003)

In other words, because Santer et al. (2003b) and Vinnikov and Grody (2003) reach conclusions that she considers “contradictory”, UAH Version 5.0 is thereby proven to be the most reliable MSU analysis available.

This is an outright non sequitur. Issues with one or two studies do not prove that a completely separate study is reliable! Baliunas does provide a superficial explanation of what she considers to be the problems with Santer et al. (2003b) and Vinnikov and Grody (2003), and an explanation of what she likes about UAH Version 5.0, specifically her belief that radiosondes validate it (I will examine this claim shortly). But her criticisms of Santer et al. and Vinnikov and Grody are for the most part weak (particularly for Santer et al.), and even if true, they would not, by themselves, vindicate UAH Version 5.0 as the best study available. At the time this editorial was published, information regarding the methods and results of RSS Version 1.0 was readily available, and had been for almost a year, had she bothered to consult it. Independent studies analyzing the RSS team’s work, such as Santer et al. (2003), had been available for over 6 months. Prabhakara et al. (2000) had been available for almost 3 years. Yet she makes no attempt to address any of this, despite the fact that, by her own admission, she was aware of the RSS team’s work. Her complete neglect of these studies is careless at best, and downright negligent at worst.

Ms. Baliunas is, of course, quite right to rely on UAH products as indispensable upper-air records. They are the pioneering works in MSU trend analysis and remain on the cutting edge of upper-air remote sensing. It will be some time before they are displaced as contending tropospheric trend analyses (if indeed they ever are). While favor is tending toward RSS products as of this writing, UAH and RSS analyses must be considered together as complementary MSU/AMSU retrievals; both are needed to understand tropospheric and stratospheric temperature trends. The issue of contention here is Ms. Baliunas’ selectivity – climate change skeptics and their benefactors rely on UAH products alone for no other reason than that those products tell them what they want to hear, particularly when combined with other equally cherry-picked surface and upper-air records. This is selective science at its very worst.

3)   UAH analyses have been independently validated by the radiosonde record.

To date, one of the most common, and strident, of skeptic claims is that the radiosonde record is fully independent of the MSU record and validates UAH products only. Numerous examples of this can be found throughout the skeptic literature. For instance, the same Greening Earth Society Virtual Climate Alert cited above (May 6, 2003) takes up the issue. After their misguided attacks on the work of Santer et al. (2003), they continue,

“All this is very interesting, of course, but what we’ve neglected to remind you (until now) is that there exists a completely independent source of observations of atmospheric temperature – weather balloons. Weather balloons are launched twice daily from sites around the world and have been for the last fifty years or so. As they ascend through the atmosphere, they transmit a host of observations back to the ground, among them temperature and atmospheric pressure. This is data that can be used to construct a dataset from the same part of the atmosphere that is measured by those orbiting satellites. The weather-balloon data is completely independent of that generated by the satellites and serves as a different measurement of the same quantity (the lower atmosphere).

Two research reports published earlier this year carefully compare the weather-balloons with the UAH satellite-derived temperature observations. Lanzante et al (2003) find there to be no difference in the temperature trend derived from the two datasets. The other (Christy et al, 2003) examines several different weather-balloon datasets and finds that the UAH satellite trend (0.06ºC per decade) is within a few hundredths of a degree Celsius of the trends derived from the four sets of weather-balloon observations, but always more positive (0.04ºC, -0.02ºC, 0.00ºC, and 0.05ºC, per decade). This makes the Wentz/Schabel trend of 0.16ºC per decade several times greater than that of any weather-balloon record.”

(GES, May 6, 2003)

At about the same time this report was published, the U.S. House of Representatives was considering an amendment, introduced by Rep. Bob Menendez (D-NJ), that would have overturned S. Res. 98 (the Byrd-Hagel resolution) and opened the door to the United States supporting the United Nations Framework Convention on Climate Change and becoming a signatory to the Kyoto Protocol. In response to the so-called Menendez Amendment, Marlo Lewis, a Senior Fellow at the Competitive Enterprise Institute, and Bob Ferguson, Executive Director of the Center for Science and Public Policy (a project of the Frontiers of Freedom Institute), published a report of specific comments on the Menendez Amendment in which they advanced the radiosonde vindication argument (Ferguson and Lewis, 2003). In this report they stated that,

“The computer models say that the troposphere should have warmed by +0.5 C in the last two decades. However, both NASA satellites and weather balloons show virtually no troposphere warming.

A new paper by Santer et al. attempts to debunk the satellite record. They claim that a satellite dataset produced by Remote Sensing Systems (RSS) in Santa Rosa, California, is more accurate than the dataset produced by climatologists Roy Spencer and John Christy at the University of Alabama in Huntsville (UAH). Why is the RSS dataset more accurate, according to Santer et al.? It conforms more closely to climate models. But data is supposed to confirm models, not the other way around. The UAH dataset agrees with a totally independent troposphere temperature record—weather balloon observations, which show about 0.08 degrees C of warming trend (see Figure 1) during the past two decades even when one includes the large warming contributed by the 1997-1998 El Nino event. The UAH results are plotted side by side with two independent determinations of the global temperature of lower troposphere in Figure 1. Note the near-perfect agreement (with correlation coefficients greater than 0.94 and 1 being perfect correlation) between the UAH satellite record and (a) balloon results from the U.K. Meteorological Office (marked HadRT) and (b) the assimilated global lower tropospheric temperature deduced by U.S. National Centers for Environmental Prediction (marked NCEP). The latest UAH effort in confirming the accuracy of the satellite temperature record and its error estimates are published in the May 2003 issue of the Journal of Atmospheric and Oceanic Technology (vol. 20, 613-629).”

(Ferguson and Lewis, 2003)

Once again, we see the same superficiality and cherry-picking that has become a trademark for organizations like these. First, radiosonde (“weather balloon”) records in general are not completely independent of the MSU record. Though the LKS record (Lanzante et al., 2003) cited here by GES is independent, most that are used in MSU intercomparison studies rely on MSU records to at least some extent to detect anomalous events. HadRT, which Ferguson and Lewis incorrectly report as independent, is one example (Parker et al., 1997; Free et al., 2002). While such corrections are generally minor for the products that use them, they cannot be ignored altogether.

Nor is it true that radiosondes are “a different measurement of the same quantity (the lower atmosphere)”. MSU and AMSU devices measure the bulk brightness temperature of broad atmospheric layers such as the middle troposphere and lower stratosphere. The narrowest measurement they can reasonably make is of the lower troposphere alone (MSU2LT), a layer at least 7 km thick, and this observation can only be made with differencing methods that increase sampling noise (NRC, 2000). Radiosondes measure ambient air temperature directly, using thermistors or bimetallic sensors of varying designs. These have a host of calibration and sensitivity issues of their own that are completely different from those of the MSU. Because they take snapshot readings of local temperature at discrete altitudes, their readings must be converted to equivalent MSU “brightness” temperatures by weighted summations over altitude and geographic region, and there are many issues surrounding how this should be done as well. Many of the issues impacting the reliability of the radiosonde record are of precisely the type that will cause it to underestimate actual air temperature (e.g., the Phillips to Vaisala equipment shifts at many stations), so it is not surprising that these records often run lower than their MSU equivalents, as GES emphatically asserts. None of this passes for measuring “the same quantity”.
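The weighted-summation step can be sketched in a few lines. The pressure levels and layer weights below are invented for illustration (a real MSU Channel 2 weighting function comes from radiative transfer calculations), but the bookkeeping is the same: a radiosonde snapshot at discrete levels must be collapsed into a single bulk-layer value before any comparison with the satellite record is possible.

```python
# Illustrative sketch: converting a radiosonde temperature profile into an
# equivalent MSU-style layer "brightness" temperature by a weighted vertical
# average. The levels and weights are made up for illustration; real MSU
# channel weighting functions are derived from radiative transfer.

# Radiosonde snapshot: temperature (K) at discrete pressure levels (hPa)
profile = {850: 278.2, 700: 270.1, 500: 255.4, 300: 230.8, 200: 218.9}

# Hypothetical layer weights (must sum to 1); a real Channel 2 weighting
# function peaks in the mid-troposphere and tails off above and below.
weights = {850: 0.15, 700: 0.25, 500: 0.30, 300: 0.20, 200: 0.10}

def equivalent_brightness_temp(profile, weights):
    """Weighted sum of level temperatures, mimicking a bulk-layer retrieval."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[p] * t for p, t in profile.items())

tb = equivalent_brightness_temp(profile, weights)
print(f"{tb:.1f}")  # -> 253.9
```

Every choice hidden in this sketch – which levels to use, how to weight them, how to handle missing soundings and regional coverage – is a place where radiosonde-to-MSU comparisons can diverge.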

There are also basic numerical misunderstandings in both the GES report and Ferguson and Lewis’ work. After yet another repeat of the bogus claim that Santer et al. (2003) attempted to “debunk” the MSU record, Ferguson and Lewis argue that there is “near-perfect agreement (with correlation coefficients greater than 0.94 and 1 being perfect correlation)” between UAH and two other “independent determinations of the lower troposphere”. This is not only false, it reflects a serious misunderstanding of correlations. High correlations between MSU anomalies and those of other datasets imply only that they are not entirely independent. What is at issue are temperature trends, and correlation coefficients have very little to do with small trend differences. In this case, rms errors and the confidence intervals derived from them are far more meaningful, and these typically show a lot more variance between MSU and radiosonde products. The data that Ferguson and Lewis report as being a “near-perfect” fit to UAH Version 5.0 are the HadRT2.1 radiosonde analysis and the version of the National Centers for Environmental Prediction (NCEP) reanalysis product 6 that was extant at the time. For UAH Version 5.0 comparisons to these products, the trend difference standard errors (which are based on annual anomaly difference rms errors) are +/- 0.0308 deg. K/decade for HadRT2.1 and +/- 0.0285 deg. K/decade for NCEP. The corresponding 95 percent trend difference confidence intervals are +/- 0.075 deg. K/decade for HadRT2.1 and +/- 0.067 deg. K/decade for NCEP (Christy et al., 2003 – see Figure 9). These intervals are large enough to include RSS Version 1.0. This is hardly “near-perfect agreement”.
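The point about correlations is easy to demonstrate with synthetic data. In the sketch below (the numbers are invented, not actual MSU or HadRT values), two monthly anomaly series share the same month-to-month variability but differ in trend by 0.1 K/decade – roughly the size of the disagreement in dispute – yet their correlation remains high, because the shared variability dominates it.

```python
# A toy illustration of why a high correlation between two anomaly series
# says little about their trend difference: shared month-to-month variability
# drives the correlation, while the trends can still differ by the amount in
# dispute. All values here are synthetic.
import math
import random

random.seed(0)
n = 240  # 20 years of monthly anomalies
t = [i / 120.0 for i in range(n)]  # time in decades

shared = [random.gauss(0, 0.15) for _ in range(n)]  # common variability
a = [0.05 * x + s for x, s in zip(t, shared)]       # trend: 0.05 K/decade
b = [0.15 * x + s + random.gauss(0, 0.03)           # trend: 0.15 K/decade
     for x, s in zip(t, shared)]

def corr(u, v):
    """Pearson correlation coefficient."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((x - mu) * (y - mv) for x, y in zip(u, v))
    su = math.sqrt(sum((x - mu) ** 2 for x in u))
    sv = math.sqrt(sum((y - mv) ** 2 for y in v))
    return cov / (su * sv)

def ols_trend(x, y):
    """Ordinary least-squares slope of y against x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
           sum((xi - mx) ** 2 for xi in x)

print(f"correlation: {corr(a, b):.3f}")
print(f"trend difference: {ols_trend(t, b) - ols_trend(t, a):.3f} K/decade")
```

The correlation comes out around 0.9 even though the trend difference is the full 0.1 K/decade – exactly the situation Ferguson and Lewis’ “near-perfect agreement” argument glosses over.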

But if Ferguson and Lewis misrepresented the lower troposphere record, GES positively botched it. They reported that Lanzante et al. (2003) found no trend difference between their radiosonde product (LKS) and UAH Version D, but failed to note that this was for Channel 2LT only, for adjusted and sign-averaged global data only, and only through 1997 – neglecting nearly a fourth of the extant record at the time GES made these comments. The absolute value of the trend differences reported by Lanzante et al. was larger globally (0.011 deg. K/decade), and considerably larger for median values computed over individual station averages (0.246 deg. K/decade). GES then compares these values with RSS trends even though RSS has no 2LT product. The trend value of 0.16 deg. K/decade they report for RSS Version 1.0 is incorrect and appears to be the result of simple carelessness. From the looks of it, GES started with the Channel 2 difference between UAH Version 5.0 and RSS Version 1.0 (approximately 0.10 deg. K/decade – see the earlier quote from the same article) and added it to the UAH 2LT trend rather than checking the value directly. The actual Channel 2 trend for RSS Version 1.0 is 0.097 deg. K/decade (Mears et al., 2003).
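The apparent bookkeeping error is easy to reconstruct from the numbers quoted above; this sketch simply reproduces the suspected apples-to-oranges sum next to the published value.

```python
# Reconstructing how GES likely arrived at the bogus 0.16 deg. K/decade figure
# for RSS: adding the UAH-RSS Channel 2 trend *difference* to the UAH lower
# troposphere (2LT) trend - mixing two different atmospheric layers - instead
# of checking the published RSS Channel 2 trend directly.
uah_2lt_trend = 0.06    # UAH Channel 2LT trend (deg. K/decade), as quoted by GES
ch2_difference = 0.10   # approximate UAH-RSS Channel 2 difference, as quoted
rss_ch2_actual = 0.097  # actual RSS Version 1.0 Channel 2 trend (Mears et al., 2003)

ges_figure = uah_2lt_trend + ch2_difference  # an apples-to-oranges sum
print(f"{ges_figure:.2f}")      # 0.16 - the incorrect value GES reported
print(f"{rss_ch2_actual:.3f}")  # 0.097 - the value they should have checked
```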

These arguments are typical of most skeptic forums. In recent years, most rely on the LKS radiosonde dataset (Lanzante et al., 2003) as vindication of UAH products (without mentioning that it only extends to 1997 and leaves out the significant warming of the latter 90’s and early 21st century). Some point to HadRT and Angell products, including older versions of both. But all use carefully selected portions of each record to “prove” that the UAH record is more reliable than its competitors, even to the point of comparing RSS middle troposphere products to lower troposphere UAH and radiosonde records for which RSS has no comparable product, as GES did in the above example. But in the summer of 2004, three of the foremost global warming skeptics, who to date had concentrated mainly on anti-global-warming publicity and industry-funded lobbying, escalated the skeptic challenge by publishing papers in peer-reviewed journals – two that challenge the upper-air record, and one that challenges its relationship with the surface record. These papers take the radiosonde/MSU comparison, and its relationship with AOGCMs, to a new level in skeptic forums that merits a discussion of its own.

Douglass, Singer & Michaels (2004)

The large majority of global warming skeptic publications are popular media pieces or summary papers for policy makers. One is very much like another, and those I have cited so far are typical examples. Skeptics seldom publish their work in peer-reviewed journals or contribute to the peer-review process. Yet there are a few notable exceptions. In July of 2004, David Douglass 7, S. Fred Singer 8, and Patrick Michaels 9 led teams that published two papers in Geophysical Research Letters in which they claim to have demonstrated that there is a clear disparity between surface and lower troposphere temperature trends (Douglass et al., 2004), and that current state-of-the-art AOGCMs cannot accommodate it (Douglass et al., 2004b). Their arguments differ from those considered so far in that they attempt to formally demonstrate both a disparity in the observational record and a model-observation discrepancy as well. They were also significant in that both papers were peer-reviewed and published.

In the first of these papers, Douglass et al. (hereafter, DEA) use MSU data, radiosonde data, and a reanalysis product applied to the period from 1979 to the present to argue that the disparity exists and that it cannot be accounted for by any known tropospheric dynamics. To do this, they start with global surface temperature data from Jones et al. (2001). These are monthly anomalies with respect to the 1961-1990 average of global surface air temperatures over land, and below-surface water temperatures for oceanic regions, represented within 5 deg. by 5 deg. grid cells. This record is then compared with lower troposphere trends taken from UAH Version D MSU2LT data (Christy et al., 2000) and data from a new “2-meter” temperature product (R2-2m) derived from an updated version of the National Centers for Environmental Prediction - National Center for Atmospheric Research (NCEP/NCAR) Reanalysis 6 (Kanamitsu et al., 2002; Kalnay et al., 1996). The latter is selected for its consistency and completeness between the surface and 850 hPa layers, and because it is (they argue) a dataset that is independent of both the MSU record and the radiosonde products that have been used to date for tropospheric intercomparison studies (Christy et al., 2000; 2003; 2004; Seidel et al., 2003, 2004; Angell, 2003). In the second paper (2004b), they compare results from 3 AOGCMs with surface temperature trends similar to those used in the first paper (but taken from Jones et al., 1999 rather than 2001), MSU2LT data from UAH Version D (Christy et al., 2000), radiosonde data from HadRT2.0 (Parker et al., 1997), and 50-year results from the NCEP/NCAR Reanalysis (Kistler et al., 2001). From these datasets they argue that the models, which represent the current state of the art in AOGCMs, cannot account for the observed troposphere and surface temperature trends. Predictions of global warming during the upcoming century are based on AOGCMs. Because these models typically show tropospheric warming that is equal to or greater than that of the surface, DEA claim that their papers prove that significant global warming is not happening now and will not happen any time soon.

Shortly after these papers were published (Aug. 12, 2004), Douglass, Singer, and Michaels jointly published an article online at Tech Central Station (Douglass et al., 2004c) in which they triumphantly announce that the science of global warming has been settled once and for all, and the climate change skeptics (themselves) have won. Challenging the existing scientific consensus, they ask us,

“How many times have we heard from Al Gore and assorted European politicians that ‘the science is settled’ on global warming? In other words, it's ‘time for action.’ Climate change is, as recently stated by Hans Blix, former U.N. Chief for weapons detection in Iraq, the most important issue of our time, far more dangerous than people flying fuel-laden aircraft into skyscrapers or threatening to detonate backpack nukes in Baltimore Harbor.

Well, the science may now be settled, but not in the way Gore and Blix would have us believe. Three bombshell papers have just hit the refereed literature that knock the stuffing out of Blix's position and that of the United Nations and its Intergovernmental Panel on Climate Change (IPCC).”

(Douglass et al., 2004c)

The first 2 “bombshell” papers they are referring to are the ones mentioned above (Douglass et al., 2004; 2004b). After this dramatic and inflammatory introduction, they go on to announce what they believe their papers have accomplished, and compare these results with the consensus view.

“The surface temperature record shows a warming rate of about 0.17˚C (0.31˚F) per decade since 1979. However, there are two other records, one from satellites, and one from weather balloons that tell a different story. Neither annual satellite nor balloon trends differ significantly from zero since the start of the satellite record in 1979. These records reflect temperatures in what is called the lower atmosphere, or the region between roughly 5,000 and 30,000 feet....

So, which record is right, the U.N. surface record showing the larger warming or the other two? There's another record, from seven feet above the ground, derived from balloon data that has recently been released by the National Oceanic and Atmospheric Administration. In two research papers in the July 9 issue of Geophysical Research Letters, two of us (Douglass and Singer) compared it for correspondence with the surface record and the lower atmosphere histories. The odd-record-out turns out to be the U.N.'s hot surface history. 

This is a double kill, both on the U.N.'s temperature records and its vaunted climate models. That's because the models generally predict an increased warming rate with height (outside of local polar regions). Neither the satellite nor the balloon records can find it. When this was noted in the first satellite paper published in 1990, some scientists objected that the record, which began in 1979, was too short. Now we have a quarter-century of concurrent balloon and satellite data, both screaming that the UN's climate models have failed, as well as indicating that its surface record is simply too hot.”

(Douglass et al., 2004c)

A closer examination of these “bombshell” papers reveals a very different story. In the introduction to the first, DEA tell us that,

“The globally averaged surface temperature (ST) trend over the last 25 years is 0.171 K/decade [Jones et al., 2001], while the trend in the lower troposphere from observations made by satellites and radiosondes is significantly less, with exact values depending on both the choice of dataset and analysis methodology [e.g., Christy et al., 2003, Lanzante et al., 2003]. This disparity was of sufficient concern for the National Research Council (NRC) to convene a panel of experts that studied the “[a]pparently conflicting surface and upper air temperature trends” and concluded, after considering various possible systematic errors, that “[a] substantial disparity remains”[National Research Council, 2000]. The implication of this conclusion is that the temperature of the surface and the temperature of the air above the surface are changing at different rates due to some unknown mechanism.

A number of studies have suggested explanations for the disparity. Lindzen and Giannitsis [2002] have ascribed the disparity to a time delay in the warming of the oceans following the rapid temperature increase in the late 1970s. Hegerl and Wallace [2002] have concluded that the disparity is not due to El Nino or cold-ocean-warm-land effects. Other authors [Santer et al., 2000] have suggested that the disparity is not real but due to the disturbing effects of El Niño and volcanic eruptions, a conclusion that has been critiqued by Michaels and Knappenberger [2000]. Still others argue that the disparity results from the methodology used to prepare the satellite data [Fu et al., 2004, Vinnikov and Grody, 2003]; however, only the results from Christy et al. [2000] have been independently confirmed by weather- balloon data [Christy et al., 2000, Christy et al., 2003, Lanzante et al., 2003, Christy and Norris, 2004].”

(Douglass et al., 2004)

So the trend from satellite and radiosonde products is “significantly less, with exact values depending on both the choice of dataset and analysis methodology”. In fact, only UAH products are significantly less, and while it is true that the exact values do depend on dataset choice, DEA’s wording makes it sound as if the differences are little more than minor adjustments. In fact, the variations are considerable. Notice also that after commenting on the potential impact of methodology on the MSU record, they carefully avoid any mention of RSS Version 1.0 (Mears et al., 2003; 2003b; 2003c), which is not only one of the best characterized MSU products in existence, but one that agrees well with state-of-the-art AOGCM predictions. Instead, they cite Vinnikov and Grody (2003), whose analysis is quite unique, has many open questions yet to be resolved, and also, coincidentally, has the largest trend predictions of any MSU product by a factor of at least 2. They also cite Fu et al. (2004) regarding satellite dataset analysis method differences when in fact, Fu’s team made no statements about this. What they did was demonstrate that the MSU Channel 2 data were contaminated by stratospheric emissions and quantify the degree of this effect.

Then, after this rather deft diversion, the 2 sources they do cite for the MSU and radiosonde records just happen to be the ones that are closest in agreement for the period 1979 to 1997 and low in trend – UAH Version 5.0 truncated to 1997 (Christy et al., 2003) and the LKS radiosonde product (Lanzante et al., 2003). Figure 11 shows tropospheric temperature trends for UAH, RSS, LKS and HadRT2.1 for 3 layers and by global region (Seidel et al., 2004). It can be seen that there is very good agreement between LKS and both UAH products for MSU2, though regionally the confidence intervals are large enough to accommodate RSS outside of the southern hemisphere, where the UAH-RSS discrepancy is largest. Not surprisingly, then, the southern hemisphere contributes most to the discrepancy. HadRT2.1 shows significant disagreement with both. For MSU2LT however, the LKS dataset shows noticeable discrepancies with UAH products, but agreement with HadRT2.1 is improved. In this case the largest regional discrepancy is again in the southern hemisphere, where now LKS shows more warming. This is particularly significant, as it is in this region that we expect the 2LT product to be most impacted by Antarctic sea-ice and summer melt pools (Swanson, 2003). Thus, even though UAH median trend estimates tend to be closer to comparably adjusted radiosonde products, agreement varies significantly by layer and region, and confidence intervals tend to be large.

When we extend the record another 4 years, the picture changes yet again. Figure 12 shows the same troposphere temperature trends by layer and region as Figure 11, but for 1979-2001. In this case, UAH and RSS products are compared with HadRT2.1 (the LKS record ends in 1997). Now we see that UAH and RSS products are in relatively good agreement with each other, and both disagree with HadRT2.1 globally and in all regions except the southern hemisphere, where UAH products are closer to HadRT2.1 than RSS is. For the MSU2LT layer, UAH products and HadRT2.1 agree well, but the confidence intervals for each are as large as the trends being measured (Seidel et al., 2004). It is worth noting that until 1997, LKS trends in all regions and globally were consistently warmer than their HadRT2.1 counterparts. Given the 1997-98 El Nino and its impact on all trends, it would have been surprising if this had not continued had LKS been extended to 2001.

Once again, we see that agreement depends on layer and region, and confidence intervals tend to be large in comparison to the trends being measured. This is particularly true of the 2LT layer that is of most interest to DEA. Furthermore, which layer is in agreement, and to what degree, appears to be strongly driven by the length of record being examined. Note also that DEA carefully avoid any discussion of the issue of limited radiosonde coverage, particularly in regions such as the southern oceans that have the most impact on differences between UAH and RSS trends. They also avoid any discussion of Antarctic sea-ice and melt pool impacts, which will be of particular importance for the lower troposphere 2LT trends that they are most concerned with. It is difficult to see how any of this adds up to a “confirmation” of UAH MSU products at the expense of its competitors! All this is neatly obscured by DEA’s subtle language, limited citations, and the ensuing, rather pensive discussion of a few exotic theories that might explain the “discrepancy”.

But the selectivity doesn’t end there. To demonstrate their irreconcilable “disparity”, DEA chose a common time frame over which to compare the three datasets. Their choice was based on the datasets they had selected and the point they were trying to prove. Introducing their methods, they state that,

“Since we wish to examine the disparity in the temperature trends among these three datasets, we limit our analysis to a common observational time series. The starting point in our analysis will be 1979, which is the beginning year in both the R2-2m and MSU data. We truncate the analysis at December 1996 which avoids the snow cover issue in R2-2m. This also avoids the anomalously large 1997 El Nino event in the tropical Pacific which Douglass and Clader [2002] showed can severely affect the trend-line. We will show later in this paper that it is likely that our conclusions would change little had we been able to use data though 2003.”

(Douglass et al., 2004)

In other words, even though the extant MSU records from both UAH and RSS extend to the present, DEA purposely choose to truncate their analysis to omit nearly a third of it! The stated reasons for doing this are to exclude a known issue with snow cover contamination in R2-2m and the “anomalously large” ENSO event of 1997, but these arguments are unconvincing. There were at least 4 other ENSO events during the satellite era (1982-83, 1986-87, 1991-92 and 1994-95). The 1982-83 event was one of the largest of the 20th century and occurred during the tropospheric/stratospheric impact of the El Chichon eruption (see Figures 6-8). None of these were omitted, even though the 1982-83 event was almost as large as the 1997 event. Furthermore, there is at least some evidence that a relationship may exist between global warming and ENSO events, particularly their frequency (Meehl and Washington, 1996; Knutson et al., 1997; Timmermann et al., 1999; Collins, 2000). Though the jury is still out on this (Zhang et al., 1997; Knutson et al., 1997; Boer et al., 2000), there is enough evidence of a possible relationship between the two that we certainly cannot exclude such events a priori in upper-air climate change studies! Likewise, avoiding the snow cover issue is also unconvincing, as the MSU2LT record is impacted by this as well, particularly in those regions where UAH and RSS products differ significantly (Swanson, 2003). Even if neither of these things were an issue, we are still left with an analysis of only 2/3 of the relevant upper-air record being used to evaluate products that cover the entire period.

The truncation of DEA’s analysis period raises another point. DEA specifically compare lower troposphere trends as determined by UAH MSU2LT products with surface and upper-air trends from other records. The online community encyclopedia Wikipedia has a section discussing the MSU record that includes a table showing these trends as a function of the record ending year from 1992 through 2003 (Wikipedia, 2004). A check of this table reveals that the year DEA decided to truncate their analysis, 1996, just happens to be the last year for which UAH 2LT products show a negative lower troposphere temperature trend. How convenient! Had they used data up to the present they would have observed a warming trend that in the last several years has been moving more or less steadily in the direction of restoring long-term agreement with the surface record. By truncating their analysis to 1996 they have,

  • Omitted a full third of the MSU record and included only that portion of it for which a negative lower troposphere temperature trend can be derived. Longer 2LT records show warming trends that are moving in the direction of restoring long-term agreement with the surface record.
  • Allowed themselves to directly compare the UAH MSU2 record with the one record which is truly independent of MSU products and shows the best agreement with UAH for that period, LKS (Lanzante et al., 2003). The LKS record does not extend beyond 1997.
  • Allowed themselves to directly compare another radiosonde product, HadRT2.0, with the UAH 2LT record over a period where there is very good agreement between the two, yet avoid a longer period over which the agreement is much worse (see their second paper cited here, Douglass et al., 2004b).
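The leverage that a single warm event near the end of a short record exerts on a fitted trend is easy to demonstrate. The following sketch uses purely illustrative synthetic data (a steady 0.06 K/decade warming plus a one-year warm spike standing in for the 1997-98 El Nino, with no noise so the endpoint effect is isolated) and compares the least-squares trend over the full record with the trend over a record truncated before the event:

```python
import numpy as np

# Synthetic monthly anomalies: steady 0.06 K/decade warming plus a
# one-year warm spike near the end (a stand-in for the 1997-98 El
# Nino). Values are illustrative assumptions only.
t = np.arange(1979, 2004, 1 / 12)
y = 0.006 * (t - t[0])                         # 0.006 K/yr = 0.06 K/decade
y[(t >= 1997.5) & (t < 1998.5)] += 0.6         # warm event

def decadal_trend(t, y):
    return 10 * np.polyfit(t, y, 1)[0]         # least-squares slope, K/decade

full = decadal_trend(t, y)                     # 1979-2003, event included
cut = decadal_trend(t[t < 1997], y[t < 1997])  # truncated before the event
print(f"full record: {full:+.3f} K/decade")
print(f"truncated:   {cut:+.3f} K/decade")
```

Even this single event shifts the full-record trend noticeably upward relative to the truncated one, by an amount of the same order as the underlying trend itself. Where a short record is cut relative to such events therefore matters enormously for trends of this size.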

These points are particularly telling because DEA have made a point of ridiculing the argument that the MSU record is too short to be of use for long-range climate change predictions. In the Aug. 2004 Tech Central Station article in which they introduce these papers to the public, they state that,

“When this was noted in the first satellite paper published in 1990, some scientists objected that the record, which began in 1979, was too short. Now we have a quarter-century of concurrent balloon and satellite data, both screaming that the UN's climate models have failed, as well as indicating that its surface record is simply too hot.”

(Douglass et al., 2004c)

Yes, now we do have a quarter century long MSU record against which to evaluate troposphere temperature trends, and DEA deliberately omit almost a third of it in their study! They claim that this neglect has little impact on their results. The rationale offered is that a repeat of their analyses for the 1979-2002 period over ocean regions only (which they say avoids “snow cover” problems) produces similar trends. This, however, is a bogus comparison. First of all, land regions contribute significantly to the overall trend and cannot be ignored regardless of oceanic response. DEA’s reasoning on this point assumes that snow cover is one of the most dominant features of land-based trends, if not the most important. This is patently false. Indeed, it is enlightening to compare their reasoning here with the MSU regional trends shown in Figure 4B. Remember that DEA cite UAH Version D as their only trusted authority for this record. MSU 2LT trends by global region are shown in the middle map. Note that the large majority of lower trend areas for this period are over the world’s oceans. This is not surprising, as we expect oceanic regions to have a moderating effect (we have already seen this at work in the Ocean Only vs. Ocean + Land diurnal cycles discussed earlier). Similar land-ocean trend differences can also be seen in the RSS regional trends (top map), though with higher overall values. Note also that many of the warmer regions occur in tropical or extratropical areas like the southeast United States and the Arabian Peninsula. It is difficult to see how snow cover could be polluting tropospheric trends over Florida and Saudi Arabia! The truth is that DEA “tested” their MSU record truncation by choosing a comparison that anyone could have told them would produce minimal trend differences for the two periods. Then, they came up with a rationale to sell it.
Lastly, it is interesting that they are so worried about snow cover here when they were obviously less worried about it in regard to its effects on MSU 2LT trends in the high southern latitudes - where it is far more abundant and its seasonal variations most affect trend differences between UAH and RSS products.

Moving on to their regional data and figures, we see even more problems. Figure 17 shows DEA’s Figure 1 (Douglass et al., 2004), which presents their regional 1979-1996 trends as determined by the surface record (Jones et al., 2001), the UAH Version D MSU record (Christy et al., 2000), and the NCEP/NCAR 2-Meter Reanalysis (Kanamitsu et al., 2002). For the period they analyzed, the surface record contained many gaps, so DEA wisely conducted their study only for areas where there were consistent records for all 3 products. However, in this figure where they report regional trends, they show cells with missing data in the same color (dark blue) as those with the minimum regional cooling rates, so that a casual inspection of it implies more regions with satellite era cooling trends than they actually observed. Though the caption mentions this in passing, we are left unable to tell which regions they observed cooling in and which ones they had no data for. At best, this is misleading. Likewise, Figure 18 shows their Figure 2, which presents their 1979-1996 trends for the Surface Record (Jones et al., 2001), the UAH Version D MSU 2LT Record (Christy et al., 2000), and the NCEP/NCAR 2-Meter Reanalysis (Kanamitsu et al., 2002) plotted by latitude. The first thing to notice is that the plot is not symmetric about the equator. DEA extend their trends northward beyond 60 deg. N. Latitude, stopping just short of the Arctic Circle. But in the Southern Hemisphere they truncate it at about 35 deg. S. Latitude. Why? A comparison of Figure 18 with Figures 4A and 4B is revealing. It can be seen that by ending their geographic trend record here they avoid the very region of the globe where UAH and RSS products are most different! The region from 60 deg. S. Latitude to the South Pole is precisely where Antarctic sea-ice and summer melt pools have the most impact on the MSU 2LT and TLT records (Swanson, 2003).
These regions also significantly impact the NCEP/NCAR R2-2m record. Figure 19 shows zonally averaged oceanic albedo as a function of latitude in both the original NCEP/NCAR Reanalysis (Kalnay et al., 1996) and the R2-2m product used by DEA. Sharp increases beyond 60 deg. latitude at either pole reflect the heavy influence of sea-ice. The austral summer cycling of these albedos can be readily seen in the R2-2m product at latitudes higher than 60 deg. S. Note also that the R2-2m product will not reflect the effect of summer melt pools on this albedo (which will have the effect of lowering it to open ocean values). These high albedos will appear as warming trends to the UAH 2LT record, and their interaction with summer melt pools correlates strongly with lower UAH 2LT trends. The effect is much stronger in the Southern Hemisphere than in the North (Swanson, 2003). By avoiding the polar regions, DEA avoid the impact of these influences on their trends, and they avoid the regions of largest difference between UAH and RSS for MSU Channel 2.

Thus, DEA’s first “bombshell” paper is little more than a cherry-picking tour-de-force. They examine only two thirds of the extant MSU record and compare it to products that were carefully selected for their agreement with it at the expense of other equally valid upper-air products. Along the way, they made sure that they had picked a time period that would yield the desired discrepancy between surface and tropospheric trends. Their second “bombshell” (Douglass et al., 2004b) shows little improvement. Here, DEA shift their attention from their alleged surface/upper-air “discrepancy” to an attempt to show that state-of-the-art AOGCM’s cannot account for it. They examine results from 3 AOGCM’s and compare them to the 1979-1997 surface temperature record as determined by Jones et al. (1999) and resolved to a 5 deg. by 5 deg. (latitude vs. longitude) grid, MSU 2LT lower troposphere temperatures as determined by UAH Version D (Christy et al., 2000), the same as determined by HadRT2.0 (Parker et al., 1997), and the NCEP/NCAR 2-Meter Reanalysis (Kistler et al., 2001). The models they choose are Hadley CM3 (Tett et al., 2002), the Goddard Institute for Space Studies GISS SI2000 atmospheric model (Hansen et al., 2002), and the Dept. of Energy Parallel Coupled Model, or PCM (Meehl et al., 2003; 2003b).

Hadley CM3 is run for the period 1985-1995 and forced with greenhouse gas emissions, sulfates, and tropospheric and stratospheric ozone. The 1961-1980 portion of this run was removed. Once again we see a truncated record – this time one that avoids both the beginning and the end of the MSU record. An examination of the upper-air history during the satellite era reveals that the portion of the record DEA omitted in their Hadley CM3 run contains the El Chichon eruption (1982) and a large El Nino event. Hadley CM3 has the ability to capture both events and, in fact, results from runs with solar and volcanic forcing were available to DEA at the time they published (Tett et al., 2002; Braganza et al., 2004). An examination of Figures 6 and 8 reveals that the combined impact of these two events was a boost in tropospheric temperatures below 300 hPa for a year or two, followed by a cooling period of comparable length prior to 1985 (when their run began). Including these events might well have boosted the early end of the record in this model and resulted in a lower overall trend for the period they examined. So the total period for which DEA run Hadley CM3 amounts to less than half of the extant MSU record, and covers a portion that produces the result they desire.

The selectivity becomes even more obvious in their GISS SI2000 run. DEA use runs of this model that are described in Hansen et al. (2002). In particular, they draw upon results cited in Figure 16 from that reference, which is reproduced here as Figures 20A and 20B. SI2000 is a coupled ocean-atmosphere model with several alternative oceanic components and a 4 deg. x 5 deg. gridded atmospheric portion. The atmospheric portion is an update of the earlier GISS SI95 model in which the number of vertical layers has been increased from 9 to 12, and the upper layers have been given higher resolution to allow for more accurate modeling of ozone and stratospheric aerosols from volcanic eruptions. Several other refinements were used to improve the performance of this model. Its higher resolution near the tropopause results in a lower 2 X CO2 forcing compared to SI95, one that falls within the range of 3.5-4.1 W/m2 reported by IPCC WG I (2001). SI95 also contained a programming error that caused it to misrepresent sea-ice and summer melt pool absorptivity, and SI2000 contains an update that corrects for this by fixing the Antarctic and Greenland interiors at an albedo of 0.80 (Hansen et al., 2002).

Regarding DEA’s SI2000 studies, the most relevant piece is the ocean component. In SI2000 the atmospheric component model is coupled at a common interface grid to any one of the 5 oceanic component models it uses - Ocean A through Ocean E. Each of these has strengths and weaknesses, and they vary in their ability to reproduce different aspects of oceanic response. Ocean A (observed Sea Surface Temperature) is based on the HadISST1 ocean surface model (Rayner et al., 2003) and provides global representations of SST, sea-ice, and night marine air temperatures for the period 1871-2000. Reliable in-situ data for these quantities are not consistently available for all regions and periods, so data sparse regions and periods have been filled in using reduced-space optimum interpolation methods (Kaplan et al., 1997; 1998). Ocean A does not model deep ocean responses such as latent heat transport or heat content, so it cannot be used for studies of oceanic response to climate forcings. But it has the advantage of being based on “real” rather than modeled oceanic history. So to the extent that the datasets and interpolation methods it draws from are reliable, it can be said to “capture” actual ocean history. Ocean B is a “Q-flux” ocean that models surface and deep ocean responses to a depth of 1 km. It models both horizontal and vertical heat transports using: a) horizontal heat transports chosen for their overall agreement with control runs of SST, and b) mixed layer to deep layer penetration of oceanic heat anomalies based on diffusion coefficients that vary by region and are based on local climatological stability (Hansen et al., 1984; Sun and Hansen, 2003). Hansen et al. (2002) apply the model to a depth of 1000 meters.
Based on observed rates of ocean mixing of tracers, Ocean B provides a good approximation of oceanic global heat uptake for climate forcing scenarios that do not fundamentally alter the deep ocean circulation (true of most multi-decadal simulations such as those done by DEA), and has proven useful for characterizing the efficacy of each of SI2000’s radiative forcings when only limited dynamical interactions are permitted. Ocean C, another deep ocean model, uses a pressure related vertical coordinate to characterize ocean heat content and transport (Russell et al., 1995). Ocean D is a deep ocean model based on the Geophysical Fluid Dynamics Laboratory (GFDL) Modular Ocean Model (MOM), and Ocean E is taken from the isopycnic coordinate based Hybrid Coordinate Ocean Model (HYCOM) as described in Bleck (1998).

Of these, Oceans A and B are the most popular, and the ones to which Hansen et al. (2002) devote the most attention. Each has its strengths and weaknesses. Ocean A is a favorite choice for historic atmosphere data assimilation and reanalysis studies. In these cases, actual ocean dynamics are less important than a clear picture of how they forced an atmospheric response, and Ocean A has the obvious advantage of being based on known rather than modeled ocean history. But there are limitations to this. The effectiveness of Ocean A hinges on the accuracy of the historic SST and sea-ice data on which it was based. Though quite good overall, this data is known to be regionally and temporally incomplete, and the interpolation methods that were used to “fill in the blanks” have had mixed success - in particular, characterizations of SST and sea-ice at high latitudes have substantial uncertainties. Some of this problem is ameliorated by the fact that the most serious difficulties occur prior to the satellite era, and HadISST sea-ice records were “homogenized” so as to provide consistency between differing components. But significant uncertainties remain in both (Hansen et al., 2002). This difficulty will be particularly telling for the high southern latitudes that are most important for discriminating between competing MSU products. Ocean A can also yield unreliable ocean-atmosphere heat fluxes that regionally impact its results. In fact, for some large scale effects such as the North Atlantic Oscillation (NAO), it can even yield the wrong sign for the resultant heat flux anomalies (Bretherton and Battisti, 2000). NAO heat fluxes generally lead to a cooling of Siberia, but their misrepresentation by Ocean A leads instead to an NAO induced cooling of Eurasia that is not observed (Hansen et al., 2002).
Problems like these are regional and far less problematic for global atmospheric change studies like those being considered in this paper, but they can have an impact.

Lastly, it must be remembered that Ocean A is an historic ocean model. As such, it will be of little use in evaluating future global warming – a fact that will bear directly on the question of whether a failure of AOGCM’s to reproduce upper-air trends would disprove anthropogenic greenhouse warming. On the other hand, though Ocean B lacks the data driven and largely verifiable ocean history of Ocean A, it does provide a good representation of actual deep ocean dynamics and, unlike Ocean A, it can be used for predictions of future climate change on regional and global scales. Studies of this sort require the ability to reliably reproduce oceanic heat storage, transport, and mixing. Ocean B yields good estimates of global mean thermal response to a wide range of natural and anthropogenic forcings, particularly moderate ones in which ocean surface heat anomalies will penetrate to deep ocean layers like “passive tracers” (Hansen et al., 2002). The Q-flux method on which it is based is flexible enough that a wide range of transient global surface temperature responses can be modeled with an appropriate choice of diffusion coefficient. Provided that climate forcing is moderate, and the dominant modes of deep ocean circulation do not change drastically over the period being studied – conditions that are very likely to be true for the upcoming century – this flexibility allows for good approximations of the heat uptake, storage, and transport characteristics of more sophisticated ocean models, and reasonably good agreement with past observational data as well (Hansen et al., 2002; Sokolov and Stone, 1998).

Oceans A and B are therefore complementary. One excels at reproducing historic ocean-atmosphere interactions, and the other provides a good basis for predictions of future climate change. Both are necessary for model based studies of a potential anthropogenic fingerprint on the global climate of the upcoming century. Furthermore, other SI2000 ocean components – Ocean E in particular – reproduce other climatic features that are missed by both Oceans A and B, giving SI2000 a suite of modeling options that allow for a wide range of surface and upper-air studies. Thus, any true test of this model’s potential will draw upon runs based on each, using a full suite of natural and anthropogenic forcings. Indeed, Hansen et al. (2002) evaluated results from Ocean A and Ocean B, and Sun and Hansen (2003) used Oceans A, B, and E.

Which brings us to DEA’s use of SI2000 for their troposphere trend comparison study. They used the 6-forcing case employed by Hansen et al. (2002) for the period 1979-1998, using Ocean A only. Figures 20A and 20B show the change in annual-mean temperature profile vs. pressure altitude for the period 1979-1998 (assuming linear trends) as determined from this run, along with results from a comparable run using Ocean B and vertical trend profiles from HadRT2.0 and HadRT2.1 (radiosonde – Parker et al., 1997) and MSU Channels 2LT, 2, and 4 (Christy et al., 2000). The left-side plot gives the Ocean A results used by DEA, and the right-side gives Ocean B. It is evident that Ocean A produces the largest discrepancy between model and observation. Both regionally and globally, Ocean B provides a better fit to both the radiosonde and MSU data. Furthermore, the MSU data shown in these figures is taken from UAH Version D (Christy et al., 2000), not the larger trends given in RSS Version 1.0. Yet even so, the 6-forcing driven Ocean B case gives global responses that consistently fall within the confidence intervals of the lower UAH trends, even for the 2LT layer. Regionally, confidence intervals overlap. For the middle troposphere layer (850-300 hPa) RSS Version 1.0 can be expected to run roughly 0.18 deg. K higher than the MSU trends shown for the same period and would be a better fit still across all regions. It is clear from this data that even though it is not perfect, SI2000 run with Ocean B gives a very good overall representation of regional and global temperature trends for the surface and troposphere when forced by well known effects.
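When the discussion above says a modeled trend “falls within the confidence intervals” of an observed one, the underlying check is simple interval arithmetic, sketched below. The trend values and half-widths in the example are illustrative assumptions only, not numbers taken from Hansen et al. (2002) or DEA:

```python
def intervals_overlap(trend_a, half_a, trend_b, half_b):
    """True if the intervals [trend - half, trend + half] overlap."""
    return max(trend_a - half_a, trend_b - half_b) <= min(
        trend_a + half_a, trend_b + half_b
    )

def difference_significant(trend_a, half_a, trend_b, half_b):
    """Stricter test: is the trend difference larger than its combined
    uncertainty (the two estimates treated as independent)?"""
    combined = (half_a ** 2 + half_b ** 2) ** 0.5
    return abs(trend_a - trend_b) > combined

# Hypothetical model vs. observed trends, in K/decade:
model, obs = 0.12, 0.06
print(intervals_overlap(model, 0.05, obs, 0.06))       # True: consistent
print(difference_significant(model, 0.05, obs, 0.06))  # False: not distinguishable
```

Note that interval overlap is the more lenient criterion: two intervals can overlap even when the difference between the estimates is significant under the stricter combined-uncertainty test. Either way, when the uncertainties are as large as the trends themselves, as with the 2LT layer, claims of a robust “disparity” between two such estimates are fragile.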

Yet DEA make absolutely no mention of it and present only the Ocean A results that yield the surface-troposphere discrepancy they desire. For the purposes of a study such as theirs, which seeks a satellite era comparison of modeled vs. observed results, Ocean A is in fact a good choice and they are right to include it. But the choice to use this ocean component alone must be viewed in the context of their larger objectives. DEA claim to have proven that state-of-the-art AOGCM’s cannot reproduce past or present climate trends that agree with observation, and are therefore useless for predicting future global change. Indeed, this more than anything else, is the basis of their “declaration of victory” over mainstream climate change science. Even a casual inspection of Figures 20A and 20B reveals that this is false. Ocean B does, in fact, yield results that are in quite good agreement with their referenced observations. Furthermore, while it certainly makes sense to base a study like this on observed historical data to the greatest extent possible (as they have done), SST and sea-ice characterizations in Ocean A are not without their issues, and it is far from evident that results from other components can be dismissed out of hand. This is particularly true given that DEA claim to have demonstrated the inability of models like SI2000 to capture future as well as past climatic changes, and between the two, only Ocean B can be used to model future climate change. DEA’s case would have been more compelling had they done the following,

  • Demonstrated that these Ocean B results, and results from other ocean components, should be dismissed outright, and only Ocean A used.
  • Provided compelling evidence that runs based on Ocean A demonstrate that SI2000 cannot produce viable predictions of future climate change, even though Ocean A would not be used for such studies.

Not surprisingly, they did not even attempt to do either, much less succeed, leaving the casual reader with the impression that GISS SI2000 is unable to reproduce any aspect of observed climate change.

These omissions become even more evident when we expand our evaluation of SI2000 to include its other ocean components. Oceans A and B are relatively simple component models that provide versatile and reasonably robust results, hence their popularity. But SI2000 has other ocean components that offer more thorough characterizations of many key ocean properties. An investigation of these tells even more about its capabilities. Ocean E for instance, a quasi-isopycnal Hybrid Coordinate Ocean Model (HYCOM), yields a much more complete picture of oceanic heat uptake and transport than Ocean B. It mixes heat more deeply than Ocean B, and in so doing provides a more realistic picture of oceanic heat sequestration – a feature that will be particularly telling for its ability to reproduce climate moderating effects and the amount of atmospheric warming still “in the pipe”, waiting to be released at a future date when global oceanic heat sequestration reaches its limits (Sun and Hansen, 2003). It provides a fairly good representation of oceanic heat storage profiles vs. depth and latitude as compared with observation, though specific geographic patterns often vary, and captures observed heat loss fluxes in the North Pacific and heat storage in the circum-Antarctic belt (Sun and Hansen, 2003; Levitus et al., 2000). Like Oceans A and B, Ocean E is not without its problems, at least two of which will likely be important for studies of satellite era trends. It displays a non-negligible climate drift, which if allowed to run to equilibrium would introduce an additional 8 deg. C to its results, and which cannot be accounted for using flux corrections without introducing other unrealistic variations (Sun and Hansen, 2003; Neelin and Dijkstra, 1995; Tziperman, 2000). It also fails to adequately capture equatorial “waveguide” cycles, which likely contributes to its under-estimation of ENSO amplitudes.
The latter has a predominately regional rather than global impact, and the former can be corrected for to a great extent by differencing control and experiment runs (Sun and Hansen, 2003). But overall, Ocean E yields a very good picture of global climate during the satellite era and the longer period since the early ’50s.

Figure 32 shows global mean temperature trend profiles taken from Sun and Hansen (2003) for an expanded set of SI2000 runs. The results shown, which are directly comparable to those in Figures 20A and 20B, reflect 5 and 6 forcing cases applied to Oceans A, B, and E for the satellite era and the longer 1958-1998 period, as compared with radiosonde trend profiles from HadRT2.0 and HadRT2.1 (Parker et al., 1997), and MSU data for the 2LT, MSU2, and MSU4 layers from UAH Ver. D (Christy et al., 2000). Figure 33, also from Sun and Hansen (2003), shows transient temperature responses for the MSU 2LT, MSU2, and MSU4 layers, and global ocean heat content anomalies for the same three runs and the period 1951-1998, with anomalies referenced to a base period of 1984-1990. Once again we see that of the three ocean components, Ocean A consistently predicts the highest trend profiles, while Oceans B and E both do surprisingly well at reproducing comparable trend profiles from the referenced radiosonde and MSU datasets. The Ocean A trends are the only ones that fall outside of the MSU confidence intervals for most of the free troposphere (850-300 hPa), and they are a worse fit than Oceans B and E at all layers except the surface. In fact, Ocean E actually under-represents surface trends. Likewise, Oceans B and E global transient responses are for the most part much closer to observation than their Ocean A counterparts. All three capture stratospheric response fairly well, but Oceans B and E consistently capture the MSU2 response better. Ocean A consistently over-represents observed global ocean heat content anomalies, while Oceans B and E fall to either side of them. Thus, while far from perfect, Oceans B and E offer much better characterizations of many key ocean-atmosphere responses than Ocean A, and unlike Ocean A are well suited to studies of future as well as past and present climate change.
Clearly, any realistic evaluation of SI2000’s capabilities must consider all three – particularly when an evaluation of the ability of AOGCM’s to predict future climate change is being sought, as DEA are doing. Despite these considerations, they have restricted themselves to Ocean A runs only - because only these runs yield the significant surface-troposphere disparity they desire.

Of the 3 AOGCM’s evaluated by DEA, the Dept. of Energy PCM model is the only one they ran using a full suite of realistic forcings and oceanic and atmospheric components for a time period that includes all significant ENSO and volcanic episodes of the satellite era. They use the “ALL” case that includes greenhouse gases, sulfate aerosols (direct effect only), stratospheric and tropospheric ozone, solar, and volcanic forcings. This is the same run that Santer et al. (2003) considered in their evaluation of the detectability of an anthropogenic fingerprint in a modeled climate. We have already seen that they did, in fact, detect an anthropogenic fingerprint in that model, and that while it is not a good fit with UAH Versions D and 5.0, it is a good fit with RSS Version 1.0. DEA, of course, make no mention of any of this, but highlight its disagreement with the preferred UAH products.

Their Figure 1 presents results from the PCM “ALL” case, along with results from GISS SI2000 and Hadley CM3, as zonally averaged trends vs. latitude. Their Figure 2 presents decadal trends vs. altitude from these runs compared with observational data (Douglass et al., 2004b). Here we see the same selectivity in observational datasets that plagued the first paper, but with a few new twists. The limitations of the radiosonde datasets and the Reanalysis product have already been discussed. But it is noteworthy that for their radiosonde comparison they choose HadRT2.0 when HadRT2.1 was available. We saw in Part I that the latter had improved considerably on the former, with updated corrections for anomalous data and discontinuous records (Free et al., 2002; Seidel et al., 2003; 2004). Once again, they appear to have chosen the former because it yields the desired larger discrepancies with modeled results for the lower troposphere. Note also that with the exception of the northern hemisphere, their Figure 2 shows all negative trends at 800 hPa for the MSU record (MSU 2LT). Yet their cited source is UAH Version D (Christy et al., 2000), which reports an MSU 2LT trend of 0.06 deg. K/decade for 1979-2001. How can both be true? The answer, of course, is that once again, DEA report the trend only through 1996, omitting fully one third of the extant MSU record! For the NCEP/NCAR Reanalysis they use the original version (R1) in this paper rather than the later version (R2-2m) that was used in the first. It has already been noted that the updated version of this product corrected many problems present in the first. These included corrections for bogus data in the southern hemisphere, snow and ice cover problems for the 1974-1994 period, and snowmelt pool and oceanic albedo problems for the entire record (Kanamitsu et al., 2002) – all problems that will be of importance to MSU and model comparisons.
Yet despite these issues, they use the original record in this case when the updated product was available for the same period.

So between two papers published in the same month, we essentially have a cherry-picking tour-de-force. Both have been carefully orchestrated to “prove” a disparity between observational temperature trends at the earth’s surface and those of the modeled and observed troposphere featuring,

  • A complete neglect of almost one third of the extant record, including a significant ENSO event of the late 1990s, even though such events may well be related to anthropogenic global warming, and previous events almost as large were included.
  • A “validation” of this shorter record based on exactly the choice of global region that is most likely to produce minimal trend differences for both periods, followed by a rationale for this choice that assumes a major “snow cover” problem over latitude bands where snow cover is minimal.
  • A neglect of 3 other upper-air MSU products, at least one of which is overall every bit as well characterized as the one they chose, and in a few respects, better.
  • Neglect of the most recent, and improved, analyses of the MSU product they did use (other than passing remarks) – most likely because the later products (Christy et al., 2003; 2004) show higher MSU TLT trends than the one they chose (Christy et al., 2000) and that one covers a time frame closer to the truncated period they analyzed.
  • A selection of only those AOGCM run periods and parameters that produce large discrepancies between troposphere and surface trends – including a choice of ocean component model for GISS SI2000 that, although it has many merits for satellite era trend studies like theirs, cannot be used for the very studies of future climate change (the very turf on which they claim to have demonstrated AOGCM failings). That component also consistently shows worse agreement with observation than at least two other SI2000 ocean components that are well suited to studies of future climate change – even though results based on one of these components were presented in the very Figure they cited, side-by-side with the one they did use (Hansen et al., 2002).

Surely DEA are aware of the various climate events that influenced the 1979-present tropospheric record, so they must know that one of the ENSO events they included (1982-83) was almost as large as the one they truncated their record to avoid. They have pointed out in many other forums how regionally and temporally variable the upper-air history is, and they are aware of the geographic limitations of both the surface and radiosonde records in comparison to the MSU record. So they must know that these differences will impact trend comparisons. Though their choice of Ocean A for use in their GISS SI2000 coupled model runs is commendable in many ways, they must have been aware that it was not the only choice available. In particular, they must have been aware that this component would not even be used for the very AOGCM based studies of future global warming that they are claiming to have refuted, and that the ones they ignored not only could be used, but yield better agreement with observations in the same comparisons used in their paper – a fact that was clearly apparent in the right half of the very figure they referenced. It is one thing to make mistakes – we all do, and that’s why there is a peer-review process that we all benefit from. But the errors and omissions in these two papers are serious enough that it is difficult to see how they could have been accidental. In light of this, it would be fitting for Douglass, Singer, and Michaels to either present adequate explanations for them, or retract both papers with apologies.

McKitrick & Michaels (2004)

But wait! There are more surprises in store. Though DEA claim to have defeated global warming, even they are forced to admit, from their own cherry-picked data, that the northern hemisphere surface and troposphere records show significant warming trends. So having dropped “bombshells” one and two, they move on to their next holy grail – an attempt to explain (or perhaps explain away) the observed northern hemisphere warming trends as due to something other than climate change. Returning to their Tech Central Station victory celebration, we’re told that,

“As bad as things have gone for the IPCC and its ideologues, it gets worse, much, much worse.

After four years of one of the most rigorous peer reviews ever, Canadian Ross McKitrick and another of us (Michaels) published a paper searching for "economic" signals in the temperature record. McKitrick, an economist, was initially piqued by what several climatologists had noted as a curiosity in both the U.N. and satellite records: statistically speaking, the greater the GDP of a nation, the more it warms. The research showed that somewhere around one-half of the warming in the U.N. surface record was explained by economic factors, which can be changes in land use, quality of instrumentation, or upkeep of records.”

(Douglass et al., 2004c)

The reference is to “bombshell” 3 - a paper published in May of this year by Michaels and University of Guelph economist Ross McKitrick (2004 – hereafter, MM) in which it is argued that 1979-2000 northern hemisphere warming trends are driven by “economic and social factors” rather than climate change. While this paper does not address the MSU record, it is worth a digression here because,

  1. MM have stated their intention to expand the study it describes to include the MSU record in the near future (Michaels et al., 2004), and as we shall see, they note in it that their conclusions agree with UAH Version 5.0 TLT trends, indicating that they see it as yet another vindication of that upper-air product.
  2. It is even more revealing of the selective, and at times even haphazard methods of its authors.
  3. DEA consider it to be the great missing link in global warming surface trends that their alleged surface-troposphere disparity portends.

So how do MM go about testing an alleged link between global warming and economic activity? They constructed a model of global climate and economic activity as a linear combination of various parameterized independent variables categorized as Climate related, Economically related, and Socially related, plus a coefficient to account for unexplained residuals in each model run iteration. The parameters used were as follows,

Climate Parameters

  • Surface pressure in dry regions (which they consider to be a proxy for local surface temperature).
  • Coal use (which they use as a surrogate for sulfate emissions).
  • Cosine of latitude ( Cos(L) ).
  • Coastal proximity, expressed as a dummy variable.

Economic Parameters

  • Population.
  • Real per capita income (factored by population as a measure of the intensity of regional economic activity).
  • Scale of economic development activity as characterized by land use changes, urban heat island effects, and regional collection and maintenance of temperature records in rural vs. developed areas.
  • Coal growth rate.
  • National GDP growth rate.

Social Parameters

  • Local literacy rates.
  • Number of months of missing temperature data by region.
  • A dummy variable that discretely identifies former Soviet Union weather data, which is characterized as being particularly variable due to the dramatic geopolitical changes in this region during the period of study.

Coefficients were then derived for each of these independent variables using least squares methods. For dependent variables, MM use monthly climate records for the period 1979-2000 taken from a network of 218 land based weather stations in 93 countries (shown in Figure 21), chosen from the GISS homogenized surface record (Hansen et al., 1999), and another set of temperature data, which they modeled separately, consisting of 5 deg. by 5 deg. gridded data from the IPCC for the cells that correspond to their selected network of GISS stations. This global model was then subjected to a standard multiple regression analysis from which correlations were sought between global warming trends and the various climate, economic, and social influences they modeled.
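The general shape of a regression like the one MM describe can be sketched in a few lines. The sketch below is illustrative only – the variable names and synthetic data are invented stand-ins, not MM's actual inputs or parameterizations – but it shows how a linear combination of climatic, economic, and social predictors is fit to station temperature trends by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 218  # number of stations, as in MM's network

# Hypothetical stand-ins for MM's predictors (all names illustrative)
lat = rng.uniform(0, 80, n)                # station latitude, degrees
cos_lat = np.cos(np.radians(lat))          # climate variable: Cos(L)
population = rng.lognormal(10, 1, n)       # economic variable
income = rng.lognormal(9, 0.5, n)          # economic variable
literacy = rng.uniform(0.3, 1.0, n)        # social variable
coastal = rng.integers(0, 2, n)            # dummy variable

# Synthetic dependent variable: decadal temperature trend per station
trend = 0.2 * cos_lat + 0.01 * literacy + rng.normal(0, 0.05, n)

# Design matrix with an intercept column; coefficients by least squares
X = np.column_stack([np.ones(n), cos_lat, np.log(population),
                     np.log(income), literacy, coastal])
beta, *_ = np.linalg.lstsq(X, trend, rcond=None)
print(beta)  # fitted coefficients, intercept first
```

Everything that matters in MM's paper happens downstream of a fit like this: which predictors go into `X`, and whether the statistical assumptions behind the fit actually hold.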

To no one’s surprise, MM find that the warming observed in the northern hemisphere since 1979 correlates better with economic and/or social variables than with climate variables. To further clarify these results, they sort their data into two groups – one corresponding to global data for the colder half of any given year, and one for the warmer half, after which they further subsample their cold season data to reflect dry (that is, subzero dew point) regions only. They reran their analysis under these conditions, sorted their resulting independent variable correlations by order of importance in each grouping, and then evaluated the actual temperature trend impact of each by removing them one at a time for successive runs of their regression (McKitrick and Michaels, 2004). The results of this exercise are shown in Figure 22. MM note that with economic and social effects removed, the global trend is in surprisingly good agreement with the MSU record (by which, they mean the UAH Version 5.0 TLT record of course). Removal of their Soviet “dummy variable” drops the trend further (to a value remarkably close to the UAH Version 5.0 TMT record, but MM do not comment on this), but they acknowledge that their Soviet variable may not be a true greenhouse surrogate.

Thus, regarding global surface temperature trends for 1979-2000, McKitrick and Michaels are led to two striking conclusions,

  1. Outside of cold season dry regions, warming trends are dominated by economic factors, as characterized in their model.
  2. During the warm season, warming trends in all regions are dominated by a combination of economic and social factors.

These are bold statements. If MM are right, then nearly all of the warming trends observed worldwide at the earth’s surface, and possibly in the lower troposphere as well, are little more than evidence of our own growing wealth and productivity, mistaken for dangerous global warming. So victory over global warming has finally been achieved and we can all relax, right?

Wrong! Not only did MM fail to make their point, the drama that followed the publication of this “bombshell” paper will surely go down in history as one of the greatest comedies of errors ever to beset the scientific community. The journal Climate Research, which had already been through one scandal involving yet another seriously flawed paper 10, published this one in May of 2004. Shortly thereafter the festivities began. MM ran their multiple regression analyses using an econometrics program called SHAZAM 11. SHAZAM makes use of input data files and has its own associated language for characterizing variables and program calls. As noted above, MM used cosine of latitude ( Cos(L) ) as one of their input climate parameters. They derived this within the SHAZAM user interface by trigonometric identity using the program’s built-in Sin(x) variable and gave it the label COSABLAT. According to the SHAZAM User’s Guide 11, all trigonometric variables in SHAZAM require their arguments to be expressed in radians. In August of 2004, barely 12 weeks after the paper hit the street, Tim Lambert of the University of New South Wales, Sydney, Australia, obtained McKitrick’s SHAZAM input file from his U. of Guelph web site and discovered that MM had input all of their latitude data to the variable COSABLAT in degrees rather than radians, making virtually all of their derived results useless! When Lambert corrected the errors and reran their analysis, MM’s “economic” signal was drastically reduced 12. Figure 23 shows MM’s results after corrections to COSABLAT were made to their inputs. Comparison with their pre-corrected results (Figure 22) reveals that with the corrections made, their economic and social signals have been reduced from 0.259 to 0.160 deg. K/decade – in other words, by over one third.
At Tech Central Station, DEA proudly tell us that McKitrick and Michaels published this research “after four years of one of the most rigorous peer reviews ever…” (Douglass et al., 2004c). But this “rigorous” peer review process did not even manage to catch a simple conflation of input units that would have been inexcusable in an undergraduate exam!
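The degrees-vs-radians slip is easy to reproduce. Python's `math.cos`, like SHAZAM's trigonometric functions, expects its argument in radians; feeding it a latitude in degrees quietly yields a very different number:

```python
import math

lat_deg = 45.0
wrong = math.cos(lat_deg)                # argument mistakenly left in degrees
right = math.cos(math.radians(lat_deg))  # correct: convert to radians first

print(round(wrong, 4))  # 0.5253 -- this is cos(45 radians)
print(round(right, 4))  # 0.7071 -- this is cos(45 degrees)
```

No error is raised in either case, which is exactly why a mistake like this survives unless someone inspects the input files, as Lambert did.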

Well alright then – so they made a simple mistake with units that any of us could have made on a bad day. So what! They still come up with over half of the surface temperature signal tied to economic and social factors rather than climate change. Doesn’t this establish their central thesis anyway?

Only if you neglect the overly simplistic characterization of these effects in their model and its resultant potential for data clustering. Consider MM’s choice of input parameters as given above. Their modeling of effects as complicated as global climate, economic activity, and even social change boils down to a mere 12 variables. The methodological shortcomings in this approach are almost too numerous to mention. Consider the following,

  • Surface pressure in dry regions only is taken as a primary driver of climate response. Moist regions – which comprise the large majority of the earth’s surface – are summarily lumped into a single “remainder” category.
  • Other than a dummy variable for generalized “coastal proximity”, oceanic effects aren’t considered at all, even though oceans cover over four fifths of the earth’s surface and dominate the overall response of the lower atmosphere. Certainly we would expect Michaels to be aware of this given the effort he and his co-authors went to in order to avoid the 1997 ENSO event in their first “tropospheric disparity” paper (Douglass et al., 2004).
  • “Coal use” is treated as a proxy for sulfate emissions even though they could have just used direct measurements of anthropogenic sulfate emissions with far less uncertainty (Lelieveld et al., 1997; IPCC, 2001, Chap. …).
  • Economic development activity is characterized by simple land use changes, urban heat island effects, and the impacts of economic development on record keeping with no apparent attempt to correct these effects for the fact that many regional climate responses, including surface temperature, are known to be correlated over larger distances than what can be resolved by simply looking at urban growth and development (Wilks, 1995).
  • National GDP growth rate is considered a key economic proxy for regional climate change, yet it is a characteristic of entire national economies. MM make no attempt to describe how GDP impacts the regional distribution of economic development within any given country other than factoring it by population. Consider the magnitude of error likely to result from applying a single number nation-specific parameter like this to the United States without considering the separate regional contributions to it of say, the Aleutian Islands, the Sonora desert, and New York City or Los Angeles. Even if we were to factor GDP by say, regional population (as MM do with their annual per capita income inputs), it is still straightforward to identify rural and urban areas, or even separate urban areas with different industry make-ups, that have similar populations but very different heat and/or greenhouse gas or particulate emissions characteristics.
  • Other than generalized land use and urban heat island impacts, there is no clear differentiation between the impacts of manufacturing and service based economic growth, nor is any attempt made to discriminate between various greenhouse gas emitting industries other than via simple coal production. Concrete for instance, is known to be a significant producer of greenhouse gas emissions, particularly CO2 (see for instance, Milmoe, 1999). Yet MM do not consider it at all.
  • Local literacy rate is identified as a social proxy for climate change. Yet how the two might be related is not addressed, and a myriad of other potential social impacts which conceivably could have a much larger influence (e.g. – cultural trends regarding environmental sensitivities, for which there is a fair amount of survey data, particularly in the U.S. and Europe), are not even considered.
  • The former Soviet Union is singled out as a unique contributor to the variability of surface station temperature records - even to the degree that MM consider it to be a step function input to their model. The basis for this is the dramatic socio-economic changes experienced by former Soviet regions during the period of record. But other regions such as sub-Saharan Africa have undergone even more dramatic social upheavals during the same period, suffering similar and/or greater proportional impacts on their existing surface station records, yet these are not considered at all, much less treated as step function changes.
  • Coal growth rate is considered to be a key economic proxy for climate change, but automobile production and the resultant emissions increases are not even considered! If coal consumption counts as an economic surrogate for sulfate production, certainly growth in automobile production and use would be a proxy for particulate emissions, smog, and greenhouse gas emissions as well. MM do not even mention it.

And so on, and so on, and so on. This list could be expanded ad infinitum, but by now the problem should be abundantly clear. The heart and soul of MM’s methodology is the use of multiple regression methods, by which they wish to demonstrate that perceived anthropogenic climate signals are in fact more strongly correlated with social and economic “signals” unrelated to climate change. But it is a well known mathematical fact that multiple regression analysis assumes independent, identically distributed errors and predictor variables that are not strongly correlated with one another. When these assumptions fail, data clustering (the presence of unaccounted-for correlations between variables that are being treated as independent) is almost certain to occur. As a consequence, their standard errors, and therefore their confidence intervals, are almost certain to be under-estimated, and they will treat statistically insignificant results as valid signals.
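The understatement of standard errors described above can be demonstrated with a small simulation (entirely synthetic; a sketch of the statistical point, not MM's data). When errors share a component within "regions" and a predictor varies only at the regional level, the textbook ordinary least squares standard error badly understates the true sampling variability of the estimated coefficient:

```python
import numpy as np

rng = np.random.default_rng(1)
n_clusters, per_cluster = 20, 10
n = n_clusters * per_cluster

# A predictor that varies only at the cluster ("region") level
x = np.repeat(rng.normal(size=n_clusters), per_cluster)
X = np.column_stack([np.ones(n), x])

def naive_fit(y):
    """Slope and its textbook OLS standard error (assumes iid errors)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1], np.sqrt(cov[1, 1])

# True slope is zero; errors share a component within each cluster,
# violating the independence assumption behind the naive standard error.
slopes, naive_ses = [], []
for _ in range(500):
    cluster_noise = np.repeat(rng.normal(0, 1, n_clusters), per_cluster)
    y = cluster_noise + rng.normal(0, 1, n)
    b, se = naive_fit(y)
    slopes.append(b)
    naive_ses.append(se)

print("mean naive SE:       ", np.mean(naive_ses))
print("actual spread of b's:", np.std(slopes))  # substantially larger
```

With 20 effective observations masquerading as 200, confidence intervals built from the naive standard error come out far too narrow – exactly the mechanism that turns statistical noise into apparently "valid signals".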

Problems like this have plagued large scale econometric analyses of economic and social problems for years because it is generally impossible to account for all the variables needed to make such analyses believable. Such studies almost always degenerate into endless number crunching exercises where “parameterized adjustments” to input parameters, intended to account for the unavoidable blizzard of unknowns, allow for the output of, quite literally, any result desired. MM’s paper is no exception. Consider for instance, their use of cosine of latitude ( Cos(L) ) as a climate variable rather than simply latitude L. There is no apparent justification for this, and MM offer none – they simply do it. We have to wonder what the point is of arbitrarily adding this extra layer of complexity to their inputs. It is true that the area associated with an annular “slice” of latitude of thickness dL varies as Cos(L), so if MM were seeking correlations to variables that were sensitive to area, they might get a more direct comparison from Cos(L). But this would only serve to simplify the mathematical formulation of their desired description of latitude correlated effects – it wouldn’t have any truly meaningful effect on the correlation itself, which is what they’re ultimately after. Though it would be difficult to prove, the best explanation for the use of Cos(L) rather than L appears to be that MM experimented with latitude based input variables and got the results they desired using Cos(L).

Given the issues surrounding multiple regression methods and data clustering in studies such as MM’s, at the very least they should have subjected their model to independent tests of its robustness. The SHAZAM program they used does generate a heteroskedasticity-consistent covariance matrix as part of its output. Ordinary least squares methods can produce wide variation in small trend estimates from separate datasets seeking to measure the same changes even though the two datasets are well correlated (we saw this earlier while comparing separate MSU and radiosonde analyses). If the residuals from these analyses contain significant heteroskedasticity (e.g. – significant variations in standard deviation, or standard error, over subsampled portions of the time series being examined), this would be a big warning sign that the model had been somehow mischaracterized or contaminated by variables that had not been taken into account. Given that MM’s SHAZAM run would have produced heteroskedasticity-consistent output, it is reasonable to expect them to have checked that their results were bias free, and therefore independent of the multiple regression model used to derive them. Furthermore, if their economic and social signals are truly real, rerunning their analysis with part of their dataset should allow the rest of their independent data to be reproduced.
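The residual check described here – comparing residual scatter over subsampled portions of the data – can be sketched as follows. The data are synthetic and deliberately heteroskedastic; this is a crude diagnostic, not SHAZAM's formal test:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
x = np.linspace(0, 10, n)

# Residual spread grows with x: heteroskedastic by construction
y = 1.0 + 0.5 * x + rng.normal(0, 0.1 + 0.1 * x, n)

# Fit the line and compute residuals
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Crude diagnostic: compare residual standard deviation across thirds
thirds = np.array_split(resid, 3)
spreads = [float(np.std(t)) for t in thirds]
print(spreads)  # a large spread ratio across subsamples flags trouble
```

A model whose residual spread triples from one end of the record to the other, as here, is the "big warning sign" the passage describes: something systematic is being left out of the fit.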

Tests like these are standard for multiple regression models like MM’s, yet they steered well clear of them. But even so, it wasn’t long before someone else got ahold of their data and did subject it to robustness tests like these. Benestad (2004) obtained MM’s dataset from McKitrick’s web site and reconstructed their analysis using a separate multiple regression model. In particular, Benestad was concerned about the fact that MM did not account for the interstation temperature dependencies caused by correlations of temperature trends over geographic regions larger than MM’s land use and urban heat island variables could resolve. First, he established that his model could reproduce their results using their full data set, thereby demonstrating that he had established a baseline from which further tests could be made. Then, he reran their analysis using 2 separate subsets of their data – one of which was used to calibrate the model, and the other to test its ability to predict the outcome of the variables that had been omitted.

To no one’s surprise, MM’s model failed miserably. Benestad ran five separate analyses of MM’s dataset, each constructed from a subset of MM’s input variables. Dependent variable data from stations within the latitude band of 75.5 deg. S to 32.2 deg. N were used to calibrate the model, and data from the latitude band 35.3 deg. N to 80 deg. N were used to evaluate MM’s results. The following runs were evaluated,

  • A model that used all of their input data (McKitrick and Michaels, 2004 Table 4).
  • A model that used only their physical geographic data.
  • A model using only their geographic information and population data.
  • A model that used only their non-climatic factors (e.g. – their economic, social, and Soviet data).
  • A model using only the variables that MM identified as significant.

The results of Benestad’s five runs are shown in Figure 24. Though his full model run reproduced MM’s result quite well, none of the 4 models run with subsets of their independent variables is able to reproduce the withheld temperature data. Furthermore, his model run using only MM’s economic, social, and Soviet variables produces a near zero trend. Thus, McKitrick and Michaels have done little more than use a careful selection of data and some involved number crunching to generate exactly the economic and social signals they wanted – signals that well known and reasonable independent tests cannot reproduce.
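Benestad's calibration/validation procedure is a standard split-sample test, and is easy to sketch. Below is a toy version with synthetic data and invented variable names (and a simplified single latitude cutoff in place of his 32.2°N/35.3°N bands): a model is fit on the southern stations and asked to predict the withheld northern ones. A model capturing a real physical dependence passes; one built on spurious correlations fails.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 218
lat = rng.uniform(-75, 80, n)
econ = rng.lognormal(0, 1, n)   # hypothetical "economic" predictor

# Suppose the true trend depends only on latitude, not on econ
trend = 0.1 + 0.15 * np.cos(np.radians(lat)) + rng.normal(0, 0.03, n)

south = lat < 33                # calibration band (cf. Benestad's split)
X = np.column_stack([np.ones(n), np.cos(np.radians(lat)), np.log(econ)])
beta, *_ = np.linalg.lstsq(X[south], trend[south], rcond=None)

# Predict the withheld northern stations and score the error
pred = X[~south] @ beta
rmse = float(np.sqrt(np.mean((pred - trend[~south]) ** 2)))
print("out-of-sample RMSE:", rmse)
```

Here the fitted model predicts the withheld band well because its dominant term is physically real. Rerun the same test with only the spurious `econ` column in the design matrix and the out-of-sample error balloons – which is, in miniature, what happened to MM's model.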

One other point is worth noting. Benestad’s test of MM’s analysis was submitted to Climate Research in early August of this year (2004), almost 3 weeks prior to the discovery that they had used degrees instead of radians in their latitude data. At the time he noticed that MM had used degrees for their latitudinal inputs. Being aware that his own models required radians for this variable (as most similar models do for trigonometric quantities), he made sure his own models were done properly (Benestad, 2005). Yet even so, he found little latitudinal influence on any of MM’s principal signal variables. At least two of his model runs made no reference to latitude. The remaining three used the same latitude data, in proper units, across differing combinations of other input parameters with varying results. Thus, the issues associated with MM’s model characteristics and the design of their input parameters appear to be separate from their use of bogus data, and continue to plague their results even after those errors were corrected. The mere fact that MM’s results are sensitive to this input while other valid model approaches are not should be a warning sign in itself. At the very least, had MM’s analysis been robust to any acceptable degree, their results should be reproducible via other proven methods.

McKitrick and Michaels responded to Benestad’s comments in the same edition of Climate Research (McKitrick and Michaels, 2004c). The only criticisms they could muster were that Benestad’s tests of their model (particularly his separation of their variables by latitude band for separate calibration and validation runs) were not commonly used in the refereed climate science literature, and that he had used the “worst” of their data to calibrate his runs. The first point is, of course, irrelevant. What Benestad tested was MM’s use of multiple regression methods to derive correlations from modeled data where certain correlations were expected. This has to do with statistical mathematics, not climate science per se, and Benestad’s methods are commonly used to test multiple regression models in the peer reviewed literature from many fields. Benestad later responded to McKitrick and Michaels at an online weblog, stating that,

“McKitrick and Michaels claim that I do not dispute their approach (i.e., multivariate regression using economic variables as potential predictors of surface temperature). That claim is both peculiar, and misses the point. A method is only valid when applied correctly. As described, above, [McKitrick and Michaels] failed egregiously in this regard. The purpose of my paper was simply to demonstrate that, whether or not one accepts the merits of their approach, a correct, and more careful, repetition of their analysis alone is sufficient to falsify their results and their conclusions.”

(Benestad, 2004b)

Furthermore, MM are wrong. Methods such as Benestad’s have been used throughout the refereed climate science literature whenever they were relevant. For instance, see the many examples in the Wilks text cited above (1995), which specifically deals with the use of statistical methods in climate science. Lastly, there is an even more fundamental point that goes right back to the very climate science literature MM claim Benestad is out of step with – independent verification from separate data sources. If global change is truly economic rather than climatic, we would not expect to see long-term evidence of it in regions and natural processes that are far removed from economic activity. The refereed literature is replete with data on SST changes, glacier retreat, changing precipitation patterns, ecosystem impacts, and many other effects that are widely distributed in unpopulated areas and not even remotely related to centers of economic activity. Across the board, these results flatly contradict MM’s conclusions. MM mention one such study (Boehm et al., 1998), dismissing it only as being “obscure” without a proper explanation as to why their own results are more trustworthy.

So McKitrick and Michaels’ bombshell paper fails numerous independent tests from alternate regionally and globally distributed data sources that bear directly on their principal conclusion. Beyond that, it had not even been off the presses for 12 weeks before it fell to a standard set of robustness checks that any serious multiple regression model would have to pass, and to a basic conflation of units that would not have been tolerated even in an undergraduate homework assignment. This, and Douglass, Singer, and Michaels’ two cherry-picked analyses of tropospheric trends and AOGCMs, are the basis of their declaration of “victory” over global warming science.

We have to wonder what defeat would look like.

Fu et al. and Climate Change Skeptics

In Part I we saw that MSU Channel 2 receives up to 15 percent of its signal (its raw digital counts) from the lower stratosphere (the 100-50 hPa layer), and thus it very likely underestimates temperature trends in the lower to middle troposphere (the 850-300 hPa layer). Traditionally, this was accounted for by using MSU2 and TLT as complementary lower troposphere products. But while TLT reduces the stratospheric Channel 2 “footprint”, it pays a price in sampling error and contaminating inputs from other sources such as Antarctic sea-ice and melt pools. Qiang Fu and his co-authors developed their method to avoid these problems. By using direct MSU4 temperature and trend data to correct MSU2, they avoid sampling errors associated with off-nadir MSU views and greatly minimize signal contamination from the surface, particularly the sea-ice and melt pool problem affecting the TLT record. When Fu et al. used their method to correct existing MSU products for stratospheric trend aliasing, they found that all existing MSU products were now in agreement with the predictions of AOGCMs - the only remaining exception being the TLT record (which has not yet been corrected for the sea-ice and melt pool problem). Furthermore, the Fu et al. weighting function was based on the radiosonde record (Lanzante et al., 2003) and used that record mainly to derive a correction for the very upper-air layer for which trends have been most monotonic and consistent during the satellite era - the lower stratosphere. As such, it is consistent with that record as well - the observed trend differences between radiosonde products and the Fu et al. trends being likely due to coverage, surface signal contamination, and other factors. Details of the Fu et al. method are discussed in Part I, and the method is derived in its Appendix.
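The correction amounts to a simple weighted combination of the two channels. As a rough sketch – the trend numbers below are invented for illustration, and the weights are approximately the global-mean values reported by Fu et al. (2004):

```python
# Approximate global-mean weights from Fu et al. (2004); the trend
# inputs below are hypothetical values for demonstration only.
a2, a4 = 1.156, -0.153

t2_trend = 0.04   # hypothetical MSU channel 2 trend, deg. K/decade
t4_trend = -0.40  # hypothetical MSU channel 4 (stratospheric) trend

# Because the weight on T4 is negative, the stratospheric cooling that
# channel 2 partially senses is subtracted out of the combined trend.
t_ft_trend = a2 * t2_trend + a4 * t4_trend
print(round(t_ft_trend, 3))  # -> 0.107, larger than the raw T2 trend
```

The negative weight on T4 is the whole point of the method: it removes the stratospheric cooling aliased into channel 2, and it is also the feature that Spencer's criticism, quoted next, takes aim at.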

This delivered yet another serious blow to skeptic arguments, and around the world skeptic forums reacted immediately – once again, with well deserved fear. The Fu et al. result takes much of the force out of the claims of global warming skeptics, who to date have been depending on the surface-troposphere disparity for their case against global warming and the required mitigation efforts. The poison darts began flying within days. Criticisms fell chiefly into two groups – concerns about the functional form of the Fu et al. corrected weighting function WFT, and concerns about the reliability of using statistical methods to derive the T2 and T4 data used with it.

The same week that the Fu et al. method appeared in the pages of Nature, Tech Central Station published an editorial by Roy Spencer of the UAH team criticizing the Fu et al. method and even going so far as to refer to that journal as “gray scientific literature”. According to Spencer,

“The authors, noticing that channel 4 measures the extreme upper portion of the layer that channel 2 measures (see Fig. 1), decided to use the MSU channel 4 to remove the stratospheric influence on MSU channel 2. At first, this sounds like a reasonable approach. We also tried this thirteen years ago. But we quickly realized that in order for two channels to be combined in a physically meaningful way, they must have a large percentage of overlap. As can be seen in Fig. 1, there is very little overlap between these two channels. When a weighted difference is computed between the two channels in an attempt to measure just the tropospheric temperature, an unavoidable problem surfaces: a large amount of negative weight appears in the stratosphere. What this means physically is that any attempt to correct the tropospheric channel in this fashion leads to a misinterpretation of stratospheric cooling as tropospheric warming. It would be possible for their method to work (through serendipity) if the temperature trends from the upper troposphere to the lower stratosphere were constant with height, but they are not.

In this instance, the negative (shaded) area for the Fu et al. weighting function in Fig. 1 would be cancelled out by its positive area above about 200 millibars. Unfortunately, weather balloon evidence suggests the trends change from warming to strong cooling over this altitude range.”

(Spencer, 2004)

Thus, Spencer was arguing for the first criticism. His Figure 1 is reproduced here as Figure 25, modified to reflect my wording rather than his. This figure shows WFT compared with the weighting functions for MSU2 and MSU4. The claim is that because WFT goes negative above 100 hPa, it will inevitably alias spurious warming into the troposphere trend. Spencer argued that the method might work, but only if trends are constant with altitude from the upper troposphere to the lower stratosphere (roughly 300-50 hPa) – which they are not (Spencer, 2004). This would be a valid criticism if the method used WFT strictly for the derivation of MSU2 brightness temperature with the layers above 100 hPa removed. That is not the case.

What Fu and his colleagues actually did can be seen more clearly in Figures 21 to 23. Figure 48 shows Figure 47 with MSU2 color banded according to the layers it detects. The region shown in light orange reflects the uncorrected free troposphere contribution to MSU2. The region shown in light blue reflects the tropopause and lower stratosphere, where 300 hPa can be considered the “lowest approach” altitude for the tropopause and 200 hPa a global mean. Figure 13 (right side) shows 1979-2001 upper-air trends as a function of altitude for several radiosonde products and single-point trends for UAH Version D (Angell, 2003). Similar data are reproduced in Figures 31 and 33 as broad-layer bar graph data for the longer 1958-1997 period using a different set of radiosonde products. It can be seen that the satellite-era trends decrease with altitude. Within the uncertainty ranges shown, they go negative above altitudes of roughly 7 to 9 km, with the global average being around 8 km (the 300-100 hPa layer). Comparing these trends with Figure 48 reveals that for the satellite era, the light blue layer has an overall negative trend and the orange layer a positive one.
Because MSU2 sees the full weighting function of both, it will alias the cooling trends above 300 hPa into the warming trends below. Figure 49 shows Figure 47 shaded to reflect the layer coverage of the Fu et al. weighting function in comparison to its uncorrected MSU2 and MSU4 counterparts. The region shown in dark blue can be expressed in terms of MSU4 and is chosen so that its weighting will integrate to zero with altitude above 300 hPa. Below, the Fu et al. function will have the same weighting that MSU2 would have seen below 300 hPa if the stratosphere were not contributing to its signal (the combined light and dark orange regions). The characterization of this weighting function allows for these two regions to be separately expressed as multiples of T2 and T4 from which the actual free troposphere brightness temperature trend can be derived.
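The arithmetic of the correction can be sketched as a simple linear combination of the two channel trends. This is a minimal illustration, not the Fu et al. code; the coefficient values below are stand-ins close to the widely quoted global values, and the channel trends are hypothetical:

```python
# Sketch of the Fu et al. style correction: the free-troposphere trend
# is expressed as a linear combination of the MSU channel 2 and
# channel 4 layer trends,
#     T_FT = a2 * T2 + a4 * T4
# The negative a4 weight subtracts the stratospheric cooling that
# contaminates the channel 2 signal. (The constant a0 affects absolute
# temperatures but drops out of trends, so it is omitted here.)
# NOTE: coefficients are illustrative stand-ins close to the widely
# quoted global values, not the exact published numbers.

def free_troposphere_trend(t2_trend, t4_trend, a2=1.156, a4=-0.153):
    """Combine MSU2 and MSU4 layer trends (deg K/decade)."""
    return a2 * t2_trend + a4 * t4_trend

# Hypothetical satellite-era trends (deg K/decade):
t2 = 0.04   # raw channel 2: tropospheric warming diluted by stratosphere
t4 = -0.45  # channel 4: strong stratospheric cooling

t_ft = free_troposphere_trend(t2, t4)
print(round(t_ft, 3))  # prints 0.115 – warmer than the raw T2 trend
```

The point of the sketch is simply that a strongly negative T4 trend, entering with a small negative weight, pushes the corrected tropospheric trend up relative to the contaminated raw T2 trend.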

Now it can be seen that Spencer (2004) misunderstood the Fu et al. method. In fact, the method separates the layered trends out of the uncorrected MSU2 signal and accounts for each. The weighting function goes negative above 90-100 hPa because it must do so to prevent stratospheric cooling from being aliased into the free troposphere trend. To his credit, Spencer has relented somewhat since this editorial was published. He is still skeptical of the Fu et al. method, and in particular he is concerned about discrepancies between the Fu et al. free troposphere trends and those observed by other radiosonde products for the same layer – trends that he believes confirm the UAH TLT and TMT records. But he does acknowledge that the method is a useful piece of the puzzle and should be investigated further. Commenting in a more recent editorial at Tech Central Station, he says that,

“As is often the case, the press release that described the new study made claims that were, in my view, exaggerated. Nevertheless, given the importance of the global warming issue, this line of research is probably worthwhile as it provides an alternative way of interpreting the satellite data.”

(Spencer, 2004b)

In a recent interview, Fu indicated that he did not know Dr. Spencer at the time he published, but has since had the opportunity to meet him at a few conferences and engage in some very stimulating and mutually productive discussions about both teams’ methods. “I didn't know Spencer before this,” he said, “but now I've met him at some scientific conferences, and we can talk about the science. At the time, he was so sure we were damn wrong ... Now he says we don't know enough.” (Whipple, 2004).

Another challenge to the Fu et al. method was published in December of 2004 by the journal Nature. Simon Tett and Peter Thorne (hereafter, TT) of the UK Met Office used the Fu et al. method to derive new coefficients and free troposphere trends for the tropics (30 deg. S to 30 deg. N Latitude) during the period 1978-2002 using the HadRT2.1s radiosonde analysis, the ERA-40 reanalysis (Uppala, 2003), and an ensemble of model runs (Tett and Thorne, 2004). These trends, which they denote as Tfjws in contrast with the T850-300 derived by other methods, were then compared to corrected MSU2 trends from UAH Version 5.0 (Christy et al., 2003), RSS Version 1.0, and surface trends. A comparison of their results is given in Figure 28. For non-satellite analyses, surface temperatures were derived from the products indicated. Satellite products were compared to surface trends from the HadCRUT2v dataset. ERA-40 reanalysis based surface trends were derived using zonal averages of 2-meter temperatures over land and SSTs over ocean regions. For their model comparisons TT used an ensemble of 6 runs of the atmosphere-only HadAM3 (Pope et al., 2000) and 4 runs of the coupled ocean-atmosphere HadCM3 model (Stott et al., 2000). Their HadAM3 and HadCM3 modeled results were forced with a suite of natural and anthropogenic inputs as described in the cited sources, and were identical with the exception of two corrections in HadAM3 – one for errors in ozone depletion and one for changes in sulfur cycle forcing (Tett and Thorne, 2004). Based on these results they concluded that,

  • Fu et al. “trained” and tested their MSU2 and MSU4 coefficients (a2 and a4, respectively) using the same radiosonde dataset (Lanzante et al., 2003), obtaining false agreement and overfitting of the data. Their resulting corrections are overly small and result in overly warm free troposphere trends.
  • For the Fu et al. methods to work, stratospheric trends must be relatively stable over the period analyzed, but in fact they are not. In particular, they claim that the lower stratospheric impact of the quasi-biennial oscillation (QBO) will be aliased into Fu et al. derived trends.
  • With the exception of HadRT2.1s, free troposphere temperature trends derived by applying the Fu et al. method to a suite of other upper-air products show worse agreement with observation and larger confidence intervals than does the UAH Version 5.0 TLT product.
  • Trends derived from model runs and those based on Fu et al. corrected observations agree well only for the HadAM3 atmosphere-only run compared with RSS Version 1.0.

From a review of their methods and results, several comments can be made 13.

First, it is odd that TT base their comparison study on the tropics only. This is precisely the latitude band for which lapse rates are largest and trends are most variable for the period they studied. Extant radiosonde and reanalysis products are poorly characterized in this region as well. It is not clear why they did not extend their analysis to include a study of global and high-latitude trends, and they offered no explanation for this omission. Such a study would have been particularly useful because the northern latitudes in particular are where their chosen radiosonde and reanalysis products are relatively well characterized and have good coverage. Furthermore, the high southern latitudes are where we expect the biggest differences between UAH and RSS products prior to correction by the Fu et al. method, and where we expect the largest contamination of the TLT record from sea-ice and summer melt pool signals. A test of the Fu et al. method in these regions would have been far more revealing than the region they chose. TT use ERA-40 for their reanalysis product, and this analysis has made great strides over the earlier ERA-15 product in dealing with issues like sea-ice and snow cover, particularly during the satellite era (Bromwich and Fogt, 2004). Comparisons with this product in these regions might have shed light on potential problems with the TLT record, but were not investigated.

Even so, their criticisms of the tropical record are flawed as well. TT rightly point out that the Fu et al. method is most reliable when stratospheric trends are relatively stationary by region and period. But then they point to the Quasi-Biennial Oscillation (QBO) as evidence that they are not, and claim that Fu et al. are aliasing QBO trends into their MSU2 correction. Figure 8 shows the stratospheric QBO signal compared to monthly anomaly time series for 6 vertical layers averaged over several upper-air products. Included in this comparison are LKS, HadRT, RIHMI, Angell 63, Angell 54, and UAH Versions D and 5.0. All of the time series shown are global, and the QBO signal was determined using 50-hPa zonal wind patterns from radiosonde data at Singapore (Seidel et al., 2004). Because these time series draw upon a variety of products, including both radiosonde and MSU, they are less subject to the idiosyncrasies of any particular dataset, and as they are global in nature they present a better comparison to the Fu et al. methods than the tropical data used by TT. Three things are apparent. First, it can be seen that apart from a slight upturn prior to 1981 (at the beginning of the satellite era) and the upward punctuations of the El Chichon and Pinatubo eruptions, the MSU4 and 50-100 hPa time series are fairly monotonic and stable for the entire period TT examine, so this requirement is met.

It may be argued that the two volcanic events destroy this continuity, but they are also reflected in the tropospheric MSU2 and 850-300 hPa records as proportionally large dips shortly after the stratospheric spikes. Therefore, both layers will reflect this activity in comparable proportions with regard to trend comparisons. Second, a close examination of the stratospheric global MSU4 and 50-100 hPa layer records reveals that at best, the QBO impact on them is barely noticeable. The tropics, where TT chose to do their analysis, are the one region where we expect the most significant QBO impact, but this region tells us the least about the applicability of the Fu et al. method to the global trends it was used for (Seidel et al., 2004). Finally, the QBO time series is highly periodic, and therefore largely self-canceling. Even if it did alias a significant signal into the tropospheric record, that signal would be largely removed by the trending process (Fu et al., 2004b). Furthermore, TT’s criticisms assume that the Fu et al. weighting function goes negative above 100 hPa and will therefore alias QBO effects into the free troposphere record. In fact, this is true only of the Fu et al. global weighting function. The revised Fu and Johanson tropical weighting function does not go negative until around 75 hPa. Figure 29 shows this function compared to its MSU2 counterpart. It is evident that the MSU2 weighting receives more signal from this layer than the Fu et al. weighting, and for the latter the layer above 100 hPa will cancel out while the MSU2 contribution will not (Fu and Johanson, 2004; Fu et al., 2004b). This can even be seen in the global Fu et al. and MSU2 weightings shown in Figure 53.
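The self-canceling behavior of a periodic signal in a least-squares trend can be checked numerically. The sketch below uses a rough, hypothetical QBO-like sinusoid (period and amplitude are illustrative, not measured values):

```python
# A periodic signal contributes almost nothing to a least-squares
# linear trend once the record spans many cycles. Here a hypothetical
# QBO-like sinusoid (~28-month period, 0.5 K amplitude) is fitted
# over a 24-year monthly record.
import math

months = 24 * 12
t = [m / 12.0 for m in range(months)]        # time in years
period_yr = 28.0 / 12.0                      # ~28-month QBO-like cycle
qbo = [0.5 * math.sin(2 * math.pi * ti / period_yr) for ti in t]

# Ordinary least-squares slope (deg K/year)
n = len(t)
mean_t = sum(t) / n
mean_q = sum(qbo) / n
slope = (sum((ti - mean_t) * (qi - mean_q) for ti, qi in zip(t, qbo))
         / sum((ti - mean_t) ** 2 for ti in t))

# The residual slope is tiny next to trends of order 0.01-0.02 K/yr
print(abs(slope) < 0.005)  # prints True
```

Only the partial cycle at the ends of the record contributes to the fitted slope, which is why a long trend window largely removes any aliased periodic signal.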

TT’s reported disparities between Tfjws and T850-300 trends in the tropics are also less revealing than they believe. In addition to the large lapse rates and temporal variability characterizing this region, the tropical tropopause often dips as low as 300 hPa. Tropopause trends are poorly characterized across all upper-air products and can significantly affect lower-altitude trends if the tropopause is not excluded from the sampling (Seidel et al., 2004). For this region, Tfjws is representative of the entire troposphere from the surface up to 100 hPa rather than the 850-300 hPa layer alone. This can be seen clearly in TT’s own dataset: their 1000-100 hPa layer trends agree quite well with their reported Tfjws trends (Fu et al., 2004b). Their ERA-40 derived trend for T850-300 is 0.03 deg. K/decade. The ERA-40 vertical trend profile in this region is revealing. It is positive below 775 hPa, negative between 700 and 400 hPa, and strongly positive between 300 hPa and the tropopause – which itself may occur anywhere between 300 and 100 hPa in this region for the period TT analyze. Therefore, for this region the T850-300 trend may be much smaller than its Tfjws counterpart simply because of the vertical variability of this region (Fu et al., 2004b). But the global record will not reflect this.
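The sensitivity of a layer-mean trend to where the layer boundaries fall can be illustrated with a toy vertical profile. The trend values below are hypothetical, chosen only to mimic the positive-negative-positive tropical structure just described, not ERA-40 data:

```python
# Toy illustration: the same vertical trend profile gives different
# layer-mean trends depending on which pressure layer is averaged.
# The profile is hypothetical, loosely mimicking the tropical
# positive / negative / strongly-positive structure in the text.

# (pressure top, pressure bottom, trend in deg K/decade)
profile = [
    (775, 1000, 0.10),   # warming near the surface
    (400, 775, -0.05),   # mid-troposphere cooling
    (100, 400, 0.15),    # strong warming up toward the tropopause
]

def layer_mean_trend(p_top, p_bottom):
    """Pressure-weighted mean trend over [p_top, p_bottom] hPa."""
    total, weight = 0.0, 0.0
    for top, bottom, trend in profile:
        lo, hi = max(p_top, top), min(p_bottom, bottom)
        if hi > lo:                      # overlap thickness in hPa
            total += trend * (hi - lo)
            weight += hi - lo
    return total / weight

shallow = layer_mean_trend(300, 850)     # radiosonde-style 850-300 layer
deep = layer_mean_trend(100, 1000)       # surface-to-100 hPa layer

print(round(shallow, 3), round(deep, 3))  # prints 0.007 0.054
```

With this profile the 850-300 hPa mean is dragged down by the mid-tropospheric cooling, while the deeper surface-to-100 hPa mean picks up the strong warming near the tropopause, mirroring why Tfjws can exceed T850-300 in the tropics.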

TT state that modeled results agree with Tfjws trends only for the atmosphere-only runs, but once again there are serious omissions. They use HadAM3 (atmosphere-only) and HadCM3 (coupled ocean-atmosphere) forced with natural and anthropogenic inputs for their model comparisons. But like Douglass et al. (2004b), they did so using model components and regional constraints that are not representative of the method they are testing. TT report that while their atmosphere-only and coupled ocean-atmosphere model runs gave similar results, their coupled model runs (HadCM3) yielded a higher range of trends and were consistent only with the corrected Tfjws trend of RSS Version 1.0. However, TT’s HadCM3 coupled model run was not based on a true deep-ocean model, but on HadISST, which is a simple analysis of observed SSTs (Tett, 2004). As such, like the Ocean A component of the GISS SI2000 model, it neglects the moderating effects of deep-ocean latent heat advection and will thus overestimate atmospheric trends. In light of this, it is instructive to compare TT’s use of HadCM3 with that of Douglass et al. (2004b) and the corresponding runs of Ocean A and Ocean B forced GISS SI2000 (Hansen et al., 2002). DEA obtained SST-forced HadCM3 data (Tett et al., 2002) directly from Tett and used it to generate vertical trend profiles for the tropics (30 deg. S to 30 deg. N Latitude). These vertical profiles are directly comparable to the data used by TT, the sole exception being that whereas TT report 1979-2001 trends by layer, DEA truncate their analysis to 1979-1996 so as to create the surface-troposphere trend disparity that their case depends on.

Figure 30 shows DEA’s 1979-1996 vertical trend profile for the same tropical region analyzed by TT (Douglass et al., 2004b). Included is a direct comparison of HadCM3 for the period 1975-1995 and GISS SI2000 forced with Ocean A for a similar period (1979-1998). Like TT, DEA use HadCM3 runs that were forced with natural and anthropogenic inputs, and did the same for their GISS SI2000 results. For this region and these periods, it is evident that both SST-forced models give strikingly similar results, indicating that they are largely comparable for this region and period. Extending the record to 2001 would not be likely to change this result significantly, as both models can be expected to capture the large 1997 ENSO event which dominates this portion of the record. Given the similarities between the two models, it is instructive to consider the impact of replacing the SST-driven Ocean A component of GISS SI2000 that was used by DEA with a true deep-ocean component like Ocean B. Figure 20B shows the difference. The left side shows the 1979-1998 vertical trend profile for the tropics and extra-tropics (40 deg. S to 40 deg. N Latitude – a region slightly wider than that used by TT) that is obtained using Ocean A SST forcing. The right side shows the comparable trend profile obtained from Ocean B forcing. A clear moderating effect of roughly -0.07 to -0.15 deg. K/decade can be seen in the Ocean B run, demonstrating what we saw earlier when examining DEA’s results – the neglect of deep-ocean latent heat advection leads to a model-induced overestimation of atmospheric trends.

It is reasonable to expect similar behavior in HadCM3. Had TT used a true deep-ocean component in their HadCM3 runs, we would expect their modeled tropical trends to be lower by a similar spread. This would have put them in a range where, given the uncertainties in forcing and model component responses, they would be adequate representations of either UAH or RSS corrected tropospheric trends. With regard to the uncertainties inherent in an exercise like this one, note also that neither HadCM3 nor GISS SI2000, with either forcing scenario, reproduces the positive-negative-positive vertical trend variability that is observed in the tropics, as we saw earlier (Fu et al., 2004b). Something approaching this behavior is noticeable in the high southern latitudes (Figure 20B, lower right), but it is not captured in the tropics and extra-tropics. This alone should lead us to exercise caution when using model runs for comparisons with observation in localized latitude bands like the tropics that are highly variable and not well characterized in upper-air products. TT raise many important questions regarding the use of the Fu et al. method and have shown that care must be used in applying it. But their specific criticisms are at best a poor representation of how the method is used, and thus they do not stand up to scrutiny.

With regard to tests of the Fu et al. method, there is one more that needs to be considered. In the same December 2004 issue of Nature, alongside TT, Nathan Gillett and Andrew Weaver of the University of Victoria, BC, and Ben Santer (hereafter, GWS) published the results of their application of the Fu et al. method to AOGCM-derived upper-air temperatures for the period 1958-1997 (Gillett et al., 2004). GWS used global upper-air temperatures from a four-run ensemble of the DOE PCM coupled ocean-atmosphere model forced with natural and anthropogenic inputs (Santer et al., 2003c; Washington et al., 2000) and used the Fu et al. method to derive values for the a0, a2, and a4 coefficients. These were then applied to MSU2 and MSU4 brightness temperature trends that had been obtained by applying the respective weighting functions to PCM temperatures and using least squares methods to obtain the corresponding layer trends. The resulting TFT (free troposphere) trends were then compared with the equivalent T850-300 and TLT trends that had been derived from PCM. Results are shown in Figure 31.

GWS found that the Fu et al. derived TFT trends agree with the model “observed” T850-300 trends to within +/- 0.016 deg. K/decade. Similar agreement was found for the northern and southern hemispheres and the tropics (Gillett et al., 2004). It is interesting to note that GWS’s TFT trends also agree with their simulated TLT trends for the same period and regions, indicating that the two do reflect similar upper-air layers. The significance of this test as compared to others is that the PCM modeled climatology is precisely known, and is therefore not subject to the observational uncertainties that plague the existing satellite, radiosonde, and reanalysis records (e.g. sampling noise, incomplete coverage and temporal record, differences in merge method, etc.). Whether or not it is accurate in its finest details compared to observations is beside the point. PCM does in fact reproduce the large-scale behavior of the surface and troposphere and captures most of its more significant features. Therefore, it represents a valid “upper-air” environment against which the Fu et al. method itself can be tested. Because the objective of the GWS study was to test the method rather than to reproduce observed climate variables, this agreement demonstrates the robustness of the Fu et al. technique.
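The logic of this kind of consistency test can be sketched with a toy version: assign known trends to atmospheric layers, form synthetic channel trends from overlapping weighting functions, and check that the linear combination recovers the known tropospheric trend. The weights and trends below are hypothetical, not PCM or MSU values:

```python
# Toy version of a GWS-style consistency test. In a "model atmosphere"
# whose layer trends are known exactly, synthetic MSU2- and MSU4-like
# channel trends are built from hypothetical weighting functions, and
# a linear combination of the channels is checked against the known
# tropospheric trend.

# Known layer trends (deg K/decade) in a 3-layer toy atmosphere
trop, tropopause, strat = 0.12, 0.00, -0.40
layers = (trop, tropopause, strat)

# Hypothetical channel weights over (troposphere, tropopause, stratosphere);
# each sums to 1 like a normalized weighting function.
w2 = (0.75, 0.15, 0.10)   # MSU2-like: mostly troposphere, some stratosphere
w4 = (0.00, 0.20, 0.80)   # MSU4-like: mostly stratosphere

t2 = sum(w * x for w, x in zip(w2, layers))   # contaminated channel-2 trend
t4 = sum(w * x for w, x in zip(w4, layers))   # stratospheric channel trend

# Choose coefficients so the combined weighting gives unit weight to
# the troposphere and zero net weight to the stratosphere:
#   a2 * 0.75 = 1      and      a2 * 0.10 + a4 * 0.80 = 0
a2 = 1 / w2[0]
a4 = -a2 * w2[2] / w4[2]

recovered = a2 * t2 + a4 * t4
print(round(recovered, 4))  # prints 0.12 – the known tropospheric trend
```

Because the "truth" is known exactly, any failure of the combination to recover the tropospheric trend would be attributable to the method itself rather than to observational error, which is the essential point of testing against a model climatology.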

Thus, though a number of challenges have been made to the Fu et al. method, none of them withstands scrutiny. Given the relative stability of the stratospheric record and the independent tests of the method’s robustness using modeled and multi-dataset applications, what criticisms remain regarding the statistical characterization of the method’s trend analysis are not likely to stand the test of time. As the quality of radiosonde, rawinsonde, and AMSU products grows, better characterizations of WFT and TFT will emerge that will allow for more complete investigations of the Fu et al. weighting function and TLT products. There is every reason to expect that future investigations will continue the general trend of closing the gap between surface and upper-air products.

But if history gives any indication of what to expect, it is even more certain that the Fu et al. methods will not silence global warming skeptics any time soon. The same criticisms will likely continue and, if anything, they will become even more strident – and even more irrelevant. As increasing numbers of climate scientists are now acknowledging, this is the last piece of the puzzle that makes the perceived disparity between surface and troposphere trends a red herring – and with it, the case of global warming skeptics.


The questions discussed here have never been more important. Advocacy groups skeptical of climate change have used carefully edited presentations of the troposphere temperature record to reposition global warming as “myth”. Their arguments have been enthusiastically embraced by the growing ranks of lawmakers in Congress who are only too happy to base U.S. domestic and foreign policy on them. To this end, they have enjoyed much success and undermined many badly needed global warming mitigation efforts. In fact, skeptic arguments based on the troposphere temperature record alone may well comprise the bulk of the Bush Administration’s rationalization for not supporting the Kyoto Protocol. Until the remaining upper-air questions are answered more convincingly, this state of affairs will likely continue.

The potential impacts of climate change are not known with certainty, but as more is learned it is becoming clearer that at best they will be troublesome, and at worst potentially catastrophic. Abrupt climate change is also a possibility that cannot be ruled out (NRC, 2002). Mitigation of these consequences will require the nations of the world to shift their economies away from greenhouse gas emitting technologies and climate disruptive land use activities. These will be long-term changes, and the longer they are delayed, the more risk there will be for future generations. The shift toward wiser, saner economies must begin now. Today’s choices require today’s courage, and that requires the wisdom and foresight to embrace the well-being of our children, and their children, as today’s burden – even if it means costly sacrifice. Yet few things are as unnatural to human beings as delayed gratification, especially when the gratification is a gift to someone else rather than us. This weakness is exacerbated almost beyond remedy when individual conscience becomes collective conscience. Within communities, individuals can dilute personal responsibility with group imperatives and at the same time insulate themselves from the impacts of group policies making denial not only easy, but convincing. It’s no surprise that today’s climate change warnings and the attendant responsibilities are treated with fear and loathing by many, and efforts to replace them with more comforting ideas are widespread.

It’s regrettable, but in recent years science has become highly politicized in the world’s wealthier nations. Nowhere on earth has this been more prominent than in the United States. At the time of this writing (Jan. 2005) the U.S. has just re-elected George W. Bush to another 4-year term as president, and increased the already considerable numbers of its most ultra-conservative factions. This has left America with a Congress and presidential administration that are indisputably more hostile toward science and the environment than any other in the nation’s history. Never before has the flow of accurate scientific and environmental information to the general public been more distorted, nor destructive environmental policy more rampant. The scientific community has largely been ill-equipped to prevent this.

Science builds human knowledge one brick at a time, carefully cross-examining its own methods at every step. By its very nature, it is at its best when questions loom larger than answers. Mysteries are investigated. Discoveries are shared openly and subjected to thorough review by peers. Hard questions are asked, ideas are tested by other experiments, other observations and analyses are brought to bear on the challenges, and the uncertainties are worked out over time. Only that which withstands the trial by fire of the peer-review process survives. Though enlightening, this process is not a swift one and it is vulnerable at every point to the lightning rod of emotional appeals. That which has been carefully crafted to speak with passion and certainty to our deepest fears today is far more likely to find a place in our hearts than any seemingly cold and distant knowledge base that offers a return far in the hazy distance – even if the knowledge is far more reliable 14.

Industry and ultra-conservative advocacy groups have been able to capitalize on these weaknesses. We have already seen numerous instances of where these groups have carefully, and perhaps even deliberately cherry-picked analyses of the upper-air record to yield the conclusions they were after. None of these arguments stands up to even a modest level of scrutiny, and would certainly not survive the peer-review process. It is startling that Douglass et al. (2004, 2004b) even managed to get so far as to publish in Geophysical Research Letters. Unfortunately, these flaws are quite easy to camouflage. By packaging them in visually and emotionally stunning “sound science” presentations and going directly to lawmakers and the general public, advocacy groups have been able to make end-runs around the scientific peer-review process and avoid the trial by fire that would expose their failings. By the time the scientific community can respond within the bounds of its own thorough and thoughtful methods, the fire is already spreading. Consider for instance the attacks on the IPCC. In their Climate Change 1995 report they observed that,

“The balance of evidence suggests a discernible human influence on global climate.”

(IPCC, 1996)

Note the tentativeness and restraint in this statement (e.g. “balance of evidence”, “suggests”) – despite the fact that it was based on more than 20,000 peer-reviewed papers in a number of climate-related fields. Contrast the tone and content of that statement with the following one made by Douglass, Singer, and Michaels at Tech Central Station, where they announced their 2004 GRL papers,

“How many times have we heard from Al Gore and assorted European politicians that "the science is settled" on global warming? In other words, it's "time for action." Climate change is, as recently stated by Hans Blix, former U.N. Chief for weapons detection in Iraq, the most important issue of our time, far more dangerous than people flying fuel-laden aircraft into skyscrapers or threatening to detonate backpack nukes in Baltimore Harbor.

Well, the science may now be settled, but not in the way Gore and Blix would have us believe. Three bombshell papers have just hit the refereed literature that knock the stuffing out of Blix's position and that of the United Nations and its Intergovernmental Panel on Climate Change (IPCC)…”

“So, to all who worry about global warming, to all who think that people threatening to blow up millions to get their political way is no big deal by comparison, chill out. The science is settled. The "skeptics" -- the strange name applied to those whose work shows the planet isn't coming to an end -- have won.”

(Douglass et al., 2004c)

Here there is neither restraint nor humility. We’re told outright that the skeptics have “knock[ed] the stuffing out of” the IPCC’s conclusions. As if this statement were not dramatic enough, it is accompanied by a thinly veiled attempt to somehow relate the IPCC’s conclusions to the 9/11 attacks and WMDs in Iraq. Of course, the two have nothing to do with each other. Yet by making this association in a public and highly emotional appeal, DEA can tap directly into the rage and fears of readers – and lawmakers – and win a foothold in hearts and minds before the merits of their 3 “bombshell” papers are even examined. We have already seen how flawed those papers are. This is exacerbated by the fact that at least some of the advocacy group consultants making claims like these do have scientific credentials, and these groups have grown adept at claiming a scientific “consensus” for their views that does not exist. Even today many are still claiming that over “17,000 scientists and engineers” believe global warming is not happening – despite the fact that this claim originated with a petition project and an unpublished paper that were discredited almost as soon as they came out, that the paper’s authors were found to have plagiarized the format of a journal that had not reviewed or published it, and that nearly all of the signatories have no background in any climate science related field (see Footnotes 2 and 3). Despite the shortcomings in these arguments and the checkered history of those who have advanced them, advocacy groups know that once they have been actively promoted to lawmakers and the general public in highly partisan forums like Tech Central Station, it is already too late. The bomb has already exploded and the flaws will not be investigated.
The problem is made even worse by the fact that, in the interest of “balanced coverage”, media outlets will typically give the same degree of coverage to advocacy group consultants that they give to legitimate climate scientists – giving both the appearance of being equally credible. As admirable as this goal is, it is poorly suited to scientific subjects because popular media forums are not generally equipped to discriminate legitimate science from pseudoscience – a fact that industry and ultra-conservative front groups are only too happy to take advantage of 15.

Tactics like these have allowed climate change skeptics and their benefactors to enjoy successes far beyond what the merits of their arguments justify. As upper-air products are refined and improved, and the knowledge base grows, the effectiveness of these tactics will diminish. In fact, despite their declarations of victory, climate change skeptics have retreated considerably from many of their earlier pronouncements. But until the upper-air knowledge base is greatly expanded from its present state, these tactics will continue to do much damage. It is of the utmost importance that the remaining questions of surface-troposphere temperature trends be addressed, not only so our knowledge of the impacts of our activities can be clarified, but so that impediments to sound environmental and economic policies can be removed.

It is also more important than ever before that scientists take their discoveries directly to the public. We need more popular science writers who can make the discoveries of climate science both interesting and accessible to the general public. The anti-climate science flaws described in this paper need to be exposed more visibly, and more engagingly, than they typically have been. Even simple searches reveal that anti-environmental special interest groups have largely dominated the Internet. Effective challenges to this must be put forth. The recent appearance of the RealClimate web site is a desperately needed step in this direction and has already been an incalculable blessing in the short time it has been up. Such activities need to be expanded and should include print and broadcast media as well. It is true that activities like these detract from the time available for badly needed research, but we have reached a point where they are indispensable. It may sound somewhat trite to say so, but it is easy to forget that we do not own the earth – it is on lease to us from our children and grandchildren. May history find that we were faithful to them in our stewardship.


  1. Most natural phenomena respond to being disturbed – or “forced” – as integrated systems rather than as the sum of their individual parts. When “kicked”, the response is delayed in proportion to the system’s generalized “mass” (that is, how much “inertia” the system has in the relevant response variables with respect to how they are forced) and the relative “softness” with which the forcing is transmitted. This can be seen most clearly in the response of a weight to a force imparted by a spring. A baseball dragged behind a truck by a steel rod will move the instant the truck does. But if a bowling ball is dragged by a rubber band, there will be a time delay after the truck starts moving before the bowling ball responds. In the case of the earth’s climate, the forcing is thermal (from the sun, plus anything terrestrial – like greenhouse gases – that increases the efficiency with which the sun’s energy is deposited here). The largest reservoirs of thermal “inertia” are the world’s oceans, whose heat capacity is enormous in proportion to the rate at which the sun can deposit energy into them, and the “transmission” of the sun’s forcing is via atmospheric and terrestrial systems whose ability to conduct heat is small compared with both the forcing itself and the oceans’ capacity to temporarily absorb it. In other words, the world’s oceans are thermal “bowling balls” and the atmosphere is a thermal “rubber band”. This is why, even though greenhouse gases have been forcing the earth’s biosphere for close to two centuries, the response to this forcing has only become noticeable in recent decades.
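    The delayed response described above can be sketched numerically as a first-order lag, dT/dt = (F - T)/τ, where τ stands in for the system’s thermal “inertia”. This is an illustrative toy, not a climate model; the function name and parameter values below are my own.

```python
def lagged_response(tau, forcing=1.0, dt=0.1, t_end=10.0):
    """Integrate dT/dt = (forcing - T) / tau with forward Euler.

    tau plays the role of thermal "inertia": the larger it is, the
    longer the delay before the response approaches the forcing.
    Illustrative values only -- no physical units are implied.
    """
    t, temp = 0.0, 0.0
    while t < t_end:
        temp += dt * (forcing - temp) / tau
        t += dt
    return temp

# A baseball on a steel rod (small tau) is nearly equilibrated with the
# forcing over the interval; a bowling ball on a rubber band (large tau)
# has barely begun to respond to the same forcing.
fast = lagged_response(tau=0.5)
slow = lagged_response(tau=50.0)
```

    Run with these values, `fast` approaches the full forcing while `slow` remains a small fraction of it – the same qualitative behavior that lets a two-century greenhouse forcing produce a response that only becomes conspicuous decades later.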
  2. This paper, titled “Environmental Effects of Increased Carbon Dioxide”, has an interesting history. The paper was prepared as a joint project of the Marshall Institute and a tiny anti-environmental front group called the Oregon Institute of Science and Medicine, based in Cave Junction, Oregon. In early 1998 the paper was mailed to thousands of climate scientists and meteorological professionals, along with a reprint of a 1997 Wall Street Journal editorial titled “Science has spoken – Global warming is a myth” and a plea to sign a petition calling for the U.S. to withdraw support for the Kyoto Protocol. The effort has since come to be called the Global Warming Petition Project. The paper attracted immediate attention because, even though it had never been published or peer-reviewed, it was printed in a format that bore a striking resemblance to that of the Proceedings of the National Academy of Sciences. The letter and editorial it was sent with included among their signatories Frederick Seitz, a former president of the NAS. Shortly thereafter, the NAS issued a press release dissociating itself from the paper and the petition project and strongly condemning its contents and deceptive publishing format. The incident ranks as one of the larger plagiarism scandals of the last decade.
  3. The Robinson et al. paper accompanying the Global Warming Petition Project was originally released in 1998. For its MSU data sources, the paper cites a reference to UAH Version C in Christy and Braswell (1997) and UAH Version A (Spencer and Christy, 1990). They did not cite Version C more directly (Christy et al., 1998), a paper which specifically stated that at least one major source of error in the MSU record, spurious cooling due to orbital decay (Wentz and Schabel, 1998), had not been accounted for in the analysis because the effect had been discovered just after the paper had returned from galley printing. UAH Version D did take this effect into account, and ended up with a global tropospheric trend significantly larger than that reported by Robinson et al. (Christy et al., 2000). Even though the corrections were well known at the time, Robinson et al. made no attempt to address them. Another typical example can be seen in an Aug. 1, 2003 editorial for Fox News by anti-environmental lobbyist and commentator Steven Milloy. He wrote, “Of course, it’s not even clear that any measurable ‘global warming’ has really occurred, much less that it’s human-induced. Satellite and weather balloon measurements of atmospheric temperatures since the 1970s actually indicate slight cooling to no change” (Milloy, Aug. 1, 2003, my emphasis). Milloy, of course, offers no proper citation for this (he seldom does for any of his claims), but once again the reference appears to be to UAH Version C (Christy et al., 1998). He shows no evidence of having made any effort to acquire more recent research, and Fox News certainly did not require him to.

    In both cases, not only were these remarks based on obsolete data, but updated information was easily available that even a substandard attempt at scholarship would have uncovered. The MSU data Milloy based his statements on were over six years old and had been through no fewer than two full revisions when he used them. Robinson et al. did not even bother to cite the actual source of their data, or read it closely enough to cite another well-publicized paper directly relevant to its conclusions (Wentz and Schabel, 1998). Furthermore, at the time of this writing the Robinson et al. paper is still being circulated online at the OISM web site, citing the same obsolete UAH products, the most recent of which is three revisions and eight years old. No attempt has been made to correct the errors in this paper or to update it in any way. Neither Milloy nor Robinson et al. considered Prabhakara et al. (1998; 2000), Vinnikov and Grody (2003), or any RSS product. Scholarship this sloppy would be inexcusable in a high school term paper, much less in publications that claim to meet high standards of scientific and/or journalistic professionalism. Yet it is typical of popular industry and ultra-conservative forums. It is perhaps no accident that, of the most watched cable news outlets today, Fox News is the only one that does not have a Science section at its web site or any team specifically dedicated to addressing scientific issues.
  4. The Greening Earth Society is an anti-environmental front group started in 1998 by the coal-fired power interest Western Fuels to convince the public that “using fossil fuels to enable our economic activity is as natural as breathing". Today Western Fuels continues to be their largest benefactor, and a large portion of their budget is devoted to blocking global warming science and mitigation efforts.
  5. Sallie Baliunas, of the Harvard-Smithsonian Center for Astrophysics, is one of the most prominent, and prolific, of today’s professional global warming skeptics. For at least 10 years, she has been a consultant and/or a contributing writer for almost every major industry and far-right front group in existence that has waged war against mainstream climate change science. While she does admit that global warming is happening, she believes that it is driven primarily by solar forcing rather than human activities, and that its effects will be either minor or non-existent. She is also one of the co-authors of the infamous Robinson et al. (1998) paper that accompanied the Global Warming Petition Project. The circulation of the paper and petition had been orchestrated by the Marshall Institute and a tiny far-right think tank called the Oregon Institute of Science and Medicine (OISM), based in Cave Junction, Oregon. The paper, and the petition that accompanied it, led to one of the more prominent plagiarism scandals of the ’90s when it was published in a print format nearly identical to that of the Proceedings of the National Academy of Sciences, a peer-reviewed journal that in fact had never published it. On April 20, 1998, shortly after the paper and the petition project were made public, the NAS issued a press release strongly condemning the paper and its deceptive publishing format and distancing itself from its conclusions. That statement can be read online at the National Academies web site. To date, none of the paper’s authors, including Baliunas, have retracted it or updated its conclusions in any way. See Footnote 3 above.
  6. The NCEP/NCAR Reanalysis is a composite upper-air analysis product containing several meteorological parameters combined in a global spatial grid of 2.5° x 2.5° (latitude x longitude) resolution from the surface up to the 10 hPa level. It uses data from land and ship based measurements of temperature, wind and humidity, weather forecasts, MSU satellite data, and rawinsonde data (that is, data from radiosondes that have been tracked by radar or radio-theodolite to obtain wind speed and direction). These data sources are tied together by an AGCM (Atmospheric General Circulation Model – no ocean coupling) run in a “frozen” state to evaluate upper-air temperature, pressure, wind, and humidity from 1948 to the present. The MSU data are used to provide weekly raw “soundings” for the Reanalysis. They are not actual weighted brightness temperature measurements of the sort used in upper-air MSU products like those of UAH and RSS, and are independent of those products. Because it is heavily dependent on model based extrapolations from global rawinsonde data, the NCEP/NCAR Reanalysis cannot be considered as independent of the radiosonde record.

    This Reanalysis product has proven to be a valuable tool in many upper-air studies because of its reliance on multiple datasets and the stability of the AGCM that ties them together, minimizing the impact of flaws in any one dataset. However, like other upper-air products it too has difficulties that limit its usefulness for studies of the troposphere and lower stratosphere. These include changes in synoptic land station and ship observation records, contamination of some of its data by surface snow and sea-ice albedo, problems accounting for some regional weather patterns such as the annual Indian monsoon season, and all the same limitations of coverage and record continuity that plague the radiosonde record. It is also subject to many of the same issues facing AOGCMs, which can be more problematic here in that it is being used for fine-detail extrapolation of in situ data, whereas AOGCMs are typically used only for large-scale predictions of regional and global upper-air trends. The particular version used by Douglass et al. (2004) for their intercomparison study is a recent update of the Reanalysis that is based on 2-meter resolution vertical layer rawinsonde readings. The original NCEP/NCAR Reanalysis product is best described in Kalnay et al. (1996), and the update used by Douglass et al. (2004) is described in Kanamitsu et al. (2002).
  7. Dept. of Physics, University of Rochester, N.Y.
  8. S. Fred Singer, a retired atmospheric physicist formerly with the University of Virginia. Early in his career, he was best known for his work developing upper-air ozone detection instrumentation. After retiring he founded the Science and Environmental Policy Project (SEPP), an anti-environmental “think tank” funded by industry and ultra-conservative interests, where he currently serves as Director (his wife is the organization’s Executive Vice President as well). For well over a decade Singer has been one of the most strident and publicly active of the more prominent global warming contrarians. The SEPP has actively opposed nearly all mainstream environmental science in a wide range of fields including global warming, ozone depletion, pollution, pesticides, and many other issues of interest to its benefactors. SEPP was started in 1990 with seed capital and office space provided by the Unification Church (the “moonies”) that had been funneled through a Washington DC based church front group called the Washington Institute for Values in Public Policy. Since then SEPP has received extensive funding from the fossil fuel, coal-fired power, and automotive industries among others as well as ultra-conservative foundations. Singer has also offered his services to the tobacco industry as Chief Reviewer of the report “Science, economics, and environmental policy: a critical examination” that was published by the Alexis de Tocqueville Institution (AdTI) where at the time he was a Senior Fellow. This report was part of an attack on EPA regulation directed at environmental tobacco smoke, and had been funded by the Tobacco Institute. 
Though he is commonly praised as one of America’s most “eminent” scientists by the Far-Right press, it has been some time since Singer has actively participated in the scientific peer-review process beyond critiques of mainstream work submitted as letters or comments (for instance, though he is commonly cited as an “ozone science” expert, he has not published in that area since 1971). His recent co-authorship of two papers in Geophysical Research Letters with Patrick Michaels and David Douglass (Douglass et al., 2004; 2004b) is the exception rather than the rule. In addition to the SEPP, Singer has also been a consultant and/or contributing fellow to the Competitive Enterprise Institute, the National Center for Public Policy Research, Tech Central Station, the Cato Institute, and a number of other industry and ultra-conservative think tanks and front groups. For more on Singer and the SEPP, see Gelbspan (1998; 2004) and Beder (1998; 1999).
  9. Patrick Michaels, currently a research professor of environmental sciences at the University of Virginia, is another of the world’s most prominent professional global warming skeptics. Michaels, whose PhD is in ecological climatology, is probably best known as the editor of the World Climate Review and its successor, the World Climate Report. Both have been funded and published by the corporation Western Fuels to erode public confidence in all climate change and pollution mitigation efforts. On comparatively rare occasions, Michaels has been known to publish in peer-reviewed journals, his latest efforts being co-authorship of the Douglass et al. papers (2004; 2004b) that appeared in the summer of 2004 in Geophysical Research Letters and his paper on global warming “economic signals” (McKitrick and Michaels, 2004) published at the same time in Climate Research. But like Fred Singer and nearly all other prominent global warming skeptics, his contributions to the peer-review process are few and far between. Fewer yet are contributions that have survived later scrutiny. An earlier paper that Climate Research published by Willie Soon and Sallie Baliunas (also industry-funded professional global warming skeptics) led to a scandal involving that publication’s review process, resulting in the resignation of several board members including Editor-in-Chief Hans von Storch. Michaels has on several occasions given testimony before Congress containing numerous errors regarding data, analysis, and in particular, the use of AOGCMs. In one such testimony he presented the results of AOGCM research he had done to disprove 20th century global warming that contained basic mathematical errors – in particular, a failure to appreciate that model forcings are not linear and will impact phase lags in model response (a misunderstanding of basic system dynamics that would be unacceptable on an undergraduate-level physics exam). For more on this incident, see the appendix of Gelbspan (1998).
During his skeptic career, Michaels has been involved with numerous industry and ultra-conservative think tanks and front groups including Tech Central Station, the Cooler Heads Coalition, the Greening Earth Society, the Cato Institute, the Heritage Foundation, Consumer Alert, the Advancement of Sound Science Coalition (TASSC – a tobacco industry front started by Philip Morris and commentator/lobbyist Steven Milloy to combat the science relating second-hand smoke to lung cancer), the American Policy Center, and many more. Over the years he has received funding from the fossil fuel, auto, coal-fired power, and mining industries as well as a variety of ultra-conservative foundations. He has also been a resource for a number of front groups for the “Wise Use” movement advocating anti-environmental and property rights extremism, including People for the West. For more on Michaels, see Gelbspan (1998; 2004) and Beder (1998; 1999).
  10. In January of 2003 Climate Research published a paper by global warming skeptics Willie Soon and Sallie Baliunas of the Harvard-Smithsonian Center for Astrophysics in which it was claimed that the “Hockey Stick” graph – the plot of global surface temperature vs. historical time that shows a sharp rise at the end of the 20th century that has no precedent during the last 1000 years – was flawed, and that the late 20th century warming was not at all unusual for this period (Soon and Baliunas, 2003). The paper presented a literature review rather than original research, and examined a wide range of temperature proxy records including data from ice cores, corals, tree rings, and more. It eventually came out that the paper had not been submitted through the regular channels but had instead been sent directly to Chris de Freitas, an editor at Climate Research and a known global warming skeptic. De Freitas then sent the paper out for review by people of his own choosing, giving Soon and Baliunas an end-run around more rigorous channels. After publication the paper was found to be riddled with errors, including partiality with data records, inappropriate data comparisons, a systematic conflation of humidity and temperature data, and flawed analysis. Almost immediately ultra-conservative politicians in the U.S., few of whom had any science background at all, much less any background in paleoclimatology, proclaimed the study as a triumph of “sound science” over “eco-extremism”. Senator James Inhofe (R-OK) even went so far as to say the paper had created a “paradigm shift” in climate change science. To no one’s surprise, the Bush administration jumped on the bandwagon as well, and even attempted to force an edit of an Environmental Protection Agency report to include reference to the Soon and Baliunas paper.
The scientific community, appropriately alarmed by the spectre of political end-runs around the peer-review process making their way into public policy, published a number of rebuttals. Several editors at Climate Research, including Editor-in-Chief Hans von Storch, eventually resigned, and the journal’s publisher, Otto Kinne, finally admitted that the paper was seriously flawed and never should have been published (Kinne, 2003).
  11. SHAZAM is an econometrics program originally developed at the University of British Columbia (copyright K.J. White) to handle large linear and non-linear regression problems of multiple varieties. It has its own user interface and language and is widely used for problems of the sort addressed by MM. It is available for multiple operating system environments, but for problems of the magnitude tackled by MM’s analysis it is typically run in a Unix environment with a FORTRAN or C compiler. More information about SHAZAM, including the SHAZAM User’s Guide, is available from the SHAZAM web site; with a compatible input data file, the program can even be run over the Internet.
  12. The various SHAZAM input command and output files from MM’s paper, along with a zipped folder of their Soviet station data and the paper’s abstract, can be downloaded from McKitrick’s U. of Guelph web site. At the time of Lambert’s investigation, the input file posted at this page was McKitrick’s original input .dif file containing the erroneous latitude inputs. At the time of this writing (Dec. 2004) that file has been removed and replaced with a corrected one, along with a correction notice in PDF format in which McKitrick presents the impact of the data corrections on his results and argues that it is “small”. More about Lambert’s results and McKitrick’s responses to them can be found at Lambert’s online weblog Deltoid. Economist John Quiggin of the University of Queensland, Australia, later duplicated Lambert’s findings and provided other critiques of MM’s work; his comments can be found at the Crooked Timber online weblog. Both links were available as referenced at the time of this writing in Dec. 2004.
  13. It is important to note here that Tett and Thorne are not climate change skeptics. In particular, they are definitely not industry and Far-Right funded professional skeptics like most of the others discussed in this paper. Unlike these, who operate mainly on the scientific fringe, and for entirely ideological reasons, Tett and Thorne have distinguished themselves as among the world’s most important contributors to the subject of upper-air dynamics. Though their treatment of the Fu et al. methods is, in my opinion, lacking in some respects, they have brought up some perfectly valid concerns that rightly should be addressed (Part I of this paper discusses their arguments as well). Their comments are discussed here, in a skeptic rebuttal paper, only because skeptics can, and likely will, take advantage of their remarks in misleading ways for their own purposes.
  14. The concept of certainty is crucial here. In public discussions of scientific and environmental issues like global warming, endless confusion results from fundamental misunderstandings of this word. When scientists speak of “uncertainty” they are referring to something very specific - the standard errors and/or confidence intervals on their data. In other words, how accurate their measurements are and within what spread of values the results are known to fall. As I write these words I do not know to within a few feet how far away the Eiffel Tower is from where I am now sitting. But I do know with absolute certainty that it is more than 6000 miles and less than 10,000. By contrast, in popular usage, “uncertainty” is usually taken to mean lack of knowledge. If it’s uncertain, then I don’t really know it at all. This is false. I do know how far away the Eiffel Tower is! I just don’t know it with absolute precision. This confusion allows advocacy groups to reposition solid scientific results as though they were mere guesses that had no basis in observation.
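    The scientific sense of “uncertainty” described above is just a computed spread of values. As a minimal sketch (the function name and the sample figures are hypothetical, chosen only to echo the Eiffel Tower example), a standard error and approximate 95% confidence interval on a mean can be computed like this:

```python
import statistics

def confidence_interval(sample, z=1.96):
    """Approximate 95% confidence interval on the sample mean.

    Scientific "uncertainty" is this spread of values, not an absence
    of knowledge: the quantity is known, just not to infinite precision.
    """
    n = len(sample)
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / n ** 0.5  # standard error of the mean
    return mean - z * se, mean + z * se

# Hypothetical distance estimates (miles) to a distant landmark:
estimates = [7900, 8100, 8000, 7950, 8050]
low, high = confidence_interval(estimates)
# The interval is narrow; the distance is certainly known to lie well
# inside the much wider bounds (6000, 10000), even though no single
# exact value is claimed.
```

    The point of the sketch is that the result is a bounded range, not a shrug: repositioning such an interval as "they don't really know" misrepresents what the numbers say.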
  15. The media’s emphasis on balance is both necessary and admirable. But it is meant to guarantee equal exposure to all beliefs and opinions, not all statements. It is ill suited to matters of science where data and evidence have been brought to bear and consensus positions have been reached by communities of professionals dealing with a large knowledge base. There is a reason why institutions do not grant “equal time” to flat earth theories in geography and astronomy classes, even though these beliefs have their proponents. Though this is less obvious in debates about climate change, it is nevertheless every bit as real, and even more problematic, precisely because it is not as obvious and far more is at stake. Popular media outlets are not generally equipped to discriminate legitimate science from pseudoscience. As such, they are particularly vulnerable to being swayed by any organization that can field someone with a scientific degree, or even a largely arbitrary title like “Senior Fellow”, even though the individual in question may never have published in the field they are speaking about, or may have had nearly every contribution they made overturned by later research. Far Right advocacy groups have mastered the art of taking advantage of this to garner for themselves a legitimacy in the eyes of the general public, and in Congressional circles, that they would never be able to sustain if it were subjected to proper scientific scrutiny.
  16. This paper deals mainly with professional skeptics and industry and Far Right funded front groups. For reasons which should be obvious, these groups and their hired guns draw almost exclusively on UAH upper-air products for their arguments. Because of this, at the risk of being overly repetitive, it is necessary to continually reiterate one important point: None of the criticisms in this paper of skeptic front groups or their hired consultants should imply that John Christy, Roy Spencer, or any member of the UAH team are fringe scientists of the same sort. It is true that both are well known climate change skeptics and have contributed to various ultra-conservative front group forums as global warming speakers or writers, including some that are discussed in this paper. Christy has consulted for the Cato Institute, the Competitive Enterprise Institute, and the Hoover Institution, and been a guest speaker at some conventions funded by similar groups. Likewise, Spencer has contributed to Tech Central Station (as we saw above) and other similar forums.

    All similarity ends there however. Christy and Spencer have never taken funds from ideologically driven front groups or allowed their beliefs to demonstrably impact the quality of their work – all of which has been peer-reviewed, properly published, and subjected to the highest standards of quality. They regularly share their results with other teams, including those who do not necessarily share their conclusions, and ruthlessly screen their own work for any omission or failing, updating it whenever necessary. They have repeatedly shown themselves to be willing to modify their conclusions whenever truly compelling evidence was put before them. Whatever their personal beliefs about global warming, or whatever contributions they may have made to Far Right front groups, this puts them in a completely different class than the skeptic front groups and professional consultants that are the main subject of this work – who are ultimately pursuing an economic and/or ideological agenda for their own benefit rather than the advancement of science.


AOGCM   -   Atmosphere-Ocean General Circulation Model.

AMSU   -   Advanced Microwave Sounding Unit.

AVHRR   -   Advanced Very High Resolution Radiometer.

CARDS   -   Comprehensive Aerological Reference Data Set.

CMAP   -   Climate Prediction Center Merged Analysis of Precipitation.

CPC   -   Climate Prediction Center. NOAA/National Weather Service.

CRU   -   The Climatic Research Unit. University of East Anglia, Norwich, U.K.

DEA   -   Douglass et al.

ECMWF   -   European Centre for Medium-Range Weather Forecasts.

ENSO   -   El Niño Southern Oscillation.

ERBE   -   Earth Radiation Budget Experiment.

FWHM   -   Full Width Half Maximum power.

GES   -   Greening Earth Society.

GFDL   -   Geophysical Fluid Dynamics Laboratory. Princeton, NJ.

GISS   -   Goddard Institute for Space Studies. NASA. New York, NY.

GSFC   -   Goddard Space Flight Center. NASA. Greenbelt, MD.

GUAN   -   Global Climate Observing System Upper Air Network.

GWS   -   Gillette, Weaver, and Santer.

HIRS   -   High-Resolution Infrared Radiation Sounder.

HYCOM   -   Hybrid Coordinate Ocean Model.

hPa   -   Hectopascal. SI unit of pressure equivalent to one millibar.

IBE   -   Instrument Body Effect.

IPCC   -   Intergovernmental Panel on Climate Change.

JPL   -   The Jet Propulsion Laboratory, Pasadena, CA.

LECT   -   Local Equatorial Crossing Time.

LKS   -   Lanzante, Klein, Seidel. (Lanzante et al., 2003).

MM   -   McKitrick and Michaels.

MSU   -   Microwave Sounding Unit.

MSU2LT   -   Microwave Sounding Unit Channel 2 – Lower Troposphere.

MSU2MT   -   Microwave Sounding Unit Channel 2 – Middle Troposphere.

MSU2RT   -   Microwave Sounding Unit Channel 2 – Lower Troposphere (UAH Version B).

MSUTLT   -   MSU Channel 2/AMSU Channel 5 – Lower Troposphere.

MSUTMT   -   MSU Channel 2/AMSU Channel 5 – Middle Troposphere.

MSUTST   -   MSU Channel 4/AMSU Channel 9 – Lower Stratosphere.

NAO   -   North Atlantic Oscillation.

NASA   -   National Aeronautics and Space Administration.

NCAR   -   National Center for Atmospheric Research. Boulder, CO.

NCDC   -   National Climatic Data Center. NOAA.

NCEP   -   National Centers for Environmental Prediction.

NCPPR   -   National Center for Public Policy Research.

NOAA   -   National Oceanic and Atmospheric Administration.

NRC   -   National Research Council. National Academy of Sciences, Washington D.C.

PDO   -   Pacific Decadal Oscillation.

POES   -   Polar Orbiting Environmental Satellite.

PRT   -   Platinum Resistance Thermometer.

PR   -   Prabhakara et al.

QBI   -   Quasi Biennial Oscillation Index.

RMS   -   Root Mean Square.

RAOB   -   RAwinsonde OBservation.

RATPAC   -   Radiosonde Atmospheric Temperature Products for Assessing Climate.

RSS   -   Remote Sensing Systems. Santa Rosa, CA.

SEPP   -   The Science and Environmental Policy Project.

SOI   -   Southern Oscillation Index.

SPARC-STTA   -   Stratospheric Processes and their Role in Climate - Stratospheric Temperature Trends Assessment Program.

SST   -   Sea Surface Temperature.

SSU   -   Stratospheric Sounding Unit.

TOA   -   Top of Atmosphere.

TIROS   -   Television Infrared Observation Satellite.

TIROS-ATN   -   Advanced Television Infrared Observation Satellite.

TOVS   -   TIROS Operational Vertical Sounder.

TT   -   Tett and Thorne.

UAH   -   University of Alabama in Huntsville, AL.

UCAR   -   University Corporation for Atmospheric Research. Boulder, CO.

UKMO   -   The U.K. Met. Office. Exeter and London, U.K.

VG   -   Vinnikov and Grody.


Ammann, C.M., G.A. Meehl, W.M. Washington and C.S. Zender. 2003. A monthly and latitudinally varying volcanic forcing dataset in simulations of 20th century climate. Geophys. Res. Lett., 30, doi:10.1029/2003GL016875RR.

Angell J.K. 1988. Variations and trends in tropospheric and stratospheric global temperatures, 1958 – 87. J. Climate, 1, 1296-1313.

Angell J.K. 1999. Comparison of surface and tropospheric trends estimated from a 63-station radiosonde network, 1958 - 1998. Geophys. Res. Lett., 26, 2761-2764.

Angell J.K. 2000. Difference in radiosonde temperature trend for the period 1979-1998 of MSU data and the period 1959-1998 twice as long. Geophys. Res. Lett., 27, 2177-2180.

Angell J.K. 2003. Effect of exclusion of anomalous tropical stations on temperature trends from a 63-station radiosonde network, and comparison with other analyses. J. Climate, 16, 2288-2295.

Angell J.K. and J. Korshover. 1983. Global temperature variations in the troposphere and low stratosphere, 1958-1982. Mon. Wea. Rev., 111, 901-921.

Bailey, M. J., A. O’Neill, and V. D. Pope. 1993. Stratospheric analyses produced by the United Kingdom Meteorological Office. J. Appl. Meteorol., 32 (9), 1472–1483.

Baliunas, S.L. 2003. The Air Up There – Is It Hotter? Tech Central Station Online. Published Nov. 21, 2003. Accessed Nov. 30, 2004.

Basist, A. N., and M. Chelliah. 1997. Comparison of tropospheric temperatures derived from the NCEP/NCAR reanalysis, NCEP operational analysis, and the Microwave Sounding Unit. Bull. Amer. Meteor. Soc., 78, 1431–1447.

Beder, S. 1998. Global Spin: The Corporate Assault on Environmentalism. Chelsea Green Publishing Company. 288pp. ISBN 1 870098 67 6.

Beder, S. 1999. Climatic Confusion and Corporate Collusion: Hijacking the Greenhouse Debate. The Ecologist, March/April 1999, 119-122. Accessed on Jan. 9, 2005.

Benestad, R.E. 2004. Are temperature trends affected by economic activity? Comment on McKitrick & Michaels. Climate Research, 27, 171-173. Accessed on Dec. 19, 2004.

Benestad, R.E. 2004b. Are Temperature Trends affected by Economic Activity? Accessed on Dec. 19, 2004.

Bleck, R. 1998. Ocean modeling in isopycnic coordinates. In Ocean Modeling and Parameterization, E. P. Chassignet and J. Verron (eds.), 423-448. Kluwer Acad., Norwell, Mass.

Boer, G.J., G. Flato, and D. Ramsden, 2000b: A transient climate change simulation with greenhouse gas and aerosol forcing: projected climate for the 21st century. Clim. Dyn., 16, 427-450.

Bretherton, C. S., and D. S. Battisti. 2000. An interpretation of the results from atmospheric general circulation models forced by the time history of the observed sea surface temperature distribution. Geophys. Res. Lett., 27, 767–770.

Bromwich, D.H., and R.L. Fogt. 2004. Strong trends in the skill of the ERA-40 and NCEP/NCAR Reanalyses in the high and middle latitudes of the Southern Hemisphere, 1958-2001. J. Climate, 17, 4603-4619.

Brown, S.J., D.E., Parker, C.K. Folland, and I. Macadam. 2000: Decadal variability in the lower-tropospheric lapse rate Geophys. Res. Lett., 27, 997-1000.

Bengtsson, L., Roeckner, E. and Stendel, M. 1999: Why is the global warming proceeding much slower than expected? J. Geophys. Res. - Atmospheres, 104, D4, 3865-3876.

Braganza, K., D.J. Karoly, A.C. Hirst, P. Stott, R. Stouffer, and S.F.B. Tett. 2004. Simple indices of global climate variability and change Part II: attribution of climate change during the twentieth century. Climate Dynamics, 22, 823-838.

Cavalieri, D. J., P. Gloersen, C. L. Parkinson, J. C. Comiso, and H. J. Zwally. 1997. Observed Hemispheric Asymmetry in Global Sea Ice Changes. Science, 278, 1104–1106.

Christy, J.R. and R.T. McNider. 1994: Satellite greenhouse signal. Nature, 367, 325.

Christy, J.R., R.W. Spencer, and R.T. McNider. 1995: Reducing noise in the MSU daily lower-tropospheric global temperature dataset. J. Climate, 8, 888-896.

Christy, J.R., and W.D. Braswell. 1997: How accurate are satellite `thermometers'? Nature, 389, 342.

Christy, J.R., R.W. Spencer, and E.S. Lobl. 1998: Analysis of the merging procedure for the MSU daily temperature series. J. Climate, 11, 2016-2041.

Christy, J.R., R.W. Spencer, and W.D. Braswell. 2000: MSU tropospheric temperatures: Dataset construction and radiosonde comparisons. J. Atmos. Oceanic Tech., 17, 1153-1170.

Christy, J.R., R.W. Spencer, W.B. Norris, and W.D. Braswell. 2003: Error estimates of Version 5.0 of MSU-AMSU bulk atmospheric temperatures. J. Atmos. Oceanic Tech., 20, 613-629.

Christy, J.R. May 13, 2003. Testimony before the U.S. House of Representatives' Committee on Resources. Available online from CO2 Science magazine, Volume 6, Number 22 (28 May 2003); accessed Nov. 28, 2004.

Christy, J.R., R.W. Spencer, and W.D. Braswell. 2004: Update on microwave-based atmospheric temperatures from UAH. 15th Symposium on Global Change and Climate Variations. 84th AMS Annual Meeting. Seattle, WA. Jan. 11, 2004. Extended abstract available online.

Christy, J.R. and W.B. Norris. 2004. What may we conclude about global tropospheric temperature trends? Geophys. Res. Lett., 31, L06211, doi:10.1029/2003GL019361.

Collins, M., 2000. The El-Niño Southern Oscillation in the second Hadley Centre coupled model and its response to greenhouse warming. J. Climate, 13, 1299-1312.

Covey, C., A. Abe-Ouchi, G.J. Boer, G.M. Flato, B.A. Boville, G.A. Meehl, U. Cubasch, E. Roeckner, H. Gordon, E. Guilyardi, L. Terray, X. Jiang, R. Miller, G. Russell, T.C. Johns, H. Le Treut, L. Fairhead, G. Madec, A. Noda, S.B. Power, E.K. Schneider, R.J. Stouffer and J.S. von Storch. 2000. The Seasonal Cycle in Coupled Ocean-Atmosphere General Circulation Models. Clim. Dyn., 16, 775-787.

Crowley, T.J. and T. Lowery. 2000. How warm was the Medieval warm period? Ambio, 29, 51-54.

Douglass, D. H., B. D. Pearson, S. F. Singer, P. C. Knappenberger, and P. J. Michaels (2004). Disparity of tropospheric and surface temperature trends: New evidence. Geophys. Res. Lett., 31, L13207, doi:10.1029/2004GL020212.

Douglass, D. H., B. D. Pearson, and S. F. Singer (2004b). Altitude dependence of atmospheric temperature trends: Climate models versus observation. Geophys. Res. Lett., 31, L13208, doi:10.1029/2004GL020103.

Douglass, D. H., S. F. Singer, and P. J. Michaels. 2004c. Settling Global Warming Science. Tech Central Station Online. Published on Aug. 12, 2004. Available online; accessed Dec. 5, 2004.

Durre, I., T. C. Peterson, and R. S. Vose. 2002. Evaluation of the effect of the Luers-Eskridge radiation adjustments on radiosonde temperature homogeneity. J.Climate, 15, 1335–1347.

Eskridge, R.E., O.A. Alduchov, I.V. Chernykh, P. Zhai, A.C. Polansky, and S.R. Doty. 1995. A Comprehensive Aerological Reference Data Set (CARDS): Rough and systematic errors. Bull. Amer. Meteor. Soc., 76, 1759-1775.

Ferguson, B. and M. Lewis. 2003. Specific Comments on the Menendez Climate Change Amendment Findings. May 6, 2003. Competitive Enterprise Institute. Available online; accessed Nov. 28, 2004.

Free M., I. Durre, E. Aguilar, D. Seidel, T.C. Peterson, R.E. Eskridge, J.K. Luers, D. Parker, M. Gordon, J. Lanzante, S. Klein, J. Christy, S. Schroeder, B. Soden, L.M. McMillan, and E. Weatherhead. 2002: Creating Climate Reference Datasets. CARDS Workshop on Adjusting Radiosonde Temperature Data for Climate Monitoring. Bull. Amer. Meteor. Soc., June 2002, 891-899.

Folland, C.K., Rayner, N.A., Brown, S.J., Smith, T.M., Shen, S.S.P., Parker, D.E., Macadam, I., Jones, P.D. and others. 2001. “Global temperature change and its uncertainties since 1861.” Geophys. Res. Lett., 28, 2621-2624.

Fu, Q., and C.M. Johanson. 2004: Stratospheric Influences on MSU-Derived Tropospheric Temperature Trends. J. Climate, To be published Dec. 15, 2004.

Fu, Q., C.M. Johanson, S.G. Warren, D.J. Seidel. 2004. Contribution of stratospheric cooling to satellite-inferred tropospheric temperature trends. Nature, 429, (6987), 55-58.

Fu, Q., C.M. Johanson, S.G. Warren, D.J. Seidel. 2004b. Atmospheric science: Stratospheric cooling and the troposphere (reply). Nature, 432, doi:10.1038/nature03210. Brief Communications. Dec. 2, 2004. Abstract available online; accessed Dec. 27, 2004. Subscription required for access to the full article.

Gaffen, D.J. 1994. Temporal inhomogeneities in radiosonde temperature records. J.Geophys. Res., 99, 3667-3676.

Gaffen, D.J. 1996. A digitized metadata set of global upper-air station histories. NOAA Tech. Memo ERL ARL 211, 38 pp.

Gaffen, D.J., M.A. Sargent, R.E. Habermann, and J.R. Lanzante, 2000a: Sensitivity of tropospheric and stratospheric temperature trends to radiosonde data quality. J.Climate, 13, 1776-1796.

Gaffen, D.J., Santer, B.D., Boyle, J.S., Christy, J.R., Graham, N.E. and Ross, R.J. 2000b: Multidecadal changes in the vertical temperature structure of the tropical troposphere. Science, 287, 1242-1245.

Gelbspan, R. 1998. The Heat Is on: The Climate Crisis, the Cover-Up, the Prescription. Perseus Publishing Company. 288pp. ISBN 0738200255.

Gelbspan, R. 2004. Boiling Point: How Politicians, Big Oil and Coal, Journalists and Activists Are Fueling the Climate Crisis--And What We Can Do to Avert Disaster. Basic Books. 254pp. ISBN 046502761X.

Gelman, M. E., A. J. Miller, R. N. Nagatani, and C. S. Long. 1994. Use of UARS data in the NOAA stratospheric monitoring program. Adv. Space Res., 14 (9), 21–31.

Gillett, N.P., B.D. Santer, and A.J. Weaver. 2004. Atmospheric science: Stratospheric cooling and the troposphere. Arising from: Q. Fu et al., Nature 429, 55–58 (2004). Nature, 432, doi:10.1038/nature03209. Brief Communications. Dec. 2, 2004. Abstract available online; accessed Dec. 27, 2004. Subscription required for access to the full article.

Greening Earth Society (GES). May 6, 2003. Virtual Climate Alert, 4, 9. Available online; accessed Nov. 21, 2004.

Hansen, J., A. Lacis, D. Rind, G. Russell, P. Stone, I. Fung, R. Ruedy, and J. Lerner. 1984. Climate sensitivity: Analysis of feedback mechanisms, in Climate Processes and Climate Sensitivity. Geophys. Monogr. Ser., 29, edited by J. E. Hansen and T. Takahashi, pp. 130–163, AGU, Washington, D.C.

Hansen, J., M. Sato, R. Ruedy, A. Lacis, K. Asamoah, K. Beckford, S. Borenstein, E. Brown, B. Cairns, B. Carlson, B. Curran, S. de Castro, L. Druyan, P. Etwarrow, T. Ferede, M. Fox, D. Gaffen, J. Glascoe, H. Gordon, S. Hollandsworth, X. Jiang, C. Johnson, N. Lawrence, J. Lean, J. Lerner, K. Lo, J. Logan, A. Luckett, M.P. McCormick, R. McPeters, R.L. Miller, P. Minnis, I. Ramberran, G. Russell, P. Russell, P. Stone, I. Tegen, S. Thomas, L. Thomason, A. Thompson, J. Wilder, R. Willson, and J. Zawodny. 1997. Forcings and chaos in interannual to decadal climate change. J. Geophys. Res. 102, 25679-25720.

Hansen, J., R. Ruedy, J. Glascoe, and M. Sato. 1999. GISS analysis of surface temperature change. J. Geophys. Res., 104, 30997-31022. Data from this study are available online from the Goddard Institute for Space Studies; accessed Dec. 18, 2004.

Hansen, J., M. Sato, L. Nazarenko, R. Ruedy, A. Lacis, D. Koch, I. Tegen, T. Hall, D. Shindell, B. Santer, P. Stone, T. Novakov, L. Thomason, R. Wang, Y. Wang, D. Jacob, S. Hollandsworth, L. Bishop, J. Logan, A. Thompson, R. Stolarski, J. Lean, R. Willson, S. Levitus, J. Antonov, N. Rayner, D. Parker, and J. Christy. 2002. Climate forcings in Goddard Institute for Space Studies SI2000 simulations. J. Geophys. Res., 107 (D17), 10.1029/2001JD001143.

Hasselmann, K. 1979. In Meteorology of Tropical Oceans. D.B. Shaw (Ed.) Royal Meteorological Society of London. pp. 251-259.

Hauchecorne, A., M.-L. Chanin, and P. Keckhut. 1991. Climatology and trends of the middle atmospheric temperature (33–87km) as seen by Rayleigh lidar over the south of France. J. Geophys. Res., 96, 15,297–15,309.

Hegerl, G.C. and J.M. Wallace. 2002: Influence of patterns of climate variability on the difference between satellite and surface temperature trends. J. Climate, 15, 2412-2428.

Intergovernmental Panel on Climate Change (IPCC) – Working Group I, (1996): Climate Change 1995 – The Science of Climate Change, Houghton, J.T., L.G. Meira Filho, B.A. Callander, N. Harris, A. Kattenberg, and K. Maskell, eds. 1996. Cambridge University Press.

Intergovernmental Panel on Climate Change (IPCC) – Working Group I, (2001): Climate change 2001 – The Scientific Basis, Houghton, J.T., Ding, Y., Griggs, D.J., Noguer, M. van der Linden, P.J., Dai, X., Maskell, K., Johnson, C.C. eds. 2001. Cambridge University Press.

Jones, P.D. 1994: Recent warming in global temperature series. Geophys. Res. Lett., 21, 12, 1149-1152.

Jones, P.D., K.R. Briffa, T.P. Barnett, and S.F.B. Tett. 1998: High-resolution paleoclimatic records for the last millennium: interpretation, integration, and comparison with General Circulation Model control run temperatures. The Holocene, 8, 455-471.

Jones, P.D., M. New, D.E. Parker, S. Martin, and I.G. Rigor. 1999: Surface air temperature and its changes over the past 150 years. Rev. Geophys., 37, 173-199.

Jones, P.D., T.J. Osborn, K.R. Briffa, C.K. Folland, E.B. Horton, L.V. Alexander, D.E. Parker, and N.A. Rayner. 2001: Adjusting for sampling density in grid box land and ocean surface temperature time series. J. Geophys. Res., 103, 3371-3380.

Hack, J.J., J.M. Rosinski, D.L. Williamson, B.A. Boville, and J.E. Truesdale. 1995. Computational Design of the NCAR Community Climate Model. Parallel Comput., 21, 1545-1569.

Hurrell, J., J.J. Hack, B.A. Boville, D. Williamson, and J.T. Kiehl. 1998. The Dynamical Simulation of the NCAR Community Climate Model Version 3 (CCM3). J. Climate, 11, 1207-1236.

Kalnay, E., et al. 1996: The NCEP/NCAR 40-year reanalysis project. Bull. Amer. Meteor. Soc., 77, 437-471.

Kaplan, A., Y. Kushnir, M. Cane, and M. Blumenthal. 1997. Reduced space optimum analysis for historical data sets: 136 years of Atlantic sea surface temperatures. J. Geophys. Res., 102, 27,835–27,860.

Kaplan, A., M. A. Cane, Y. Kushnir, A. C. Clement, M. B. Blumenthal, and B. Rajagopalan. 1998. Analyses of global sea surface temperature, 1856–1991. J. Geophys. Res., 103, 18,567–18,589.

Karl, T.R., et al. 1993: Bull. Am. Meteorol. Soc., 37, 173-200.

Karl, T.R., J.R. Christy, B. Santer, F. Wentz, D. Seidel, J. Lanzante, K. Trenberth, D. Easterling, M. Goldberg, J. Bates, and C. Mears. 2002: Draft White Paper: Understanding recent atmospheric temperature trends and reducing uncertainties. Prepared in support of the Strategic Plan for the U.S. Climate Change Science Program. Available online; accessed Jan. 9, 2005.

Kiehl J.T., J.J. Hack, G.B. Bonan, B.A. Boville, B.A. Briegleb, D.L. Williamson, and P.J. Rasch. 1996: Description of the NCAR Community Climate Model (CCM3). Boulder: National Center for Atmospheric Research.

Kiehl J.T., J.J. Hack, G.B. Bonan, B.A. Boville, B.A. Briegleb, D.L. Williamson, and P.J. Rasch. 1998: The National Center for Atmospheric Research Community Climate Model: CCM3. J. Climate, 11, 1131-1149.

Kinne, O. 2003. Climate Research: an article unleashed worldwide storms. Climate Research, 24, 197-198.

Kistler, R., E. Kalnay, W. Collins, S. Saha, G. White, J. Woollen, M. Chelliah, W. Ebisuzaki, M. Kanamitsu, V. Kousky, H. van den Dool, R. Jenne and M. Fiorino. (2001). The NCEP–NCAR 50-year Reanalysis: monthly means. Bull. Amer. Meteor. Soc., 82, 247-267. Dataset available online; accessed Dec. 5, 2004.

Knutson, T.R., S. Manabe and D. Gu, 1997. Simulated ENSO in a global coupled ocean-atmosphere model: multidecadal amplitude modulation and CO2-sensitivity. J. Climate, 10, 138-161.

Koshelkov, Y. P., and G. R. Zakharov. 1998. On temperature trends in the Arctic lower stratosphere. Meteorol. Gidrol., 5, 45–54.

Labitzke, K., and H. van Loon. 1994. Trends of temperature and geopotential height between 100 and 10 hPa in the Northern Hemisphere. J. Meteorol. Soc. Jpn., 72, 643–652.

Lanzante, J.R. 1996: Resistant, robust, and nonparametric techniques for the analysis of climate data: Theory and examples, including applications to historical radiosonde station data. Int. J. Climatol., 16, 1197-1226.

Lanzante, J.R., S.A. Klein, D.J. Seidel. 2003: Temporal homogenization of monthly radiosonde temperature data. Part I: Methodology. J. Climate, 16(2), 224-240.

Lanzante, J.R., S.A. Klein, D.J. Seidel. 2003: Temporal homogenization of monthly radiosonde temperature data. Part II: Trends, sensitivities and MSU comparison. J. Climate, 16(2), 241-262.

Lelieveld, J., G.J. Roelofs, L. Ganzeveld, J. Feichter, and H. Rodhe. 1997. Terrestrial sources and distribution of atmospheric sulphur. Phil. Trans. R. Soc. Lond. B., 352, 149-158.

Litten, L., J.R. Christy, and R.W. Spencer. 2005. Non-thermometric effects on MSU tropospheric temperatures. 16th Conference on Climate Variability and Change. 85th AMS Annual Meeting, San Diego, CA., Jan. 2005.

Luers, J.K., and R.E. Eskridge. 1998: Use of radiosonde temperature data in climate studies. J. Climate, 11, 1002-1019.

Mann, M.E., R.S. Bradley, and M.K. Hughes. 1999. Northern Hemisphere Temperatures During the Past Millennium: Inferences, Uncertainties, and Limitations. Geophys. Res. Lett., 26, 759-762.

McKitrick, R. and P.J. Michaels. (2004). "A Test of Corrections for Extraneous Signals in Gridded Surface Temperature Data". Climate Research, 26, 159-173.

McKitrick, R. and P.J. Michaels. (2004b). "Correction: A Test of Corrections for Extraneous Signals in Gridded Surface Temperature Data". Published Sept. 13, 2004. Available online; accessed Dec. 19, 2004.

McKitrick, R., and P.J. Michaels. 2004c. Are temperature trends affected by economic activity? Reply to Benestad. Climate Research, 27, 175-176.

Mears, C.A., M.C. Schabel, et al. 2002: Correcting the MSU Middle Tropospheric Temperature for Diurnal Drifts. Proceedings of the International Geoscience and Remote Sensing Symposium, Volume III, 1839-1841.

Mears, C.A., Schabel, M.C., Wentz, F.J. 2003: A Reanalysis of the MSU Channel 2 Tropospheric Temperature Record. J. Climate, 16 (22), 3650–3664.

Mears, C.A., Schabel, M.C., Wentz, F.J. 2003b: Understanding the difference between the UAH and RSS retrievals of satellite-based tropospheric temperature estimates. Workshop on Reconciling Vertical Temperature Trends, NCDC, Oct. 27-29, 2003. Available online.

Mears, C.A., Schabel, M.C., Wentz, F.J. 2003c: A New Tropospheric Temperature Dataset from MSU. 14th Symposium on Global Change and Climate Variations, Long Beach, CA., Feb. 11, 2003. Available online.

Meehl, G.A. and W.M. Washington, 1996: El Nino-like climate change in a model with increased atmospheric CO2-concentrations. Nature, 382, 56-60.

Meehl, G.A., W.M. Washington, T.M.L. Wigley, J.M. Arblaster, and A. Dai. 2003. Solar and greenhouse gas forcing and climate response in the twentieth century. J. Clim., 16, 426-444.

Meehl, G.A., W.M. Washington, and J.M. Arblaster. 2003b. Factors affecting climate sensitivity in global coupled climate models. Paper presented at the American Meteorological Society 83rd Annual Meeting, Long Beach, CA.

Meehl, G.A., G.J. Boer, C. Covey, M. Latif and R.J. Stouffer, 2000. The Coupled Model Intercomparison Project (CMIP). Bull. Am. Met. Soc., 81, 313-318.

Michaels, P.J., R. McKitrick, and P.C. Knappenberger. 2004. Economic Signals in Global Temperature Histories. 15th Symposium on Global Change and Climate Variations; The 84th AMS Annual Meeting. Seattle, WA, Jan. 2004. Available online; accessed Dec. 18, 2004.

Milloy, S. Aug. 1, 2003. Global Warming is not a WMD. Fox News Channel. Available online; accessed Nov. 21, 2004.

Milmoe, P.M. 1999. Evaluation of the Environmental Impacts from APCA/CW Partnership. The 1999 American Council for an Energy Efficient Economy (ACEEE) Summer Study. Available online; accessed Dec. 19, 2004.

Mo, T. 1995: A Study of the Microwave Sounding Unit on the NOAA-12 satellite. IEEE Trans. Geosci. Remote Sens., 33, 1141-1152.

Nash, J., and G. F. Forrester. 1986. Long-term monitoring of stratospheric temperature trends using radiance measurements obtained by the TIROS-N series of NOAA spacecraft. Adv. Space Res., 6 (10), 37–44.

National Center for Public Policy Research (NCPPR). Aug. 13, 1998. Study Challenging Validity of Satellite Data Flawed. Satellite Data Still the Most Reliable Means of Measuring Planet's Temperature. NCPPR Press Release. Ridenour, D. (ed.). Available online.

National Research Council (NRC). Panel on Reconciling Temperature Observations. 2000. Reconciling Observations of Global Temperature Change. Wallace, J.M., et al. (eds.). National Academy Press, Washington DC. ISBN 0-309-06891-6. Available online.

National Research Council (NRC). Committee on Abrupt Climate Change. 2002. Abrupt Climate Change: Inevitable Surprises. Alley, R.B., et al. (eds.). National Academy Press, Washington DC. ISBN 0-309-07434-7. Available online.

Neelin, J. D., and H. A. Dijkstra. 1995. Ocean–atmosphere interaction and the tropical climatology. Part I: The dangers of flux adjustment. J. Climate, 8, 1325–1342.

Oort, A. H., and H. Liu. 1993. Upper-air temperature trends over the globe, 1956–1989. J. Climate, 6, 292–307.

Parker, D.E., and D.I. Cox. 1995. Toward a consistent global climatological rawinsonde database. Internat. J. Climatol., 15, 473-496.

Parker, D.E., M. Gordon, D.P.N. Cullum, D.M.H. Sexton, C.K. Folland, and N. Rayner. 1997. A new global gridded radiosonde temperature data base and recent temperature trends. Geophys. Res. Lett., 24, 1499-1502.

Parker, D.E., C.K. Folland, and I. Macadam. 2000: Why is the global warming proceeding much slower than expected? J. Geophys. Res., 104, 3865-3876.

Pope, V.D., M.L. Gallani, P.R. Rowntree, and R.A. Stratton. 2000. The impact of new physical parametrizations in the Hadley Centre Climate Model - HadAM3. Climate Dynamics, 16, 123-146.

Prabhakara, C., J.R. Iacovazzi, J.M. Yoo and G. Dalu. 1998: Global Warming deduced from MSU. Geophys. Res. Lett., 25 (11), 1927-1930.

Prabhakara, C., J.R. Iacovazzi, J.M. Yoo and G. Dalu. 2000: Global Warming: Evidence from satellite observations. Geophys. Res. Lett., 27 (21), 3517-3520.

Ramaswamy, V., M.L. Chanin, J. Angell, J. Barnett, D. Gaffen, M. Gelman, P. Keckhut, Y. Koshelkov, K. Labitzke, J.-J. R. Lin, A. O’Neill, J. Nash, W. Randel, R. Rood, K. Shine, M. Shiotani, and R. Swinbank. 2001. Stratospheric temperature trends: observations and model simulations. Rev. Geophys., 39 (1), 71-122.

Rayner, N. A., D. E. Parker, E. B. Horton, C. K. Folland, L. V. Alexander, D. P. Rowell, E. C. Kent and A. Kaplan. 2003. Global Analyses of SST, Sea Ice and Night Marine Air Temperature Since the Late Nineteenth Century. Journal of Geophysical Research, 108 (D14), 4407, doi:10.1029/2002JD002670.

Reitenbach, R.H., and A.M. Sterin. 1996. An objective analysis scheme for climatological parameters. Thirteenth Conference on Probability and Statistics in Atmospheric Sciences. American Meteorological Society. pp. 334-338.

Reynolds, R.W. 1988. A real-time Global Sea Surface Temperature Analysis. J. Climate, 1, 75-86.

Reynolds, R. W., and T. M. Smith. 1994. Improved global sea surface temperature analyses. J. Climate, 7, 929– 948.

Robinson, A.B., S.L. Baliunas, W. Soon, and Z.W. Robinson. 1998. Environmental Effects of Increased Atmospheric Carbon Dioxide. Unpublished. Available online; accessed Nov. 30, 2004.

Russell, G. L., J. R.Miller, and D. Rind. 1995. A coupled atmosphere-ocean model for transient climate change studies. Atmos. Ocean, 33, 683–730.

Santer, B.D., Mikolajewicz, U., Brüggemann, W., Cubasch, U., Hasselmann, K., Höck, H., Maier-Reimer, E. and Wigley, T.M.L. 1995. Ocean variability and its influence on the detectability of greenhouse warming signals. Journal of Geophysical Research, 100, 10693-10725.

Santer, B.D., J.J. Hnilo, T.M.L. Wigley, J.S. Boyle, C. Doutriaux, M. Fiorino, D.E. Parker, and K.E. Taylor. 1999. Uncertainties in observationally based estimates of temperature change in the free atmosphere. J. Geophys. Res., 104, 6305-6333.

Santer, B.D., T.M.L. Wigley, D.J. Gaffen, L. Bengtsson, C. Doutriaux, J.S. Boyle, M. Esch, J.J. Hnilo, G.A. Meehl, E. Roeckner, K.E. Taylor, M.F. Wehner. 2000. Interpreting Differential Temperature Trends at the Surface and in the Lower Troposphere. Science, 287 (5456), 1227-1232.

Santer, B. D., T. M. L. Wigley, J. S. Boyle, D. J. Gaffen, J. J. Hnilo, D. Nychka, D. E. Parker, and K. E. Taylor. 2000b. Statistical significance of trends and trend differences in layer-average atmospheric temperature time series. J. Geophys. Res., 105, 7337– 7356.

Santer, B.D., T.M.L. Wigley, C. Doutriaux, J.S. Boyle, J.E. Hansen, P.D. Jones, G.A. Meehl, E. Roeckner, S. Sengupta, and K.E. Taylor. 2001. Accounting for the effects of volcanoes and ENSO in comparisons of modeled and observed temperature trends. J. Geophys. Res., 106, 28033-28059.

Santer, B.D., T.M.L. Wigley, G.A. Meehl, M.F. Wehner, C. Mears, M. Schabel, F.J. Wentz, C. Ammann, J. Arblaster, T. Bettge, W.M. Washington, K.E. Taylor, J.S. Boyle, W. Bruggemann, C. Doutriaux. 2003. Influence of Satellite Data Uncertainties on the Detection of Externally Forced Climate Change. Science, 300, 1280-1284.

Santer, B.D., R. Sausen, T. M. L. Wigley, J. S. Boyle, K. AchutaRao, C. Doutriaux, J. E. Hansen, G. A. Meehl, E. Roeckner, R. Ruedy, G. Schmidt, and K. E. Taylor. 2003b. Behavior of tropopause height and atmospheric temperature in models, reanalyses, and observations: Decadal changes. J. Geophys. Res., 108 (D1), 4002, doi:10.1029/2002JD002258.

Santer, B.D., M.F. Wehner, T.M.L. Wigley, R. Sausen, G.A. Meehl, K.E. Taylor, C. Ammann, J.M. Arblaster, W.M. Washington, J.S. Boyle and W. Bruggemann. 2003c. Contributions of anthropogenic and natural forcing to recent tropopause height changes. Science, 301, 479-483.

Schiermeier, Q. 2004. Global warming anomaly may succumb to microwave study. Nature News, 429, 7, doi:10.1038/429007a. Available online; accessed Dec. 27, 2004.

Schubert, S. R., R. Rood, and J. Pfaendtner. 1993. An assimilated data set for Earth science applications. Bull. Am. Meteorol. Soc.,74, 2331–2342.

Seidel, D.J. 2004. Personal communication, Sept. 2004.

Seidel, D.J., J. Angell, J. Christy, M. Free, S. Klein, J. Lanzante, C. Mears, D. Parker, M. Schabel, R. Spencer, A. Sterin, P. Thorne, and F. Wentz. 2003. Intercomparison of Global Upper-Air Temperature Datasets from Radiosondes and Satellites. AMS Annual Meeting, 2003.

Seidel, D.J., J.K. Angell, J. Christy, M. Free, S.A. Klein, J.R. Lanzante, C. Mears, D. Parker, M. Schabel, R. Spencer, A. Sterin, P. Thorne, F. Wentz. 2004. Uncertainties in Signals of Large-Scale Climate Variations in Radiosonde and Satellite Upper-Air Temperature Datasets. J. Clim., 17, 2225-2240.

Shen, S.S., M. Thomas, C.F. Ropelewski, and R.E. Livezey. 1998. An optimal regional averaging method with error estimates and a test using tropical Pacific SST data. J. Clim., 11, 2340-2350.

Singer, S.F. 1999. EOS, 80, 183.

Sokolov, A. P., and P. H. Stone. 1998. A flexible climate model for use in integrated assessments, Clim. Dyn., 14, 291– 303.

Soon, W. and S. Baliunas. 2003. Proxy climatic and environmental changes of the past 1000 years. Climate Research, 23, 89-110.

Spencer, R.W. 2004. When Is Global Warming Really a Cooling? Tech Central Station Online. Published on May 5, 2004. Available online; accessed Dec. 26, 2004.

Spencer, R.W. 2004b. Global Warming: The Satellite Saga Continues. Tech Central Station Online. Published on Dec. 3, 2004. Available online; accessed Dec. 27, 2004.

Spencer, R.W. and J.R. Christy. 1990. Precision monitoring of global temperature trends from satellites. Science, 247, 1558-1562.

Spencer, R.W. and J.R. Christy. 1992a. Precision and radiosonde validation of satellite gridpoint temperature anomalies. Part I: MSU Channel 2. J. Clim., 5, 847-857.

Spencer, R.W. and J.R. Christy. 1992b. Precision and radiosonde validation of satellite gridpoint temperature anomalies. Part II: A tropospheric retrieval and trends during 1979-90. J. Clim., 5, 858-866.

Spencer, R. W., and J. R. Christy. 1993. Precision lower stratospheric temperature monitoring with the MSU technique: Validation and results, 1979–1991. J. Climate, 6, 1194–1204.

Stendel, M., Christy, J.R. and Bengtsson, L. 2000. Assessing levels of uncertainty in recent temperature time series. Clim. Dyn., 16, 1405-1423.

Sterin, A.M., 1999. An analysis of linear trends in the free atmosphere temperature series for 1958-1997. Meteorologiya i Gidrologiya, 5, 52-68.

Sterin, A.M., 2000. Variations of upper-air temperature in 1998-1999 and their effect on long period trends. Proc. 24th Annual Climate Diagnostics and Prediction Workshop. NOAA. pp. 222-225.

Sterin, A.M., 2001. Tropospheric and Lower Stratospheric Temperature Anomalies Based on Global Radiosonde Network Data. In Trends Online: A Compendium of Data on Global Change. Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A.

Stott, P.A., Tett, S.F.B., Jones, G.S., Allen, M.R., Mitchell, J.F.B., and Jenkins, G.J. 2000. External control of 20th century temperature by natural and anthropogenic forcings. Science, 290 (5499): 2133-2137.

Sun, S., and J.E. Hansen. 2003. Climate Simulations for 1951-2050 with a Coupled Atmosphere-Ocean Model. J. Climate, 16, 2807-2826.

Tett, S. 2004. Personal communication, Dec. 21, 2004.

Tett, S.F.B., G.S. Jones, P.A. Stott, D.C. Hill, J.F.B. Mitchell, M.R. Allen, W.J. Ingram, T.C. Johns, C.E. Johnson, A. Jones, D.L. Roberts, D.M.H. Sexton, and M.J. Woodage. 2002. Estimation of natural and anthropogenic contributions to twentieth century temperature change. J. Geophys. Res., 107 (D16), 10.1029/2000JD000028.

Tett, S., and P. Thorne. 2004. Tropospheric temperature series from satellites. Arising from: Q. Fu et al., Nature 429, 55–58 (2004). Nature, 432, doi:10.1038/nature03208. Brief Communications. Dec. 2, 2004. Abstract available online; accessed Dec. 27, 2004. Subscription required for access to the full article.

Thompson, D.W.J., and S. Solomon. 2002. Interpretation of Recent Southern Hemisphere Climate Change. Science, 296, 895–899.

Timmerman, A., J. Oberhuber, A. Bacher, M. Esch, M. Latif and E. Roeckner, 1999. Increased El Nino frequency in a climate model forced by future greenhouse warming. Nature, 398, 694-696.

Trenberth, K.E. 1984. Signal vs. noise in the Southern Oscillation. Mon. Wea. Rev., 112, 326-332.

Trenberth, K.E. and J. M. Caron. 2001. Estimates of meridional atmosphere and ocean heat transports. J. Climate, 14, 3433–3443.

Trenberth, K.E., J. M. Caron, and D. P. Stepaniak, 2001: The atmospheric energy budget and implications for surface fluxes and ocean heat transports. Climate Dyn., 17, 259–276.

Trenberth, K.E. and D.P. Stepaniak. 2003a. Covariability of components of poleward atmospheric energy transports on seasonal and interannual timescales. J. Clim., 16, 3691-3705.

Trenberth, K.E. and D.P. Stepaniak. 2003b. Seamless poleward atmospheric energy transports and implications for the Hadley circulation. J. Clim., 16, 3706-3722.

Trenberth, K.E. 2004. Personal communication, Jan. 2004.

Tziperman, E. 2000. Uncertainties in thermohaline circulation response to greenhouse warming. Geophys. Res. Lett., 27, 3077–3080.

Uppala, S. 2003. In Proc. Workshop on Reanalysis, 5–9 November 2001, 1–10 (European Centre for Medium-Range Weather Forecasts, Reading).

Vaughan, D.G., G.J. Marshall, W.M. Connolley, J.C. King, and R. Mulvaney. 2001. Climate Change: Devil in the Detail. Science, 293, 1777-1779.

Vinnikov, K.Y, A. Robock, D.J. Cavalieri, C.L. Parkinson. 2002a. Analysis of seasonal cycles in climatic trends with application to satellite observations of sea ice extent. Geophys. Res. Lett., 29 (9) doi: 10.1029/2001GL014481.

Vinnikov, K.Y, A. Robock, A. Basist. 2002b. Diurnal and seasonal cycles of trends of surface air temperature. J. Geophys. Res., 107 (D22), 4641, doi:10.1029/2001JD002007.

Vinnikov, K.Y. and N.C. Grody. 2003. Global warming trend of mean tropospheric temperature observed by satellites. Science, 302, 269-272.

Vinnikov, K.Y, A. Robock, N.C. Grody and A. Basist. 2004. Analysis of Diurnal and Seasonal Cycles and Trends in Climatic Records with Arbitrary Observation Times. Geophys. Res. Lett, In Press.

Washington, W.M., J.W. Weatherly, G.A. Meehl, A.J. Semtner Jr., T.W. Bettge, A.P. Craig, W.G. Strand Jr., J.M. Arblaster, V.B. Wayland, R. James, and Y. Zhang. 2000. Parallel Climate Model (PCM) control and transient simulations. Climate Dynamics, 16, 755-774.

Weatherhead, E.C., and Coauthors. 1998. Factors affecting the detection of trends: Statistical considerations and applications to environmental data. J. Geophys. Res., 103, 17,149-17,161.

Wentz, F.J. 1998. Algorithm Theoretical Basis Document: AMSR Ocean Algorithm. Remote Sensing Systems. Santa Rosa, CA. RSS Tech. Report 110398. Nov. 3, 1998.

Wentz, F.J. and M. Schabel. 1998. Effects of orbital decay on satellite-derived lower tropospheric temperature trends. Nature, 394, 661-664.

Whipple, D. 2004. Climate: The Tropospheric Data Do Conform. UPI Newswire. Dec. 2004. Available online from Space Daily; accessed Dec. 27, 2004.

Wigley, T.M.L. and B.D. Santer. 2003. Differential ENSO and volcanic effects on surface and tropospheric temperatures. J. Clim., submitted.

Wikipedia. 2004. Satellite temperature measurements. Wikipedia. Available online; accessed Dec. 12, 2004.

Wilks, D. S. 1995. Statistical Methods in the Atmospheric Sciences: An Introduction. Academic Press, New York, 467 pp.

World Meteorological Organization (WMO). Report of the International Ozone Trends Panel: 1988, Global Ozone Res. and Monit. Proj., Rep. 18, chap. 6, pp. 443–498, Geneva, 1990.

World Meteorological Organization (WMO). 1996. Measurements of upper air temperature, pressure, and humidity. Guide to Meteorological Instruments and Methods of Observation, Chapter 12. WMO-No. 8, Sixth Edition, Geneva, I.12-I-I.12-32.

Xie, P. and P.A. Arkin, 1996. Analyses of Global Monthly Precipitation Using Gauge Observations, Satellite Estimates, and Numerical Model Predictions. J. Climate, 9, 840-858.

Xie, P. and A. Arkin. 1997. Global precipitation: A 17-year monthly analysis based on gauge observations, satellite estimates and numerical model outputs. Bull. Amer. Meteor. Soc., 78, 2539–2558.

Zhang, Y., J.M. Wallace and D.S. Battisti. 1997. ENSO-like interdecadal variability: 1900-93. J. Climate, 10, 1004-1020.


Figure 1:   Static atmospheric weighting function profiles as a function of altitude (in pressure units) for MSU and AMSU products. Not shown are the surface contribution factors, which for land (ocean) are 0.20 (0.10) of the total weighted profile for TLT and 0.10 (0.05) for TMT. The land surface contribution increases for higher surface altitudes. The MSU/AMSU LT profile is calculated from nadir and off-nadir views of Channel 2. From Christy et al. (2003).
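The role of a weighting function can be made concrete with a small numerical sketch: a channel's brightness temperature is essentially a weighted average of the temperatures at each pressure level plus a surface term. The function name, profile values, and temperatures below are illustrative placeholders of my own, not the actual MSU weighting function or retrieval algorithm.

```python
# Sketch: a channel brightness temperature as a weighted average of layer
# temperatures plus a surface contribution. All numbers are illustrative
# placeholders, not the real MSU/AMSU weighting function values.

def brightness_temperature(layer_temps, weights, surface_temp, surface_factor):
    """Weighted-average brightness temperature (K).

    layer_temps    -- temperatures (K) at each pressure level
    weights        -- atmospheric weighting function sampled at those levels
    surface_temp   -- surface skin temperature (K)
    surface_factor -- surface contribution (e.g. 0.20 land / 0.10 ocean for TLT)
    """
    atm_term = sum(w * t for w, t in zip(weights, layer_temps))
    total_weight = sum(weights) + surface_factor
    return (atm_term + surface_factor * surface_temp) / total_weight

# Illustrative 4-level profile over land
t_b = brightness_temperature(
    layer_temps=[288.0, 275.0, 250.0, 220.0],
    weights=[0.30, 0.30, 0.15, 0.05],
    surface_temp=290.0,
    surface_factor=0.20,
)
```

Note how a larger surface factor pulls the result toward the (usually warmer) skin temperature, which is why land-surface contamination matters for TLT trends.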
Figure 2:   Global average tropospheric temperature results from the MSU and AMSU records. TLT results represent the lower troposphere; Channels 2 and 4 give the middle troposphere and lower stratosphere, respectively.
Figure 3:   Ascending-descending Channel 2 brightness temperature differences for the entire MSU dataset for the central 5 fields of view, the month of June, and ascending-node Local Equatorial Crossing Times (LECTs) of 15:00 to 16:00 (Top), and the same as simulated from CCM3 diurnal climatology by the RSS team (Bottom). From Mears et al. (2002).
Figure 4a:   MSU Channel 2 brightness temperatures for 1979 to 2001 as determined by a) RSS Ver. 1.0 (Mears et al., 2003), b) UAH Ver. 5.0 (Christy et al., 2003), and c) the difference between the two. Taken from Mears et al. (2003).
Figure 4b:   Same as Figure 4a but for 1979 to 2002. Taken from Mears et al. (2003b).
Figure 5:   Upper air temperature trends in deg. K/decade from Angell 54 at various troposphere and lower stratosphere altitudes for the Northern Hemisphere, the Southern Hemisphere, the Tropics, and the globe, compared with those from MSU, other radiosonde analysis and re-analysis products, and surface-air data, for 1958-2000 (Left) and 1979-2000 (Right). MSU data (M) are from UAH Ver. D (Christy et al., 2000). Alternate sonde products are from Lanzante et al. (2003: solid triangles), Parker et al. (1997: P), and Gaffen et al. (2000b: G). The re-analysis product is a radiosonde-satellite product from Ramaswamy et al. (2000: R). Surface temperature trends are from Jones et al. (2001: J) and Hansen et al. (1999: H). Trends shown for Lanzante et al. (2003) are for 1959-1997 (Left) and 1979-1997 (Right), and data for Gaffen et al. (2000b) are for 1960-1997 (Left) and 1979-1997 (Right). The small circles unconnected by straight lines show trends for the original Angell 63 network (Angell, 1988). The horizontal bars show 2-sigma confidence intervals for each trend indicated. Figure taken from Angell, 2003.
Figure 6:   Global temperature anomalies for the middle troposphere from MSU/AMSU and 2 radiosonde datasets. The HadRT sonde dataset represents monthly CLIMAT TEMP reports and the LKS sonde dataset is from an 87 station network corrected for temporal inhomogeneities. The bottom curve gives the average trend for all products and the individual product curves give deviations from the average (from Seidel et al., 2003).
Figure 7:   Global temperature anomalies for the lower stratosphere from MSU/AMSU and 2 radiosonde datasets. The HadRT sonde dataset represents monthly CLIMAT TEMP reports and the LKS sonde dataset is from an 87 station network corrected for temporal inhomogeneities. The bottom curve gives the average trend for all products and the individual product curves give deviations from the average (from Seidel et al., 2003).
Figure 8:   Multidataset-average monthly anomaly time series for 6 vertical layers compared with time series for the Quasi-Biennial Oscillation (QBO), as determined from 50-hPa zonal winds in radiosonde data at Singapore, and the Southern Oscillation Index (SOI) as determined by Trenberth (1984). The datasets shown are global averages of data from LKS, HadRT, RIHMI, Angell 63, Angell 54, and UAH Vers. D and 5.0. All are global average time series except the 300-100 hPa (tropopause) series, which is for the Tropics only. Taken from Seidel et al., 2003.
Figure 9:   Summary of 95 percent confidence interval estimates for calculations of global troposphere temperature statistics for UAH Ver. 5.0, based on UAH analysis of the Minqin radiosonde station in China, UAH-selected U.S. radiosonde stations, the NCEP reanalysis product, and HadRT2.1. TLT corresponds to the lower troposphere, TMT the middle troposphere, and TLS the lower stratosphere. From Christy et al., 2003.
Figure 10:   Trends in global temperature for 1958-1997 for the troposphere (top), tropopause (middle), and lower stratosphere (bottom), in four regions, from 5 radiosonde datasets. The confidence intervals shown are typical values of the ±2-sigma uncertainty estimates. Placing the midpoint of each confidence interval at the value of its trend and checking whether the intervals overlap gives a sense of whether there are statistically significant differences within groups of trend estimates. From Seidel et al., 2003.
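The overlap test described in this caption can be sketched in a few lines: two trend estimates are treated as statistically indistinguishable by this rough criterion when their ±2-sigma intervals overlap. The function name and the trend values below are illustrative, not taken from any of the datasets shown.

```python
# Sketch of the confidence-interval overlap heuristic: center each ±2-sigma
# interval on its trend estimate and check whether the intervals intersect.
# Names and numbers are illustrative placeholders.

def intervals_overlap(trend_a, half_width_a, trend_b, half_width_b):
    """True if [a ± ha] and [b ± hb] intersect, i.e. the two trend
    estimates are not clearly distinguishable by this rough test."""
    return abs(trend_a - trend_b) <= (half_width_a + half_width_b)

# Two hypothetical trends (K/decade) with 2-sigma half-widths
loose = intervals_overlap(0.10, 0.06, 0.17, 0.05)   # wide intervals
tight = intervals_overlap(0.10, 0.02, 0.17, 0.03)   # narrow intervals
```

With wide intervals the same pair of trends overlaps (no significant difference suggested), while with narrow intervals it does not, which is the point the caption is making about within-group comparisons.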
Figure 11:   Trends (deg. K/decade) in global temperature for 1958–97 for three atmospheric layers, 100–50 hPa (top), 300–100 hPa (middle), and 850–300 hPa (bottom), in four regions, from radiosonde datasets (left side), and for 1979–97 for three layers, MSU4 (top), MSU2 (middle), and MSU2LT (bottom), in four regions, from MSU/AMSU and radiosonde datasets. Confidence intervals shown are ±1 standard error estimates. HadRT data are for the HadRT2.1 release. From Seidel et al. (2004).
Figure 12:   Temperature trends for 1979–2001 for three vertical layers, MSU4 (top), MSU2 (middle), and MSU2LT (bottom), in four regions, from MSU/AMSU and radiosonde datasets. Confidence intervals shown are ±1 standard error estimates. HadRT data are for the HadRT2.1 version. From Seidel et al. (2004).
Figure 13:   The corrected MSU Channel 2 weighting function derived by Fu et al. (2004) compared with the uncorrected MSU2, MSU4, and 2LT/TLT channels (Christy et al., 2003; Mears et al., 2003). Whereas the actual Channel 2, 4, and TLT functions are everywhere positive, as required for real weighting, the Fu et al. function goes negative above 100 hPa to remove stratospheric effects from the uncorrected MSU2 channel. Global average tropopause height is shown for comparison.
Figure 14:   Trends in monthly mean troposphere temperature anomalies for MSU Channel 2 without correction for stratospheric influence (top), and for the MSU-derived 850–300-hPa layer with correction (bottom). Trends are given for the globe, Northern Hemisphere (NH), Southern Hemisphere (SH), and tropics (30° N–30° S). Uncorrected UAH values are from Version 5.0 (Christy et al., 2003) and uncorrected RSS values are from Version 1.0 (Mears et al., 2003). Surface temperature trends for the same regions are also shown for comparison. From Fu et al., 2004.
Figure 15:   Components of space-time errors of surface air temperature (climatological annual cycle) simulated by Coupled Model Intercomparison Project Phase 2 (CMIP2) model control runs. Shown are the total errors, the global and annual mean error (“bias”), the total rms (“pattern”) error, and the following components of the climatological rms error: the zonal and annual mean; annual mean deviations from the zonal mean; the seasonal cycle of the zonal mean; and the seasonal cycle of deviations from the zonal mean. For each component, errors are normalized by the component’s observed standard deviation. The two left-most columns represent alternate observationally based data sets, ECMWF and NCAR/NCEP reanalyses, compared with the baseline observations (Jones et al., 1999). Remaining columns give model results: the ten models to the left of the second thick vertical line are flux adjusted and the six models to the right are not. From Covey et al. (2000) and the IPCC (2001).
Figure 16:   Second-order statistics of surface air temperature, sea level pressure and precipitation simulated by the Coupled Model Intercomparison Project Phase 2 (CMIP2) model control runs (Meehl et al., 2000). The radial coordinate gives the magnitude of total standard deviation, normalized by the observed value, and the angular coordinate gives the correlation with observations. It follows that the distance between the OBSERVED point and any model’s point is proportional to the rms model error. Numbers indicate models counting from left to right in Figure 15. Letters indicate alternate observationally based data sets compared with the baseline observations: e = 15-year ECMWF reanalysis (“ERA”); n = NCAR/NCEP reanalysis. From Covey et al. (2000) and the IPCC (2001).
Figure 17:   Trend-line maps of Surface Temperature, UAH Ver. D MSU 2LT, and R2-2m for 1979-1996 viewed from North Pole, Full World, and South Pole projections as reported in Douglass et al. (2004). Note that apart from polar regions (shown as colorless circles), cells where Surface Temperature data are missing are shaded dark blue, and are therefore indistinguishable from cells that show strong regional cooling. Taken from Douglass et al. (2004).
Figure 18:   Zonally averaged temperature trends for the period 1979-1996 from the Surface Record, MSU2LT, and the NCEP/NCAR 2-Meter Reanalysis as determined by Douglass et al. (2004) and plotted as a function of latitude. Taken from Douglass et al. (2004).
Figure 19:   Comparison of 10-yr mean (1979–88) zonally averaged albedo over ocean regions in the original NCEP/NCAR R-1 Reanalysis (dashed; Kalnay et al., 1996) and the R2-2m update (solid; Kanamitsu et al., 2002), shown as fractions of 1.0. Albedos increase significantly poleward of 60 deg. N or S latitude. Taken from Kanamitsu et al. (2002).
Figure 20a:   Change of annual-mean temperature profile in the GISS SI2000 AOGCM for the globe and Northern Hemisphere over the period 1979–1998 based on linear trends. Model results are for oceans A (left) and B (right), with five and six forcings as applied by Hansen et al. (2002). Surface observations are the land-ocean data of Hansen et al. (1999), with SSTs of Reynolds and Smith (1994) for ocean areas. The bars on the MSU satellite data (Christy et al., 2000) are twice the standard statistical error adjusted for autocorrelation (Santer et al., 2000b). Radiosonde profiles become unreliable above about the 100-hPa level. Twice the ensemble standard deviation is shown at three pressure levels for ocean B with six forcings. Taken from Hansen et al. (2002).
Figure 20b:   Change of annual-mean temperature profile in the GISS SI2000 AOGCM for the Tropics/Extratropics and Southern Hemisphere over the period 1979–1998 based on linear trends. Model results are for oceans A (left) and B (right), with five and six forcings as applied by Hansen et al. (2002). Surface observations are the land-ocean data of Hansen et al. (1999), with SSTs of Reynolds and Smith (1994) for ocean areas. The bars on the MSU satellite data (Christy et al., 2000) are twice the standard statistical error adjusted for autocorrelation (Santer et al., 2000b). Radiosonde profiles become unreliable above about the 100-hPa level. Twice the ensemble standard deviation is shown at three pressure levels for ocean B with six forcings. Taken from Hansen et al. (2002).
Figure 21:   The network of surface weather stations used by McKitrick and Michaels (2004) in their study of correlations of 1979-2000 surface temperature trends to parameterized climate, economic, and social factors. The stations were selected from GISS surface records (Hansen et al., 1999) and records from the Climate Research Unit, University of East Anglia. Taken from Michaels et al. (2004).
Figure 22:   Global, land based average surface temperature trends with and without economic and social influences, and their associated standard deviations, as reported by McKitrick and Michaels (2004). Taken from Michaels et al. (2004).
Figure 23:   Global, land based average surface temperature trends with and without economic and social influences, and their associated standard deviations, as reported by McKitrick and Michaels after correction of erroneous latitude inputs to their original SHAZAM regression run. Taken from McKitrick and Michaels (2004b).
Figure 24:   Results of regression analyses with 5 different models designed to reproduce the modeled results of McKitrick and Michaels (2004) to test their derived correlations of surface temperature trends with economic, social, and climatic variables. One model was run using all of McKitrick and Michaels’ data and the remaining 4 were run using various subsets of their dependent variables. Each model run shown used data from stations within the latitude range 75.5° S to 35.2° N for calibration and stations in the latitude range 35.3° to 80.0° N and corresponding dependent variables for prediction and evaluation. The trend estimates shown are in deg. K/decade. Taken from Benestad (2004).
Figure 25:   The figure used by Roy Spencer at Tech Central Station (May 5, 2004 – his Figure 1) to dispute the results of Fu et al. (2004), modified to reflect my wording rather than his. Weighting functions for MSU TLT (“Spencer & Christy”), TMT (Ch. 2), and TLS (Ch. 4) from UAH Version 5.0 (Christy et al., 2003) are shown with the effective weighting function of Fu et al. (2004) for the free troposphere (850-300 hPa layer). Spencer claimed that the area shown in red aliased a spurious cooling into the free troposphere trend. Taken from Spencer (2004).
Figure 26:   Figure 25 modified to reflect the layers being detected and trended by MSU2. The region shown in orange is the free troposphere (850-300 hPa layer), the light blue region reflects the tropopause and lower stratosphere, and the red region reflects the surface affected layer. MSU2 measures the entire shaded region, but the layers shown in orange and light blue are known to have differing trends during the satellite era. Adapted from Spencer (2004).
Figure 27:   Figure 25 modified to reflect the layers being detected and trended by the effective weighting function of Fu et al. (2004). The region shown in dark blue reflects the third term in equation 2 and reflects the coefficient weighted MSU4 trend. The combined area shaded in light orange, dark orange, and dark blue is representative of the equation 2 combined trend and is effectively the actual trend of the free troposphere (850-300 hPa layer), shown here in light orange. The red region reflects the surface affected layer. Adapted from Spencer (2004).
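The equation 2 combination that Figures 26 and 27 illustrate can be sketched numerically: the Fu et al. (2004) method estimates the 850-300 hPa trend as a linear combination of the MSU2 and MSU4 trends, with a negative MSU4 coefficient subtracting the stratospheric contamination. The coefficients below are approximately the global-mean values published by Fu et al.; the channel trends fed in are hypothetical.

```python
# Sketch of the Fu et al. (2004) linear combination: the free-troposphere
# (850-300 hPa) trend is estimated from the MSU2 and MSU4 channel trends.
# Coefficients are approximately the published global-mean values; the
# input trends below are hypothetical, for illustration only.

A2, A4 = 1.156, -0.153  # regression coefficients for MSU2 and MSU4

def free_troposphere_trend(trend_msu2, trend_msu4):
    """Estimated 850-300 hPa trend (K/decade) from channel trends."""
    return A2 * trend_msu2 + A4 * trend_msu4

# Hypothetical inputs: modest MSU2 warming, strong MSU4 (stratospheric) cooling
t_ft = free_troposphere_trend(0.10, -0.40)
```

Because the MSU4 trend is strongly negative during the satellite era, the negative A4 coefficient adds back the warming that stratospheric cooling bleeds out of the raw MSU2 signal, which is exactly the effect the blue-shaded region in Figure 27 represents.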
Figure 28:   Tropical (30° S to 30° N latitude) temperature trends (deg. K/decade) for the period 1978-2002 as derived by Tett and Thorne (2004) using the Fu et al. (2004) method and data from radiosonde, reanalysis, and model run products. For the non-satellite data sets, static weighting functions were used to estimate MSU2 and MSU4 equivalents. Tfjws is the free troposphere trend they derived for each data set using the Fu et al. published coefficients applied to the T2 and T4 data. All datasets were zonally averaged and cosine-weighted, and least-squares estimates of the linear trends were computed from annual means. Indian data were removed from the HadRT2.1s analysis. Also shown are the logarithms of the pressure-weighted 850–300 hPa temperatures and the pressure-weighted 1,000–100-hPa temperatures. The RMS of the annual-mean differences between those trends and Tfjws is shown in brackets. Surface trends are from data averaged over land and ocean. For ERA-40, 2-meter temperatures were used over land and sea surface temperatures over the oceans. Surface temperatures from HadCRUT2v were used for RSS, UAH and HadRT2.1s. For the two model ensembles, the average, largest and smallest trends are shown. The difference between largest and smallest gives an indication of uncertainty in the ensemble average. The coupled (HadCM3) and atmosphere-only (HadAM3) simulations differ in their forcings, with the main differences being a correction of an error in ozone loss and changes to the sulphur cycle in the HadAM3 simulations. The HadAM3 (HadCM3) ensemble consists of six (four) simulations. Taken from Tett and Thorne (2004).
Figure 29:   The free troposphere weighting function of Fu and Johanson (2004) for the tropics (30 deg. S to 30 deg. N Latitude) compared to the corresponding weightings for MSU2 and 2LT/TLT. Taken from Fu and Johanson (2004).
Figure 30:   Modeled and observed vertical trend profiles for the tropics (30 deg. S to 30 deg. N latitude) as reported by Douglass et al. (2004b). HadCM3 trends are for 1975-1995, DOE PCM trends are for 1979-1999, and GISS SI2000 trends are for 1979-1998. The MSU trend (single point) gives TLT data from UAH Version D (Christy et al., 2000) truncated to 1996. The surface trend (single point) is from Jones et al. (1999). The NNR profile is for the NCEP/NCAR 2-Meter Reanalysis (Kistler et al., 2001). MSU, surface, and NNR trends are for the period 1979-1996. Taken from Douglass et al. (2004b).
Figure 31:   Simulated trends in global-mean free-tropospheric temperature as derived by Gillett et al. (2004) using the Fu et al. method applied to 1958-1999 results from the DOE PCM coupled AOGCM. Black crosses are 850-300 hPa layer trends in each of four realizations of an experiment with anthropogenic and natural forcing. Asterisks indicate free-tropospheric temperature trends reconstructed from synthetic MSU2 and MSU4 trends using the method of Fu et al. These are calculated using three different sets of regression coefficients, which are derived from radiosonde observations by Fu et al. (pink asterisks), estimated from the PCM experiments (dark blue asterisks), and obtained directly from the MSU2 and MSU4 weighting functions (light blue asterisks). Red crosses show simulated trends in MSU2; green crosses, simulated trends in TLT. The simulated trend in MSU4 is -0.36 +/- 0.03 deg. K per decade. The model’s surface warming over 1890–1999 (0.62 °C) is consistent with that observed. Taken from Gillett et al. (2004).
Figure 32:   Global mean annual mean temperature trends from GISS SI2000 for (top) 1958–98 and (bottom) 1979–98 based on linear trends. Model results are for oceans A, B, and E with (a) five forcings and (b) six forcings. Radiosonde data are from HadRT2.0 and HadRT2.1 (Parker et al., 1997). The surface observations (green triangles) are the land–ocean data of Hansen et al. (1999) with SST of Reynolds and Smith (1994) for ocean areas. The green bars are MSU trends for Channels 2LT, MSU2, and MSU4 from UAH Ver. D (Christy et al. 2000). Error bars reflect 2-sigma confidence intervals adjusted for autocorrelation (Santer et al. 2000b). Taken from Sun and Hansen (2003).
Figure 33:   Transient responses of MSU 2LT, MSU2, and MSU4 layer temperatures, and global oceanic heat content anomalies in GISS SI2000 using the Oceans A, B, and E component models. Observed MSU layer temperatures are from UAH Ver. D (Christy et al., 2000). Results on the right employ six forcings, while those on the left exclude tropospheric aerosol changes. Taken from Sun and Hansen (2003).

