Climate Change & Tropospheric Temperature Trends

Part I: What do we know today and where is it taking us?

Objections

It is evident from the previous discussions that since the Year 2000 report of the National Research Council on upper-air trends (NRC, 2000) much progress has been made toward addressing the problems discussed in this paper. The last few years have seen a steady erosion of support for a long-term disparity between surface and troposphere temperature trends that is both robust and unexplainable in terms of known mechanisms. Support has also diminished for the belief that such a disparity, even if it does exist, disproves anthropogenic climate change. Yet many questions remain, and the issue of troposphere temperature trends from satellite and radiosonde products continues to be controversial. Some have challenged the more recent developments in this field, arguing that the upper-air record still shows an irreconcilable disparity with the surface that argues against anthropogenic climate change. The large majority of these criticisms can be summarized in one or more of the following claims:

  1. UAH MSU/AMSU products, which continue to show lower trends than other MSU products, have been independently confirmed by the radiosonde record and are therefore more reliable than other satellite products – particularly those of RSS.
  2. Apart from their agreement with the predictions of state-of-the-art AOGCM’s, there is no valid reason to favor RSS MSU/AMSU analysis products over those of UAH.
  3. State-of-the-art AOGCM’s continue to disagree substantially with the best characterizations of the upper-air record and are therefore unable to demonstrate an anthropogenic “fingerprint” on climate change.
  4. The Fu et al. method cannot account for perceived surface-troposphere disparities because its adjusted MSU2 weighting function overcorrects for the stratospheric contribution and aliases a spurious warming into the free troposphere trend. Critics have argued that this is largely because the method relies on statistical trend evaluation to characterize stratospheric MSU2 contributions in terms of MSU4, and (it is claimed) this cannot be done reliably on a regional or global scale for the entire satellite era. (The form of this channel combination is sketched just after this list.)

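For reference, the Fu et al. (2004) method referred to in point 4 combines the two MSU channels linearly, estimating statistically how much of the channel 2 signal originates in the stratosphere and can therefore be represented by channel 4. In schematic form (the coefficients below are approximate global values quoted only to illustrate the structure of the correction, not to reproduce the published regression):

\[ T_{FT} \;\approx\; a_2\,T_2 + a_4\,T_4, \qquad a_2 \approx 1.15, \quad a_4 \approx -0.15 \]

The negative weight on channel 4 subtracts the stratospheric cooling that bleeds into channel 2. The criticism summarized in point 4 is, in effect, the claim that a statistically derived a_4 subtracts too large a multiple of a cooling stratospheric trend, which would add a spurious warming to the derived free-troposphere trend.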
While these criticisms have received most of their coverage from forums outside of the peer-review process (e.g. think tank and advocacy group publications), they have received noteworthy attention within the scientific community as well. Douglass, Singer, and Michaels (2004; 2004b) argue for the first three points in two papers published by Geophysical Research Letters in July of 2004. The last point has been argued mainly in scientific conferences, and most recently in a paper published in Nature in December 2004 (Tett and Thorne, 2004). As they relate to the strengths and weaknesses of the upper-air record in general, these points have already been discussed. But the specific criticisms that have appeared in recent journal publications and conference settings will now be addressed.

Models and the Troposphere - Santer et al. (2003)

When RSS Version 1.0 was first made public in early 2003 it attracted immediate attention because it was the first new MSU analysis product to treat that record with the same level of detail as the pioneering UAH products. Like those products, it addressed all currently known sources of error, improving on the characterization of some of them, and incorporated more recent data than the extant UAH product at that time (Version D – Version 5.0 was published later that year). But unlike UAH products, it predicted satellite-era troposphere temperature trends that were noticeably higher, and roughly consistent with those of Prabhakara et al. (2000). RSS published their full analysis product later that same year in Journal of Climate (Mears et al., 2003). These results were consistent with the predictions of state-of-the-art AOGCM’s. Ever since, there has been lively, and at times heated, debate as to whose analysis product is more accurate. Some have even gone so far as to claim that the RSS team had cooked their analysis to justify the surface record and AOGCM predictions.

In spring of 2003, a team led by Ben Santer of the Lawrence Livermore National Laboratory that included Carl Mears, Frank Wentz, and Matthias Schabel of RSS published a paper in the journal Science that compared results from a state-of-the-art AOGCM with RSS and UAH analysis products to see how well the results of either could be accounted for by the latest model improvements. Santer’s team compared four runs of the Dept. of Energy’s Parallel Climate Model (PCM) with MSU data from RSS Version 1.0 and UAH Version D (Christy et al., 2000) to see if either MSU product could reproduce an anthropogenic “fingerprint” that was visible in PCM. This model, which is described in Washington et al. (2000), is a coupled land, ocean, atmosphere, and sea-ice model that does not use flux corrections at component interfaces. The atmospheric and land components are taken from NCAR’s Version 3 Community Climate Model (CCM3) and Land Surface Model (LSM). CCM3 is the same atmospheric model that RSS used to characterize their diurnal correction, and its reliability for diurnal behavior has already been seen (Figure 9). The ocean and sea-ice components are taken from the Los Alamos National Laboratory Parallel Ocean Program (POP) and a sea-ice model from the Naval Postgraduate School. In PCM, these components are tied together with a flux coupler that interpolates between the component model grids in a manner similar to that used in the NCAR Climate System Model (CSM). Grid resolution varies from ½ deg. at the equator to 2/3 deg. near the North Atlantic. The atmospheric component (CCM3) uses 32 vertical layers from the surface to the top of the atmosphere. In various experiments PCM has reproduced observed global surface temperature behavior very reliably (see Figures 40 and 41), has produced stable, well-characterized results for a broad range of forcings, and has done an excellent job of capturing ENSO and volcanic effects as well.

Santer’s team ran four realizations of the “ALL” PCM experiment which makes use of well-mixed greenhouse gases (including anthropogenic greenhouse gas emissions), tropospheric and stratospheric ozone, direct scattering and radiative effects of sulfate and volcanic aerosols, and solar forcing (Ammann et al., 2003; Meehl et al., 2003). All used identical forcings but differing start times. Simulated MSU temperatures were derived from global model results by applying MSU Channel 2 and 4 weighting functions to the PCM output across its 32 vertical layers, and these were then compared with UAH and RSS analysis products. The goal was to see if an anthropogenic fingerprint on global tropospheric temperature trends could be detected in either of the two MSU products. First, the model was “fingerprinted” using standard techniques (Hasselmann, 1979; Santer et al., 1995) to see if observational uncertainties had a significant impact on PCM’s consistency. Internal climate noise estimates (which are necessary for fingerprint detection experiments) were obtained from PCM and the ECHAM/OPYC model of the Max-Planck Institute for Meteorology. The anthropogenic fingerprint on climate change was taken to be the first Empirical Orthogonal Function (EOF), Φ, of the mean of the four ALL runs of PCM. Then, increasing expressions of Φ were sought in UAH and RSS analyses in an attempt to determine the length of time necessary for it to be detected at a 5 percent statistical significance level in both observational records (Santer et al., 2003).
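The two operations described here – vertically weighting model layers to mimic an MSU brightness temperature, and extracting the leading EOF of the ensemble mean as the fingerprint Φ – can be illustrated with a minimal sketch. The array names and shapes, the plain SVD-based EOF, and the normalization choices below are placeholders for illustration; they are not PCM output, the actual channel weighting functions, or the Santer et al. detection code.

    import numpy as np

    def simulated_msu(layer_temps, channel_weights):
        """Vertically weight model layer temperatures to mimic an MSU
        brightness temperature. layer_temps: (n_time, n_layers, n_lat, n_lon);
        channel_weights: (n_layers,), normalized here to sum to one."""
        w = np.asarray(channel_weights, dtype=float)
        w /= w.sum()
        # Contract the layer axis against the weighting function.
        return np.tensordot(layer_temps, w, axes=([1], [0]))

    def leading_eof(ensemble_mean):
        """First EOF (spatial pattern) of a (n_time, n_space) anomaly matrix,
        obtained from a singular value decomposition."""
        anom = ensemble_mean - ensemble_mean.mean(axis=0)
        _, _, vt = np.linalg.svd(anom, full_matrices=False)
        return vt[0]  # the fingerprint pattern, Phi

    def fingerprint_projection(obs_anoms, pattern):
        """Project observed anomalies (n_time, n_space) onto the fingerprint.
        Detection then asks whether the trend of this projection series
        rises out of the control-run noise."""
        return (obs_anoms - obs_anoms.mean(axis=0)) @ pattern

In the mean-removed variant of the comparison, the spatial mean of each field is also subtracted at every time step before the projection, which is what makes detection depend on the spatial pattern of change rather than on the global mean trend that separates the UAH and RSS products.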

They found that a clear MSU Channel 2 anthropogenic fingerprint was consistently detected only in the RSS dataset. This is not surprising, as the RSS team found consistently warmer Channel 2 trends than UAH. What is more noteworthy is that this was true only for the mean-included comparisons. When the means were removed from both datasets, the fingerprint was clearly visible at the 5 percent level in 6 out of 8 cases for both the RSS and UAH analyses – a consequence of the fact that PCM captures the observed equator-to-pole temperature and trend gradients quite well, and these are in turn manifested in Φ. The team concluded that the main differences in the ability of the RSS and UAH products to express the fingerprint were due to the large global mean and trend differences between the two, and these were in turn likely to be due to uncertainties in how each was analyzed. Santer’s team correctly concluded that,

“Our findings show that claimed inconsistencies between model predictions and satellite tropospheric temperature data (and between the latter and surface data) may be an artifact of data uncertainties.”

(Santer et al., 2003)

This is exactly what we would expect. We saw earlier that nearly two thirds of the trend discrepancy between the UAH and RSS analyses is related to the differing methods each team used to characterize IBE and do their merge calculations, and to a lesser extent, their differing methods of smoothing and diurnal drift correction. Since detection of the anthropogenic fingerprint characterized by Φ depends on this difference, it would not be surprising if the difference between detection and non-detection is the result of data and/or data-processing uncertainties. The fact that the mean-removed analyses of both teams do capture the fingerprint demonstrates the ability of PCM and its component models to capture real tropospheric and surface effects.

Even so, some have claimed that these results are a self-fulfilling prophecy. PCM and RSS Version 1.0, it is argued, were used to justify each other, and the RSS product has been preferred because of that agreement rather than its own merits as an observational analysis – Point 2 above (Christy, 2003). But the criticism does not bear scrutiny. In fact, Santer’s team did not use PCM to determine the accuracy of UAH or RSS products. They compared simulated MSU Channel 2 observations from PCM with the corresponding records from UAH Version D and RSS Version 1.0 to see if an anthropogenic fingerprint on global warming could be detected in either. They found that an anthropogenic fingerprint, as characterized by the first empirical orthogonal function of the PCM runs, can be detected in both products, but that it is observable in UAH Version D only after removing global mean values from the dataset. From this observation they concluded that both products likely capture an anthropogenic fingerprint, and that the difference between the two products is largely a matter of how each team handled data uncertainties. Examinations of the merge methodologies of each team, their smoothing methods, and their characterizations of IBE and diurnal correction have already verified this independently of PCM. Furthermore, the fact that an anthropogenic fingerprint can be found in each product by the Santer team’s methodology demonstrates the ability of PCM to reproduce many of the temporal and geographical patterns inherent in real temperature trends, showing that one MSU product is not likely to be more physically consistent with PCM than the other.

Douglass, Singer & Michaels (2004)

In July of 2004, David Douglass, S. Fred Singer, and Patrick Michaels led teams that published two papers in Geophysical Research Letters in which they claim to have demonstrated that there is a clear disparity between surface and lower troposphere temperature trends (Douglass et al., 2004), and that current state-of-the-art AOGCM’s cannot accommodate it (Douglass et al., 2004b). In the first of these papers Douglass et al. (hereafter, DEA) use MSU data, radiosonde data, and a reanalysis product applied to the period of 1979 to the present to argue that the disparity exists and that it cannot be accounted for by any known tropospheric dynamics. To do this, they start with global surface temperature data from Jones et al. (2001). These are monthly anomalies with respect to the 1961-1990 average of global surface air temperatures over land, and below-surface water temperatures for oceanic regions, represented within 5 deg. by 5 deg. grid cells. This record is then compared with lower troposphere trends taken from UAH Version D MSU2LT data (Christy et al., 2000) and data from a new “2-meter” temperature product (R2-2m) derived from an updated version of the National Centers for Environmental Prediction - National Center for Atmospheric Research (NCEP/NCAR) Reanalysis (Kanamitsu et al., 2002; Kalnay et al., 1996). The latter is selected for its consistency and completeness between the surface and 850 hPa layers, and because it is (they argue) a dataset that is independent of both the MSU record and the radiosonde products that have been used to date for tropospheric intercomparison studies (Christy et al., 2000; 2003; 2004; Seidel et al., 2003, 2004; Angell, 2003). In the second paper (2004b), they compare results from 3 AOGCM’s with surface temperature trends similar to those used in the first paper (but taken from Jones et al., 1999 rather than 2001), MSU2LT data from UAH Version D (Christy et al., 2000), radiosonde data from HadRT2.0 (Parker et al., 1997), and 50-year results from the NCEP/NCAR Reanalysis (Kistler et al., 2001). From these datasets they argue that the models, which represent the current state of the art in AOGCM’s, cannot account for the observed troposphere and surface temperature trends.

DEA based their conclusions on several claims, but upon closer inspection each one rests on problematic treatments of the data they cite. First, they argue that the trend from satellite and radiosonde products is significantly less than that of the surface, with exact values depending on both the choice of dataset and analysis methodology (Douglass et al., 2004). This is only true of UAH products. Trends from RSS, Prabhakara et al., and Vinnikov and Grody differ considerably, spanning the range from full agreement with the surface record to significant disagreement. They do mention Vinnikov and Grody (2003) and Fu et al. (2004) in reference to the MSU record (RSS Version 1.0 was not addressed) but base their MSU trends only on UAH Version D, which they claim is the only extant MSU product that is validated by the radiosonde record (Douglass et al., 2004).

Closer inspection reveals that the support for this claim is driven entirely by the datasets and time frames they have chosen for their comparison. The MSU and radiosonde records chosen just happen to be the ones that are closest in agreement for the period 1979 to 1996 and low in trend – UAH Version 5.0 truncated to 1996 (Christy et al., 2003) and the LKS radiosonde product (Lanzante et al., 2003). Figure 33 shows tropospheric temperature trends for UAH, RSS, LKS and HadRT2.1 for 3 layers and by global region (Seidel et al., 2004). It can be seen that there is very good agreement between LKS and both UAH products for MSU2, though regionally the confidence intervals are large enough to accommodate RSS outside of the southern hemisphere, for which the UAH-RSS discrepancy is largest. So, not surprisingly, the southern hemisphere contributes most to the discrepancy. HadRT2.1 shows significant disagreement with both. For MSU2LT, however, the LKS dataset shows noticeable discrepancies with UAH products, but agreement with HadRT2.1 is improved. In this case the largest regional discrepancy is again with the southern hemisphere, where LKS now shows more warming. This is particularly significant, as it is in this region that we expect the 2LT product to be most impacted by Antarctic sea-ice and summer melt pools (Swanson, 2003). Thus, even though UAH median trend estimates tend to be closer to comparably adjusted radiosonde products, agreement varies significantly by layer and region, and confidence intervals tend to be large.

When we extend the record another 4 years the picture changes yet again. Figure 34 shows the same troposphere temperature trends by layer and region as Figure 33, but for 1979-2001. In this case, UAH and RSS products are compared with HadRT2.1 (the LKS record ends in 1997). Now we see that both UAH and RSS products are in relatively good agreement with each other, and both disagree with HadRT2.1 globally and in all regions except the southern hemisphere, where UAH products are closer to HadRT2.1 than RSS is. For the MSU2LT layer, both UAH products and HadRT2.1 agree well, but the confidence intervals for each are as large as the trends being measured (Seidel et al., 2004). It is worth noting that until 1997, LKS trends in all regions and globally were consistently warmer than their HadRT2.1 counterparts. Given the 1997-98 El Nino and its impact on all trends, it would have been surprising if this had not continued had LKS been extended to 2001.

Once again, we see that agreement depends on layer and region, and confidence intervals tend to be large in comparison to the trends being measured. This is particularly true of the 2LT layer that is of most interest to DEA. Furthermore, which layers agree, and to what degree, appears to be strongly driven by the length of record being examined. DEA did not address the issue of limited radiosonde coverage, particularly in regions such as the southern oceans that have the most impact on differences between UAH and RSS trends. Nor did they address the issue of Antarctic sea-ice and melt pool impacts, which are of particular importance for the lower troposphere 2LT trends that they are most concerned with. These factors are significant and cannot be ignored in MSU/radiosonde comparisons.
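The point about confidence intervals can be made concrete. A common way to attach an uncertainty to the trend of a monthly anomaly series is to inflate the ordinary least-squares standard error for serial correlation through an effective sample size based on the lag-1 autocorrelation of the residuals. The sketch below assumes simple AR(1)-like noise and a rough two-sigma multiplier; it is a generic illustration, not the exact procedure used in any of the papers cited above.

    import numpy as np

    def trend_and_ci(monthly_anoms, z=2.0):
        """Least-squares trend of a monthly anomaly series, in units per decade,
        with an approximate confidence half-width. The standard error is
        inflated for lag-1 autocorrelation of the residuals via an effective
        sample size (an AR(1)-style adjustment)."""
        y = np.asarray(monthly_anoms, dtype=float)
        n = y.size
        t = np.arange(n) / 120.0                   # time in decades
        slope, intercept = np.polyfit(t, y, 1)
        resid = y - (intercept + slope * t)
        r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
        n_eff = n * (1.0 - r1) / (1.0 + r1)        # effective sample size
        resid_var = np.sum(resid ** 2) / (n_eff - 2.0)
        se = np.sqrt(resid_var / np.sum((t - t.mean()) ** 2))
        return slope, z * se                       # trend and half-width

For trends of order a tenth of a degree per decade and the autocorrelations typical of monthly tropospheric anomalies over a couple of decades, the half-width returned by a calculation like this is frequently comparable to the trend itself, which is the situation described above for the 2LT layer.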

The chosen time frame for their study (1979-1996) raises questions as well. DEA state that,

“Since we wish to examine the disparity in the temperature trends among these three datasets, we limit our analysis to a common observational time series. The starting point in our analysis will be 1979, which is the beginning year in both the R2-2m and MSU data. We truncate the analysis at December 1996 which avoids the snow cover issue in R2-2m. This also avoids the anomalously large 1997 El Nino event in the tropical Pacific which Douglass and Clader [2002] showed can severely affect the trend-line. We will show later in this paper that it is likely that our conclusions would change little had we been able to use data through 2003.”

(Douglass et al., 2004)

In other words, even though the extant MSU records from both UAH and RSS extend to the present, DEA considered only the first 18 years, leaving out nearly one third of the record. Their stated reasons were to exclude a known issue with snow cover contamination in R2-2m and the ENSO event of 1997, but these arguments are unconvincing. There were at least 4 other ENSO events during the satellite era (1982-83, 1986-87, 1991-92 and 1994-95). The 1982-83 event was one of the largest of the 20th century and occurred during the tropospheric/stratospheric impact of the El Chichon eruption (see Figures 20-22). None of these was excluded, even though the 1982-83 event was almost as large as the 1997 event. Furthermore, there is at least some evidence that a relationship may exist between global warming and ENSO events, particularly their frequency (Meehl and Washington, 1996; Knutson et al., 1997; Timmermann et al., 1999; Collins, 2000). Though the jury is still out on this (Zhang et al., 1997; Knutson et al., 1997; Boer et al., 2000), there is enough evidence of a possible relationship between the two that ENSO events cannot be excluded prima facie from upper-air climate change studies. Likewise, avoiding the snow cover issue is also unconvincing, as the MSU2LT record is impacted by this as well, particularly in those regions where UAH and RSS products differ significantly (Swanson, 2003). Even if neither of these were an issue, we are still left with an analysis of only about two thirds of the relevant upper-air record being used to evaluate products that cover the entire period.
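The sensitivity of a trend to the chosen end point can also be checked mechanically by recomputing the trend of the same anomaly series truncated at successive end years. The sketch below assumes a 1-D monthly series beginning in January of the start year; the series itself and the year range are placeholders.

    import numpy as np

    def trends_by_end_year(monthly_anoms, start_year=1979,
                           first_end=1992, last_end=2003):
        """Least-squares trend (units per decade) of a monthly anomaly series
        that begins in January of start_year, recomputed with the record
        truncated at the December of each candidate end year."""
        y = np.asarray(monthly_anoms, dtype=float)
        trends = {}
        for end in range(first_end, last_end + 1):
            n = (end - start_year + 1) * 12        # months through Dec of `end`
            t = np.arange(n) / 120.0               # time in decades
            trends[end] = np.polyfit(t, y[:n], 1)[0]
        return trends

A table of this kind, applied to a 2LT series beginning in 1979, is essentially what the end-year comparison discussed in the next paragraph tabulates, and it makes the dependence of DEA's result on the 1996 cutoff easy to see.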

The truncated time period is also noteworthy in one other respect. DEA specifically compare lower troposphere trends as determined by UAH MSU2LT products with surface and upper-air trends from other records. The online community encyclopedia Wikipedia (www.wikipedia.com) discusses the MSU record and presents a table that shows troposphere trends from UAH products vs. record ending year from 1992 through 2003 (Wikipedia, 2004). A check of this table reveals that the year at which DEA’s analysis ends, 1996, is the last year for which UAH 2LT products show a negative lower troposphere temperature trend. This is interesting because the claim that the troposphere has cooled during the satellite era has been a popular one in many forums. At face value, DEA’s choice of record length supports this claim. But any and all record extensions beyond 1996 yield a warming trend that in the last several years has progressed toward a restoration of long-term agreement with the surface record. Thus, by limiting the period of their analysis DEA has,



