Climate Change & Tropospheric Temperature Trends

Part I: What do we know today and where is it taking us?

Because it relies only on bulk temperature comparisons, the UAH method does not use individual station metadata, so it is less likely to be affected by the changes in record-keeping practices that plague many sonde station histories. By the same token, however, it forgoes whatever consistent and reliable station metadata does exist, and thus lacks important input from independent sources. Like HadRT, this method is strongly tied to the UAH Version D MSU data and is therefore subject to all the same limitations, including limited usefulness for independently evaluating MSU products. A more serious problem is that the UAH sonde product for Version D was based on a network of 97 U.S.-controlled stations covering North America, Bermuda, Iceland, and the western Pacific. This network was chosen because it has an unusually high degree of consistency in equipment and methods, making these stations more reliable than most (Christy et al., 2000; Luers & Eskridge, 1998). But a price is paid in limited global coverage. We will see later that coverage of this network is sparse to non-existent in those global regions that are most important for comparing the relative reliability of different MSU/AMSU products, particularly UAH vs. RSS.

LKS

Despite good global coverage and robust methods, the HadRT and UAH products depend on MSU datasets for detection and correction of sonde time series. This limits their usefulness as independent validation of MSU products, particularly UAH MSU products. In addition, because the MSU record only goes back to 1979, these methods offer no insight into the sonde record prior to that time – nearly half of the extant record. Angell 54 does not rely on MSU comparisons and is thus truly independent. But with only 54 stations globally, it has less coverage than the HadRT products and also relies on data from many weather stations that are less reliable than those used by UAH. For independent validation of the MSU record, what is needed is a reliable sonde analysis product that has truly global coverage, particularly in the Southern Hemisphere and tropics, and that effectively identifies and corrects for anomalous discontinuities in historical sonde data without referencing any MSU product.

In 2003, John Lanzante and Steven Klein of the NOAA Geophysical Fluid Dynamics Laboratory in Princeton, NJ, and Dian Seidel of the NOAA Air Resources Laboratory in Silver Spring, MD took the first major step in this direction when they published a sonde analysis product based on a global network of 87 stations that had been selected, and given the needed data adjustments, using criteria that did not involve MSU products (Lanzante et al., 2003). It has been shown that statistical methods alone can identify many abrupt discontinuities in sonde datasets, and that historical weather station records are often not adequate to identify the causes of these discontinuities (Lanzante, 1996; Gaffen et al., 2000a). Lanzante, Klein, and Seidel (LKS) capitalized on this and other data to create a sonde analysis product that is independent of MSU products. Their analysis uses a global network of 87 sonde stations with coverage similar to that of the Angell 54 network but greater density. Figure 15 shows the global station network used. Within this network, each station’s record was painstakingly examined in detail, with special attention paid to the following:

  • Statistics and station metadata
  • 0000 UTC temperature measurements minus 1200 UTC measurements (where both were taken)
  • Temperatures measured at nearby levels at the same station
  • Temperatures predicted using statistical regression of existing measured temperatures and winds
  • Historical records of sonde launch times
  • The Southern Oscillation Index
  • Volcanic eruption history
  • Comparable temperature data from nearby stations
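The kind of purely statistical screening mentioned above can be illustrated with a minimal sketch. The example below (Python, with invented data; it is not the LKS procedure itself) scans a monthly temperature series for the single most likely mean shift by maximizing a two-sample t statistic across all candidate break points:

```python
import numpy as np

def detect_shift(x, min_seg=12):
    """Scan a monthly series for the most likely single mean shift by
    maximizing the Welch t statistic between the two segments.
    Returns (break_index, t_statistic)."""
    x = np.asarray(x, dtype=float)
    best_idx, best_t = None, 0.0
    for k in range(min_seg, len(x) - min_seg):
        a, b = x[:k], x[k:]
        # Standard error of the difference in segment means (Welch form)
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        t = abs(a.mean() - b.mean()) / se
        if t > best_t:
            best_idx, best_t = k, t
    return best_idx, best_t
```

Real break-point detection must also contend with trends, autocorrelation, and multiple breaks, which is why LKS combined statistical tests with metadata and the other diagnostics listed above.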

Each member of the team evaluated the records of all 87 stations on a case-by-case basis and made recommendations as to where data adjustments were needed and to what degree. Afterwards they met, compared results, and reached a consensus from which the final analysis product was created (Lanzante et al., 2003). Adjustments for identified discontinuities in each record were made at specific pressure levels (altitudes) by extrapolating from time series taken at nearby reference pressure levels at the same station. The reference levels were chosen so that they either required no correction or had already been corrected, guaranteeing their reliability as a reference for other changes. If no suitable reference data were available, an adjustment was made by interpolating from data before and after the discontinuity (Lanzante et al., 2003). This method, though thorough, is quite labor intensive and so does not lend itself to very large networks; as a result, it lacks the coverage of HadRT. But it is also the only record currently used for independent MSU comparison studies that is truly independent of MSU data.
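The reference-level idea can be sketched numerically. In this illustration the series, break date, and window length are all hypothetical, and the actual LKS adjustments were made by expert judgment rather than a single formula; the sketch only shows how a spurious step at a target level can be estimated from the difference against a trusted reference level and removed:

```python
import numpy as np

def adjust_with_reference(target, reference, break_idx, window=60):
    """Adjust a step discontinuity at break_idx in `target` using a
    trusted `reference` series from a nearby pressure level.
    The difference series target - reference should be stable apart
    from the spurious jump, so the adjustment is the change in its mean."""
    diff = target - reference
    before = diff[max(0, break_idx - window):break_idx]
    after = diff[break_idx:break_idx + window]
    jump = after.mean() - before.mean()
    adjusted = target.copy()
    adjusted[break_idx:] -= jump  # remove the estimated spurious step
    return adjusted, jump
```

Differencing against the reference level removes the shared climate signal, which is what lets the method distinguish an instrumental step from a real change affecting both levels.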

RIHMI

Alexander Sterin (1999) of the All-Russian Research Institute of Hydrometeorological Information (RIHMI) in Obninsk, Russia has prepared another upper air radiosonde analysis product that is based on a much broader base of data than those considered so far. Sterin used CARDS and telecommunicated data from a network of over 800 radiosonde stations worldwide and reanalyzed it into monthly gridded temperatures for two pressure layers: 850-300 hPa (troposphere) and 100-50 hPa (lower stratosphere). Initial quality control checks were done using the Complex Quality Check (CQC). Data that passed this test were then spatially processed into anomalies for the globe and for 3 latitude zones: the Southern Extra-tropics (90°S to 20°S), the Tropics (20°S to 20°N), and the Northern Extra-tropics (20°N to 90°N). Processing was done using an algorithm based on adaptive polynomial interpolation and sequential corrections (Reitenbach and Sterin, 1996) with mass weighting for the vertical layers. Adjustments were made for unobserved regions by direct interpolation from locations with verified data. The initial version of this product used only gross spatial and temporal consistency checks for quality control. Later versions applied improved quality control to the raw data and expanded the network to over 2500 stations worldwide (Sterin, 2001). This method has the advantage of a much larger network and better coverage than other sonde analysis products, and normalizing from a much larger dataset helps reduce the effects of data clustering. But RIHMI suffers from limited checking for inconsistencies in station metadata and historical discontinuities, as well as the added uncertainty introduced by interpolating across global regions where there is no data.
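The mass weighting applied when combining vertical levels into a layer average can be sketched simply: each pressure level is weighted by the thickness of the pressure interval it represents, which is proportional to the mass of air in that slice. The levels and weighting scheme below are illustrative, not RIHMI’s actual choices:

```python
import numpy as np

# Mandatory reporting levels (hPa) spanning the 850-300 hPa layer.
LEVELS = np.array([850.0, 700.0, 500.0, 400.0, 300.0])

def layer_mean(temps, levels):
    """Mass-weighted mean temperature of a layer: each level is weighted
    by the pressure thickness it represents, i.e. half the gap to each
    neighboring level (pressure thickness is proportional to air mass)."""
    edges = np.concatenate(([levels[0]],
                            (levels[:-1] + levels[1:]) / 2.0,
                            [levels[-1]]))
    weights = np.abs(np.diff(edges))
    return float(np.sum(weights * temps) / np.sum(weights))
```

With these levels the 500 hPa value carries the largest weight because it represents the thickest pressure slab, which is exactly the behavior simple unweighted averaging of levels fails to capture.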

UAH 2004

In March of this year the UAH team published an updated radiosonde/MSU comparison that addressed many of the difficulties associated with other comparison studies (Christy and Norris, 2004). Christy and Norris noted the various methodological differences between their MSU/AMSU products and those of RSS, as well as the southern hemisphere coverage issues with the independent radiosonde analyses that have been sought for MSU intercomparison studies. They also discussed the other issues that have plagued these studies, in particular solar heating, lag, incompleteness of record, and equipment variations such as the switch from Phillips or VIZ-B to Vaisala RS-80 radiosondes at many stations (Parker et al., 1997; NRC, 2000; Christy et al., 2000; 2003; Seidel et al., 2003; 2004; Angell, 2003). Many of these problems become more severe with increasing altitude (Parker et al., 1997; Gaffen et al., 2000a; Lanzante et al., 2003). The VIZ-B/Phillips to Vaisala evolution alone, for instance, can account for corrections of 1 to 3 deg. K per station in the lower stratosphere, which is larger than the trends being measured in this layer. In addition, variations in tropopause height have also affected radiosonde records at many stations (Angell, 2003). Though corrections have been necessary in the lower troposphere as well, they have generally been smaller.

Realizing this, Christy and Norris sought an independent MSU-radiosonde intercomparison study that focused on the lower troposphere alone and attempted to redress the southern hemisphere coverage issue. They constructed a time series of monthly lower troposphere temperature anomalies for the 271-month period of Jan. 1979 to July 2001 using a global network of 89 stations, shown in Figure 42. These stations were selected on the requirement that at least 60 percent of their monthly records could be generated from daily soundings. A subset of these 89 stations met a stricter requirement that 75 percent of their monthly records could be similarly generated. To determine the necessary adjustments to this network, a method similar to the UAH method described above was used, and the results were checked against the independent records of Durre et al. (2002), who constructed a similar record for the northern hemisphere, and of Peter Thorne of the Hadley Centre, who constructed an independent product based only on the records of neighboring stations.
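The completeness criterion amounts to a simple station filter. In this sketch the per-month minimum number of soundings is an invented placeholder; Christy and Norris’s exact per-month test may differ:

```python
import numpy as np

def station_passes(monthly_sounding_counts, min_per_month=20, frac=0.60):
    """Return True if at least `frac` of the months have enough daily
    soundings to generate a monthly value. The per-month threshold of
    20 soundings is illustrative only."""
    counts = np.asarray(monthly_sounding_counts)
    valid = counts >= min_per_month
    return bool(valid.mean() >= frac)
```

Raising `frac` to 0.75 would select the stricter subset described above.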

Christy and Norris’s Year 2004 intercomparison product goes a long way toward resolving many of the issues of earlier products, including an increased emphasis on southern hemisphere stations where other products have been weak, and a concentration on lower altitude data that avoids many of the problems of the more complete high altitude datasets. Even so, an examination of Figure 42 reveals that the increased southern hemisphere coverage is still primarily over land, emphasizing South America and Australia, and remains scant in the southern Pacific, where the largest RSS-UAH differences persist. In addition, while much has been done to enhance the completeness of record for the requisite 89 stations, incompleteness of record is still an issue, as are the difficulties in ensuring that all record discontinuities are captured.

Comparisons of Radiosonde Analysis Methods

In addition to accurate data, a reliable radiosonde time series requires an accurate assessment of the data’s history and gathering methods. It has already been noted that this is no easy task. Incomplete records, variations in equipment and methods that may or may not have been documented over the last 50 years, and a host of other complications make historical reconstructions challenging. Apart from their data reduction methods, the main differences among the radiosonde analysis products described above lie in the way they detect and correct for anomalies and/or gaps in datasets. There are two problems to be avoided. First, changes in the record unrelated to climate must be detected and accurately corrected for. Second, real climate signals must not be mistakenly identified as spurious. How well each of the current sonde analysis products does either is a matter of intense research.

In October of 2000, Dian Seidel of the NOAA Air Resources Laboratory and Tom Pederson of the National Climatic Data Center convened the CARDS Workshop on Adjusting Radiosonde Temperature Data for Monitoring at NCDC in Asheville, NC. At this workshop, representatives from several research centers discussed how well the current generation of sonde analysis products achieves these goals. Attendees divided into 7 groups to analyze the historical upper air sonde data from 12 stations worldwide. The 12 stations used in this exercise were chosen with an emphasis on countries with large networks and on high quality of metadata. Two of the stations, located in Australia, were also included because of known discrepancies between their results and those of the UAH Version D product at the same locations. Each team prepared an analysis of this network employing one of the established methods currently in use, including the methods discussed here. Some teams used daily data and others used monthly gridded data, as their differing methods required. The objective was to determine which methods were most reliable at capturing discrete changes in tropospheric and stratospheric temperature records, characterizing them as natural or anomalous, and accurately correcting for them where necessary. The methods and results of this exercise are discussed in Free et al. (2002). Figure 16 shows the results of each team’s characterization of temperature change events from the 12-station record. Included are the methods of LKS (denoted as GFDL), HadRT (denoted as Met Office), and UAH (results from 3 other methods tested at the Workshop are also presented, but are not considered here as they have not been used extensively for MSU comparisons). Identified change points are given per station and per investigating team/method.
Figure 17 shows in tabular form the percentage of instances where any 2 methods agreed on an event within a 6 month window (top) and the total number of events detected by each team/methodology.
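A pairwise agreement statistic of this kind can be computed with a small matching routine like the one below (a sketch only; the event lists are hypothetical month indices, and the exact matching rules used by Free et al. may differ):

```python
def agreement_fraction(events_a, events_b, window=6):
    """Fraction of change points found by method A that method B also
    detects within +/- `window` months. Events are month indices
    counted from the start of the record."""
    if not events_a:
        return 0.0
    matched = sum(any(abs(a - b) <= window for b in events_b)
                  for a in events_a)
    return matched / len(events_a)
```

Note that the measure is asymmetric: a method that flags many events will "agree" with a sparse method more easily than the reverse, which is one reason such comparison tables report both directions.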

It is evident from these results that agreement between the various methods was the exception rather than the rule. The average number of record change points captured per decade, real and anomalous, was highest for the UAH method and lowest for the LKS method. The total number of changes identified was considerably less than the number of known metadata events based on extant records. For instance, the National Climatic Data Center (NCDC) has records of 21 known changes in method and/or equipment after 1979, at 10 of the 12 stations, that significantly affect the data gathered. Yet out of these 21 changes, there were only 4 occasions where changes at any given station were reliably captured by all teams and methods. Agreement between any 2 methods averaged only about 50 percent. There was also little agreement on the pressure levels (altitudes) where temperatures needed adjusting, or on the size of the needed correction. In many cases, different methods yielded adjustments of opposite sign. One case in point was Darwin, Australia, where there is a fairly complete record. Figure 18 shows the change points identified by each team at all altitudes from the surface to 10 hPa for the Darwin record. The corrections applied by each team are given as colored triangles pointing upward for positive corrections and downward for negative ones. The size of each triangle is proportional to the magnitude of the correction it denotes. Figure 19 shows the uncorrected and corrected temperature trends for Darwin at 50 hPa (stratosphere) and 200 hPa (tropopause) for 4 of the methods studied. Despite the completeness of the metadata and records from Darwin, there is no point at which all 4 methods detect the same event. A significant discontinuity at 50 hPa that is given a large correction by HadRT and LKS yields only a small correction in NCDC, and is not detected by UAH at all.
Likewise, UAH detects at least 2 events that do not appear to have been accounted for in HadRT, LKS, or NCDC. Another large event in 1953 at 200 hPa is given a significant correction in LKS, but is not accounted for by either UAH or HadRT because it occurs prior to the beginning of the MSU record, on which both methods depend. It can also be seen that the magnitudes of the corrections for the troposphere, though not unduly large, are significant (at least 20 percent of the value), and those needed for the stratosphere are quite large. The HadRT stratospheric correction (1.94 +/- 1.42 deg. K) is over 3 times the size of the adjusted trend (while stratospheric trends are not directly relevant to troposphere temperature trends, they do affect the MSU Channel 2 signal, and must be considered).

These results demonstrate the difficulties associated with preparing radiosonde time series, and are in agreement with the results of similar studies (Santer et al., 1999; Gaffen et al., 2000a). Furthermore, it has also been shown that adjustments to sonde datasets can have a significant impact on confidence intervals as well as the derived trend. Even a single level shift in a dataset can increase the length of time series necessary to accurately derive a trend by as much as 50 percent (Weatherhead et al., 1998). Each of the methods discussed here has proven useful for independent checks of MSU records, even those like the HadRT and UAH methods that are not entirely independent of that record. But all have yielded mixed results, emphasizing the importance of ongoing efforts to improve access to weather station records and datasets. The National Research Council recently convened a Panel on Reconciling Temperature Observations to address these concerns as well as the issues already discussed regarding the MSU record. Their report (NRC, 2000) recommends that station metadata be updated and expanded for all stations worldwide, not just for the few that have been addressed to date. They also recommended that policies be put in place to regulate future changes in equipment and methods so as to guarantee continuity of records, and efforts are under way to put these recommendations into practice. For now, caution must be exercised when comparing separate radiosonde analyses with each other and with the MSU record (NRC, 2000; Free et al., 2002).

Discussion



