

Climate Change & Tropospheric Temperature Trends

Part II: A Critical Examination of Skeptic Claims
  • A model that used only their physical geographic data.
  • A model using only their geographic information and population data.
  • A model that used only their non-climatic factors (e.g., their economic, social, and Soviet data).
  • A model using only the variables that MM identified as significant.
    The results of Benestad’s five runs are shown in Figure 24. Though his full model run reproduced MM’s result quite well, none of the four models run with subsets of their explanatory variables was able to reproduce MM’s claimed signals. Furthermore, his model run using only MM’s economic, social, and Soviet variables produced a near-zero trend. Thus, McKitrick and Michaels have done little more than use a careful selection of data and some involved number crunching to generate exactly the economic and social signals they wanted, signals that well-known and reasonable independent tests cannot reproduce.
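
    Benestad’s subset runs amount to a standard robustness check for any multiple regression model: refit the model on subsets of its explanatory variables and see whether the claimed signal survives. The sketch below illustrates only the general procedure; the predictor names and data are synthetic stand-ins, not MM’s or Benestad’s actual inputs.

        import numpy as np

        # Minimal sketch of a subset-robustness check for a multiple regression.
        # All predictor names and data below are synthetic placeholders.
        rng = np.random.default_rng(0)
        n = 200
        predictors = {
            "latitude":   rng.uniform(-60.0, 60.0, n),
            "population": rng.lognormal(10.0, 1.0, n),
            "gdp_growth": rng.normal(2.0, 1.0, n),
        }
        trend = rng.normal(0.15, 0.05, n)      # synthetic "surface trend" response

        def r_squared(names):
            """Fit trend ~ chosen predictors by least squares and return R^2."""
            X = np.column_stack([predictors[k] for k in names] + [np.ones(n)])
            beta, *_ = np.linalg.lstsq(X, trend, rcond=None)
            resid = trend - X @ beta
            return 1.0 - resid.var() / trend.var()

        for subset in (["latitude"], ["latitude", "population"],
                       ["gdp_growth"], list(predictors)):
            print(subset, "R^2 =", round(r_squared(subset), 3))

        # If a claimed signal survives only under one particular combination of
        # predictors, that is a warning sign that it is an artifact of the model
        # specification rather than a real physical relationship.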

    One other point is worth noting. Benestad’s test of MM’s analysis was submitted to Climate Research in early August of this year (2004), almost three weeks prior to the discovery that MM had used degrees instead of radians in their latitude data. Benestad had already noticed that MM used degrees for their latitudinal inputs; aware that his own models required radians for this variable (as most similar models do for trigonometric quantities), he made sure his own runs were done properly (Benestad, 2005). Yet even so, he found little latitudinal influence on any of MM’s principal signal variables. At least two of his model runs made no reference to latitude. The remaining three used the same latitude data, in proper units, across differing combinations of other input parameters with varying results. Thus, the issues associated with MM’s model characteristics and the design of their input parameters appear to be separate from their use of bogus data, and they continue to plague their results even after those errors were corrected. The mere fact that MM’s results are sensitive to this input while other valid model approaches are not should be a warning sign in itself. At the very least, had MM’s analysis been robust to any acceptable degree, their results should have been reproducible via other proven methods.
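
    The practical consequence of feeding degrees into a model that expects radians is easy to see. The following is a minimal illustration only, not MM’s actual code:

        import numpy as np

        # Degrees-vs-radians pitfall: NumPy's trigonometric functions, like most
        # numerical libraries', expect radians.
        latitude_deg = 45.0

        wrong = np.cos(latitude_deg)               # treats 45 as 45 radians
        right = np.cos(np.radians(latitude_deg))   # convert degrees to radians first

        print(wrong)   # about 0.53 -- meaningless for a 45-degree latitude
        print(right)   # about 0.71 -- the intended value

    Any latitude-dependent term built on the "wrong" value above is effectively noise, which is why the error matters even when the rest of the arithmetic is carried out correctly.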

    McKitrick and Michaels responded to Benestad’s comments in the same edition of Climate Research (McKitrick and Michaels, 2004c). The only criticisms they could muster were that Benestad’s tests of their model (particularly his separation of their variables by latitude band for separate calibration and validation runs) were not commonly used in the refereed climate science literature, and that he had used the “worst” of their data to calibrate his runs. The first point is, of course, irrelevant. What Benestad tested was MM’s use of multiple regression methods to derive correlations from modeled data where certain correlations were expected. This has to do with statistical mathematics, not climate science per se, and Benestad’s methods are commonly used to test multiple regression models in the peer-reviewed literature of many fields. Benestad later responded to McKitrick and Michaels at the online weblog www.realclimate.org, stating that,

    “McKitrick and Michaels claim that I do not dispute their approach (i.e., multivariate regression using economic variables as potential predictors of surface temperature). That claim is both peculiar, and misses the point. A method is only valid when applied correctly. As described, above, [McKitrick and Michaels] failed egregiously in this regard. The purpose of my paper was simply to demonstrate that, whether or not one accepts the merits of their approach, a correct, and more careful, repetition of their analysis alone is sufficient to falsify their results and their conclusions.”

    (Benestad, 2004b)

    Furthermore, MM are wrong. Methods such as Benestad’s have been used throughout the refereed climate science literature whenever they were relevant. For instance, see the many examples in the Wilks text cited above (1995), which deals specifically with the use of statistical methods in climate science. Lastly, there is an even more fundamental point that goes right back to the very climate science literature MM claim Benestad is out of step with – independent verification from separate data sources. If the observed global change were truly economic rather than climatic, we would not expect to see long-term evidence of it in regions and natural processes far removed from economic activity. The refereed literature is replete with data on SST changes, glacier retreat, changing precipitation patterns, ecosystem impacts, and many other effects that are widely distributed in unpopulated areas and not even remotely related to centers of economic activity. Across the board, these results flatly contradict MM’s conclusions. MM mention one such study (Boehm et al., 1998), dismissing it only as “obscure” without properly explaining why their own results are more trustworthy.

    So McKitrick and Michaels’ bombshell paper fails numerous independent tests from alternate regionally and globally distributed data sources that bear directly on its principal conclusion. Beyond that, it had not even been off the presses for 12 weeks before it fell to a standard set of robustness checks that any serious multiple regression model would have to pass, and to a basic confusion of units that would not have been tolerated even in an undergraduate homework assignment. This, and Douglass, Singer, and Michaels’ two cherry-picked analyses of tropospheric trends and AOGCM’s, are the basis of their declaration of “victory” over global warming science.

    We have to wonder what defeat would look like.

    Fu et al. and Climate Change Skeptics

    In Part I we saw that the MSU Channel 2 signal receives up to 15 percent of its raw digital counts from the lower stratosphere (the 100-50 hPa layer), and thus it very likely underestimates temperature trends in the lower to middle troposphere (the 850-300 hPa layer). Traditionally, this was accounted for by using MSU2 and TLT as complementary lower troposphere products. But while TLT reduces the stratospheric Channel 2 “footprint”, it pays a price in sampling error and contaminating inputs from other sources such as Antarctic sea-ice and melt pools. Qiang Fu and his co-authors developed their method to avoid these problems. By using direct MSU4 temperature and trend data to correct MSU2, they avoid sampling errors associated with off-nadir MSU views and greatly minimize signal contamination from the surface, particularly the sea-ice and melt pool problem affecting the TLT record. When Fu et al. used their method to correct existing MSU products for stratospheric trend aliasing, they found that all existing MSU products were now in agreement with the predictions of AOGCM’s - the only remaining exception being the TLT record (which has not yet been corrected for the sea-ice and melt pool problem). Furthermore, the Fu et al. weighting function was based on the radiosonde record (Lanzante et al., 2003) and used that record mainly to derive a correction for the very upper-air layer whose trends have been most monotonic and consistent during the satellite era - the lower stratosphere. As such, it is consistent with that record as well - the observed trend differences between radiosonde products and the Fu et al. trends being likely due to coverage, surface signal contamination, and other factors. Details of the Fu et al. method are discussed in Part I, and the method is derived in its Appendix.
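
    In essence, the correction is a linear combination of the two brightness temperatures, with a small negative weight on the stratospheric channel that removes its contribution to the Channel 2 signal. The sketch below shows only this basic form; the coefficient values are illustrative placeholders of roughly the magnitude discussed for the global mean, not the published regression coefficients.

        # Basic form of the Fu et al. correction: the free-troposphere trend is a
        # weighted combination of the MSU channel 2 and channel 4 trends, with a
        # negative weight on channel 4. The coefficients below are placeholders only.
        a2, a4 = 1.15, -0.15     # assumed illustrative weights (a2 + a4 is close to 1)

        def free_troposphere_trend(t2_trend, t4_trend):
            """Combine MSU2 and MSU4 trends (K/decade) into a corrected trend."""
            return a2 * t2_trend + a4 * t4_trend

        # Made-up example: modest MSU2 warming plus stratospheric cooling in MSU4
        # yields a larger corrected tropospheric warming trend.
        print(free_troposphere_trend(0.10, -0.40))   # 0.175 K/decade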

    The Fu et al. result delivered yet another serious blow to skeptic arguments, and it took much of the force out of the claims of global warming skeptics, who to date had been depending on the surface-troposphere trend disparity for their case against global warming and the required mitigation efforts. Around the world skeptic forums reacted immediately – once again, with well deserved fear. The poison darts began flying within days. Criticisms fell chiefly into two groups – concerns about the functional form of the Fu et al. corrected weighting function WFT, and concerns about the reliability of using statistical methods to derive the T2 and T4 data used with it.

    The same week that the Fu et al. method appeared in the pages of Nature, Tech Central Station published an editorial by Roy Spencer of the UAH team criticizing the method and even going so far as to refer to that journal as “gray scientific literature”. According to Spencer,

    “The authors, noticing that channel 4 measures the extreme upper portion of the layer that channel 2 measures (see Fig. 1), decided to use the MSU channel 4 to remove the stratospheric influence on MSU channel 2. At first, this sounds like a reasonable approach. We also tried this thirteen years ago. But we quickly realized that in order for two channels to be combined in a physically meaningful way, they must have a large percentage of overlap. As can be seen in Fig. 1, there is very little overlap between these two channels. When a weighted difference is computed between the two channels in an attempt to measure just the tropospheric temperature, an unavoidable problem surfaces: a large amount of negative weight appears in the stratosphere. What this means physically is that any attempt to correct the tropospheric channel in this fashion leads to a misinterpretation of stratospheric cooling as tropospheric warming. It would be possible for their method to work (through serendipity) if the temperature trends from the upper troposphere to the lower stratosphere were constant with height, but they are not.

    In this instance, the negative (shaded) area for the Fu et al. weighting function in Fig. 1 would be cancelled out by its positive area above about 200 millibars. Unfortunately, weather balloon evidence suggests the trends change from warming to strong cooling over this altitude range.”

    (Spencer, 2004)

    Thus, Spencer was arguing for the first criticism. His Figure 1 is reproduced here as Figure 25, modified to reflect my wording rather than his. This figure shows WFT compared with the weighting functions for MSU2 and MSU4. The claim is that because WFT goes negative above 100 hPa, it will inevitably alias spurious warming into the troposphere trend. Spencer argued that the method might work, but only if trends are constant with altitude from the upper troposphere to the lower stratosphere (roughly 300-50 hPa) – which they are not (Spencer, 2004). This would be a valid criticism if the method used WFT strictly for the derivation of MSU2 brightness temperature with the layers above 100 hPa removed. This is not the case.

    What Fu and his colleagues actually did can be seen more clearly in Figures 21 to 23. Figure 48 shows Figure 47 with MSU2 color banded according to the layers it detects. The region shown in light orange reflects the uncorrected free troposphere contribution to MSU2. The region shown in light blue reflects the tropopause and lower stratosphere, where 300 hPa can be considered the “lowest approach” altitude for the tropopause and 200 hPa a global mean. Figure 13 (right side) shows 1979-2001 upper-air trends as a function of altitude for several radiosonde products and single point trends for UAH Version D (Angell, 2003). Similar data are reproduced in Figures 31 and 33 as broad layer bar graph data for the longer 1958-1997 period using a different set of radiosonde products. It can be seen that the satellite era trends decrease with altitude. Within the uncertainty ranges shown, they go negative above altitudes of roughly 7 to 9 km, with the global average being around 8 km (the 300-100 hPa layer). Comparing these trends with Figure 48 reveals that for the satellite era, the light blue layer has an overall negative trend and the orange layer a positive one. Because MSU2 sees the full weighting function of both, it will alias the cooling trends above 300 hPa into the warming trends below. Figure 49 shows Figure 47 shaded to reflect the layer coverage of the Fu et al. weighting function in comparison to its uncorrected MSU2 and MSU4 counterparts. The region shown in dark blue can be expressed in terms of MSU4 and is chosen so that its weighting will integrate to zero with altitude above 300 hPa. Below that level, the Fu et al. function has the same weighting that MSU2 would have seen below 300 hPa if the stratosphere were not contributing to its signal (the combined light and dark orange regions). The characterization of this weighting function allows these two regions to be separately expressed as multiples of T2 and T4, from which the actual free troposphere brightness temperature trend can be derived.
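
    The zero-integral constraint described above can be written down directly: choose the Channel 4 weight so that the combined weighting function integrates to zero above roughly 300 hPa, leaving the tropospheric portion of the Channel 2 weight intact. The toy sketch below uses crude Gaussian stand-ins for the weighting functions, not the real MSU response profiles, purely to show the bookkeeping.

        import numpy as np

        # Toy illustration: pick a4 so the combined weighting function integrates
        # to zero above ~300 hPa. The Gaussian profiles are stand-ins, NOT the real
        # MSU channel 2 / channel 4 response functions.
        p = np.linspace(10.0, 1000.0, 200)            # pressure levels (hPa)
        w2 = np.exp(-((p - 500.0) / 300.0) ** 2)      # stand-in MSU2 weighting
        w4 = np.exp(-((p - 70.0) / 40.0) ** 2)        # stand-in MSU4 weighting
        w2 /= np.trapz(w2, p)
        w4 /= np.trapz(w4, p)                         # normalize to unit integral

        strat = p <= 300.0                            # tropopause and stratosphere
        a2 = 1.0
        a4 = -a2 * np.trapz(w2[strat], p[strat]) / np.trapz(w4[strat], p[strat])

        w_ft = a2 * w2 + a4 * w4                      # combined weighting function
        print("a4 =", round(a4, 3))                                   # negative
        print("stratospheric integral:",
              round(np.trapz(w_ft[strat], p[strat]), 6))              # ~0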

    Now it can be seen that Spencer (2004) misunderstood the Fu et al. method. In fact, the method separates the layered trends out of the uncorrected MSU2 signal and accounts for each. The weighting function goes negative above 90-100 hPa because it must do so to prevent stratospheric cooling from being aliased into the free troposphere trend. To his credit, Spencer has relented somewhat since this editorial was published. He is still skeptical of the Fu et al. method, and in particular he is concerned about discrepancies between the Fu et al. free troposphere trends and those observed by other radiosonde products for the same layer – trends that he believes confirm the UAH TLT and TMT records. But he does acknowledge that the method is a useful piece of the puzzle and should be investigated further. Commenting in a more recent editorial at Tech Central Station, he says that,

    “As is often the case, the press release that described the new study made claims that were, in my view, exaggerated. Nevertheless, given the importance of the global warming issue, this line of research is probably worthwhile as it provides an alternative way of interpreting the satellite data.”

    (Spencer, 2004b)

    In a recent interview, Fu indicated that he did not know Dr. Spencer at the time he published, but has since had the opportunity to meet him at a few conferences and engage in some very stimulating and mutually productive discussions about both teams’ methods. “I didn't know Spencer before this,” he said, “but now I've met him at some scientific conferences, and we can talk about the science. At the time, he was so sure we were damn wrong ... Now he says we don't know enough.” (Whipple, 2004).

    Another challenge to the Fu et al. method was published in December of 2004 by the journal Nature. Simon Tett and Peter Thorne (hereafter, TT) of the UK Met Office used the Fu et al. method to derive new coefficients and free troposphere trends for the tropics (30° S to 30° N latitude) during the period 1978-2002 using the HadRT2.1s radiosonde analysis, the ERA-40 reanalysis (Uppala, 2003), and an ensemble of model runs (Tett and Thorne, 2004). These trends, which they denote as Tfjws in contrast with the Ttr850-300 trends derived by other methods, were then compared with corrected MSU2 trends from UAH Version 5.0 (Christy et al., 2003), RSS Version 1.0, and surface trends. A comparison of their results is given in Figure 28. For non-satellite analyses, surface temperatures were derived from the products indicated; satellite products were compared to surface trends from the HadCRUT2v dataset. ERA-40 reanalysis-based surface trends were derived using zonal averages of 2-meter temperatures over land and SST’s over ocean regions. For their model comparisons TT used an ensemble of 6 runs of the atmosphere-only HadAM3 model (Pope et al., 2000) and 4 runs of the coupled ocean-atmosphere HadCM3 model (Stott et al., 2000). The HadAM3 and HadCM3 runs were forced with a suite of natural and anthropogenic inputs as described in the cited sources, and the forcings were identical with the exception of two corrections in HadAM3 – one for errors in ozone depletion and one for changes in sulfur cycle forcing (Tett and Thorne, 2004). Based on these results they concluded that,

    • Fu et al. “trained” and tested their MSU2 and MSU4 coefficients (a2 and a4, respectively) using the same radiosonde dataset (Lanzante et al., 2003), obtaining falsely good agreement through overfitting (see the sketch after this list). Their resulting corrections are overly small and lead to overly warm free troposphere trends.
    • For the Fu et al. method to work, stratospheric trends must be relatively stable over the period analyzed, but in fact they are not. In particular, TT claim that the lower stratospheric impact of the quasi-biennial oscillation (QBO) will be aliased into Fu et al. derived trends.
    • With the exception of HadRT2.1s, free troposphere temperature trends derived by applying the Fu et al. method to a suite of other upper-air products show worse agreement with observation and larger confidence intervals than does the UAH Version 5.0 TLT product.
    • Trends derived from the model runs and those based on Fu et al. derived observations show good agreement only between the HadAM3 atmosphere-only run and RSS Version 1.0.
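
    The overfitting concern in TT’s first point is a generic one: coefficients fitted to one dataset will always look flattering when checked against that same dataset, so a meaningful test requires an independent dataset. The following is a minimal synthetic sketch of the issue; nothing below is actual radiosonde data.

        import numpy as np

        # Fit coefficients on one synthetic "dataset" and compare the errors against
        # that same dataset versus an independent one. All data here are made up.
        rng = np.random.default_rng(1)
        n = 12
        t2 = rng.normal(0.0, 1.0, n)
        t4 = rng.normal(0.0, 1.0, n)
        target_a = 1.1 * t2 - 0.1 * t4 + rng.normal(0.0, 0.3, n)   # "training" data
        target_b = 1.1 * t2 - 0.1 * t4 + rng.normal(0.0, 0.3, n)   # independent data

        X = np.column_stack([t2, t4])
        coef, *_ = np.linalg.lstsq(X, target_a, rcond=None)

        rmse_same  = np.sqrt(np.mean((X @ coef - target_a) ** 2))  # same dataset
        rmse_indep = np.sqrt(np.mean((X @ coef - target_b) ** 2))  # independent
        print("error vs. fitting dataset:    ", round(rmse_same, 3))
        print("error vs. independent dataset:", round(rmse_indep, 3))

        # In-sample agreement is flattering by construction; only the comparison
        # against an independent dataset is a real test of the fitted coefficients.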


