Upcoming Talk and Panel This Week

July 3rd, 2005

Posted by: Roger Pielke, Jr.

For those of you who are local, we are co-organizing a talk and panel discussion with colleagues at NCAR, to take place in the Mesa Lab main seminar room at 3:00 PM on Friday. Here are the details:

July 8, 2005
Joint CGD-ISSE & CIRES Seminar – Panel Discussion

Hockeysticks, the tragedy of the commons and sustainability of climate science.

Hans von Storch, Director of the Institute of Coastal Research at the GKSS Research Centre in Geesthacht, and Professor at the Meteorological Institute of the University of Hamburg, Germany

Panelists: Warren Washington, Caspar Ammann, and Doug Nychka (NCAR), and Roger Pielke Jr. (CIRES)

Location: Mesa Lab Main Seminar Room
Time: 3:00 PM

Abstract

The “hockey stick”, elevated to icon status by the IPCC, plays a crucial role in the debate regarding climate change. Yet the methods used to develop it have not been completely explicated. We have tested the method in the artificial laboratory of the output of a global climate model, and found that it significantly underestimates both low-frequency variability and the associated uncertainties. Our work focuses on multi-century simulations with two global climate models to generate a realistic mix of natural and externally forced (greenhouse gases, solar output, volcanic load) climate variations. Such simulations are then used to examine the performance of empirically based methods for reconstructing historical climate. This is done by deriving “pseudo proxies” from the model output, which provide incomplete and spatially limited evidence about the global distribution of a variable.


Our simulation study was published in Science but received less response than expected: almost no open response, a bit in the media. Many colleagues, however, indicated privately that such a publication would damage the good cause of climate protection policy.

In this talk the methodological critique of the hockey stick methods will be presented, followed by a personal discussion of the problem of post-normal climate science operating in a highly politicized environment.

The presentation will be followed by a panel discussion on the science of the hockey stick in the context of high-profile political issues. Panelists: Warren Washington, Doug Nychka, and Caspar Ammann (NCAR), and Roger Pielke Jr., Center for Science and Technology Policy Research at the University of Colorado (CIRES).
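As a rough, purely illustrative sketch of the pseudo-proxy test described in the abstract, consider the following toy calculation. Everything here is invented for illustration: the synthetic “model” temperature series, the proxy noise level, and the calibration window are assumptions of this sketch, not details of von Storch's actual experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" temperature series standing in for a multi-century
# climate-model simulation: AR(1) red noise plus a slow "forced" oscillation.
# (Both components are invented for this sketch.)
n_years = 1000
true_temp = np.zeros(n_years)
for t in range(1, n_years):
    true_temp[t] = 0.9 * true_temp[t - 1] + rng.normal(0.0, 0.1)
true_temp += 0.3 * np.sin(np.linspace(0.0, 4.0 * np.pi, n_years))

# Pseudo-proxies: the true series degraded by independent noise, mimicking
# incomplete, spatially limited evidence. The noise level is an assumption.
n_proxies = 15
noise_sd = 0.5
proxies = true_temp + rng.normal(0.0, noise_sd, (n_proxies, n_years))

# Calibrate a least-squares regression over the final 150 "instrumental"
# years, then apply it to the full record, as a stand-in for a
# reconstruction method (much simpler than any published one).
calib = slice(n_years - 150, n_years)
proxy_mean = proxies.mean(axis=0)
slope, intercept = np.polyfit(proxy_mean[calib], true_temp[calib], 1)
recon = slope * proxy_mean + intercept

# Compare low-frequency (decadal-mean) variability of truth vs. reconstruction.
def decadal_variance(x):
    return x.reshape(-1, 10).mean(axis=1).var()

print(f"decadal variance of truth:          {decadal_variance(true_temp):.4f}")
print(f"decadal variance of reconstruction: {decadal_variance(recon):.4f}")
```

Even in this toy setting, the regression-based reconstruction comes out noticeably less variable at decadal scales than the “true” series it was built from, which is the qualitative effect the abstract describes.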

12 Responses to “Upcoming Talk and Panel This Week”

  1. Steve Bloom Says:

    Maybe somebody should tell von Storch that his active egging-on of McIntyre (see “Climate Audit” for details) may be a factor in the noted lack of response. I’m no climate scientist, but I do know a fair amount about politics, and to the extent that most climate scientists have decided that the best thing to do is circle the wagons on this matter, they can hardly be blamed. Look for the skeptics to seek press coverage of this event to keep the ball in the air.

  2. Steve Bloom Says:

    And I just spotted this from McIntyre on Climate Audit:

    ‘But in the climate field, and for multiproxy work in particular, I am really struck by the extraordinary prevalence of defects in the work of the most influential authors. While I’ve opined about MBH, IMHO the work of Jones, Briffa, Crowley etc. has equally glaring defects, which I’m in the process of documenting. I really don’t see how the corpus of present multiproxy work can be used to develop a valid scientific “consensus”. It seems odd that competent people should have adopted and applied such weak papers or that the replication effort should be so nonexistent, suggesting that there is a very strong ideological component or bias to the widespread acceptance and use of this weak material.’

    So now that von Storch has admitted (in the last couple of days in media coverage of the Barton letter imbroglio) that he would be unable to produce the computer code for his own study, the knives must be getting sharpened for him as well.

  3. Kooiti Masuda Says:

    Though I have not examined the paleoclimatic reconstruction by Mann et al. myself, its reputation among my fellow scientists suggests that their practice is sound for estimating the most plausible temperature in each past year, given the data they had chosen. I think that their studies are not seriously flawed as climate reconstruction.

    I instinctively felt, however, that their reconstructed time series seemed too smooth to be true. But I think it is normal. Time series connecting best estimates (in the sense of least sum of squared errors or something similar) tend to be smooth when the number of data points are large enough. Even if we anticipate that there are troughs and ridges, we do not consider it a better reconstruction to put ridges and troughs at arbitrary positions than to leave the smooth “best estimate” curve intact.

    I think that what IPCC (2001) wanted the reconstruction for was to put the latest decades in the context of past climate variability (e.g. the variance of decadal mean temperature over a millennium). To this end, the studies by Mann et al. (1998, 1999) were better than nothing, but far from ideal. The good practice of making pointwise best estimates almost inevitably leads to underestimating variability. But I was not confident about this intuitive finding of mine until reading the paper by von Storch et al. (2004, Science), which demonstrated the issue much more systematically.

    By using an artificial example, von Storch et al. (2004) demonstrated the weakness of the usual methods of climate reconstruction in estimating past variability. They did not show, however, better methods for reconstructing variability, except perhaps incorporating more input data (especially data excluded by Mann et al. for not having annual time resolution). Reconstruction of climatic variability is a scientific challenge, and surely some advance will be achieved in this decade, but it is not guaranteed that a quality sufficient to serve as a basis for policymaking can be achieved.
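The statistical mechanism behind the smoothness Masuda describes can be shown in a minimal sketch (the numbers and the signal-to-noise ratio are invented, and this toy regression stands in for, but is far simpler than, any published reconstruction method): a least-squares fit of temperature against a noisy proxy passes on only the shared variance, so the fitted series is attenuated by the squared correlation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target series and a proxy that sees it through noise.
# A signal-to-noise ratio of 1 is an arbitrary, assumed choice.
truth = rng.normal(0.0, 1.0, 5000)
proxy = truth + rng.normal(0.0, 1.0, 5000)

# Least-squares calibration of "temperature" against the proxy.
slope, intercept = np.polyfit(proxy, truth, 1)
recon = slope * proxy + intercept

r_squared = np.corrcoef(proxy, truth)[0, 1] ** 2
print(f"variance of truth:          {truth.var():.3f}")   # ~1.0
print(f"variance of reconstruction: {recon.var():.3f}")   # ~0.5
print(f"r squared:                  {r_squared:.3f}")     # matches the attenuation
```

With a proxy-truth correlation of about 0.7, roughly half the variance disappears from the fitted series, even though each pointwise estimate is the best available.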

  4. Kooiti Masuda Says:

    Re: Steve Bloom’s comments.

    I think that the main goal of science is shared knowledge, and that scientific achievements should be reproducible.

    But I think that McIntyre is too demanding about disclosure of the program codes and input data used by scientists. Those scientists who can make all their codes and data publicly available, such as Mann (even if partly due to pressure from McIntyre), are admirable, but actually exceptional. Part of the software I use may be proprietary, and not readily sold as a commodity either. Another part may have been written by fellow scientists who feel free to let me use it but do not release it publicly. Another part, written by myself, is not yet well documented, and I feel that giving it away without appropriate documentation invites misuse. Thanks to the open-source software movement, some scientists may be able to conduct all computation on open-source software, and also to make their own software open-source. I think we should encourage such movements, but I do not think we can make them the norm of the scientific community.

    Similar things can be said about data. Actually this is my own main concern, but I will skip discussing the details for now. I think it is very constructive to create common databases which everyone can access. But I do not think it constructive to inhibit studies which use data that cannot be entered into those databases. Nor do I think it constructive to require preservation of all data used in a scientific study in a form that facilitates easy reproduction.

    Usually, reproducibility as a norm of science does not mean exact reproduction with the same data and the same program. Obviously, in medical studies, readers cannot usually access the same patient (at least at the same stage of treatment) that the author mentions. I think that reproducibility in science means that the same general conclusion can be drawn from similar, but not exactly the same, data.
    It is true that we consider studies that inhibit access to original data less credible. For example, some people in oil companies reconstructed sea level over the past hundreds of millions of years, and the source data were trade secrets. We consider the result not so credible until another study with open data confirms its main conclusion. But then we think it intellectually richer to have two results than just the open one.

  5. Kooiti Masuda Says:

    Sorry!

    Correction to my comment at July 5, 2005 01:49 AM:

    The second paragraph,
    > number of data points are large enough
    should read:
    > number of data points are NOT large enough

  6. garhane Says:

    Seeking to learn how climate scientists go about working up graphs which do not resemble normal series for many types of data, we read a short piece by I. M. Scientist and found that he seems to hold some peculiar ideas about the adequacy of samples of data points where series present smooth curves. He states that such series “…tend to be smooth when the number of data points are large…”.

    This seemed odd, even though it is a claim by a person who alleges a standing in climate science, so we decided to check the claim. We took the following statistical procedures A through G from data bank Z and subjected sample data to a series of standard statistical tests. Unlike I. M. Scientist, we have provided our data, which is here.
    We ran a series of tests using data with very few data points all the way up to data with very many.
    We also ran a series of standard tests of reliability, and our results, which can be seen to be very high, are here. The original claim of I. M. Scientist is found in the web page blank, where it appeared with no supporting data whatever. So far as we can tell, the claim has not previously been published or subjected to peer review.

    Our tests showed that the claim made by this scientist is completely false and his conclusion is spurious.
    To ensure we had obtained the correct statement of his claim, we made repeated requests for disclosure to this individual, and for confirmation that he had indeed made this false claim, after we had written up our results and submitted them to a journal.
    We have not yet been favored with a personal reply, but there is a most interesting follow-up. This well known scientist who would not answer our inquiries has made another posting to the web page where he published his false claim and sought to alter his posting. Now he says, in web page comments, that his statement about smoother data when the number of data points are large should be read in reverse, as a statement that applies when the number of data points is not large. So much for those who relied on him.
    We have again requested disclosure to determine how this peculiar outcome occurred. We believe complete and fair disclosure of all data, methodology, calculations, working notes, source code, code for any programs existing on the same computer, and samples of the pens used in writing up notes, supported by Affidavit of the University Administrator, should accompany claims of this sort. Perhaps this will lead to earlier detection, so that such spurious claims will not be made.

  7. Bob Says:

    Any chance of publishing a transcript of this seminar/discussion on this website?

  8. Roger Pielke Jr. Says:

    Bob- A good idea. We’ll look into it.

  9. Kooiti Masuda Says:

    Re: my comment at July 5, 2005 01:45 AM (see also the correction at July 5, 2005 03:22 AM)

    Please note also: my intuitive “finding” was not well founded and may simply be wrong.

    Considered as time series analysis, the number of data points used by Mann et al. was not small. Most of the input data had values for every year over many hundreds of years. We would usually consider that number of data points large enough to represent decadal mean values.

    Also, spatially, the number of sample points was not so small. That is why Mann et al. thought it appropriate to use multivariate statistics (principal component analysis or something like it) to aggregate the information.

    What made me feel the data were insufficient in this case was their strong spatial inhomogeneity. Useful data exist in some regions of the world but not in others. Then another intuitive argument can be made. Our experience with modern climatological data suggests that regional anomalies often compensate for each other when global averages are taken. Thus we may expect smaller variability in actual global averages (or averages based on homogeneous sampling) than in global averages based on inhomogeneous sampling. But this reasoning implicitly assumes that forcings which tend to bring global warming or global cooling are unimportant, which is surely not proven even for the era before the industrial revolution.

    Thus I cannot objectively say that the reconstruction by Mann et al. is too smooth to be true. I think the study by von Storch et al. (2004) strongly supports something like my guess, but, strictly speaking, it is just another case study and may not be universal.
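The compensation argument above can be illustrated with a toy simulation (the number of regions, their mutual independence, and all the variances are invented assumptions; real regional anomalies are correlated): averaging over many regions cancels more of the regional noise than averaging over the few regions where proxies happen to exist, so the sparsely sampled “global” mean is the more variable one.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy world: 20 regions with independent yearly anomalies. Independence is a
# simplifying assumption made only for this sketch.
n_regions, n_years = 20, 2000
regional = rng.normal(0.0, 1.0, (n_regions, n_years))

true_global = regional.mean(axis=0)        # homogeneous sampling: all regions
sparse_global = regional[:3].mean(axis=0)  # inhomogeneous: only 3 regions seen

print(f"variance of true global mean:      {true_global.var():.3f}")   # ~1/20
print(f"variance of sparsely sampled mean: {sparse_global.var():.3f}") # ~1/3
```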

  10. Roger Pielke Jr. Says:

    We won’t be recording or transcribing the talk and panel, but three people have agreed to summarize the session and we will post their reports here next week.

  11. Frank H. Scammell Says:

    I am somewhat surprised that the debate is so circuitous. Surely some must recognize that having the hockey stick handle flatter than the evidence suggests is merely a ploy to accentuate the blade, “the Instrumental Record”. “The Instrumental Record” continues to deviate from the radiosonde (balloon) and satellite records (see Junk Science) because the “Urban Heat Island” effect is incorrectly modelled (talk to Dr. Jim Hansen). If you examine the MBH results without the merging with “the Instrumental Record”, there certainly seems to be nothing of concern. Additionally, look at the scaling (chartsmanship at its finest); there is a long way to go to the projected worst cases. The political issue is control over energy usage: by whom, when, and how.

  12. Kooiti Masuda Says:

    Excuse me for continuing the discussion outside the original context of this posting, but I think I should clarify one of my previous comments.

    My comment here at July 5, 2005 03:00 AM was, unlike my other comments, not aimed at the so-called hockey-stick reconstruction but at climate science in general. Actually, my main concerns were instrumental meteorological data and climate model codes.

    The word “norm” may be ambiguous. We should distinguish what is desirable from what is obligatory. I strongly think that it is desirable for climate scientists to contribute source codes and data to public databases.

    (But I still do not think it desirable to archive in public databases all the codes and data produced during the various stages of scientific studies. The cost of maintaining such archives is likely to be prohibitive.)

    On the other hand, making it obligatory for climate scientists to deposit source programs and data (i.e. prohibiting publication of studies which do not follow this norm) would result in a huge loss of scientific knowledge that could not be compensated for by a relative improvement in quality.

    Though it may be unimaginable to U.S. citizens, a large part of the instrumental meteorological data of many countries is not freely available. I cannot give all the data which I use to those who request them.

    Let me illustrate the situation with somewhat idealized examples. Each of countries X and Y has 100 good stations, but releases observation records from just 30 stations as a free contribution to the world, following the guideline of the World Meteorological Organization. Each government considers the rest intellectual property. Country X has a national data center, and the center has a data catalog with price tags. In this case my responsibility seems to be just to cite the entry in the data catalog which corresponds to the data I used. The rest of reproducibility depends only on the funds available to the requester. This is a relatively easy case. In the case of country Y, the data became available to us by negotiation, and the conditions of availability and the price may be different for other users who also manage to get them. This is a case of low reproducibility, and the data from the 70 stations in country Y can be called “closed”. It is true that studies which use “closed” data should be considered less reliable than those which use open data only (from the standpoint of reproducibility). But on the other hand, results obtained using 100 stations may be much more informative than those using just 30 stations in some respects, probably in spatial detail. (This is similar to the case of the oil company study mentioned previously.)

    Certainly we should encourage those countries to release their data more openly. The U.S. government seems to be in the best position to make such a suggestion, but the response of bureaucracies is likely to be slow. Also, if those countries sense a threat behind the words of the U.S. government, they will probably tend even more toward protection. Publishing scientific results which use the data seems to be one of the best ways to help the data eventually join the global public goods.

    Software is, in current legal principle, considered intellectual property. It is actually rare for programs written by climate scientists to be treated as economic goods. But the atmospheric part of a climate model is almost the same as a weather forecast model and can thus possibly be used for profit. Some institutions want to protect their code from free-riding; they think that the profit should be shared by the institutions which originated the code. More often, the authors dislike the bureaucracy of their own institutions, avoid claiming copyright officially, and keep their software in an informal state, like a trade secret. They give the code to friendly collaborators, but they do not want to give it to unfriendly competitors. (They fear that their paper may not be considered an original study by scientific journals if it is submitted later than a similar paper by their competitors.) They also do not want to give it to unfriendly critics who may emphatically announce faults of the code to the public before helping the authors correct them.

    (The range of applicability of scientific software is usually not rigorously determined. The behavior of a program given input data that the authors did not imagine may or may not be good [i.e. it may or may not represent the physical model behind the code correctly]. The authors would welcome users who report questionable behavior and help them understand the limitations of the code. But the authors would not welcome users who just shout that the code is bad. This is what I meant by “my code is not yet well documented…” in the previous comment.)

    These attitudes may be selfish, but there seem to be no grounds for favoring their competitors or critics over them, either.

    Sometimes, especially when there is suspicion of fraud, there will be a need for some measure to enforce the submission of programs and data. Then we need a publicly recognized arbiter who can settle disputes.