Comments on: Spinning Probabilities in GRL
http://cstpr.colorado.edu/prometheus/?p=5113
Wed, 29 Jul 2009 22:36:51 -0600

By: Chip Knappenberger http://cstpr.colorado.edu/prometheus/?p=5113&cpage=1#comment-13313
Thu, 09 Apr 2009 14:57:25 +0000

Bverheggen,

Certainly the trend at any point in time is based on the data that go into it, and the variability of the trend over time is related to the length of the trend: shorter trends have greater variability than do longer trends. In Figure 3 of Pat’s testimony we show the current (ending in December 2008) value of trends ranging in length from 5 to 15 years.

Also in Figure 3 is the Holy Grail that everyone (including Roger) is looking for: the 95% confidence bounds of the expected range of model trends, of any length you want from 5 to 15 years, for the first 20 years of the 21st century (under A1B). We only show the 95% confidence bounds, but we could just as easily show any confidence bounds you’d like to see. Thus, we can calculate the probability of occurrence of any trend value of any length (from 5 to 15 years).

Armed with this information, you can then check to see how any observed value of any trend length (from 5 to 15 years) compares to model expectations.

This is what we did for observed trends ending in December of 2008 calculated from the HadCRUT3v dataset. If you want to see, say, how the 7-year trend value ending in February 2006 calculated from the GISS dataset compares to the model expectations, all you need to do is calculate the value from the observed data and see where it fits in the model expectations.

The whole point of developing the model range is to eliminate the dependence on start and stop dates from the comparison. This point seems to be missed by our critics.
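The percentile comparison Chip describes can be sketched in a few lines. The numbers below are synthetic stand-ins (a Gaussian ensemble and a made-up observed trend value); the actual analysis derives the trend distribution from the A1B model runs and the observed trend from HadCRUT3v:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an ensemble of 7-year model trends (deg C/decade);
# the real analysis computes these from the A1B model output.
model_trends = rng.normal(loc=0.20, scale=0.10, size=1000)

# 95% confidence bounds of the model trend distribution
lo, hi = np.percentile(model_trends, [2.5, 97.5])

# Locate a (made-up) observed trend within the model distribution
observed_trend = 0.02
rank = (model_trends < observed_trend).mean() * 100

print(f"95% model range: [{lo:.2f}, {hi:.2f}] deg C/decade")
print(f"observed trend sits at the {rank:.1f}th percentile")
```

An observed trend below the 2.5th percentile (or above the 97.5th) falls outside the 95% range, which is the flag Chip refers to.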

You can choose any start and stop dates and any data set you want, but if you start to find some that produce observed trend values that fall outside the range of expected model trends, then you have a potential problem. Determining where the problem lies is the next task to undertake. Is it with the observed data? The model internal variability? The model climate sensitivity? The actual vs. the projected forcings? Random chance? Any of these may be the case.

So, while the observed trends that ended in 2007 lie further towards the middle of the range than do the trends that end in 2008, this is of less interest than the observed trends that start to fall outside the 95% confidence range—because these trends potentially point to something being amiss somewhere—and this is what we are starting to see. The longer this period of reduced warming rate continues, the more the observed trends are going to start to fall outside the range of model expectations.

In case you want to see the time history of the observed values of trends of various lengths (the kind of things that Gavin and Bverheggen suggest we are hiding), they are available in a World Climate Report article we posted in December 2008 (this is prior to our model investigation):

http://www.worldclimatereport.com/index.php/2008/12/17/recent-temperature-trends-in-context/

Further, as I have said on many occasions, a group of us is actively preparing our findings for submission to a journal, where we will have a much better opportunity to describe what we have done and to provide more analysis and results.

I am quite confident that what we are doing will provide a useful measure by which to gauge observed trends against model expectations.

-Chip

By: bverheggen http://cstpr.colorado.edu/prometheus/?p=5113&cpage=1#comment-13312
Thu, 09 Apr 2009 13:52:59 +0000

Roger,

I have not analysed recent temps myself, but from what I’ve read, many of the short term trends of the last decade or so are smaller than the longer term trend.

E&W argue that such periods are entirely normal within a longer timeframe. Your critique of them seems centered on stating that at any particular point in time, this is a relatively rare event. But for it to ‘randomly’ occur within a longer time frame is to be expected. Isn’t that their main point, and is it not valid?

Bart

By: Roger Pielke, Jr. http://cstpr.colorado.edu/prometheus/?p=5113&cpage=1#comment-13311
Thu, 09 Apr 2009 13:23:09 +0000

Bart, Raven, Chip-

Seems to me that you guys are really arguing about the future not the past.

Bart, Chip says that recent temperatures are at the bottom end of model distributions. This seems to me the exact same thing that Easterling and Wehner are saying. Would you say that they are in the middle of the distribution? Top end?

The numbers all seem clear. It is the meaning of those numbers that people disagree about, which is why I say this debate is about the future not the past. But we should be able to get agreement on what the numbers say.

By: bverheggen http://cstpr.colorado.edu/prometheus/?p=5113&cpage=1#comment-13309
Thu, 09 Apr 2009 13:01:55 +0000

Raven,

Upon a quick glance her criticism seems directed at the filling in/guessing of what 2009 temps may look like, and how that would affect the graph.

If that is correct, then the main point, that such a graph is very sensitive to the endpoint chosen and the dataset used, still stands, and the so-called bottom line from Chip above is not warranted.

Bart

By: Raven http://cstpr.colorado.edu/prometheus/?p=5113&cpage=1#comment-13308
Thu, 09 Apr 2009 09:00:55 +0000

(#17) Bart,

Lucia addresses Gavin’s spin here:
http://rankexploits.com/musings/2009/look-i-can-use-made-up-data-just-like-gavin/

Her technique is different from Chip’s but the conclusions are the same.

By: bverheggen http://cstpr.colorado.edu/prometheus/?p=5113&cpage=1#comment-13307
Thu, 09 Apr 2009 07:37:35 +0000

Chip (#2),

The graph you refer to is very sensitive to the endpoint chosen and the dataset used. This has been pointed out to you, and you agreed with the points raised:
http://www.realclimate.org/index.php/archives/2009/03/michaels-new-graph/langswitch_lang/6o

So you know that the bottom line you state is false.

Bart

By: Boslough http://cstpr.colorado.edu/prometheus/?p=5113&cpage=1#comment-13306
Wed, 08 Apr 2009 23:58:38 +0000

You wrote: “…if Easterling and Wehner were asked ten years ago what the odds of seeing a decade of no warming, they would have answered 10%.” The definition of a decadal “period of lack of warming” is one in which “least-squares trends to running 10-year periods in the global surface air temperature time series” is not positive.

Most of the decadal periods that terminated in the past ten years showed net warming, with a few intervals of very strong warming. For example, the interval 1993-2002 showed an increase of 0.359 degrees.

The only full decadal period out of the last ten years that showed a net cooling was 1998-2007.

That’s one out of ten, or 10%.
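Boslough’s count rests on the running least-squares trend calculation quoted from Easterling and Wehner. A minimal sketch of that calculation, with made-up anomaly values purely for illustration (the real count uses an observed surface temperature series such as HadCRUT3v):

```python
def lstsq_slope(y):
    """Least-squares slope of y regressed on 0, 1, ..., len(y)-1."""
    n = len(y)
    mx = (n - 1) / 2
    my = sum(y) / n
    num = sum((x - mx) * (v - my) for x, v in enumerate(y))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

# Hypothetical annual anomalies (deg C), standing in for e.g. 1993-2007
anomalies = [0.18, 0.24, 0.32, 0.25, 0.40, 0.55, 0.32, 0.33, 0.44, 0.50,
             0.51, 0.48, 0.55, 0.50, 0.48]

window = 10
# Trend for every running 10-year window
trends = [lstsq_slope(anomalies[i:i + window])
          for i in range(len(anomalies) - window + 1)]

# Fraction of windows with no warming (non-positive trend)
flat_fraction = sum(s <= 0 for s in trends) / len(trends)
```

With real data, the count of windows whose slope is not positive, divided by the number of windows, gives the observed frequency Boslough cites.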

I do not understand the basis for your criticism.

By: Celebrity Paycut http://cstpr.colorado.edu/prometheus/?p=5113&cpage=1#comment-13302
Wed, 08 Apr 2009 21:20:54 +0000

[...] Spinning Probabilities in GRL April 7th, 2009 [...]

By: stan http://cstpr.colorado.edu/prometheus/?p=5113&cpage=1#comment-13296
Wed, 08 Apr 2009 19:46:27 +0000

Isn’t “peer review” fast becoming a pejorative?

Some argue that most published research is wrong — http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124

As I understand it, grad students are no longer assigned the task of replicating other studies. Such work doesn’t generate grants for the department. Thus, it would stand to reason that a lot more bad science is now flying under the radar. This is compounded by the fact that scientists have less reason to fear that their poor work will be exposed.

What we do know is that there is an incredibly cavalier attitude toward quality control by climate scientists. See e.g. Gavin Schmidt’s mind-boggling comments re: mistakes in October database; see e.g. satellite errors in ice data going undetected for over a month until amateurs noticed; see e.g. temperature monitor siting disasters; see e.g. Jones failure to provide critical study; see e.g. trend away from transparency and data archiving; see e.g. multiple instances by IPCC and other committees ignoring major studies when compiling assessments; and on and on.

Climate science quality is a bad joke.

By: bverheggen http://cstpr.colorado.edu/prometheus/?p=5113&cpage=1#comment-13295
Wed, 08 Apr 2009 19:36:44 +0000

Roger,

Even if it’s an unlikely event (1 in 10) at any particular point in time, I think the point of this paper is to argue that within a longer (centennial) timeframe, the existence of such decadal periods without a significant positive trend is to be expected (“entirely normal” and “likely”). Just as, when you play poker long enough, a hand with two pairs is to be expected.
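Bart’s point can be put in rough numbers. Assuming, purely for illustration, that each decadal window is an independent trial with a 10% chance of showing no warming (overlapping decades are not independent, so this overstates the effect), the chance of seeing at least one such decade grows quickly with record length:

```python
# Toy calculation behind the poker analogy: with per-window probability p,
# P(at least one no-warming decade in n windows) = 1 - (1 - p)**n,
# under the (unrealistic) assumption that windows are independent.
p = 0.10
for n in (1, 5, 10, 20):
    p_at_least_one = 1 - (1 - p) ** n
    print(f"{n} windows: P(at least one no-warming decade) = {p_at_least_one:.2f}")
```

Even this crude sketch shows why a flat decade that is rare at any single point in time is nonetheless expected somewhere within a century of record.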

The fact that the absence of a significant trend over the past decade is so widely used to claim that global warming has stopped demonstrates the need to expose the falsehood in that premise. In a way it is a sad state of affairs that scientists need to spend their time discrediting obvious falsehoods, but I guess that’s where we’re at.

Bart
