Comments on: The Helpful Undergraduate: Another Response to James Annan
http://cstpr.colorado.edu/prometheus/?p=4419

By: Lupo (Fri, 23 May 2008 16:40:06 +0000)
http://cstpr.colorado.edu/prometheus/?p=4419&cpage=3#comment-10131

If “the models” are to show policy people what different emission levels will do to the climate and the scenarios all have the emission levels wrong and too high – then what use are they?

If they can’t predict periods of less than 20 years with any skill – then what use are they?

If they are simply “not inconsistent” with observations – then what use are they?

If, over the last 40 years, each 10-year period counted backwards from 2007 shows roughly a .15 to .20 anomaly trend rise, what is the use or importance of saying it will continue?

By: JamesG (Wed, 21 May 2008 12:11:02 +0000)
http://cstpr.colorado.edu/prometheus/?p=4419&cpage=3#comment-10130

Lazar
I take your point without actually agreeing with it, and I won’t go into a you-said, he-said discourse, but I’ll admit to trying to cut through the verbiage to point out the utility, or lack of it, of models for policy. The word “consistent” has been abused by a few correspondents here to the point where it is utterly meaningless. Statistical distributions too are being wrongly used to try to argue the opposite of reality. If I wish to remind people of this reality once the weasel wording is cut away, it means bringing up a line of thought that people have apparently not bothered to consider. Perhaps you might try the same.

Models must be tested to prove their usefulness. It is up to the modelers to define and carry out these tests, which they consistently refuse to do whilst taking umbrage at others who attempt it. You are propounding the view that the models are a priori useful despite their being untested and unproven. As such you demonstrate little more than confidence in the efforts of the modelers, despite much evidence of clear bias in the modeling assumptions and much real-world evidence that the models are indeed very poor for any sort of prediction at all. I’d describe such blind faith as anti-science, pro-activist.

By: Lazar (Tue, 20 May 2008 22:53:41 +0000)
http://cstpr.colorado.edu/prometheus/?p=4419&cpage=2#comment-10129

JamesG…

“Have you actually looked at the scenarios? In fact the opposite is true.”

… you seem to have a problem with opposites… what you say with regard to scenarios does not contradict what I wrote although you claim it does… just like in response to Mark, you wrote…

“No Mark, an increase in the range of projections means that the model (or ensemble of model runs) is even worse than before for policy support since its value for prediction is even less.”

… did not contradict what he wrote, though you claimed it did.

“This touted idea that a longer time period is better for predictions is completely offset by the uncertainty about future emissions. We might be able to test it 20 years from now but what use is that? You might as well admit that the models are useless.”

… predicted temperature anomalies diverge among emissions scenarios as a function of time… a prime utility of models is to show policy makers how temperatures will be affected by different emissions levels… you are now claiming that showing this shows the models are useless(!)… or maybe you mean the range of predicted temperature anomalies for a given scenario increases as a function of time due to systematic differences between models, i.e. they produce different average trends in response to the ‘same’ forcing… but the discussion so far has been on the spread of trends, which does converge as a function of time.
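To illustrate that convergence, here is a minimal Python sketch, assuming purely synthetic runs (a fixed forced trend of 0.2 C/decade plus AR(1) weather noise, not actual model output): the spread of fitted trends is far wider over 8-year windows than over 20-year windows.

import numpy as np

rng = np.random.default_rng(0)
forced = 0.02          # assumed forced trend, C per year (0.2 C/decade)
phi, sigma = 0.6, 0.1  # assumed AR(1) coefficient and innovation s.d.

def synthetic_run(n_years):
    # one fake temperature series: forced trend plus AR(1) "weather" noise
    noise = np.zeros(n_years)
    for t in range(1, n_years):
        noise[t] = phi * noise[t - 1] + rng.normal(0, sigma)
    return forced * np.arange(n_years) + noise

for window in (8, 20):
    trends = [np.polyfit(np.arange(window), synthetic_run(window), 1)[0] * 10
              for _ in range(1000)]  # fitted trends, converted to C/decade
    print(window, "yr window: 2 s.d. spread of trends =", round(2 * np.std(trends), 2))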

By: Roger Pielke, Jr. (Tue, 20 May 2008 22:01:21 +0000)
http://cstpr.colorado.edu/prometheus/?p=4419&cpage=2#comment-10128

Lupo- Your comment reminds me of something I wrote on this exact subject in 2003:

“Consider the following analogy. You wish to teach a friend how to play the game of tennis. You carefully and accurately describe the rules of tennis to your friend, but you speak in Latin to your English-speaking friend. When you get onto the court, your friend fails to observe the rules that you so carefully described. Following the game, it would surely be inappropriate to criticize your friend as incapable of understanding tennis and futile to recommend additional tennis instruction in Latin. But, this is exactly the sort of dynamic observed in studies of public understanding of scientific uncertainties. For example, Murphy et al. (1980) document that when weather forecasters call for, say, a 70% chance of rain, decision makers understood the probabilistic element of the forecast but did not know whether rain has a 70% chance for each point in the forecast area, or that 70% of the area would receive rain with a 100% probability, and so on. Do you know?”

Pielke, Jr., R.A., 2003: The role of models in prediction for decision, Chapter 7, pp. 113-137 in C. Canham and W. Lauenroth (eds.), Understanding Ecosystems: The Role of Quantitative Models in Observations, Synthesis, and Prediction, Princeton University Press, Princeton, N.J.
http://sciencepolicy.colorado.edu/admin/publication_files/2001.12.pdf

By: Lupo (Tue, 20 May 2008 21:29:50 +0000)
http://cstpr.colorado.edu/prometheus/?p=4419&cpage=2#comment-10127

“Such claims by scientists and the IPCC lead people in the professions of policy research and decision making to ask, “what could these scientists mean with such claims?”

When efforts to resolve this question are met with responses that “technical meaning may not always match intuitive meaning” then we have a problem. If the IPCC is indeed to support decision making then it should present its findings in ways that make intuitive sense to decision makers.”

Hiring a translator might help?

Why is information on policy matters not presented to policy makers in real-world terms rather than in vague, specialized scientific or statistical terms? Can you blame the German speaker for not understanding what the French speaker meant if the words are wrong? Do you blame the electronics student on her first day for not knowing how to build a working flip-flop from discrete components?

“There is however a reasonably good meteorological explanation – anyone want to make a stab at it?”

I will stab. PDO and IPO and AMO changes alone and in relation to each other?

By: JamesG (Tue, 20 May 2008 20:26:45 +0000)
http://cstpr.colorado.edu/prometheus/?p=4419&cpage=2#comment-10126

Lazar: “the spread reduces greatly over longer time periods”

Have you actually looked at the scenarios? In fact the opposite is true. This touted idea that a longer time period is better for predictions is completely offset by the uncertainty about future emissions. We might be able to test it 20 years from now, but what use is that? You might as well admit that the models are useless. Indeed, a model that isn’t testable is the very definition of useless. The only way forward is to ditch the idea of model ensembles and concentrate on the few model runs which seem to match the obs better. This is, incidentally, standard practice outside of climate science.

Also, people are blithely repeating, as if it were fact, that the inability to predict the climate in the past 8 (or 10) years is due to “weather noise”. This is of course a gross assumption with zero scientific justification behind it. It could in fact be the start of a new trend or even a new cycle. If someone were to actually define what this weather noise consisted of, as opposed to the current hand-waving, then I might be inclined to believe they actually knew what they were talking about. There is however a reasonably good meteorological explanation – anyone want to make a stab at it?

I’d also like climate modelers to estimate the amount of warming weather noise in the 1975-1998 period, and maybe someone from the Hadley Centre could tell us why they were so convinced that natural variability couldn’t explain the late 20th century warming when quite obviously they cannot quantify natural variability at all.

Does anyone think that realclimate will bet on the observations coming back up to join the IPCC model ensemble 10 years from now? Not even the middle, just perhaps the bottom boundary? Me neither!

By: Tom Fiddaman (Tue, 20 May 2008 20:13:31 +0000)
http://cstpr.colorado.edu/prometheus/?p=4419&cpage=2#comment-10125

Argh … again I’ve forgotten about lack of tags. 1st and 3rd paragraphs of my 11:13am post are quotes from Roger’s comment (19th, 2:40pm).

By: Tom Fiddaman (Tue, 20 May 2008 17:13:41 +0000)
http://cstpr.colorado.edu/prometheus/?p=4419&cpage=2#comment-10124

Roger -

“Your alteration of the one I proposed suggests a different problem to be addressed — not right, not wrong, just different.”

I agree that they’re different. I assert that only one can be right in the sense that it is a better mapping of the climate problem. Which is it? That’s why I think it would be more productive to focus on a clear definition of the climate problem.

“I agree with your a., b. here we have a stationary trend, and I don’t think that the unequal variances in this case lead to different conclusions than what you have presented, and c. then this variance should be incorporated in the model-produced forecasts on short time scales.”

The variance in c. is incorporated in the model forecasts, to the extent that models’ endogenous variability has a >8yr component, which appears to be the case. Re b., I’m not sure what you mean, but the short term temperature trend on earth seems clearly variable.

The crucial problem with using the 8yr trend confidence interval to evaluate a prediction (e.g. .2C/decade) is that it excludes the >8yr noise. Thus the confidence interval on the measurement is too narrow, and the recent data is less informative about underlying forced response than it appears to be.

The problem is finding the “true” variability. Lucia noted that the model envelope might overstate it, because
ss(models)=ss(model_weather)+ss(intermodel)
(where ss = sigma squared). That could be true, though it could also be the case that ss(model_weather) < ss(true_weather) or that ss(models) >> ss(true_weather).
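A minimal sketch of that decomposition, using made-up trend numbers (C/decade) for a few hypothetical models with several runs each; by the law of total variance, the within-model and between-model parts sum to the total spread of all runs:

import numpy as np

trends = {  # hypothetical trends in C/decade, for illustration only
    "model_A": [0.12, 0.25, 0.18],
    "model_B": [0.30, 0.22, 0.35],
    "model_C": [0.10, 0.05, 0.15],
}
all_runs = np.concatenate([np.asarray(v) for v in trends.values()])

ss_models = np.var(all_runs)                                      # total spread of all runs
ss_model_weather = np.mean([np.var(v) for v in trends.values()])  # average within-model spread
ss_intermodel = np.var([np.mean(v) for v in trends.values()])     # spread of model means

print(round(ss_models, 5), round(ss_model_weather + ss_intermodel, 5))  # the two agree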

The 8yr trend of the observations understates the true variability, because
ss(true_weather) = ss(<8yr) + ss(>8yr)
Actually it’s more complicated than that because one should also account for variation in forcings and measurement error.

Suppose we split the difference and assume that “true” variability is .14 – halfway between the measurement (.07) and the models (.21). You still can’t use the unpaired t-test you linked to evaluate this, because N(trend_msmt) = 1, not 5 (plus, the trend isn’t even a measurement … it’s a statistic computed from a set of measurements). Assuming normality of the observational trend and model distributions, the difference between N(-.07,.14) and N(.19,.14) is distributed N(.26,.2), which means you can’t reject 0 difference with any confidence.
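A quick check of that arithmetic as a minimal sketch, using only the numbers quoted above: the standard deviation of the difference is sqrt(.14^2 + .14^2), roughly .2, and the two-sided p-value for a zero difference comes out around 0.19, so zero difference indeed cannot be rejected.

from math import sqrt
from scipy.stats import norm

mu_obs, mu_models, sd = -0.07, 0.19, 0.14
mu_diff = mu_models - mu_obs          # 0.26
sd_diff = sqrt(sd**2 + sd**2)         # ~0.198, i.e. the ~0.2 above
z = mu_diff / sd_diff                 # ~1.31
p = 2 * (1 - norm.cdf(z))             # two-sided p-value, ~0.19
print(round(sd_diff, 3), round(z, 2), round(p, 2))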

By: Lazar (Tue, 20 May 2008 17:07:12 +0000)
http://cstpr.colorado.edu/prometheus/?p=4419&cpage=2#comment-10123

Roger, no caffeine at 7:06 am, I can understand.

“The problem”

… a problem, ok there’s a problem…

“that the worse the forecast (i.e., the greater the uncertainty, defined as the width of the range), the greater the “consistency”.”

… is that a problem? Why? It’s the nature of the beast. A wider prediction is more likely to be consistent with observations. As long as one is aware of that fact, it ought not to be a problem.

I think uncertainty needs to be approached head-on, it can’t be wished away and ought not be ignored.
… I can accept that observations are consistent with model predictions over eight years
… and I can accept that the predictions are weak due to large uncertainty
… and note that the large uncertainty is reduced considerably for twenty year predictions
… and that eight year predictions are not very useful in terms of GHG policies.
… it is even desirable that stochastic variability in the real world climate is represented in the uncertainty of predictions.

“While this can be defined to pass certain statistical tests,”

This means you agree model predictions pass the test for consistency?

“it fails what I would call the common-sense test.”

… that is much too vague.

“I can guarantee you that if I predicted that tomorrow’s weather will see a high temperature of between 40 and 100 degrees and then trump my predictive success when the temperature is observed to fall in this range, it would not be received as a meaningful accomplishment.”

… but that is not a test, “common-sense” or otherwise. It is a statement of fact. It is the nature of prediction and consistency you are describing.

To repeat my previous comment…

“… complaints about spread make no sense… the spread is explicit… any competent individual can look at the spread and assess the usefulness of model predictions accordingly… any competent individual can reduce the number of models/evaluate models on an individual basis and thereby reduce the spread… the spread is what it is, the models are what they are, complaining the spread is wide does not make the test ‘wrong’ or justify using a bogus test with reduced spread… the spread reduces greatly over longer time periods… the time periods policy makers are interested in… so where’s the problem?”

repeat… I still can’t see your problem.

“A further problem that we have highlighted is that the IPCC presented a very narrow range of error bars in its 2007 report for short-term trends in temperature (figure 10.26, chapter 10). Now it is either the case that recent trends fall outside of that uncertainty range, or the IPCC made some bad decisions in how it chose to present such uncertainties.”

On that basis, the latter I think. I prefer the approach in figure 10.5, where they plot the average for each model. I think the best approach is, as RC did, to plot all the model realizations, or the ensemble mean with 2 s.d. error bars of the individual realizations, or the normal distribution. I also would prefer winnowing out models, testing them individually, rejecting some maybe.
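As a minimal sketch of that presentation, assuming synthetic series rather than actual model output, the individual realizations plus the ensemble mean with a 2 s.d. band might be plotted like this:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
years = np.arange(2000, 2021)
runs = 0.02 * (years - 2000) + rng.normal(0, 0.1, size=(20, years.size))  # 20 fake realizations

mean = runs.mean(axis=0)
sd = runs.std(axis=0)

plt.plot(years, runs.T, color="grey", alpha=0.3)                 # individual realizations
plt.plot(years, mean, color="black", label="ensemble mean")
plt.fill_between(years, mean - 2 * sd, mean + 2 * sd, alpha=0.2, label="+/- 2 s.d.")
plt.ylabel("Temperature anomaly (C)")
plt.legend()
plt.show()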

Anyway, thanks for the polite reply.
You are a gentleman as always.

By: Bob B (Tue, 20 May 2008 17:04:42 +0000)
http://cstpr.colorado.edu/prometheus/?p=4419&cpage=2#comment-10122

OK, GCMs have made their way into the public debate via wild claims, either by the modelers or the media. Let’s phrase the question(s) another way, as the questions that the public, which is being asked to sacrifice its lifestyle, should be allowed to ask of the modelers:

1. Do the GCMs prove beyond a reasonable doubt that catastrophic global warming is imminent? NO

2. Do the models predict the downturn in the Earth’s global temperature over the last 6-10 years? NO

3. Do the climate models simulate all potential physical climate drivers (i.e. PDO, AMO, ENSO, etc.)? NO

4. Has Lucia shown, within a reasonable doubt, that the recent temperature observations don’t match short-term IPCC projections? YES
