Global Cooling Consistent With Global Warming
April 30th, 2008. Posted by: Roger Pielke, Jr.
For a while now I’ve been asking climate scientists to tell me what could be observed in the real world that would be inconsistent with forecasts (predictions, projections, etc.) of climate models, such as those that are used by the IPCC. I’ve long suspected that the answer is “nothing” and the public silence from those in the outspoken climate science community would seem to back this up. Now a paper in Nature today (PDF) suggests that cooling in the world’s oceans could, according to Richard Wood, who comments on the paper in the same issue, “temporarily offset the longer-term warming trend from increasing levels of greenhouse gases in the atmosphere”, and this would not be inconsistent with predictions of longer-term global warming.
I am sure that this is an excellent paper by world-class scientists. But when I look at the broader significance of the paper, what I see is that there is in fact nothing that can be observed in the climate system that would be inconsistent with climate model predictions. If global cooling over the next few decades is consistent with model predictions, then so too is pretty much anything and everything under the sun.
This means that from a practical standpoint climate models are of no use beyond providing some intellectual authority in the promotional battle over global climate policy. I am sure that some model somewhere has foretold how the next 20 years will evolve (and please ask me in 20 years which one!). And if none get it right, it won’t mean that any were actually wrong. If there is no future over the next few decades that models rule out, then anything is possible. And of course, no one needed a model to know that.
Don’t get me wrong, models are great tools for probing our understanding and exploring various assumptions about how nature works. But scientists think they know with certainty that carbon dioxide leads to bad outcomes for the planet, so future modeling will only refine that fact. I am focused on the predictive value of the models, which appears to be nil. So models have plenty of scientific value left in them, but tools to use in planning or policy? Forget about it.
Those who might object to my assertion that models are of no practical use beyond political promotion can start by returning to my original question: What can be observed in the climate over the next few decades that would be inconsistent with climate model projections? If you have no answer for this question then I’ll stick with my views.
April 30th, 2008 at 10:11 pm
Roger,
Can you explain where you got “may cool over the next 20 years” from? I didn’t see that in the paper. I did see the paper clearly suggest that the change may be small for 5-10 years (eg their Figure 4), but over the 30 year time frame there will be strong warming (which also answers your original point, I think).
It’s long been known that climate models (and the real world!) may have short periods when natural variability dominates or at least strongly mitigates the forced response. The only new thing I see here is a claim to be able to predict such events in advance. Time will tell.
May 1st, 2008 at 6:52 am
Hi James- Saw the 20 years in the media reports. Maybe I should have said “next few decades”.
I see that you neglected to address my central question. As a climate modeler, any thoughts on that?
May 1st, 2008 at 7:10 am
Roger,
I explicitly wrote “over the 30 year time frame there will be strong warming” – and actually 20y would be a safe bet too. TBH I still reckon warming over the next single decade must be considered likely – I have my doubts about this paper (it makes a pleasant change to be sceptical of a paper that is insufficiently “alarmist”). From what I can see, they never even showed that they had any skill in hindcasting global mean temperatures, so I wouldn’t put too much weight on that aspect (even if the press do).
May 1st, 2008 at 7:21 am
James-
I changed the text, as you can see. The exact duration of their proposed cooling trend is less important than the central point of this post, which has to do with what might be observed in the near term that would be inconsistent with climate model projections.
I see that you have once again avoided addressing this question. Can I assume from your silence that the answer is indeed “nothing”?
May 1st, 2008 at 8:18 am
This is a particularly telling post in light of the comments you’ve posted at Romm’s blog and elsewhere. I have to say, that at first I didn’t believe the accusations, but it’s pretty hard to read something like this and come to a different conclusion.
Natural variability will sometimes mask, sometimes amplify the warming signal. This isn’t a shocking revelation.
When you say things like: “But when I look at the broader significance of the paper what I see is that there is in fact nothing that can be observed in the climate system that would be inconsistent with climate model predictions”, it gives the impression that you are either being deliberately disingenuous or your opinion is so ill-informed as to be unworthy of consideration.
May 1st, 2008 at 8:23 am
Jon-
Not sure I understand your comment, care to clarify?
Perhaps you have an answer to the question, what would be inconsistent with climate model predictions?
Or is asking the question a sign of heresy?
May 1st, 2008 at 8:52 am
Heresy? Imagined persecution isn’t particularly flattering.
Before anyone answers your questions, why don’t you address the one you’re begging with the title of this post- where in the Nature article did you read anything about “global cooling”?
May 1st, 2008 at 9:30 am
“This means that from a practical standpoint climate models are of no practical use beyond providing some intellectual authority in the promotional battle over global climate policy.”
As someone who has been involved with computational fluid mechanics, numerical analysis, and software design for most of my professional career (20+ years), I believe you have hit the nail on the head here.
I have studied some of the climate codes and, while many (like CAM 3 at NCAR) appear to be well documented and reasonably well tested, others (like model E at the NASA GISS) are not. I have not found any detailed descriptions of these codes where they relate the differential equations and supporting parameterizations (e.g. tracers, cloud and ocean models, etc.) they are purportedly solving to ** specific ** subroutines and functions contained in the codes. It is also not clear what validation and verification tests have been done on individual subroutines.
In the end, most of what I see are research grade codes of varying quality – at least among those codes that have their source listings available online (most do not, for whatever reason). Consequently, I am personally very cautious about the predictive claims made by the codes’ authors…
Frank
May 1st, 2008 at 9:32 am
Jon-
Are you serious? Look how Andy Revkin at the NYT characterizes it:
http://dotearth.blogs.nytimes.com/
From the abstract of the Nature paper:
“we make the following forecast . . . North Atlantic SST and European and North American surface temperatures will cool slightly, whereas tropical Pacific SST will remain almost unchanged. Our results suggest that global surface temperature may not increase over the next decade . . .”
This is why if you Google Keenlyside (the lead author) you’ll get a bunch of media stories about global cooling.
Now that I’ve answered your question, your turn to answer mine.
May 1st, 2008 at 9:41 am
“Now that I’ve answered your question, your turn to answer mine.”
You haven’t answered my question- where in the Nature article did you read anything about “global cooling”?
In response you cite Revkin and hand wave at the abstract. Neither say anything about “global cooling”.
Are you going to tell me that you didn’t even read the study before posting this nonsense? Or did you actually read it and deliberately use “global cooling” when you _knew_ that no such thing was mentioned in the Nature article?
Incompetence or dishonesty. Your pick.
May 1st, 2008 at 9:52 am
Roger, you said above “a paper in Nature today (PDF) suggests that the world may cool over the next 20 years”.
James pointed out that the paper really says there may be little warming for the next 10 years, followed by substantial warming afterwards.
So you changed “cool over the next 20 years” to “cool over the next few decades”. Huh?
The paper says nothing about global cooling for decades. It suggests minimal warming for 10 years, followed by substantial warming. It’s right there in the abstract that you linked.
Now, to answer your question. IMHO, for observations to be inconsistent with IPCC AR4, they need to:
a) Be corrected for known cycles (ENSO, PDO, 11-year solar, etc);
b) Be corrected for unusual events (volcanoes, etc);
c) Have appropriate confidence limits for weather “noise”;
d) Have confidence limits that lie outside the model predictions.
There are of course many models. It would probably be difficult to find observations on a short time scale that are inconsistent with all of them. A better approach might be to investigate individual models or groups of similar models. Identifying the strengths and weaknesses of individual models would serve a useful scientific purpose.
May 1st, 2008 at 10:02 am
Jon-
I see, you want to have a semantic debate. Great.
If you look at Richard Wood’s commentary on the piece in Nature, he writes of the authors’ finding of changes in ocean currents: “Such a cooling could temporarily offset the longer-term warming trend from increasing levels of greenhouse gases in the atmosphere.”
Cooling. Offset longer-term warming trend. Global surface temperature may not increase over the next decade.
I have interpreted these comments in the Nature paper and companion commentary as consistent with the term “cooling” — which my OED says means “of or at a fairly low temperature”. Most of the global media appears to have the exact same interpretation.
So this is the end of my disposition on the semantic nuances associated with the word “cooling”. Perhaps you might have used a different word in this context, such as “warming”. And of course that usage would be perfectly consistent with the argument of my post.
Apparently nothing is inconsistent with warming, not even cooling. To suggest anything different is incompetent or dishonest. Indeed.
I assume that you’ll continue to avoid my central question, which speaks for itself. Anyway thanks for stopping by.
May 1st, 2008 at 10:09 am
John V-
Who knew that the mention of “cooling” would raise such protests;-) I’ve again revised the text. OK now?
May 1st, 2008 at 10:15 am
John V-
I agree with your criteria for looking at observations vs. models, but after one does all of these things, what (hypothetically) could be observed to be inconsistent with the models?
Thanks!
May 1st, 2008 at 10:34 am
It wasn’t the word, it was the misrepresentation. It looks better now.
To get back to your question again. You should probably be asking what would be inconsistent with *this* particular paper. Since this paper makes predictions for a decade, it will only take a decade of observations to see how it does.
Since the predictions of this paper are quite different than others, observations may even discriminate between the models. That would be useful.
You claim to be frustrated that *no* observations are inconsistent with the models. Over the last 20-30 years an infinite variety of observations could have been inconsistent, but actual observations were not. It would be quite easy to falsify the models if they were incorrect. Try running Model E without including GHGs — observations would be very inconsistent with the model results.
May 1st, 2008 at 10:48 am
“Apparently nothing is inconsistent with warming, not even cooling.”
When has anyone ever proclaimed that anthropogenic warming would be monotonic? You are calling a redistribution of heat in the system temporarily masking the warming signal “global cooling”- it’s just as incorrect as claiming such about La Ninas. And it doesn’t matter whether “the global media” makes the same mistake.
“To suggest anything different is incompetent or dishonest. Indeed.”
There is a third option?
“I assume that you’ll continue to avoid my central question, which speaks for itself.”
As has been pointed out to you, your “central question” is rather ill-posed. Which model(s)? Run under what assumptions?
Obviously there are future observations that would be inconsistent with specific models run under specific assumptions. I doubt that ModelE would produce sustained *global* cooling for decades under ‘current’ assumptions. The thrust of your argument leads me to suspect that even if you were provided with specific examples that invalidated your assertion- “nothing is inconsistent with warming”- you would move the goal posts to a position that such examples aren’t meaningful.
At the end of the day, the issue is not the mean anomaly for the next ten or even twenty years. We both know this. A first pass at forecasting climate on short timescales resulting in natural variability obscuring the warming signal isn’t really unexpected, is it?
You claim to be interested in models as “tools to use in planning or policy”, which you claim they are not, or at least not useful. Unless your position is that anyone has claimed or demanded that models give accurate short term projections, I don’t see your complaint. The big picture, which I don’t think anyone needs to be reminded of, is consequences out past ten or twenty years and more than fractions of degrees in temperature.
None of which is changed by this study, and in fact, this study explicitly supports the previous findings regarding such.
“Anyway thanks for stopping by.”
You’re welcome. If you need further editing assistance in the future, drop me an email.
May 1st, 2008 at 10:57 am
As a bystander to this debate, I have to say I’m cracking up over here at the fact that no one will answer Roger’s question. Completely ignoring this article, will someone please give an example of an observation that will conclusively prove that humans aren’t causing climate change that is significant relative to natural climate change?
Here’s one: If New York City is crushed by a glacier in 10 years, that would conclusively prove that humans’ effect on the climate is small relative to nature. Or am I wrong? If the observation of a glacier crushing NYC is not corrected for solar cycles and does not have appropriate confidence limits, would said glacier crushing NYC still be consistent with catastrophic global warming climate models?
Is there not a similar example, less extreme than this, of an observation that would be inconsistent with the models? For example, a cooling of X degrees over the next Y years? Can any climate expert out there take a shot in the dark on this one?
May 1st, 2008 at 11:05 am
Jon-
You ask: “When has anyone ever proclaimed that anthropogenic warming would be monotonic?”
Um, IPCC. Figure 10.26 here:
http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter10.pdf
Of course, perhaps you can point to an IPCC A1B model run with no warming from 1998-2020 (which would be the case if the Nature prediction pans out). I don’t think you can, as the IPCC trend of 0.2C/decade is expected across its projections, plus or minus a small uncertainty.
So, as has been pointed out in the discussions of the Nature paper, its prediction, if realized, would be inconsistent with IPCC AR4.
John V-
Yes, the question of consistency between models and observations always turns to the past. This, of course, is not falsification in the Popperian sense, which is precisely my point.
And I agree that any specific model can offer forecasts that will be falsified, but I am asking a broader question.
If any possible future is consistent with some model forecast, identifiable only in retrospect, then of what practical use is short term (decadal) climate prediction?
May 1st, 2008 at 11:19 am
Roger, as usual you make some excellent points, which Jon, John, and James have failed to address, preferring to quibble with your wording.
The paper appeared on May 1, but April 1 might have been more appropriate. When temperatures were rising, it was put down to man-made global warming, and any talk of natural variation was dismissed. Now that the alarmists have been caught out by the lack of temperature rise over the last few years, they desperately have to think of excuses. So now that it is convenient to their argument, natural variation is thought to be important, of the same magnitude as AGW, in order to ‘temporarily offset the projected AGW’.
Why were there no papers saying that this oscillation contributed to warming when it was in its upward phase?
May 1st, 2008 at 11:28 am
“Um, IPCC. Figure 10.26 here:”
Just so that we are both clear- you believe that those IPCC projections denote *monotonic* realized warming? Yes or no?
May 1st, 2008 at 11:32 am
Roger:
“the question of consistency between models and observations always turns to the past”
Since observations of the future are very difficult to obtain, it is necessary that consistency is determined from past observations.
NASA’s model from the late 1980s is consistent with observations.
IPCC AR4 predictions from 2001 are consistent with observations.
“of what practical use is short term (decadal) climate prediction?”
IMO, decadal climate prediction has very little practical use at this point. Short-term climate prediction is in its infancy. The authors of this paper even admit this is a very small first step.
But the issues of AGW are not what happens in the next decade. It’s the long term trend that causes concern.
—
Captain Obviousness:
Roger’s question is so vague that it is difficult to answer. Which observations would be inconsistent with a particular model? Inconsistent observations, of course.
For example, an observation of warming at 1.1 degC/century with 95% confidence intervals of 0.4 degC/century to 1.8 degC/century is inconsistent with a prediction of 2 degC/century with 95% confidence. The observation must be corrected for any short-term cycles (ENSO, etc) that are excluded or averaged-out of the models.
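(To make that concrete, here is a minimal sketch of the kind of test described above: fit an ordinary least-squares trend to observed anomalies, form a 95% confidence interval on the slope, and check whether the predicted trend falls inside it. The anomaly series and the 2 degC/century prediction are invented purely for illustration.)

```python
import numpy as np
from scipy import stats

# Invented illustration: 20 years of annual-mean temperature anomalies (degC)
# with a "true" trend of 1.1 degC/century plus weather noise.
rng = np.random.default_rng(0)
years = np.arange(2001, 2021)
anoms = 0.011 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

# OLS trend and a 95% confidence interval on the slope.
fit = stats.linregress(years, anoms)
t95 = stats.t.ppf(0.975, df=years.size - 2)
lo, hi = fit.slope - t95 * fit.stderr, fit.slope + t95 * fit.stderr

predicted = 0.02  # degC/year, i.e. the 2 degC/century prediction in question
verdict = "consistent" if lo <= predicted <= hi else "inconsistent"
print(f"observed trend {fit.slope * 100:.2f} degC/century, "
      f"95% CI [{lo * 100:.2f}, {hi * 100:.2f}]: prediction is {verdict}")
```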
May 1st, 2008 at 11:34 am
“For example, a cooling of X degrees over the next Y years?”
Well, the problem is Y has to be large enough for internal unforced variability to average out over the time period in question. Even if X is very far away from the predicted trend (in either direction) this is likely due to unforced variability, which dominates over short time scales.
Think about it from the other perspective. If temps rise at double the rate of the predicted trend over five years, this would still not confirm that the IPCCs trend would be accurate for the forced variability.
In short, it is problematic to attempt to use observations about unforced variability to draw conclusions about forced variability, and specifically CO2.
So “nothing” is close to the correct answer, since you are observing one phenomenon to try to gain information about a separate phenomenon. I would liken it to trying to determine the seasonal change in a region by looking at the diurnal change. You get some information, but it’s not very useful. Imagine asking “what would I need to observe in the next week’s weather that would prove that summer is not approaching.” Can you answer that question easily?
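(That analogy can be simulated directly, with all numbers invented for illustration: generate daily temperatures as an annual cycle plus weather noise, and look at one-week trends during spring, when summer is unambiguously approaching. A large share of weeks still "cool".)

```python
import numpy as np

# Invented toy climate: daily temperatures with an annual cycle plus
# weather noise. Even as summer approaches, one-week trends scatter
# widely, so a single week says almost nothing about the season.
rng = np.random.default_rng(3)
days = np.arange(365)
temps = 15 - 10 * np.cos(2 * np.pi * days / 365) + rng.normal(0, 3, days.size)

spring = range(60, 150)  # days when summer is clearly approaching
weekly_trends = [np.polyfit(np.arange(7), temps[d:d + 7], 1)[0] for d in spring]
print(f"one-week trends in spring: {min(weekly_trends):+.1f} to "
      f"{max(weekly_trends):+.1f} degC/day")
print(f"share of 'cooling' weeks while summer approaches: "
      f"{np.mean(np.array(weekly_trends) < 0):.0%}")
```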
May 1st, 2008 at 11:35 am
When asked to pick a model, I consider the models that were included in the IPCC reports, which showed that temperatures would rise if CO2 levels remain high. Indeed, we exceeded the scenario A emissions while temperatures are in the basement of Scenario C. Those models also predicted a heating signature of a warm tropical troposphere if the warming was attributable to mankind. That signature has not been realized. Apparently, these two observations are not inconsistent with the models.
“You are calling a redistribution of heat in the system temporarily masking the warming signal “global cooling”- it’s just as incorrect as claiming such about La Ninas.”
It is also just as incorrect as claiming such about El Ninos, which amplify the signal. That the PDO and other oscillations could counteract the models leads to a couple of issues. First, the models do not take them into account. Since these oscillations have already led to a levelling off of the temperature trend, they are evidently able to mask warming. Second, the models should not be consistent with the previous warming phase of the oscillations, which amplified the warming trend. This is because if they can mask, they can also amplify.
“Unless your position is that anyone has claimed or demanded that models give accurate short term projections, I don’t see your complaint.”
So, in the short term, anything can happen and the models will still be consistent? What about the main question that spans decades? What if we see another minimum period like that of the 1960s? Or what if we had a minimum of a magnitude similar to that of the Dalton minimum, or, God forbid, the Maunder minimum? Would any of these be inconsistent with the climate models?
The main question remains unanswered:
What can be observed in the climate over the next few decades that would be inconsistent with climate model projections?
John M Reynolds
May 1st, 2008 at 11:38 am
‘You are calling a redistribution of heat in the system temporarily masking the warming signal “global cooling”‘
Exactly, when the earth cools, and the oceans cool, and the upper atmosphere cools you’re (as a whole) getting warmer… obviously calling a reduction in the total energy (heat) in the system cooling is a specious argument.
You must call higher energy (more heat) warming, and lower energy (less heat) “redistributed warming”. There is never an event, situation, or energy level of the Earth that constitutes “cooling” in any form.
My dictionary appears to be broken though. I’ll have to request a new one. One that clarifies that “cooling” never exists, even in a time of lower heat.
May 1st, 2008 at 11:53 am
Roger
Four years ago, Michael Crichton made this rather prudent suggestion (State of Fear, p. 570): “Before making expensive policy decisions on the basis of climate models, I think it is reasonable to require that those models predict future temperatures accurately for a period of ten years. Twenty would be better.”
From an economic and political point of view, it has always appeared to me much wiser to wait and see whether any climate model and prediction can withstand the test of time. Given that none predicted the apparent arrest of the warming trend in recent years, the burden of proof has now become much harder. In any case, the obvious uncertainties about the trajectory of climate change will make national and international climate policy even more complicated than before. I guess climate campaigners can bury any hopes for the prospect of internationally binding emission targets or a Kyoto-style climate treaty.
May 1st, 2008 at 11:56 am
“It is also just as incorrect as claiming such about El Ninos which amplify the signal.”
Of course! This is constantly hammered home when people try to artificially peg temps to the 1998 El Nino as evidence of current cooling.
“Exactly, when the earth cools, and the oceans cool, and the upper atmosphere cools you’re (as a whole) getting warmer… ”
Can you please point out the “earth” and “atmosphere cool[ing]” in the Nature article? The one I’m reading talks about warming not increasing as much- I can’t seem to find where it discusses temperatures “cooling”.
“You must call higher energy (more heat) warming, and lower energy (less heat) “redistributed warming”. There is never an event, situation, or energy level of the Earth that constitutes “cooling” in any form.”
There are only two ways to change the amount of energy in the system- change either the amount coming in or the amount going out. The Nature study is dealing with a change in the distribution of energy superimposed over a decrease of outgoing energy. That is why it is incorrect to call it “global cooling”.
A classic example of *actual* cooling (e.g. reduction in energy in the system) of the first kind would be volcanic aerosols reducing the amount of incoming shortwave radiation that is absorbed by the Earth.
An example of cooling of the second kind would be a reduction of (surprise!) greenhouse gases.
May 1st, 2008 at 1:44 pm
Entire sections of the IPCC report are devoted to your question. For one general answer, none of the models predict either long-term cooling or stabilization of temperatures this century or beyond. They all predict that if we stabilize atmospheric GHG concentrations, the climate will reach a warmer equilibrium.
People are not answering your question, because as others have already pointed out, your comments on this matter are disingenuous. In particular, your reference to Fig 10.26 is silly. No climate model predicts that warming will be monotonic. If you average all the individual simulations, the results will be monotonic, like fig 10.26. The individual simulations are independent realizations of the “noise” in the climate system, such that short and long-term oscillations (PDO, ENSO, AMO, etc.) peak in different years. In fact, modellers average the individual ensemble members together for the very purpose of smoothing out the internal variability and presenting the long-term trend. That’s why 10.26 is monotonic. The individual runs do not look like that.
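(A toy simulation, with invented numbers, shows why the ensemble mean is smooth even though no individual run is: give every run the same forced trend plus its own unforced noise, and averaging cancels the noise while the individual-run trends scatter widely.)

```python
import numpy as np

# Toy ensemble (invented numbers): every run shares a forced trend of
# 0.02 degC/yr; each adds its own unforced variability (sd 0.15 degC).
rng = np.random.default_rng(1)
n_runs, n_years = 25, 30
years = np.arange(n_years)
runs = 0.02 * years + rng.normal(0.0, 0.15, (n_runs, n_years))

# Trends of individual runs vs. the trend of the ensemble mean.
run_trends = np.polyfit(years, runs.T, 1)[0]          # one slope per run
mean_trend = np.polyfit(years, runs.mean(axis=0), 1)[0]

print(f"individual-run trends: {run_trends.min():.3f} to {run_trends.max():.3f} degC/yr")
print(f"ensemble-mean trend:   {mean_trend:.3f} degC/yr (forced signal: 0.020)")
```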
May 1st, 2008 at 2:17 pm
Simon-
Yes, it must be that I am disingenuous for asking the question.
We’ve had the discussion here of trends versus variability. Can you point to any GHG-forced AOGCM or SCM realization with a 20-year trend 2000-2020 that is flat or negative? You just need to point to one. Any one.
You assert boldly that “no climate model predicts that warming will be monotonic.” I’m sure that this hinges on how you define monotonic, but do you really doubt that I can provide a model run with a monotonic temperature increase, where monotonic is defined as no single 10-20 year period showing a declining GMST?
So with all that, is it then fair for me to interpret your reply as saying that no observations of the climate system to, say 2015 or 2020, would be inconsistent with climate model projections? Or do you refuse to answer on the basis of my disingenuousness?;-)
May 1st, 2008 at 3:05 pm
I’m a newcomer here, and I am enjoying today’s exchanges. My own opinions on climate change are derived not from what I read (press, scientific literature, blogs etc) but from what I do with the actual data (numbers, /not/ plots) from numerous sources world-wide. Anyone who downloads such data from the famous sources cannot have failed to spot some time ago (years) that the records failed to demonstrate a consistent warming trend, which is contrary to what seems to be widely but by no means universally believed.
Thus the very recent paper that is causing so much alarm and despondency (and being discussed in this forum) comes as no surprise to me.
Just to be clear, I am not a climatologist, but simply a very interested bystander (long retired) who has had some experience of data analysis and interpretation.
What has puzzled me for several years is why there has been so much debate regarding the “hockey stick plot” – off topic I know, but nevertheless apposite in the context of data interpretation, which is what is being discussed here. If one looks at the HS data using industrially tried and tested methods it is absolutely clear that no such pattern exists in the data taken as a whole.
When the current temperature data have reached the state of historic maturity I wonder what the debate will then be about. Will it be temperature growth standstill, or reverse, or simply near chaos? The stability of the last 8 or 10 years may be a blip that will be recognised as such, or it may be an important signal. I would like to be able to live long enough to find out!
Robin
May 1st, 2008 at 3:26 pm
Jon,
I’m a mere mortal whose scientific study is nil. But you stated: “A classic example of *actual* cooling (e.g. reduction in energy in the system) of the first kind would be volcanic aerosols reducing the amount of incoming shortwave radiation that is absorbed by the Earth.”
Question: Could not a disruption within the sun’s 11 year cycle also contribute to “a reduction of energy in the system”? If I’m mistaken, please forgive me, but are we not in an extraordinary solar “cooling” cycle now, which might give credence to a reduction of energy resulting in cooling? This would, in fact, give this “heat redistribution” theory legs to stand on.
May 1st, 2008 at 3:30 pm
Jon
“There are only two ways to change the amount of energy in the system- change either the amount coming in or the amount going out. The Nature study is dealing with a change in the distribution of energy superimposed over a decrease of outgoing energy. That is why it is incorrect to call it “global cooling”.”
Think again- a change in the internal distribution of energy can increase or decrease outgoing energy.
Are you aware of Roy Spencer’s arguments of an “internal radiative forcing”? From the link below, “Internal Radiative Forcing and the Illusion of a Sensitive Climate System”:
http://climatesci.org/2008/04/22/internal-radiative-forcing-and-the-illusion-of-a-sensitive-climate-system-by-roy-spencer/
“The 800- Pound Gorilla We’ve missed: Internal Radiative Forcing.”
“Internal radiative forcing refers to any change in the top-of-atmosphere radiative budget resulting from an internally generated fluctuation in the ocean- atmosphere system that is not the direct result of feedback on temperature”.
And further: -
“We will see that the neglect of internal sources of radiative forcing represents more than a source of error. It impacts on our perception of natural climate variability and what the climate system is telling us about climate sensitivity.”
Such a concept exists in the literature but is apparently largely ignored. For example, this Nature article refers to “external radiative forcing” (p 82, 4th paragraph) – this implies such a thing as “internal radiative forcing” exists.
This article is being considered for publication in the Bulletin of the American Meteorological Society & is a more general version of another paper accepted for publication in the J of Climate, “Potential Biases in Feedback Diagnoses from Observational Data: A Simple Model Description.”
An interesting paper and concept.
May 1st, 2008 at 3:39 pm
Roger:
As someone who has called for model validation on actual forward forecasts (not only “hindcasting”) for some time, I am in sympathy with the spirit of your question. Non-falsifiable = non-scientific is a really useful rule-of-thumb.
However, I think that:
1. One needs to match the time period of the falsification test to the underlying physical theory. I have often been presented with the assertion by climate scientists that we require something like a 30-year period to distinguish signal from noise (i.e., the proper test period is at least 30 years), so one could see the events described in the paper, and still have not falsified the predictive model.
2. I don’t really think that a binary “data is consistent or inconsistent with model predictions” is the most productive way to think about the results of such tests. Instead, it’s really the distribution of predicted-to-actual results for a series of predictions that we care about.
May 1st, 2008 at 3:42 pm
“I’m sure that this hinges on how you define monotonic”
Do you have a definition, as with ‘global cooling’, that the rest of us do not?
Is it really surprising to you that people are nonplussed by this post and your subsequent responses? You’re seeing forecasts of global cooling where we can’t, claims of monotonic warming where we don’t, and are calling models useless for policy when the timescales of the consequences that international policy is concerned with render the initial 10-or-so-year discrepancy between this study and IPCC projections moot.
“So with all that, is it then fair for me to interpret your reply as saying that no observations of the climate system to, say 2015 or 2020, would be inconsistent with climate model projections?”
*Which* models? *Which* projections? Obviously a mean warming of 10C by 2020 would be inconsistent with most if not all projections based on current assumptions. So would a cooling of 10C, but you don’t seem to be after answers of that sort.
What, from a policy position, is the substantive difference between the forecast of this Nature study and IPCC projections by the year 2020?
What, from a policy position, does this study change about model usefulness?
The entire premise of your complaint here seems to be false. No global cooling is actually predicted rendering your “if/then what else” framing moot. Natural variability on short timescales masking warming changes *nothing* in regards to what actions should or should not be taken policy-wise, unless one is interested in delaying action under the guise of taking a wait and see approach- an approach which is invalidated by this very study as the temp result past a single decade is essentially the same. In essence there is no distinction between complaining about this versus complaining about ENSO variability.
May 1st, 2008 at 3:51 pm
“Redistributed warming” must be NEWSPEAK.
Tell me, did you go to Hansen’s office for the lobotomy, or did he make a house call?
I have seen Gavin Himself state on RealClimate that a decade or more of flat or negative temperature trend would be sufficient to falsify the current theory of AGW.
May 1st, 2008 at 3:54 pm
Jon-
Now we are getting somewhere! You write:
“Obviously a mean warming of 10C by 2020 would be inconsistent with most if not all projections based on current assumptions. So would a cooling of 10C, but you don’t seem to be after answers of that sort.”
OK, how about a warming of 2.0C? A cooling of 1.0C? Where is the threshold? If 0.2 C warming to 2020 is consistent with projections, and +/- 10 C is inconsistent, then there is indeed some threshold (probabilistic if you’d like, say 95%) in between. I am asking, where is that threshold (with respect to this or any other variable that you choose)?
Who said anything whatsoever about the sensitivity of policy options to this discussion? I certainly did not. In fact I have argued in Science that the policy options that make the most sense are completely independent of the scientific debate. You are projecting something onto my views that I did not say.
I do think that the scientific community risks its credibility when it claims that this or that observation is “consistent with” projected climate change if in fact nothing is inconsistent with those projections.
I welcome your reply to my question of thresholds above.
May 1st, 2008 at 4:04 pm
Jim Manzi- I agree with these two points 100%.
John V.- I believe that you are familiar with Lucia L.’s work on IPCC 2007 trend predictions for 2001-present, where she argues that they are inconsistent, quite contrary to your own assertion.
The Nature article that we are discussing is inconsistent with AR4 trend projections. Does it matter which is correct? Probably not from a policy perspective, but from a political (promotional) perspective, yes; scientifically, yes.
My sense is that some people are so afraid of the political ramifications of suggesting that climate models might not accurately capture the evolution of the climate system that they are willing to dismiss the scientific value of those models in an instant. If everything that can be observed over the next decade is consistent with what climate models project, then there is no scientific value in making such projections.
May 1st, 2008 at 4:50 pm
Roger,
I am indeed familiar with lucia’s work on falsifying the IPCC predictions.
She recently improved her trend estimates by correcting for the effect of the 11-year solar cycle. With this correction she has found that the 2001-present trends are *consistent* with the IPCC trend.
This was done without her ENSO correction — my rough estimate is that with ENSO and solar cycle corrections the observations are very close to the predictions.
This Nature article can only be considered inconsistent with IPCC AR4 if you believe IPCC AR4 makes short-term predictions. On longer time scales they are consistent.
“If everything that can be observed over the next decade is consistent with what climate models project…”
Many of us are trying to make the point that “climate models” are not a single monolithic entity. There are many models. Undoubtedly they all have strengths and weaknesses. Some regional or global observations will probably be inconsistent with some of them.
“there is no scientific value in making such projections”
I’m shocked that you would say this. Predictions are made and documented so that they can be tested. This is how models are improved and how knowledge is gained. There is huge scientific value in that. Making and testing predictions is the whole basis of science.
May 1st, 2008 at 5:02 pm
John V- Thanks.
When you make solar and ENSO adjustments you are in fact suggesting that these need to be incorporated into the prediction in order to make them reconcile. This is simply another way of saying that the model was incomplete.
Of course climate models are not monolithic. But if the ensemble of models predicts everything and anything, then the ensemble is of little use in a predictive capacity. Imagine if I had a set of 25 weather forecasts (from various models) for temperatures tomorrow of 50 to 100 degrees. When the temperature turns out to be 63 degrees, I could say that indeed this was observed to be consistent with the models. But as a tool of decision making it is useless.
As far as the scientific value, you are right, I should not have said “no” — “very little” would have more accurately reflected my views. Such models may indeed add insight to our understanding, but only if there is some way to discriminate among models — and one way to do this is via forecast verification. As can be seen on this thread, calls for verification are resisted strongly in public discussions, but fortunately are far more accepted in the trenches.
FYI, we have a book on Prediction:
http://sciencepolicy.colorado.edu/homepages/roger_pielke/prediction_book/
and here is a paper on the subject:
Pielke, Jr., R.A., 2003: The role of models in prediction for decision, Chapter 7, pp. 113-137 in C. Canham and W. Lauenroth (eds.), Understanding Ecosystems: The Role of Quantitative Models in Observations, Synthesis, and Prediction, Princeton University Press, Princeton, N.J.
http://sciencepolicy.colorado.edu/admin/publication_files/2001.12.pdf
May 1st, 2008 at 5:11 pm
“Who said anything whatsoever about the sensitivity of policy options to this discussion? I certainly did not.”
My policy comments were in reference to:
“So models have plenty of scientific value left in them, but tools to use in planning or policy? Forget about it.”
Please answer my questions:
What, from a policy position, is the substantive difference between the forecast of this Nature study and IPCC projections by the year 2020?
What, from a policy position, does this study change about model usefulness?
“I welcome your reply to my question of thresholds above.”
Your question depends on the model(s) and assumptions plugged in- you know this, don’t you? How many more times do people need to say this?
May 1st, 2008 at 5:15 pm
Also, earlier I said:
The thrust of your argument leads me to suspect that even if you were provided with specific examples that invalidated your assertion- “nothing is inconsistent with warming”- you would move the goal posts to a position that such examples aren’t meaningful.
May 1st, 2008 at 5:17 pm
“This is simply another way of saying that the model was incomplete.”
Well, it’s not a model, it’s an ensemble mean of all models, which tends to be more accurate, but averages out unforced variability. The solar cycle is likely left out because it evens out over the scales used for prediction. And predicting the solar cycle is a crapshoot at best.
And the ensembles do not predict “anything and everything.” But they make predictions for thirty years, so complaining that we can’t be sure about them after seven or ten years is misunderstanding what it is they actually predict.
I ask again, what could you observe in the weather in the next week that would falsify (or confirm) the theory that summer is approaching?
May 1st, 2008 at 7:41 pm
“Well, it’s not a model, it’s an ensemble mean of all models, which tends to be more accurate, but averages out unforced variability.”
Why would an ensemble be more accurate? There was a separate discussion about this at Briggs’ blog:
http://wmbriggs.com/blog/2008/04/21/co2-and-temperature-which-predicts-which/
Lucia’s work caused the modellers to change their programs. That they are being changed is not of interest. What matters are the five or six models that are being used to develop policy and are being pushed by the IPCC in 2007. If the modellers move the goal posts without telling everyone, then the changes are not being considered politically.
Somewhere there is a point where the scenarios generated by the models for the IPCC AR4 report would be proven wrong. New York being buried by a glacier in 10 years would invalidate all of the models. A rise or drop of 10C by 2018 would invalidate all of the models. What is the threshold of temperature observations that would invalidate the IPCC scenarios A2, A1B, B1 and C?
Figure 10.4 in the IPCC PDF link above shows that the starting point is 0.2C at the year 2000. From that point, all 4 scenarios are positive and give no room for a decade of cooling up to year 2100. A year or two due to La Ninas sure, but not a decade. If we go down to 0.0C by 2020, then we would be beyond the light areas that I am assuming are error margins. Indeed, we would be about the width of the error margin away from the bottom of the lower edge of the scenario C error margin. Would that negate ALL of the models that were used to make this report? Or must we wait until 2030? Would a drop to 0.0C on that graph by 2030 invalidate all of the models?
John M Reynolds
May 1st, 2008 at 9:32 pm
Jon-
You write: “Your question depends on the model(s) and assumptions plugged in- you know this, don’t you?”
No. This is incorrect. The IPCC presents model ensembles because it believes that such collections are more apt to accurately represent trends. The question that I raise is not about a single model or model realization, but about an ensemble of relevant models.
Boris suggests that these ensembles are relevant only after 30 years. No. Rahmstorf et al. suggest that 17 years is plenty long enough. The reality is that any period can be compared, however the shorter the period the larger the uncertainty. Thus, asking what observations would be inconsistent with an ensemble of modeled results is a function of the time period selected. I’ve suggested to 2020, just for discussion.
Lucia L. has done some good work on these questions at her blog:
http://rankexploits.com/musings/
Specifically see:
http://rankexploits.com/musings/2008/ipcc-projections-continue-to-falsify/
It has been pointed out to me that a similar conversation is taking place at ClimateAudit:
http://www.climateaudit.org/?p=3048#comments
May 2nd, 2008 at 3:08 am
Obviously “global cooling” is just “global warming” with style …
May 2nd, 2008 at 6:21 am
John Reynolds,
“Lucia’s work caused the modellers to change their programs.”
Where in the world did you get this idea?
Once again, the IPCC did not make an attempt to predict the climate for the next decade. Their prediction is for thirty years.
Roger,
I agree with you that you can compare the ensemble mean to any time frame you want, but you won’t get an answer that’s worth anything until unforced variability roughly cancels out. 17 years is likely enough time.
But you are missing my larger point, that whatever variations you see short term are unforced, so they don’t tell you anything meaningful about forced variability.
May 2nd, 2008 at 8:10 am
Roger:
“Lucia L. has done some good work on these questions at her blog:”
I agree — she has done some good work.
I also agree that the model runs should include solar cycle forcing. IIRC, only NASA’s Model E includes it. For this reason, comparisons between observations and model results must be adjusted for solar effects. You agreed to this earlier in the thread.
ENSO and other internal, unforced variability is intentionally averaged out of the ensemble results. That is why observations must be adjusted to account for ENSO. You also agreed to this above.
Lucia took some time to adjust for the solar cycle and found that observations are consistent with predictions:
http://rankexploits.com/musings/2008/what-about-the-solar-cycle-yes-john-v-that-could-explain-the-falsification/
She has not yet done an analysis that includes both the solar cycle and ENSO.
I also agree that it is possible for a short period of observations to be inconsistent with model results. As you said, shorter observation periods require larger uncertainties.
Back to your original question:
“What can be observed in the climate over the next few decades that would be inconsistent with climate model projections?”
For the IPCC AR4 ensemble, after correcting for un-modelled external forcing (solar changes, volcanoes, etc) and short-term internal cycles (primarily ENSO), a trend line over 20 years that shows less than ~1.0 degC/century would probably be inconsistent. So would a trend line that shows more than ~3.0 degC/century.
Is that *really* what you’re asking?
May 2nd, 2008 at 8:31 am
“…after correcting for un-modelled external forcing (solar changes, volcanoes, etc) and short-term internal cycles (primarily ENSO)…”
Decisions are being made on the uncorrected version of the ensemble.
As they stood for the IPCC AR4 ensemble, what is the threshold where the trend line would definitely (not just probably) be inconsistent? But that is just the temperature trend line. That is not enough. Don’t forget about the divergence of CO2 emissions that are above Scenario A1. Because of that, we don’t qualify for the other scenarios. Is the temperature already beyond the A1 scenario’s lower error bar? If we are not yet beyond A1’s error margins, then how many more years will it take before we are? That is assuming the temperature is not going to rise until 2015. Once we cross that threshold, would that not prove that scenario to be inconsistent with observations?
John M Reynolds
May 2nd, 2008 at 8:46 am
John M Reynolds:
“Decisions are being made on the uncorrected version of the ensemble.”
The corrections are only for short-term variability. They would have no effect on longer term trends, and that’s what matters. Decisions are *not* being made on 7-years of data or a 7-year prediction. Do you really think decisions pertinent to AGW are being made based on El Nino?
There is virtually no difference between the predictions from the different scenarios at this point. It’s only been 7 years.
The exact trends that would be inconsistent can not be stated without knowing the noise in the signal. The noise determines the uncertainty in the slope of the regression. Based on past observations I believe a discrepancy of 1 degC/century over 20 years would be more than enough to be inconsistent.
What is the point of pinning me down on an exact observation that would be inconsistent? When the data is in, do the analysis and see if it agrees with the models. It’s really that easy.
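(The dependence of trend uncertainty on record length can be put in rough numbers. For independent annual noise of standard deviation sigma, the standard error of an OLS slope is sigma / sqrt(sum((t - tbar)^2)). A sketch, with sigma = 0.1 degC assumed purely for illustration; real annual anomalies are autocorrelated, which widens these intervals further.)

```python
import numpy as np

# How record length shrinks the uncertainty of a fitted trend.
# sigma is an assumed, illustrative noise level; real annual anomalies
# are autocorrelated, so actual uncertainties are somewhat larger.
sigma = 0.1  # degC, assumed independent annual "weather noise"
for n_years in (7, 10, 17, 20, 30):
    t = np.arange(n_years)
    se = sigma / np.sqrt(np.sum((t - t.mean()) ** 2))  # OLS slope std. error
    print(f"{n_years:2d} years: 95% trend uncertainty ~ +/- "
          f"{1.96 * se * 100:.1f} degC/century")
```

(With these assumed numbers a 7-year record cannot distinguish a flat trend from 2 degC/century, while a 20-30 year record can, which is roughly the point at issue in this thread.)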
May 2nd, 2008 at 8:59 am
John V-
Thanks.
“Is that *really* what you’re asking?”
Yes, this is a good example of the sort of reply that I am looking for, but I’m not asking for you to improve the models by accounting for the other factors unaccounted for (that is the job of the IPCC), but to simply take the prediction as given, at least as a starting point (as Lucia L. has done). If I want to evaluate a weather forecast, I take the forecast as given; I do not adjust it to compensate for factors that the NWS may have left out.
The point is not to defend or attack the models, simply to compare prediction with experience.
More broadly, ideally, we’d have a comprehensive list of relevant variables and thresholds for in/consistency with predictions, issued at the same time as the prediction is made. Then there would be no wrangling over what has happened after the data is in.
May 2nd, 2008 at 9:16 am
Sorry, but all occurrences of Scenario A1 in my last comment should have been Scenario A2.
“The exact trends that would be inconsistent can not be stated without knowing the noise in the signal.”
That is the purpose of the error bars. It has indeed only been 7 years, but we could already be beyond the margin of error for Scenario A2. Remember that Scenario A2 turned out to be conservative so far with respect to CO2 emissions. Once we exit those error bars, then the model is falsified. I am simply asking when we will be beyond the lower A2 error bar if the temp does not rise between now and at least 2015. That is important because it would give us a time frame. Boris says it should be 30 years, but 17 may be enough. You are sticking with 20 years. I am interested in when we could exit the error bars. If the temperature drops, then we will exit those error bars sooner.
I don’t know if the faded sections are indeed error bars. Even if they are, I don’t have a close up version that shows 2000 to 2020 for scenario A2. Do you have that graph? If so, can you read the year we will exit Scenario A2’s error bars at the current temp?
Roger mentioned ‘relevant variables.’ How about if CO2 starts to drop due to a cooler ocean absorbing more? Or would that simply drop us to Scenario B1?
John M Reynolds
May 2nd, 2008 at 9:52 am
“No. This is incorrect. The IPCC presents model ensembles because it believes that such collections are more apt to accurately represent trends. The question that I raise is not about a single model or model realization, but about an ensemble of relevant models.”
The IPCC AR4 ensemble projections are meant to represent trends- the internal variability is intentionally smoothed out. Specifically trends for 20 yr periods. Asking what would be inconsistent with those projections means asking “what would be inconsistent with 20 year trends, ignoring internal variability?”.
Is that the question you want answered?
May 2nd, 2008 at 10:04 am
Jon-
“The IPCC AR4 ensemble projections are meant to represent trends- the internal variability is intentionally smoothed out. Specifically trends for 20 yr periods.”
I’d like to see the reference to where the IPCC says that its projections are 20-year trend values. How should one then interpret the model predictions for 2001 as presented by the IPCC, if they are 20-year trend values? I’m not disagreeing with you, I’d just like to see the IPCC reference for this claim, as I am unfamiliar with it.
Thanks.
May 2nd, 2008 at 10:16 am
Roger:
“Internal variability in the model response is reduced by averaging over 20-year time periods. This span is shorter than the traditional 30-year climatological period, in recognition of the transient nature of the simulations, and of the larger size of the ensemble. This analysis focuses on three periods over the coming century: an early-century period 2011 to 2030, a mid-century period 2046 to 2065 and the late-century period 2080 to 2099, all relative to the 1980 to 1999 means.”
IPCC AR4 WG1, Chapter 10, page 762.
May 2nd, 2008 at 10:26 am
Thanks for the reference, and yes that section of the report does refer to 20-year means, starting in 2011 (presumably because the models start in 2000, and 2011 would allow for a centered 20-year mean).
But what we have discussed here previously (along with Lucia L.), and which I recognize you may not be aware of, are IPCC forecasts of trends for the period 2000-present as illustrated in Figure 10.26. I don’t think that the IPCC presentation of trends for the period 2000-2007 is a 20-year mean, given that the forecast starts in 2000.
Any thoughts on how to interpret 10.26?
Lucia L. interpreted them as trends, which have factored out variability, and thus compared to the trend in observations, considering uncertainty. Do you have a different view?
May 2nd, 2008 at 10:34 am
“For the IPCC AR4 ensemble, after correcting for un-modelled external forcing (solar changes, volcanoes, etc) and short-term internal cycles (primarily ENSO), a trend line over 20 years that shows less than ~1.0 degC/century would probably be inconsistent. So would a trend line that shows ~3.0 degC/century.”
If you draw the trend line from 1960, starting when we got good CO2 data collection, the trend in the record is 0.11C/decade (HadCRUT data: a 0.54C rise over 48 years). So it is NOT correct to say that a model with 0.1C/decade warming or less is outside the range; actually that number is close to spot-on. Indeed, any model from 0.0C/decade to 0.2C/decade is in range, but anything above 0.2C/decade is a trendline outside the range.
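For what it’s worth, that arithmetic checks out both the quick way and the least-squares way one would normally run it (the anomaly series below is a synthetic stand-in, not actual HadCRUT data):

```python
import numpy as np

# The quick version: total rise divided by elapsed time.
print(0.54 / 48 * 10)  # ~0.11 C/decade

# The usual version: OLS slope through annual anomalies.
# 'years' and 'anoms' stand in for a real HadCRUT annual series (synthetic here).
years = np.arange(1960, 2008)
anoms = -0.1 + 0.011 * (years - 1960) + np.random.default_rng(0).normal(0, 0.1, years.size)
slope = np.polyfit(years, anoms, 1)[0]
print(f"OLS trend: {slope * 10:.2f} C/decade")
```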
Hansen has predicted 0.3C/decade or more under scenarios with levels of CO2 equal to what has actually happened (scenario A). Hansen predicted a 1C increase by now. The Hansen model has been overestimating measured warming trends by a factor of 2 or more, and is a poor match to the measured temps. See this from 2006:
http://www.climateaudit.org/?p=796
At what point do we say the model is broken since its predictions don’t match reality?
Another simple way to put it is to ask: what climate sensitivity level is consistent or inconsistent with the temperature record thus far? Consider 4 basic models/assumptions/hypotheses:
1) “A doubling of CO2 from 300ppm to 600ppm will increase average temperatures by 1C”
consistent
2) “A doubling of CO2 from 300ppm to 600ppm will increase average temperatures by 2C”
consistent
3) “A doubling of CO2 from 300ppm to 600ppm will increase average temperatures by 3C”
consistent, but above range (some natural variability cooling required to keep consistent)
4) “A doubling of CO2 from 300ppm to 600ppm will increase average temperatures by 4C”
inconsistent/falsified?
The latter is a question, but it seems to be inconsistent … if the natural variability is such that a 0.3C/decade trend is not falsified by an actual 0.11C/decade temperature trend, then a 0.001C/decade AGW trend, i.e., “there is no AGW”, is not falsified either.
It appears that the temperature trend thus far is matching a climate sensitivity value of around 1.6-1.8C increase per doubling of CO2.
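That 1.6-1.8C figure can be reproduced with the standard logarithmic relation ΔT = S · log2(C/C0), under two strong assumptions: the entire observed rise is CO2-forced, and the response is instantaneous (no ocean lag). The CO2 endpoints below are rough Mauna Loa-era values supplied for illustration, not numbers from the comment:

```python
import math

co2_1960, co2_2008 = 315.0, 385.0   # ppm, approximate endpoints (assumption)
delta_t = 0.54                      # C, the HadCRUT rise quoted above

doublings = math.log2(co2_2008 / co2_1960)   # ~0.29 of a CO2 doubling
print(f"implied sensitivity: {delta_t / doublings:.1f} C per doubling")  # ~1.9 C
```

Allowing for ocean thermal lag would push the implied equilibrium sensitivity higher; attributing part of the rise to other forcings would push it lower. That tug-of-war is exactly what the four hypotheses above are arguing over.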
May 2nd, 2008 at 11:28 am
JohnV-
I haven’t ever called the back-of-the-envelope computations I did to estimate the possible effect of solar cycles an “improvement” on the method. Currently, as the temperature trends stand, the IPCC AR4 projections fail simple hypothesis tests based on comparisons to empirical data — and they do so by a considerable margin. That is: the weather data we have had appears consistent with a range of temperature trends at a 95% confidence level. The AR4 projections of 2C/century fall outside those ranges.
However, since weather and climate are not the results of coin flips, I have been interested in which features the IPCC leaves out of its projections that could explain the failure of the temperature to rise since 2001 (my start date).
It turns out that if we assume the nearly deterministic 11-year solar cycle has a fairly strong, detectable, nearly deterministic effect that is perfectly sinusoidal, and that its maximum “hit” just as my analysis period began, that could provide a partial explanation of the recent flat dip. Bear in mind that after accounting for this speculative effect, the central tendency most consistent with the weather we have had over the past 11 years is *still* below the IPCC projections. It’s just not sufficiently low to exclude the possibility that the trend is masked by solar forcing.
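A bounding sketch of the kind described here might look like the following (every series is a synthetic stand-in with assumed numbers; the actual test uses monthly GMST data and its own fitting procedure):

```python
import numpy as np

# Assume an 11-year, perfectly sinusoidal solar signal with 0.1 C trough-to-peak
# amplitude, peaking as the analysis window opens, then compare trends with and
# without subtracting it.
t = np.arange(2001, 2008, 1 / 12.0)                   # monthly axis, 2001-2008
rng = np.random.default_rng(1)
gmst = rng.normal(0.0, 0.1, t.size)                   # flat "weather" stand-in

solar = 0.05 * np.cos(2 * np.pi * (t - 2001) / 11.0)  # +/-0.05 C, max at 2001
raw = np.polyfit(t, gmst, 1)[0] * 10                  # C/decade, unadjusted
adjusted = np.polyfit(t, gmst - solar, 1)[0] * 10     # C/decade, solar removed
print(f"raw: {raw:.3f}  solar-adjusted: {adjusted:.3f} C/decade")
```

Because the assumed solar term declines over the window, subtracting it nudges the fitted trend upward; whether that adjusted number is a *better* estimate is precisely the point under dispute.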
But this doesn’t mean the projections didn’t falsify!
And oddly enough: giving this sort of break to the IPCC projections when testing actually supports Roger’s contention that, as far as modelers are willing to admit publicly, NOTHING can ever be said to falsify IPCC predictions/projections or climate models in general.
Why can NOTHING be said to falsify? Because IF we get a “simple” falsification, using a garden variety test, we are then required to include effects that those making the projections knowingly and intentionally ignored when making them. Not only that, these effects are never discussed when we get a “failure to falsify” result. In some cases, the effects would tip the balance toward “falsify”. (Why, for example, don’t we remove the temperature dip after the volcanic eruption in 1992 when testing the IPCC projections since 1992? The eruption at that point made the measured trend a bit high. Would we remove the temperature dip if it had happened in 2004, making the TAR projections look low?)
This special pleading to create exceptions to support the IPCC projections and “save” them doesn’t look good for the predictive ability of the IPCC as a whole.
As for my calculation, I think the correct interpretation of the back-of-the-envelope calculation is NOT “so, the projection/prediction is OK”. The correct diagnosis is to say: the projection falsified according to the rules of a particular method of testing hypotheses.
However, because it falsified, we search for reasons. It appears that if modelers included this or that effect (which they decided to ignore based on some criterion of their own devising), then maybe the method of developing projections could be corrected.
In other words, the falsification suggests: go back to the blackboard. The fact that solar might be strong enough suggests: maybe you should consider the 11-year solar cycle, and you’ll do better in the future.
(That said: we have been discussing the true estimated effects of the 11-year solar cycle at my blog. It may well be that the “collective mind” of the IPCC, if such a thing can be said to exist, thinks the effect is too small to consider, or that the response is out of phase with the GMST, or any number of things. We don’t know.)
For more on this, (with graphs,) interested readers can visit:
http://rankexploits.com/musings/2008/what-about-the-solar-cycle-yes-john-v-that-could-explain-the-falsification/
John Reynolds: Though I am flattered you think my blog posts would influence modelers, I sincerely doubt any of my posts motivated modelers to develop the programs or run the cases posted in Nature. I’d bet you 100 Quatloos that the submission dates will precede the creation of my blog.
Modelers are trying to come up with better models that capture more physics, and result in better projections.
I think the most recent IPCC documents, and many climate bloggers’ discussions, reveal they have some blind spots. One of these is: they do not recognize what the IPCC documents actually convey to readers. The smooth plots showing projections, with their narrow uncertainty intervals and no lines, bars, or mark-outs saying “Don’t take this graph seriously before 2030”, communicate something modelers claim they don’t mean to say.
That’s a problem for them, and the problem should be corrected when they write the next IPCC document. The idea that poor communication in an IPCC document can be cured by individual bloggers explaining that the graphs don’t mean what they seem to say is inappropriate. Next time, include disclaimers about time frames in the appropriate portions of the document, near the projections. Add them to the graphs.
May 2nd, 2008 at 11:39 am
Roger:
“More broadly, ideally, we’d have a comprehensive list of relevant variables and thresholds for in/consistency with predictions, issued at the same time as the prediction is made. Then there would be no wrangling over what has happened after the data is in.”
I agree.
There are some other points in your reply that I disagree with, but I’ll end on a positive note.
—
John M Reynolds:
“That is the purpose of the error bars. It has indeed only been 7 years, but we could already be beyond the margin of error for Scenario A2.”
We are talking about different things. You are talking about the error bars on the model results. I am talking about the error bars on the linear regression of the observed trend.
So, *if* the 95% error bars on the observed trend are +-1.0 C/century, then observed trends of less than 1.0 C/century or more than 3.0 C/century would be inconsistent with a prediction of 2.0 C/century (at the 95% level). Without knowing the size of the error bars on the *observed* trend, we cannot know the exact observed trend that would be inconsistent.
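For concreteness, a minimal version of that test (the anomaly values are illustrative only, and the white-noise residual assumption understates the true width, since weather noise is autocorrelated):

```python
import numpy as np
from scipy import stats

years = np.arange(2001, 2008)
anoms = np.array([0.40, 0.46, 0.46, 0.44, 0.47, 0.42, 0.40])  # illustrative values

# 95% confidence interval on the OLS slope -- the "error bars on the observed
# trend", distinct from measurement error bars on individual data points.
res = stats.linregress(years, anoms)
half = stats.t.ppf(0.975, years.size - 2) * res.stderr
print(f"observed trend: {res.slope * 100:.1f} +/- {half * 100:.1f} C/century (95%)")
```

A projected 2.0 C/century is then “inconsistent” only if it falls outside that interval.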
—
Patrick:
A few problems:
1. You are comparing a prediction of future warming to past data (try looking at model hindcasts from 1960 and comparing those to observations);
2. Unless there’s some reason for picking 1960 that I’m not aware of, you have cherry-picked your range (try choosing 1975 to 2008 and see if that affects your conclusions);
3. You are mistaken about Hansen’s scenario A — scenario B is much closer to what really happened (right down to a couple of major volcanic eruptions in the 1990s);
4. The ClimateAudit post you link overlays HadCRUT data with a 1961-1990 baseline on GISTEMP predictions with a 1951-1980 baseline. The bias between the red line (GISS) and the blue line (HadCRUT) is obvious. Try estimating a trend through the predictions and through the HadCRUT data — they’re pretty close.
May 2nd, 2008 at 12:28 pm
“The smooth plots showing projections, with their narrow uncertainty intervals and no lines, bars, or mark-outs saying “Don’t take this graph seriously before 2030”, communicate something modelers claim they don’t mean to say.”
I agree that the IPCC will need to do a better job in presenting the uncertainty in the early years of a projection. They assume that people will recall that ensemble means remove unforced variability over the short term. This probably needs to be restated quite strongly in every graph that presents an ensemble mean.
The problem is that Roger and others are still having problems understanding this concept even after it has been explained. So I’m not sure there’s a solution to the confusion that is going on here.
Perhaps an infobox on the differences between forced and unforced variability and short and long timescales would help, but I’m not optimistic.
May 2nd, 2008 at 12:36 pm
“Without knowing the size of the error bars on the *observed* trend, we cannot know the exact observed trend that would be inconsistent.”
Now, that is a whole different ball of wax. I recall seeing a temperature graph with uncertainty zones on either side of the averaged line: large uncertainty in the 1800s, diminishing as we approach the present. That estimate of the error bars could be wrong. If we find there are large error bars on the observed trend upon which the models were built, then the certainty of the models will also be lessened. Unfortunately, the USHCN is only now working on that with its next generation of remote weather stations. It will take years for the temperature record to be verified.
Really, without knowing the size of the error bars on the *observed* trend, the data, and models built upon that data, should not have been used to push political agendas. I have no idea why people would dare to make political decisions based on projections of possibly poor data.
Up until now, it has been taken that the observed data are all right. If you are doubting the data, then another thread would have to be created. If we are still to presume the observed data accurate, then the discussion about thresholds for falsification is valid. As Lucia pointed out through back-of-the-envelope computations, the models have already falsified for at least some scenarios. If you disagree, then I am open to reading your ideas on a different threshold.
“The problem is that Roger and others are still having problems understanding this concept even after it has been explained.”
Come on Boris. Even you suggested that 17 years is possible for falsification of the models. If it can be said that the rest of us are having problems understanding this because we are suggesting that a date earlier than 2030 could be possible, then even you are having problems understanding this concept even after it has been explained.
John M Reynolds
May 2nd, 2008 at 12:40 pm
lucia,
We’ve been back and forth on this at your blog so I won’t bother restating my counter-arguments.
In my opinion, adding the effect of the solar cycle was an improvement. I didn’t intend to ascribe those words to you, particularly if you disagree.
It seems to me that you and I are trying to answer different questions. I want to determine the underlying trend due to AGW, and would like to know how the models could be improved to better predict that trend. I believe you (and Roger) are also concerned with how the trend is communicated and the political and economic implications of the communication.
Based on our different interests, I believe we read IPCC differently. I see the IPCC predictions as being for the AGW trend only. To me, that’s what’s important and that’s what we should be testing.
Since we’re asking different questions, it’s not surprising that we sometimes argue past each other when answering.
May 2nd, 2008 at 12:44 pm
John M Reynolds:
You missed my point. Even if the measured temperature is perfect, there is an uncertainty on the trend. This uncertainty in the trend is distinct from measurement uncertainty. It is a result of the short term weather “noise”.
May 2nd, 2008 at 1:10 pm
Over 5 years, the uncertainty in the trend is quite high. Over 30 years, the uncertainty in the trend is lessened. It has been suggested that 17 years may be long enough to reduce the uncertainty in the trend. You seem to be sticking with 20 years on that front. That is the threshold we are seeking. To say it another way, when will the uncertainty in the trend be reduced enough that observations outside of the projection error bars will falsify the models? The question remains the same. If the globe fails to warm between now and 2015, is that enough time to falsify the IPCC AR4 ensemble projections?
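A quick simulation shows why the record length matters so much here. It assumes white-noise “weather” of 0.1 C around a fixed 0.2 C/decade trend, so the widths are optimistic (real weather noise is autocorrelated):

```python
import numpy as np

rng = np.random.default_rng(2)
for n_years in (5, 17, 30):
    t = np.arange(n_years)
    # Refit the trend to 2000 synthetic records and measure the slope scatter.
    slopes = [np.polyfit(t, 0.02 * t + rng.normal(0, 0.1, n_years), 1)[0]
              for _ in range(2000)]
    print(f"{n_years:2d} yr record: 2-sigma trend width ~ "
          f"{2 * np.std(slopes) * 10:.2f} C/decade")
```

With 5 years the 2-sigma width (roughly 0.6 C/decade here) dwarfs the 0.2 C/decade signal; by 17 years it falls to about 0.1, and by 30 to about 0.04, which is the intuition behind the 17-versus-30-year argument.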
John M Reynolds
May 2nd, 2008 at 1:27 pm
JohnV–
I didn’t mean to suggest that you were speaking for me. I simply mean to inform people that my own interpretation of my calculations is different from yours. (That you and I make different judgements is fine with me, btw.)
On the issue of our respective interests and goals: you are mistaken about mine.
I too wish to determine the underlying trend. I simply disagree that the result of the “solar adjustment” necessarily provides an improved estimate of the trend. I think the real answer is we don’t know which of the two estimates of the trend is best.
As you know from other computations at the blog, some at other blogs, etc., there are plenty of people who believe there is ZERO detectable effect of the 11-year solar cycle on the GMST. There are some physically sound arguments why the measurable effect on GMST might be quite small, and out of phase with the real solar cycle.
If you believe the solar-adjusted value is “better” than the non-solar-adjusted value, that is because you have convinced yourself the 11-year solar cycle *does* have a 0.1 C trough-to-peak mean effect and that it’s more or less in phase with the Total Solar Irradiance.
But it’s not at all clear to me that this is true. It’s also not clear to me that it’s false! That’s why I thought a bounding calculation was necessary, to see whether the existence or non-existence of a 0.1C effect would make any difference.
It would make a difference. But the fact that the 0.1C would make a difference doesn’t tell us anything about the reality of the 0.1C effect. It may or may not exist.
Because I don’t know if the solar connection with GMST anomalies is real or imagined, I don’t know if a solar adjustment results in better estimates of the underlying trend. It may make the estimate worse.
So: we are both interested in the magnitude of the true trend. We just disagree on whether adjusting for solar gives us a better estimate.
With regard to the topic of Roger’s post: that’s falsification. So, in my mind, the normal garden variety test I did without ‘adjusting’ says “falsified”. If the solar influence is real, but was not incorporated into the projections, and the failure to include it was the cause of poor forecasting, the method of forecasting remains falsified.
All the back-of-the-envelope estimate of the solar impact might reveal is a partial explanation of *why* the falsification occurred. It doesn’t reverse it.
May 2nd, 2008 at 2:34 pm
lucia:
“You are mistaken in my interests and goals.”
Ok. Thanks for clearing that up.
“you have convinced yourself the 11 year solar cycles *does* have a 0.1 C trough to peak mean effect and that it’s more or less in phase with the Total Solar Intensity”
That’s close. All published results that I am aware of find a solar cycle effect of 0.06C or larger. 0.1C seems to be the most quoted number. I feel comfortable saying that the actual solar cycle effect is closer to 0.1C than to 0.0C. “More or less in phase” sounds about right.
“So, in my mind, the normal garden variety test I did without ‘adjusting’ says “falsified”. ”
I can accept that a linear trend of 0.2C/decade was falsified over 7 years. I’m not sure of the significance, though. Without correcting for known natural cycles (which can mask or enhance the underlying trend), this falsification says very little about the underlying AGW trend.
May 2nd, 2008 at 2:57 pm
I admit to being quite impressed by the quality of discussion so far. It’s quite refreshing after the cynicism and snarkiness that characterize many unmoderated climate blogs these days.
While I take some issue with Roger’s use of the term cooling in the original formulation of the post (having just finished reading the paper in question), it reflects the overall quality of the press coverage of the article, which has been less than nuanced.
The central argument over what could falsify climate models is far more interesting. I’m not sure if the models are robust enough to support this, but here are my initial thoughts:
Choose a model and look at its hindcasting of 20th century temperatures. Strip out short-term variable forcings such as solar cycles, PDO, and volcanoes to the extent that they are modeled (while keeping long-term trends in TSI, volcanism, aerosols, GHGs, etc.). Then compare the model with only long-term forcings to the actual temperature record over the last century to get a sense of how the level of short-term variable forcings (i.e. “noise”) compares to the prediction for any given year.
Next, compare the level of “noise” in hindcasting to the deviation between projected and realized temperatures over the last 6 years to see if the recent stagnant global mean is particularly anomalous. If the short-term variable forcings in current years are much greater than those seen in the past century relative to the long-term forcings, it suggests either that we are witnessing some short-term forcing not seen at any point in the last century (e.g. a particularly large solar minimum), or that the models of long-term forcings are incorrect.
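A sketch of that test, with entirely hypothetical arrays standing in for model output and observations:

```python
import numpy as np

rng = np.random.default_rng(4)
trend = 0.007 * np.arange(100)            # long-term-forcings-only hindcast
full = trend + rng.normal(0, 0.1, 100)    # hindcast with all forcings

noise = np.std(full - trend)              # the model's own short-term "weather"
recent_gap = np.array([-0.05, -0.10, 0.02, -0.12, -0.08, -0.15])  # obs minus projection

if abs(recent_gap.mean()) > 2 * noise / np.sqrt(recent_gap.size):
    print("recent deviation is anomalous relative to hindcast noise")
else:
    print("recent deviation is within hindcast noise")
```

The sqrt(n) factor treats the six annual gaps as independent, which they are not; a careful version would account for the autocorrelation.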
On an unrelated note, if PDOs play such a large role in climate forcing over a few decades, and if they were one of the main drivers of the stagnant temperatures in the 1950s-1970s, I wonder what implications this would have for the climate forcing estimates for aerosols, which are more commonly blamed for the negative forcings of that period?
May 2nd, 2008 at 3:31 pm
“If it can be said that the rest of us are having problems understanding this because we are suggesting that a date earlier than 2030 could be possible, then even you are having problems understanding this concept even after it has been explained.”
There’s a significant difference between 7 and 17 years as far as determining what’s going on. Lots more time for unforced variability to cancel out. But if you want to go shorter, you need to take John V.’s advice and factor out short-term changes (ENSO, solar, etc.). The problem is, as Lucia notes, it’s hard to know the exact effects of things like the solar cycle, so your error bars would still be quite wide.
May 2nd, 2008 at 3:55 pm
Roger, you are quite right. There is absolutely nothing that could possibly falsify the settled science of catastrophic anthropogenic global warming. The coming disaster of coastal flooding, hurricanes, droughts, floods, desertification, and total loss of the Terran biosphere is already a fait accompli. The question is when? We do not deal in such things as fixed time periods, so you must excuse us for that. But we find that the freedom to predict within an open time frame allows us a great deal more latitude than if we were to confine ourselves to time frames within our lifespan.
I hope that answers your question.
May 2nd, 2008 at 4:46 pm
Few seem willing to acknowledge the more important implications this paper, which is really just pointing out the obvious, has for the calculated sensitivity of climate to anthropogenic effects. Arguing that a ‘lack of warming for 20 years due to natural ocean cycles’ does not dramatically reduce the threat of a man-made global warming crisis is simply ignorant. It is the same as arguing that natural cycles only cause cooling.
The IPCC states that the warming of the late 20th century must be due (in most part) to humans because there is no natural factor that could explain it. Furthermore, it argues that the rate of warming is consistent with their understanding of climate sensitivity due to increasing greenhouse gases. If we accept this paper’s conclusion that there will be no warming over the next 10 years due to ocean cycles offsetting the effects of the increasing greenhouse gases, then we must acknowledge that those same cycles (in their warm phase) contributed to the warming of the late 20th century.
The IPCC argues that the trend from human forcing should be around 0.1-0.2 degrees per decade. If the PDO in its cold phase can cancel out 0.2-0.4 degrees of warming over 20 years (the past 10 + the next 10), then it was likely responsible for the same amount of warming over the last 20 years of the 20th century, when it was in its warm phase. This leaves very little warming for human forcing to explain; it indicates that climate sensitivity to human gas emissions is far less than stated, and it knocks the foundation out of the IPCC argument. Indeed, this paper shows that there is a natural factor that can explain much of the late 20th century warming, and that all moneys spent on mitigating a global warming crisis have been wasted.
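The bookkeeping behind that claim, taken at face value (the forcing figures are the comment’s; the observed-rise number is a rough illustrative assumption):

```python
observed_rise_1980_2000 = 0.35   # C over 20 yr, rough illustrative figure (assumption)
pdo_warm_contribution = 0.30     # C, midpoint of the quoted 0.2-0.4 range

human_share = observed_rise_1980_2000 - pdo_warm_contribution
print(f"human share: {human_share:.2f} C over 20 yr "
      f"= {human_share / 2:.3f} C/decade")   # ~0.025 C/decade
```

That 0.025 C/decade sits far below the 0.1-0.2 C/decade the IPCC attributes to human forcing, which is the “knocks the foundation out” claim; it stands or falls with the symmetry assumption that a warm-phase PDO adds as much as a cold-phase PDO subtracts.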
If one accepts the conclusions of this paper, the models have already been invalidated.
May 3rd, 2008 at 1:56 pm
Wow — I’ve never seen so much (misinformed) discussion about a published paper that apparently no one has actually bothered to read closely.
If you want to know what the paper really says, try reading:
http://climateprogress.org/2008/05/02/nature-article-on-cooling-confuses-revkin-media-deniers-next-decade-may-see-rapid-warming/
It is more accurate to say the Nature study is consistent with the following statements:
* The “coming decade” (2010 to 2020) is poised to be the warmest on record, globally.
* The coming decade is poised to see faster temperature rise than any decade since the authors’ calculations began in 1960.
* The fast warming would likely begin early in the next decade — similar to the 2007 prediction by the Hadley Center in Science (see http://climateprogress.org/2007/08/15/climate-forecast-hot-and-then-very-hot/).
* The mean North American temperature for the decade from 2005 to 2015 is projected to be slightly warmer than the actual average temperature of the decade from 1993 to 2003.
I’m not exactly sure what more evidence people need for human-caused global warming than the much faster than predicted loss of the Arctic ice, the loss of the inland glaciers, the ice sheets shrinking “100 years ahead of schedule”, recent sea level rise faster than the models predicted, and the tropics expanding faster than the models predicted.
The coming decade will, with little doubt, end the doubt of those who can be persuaded by the facts but who refuse to accept a well-verified theory. By then it will be all but too late to stop the catastrophe, but you can all feel good that you didn’t act precipitously to save the next 50 generations from ever-worsening misery.
May 3rd, 2008 at 4:22 pm
Hi Roger,
You write, “Imagine if I had a set of 25 weather forecasts (from various models) for temperatures tomorrow of 50 to 100 degrees. When the temperature turns out to be 63 degrees, I could say that indeed this was observed to be consistent with the models. But as a tool of decision making it is useless.”
I’m surprised you haven’t hammered this important point home more often…and specifically to policy makers and the scientific/policy press, regarding the IPCC’s projections.
For example, the IPCC Third Assessment Report contained the statement, “The globally averaged surface temperature is projected to increase by 1.4 to 5.8°C over the period 1990 to 2100.”
Don’t you agree that this is exactly like your hypothetical weather forecast of 50 to 100 degrees? Doesn’t it make a big difference if the real value is 63 degrees (as in your scenario) or 93 degrees?
Wigley and Raper at least addressed that issue in their July 2001 Science paper, in which they assumed equal probability for all scenarios, and came up with a 50% probability of warming from 1990 to 2100 of 3.06 deg C, and a 90% probability of warming between 1.68 and 4.87 deg C.
I followed that with my own predictions in 2002 and 2005. The 2005 predictions were published in full on my blog in April 2006. My predictions were for a 50% probability of warming from 1990 to 2100 of 1.2 deg C, and a 90% probability of warming between 0.02 and 2.45 deg C.
Don’t you agree that:
1) The IPCC TAR projections simply aren’t useful for policy, since, like your temperature forecast, they are very broad and contain no “most likely” value?
2) It makes a difference to policy whether the Wigley and Raper or Bahner projections are correct (i.e., whether, in the absence of government intervention, there is a 50 percent chance of warming between 1990 and 2100 of 3.06 deg C, or a 50 percent chance warming of 1.2 deg C)?
May 3rd, 2008 at 8:03 pm
“By then it will be all but too late to stop the catastrophe, but you can all feel good that you didn’t act precipitously to save the next 50 generations from ever worsening misery.”
Heh, heh, heh! Sorry, I find it hard to take seriously anyone who makes such statements…and you make the same statement virtually all the time. My guess is I could find the phrase “50 generations” at least a score of times in your book.
So you think you can see 50 generations into the future? By most conventional usage, that’s 1,500 years… or at least 1,000 years.
Let’s hear some of your predictions for…say the next 3-5 generations.
In the year 2000, the world per-capita GDP (purchasing power parity, or PPP), in year-2000 dollars, was about $7,200. What are your predictions for world per-capita GDP (PPP, year-2000 dollars) for:
2020
2040
2060
2080
2100?
What are your 50/50 predictions for world average surface temperature anomaly and satellite lower-tropospheric temperature anomaly, relative to 1990, for those years?
What are your predictions for mean world sea levels relative to 1990 for those years?
What are your predictions for world average life expectancy at birth for those years?
Here are my guesses for those years, 2020, 2040, 2060, 2080, 2100.
World per-capita GDP (PPP, year-2000 dollars): $13,000; $31,000; $130,000; $1,000,000; $10,000,000.
Lower tropospheric temperature (versus 1990): 0.27; 0.46; 0.70; 0.94; 1.20.
World average life expectancy at birth (currently 67): 75, 85, 100, 140, 200.
What are your predictions?
May 3rd, 2008 at 8:29 pm
Oops. I missed sea level rise, relative to 1990 for 2020, 2040, 2060, 2080, and 2100, in meters. (This is assuming humans do nothing about sea level rise, which is very unlikely.)
0.11, 0.20, 0.31, 0.43, 0.56
May 4th, 2008 at 8:12 am
You guys are barking up the wrong tree. It’s already happened. “Global Warming” as observed via satellite is not really global at all. The further north one looks, the more the warming. It’s all addressed by Roger’s father here:
http://climatesci.org/2008/03/25/new-paper-elevates-the-role-of-black-carbon-in-global-warming/
The reason it’s disproven is that no models predicted the cooling, over exactly two solar cycles, shown here:
http://earthobservatory.nasa.gov/Newsroom/NewImages/images.php3?img_id=17257 at the *south* pole, where there *is* no black carbon. This shows that the warming is not “global” at all, ergo CO2 is not to blame (not significantly, anyway).
It appears, since warming has stalled, that the tipping point may have been reached. Of course the tipping point is really that the public is realizing en masse that it’s a scam by wannabe carbon traders and their dupes, and a bunch of alarmists are about to lose their jobs, after dragging the reputation of science through the dirt.
Besides that, even if it *is* happening, how do they know it’s going to be bad? Certainly well over 50 generations reveled in the Holocene Optimum.
It’s certainly no emergency…
May 7th, 2008 at 8:26 pm
@Boris: “what would I need to observe in the next week’s weather that would prove that summer is not approaching.”
I could observe whether or not the sun is rising earlier and setting later, and how high the sun is in the sky at midday. We have a solid scientific understanding of what really causes summer to occur. However, we do not have a solid understanding of whether or not our relatively small contribution of GHGs to earth’s system has any causal influence on the always changing climate. To me the lack of an answer to Roger’s question indicates this deficiency in scientific understanding, which naturally leads one to question the predictions of climate models.
May 14th, 2008 at 11:41 am
“Try running Model E without including GHGs — observations would be very inconsistent with the model results.”
Of course, a model that’s been crafted to reflect the atmosphere as it is becomes inconsistent if you leave out features present in the modeled atmosphere. The model relies on GHGs to be consistent, so of course removing them would make it inconsistent.
What point does that prove? That if there were nothing to absorb outgoing longwave infrared thermal radiation, the atmosphere would be different. So obvious as to be uninteresting and not worth mentioning.
Just remove carbon dioxide and, yes, 9% of the greenhouse effect goes away. In the model. What happens in reality? Maybe water vapor fills in for it, or the lapse rate readjusts, and there is no net change. But it is unphysical to remove all carbon dioxide, so the thought exercise is meaningless and any answers simply conjecture.
May 14th, 2008 at 11:46 am
The yearly GISTEMP anomaly in 2007 was the same as in 1998. Yes, I am cherry-picking the last reported decade, since of course that is what this paper is about. Hello, Earth calling. But in the instrumental temperature history, this figure was exceeded only by the anomaly in 2005. So it has been cooling since the high in 2005 and flat since a decade ago. Yes, the anomaly, not the trend.
Which brings up a question — what does it mean to have a trend (+.7) that is larger than the largest yearly value (+.62)?
Does it not bother anyone that the anomaly range itself is only -.4 to +.62 in almost 13 decades with multiple types of measurement methods over the period?
The only question is: would you yourself (whoever you are) bet that any yearly anomaly will go over +.8 in the next 20 years? I would not, much less try to attribute it to anything except perhaps the measurements themselves.
Mark, my prediction is a 0 rise in the anomaly between now and 2012, + or – 10. If they move the base period in 2011 to 1981-2010 I might have to revisit that.
Which brings up an interesting question: what would the anomaly have to do, and for how long, to flatten or reverse the trend line since 1880? Given the drastic change in the monthly anomaly numbers since 1976, it doesn’t seem possible for the linear yearly trend to do anything but experience a reduction in its rate of increase, since in order to flatten the line it would take a total of -15 to flatten it since 1880 and -4 to flatten it since 1960.
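One way to make that question concrete is to solve for the constant future anomaly that would zero out the OLS slope, under an idealized record (a straight-line 0.7 C rise since 1880, assumed here for illustration):

```python
import numpy as np

years = np.arange(1880, 2008)
anoms = (years - 1880) * (0.7 / 127.0)      # idealized 0.7 C linear rise

for extra in (10, 30, 50):                   # years of hypothetical future data
    future = np.arange(2008, 2008 + extra)
    t = np.concatenate([years, future])
    tbar = t.mean()
    # The OLS slope is zero when the covariance numerator vanishes; solve for
    # the constant future anomaly x that makes it so.
    x = -np.sum((years - tbar) * anoms) / np.sum(future - tbar)
    print(f"{extra:2d} yr at a constant {x:+.2f} C would flatten the 1880 trend")
```

Flattening the line quickly would take a decade of anomalies near -1.2 C, colder than anything in the instrumental record, which supports the point above that only the rate of increase can realistically change.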
John, you asked why 1960. HadCRUT base period starts in 1961. Mauna Loa data starts in 1958. Why not? Pick the last 30 years, 1978-2007 if it makes you feel better, or 1990-2007 if “17 years is long enough”. We all know things change depending on the start, but the trend being up is an obvious fact.
May 14th, 2008 at 12:42 pm
About predictions.
So “global surface temperature may not increase over the next decade”: is that a reduction in the rate of change for the trend, but still with a positive linear slope, or is it a reversal of the trend, now sloping towards the zero baseline? Is a trendline of a yearly anomaly of +.5 that ends up at a yearly anomaly of +.1 over the baseline cooling, or less warming? Or is the yearly anomaly going to be flat, resulting in a flat trendline? Does the anomaly have to be on the negative side of the baseline to be cooling? How you define this is important.
As far as “the next decade”: that is 2011-2020 (the next ten years is 2009-2018), but I understand the article deals with comparisons of 1994-2004 to the time frames 2000-2010 and 2005-2015 as far as the forecasts go. As RC puts it, the first period of ten years (using decade in the sense of any ten-year period) starts in November of that year, so the first period to compare is 1 Nov 1994 - 31 Oct 2004 against 1 Nov 2000 - 31 Oct 2010. I believe RC thinks the paper’s hindcast of the first period is too low and the model off, and therefore the forecasts compared to 94-04 will be too low as well.
The abstract — and if you don’t like the abstract, blame whomever wrote it — specifies “decades” for the predictions:
“Thus these results point towards the possibility of routine decadal climate predictions.”
And about the predictions:
“Using this method, and by considering both internal natural climate variations and projected future anthropogenic forcing, we make the following forecast”
Sounds like ‘our guess is based on these other guesses’, doesn’t it? And notice the interchangeable use of prediction/forecast.
We therefore get:
“North Atlantic SST and European and North American surface temperatures will cool slightly”
Some cooling.
And we also get:
“tropical Pacific SST will remain almost unchanged.”
Some near equilibrium. (‘Almost unchanged’ in which direction, though?)
Then the anomaly overall for 2000-2010 or 2005-2015:
“global surface temperature may not increase over the next decade”
Which also means they may increase over “the next decade”, which is rather meaningless. ‘It might go up and it might go down’ is not a forecast or a prediction; it is a tautology. I think this is basically what Roger means by “everything being consistent”.
But why might it not warm? Because:
“natural climate variations in the North Atlantic and tropical Pacific temporarily offset the projected anthropogenic warming.”
As others have correctly pointed out, if we can offset “warming” naturally (temperature down), the opposite may be possible also, regarding the “projected” AGW.
Now, if one group of water cools and one stays the same, and this offsets the land, the abstract at least certainly seems to be saying:
“Various components used to derive the anomaly will be some mix of warming, cooling, and equilibrium the next decade, and the anomaly will basically be flat.”
So variations in the weather can cause a mix of warming, cooling, and nothing. Certainly this is consistent with the models. If you put in enough models and give yourself two or three standard deviations, that is.
But just wait until the warming comes back with a vengeance the next decade, from 2018 to 2027….