Atlantic SSTs and U.S. Hurricane Damages, Part 5

October 26th, 2006

Posted by: Roger Pielke, Jr.

Widely respected hurricane expert Jim Elsner of FSU has posted a lengthy response to these posts over at his blog. I’d encourage interested readers to have a look. This exchange reminds me of a quote on statistics attributed to John von Neumann, “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” It also serves as a good reminder that Dan Sarewitz’s notion of an “excess of objectivity” is alive and well even when one is dealing with 34 data points. Let me start by acknowledging that Jim and I are going to agree to disagree, and interested readers will have to judge the merits of our arguments for themselves.


Elsner argues that the statistics of loss data are best fit by using a “random sum” that combines the statistics of frequency of losses with those of intensity of losses. This approach was first applied to hurricane damage data by my former colleague at NCAR, Rick Katz, in his 2002 paper “Stochastic modeling of hurricane damage” (in PDF). In my critique of Elsner’s work, I accept that the “random sum” methodology is indeed useful for deconvolving components of a statistical relationship (see, e.g., the acknowledgements in Rick’s paper). As Katz writes in his 2002 paper, “By enabling the variations in total damage to be attributed to either variations in event occurrence or in event damage, the present modeling approach has an inherent advantage over previous analyses.” But such a methodology, or any sophisticated statistics, cannot create a strong relationship in the real world where one does not exist.
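
To make the “random sum” idea concrete, here is a minimal simulation sketch: annual damage is treated as the sum of a random number of per-event damages, so the frequency and intensity components can be modeled separately and then recombined. The Poisson and lognormal choices and all parameter values below are illustrative assumptions on my part, not the distributions actually fit by Katz (2002) or by Elsner.

```python
# A minimal "random sum" (compound distribution) simulation: annual damage
# is the sum of a random number of per-event damages. The Poisson/lognormal
# choices and the parameter values are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def simulate_annual_damage(n_years, mean_storms, log_mu, log_sigma):
    """Simulate annual damage totals by combining frequency and intensity."""
    totals = np.empty(n_years)
    for i in range(n_years):
        n_storms = rng.poisson(mean_storms)                   # frequency component
        damages = rng.lognormal(log_mu, log_sigma, n_storms)  # intensity component
        totals[i] = damages.sum()
    return totals

# Example: 1000 simulated seasons averaging two damaging landfalls per year
sims = simulate_annual_damage(1000, mean_storms=2.0, log_mu=20.0, log_sigma=1.5)
print(sims.mean(), np.median(sims))
```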

I have focused my critique on the intensity part of Elsner’s analysis. With Jim’s help I have successfully replicated this part of their work (Part 4), and I have found that their results are highly unstable — that is, they do not hold for 1950-2004 or for 1950-2006. What they report on large losses has much more to do with one event in 2005 (Katrina) than with any statistical property of the dataset that is stable over time. On his blog Elsner suggests that the period 1950-2005 “is not intended to stand by itself.” That is good, because it does not stand by itself. Based on the lack of a relationship between SSTs and damage in the very subset of the data in which Elsner claims there is a strong relationship, I have concluded that there is little reason to expect that Elsner’s model would allow for an accurate prediction of future damage amounts conditional on SST. A question for Jim — What, for instance, would it have predicted for 2006 before the season?
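
For readers who want to see the kind of stability check described above, here is a rough sketch: refit the same OLS regression of log losses on May-June SST three times, for sample periods ending in 2004, 2005, and 2006, and compare the fits. The file and column names are hypothetical placeholders, not the actual datasets used in these posts.

```python
# A sketch of the stability check: refit the same regression of log(loss)
# on May-June SST for samples ending in 2004, 2005, and 2006.
# "sst_losses.csv" and its column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("sst_losses.csv")  # one row per large-loss storm: year, mj_sst, loss

for end_year in (2004, 2005, 2006):
    sub = df[(df["year"] >= 1950) & (df["year"] <= end_year)]
    fit = sm.OLS(np.log(sub["loss"]), sm.add_constant(sub["mj_sst"])).fit()
    print(end_year, round(fit.rsquared, 3), round(fit.pvalues["mj_sst"], 4))
```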

Let me reassert that reasonable people can disagree on such subjects, as I stated in Part One. Elsner would, in my view, make a much better case for his arguments by focusing his replies on the substantive questions, such as the obvious lack of stability in his intensity model or what physical basis exists for a link between May-June SSTs and damage that occurs within the hurricane season (points which he does not address). He is representing his work as “sound science that will likely have a major impact in the reinsurance industry” and indeed he is selling services to these companies. Thus, he should probably expect that his methods will attract attention (and in my experience in academia, attention means that one’s views are worth considering, which should be taken as a compliment, even if the attention is critical, as is often the case in academic discussions). If Jim is confident in his approach then he should welcome such scrutiny and efforts to clarify his methods and their significance. Bluster and invective are not only weak means of argumentation, but also make for poor marketing tools.

Let me also once again acknowledge that I did make a mistake in an earlier post, which was corrected online immediately when Jim pointed it out. In response to Jim’s complaints about a lack of apology I posted the following on Jim’s blog:

Jim- Let me once again formally apologize for making a mistake. It happens from time-to-time ;-) It has been corrected, as you know. As I wrote immediately after you brought the data issue to my attention in a personal email to you, “Thanks Jim for following up. Thanks for catching the data sort mix up, apologies for that.”

I’ll follow up on the substance next. Thanks!

In closing it is worth remembering the old adage that if one tortures data sufficiently it will confess. In this case, simple and straightforward analyses of the relationship of SST and hurricane damage, without deconvolving intensity or frequency, indicate that there is no relationship. Elsner and Faust both show that if you segregate the data in various ways you can use the influence of 2005 to attain, at best, a very marginal relationship. We disagree on whether such a relationship is indeed marginal and also on the importance of such a relationship. Fair enough. As 2006 provides an excellent example, scientists have no ability to predict hurricane landfalls with accuracy, much less frequency or intensity at landfall, before the season starts. Until such a capability has been demonstrated, efforts to predict damage with accuracy will in my view amount to little more than statistical data mining.

19 Responses to “Atlantic SSTs and U.S. Hurricane Damages, Part 5”

  1. Mark Bahner Says:

    Hi Roger,

    Again, I’m cruising through on lunch break, so I apologize if I’m not following the discussion accurately.

    Also, overall, it seems to me your arguments are very strong. I haven’t read Jim Elsner’s blog yet, but your arguments here seem very solid.

    One question I do have (I can’t see why this would produce a different effect, but I’d still be interested)…

    …have you ever plotted SST at the time of storm and damage from that particular storm? If so, do the results lead to any different conclusion (than that SST doesn’t seem to affect damage)?

    You seem to have correlated storm damage for entire years with SSTs during several months of those years. But what about damage from an individual storm compared to SST at the time of the storm (at least during the month of the storm)?

    Thanks,
    Mark

    P.S. Off topic, but there’s interesting discussion of photovoltaics/fusion/fission at SciAm blog:

    http://blog.sciam.com/index.php?p=302&more=1&c=1&tb=1&pb=1#more302

    I’m about to comment that Muir Matteson’s analysis of the future of photovoltaics is a bit too optimistic. (Even though I did a not-so-very different analysis on your blog just a while ago. ;-) )

  2. Roger Pielke, Jr. Says:

    Mark- Thanks. I haven’t done this, and I’m not even sure how it would be done. The point of this analysis is to answer the question — if SSTs continue to increase in the Atlantic, what does the past relationship of SSTs and US hurricane damages suggest about the future?

    My answer: “nothing.” E. Faust might answer “something significant” and J. Elsner might answer “very little, but something”.

  3. D. F. Linton Says:

    Roger,
    Have you examined how sensitive the result is to dropping years other than 2005? With only 34 points one has to wonder what other confessions the dataset might yield….

  4. Roger Pielke, Jr. Says:

    D.F.-

    Thanks for your comment. Given that the relationship of storms >$250M and MJ SSTs is only marginally significant with 34 data points, it is not surprising that one value can have a large influence.

    A colleague reminds me that I have not paid enough attention to the most basic question that ought to precede the statistics:

    On what basis would one expect MJ SSTs to influence intra-seasonal activity or damage? Why not use ASO SSTs, which describe the seas over which the hurricanes actually form?

    Elsner is presumably looking for predictive ability on a seasonal basis, with MJ serving as a proxy for what is expected later in the season due to the persistence of SSTs (since MJ SSTs are available before the season and ASO SSTs are not known until after it). I am not interested in seasonal prediction but in the relationship of SSTs and damage across years, so it makes more sense for me to use ASO SSTs directly and skip the intra-seasonal step that rests on the assumption that MJ SSTs accurately anticipate ASO SSTs.

    Interestingly, the ASO SSTs, using a threshold of >$250M per storm, also show a similar marginal significance conditional on 2005.

  5. Mark Bahner Says:

    Hi Roger,

    I might be missing something, but it seems pretty straightforward how to do what I’m suggesting:

    1) Your annual values for damage are presumably the sum of values for individual storms. I’m looking for the values for individual storms.

    2) Get the SST at the time of those individual storms (e.g. I thought you presented a table of SSTs on a monthly basis for the years from 1950 to 2005).

    Then plot the damage for the individual storms as a function of the SST at the time of the storm (i.e., the SST during the month of the storm).
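
    A minimal sketch of this per-storm comparison, assuming a per-storm damage table and a monthly SST table are available; the file and column names below are hypothetical placeholders.

```python
# Pair each storm's damage with the SST of its month, then check the
# rank correlation. File and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import spearmanr

storms = pd.read_csv("storm_damage.csv")      # year, month, damage (per storm)
monthly_sst = pd.read_csv("monthly_sst.csv")  # year, month, sst

merged = storms.merge(monthly_sst, on=["year", "month"])
rho, p = spearmanr(merged["sst"], merged["damage"])
print(f"rank correlation = {rho:.2f}, p-value = {p:.4f}")
```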

    It seems to me that such an analysis would be a valid way to answer the question:

    “…if SSTs continue to increase in the Atlantic, what does the past relationship of SSTs and US hurricane damages suggest about the future?”

    Mark

  6. Mark Bahner Says:

    Hi,

    One final thing. Presumably there are more hurricanes in August, September, and October precisely because the SSTs are higher in these months. So it seems to me one could make a rough estimate*** just based on a few seasons of data (i.e., there is no hurricane damage in March, April, May, November, and December, because the SSTs are so low).

    P.S. Rough estimates are all engineers like me ever care about…especially since we can install a storm surge protection system for peanuts and eliminate a lot of damage anyway.

    P.P.S. Not to mention the potential prospect of designing a system that simply reduces hurricane strength…thereby reducing storm surge, wind damage, AND inland flooding. :-)

  7. Jim Elsner Says:

    Hi Roger,

    So you agree that I was correct about your data mixup and I was correct that our method is the more compelling approach for understanding the relationship between losses and SST as acknowledged by your quote from Katz (2002). In fact, our paper is based on his excellent work, which we prominently cite in our paper.

    There is no lack of stability in our modeling results when you condition the number of events on the NAO and the magnitude of loss on SST. Sometimes when the Atlantic is warm and hurricanes are strong, the steering flow keeps them from reaching the US and the steering during the season can be predicted to some degree by preseason values of the NAO; thus a simple regression of annual loss on SST is inadequate for understanding the relationship between loss and SST. The apparent lack of stability you find arises from using the wrong approach on different data.
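
    A rough sketch of the two-part structure described here, with event counts conditioned on the preseason NAO and per-event loss magnitude conditioned on SST, recombined by simulation. This is an illustrative reconstruction under assumed distributions, not the actual Jagger/Elsner model; the file and column names are hypothetical placeholders.

```python
# Illustrative two-part model: Poisson counts conditioned on preseason NAO,
# lognormal per-event losses conditioned on May-June SST, combined by simulation.
# Not the actual Jagger/Elsner model; inputs are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

annual = pd.read_csv("annual.csv")   # year, n_loss_events, nao
events = pd.read_csv("events.csv")   # year, loss, mj_sst

# Frequency: Poisson regression of loss-event counts on preseason NAO
freq = smf.glm("n_loss_events ~ nao", data=annual,
               family=sm.families.Poisson()).fit()

# Intensity: OLS regression of log per-event loss on May-June SST
events["log_loss"] = np.log(events["loss"])
inten = smf.ols("log_loss ~ mj_sst", data=events).fit()

def simulate_season(nao, sst, rng=np.random.default_rng(1)):
    """Draw one season's total loss given preseason NAO and SST values."""
    lam = float(freq.predict(pd.DataFrame({"nao": [nao]})).iloc[0])
    n = rng.poisson(lam)
    mu = float(inten.predict(pd.DataFrame({"mj_sst": [sst]})).iloc[0])
    sigma = np.sqrt(inten.mse_resid)
    return float(np.exp(rng.normal(mu, sigma, n)).sum())
```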

    Concerning over-fitting (your elephant quote), we wrote a well-cited expository article on this topic published in Weather & Forecasting (http://garnet.fsu.edu/~jelsner/www under Research) after discovering this was an issue with much of the early seasonal forecast work of a couple of well-known hurricane scientists. We are certainly not over-fitting. Moreover, we are not using 34 data points; we are using 106.

    To answer a few of your other queries: We do not have a separate intensity model. We use May-Jun values because we want to have a preseason prediction of losses. I would expect a somewhat stronger model if we used August through October SST, but the persistence of SST from spring to summer gives us some preseason predictability. We worked out these relationships for US hurricane counts in a recent paper in the Journal of Climate (http://garnet.fsu.edu/~jelsner/www under Research).

    Concerning the influence one year’s data values have on your (not my) regressions, the real issue is called “leverage.” See our discussion about leverage when the response variable is logged (on our blog).

    Best,
    Jim

  8. Roger Pielke, Jr. Says:

    Jim-

    Thanks for this further input. It seems to me that we are saying much the same thing, but are miscommunicating for whatever reasons.

    I am asking the question: if all you know is SST and losses, does the historical record of SST and losses provide any insight into the future? I do not think that it does, and on this narrow question I think you would agree. Your argument, it seems to me, is that SST is not all that matters — your seasonal model includes SST, but also NAO. Indeed you write, “Sometimes when the Atlantic is warm and hurricanes are strong, the steering flow keeps them from reaching the US and the steering during the season can be predicted to some degree by preseason values of the NAO; thus a simple regression of annual loss on SST is inadequate for understanding the relationship between loss and SST.” From the perspective of my original post on this, my point was not to tease out the role of SST, but to indicate that SST alone is a poor correlate of damage. Should I wish to deconvolve the factors related to damage, a “random sum” approach indeed makes sense. Again, my focus is not on teasing out an effect “all else being equal” but on pointing out that all else is not equal.

    I haven’t looked at the role of NAO, but assuming that you are correct, your perspective is completely consistent with my original post!! SST by itself is not sufficient to say anything meaningful about losses.

    As far as your seasonal predictions, in your paper you represented data from 1950-2005 in a linear regression as evidence that your intensity model is robust. I successfully replicated this analysis, finding the exact same relationship as you reported. I then simply looked at the same relationship ending in 2004 and in 2006, finding considerable differences in the statistics. To me this is clear evidence of a lack of stability in the analysis that your paper represented as a sensitivity test of your model. Dismiss it if you want to, but I simply followed the exact approach described in your paper. In any case my interest in this subject did not arise out of concern about seasonal forecasts, so this is tangential to the main point of my posts. Perhaps that is one reason for our miscommunication.

    I am not particularly optimistic about the usefulness of seasonal hurricane or damage prediction, if nothing else for the reason that the reinsurance and various derivative communities do not seem to have the financial instruments available to finely manage risks, even if the knowledge you and others have generated is 100% accurate and stationary. (Of course, companies have interests in these forecasts nonetheless, and will pay for services independent of usefulness or skill!)

    But that is not my overall concern with these posts. My point is that SSTs, by themselves, do not tell us much about damage. And on this point it seems that we are in complete agreement.

  9. Roger Pielke, Jr. Says:

    Jim-

    On your comment on leverage, point taken. However, do note that the effect of one year is concentrated in the very large losses of 2005, which runs contrary to your concern that with logged data there is a risk of “small events creating a bias relative to the large events.”

    Again, this is tangential to my main focus.

    Thanks!

  10. Jim Elsner Says:

    Roger,

    On your narrow question I would still respectfully disagree with you. If all we know are SST and damages from history, then I would assign a personal probability of 60-70% that over the next 100 years the warm SST years will, on average, have greater annual loss totals compared to the cold SST years.

    Best,
    Jim

  11. Roger Pielke, Jr. Says:

    Jim-

    Thanks. Your personal probability may indeed be correct; it seems completely logical to me. But your paper (Jagger et al.) doesn’t support this assertion (e.g., what if warmer SSTs are routinely accompanied by an unfavorable NAO?) — nor does any other analysis of the historical record that I am aware of.

    But taking your assertion at face value, I’d characterize a 60%-70% probability of a relationship at the 10% variance level as not particularly strong stuff. Knowing far less than you do about hurricanes-climate, I’d put my personal probability at closer to 50%. So we are really not so far off in our views.

    Thanks!

  12. Jim Elsner Says:

    Roger,

    Nope…my paper is perfectly consistent with my assertion. The correlation between tropical SST and NAO is small. So a future where warm SSTs are consistently accompanied by recurving hurricanes, while possible, is inconsistent with the historical record.

    Best,
    Jim

  13. Roger Pielke, Jr. Says:

    Jim-

    Thanks. From here on I’ll choose to emphasize the areas where we are apparently in closer agreement with respect to the focus of my original post.

    So I’ll repeat my earlier conclusion:

    Taking all of your assertions at face value, I’d characterize your 60%-70% probability of a future SST-damage relationship at the <10% variance level as not particularly strong stuff. Knowing far less than you do about hurricanes-climate, I’d put my personal probability at closer to 50%. So we are really not so far off in our views, no matter how aggressively they are presented! ;-)

  14. Jim Elsner Says:

    Roger,
    Could you define “not particularly strong stuff” for me? Also, could you help me with what you mean by “<10% variance level”?
    Thanks.
    Jim

  15. Roger Pielke, Jr. Says:

    Jim-

    Sure. In your paper you write:

    “Using the preseason Atlantic SST, we are able to explain 13% of the variation in the logarithm of loss values exceeding $100 mn using an ordinary least squares regression model. The relationship is positive indicating that warmer Atlantic SSTs are associated with larger losses as expected. The rank correlation between the amount of loss (exceeding $100 mn) and the May-June Atlantic SST is +0.31 (P-value = 0.0086) over all years in the dataset and is +0.37 (P-value = 0.0267) over the shorter 1950–2005 period.”

    You find an r^2 of 0.13. I characterize this as less than 0.10 because of the overwhelming influence of 2005 (hence “<10% variance level”).
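
    A minimal sketch of how the influence of 2005 could be checked, assuming a per-storm table of large normalized losses and May-June SSTs; the file and column names are hypothetical placeholders. The idea is simply to refit the regression with and without the 2005 storms and compare the variance explained.

```python
# Compare variance explained with and without the 2005 storms.
# "large_losses.csv" and its column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("large_losses.csv")  # per-storm rows: year, mj_sst, loss (> $100 mn)

def r_squared(sub):
    """R^2 of an OLS fit of log(loss) on May-June SST."""
    return sm.OLS(np.log(sub["loss"]), sm.add_constant(sub["mj_sst"])).fit().rsquared

print("all years:   ", round(r_squared(df), 3))
print("without 2005:", round(r_squared(df[df["year"] != 2005]), 3))
```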

    “Not particularly strong stuff” means that, given these relationships (both those you assert in your paper and those I replicated), I don’t find your stated 60-70% confidence that future increases in SSTs will be accompanied by increases in damage to be a very strong statement.

    I know that you disagree. Fair enough.

  16. Wolfgang Flamme Says:

    Roger,

    I still haven’t made up my mind yet …

    Moving from below average to above average SSTs, how can we be sure we are not just watching a diminishing influence of the well-known SST temperature threshold for hurricane genesis? Once that threshold is definitely exceeded, there might be no further increase in losses.

    If that is the case, predictions might be correct when SSTs are not far from the middle of the temperature range but fail completely in the high SST range, predicting even higher losses when they are actually rather stable …?

    One question at last: Is it possible to obtain normalized losses and SSTs on a per-event basis?

  17. Roger Pielke, Jr. Says:

    Wolfgang- We expect to release our provisional updated 1900-2005 normalized loss dataset as soon as we submit the accompanying paper for publication. Stay tuned. Thanks.

  18. Mark Bahner Says:

    “One question at last: Is it possible to obtain normalized losses and SSTs on a per-event basis?”

    Yes, let’s (and by “us,” I mean someone who has more time and cares more than I!) obtain the normalized losses and SSTs on a per-event basis, and run that regression.

    Enquiring minds want to know what will happen. (But this enquiring mind doesn’t want to know enough to bother actually trying to get per-event data and coincident SSTs.)
    :-)

  19. Mark Bahner Says:

    Jim Elsner writes, “If all we know are SST and damages from history, then I would assign a personal probability of 60-70% that over the next 100 years the warm SST years will, on average, have greater annual loss totals compared to the cold SST years.”

    How is this statement at all helpful to anyone making policy decisions?

    1) You don’t say whether the warm SST years will become more prevalent, or how much warmer they will be, and

    2) You don’t quantify what “greater annual loss” actually means.

    Roger Pielke Jr.’s data indicate an average annual adjusted loss of $13.5 billion from 1950 to 2006.

    If ALL years were to increase in SST by, for example, 2 degrees Celsius, what would the quantitative impact be? Would average annual losses go from $13.5 billion to $13.6 billion? Or $136 billion? Or some number in between (and if so, what number)?

    Aren’t those quantitative answers important?