State of Florida Rejects RMS Cat Model Approach

May 11th, 2007

Posted by: Roger Pielke, Jr.

According to a press release from RMS, the state of Florida has rejected the company’s risk-assessment methodology, which is based on using an expert elicitation to predict hurricane risk over the next five years. Regular readers may recall that we discussed this issue in depth not long ago. Here is an excerpt from the press release:

During the week of April 23, the Professional Team of the Florida Commission on Hurricane Loss Projection Methodology (FCHLPM) visited the RMS offices to assess the v6.0 RMS U.S. Hurricane Model. The model submitted for review incorporates our standard forward-looking estimates of medium-term hurricane activity over the next five years, which reflect the current prolonged period of increased hurricane frequency in the Atlantic basin. This model, released by RMS in May 2006, is already being used by insurance and reinsurance companies to manage the risk of losses from hurricanes in the United States.

Over the past year, RMS has been in discussions with the FCHLPM regarding use of a new method of estimating future hurricane activity over the next five years, drawing upon the expert opinion of the hurricane research community, rather than relying on a simplistic long-term historical average which does not distinguish between periods of higher and lower hurricane frequency. RMS was optimistic that the certification process would accommodate a more robust approach, so it was disappointed that the Professional Team was “unable to verify” that the company had met certain FCHLPM model standards relating to the use of long-term data for landfalling hurricanes since 1900.

As a result of the Professional Team’s decision, RMS has elected this year to submit a revised version of the model that is based on the long-term average, to satisfy the needs of the FCHLPM.

This is of course the exact same issue that we highlighted over at Climate Feedback, where I wrote, “Effective planning depends on knowing what range of possibilities to expect in the immediate and longer-term future. Use too long a record from the past and you may underestimate trends. Use too short a record and you miss out on longer time-scale variability.”

In their press release, RMS complains correctly that the state of Florida is now likely to underestimate risk:

The long-term historical average significantly underestimates the level of hurricane hazard along the U.S. coast, and there is a consensus among expert hurricane researchers that we will continue to experience elevated frequency for at least the next 10 years. The current standards make it more difficult for insurers and their policy-holders to understand, manage, and reduce hurricane risk effectively.

In its complaint, RMS is absolutely correct. However, the presence of increased risk does not justify using an untested, unproven, and problematic methodology for assessing risk, even if it seems to give the “right” answer.

The state of Florida would be wise to err, in its decision making, on the side of recognizing that the long-term record of hurricane landfalls and impacts is likely to dramatically understate its current risk and exposure. From all accounts, the state appears to be gambling with its hurricane future rather than engaging in robust risk management. For their part, RMS, the rest of the cat model industry, and insurance and reinsurance companies should together carefully consider how best to incorporate rapidly evolving and still-uncertain science into scientifically robust and politically legitimate tools for risk management, and this cannot happen quickly enough.

8 Responses to “State of Florida Rejects RMS Cat Model Approach”

  1. Rich Horton Says:

    I don’t know. When I looked at 100 years’ worth of storm landfalls in Florida, I found a remarkable consistency over time. From 1907-1966 there was an average of 14.67 storms a decade making landfall in Florida. From 1967-2006 there was an average of 14.50 a decade. I found similar numbers for Texas, Louisiana, and North Carolina landfalls as well. Using RMS methodology, I could have cherry-picked any active five-year period from the early part of the 20th century and claimed that it signaled “increased risk for the future.” But, of course, that would have been dead wrong.
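The decade-average comparison can be sketched as follows; the yearly counts below are placeholders chosen only so that the era totals reproduce the quoted averages, not the actual landfall record.

```python
# Sketch: per-decade landfall averages over two eras, as in the comment
# above. The yearly counts are illustrative placeholders, NOT real data.
def decade_average(counts_by_year, start, end):
    """Average storms per decade over the inclusive year range [start, end]."""
    years = range(start, end + 1)
    total = sum(counts_by_year.get(y, 0) for y in years)
    return total / (len(years) / 10.0)

# Placeholder record: 88 landfalls in 1907-1966 and 58 in 1967-2006,
# chosen only so the averages match the figures quoted above.
counts = {y: 1 for y in range(1907, 2007)}
for y in range(1907, 1935):
    counts[y] += 1   # 60 + 28 = 88 storms in the first era
for y in range(1967, 1985):
    counts[y] += 1   # 40 + 18 = 58 storms in the second era

early = decade_average(counts, 1907, 1966)   # ~14.67 per decade
late = decade_average(counts, 1967, 2006)    # 14.5 per decade
```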

    In light of Vecchi and Soden’s paper doesn’t it seem likely that RMS is also dead wrong?

  2. kevin v Says:

    and of course, this did not help in terms of the “scientifically robust and politically legitimate tools” picture: essentially, the state of Florida paid for a modeling effort to say what it wanted said, then had to scramble when the model didn’t say it

  3. Rich Horton Says:

    Just to follow up on what I said earlier:

    From 1906-1910, 12 storms hit Florida.
    From 1910-1914, 4 storms hit Florida.

    From 1933-1937, 16 storms hit Florida.
    From 1938-1942, 5 storms hit Florida.

    From 2002-2006, 14 storms hit Florida.
    From 2007-2011, ??

    Are we not allowed to learn from past experience?

  4. Mark Bahner Says:

    I’ve just received “Useless Arithmetic” and am on the first chapter, about the collapse of cod in the Grand Banks.

    This seems like a similar situation, and a similar one exists with climate change: there aren’t any incentives to make accurate predictions.

    The insurance and re-insurance industries have incentives to overpredict future hurricane damages. The State of Florida and insurance purchasers have incentives to underpredict future hurricane damages.

    It seems to me the solution is similar in all these cases: reward accuracy, rather than rewarding inaccuracy. Two ways that this could be done would be to set up a futures market or a prediction prize fund.

    Here’s how the prediction prize fund might work:

    1) Beginning in 2012 (5 years from now) the State of Florida could award prizes totaling $400,000 each year, for the most accurate predictions of insured losses in Florida over the previous 5 years.

    2) A group of 30 experts would be chosen in 2007. They’d make their predictions for total insured losses each year and total losses for the next 5 years.

    3) In 2012, the expert who was closest to the 5-year total would get $200,000. Second place would get $100,000. Third and fourth would get $25,000. And each person who was closest for the 5 individual years would get $10,000. So that’s $400,000, total.

    4) In 2013, the same prizes would be awarded for predictions made in 2008.

    5) The 15 experts with the worst predictions over the previous 5 years would be dropped from the expert pool and replaced by 15 people who’d made accurate predictions but weren’t part of it. (That is, people could submit predictions in order to be put into the pool of experts. If they made accurate predictions, they’d replace the losers in the existing pool.)

    Such a prize fund would be more likely to produce accurate predictions, because it would reward accuracy, rather than (perversely) rewarding inaccuracy, as the present system does.
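The payout rule in the steps above can be sketched as follows; the expert names and loss figures are hypothetical, and only the prize amounts come from the proposal.

```python
# Sketch of the prize-fund payout rule described above: rank experts by
# absolute error on the 5-year total and pay $200k/$100k/$25k/$25k.
# Expert names and loss figures are hypothetical.
def award_prizes(predictions, actual_total):
    """Return {expert: prize} for the four closest 5-year predictions."""
    prizes = [200_000, 100_000, 25_000, 25_000]
    ranked = sorted(predictions,
                    key=lambda name: abs(predictions[name] - actual_total))
    return dict(zip(ranked, prizes))

# Hypothetical 5-year insured-loss predictions, in $ billions:
preds = {"expert_a": 40.0, "expert_b": 55.0, "expert_c": 48.0,
         "expert_d": 62.0, "expert_e": 46.0}
awards = award_prizes(preds, actual_total=50.0)
# expert_c (error 2) takes first place; expert_e (error 4) second.
```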

    Such a system would be even better for predicting climate change, where it’s obvious that the “experts” are quite blatantly lying.

    P.S. The IPCC Fourth Assessment Report “projections” were/are indeed pseudoscientific nonsense, just as I expected in my blog post. It’s simply illogical and unscientific to set up a system that rewards inaccurate predictions (e.g., the IPCC Assessment Reports) and expect it to produce accurate predictions.

  5. Roger Pielke, Jr. Says:

    Rich- The question is not whether the current period is more active than any period in the past, but whether it is more active than the long-term average. If there is significant variability on long time scales, then the long-term average will leave you underestimating (or overestimating) risk in any particular period. The important question is whether risk management strategies based on a long-term average are robust in a particular period.


  6. Rich Horton Says:

    What you say is probably true, but I tend to doubt we could really show anything without a few thousand more years of data, which we do not have.

    Variability I buy. Variability that points to decade(s) long “trends”, not so much.

    To my mind, the best we can say right now is that things are “suggestive.” For example, if someone wants to claim that the present amount of activity in the Atlantic is suggestive of the more active years of the 1930’s, I’d go along with that. At least that keeps speculation in the “maybe it is, maybe it isn’t” arena. The truth is, even if there are decades-long patterns that might be discernible via our statistical methods, how could we be sure we have seen everything the Atlantic could throw at us in the last 150 years? We can’t. We might be entering a “pattern” that was last seen not in the 1930’s but in the 930’s.

    I don’t mind preparing for worst-case scenarios, and historically speaking, we know a WCS for Florida could mean 20 storms in 5 years. Now, we may actually get only 3, but so what? The WCS won’t change, so it is something Florida should be prepared for over the long term.

  7. DeWitt Payne Says:

    I made a Poisson cusum control chart of all North Atlantic hurricanes from 1979 to 2006 (here: ) that shows an apparent regime shift in 1995, from an average of about 6 hurricanes/year (lambda for the chart) to greater than 8 (defining the Upper Control Limit). The result appears to be significant at better than the 99% confidence level. However, the increase in total activity may not correlate with an increase in landfalling hurricanes. Still, it does indicate that current activity has increased.
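For readers unfamiliar with the technique, a one-sided Poisson CUSUM of this kind can be sketched as follows; the in-control and shifted means (6 and 8 hurricanes/year) mirror the comment above, but the yearly counts are illustrative, not the actual record.

```python
# Sketch of an upper (one-sided) Poisson CUSUM for annual hurricane
# counts. lambda0 = 6 is the in-control mean, lambda1 = 8 the shifted
# mean; the yearly counts below are illustrative, NOT the real record.
import math

def poisson_cusum(counts, lam0, lam1):
    """Upper CUSUM path: S_i = max(0, S_{i-1} + x_i - k), using the
    standard Poisson reference value k = (lam1 - lam0) / ln(lam1/lam0)."""
    k = (lam1 - lam0) / math.log(lam1 / lam0)
    s, path = 0.0, []
    for x in counts:
        s = max(0.0, s + x - k)
        path.append(s)
    return path

# Illustrative counts: a few quiet years, then a run of active years.
counts = [5, 6, 4, 7, 5, 9, 10, 8, 11, 9]
path = poisson_cusum(counts, 6, 8)
# The statistic hovers near zero early and climbs once counts exceed k,
# which is how a sustained regime shift shows up on the chart.
```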

  8. Harry Haymuss Says:

    DeWitt – quite true. See especially Appendix A and the first figures in Appendix B.