Catastrophe Models: Boon or Bane?

February 24th, 2005

Posted by: Roger Pielke, Jr.

I just returned from a meeting where I had a chance to discuss the role of “catastrophe models” in insurance and reinsurance. Upon returning I
thought that it might be worth revisiting an essay I wrote six years ago
for a newsletter that we used to publish called the WeatherZine. Here is
the "http://sciencepolicy.colorado.edu/zine/archives/1-29/14.html">essay
in full:

WeatherZine, February, 1999

The 1990s have seen the rise of the catastrophe modeling industry in
response to demand, primarily from the insurance industry, for
quantification of risk. Decision makers seek from catastrophe models some
estimate of the risk that they face due to extreme events like hurricanes
or earthquakes. A typical model will incorporate information on weather
(e.g., hurricane landfall and wind speed probabilities), insurance (e.g.,
the value of exposed property), and damage potential (e.g., engineering,
building materials, construction codes). The model uses these data to
calculate things like probable maximum loss, annual expected loss, and
losses due to a specific event. Insured losses are typically much smaller
than total economic losses in a catastrophe. Catastrophe models have
become fundamental to the existence of financial products such as
catastrophe bonds and futures. Even the United States government has begun
to develop its own catastrophe models to aid the Federal Emergency
Management Agency's response to disasters. Clearly, with so many decision
makers wanting to understand risk, the rise of the catastrophe model
industry should be applauded. But there is reason for hesitation: No one
knows how well the models actually perform.
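To make the loss calculations concrete, consider a minimal sketch in Python of the arithmetic a catastrophe model performs. The event set, annual probabilities, damage ratios, and exposure figure below are all invented for illustration; real models draw on far richer hazard, exposure, and engineering data.

    # A toy "catastrophe model": probability-weighted losses over a small
    # hypothetical event set. All numbers are illustrative placeholders.
    events = [
        {"name": "Cat 3 landfall", "annual_prob": 0.10, "damage_ratio": 0.05},
        {"name": "Cat 4 landfall", "annual_prob": 0.03, "damage_ratio": 0.15},
        {"name": "Cat 5 landfall", "annual_prob": 0.01, "damage_ratio": 0.40},
    ]

    exposed_value = 50e9  # assumed total insured property value, in dollars

    # Annual expected loss: probability-weighted sum of per-event losses.
    annual_expected_loss = sum(
        e["annual_prob"] * e["damage_ratio"] * exposed_value for e in events
    )

    # Probable maximum loss, here taken simply as the worst single event.
    probable_max_loss = max(e["damage_ratio"] * exposed_value for e in events)

    print(f"Annual expected loss: ${annual_expected_loss / 1e9:.2f} billion")
    print(f"Probable maximum loss: ${probable_max_loss / 1e9:.2f} billion")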

Evaluation of predictions of any sort can be tricky. It involves more than
just comparing the prediction with what actually unfolds. For example, in
the late 1800s a scientist predicted days on which tornadoes would or
would not occur with 96% accuracy. This seemed like a truly remarkable
feat until someone noticed that predicting “no tornadoes” every day would
have given a 98.5% accuracy rate! For a prediction to show “skill” it must
outperform a simple baseline prediction. In weather forecasting the
baseline is climatology or persistence; in economics it is the naïve
forecast; mutual fund managers use the performance of the S&P 500 as a
benchmark. While some in the insurance industry have sought
to evaluate models against actual events and historical losses, there
exists no community-wide benchmark for evaluation, leaving most users in
the dark as to how well the models actually predict catastrophe losses.
The State of Florida and particular companies have invested significant
effort to evaluate the models, but for the most part these evaluations are
based on qualitative criteria such as the credentials of the modelers and
whether or not the results “look realistic.”
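The tornado example is easy to reproduce in numbers. In the sketch below the day counts are invented to match the accuracies quoted above; it shows how raw accuracy flatters a forecaster of rare events, which is exactly why a baseline matters.

    # Raw accuracy versus a trivial baseline for a rare event.
    days = 1000
    tornado_days = 15  # tornadoes on only 1.5% of days

    forecaster_correct = 960                 # the scientist's 96% accuracy
    baseline_correct = days - tornado_days   # "no tornadoes" every day

    print(f"Forecaster accuracy:    {forecaster_correct / days:.1%}")
    print(f"'Always none' baseline: {baseline_correct / days:.1%}")
    # 96.0% < 98.5%: relative to the trivial baseline, the forecasts
    # show negative skill despite the impressive-sounding accuracy.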

Historically, catastrophe losses have not been particularly amenable to
the development of such a benchmark because there is such dramatic change
over time in the context in which losses occur. This means that one cannot
generate a simple estimate of expected losses based on what has occurred
in the past, as actuaries typically do for the insurance industry.
Consider that the great Miami hurricane of 1926 caused an
inflation-adjusted $100 million in losses. But Miami had only about
100,000 residents at the time. Comparing the losses of 1926 with potential
losses of today is like comparing apples and oranges. Even comparing
Andrew’s losses in 1992 with today’s potential losses can mislead. Indeed,
underestimation of risk based on improperly aggregating losses over time is
one factor that stimulated the rise of the catastrophe model industry.
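To see what putting 1926 losses and today's on a common footing involves, consider this illustrative sketch of the kind of societal adjustment discussed below; the growth factors are invented placeholders, not estimates.

    # Normalizing a historical loss for societal change, in the spirit of
    # the adjustments discussed in the next paragraph. Factors are invented.
    inflation_adjusted_loss = 100e6  # the 1926 Miami figure quoted above

    population_factor = 20.0  # assumed growth in the exposed population
    wealth_factor = 4.0       # assumed growth in real wealth per capita

    normalized_loss = inflation_adjusted_loss * population_factor * wealth_factor

    print(f"1926 loss normalized to today's society: "
          f"${normalized_loss / 1e9:.1f} billion")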

But even with the difficulties associated with placing catastrophe losses
on an actuarial basis, it has been done. Travelers Insurance Company for
many years adjusted catastrophe losses for changing societal conditions as
part of an in-house research capability. More recently, work by Changnon
et al. (1996) and Pielke and Landsea (1998) has sought to adjust
crop/property insurance losses and hurricane losses, respectively, for
changes in society. Such adjustments, properly done for the insurance
industry, could
form the basis of a community (i.e., public) benchmark against which to
evaluate catastrophe models. A catastrophe model would have skill if it
were shown to outperform the benchmark. The degree to which the model
outperforms the benchmark would determine its relative skill as compared
to other models.
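If such an adjusted-loss benchmark existed, scoring a model against it could follow the usual skill-score convention, sketched below with invented loss figures: a score of 1 is perfect, 0 is no better than the benchmark, and negative is worse.

    # Benchmark-relative skill score using mean squared error (MSE).
    # All loss figures (in $ billions) are invented for illustration.
    actual    = [2.0, 0.5, 8.0, 1.0, 3.5]  # observed annual losses
    benchmark = [3.0, 3.0, 3.0, 3.0, 3.0]  # e.g., adjusted historical mean
    model     = [2.5, 1.0, 6.0, 1.5, 3.0]  # a catastrophe model's predictions

    def mse(pred, obs):
        """Mean squared error of predictions against observations."""
        return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

    skill = 1 - mse(model, actual) / mse(benchmark, actual)
    print(f"Skill relative to benchmark: {skill:.2f}")  # about 0.86 here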

On the one hand, it seems logical that evaluation of catastrophe models
would be in the interests of the users of the models, but it would also
benefit the developers of the models. Public information on the relative
skill of the models would aid developers in marketing and pricing their
services. On the
other hand, it is also important to recognize that for a subset of users
of catastrophe models, the performance of the models is less important
than their mere existence. Because the models exist, they allow for the
quantification of risk. Because risk can be quantified, financial
instruments like bonds and futures can be created and traded in the
financial markets. Significant financial returns result to those companies
that create and manage these financial instruments made possible by the
existence of catastrophe models. And for the most part, these are not the
same companies that bear the risk of a catastrophic loss. In the war
against catastrophe losses, they are making the bullets, so to speak. This
is perhaps one reason why there has not been a greater push to evaluate
catastrophe models in a public forum.

Given the experience of tens of billions of dollars in losses from
Hurricane Andrew in Miami and the Great Hanshin Earthquake in Kobe, it is
only prudent to ask about the consequences of once again failing to
properly calculate the risks of catastrophic losses. Catastrophe models
have provided decision makers with a means to better estimate risk, but at
the same time, with catastrophe bonds and other instruments, decision
makers have created products that depend upon greater accuracy in the
assessment of risk. Catastrophe models are here to stay and will likely be
used to develop ever more precise predictions of risk (e.g., at the zip
code or even household level). Because almost everyone pays taxes or has
insurance,
it would seem to be in the common interest to know how well the models
predict by developing a public approach to the evaluation of catastrophe
models – before events show us that we waited too long.

For further reading see the publications of Rade Musulin at
http://www.ffbic.com/actuary/papers/index.html, and particularly his paper
“Issues in the Regulatory Acceptance of Computer Modeling for Property
Insurance Ratemaking,” Journal of Insurance Regulation, Spring 1997,
pp. 342-359 (PDF).

One Response to “Catastrophe Models: Boon or Bane?”

  1. Crumb Trail Says:

    It Smells True

    This is interesting. Evaluation of predictions of any sort can be tricky. It involves more than just comparing the prediction with what actually unfolds. For example, in the late 1800s a scientist predicted days on which tornadoes would or…