Number 14, February 1999
"What's happening on the Societal Aspects of Weather WWW Site."
The 1990s have seen the rise of the catastrophe modeling industry in response to demand, primarily from the insurance industry, for quantification of risk. Decision makers seek from catastrophe models some estimate of the risk that they face due to extreme events like hurricanes or earthquakes. A typical model will incorporate information on weather (e.g., hurricane landfall and windspeed probabilities), insurance (e.g., the value of exposed property), and damage potential (e.g., engineering, building materials, and construction codes). The model uses these data to calculate quantities like probable maximum loss, annual expected loss, and losses due to a specific event. Insured losses are typically much smaller than total economic losses in a catastrophe. Catastrophe models have become fundamental to the existence of financial products such as catastrophe bonds and futures. Even the United States government has begun to develop its own catastrophe models to aid the Federal Emergency Management Agency's response to disasters. Clearly, with so many decision makers wanting to understand risk, the rise of the catastrophe model industry should be applauded. But there is reason for hesitation: no one knows how well the models actually perform.
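The core calculation described above can be sketched in a few lines. This is a deliberately simplified illustration, not any vendor's actual method: the event set, probabilities, damage ratios, and exposure value below are all hypothetical.

```python
# Minimal sketch of the loss quantities a catastrophe model reports.
# All figures are hypothetical, for illustration only.

events = [
    # (annual probability of event, fraction of exposed value destroyed)
    (0.020, 0.30),   # e.g., major hurricane landfall
    (0.100, 0.05),   # e.g., moderate storm
]
exposed_value = 50_000_000  # insured property value in dollars (hypothetical)

# Annual expected loss: sum of probability-weighted single-event losses.
annual_expected_loss = sum(p * ratio * exposed_value for p, ratio in events)

# Probable maximum loss, approximated here as the largest single-event loss.
probable_maximum_loss = max(ratio * exposed_value for _, ratio in events)

print(annual_expected_loss)   # 550000.0
print(probable_maximum_loss)  # 15000000.0
```

Real models replace the two-line event set with thousands of simulated storms or earthquakes and detailed, location-specific damage functions, but the structure of the calculation is the same.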
Evaluation of predictions of any sort can be tricky. It involves more than just comparing the prediction with what actually unfolds. For example, in the late 1800s a scientist predicted days on which tornadoes would or would not occur with 96% accuracy. This seemed like a truly remarkable feat until someone noticed that predicting "no tornadoes" every day would have given a 98.5% accuracy rate! For a prediction to show "skill" it must outperform a simple reference prediction. In weather forecasting the reference is climatology; in economics it is called the naïve forecast; mutual fund managers use the performance of the S&P 500 as a benchmark. While some in the insurance industry have sought to evaluate models against actual events and historical losses, there exists no community-wide benchmark for evaluation, leaving most users in the dark as to how well the models actually predict catastrophe losses. The State of Florida and particular companies have invested significant effort to evaluate the models, but for the most part these evaluations are based on qualitative criteria such as the credentials of the modelers and whether or not the results "look realistic."
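The tornado anecdote reduces to simple arithmetic. The counts below are hypothetical, chosen only to reproduce the 96% versus 98.5% comparison in the story:

```python
# Why raw accuracy misleads for rare events: a trivial "never" forecast
# can outscore a seemingly impressive one. Counts are hypothetical.

days = 1000
tornado_days = 15  # rare event: tornadoes on 1.5% of days

# The forecaster: correct on 960 of 1000 days.
forecaster_accuracy = 960 / days

# The trivial baseline: predict "no tornadoes" every single day.
# It is wrong only on the 15 tornado days.
baseline_accuracy = (days - tornado_days) / days

print(forecaster_accuracy)  # 0.96
print(baseline_accuracy)    # 0.985
```

The forecaster's 96% only looks skillful until it is compared with the 98.5% earned by doing nothing at all, which is exactly why a reference prediction is essential.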
Historically, catastrophe losses have not been particularly amenable to the development of such a benchmark because there is such dramatic change over time in the context in which losses occur. This means that one cannot generate a simple estimate of expected losses based on what has occurred in the past, as actuaries typically do for the insurance industry. Consider that the great Miami hurricane of 1926 caused an inflation-adjusted $100 million in losses. But Miami had only about 100,000 residents at the time. Comparing the losses of 1926 with potential losses of today is like comparing apples and oranges. Even comparing Andrew's losses in 1992 with today's potential losses can mislead. Indeed, underestimates of risk based on improperly aggregated losses over time are one factor that stimulated the rise of the catastrophe model industry.
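One way to make such comparisons meaningful is to normalize a historical loss by the changes in society since it occurred. The multiplicative form below reflects the general spirit of such adjustments; the growth factors are hypothetical placeholders, not actual published values.

```python
# Sketch of normalizing a historical loss for changed societal conditions:
# scale by growth in prices, population, and wealth in the affected area.
# The growth factors below are hypothetical, for illustration only.

def normalize_loss(loss, inflation_factor, population_factor, wealth_factor):
    """Scale a historical loss by multiplicative societal-change factors."""
    return loss * inflation_factor * population_factor * wealth_factor

# The 1926 Miami hurricane: ~$100 million, already inflation-adjusted
# per the text, so the price factor here is 1.0.
loss_1926 = 100_000_000
population_growth = 20.0  # hypothetical ratio: today's population / 1926's
wealth_growth = 2.0       # hypothetical ratio of per-capita wealth

print(normalize_loss(loss_1926, 1.0, population_growth, wealth_growth))
# 4000000000.0 -> the same storm striking today's Miami
```

Under these illustrative factors, a $100 million historical loss becomes a multi-billion-dollar event in today's context, which is why raw historical loss records understate present risk.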
But even with the difficulties associated with placing catastrophe losses on an actuarial basis, it has been done. Travelers Insurance Company for many years adjusted catastrophe losses for changing societal conditions as part of an in-house research capability. More recently, Changnon et al. (1996) and Pielke and Landsea (1998) have sought to adjust crop/property insurance losses and hurricane losses, respectively, for changes in society. Such adjustments, properly done for the insurance industry, could form the basis of a community (i.e., public) benchmark against which to evaluate catastrophe models. A catastrophe model would have skill if it were shown to outperform the benchmark. The degree to which the model outperforms the benchmark would determine its relative skill as compared to other models.
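"Outperform the benchmark" can be made concrete with a standard skill score: the fractional reduction in the model's error relative to the benchmark's. This is one common formulation, offered as an illustration rather than a proposal from the article; the loss figures are hypothetical.

```python
# A minimal skill score: 1 - (model error / benchmark error).
# Positive values mean the model adds skill over the benchmark;
# zero or negative means it does no better. Figures are hypothetical.

def mean_squared_error(predicted, observed):
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)

def skill_score(model_pred, benchmark_pred, observed):
    mse_model = mean_squared_error(model_pred, observed)
    mse_bench = mean_squared_error(benchmark_pred, observed)
    return 1.0 - mse_model / mse_bench

observed  = [10.0, 2.0, 7.0]  # actual annual losses (e.g., $ billions)
model     = [9.0, 3.0, 6.0]   # catastrophe-model predictions
benchmark = [6.0, 6.0, 6.0]   # societally adjusted baseline expectation

print(skill_score(model, benchmark, observed))  # ~0.91: model beats benchmark
```

Publishing such a score for each model against a common, public benchmark would give users exactly the relative-skill comparison the article calls for.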
On the one hand, evaluation of catastrophe models would clearly serve the interests of the users of the models, and it would also benefit the developers: public information on the relative skill of the models would aid in the marketing and pricing of their services. On the other hand, it is also important to recognize that for a subset of users of catastrophe models, the performance of the models is less important than their mere existence. Because the models exist, they allow for the quantification of risk. Because risk can be quantified, financial instruments like bonds and futures can be created and traded in the financial markets. Significant financial returns accrue to the companies that create and manage the financial instruments made possible by the existence of catastrophe models. And for the most part, these are not the same companies that bear the risk of a catastrophic loss. In the war against catastrophe losses, they are making the bullets, so to speak. This is perhaps one reason why there has not been a greater push to evaluate catastrophe models in a public forum.
Given the experience of tens of billions of dollars in losses from Hurricane Andrew in Miami and the Great Hanshin Earthquake in Kobe, it is only prudent to ask about the consequences of once again failing to properly calculate the risks of catastrophic losses. Catastrophe models have provided decision makers with a means to better estimate risk, but at the same time, in catastrophe bonds and other instruments, decision makers have created products whose value depends on accurate estimates of risk. Catastrophe models are here to stay and will likely be used to develop ever more precise predictions of risk (e.g., at the zip code or even household level). Because almost everyone pays taxes or has insurance, it would seem to be in the common interest to know how well the models predict by developing a public approach to the evaluation of catastrophe models, before events show us that we waited too long.
For further reading see the publications of Rade Muslin at: http://www.ffbic.com/actuary/papers/index.html; and particularly his paper on "Issues in the Regulatory Acceptance of Computer Modeling for Property Insurance Ratemaking" (http://www.ffbic.com/actuary/papers/jir.pdf), Journal of Insurance Regulation, Spring, 1997, pp. 342-359. (Adobe Acrobat Reader is required to open the PDF.)
— Roger A. Pielke, Jr.
Long-time readers of the WeatherZine will recall when we first proposed the idea of an Extreme Weather Sourcebook. We are happy to report that the first version is now available for your information and comment.
The site provides quick access to data on the cost of damages from hurricanes, floods, and tornadoes in the United States and its territories. Created at the National Center for Atmospheric Research (NCAR), the Extreme Weather Sourcebook reports decades of information in constant 1997 dollars, simplifying comparisons among extreme-weather impacts and among states or regions. NCAR's primary sponsor is the National Science Foundation. Visitors to the Extreme Weather Sourcebook will find the states and U.S. territories ranked in order of economic losses from hurricanes, floods, tornadoes, and all three events combined. A dollar figure for the average annual cost in each category for each state is also provided. Links take the reader to graphs with more detailed information on cost per year for each state and each hazard. The Sourcebook was partially funded by the U.S. Weather Research Program. The USWRP home page is at: http://uswrp.mmm.ucar.edu/uswrp.html
The U.S. Weather Research Program (USWRP) is an interagency effort supporting the research and technology development needed to improve weather services. The overarching USWRP objective is to improve the specificity, accuracy, and reliability of weather forecasts for disruptive, high-impact weather. The program has established as its initial focus a coordinated effort to determine the "best practicable mix" of observations, data assimilation schemes, and forecast models for operations beyond the year 2000. The National Science Foundation (NSF), National Aeronautics and Space Administration (NASA), National Oceanic and Atmospheric Administration (NOAA), and the Office of Naval Research (ONR) all participate in the USWRP.
The USWRP is currently soliciting research proposals for FY 99. Although the program's principal focus is the physical sciences, behavioral, economic, and societal research will be supported as well.
The six major areas of social science emphasis are:
Social scientists interested in these areas should contact Jeryl Mumpower, National Science Foundation, Room 995, 4201 Wilson Boulevard, Arlington, VA 22230; (703) 306-1757; fax: (703) 306-0485; email: firstname.lastname@example.org. Guidelines for proposal submission are available from http://www.nsf.gov. Proposals must be submitted by May 11, 1999.
The USWRP home page is at: http://uswrp.mmm.ucar.edu/uswrp.html