Descriptive Decision Studies of the Societal
Impact of Weather and Climate Information

Thomas R. Stewart, Ph.D.
Director for Research
Center for Policy Research
University at Albany
State University of New York
Albany, NY
t.stewart@albany.edu

Introduction

This paper addresses methods for studying one important aspect of the impact of weather on society--the impact of weather and climate information. This topic is important because much atmospheric research and, in particular, the U.S. Weather Research Program, is intended to improve the quality of such information. It is reasonable to expect that improvements in the quality of information will help mitigate some of the adverse impacts of weather and will also help people take better advantage of its positive impacts. These expectations have not been adequately examined, however. More research is needed on how people use weather information, both a) to determine whether the expected benefits of improved information are realized, and b) to discover how best to formulate and disseminate information for maximum benefit.

Two related topics are not directly addressed in this paper. One is the direct effect of weather on behavior and on physical and mental health (e.g., Rotton and Frey, 1985). Another is research on risk communication (e.g., Renn, 1992), which is clearly relevant to the communication of weather-related risks to the public.

Descriptive and prescriptive studies

Studies of the societal impact of weather and climate information must consider -- either explicitly or implicitly -- the decisions made by users of the information. Most such studies involve both description (how users actually decide) and prescription (how they should decide). Both descriptive and prescriptive approaches are based on the belief that the impact of weather and climate information stems primarily from its effects on the decisions of individuals engaged in weather-sensitive activities (McQuigg, 1971), and both approaches require decision making models to estimate those effects. The critical differences between the two approaches are the methods used to develop the decision making models and the criteria employed for evaluating them. Descriptive models are evaluated according to their ability to reproduce the behavior of the decision maker. Prescriptive models are evaluated according to their ability to produce decisions that are optimal according to some normative theory of decision making.

Descriptive and prescriptive studies differ in how the user's decision rules (which comprise a decision model) are obtained. In a prescriptive study, the decision model is based on a normative theory (e.g., decision theory) that specifies an "optimal" set of decision rules (e.g., maximize subjective expected utility). In a descriptive study, the decision model is determined by the user's behavior, and the goal is to obtain an accurate model of the user's actual decision process. As a result, a prescriptive study will yield an estimate of the impact of information for an idealized decision maker who optimizes according to a decision criterion derived from the normative theory, while a descriptive study will yield an estimate of the actual impact of the information on a real decision maker who may or may not use information in an optimal fashion.

In practice, descriptive and prescriptive modelers may bring quite different perspectives, and usually different types of training and experience, to their work. Those perspectives might guide their research in different directions and produce even greater differences in results than the formal differences between their purposes and methods would suggest.

For example, descriptive and prescriptive studies will require different amounts and types of involvement of decision makers. It is possible to do a prescriptive study with no direct decision maker involvement by using secondary data (Sonka et al., 1988). In a typical prescriptive study, one or more decision makers might be interviewed to determine what information is available and to ascertain decision alternatives, constraints, subjective probabilities and utilities and other parameters required by the prescriptive model. A descriptive study requires extensive involvement of several decision makers focusing on the subjective model that they use to process information and make choices.

Differences in the results obtained by the two approaches can be illustrated by comparing the descriptive study of the fruit-frost problem by Stewart, Katz, and Murphy (1984, hereafter referred to as SKM), based on interviews with fruit growers, the local NWS forecast office, and various experts in crop protection, with the prescriptive study of Katz, Murphy, and Winkler (1982, hereafter referred to as KMW), based on a dynamic decision theoretic model.

The $300 million apple crop in the Yakima Valley of Washington State is at risk every year in the early spring, when cold weather can freeze the blossoms and kill them. To save the crop, growers install frost protection systems. For years, they placed rows of oil-burning heaters around the edges of their fields.

If anyone could use a good weather forecast to help them cope with uncertainty, these growers could. If they knew the night was going to be warm, they could let their help go home and sleep peacefully. If they knew the night was going to be cold, they could get their heaters fired up early (a process that might take a couple of hours) and be sure they would still have a crop in the morning. But if the forecast was wrong, they could burn a lot of fuel and lose a lot of sleep unnecessarily, or they could lose their crop as their trees froze while they slept. In the early 1980s, Bud Graves, the chief meteorologist at the National Weather Service Forecast Office in Yakima, was preparing detailed frost forecasts every night during the season. The growers were avid listeners and strong supporters of his work.

Following are some of the differences between the KMW prescriptive analysis of this problem and the SKM descriptive study.

First, the fruit growers' alternatives were defined in different ways. KMW considered two alternatives for each night during the frost season: "protect" or "do not protect." SKM found that the important nightly decision was when to initiate protection and, of less importance, when to stop protection. KMW's choice was limited somewhat by modeling restrictions. Since they were developing a model that was dynamic over the frost season, their model would have become unmanageably complex if dynamic processes during each night were also included.
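The flavor of KMW's binary "protect"/"do not protect" choice can be conveyed with the classic static cost-loss model, a far simpler relative of their dynamic decision-theoretic model. All numbers below are invented for illustration: C is the nightly cost of protection, L is the loss if an unprotected crop freezes, and protection pays exactly when the frost probability exceeds the ratio C/L.

```python
# Classic cost-loss decision rule (an illustration only, not the KMW dynamic model).
# C: cost of running protection for one night; L: loss if the crop freezes unprotected.

def expected_expense(protect: bool, p_frost: float, cost: float, loss: float) -> float:
    """Expected expense of one night's decision, given frost probability p_frost."""
    if protect:
        return cost              # protection assumed fully effective
    return p_frost * loss        # gamble: lose the crop with probability p_frost

def optimal_action(p_frost: float, cost: float, loss: float) -> str:
    """Protect exactly when p_frost exceeds the cost-loss ratio C/L."""
    return "protect" if p_frost > cost / loss else "do not protect"

# Invented numbers: protection costs $2,000 per night, a frozen crop costs $50,000,
# so the decision threshold is C/L = 0.04.
C, L = 2_000.0, 50_000.0
for p in (0.01, 0.04, 0.30):
    print(p, optimal_action(p, C, L))
```

A dynamic model such as KMW's differs by letting each night's decision depend on the remaining season and the crop damage already sustained, which is what made modeling within-night dynamics as well unmanageable.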

Both models recognized that crop protection and minimization of heating costs were the primary goals, but SKM identified another goal often mentioned by the growers -- psychological comfort. Since they faced substantial financial risk during the frost season, growers wanted to have confidence that they could anticipate frost hazard and would be ready to react. Thus, a frost forecast might provide psychological comfort to fruit growers even though their decision to protect their crops might be based on other information. It would be difficult or impossible to assign a monetary value to such a psychological benefit, but that does not make it any less important to the grower. In a prescriptive modeling approach, such psychological benefits could be reflected in the utility function.

The KMW model omitted an item of information that growers use to decide when to protect their crops. Almost all large growers have temperature sensors located in their orchards and frost alarms by their beds that are set to wake them when temperatures approach the critical range. SKM found that the frost alarms initiated a period of vigilance that would result in protective measures (usually wind machines, which had replaced the heaters) being turned on if temperatures dropped close to the critical level. Consequently, the evening frost forecast was not the trigger for protective action, as assumed in the KMW model.

KMW were able to develop a dynamic decision theory model that produced a quantitative estimate of frost forecast value. SKM had to be content with a qualitative description of the process and were unable to estimate a monetary value for the frost forecast. While, in principle, a descriptive model could be used to estimate a monetary value, it has rarely been accomplished. If the objective of a study is to estimate the value of a forecast in monetary units, a combination of prescriptive and descriptive approaches should prove quite valuable, as SKM observe.
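How a prescriptive model turns a forecast into a monetary value can be sketched in the simple cost-loss frame (protect at cost C whenever the acted-on frost probability exceeds C/L; otherwise risk losing L). The forecast's value is the reduction in expected expense relative to acting on climatology alone. Every number below is invented; this is the logic of such calculations, not KMW's actual model or results.

```python
# Expected value of a frost forecast in the cost-loss frame (illustrative numbers).
# Without the forecast, the grower acts on the climatological frost probability every
# night; with it, she acts on the probability the forecast implies for each night.

def night_expense(p_act: float, p_true: float, cost: float, loss: float) -> float:
    """Expense of deciding on probability p_act when the true frost chance is p_true."""
    protect = p_act > cost / loss
    return cost if protect else p_true * loss

C, L = 2_000.0, 50_000.0
climatology = 0.10                          # assumed long-run frost frequency
# Hypothetical two-category forecast: (fraction of nights, frost prob on those nights).
# Note 0.2 * 0.40 + 0.8 * 0.025 = 0.10, consistent with climatology.
forecasts = [(0.20, 0.40), (0.80, 0.025)]

without = sum(freq * night_expense(climatology, p, C, L) for freq, p in forecasts)
with_f = sum(freq * night_expense(p, p, C, L) for freq, p in forecasts)
print("expected savings per night:", without - with_f)
```

With these invented numbers the grower protects every night absent the forecast, while the forecast lets her skip protection on most nights, and the difference in expected expense is the forecast's value per night. A descriptive model would replace the optimal rule in night_expense with the grower's actual rule.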

Finally, it is important to note that most prescriptive studies necessarily involve descriptive elements. For example, both KMW and Mjelde et al. (1989) developed dynamic prescriptive models because they recognized that such models better describe the actual decision processes under study.

Examples of descriptive studies

Previous descriptive studies of the impact of weather and climate information can be classified into the following four types:

1. Case studies

Case studies of the impact of weather and climate information are usually conducted after decisions have been made. For example, Glantz (1982) studied the case of a seasonal water supply forecast for the Yakima Valley that turned out to be inaccurate. Based on a forecast of water availability during the irrigation season of less than half of normal, the Bureau of Reclamation allocated 6% of the normal water allocation to holders of junior water rights and 98% of normal to holders of senior rights. In preparation for the predicted water shortage, some farmers had wells dug, leased water rights from other farmers (at high prices), and even physically moved their crops to the Columbia River basin where water was more plentiful. Cloud seeding activities were carried out in the Cascade mountains in an attempt to increase snowpack. Actual water availability turned out to be much greater than forecast, and many farmers filed legal actions. Claims for forecast-related losses totaled approximately $20 million.

2. User surveys

One approach to studying the impact of information is simply to ask (by interview, mail, or telephone survey) a representative sample of users how they use the information and how it affects them. Some surveys have included questions about whether users are aware of weather services, whether they use them, whether they value them, and how to make the services more valuable. These are essentially marketing studies and may yield useful information for providers of information, such as suggestions for improving services.

3. Interviews and protocol analysis

In these studies, descriptions of users' decision making strategies, or "protocols," are developed from verbal reports obtained from interviews or from analysis of other written materials. The Stewart et al. (1984) study of fruit growers described above is an example of this category.

4. Decision experiments

Sonka et al. (1988) have proposed the use of "decision experiments" to assess the use of climate information and provide a study illustrating this approach. They interviewed two key managers responsible for production planning in a major seed corn producing firm. The participants were given eight climate prediction scenarios (representative of actual growing season climate conditions) and asked what actions they would have taken had this information been available when planning decisions were made for the current season. The results were used to develop a decision model.

Most previous descriptive studies have been either case studies or surveys. Although such studies are useful, they cannot produce adequate models for assessing or predicting the impacts of weather information. Studies involving the detailed interviews, protocol analysis, and decision experiments that are required to develop descriptive models of decision making adequate for assessing the impacts of information are rare.

Although descriptive studies have rarely produced adequate descriptive decision models, they do suggest that decision makers often do not obtain the maximum possible value of information because of a) constraints that deny users the flexibility to respond to the information, b) lack of awareness of the information or of the ability to obtain it, c) misunderstanding or misinterpretation of information, or d) non-use or non-optimal use of information.

Descriptive studies also suggest that, in some situations, the impact of information may be less than expected because of a) availability and use of locally tailored information more appropriate to a specific decision than a weather forecast, or b) low importance of weather information because other non-weather factors are more important.

Descriptive studies also indicate that there are important individual differences in the use of information. Those individual differences translate into differences in the impact of information on different decision makers.

Brief overview of descriptive modeling methods

Any descriptive modeling method requires a detailed and thorough understanding of the task that faces the decision maker. It is as important to understand the decision maker's task, or "environment," as it is to understand how she mentally processes information. After all, the environment defines the problem the person is trying to cope with, and it is where she learned to make judgments and decisions in the first place (Brunswik, 1956; Hammond, 1996; Stewart et al., in press).

Approaches to descriptive modeling of decision processes can be classified into two broad groups: 1) protocol analysis (or process tracing), and 2) judgment analysis. These approaches will be briefly summarized.

Protocol analysis relies on decision makers' verbal descriptions of their reasoning. Protocols are best obtained by asking individuals to "think aloud" while they work through a series of decisions. The protocols are analyzed to develop a model which consists of a number of "if ... then ..." relations that can be diagrammed in the form of a flow chart. Protocol analysis has been used extensively by "knowledge engineers" to develop computer-based expert systems, and it has a long history in psychology as well (Kleinmuntz, 1968; Ericsson and Simon, 1984). Recently, the use of verbal reports to derive an "influence diagram," which could be considered a form of protocol, has become popular in decision analysis (Oliver and Smith, 1990).
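The "if ... then ..." model that results from protocol analysis can be made concrete with a small sketch. The rules below are a hypothetical rendering of the nightly routine SKM describe (frost alarm, then vigilance, then protection); the temperature thresholds are invented, not taken from the study.

```python
# Hypothetical if-then protocol for a grower's nightly routine, loosely patterned on
# the SKM description (frost alarm -> vigilance -> wind machines). Thresholds invented.

ALARM_TEMP = 34.0      # deg F: frost alarm sounds and the grower begins watching
CRITICAL_TEMP = 31.0   # deg F: blossoms at risk; protection must start before this

def nightly_protocol(orchard_temp: float, alarm_armed: bool = True) -> str:
    """Trace one step of the rule-based protocol as a chain of if-then rules."""
    if not alarm_armed:
        return "sleep"                   # no alarm set: evening forecast judged safe
    if orchard_temp > ALARM_TEMP:
        return "sleep"                   # alarm silent
    if orchard_temp > CRITICAL_TEMP + 1.0:
        return "vigilance"               # alarm sounded: watch the thermometer
    return "start wind machines"         # temperature near critical: protect

for t in (40.0, 33.0, 31.5):
    print(t, nightly_protocol(t))
```

Each branch corresponds to one box of the flow chart a protocol analyst would draw; the evening forecast enters only through the decision to arm the alarm, which is the point SKM made against the KMW assumption.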

Other researchers have used methods for analyzing judgment that do not depend on a person's ability to provide accurate verbal descriptions of decision processes. Judgment analysis refers to a class of methods for deriving models of decision making by analyzing a sample of actual judgments or decisions. The decision maker is required to do only what he or she does naturally, that is, to make decisions using familiar materials. The analyst develops a model to describe the inference process that produced the decisions. Descriptions of a theoretical foundation for judgment analysis and descriptions of the method itself can be found in Hammond et al. (1975), Brehmer and Joyce (1988), and Cooksey (1996).

The data required for the development of such a model are a number of cases of a particular type of decision. Each case includes the information used to make the decision and the resulting decision itself. Cases may be obtained in a natural setting (e.g., a fruit grower's decisions to protect crops each night during the frost season) or in controlled settings. In a controlled setting, the decision maker would be asked to make judgments based on a sample of real situations or on hypothetical scenarios designed to represent them.

The items of information that are available to the decision maker, called "cues," are considered independent variables in an analysis, and the decision is the dependent variable. Multiple regression analysis is one statistical technique that has been used with considerable success to fit a model to the data (Stewart, 1988), but a number of other methods have been used as well, including analysis of variance and conjoint measurement. The analysis yields a model of the decision maker that expresses the decision as a mathematical function of the cues and an index of how well the model fits the judgments (e.g., the squared multiple correlation coefficient).
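The regression step can be sketched as follows. The cues and judgments below are synthetic, generated so that a hypothetical judge weights the first cue heavily but responds with some inconsistency; the analysis recovers the cue weights and reports the fit as a squared multiple correlation.

```python
# Judgment analysis sketch: fit a linear model expressing a decision maker's judgments
# as a function of cues, then report the fit (R^2). Data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# 40 hypothetical cases with two cues (e.g., forecast low temperature and dew point).
cues = rng.normal(size=(40, 2))
# Simulated judge: heavy weight on cue 1, light weight on cue 2, plus inconsistency.
judgments = 0.8 * cues[:, 0] + 0.2 * cues[:, 1] + 0.3 * rng.normal(size=40)

X = np.column_stack([np.ones(len(cues)), cues])       # add an intercept column
coef, *_ = np.linalg.lstsq(X, judgments, rcond=None)  # least-squares fit
fitted = X @ coef
r2 = 1 - np.sum((judgments - fitted) ** 2) / np.sum((judgments - judgments.mean()) ** 2)
print("cue weights:", coef[1:], "R^2:", round(float(r2), 3))
```

The estimated weights describe how the judge traded the cues off against each other, and 1 - R^2 indexes the inconsistency (or unmodeled nonlinearity) in the judgments, both central quantities in judgment analysis.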

Judgment analysis offers one major advantage as a tool for developing descriptive models of the use of weather and climate information: It provides a modeling method that does not rely on the decision maker's ability to describe his or her thinking process. This is important because the ability to make decisions is not always accompanied by the ability to describe accurately the process that produced the decisions, particularly when the process contains intuitive elements. Verbal descriptions of reasoning can be incomplete, inaccurate or misleading. Some important aspects of the decision process may not be readily accessible and may be difficult to translate into words. The description obtained may be unduly influenced by the method used to elicit it. Questions posed by the investigator impose a "frame" on the problem. Seemingly irrelevant and inconsequential aspects of problem framing can have a powerful effect on judgment (Tversky and Kahneman, 1981). For these reasons, it is desirable to have a modeling tool that does not depend on the expert's ability to describe the inference process.

Judgment analysis can be used in combination with verbal protocols (Einhorn et al., 1979), so that the investigator can take advantage of the insights provided by both while avoiding some of their pitfalls. If the investigator relies solely on the decision maker's verbal statements, then both the efficiency of model development and the ultimate accuracy of the model will be limited by the decision maker's ability to verbally describe his or her decision process. If, on the other hand, the investigator relies solely on judgment analysis, important characteristics of the decision problem that are not represented in the judgment task may be ignored or misunderstood. Although the two techniques complement each other well, few studies have combined them.

Conclusion

In order to understand fully the social impacts of weather and climate information, both prescriptive and descriptive studies will be necessary. Prescriptive studies provide estimates of the potential impact of information under the assumption that the decision maker follows an optimal strategy. They do not necessarily provide accurate assessments of the actual impact under current decision practices. Descriptive studies estimate the impact of information given current decision making practices but may overlook the possibility of additional benefits resulting from improvement in those practices.

Despite the potential for improved understanding of the impacts of weather information from combined prescriptive/descriptive studies, no such studies have been conducted. A major reason is the lack of descriptive studies that have produced decision models sufficiently well developed to be used to assess the impacts of weather information. Methods for developing such descriptive models exist, and their application to the study of the impact of weather and climate information should be encouraged. Such application should prove useful to the meteorological community by providing a more complete understanding of the factors that determine the impact of information. This understanding could, in turn, be used to help improve the formulation and dissemination of information and to help set priorities for research designed to improve information.

References

Brehmer, B. and Joyce, C. R. B., Eds. (1988). Human Judgment: The Social Judgment Theory View. Amsterdam: North-Holland.

Brunswik, E. (1956). Perception and the representative design of psychological experiments. (2nd ed.). Berkeley: University of California Press.

Cooksey, R. (1996). Judgment analysis: Theory, methods, and applications. New York: Academic Press.

Einhorn, H. J., Kleinmuntz, D. N. and Kleinmuntz, B. (1979). Linear regression and process-tracing models of judgment. Psychological Review, 86, 465-485.

Ericsson, K. A., and Simon, H. A. (1984). Protocol analysis: Verbal reports as data. Cambridge, MA: MIT Press.

Glantz, M. (1982). Consequences and responsibilities in drought forecasting: The case of Yakima, 1977. Water Resources Research, 18, 3-13.

Hammond, K. R. (1996). Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice. New York: Oxford University Press.

Hammond, K. R., Stewart, T. R., Brehmer, B. and Steinman, D. O. (1975). Social judgment theory. In Kaplan, M. F. and Schwartz, S. (Eds.), Human Judgment and Decision Processes. New York: Academic Press, 271-312.

Katz, R. W., Murphy, A. H. and Winkler, R. L. (1982). Assessing the value of frost forecasts to orchardists: A dynamic decision-making approach. Journal of Applied Meteorology, 21, 518-531.

Kleinmuntz, B., Ed. (1968). Formal Representation of Human Judgment. New York: Wiley.

McQuigg, J. D. (1971). Some attempts to estimate the economic response of weather information. Weather, 26, 60-68.

Mjelde, J. W., Dixon, B. L. and Sonka, S. T. (1989). Estimating the value of sequential updating solutions for intrayear crop management. Western Journal of Agricultural Economics, 14, 1-8.

Oliver, R. M. and Smith, J. Q., Eds. (1990). Influence diagrams, belief nets, and decision analysis. New York: Wiley.

Renn, O. (1992). Risk communication: Towards a rational discourse with the public. Journal of Hazardous Materials, 29, 465-519.

Rotton, J., and Frey, J. (1985). Air pollution, weather, and violent crimes: Concomitant time-series analysis of archival data. Journal of Personality and Social Psychology, 49(5), 1207-1220.

Sonka, S. T., Changnon, S. A. and Hofing, S. (1988). Assessing climate information use in agribusiness. Part II: decision experiments to estimate economic value. Journal of Climate, 1, 766-774.

Stewart, T. R. (1988). Judgment analysis: Procedures. In Brehmer, B., and Joyce, C. R. B. (Eds.), Human Judgment: The Social Judgment Theory View. Amsterdam: North-Holland, 41-74.

Stewart, T.R., Katz, R.W. and Murphy, A.H. (1984). Value of weather information: A descriptive study of the fruit frost problem. Bulletin of the American Meteorological Society, 65, 126-137.

Stewart, T. R., Roebber, P. J. and Bosart, L. F. (in press). The importance of the task in analyzing expert judgment. Organizational Behavior and Human Decision Processes.

Tversky, A. and Kahneman, D. (1981). The framing of decisions and the rationality of choice. Science, 211, 453-458.


Societal Aspects of Weather
