Case Histories
Workshop participants presented nine case histories. Discussants were John Firor and
Karen Litfin.
A summary of each case history follows. Short biographical
statements of each of the participants can be found as an
appendix to this report.
U.S. Climate Research and Assessments
J. Christopher Bernabo
Science and Policy Associates, Inc.
Washington, DC
INTRODUCTION
To examine the interaction of climate research and policy development, we must review the
history of the issue.
Global climate change is a classic example of the mismatch that often occurs between
science and policy on complex and controversial issues. The basis for this mismatch lies in the
differing cultures and objectives of researchers and decision makers. The strong disciplinary
structure of research and the nature of political processes have made for a history of
misunderstandings and an inadequate linking of science to the needs of society.
To better understand the complex interactions of science and policy on this issue, it is
sometimes necessary to use a journalistic approach. Many of the key events are not recorded in
the literature, and we must resort to some oral history. While the research is well
documented in the peer-reviewed literature, only part of the policy history can be pieced together
from records; many key facts are not available in published form.
This is because in the policy arena the incentives are the reverse of those in research:
publishing objective analyses while making policy can mean perishing (memoirs written after
service are common, but usually self-serving). Decision makers mostly record final results,
while the details of the behind-the-scenes development process, and the underlying motives and
actions, go unrecorded and are sometimes confidential.
The blind-men-and-the-elephant syndrome also confounds getting a clear picture of the
issues. Accounts are biased by the experiences and professional perspectives of the different
actors. What little analysis of the climate science-policy interaction is available suffers from
the narrow views of researchers not well versed in the real world of policy development (for
instance Rubin et al. 1992, the most cited article). An exception is the work of Pielke (1995),
which provides an insightful analysis that supports the conclusions presented here.
The author will use some first-hand accounts as well as available documents. An attempt
will be made to transcend narrow disciplinary views and present a broadly integrated picture.
This analysis of the issue is based on experience in both the scientific and policy arenas, but of
course is open to other interpretations.
Flood Prediction:
An Issue Immersed in the Quagmire of
National Flood Mitigation Policy
Stanley A. Changnon
Changnon Climatologist
Mahomet, Illinois
INTRODUCTION
The average annual flood damages and ensuing costs of restoration in the United States rank
higher than those caused by any other natural hazard (Federal Task Force, 1994), and the loss of
human life from floods ranks third, behind heat waves and lightning, among the nation's causes
of death due to weather hazards (Changnon et al., 1996). The seriousness of the flood problem
suggests that improved predictions of floods, to reduce losses of life and property, should rank
high in the nation's flood damage mitigation policy, but, ironically, it does not.
To understand today's policy issues related to floods and their prediction requires a brief
sojourn into the history of the nation's settlement, and the ensuing struggle of humans against the
peril of flooding. Extensive settlement of the nation, as we know it, began at the end of the 18th
Century. The nation's population grew from 7 million in 1800 to 50 million by 1870, and much
of this occurred within the flood-prone Mississippi River basin.
The rapid push into the lands west of the original colonies was largely facilitated by the
major rivers and streams, and settlement brought expansion of farming. Initial areas of
settlement were in the flat, soil-rich flood plains where farming flourished and towns developed.
This pattern of river-oriented settlement led to the development of river ports and the presence of
most U.S. inland cities along the major rivers. The agriculturally oriented settlement practices,
including improved drainage of the prairies, cutting of forests, and shifting of land use from
prairie to farmland, collectively increased flooding frequency and magnitude.
Ever-present flooding became an ever larger problem as a result of the expanding farm
lands, population density, port cities, and river transportation system, the primary means for
shipping goods in and products out of non-coastal areas of the nation. By the middle of the 19th
Century, flooding had become recognized as a key problem, particularly in the Mississippi River
basin. By 1860 flood control had become a national goal. Congress enacted a series of laws from
1851 onwards that established flood control and navigation works that, for the ensuing 50 years,
largely involved levee construction and stream channelization (Morrill, 1897). Later, the
structural efforts to control floods included construction of hundreds of flood control dams and
reservoirs. Farmers along flood-prone rivers formed districts, raised funds, and also built levees.
The flooding problems in the Mississippi River basin also led to the formation of government
efforts in the 1870s to predict major floods along the river's main stem. Flood prediction
represented the nation's first effort to predict a natural hazard.
The levee-only approach for controlling floods, practiced extensively after the 1850s,
began to be challenged in the 20th Century. Flood losses continued to grow, as did hydrologic
understanding of floods, and by 1920 there was growing concern over erosion and sedimentation
due to flooding. This situation began to create a different perspective about flood mitigation
efforts, particularly at the farm level and in upland watersheds. From 1930 until now,
government policies concerning flood control and mitigation have continued to shift as
understanding of the scientific issues and social behavior changed. Efforts to develop non-structural approaches to flood mitigation became the central theme of the past 35 years.
However, flood losses continued to grow, and the federal government's use of post-flood relief
payments has become ever more prevalent in recent years, acting to negate the incentives of the
non-structural approach towards flood mitigation. Flood prediction has become lost in this
quagmire of flood mitigation approaches.
The Asteroid/Comet Impact Hazard
Clark R. Chapman
Southwest Research Institute
Boulder, CO
ABSTRACT
An impact by an asteroid or comet larger than 1 km in diameter (30,000 Megaton energy) occurs
about every 100,000 years, or 1 chance in a thousand per century. Objects of this size could
cause serious regional disasters (e.g. tsunami) and objects only slightly larger would have global
environmental consequences (e.g. severe ozone loss, injections of water and dust into the
stratosphere, wildfires) that might threaten the future of civilization as we know it. (Smaller
impacts can create damage similar to other major natural disasters, but they probably account for
<0.1% of such disasters.) This "impact threat" was virtually unknown until the past two decades.
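As a back-of-the-envelope check, assuming large impacts can be treated as a Poisson process with the stated mean recurrence interval:
\[
\lambda = \frac{1}{100{,}000\ \mathrm{yr}}, \qquad
P(\text{at least one impact in a century}) = 1 - e^{-100\lambda} = 1 - e^{-0.001} \approx 0.001,
\]
or roughly 1 chance in a thousand per century, consistent with the stated frequency.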
This case study reviews the history of how this "new" hazard came to be "discovered" by
the scientific community and about how knowledge of the hazard has spread to the general
public in the last few years. I also review the modest efforts by Congress, U.S. governmental
agencies, and other groups (national, international, military, and private) to deal with the
impact hazard. The unusual nature of the hazard (enormous
consequence, extremely low probability of occurring in our lifetimes) has presented difficulties
in getting it considered along with other natural hazards and by agencies with responsibility for
mitigation. It now appears likely that the discovery rate of Earth-approaching objects will
increase dramatically in the next decade, raising one practical issue that has not been addressed:
how to communicate with officials and the public concerning discoveries of objects that may
impact the Earth in the near future.
Oil and Gas Resources:
Resources Appraisal at The
U.S. Geological Survey
Donald L. Gautier
U.S. Geological Survey
Menlo Park, California
INTRODUCTION
The OPEC oil embargo of 1973 with its attendant lines of cars waiting for gasoline at service
stations, sharply rising prices at the pump, and the perceived vulnerability of the United States
brought national attention to the issue of oil and gas resources.
Since the early 1950s M. King Hubbert, then of Shell Oil, had been predicting the decline
of United States oil production. Hubbert had correctly foretold the year of maximum oil
production (1971) and had explained to everyone who would listen that the passing of the
maximum signaled the beginning of a relentless decline from which United States petroleum
production would never recover. To many, the logistic equations indicated more than just falling
oil production. Recognizing the critical role played by petroleum in western society, they saw in
Hubbert's bell-shaped curves a bleak future in which industrial civilization would shrivel.
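The bell-shaped curves arise from the logistic growth model underlying Hubbert-style analyses; written schematically (the symbols here are generic, not Hubbert's own notation):
\[
Q(t) = \frac{Q_\infty}{1 + a\,e^{-bt}}, \qquad
\frac{dQ}{dt} = b\,Q\!\left(1 - \frac{Q}{Q_\infty}\right),
\]
where \(Q(t)\) is cumulative production, \(Q_\infty\) is the ultimate recoverable resource, and the production rate \(dQ/dt\) traces a symmetric bell-shaped curve that peaks once half of \(Q_\infty\) has been produced.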
By the early 1970s Hubbert had come to work for the U.S. Geological Survey (USGS, the
Survey), where he was expected to carry out a program of research following on his earlier
studies at Shell. This he did, releasing the results in a series of publications (Hubbert, 1972;
1974; 1979). At the same time, however, the Director of the USGS, Vincent McKelvey, had
been espousing a radically different view of the world (McKelvey, 1968; 1972; 1984). Whereas
Hubbert analyzed the data of oil and gas discovery, development, and production from a
statistical and mathematical perspective, McKelvey envisioned a world where the principal
resource was human ingenuity. McKelvey would probably have agreed when the oil finder W.E.
Pratt said, "Oil must be found first of all in our minds."
To McKelvey, resources were widely distributed within the crust of the earth, most of
which had, of course, hardly been touched by the drill. The largely undrilled areas of the world,
the remote spots, the deep basins, the offshore areas, could naturally be expected to yield sizes
and numbers of fields similar to those already discovered. Thus the Director of the Survey went
on record with the view that at least two hundred and possibly more than 450 billion barrels of
oil (BBO) remained to be discovered and developed within the confines of the United States.
This, as it turned out, was the highest published estimate to date.
Meanwhile, just down the hall Hubbert, with his (to McKelvey) maddening reliance on
data, continued to confidently predict the decline and fall of United States oil production.
Hubbert's predictions concerning the future discovery of new oil and gas accumulations differed
from those of McKelvey by more than an order of magnitude. He predicted that no more than 55
BBO remained to be found in the U.S. McKelvey pressured Hubbert not to publish his work
through official USGS channels, and as a consequence most of the Hubbert works of this period
were published through Congress, or through the National Academy of Sciences, of which he was
a member. McKelvey, who was never elected to the Academy, continued to provide his advice
to the policy makers in the Department of the Interior and the Office of the President. Ultimately
the politics of resource assessment probably cost McKelvey his job, as he continued to release
his rosy views of the country's resource future in the face of a national mood that was decidedly
pessimistic with respect to oil and gas resources. His optimism simply did not fit with the
national policy being promulgated that included strict controls on natural gas pricing and
production, and hefty subsidies for development of alternative fuels and non-hydrocarbon energy
sources.
The Survey was thus in the awkward situation of having authored (all publications of
USGS employees are official statements) within the space of a year or two both one of the most
pessimistic views of the oil and gas future of the U.S. and one of the most optimistic views. In
an era when developing new government programs was not a problem, when money for research
was relatively abundant, and where such a clearly gaping hole in knowledge existed, it was only
natural that the USGS be provided with funds necessary to determine the truth regarding the
matter of the oil and gas resources of the United States.
Acid Rain and Predictive Modeling:
The "Problematics" of Disciplinary Science
Charles Herrick
Princeton Economic Research
Rockville, Maryland
ABSTRACT
This essay consists of three sections. Section I, entitled "NAPAP and Predictive Modeling,"
provides background on the National Acid Precipitation Assessment Program (NAPAP) and
summarizes the Program's models and model-based assessment activities. Section II, entitled "The
Problematics of Science-Based Policy Analysis," combines constructionist theory and behavioral
observation to articulate key difficulties associated with the use of scientific information in a
policy context. Section III, entitled "Predictive Modeling and Policy Applications," provides
answers to the questions posed to participants in this forum.
Case History: Short-Term Weather
William H. Hooke
U.S. Weather Research Program
NOAA, US Department of Commerce
Silver Spring, MD
In the realm of Earth Sciences prediction problems considered by this workshop, short-term
weather prediction offers some interesting and unique attributes. First of all, there is the sheer
number of short-term weather predictions issued. Within the United States alone, over 100 NWS
offices produce for public consumption a range of hourly products day in and day out, totaling
some 24,000 predictions per day or about ten million predictions each year.
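The arithmetic is straightforward:
\[
24{,}000\ \tfrac{\text{predictions}}{\text{day}} \times 365\ \tfrac{\text{days}}{\text{yr}} \approx 8.8\times 10^{6}\ \tfrac{\text{predictions}}{\text{yr}},
\]
on the order of ten million predictions per year, with each of the roughly 100 offices contributing a few hundred products per day.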
Second, the expected improvement in short-term weather prediction is clearly linearly
related to research funding level. There is a long track record of forecast improvements, and
there is a high degree of community consensus on how to achieve continuing forecast
improvements in the near term. By contrast, in an area of application such as earthquake
prediction this linearity is not nearly so obvious, nor is there necessarily agreement on how best
to proceed.
Moreover, given the large number of short-term weather predictions and societal contexts
available for study, the opportunities for evaluating the societal benefits of improved forecasts
are diverse, plentiful, and frequent. By contrast, the slow cycle rate for events such as major
asteroid impacts and global change makes meaningful evaluation difficult.
While certain of the forecasts addressed by the workshop are somewhat focused in their
societal impacts, the policy implications of improved short-term weather forecasts are woven so
finely through the fabric of society that the policy objectives may be difficult to characterize
succinctly.
Improved short-term weather prediction would impact a number of other areas of
workshop focus. For example, improved prediction of extreme events would also improve our
understanding of the link between global change and natural extremes, our ability to make
seasonal and interannual forecasts of hurricane tracks, frequency, and intensity, and our policy
formulation with respect to acid rain and floods.
Furthermore, planning for a single major program, and its changes, can be tracked for two
decades. Historically, three (four) phases merit consideration:
1980-1989. The National Stormscale Operational and Research Meteorology (STORM)
Program.
1989-1992. The U.S. Weather Research Program (USWRP-A).
1993-present. USWRP-B.
(1995-present. The North American Observing System [NAOS].)
Short-term weather prediction research has had an extraordinary history over the past 10-20 years. On the one hand, it could be argued that despite tremendous effort throughout that
period, the atmospheric community has failed in efforts to sell a mega-giga-thrust to the nation.
On the other hand, it could be equally well argued that the period has been one of extraordinary
success/accomplishment. In the past two decades, the National Weather Service and cooperating
agencies have accomplished a $5b NWS Modernization and Associated Restructuring. There
have been tremendous improvements in short-term weather forecasting/warning to the public. In
response, both the public and private sectors are beginning to make far more aggressive use of
such predictions in all sorts of economic decisions. Coordination between weather researchers
and operational meteorologists is expanding in scope.
Given the confluence of the attributes listed above, it is not surprising that short-term
weather prediction and its policy linkages have been actively studied, e.g., by Pielke and Glantz
(1995), as well as others. It also makes an interesting and illuminating addition to the spectrum
of cases encompassed in the workshop.
Misuse of Water Quality Predictions
In Mining Impact Studies
Robert E. Moran
Moran and Associates
Hydrogeology/Geochemistry
Lookout Mountain, CO
INTRODUCTION
If one is going to commercially mine gold in the western United States, the operation is likely to
be on federally managed land. Such federal lands comprise about 50 percent of the eleven
western states and 90 percent of Alaska. Most such operations are huge open pit mines, often
more than 1000 feet deep, and may be nearly a mile wide and more than a mile in length. The
management agency, in this case the U.S. Bureau of Land Management (BLM), will oversee the
permitting and operation processes with the intent of minimizing future impacts to the site and its
resources. However, the construction of such huge structures inevitably involves moving and
exposing massive volumes of waste rock, and mining hundreds of feet below the water table.
Once mining ceases and the dewatering pumps are shut off, a lake will form within the excavated
hole. Pits of this scale at gold sites were first constructed in the late 1980s. Thus, we have no
long-term information on the chemistry of such pit waters; these pits are still being excavated
and, in most cases, the lakes have yet to form.
How can the BLM assure the general public that site surface and ground water quality
will not be degraded as a result of these activities? Obviously, in fact, they cannot! But it has
not traditionally been acceptable for the BLM to tell citizens that they are uncertain about future
impacts. As a result, the BLM approach usually requires the prospective mining company to
present specific predictions of future water quality in the environmental impact studies (EIS)
prepared for public review. Unfortunately, the majority of the mining EISs I have seen tend to
anticipate few, if any, significant water quality problems. In those instances where we now have
some real-world data, it is clear that numerous unforeseen problems are beginning to surface.
Clearly there has been a tendency to predict overly optimistic scenarios. Most of the scientists
and engineers I deal with in both the public and private sectors contend that "better science"
would solve the problem. To some extent the water quality predictive technology is still in its
infancy, but in the paper that follows, I wish to present the view that the fault lies more with the
unreasonable economic and political pressures placed on the technical consultants and
government managers, which then lead to the misuse of predictive model results.
The Issuance of Earthquake "Predictions":
Scientific Approaches and Public Policy Strategies
Joanne M. Nigg
Disaster Research Center
University of Delaware
Newark, DE
ABSTRACT
The effectiveness of earthquake prediction as a tool for reducing earthquake impacts depends, in
part, on developing community response plans that can be implemented when predictions are
issued. The overarching policy issue -- how to lessen earthquake losses to the built environment
and social systems by disseminating forewarnings of future damaging earthquake events -- has
continued to be the focus of governmental efforts to deal with scientific forecasts, but the specific
strategies considered have varied, often due to changes in scientific approaches to prediction.
This case study will trace the interwoven strands of scientific approaches and policy
responses to earthquake predictions during the past 25 years, beginning in the early 1970s until
today. The state of California has been the focus for concentrated research -- in both earth
science and social science -- on earthquake prediction during this period; however, federal policy
has had an important role in identifying priorities for both scientists and state and local
government officials with respect to the manner in which earthquake predictions would impact
upon society.
This historical analysis illustrates that scientific approaches and strategies to predict or
forecast damaging earthquakes have changed considerably over time, partly due to scientific
inability to develop theoretical advances in short- and intermediate-term earthquake prediction.
In order to justify continued funding within the NEHRP program, alternative ways of projecting
future events have been developed, e.g., long-term forecasts based on probabilities of fault
segment movement, and an "early warning" system following a large, distant seismic event.
Public policy has primarily been reactive to the promises of prediction capabilities, trying
to develop prediction response plans and identify better ways of communicating earthquake
threat information to the general public. However, as the scientific strategies have changed to
emphasize long-term forecasts and instantaneous warnings, it seems that public policy concerns
have either waned (in the former case) or have yet to develop (in the latter). While we should
not expect science to develop in an orderly or linear fashion, the extreme variations in the
approaches taken in the earthquake prediction area have resulted in reactive and short-lived
public policy responses that do not appear to have had any long-term programmatic components.
Predicting the Behavior of Beaches:
Alternative Models
Orrin H. Pilkey
School of the Environment
Division of Earth and Ocean Sciences
Duke University
ABSTRACT
In the USA, mathematical models are heavily relied upon in the design of beach replenishment
and coastal engineering projects. Yet there is a great discrepancy between the predicted beach
behavior produced by the models and the reality of actual beach behavior. Some of the problem
is rooted in politics, but more important is the unreality of analytical and numerical models used
in the design process. The typical assumptions used to simplify the model equations are often
highly questionable (e.g., the existence of closure depth), processes are generalized and
incomplete (ignoring seaward-flowing bottom currents), models are assumed to apply to all
beaches (a shoreface with outcropping rock is treated no differently than a shoreface covered
with unconsolidated sand), and adequate real-world data (e.g., wave gauge information) is
generally lacking. In addition, the model approach used in USA coastal engineering design is
non-probabilistic -- in effect, storms are considered to be unpredictable accidents. They are not
directly accounted for in most models. The track record of model use is poor, and we
recommend that models should be shelved for real-world applications while recognizing their
potential usefulness in basic coastal science.
INTRODUCTION
Mathematical models to predict beach behavior are used widely in coastal engineering. Some of
the uses include prediction of shoreline retreat rates, prediction of future shoreline positions, and
prediction of the impact of shoreline armoring
on the beach. By far the most important use, however, is predicting the lifespan (and hence
costs) of nourished beaches. Accurate predictions are very important in the societal process of
weighing the response alternatives to shoreline retreat. The term erosion, which is a poor
descriptor of shoreline retreat, is deeply ingrained in real-world usage and will be used in this
report.
Perhaps 80% of the U.S. shoreline is eroding. At the same time, there is a huge
population shift to the shoreline. By the year 2020, 80% of the U.S. population will live within
50 miles of a shoreline, including the Great Lakes. Beachfront property, generally the most
dangerous property in the coastal zone, is particularly highly valued. The long-term prognosis
for the shoreline erosion "problem" is that, because of sea level rise and especially because of the
negative impacts of man's activities on sand supply, it will be increasingly severe. Although a
number of states have instituted secondary controls on beachfront development (e.g.,
construction setback lines), no state government has instituted a long-term solution to erosion.
Setback lines only put off the problem to the next generation.
The basic difficulty in responding to the shoreline erosion problem is that two
conflicting societal priorities are involved. One priority is the preservation of property adjacent
to the shoreline. Beachfront property owners tend to be influential people with political clout.
The second priority is preservation of the recreational beach. The beach is used and valued by
far more people than the number of beachfront property owners.
There are three ways that our society can "solve" the erosion problem:
- Hard stabilization, which is any way of holding the shoreline in place using hard,
immovable objects, usually seawalls, groins, or offshore breakwaters.
- Soft stabilization, or the emplacement of new sand, called beach nourishment.
- Relocation or abandonment of buildings.
Beach nourishment is a commonly chosen erosion-response alternative, especially on the
U.S. East and Gulf coasts. The U.S. spends approximately $100m annually on nourishment, not
counting the Pacific coast. Some states (NC, SC, RI, and ME) have outlawed hard stabilization,
making the soft solution of nourishment very attractive. Engineering design of replenished
beaches involves predicting the rate of loss of the artificial beach and calculation of the volumes
of sand that will be required to hold a beach of a certain dimension in place for a specified length
of time.
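To illustrate the character of such a design calculation, a minimal hypothetical sketch follows; the simple prism geometry, the assumption of a constant annual loss rate, and every parameter value are invented for illustration and do not represent any actual agency design procedure.

```python
# Hypothetical, simplified nourishment calculation -- illustrative only.
# Real designs rely on site-specific profiles, wave climate, and sediment data.

def nourishment_volume(beach_length_m, added_width_m, berm_height_m, closure_depth_m):
    """Approximate fill volume assuming the added width translates the whole
    active profile seaward (a textbook-style simplification)."""
    vertical_extent = berm_height_m + closure_depth_m
    return beach_length_m * added_width_m * vertical_extent  # cubic meters


def years_until_renourishment(initial_volume_m3, annual_loss_m3, trigger_fraction=0.5):
    """Years until the fill shrinks to a chosen fraction of its initial volume,
    assuming (unrealistically) a constant annual loss rate."""
    removable = initial_volume_m3 * (1.0 - trigger_fraction)
    return removable / annual_loss_m3


if __name__ == "__main__":
    # Illustrative numbers, not from any actual project:
    volume = nourishment_volume(beach_length_m=5_000, added_width_m=30,
                                berm_height_m=2, closure_depth_m=6)
    lifespan = years_until_renourishment(volume, annual_loss_m3=150_000)
    print(f"Fill volume: {volume:,.0f} m^3, renourish after ~{lifespan:.1f} years")
```

The point of the sketch is only that the predicted loss rate drives both the required sand volume and the renourishment interval, which is why errors in that prediction translate directly into cost errors.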
Accurate answers to these questions are essential. If the cost is too high, a community
may wish to relocate buildings, but if beach nourishment is allowed to proceed, it will likely
encourage increased density of development. This will add more clout and pressure for
additional nourishments in the future. Economically feasible sand supplies are limited, and future
costs of nourishment should skyrocket.
The U.S. Army Corps of Engineers' Coastal Engineering Research Center (CERC) is the principal
organization involved in beach nourishment research. Most of the models and other concepts
used in beach nourishment in this country were developed there. A significant coastal
engineering community exists "on the outside" of CERC but has relatively little impact on
current design procedures. This is, in part, because consulting engineers perceive that they must
follow established and published Corps of Engineers procedures to avoid lawsuits in case of
project failure. Since failure in terms of prediction of beach durability is a common event,
following published government guidelines has proven to be a prudent approach. It also is a
major inhibitor to novel and creative design approaches. The basic Corps procedures are
outlined in the Corps' Shore Protection Manual (USACE, 1984).
In this paper, application of models to the beach nourishment process will be emphasized.
A second, less frequent, use of models in coastal engineering is prediction of beach behavior after
emplacement of seawalls.