Archive for the ‘Risk & Uncertainty’ Category

Cost-Benefit Analysis of the Spy Satellite Shootdown Attempt

February 20th, 2008

Posted by: Roger Pielke, Jr.


The Navy is apparently going to try to shoot down that wayward spy satellite sometime in the next 48 hours. The attempt is justified in terms of protecting human life from the risk of harm caused by the satellite’s uncontrolled reentry. This post discusses whether the shootdown attempt can be justified in cost-benefit terms. I don’t think it can, at least in terms of the formal justifications provided by the U.S. government; there must be other factors involved. The cost per expected life saved comes to about $2-$3 billion! Read on for details.


Deja Vu All Over Again

January 7th, 2008

Posted by: Roger Pielke, Jr.

The Washington Post had an excellent story yesterday by Marc Kaufman describing NASA’s intentions to increase the flight rate of the Space Shuttle program. This is remarkable, and as good an indication as any that NASA has not yet learned the lessons of its past.


According to the Post:

Although NASA has many new safety procedures in place as a result of the Columbia accident, the schedule has raised fears that the space agency, pressured by budgetary and political considerations, might again find itself tempting fate with the shuttles, which some say were always too high-maintenance for the real world of space flight.

A NASA official is quoted in the story:

“The schedule we’ve made is very achievable in the big scheme of things. That is, unless we get some unforeseen problems.”

The Post has exactly the right follow up to this comment:

The history of the program, however, is filled with such problems — including a rare and damaging hailstorm at the Kennedy Space Center last year as well as the shedding of foam insulation that led to the destruction of Columbia and its crew in 2003. . . “This pressure feels so familiar,” said Alex Roland, a professor at Duke University and a former NASA historian. “It was the same before the Challenger and Columbia disasters: this push to do more with a spaceship that is inherently unpredictable because it is so complex.”

John Logsdon, dean of space policy experts and longtime supporter of NASA, recognizes the risks that NASA is taking:

Every time we launch a shuttle, we risk the future of the human space flight program. The sooner we stop flying this risky vehicle, the better it is for the program.

Duke University’s Alex Roland also hit the nail on the head:

Duke professor Roland said that based on the shuttle program’s history, he sees virtually no possibility of NASA completing 13 flights by the deadline. He predicted that the agency would ultimately cut some of the launches but still declare the space station completed.

“NASA is filled with can-do people who I really admire, and they will try their best to fulfill the missions they are given,” he said. “What I worry about is when this approach comes into conflict with basically impossible demands. Something has to give.”

It is instructive to look at the House Science Committee’s 1987 report on its investigation of the 1986 Challenger disaster, which you can find online here in PDF (thanks to Rad Byerly and Ami Nacu-Schmidt). That report contains lessons that apparently have yet to be fully appreciated, even after the loss of Columbia in 2003. Here is an excerpt from the Executive Summary (emphasis added; see also pp. 119-124):

The Committee found that NASA’s drive to achieve a launch schedule of 24 flights per year created pressure throughout the agency that directly contributed to unsafe launch operations. The Committee believes that the pressure to push for an unrealistic number of flights continues to exist in some sectors of NASA and jeopardizes the promotion of a “safety first” attitude throughout the Shuttle program.

The Committee, Congress, and the Administration have played a contributing role in creating this pressure. . . NASA management and the Congress must remember the lessons learned from the Challenger accident and never again set unreasonable goals which stress the system beyond its safe functioning.

One would hope that the House Science Committee has these lessons in mind and is paying close attention to decision making at NASA. Greater public oversight of NASA’s decisions about the Shuttle flight rate and the program’s eventual termination would certainly be appropriate. Otherwise, there is a good chance that such oversight will take place after another tragedy and the complete wreckage of the U.S. civilian space program.

For further reading:

Pielke Jr., R. A., 1993: A Reappraisal of the Space Shuttle Program. Space Policy, May, 133-157. (PDF)

Pielke Jr., R.A., and R. Byerly Jr., 1992: The Space Shuttle Program: Performance versus Promise in Space Policy Alternatives, edited by R. Byerly, Westview Press, Boulder, pp. 223-245. (PDF)

On the Political Relevance of Scientific Consensus

December 21st, 2007

Posted by: Roger Pielke, Jr.

Senator James Inhofe (R-OK) has released a report in which he identifies some hundreds of scientists who disagree with the IPCC consensus. Yawn. In the comments on Andy Revkin’s blog post about the report you can get a sense of why I often claim that arguing about the science of climate change is endlessly entertaining but hardly productive, and you can see confirmation of Andy’s assertion that “A lot of us live in intellectual silos.”

In 2005 I had an exchange with Naomi Oreskes in Science on the significance of a scientific consensus in climate politics. Here is what I said then (PDF):

IN HER ESSAY “THE SCIENTIFIC CONSENSUS on climate change” (3 Dec. 2004, p. 1686), N. Oreskes asserts that the consensus reflected in the Intergovernmental Panel on Climate Change (IPCC) appears to reflect, well, a consensus. Although Oreskes found unanimity in the 928 articles with key words “global climate change,” we should not be surprised if a broader review were to find conclusions at odds with the IPCC consensus, as “consensus” does not mean uniformity of perspective. In the discussion motivated by Oreskes’ Essay, I have seen one claim made that there are more than 11,000 articles on “climate change” in the ISI database and suggestions that about 10% somehow contradict the IPCC consensus position.

But so what? If that number is 1% or 40%, it does not make any difference whatsoever from the standpoint of policy action. Of course, one has to be careful, because people tend to read into the phrase “policy action” a particular course of action that they themselves advocate. But in the IPCC, one can find statements to use in arguing for or against support of the Kyoto Protocol. The same is true for any other specific course of policy action on climate change. The IPCC maintains that its assessments do not advocate any single course of action.

So in addition to arguing about the science of climate change as a proxy for political debate on climate policy, we now can add arguments about the notion of consensus itself. These proxy debates are both a distraction from progress on climate change and a reflection of the tendency of all involved to politicize climate science.

The actions that we take on climate change should be robust to (i) the diversity of scientific perspectives, and thus also to (ii) the diversity of perspectives on the nature of the consensus. A consensus is a measure of a central tendency and, as such, it necessarily has a distribution of perspectives around that central measure (1). On climate change, almost all of this distribution is well within the bounds of legitimate scientific debate and reflected within the full text of the IPCC reports. Our policies should not be optimized to reflect a single measure of the central tendency or, worse yet, caricatures of that measure; instead they should be robust enough to accommodate the distribution of perspectives around that central measure, thus providing a buffer against the possibility that we might learn more in the future (2).

ROGER A. PIELKE JR.
Center for Science and Technology Policy Research,
University of Colorado, UCB 488, Boulder, CO
80309–0488, USA.

References
1. D. Bray, H. von Storch, Bull. Am. Meteorol. Soc. 80, 439 (1999).
2. R. Lempert, M. Schlesinger, Clim. Change 45, 387 (2000).

AGU Powerpoint with Steve McIntyre

December 10th, 2007

Posted by: Roger Pielke, Jr.

Here is a link to a PPT file providing an overview of a paper by Steve McIntyre and me titled, “Changes in Spatial Distribution of North Atlantic Tropical Cyclones,” which he will be presenting this week at the AGU meeting.

Here are our conclusions:

Spatially descriptive statistics can contribute to analysis of controversial hurricane issues.

There has been no statistically significant increase in cyclone activity in the western Atlantic basin; the entire increase in measured storm and hurricane activity has taken place in the mid-Atlantic;

Lack of trend in landfall and normalized damage reconciles perfectly with lack of trend in western quartile storm and hurricane indices.

The eastward shift cannot be attributed merely to earlier detection.

The shift could be technological or climatological or some combination; there is no plausible statistical basis for saying that the shift to the mid-Atlantic is not as important or relevant as the overall increase.

If the trend only occurs in the mid-Atlantic, should policy-makers care?

Comments welcomed.

Revisiting The 2006-2010 RMS Hurricane Damage Prediction

December 6th, 2007

Posted by: Roger Pielke, Jr.

In the spring of 2006, a company called Risk Management Solutions (RMS) issued a five year forecast of hurricane activity (for 2006-2010) predicting U.S. insured losses to be 40% higher than average. RMS is an important company because their loss models are used by insurance companies to set rates charged to homeowners, by reinsurance companies to set rates they charge to insurers, by ratings agencies for evaluating risks, and others.

We are now two years into the RMS forecast period and can thus say something preliminary about their forecast based on actual hurricane damage from 2006 and 2007, which was minimal. In short, the forecast doesn’t look too good. For 2006 and 2007, the following figure shows average annual insured historical losses (for 2005 and earlier) in blue (based on Pielke et al. 2008, adjusted up by 4% from 2006 to 2007 to account for changing exposure), the RMS prediction of 40% more losses above the average in pink, and the actual losses in red.


The RMS prediction obviously did not improve upon a naive forecast of average losses in either year.

What are the chances for the 5-year forecast yet to verify?

Average U.S. insured losses according to Pielke et al. (2008) are about $5.2 billion per year. Over 5 years this is $26 billion, and 40% higher than this is $36 billion. A $36 billion insured loss is about $72 billion in total damage, and $26 billion insured is about $52 billion. For the RMS forecast to do better than the naive baseline of Pielke et al. (2008), total damage in 2008-2010 will have to be higher than $62 billion ($31 billion insured), since losses in 2006 and 2007 were minimal. That is, losses higher than $62B are closer to the RMS forecast than to the naive baseline.
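That break-even arithmetic can be sketched in a few lines (a rough check using only the figures quoted above; the 2:1 ratio of total to insured losses follows the convention used in this post):

```python
# Break-even between the naive baseline and the RMS forecast,
# using the figures quoted above (all values in billions of dollars).
naive_annual_insured = 5.2                 # Pielke et al. (2008) average
naive_insured = naive_annual_insured * 5   # 5-year baseline: ~$26B
rms_insured = naive_insured * 1.40         # RMS forecast: 40% higher, ~$36B

# Losses above the midpoint are closer to the RMS forecast than
# to the naive baseline.
breakeven_insured = (naive_insured + rms_insured) / 2   # ~$31B insured
breakeven_total = 2 * breakeven_insured                 # ~$62B total damage

print(f"5-yr naive insured: ${naive_insured:.0f}B")
print(f"5-yr RMS insured:   ${rms_insured:.1f}B")
print(f"Break-even (total): ${breakeven_total:.0f}B")
```

Because 2006 and 2007 saw minimal losses, essentially the entire $62 billion threshold falls on 2008-2010.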

The NHC official estimate for Katrina is $81 billion. So for the 2006-2010 RMS forecast to verify, something close to another Katrina-scale event, or several large events, will have to occur in the next 3 years. This is of course possible, but I doubt that there is a hurricane expert out there willing to put forward a combination of event probability and loss magnitude that leads to an expected $62 billion total loss over the next 3 years. Consider that a 50% chance of $124 billion in losses results in an expected $62 billion. Is there any scientific basis to expect a 50% chance of $124 billion in losses? Or perhaps a 100% chance of $62 billion in total losses? Anyone wanting to make claims of this sort, please let us know!

From Pielke et al. (2008), the annual chance of a >$10B event (i.e., $5B insured) during 1900-2005 was about 25%, and the annual chance of a >$50 billion event ($25 billion insured) was just under 5%. There were 7 unique three-year periods with >$62B (>$31B insured) in total losses, or about a 7% chance. So the RMS prediction of 40% higher than average losses for 2006-2010 has about a 7% chance of being more accurate than a naive baseline. It could happen, of course, but I wouldn’t bet on it without good odds!
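The roughly 7% figure is just an empirical frequency over the 1900-2005 record; here is a minimal sketch of the count (the number of qualifying windows is taken from the post, and overlapping three-year windows are assumed):

```python
# Chance that a 3-year window exceeds $62B in total losses, estimated
# as an empirical frequency over the 1900-2005 record.
first_year, last_year = 1900, 2005
window = 3
n_windows = (last_year - first_year + 1) - (window - 1)  # 104 windows
qualifying = 7   # windows with > $62B total losses, per the post

prob = qualifying / n_windows
print(f"{qualifying}/{n_windows} = {prob:.1%}")   # about 7%
```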

So what has RMS done in the face of evidence that its first 5-year forecast was not so accurate? Well, it has declared success and issued another 5-year forecast of 40% higher losses, for the period 2008-2012.

Risk Management Solutions (RMS) has confirmed its modeled hurricane activity rates for 2008 to 2012 following an elicitation with a group of the world’s leading hurricane researchers. . . . The current activity rates lead to estimates of average annual insured losses that will be 40% higher than those predicted by the long-term mean of hurricane activity for the Gulf Coast, Florida, and the Southeast, and 25-30% higher for the Mid-Atlantic and Northeast coastal regions.

For further reading:

Pielke, R. A., Jr., Gratz, J., Landsea, C. W., Collins, D., Saunders, M. A., and Musulin, R. (2008). “Normalized Hurricane Damages in the United States: 1900-2005.” Natural Hazards Review, in press, February. (PDF, prepublication version)

State of Florida Rejects RMS Cat Model Approach

May 11th, 2007

Posted by: Roger Pielke, Jr.

According to a press release from RMS, Inc., the state of Florida has rejected the company’s risk assessment methodology, which is based on using an expert elicitation to predict hurricane risk for the next five years. Regular readers may recall that we discussed this issue in depth not long ago. Here is an excerpt from the press release:

During the week of April 23, the Professional Team of the Florida Commission on Hurricane Loss Projection Methodology (FCHLPM) visited the RMS offices to assess the v6.0 RMS U.S. Hurricane Model. The model submitted for review incorporates our standard forward-looking estimates of medium-term hurricane activity over the next five years, which reflect the current prolonged period of increased hurricane frequency in the Atlantic basin. This model, released by RMS in May 2006, is already being used by insurance and reinsurance companies to manage the risk of losses from hurricanes in the United States.

Over the past year, RMS has been in discussions with the FCHLPM regarding use of a new method of estimating future hurricane activity over the next five years, drawing upon the expert opinion of the hurricane research community, rather than relying on a simplistic long-term historical average which does not distinguish between periods of higher and lower hurricane frequency. RMS was optimistic that the certification process would accommodate a more robust approach, so it was disappointed that the Professional Team was “unable to verify” that the company had met certain FCHLPM model standards relating to the use of long-term data for landfalling hurricanes since 1900.

As a result of the Professional Team’s decision, RMS has elected this year to submit a revised version of the model that is based on the long-term average, to satisfy the needs of the FCHLPM.

This is of course the exact same issue that we highlighted over at Climate Feedback, where I wrote, “Effective planning depends on knowing what range of possibilities to expect in the immediate and longer-term future. Use too long a record from the past and you may underestimate trends. Use too short a record and you miss out on longer time-scale variability.”

In their press release, RMS complains correctly that the state of Florida is now likely to underestimate risk:

The long-term historical average significantly underestimates the level of hurricane hazard along the U.S. coast, and there is a consensus among expert hurricane researchers that we will continue to experience elevated frequency for at least the next 10 years. The current standards make it more difficult for insurers and their policy-holders to understand, manage, and reduce hurricane risk effectively.

However, the presence of increased risk does not justify using an untested, unproven, and problematic methodology for assessing risk, even if it seems to give the “right” answer.

The state of Florida would be wise to err on the side of recognizing that the long-term record of hurricane landfalls and impacts is likely to dramatically understate its current risk and exposure. From all accounts, the state appears to be gambling with its hurricane future rather than engaging in robust risk management. For their part, RMS, the rest of the cat model industry, and insurance and reinsurance companies should together carefully consider how best to incorporate rapidly evolving and still-uncertain science into scientifically robust and politically legitimate tools for risk management, and this cannot happen quickly enough.

Sea Level Rise Consensus Statement and Next Steps

April 1st, 2007

Posted by: Roger Pielke, Jr.

In a paper we discussed last week, Jim Hansen called for relevant scientific experts to issue a consensus statement on global warming, West Antarctica, and sea level rise. A group of scientists has beaten him to the punch, issuing a consensus statement last week:


Science, Politics, Variability, Change, Learning, Uncertainty

February 27th, 2007

Posted by: Roger Pielke, Jr.

The issue of floodplain management in the city of Boulder reflects in microcosm many of the themes that we discuss on this site. Here is an excerpt from an article in the Daily Camera today:


Catastrophic Visions

February 23rd, 2007

Posted by: Roger Pielke, Jr.

The last time that we pointed to an essay by Brad Allenby of ASU it generated much thoughtful discussion. I expect no different from this provocative piece in the latest CSPO Newsletter from ASU titled Dueling Elites and Catastrophic Visions. Here is an excerpt:


Where Stern is Right and Wrong

February 22nd, 2007

Posted by: Roger Pielke, Jr.

The Christian Science Monitor adds a few interesting details to Nicholas Stern’s recent U.S. visit. On mitigation, Stern explains why the debate over the science of climate change is in fact irrelevant:

Even if climate change turned out to be the biggest hoax in history, Stern argues, the world will still be better off with all the new technologies it will develop to combat it.

If mitigation can indeed be justified on factors other than climate change, which I think it can, then why not bring these factors more centrally into the debate?

Stern also dismissed two other arguments for inaction: that humans will easily adapt to climate change and that its effects are too far in the future to address now. Putting the burden of dealing with climate change on future generations is “unethical,” Stern said.

Once again adaptation is being downplayed as somehow being in opposition to mitigation. Stern may in fact believe that we need to both adapt and mitigate, but that is certainly not what is conveyed here. The Stern Review itself adopted a very narrow view of adaptation, treating it as a cost of failed mitigation. When framed this narrowly, there is no alternative but to characterize adaptation and mitigation as trade-offs, and in today’s political climate, guess which one loses out?