Prometheus » Risk & Uncertainty
http://cstpr.colorado.edu/prometheus

Research Takes First Step on Tolerance of Nanoparticles
http://cstpr.colorado.edu/prometheus/?p=5615
Sun, 21 Jun 2009 23:51:47 +0000, by admin

The Scientist has a capsule review of a 2007 research article on the ability of mice to purge themselves of nanoparticles. The full article is available in Nature Biotechnology (subscription/purchase required). The article also includes notes on some subsequent work in this area.

As nanotechnology matures, putting more and more products with nanometer-scale particles on the market, exposure to these particles needs to be assessed and regulated. Determining which particle sizes the body can expel and which accumulate helps shape the questions regulators must ask, but it doesn’t close off exploration into potential risks.

While other regulatory models can provide useful examples, it’s important to remember that the scale of these particles may raise unique concerns. I would hope that the hard lessons of chemical regulation, where accumulated exposure flew under the regulatory radar for years (see Krimsky’s Hormonal Chaos for a good overview), could be put to good effect here. It may not be enough that small particles can be expelled: enough transitory exposures over time could have unfortunate effects, much as repeated small doses of certain chemicals have had dramatic effects on endocrine systems.

Encourage Risky Research Through Finance? Maybe
http://cstpr.colorado.edu/prometheus/?p=5099
Fri, 03 Apr 2009 02:06:38 +0000, by admin

That appears to be one of the suggestions for encouraging more risky research in a recent article in Physics World (H/T Scientific Blogging). Risky here means different, unique, cutting-edge, not carrying a high potential for harm or damage. This is another effort to address a perennial research management question: how do you encourage enough of this kind of risky research to keep getting the radical breakthroughs that push science and technology along? Science is an inherently conservative (small-c) process, as it requires the agreement of the community to establish its findings. Those who take risks and fail lose a lot, as tenure committees, journal boards, and funding organizations don’t deal well with ‘failed’ experiments.

Some of the suggestions in the Physics World article are familiar to those following ‘blue sky’ research: programs or agencies focused on this kind of research, a percentage of research funding set aside for it, fellowships for out-of-the-box thinkers, tenure as a means of giving experienced researchers the job security to take these risks, and so on. They all have their benefits and problems.

What was novel – at least for me – were two strategies taken from the investment realm. One boils down to critics and supporters of new or radical ideas putting their money where their mouth is, so to speak: you could create an investment or prediction market on a particular scientific proposition. This assumes – and it may be a really big assumption – that the success of prediction markets in other areas (look at those used in political races) will transfer to scientific research. It’s also unclear whether enough people would care about a given research question, one way or the other, to put enough money into the market to make it viable. I get the impression the market wouldn’t have to be big enough to actually fund the research, since the money in the market (or most of it) would go to those who picked correctly. The market would just have to be big enough to convince some funder that the question is worth pursuing.
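To make the mechanism concrete, here is a minimal sketch of a parimutuel-style market on a yes/no scientific proposition; the participants, stakes, and payout rule are invented for illustration and are not drawn from the Physics World article:

```python
# Hypothetical parimutuel market on a yes/no scientific proposition.
# Backers stake money on "yes" or "no"; once the question is settled,
# winners get their stake back plus a pro-rata share of the losing pool.

def settle(stakes_yes: dict, stakes_no: dict, outcome_is_yes: bool) -> dict:
    winners = stakes_yes if outcome_is_yes else stakes_no
    losing_pool = sum((stakes_no if outcome_is_yes else stakes_yes).values())
    winning_total = sum(winners.values())
    return {name: stake + losing_pool * stake / winning_total
            for name, stake in winners.items()}

yes_side = {"lab_a": 500.0, "grad_student": 50.0}   # believe the proposition
no_side = {"skeptical_theorist": 300.0}             # doubt it

print(settle(yes_side, no_side, outcome_is_yes=True))
# {'lab_a': 772.72..., 'grad_student': 77.27...}
```

The total staked (here $850) is the signal a funder would read; as noted above, it would not itself pay for the research.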

The other idea is to approach research funding the way an investor might build a mutual fund or retirement account: invest enough in conservative research – research that will produce results by expanding or refining the knowledge in a field – to allow for backing a few riskier ventures. Unfortunately, this strategy is more problematic because of how returns on scientific research are accounted for. Exactly how would you measure return on investment? It makes sense to me to approach scientific research more explicitly as an investment, but there are many measurement issues to address before this could work.
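As a sketch of the portfolio arithmetic, with success probabilities and payoff multiples invented purely to illustrate the trade-off (measuring actual returns is exactly the unresolved accounting problem noted above):

```python
# Expected "return" of a research portfolio mixing conservative and risky
# projects. All probabilities and payoffs below are invented; the point is
# only the shape of the trade-off, not the numbers.

def expected_return(weight_risky: float,
                    p_conservative: float = 0.9, payoff_conservative: float = 1.2,
                    p_risky: float = 0.1, payoff_risky: float = 20.0) -> float:
    conservative = (1 - weight_risky) * p_conservative * payoff_conservative
    risky = weight_risky * p_risky * payoff_risky
    return conservative + risky

for w in (0.0, 0.1, 0.25, 0.5):
    print(f"{w:.0%} risky: expected payoff {expected_return(w):.2f}x")
```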

Flood Control and Risk in the Netherlands
http://cstpr.colorado.edu/prometheus/?p=4839
Mon, 05 Jan 2009 05:15:51 +0000, by admin

The January issue of Wired also has an article on Dutch efforts currently underway to expand their flood control plans. With much of the country at or below sea level, the Dutch began expanding their dam and flood control systems in the early part of the 20th century. The Wired article describes the next big phase in the expansion of this system, in which the annual chance of flood in high-risk areas will be reduced from one in 10,000 to one in 100,000. By comparison, New Orleans is protected to a level where the chance of flood in a given year is one in 100 (though against more severe hurricanes than the storms that cause flooding in the Netherlands).
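To give a sense of what those protection levels mean over a long horizon, here is a minimal sketch, assuming independent years and a constant annual flood probability (an idealization, since real flood risk is neither constant nor independent from year to year):

```python
# Illustrative arithmetic only: chance of at least one flood over a planning
# horizon, given a constant annual flood probability p and independent years.
# P(at least one flood in n years) = 1 - (1 - p)**n

def chance_of_flood(annual_prob: float, years: int) -> float:
    return 1.0 - (1.0 - annual_prob) ** years

protection_levels = {
    "New Orleans (1 in 100)": 1 / 100,
    "Dutch current standard (1 in 10,000)": 1 / 10_000,
    "Dutch proposed standard (1 in 100,000)": 1 / 100_000,
}

for label, p in protection_levels.items():
    print(f"{label}: {chance_of_flood(p, 100):.2%} chance of at least one flood in 100 years")
```

At the 1-in-100 level, a century brings roughly a 63% chance of at least one flood; at the proposed Dutch standard, about 0.1%.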

This is a particularly daunting public works project, expected to take a century. As the Dutch have experience with long-term projects of this scope, it’s not likely to be as much of a shock as it would be if this were their first attempt at long-range geographic modification. However, it’s hard to see a 100-year project surviving the rough-and-tumble politics of most democracies. While there are certainly climate change implications for what they are doing, the adaptation practices of the Dutch will set an example for engineers and urban planners around the world. It’s worth taking a look.

Technology and the Recent Troubles
http://cstpr.colorado.edu/prometheus/?p=4594
Fri, 26 Sep 2008 01:55:49 +0000, by admin

The title of this post may seem a bit misleading, because I am not going to be writing about what many readers would automatically assume – the role of high-tech companies or products behind the recent shenanigans. After all, that nonsense was at least one or two speculative bubbles ago. What I want to do is take a few lines and point out that the financial sector is a very real, almost tangible example of the failure of imagination often found in considerations of technology – and, by extension, technology policy.

Financial instruments, whether we’re talking about basic mortgages (sub-prime or otherwise) or more complicated derivatives like the credit default swaps that may have had a hand in locking up several financial services companies, are technologies.  They aren’t as tangible, so to speak, as other technologies, but their creation still has consequences – consequences rarely engaged until after they have emerged and changed the world – for better or worse.  But I don’t expect some kind of ELSI (Ethical, Legal and Social Implications) research program to emerge in business schools or in Science and Technology Studies departments, focused on the financial sector.  Ironically enough, there’s probably no money in it.

Changes in the business world can often be traced to changes in the instruments and forms of organization used by companies to achieve their goals. Corporations didn’t always exist, after all; the form had to be created. Alfred Chandler, in his historical scholarship, focused on the business operations behind several key industries, most of which were involved in innovation – either then or now. The Visible Hand is a revelatory (well, for me anyway) history of how the formation of corporations and partnerships was critical to the emergence of the railroads and retail in the United States during the 19th and 20th centuries. In short, capital-intensive industries needed new forms of organization to function.

What can be said of financial instruments is true of the biggest one of all – “the market”. Contrary to the most pious of free market disciples, markets are constructed by their participants and by the context in which those participants operate. But this construction, like the financial instruments wielded within markets, does not receive proper scrutiny, in the same way that many technologies do not. They demonstrate what philosopher of technology Davis Baird describes in Thing Knowledge as device objectivity. The choices and values that go into the creation of these instruments are hidden behind their function – or, more explicitly in the case of many financial instruments, behind their sheer incomprehensibility to any but the highest market priests (while Alan Greenspan was particularly good at playing the financial oracle, his successor at the Fed, Ben Bernanke, is certainly adept at saying a lot and a little at the same time).

I am not trying to present some magic explanation for the very fine mess we’re in. That’s way beyond my pay grade (particularly at Prometheus). I hope I’ve suggested here that the challenges policy researchers face when dealing with technologies and their interactions with societies need not be narrowly considered. The patterns we have observed in other areas can be of use outside of what is generally thought of as technology. It’s not restricted to the present (see Neal Stephenson’s The Baroque Cycle for a great examination of changes in financial technology in the transition to early capitalism), and it’s not restricted to the gee-whiz stuff most people think of when they hear about technology. There’s no good reason to stovepipe our knowledge and what we’ve learned because of some arbitrary definition.

For instance, climate change policy is going to form – if it hasn’t already – various kinds of markets. For the financial instruments used there, what thought has been given to how these technologies will work, and to whether they will achieve the desired outcomes?

Tone Deaf Damage Control
http://cstpr.colorado.edu/prometheus/?p=4504
Sat, 09 Aug 2008 16:10:58 +0000, by Roger Pielke, Jr.

I find this hard to believe. As readers of this blog know, earlier this week NCAR administrator Eric Barron made the difficult decision to cut the organization’s Center for Capacity Building, which is focused on helping developing countries adapt to climate. The decision prompted a flurry of complaints and international media coverage.

I was thus surprised to see the email copied below (which was sent to a bunch of people) from Eric and UCAR Vice President Jack Fellows:

From: “Jack Fellows”
Date: August 5, 2008 7:36:07 PM EDT
To: Bierbaum Rosina , peter backlund , Kelly Redmond , kelly redmond , Brad Udall , david yates , Jonathan Overpeck , kathy hibbard , linda mearns , mary hayden , lawrence buja , tom warner , kathleen miller , philip mote , caitlin simpson , Chester J Koblinsky , lisa goddard , Nolan Doesken , Alan Robock , Mark Abbott , “Mark Z. Jacobson” , dan vimont , Anthony Janetos , greg carbone , “Rudnicki Mark” , Keith Talbert Ingram , brian oneill , Paty Romero Lankao , Liz Moyer , david battisti , urc
Cc: eric barron
Subject: 2008 AGU Fall Session on Climate Adaptation

Eric Barron and I are putting together the AGU session described below. If you have put one of these AGU sessions together, you know that I can’t promise you that it will be an oral session or the day/time it will happen, and you are restricted to no more than 3 first author abstracts for the Fall AGU mtg. That said, we’d love to have you participate in this session. Let me know if you are interested by 17 August so I can meet an 18 Aug AGU deadline. If you can’t participate but know of a great person, please forward this email on to them. Thanks. This is a large email list, so pls just respond to my email address. Jack

Description: PA03: How Can the Science Community Help Local and Regional Decision Makers Who are Exploring or Implementing Adaptation Options to Climate Change? All weather is local! The same will be true regarding adapting to climate change. This session will examine how local universities and decision makers are working together to adapt to anticipated climate change impacts and explore how these independent activities might be networked together into a “national adaptation network or model”. NOTE: we are primarily interested in people who have really interesting policy research or a project that is partnering with local and regional decision makers to deal with climate adaptation, mitigation, or even geoengineering.

Note that the time stamp of the email is after the brouhaha started over the decision to cut CCB. This appears to be an exercise in damage control – “We really care about adaptation, no really, we do. Look, we sponsored a session at AGU!” – since neither Barron nor Fellows has ever done any work in adaptation (and to my knowledge Fellows is an administrator, not a researcher at all).

Let me try my hand at answering the question that they pose:

How Can the Science Community Help Local and Regional Decision Makers Who are Exploring or Implementing Adaptation Options to Climate Change? . . . This session will examine how local universities and decision makers are working together to adapt to anticipated climate change impacts and explore how these independent activities might be networked together into a “national adaptation network or model”.

First thing, don’t cut a program with a 34-year track record of success in doing exactly this. Are these guys serious that they now want to discuss how to recreate the functions of the NCAR CCB at an AGU session? Seriously?

I’d love to give NCAR/UCAR management the benefit of the doubt and believe that they have a rigorous method for setting priorities and making cuts. But this sort of tone-deaf damage control and spin is not only insulting to the actual adaptation research community; it is also a sign that NCAR/UCAR may have a management problem rather than a budget problem.

Joel Achenbach on Weather Extremes
http://cstpr.colorado.edu/prometheus/?p=4495
Sun, 03 Aug 2008 20:22:43 +0000, by Roger Pielke, Jr.

In today’s Washington Post Joel Achenbach has a smart and nuanced piece on weather extremes and climate change. The attribution of weather events and trends to particular causes is difficult and contested.

Equivocation isn’t a sign of cognitive weakness. Uncertainty is intrinsic to the scientific process, and sometimes you have to have the courage to stand up and say, “Maybe.”

For his efforts, Achenbach gets called stupid and a tool of the “deniers”. Such complaints are ironic given that Achenbach explains how foolish it is to put too much weight on extreme events in arguments about climate change:

the evidence for man-made climate change is solid enough that it doesn’t need to be bolstered by iffy claims. Rigorous science is the best weapon for persuading the public that this is a real problem that requires bold action. “Weather alarmism” gives ammunition to global-warming deniers. They’re happy to fight on that turf, since they can say that a year with relatively few hurricanes (or a cold snap when you don’t expect it) proves that global warming is a myth. As science writer John Tierney put it in the New York Times earlier this year, weather alarmism “leaves climate politics at the mercy of the weather.”

There’s an ancillary issue here: Global warming threatens to suck all the oxygen out of any discussion of the environment. We wind up giving too little attention to habitat destruction, overfishing, invasive species tagging along with global trade and so on. You don’t need a climate model to detect that big oil spill in the Mississippi. That “dead zone” in the Gulf of Mexico — an oxygen-starved region the size of Massachusetts — isn’t caused by global warming, but by all that fertilizer spread on Midwest cornfields.

Some folks may actually get the notion that the planet will be safe if we all just start driving Priuses. But even if we cured ourselves of our addiction to fossil fuels and stabilized the planet’s climate, we’d still have an environmental crisis on our hands. Our fundamental problem is that — now it’s my chance to sound hysterical — humans are a species out of control. We’ve been hellbent on wrecking our environment pretty much since the day we figured out how to make fire.

This caused that: It would be nice if climate and weather were that simple.

And the U.S. Climate Change Science Program recently issued a report with the following conclusions:

1. Over the long term, U.S. hurricane landfalls have been declining.

2. Nationwide there have been no long-term increases in drought.

3. Despite increases in some measures of precipitation, there have not been corresponding increases in peak streamflows (high flows above the 90th percentile).

4. There have been no observed changes in the occurrence of tornadoes or thunderstorms.

5. There have been no long-term increases in strong East Coast winter storms (ECWS), called Nor’easters.

6. There are no long-term trends in either heat waves or cold spells, though there are trends within shorter time periods in the overall record.

In the climate debate, you would have to be pretty foolish to allow any argument to hinge on claims attributing observed extreme events to greenhouse gas emissions. But as we’ve noted here on many occasions, for some the climate debate is a morality tale that cannot withstand nuance, even if that nuance is perfectly appropriate given the current state of understanding. And given the public relations value of extreme events in the climate debate, don’t expect Achenbach’s reasoned view to take hold among those calling for action. As with the Bush Administration and Iraqi WMDs, for some folks the intelligence they wish existed trumps the facts on the ground.

Replications of our Normalized Hurricane Damage Work
http://cstpr.colorado.edu/prometheus/?p=4480
Mon, 14 Jul 2008 13:35:37 +0000, by Roger Pielke, Jr.

This post highlights two discussion papers that have successfully replicated our normalized hurricane damage analyses using different approaches and datasets. Interestingly, both papers claim a trend in losses after normalization, but do so only by using a subset of the data – starting in 1950 in the first case and 1971 in the second. Our dataset shows the same trends when these shorter time frames are arbitrarily selected; however, as we reported, we found no trends in the entire dataset.

If you’d just like the bottom line, here it is:

I am happy to report that Nordhaus (2006) and Schmidt et al. (2008) offer strong confirmatory evidence in support of the analysis that we have presented on adjusting U.S. hurricane losses over time. What do these studies say about the debate over hurricanes and climate change? Well, almost nothing (despite the unsuccessful effort by Schmidt et al. to reach for such a connection). There is no long-term trend in the landfall behavior of U.S. hurricanes, so it is only logical that there would also be no long-term trends in normalized damage as a function of storm behavior. Those looking for insight on this debate will need to look elsewhere. If it is to be found, such a linkage will be found in the geophysical data long before it shows up in the damage data, as we asserted at our Hohenkammer workshop.

Please read on if you are interested in the details.


The first paper is by the leading economist William Nordhaus (2006, PDF). His analysis uses the same original loss data but adjusts it for GDP rather than for population, wealth, and housing units; it draws on our 1998 study, which we updated in 2008. Nordhaus claims to have “verified our analysis” but leaves a loose end (emphasis added):

Our estimates indicate that the time trend in the damage function is positive. For example, the time trend in the OLS full-sample equation found that normalized damages have risen by 2.9 (+0.76) percent per year, indicating increased vulnerability to storms of a given size.

This is contrary to Roger A. Pielke, Jr., “Are There Trends in Hurricane Destruction?” Nature, Vol. 438, December 2005, E11, who reports no statistically significant trend. Similar negative results were found in Roger A. Pielke, Jr. and Christopher W. Landsea, “La Niña, El Niño, and Atlantic Hurricane Damages in the United States,” Bull. Amer. Meteor. Soc., vol. 80, 2027-2033. Additionally, issues of comparability over time are non-trivial, as is discussed in Christopher W. Landsea, “Hurricanes and Global Warming,” Nature, Vol. 438, December 2005, E11-E12. The reasons for the difference in findings have not been resolved.

The reason for the difference in findings should be completely obvious: Nordhaus looks at 1950-2005, while Pielke (2005) and Pielke and Landsea (1998) both begin their analyses in 1900. Pielke (2005, PDF) reports:

For example, take the 86 storms causing at least US$1 billion in normalized damages, which removes a bias caused by small storms resulting in no damage in the early twentieth century (that is, not subjected to normalization). There is an average per-storm loss in 1900–50 for 40 storms (0.78 events per year) of $9.3 billion, and an average per-storm loss in 1951–2004 for 46 storms (0.85 events per year) of $7.0 billion; this difference is not statistically significant. Adding Hurricane Katrina to this data set, even at the largest loss figures currently suggested, would not change the interpretation of these results.

The following figures illustrate this point quite clearly. The first figure is from Nordhaus:

[Figure Nord1.jpg: normalized hurricane damage data from Nordhaus (2006)]

The second figure is from the data of Pielke et al. 2008 (PDF):

[Figure pietal1.jpg: normalized hurricane damage data from Pielke et al. (2008)]

Clearly, the datasets show the same trends. However, our entire dataset shows no trend, as we reported in the paper:

The two normalized data sets reported here show no trends either in the absolute data or under a logarithmic transformation: the variance explained by a best-fit linear trend line=0.0004 and 0.0003, respectively, for PL05, and 0.0014 and 0.00006, respectively, for CL05. The lack of trend in twentieth century normalized hurricane losses is consistent with what one would expect to find given the lack of trends in hurricane frequency or intensity at landfall. This finding should add some confidence that, at least to a first degree, the normalization approach has successfully adjusted for changing societal conditions. Given the lack of trends in hurricanes themselves, any trend observed in the normalized losses would necessarily reflect some bias in the adjustment process, such as failing to recognize changes in adaptive capacity or misspecifying wealth. That we do not have a resulting bias suggests that any factors not included in the normalization methods do not have a resulting net large significance.
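For readers who want to see mechanically what that trend test involves, here is a minimal sketch that fits a least-squares linear trend to an annual loss series and reports the variance explained (r squared); the series below is synthetic, not the PL05 or CL05 data:

```python
# Fit a least-squares linear trend to an annual normalized-loss series and
# report the variance explained (r^2), for both the absolute and the
# log-transformed series. The loss data here are synthetic placeholders.
import numpy as np
from scipy import stats

years = np.arange(1900, 2006)
rng = np.random.default_rng(0)
losses = rng.lognormal(mean=2.0, sigma=1.0, size=years.size)  # fake annual losses ($B)

for label, series in (("absolute", losses), ("log-transformed", np.log(losses))):
    fit = stats.linregress(years, series)
    print(f"{label}: slope = {fit.slope:.4f}, r^2 = {fit.rvalue ** 2:.4f}")
```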

So to summarize, Nordhaus (2006) and Pielke et al. (2008) reconcile perfectly.

A second study, by Silvio Schmidt et al. (2008, PDF), is just out; it applies a modified version of our normalization methodology to hurricane losses in the Munich Reinsurance global loss dataset. The study finds no significant trend from 1950, using a log transformation of the dataset:

The trend analysis for the period 1950–2005 yields no statistically significant trend in annual adjusted losses. Even if the two extreme years, 2004 and 2005, are omitted from the trend analysis, no trend can be identified in which the explanatory variable time is significant. Thus, no conclusion can be drawn regarding a possible trend in the periods 1950–2005 and 1950–2003.

The paper does identify a statistically significant trend starting in 1971, but the significance disappears when Katrina is removed from the dataset. Once again, the Schmidt et al. analysis is perfectly consistent with our analysis. The following figure shows their log-transformed data from 1950:

[Figure schetal1.jpg: log-transformed normalized losses from 1950, from Schmidt et al. (2008)]

And the following graph is the same transformation applied to the data of Pielke et al. (2008).

[Figure pietal2.jpg: the same log transformation applied to the data of Pielke et al. (2008)]

One notable difference is that the Munich Re dataset apparently has some gaps: it reports zero damage in a number of years for which our dataset shows damaging storms. From the graph it should be clear that any claim of a trend over the dataset depends upon 2004 and 2005. And even then, Schmidt et al. were only able to identify a statistically significant trend by starting in 1971 (which they claim as the start of a cold phase, contrary to most studies, which use 1970). The following figure from Schmidt et al. shows how close the two analyses actually are (the red curve is Pielke et al. 2008):

[Figure schetal2.jpg: comparison of the Schmidt et al. (2008) and Pielke et al. (2008) normalized loss series]

The differences between the two analyses are very small and, I would guess, of no particular statistical significance over the time series of the dataset.

So I am happy to report that Nordhaus (2006) and Schmidt et al. (2008) offer strong confirmatory evidence in support of the analysis that we have presented on adjusting U.S. hurricane losses over time.

What do these studies say about the debate over hurricanes and climate change? Well, almost nothing despite efforts by Schmidt et al. to reach for such a connection. There is no long-term trend in the landfall behavior of U.S. hurricanes, so it is only logical that there would also be no long-term trends in normalized damage as a function of storm behavior. Those looking for insight on this debate will need to look elsewhere. If it is to be found, such a linkage will be found in the geophysical data long before it shows up in the damage data, as we asserted at our Hohenkammer workshop.

Normalised Australian insured losses from meteorological hazards: 1967–2006
http://cstpr.colorado.edu/prometheus/?p=4470
Fri, 27 Jun 2008 13:52:19 +0000, by Roger Pielke, Jr.

[Figure crompmcan.jpg: from Crompton and McAneney (2008)]

Ryan Crompton and John McAneney of Macquarie University in Sydney, Australia, have an important new paper in the August 2008 issue of the journal Environmental Science & Policy titled “Normalised Australian insured losses from meteorological hazards: 1967–2006” (available to subscribers here). The paper contributes to a growing literature documenting the importance of understanding the integrated effects of societal change and climate change. It also underscores the central role that adaptation policies must play in climate policy.

The abstract states:

Since 1967, the Insurance Council of Australia has maintained a database of significant insured losses. Apart from five geological events, all others (156) are the result of meteorological hazards—tropical cyclones, floods, thunderstorms, hailstorms and bushfires. In this study, we normalise the weather-related losses to estimate the insured loss that would be sustained if these events were to recur under year 2006 societal conditions. Conceptually equivalent to the population, inflation and wealth adjustments used in previous studies, we use two surrogate factors to normalise losses—changes in both the number and average nominal value of dwellings over time, where nominal dwelling values exclude land value. An additional factor is included for tropical cyclone losses: this factor adjusts for the influence of enhanced building standards in tropical cyclone-prone areas that have markedly reduced the vulnerability of construction since the early 1980s.

Once the weather-related insured losses are normalised, they exhibit no obvious trend over time that might be attributed to other factors, including human-induced climate change. Given this result, we echo previous studies in suggesting that practical steps taken to reduce the vulnerability of communities to today’s weather would alleviate the impact under any future climate; the success of improved building standards in reducing tropical cyclone wind-induced losses is evidence that important gains can be made through disaster risk reduction.
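In code, the normalization described in the abstract amounts to scaling each historical loss by the growth in dwelling numbers and in average nominal dwelling value, with an additional building-standards factor for tropical cyclone events. Here is a minimal sketch; the numbers are invented for illustration and are not from the Insurance Council of Australia database:

```python
# Normalize a historical insured loss to year-2006 societal conditions using
# the two surrogate factors described by Crompton and McAneney: growth in the
# number of dwellings and in average nominal dwelling value (excluding land),
# plus an adjustment for improved building standards for tropical cyclone
# losses. All figures below are hypothetical.

def normalise_loss(loss, dwellings_event_year, dwellings_2006,
                   value_event_year, value_2006,
                   building_standards_factor=1.0):
    dwelling_factor = dwellings_2006 / dwellings_event_year
    value_factor = value_2006 / value_event_year
    return loss * dwelling_factor * value_factor * building_standards_factor

# A hypothetical 1974 tropical cyclone causing A$200 million in insured losses:
print(normalise_loss(loss=200e6,
                     dwellings_event_year=4.0e6, dwellings_2006=8.0e6,
                     value_event_year=30_000, value_2006=300_000,
                     building_standards_factor=0.8))  # about A$3.2 billion
```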

The text of the paper includes this discussion:

The collective evidence reviewed above suggests that societal factors – dwelling numbers and values – are the predominant reasons for increasing insured losses due to natural hazards in Australia. The impact of human-induced climate change on insured losses is not detectable at this time. This being the case, it seems logical that in addition to efforts undertaken to reduce global greenhouse gas emissions, significant investments be made to reduce society’s vulnerability to current and future climate and the associated variability. Employing both mitigation and adaptation contemporaneously will benefit society now and into the future.

We are aware of few disaster risk reduction policies explicitly developed to help Australian communities adapt to a changing climate, yet disaster risk reduction should be core to climate adaptation policies (Bouwer et al., 2007). . .

An increased threat from bushfires under human-induced climate change is often assumed. Indeed Pitman et al. (2006) and others anticipate an increase in conditions favouring bushfires. However, analyses by McAneney (2005) and Crompton et al. (in press) suggest that the main bushfire menace to building losses will continue to be extreme fires and that the threat to the most at-risk homes on the bushland–urban interface can only be diminished by improved planning regulations that restrict where and how people build with respect to distance from the forest. Disaster risk reduction of this kind would immediately reduce current and future society’s vulnerability to natural hazards.

Please read the whole paper.

Normalized U.S. Earthquake Losses: 1900-2005
http://cstpr.colorado.edu/prometheus/?p=4464
Wed, 18 Jun 2008 14:49:11 +0000, by Roger Pielke, Jr.

[Figure vpeq.jpg: from Vranes and Pielke (2008)]

Kevin Vranes and I have just had a paper accepted for publication on historical earthquake losses in the United States.

Vranes, K., and R. A. Pielke, Jr., 2008 (in press). Normalized earthquake damage and fatalities in the United States: 1900 – 2005, Natural Hazards Review. (PDF)

The dataset that we present in the paper will allow for a range of interesting analyses. Here is the abstract:

Damage estimates from 80 United States earthquakes since 1900 are “normalized” to 2005 dollars by adjusting for inflation, increases in wealth and changes in population. Factors accounting for mitigation at 1% and 2% loss reduction per year are also considered. The earthquake damage record is incomplete, perhaps by up to 25% of total events that cause damage, but all of the most damaging events are accounted for. For events with damage estimates, cumulative normalized losses since 1900 total $453 billion, or $235 billion and $143 billion when 1% and 2% mitigation is factored respectively. The 1906 San Francisco earthquake and fire adjusts to $39 – $328 billion depending on assumptions and mitigation factors used, likely the most costly natural disaster in U.S. history in normalized 2005 values. Since 1900, 13 events would have caused $1B or more in losses had they occurred in 2005; five events adjust to more than $10 billion in damages. Annual average losses range from $1.3 billion to $5.7 billion with an average across datasets and calculation methods of $2.5 billion, below catastrophe model estimates and estimates of average annual losses from hurricanes. Fatalities are adjusted for population increase and mitigation, with five events causing over 100 fatalities when mitigation is not considered, four (three) events when 1% (2%) mitigation is considered. Fatalities in the 1906 San Francisco event adjust from 3,000 to over 24,000, or 8,900 (3,300) if 1% (2%) mitigation is considered. Implications for comparisons of normalized results with catastrophe model output and with normalized damage profiles of other hazards are considered.
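A minimal sketch of how a per-year mitigation factor of the kind described in the abstract might be applied, assuming the reduction compounds annually from the event year to 2005 (the paper’s exact methodology may differ, and the loss figure is hypothetical):

```python
# Apply an assumed compounding mitigation factor (1% or 2% loss reduction per
# year) to a loss already normalized to 2005 dollars, wealth, and population.
# Illustrative only; see Vranes and Pielke (2008) for the actual methodology.

def mitigated_loss(normalized_loss, event_year, annual_reduction, base_year=2005):
    return normalized_loss * (1.0 - annual_reduction) ** (base_year - event_year)

hypothetical_1906_loss = 100e9  # a made-up $100 billion normalized loss
for rate in (0.01, 0.02):
    value = mitigated_loss(hypothetical_1906_loss, 1906, rate)
    print(f"{rate:.0%} per-year mitigation: ${value / 1e9:.0f} billion")
```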

Good Intelligence
http://cstpr.colorado.edu/prometheus/?p=4439
Thu, 05 Jun 2008 22:53:33 +0000, by Roger Pielke, Jr.

Chapter 7 of The Honest Broker talks about the role of intelligence in the decision to go to war in Iraq. Today, the Senate Intelligence Committee released two reports (PDF, PDF) documenting how the Bush Administration misled policy makers and the public by politicizing government intelligence. Here is what Senator Jay Rockefeller had to say in a press release:

“Before taking the country to war, this Administration owed it to the American people to give them a 100 percent accurate picture of the threat we faced. Unfortunately, our Committee has concluded that the Administration made significant claims that were not supported by the intelligence,” Rockefeller said. “In making the case for war, the Administration repeatedly presented intelligence as fact when in reality it was unsubstantiated, contradicted, or even non-existent. As a result, the American people were led to believe that the threat from Iraq was much greater than actually existed.”

“It is my belief that the Bush Administration was fixated on Iraq, and used the 9/11 attacks by al Qa’ida as justification for overthrowing Saddam Hussein. To accomplish this, top Administration officials made repeated statements that falsely linked Iraq and al Qa’ida as a single threat and insinuated that Iraq played a role in 9/11. Sadly, the Bush Administration led the nation into war under false pretenses.

“There is no question we all relied on flawed intelligence. But, there is a fundamental difference between relying on incorrect intelligence and deliberately painting a picture to the American people that you know is not fully accurate.

“These reports represent the final chapter in our oversight of prewar intelligence. They complete the story of mistakes and failures – both by the Intelligence Community and the Administration – in the lead up to the war. Fundamentally, these reports are about transparency and holding our government accountable, and making sure these mistakes never happen again.”

I explain in The Honest Broker that there is an important difference between serving as an issue advocate and serving as an honest broker. In this situation, the distinction was lost. The Administration had every right to make whatever case to the public it wanted to make.

However, as the second report linked above argues, it warped the process of intelligence gathering in order to generate information that supported its desired outcomes and suppress information that did not. This represented a pathological politicization of the intelligence community and limited the scope of options available for debate among the public and policy makers.

Protecting the function of honest brokering among relevant experts is hard to do.
