Archive for the ‘Risk & Uncertainty’ Category

Research Takes First Step on Tolerance of Nanoparticles

June 21st, 2009

Posted by: admin

The Scientist has a capsule review of a 2007 research article on the ability of mice to purge themselves of nanoparticles. The full article is available in Nature Biotechnology (subscription/purchase required). The review also includes notes on some subsequent work in this area.

As nanotechnology matures, putting more and more products built around nanometer-scale particles on the market, the risk of exposure to these particles needs to be assessed and regulated. Determining which particle sizes the body can expel and which accumulate helps shape the questions for the regulatory landscape, but it does not close off exploration into potential risks.

While other regulatory models can provide useful examples, it’s important to remember that the scale of these particles may raise unique concerns. I would hope that the hard lessons of chemical regulation – where accumulated exposure flew under the regulatory radar for years (see Krimsky’s Hormonal Chaos for a good overview) – could be used to good effect here. It may not be enough that the small particles can be expelled. Enough transitory exposures over time could have unfortunate effects, much as enough small doses of certain chemicals have had dramatic effects on endocrine systems.

Encourage Risky Research Through Finance? Maybe

April 2nd, 2009

Posted by: admin

That appears to be one of the suggestions for how to encourage more risky research in a recent article in Physics World (H/T Scientific Blogging). Risky in this case means different, unique, cutting-edge work, not work with a high potential for harm or damage. This is another effort to address a perennial research management question: how do you encourage enough of this kind of risky research to keep getting the radical breakthroughs that push science and technology along? Science is an inherently conservative (small-c) process, as it requires the agreement of the community to establish its findings. Those who take risks and fail lose a lot, as tenure committees, journal boards, and funding organizations don’t deal well with ‘failed’ experiments.

(more…)

Flood Control and Risk in the Netherlands

January 4th, 2009

Posted by: admin

The January issue of Wired also has an article on Dutch efforts currently underway to expand their flood control plans. With much of the country at or below sea level, the Dutch began expanding their dam and flood control systems in the early part of the 20th century. The Wired article describes the next big phase in the expansion of this system, in which the annual chance of flood in high-risk areas will be reduced from one in 10,000 to one in 100,000. By comparison, New Orleans is protected to a level where the chance of flood in a given year is one in 100 (though against more severe hurricanes than the storms that cause flooding in the Netherlands).
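To put those annual risk levels in perspective, here is a back-of-the-envelope calculation (my own rough illustration, not from the Wired article) of the chance of seeing at least one flood over a long planning horizon, assuming independent years and a constant annual flood probability:

```python
# Rough illustration (not from the Wired article): chance of at least one
# flood over a planning horizon, assuming independent years and a constant
# annual flood probability.

def prob_at_least_one_flood(annual_prob: float, years: int) -> float:
    """P(at least one flood in `years` years) = 1 - (1 - p)**years."""
    return 1.0 - (1.0 - annual_prob) ** years

horizon = 100  # years
for label, p in [
    ("Dutch current standard (1 in 10,000)", 1 / 10_000),
    ("Dutch planned standard (1 in 100,000)", 1 / 100_000),
    ("New Orleans (1 in 100)", 1 / 100),
]:
    print(f"{label}: {prob_at_least_one_flood(p, horizon):.1%} over {horizon} years")
```

Under these assumptions a one-in-100 annual risk implies roughly a 63% chance of at least one flood in a century, versus about 1% at the current Dutch standard and about 0.1% at the planned one – which is what makes the difference in standards so striking.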

This is a particularly daunting public works project, expected to take a century. Because the Dutch have experience with long-term projects of this scope, it’s not likely to be as much of a shock as it would be if this were their first attempt at long-range geographic modification. However, it’s hard to see a 100-year project surviving the rough-and-tumble politics of most democracies. While there are certainly climate change implications in what they are doing, the adaptation practices of the Dutch will set an example for engineers and urban planners around the world. It’s worth taking a look.

Technology and the Recent Troubles

September 25th, 2008

Posted by: admin

The title of this post may seem a bit misleading, because I am not going to write about what many readers would automatically assume – the role of high-tech companies or products in the recent shenanigans. After all, that nonsense was at least one or two speculative bubbles ago. What I want to do is take a few lines to point out that the financial sector is a very real, almost tangible example of the failure of imagination often found in considerations of technology – and, by extension, technology policy.

Financial instruments, whether we’re talking about basic mortgages (sub-prime or otherwise) or more complicated derivatives like the credit default swaps that may have had a hand in locking up several financial services companies, are technologies.  They aren’t as tangible, so to speak, as other technologies, but their creation still has consequences – consequences rarely engaged until after they have emerged and changed the world – for better or worse.  But I don’t expect some kind of ELSI (Ethical, Legal and Social Implications) research program to emerge in business schools or in Science and Technology Studies departments, focused on the financial sector.  Ironically enough, there’s probably no money in it.

Changes in the business world can often be traced to changes in the instruments and forms of organization used by companies to achieve their goals. Corporations didn’t always exist, after all. The form had to be created. Alfred Chandler, in his historical scholarship, has focused on the business operations behind several key industries, most of which were involved in innovation – either then or now. The Visible Hand is a revelatory (well, for me anyway) history of how the formation of corporations and partnerships was critical to the emergence of the railroads and retail in the United States during the 19th and 20th centuries. In short, capital-intensive industries needed new forms of organization to function.

(more…)

Tone Deaf Damage Control

August 9th, 2008

Posted by: Roger Pielke, Jr.

I find this hard to believe. As readers of this blog know, earlier this week NCAR director Eric Barron made the difficult decision to cut the organization’s Center for Capacity Building, which is focused on helping developing countries adapt to climate variability and change. The decision prompted a flurry of complaints and international media coverage.

I was thus surprised to see the email copied below (which was sent to a bunch of people) from Eric and UCAR Vice President Jack Fellows:

From: “Jack Fellows”
Date: August 5, 2008 7:36:07 PM EDT
To: Bierbaum Rosina , peter backlund , Kelly Redmond , kelly redmond , Brad Udall , david yates , Jonathan Overpeck , kathy hibbard , linda mearns , mary hayden , lawrence buja , tom warner , kathleen miller , philip mote , caitlin simpson , Chester J Koblinsky , lisa goddard , Nolan Doesken , Alan Robock , Mark Abbott , “Mark Z. Jacobson” , dan vimont , Anthony Janetos , greg carbone , “Rudnicki Mark” , Keith Talbert Ingram , brian oneill , Paty Romero Lankao , Liz Moyer , david battisti , urc
Cc: eric barron
Subject: 2008 AGU Fall Session on Climate Adaptation

Eric Barron and I are putting together the AGU session described below. If you have put one of these AGU sessions together, you know that I can’t promise you that it will be an oral session or the day/time it will happen, and you are restricted to no more than 3 first author abstracts for the Fall AGU mtg. That said, we’d love to have you participate in this session. Let me know if you are interested by 17 August so I can meet an 18 Aug AGU deadline. If you can’t participate but know of a great person, please forward this email on to them. Thanks. This is a large email list, so pls just respond to my email address. Jack

Description: PA03: How Can the Science Community Help Local and Regional Decision Makers Who are Exploring or Implementing Adaptation Options to Climate Change? All weather is local! The same will be true regarding adapting to climate change. This session will examine how local universities and decision makers are working together to adapt to anticipated climate change impacts and explore how these independent activities might be networked together into a “national adaptation network or model”. NOTE: we are primarily interested in people who have really interesting policy research or a project that is partnering with local and regional decision makers to deal with climate adaptation, mitigation, or even geoengineering.

Note that the time stamp of the email is after the brouhaha started over the decision to cut CCB. This appears to be an exercise in damage control – “We really care about adaptation, no really, we do. Look, we sponsored a session at AGU!” – since neither Barron nor Fellows has ever done any work in adaptation (and to my knowledge Fellows is an administrator, not a researcher).

Let me try my hand at answering the question that they pose:

How Can the Science Community Help Local and Regional Decision Makers Who are Exploring or Implementing Adaptation Options to Climate Change? . . . This session will examine how local universities and decision makers are working together to adapt to anticipated climate change impacts and explore how these independent activities might be networked together into a “national adaptation network or model”.

First thing, don’t cut a program with a 34-year track record of success in doing exactly this. Are these guys serious that they now want to discuss how to recreate the functions of the NCAR CCB at an AGU session? Seriously?

I’d love to give NCAR/UCAR management the benefit of the doubt and believe that they have a rigorous method for setting priorities and making cuts. But this sort of tone-deaf damage control/spin is not only insulting to the actual adaptation research community, it is also a sign that NCAR/UCAR may have a management problem rather than a budget problem.

Joel Achenbach on Weather Extremes

August 3rd, 2008

Posted by: Roger Pielke, Jr.

In today’s Washington Post Joel Achenbach has a smart and nuanced piece on weather extremes and climate change. The attribution of weather events and trends to particular causes is difficult and contested.

Equivocation isn’t a sign of cognitive weakness. Uncertainty is intrinsic to the scientific process, and sometimes you have to have the courage to stand up and say, “Maybe.”

For his efforts, Achenbach gets called stupid and a tool of the “deniers”. Such complaints are ironic, given that Achenbach explains how foolish it is to put too much weight on extreme events in arguments about climate change:

the evidence for man-made climate change is solid enough that it doesn’t need to be bolstered by iffy claims. Rigorous science is the best weapon for persuading the public that this is a real problem that requires bold action. “Weather alarmism” gives ammunition to global-warming deniers. They’re happy to fight on that turf, since they can say that a year with relatively few hurricanes (or a cold snap when you don’t expect it) proves that global warming is a myth. As science writer John Tierney put it in the New York Times earlier this year, weather alarmism “leaves climate politics at the mercy of the weather.”

There’s an ancillary issue here: Global warming threatens to suck all the oxygen out of any discussion of the environment. We wind up giving too little attention to habitat destruction, overfishing, invasive species tagging along with global trade and so on. You don’t need a climate model to detect that big oil spill in the Mississippi. That “dead zone” in the Gulf of Mexico — an oxygen-starved region the size of Massachusetts — isn’t caused by global warming, but by all that fertilizer spread on Midwest cornfields.

Some folks may actually get the notion that the planet will be safe if we all just start driving Priuses. But even if we cured ourselves of our addiction to fossil fuels and stabilized the planet’s climate, we’d still have an environmental crisis on our hands. Our fundamental problem is that — now it’s my chance to sound hysterical — humans are a species out of control. We’ve been hellbent on wrecking our environment pretty much since the day we figured out how to make fire.

This caused that: It would be nice if climate and weather were that simple.

And the U.S. Climate Change Science Program recently issued a report with the following conclusions:

1. Over the long-term U.S. hurricane landfalls have been declining.

2. Nationwide there have been no long-term increases in drought.

3. Despite increases in some measures of precipitation, there have not been corresponding increases in peak streamflows (high flows above 90th percentile).

4. There have been no observed changes in the occurrence of tornadoes or thunderstorms.

5. There have been no long-term increases in strong East Coast winter storms (ECWS), called Nor’easters.

6. There are no long-term trends in either heat waves or cold spells, though there are trends within shorter time periods in the overall record.

In the climate debate, you would have to be pretty foolish to allow any argument to hinge on claims about the attribution of observed extreme events to the emissions of greenhouse gases. But as we’ve noted here on many occasions, for some the climate debate is a morality tale that cannot withstand nuance, even if that nuance is perfectly appropriate given the current state of understanding. And given the public relations value of extreme events in the climate debate, don’t expect Achenbach’s reasoned view to take hold among those calling for action. As with the Bush Administration and Iraqi WMDs, for some folks the intelligence they wish existed trumps the facts on the ground.

Replications of our Normalized Hurricane Damage Work

July 14th, 2008

Posted by: Roger Pielke, Jr.

This post highlights two discussion papers that have successfully replicated our normalized hurricane damage analyses using different approaches and datasets. Interestingly, both papers claim a trend in losses after normalization, but do so only by using a subset of the data – starting in 1950 in the first case and 1971 in the second. Our dataset shows the same trends when these shorter time frames are arbitrarily selected; however, as we reported, we found no trends in the entire dataset.
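As a rough sketch of why the start year matters, here is how one might check the sensitivity of a fitted linear trend to the starting point of an annual loss series. This is illustrative only: the series below is placeholder random data, not our normalized-loss dataset, and the fitted slopes mean nothing in themselves.

```python
import numpy as np

# Placeholder data standing in for an annual normalized-loss series
# (1900-2005); NOT the actual normalized hurricane damage dataset.
rng = np.random.default_rng(0)
years = np.arange(1900, 2006)
losses = rng.lognormal(mean=1.0, sigma=1.2, size=years.size)

def linear_trend(x: np.ndarray, y: np.ndarray) -> float:
    """Ordinary least-squares slope of y against x."""
    slope, _intercept = np.polyfit(x, y, deg=1)
    return slope

# A trend fitted to a late-starting subset can differ in size (and even sign)
# from the trend over the full record, which is the point at issue here.
for start in (1900, 1950, 1971):
    subset = years >= start
    print(start, linear_trend(years[subset], losses[subset]))
```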

If you’d just like the bottom line, here it is:

I am happy to report that Nordhaus (2006) and Schmidt et al. (2008) offer strong confirmatory evidence in support of the analysis that we have presented on adjusting U.S. hurricane losses over time. What do these studies say about the debate over hurricanes and climate change? Well, almost nothing (despite the unsuccessful effort by Schmidt et al. to reach for such a connection). There is no long-term trend in the landfall behavior of U.S. hurricanes, so it is only logical that there would also be no long-term trends in normalized damage as a function of storm behavior. Those looking for insight on this debate will need to look elsewhere. If it is to be found, such a linkage will be found in the geophysical data long before it shows up in the damage data, as we asserted at our Hohenkammer workshop.

Please read on if you are interested in the details.

(more…)

Normalised Australian insured losses from meteorological hazards: 1967–2006

June 27th, 2008

Posted by: Roger Pielke, Jr.


Ryan Crompton and John McAneney of Macquarie University in Sydney, Australia have an important new paper in the August 2008 issue of the journal Environmental Science & Policy titled “Normalised Australian insured losses from meteorological hazards: 1967–2006.” (Available to subscribers here.) The paper contributes to a growing literature documenting the importance of understanding the integrated effects of societal change and climate change. The paper also underscores the central role that adaptation policies must play in climate policy.

The abstract states:

Since 1967, the Insurance Council of Australia has maintained a database of significant insured losses. Apart from five geological events, all others (156) are the result of meteorological hazards—tropical cyclones, floods, thunderstorms, hailstorms and bushfires. In this study, we normalise the weather-related losses to estimate the insured loss that would be sustained if these events were to recur under year 2006 societal conditions. Conceptually equivalent to the population, inflation and wealth adjustments used in previous studies, we use two surrogate factors to normalise losses—changes in both the number and average nominal value of dwellings over time, where nominal dwelling values exclude land value. An additional factor is included for tropical cyclone losses: this factor adjusts for the influence of enhanced building standards in tropical cyclone-prone areas that have markedly reduced the vulnerability of construction since the early 1980s.

Once the weather-related insured losses are normalised, they exhibit no obvious trend over time that might be attributed to other factors, including human-induced climate change. Given this result, we echo previous studies in suggesting that practical steps taken to reduce the vulnerability of communities to today’s weather would alleviate the impact under any future climate; the success of improved building standards in reducing tropical cyclone wind-induced losses is evidence that important gains can be made through disaster risk reduction.
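The normalisation the abstract describes can be sketched roughly as follows. This is a hypothetical illustration of the named adjustment factors (dwelling numbers, average nominal dwelling values, and a building-standards factor for tropical cyclone losses), not the authors’ code, and the numbers in the example are made up.

```python
def normalise_loss(original_loss: float,
                   dwellings_2006: float, dwellings_event_year: float,
                   value_2006: float, value_event_year: float,
                   building_standards_factor: float = 1.0) -> float:
    """Hypothetical sketch: scale an insured loss by the change in dwelling
    numbers and in average nominal dwelling value (excluding land) between
    the event year and 2006. For tropical cyclone losses an extra factor
    adjusts for the reduced vulnerability of post-1980s construction."""
    return (original_loss
            * (dwellings_2006 / dwellings_event_year)
            * (value_2006 / value_event_year)
            * building_standards_factor)

# Made-up example: a A$100m loss in a year with half as many dwellings,
# each worth a third of 2006 nominal values, in a cyclone-affected region.
print(normalise_loss(100e6, 8.0e6, 4.0e6, 300_000, 100_000,
                     building_standards_factor=0.8))
```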

The text of the paper includes this discussion:

The collective evidence reviewed above suggests that societal factors – dwelling numbers and values – are the predominant reasons for increasing insured losses due to natural hazards in Australia. The impact of human-induced climate change on insured losses is not detectable at this time. This being the case, it seems logical that in addition to efforts undertaken to reduce global greenhouse gas emissions, significant investments be made to reduce society’s vulnerability to current and future climate and the associated variability. Employing both mitigation and adaptation contemporaneously will benefit society now and into the future.

We are aware of few disaster risk reduction policies explicitly developed to help Australian communities adapt to a changing climate, yet disaster risk reduction should be core to climate adaptation policies (Bouwer et al., 2007). . .

An increased threat from bushfires under human-induced climate change is often assumed. Indeed Pitman et al. (2006) and others anticipate an increase in conditions favouring bushfires. However, analyses by McAneney (2005) and Crompton et al. (in press) suggest that the main bushfire menace to building losses will continue to be extreme fires and that the threat to the most at-risk homes on the bushland–urban interface can only be diminished by improved planning regulations that restrict where and how people build with respect to distance from the forest. Disaster risk reduction of this kind would immediately reduce current and future society’s vulnerability to natural hazards.

Please read the whole paper.

Normalized U.S. Earthquake Losses: 1900-2005

June 18th, 2008

Posted by: Roger Pielke, Jr.


Kevin Vranes and I have just had a paper accepted for publication on historical earthquake losses in the United States.

Vranes, K., and R. A. Pielke, Jr., 2008 (in press). Normalized earthquake damage and fatalities in the United States: 1900 – 2005, Natural Hazards Review. (PDF)

The dataset that we present in the paper will allow for a range of interesting analyses. Here is the abstract:

Damage estimates from 80 United States earthquakes since 1900 are “normalized” to 2005 dollars by adjusting for inflation, increases in wealth and changes in population. Factors accounting for mitigation at 1% and 2% loss reduction per year are also considered. The earthquake damage record is incomplete, perhaps by up to 25% of total events that cause damage, but all of the most damaging events are accounted for. For events with damage estimates, cumulative normalized losses since 1900 total $453 billion, or $235 billion and $143 billion when 1% and 2% mitigation is factored, respectively. The 1906 San Francisco earthquake and fire adjusts to $39–$328 billion depending on the assumptions and mitigation factors used, likely the most costly natural disaster in U.S. history in normalized 2005 values. Since 1900, 13 events would have caused $1B or more in losses had they occurred in 2005; five events adjust to more than $10 billion in damages. Annual average losses range from $1.3 billion to $5.7 billion, with an average across datasets and calculation methods of $2.5 billion, below catastrophe model estimates and estimates of average annual losses from hurricanes. Fatalities are adjusted for population increase and mitigation, with five events causing over 100 fatalities when mitigation is not considered, and four (three) events when 1% (2%) mitigation is considered. Fatalities in the 1906 San Francisco event adjust from 3,000 to over 24,000, or 8,900 (3,300) if 1% (2%) mitigation is considered. Implications for comparisons of normalized results with catastrophe model output and with normalized damage profiles of other hazards are considered.
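As a minimal sketch of the kind of adjustment the abstract describes, here is a simple multiplicative normalization with a compounding mitigation discount. The structure follows the factors named above (inflation, wealth, population, and a 1% or 2% per-year mitigation reduction), but the function and all input numbers are hypothetical illustrations, not the paper’s actual method or data.

```python
def normalize_damage(damage: float,
                     inflation_factor: float,
                     wealth_factor: float,
                     population_factor: float,
                     years_since_event: int,
                     mitigation_rate: float = 0.0) -> float:
    """Hypothetical sketch: scale a historical damage estimate to
    2005-equivalent dollars using inflation, wealth, and population factors,
    then apply an assumed mitigation reduction of `mitigation_rate` per year
    (e.g. 0.01 or 0.02), compounded over the years since the event."""
    base = damage * inflation_factor * wealth_factor * population_factor
    return base * (1.0 - mitigation_rate) ** years_since_event

# Made-up inputs for illustration only (not the paper's 1906 estimates).
raw_loss_1906 = 500e6
print(normalize_damage(raw_loss_1906, 20.0, 5.0, 4.0, 2005 - 1906))                        # no mitigation
print(normalize_damage(raw_loss_1906, 20.0, 5.0, 4.0, 2005 - 1906, mitigation_rate=0.01))  # 1% per year
```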

Good Intelligence

June 5th, 2008

Posted by: Roger Pielke, Jr.

Chapter 7 of The Honest Broker talks about the role of intelligence in the decision to go to war in Iraq. Today, the Senate Intelligence Committee released two reports (PDF, PDF) documenting how the Bush Administration misled policy makers and the public by politicizing government intelligence. Here is what Senator Jay Rockefeller had to say in a press release:

“Before taking the country to war, this Administration owed it to the American people to give them a 100 percent accurate picture of the threat we faced. Unfortunately, our Committee has concluded that the Administration made significant claims that were not supported by the intelligence,” Rockefeller said. “In making the case for war, the Administration repeatedly presented intelligence as fact when in reality it was unsubstantiated, contradicted, or even non-existent. As a result, the American people were led to believe that the threat from Iraq was much greater than actually existed.”

“It is my belief that the Bush Administration was fixated on Iraq, and used the 9/11 attacks by al Qa’ida as justification for overthrowing Saddam Hussein. To accomplish this, top Administration officials made repeated statements that falsely linked Iraq and al Qa’ida as a single threat and insinuated that Iraq played a role in 9/11. Sadly, the Bush Administration led the nation into war under false pretenses.

“There is no question we all relied on flawed intelligence. But, there is a fundamental difference between relying on incorrect intelligence and deliberately painting a picture to the American people that you know is not fully accurate.

“These reports represent the final chapter in our oversight of prewar intelligence. They complete the story of mistakes and failures – both by the Intelligence Community and the Administration – in the lead up to the war. Fundamentally, these reports are about transparency and holding our government accountable, and making sure these mistakes never happen again.”

I explain in The Honest Broker that there is an important difference between serving as an issue advocate and serving as an honest broker. In this situation, the distinction was lost. The Administration had every right to make whatever case to the public it wanted to make.

However, as the second report linked above argues, it warped the process of intelligence gathering in order to generate (suppress) information that supported (did not support) its desired outcomes. This represented a pathological politicization of the intelligence community and limited the scope of options available for debate among the public and policy makers.

Protecting the function of honest brokering among relevant experts is hard to do.