Archive for August, 2008

Gustav’s Historical Analogues

August 31st, 2008

Posted by: Roger Pielke, Jr.

Hurricane Gustav is heading for the Louisiana coast. This post briefly discusses some relevant historical analogues and the normalized damage amounts from our research for those storms. Here is the latest Gustav track forecast map from the National Hurricane Center.

The very useful historical hurricane track tool from the NOAA Coastal Services Center (CSC) indicates that this region has seen plenty of hurricane activity since 1900. The following figure shows all tracks of storms that reached Category 3 or higher intensity and passed within 200 nautical miles of Houma, Louisiana (which is roughly the present size of the NHC cone at landfall).

Focusing on a much smaller area, simply to provide a few historical analogues, here is a map of the storms that passed closest to the center of the current area of uncertainty.

Here is a table of normalized damage from this subset of storms (from this source in PDF):

Year Cat Norm$
1909 3 $1.1B
1915 4 $2.5B
1965 3 $17.9B (Betsy)
1974 3 $1.0B (Carmen)
1992 3 $1.9B (Andrew)
2005 3 $81.0B (Katrina)

How might we interpret this data? Here are a few thoughts:

1. There is a huge difference between the utter devastation of New Orleans (Katrina), the partial devastation of New Orleans (Betsy), and a landfall farther to the west. However, Rita (2005, Cat 3) made landfall closer to the LA-TX border and resulted in $10.0B in damage.

2. The strength of the storm makes a huge difference. Over our entire dataset, we find that Category 4 storms result in 5.4 times the average damage of a Category 3 storm.

3. Population makes a big difference. For Category 3 storms, regions with more than 3 million people see 2.6 times the damage of regions with fewer than 1 million; for Category 4 storms the ratio is 4.7. We did not look at finer levels of population. If you’d like to look at the population of the various locations under warning for Gustav, you can use the NOAA CSC coastal population tool.

So how much damage will Gustav cause?

This is an impossible question to answer with any accuracy without knowing the exact point of landfall or the storm’s intensity at that time. But if Gustav does avoid a direct hit on New Orleans and makes landfall as a Category 3 (two very important qualifications), then it is safe to conclude that losses won’t remotely approach those of Katrina and will likely be an order of magnitude less, perhaps lower still. If levees are overtopped or breached in New Orleans, damage could escalate rapidly.
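To make the arithmetic behind these points concrete, here is a minimal back-of-the-envelope sketch in Python. It is an illustration only, not our research method: it averages the normalized damage of the Category 3 analogues in the table above that did not strike New Orleans directly, then applies the Category 4 multiplier noted in point 2.

```python
# Back-of-envelope illustration only -- not the authors' model.
# Average the normalized damage of the Category 3 analogues that missed a
# direct hit on New Orleans, then scale by the rough Cat 4 multiplier quoted
# in point 2. Figures are in billions of normalized dollars.

cat3_analogues = {
    "1909 storm": 1.1,
    "Carmen (1974)": 1.0,
    "Andrew (1992)": 1.9,
    "Rita (2005)": 10.0,   # landfall nearer the LA-TX border
}

CAT4_TO_CAT3_RATIO = 5.4   # average Cat 4 damage / average Cat 3 damage (point 2)

base_estimate = sum(cat3_analogues.values()) / len(cat3_analogues)
print(f"Cat 3, no direct New Orleans hit: ~${base_estimate:.1f}B")
print(f"Cat 4 on a similar track:         ~${base_estimate * CAT4_TO_CAT3_RATIO:.1f}B")
```

With the table’s values this gives roughly $3.5B for a Category 3 and about $19B for a Category 4 on a similar track, broadly consistent with the “order of magnitude less than Katrina” conclusion above.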

How Think Tanks Use “Sound Science” – Neither Soundly Nor Scientifically

August 30th, 2008

Posted by: admin

Besides its emphasis on climate science policy, Prometheus has often wrangled with the common practice of appealing to science (sometimes referred to as “sound science”) as a means to close a policy debate.  There are many problems with such a rhetorical and political strategy, not least the way it hides debate and differences inside a closed box of ‘expertise’ – an area that Science and Technology Studies has mined well, and one that Science and Technology Policy Research would do well to visit more often.

Think tanks are no strangers to this appeal to authority.  Of course, those with more ideological purposes (Heritage, the Center for American Progress, Cato) are easier to spot when they make these appeals to advance their particular agendas.  Those with no apparent ideological axe to grind are harder to sniff out.

Let me present today’s example, from the Information Technology and Innovation Foundation – a small think tank that usually does fine work with its various economic analyses of IT- and innovation-related topics.  It has announced an event on September 10 in Washington titled Is the U.S. Falling Behind in Science and Technology or Not?  The associated press release explains why it asks the question:

Over the last several years a number of reports – headlined by Rising Above the Gathering Storm – have raised alarm over the deteriorating state of U.S. science and technology (S&T) competitiveness and documented how the country is falling behind in key building blocks of the S&T base.

Now bursting onto the scene is a new report from the RAND Corporation, U.S. Competitiveness in Science and Technology, that calls into question what appeared to be settled assessments of slipping U.S. S&T competitiveness. Covered widely in magazines like The Economist (What Crisis? June 12, 2008 issue), RAND’s report has been interpreted by many to suggest that worries about the United States losing its edge in S&T are actually overblown: everything’s just fine.

But RAND’s report contains serious structural and analytic flaws that misread the fundamental position of U.S. science and technology competitiveness. In a new report, RAND’s Rose-Colored Glasses: How RAND’s Report on U.S. Competitiveness in Science and Technology Gets it Wrong, ITIF will offer a detailed critique of the RAND report showing that in contrast to RAND’s rosy assessment, America’s lead on a number of key S&T indicators is eroding rapidly, where not vanishing entirely.

I want to be careful here.  The event hasn’t happened yet, but the framing of the discussion is a great example of how a policy debate can easily be hidden away by appeals to sound science or, in the case of this particular event, to “settled assessments.”

And in the interest of noting where Prometheus stands: we have criticized Rising Above the Gathering Storm (RAGS) to the point where it has its own category.  Both Roger and I have viewed the RAND report targeted by ITIF in a favorable light.  It lacks the heated rhetoric and dubious calculations of RAGS, and it makes the point that action is necessary regardless of the state of U.S. competitiveness (something ITIF conveniently ignores).  The notion of a steady-state equilibrium in science and technology support should have been discredited once technology was incorporated into economic growth models, but that’s a post for another day, if not another blog.

Scientific consensus does not dictate particular policy actions; look at the climate change debates for proof of this.  The press release is framed in such a way that this ‘bad’ RAND report must be fought, or the policy consensus over improving U.S. competitiveness (such as it is) will fail.  But ITIF won’t fight the report on its tone (which appears to be the main source of disagreement); it will fight it on supposed flaws in its science.  All the while, flaws in the science of reports supportive of ITIF’s position (the sky is falling, we’re doomed unless science gets more money, you know the drill) are ignored (and ITIF isn’t the only one to ignore those flaws, or the related criticism).

There’s a fine line between rhetoric and, well, I’ll call it delusion (Harry Frankfurt has another name for it), and I think this falls on the wrong side of it.  Instead of picking apart reports read by very few of the people who matter in competitiveness decisions, political effort is better spent persuading the right people that certain programs need full funding.

Oil from the Persian Gulf

August 29th, 2008

Posted by: Roger Pielke, Jr.

In his speech last night at the Democratic National Convention Barack Obama said the following:

And for the sake of our economy, our security, and the future of our planet, I will set a clear goal as president: in ten years, we will finally end our dependence on oil from the Middle East.

I’ll discuss this in depth next week, but for now I simply would like to put up a placeholder indicating how much oil the U.S. currently imports from the Persian Gulf region, as provided by the US EIA.

[UPDATE: The Environmental Capital blog of the WSJ has a critical take here.]

Consider Submitting Your S&T Policy Research to Minerva

August 28th, 2008

Posted by: Roger Pielke, Jr.

Peter Weingart, of the Institute of Science and Technology Studies at the University of Bielefeld and one of the world’s leading science policy scholars, has taken over the editorship of the journal Minerva. He writes in an editorial essay:

Like my predecessors I see Minerva’s terms of reference broadly concerned with science, technology, higher education and the policies addressing them. The specialization that has afflicted this area of study has two facets. One is that scholarship has become more professional. The other, however, is that many writings in science studies have become inward looking to the point of being trivial. It is particularly evident that both science policy and economic perspectives have been neglected. The obvious task is to maintain and even strengthen the professional quality of the work while at the same time (re-)opening the perspective of science studies. This means explicitly linking historical, sociological, economic, political and philosophical reflection of scientific knowledge production, empirical analysis of the interaction between political and legal institutions that shape knowledge production and are constantly shaped by it. Every author should ask her/himself when submitting an article to whom is it interesting and why. If the answer is: ‘to the few colleagues working on the same or similar topic’, one of the more specialized journals may be better suited than Minerva. Minerva should be the forum where, based on professional scholarship, broader issues of science and higher education should be raised and debated, whether they be theoretical or political. The framing of these issues should not be dictated by the borderlines of disciplines but by the question regarding which kind of knowledge is needed in order to promote the discussion.

Science and technology policy scholars should consider Minerva as an outlet for their research publications.

Arguing Both Sides at Climate Progress

August 27th, 2008

Posted by: Roger Pielke, Jr.

I haven’t engaged much with Joe Romm of late, but I can’t let this one pass. When Tom Wigley, Chris Green and I published our analysis of the spontaneous emissions reductions built into all IPCC scenarios (PDF), Joe Romm put up a post titled: “Why did Nature run Pielke’s pointless, misleading, embarrassing nonsense?”

It turned out in subsequent discussions that Romm didn’t even understand our analysis, failing to appreciate the difference between a reference scenario and a mitigation scenario.

So it was with some surprise that I saw Joe, over at the Cato Unbound climate policy debate, invoking our Nature paper as evidence in support of the idea that the IPCC scenarios have built-in assumptions about aggressive reductions in carbon and energy intensities. I have long argued that Joe and I differ in our interpretation of the significance of the Nature paper’s findings, not over the analysis itself.

However, Joe never posted an apology for mistakenly trashing our paper, or a correction noting that he in fact finds the analysis sound. This leads to the embarrassing circumstance in which Joe Romm is over at the Cato site using our paper to support his arguments, while his Climate Progress guest blogger Ken Levenson argues on Joe’s behalf in another online debate, at the Economist, against the analysis in the Nature article, explaining that “Joe actually throughly debunked that Nature article too” and linking to several of Joe’s many posts trying (unsuccessfully) to “debunk” our analysis.

Those guys at Climate Progress seem to want things both ways — the analysis in our article is both “debunked” and an authority. Maybe Joe Romm should set the record straight?

Yohe vs. Lomborg

August 24th, 2008

Posted by: Roger Pielke, Jr.

There is an interesting debate over at the Guardian between Bjorn Lomborg and Gary Yohe, who is the lead author of the climate change analysis that is part of the 2008 Copenhagen Consensus exercise. Yohe is also a lead author for the IPCC 2007 assessment report on mitigation.

Yohe claims to have been grossly, and purposely, misrepresented by Lomborg (Lomborg’s Guardian piece is here, Yohe’s response is here):

Bjorn Lomborg, author of The Skeptical Environmentalist, makes headlines around the world by arguing that capping carbon dioxide emissions is a waste of resources. He recently published a piece in the Guardian in which he dismissed efforts to craft a global carbon cap as “constant outbidding by frantic campaigners” to “get the public to accept their civilisation-changing proposals”.

To support his argument, Lomborg often cites the Copenhagen Consensus project, a 2008 effort intended to inform climate negotiators. But there’s just one problem: as one of the authors of the Copenhagen Consensus Project’s principal climate paper, I can say with certainty that Lomborg is misrepresenting our findings thanks to a highly selective memory.

What is the nature of the misrepresentation? Yohe states:

Lomborg claims that our “bottom line is that benefits from global warming right now outweigh the costs” and that “[g]lobal warming will continue to be a net benefit until about 2070.” This is a deliberate distortion of our conclusions.

We did find that climate change will result in some benefits for developed countries, but only for modest climate change (up to global temperature increases of 2C – not the 4 degrees that Lomborg is discussing in his piece).

In the good-guy/bad-guy Manichean climate debate, making sense of such back-and-forths can be difficult, but here is my attempt. I invite Lomborg, Yohe, et al. to comment here on my interpretation.

First, let’s take a look at Yohe et al. (here in PDF) and see what they actually said. On temperature change, Yohe et al. present this in their Figure 3.2, reproduced below.

I’ve annotated it to show that the temperature change associated with business as usual is just under 3.5 degrees in 2100. So it is indeed less than the 4 degrees suggested by Lomborg, but far above the 2 degrees suggested by Yohe. Neither Lomborg nor Yohe gets good marks for precision here, but perhaps Yohe is looking at 2075 and Lomborg at 2100 and they are talking past each other (??).

Now on the issue of benefits of climate change, Yohe et al. present this in their Figure 4.1, reproduced below.

This figure clearly shows that the analysis of Yohe et al. concludes that “[g]lobal warming will continue to be a net benefit until about 2070.” The big red arrow that I’ve added shows where the impacts of global warming go from positive to negative, around 2070. Based on this graph, it is difficult for me to see the egregious misrepresentation that Yohe has claimed.

Perhaps what we are seeing here is a disagreement over values, not economic analyses. But in the climate debate, it is often a short journey from reasoned disagreement over analyses and their implications to allegations of willful misrepresentation and worse. A shame.

Science on NCAR

August 22nd, 2008

Posted by: Roger Pielke, Jr.

This week’s issue of Science has a long, investigative article on the troubles at the National Center for Atmospheric Research (NCAR), which we have been discussing here a bit over the past few weeks. The focus of the article is on NCAR’s climate modeling efforts, and it explores the budget and management issues that are currently troubling the institution.

The article, by Eli Kintisch, confirms what we have claimed about NCAR’s budget situation — specifically, NCAR’s woes can be traced to commitments made based on overly optimistic expectations for budget growth, and not budget cuts:

In 2004, NCAR’s then-director, Timothy Killeen, had launched a major restructuring that included expanding the lab, banking on a 2002 congressional promise of a 5-year doubling of the budget of its main funder, the National Science Foundation (NSF). But Congress failed to keep its promise, and NSF’s contribution to NCAR, instead of rising by double-digit percentages, has grown by only 2.6% annually in the past 5 years.

Inflation has averaged 2.9% per year over the same period. The Science article thus confirms the analysis I presented here a few weeks ago: after accounting for inflation, NCAR’s budget is down since 2004, but only by about $0.3 million.
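The exact dollar figure depends on the base budget and the inflation series used, but as a rough illustration of the mechanics, the cumulative real change implied by those two rates can be computed directly (a minimal Python sketch, not the analysis from the earlier post):

```python
# Illustrative arithmetic only: a 2.6% nominal annual increase combined with
# 2.9% average annual inflation yields a small real-terms decline over the
# five-year period discussed in the Science article.

nominal_growth = 0.026   # NSF contribution growth per year (from the article)
inflation = 0.029        # average inflation per year (from the post)
years = 5

real_change = ((1 + nominal_growth) / (1 + inflation)) ** years - 1
print(f"Cumulative real change over {years} years: {real_change:+.1%}")
# roughly -1.4%, i.e. a modest decline in purchasing power rather than a nominal cut
```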

The article has an interesting subsection titled “Gambling on Growth,” which explains how NCAR gained a reputation for climate modeling but sought to expand. Here is an excerpt:

But Killeen, who became director in 2000, wanted NCAR to do more, including increasingly detailed forecasts of the impacts of climate change and interdisciplinary studies on weather. So in 2004, he regrouped existing divisions into ESSL and labs for Earth observations, computing, and airborne-weather projects. He created a fifth lab to respond to growing interest in the societal impacts of climate change. Within the labs, Killeen also set up institutes devoted to interdisciplinary work and to the application of modeling and mathematical methods in geoscience. “It was a bold new initiative,” says Richard Anthes, president of the University Corporation for Atmospheric Research in Boulder, which operates NCAR for NSF.

Killeen hoped that NSF would finance the expansion as its overall budget grew. NSF’s contribution to NCAR did rise by 19% between 2001 and 2004, but in the past 5 years it has increased by only 10%. That below-inflationary rise has triggered “chronic wasting disease” at NCAR, says Anthes. It also spawned fears among some scientists about the cost of the new bureaucracy. Managers estimate that the reorganization has added $5 million in staffing and other administrative costs over 4 years. “Shouldn’t we really be about putting the money instead into scientific programs?” NCAR veteran scientist Peter Gilman recalls asking an assembly at NCAR in 2004. But without a growing contribution from NSF, Killeen was forced to ask NCAR managers to tighten their belts, including dipping into research funds to meet other expenses.

The article places the blame for NCAR’s woes squarely at the feet of Tim Killeen, its former director, who has moved to the NSF division that oversees NCAR (itself somewhat troubling, given how readily his peers seem willing to blame him for NCAR’s troubles). Placing the blame on Killeen is a bit unfair, as Killeen acted under the guidance and approval of UCAR, the organization responsible for managing NCAR and accountable for its performance. Rick Anthes, president of UCAR, seems all too willing to shift responsibility for the management decisions entirely onto Killeen, when in fact Anthes is ultimately responsible for the problems at NCAR.

Eric Barron, who has succeeded Killeen, may have walked into a buzz saw in taking the job. I wonder how deeply he was informed of NCAR’s troubles before taking the reins, as he is certainly on the hot seat now, maybe a bit unfairly. If Barron is paying attention, he will notice how readily Rick Anthes has thrown Tim Killeen under the bus, and perhaps he will create a bit of space between NCAR and UCAR (though actions thus far suggest the opposite). After all, UCAR is not much without its crown jewel.

Finally, the article does leave unanswered a question about staffing levels that I discussed earlier this week:

The resulting belt-tightening has meant pink slips for 55 employees since 2003 (out of a workforce that has averaged 800 since then) and not replacing 77 others who retired or left.

Official figures show about 900 employees, so perhaps some are not included in the number Kintisch used.

The bottom line is that NCAR has some deep problems, which Kintisch describes in some detail, with a surprising number of climate modelers willing to go on the record with strong criticisms of NCAR and UCAR management. One thing seems clear: more money won’t improve bad management, and may even have the opposite effect. NSF needs to get UCAR and NCAR in line before entities higher up the food chain step in and start asking questions.

Not Nearly Top of the League Table

August 21st, 2008

Posted by: Roger Pielke, Jr.

Over at Dot Earth, Andy Revkin points to a call by climate science organizations for a doubling of their current level of funding during the period 2010-2014, asking for almost $20 billion (available here in PDF), as part of the priorities for the next U.S. administration.

The incoherence of the request is remarkable. They invoke Hurricane Katrina as an example of the sort of costly disaster that better models and predictions will help us deal with. However, the forecasts for Katrina were near perfect, and the American Meteorological Society concluded that, “Hurricane Katrina reminds us that even with excellent track forecasts, the United States remains vulnerable to large losses of life from hurricanes.” A large body of research clearly shows that improved forecasts are not the main obstacle to reducing economic or human losses from extreme events, especially in developing countries.

So while it makes sense to invest in the science of modeling and prediction, and maybe even to increase that investment, the transition document fails to make this case. To be fair, now that it has been generally accepted that climate forecasts are plenty good enough for action, we should expect to see a search for new justifications for increased funding, and it will probably take a while for this community to work out the new arguments, as this document indicates.

But if you think that the justifications offered by the climate community are weak, they’ve got a lot of room for additional hyperbole and bad argumentation if they want to catch up to the neurotechnology community, which is asking for $1 billion over the next five years to spin up a new U.S. National Neurotechnology Initiative. That initiative is based on a remarkable claim, found in the legislation introduced earlier this year by Senators Murray and Domenici (here in PDF):

Nearly 100,000,000 Americans suffer from brain or nervous system disease, injury, or disorder . . .

That would be 1 in 3 Americans. Like I said, the climate scientists have a ways to go before their poor justifications for new funding sink to this level. Anyone who thinks that they have a more impressive example of a bad justification for increased science funding is invited to share in the comments!

The Proper Grounding of Policy – Science is Insufficient

August 20th, 2008

Posted by: admin

David Guston has a policy commentary in the latest issue of Nature (454:7207, pp. 940-941, 21 August 2008) in which he discusses the capability of policy to influence innovative research.  It’s worth a read in its own right (as most of the Nature articles on innovation are), but I want to take a couple of his points as the start of a discussion on what should ground policy.  From Guston’s article:

“Policies are, and should rightly be, about articulating public values.” (940, referencing Bozeman’s Public Values and Public Interest)

The second premise [of three that Guston considers to underlie contradictions within innovation policy] is that policy is supposed to be grounded on a clear understanding of the natural world.

Guston goes on to note that science provides an incomplete and changing picture of the world.  I agree, and certainly think that grounding policy in a clear understanding of the natural world – in good natural science – is insufficient.  Even adding the work of the social scientists and humanities scholars will not ground policy as well as it might be grounded.

What I think is also missing here is a clear understanding of the policy world.  In other words, it’s not enough to know what the science (or technology) says about specific policy outcomes or particular research portfolios.  The linkage between those and the values articulated in policies is tenuous at best, and often absent.  Knowing the scientific or technical consensus behind a particular issue or concern is fine, but it won’t tell me what the best policy instruments are for making sure that the articulated public values are achieved.  This is true even for those policies that Harvey Brooks called “policy for science.”  I do not mean to say that knowledge and values are incommensurate, but that different processes and questions are needed to produce knowledge than are needed to make sure values are properly articulated through policy.

Policy research (or science and technology policy research, if we want to be specific) must complement other scholarly research in order to help policymakers ensure that their policies properly articulate the values they intend.  While the National Science Foundation’s (NSF) Science of Science Policy (SciSP) program will likely produce new knowledge about innovation, metrics, and research, it remains to be seen whether that program can say anything about which policy instruments are most effective in articulating the values that policymakers want.  I’ve been a skeptic of this program for a while, and this ‘missing half’ of SciSP is a big reason why.  Maybe NSF is not well suited to handle that second half.  But there is a big unanswered question about how the knowledge produced through this program will be used by policymakers, if by anyone at all.

Consistent With Chronicles: A New Record

August 19th, 2008

Posted by: Roger Pielke, Jr.

As we’ve documented here on many occasions, some climate scientists like to assert that recent observations of weather and short-term climate are “consistent with” predictions from climate models (see also this essay).

A lot of attention has been paid to recent global average temperature trends, because they have not shown an increase since 1998, 2000, or 2001 (to pick three years often cited in such discussions). An Australian newspaper recently commented on this:

A careful analysis of global temperature graphs from each of the measurement agencies confirm that – despite variations between them – there has not been any notable warming since 2000. Depending on which graphs you use, global temperatures since 2000 have been more or less flat. Some, such as the GISS data, show a modest rise, while others show negligible movement and even a small fall in recent years.

Sceptics like to use graphs that date from 1998 because that was the hottest year on record due to El Nino influences and therefore the temperature trends for the decade look flattest when 1998 is the starting point.

But ultimately this is a phony war because most mainstream scientists do not dispute that global temperatures have remained relatively flat during the past decade. Where they differ with the sceptics is on how this outcome should be interpreted.

“The changes in temperature over the past 10 years have basically plateaued,” says Andy Pitman, co-director of the Climate Change Research Centre at the University of NSW. “But scientists did not anticipate a gradual year-by-year warming in temperature. What matters is the long-term trend. This outcome does not change any of the science but it does change the spin climate deniers can put on it.”

What I found most interesting in the article, aside from the denigration of people who want to understand recent trends, was an assertion by Monash University’s Amanda Lynch (a climate scientist and former faculty colleague here at Colorado), who suggested that it might take 40 years (!) of no warming or cooling before observations would be inconsistent with predictions:

Most scientists are adamant that any assessment of climate change based on only 10 years of data is not only meaningless but reckless.

“From a climate standpoint it is far too short a period to have any significance,” says Amanda Lynch, a climate change scientist at Melbourne’s Monash University. “What we are seeing now is consistent with our understanding of variability between decades. If we hung about for another 30 years and it kept going down, then you might start to think there is something we don’t understand. But the evidence at this point suggests this is not something we should hang around and wait for.”

Excuse me? Another 30 years? That is a record in the “consistent with” chronicles.

Meantime, I continue to applaud serious and rigorous efforts to compare observations with predictions, in real time, and efforts to interpret their significance. The best example of such an effort continues to be the solid (but apparently reckless;-) work by Lucia Liljegren.
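For readers curious about what such an observations-versus-predictions comparison looks like mechanically, here is a minimal sketch. It is not Lucia’s method, and every number in it is invented: it simply fits a least-squares trend to a short run of monthly temperature anomalies and checks whether an assumed projected trend falls within a rough confidence interval.

```python
# Minimal sketch of an observations-vs-predictions comparison.
# Not Lucia Liljegren's method; all numbers below are hypothetical.

import numpy as np
from scipy import stats

anomalies = np.array([0.42, 0.38, 0.55, 0.47, 0.40, 0.52,
                      0.44, 0.35, 0.49, 0.41, 0.46, 0.39])  # made-up monthly anomalies, deg C
months = np.arange(anomalies.size)

slope, intercept, r, p, stderr = stats.linregress(months, anomalies)
trend_per_decade = slope * 120            # convert deg C per month to deg C per decade
ci95 = 1.96 * stderr * 120                # crude 95% half-width on the trend

PROJECTED_TREND = 0.2   # assumed model-projected trend, deg C per decade (illustrative)

print(f"Observed trend: {trend_per_decade:+.2f} +/- {ci95:.2f} C/decade")
print(f"Projection within interval: {abs(PROJECTED_TREND - trend_per_decade) <= ci95}")
```

Of course, as the quotes above emphasize, the uncertainty on a trend fit over such a short period is enormous, which is exactly why careful, rigorous treatments of this comparison matter.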