Archive for May, 2009

Changing Outcome Targets in Health Care

May 31st, 2009

Posted by: admin

In what might be a microcosm of the pending struggle over the nation’s health care system, a radiologist has proposed in Nature magazine stabilizing tumors at a manageable size rather than eliminating them entirely (H/T 60 Second Science Blog).  This is a significant shift from traditional strategy, which is usually to knock out the cancer – sometimes repeatedly, should it recur.  Put another way, treat the disease as chronic and manage it with smaller doses of chemotherapy, rather than seeking the death strike and risking the return of resistant cells with a vengeance.  Robert Gatenby, the radiologist behind this idea, compares this strategy to the integrated pest management approach to invasive species, which was accepted by the Agriculture Department as far back as the Nixon Administration (as it happens, the beginning of the War on Cancer can be traced to the Nixon Administration as well).

A parallel with the upcoming debates over the U.S. health care system would be the current emphasis in the United States on treatment and prolonging the last years of life over prevention and other longevity measures.  While it may be an open question in either situation whether or not changing tactics would be more effective, there will be significant resistance to entertaining the idea in either case.

More Journals Should Learn from Failure

May 30th, 2009

Posted by: admin

I’m not speaking about the ongoing sea changes in print journalism, but about the tendency of researchers to submit, and scientific publishers to print, successful research and ignore the failures.  While anyone in any field can learn from what went wrong, certain areas of research (medicine and engineering come to mind) are in stronger positions to make meaningful contributions to knowledge and to the use of that knowledge from reporting what failed.  The same is true of history, where learning why something didn’t happen can provide insight.  Understanding that something didn’t work in a field can shift choices in products purchased, treatments sought, or services offered.  Yet most published research excludes that kind of result.

One small step toward addressing the ignorance of failure is starting with the journal Restoration Ecology.  According to Nature, the editors of Restoration Ecology will host a regular section in their journal for reports on experiments and projects in the field that did not meet expectations.  Understanding the baggage attached to the word failure, the section is titled “Set-Backs and Surprises.”  Perhaps this will catch on with other journals, especially if the setback or surprise is coupled with some kind of analysis or discussion of lessons learned.  While policymakers are often focused more on successes than on what didn’t work, they do respond to lessons learned.  In that way, they may have a healthier attitude toward failure than the researchers they support.

New FDA Leadership Outlines Their Priorities

May 29th, 2009

Posted by: admin

The new Commissioner and Deputy Commissioner of the Food and Drug Administration, Margaret Hamburg and Joshua Sharfstein, outlined their priorities for the FDA in a recent Perspectives article in the New England Journal of Medicine.  Acknowledging the stumbles of the FDA in the last few years, they seem concerned about making sure the FDA is more of a supporter than a hindrance to action in public health and food safety.

“From our vantage point, the recent salmonella outbreak linked to contaminated peanut butter represented far more than a sanitation problem at one troubled facility. It reflected a failure of the FDA and its regulatory partners to identify risk and to establish and enforce basic preventive controls. And it exposed the failure of scores of food manufacturers to adequately monitor the safety of ingredients purchased from this facility.”


A Methodological Embarrassment

May 29th, 2009

Posted by: Roger Pielke, Jr.

I am quoted in today’s NYT on a new report issued by the Global Humanitarian Forum which makes the absurd claim that 315,000 deaths a year can be attributed to the effects of rising greenhouse gas concentrations. Here is what I said:

Roger A. Pielke Jr., a political scientist at the University of Colorado, Boulder, who studies disaster trends, said the forum’s report was “a methodological embarrassment” because there was no way to distinguish deaths or economic losses related to human-driven global warming amid the much larger losses resulting from the growth in populations and economic development in vulnerable regions. Dr. Pielke said that “climate change is an important problem requiring our utmost attention.” But the report, he said, “will harm the cause for action on both climate change and disasters because it is so deeply flawed.”

Strong comments I know. Shoddy work on disasters and climate change is the norm, unfortunately, and something I’ve been closely following for well over a decade. I have no illusions that this latest concoction will be repeatedly cited regardless.

Below are my comments to the NYT upon reading the report (cleaned up and formatted). Caution, strong views ahead.

Let me apologize for the length of this reply. But it is important to be clear and to set the record straight.

Let me say first that human-caused climate change is an important problem requiring our utmost attention. Second, the effects of disasters, particularly in poorer countries, is also an important problem that to some degree has been overlooked, as I have argued for many years.

However, I cannot express how strongly I feel that this report has done a disservice to both issues. It is a methodological embarrassment and poster child for how to lie with statistics. The report will harm the cause for action on both climate change and disasters because it is so deeply flawed.

It will give ammunition to those opposed to action and divert attention away from the people who actually need help in the face of disasters, yet through this report have been reduced to a bloodless statistic for use in the promotional battle over climate policies. The report is worse than fiction, it is a lie. These are strong words I know.

1. Let me first start by noting that the same group that did the analysis for the UN, the Geo-Risks group in Munich Re, earlier this year published a peer-reviewed paper arguing that the signal of human-caused climate change could not presently be seen in the loss data on disasters. They wrote (emphasis added):

It should be noted when assessing the results of both this paper and Schmidt et al. (2008) that it is generally difficult to obtain valid quantitative findings about the role of socioeconomics and climate change in loss increases. This is because of criteria such as the stochastic nature of weather extremes, a shortage of quality data, and the role of various other potential factors that act in parallel and interact. We therefore regard our results as being an indication only of the extent to which socio-economic and climate changes account for the increase in losses. Both studies confirm the consensus reached in May 2006 at the international workshop in Hohenkammer attended by leading experts on climate change and natural catastrophe losses.

I co-organized the Hohenkammer workshop (referred to in the quote above) with Peter Hoeppe of Munich Re and that workshop concluded (among other things):

Due to data-quality issues, the stochastic nature of extreme event impacts, the lengths of the time series, and various societal factors present in the disaster loss records, it is still not possible to determine what portion of the increase in damage may be due to climate changes caused by GHG emissions.


The quantitative link (attribution) between storm/flood loss trends and GHG-induced climate changes is unlikely to be determined unequivocally in the near future.

On p. 84 the GHF report itself says:

However, there is not yet any widely accepted global estimate of the share of weather related disasters that are attributable to climate change.

One would think that would be the end of the story. However, to fill in for the fact that there is no accepted estimate, the report conjures up a number using an approach grounded in neither logic, science, nor common sense.

2. Specifically, to get around the fact that there has been no attribution of the relationship of GHG emissions and disasters, this report engages in a very strange comparison of earthquake and weather disasters in 1980 and 2005. The first question that comes to mind is, why? They are comparing phenomena with many “moving parts” over a short time frame, and attributing 100% of the resulting difference to human-caused climate change. This boggles the mind. The IPCC itself says that 30 years are needed for the detection of changes in the climate system, and this time frame does not even reach that threshold. More to the point, earthquakes and weather events do not have the same variability, and earthquake disasters affect only a small part of the total inhabited area of the earth, whereas weather disasters occur much more widely. The assumption that weather disasters should track earthquake disasters is flawed from the outset for both geophysical and socio-economic reasons.

An alternative, more scientifically robust approach would be to look specifically at weather-related disasters, and consider the role of socio-economic changes, and to the extent possible, try to remove that signal and see what trends remain. When that has been done, in every case (US floods, hurricanes, Australia, India TCs, Latin America and elsewhere, all in the peer-reviewed literature) there is not a remaining signal of increasing disasters. In other words, the increase in disasters observed worldwide can be entirely attributed to socio-economic changes. This is what has been extensively documented in the peer reviewed literature, and yet — none of this literature is cited in this report. None of it! Instead they rely on this cooked up comparison between earthquakes and weather related disasters.
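To make the normalization idea concrete, here is a minimal sketch of how a socio-economic adjustment of loss data works; the function name and all figures are my own illustration, not drawn from any of the studies cited above:

```python
# Illustrative sketch of disaster-loss "normalization": scale historical
# losses to present-day levels of wealth and population before looking for
# a residual trend. All figures below are made-up placeholders.

def normalize_loss(loss, wealth_then, wealth_now, pop_then, pop_now):
    """Scale a historical loss to current socio-economic conditions."""
    return loss * (wealth_now / wealth_then) * (pop_now / pop_then)

# Hypothetical: a $1B loss in 1980, when the affected region had half
# today's per-capita wealth and two-thirds today's population.
adjusted = normalize_loss(1.0e9,
                          wealth_then=0.5, wealth_now=1.0,
                          pop_then=2.0 / 3.0, pop_now=1.0)
print(adjusted)  # roughly 3e9: the same event today would cost about 3x as much
```

If losses adjusted this way show no remaining upward trend, the raw increase is accounted for by socio-economic growth, which is the point of the peer-reviewed work described above.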

(Consider also that in no continental location has there been an observed increase in tropical cyclone landfalls, and yet this accounts for almost all of the windstorm disasters cited in the report. The increase must therefore be due to factors other than geophysical changes. This fact renders the comparison with earthquakes even more meaningless.)

Munich Re’s own peer-reviewed work supports the fact that socio-economic factors can explain the entire increase in global disasters in recent decades.

Consider that in 2005 there were 11 earthquakes of magnitude 7 or higher and in 1980 there were 14. By contrast, 1980 was a quiet weather year, and 2005 was very active and included Katrina.
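The hazard of drawing conclusions from two single years of a noisy series can be shown with a toy simulation (entirely hypothetical numbers, not real disaster counts):

```python
import random

# Toy illustration: even with NO underlying trend, comparing two single
# years of a noisy series suggests large "changes". Not real disaster data.
random.seed(42)
years = list(range(1980, 2006))
counts = [random.gauss(100, 20) for _ in years]  # trend-free, noisy series

apparent_change = (counts[-1] - counts[0]) / counts[0]
print(f"apparent change 1980->2005: {apparent_change:+.0%}")
# Any nonzero value here is pure noise; attributing it to a cause
# (e.g., climate change) would be exactly the error described above.
```

Comparing endpoints of a short record, as the GHF report does with 1980 and 2005, cannot separate signal from this kind of year-to-year variability.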

3. The report cites and updates the Stern Review Report estimates of disaster losses; however, in a peer-reviewed paper I showed that those estimates were off by an order of magnitude and relied on a similar sort of statistical gamesmanship to develop their results (and of course this critique was ignored):

Pielke, Jr., R. A., 2007. Mistreatment of the economic impacts of extreme events in the Stern Review Report on the Economics of Climate Change, Global Environmental Change, 17:302-310. (PDF)

This report is an embarrassment to the GHF and to those who have put their names on it as representing a scientifically robust analysis. It is not even close.

Best regards,


Purists Who Demand that Facts be “Correct”

May 28th, 2009

Posted by: Roger Pielke, Jr.

Over at Seed Magazine in a collection of views on “framing,” Penn State climatologist Michael Mann explains why it was necessary to misrepresent what the IPCC does on the cover of his co-authored book titled “Dire Predictions: Understanding Global Warming”:

Often, in our communication efforts, scientists are confronted with critical issues of language and framing. A case in point is a book I recently co-authored with Penn State colleague Lee Kump, called Dire Predictions: Understanding Global Warming. The purists among my colleagues would rightly point out that the potential future climate changes we describe, are, technically speaking, projections rather than predictions because the climate models are driven by hypothetical pathways of future fossil fuel burning (i.e. conceivable but not predicted futures). But Dire Projections doesn’t quite roll off the tongue. And it doesn’t convey — in the common vernacular — what the models indicate: Climate change could pose a very real threat to society and the environment. In this case, use of the more technically “correct” term is actually less likely to convey the key implications to a lay audience.

As one of those “purists” who would like to receive information that is technically “correct” I probably can judge that book by its cover.

In contrast, in another commentary on framing at Seed, ASU science policy expert Clark Miller suggests an alternative, richer view of framing:

Two competing models of framing exist. The first views framing as a tactical choice in communication. Spinning information to comport with culturally embedded narratives purportedly raises its credibility with target audiences. This model presumes an ignorant and uninformed public, with all the dangers that implies for democracy. I reject this model.

The second model views framing, instead, as how humans make sense of and give meaning to events in the world — the lens through which they interpret disparate observations, models, data, and evidence in light of their values. This model posits framing as an ineradicable element of reasoning, even in science, and a facility for rich, nuanced storytelling as a foundation for human community.

Both models recognize that humans structure their understanding of policy through narrative and story. Rather than exploiting this structure for political gain, however, the second model acknowledges that any specific policy frame is, at best, partial and incomplete. Any frame reflects only one way of looking at a policy problem, leaving out potentially critical pieces of knowledge and significance.

Professional Research Techs: A Way to Address Ph.D. Overproduction?

May 28th, 2009

Posted by: admin

This article at Science Progress describes an oversight in research funding proposals – both from funding agencies and from those seeking grants.  The scientific equipment, and more importantly, the personnel necessary to operate and maintain it, don’t get much attention in funding.  The main recommendations from the author, a Ph.D. candidate in Biochemistry:

“In order to stay globally competitive, the federal government must invest in developing a sustainable and professional technical research workforce to supply research demands at both our nation’s institutes and universities. This could be undertaken by:

  • Boosting job stability in this sector by increasing the number of state and federal positions for permanent university research staff
  • Restructuring the grant process with an emphasis on personnel and service contracts in conjunction with new equipment
  • Sponsoring the creation of university degree programs for technical training in the laboratory sciences.”


A National Climate Service?

May 27th, 2009

Posted by: admin

The House Science and Technology Committee will consider H.R. 2407, the Climate Service Act of 2009, during a markup hearing on June 3.  The bill was introduced by the Committee’s Chairman, Rep. Bart Gordon, so it stands a decent chance of passing out of committee.  I have no idea how far it might move after that.  An open question is how closely the fate of this bill is tied to the fate of Waxman-Markey.

The bill would establish a stand-alone National Climate Service within the National Oceanic and Atmospheric Administration (NOAA).  The purpose of the service would be to:

(1) advance understanding of climate variability and change at the global, national, and regional levels;

(2) provide forecasts, warnings, and other information to the public on variability and change in weather and climate that affect geographic areas, natural resources, infrastructure, economic sectors, and communities; and

(3) support development of adaptation and response plans by Federal agencies, State, local, and tribal governments, the private sector, and the public.

More specific functional responsibilities of the proposed Service are in Section 4(c)(5) of the bill.  If the bill is passed, the NOAA Administrator would have to develop an implementation plan that would provide more detail about the responsibilities for the NCS and how it would fit with the rest of NOAA.


Has Joe Romm Gone Missing?

May 27th, 2009

Posted by: Roger Pielke, Jr.

I’ve come to depend upon Joe Romm for ideological rigidity and his unwavering faith in his own infallibility. Such commitment provides a useful touchstone in the climate debate. So I have been dismayed to see Romm not just abandon some of his most firmly held views, but sprint in the opposite direction while at the same time lambasting those who would have the gall to espouse views that he only recently held. Such relativism smacks of kowtowing to political expediency while ignoring policy outcomes, or something even more sinister, maybe even involving the . . . deniers.

Perhaps the real Joe Romm has been kidnapped, and an offset-loving, climate-delayer-eq, fossil fuel drinking replacement has been quietly spirited into his place? A look at the recent flip-flopping by Joe Romm might help us understand the transformation, and with some luck, locate the real Joe Romm and return him to his proper place in the climate debate.

When a coalition of middle-of-the-road environmental groups and large businesses came out with the USCAP proposal for cap and trade, Joe Romm came out in his usual forceful way against their proposal (and especially its reliance on offsets):

The U.S. Climate Action Partnership — a coalition of businesses and enviros once thought to be important — have released their wimpy Blueprint for Legislative Action.

I can sort of understand why, say, Duke Energy, signed on to this, but NRDC and EDF and WRI have a lot of explaining to do. As we will see, this proposal would be wholly inadequate as a final piece of legislation. As a starting point it is unilateral disarmament to the conservative politicians and big fossil fuel companies who will be working hard to gut any bill. Kudos to the National Wildlife Federation for withdrawing from USCAP rather than signing on.

It turns out that the USCAP proposal “was the basis for the Waxman-Markey” bill, according to Romm. At some point Romm’s views changed, and not just by a little — his rhetoric on USCAP/Waxman-Markey has flip-flopped 180 degrees, going from labeling USCAP/Waxman-Markey as representing “unilateral disarmament to the conservative politicians and big fossil fuel companies” to a cryptic message explaining “How I learned to stop worrying and love Waxman-Markey.” Particularly odd is Romm coming out in strong defense of the potential use of domestic and international offsets to fulfill the bulk of the “emissions reduction” requirements of Waxman-Markey, given that Romm was among the most vocal opponents of the use of offsets to represent “emissions reductions” — and rightly so in my view. Has Romm joined the board of Duke Energy? What gives? Is he now a “denier-eq 1000″ (to speak in Romm-ese)?

To remind readers of what Joe Romm had once said about offsets, here are a few selected quotes that leave absolutely no ambiguity:

On the use of offsets generally:

Q: What is the difference between carbon offsets and mortgage-backed securites?

A: Lipstick.

Carbon offsets and mortgage-backed securities are quite similar in that it is impossible for the vast majority of people, even experts, to know what value they have, if any.

On the use of offsets in last year’s Lieberman-Warner bill:

Now when I redo the math, it seems the most likely outcome of this bill is that U.S. energy-related CO2 emissions in 2025 would be about the same as they are now, and possibly higher. If that’s the best we can do for a piece of legislation that’s deader than a dead parrot — it is a dead parrot whose body has been given to a veterinary anatomy class for dissection and had its heart removed — why bother?

On the use of offsets in the USCAP proposal:

The USCAP plan would call for a reduction of 1.0 to 1.4 billion tons of U.S. GHGs in 2020, while allowing 2 billion or more tons of offsets, at least half of which don’t even have to be in this country. When would US carbon dioxide emissions see serious reductions under this plan? Who knows?

And on offsets via forestry:

Offset projects should simply not include tree planting.

On an earlier McCain cap and trade plan:

If emissions reductions can be done through a rigorous and verifiable process, then they can and should be included in the overall cap. The probability that there are offset-like emissions reductions floating around the ether that are both abundant and cheap is quite small. That is why a major offset-based strategy would “involve substantial issuance of credits that do not represent real emissions reductions,” as the Stanford study concluded. That report’s policy conclusion:

We argue that the U.S., which is in the midst of designing a national regulatory system, should not rely on offsets to provide a reliable ceiling on compliance costs….

Offsets can play a role in engaging developing countries, but only as one small element in a portfolio of strategies….

The entire foundation of McCain’s climate plan is built on quicksand.

On the use of offsets in cap and trade:

Let’s hope Congress actually listens to GAO and sharply scales back the use of offsets in future climate bills.

So please, help us find the real Joe Romm. He was right on offsets and USCAP/Waxman-Markey before he was wrong about offsets and USCAP/Waxman-Markey.

Obsession With Targets

May 27th, 2009

Posted by: Roger Pielke, Jr.

In my just accepted paper on the UK Climate Change Act (prepub version here in PDF) I argue that:

Policy should focus less on targets and timetables for emissions reductions, and more on the process for achieving those goals, and the various steps along the way. Setting targets and timetables for sectoral efficiency gains and expansion of carbon-free energy supply would be a step in the right direction.

U.S. Energy Secretary Steven Chu would seem to agree with at least some of this line of argument, based on this quote in a Reuters story:

Energy Secretary Stephen Chu said on Tuesday that setting exact targets for carbon dioxide emissions had led to an “over-obsession” with numbers, as the United States moved closer to overhauling its energy policy.

The comment came less than a week after a congressional panel approved President Barack Obama’s landmark draft bill on climate change, bringing it closer to debate in Congress.

“There was a great deal of discussion on the Kyoto targets, and I’m not really sure which fraction of the countries that took part in that actually met their targets,” Chu, a Nobel laureate for physics, said at a conference in London. “In terms of the targets, whether it’s 17 percent or 20 or 25 percent, I think there’s perhaps … an over-obsession on these percentages.”

However, somehow I think that we are a long way from breaking from the auctioning of empty promises in the debates over targets and timetables for emissions reductions.

Why The Industrial Revolution Started in Britain

May 26th, 2009

Posted by: Roger Pielke, Jr.

Robert Allen, an Oxford professor, has a new book out with Cambridge University Press titled “The British Industrial Revolution in Global Perspective.” Allen has a precis up over at VoxEU which provokes a few thoughts about efforts to spark a new green global economy.

Allen argues that a combination of factors led to the industrial revolution, among them international trade associated with the British Empire, an educated and wealthy populace which created a demand for the fruits of technology as well as the skills necessary to produce them, and, crucially, cheap energy. Allen provides the following graph, showing a comparison of energy costs across Europe in the early 1700s.

Allen writes:

The famous inventions of the Industrial Revolution were responses to the high wages and cheap energy of the British economy. These inventions also substituted capital and energy for labour. The steam engine increased the use of capital and coal to raise output per worker. The cotton mill used machines to raise labour productivity in spinning and weaving. New technologies of iron making substituted cheap coal for expensive charcoal and mechanised production to increase output per worker.

These technologies eventually revolutionised the world, but at the outset they were barely profitable in Britain, and their commercial success depended on increasing the use of inputs that were relatively cheap in Britain. In other countries, where wages were lower and energy more expensive, it did not pay to use technology that reduced employment and increased the consumption of fuel.

The French government was very active in trying to promote advanced British technology in the eighteenth century, but its efforts failed since the British techniques were not cost effective at French prices. James Hargreaves perfected the spinning jenny, the first machine that successfully spun cotton, in the late 1760s. In 1771, John Holker, an English Jacobite who held the post of Inspector General of Foreign Manufactures, spirited a jenny into France. Demonstration models were made, but the jenny was only installed in large, state supported workshops. By the late 1780s, over 20,000 jennies were used in England and only 900 in France. Likewise, the French government sponsored the construction of an English style iron works (including four coke blast furnaces) in Burgundy in the 1780s. The raw materials were adequate, the enterprise was well capitalised, and they hired outstanding and experienced English engineers to oversee the project. Yet it was a commercial flop because coal was too expensive in France.

Since the technologies of the Industrial Revolution were only profitable to adopt in Britain, that was also the only country where it paid to invent them. The ideas embodied in the breakthrough technologies were simple; the difficult problem was the engineering challenge of making them work. Responding to that challenge required research and development, which emerged as an important business practice in the eighteenth century. It was accompanied by the appearance of venture capitalists to finance the R&D and a reliance on patents to recoup the benefits of successful development. The Industrial Revolution was invented in Britain in the eighteenth century because that was where it paid to invent it.

The Economist reviews Allen’s new book this week, writing that “when governments from America to Japan are reinventing industrial policy with each off-the-cuff bail-out, this study offers some useful reminders.” Cheap energy and ample wealth as the mothers of invention appear to be among them. The importance of R&D in creating both should not be overlooked either.