Archive for April, 2009

Chu is Right, But Not About Cap and Trade

April 23rd, 2009

Posted by: Roger Pielke, Jr.

The following comment by Energy Secretary Steven Chu from the Earth Day congressional hearings on the Waxman-Markey cap and trade bill is really interesting:

As Secretary of Energy, I think especially now in today’s economic climate it would be completely unwise to want to increase the price of gasoline. And so we are looking forward to reducing the price of transportation in the American family. And this is done by encouraging fuel-efficient cars; this is done by developing alternative forms of fuel like biofuels that can lead to a separate source, an independent source of transportation fuel.

It is interesting for several reasons. First, the EPA analysis of the Waxman-Markey bill shows clearly that the bill increases the cost of gasoline (see p. 29 in the report here in PDF and p. 50 in the Appendix here in PDF). Apparently neither Chu nor the bill's critics questioning him were aware of this fact (otherwise Chu would not have phrased his response as he did, and the critics would have followed up on this exact point).

More important, however, is that Chu's answer contradicts the entire point of cap and trade legislation. The point is to make fossil fuels more expensive, so that this new expense motivates efforts to improve efficiency and invest in alternative energy technologies. Henry Waxman explained this logic just last week:

If we raise the price of energy, which will happen if we’re reducing the amount of carbon emissions, and industries have to figure out how to live in a carbon-constrained environment, they are going to have to figure it out because it’s in their profitable interest to figure it out.

The key point is that under a cap and trade regime, increases in the cost of energy come before (and indeed are the reason behind) advances in efficiency and technology. So if Chu really believes that it would be unwise to raise the price of gasoline, then he is not a fan of cap and trade.

Chu is correct that a successful strategy will be based on making energy cheaper. But this approach is not cap and trade.

OSTP Seeks Public Comment on Scientific Integrity

April 23rd, 2009

Posted by: admin

The Office of Science and Technology Policy today released a Request for Public Comment in the Federal Register (H/T ScienceInsider).  The comments would inform the drafting of recommendations to the President for action to preserve scientific integrity in the executive branch.  These recommendations were required by a Presidential Memorandum issued by President Obama in early March.  As noted here when it was released, the memo seems better as a political statement than as effective policy.  Hopefully the comment process can nudge it towards the latter.

The comment period is brief – it ends at 5 p.m. Eastern time on May 13.  You can submit comments via email (scientificintegrity@ostp.gov), online (though I’d make sure they fixed the link on that page), or by mail (address is listed in the Federal Register notice).  Comments can also be made on the new OSTP Blog, with blog posts for each of the principles outlined below.  You will need to register in order to comment on the blog.

There is some guidance for the comments, which appears after the jump:

(more…)

A Letter from the UK Parliament

April 23rd, 2009

Posted by: Roger Pielke, Jr.

Peter Lilley, a Member of the UK Parliament and Prometheus reader, sends us this note:

In the light of your interesting piece on the Cost Benefit Analysis of the UK climate change act you may be interested in this letter I have sent to the Secretary of State

The “interesting piece” I wrote on the cost-benefit study is here. The full letter from Mr. Lilley to Ed Miliband can be found below. But I do want to highlight one passage in particular:

I support sensible measures to reduce CO2 emissions, economise on hydrocarbon use and help the poorest countries adapt to adverse climate change whatever its cause – as long as the measures we adopt are sensible and cost effective. But we cannot judge what is sensible and cost effective if we do not have reliable figures, and subject them to proper parliamentary scrutiny.

When the Department slips out figures which it appears to be unable to explain, unwilling to debate and which are so flaky they vary by a factor of ten – it can only provoke scepticism.

I support these comments. Just because climate change is important does not mean that we should abandon our critical thinking capabilities or commitment to democratic accountability. Here is the full letter:

(more…)

Senator Lugar Wants to Create Science Envoys

April 22nd, 2009

Posted by: admin

Update – the bill text is now available

Senator Richard Lugar (R-Indiana) introduced a bill yesterday calling for the creation of “Science Envoys” through the State Department’s Bureau of Educational and Cultural Affairs (H/T ScienceInsider).  These envoys would receive grants for short-term stays abroad.  While abroad, the scientists would forge connections between their home institutions in the U.S. and institutions in the country where they are posted.  This would be distinct from the Consular Officers responsible for science who are posted to some State Department facilities around the world.

Unfortunately, Senator Lugar’s website is not as current as you might like, so the statement he issued on the program is not yet available online (though it appears to have been excerpted in full at a different site).  Likewise, the bill (S. 838) is not yet available via THOMAS, the legislative tracking site administered by the Library of Congress.  Once it is, you can read it here.

I like the idea, as it would be a welcome supplement to the international exchanges already happening, with some universities establishing branch campuses overseas and scholars using opportunities like the Fulbright program to pursue research and teaching in other countries.  The U.S. has succeeded with this kind of public diplomacy before, so why not try again?  Hopefully the legislation will allow for stays of sufficient duration, and will be funded with sufficient resources, to encourage top scientists to pursue these opportunities.

Inexpert Elicitation by RMS on Hurricanes

April 22nd, 2009

Posted by: Roger Pielke, Jr.

Risk Management Solutions (RMS) is a leading company that provides catastrophe models used to assess risk in the insurance and reinsurance industries. RMS plays a very important role in enabling companies and regulators to quantify risks and uncertainties associated with extreme events such as floods, hurricanes and terrorism.

I have in the past been somewhat critical of RMS for issuing short-term hurricane predictions (e.g., see here and here and here). I don’t believe that the science has demonstrated that such predictions can be made with any skill, and further, by issuing predictions RMS creates at least the appearance of a conflict of interest, as many of its customers will benefit (or lose) according to how these predictions are made. So RMS has a financial interest in the predictions that it issues. The issues faced by catastrophe modeling firms are similar to those faced by ratings agencies in the financial services sector.

A key part of the RMS short-term prediction of hurricane activity is an expert elicitation. The elicitation is described in the paper by Jewsen et al. from RMS (PDF). I participated in the 2008 RMS expert elicitation held last October. The elicitation allows experts to allocate weights to different models of hurricane activity; these weights are then used to combine the models for each expert, providing that particular expert’s view of the coming five years. For instance, if there were two models with annual landfall rates of 0.4 and 0.8 Category 3-4-5 hurricanes, and I decided to give the first 20% weight and the second 80%, then the result would be 0.4 * 20% + 0.8 * 80% = 0.72 landfalls per year. The views of the individual experts are combined to provide the results of the group elicitation. RMS conducts its elicitation with the experts blind to the results of the models, focusing instead on the models themselves.
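To make the weighting arithmetic concrete, here is a minimal sketch in Python of the two-model example above (the rates and weights are the hypothetical numbers from that example, not actual RMS models):

```python
# Combine hypothetical model landfall rates with one expert's weights,
# as in the two-model example above.
model_rates = [0.4, 0.8]   # annual Category 3-4-5 landfall rates from each model
weights = [0.20, 0.80]     # one expert's weights across the models (sum to 1)

expert_view = sum(rate * weight for rate, weight in zip(model_rates, weights))
print(f"{expert_view:.2f}")  # 0.72 landfalls per year
```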

The RMS elicitation has been controversial because it has resulted in short-term predictions of activity higher than the climatological average, which means higher estimated risk and, for coastal property owners, higher insurance rates. In 2006, for example, the elicitation by the RMS panel of experts resulted in a prediction of 0.92 Category 3-4-5 storms making landfall every year for the period 2007 to 2011, considerably higher than the 1950-2006 average of 0.64. At the time, loss estimates increased by about 40%, with consequences for insurance and reinsurance rates.

Because Jewsen et al. conveniently provide the results for future storm activity from the 20 models used in the 2006 elicitation, I recently decided to do my own inexpert elicitation. I created a panel of 5 “monkeys” by allocating weights randomly across the 20 models for each participating monkey. My panel of 5 monkeys came up with an estimated 0.91 storms per year for 2007 to 2011. I did this three more times, and these three additional panels of “monkeys” projected 0.90, 0.93, and 0.92 landfalling Category 3-4-5 storms per year. The RMS experts came up with 0.92.

In short, my panels of “monkeys” produced essentially the same results as those produced by the RMS expert panels composed of world-leading scientists on hurricanes and climate. How can this be? The reason for this outcome is that the 20 models used in the expert elicitation (i.e., from Jewsen et al.) provide results that range from 0.63 on the low side to 1.21 at the high end, with an average of 0.90. Thus, if the experts spread their weights across the models, the process will always result in values considerably higher than the historical average. The more the weights are spread and the more participants there are, the more the results will necessarily gravitate toward the average across the models. So with apologies to my colleagues, we seem to be of no greater intellectual value to RMS than a bunch of monkeys. Though we do have credentials, which monkeys do not.
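Here is a minimal sketch of the “monkey” exercise. The 20 model rates below are evenly spaced placeholders spanning the 0.63 to 1.21 range reported above (the actual Jewsen et al. values differ, and their mean is about 0.90); only the logic matters:

```python
import random

# Placeholder rates for 20 models spanning 0.63 to 1.21 (stand-ins for the
# Jewsen et al. values; the actual 20-model mean is about 0.90).
model_rates = [0.63 + i * (1.21 - 0.63) / 19 for i in range(20)]
model_mean = sum(model_rates) / len(model_rates)

def monkey_expert(rates):
    """One 'monkey': random weights across the models, normalized to sum to 1."""
    raw = [random.random() for _ in rates]
    total = sum(raw)
    return sum(rate * w / total for rate, w in zip(rates, raw))

def monkey_panel(rates, n_experts=5):
    """Average the views of a panel of random-weight 'experts'."""
    return sum(monkey_expert(rates) for _ in range(n_experts)) / n_experts

random.seed(1)
for i in range(4):
    print(f"panel {i + 1}: {monkey_panel(model_rates):.2f}")
print(f"model average: {model_mean:.2f}")
# Every panel lands close to the model average: spreading weights across the
# models can only pull the result toward their mean, never below the lowest model.
```

Run repeatedly, the random panels cluster within a few hundredths of the mean of the models, which mirrors the 0.90-0.93 spread reported above.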

The stability of the results across the first three times that RMS ran its elicitation provided a hint that something in the process constrained the possible results. In 2008 there was a downward change in the results as compared to the first three years. Based on the monkey exercise, I’d guess the change is largely due to the fact that the number of models used in the elicitation expanded from 20 to 39, and while I don’t have the output from the 39 models, I’d bet a beer that their average is lower than that of the previously used 20 models.

What does this mean? It means that the RMS elicitation process is biased to give results above the historical average. Of course, it may in fact be the case (and probably is) that many of the elicited experts actually do believe that storm activity will be enhanced during this period. However, the fact that the RMS elicitation methodology does not distinguish between the views of these experts and a bunch of monkeys should raise some concern about the fidelity of the process. Another way to look at it is that RMS is providing guidance on the future based on what RMS believes to be the case, and the experts are added to the mix to lend legitimacy to the process. I’m not a big fan of experts being used as props in the marketing of false certainties.

The RMS expert elicitation process is based on questionable atmospheric science and plain old bad social science. This alone should lead RMS to get out of the near-term prediction business. Adding in the appearance of a conflict of interest from clients who benefit when forecasts emphasize risk above the historical average makes an even stronger case for RMS to abandon this particular practice. RMS is a leading company with an important role in a major financial industry. It should let its users determine what information on possible futures they want to incorporate when using a catastrophe model. RMS should abandon its expert elicitation and its effort to predict future hurricane landfalls for the good of the industry, but also in service of its own reputation.

How EPA Lost $1.22 Trillion In Its Waxman-Markey Analysis

April 22nd, 2009

Posted by: Roger Pielke, Jr.

You can have a lot of fun playing games with assumptions. For instance, EPA managed to lose $1.22 trillion in its analysis of the costs of the Waxman-Markey cap and trade bill to 2019. Here is how.

The budget put forward by President Obama contained a set of assumptions for GDP growth that would result from his administration’s efforts to restart the economy. For 2010-2019 the average GDP growth rate is projected to be about 3.3% (for exact numbers see Table S-8 in the President’s Budget in PDF). By contrast, the EPA’s analysis of the Waxman-Markey bill uses a GDP growth rate of 2.5% per year (see page 14 in the EPA analysis here in PDF).

Starting with the GDP figures in the President’s Budget and applying the two different GDP growth rates leads to a difference in the size of the economy of $1.22 trillion between the President’s Budget projections and the EPA analysis of Waxman-Markey. Not a small number. (The two analyses also start from different 2010 GDP values, which could add another $400 billion to the total.)
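A rough back-of-the-envelope version of this calculation, as a sketch only: I am assuming a round $14 trillion 2010 GDP purely for illustration (the actual baselines are in Table S-8 and in the EPA analysis, and as noted they differ from each other), so the computed gap is approximate:

```python
# Back-of-the-envelope check on the GDP gap between the two growth assumptions.
# ASSUMPTION: a round $14 trillion GDP in 2010, used purely for illustration;
# the actual baseline figures are in Table S-8 of the President's Budget.
baseline_2010 = 14.0e12   # dollars (assumed)
budget_rate = 0.033       # ~3.3% average annual growth, President's Budget
epa_rate = 0.025          # 2.5% annual growth, EPA Waxman-Markey analysis

years = 9                 # compounding from 2010 out to 2019
gdp_budget = baseline_2010 * (1 + budget_rate) ** years
gdp_epa = baseline_2010 * (1 + epa_rate) ** years

gap = gdp_budget - gdp_epa
print(f"2019 GDP gap: ${gap / 1e12:.2f} trillion")  # on the order of $1.2-1.3 trillion
```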

A lower GDP means fewer emissions. It also means that the efficiency gains needed to meet the same targets are smaller. For instance, the EPA analysis assumes that U.S. energy intensity will improve at a rate faster than the assumed 2.5% per year GDP growth. To maintain the same energy demand figures under 3.3% GDP growth, this rate of improvement would have to rise above 3.3% per year. Both of these values stretch credulity, but that is a subject for another time. The point is that in addition to the “lost” GDP growth, there would be considerable extra costs for emissions reductions (determining how much would require re-running the EPA analysis, but it is safe to say that it would be a lot).
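A toy calculation of the relationship between GDP growth and the energy-intensity improvement needed to hold energy demand roughly flat (a sketch only; the EPA analysis uses detailed demand projections, not this approximation):

```python
# Energy demand = GDP * energy intensity, so to hold demand roughly constant the
# intensity improvement rate must roughly keep pace with the GDP growth rate.
def demand_growth(gdp_growth, intensity_improvement):
    """Approximate annual growth in energy demand."""
    return (1 + gdp_growth) * (1 - intensity_improvement) - 1

print(f"{demand_growth(0.025, 0.025):+.2%}")  # -0.06%: roughly flat demand
print(f"{demand_growth(0.033, 0.025):+.2%}")  # +0.72%: demand now rises
print(f"{demand_growth(0.033, 0.033):+.2%}")  # -0.11%: roughly flat again
```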

How much is $1.22 trillion? About $11,000 per U.S. household (assuming 115 million households). This number only gets larger in the out years if GDP growth rates exceed the 2.5% assumed by EPA. Adding in the costs of additional emissions reductions would make this number considerably larger.

There are plenty of other hidden and acknowledged assumptions in the EPA analysis as well.

How Effective Are University Rankings?

April 21st, 2009

Posted by: admin

The European Commissioner for Science and Research, Janez Potocnik, recently blogged (H/T ScienceInsider) about a workshop focused on measuring and comparing universities.  The Commission has an Expert Group focused on assessing university research and recently released an interim report on the use of rankings to assess university-based research.  While acknowledging some honest disagreement over whether such rankings are useful, the group did note that the current methodologies behind rankings like the Times QS World University Rankings could use improvement.

What I found particularly interesting was Commissioner Potocnik’s idea that more coherent university rankings could stimulate the funding of university research.

Rankings, which have been shown to influence the behaviour of universities and promote strategic thinking and planning, could help universities to develop better management practices and thus attract potential external funders. This is in the end what we are trying to do – secure the future for research activities in universities which will benefit our societies for a long time to come…

While the notion is encouraging, I’m reminded of how universities (at least in the U.S.) have set goals like reaching the top 30 of a particular ranking or boosting their position in a magazine’s list.  These efforts often seem focused on the output – the numerical ranking – rather than on desired outcomes like increased research funding and quality.  While by no means perfect (and certainly more expensive), the more involved United Kingdom university ratings seem a better means of achieving these desired outcomes.

“Science” By Press Release

April 21st, 2009

Posted by: Roger Pielke, Jr.

A new study on global streamflows has just been announced via press release by NCAR. Here is how the press release opens:

Rivers in some of the world’s most populous regions are losing water, according to a new comprehensive study of global stream flow. The study, led by scientists at the National Center for Atmospheric Research (NCAR), suggests that in many cases the reduced flows are associated with climate change. The process could potentially threaten future supplies of food and water.

Here is what the paper actually says:

We emphasize, however, that the actual streamflow and discharge examined here likely include changes induced by human activities, such as withdrawal of stream water and building dams, and thus they are not readily suitable for quantifying the effects of global warming on streamflow

Let’s see how many news stories follow the press release rather than the paper.

The Spanish Green Jobs Study

April 21st, 2009

Posted by: Roger Pielke, Jr.

There has been a lot of discussion of a recent research paper out of Spain arguing that the Spanish experience in the creation of “green jobs” is not one to emulate here in the US, despite President Obama’s suggestion that Spain is a leader in green jobs. The paper can be found here in PDF. A few of its key findings follow:

1. As President Obama correctly remarked, Spain provides a reference for the establishment of government aid to renewable energy. No other country has given such broad support to the construction and production of electricity through renewable sources. The arguments for Spain’s and Europe’s “green jobs” schemes are the same arguments now made in the U.S., principally that massive public support would produce large numbers of green jobs. The question that this paper answers is “at what price?”

2. Optimistically treating European Commission partially funded data, we find that for every renewable energy job that the State manages to finance, Spain’s experience cited by President Obama as a model reveals with high confidence, by two different methods, that the U.S. should expect a loss of at least 2.2 jobs on average, or about 9 jobs lost for every 4 created, to which we have to add those jobs that non-subsidized investments with the same resources would have created.

3. Therefore, while it is not possible to directly translate Spain’s experience with exactitude to claim that the U.S. would lose at least 6.6 million to 11 million jobs, as a direct consequence were it to actually create 3 to 5 million “green jobs” as promised (in addition to the jobs lost due to the opportunity cost of private capital employed in renewable energy), the study clearly reveals the tendency that the U.S. should expect such an outcome.

4. At minimum, therefore, the study’s evaluation of the Spanish model cited as one for the U.S. to replicate in quick pursuit of “green jobs” serves a note of caution, that the reality is far from what has typically been presented, and that such schemes also offer considerable employment consequences and implications for emerging from the economic crisis.

5. Despite its hyper-aggressive (expensive and extensive) “green jobs” policies it appears that Spain likely has created a surprisingly low number of jobs, two thirds of which came in construction, fabrication and installation, one quarter in administrative positions, marketing and projects engineering, and just one out of ten jobs has been created at the more permanent level of actual operation and maintenance of the renewable sources of electricity.

6. This came at great financial cost as well as cost in terms of jobs destroyed elsewhere in the economy.

7. The study calculates that since 2000 Spain spent €571,138 to create each “green job”, including subsidies of more than €1 million per wind industry job.

8. The study calculates that the programs creating those jobs also resulted in the destruction of nearly 110,000 jobs elsewhere in the economy, or 2.2 jobs destroyed for every “green job” created.

9. Principally, these jobs were lost in metallurgy, non-metallic mining and food processing, beverage and tobacco.

10. Each “green” megawatt installed destroys 5.28 jobs on average elsewhere in the economy: 8.99 by photovoltaics, 4.27 by wind energy, 5.05 by mini-hydro.

11. These costs do not appear to be unique to Spain’s approach but instead are largely inherent in schemes to promote renewable energy sources.
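For readers who want to check the arithmetic behind findings 2, 3 and 8 above, here is a minimal sketch using only the figures quoted from the study:

```python
# Arithmetic behind the study's headline ratios, using the figures quoted above.
jobs_destroyed = 110_000   # jobs the study says were destroyed elsewhere in the economy
ratio = 2.2                # jobs destroyed per "green job" created (finding 8)

green_jobs_created = jobs_destroyed / ratio
print(f"implied green jobs created: {green_jobs_created:,.0f}")  # ~50,000

# Finding 3: scaling the 2.2 ratio to the 3-5 million U.S. "green jobs" promised.
for promised in (3_000_000, 5_000_000):
    lost = promised * ratio
    print(f"{promised / 1e6:.0f} million green jobs -> roughly {lost / 1e6:.1f} million jobs lost")
# 3 million -> ~6.6 million; 5 million -> ~11.0 million, matching the study's range.
```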

Is the study sound? Did Obama err in highlighting Spain as a model to emulate?

Science Does Not Matter in the EPA Endangerment Finding

April 21st, 2009

Posted by: Roger Pielke, Jr.

Jonathan Adler explains why the science of climate change is really not a critical issue in the EPA endangerment finding or in the debate that is likely to ensue:

The proposed findings will now go through a 60-day public comment period. Shortly thereafter, the findings will be finalized. Industry and anti-regulatory groups will almost certainly challenge the findings in court, and their legal challenges will almost certainly fail. Even if one doubts the accumulated scientific evidence that anthropogenic emissions of greenhouse gases contribute to climate change and that climate change is a serious environmental concern, the standard of review is such that the EPA will have no difficulty defending its rule. Federal courts are extremely deferential to agency assessments of the relevant scientific evidence when reviewing such determinations. Moreover, under the Clean Air Act, the EPA Administrator need only “reasonably . . . anticipate” in her own “judgment” that GHG emissions threaten public health and welfare in order to make the findings, and there is ample evidence upon which the EPA Administrator could conclude that climate change is a serious threat. This is a long way of saying that even if climate skeptics are correct, the EPA has ample legal authority to make the endangerment findings.

What will happen next is fully about costs, jobs, and politics.