Accountability and Federally Funded Research – Not Mutually Exclusive

July 10th, 2008

Posted by: admin

Among the many old, ill-formed, and just plain inaccurate tenets found in science and technology policy rhetoric is the notion that accountability for federal research funds means only one thing: an overly simplistic metric of dollars per discovery (much, much easier said than done). The most recent example can be found in the August 2008 issue of Seed magazine. In an interview on pages 22 and 24 of that issue (titled “Foundation Building,” not yet available online), Dr. Colwell, responding to a question about difficulties in building support for curiosity-driven basic research, answered:

Well, I didn’t really get the question, “How many discoveries are you going to make this year if we give you the money?” but there was an implication too often that they wanted to have some sort of accountability. That is, if you spent x number of dollars, you would get y number of discoveries. Fortunately good sense and intelligence prevailed.

Accountability is a good thing, particularly accountability where taxpayer money is involved. But the way Colwell defines accountability forces her to speak of it as though it is a bad thing. Not a great example of good sense. And not an isolated incident.

So, agreeing that a measure of dollars per discovery is an ineffective measure of the impact of research spending (and probably a difficult metric to capture), how should we consider accountability for federal research money?
My first observation is that this discussion is usually framed in terms of what accountability should not be. Another relevant point is that what the scientific communities consider being accountable for their funds, or using them effectively, will not completely overlap with what the federal government considers accountable or effective.

All that said, the NSF, like all federal agencies, is obligated to submit Performance Reports with each Budget Request. So the FY 2007 Report was submitted with the FY 2009 Request. You can access it through the NSF website.

Reviewing the FY 2007 Performance Report Highlights, many of the research and education goals are handled through external expert review, per recommendations from a 2001 National Academies report, Implementing the Government Performance and Results Act for Research: A Status Report (full disclosure: I helped staff the report). The recommendations in that report encouraged that scientific research be evaluated on the criteria of quality, relevance, and leadership. Now, there are assumptions behind those criteria that have not really been debated or questioned outside of the scientific community. But this is a measure of accountability, so it is odd for Colwell to suggest there isn’t one, and to not mention the means by which NSF tries to assess its effectiveness is to miss an opportunity to boost the perception of those scientists and engineers beating their tin Erlenmeyer flasks for federal research dollars.

To the extent practical, democratic government functions better for its citizens the more transparent it is. Scientific communities resist this transparency for fear of micromanagement. Celebrating and advertising the assessment measures for scientific research can help strengthen the enterprise's standing with those politicians and policymakers outside the House Science and Technology Committee who are at best indifferent to the fate of the scientific and technological enterprise in the U.S.

I don’t expect such a recognition of assessment to purge the linear model from the halls of Congress, nor do I expect it to open people’s eyes to the point that science funding is no longer an afterthought in the appropriations process (the deficit model is an even bigger cognitive block than the linear model). I do expect it to give people a better ability to see what is happening with scientific research dollars, and perhaps to detect changes to the workforce (grant trends, including personnel support, are probably the easiest metrics to capture about research) in a way that can move policy arguments past the collections of anecdotes that often pass for data.

Comments are closed.