Prometheus » Scientific Assessments Fri, 02 Jul 2010 16:53:16 +0000 UK Backs Away from a Bibliometric Research Assessment Exercise Fri, 19 Jun 2009 18:40:33 +0000 admin According to ScienceInsider (and Times Higher Education), the planned shift of the U.K. Research Assessment Exercise from peer review to bibliometric analysis may not happen.  Bibliometric analysis may still be used, but only to guide the extensive peer review process that has determined how the U.K. government distributes research money to its universities.  Part of the reason for the proposed shift was the significant cost of that peer review process.

Moving forward, the Higher Education Funding Council will need to figure out how bibliometrics can be used in a meaningful way.  Otherwise there will be additional pressure to try to bump up citation counts and numbers of publications with little regard to the quality of either.  There are already ways to game the assessment in terms of who and what institutions select to be assessed.  If bibliometric analysis isn’t crafted carefully, the ways to skew results will only increase.
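Part of what makes bibliometrics so gameable is how simple the underlying arithmetic is.  As a toy illustration (my own, not anything the funding council has proposed), here is one widely used indicator, the h-index, computed from a hypothetical list of per-paper citation counts:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar at its rank
        else:
            break
    return h

# Hypothetical citation counts for one researcher's papers.
papers = [25, 8, 5, 3, 3, 1, 0]
print(h_index(papers))  # → 3
```

Because the metric depends only on raw counts, a department could lift it through reciprocal citation arrangements or salami-sliced publications without any change in the quality of the underlying work – exactly the kind of skew a carelessly crafted assessment would invite.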

More Journals Should Learn from Failure Sun, 31 May 2009 00:48:29 +0000 admin I’m not speaking about the ongoing sea changes in print journalism, but about the tendency of researchers to submit, and scientific publishers to print, successful research while ignoring the failures.  While anyone in any field can learn from what went wrong, certain areas of research (medicine and engineering come to mind) are particularly well positioned to make meaningful contributions to knowledge, and to the use of that knowledge, by reporting what failed.  The same is true of history, where learning why something didn’t happen can provide insight.  Understanding that something didn’t work in a field can shift choices in products purchased, treatments sought, or services offered.  Yet most published research excludes that kind of result.

One small step toward addressing this ignorance of failure starts with the journal Restoration Ecology.  According to Nature, the editors of Restoration Ecology will host a regular section in their journal for reports on experiments and projects in the field that did not meet expectations.  Understanding the baggage attached to the word failure, the section is titled “Set-Backs and Surprises.”  Perhaps this will catch on with other journals, especially if the setback or surprise is coupled with some kind of analysis or discussion of lessons learned.  While policymakers are often focused more on successes than on what didn’t work, they do respond to lessons learned.  In that way, they may have a healthier attitude toward failure than the researchers they support.

Information on Emerging Diseases Relying More on Internet Tue, 26 May 2009 03:07:09 +0000 admin From the New England Journal of Medicine we have this review of digitally enabled public health surveillance.  The recent online activity around swine flu and Google Flu Trends are only the latest developments in a line of work going back 15 years.  What started with reporting systems has grown to include news aggregation, mashups, and, yes, even the Internet darling of the moment, Twitter.  The authors caution, however, that the increasing online capacity for assessment and analysis should not displace the work of public health practitioners and clinics.  Put another way, it’s fine to look up symptoms online, but you should still see a doctor if needed rather than self-diagnosing.  There are other concerns:

Information overload, false reports, lack of specificity of signals, and sensitivity to external forces such as media interest may limit the realization of their potential for public health practice and clinical decision making. Sources such as analyses of search-term use and news media may also face difficulties with verification and follow-up. Though they hold promise, these new technologies require careful evaluation. Ultimately, the Internet provides a powerful communications channel, but it is health care professionals and the public who will best determine how to use this channel for surveillance, prevention, and control of emerging diseases.

Given the title of the NEJM piece – “Digital Disease Detection – Harnessing the Web for Public Health Surveillance” – I think another pair of concerns to add is privacy and security.  If IP addresses can be tracked, which is true in some of these cases, then it is possible to connect particular incidents to particular individuals.  Unless it’s necessary to communicate with specific individuals, measures should be taken to preserve the anonymity of those whose information supports this public health monitoring.  And the security of the databases and other systems using this information needs to be strong enough to guard against breaches and inadvertent exposure.
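One standard anonymity measure – sketched below as a hypothetical illustration, not something the NEJM authors describe – is to replace raw IP addresses with keyed hashes before storage.  Records from the same source remain linkable for surveillance purposes, but the address itself is never kept, and rotating the key limits long-term re-identification:

```python
import hashlib
import hmac

def pseudonymize_ip(ip: str, secret_key: bytes) -> str:
    """Replace an IP address with a keyed hash, so records from the same
    source can still be linked without storing the address itself."""
    digest = hmac.new(secret_key, ip.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

# Hypothetical key; in practice it would be kept secret and rotated periodically.
key = b"rotate-this-key-periodically"

# A surveillance record stores the pseudonym, never the raw address.
record = {
    "symptom_query": "fever cough",
    "source": pseudonymize_ip("203.0.113.7", key),
}
```

The keyed construction (HMAC) matters: a plain unsalted hash of an IP address can be reversed by simply hashing all four billion possible IPv4 addresses, while recovering sources from an HMAC requires the secret key.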

Stimulus Oversight Includes Science Funding Sat, 09 May 2009 16:17:09 +0000 admin While this won’t satisfy everyone, oversight of the spending in the stimulus package will include scrutiny of science agencies.  As the stimulus spending represents significant increases for most of these agencies, this only makes sense.  The House Science and Technology Committee has held hearings on the subject, and various agencies are training their personnel to be more aware of the potential for fraud and abuse.  While past practice suggests that NASA and DOE will be the more likely agencies for fraud and abuse, the sheer volume of the funding increase at the National Institutes of Health should draw additional scrutiny.

This is all well and good.  I simply hope that the differences between scientific misconduct and contractor fraud and abuse are remembered moving forward.  Both are serious, but finding evidence of each requires different skills and techniques, and the deception at the heart of each offense is very different.  Inappropriate cost overruns or profligate spending are not the same thing as fraudulent grant applications or research reports.  As the former is the more likely abuse, I’m concerned that incidents of it will be confused with the latter.  Given the challenges scientists have in communicating what they do right, problems in communicating what they do wrong are likely, and could be more damaging.

Funding Opportunity on the Scientific Workforce Thu, 30 Apr 2009 17:45:09 +0000 admin I make note of this particular grant opportunity because it’s supported by the National Institutes of Health.  When thinking about research on modeling the scientific workforce, the NIH would not necessarily be the first agency that comes to mind.  But the NIH has significantly more resources than most other federal research agencies – at one point its budget for physical sciences research was larger than the physical sciences research budget of the National Science Foundation.

You can read the Request For Applications for detailed information, but here are some important points.

Purpose of the grant program:

The National Institute of General Medical Sciences, National Institutes of Health, solicits grant applications that propose to develop computational models of the dynamics of the scientific workforce in the United States.  These models may be used to inform program development and management, identify questions that need additional research, and guide the collection and analysis of the data to answer these questions.

Application Receipt Date: October 8, 2009
Peer Review Date(s): February – March 2010
Council Review Date: May 2010
Earliest Anticipated Start Date: July 2, 2010
Expiration Date: October 9, 2009

This will be a cooperative agreement research program, with each Principal Investigator taking lead research and administrative responsibilities with NIH staff assisting (this may seem odd to those not familiar with the amount of in-house/intramural research supported at NIH compared to non-biomedical agencies).

How Effective Are University Rankings? Wed, 22 Apr 2009 00:25:34 +0000 admin The European Commissioner for Science and Research, Janez Potočnik, recently blogged (H/T ScienceInsider) about a workshop focused on measuring and comparing universities.  The Commission has an Expert Group focused on assessing university research, and it recently released an interim report on the use of rankings to assess university-based research.  While acknowledging some honest disagreement over whether such rankings are useful, the group did note that the current methodologies behind rankings like the Times QS World University Rankings could use improvement.

What I found particularly interesting was Commissioner Potocnik’s idea that more coherent university rankings could stimulate the funding of university research.

Rankings, which have been shown to influence the behaviour of universities and promote strategic thinking and planning, could help universities to develop better management practices and thus attract potential external funders. This is in the end what we are trying to do – secure the future for research activities in universities which will benefit our societies for a long time to come…

While the notion is encouraging, I’m reminded of how universities (at least in the U.S.) have set goals like reaching the top 30 of a particular ranking or boosting their position on a magazine’s list.  These efforts often seem focused on the output – the numerical ranking – rather than on desired outcomes like increased research funding and quality.  While by no means perfect (and certainly more expensive), the more involved United Kingdom university assessments seem a better means of achieving these desired outcomes.

What’s Hiding in Old Data? Fri, 10 Apr 2009 16:23:03 +0000 admin The recent discovery of an exoplanet in Hubble Space Telescope images (H/T 80beats) is not so dramatic, until you learn that the images date from 1998.  The planet wasn’t missed through oversight; it took the application of a new technique for filtering starlight from images to reveal it.  Now astronomers are giddy at the prospect of finding more objects that were previously hidden.
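The general idea behind such starlight filtering can be shown with a toy example (a simplification of my own; the actual reprocessing uses far more sophisticated point-spread-function modeling): subtract a scaled reference image of the star, so that a faint companion previously buried in the glare stands out in the residual.

```python
import numpy as np

rng = np.random.default_rng(0)
y, x = np.mgrid[0:64, 0:64]

# Toy "science" frame: a bright star (central Gaussian) plus a faint companion.
star = 1000.0 * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 18.0)
companion = 5.0 * np.exp(-((x - 45) ** 2 + (y - 32) ** 2) / 4.0)
science = star + companion + rng.normal(0.0, 0.5, (64, 64))

# Reference frame of the star alone (e.g. a template of the stellar profile).
reference = star + rng.normal(0.0, 0.5, (64, 64))

# Least-squares scale factor, then subtract the starlight.
scale = np.sum(science * reference) / np.sum(reference * reference)
residual = science - scale * reference

# The companion, invisible next to the 1000-count star, now dominates the frame.
peak = np.unravel_index(np.argmax(residual), residual.shape)
```

In the raw frame the brightest pixel is the star; in the residual it is the companion, two hundred times fainter.  The appeal of applying this to archival data is that the photons were already collected – only the processing is new.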

While this makes the astronomy minor in me happy, the policy analyst in me is wondering how much new stuff is hiding in old data, and how finding it can be encouraged.  On the face of it, digging through old stuff sounds like history, like archival research.  I can easily see some fields looking askance at such work as not cutting edge, not likely to produce results, and generally a waste of time.  Perhaps the robotic forms of research I posted about earlier could do some of this work, but I think there would be a certain amount of human judgment required.  Maybe that will change.

Whig History and Science Policy Tue, 07 Apr 2009 17:50:16 +0000 admin Science Progress gave two historians a few column inches to remind us that not all science and technology narratives reflect the history of their disciplines.  Folks focused on nanotechnology will find the article of interest, but the main points apply more broadly than to just the really, really small.  The lessons, if you want to boil them down (which is a lousy thing to do with history, but expected in blogging), resemble some obvious statements – but statements that aren’t effectively applied and are rarely considered when dealing with science and technology.  The Whig history mentioned here and in the Science Progress piece refers to historical accounts that present current conditions as the latest step along a steady path of progress.

There is a history.  Nearly every person engaged with science and technology policy in the United States seems to think their field started and ended with Vannevar Bush in the late 1940s.  This ignores over 150 years of prior activity in the United States.  The Lewis and Clark Expedition and the U.S. Census are two ventures in the field that date back nearly to the founding of the republic.   The Forest Service and Geological Survey are also good pre-World War II examples of federal science and technology at work.

There were ‘losers’.  Arguably Vannevar Bush is both a loser and a winner.  While his name and Science: The Endless Frontier still carry cachet in science policy circles, the plan in that document was not approved by President Truman.  Truman instead sought the advice of a different panel, chaired by John Steelman.  That report, and Steelman himself, are treated as footnotes by many, even though the report was a good-faith effort to chart the health of the national research enterprise – a valuable contribution to debates about how to support it, both then and now.

Change is usually incremental – Part of the consequence of ignoring history is to think of new developments as being more new than they actually are.  While World War II is usually acknowledged as an influence on post-war transformations of science and technology policy, the assessment stops there.  World War I (which saw the advent of the National Research Council) and the interwar period were also influential in establishing or refining some of the institutions that contributed to the post-war landscape of U.S. science policy.

So, what’s the bottom line?  Treating an initiative – or some achievement – as an obvious outcome is to ignore history.  Science has a tendency to focus on the winners, on the successful experiments and theories.  For science policy to follow the same trend is to ignore knowledge that can be as valuable and useful as what you can learn from the winners and success stories.

Sisyphean Quest to Reform OTA Continues Wed, 01 Apr 2009 01:19:51 +0000 admin In what appears to have nothing to do with the Harold Varmus appearance I mentioned earlier this week, and what seems coincidental with this essay by Gerald Epstein, there appears to be another push to re-establish the Office of Technology Assessment (OTA).  The OTA was an office within Congress that provided advice on science and technology issues to its members.  It was defunded (but not officially disbanded) in the mid-1990s.  There are plenty of Prometheus posts connected to the OTA, but a good refresher would include this post with comments from OTA staffers, and the last big push to reinstitute some kind of technology assessment capacity for Congress.  More on that last push (late 2007) can be found at Denialism.

The recent push appears to start with the OTA’s remaining legislative champion, Representative Rush Holt of New Jersey.  According to Science Cheerleader (H/T The Intersection), Rep. Holt will make a request for OTA funds this week and argue his case before appropriators in May.  Since the OTA was only defunded, not dissolved, the request for funds is technically sufficient to restart the agency.  Whether Rep. Holt is successful, we shall see.  I wish this movement weren’t so focused on reconstituting the past, as I’m not sure that’s the easiest (or best) means of re-establishing science and technology advisory capacity in the Congress.  At a minimum, it’s not the only way, yet the advocates seem to act as though it is.  If there’s a compelling reason for this, I’d love to hear it.

Technology in the Courts Thu, 19 Mar 2009 02:51:31 +0000 admin Two items of recent note about how technology supports – or doesn’t – the judicial process in the United States.

Soon in San Diego there will be the first test of functional MRI (fMRI) evidence in a court case.  Defendant’s counsel in the proceeding will introduce evidence from an fMRI scan to indicate that the defendant was telling the truth.  Wired Science has the details.  In short, the idea is that lying is connected to particular activity in a part of the prefrontal cortex.  This idea has received its share of skepticism from some researchers, and that may determine whether or not such evidence is ruled admissible.  Under California law, such scientific evidence must be generally accepted in the scientific community before the courts will accept it.  At the moment it’s unclear whether the technology will make the move from House to CSI (in TV terms).

In other court-related news, judges now have to deal (H/T Scientific American’s 60-Second Science Blog) with jurors spreading news about their cases during trial via social networking sites like Twitter.  Arguably this is an old problem – jurors are supposed to keep their deliberations to themselves and to consider only what is admitted in court – augmented by recent technology.  Given the added expense of sequestering a jury (and the challenge of prohibiting such broadcasting and research in any setting), it’s unclear how the jury system will react to this trend.
