Archive for the ‘Scientific Assessments’ Category

UK Backs Away from a Bibliometric Research Assessment Exercise

June 19th, 2009

Posted by: admin

According to ScienceInsider (and Times Higher Education), the planned shift of the U.K. Research Assessment Exercise from peer review to bibliometric analysis may not happen.  Bibliometric analysis may still be used, but only to inform the extensive peer review process that has determined how the U.K. government distributes research money to its universities.  Part of the rationale for the proposed shift was the significant cost of peer review.

Moving forward, the Higher Education Funding Council will need to figure out how bibliometrics can be used in a meaningful way.  Otherwise there will be additional pressure to bump up citation counts and publication numbers with little regard for the quality of either.  There are already ways to game the assessment in terms of who and what institutions select to be assessed.  If bibliometric analysis isn’t crafted carefully, the ways to skew results will multiply.
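
To make the gaming concern concrete, consider the h-index, one of the simpler bibliometric indicators.  A minimal sketch in Python (with invented publication records) shows how two very different bodies of work can collapse to the same score – exactly the kind of compression that rewards padding the counts over improving the work.

    def h_index(citations):
        """Largest h such that at least h papers have at least h citations each."""
        h = 0
        for rank, count in enumerate(sorted(citations, reverse=True), start=1):
            if count >= rank:
                h = rank
            else:
                break
        return h

    # Two hypothetical records, same h-index despite very different impact:
    print(h_index([10, 8, 5, 4, 3]))        # 4
    print(h_index([100, 90, 80, 4, 4, 4]))  # 4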

More Journals Should Learn from Failure

May 30th, 2009

Posted by: admin

I’m not speaking about the ongoing sea changes in print journalism, but about the tendency of researchers to submit, and scientific publishers to print, successful research while ignoring the failures.  While anyone in any field can learn from what went wrong, certain areas of research (medicine and engineering come to mind) are in a particularly strong position to contribute to knowledge, and to the use of that knowledge, by reporting what failed.  The same is true of history, where learning why something didn’t happen can provide insight.  Knowing that something didn’t work in a field can shift choices in products purchased, treatments sought, or services offered.  Yet most published research excludes that kind of result.

One small step toward addressing this neglect of failure is starting with the journal Restoration Ecology.  According to Nature, the editors of Restoration Ecology will host a regular section in their journal for reports on experiments and projects in the field that did not meet expectations.  Understanding the baggage attached to the word failure, the section is titled “Set-Backs and Surprises.”  Perhaps this will catch on with other journals, especially if the setback or surprise is coupled with some kind of analysis or discussion of lessons learned.  While policymakers are often focused more on successes than on what didn’t work, they do respond to lessons learned.  In that way, they may have a healthier attitude toward failure than the researchers they support.

Information on Emerging Diseases Relying More on Internet

May 25th, 2009

Posted by: admin

From the New England Journal of Medicine we have this review of digitally enabled public health surveillance.  The recent online activity over the swine flu and Google Flu Trends are only the latest efforts in a line of activity that goes back 15 years.  What started with reporting systems has grown to include news aggregation, mashups, and, yes, even the Internet darling of the moment, Twitter.  The authors are concerned, however, that the increasing online capacity for assessment and analysis not displace the work of public health practitioners and clinics.  Put another way, it’s fine to look up symptoms online, but you should still see a doctor if needed rather than self-diagnosing.  There are other concerns:

Information overload, false reports, lack of specificity of signals, and sensitivity to external forces such as media interest may limit the realization of their potential for public health practice and clinical decision making. Sources such as analyses of search-term use and news media may also face difficulties with verification and follow-up. Though they hold promise, these new technologies require careful evaluation. Ultimately, the Internet provides a powerful communications channel, but it is health care professionals and the public who will best determine how to use this channel for surveillance, prevention, and control of emerging diseases.

Given the title of the NEJM piece – “Digital Disease Detection – Harnessing the Web for Public Health Surveillance” – I think another pair of concerns to add is privacy and security.  If IP addresses can be tracked, which is true in some of these cases, then it is possible to connect particular incidents to particular individuals.  Unless it’s necessary to communicate with specific individuals, measures should be taken to preserve the anonymity of those whose information supports this public health monitoring.  The security of the databases and other systems using this information needs to be strong enough to guard against breaches and inadvertent exposure.
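
On the anonymity point, one plausible measure – my own sketch, not something proposed in the NEJM piece – is to replace raw IP addresses with keyed hashes before they reach the surveillance database, so records can still be linked to one another without identifying individuals.  The key and record format below are invented for illustration; in practice the key would be stored apart from the data and rotated.

    import hashlib
    import hmac

    SECRET_KEY = b"stored-separately-and-rotated"  # hypothetical key

    def pseudonymize_ip(ip_address: str) -> str:
        # Keyed hash: stable enough to link related records,
        # impractical to reverse without the key.
        digest = hmac.new(SECRET_KEY, ip_address.encode("utf-8"),
                          hashlib.sha256).hexdigest()
        return digest[:16]  # truncate to limit what a breach exposes

    record = {"query": "flu symptoms", "source": pseudonymize_ip("203.0.113.42")}
    print(record)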

Stimulus Oversight Includes Science Funding

May 9th, 2009

Posted by: admin

While this won’t satisfy everyone, oversight of stimulus spending will include scrutiny of the science agencies.  As the stimulus represents significant funding increases for most of these agencies, this only makes sense.  The House Science and Technology Committee has held hearings on the subject, and various agencies are training their personnel to be more aware of the potential for fraud and abuse.  While past practice suggests that NASA and DOE are the more likely sites of fraud and abuse, the sheer volume of the funding increase at the National Institutes of Health should draw additional scrutiny.

This is all well and good.  I simply hope that the differences between scientific misconduct and contractor fraud and abuse are remembered moving forward.  Both are serious, but finding evidence of each requires different skills and techniques, and the deception at the heart of each offense is very different.  Inappropriate cost overruns or profligate spending are not the same thing as fraudulent grant applications or fraudulent reporting of research results.  As contractor fraud is the more likely abuse, I’m concerned that incidents of it will be mistaken for scientific misconduct.  Given the challenges scientists have in communicating what they do right, problems in communicating what they do wrong are likely, and could be more damaging.

Funding Opportunity on the Scientific Workforce

April 30th, 2009

Posted by: admin

I make note of this particular grant opportunity because it’s supported by the National Institutes of Health.  The NIH is not necessarily the first agency that comes to mind for research on modeling the scientific workforce.  But the NIH has significantly more resources than most other federal research agencies – at one point its budget for physical sciences research was larger than the physical sciences research budget of the National Science Foundation.

You can read the Request For Applications for detailed information, but here are some important points.

(more…)

How Effective Are University Rankings?

April 21st, 2009

Posted by: admin

The European Commissioner for Science and Research, Janez Potočnik, recently blogged (H/T ScienceInsider) about a workshop focused on measuring and comparing universities.  The Commission has an Expert Group focused on assessing university research, and it recently released an interim report on the use of rankings to assess university-based research.  While acknowledging some honest disagreement over whether such rankings are useful, the group did note that the current methodologies behind rankings like the Times Higher Education-QS World University Rankings could use improvement.

What I found particularly interesting was Commissioner Potočnik’s idea that more coherent university rankings could stimulate the funding of university research.

Rankings, which have been shown to influence the behaviour of universities and promote strategic thinking and planning, could help universities to develop better management practices and thus attract potential external funders. This is in the end what we are trying to do – secure the future for research activities in universities which will benefit our societies for a long time to come…

While the notion is encouraging, I’m reminded of how universities (at least in the U.S.) have set goals like cracking the top 30 of a particular ranking or boosting their position on a magazine’s list.  These efforts often seem focused on the output – the numerical ranking – rather than on desired outcomes like increased research funding and quality.  While by no means perfect (and certainly more expensive), the more involved United Kingdom university assessments seem a better means of achieving those outcomes.
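
Part of the methodology problem is easy to see once you reduce a ranking to what it usually is under the hood: a weighted sum of indicators.  In the toy Python sketch below (indicator names and weights are invented, not those of any actual ranking), the same university’s composite score – and so its rank – swings with nothing more than a change in the weights.

    def composite_score(indicators, weights):
        # Weighted sum of normalized indicator scores (0-100 scale assumed).
        return sum(indicators[name] * weight for name, weight in weights.items())

    university = {"peer_review": 70, "citations_per_staff": 95, "staff_student_ratio": 60}

    weights_a = {"peer_review": 0.5, "citations_per_staff": 0.3, "staff_student_ratio": 0.2}
    weights_b = {"peer_review": 0.2, "citations_per_staff": 0.6, "staff_student_ratio": 0.2}

    print(composite_score(university, weights_a))  # 75.5
    print(composite_score(university, weights_b))  # 83.0 -- same data, different rank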

What’s Hiding in Old Data?

April 10th, 2009

Posted by: admin

The recent discovery of an exoplanet in Hubble Space Telescope images (H/T 80beats) is not so dramatic, until you learn that the images date from 1998.  The planet wasn’t missed through oversight; it took the application of a new technique for filtering starlight from images to reveal it.  Now astronomers are giddy at the prospect of finding more objects that were previously hidden.

While this makes the astronomy minor in me happy, the policy analyst in me wonders how much new material is hiding in old data, and how finding it can be encouraged.  On its face, digging through old records sounds like history, like archival research.  I can easily see some fields looking askance at such work as not cutting edge, unlikely to produce results, and generally a waste of time.  Perhaps the robotic forms of research I posted about earlier could do some of this work, but I think a certain amount of human judgment would still be required.  Maybe that will change.
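
For the curious, the flavor of that kind of reprocessing can be sketched in a few lines of Python – a toy illustration of subtracting static starlight from a stack of frames, with synthetic data standing in for the Hubble archive, and emphatically not the technique the astronomers actually used.

    import numpy as np

    rng = np.random.default_rng(0)
    # Fake archive: 20 frames of the same field, dominated by steady glare.
    frames = rng.normal(loc=100.0, scale=1.0, size=(20, 64, 64))
    frames[5, 30, 40] += 25.0  # a faint source hiding in one frame

    glare = np.median(frames, axis=0)  # per-pixel estimate of the static light
    residuals = frames - glare         # whatever the glare was hiding
    frame, row, col = np.unravel_index(np.abs(residuals).argmax(), residuals.shape)
    print(f"candidate in frame {frame} at pixel ({row}, {col})")  # frame 5, (30, 40)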

Whig History and Science Policy

April 7th, 2009

Posted by: admin

Science Progress gave two historians a few column inches to remind us that not all science and technology narratives reflect the history of their disciplines.  Folks focused on nanotechnology will find the article of interest, but the main points apply more broadly than to just the really, really small.  The lessons, if you want to boil them down (which is a lousy thing to do with history, but expected in blogging), resemble some obvious statements – but statements that aren’t effectively applied and are rarely considered when dealing with science and technology.  The Whig history mentioned here and in the Science Progress piece refers to historical treatments that cast current conditions as another step along a steady path of progress.

There is a history.  Nearly every person engaged with science and technology policy in the United States seems to think their field started and ended with Vannevar Bush in the late 1940s.  This ignores over 150 years of prior activity in the United States.  The Lewis and Clark Expedition and the U.S. Census are two ventures in the field that date back nearly to the founding of the republic.   The Forest Service and Geological Survey are also good pre-World War II examples of federal science and technology at work.

(more…)

Sisyphean Quest to Reform OTA Continues

March 31st, 2009

Posted by: admin

In what appears to have nothing to do with the Harold Varmus appearance I mentioned earlier this week, and seems coincidental with this essay by Gerald Epstein, there is another push to re-establish the Office of Technology Assessment (OTA).  The OTA was an office within Congress that provided advice on science and technology issues to its members.  It was defunded (but not officially disbanded) in the mid-1990s.  There are plenty of Prometheus posts connected to the OTA, but a good refresher would include this post with comments from OTA staffers, and the last big push to reinstitute some kind of technology assessment capacity for Congress.  More on that last push (late 2007) can be found at Denialism.

The recent push appears to start with the OTA’s remaining legislative champion, Representative Rush Holt of New Jersey.  According to Science Cheerleader (H/T The Intersection), Rep. Holt will request OTA funds this week and argue his case before appropriators in May.  Since the OTA was defunded rather than dissolved, a new appropriation is technically sufficient to restart the agency.  Whether Rep. Holt will be successful, we shall see.  I wish this movement weren’t so focused on reconstituting the past, as I’m not sure that’s the easiest (or best) means of re-establishing science and technology advisory capacity in the Congress.  At a minimum, it’s not the only way, yet the advocates seem to act as though it is.  If there’s a compelling reason for this, I’d love to hear it.

Technology in the Courts

March 18th, 2009

Posted by: admin

Two items of recent note about how technology supports – or doesn’t – the judicial process in the United States.

Soon in San Diego there will be the first courtroom test of evidence from a functional MRI (fMRI) machine.  Defendant’s counsel in the proceeding will introduce evidence from an fMRI scan to argue that the defendant was telling the truth.  Wired Science has the details.  In short, the idea is that lying is connected to particular activity in a part of the prefrontal cortex.  The idea has received its share of skepticism from researchers, and that may determine whether or not such evidence is ruled admissible.  Under California law, science-based evidence must be generally accepted in the relevant scientific community before the courts will accept it.  At the moment it’s unclear whether the technology will make the move from House to CSI (in TV terms).

In other court-related news, judges are now having to deal (H/T Scientific American’s 60-Second Science Blog) with jurors broadcasting news about their cases during trial via social networking sites like Twitter.  Arguably this is an old problem – jurors are supposed to keep their deliberations to themselves and to consider only what is admitted in court – augmented by recent technology.  Given the added expense of sequestering a jury (and the challenge of prohibiting such broadcasting and research in any setting), it’s unclear how the jury system will react to this trend.