Archive for the ‘Scientific Assessments’ Category

Maps of Science: Possible Policy Tool?

March 17th, 2009

Posted by: admin

Wired Science highlighted a new effort in “mapping” science – a map built from the clickstreams of readers’ searches, showing connections between fields.  It’s the work of a team at the Los Alamos National Laboratory, recently published in PLoS ONE.  A main distinction of this work from others (examples can be found at Maps of Science and Places and Spaces) is that it analyzes web searches, while other maps have tracked journal citations.  Both are useful, but there is an important difference: citation traffic highlights what scientific communities consider important in their specific fields, while search traffic focuses more attention on connections between knowledge clumps.

Map of Science

I like this kind of work, and the other mapping exercises like those sampled in the Wired Science post, because I don’t think enough attention is paid to the interrelationships between clumps of knowledge.  And because the research questions that spawn those clumps often aren’t the same as policy questions, maps showing possible connections can guide policymakers to relevant knowledge, or to gaps in knowledge, better than a traditional literature search can.
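The paper derives its map from logs of how readers move between journals.  As a loose illustration of the general idea – not the Los Alamos team’s actual method, and with entirely invented session data and field names – here is a minimal Python sketch that counts hops between fields in hypothetical clickstream sessions and keeps the strongest links as map edges:

```python
from collections import Counter

# Hypothetical clickstream sessions: the sequence of fields a reader's
# journal-to-journal clicks passed through. All data here is invented.
sessions = [
    ["ecology", "statistics", "epidemiology"],
    ["statistics", "epidemiology", "public health"],
    ["ecology", "statistics", "machine learning"],
    ["epidemiology", "public health"],
]

# Count how often readers moved directly between two fields.
# Edges are undirected: a hop in either direction links the fields.
edges = Counter()
for session in sessions:
    for a, b in zip(session, session[1:]):
        edges[frozenset((a, b))] += 1

# Keep only links seen more than once -- the "roads" on the map.
for pair, weight in sorted(edges.items(), key=lambda e: -e[1]):
    if weight > 1:
        print(" <-> ".join(sorted(pair)), f"(weight {weight})")
```

A citation-based map would instead count reference links between papers; the clickstream version captures what readers actually traverse, including hops between fields that rarely cite one another.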

Math Errors Not Limited to NASA

March 15th, 2009

Posted by: admin

In what reminded me of the 1999 unit-conversion error that led to the loss of the Mars Climate Orbiter, Scientific American’s 60 Second Science Blog noted a math error that contributed to the shuttering of FutureGen, a clean-coal test-bed project, in 2008.  The Government Accountability Office released a report last week noting that the cost assessments for the project failed to consistently account for inflation.  This led to cost figures that appeared higher than they actually were, and the perceived cost overruns led to the cancellation of the project.

It’s entirely possible that FutureGen could not have delivered on what was promised.  But that gamble is better made when cost estimates, and other related math, are done properly.
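The GAO’s point is easy to see with a little arithmetic.  A back-of-the-envelope sketch – the dollar figures and inflation rate here are invented, not FutureGen’s actual numbers – shows how comparing a baseline quoted in constant dollars against a later estimate quoted in as-spent (inflated) dollars manufactures an apparent overrun:

```python
# Invented numbers for illustration -- not FutureGen's actual figures.
baseline_constant = 1.0e9   # original estimate, in constant (base-year) dollars
years_elapsed = 4
inflation = 0.03            # assumed 3% annual inflation

# Later estimate expressed in as-spent (inflated) dollars.
later_as_spent = baseline_constant * (1 + inflation) ** years_elapsed

# Naive comparison, mixing constant and as-spent dollars:
apparent_overrun = (later_as_spent - baseline_constant) / baseline_constant
print(f"apparent overrun: {apparent_overrun:.1%}")   # ~12.6%

# Consistent comparison, deflating the later estimate to base-year dollars:
later_constant = later_as_spent / (1 + inflation) ** years_elapsed
real_overrun = (later_constant - baseline_constant) / baseline_constant
print(f"real overrun: {real_overrun:.1%}")           # 0.0% -- no growth at all
```

Same project, same dollars, two very different stories – which is why the GAO insists the accounting be done consistently.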

UK Chief Scientist Argues for More Science Advice in the EU

March 13th, 2009

Posted by: admin

Professor John Beddington, the U.K. Government’s Chief Scientific Adviser, argued in remarks to BBC News (H/T Nature News) that the European Union needs stronger scientific advice.  He specifically recommended following the American model, oddly enough, pointing to President Obama’s “dream team” as a good example for Europe to follow.  Professor Beddington was positive about the research support provided to the Commission, but felt that more “brutal” policy advice is needed.

Perhaps he didn’t want to appear self-serving, but it appears to me that the U.K. model is much better suited to what Professor Beddington wants than the American model is.  Throughout the BBC News piece you’ll note descriptions of the British system (which includes scientific advisers in 17 different departments) as independent, proactive and sometimes irritating.  While that certainly describes science policy advocates in this country, American science advisers are not set up to be independent or proactive – at least not those advisers with formal government positions.  So I am a bit perplexed as to why the less independent system would be advanced as the example to follow.

Peer Review – Great Idea, but the Execution Needs Work

March 5th, 2009

Posted by: admin

From the folks at Scientific Blogging I read a synopsis of a Policy Forum in the March 5 issue of Science (subscription required) on what happens to articles that appear similar to previously published work.  According to a team at the University of Texas Southwestern Medical Center, a data analysis (conducted with software the team developed) of two years’ worth of abstracts in the Medline database turned up nearly 70,000 citations that appeared highly similar.  After a deeper analysis of a sample of those citations, the team found 207 pairs of articles that suggested possible plagiarism.

Before you go tsk-tsking about the gap between scientific ideals and scientific practice, I do want to point out that with an ever-increasing number of journals and papers, conducting a thorough literature review is getting harder to do.  That said, researchers, editors and reviewers need to do a better job of reviewing the extant literature.  Perhaps tools similar to the one developed by the UT Southwestern researchers are needed to make sure plagiarism – both intentional and inadvertent – is confronted.

The numbers are disappointing.
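I don’t know the internals of the team’s software, but the general idea of flagging suspiciously similar abstracts can be sketched in a few lines.  Here is a minimal illustration using Python’s standard-library difflib – the abstracts are invented, and real tools working at Medline scale are far more sophisticated:

```python
from difflib import SequenceMatcher

# Invented abstracts for illustration.
abstract_a = ("We measured the effect of drug X on blood pressure "
              "in a randomized trial of 200 patients.")
abstract_b = ("We measured the effect of drug X on blood pressure "
              "in a randomized study of 200 subjects.")
abstract_c = "Glacier retreat in the Alps has accelerated since 1980."

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two texts."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Pairs above some threshold would be queued for human review,
# since high similarity alone does not prove plagiarism.
for x, y, label in [(abstract_a, abstract_b, "A vs B"),
                    (abstract_a, abstract_c, "A vs C")]:
    print(label, f"{similarity(x, y):.2f}")
```

Note the two-stage structure mirrors the study’s approach: cheap automated screening first, then human judgment on the flagged sample.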


One Picture of the State of Science

March 1st, 2009

Posted by: admin

I’m a bit behind in my dead-tree reading, but I still think the SEED magazine feature, The State of Science, is worth posting about.  Sadly, I think it reads better in the print magazine than in the online version, but you work with what you have.  The feature includes snapshots of five rising scientists and five science cities, as well as perspectives from a science historian (Steven Shapin) and a science innovator (Craig Venter).  The survey of scientists is not yet online but should be soon.  Scattered throughout the feature are discussions of underlying issues integral to all of science – intellectual property, resources, publishing, measurements, and public perception.

Each reader will probably take away something a bit different from this work; there’s so much there to choose from.  I was particularly taken with the section on measurements and informatics, as well as the different ways various points were represented graphically.  With so many assumptions and models throughout science and science policy, new ways of seeing and questioning what we do and how we do it are welcome.

Broadband and Measurement

February 23rd, 2009

Posted by: admin

In a sort-of follow-up to my post on the deficit, I offer this New York Times Bits blog post on a study indicating that, contrary to the hue and cry about the United States falling behind in broadband deployment, the U.S. is actually number one.  As you might expect, it’s because the measurement is not of connection speed, nor of the percentage of homes with broadband.  In those areas the U.S. does rank behind other countries.  But on the “Connectivity Scorecard” devised by the researcher behind the study, the U.S. ranks number one, as its consumers, businesses and government supposedly make more efficient use of its broadband.

While conflicting budget numbers are a problem mostly because they can hide the nature of the nation’s financial health, these conflicting broadband measures try to hide a values or policy discussion behind numbers.  If you find a study saying the U.S. is number one in broadband, then you don’t worry about it, and you certainly have little motivation to debate what the nation should expect from broadband.  Use the numbers as part of that debate, not as replacements for it.
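To see how a composite score can smuggle in the values question, consider a toy example – the countries, component scores, and weights below are all invented: the same underlying data produces different “winners” depending on how you weight speed, penetration, and efficiency of use.

```python
# Invented data: each country scored 0-100 on three broadband measures.
data = {
    "Country A": {"speed": 60, "penetration": 70, "efficiency": 95},
    "Country B": {"speed": 90, "penetration": 90, "efficiency": 60},
}

def composite(scores, weights):
    """Weighted average of the component scores."""
    return sum(scores[k] * w for k, w in weights.items())

# Weighting efficient use heavily (one set of values)...
usage_weights = {"speed": 0.2, "penetration": 0.2, "efficiency": 0.6}
# ...versus weighting raw deployment heavily (a different set of values).
deploy_weights = {"speed": 0.45, "penetration": 0.45, "efficiency": 0.1}

for name, weights in [("usage-centric", usage_weights),
                      ("deployment-centric", deploy_weights)]:
    ranking = sorted(data, key=lambda c: composite(data[c], weights),
                     reverse=True)
    print(name, "winner:", ranking[0])
```

The weights are the policy argument; the ranking just restates them.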

Grissom Left Vegas Just in Time

February 5th, 2009

Posted by: admin

From today’s New York Times we have a report on a pending National Academies study (probably from this Academies unit) that will put forth a significant critique of current criminal forensics techniques.  Those who are frustrated by the use of science in politics and policy may well go apoplectic if they are unfamiliar with the use of science in the courts and law enforcement.  The report was prompted by failures at several crime labs a few years ago, and could prompt greater supervision or regulation of the field.  The final report has yet to be released, so the recommendations mentioned in the Times piece may change.

In short, and hopefully not surprisingly, the television programs are much better at telling stories than at describing accurate scientific practice in forensics.  But it does seem that, both in fiction and in reality, the accuracy of the science is overstated.  While I do not object to having different standards of proof in science than in law, the overstatement of accuracy is a problem in both venues.

Office of Technology Assessment Archive Available Online

January 31st, 2009

Posted by: admin

This isn’t new, but it is new to me.  The Federation of American Scientists has been hosting reports and other scholarship on the Office of Technology Assessment (OTA) online (H/T Science Cheerleader).  They’ve been at it for almost a year, if the archive’s blog is any indication.  For those who aren’t familiar with the OTA: it was a congressional office that conducted technology assessments for Congress.  It ran from the early 1970s until the mid 1990s, when it was a casualty of the Newt Gingrich-led Republican revolution.  That Mr. Gingrich fancies himself a champion of science and technology is enough to trigger a sense of irony in many people.  Peruse the archive at your leisure, along with the other work of the Federation (for instance, they are frequently a good source for Congressional Research Service reports).

PLoS Looking for New Metrics

January 24th, 2009

Posted by: admin

The Scientist (registration required) reports on a decision by the Public Library of Science (PLoS) to release additional measures for some of its articles over the next several months.  For journals, the traditional metric of a publication’s quality is a citation index, which yields the ‘impact factor’ of the publication and its articles.  What PLoS intends to do is put up several different measures for its articles – including “usage data, page views, citations from Scopus and CrossRef, social networking links, press coverage, comments, and user ratings for each of PLoS ONE’s thousands of articles.”  From there it will be up to the users to see which metrics are used, and to what effect.  As PLoS is not currently listed in the major science-oriented citation index, this is a necessary measure to help demonstrate the value of its research, independent of the benefits of open access.
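For context on what PLoS is routing around: the standard journal impact factor is a simple ratio – citations received this year to items the journal published in the previous two years, divided by the number of citable items published in those two years.  A sketch with invented numbers:

```python
# A journal's 2008 impact factor (invented numbers for illustration):
# citations received in 2008 to items the journal published in 2006-2007,
# divided by the number of citable items it published in those two years.
citations_in_2008_to_2006_07 = 4200
citable_items_2006_07 = 1500

impact_factor = citations_in_2008_to_2006_07 / citable_items_2006_07
print(f"2008 impact factor: {impact_factor:.2f}")  # 2.80
```

Article-level metrics replace that single journal-wide average with per-article signals – the page views, comments, and ratings PLoS lists above – so an article can be judged on its own traffic rather than its journal’s.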

Humanities Indicators Now Available

January 11th, 2009

Posted by: admin

As part of its Initiative on the Humanities and Culture, the American Academy of Arts and Sciences released its Online Humanities Research Center this past week.  Analogous to the National Science Foundation’s Science and Engineering Indicators, the Humanities Indicators available at that site are an effort to establish a systematic compendium of data on humanities research and education in the United States (this would include the history and philosophy of science and medicine, but not the other disciplines usually found under the Science and Technology Studies/Science and Technology Policy Research umbrellas).

The indicators volume covers humanities education (K-graduate), the humanities workforce, research and funding in the field, and aspects of the humanities in American life (such as language, reading, and historic preservation).  Companion essays help place the indicators in various contexts, including public and higher education as well as public life.  One essay also outlines the idea of a humanities workforce that parallels the idea of a science and engineering workforce.  The essays are helpful in underlining a major goal of the project – to have a better understanding of the humanities and their ability to engage with various publics.  I find it a good model for other areas that have had challenges in connecting their academic research with practitioners ‘in the field.’
