There has been much discussion in the pages of Ogmius on the use of scientific information in decision making. Many of the projects of the Center for Science and Technology Policy Research revolve around studying science policies that enable the creation of scientific information that is “more usable.” Often, these policies must negotiate and challenge established scientific norms and cultures, cultures that are extremely valuable in producing top-rated basic research but not necessarily in producing information that is more obviously useful to society. Lisa Dilling, a visiting fellow at the Center, is working on just such a conundrum in the carbon cycle science arena. Carbon cycle science is one of the highlighted topics of the U.S. Climate Change Science Program and has a long history as a scientific endeavor.
Designing a Carbon Program to Produce “Usable Science”
As carbon cycle science has become more organized in the United States, it has repeatedly been justified by statements that the science conducted in the program will be useful in supporting decision making or informing policy. With the emergence of carbon cycle science over the past several years as a prominent element of the U.S. Global Change Research Program, the Climate Change Research Initiative, and now the Climate Change Science Program, this goal has been reaffirmed. For example, the U.S. Carbon Cycle Science Plan calls for “coordinated, rigorous, interdisciplinary research that is strategically prioritized to address societal needs” and states that “the planned activities must not only enhance understanding of the carbon cycle, but also improve capabilities to anticipate future conditions and to make informed management decisions.”
For truly basic research, research that advances the frontier of knowledge solely for its own sake, being driven by scientific curiosity alone is likely a fine approach. But for providing information that specifically addresses societal needs, it is questionable whether basic research devoid of societal connection is a particularly effective mechanism to meet that goal. Previous research on policy-relevant scientific issues such as acid rain, ozone depletion, and water management has revealed that providing policy-relevant scientific information is a complex and delicate process. If deliberate, ongoing mechanisms are not put in place to connect the scientific priority-setting process with societal goals, research will tend to proceed on its own assumptions about what might be useful, perhaps only to find over time that its results are not very useful to decision makers.
How might one define “usable science”? Definitions used thus far place the character of usable science squarely in the realm of meeting users’ needs while maintaining the high quality of rigorous scientific research. Lemos and Morehouse suggest that “the knowledge produced should directly reflect expressed constituent needs, should be understandable to users, should be available at the times and places it is needed, and should be accessible through the media available to the user community.” They define usable knowledge as “that which can be incorporated into the decision-making processes of all stakeholders and which enhances their ability to avoid, mitigate or adapt to stressors in their environment.”
It therefore seems clear from these studies and others that creating usable science involves a two-way interaction between scientists and users of scientific knowledge. But which users, and at what scale? How does one select users, and what are the implications of selection of some users over others? And who does the selecting of users—the agency funding the research? The principal investigators? Congress? At what stage are users involved? In the writing of the proposal? Within the first year? In the priority setting process of the agency issuing a call for proposals? What happens if various users’ needs are in conflict and resources are limited? Whose priorities are followed, and by what process? These questions go to the heart of priority setting in scientific agendas and the role of public participation in science.
Moving toward usable science also carries with it several differences in the metrics, reward structures, and accountability of projects. As described by Nowotny et al. (2003), this type of knowledge production is by definition more socially distributed, application-oriented, transdisciplinary, and subject to multiple accountabilities. Rather than being subject only to the standards of peer review, science produced for use outside the scientific community is also accountable to the users it aims to serve. These multiple accountabilities, combined with the transdisciplinary nature of the work, can make it difficult for new researchers to build a career in this area. Non-traditional products such as face-to-face interactions may be more valuable to users than traditional deliverables such as journal articles. The time commitment required to interact with users and to work in an interdisciplinary environment over longer time scales can be at odds with the disciplinary reward structures that many researchers still face. Criteria for developing and evaluating usable science projects must therefore take these realities into account. How are these programs evaluated? What does success look like for a usable science program?
My colleagues and I are currently examining these questions. I organized a workshop held June 13-14 in Boulder, CO that brought together carbon cycle scientists, science policy decision makers, researchers of science policy, and experts in user-climate science interactions. The results of the workshop will contribute to developing a research and practice agenda for programs and scientists in carbon cycle science who are interested in serving the needs of users outside the scientific community. For more information visit the workshop website.

Lisa Dilling
Center for Science and Technology Policy Research