OPPORTUNITIES FOR INNOVATION
Changing the processes of funding and managing science is not easy. In the course of our research we have identified a variety of challenges common to many organizations that support science for decision making. We present those below, followed by a preliminary list of opportunities for programmatic innovation, drawn from our interactions with program managers from across the federal government. As we argue in the final section, program managers could build on this list by coming together to share experiences and learn about new approaches.
Supporting Researchers: Program managers wanting to encourage reconciliation of supply and demand need to be aware of the challenges this poses for grant recipients. In the case of university research, for example, these challenges require thinking creatively about how to reorient incentives in an academic system that traditionally emphasizes publications, citations, grant-writing, patenting, and other metrics of scholarly merit instead of relationship-building and decision support.
Funding Cycles: Research agendas are often geared to relatively short lifecycles of three to five years. This timeline does not always match the needs or expectations of users. The normal duration of a grant may be too short to establish trusting relationships among producers of information and potential users. As one individual involved with emergency planning and management in the Pacific said, “Don’t even bother bringing your briefcase for the first two years… it takes that long before the stakeholders will trust you.”
In many cases normal funding cycles may be too slow to respond to user needs. There are exceptions, however. In the anthrax attacks following September 11, decision makers urgently required new research on testing and monitoring for anthrax. While the National Institute of Standards and Technology’s (NIST) normal research programs would not have addressed this need, the agency adapted to the urgent timeline and successfully met the demand. Funders need to recognize the need for flexible structures that can respond nimbly to changing problems, and to create them.
SEE LEADERSHIP IN AGENCIES: MIKE HALL
Evaluation and Performance Measures: Evaluation of research often focuses on quantitative measures, such as the number and citation impact of publications that emerge from a research grant. Such measures discourage and impede the pursuit of outcomes that, though qualitative rather than quantitative, relate more directly to a program’s mission and goals than traditional measures do.
Justifications used to secure support should provide the basis for developing criteria for program evaluation and should extend to program metrics and accountability. Too often, considerations of use presented in the process of securing support for a program are forgotten once the funds arrive.
Organizations: The culture and inertia of an organization tend to favor the existing way of doing things. While not necessarily bad, this inertia constrains entrepreneurship. Disciplinary stovepipes do not lend themselves to addressing interdisciplinary, complex societal issues. Individuals seeking to motivate more usable science must work to break down these divisions, or look for creative ways to organize in spite of existing structures. This means striving for a supportive environment in which managers can take risks and innovate in their development of programs.
There are many opportunities to enhance the creation of usable science and there have been many successes in the U.S. research enterprise, including within the climate science community. Many individuals make decisions that influence science programs, from Congressional staff, to Office of Management and Budget (OMB) examiners, to agency program managers, to members of NRC panels proposing priorities for research. Science-policy decision makers, reflecting institutional, political, and other constraints, play an essential role in shaping programs and their outcomes. Leverage points in this shaping process include writing requests for proposals or announcements of opportunity, setting budget priorities or examining budgets during the agency pass-back process, conducting or testifying at hearings, writing legislation, contributing to expert reports, reviewing proposals, and making funding decisions on individual grants. Individuals involved in any of these at any level of the science policy process have an opportunity to make decisions that improve the usability of science.
SEE CONGRESS AND NSF’S “BROADER IMPACTS” CRITERION
Mandate and Mission: The mandate and mission driving an agency or program can be quite broad, leaving room for interpretation and opportunities for new approaches. In almost all cases, federally funded science does have a mandate to address particular classes of problems, whether in defense, energy, safety, health, or national competitiveness. Moreover, such problems are often articulated in terms of desired social outcomes.
Metrics: Science is changing. Interdisciplinary efforts are far more common, and “broader impacts” or evidence of the use of science in society are becoming more common goals. This sea change may create room for new metrics commensurate with the task of creating usable science.
Review and Advisory Mechanisms: Through peer review and expert advice, the prioritization and decision-making process for science has remained largely within the scientific community. A science manager might consider expanding review and advisory processes to include a wider cross-section of experts, including potential users, who can assess usability and relevance along with scholarly merit. Both NOAA (RISA and Sectoral Applications Research Program [SARP]) and NASA Applied Sciences have experimented with this in some of their programs.
Science-policy decision makers, especially those involved with distributing resources, have a unique opportunity to foster dialogue among existing constituencies through workshops, town halls at science and professional conferences, hearings, and so on. Often these are high-value activities taken on in addition to core responsibilities. Program managers can work to demonstrate the benefits of such endeavors while looking for ways to make them part of job descriptions, performance evaluations, and other metrics.
SEE LEADERSHIP IN CONGRESS: GEORGE BROWN