July 1997 Workshop
10-12 July 1997, Boulder, Colorado, USA

Synthesis

Following presentation and discussion of the case studies, participants were asked to generate a set of heuristics, targeted at policy makers and focused on achieving beneficial use of predictions in the policy process. The heuristics were expected to emerge from common mistakes made (and subsequent lessons learned) in the case studies. Because each of the lessons is derived from actual experience, each can be considered non-trivial and non-obvious, at least to some subset of actors. The general question asked of the group was:

How can policy makers (or more accurately, those informing policy makers) evaluate the effectiveness of scientific predictions in the policy process?

Following the second day of the workshop, the organizers developed a list of 13 cross-cutting "straw principles" that were emerging from the case histories in group discussion. On the final morning of the workshop, the group together discussed, modified, and extended the list, arriving at 24 lessons from the nine cases that would merit attention by policy makers in any context in which predictions are sought. These lessons are reproduced below in raw form, as they were presented at the workshop. Subsequent discussion will refine and condense them into a common format, which will then form the basis for revising the case studies. A first cut at organizing the lessons around unifying questions, with input from workshop participants, is provided after the raw version.

LESSONS LEARNED (raw version)

  1. State the [purpose(s)/policy goal(s)] of the prediction.

  2. Examine alternatives to prediction for achieving the purpose. Maintain flexibility of the system as work on predictions proceeds.

  3. Examine alternative impacts [on society] that might result from the prediction.

  4. Evaluate past predictions in terms of a) impacts on society and b) scientific validity.

  5. Recognize that different approaches can yield equally valid predictions.

  6. Recognize that the prediction itself can be a significant event.

  7. [Subtract the costs/Assess the impacts] of inadequate predictions [from the benefits/relative to the impacts] of successful ones.

  8. Recognize that prediction may be more successful at bringing problems to attention than at forcing them to effective solution.

  9. Recognize that prediction is not a substitute for data collection, analysis, experience, or reality.

  10. Recognize that predictions are always uncertain.

  11. Pay attention to conflicts of interest.

  12. Recognize that a choice to focus on prediction will constrain future policy alternatives.

  13. Beware of precision without accuracy.

  14. Recognize that [quantification/prediction] is not a) accuracy; b) certainty; c) relevance; d) reality.

  15. Understand who becomes empowered when the prediction is made. Who are the winners and losers?

  16. Computers hide assumptions. Computers don't kill predictions, assumptions do.

  17. Recognize that perceptions of predictions may differ from what predictors intend [and may lead to unexpected responses].

  18. Recognize that the societal benefits of a prediction are not necessarily a function of its accuracy.

  19. Pay attention to the ethical issues raised by the release of predictions.

  20. Make the prediction methodology as transparent as possible.

  21. Predictions should be communicated a) in terms of their implications for societal response and b) in terms of their uncertainties.

  22. Predictions should be developed with an awareness of international context.

  23. Recognize that the science base may be inadequate for a given type of prediction.

  24. Recognize that there are many types of prediction, and their potential uses in society are diverse.
