Following presentation and discussion of the case studies, participants were asked to generate a
set of heuristics, targeted at policy makers, and focused on achieving beneficial use of
predictions in the policy process. The heuristics were expected to be generated from common
mistakes made (and subsequent lessons learned) from the case studies. Because each of the
lessons is derived from actual experience, it can be considered non-trivial and non-obvious, at
least to some subset of actors. The general question asked of the group was:
How can policy makers (or, more accurately, those informing policy makers) evaluate the
effectiveness of scientific predictions in the policy process?
Following the second day of the workshop, the organizers developed a list of 13 cross-cutting
"straw principles" that were emerging from the case histories in group discussion. On the final
morning of the workshop, the group together discussed, modified, and extended the list, leading
to 24 lessons from the nine cases that would merit attention by policy makers in any context in
which predictions were sought. These lessons are reproduced below in raw form, as they were
presented at the workshop. Subsequent discussion will refine and condense them into a
common format, which will then form the basis for revision of the case studies. A first cut at
organizing the lessons under unifying questions, developed with input from workshop
participants, is provided after the raw version.
LESSONS LEARNED (raw version)
- State the [purpose(s)/policy goal(s)] of the prediction.
- Examine alternatives to prediction for achieving the purpose. Maintain flexibility of the
system as work on the prediction proceeds.
- Examine alternative impacts [on society] that might result from the prediction.
- Evaluate past predictions in terms of
a) impacts on society and b) scientific validity.
- Recognize that different approaches can yield equally valid predictions.
- Recognize that the prediction itself can be a significant event.
- [Subtract the costs/Assess the impacts] of inadequate predictions [from the
benefits/relative to the impacts] of successful ones.
- Recognize that prediction may be more successful at bringing problems to attention than at
forcing them to an effective solution.
- Recognize that prediction is not a substitute for data collection, analysis, or experience.
- Recognize that predictions are always uncertain.
- Pay attention to conflicts of interest.
- Recognize that a choice to focus on prediction will constrain future policy alternatives.
- Beware of precision without accuracy.
- Recognize that [quantification/prediction] is not a) accuracy; b) certainty; or c) relevance.
- Understand who becomes empowered when the prediction is made. Who are the winners
and losers?
- Computers hide assumptions. Computers don't kill predictions, assumptions do.
- Recognize that perceptions of predictions may differ from what predictors intend [and
may lead to unexpected responses].
- Recognize that the societal benefits of a prediction are not necessarily a function of its
accuracy.
- Pay attention to the ethical issues raised by the release of predictions.
- Make the prediction methodology as transparent as possible.
- Predictions should be communicated a) in terms of their implications for societal
response and b) in terms of their uncertainties.
- Predictions should be developed with an awareness of international context.
- Recognize that the science base may be inadequate for a given type of prediction.
- Recognize that there are many types of prediction, and their potential uses in society are
equally varied.