Prediction and Decision

October 2nd, 2006

Posted by: Roger Pielke, Jr.

Across a number of threads comments have arisen about the role of forecasting in decision making. Questions that have come up include:

What is a good forecast?
When should research forecasts transition to operational forecasts?
What sorts of decisions require quantitative probabilities?
In what contexts can good decisions result without accurate predictions?

It was questions like these that motivated Rad Byerly, Dan Sarewitz, and me to work on a project in the late 1990s focused on prediction. The results of this work were published in a book by Island Press in 2000, titled “Prediction.”

With this post I’d like to motivate discussion on this subject, and to point to our book’s concluding chapter, which may provide a useful point of departure:

Pielke Jr., R. A., D. Sarewitz and R. Byerly Jr., 2000: Decision Making and the Future of Nature: Understanding and Using Predictions. Chapter 18 in Sarewitz, D., R. A. Pielke Jr., and R. Byerly Jr. (eds.), Prediction: Science, Decision Making, and the Future of Nature. Island Press: Washington, DC. (PDF)

See in particular Table 18.1 on p. 383, which summarizes the criteria we developed, framed as questions that might be used to “question predictions.”

Comments are welcome on any of the questions raised above, and on others as appropriate.

7 Responses to “Prediction and Decision”

  1. Markk Says:

    Hi,
    Reading through the chapter, it struck me that it would be interesting to see the focus from the policy implementer’s point of view. The idea of “use cases” from systems design might be useful as a way to derive criteria for measuring how predictions are useful. That is, one could take a situation, say flood prediction, and look at the following scenarios:

    1) Perfect knowledge – suppose the policy maker had a perfect prediction, and they knew it: what would they do with it? Sometimes this wouldn’t make much difference!

    2) The policy maker has very good predictions, but they don’t know that…

    and so on, down to the case where there are bad predictions but people think they are good.

    The point being that looking at the policy maker’s actions, and how they would differ from case to case, would lead to some knowledge about which predictions to put effort into. Some predictions would serve just to change the political climate.
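
    As a rough sketch, these scenarios could be enumerated as a grid of actual prediction quality versus the policy maker’s belief about that quality (the actions below are invented placeholders, not real policy results):

        from itertools import product

        # Hypothetical use-case grid: actual prediction quality vs. the
        # policy maker's belief about it. Actions are illustrative only.
        QUALITY = ["good", "bad"]
        BELIEF = ["believed good", "believed bad"]

        def likely_action(quality, belief):
            """Sketch of how a policy maker might act in each case."""
            if belief == "believed good":
                # Risky when quality is actually "bad": acting on noise.
                return "acts on the forecast"
            return "falls back on flexible, no-regret measures"

        for quality, belief in product(QUALITY, BELIEF):
            print(f"prediction {quality}, {belief}: {likely_action(quality, belief)}")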

  2. James Annan Says:

    Roger,

    I think you present a false dichotomy in your treatment of the subject. E.g., in your first Box 18.1, you present this as a case of “experiential information coupled with understanding” as an alternative to prediction. But the calculation of the 1/2500 risk is precisely a prediction that a large earthquake (sufficient to rupture the pipe) is that unlikely.

    What you are talking about is, I think, well summarised as “decision making under uncertainty”. Whether you describe a specific act in terms of, e.g., taking action in the face of a prediction, or of making robust decisions in the face of uncertainty, seems mostly a matter of semantics to me.
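
    To make that concrete: even a coarse figure like 1/2500 per year is exactly what the standard expected-loss comparison consumes. A toy sketch, with all dollar figures invented for illustration:

        # Toy expected-value comparison; all figures below are invented.
        p_rupture = 1.0 / 2500           # annual probability of a pipe-rupturing quake
        loss_if_rupture = 50_000_000     # hypothetical cost of a rupture ($)
        annual_mitigation_cost = 10_000  # hypothetical cost of hardening the pipe ($/yr)

        expected_annual_loss = p_rupture * loss_if_rupture  # $20,000 per year
        if annual_mitigation_cost < expected_annual_loss:
            print("mitigate: cheaper than the expected annual loss")
        else:
            print("accept the risk: mitigation costs more than it saves")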

  3. Sylvain Says:

    “In what contexts can good decisions result without accurate predictions?”

    In my opinion, from what I have read, predictions aren’t a necessity for making good decisions, at least in climate science, especially since these predictions aren’t good enough for any particular area, while taken globally they don’t mean much.

    Here is why:

    To my knowledge no single event has been recorded outside natural variability. Since we can’t know which area will be hit next by an extreme event (flood, drought, hurricane, tornado), each area is best served by adapting and preparing to face the worst event possible. For example, if you live along an oceanic coastline, that could mean having levees that could withstand a strong Category 5 storm, housing that can withstand strong winds, and/or restricting access to those coastlines.

  4. Roger Pielke, Jr. Says:

    James-

    Thanks for your comment.

    You are absolutely correct that any decision is based on some expectation about the future consequences of that decision. But at the same time there is a range of different approaches for assessing options and their consequences.

    Tom Anderson’s Box 18.1 distinguishes between predicting geophysical events and predicting the performance of structures. He argues that predicting the performance of structures is far more tractable and should be the focus of attention if the goal is to build critical facilities. This has implications for research policies (e.g., balancing resources between, say, earthquake prediction and structural engineering research) and also for hazards policies (e.g., what to build and where).

    So while at one level I agree with you that we always rely on predictions as expectations of the future, what is far more interesting to me are the questions about what it is we should be predicting in order to provide useful information to decision makers in different contexts. There are real choices to be made about how and when to rely on different types of prediction.

    Thanks!

  5. James Annan Says:

    “the questions about what it is we should be predicting”

    Oh, put in those terms you’ll have no disagreement from me. I think (and certainly hope!) that all those who are involved in prediction realise that the value of the forecast depends very much on the end-user*, and that designing prediction systems ideally involves some interaction with these people. In my most “hands-on” experience (many years ago, predicting disease spread in crops), that involved both the scientists learning what the users need/want, and also helping to educate them about how unreasonable their expectations are and how we think they might be able to make use of the rather more limited info that we can hope to provide!

    But it’s a broad subject, and in situations where we are some way from the operational sphere, there is plenty of justification for doing more “pure” prediction too – even if only as a way of developing and testing our understanding of the processes involved. In that phase, we predict what we can, partly in the hope/expectation that this will develop into a genuinely useful output in the future, and partly for more pure scientific reasons.

    *Of course, where forecasts have a broad range of users with different requirements, standard skill measures are a lot easier to evaluate than an overall economic benefit.
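
    For instance, a standard skill measure like the Brier score needs only forecast probabilities and observed outcomes, whereas an overall economic benefit would require each user’s cost/loss structure. A minimal sketch, with made-up numbers:

        # Brier score for probabilistic forecasts of a binary event:
        # mean squared difference between forecast probability and outcome.
        # The forecasts and outcomes below are made-up examples.
        forecasts = [0.9, 0.1, 0.7, 0.3]   # forecast probabilities of the event
        outcomes  = [1,   0,   0,   1]     # 1 = event occurred, 0 = it did not

        brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
        print(f"Brier score: {brier:.3f}")  # 0 is perfect; lower is better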

  6. Markk Says:

    “the questions about what it is we should be predicting”

    That is what I was trying to get at with the example of use cases. It seems like in a lot of situations simply having flexibility in the face of disaster is a better use of resources than prediction to a fine level. There are probably some different strategies for what to do depending on whether a fire is a wildfire or an earthquake-caused fire, but if you don’t have the basic firefighting and rescue resources, then what does a prediction gain you?

    In a larger sense I think our (the U.S.) planning and use of predictions is much better than our implementation. I base this on local decisions on flood management where predictions were not bad, but actions didn’t happen anyway. Only by getting slammed in the face several times did actual mitigation efforts start to get funded.

  7. Roger Pielke, Jr. Says:

    Markk-

    Thanks for these comments. In our chapter on earthquakes, by Joanne Nigg, you’ll find exactly this sort of thinking.

    Thanks!