Comments on: Prediction and Decision
http://cstpr.colorado.edu/prometheus/?p=3954
Wed, 29 Jul 2009 22:36:51 -0600

By: Roger Pielke, Jr.
Tue, 03 Oct 2006 16:54:27 +0000
http://cstpr.colorado.edu/prometheus/?p=3954&cpage=1#comment-6053

Markk-

Thanks for these comments. In our chapter on earthquakes, by Joanna Nigg, you find exactly this sort of thinking.

Thanks!

By: Markk
Tue, 03 Oct 2006 13:13:13 +0000
http://cstpr.colorado.edu/prometheus/?p=3954&cpage=1#comment-6052

“the questions about what it is we should be predicting”
That is what I was trying to get at with the example of use cases. It seems like in a lot of situations simply having flexibility in the face of disaster is a better use of resources than actual prediction to a fine level. There are probably some different strategies for what to do depending on whether a fire is a wildfire or an earthquake-caused fire, but if you don’t have the basic firefighting and rescue resources, then what does a prediction gain you?

In a larger sense I think our (the U.S.) planning and use of predictions is much better than our implementation. I base this on local decisions on flood management where predictions were not bad, but actions didn’t happen anyway. Only by getting slammed in the face several times did actual mitigation efforts start to get funded.

By: James Annan
Tue, 03 Oct 2006 12:28:39 +0000
http://cstpr.colorado.edu/prometheus/?p=3954&cpage=1#comment-6051

“the questions about what it is we should be predicting”

Oh, put in those terms you’ll have no disagreement from me. I think (and certainly hope!) that all those who are involved in prediction realise that the value of the forecast depends very much on the end-user*, and that designing prediction systems ideally involves some interaction with these people – in my most “hands-on” experience (many years ago, in predicting disease spread in crops), that can involve both the scientists learning what the users need/want, and also helping to educate them about how unreasonable their expectations are and how we think they might be able to make use of the rather more limited info that we can hope to provide!

But it’s a broad subject, and in situations where we are some way from the operational sphere, there is plenty of justification for doing more “pure” prediction too – even if only as a way of developing and testing our understanding of the processes involved. In that phase, we predict what we can, partly in the hope/expectation that this will develop into a genuinely useful output in the future, and partly for more pure scientific reasons.

*Of course, where forecasts have a broad range of users with different requirements, standard skill measures are a lot easier to evaluate than an overall economic benefit.

By: Roger Pielke, Jr.
Tue, 03 Oct 2006 09:20:18 +0000
http://cstpr.colorado.edu/prometheus/?p=3954&cpage=1#comment-6050

James-

Thanks for your comment.

You are absolutely correct that any decision is based on some expectation about the future consequences of that decision. But at the same time there is a range of different approaches to assessing options and their consequences.

Tom Anderson’s Box 18.1 distinguishes between predicting geophysical events and predicting the performance of structures. He argues that predicting the performance of structures is far more tractable and should be the focus of attention if the goal is to build critical facilities. This has implications for research policies (e.g., balancing resources between, say, earthquake prediction and structural engineering research) and also hazards policies (e.g., what to build and where).

So while at one level I agree with you that we always rely on predictions as expectations of the future, what is far more interesting to me are the questions about what it is we should be predicting in order to provide useful information to decision makers in different contexts. There are real choices to be made about how and when to rely on different types of prediction.

Thanks!

By: Sylvain
Tue, 03 Oct 2006 07:42:32 +0000
http://cstpr.colorado.edu/prometheus/?p=3954&cpage=1#comment-6049

“In what contexts can good decisions result without accurate predictions?”

In my opinion, from what I have read, predictions aren’t a necessity for making good decisions, at least in climate science. All the more so because these predictions aren’t good enough for any particular area, and when taken globally they don’t mean much.

Here is why:

To my knowledge no single event has been recorded outside natural variability. Since we can’t know which area will be hit next by an extreme event (flood, drought, hurricane, tornado), each area is best served by adapting and preparing to face the worst event possible. For example, if you live along an oceanic coastline that could mean building levees that could withstand a strong Category 5 hurricane, housing that can sustain strong winds, and/or restricting access to those coastlines.

By: James Annan
Tue, 03 Oct 2006 07:08:37 +0000
http://cstpr.colorado.edu/prometheus/?p=3954&cpage=1#comment-6048

Roger,

I think you present a false dichotomy in your treatment of the subject. E.g. in your first Box 18.1 example, you present this as a case of “experiential information coupled with understanding” as an alternative to prediction. But the calculation of the 1/2500 risk is precisely a prediction that a large earthquake (sufficient to rupture the pipe) is that unlikely.

What you are talking about is, I think, well summarised as “decision making under uncertainty”. Whether you describe a specific act in terms of, e.g., taking action in the face of a prediction, or of making robust decisions in the face of uncertainty, seems mostly a matter of semantics to me.
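The point that a quoted risk figure is itself a prediction can be made concrete with a little arithmetic. The sketch below assumes, purely for illustration, that 1/2500 is an annual probability and that years are independent; real seismic hazard models are more involved than this.

```python
# Illustrative sketch: a "1-in-2500" annual risk figure, read as a
# prediction, implies a probability of at least one event over any
# planning horizon.  Assumes independent, identically likely years
# (a simplification made only for illustration).

def lifetime_probability(annual_prob: float, years: int) -> float:
    """Probability of at least one occurrence within the given horizon."""
    return 1.0 - (1.0 - annual_prob) ** years

annual = 1.0 / 2500.0
for horizon in (50, 100, 500):
    print(f"{horizon}-year horizon: {lifetime_probability(annual, horizon):.1%}")
```

On these assumptions a 1/2500 annual risk corresponds to roughly a 2% chance over a 50-year design life, which is exactly the kind of quantitative claim a decision maker acts on.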

By: Markk
Mon, 02 Oct 2006 14:16:21 +0000
http://cstpr.colorado.edu/prometheus/?p=3954&cpage=1#comment-6047

Hi,
Reading through the chapter, it struck me that it would be interesting to see the focus from the policy implementer’s point of view. The idea of “use cases” from systems design might be useful as a way to get criteria for measuring how predictions are useful. That is, one could take a situation, say flood prediction, and look at the following scenarios:

1) Perfect knowledge – suppose the policy maker had a perfect prediction, and knew it: what would they do with it? Sometimes this wouldn’t make much difference!

2) The policy maker has very good predictions, but they don’t know that…

and so on, down to the case where the predictions are bad but people think they are good.

The point is that looking at the policy maker’s actions, and how they would differ from case to case, would lead to some knowledge about which predictions to put effort into. Some predictions would serve just to change the political climate.
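The scenarios above amount to a small matrix crossing actual prediction quality with the policy maker’s confidence in it. A purely illustrative sketch, with hypothetical labels and outcomes chosen only to make the enumeration concrete:

```python
# Illustrative sketch of the scenario matrix above: actual prediction
# quality crossed with whether the policy maker believes the prediction.
# Labels and outcomes are hypothetical, chosen only to make it concrete.

cases = {
    ("good", "trusted"):    "prediction drives action (best case)",
    ("good", "distrusted"): "useful information goes unused",
    ("bad",  "distrusted"): "little benefit, but little harm",
    ("bad",  "trusted"):    "misplaced confidence (worst case)",
}

for (quality, belief), outcome in sorted(cases.items()):
    print(f"prediction {quality}, {belief}: {outcome}")
```

Walking the matrix this way highlights which cells actually change what the policy maker does, and so where prediction effort pays off.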
