

We encourage your correspondence, and although we cannot print all that we receive, we will include at least one short, perhaps edited, letter per issue.
Dear WeatherZine
The most recent issue was very poignant, indeed. Whether realized or not, the theme of editorial #1 ties together those of guest editorials #2 and #3 (see WeatherZine #20).
As a National Weather Service meteorologist, and one who has dealt with several of these issues from both sides of the fence, I felt compelled to respond.
The Prediction Hall of Fame
Roger Pielke hits the nail on the head with the idea that "technically accurate" weather predictions are only as good as the proactive actions taken by those most affected, which requires the "symphony" of communication and decision making. The view that capabilities have been "oversold" is widely held among operational forecasters, and it is somewhat justified by the simplistic marketing of Doppler radar and supercomputers as "cure-alls" rather than as tools to be used by human forecasters.
While technical training for operational forecasters has been sufficient, customer service training has been inadequate. With the exception of Warning Coordination Meteorologists and others who take part in storm surveys, public outreach, or verification, operational forecasters are poorly versed in how the end product is perceived. Most forecasters do not get to meet the local television, radio, and print media, and contact with emergency managers is rare. Some customer feedback does arrive via electronic mail or conventional mail, but there really is no substitute for meeting customers face-to-face.
Weather Forecast Limitations Point to Need for More Research and Modernization: The Challenge Continues
The juxtaposition of these editorials illustrates a growing disconnect between the direction of NOAA/NWS policymakers and that of service evaluators, both inside and outside of NOAA. Simply put, we are pitting automation against human interpretation.
In a post-mortem from the Washington Post (January 26), some were quoted as saying "...the Eta [forecast model] had it right." It was also implied that the Eta forecasts storms such as these better than most models. What wasn't said is that the Eta also has a known bias toward rapid cyclogenesis, which means that forecasters need to exercise caution and look at what is really going on before accepting the Eta solution, or any model solution.
Drs. Baker and Anthes make a strong claim for the need for more research based on the perceived failure to predict, with sufficient lead-time, the winter storm of January 25, 2000. Though it is unquestionable that more atmospheric research is needed, singling out this particular event is misguided.
The fact is, the perceived forecast bust could have been ameliorated with directed human actions, since there was a 9-hour lead time. NWS warning services do not consist solely of product issuance; in fact, issuing the product is often the last step in the process.
Conference calls involving state emergency management offices, state transportation offices, state school systems, and military support (i.e., the National Guard) can be initiated by NWS offices prior to official warning issuance, allowing rapid deployment of the personnel needed to keep a local region from "shutting down".
With rapid deployment and readiness comes a sense of preparedness. Thus, despite the fact that today's media are hungry for sensational reporting, the headlines for this event could very well have read "Storm a Surprise for Some" (lead headline) ... "But Highway Crews Were Ready" (sub-headline).
The best models in the world can't do this; human decision-making under time pressure can. Such decision-making requires strong confidence in predictive abilities, born of the forecaster selection process referred to by Dr. Stewart.
For years, verification and customer service efforts were de-emphasized, largely due to the attention given the entire modernization effort. Recently (some would say at long last) verification and customer service have been given their due. Unfortunately, the balance has shifted toward pure verification: the touting of numerical improvements in warning accuracy (the probability of detection, or POD), critical success index, and lead time. Even the false alarm rate, which rose initially after the implementation of Doppler radar, has begun to decrease.
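For readers unfamiliar with these scores, the sketch below (in Python, using hypothetical warning counts rather than any figures from NWS verification) shows how POD, the critical success index, and the false alarm ratio are computed from the standard 2x2 warning contingency table.

```python
# Hypothetical counts for one warning season (illustrative only).
hits = 80          # warned events that occurred
misses = 20        # events that occurred without a warning
false_alarms = 30  # warnings issued for events that never occurred

pod = hits / (hits + misses)                 # probability of detection
far = false_alarms / (hits + false_alarms)   # false alarm ratio
csi = hits / (hits + misses + false_alarms)  # critical success index

print(f"POD = {pod:.2f}, FAR = {far:.2f}, CSI = {csi:.2f}")
# POD = 0.80, FAR = 0.27, CSI = 0.62
```

None of these scores, of course, captures whether the end user had the time or resources to act on the warning.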
However, the true measure of forecast and warning utility is the fair perception of the end user. Forecasters often believe they have succeeded with short-fused, and in some cases long-fused, warnings when, upon further evaluation, the end users did not have enough time or resources to take preventive action.
Dr. Stewart makes oblique reference to what I call forecast alchemy: the combination of computer models with forecaster skill and a "sixth sense" that comes from observing the sky, the wind, the air mass type, and so on. This alchemy is then applied to the climatological nuances of the local area to issue the best possible forecasts and warnings.
For these reasons, there will always be a need for local forecast offices – be they public or private – as well as humans to fill them.
— Barry S. Goldsmith
Senior Forecaster
National Weather Service
Tampa Bay, FL
Comments? thunder@ucar.edu