OFFICIAL FORECAST VERSUS MODEL GUIDANCE: COMPARATIVE VERIFICATION FOR MAXIMUM AND MINIMUM TEMPERATURES

By Roman Krzysztofowicz and W. Britt Evans
University of Virginia
Charlottesville, Virginia

Research Paper RK
August 2007
Revised December 2008

Copyright © 2007 by R. Krzysztofowicz and W.B. Evans

Corresponding author address: Professor Roman Krzysztofowicz, University of Virginia, P.O. Box, Charlottesville, VA; rk@virginia.edu

ABSTRACT

A comparative verification is reported of 13,034 matched pairs of the National Weather Service official forecasts and MOS guidance forecasts of the daily maximum temperature prepared between 2 October 2004 and 28 February 2006. The total sample is arranged into 420 cases (5 stations in diverse climates, 7 lead times of 24–168 h, and 12 sampling windows of four-month length). The attributes being verified are informativeness, calibration, and accuracy. In addition, the performance of forecasts for extreme temperature events is examined in detail, and the potential marginal gain from combining the official forecast with the MOS guidance is evaluated. The verification measures and the statistical tests of significance support these conclusions. (i) The official forecast is consistently (in 79% of all cases) and significantly (in 32% of all cases) less informative than the MOS guidance; only for short lead times (24–48 h) and a few months per year is the relation reversed. (ii) Neither product is well calibrated (with significant miscalibration in 30–40% of the cases); the official forecast is slightly better calibrated as the median, while the MOS guidance is slightly better calibrated as the mean. (iii) For extreme day-to-day changes in the maximum temperature (having the climatic exceedance probability less than 0.1), the official forecast actually depreciates the informativeness and the accuracy of the MOS guidance. (iv) Combining the two forecasts would yield mostly sporadic and small marginal gains because the two forecasts are conditionally dependent, strongly and consistently in 96% of the cases, and the official forecast is uninformative (economically worthless), given the MOS guidance, in 36% of the cases. Similar patterns of performance are found in the daily minimum temperature forecasts: 9,799 matched pairs arranged into 300 cases (5 stations, 5 lead times of 36–132 h, and 12 sampling windows). While not exhaustive, the 420 + 300 = 720 cases are representative enough to call for

further investigation of the revealed patterns of forecast performance. If confirmed, these patterns should prompt re-examining and re-designing the role of the field forecasters to better suit the improved guidance products and the emerging paradigm of probabilistic forecasting.

TABLE OF CONTENTS

ABSTRACT
1. INTRODUCTION
   1.1 Basic Questions
   1.2 Experimental Design
   1.3 The Importance of Temperature Forecasts
   1.4 The Verifications of Temperature Forecasts
2. VERIFICATION METHODOLOGY
   2.1 Joint Samples
   2.2 Informativeness
   2.3 Calibration
   2.4 Accuracy
3. PERFORMANCE IN GENERAL
   3.1 Informativeness
   3.2 Calibration as Median
   3.3 Calibration as Mean
   3.4 Accuracy
4. PERFORMANCE FOR EXTREMES
   4.1 Forecast Accuracy
   4.2 Noisy Channel
5. WOULD COMBINING ADD VALUE?
   5.1 Conditional Independence and Uninformativeness
   5.2 Tests and Results
6. CLOSURE
   6.1 Summary
   6.2 Conclusion
   6.3 Discussion
ACKNOWLEDGMENTS
APPENDIX: RESULTS FOR DAILY MINIMUM TEMPERATURE
REFERENCES
TABLES
FIGURES

1. INTRODUCTION

1.1 Basic Questions

"... the local officers might be trained with advantage to supplement the present reports with forecasts for their several communities ...", so wrote Walter S. Nichols on the pages of the American Meteorological Journal more than a century ago (Nichols, 1890), thus expressing the intellectual foundation for the modus operandi of the National Weather Service (NWS) to this day: that estimates of future weather calculated centrally from numerical weather prediction models and statistical post-processors offer only guidance to the forecasters in some 120 field offices; that the field forecasters retain the authority to select and adjust the guidance based on recent observations, local analysis, knowledge of local influences, and experience; and that the adjustments are made judgmentally (subjectively), notwithstanding various computer aids available these days to perform the task.

The premise of this modus operandi, of course, is that the field forecasters can and do improve upon the guidance estimates. But as the numerical weather prediction models continue to improve as well, it is wise to re-check periodically the validity of this premise. This is the objective of this paper, albeit limited in scope to two predictands, the daily maximum and minimum temperatures, and to a set of representative stations.

The paper describes a general statistical methodology and reports results of a matched comparative verification of the NWS official forecasts, produced subjectively by the field forecasters, and the guidance forecasts, produced by the Model Output Statistics (MOS) technique (Glahn and Lowry, 1972) and used in the NWS field offices, along with other guidance products, to initialize the digital forecast fields (Glahn and Ruth, 2003). The methodology for the matched comparative verification is structured to answer five basic questions:

1. Is the official forecast more informative than the model guidance? Or, in other words, does the forecaster's judgment add economic value to the guidance?

2. Is the official forecast better calibrated than the model guidance? Or, in other words, can the user take the official forecast (or the model guidance) at face value?

3. Is the official forecast more accurate than the model guidance?

4. Is the official forecast better than the model guidance at predicting extremes?

5. Can a more informative forecast be obtained by fusing the official forecast with the model guidance?

1.2 Experimental Design

To answer these questions, 22,833 pairs of official and guidance forecasts of daily maximum and minimum temperatures are verified at five climatically diverse stations throughout the United States: Savannah, Georgia (KSAV); Portland, Maine (KPWM); Kalispell, Montana (KFCA); San Antonio, Texas (KSAT); Fresno, California (KFAT). With the exception of KFCA, the samples contain data from 2 October 2004 through 28 February 2006; for KFCA, the sample contains data only through 30 June 2005. The sample sizes for each forecast, official and guidance, are identical but vary with the predictand and the station. For the daily maximum temperature, they are 2,887 (KSAV), 2,890 (KPWM), 1,484 (KFCA), 2,892 (KSAT), 2,881 (KFAT); for the daily minimum temperature, they are 2,187 (KSAV), 2,162 (KPWM), 1,115 (KFCA), 2,168 (KSAT), 2,167 (KFAT); they are about evenly distributed among the lead times.

The body of the paper reports results for the daily maximum temperature. The 13,034 pairs of official and guidance forecasts are arranged into 420 cases: 5 stations, 7 lead times (24, 48, 72, 96, 120, 144, 168 h), and 12 sampling windows of four-month length. The appendix reports selected results for the daily minimum temperature. The 9,799 pairs of official and guidance forecasts are

arranged into 300 cases: 5 stations, 5 lead times (36, 60, 84, 108, 132 h), and 12 sampling windows of four-month length. For both predictands, the revealed patterns of forecast performance are similar and thus support the same answers to the five basic questions.

1.3 The Importance of Temperature Forecasts

Temperature forecasts are important to many sectors of the nation's economy: agriculture, transportation, energy production, and healthcare. For example, orchardists must decide whether or not to protect their orchards each night during the frost season (Baquet et al., 1976). Since the cost of heating an orchard is substantial, and since an entire season's harvest is at stake, forecasts of minimum temperature are needed to optimally weigh the tradeoffs (Murphy and Winkler, 1979). Forecasts of maximum temperature during the warm season and of minimum temperature during the cool season are used regularly by electric utility companies whose operators must decide when to commit additional generating capacity, or to purchase supplemental power from other utilities, or to schedule maintenance and repairs. When used in an optimal decision procedure for power generation planning, deterministic temperature forecasts yield substantial economic gains; probabilistic forecasts yield even higher gains (Alexandridis and Krzysztofowicz, 1982, 1985). Minimum temperature forecasts can be of value in scheduling aircraft de-icing, outdoor painting, artificial snow production, and service calls for cars (Murphy and Winkler, 1979).

Extreme maximum temperatures need to be forecasted because they can be dangerous: the Centers for Disease Control and Prevention (2004) report that excessive heat exposure caused 8,966 deaths in the United States between 1979 and 2002. Thus, extreme heat caused more deaths than hurricanes, lightning, tornadoes, floods, and earthquakes combined.

1.4 The Verifications of Temperature Forecasts

Murphy et al. (1989) compared MOS guidance forecasts of maximum temperature to subjective forecasts produced by the NWS forecasters in Minneapolis, Minnesota; no conclusion was reached regarding which forecast is superior. Roebber and Bosart (1996) compared the value of MOS guidance forecasts of daily maximum temperature to the value of official forecasts for Albany, New York, over a period of years for several potential users, and found that human intervention to produce the official forecasts "has generally led to minimal gains in value beyond that which is obtainable through direct use of numerical-statistical guidance."

2. VERIFICATION METHODOLOGY

2.1 Joint Samples

The essence of the verification methodology is to compare three continuous variates: $W$, the predictand, which is the uncertain quantity being forecasted; $X_O$, the official forecast variate; and $X_M$, the model guidance variate. Their realizations (observations) are denoted $w$, $x_O$, $x_M$, respectively. Their joint sample of size $N$ is denoted $\{(w(n), x_O(n), x_M(n)) : n = 1, \ldots, N\}$.

For each station and lead time, the joint observations from the available record are allocated to twelve 4-month verification windows, designated by the end month. For example, the joint sample for May contains all the forecasts issued between 1 February and 31 May. The 3-month overlap of consecutive verification windows implies that any joint observation affects the values of a verification measure in four consecutive months. As a result, the sample size for each month is increased, and the time series of a measure behaves similarly to a moving average. This is a statistical compromise, of course. It reduces the month-to-month sampling variability of the measure (which helps to discern seasonal patterns in forecast performance), while risking a degree of heterogeneity because of the non-stationarity of the predictand (due to seasonality), though not as much as the common verifications for 6-month seasons (cool and warm).

In this design, the joint samples are formed and the basic verification measures are calculated and compared for 720 cases: 420 cases for daily maximum temperature (5 stations, 7 lead times, 12 months), and 300 cases for daily minimum temperature (5 stations, 5 lead times, 12 months). The measures characterize three attributes of forecasts: informativeness, calibration, and accuracy. Within the Bayesian decision theory, which represents the viewpoint of a rational decision maker, only informativeness and calibration matter; accuracy is included herein because of its traditional usage.
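To make the windowing concrete, here is a minimal sketch in Python of the staggered 4-month windows; the data layout (a list of (date, w, x_O, x_M) tuples) and the function names are illustrative assumptions, not part of the paper.

```python
from datetime import date

def window_months(end_month):
    """The four calendar months covered by the window ending in end_month."""
    return [((end_month - k - 1) % 12) + 1 for k in range(3, -1, -1)]

def window_sample(joint_obs, end_month):
    """Select joint observations (date, w, x_o, x_m) issued within the window."""
    months = set(window_months(end_month))
    return [rec for rec in joint_obs if rec[0].month in months]

# The window ending in May covers February-May; an observation issued in March
# therefore also contributes to the windows ending March, April, and June.
print(window_months(5))                                   # [2, 3, 4, 5]
sample = [(date(2005, 3, 14), 61.0, 63.0, 62.0)]
print(len(window_sample(sample, 5)))                      # 1
```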

2.2 Informativeness

Informativeness of a forecast system with respect to a given predictand is a concept defined within the Bayesian decision theory for the purpose of ordering forecast systems according to their economic values. The theoretic foundation of informativeness was laid down by Blackwell (1951, 1953), while the specific measure of informativeness to be employed herein was derived by Krzysztofowicz (1987, 1992, 1996).

The gist of the concept is this. Forecast system A is said to be more informative than forecast system B if and only if the value of the forecast produced by A is at least as high as the value of the forecast produced by B for all rational decision makers faced with structurally similar decision problems. (The value of the forecast is to be understood in the mathematical sense, as defined in the Bayesian decision theory (e.g., Alexandridis and Krzysztofowicz, 1982, 1985).) The informativeness score, $IS$, whose calculation is detailed below, is bounded, $0 \le IS \le 1$, with $IS = 0$ implying an uninformative (worthless) forecast system, and $IS = 1$ implying a perfect forecast system. When it is determined for each of two forecast systems, the following inference can be made: if $IS_A > IS_B$, then forecast system A is more informative than forecast system B; if $IS_A = IS_B$, then the two systems are equivalent, and one should be indifferent between selecting A or B. (The informativeness score was called the Bayesian correlation score in the original publication by Krzysztofowicz (1992).)

This inference rule establishes an ordinal correspondence between a statistical performance measure and an economic performance measure, which has a profound implication: the forecast system having the maximum informativeness score ensures maximum economic value to every rational decision maker and, therefore, should be preferred by the utilitarian society. (The informativeness score can also be interpreted as a measure of the degree by which the forecast produced

by a given system reduces the uncertainty about the predictand, relative to the prior (climatic) uncertainty.)

The informativeness scores of the official forecast, $IS_O$, and of the model guidance, $IS_M$, are specified as follows. Let $G$, $K_O$, $K_M$ denote the marginal distribution functions of variates $W$, $X_O$, $X_M$, respectively; let $Q^{-1}$ denote the inverse of the standard normal distribution function; and let the normal quantile transform (NQT) of each variate be defined by

$$V = Q^{-1}(G(W)), \tag{1a}$$
$$Z_O = Q^{-1}(K_O(X_O)), \tag{1b}$$
$$Z_M = Q^{-1}(K_M(X_M)). \tag{1c}$$

The informativeness scores are equal to the Pearson's product-moment correlation coefficients from the correlation matrix $R = \{r_{ij}\}$ of the standard normal variates $(V, Z_O, Z_M)$:

$$IS_O = r_{12} = \mathrm{Cor}(V, Z_O), \tag{2a}$$
$$IS_M = r_{13} = \mathrm{Cor}(V, Z_M). \tag{2b}$$

The statistical procedures for implementing (1)–(2) can be found in Krzysztofowicz (1992).

To determine if there is a statistically significant difference between $IS_O$ and $IS_M$, Williams' test statistic $T_W$ can be used (Williams, 1959):

$$T_W = (r_{12} - r_{13}) \sqrt{ \frac{(N-1)(1 + r_{23})}{ 2\,\frac{N-1}{N-3}\,|R| + \bar{r}^2 (1 - r_{23})^3 } }, \tag{3}$$

where $|R|$ is the determinant of the correlation matrix, $\bar{r} = (r_{12} + r_{13})/2$, $r_{23} = \mathrm{Cor}(Z_O, Z_M)$, and $N$ is the sample size. The statistic $T_W$ has the t distribution with $N - 3$ degrees of freedom. This statistic is ideally suited for testing the null hypothesis that two correlation coefficients are equal ($r_{12} = r_{13}$) under the trivariate normal distribution when one of the variates is common and the sample size is small or moderate (Neill and Dunn, 1975; Sakaori, 2002).
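As a minimal sketch of Eqs. (1)–(3), assuming numpy arrays w, x_o, x_m holding one joint sample, the NQT can be implemented with the empirical distribution function (plotting positions rank/(N+1)); the function names are illustrative, not the authors' code.

```python
import numpy as np
from scipy import stats

def nqt(x):
    """Normal quantile transform via the empirical distribution function."""
    return stats.norm.ppf(stats.rankdata(x) / (len(x) + 1))

def informativeness(w, x_o, x_m):
    """Informativeness scores (Eq. 2) and the forecast-forecast correlation."""
    v, z_o, z_m = nqt(w), nqt(x_o), nqt(x_m)
    r12 = np.corrcoef(v, z_o)[0, 1]        # IS_O, Eq. (2a)
    r13 = np.corrcoef(v, z_m)[0, 1]        # IS_M, Eq. (2b)
    r23 = np.corrcoef(z_o, z_m)[0, 1]
    return r12, r13, r23

def williams_test(r12, r13, r23, n):
    """Williams' statistic (Eq. 3) for H0: r12 = r13, with a two-sided P-value."""
    det_r = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23   # |R|
    rbar = (r12 + r13) / 2
    t_w = (r12 - r13) * np.sqrt(
        (n - 1) * (1 + r23)
        / (2 * (n - 1) / (n - 3) * det_r + rbar**2 * (1 - r23) ** 3)
    )
    return t_w, 2 * stats.t.sf(abs(t_w), df=n - 3)

# Example with synthetic data (illustrative only):
rng = np.random.default_rng(1)
w = rng.normal(70, 10, 200)
x_m = w + rng.normal(0, 3, 200)
x_o = x_m + rng.normal(0, 2, 200)
r12, r13, r23 = informativeness(w, x_o, x_m)
print(williams_test(r12, r13, r23, len(w)))
```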

2.3 Calibration

A forecast system is said to be well calibrated if the forecast has a well-defined interpretation which is consistently maintained over time (Krzysztofowicz and Sigrest, 1999a, 1999b). A well calibrated forecast can be taken at face value by every user, a basic requirement in communicating scientific information. The fundamental deficiency of the NWS forecast is the lack of any official interpretation. Therefore, two interpretations are considered: the median and the mean. Only the marginal calibration is verified, which is necessary for the conditional calibration. (A discussion of the conditional calibration can be found in the references cited above.)

A condition for the marginal calibration of $X$ as the median of $W$ is $P(W > X) = 0.5$, where $P$ stands for probability. A measure of calibration is the exceedance frequency:

$$F = \frac{m}{N}, \tag{4}$$

where $m$ is the number of times $w(n) > x(n)$ in the joint sample, and $N$ is the sample size. The forecast may be interpreted as the median of the predictand if $F = 0.5$. A two-sided exact binomial test of the null hypothesis that $F = 0.5$ can be used to determine if the forecast is significantly uncalibrated as the median. A measure of the degree of miscalibration is the calibration score:

$$CS = |F - 0.5|. \tag{5}$$

A condition for the marginal calibration of $X$ as the mean of $W$ is $E(X) = E(W)$, where $E$ stands for expectation. A measure of calibration is the forecast bias, or the mean error:

$$B = \frac{1}{N} \sum_{n=1}^{N} [x(n) - w(n)]. \tag{6}$$

The forecast may be interpreted as the mean of the predictand if $B = 0$. A two-sided t-test of the null hypothesis that $B = 0$ can be used to determine if the forecast is significantly uncalibrated as the mean.
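A sketch of the calibration measures (Eqs. 4–6) and their significance tests, again assuming numpy arrays w (observations) and x (forecasts) for one case; nothing here is the authors' code.

```python
import numpy as np
from scipy import stats

def calibration_as_median(w, x):
    """Exceedance frequency F (Eq. 4), calibration score CS (Eq. 5), and
    the two-sided exact binomial test of H0: F = 0.5."""
    n, m = len(w), int(np.sum(w > x))
    f = m / n
    return f, abs(f - 0.5), stats.binomtest(m, n, p=0.5).pvalue

def calibration_as_mean(w, x):
    """Bias B (Eq. 6) and the two-sided t-test of H0: B = 0."""
    errors = x - w
    return errors.mean(), stats.ttest_1samp(errors, 0.0).pvalue
```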

2.4 Accuracy

Accuracy is a popular attribute of forecasts. Herein it is measured in terms of the mean absolute error:

$$MAE = \frac{1}{N} \sum_{n=1}^{N} |x(n) - w(n)|. \tag{7}$$

This measure is consistent with the NWS Operations Manual (National Weather Service, 2000), and is easier to interpret than the mean square error. A two-sided paired t-test can be performed of the null hypothesis that $MAE_O = MAE_M$.

In advocating the MAE, Brooks and Doswell (1996) note that bias says nothing about the accuracy of a forecast: for example, a forecast system that makes five forecasts each 20° too warm and five forecasts each 20° too cold has the same bias as a forecast system that makes ten perfect forecasts. We must note that accuracy confounds informativeness with calibration. For example, a forecast system that makes every forecast 20° too high is inaccurate; however, once the bias is detected, it can easily be subtracted to obtain perfect forecasts. The MAE = 20°, as in Brooks and Doswell's example, but the two Bayesian verification measures offer a superior diagnosis: the system is miscalibrated, because $B = 20°$, yet most informative, because $IS = 1$.
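The corresponding accuracy comparison is a paired test on the matched absolute errors; a minimal sketch, with illustrative names:

```python
import numpy as np
from scipy import stats

def compare_mae(w, x_o, x_m):
    """MAE of each forecast (Eq. 7) and the two-sided paired t-test of
    H0: MAE_O = MAE_M, applied to the matched absolute errors."""
    ae_o, ae_m = np.abs(x_o - w), np.abs(x_m - w)
    return ae_o.mean(), ae_m.mean(), stats.ttest_rel(ae_o, ae_m).pvalue
```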

3. PERFORMANCE IN GENERAL

To recall, there are 420 cases (5 stations, 7 lead times, 12 verification windows designated by the end months). In each case, four basic performance measures are computed for the official forecast and the MOS guidance. Then the corresponding measures are compared directly and via statistical tests. The overall results of the case-by-case comparisons are reported in Table 1; they are discussed below together with the results of more detailed analyses.

3.1 Informativeness

The time series (Fig. 1) of the informativeness score, $IS$, of the official forecast and the MOS guidance at every station for lead times of 24, 96, and 168 h reveal four properties. First, the official forecast $IS$ generally tracks the MOS guidance $IS$. Second, the $IS$ decreases with lead time, as expected. Third, at every station, the month-to-month variability of $IS$ increases with lead time: for the 24-h lead time, the $IS$ is nearly stationary (except for MOS at KFAT), but for the 168-h lead time, it is definitely non-stationary. Fourth, the degree of non-stationarity varies across the stations: it is the largest at San Antonio, TX, where the forecasts with the longer lead times in August–September are the least informative.

In the case-by-case comparisons (Table 1), $IS_M > IS_O$ as many as 332 times (79% of the cases), implying that the MOS guidance is more informative than the official forecast; $IS_O > IS_M$ only 88 times (21% of the cases). Using Williams' test statistic (3), $IS_M$ is superior 135 times (32% of the cases) at the 0.05 significance level; $IS_O$ is superior only 15 times (4% of the cases) at the 0.05 significance level. This yields the winning ratio 135/15 = 9/1 in favor of the MOS guidance.

Whereas the inequality $IS_M > IS_O$ is statistically significant in only 32% of the cases, it is present in 79% of the cases. In other words, the MOS guidance is superior consistently, though not

always significantly, in the majority of the cases. What are these cases, and is the consistent superiority statistically significant?

To find the answers, Fig. 2 compares the average informativeness scores of the MOS guidance and the official forecast across stations and months for each lead time. Also shown are $P$-values from the t-test of the null hypothesis that the average scores are equal against the one-sided alternative hypothesis that the MOS guidance average is greater. For lead times of 24 h and 48 h, the hypothesis that neither forecast is more informative cannot be rejected ($P$-values 0.491, 0.143). For lead times greater than 48 h, the MOS guidance is significantly more informative than the official forecast, with a $P$-value near zero. As lead time increases, the difference between the average informativeness scores also increases.

A closer examination of the individual cases reveals that at four out of five stations, there exists a season of consistently improved informativeness (CII); it applies to short lead times, either 24 h, or 24 h and 48 h (Table 2). Within the CII season, $IS_O > IS_M$ with a $P$-value of nearly zero. However, outside this season, $IS_M > IS_O$ with a $P$-value of about zero. Thus, a statistically significant explanation of the consistent superiority is this: At each station, there exists a CII season during which the official forecasts with lead times of up to 48 h are more informative than the MOS guidance; there are 32 such cases (8%). For lead times longer than those within the CII season, and for all lead times outside the CII season, the MOS guidance is more informative; there are 332 such cases (79%).

Finally, let us digress that the CII season varies from station to station, and only at Portland, ME, and at Fresno, CA, does it fully overlap the official cool season (October–March). This underscores the deficiency of verification studies that assume the stationarity of forecast system performance during a fixed six-month season (cool or warm) at every station and pool the samples from many stations (even from all stations in the U.S.). Such studies may misrepresent the

system performance because they may wash out the statistically significant differences between the stations and the sub-seasons.

3.2 Calibration as Median

The exceedance frequency $F$ measures the degree to which the forecast is calibrated as the median of the predictand. The time series (Fig. 3) of $F$ for the official forecast and the MOS guidance at every station for lead times of 24, 96, and 168 h exhibit four properties. First, both the MOS guidance and the official forecast lack a consistent probabilistic interpretation; for instance, the 96-h official forecast at San Antonio, TX, constitutes the 0.32 exceedance probability quantile of the predictand (essentially the third tercile) in July, and the 0.67 exceedance probability quantile of the predictand (the first tercile) in December. Second, the differences in the interpretations of forecasts at various stations in the same month are equally large. Third, a seasonal trend is present as $F$ generally declines in the summer, below 0.5 in 26 out of 30 time series, indicating that the forecasts are notoriously too high in the summer. Fourth, the cross-station and the month-to-month variability of $F$ appears to be unaffected by the lead time; this is how it should be.

The last property is validated formally in Fig. 4, which shows the average calibration scores of the MOS guidance and the official forecast across stations and months for each lead time. Also shown are $P$-values from the two-sided t-test of the null hypothesis that the average scores are equal. While the smallest scores are recorded for 96 h, the MOS guidance average is larger for lead times up to 72 h, and smaller for lead times longer than 72 h, the pattern is weak and the null hypothesis is never rejected.

A forecast system is said to be uncalibrated if the null hypothesis $F = 0.5$ is rejected at the 0.05 significance level. The official forecast is uncalibrated 153 times and the MOS guidance is uncalibrated 136 times; however, when $F_O$ is compared directly to $F_M$, the official forecast is

better calibrated 214 times, while the MOS guidance is better calibrated 187 times (Table 1).

Because the MOS guidance is better calibrated as the mean (a fact to be demonstrated in Section 3.3), but the official forecast is better calibrated as the median, it appears the forecasters make their judgmental forecasts as if they were re-calibrating the MOS guidance from one interpretation to another. This type of adjustment to guidance has been documented before (Krzysztofowicz and Sigrest, 1999b). It is natural from the cognitive standpoint because an estimate of the median has an intuitive interpretation, which the forecaster can validate judgmentally in real time (via the question: "Is the actual observation equally likely to fall below or above my estimate?"). On the contrary, an estimate of the mean (as the mathematical expectation) is an abstraction, which has no intuitive interpretation and cannot be validated judgmentally. This is the first reason for adopting the median for the official interpretation of the guidance and the forecast.

3.3 Calibration as Mean

The bias $B$ measures the degree to which the forecast is calibrated as the mean of the predictand. Of the 420 cases (Table 1), the official forecast bias is less than the MOS guidance bias in 201 cases (48%), while the reverse is true in 214 cases (51%). Thus, the MOS guidance is generally better calibrated as the mean of the predictand than the official forecast, but the difference is not substantial.

The test of the null hypothesis $B = 0$ against the two-sided alternative hypothesis sharpens the contrast. At the 0.05 significance level, the official forecast has a non-zero bias in 156 cases (37%), while the MOS guidance has a non-zero bias in 129 cases (31%). Thus, the MOS guidance is calibrated as the mean slightly more often than the official forecast.

Whereas the extent to which the forecasters actually anchored their judgments on the MOS guidance cannot be inferred from the present data, it is still instructive to examine the official

forecasts as if they were derived from the MOS guidance (Table 3). The most distressful are the cases wherein the forecasters switched the sign of the bias and increased its magnitude: 68 cases. Next are the cases wherein the forecasters retained the sign of the bias but increased its magnitude: 143 cases. Lastly, there are 3 cases wherein the MOS guidance was sans bias and the forecasters introduced one. All in all, as if tossing a coin, the forecasters worsened the bias in 50% of the cases, and reduced the bias in 48% of the cases.

The time series of $B$, not shown herein, exhibit some characteristics similar to those seen in Fig. 3. The bias is station-dependent and month-dependent. Thus, in general, no spatial and no temporal stationarity of the bias can be assumed. But unlike the time series of $F$ in Fig. 3, the time series of $B$ grow more and more variable as the lead time increases. This is confirmed by plotting (Fig. 5) the average bias magnitudes of the MOS guidance and the official forecast for each lead time, along with $P$-values from the two-sided t-test of the null hypothesis that the average magnitudes are equal. From the comparison of Fig. 5 with Fig. 4, it is apparent that the calibration of the MOS guidance and the official forecast is more stable in the median than in the mean. This is the second reason for adopting the median for the official interpretation of the guidance and the forecast.

3.4 Accuracy

In terms of the accuracy measured by MAE (Table 1), the MOS guidance is superior 306 times (73% of the cases), whereas the official forecast is superior just 111 times (26% of the cases). At the 0.05 significance level, the corresponding numbers are 48 (11%) and 5 (1%).

The time series of MAE, not shown herein, for all stations and lead times reveal the already familiar properties: non-stationarity and station-to-station variability, both of which increase with the lead time.

4. PERFORMANCE FOR EXTREMES

4.1 Forecast Accuracy

A common justification of subjective adjustments by the NWS forecasters to guidance forecasts rests on the presumption that, thanks to their expertise and experience, the forecasters can diagnose a particular weather pattern and assess model performance in evolving that pattern. Therefore, the argument goes, the forecasters can improve upon the guidance, especially when the weather forecasts matter most, in times of extremes; however, verifications performed on samples comprising all observations may not reveal this presumed advantage of the official forecasts. We set out to test this hypothesis statistically.

An extreme temperature event is said to occur on day $n$ if the absolute difference $|w(n) - w(n-1)|$ between the maximum temperatures on two consecutive days, $n-1$ and $n$, has the exceedance probability of 0.1 or lower. The objective is to verify the forecaster's ability to predict the extreme day-to-day changes in the daily maximum temperature. Because such changes are rare, the four-month sampling window is abandoned. Instead, for every station and lead time, a subsample containing all extreme events is formed by selecting from the entire joint sample the 10% of the days on which the largest absolute differences were recorded (see the sketch at the end of this subsection). There are thus 35 cases (5 stations and 7 lead times) with the sample sizes between 35 and 40, except at KFCA where the sample sizes are 20 or 21.

In each case, $MAE_M$ and $MAE_O$ are calculated and used in the two-sided t-test of the null hypothesis $MAE_M = MAE_O$. The results for the two stations offering the strongest (KSAV) and the weakest (KFCA) support for the null hypothesis are reported in Table 4. Out of all 35 cases, the MOS guidance is superior to the official forecast 28 times (80% of the cases). In the case-by-case tests, the difference $MAE_M - MAE_O$ is significant only twice at the 0.05 level.
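A minimal sketch of that subsample selection, assuming a numpy array w of daily maxima in day order; the helper name is illustrative:

```python
import numpy as np

def extreme_subsample_indices(w, fraction=0.1):
    """Indices n of days whose |w(n) - w(n-1)| falls in the top decile."""
    changes = np.abs(np.diff(w))            # |w(n) - w(n-1)| for n = 1, ..., N-1
    k = max(1, int(round(fraction * len(changes))))
    return np.sort(np.argsort(changes)[-k:]) + 1   # shift back to day index n
```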

However, the consistency of the difference is overwhelming, as the null hypothesis on the averages, $\overline{MAE}_M = \overline{MAE}_O$, is rejected by the t-test in favor of the alternative hypothesis $\overline{MAE}_M < \overline{MAE}_O$ with a near-zero $P$-value.

In conclusion, the above analysis does not support the hypothesis that human judgment is superior to the meteorological models in forecasting the extreme day-to-day changes in the daily maximum temperature. On the contrary, it is the MOS guidance that is superior, while not significantly in individual cases, consistently across all stations and lead times.

4.2 Noisy Channel

Another way to draw an inference from Table 4 is to note that the best improvement the forecasters could muster in the MAE is 0.42°F (which is statistically insignificant at the 0.05 level) at Savannah, GA, for the 24-h lead time. At the same time, by making their own forecasts rather than adopting the MOS guidance as the official forecast, the forecasters deteriorated the MAE by more than 0.42°F in 23 cases, with the largest deterioration of 4.15°F (which is statistically significant at the 0.05 level) at Kalispell, MT, for the 120-h lead time.

To visualize the failure of the official forecasts in this last case, all sample points are shown in Fig. 6. The scatter plot in the upper left corner shows the mapping of the MOS guidance into the official forecast. The other two scatter plots compare each of the two forecasts with the observation. The overall impression they convey is that the scatter of the official forecasts around the diagonal (on which perfect forecasts would lie) is larger than the scatter of the MOS guidance.

The statisticians have a technical word for such a mapping of one forecast into another (DeGroot, 1970, Chapter 14; Krzysztofowicz, 1992): an auxiliary randomization. It is as if the forecaster took the MOS guidance and processed it through a noisy channel. Of course, the output from the noisy channel is always less informative than the input. The informativeness score quantifies exactly that: $IS_O = 0.732 < IS_M$.

5. WOULD COMBINING ADD VALUE?

Even though in most of the cases the official forecast $X_O$ is less informative than the MOS guidance $X_M$, it is still possible that the information imparted to $X_O$ by the field forecaster supplements in some ways the information contained in $X_M$. If this is so, then combining $X_O$ with $X_M$ through a Bayesian processor may yield a forecast $X_C$ which is more informative than either $X_O$ or $X_M$ (and which, therefore, has the economic value at least as high as either $X_O$ or $X_M$). Our objective is to test this hypothesis.

5.1 Conditional Independence and Uninformativeness

Let $f(x_O, x_M | w)$ denote the joint density function of variates $(X_O, X_M)$, evaluated at the point $(x_O, x_M)$, and conditional on the hypothesis that $W = w$. When viewed as a function of $w$ at a fixed point $(x_O, x_M)$, it is called the likelihood function of predictand $W$. All points $(x_O, x_M)$ specify a family of likelihood functions.

The family of likelihood functions is the key construct in a Bayesian combining model. It is also the construct that allows us to determine if there would be any gain from combining the official forecast $X_O$ with the MOS guidance $X_M$. Toward this end, the likelihood function is factorized:

$$f(x_O, x_M | w) = f(x_O | x_M, w)\, f(x_M | w). \tag{8}$$

One can interpret this factorization as a framework for developing a combining model in two stages. First, the MOS guidance is introduced as a predictor through the likelihood function $f(x_M | w)$. Second, with $X_M$ already used, the official forecast $X_O$ is introduced as the second predictor through the conditional likelihood function $f(x_O | x_M, w)$. Two particular situations are of special interest.

Definition 1 (Conditional Independence). The predictor $X_O$ is independent of predictor $X_M$, conditional on predictand $W$, if at every point $(x_O, x_M, w)$ of the sample space,

$$f(x_O | x_M, w) = f(x_O | w). \tag{9}$$

Definition 2 (Conditional Uninformativeness). The predictor $X_O$ is uninformative for predictand $W$, conditional on predictor $X_M$, if at every point $(x_O, x_M, w)$ of the sample space,

$$f(x_O | x_M, w) = f(x_O | x_M). \tag{10}$$

These two situations bound the economic gain from combining two predictors. The maximum gain results if $X_O$ is conditionally independent of $X_M$. No gain results if $X_O$ is conditionally uninformative, given $X_M$; and if $X_M$ is also more informative than $X_O$, then $X_O$ is worthless.

5.2 Tests and Results

To test the hypotheses of conditional independence and conditional uninformativeness, each of the three variates is subjected to the NQT, as defined in Section 2.2, and then $Z_O$ is regressed on $V$ and $Z_M$ in the standard normal space:

$$Z_O = aV + bZ_M + c + \Theta, \tag{11}$$

where $a$ and $b$ are regression coefficients, $c$ is the intercept, and $\Theta$ is the residual. A two-sided t-test is next performed on the significance of each regression coefficient. If the null hypothesis $b = 0$ cannot be rejected, then the official forecast $X_O$ is conditionally independent of the MOS guidance $X_M$. If the null hypothesis $a = 0$ cannot be rejected, then the official forecast $X_O$ is conditionally uninformative, given the MOS guidance $X_M$. As before, all results are reported at the 0.05 significance level.
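A sketch of the regression test (Eq. 11) via ordinary least squares in the standard normal space, with the usual t-tests on the coefficients; nqt() is the empirical transform sketched in Section 2.2, and all names are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def nqt(x):
    """Normal quantile transform via the empirical distribution function."""
    return stats.norm.ppf(stats.rankdata(x) / (len(x) + 1))

def conditional_tests(w, x_o, x_m):
    """OLS fit of Z_O = a V + b Z_M + c (Eq. 11) with t-tests on a and b."""
    v, z_o, z_m = nqt(w), nqt(x_o), nqt(x_m)
    n = len(v)
    design = np.column_stack([v, z_m, np.ones(n)])        # columns: a, b, c
    coef, _, _, _ = np.linalg.lstsq(design, z_o, rcond=None)
    resid = z_o - design @ coef
    sigma2 = resid @ resid / (n - 3)                      # residual variance
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(design.T @ design)))
    p = 2 * stats.t.sf(np.abs(coef / se), df=n - 3)
    # p[1] tests H0: b = 0 (conditional independence if not rejected);
    # p[0] tests H0: a = 0 (conditional uninformativeness if not rejected).
    return {"a": coef[0], "b": coef[1], "c": coef[2], "p_a": p[0], "p_b": p[1]}
```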

Among the 420 cases, the official forecast is conditionally independent of the MOS guidance only 8 times (2%). Most significantly, the $P$-values are essentially zero in 405 cases (96%), making it convincing to accept the alternative hypothesis that the official forecast is conditionally dependent on the MOS guidance. Among the 420 cases, the official forecast is conditionally uninformative, given the MOS guidance, 150 times (36%).

Overall, these results imply that only sporadic gains can be expected from combining the official forecast with the MOS guidance of the daily maximum temperature. The official forecast provides independent information in only 2% of the cases, offers no additional information (is economically worthless) in 36% of the cases, and is conditionally dependent on the MOS guidance in the remaining 62% of the cases; the strong significance of this dependence suggests a small marginal gain from combining.

6. CLOSURE

6.1 Summary

The 420 + 300 = 720 verification cases reported herein comprise 22,833 pairs of forecasts for 12 lead times (24–168 h), and span nearly 1.5 years (October 2004 – February 2006) of daily maximum and minimum temperatures at five stations in diverse climates. While not exhaustive, these cases are representative enough to reveal any patterns in forecast performance across stations, lead times, and seasons, which are worth attention. Four such patterns have emerged.

1. In terms of informativeness, the official forecast not only fails to improve upon the MOS guidance, but performs consistently worse (in 79–81% of the cases) and often significantly worse (in 32–43% of the cases). Only for short lead times (24–48 h) and during a few months per year, which vary from station to station, is the official forecast consistently more informative than the MOS guidance.

2. In terms of calibration, neither product is particularly well calibrated (with significant miscalibration present in 30–40% of the cases); the official forecast is slightly better calibrated as the median of the daily maximum temperature, whereas the MOS guidance is slightly better calibrated as the mean. When the official forecast is viewed as an adjustment of the MOS guidance, adjusting has the statistical effect of tinkering: the calibration of the forecast as the mean is improved in 45–48% of the cases and worsened in 50–54% of the cases.

3. Contrary to the popular notion that field forecasters can predict extreme events better than models do, the official forecast actually depreciates the informativeness and the accuracy of the MOS guidance for extreme events (those having the climatic exceedance probability less than 0.1). From the viewpoint of a forecast user, the official forecast appears like the MOS guidance processed through a noisy channel.

4. Combining the official forecast with the MOS guidance statistically, in the hope of producing a forecast superior to either one of those combined, would yield mostly sporadic and small marginal gains because the two forecasts are conditionally dependent, strongly and consistently (in 96% of the cases), and the official forecast does not provide any additional information beyond that contained in the MOS guidance in 36% of the cases.

6.2 Conclusion

The verification measures and the statistical tests of significance summarized above imply that the answers to the five basic questions (Section 1.1) are most likely: No, No, No, No, No. Together with the meaning of the informativeness score (Section 2.2), these answers imply mathematically that a rational decision maker may expect the economic value of the model guidance to be at least as high as the economic value of the official forecast. Ergo, the model guidance for the daily maximum or minimum temperature is the preferred forecast from the viewpoint of the utilitarian society.

6.3 Discussion

Inasmuch as the advances in numerical weather prediction models and in statistical post-processors improve the predictive performance of guidance products, the role of the field forecasters should be re-examined periodically, and the tasks they perform should be re-designed to suit the improved information they receive. The basic question is: What tasks should a field forecaster perform in order to make the optimal use of his judgmental capabilities towards improving the local weather forecasts? As the verification results reported herein imply, at least for forecasting the daily maximum and minimum temperatures, the task that has been the staple of the field forecaster's job throughout the 20th century, the judgmental adjustment of a deterministic guidance forecast produced centrally to account for information available locally, is about to be rendered

purposeless by the improvements in the numerical-statistical models, as anticipated (Bosart, 2003).

A major technical innovation at the onset of the 21st century is the quantification of uncertainty in operational weather forecasts. Whereas ensemble forecasting methods and statistical post-processors are at the center of attention, they are far from maturity: their assessments of uncertainty utilize only a fraction of the total weather information available at any place and time. Thus arises a significant new opportunity: to re-design the role of the field forecaster to make it compatible with, and beneficial to, the emerging paradigm of probabilistic forecasting.

Research has shown that weather forecasters can judgmentally assess vast amounts of complex information to detect and quantify the uncertainty (e.g., Murphy and Winkler, 1979; Winkler and Murphy, 1979; Murphy, 1981; Murphy and Daan, 1984; Krzysztofowicz and Sigrest, 1999a). This research has been ahead of its time. Now is the opportunity to harness its results in modernizing the role of the field forecaster for the 21st century.

As a prelude to systemic re-design, two steps are recommended, after Krzysztofowicz and Sigrest (1999b, p. 452): (i) The deterministic guidance forecast of every continuous predictand should be given an official probabilistic interpretation. The median of the predictand (not the mean) is the preferred interpretation because it conveys at least some rudimentary assessment of uncertainty (the 50% chance of the actual observation being either below or above the forecast estimate), which the field forecasters and the decision makers can grasp intuitively. (ii) The official forecast should be given the same probabilistic interpretation as the guidance forecast, so that the field forecasters can channel their skills toward improving the calibration of an estimate rather than re-calibrating the estimate from one interpretation to another (and degrading the guidance forecast in the process, as they do currently for the daily maximum and minimum temperatures).

ACKNOWLEDGMENTS

This material is based upon work supported by the National Science Foundation under Grant No. ATM, "New Statistical Techniques for Probabilistic Weather Forecasting." The Meteorological Development Laboratory of the National Weather Service provided the data. Mark S. Antolik researched a set of representative stations for the U.S. from which the five stations used herein were selected. An anonymous reviewer suggested expanding the study to daily minimum temperature forecasts.

APPENDIX: RESULTS FOR DAILY MINIMUM TEMPERATURE

A reviewer of an earlier manuscript, which reported verification results for daily maximum temperature only, advised expanding the study to daily minimum temperature based on the following argument: "The maximum temperature should be the easiest predictand for MOS because it is generally well related to large-scale lower tropospheric variables that one expects to be handled well by numerical weather prediction models. On the other hand, the minimum temperature is much more subject to local influences. If the human-mediated minimum temperature forecasts also do not show improvement over MOS, then the thesis of this paper would be much more strongly supported."

In parallel to the results reported in the body of this paper, we performed a comparative verification of the NWS official forecasts and MOS guidance forecasts of the daily minimum temperature. Having found similar patterns of performance, we report herein the main results of the case-by-case comparisons. There are 300 cases (5 stations, 5 lead times, 12 verification windows designated by the end months). In each case, four basic performance measures are computed for the official forecast and the MOS guidance. Then the corresponding measures are compared directly and via statistical tests.

A.1 Informativeness

In the case-by-case comparisons (Table A1), $IS_M > IS_O$ as many as 244 times (81% of the cases), implying that the MOS guidance is more informative than the official forecast; $IS_O > IS_M$ only 56 times (19% of the cases). Using Williams' test statistic (3), $IS_M$ is superior 130 times (43% of the cases) at the 0.05 significance level; $IS_O$ is superior only 10 times (3% of the cases) at the 0.05 significance level. This yields the winning ratio 130/10 = 13/1 in favor of the MOS

guidance. For the maximum temperature, this winning ratio is 9/1 in favor of the MOS guidance (Table 1). Thus, even if the minimum temperature is subject to local influences, the forecasters appear incapable of taking advantage of this knowledge.

A.2 Calibration as Median

Of the 300 cases (Table A1), the official forecast is better calibrated 140 times (47% of the cases), while the MOS guidance is better calibrated 145 times (48% of the cases). The difference in favor of the MOS guidance (by 1%) is negligible, but it is one of the small distinctions between the daily minimum temperature and the daily maximum temperature (Table 1), for which the official forecast is better calibrated (by 6%) than the MOS guidance.

A.3 Calibration as Mean

Of the 300 cases (Table A1), the official forecast bias is less than the MOS guidance bias in 135 cases (45%), while the reverse is true in 162 cases (54%). Thus, the MOS guidance is generally better calibrated as the mean of the predictand than the official forecast. Though not substantial, this difference of 9% for the daily minimum temperature is even slightly larger than the difference of 3% for the daily maximum temperature (Table 1).

In parallel to Table 3 for the daily maximum temperature, Table A2 compares the official forecast to the MOS guidance for the daily minimum temperature; the comparison is in terms of the bias, when both the forecast and the guidance are biased. The purpose of this comparison is to evaluate the goodness of the modifications through which the official forecast is derived from the MOS guidance under the hypothesis that the forecasters actually anchored their judgments on the MOS guidance. The most distressful are the cases wherein the forecasters switched the sign of the bias and increased its magnitude: 33 cases. Next are the cases wherein the forecasters retained the sign of the bias but increased its magnitude: 128 cases. All in all, as if tossing

a coin, the forecasters worsened the bias in 54% of the cases, and reduced the bias in 45% of the cases. In general, this performance of the forecasters for the daily minimum temperature follows a pattern similar to that seen for the daily maximum temperature (Table 3).

A.4 Accuracy

In terms of the accuracy (Table A1), the MOS guidance is superior 239 times (80% of the cases), whereas the official forecast is superior just 57 times (19% of the cases). This difference of 61% for the daily minimum temperature is even larger than the difference of 47% for the daily maximum temperature (Table 1).

REFERENCES

Alexandridis, M.G., and R. Krzysztofowicz, 1982: Economic gains from probabilistic temperature forecasts. Proceedings of the International Symposium on Hydrometeorology, A.I. Johnson and R.A. Clark (eds.), American Water Resources Association, Bethesda, Maryland.

Alexandridis, M.G., and R. Krzysztofowicz, 1985: Decision models for categorical and probabilistic weather forecasts. Applied Mathematics and Computation, 17.

Baquet, A.E., A.N. Halter, and F.S. Conklin, 1976: The value of frost forecasting: A Bayesian appraisal. American Journal of Agricultural Economics, 58.

Blackwell, D., 1951: Comparison of experiments. Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, J. Neyman (ed.), University of California Press, Berkeley.

Blackwell, D., 1953: Equivalent comparisons of experiments. Annals of Mathematical Statistics, 24.

Bosart, L.F., 2003: Whither the weather analysis and forecasting process? Weather and Forecasting, 18.

Brooks, H.E., and C.A. Doswell, III, 1996: A comparison of measures-oriented and distributions-oriented approaches to forecast verification. Weather and Forecasting, 11.

Centers for Disease Control and Prevention, 2004: About extreme heat. Available online.

DeGroot, M.H., 1970: Optimal Statistical Decisions. McGraw-Hill, 490 pp.

Glahn, H.R., and D.A. Lowry, 1972: The use of model output statistics (MOS) in objective weather forecasting. Journal of Applied Meteorology, 11.

Glahn, H.R., and D.P. Ruth, 2003: The new digital forecast database of the National Weather Service. Bulletin of the American Meteorological Society, 84.

Krzysztofowicz, R., 1987: Markovian forecast processes. Journal of the American Statistical Association, 82.

Krzysztofowicz, R., 1992: Bayesian correlation score: A utilitarian measure of forecast skill. Monthly Weather Review, 120.

Krzysztofowicz, R., 1996: Sufficiency, informativeness, and value of forecasts. Proceedings, Workshop on the Evaluation of Space Weather Forecasts, Space Environment Center, NOAA, Boulder, Colorado.

Krzysztofowicz, R., and A.A. Sigrest, 1999a: Calibration of probabilistic quantitative precipitation forecasts. Weather and Forecasting, 14.

Krzysztofowicz, R., and A.A. Sigrest, 1999b: Comparative verification of guidance and local quantitative precipitation forecasts: Calibration analyses. Weather and Forecasting, 14.

Krzysztofowicz, R., W.J. Drzal, T.R. Drake, J.C. Weyman, and L.A. Giordano, 1993: Probabilistic quantitative precipitation forecasts for river basins. Weather and Forecasting, 8.

Murphy, A.H., 1981: Subjective quantification of uncertainty in weather forecasts in the United States. Meteorologische Rundschau, 34.

Murphy, A.H., and R.L. Winkler, 1979: Probabilistic temperature forecasts: The case for an operational program. Bulletin of the American Meteorological Society, 60.

Murphy, A.H., and H. Daan, 1984: Impacts of feedback and experience on the quality of subjective probability forecasts: Comparison of results from the first and second years of the Zierikzee experiment. Monthly Weather Review, 112.

Murphy, A.H., B.G. Brown, and Y.-S. Chen, 1989: Diagnostic verification of temperature forecasts. Weather and Forecasting, 4.

National Weather Service, 2000: National Weather Service operations manual. Available online.

Neill, J.J., and O.J. Dunn, 1975: Equality of dependent correlation coefficients. Biometrics, 31.

Nichols, W.S., 1890: The mathematical elements in the estimation of the Signal Service reports. American Meteorological Journal, 6.

Roebber, P.J., and L.F. Bosart, 1996: The complex relationship between forecast skill and forecast value: A real-world analysis. Weather and Forecasting, 11.

Sakaori, F., 2002: A nonparametric test for the equality of dependent correlation coefficients under normality. Communications in Statistics: Theory and Methods, 31.

Williams, E.J., 1959: The comparison of regression variables. Journal of the Royal Statistical Society, Series B, 21.

Winkler, R.L., and A.H. Murphy, 1979: The use of probabilities in forecasts of maximum and minimum temperatures. Meteorological Magazine, 108.

Zhu, Y., Z. Toth, R. Wobus, D. Richardson, and K. Mylne, 2002: The economic value of ensemble-based weather forecasts. Bulletin of the American Meteorological Society, 83.


More information

Hypothesis Testing. Part I. James J. Heckman University of Chicago. Econ 312 This draft, April 20, 2006

Hypothesis Testing. Part I. James J. Heckman University of Chicago. Econ 312 This draft, April 20, 2006 Hypothesis Testing Part I James J. Heckman University of Chicago Econ 312 This draft, April 20, 2006 1 1 A Brief Review of Hypothesis Testing and Its Uses values and pure significance tests (R.A. Fisher)

More information

PRICING AND PROBABILITY DISTRIBUTIONS OF ATMOSPHERIC VARIABLES

PRICING AND PROBABILITY DISTRIBUTIONS OF ATMOSPHERIC VARIABLES PRICING AND PROBABILITY DISTRIBUTIONS OF ATMOSPHERIC VARIABLES TECHNICAL WHITE PAPER WILLIAM M. BRIGGS Abstract. Current methods of assessing the probability distributions of atmospheric variables are

More information

1.2 DEVELOPMENT OF THE NWS PROBABILISTIC EXTRA-TROPICAL STORM SURGE MODEL AND POST PROCESSING METHODOLOGY

1.2 DEVELOPMENT OF THE NWS PROBABILISTIC EXTRA-TROPICAL STORM SURGE MODEL AND POST PROCESSING METHODOLOGY 1.2 DEVELOPMENT OF THE NWS PROBABILISTIC EXTRA-TROPICAL STORM SURGE MODEL AND POST PROCESSING METHODOLOGY Huiqing Liu 1 and Arthur Taylor 2* 1. Ace Info Solutions, Reston, VA 2. NOAA / NWS / Science and

More information

August Forecast Update for Atlantic Hurricane Activity in 2016

August Forecast Update for Atlantic Hurricane Activity in 2016 August Forecast Update for Atlantic Hurricane Activity in 2016 Issued: 5 th August 2016 by Professor Mark Saunders and Dr Adam Lea Dept. of Space and Climate Physics, UCL (University College London), UK

More information

USA National Weather Service Community Hydrologic Prediction System

USA National Weather Service Community Hydrologic Prediction System USA National Weather Service Community Hydrologic Prediction System Rob Hartman Hydrologist in Charge NOAA / National Weather Service California-Nevada River Forecast Center Sacramento, CA Background Outline

More information

Behind the Climate Prediction Center s Extended and Long Range Outlooks Mike Halpert, Deputy Director Climate Prediction Center / NCEP

Behind the Climate Prediction Center s Extended and Long Range Outlooks Mike Halpert, Deputy Director Climate Prediction Center / NCEP Behind the Climate Prediction Center s Extended and Long Range Outlooks Mike Halpert, Deputy Director Climate Prediction Center / NCEP September 2012 Outline Mission Extended Range Outlooks (6-10/8-14)

More information

Prediction of Snow Water Equivalent in the Snake River Basin

Prediction of Snow Water Equivalent in the Snake River Basin Hobbs et al. Seasonal Forecasting 1 Jon Hobbs Steve Guimond Nate Snook Meteorology 455 Seasonal Forecasting Prediction of Snow Water Equivalent in the Snake River Basin Abstract Mountainous regions of

More information

NOTES AND CORRESPONDENCE. Improving Week-2 Forecasts with Multimodel Reforecast Ensembles

NOTES AND CORRESPONDENCE. Improving Week-2 Forecasts with Multimodel Reforecast Ensembles AUGUST 2006 N O T E S A N D C O R R E S P O N D E N C E 2279 NOTES AND CORRESPONDENCE Improving Week-2 Forecasts with Multimodel Reforecast Ensembles JEFFREY S. WHITAKER AND XUE WEI NOAA CIRES Climate

More information

Probability and Statistics

Probability and Statistics Probability and Statistics Kristel Van Steen, PhD 2 Montefiore Institute - Systems and Modeling GIGA - Bioinformatics ULg kristel.vansteen@ulg.ac.be CHAPTER 4: IT IS ALL ABOUT DATA 4a - 1 CHAPTER 4: IT

More information

Application and verification of ECMWF products in Croatia - July 2007

Application and verification of ECMWF products in Croatia - July 2007 Application and verification of ECMWF products in Croatia - July 2007 By Lovro Kalin, Zoran Vakula and Josip Juras (Hydrological and Meteorological Service) 1. Summary of major highlights At Croatian Met

More information

Dr Harvey Stern. University Of Melbourne, School Of Earth Sciences

Dr Harvey Stern. University Of Melbourne, School Of Earth Sciences Dr Harvey Stern University Of Melbourne, School Of Earth Sciences Evaluating the Accuracy of Weather Predictions for Melbourne Leading Up to the Heavy Rain Event of Early December 2017 ABSTRACT: The opening

More information

The Weather Information Value Chain

The Weather Information Value Chain The Weather Information Value Chain Jeffrey K. Lazo Societal Impacts Program National Center for Atmospheric Research Boulder CO April 27 2016 HIWeather Exeter, England Outline Shout out on WMO/USAID/World

More information

Notes on Decision Theory and Prediction

Notes on Decision Theory and Prediction Notes on Decision Theory and Prediction Ronald Christensen Professor of Statistics Department of Mathematics and Statistics University of New Mexico October 7, 2014 1. Decision Theory Decision theory is

More information

Adaptation for global application of calibration and downscaling methods of medium range ensemble weather forecasts

Adaptation for global application of calibration and downscaling methods of medium range ensemble weather forecasts Adaptation for global application of calibration and downscaling methods of medium range ensemble weather forecasts Nathalie Voisin Hydrology Group Seminar UW 11/18/2009 Objective Develop a medium range

More information

BUILDING A GRIDDED CLIMATOLOGICAL DATASET FOR USE IN THE STATISTICAL INTERPRETATION OF NUMERICAL WEATHER PREDICTION MODELS

BUILDING A GRIDDED CLIMATOLOGICAL DATASET FOR USE IN THE STATISTICAL INTERPRETATION OF NUMERICAL WEATHER PREDICTION MODELS JP 1.6 BUILDING A GRIDDED CLIMATOLOGICAL DATASET FOR USE IN THE STATISTICAL INTERPRETATION OF NUMERICAL WEATHER PREDICTION MODELS Rachel A. Trimarco, Kari L. Sheets, and Kathryn K. Hughes Meteorological

More information

Peter P. Neilley. And. Kurt A. Hanson. Weather Services International, Inc. 400 Minuteman Road Andover, MA 01810

Peter P. Neilley. And. Kurt A. Hanson. Weather Services International, Inc. 400 Minuteman Road Andover, MA 01810 6.4 ARE MODEL OUTPUT STATISTICS STILL NEEDED? Peter P. Neilley And Kurt A. Hanson Weather Services International, Inc. 400 Minuteman Road Andover, MA 01810 1. Introduction. Model Output Statistics (MOS)

More information

July Forecast Update for North Atlantic Hurricane Activity in 2018

July Forecast Update for North Atlantic Hurricane Activity in 2018 July Forecast Update for North Atlantic Hurricane Activity in 2018 Issued: 5 th July 2018 by Professor Mark Saunders and Dr Adam Lea Dept. of Space and Climate Physics, UCL (University College London),

More information

EVALUATION OF NDFD AND DOWNSCALED NCEP FORECASTS IN THE INTERMOUNTAIN WEST 2. DATA

EVALUATION OF NDFD AND DOWNSCALED NCEP FORECASTS IN THE INTERMOUNTAIN WEST 2. DATA 2.2 EVALUATION OF NDFD AND DOWNSCALED NCEP FORECASTS IN THE INTERMOUNTAIN WEST Brandon C. Moore 1 *, V.P. Walden 1, T.R. Blandford 1, B. J. Harshburger 1, and K. S. Humes 1 1 University of Idaho, Moscow,

More information

Towards Operational Probabilistic Precipitation Forecast

Towards Operational Probabilistic Precipitation Forecast 5 Working Group on Verification and Case Studies 56 Towards Operational Probabilistic Precipitation Forecast Marco Turco, Massimo Milelli ARPA Piemonte, Via Pio VII 9, I-10135 Torino, Italy 1 Aim of the

More information

Weather Analysis and Forecasting

Weather Analysis and Forecasting Weather Analysis and Forecasting An Information Statement of the American Meteorological Society (Adopted by AMS Council on 25 March 2015) Bull. Amer. Meteor. Soc., 88 This Information Statement describes

More information

Air Force Weather Ensembles

Air Force Weather Ensembles 16 th Weather Squadron Fly - Fight - Win Air Force Weather Ensembles Scott Rentschler Fine Scale and Ensemble Models 16WS/WXN Background Air Force Weather Decision Support: Weather impacts on specific

More information

Improving Sub-Seasonal to Seasonal Prediction at NOAA

Improving Sub-Seasonal to Seasonal Prediction at NOAA Improving Sub-Seasonal to Seasonal Prediction at NOAA Dr. Louis W. Uccellini Director, National Weather Service NOAA Assistant Administrator for Weather Services July 13, 2016 Congressional Briefing Value

More information

Stochastic Hydrology. a) Data Mining for Evolution of Association Rules for Droughts and Floods in India using Climate Inputs

Stochastic Hydrology. a) Data Mining for Evolution of Association Rules for Droughts and Floods in India using Climate Inputs Stochastic Hydrology a) Data Mining for Evolution of Association Rules for Droughts and Floods in India using Climate Inputs An accurate prediction of extreme rainfall events can significantly aid in policy

More information

July Forecast Update for Atlantic Hurricane Activity in 2016

July Forecast Update for Atlantic Hurricane Activity in 2016 July Forecast Update for Atlantic Hurricane Activity in 2016 Issued: 5 th July 2016 by Professor Mark Saunders and Dr Adam Lea Dept. of Space and Climate Physics, UCL (University College London), UK Forecast

More information

Deterministic and Probabilistic prediction approaches in Seasonal to Inter-annual climate forecasting

Deterministic and Probabilistic prediction approaches in Seasonal to Inter-annual climate forecasting RA 1 EXPERT MEETING ON THE APPLICATION OF CLIMATE FORECASTS FOR AGRICULTURE Banjul, Gambia, 9-13 December 2002 Deterministic and Probabilistic prediction approaches in Seasonal to Inter-annual climate

More information

Distributions-Oriented Verification of Probability Forecasts for Small Data Samples

Distributions-Oriented Verification of Probability Forecasts for Small Data Samples 903 Distributions-Oriented Verification of Probability Forecasts for Small Data Samples A. ALLE BRADLEY AD TEMPEI HASHIO IIHR Hydroscience and Engineering, and Department of Civil and Environmental Engineering,

More information

Basic Verification Concepts

Basic Verification Concepts Basic Verification Concepts Barbara Brown National Center for Atmospheric Research Boulder Colorado USA bgb@ucar.edu May 2017 Berlin, Germany Basic concepts - outline What is verification? Why verify?

More information

Chapter 7: Simple linear regression

Chapter 7: Simple linear regression The absolute movement of the ground and buildings during an earthquake is small even in major earthquakes. The damage that a building suffers depends not upon its displacement, but upon the acceleration.

More information

1 Introduction. Station Type No. Synoptic/GTS 17 Principal 172 Ordinary 546 Precipitation

1 Introduction. Station Type No. Synoptic/GTS 17 Principal 172 Ordinary 546 Precipitation Use of Automatic Weather Stations in Ethiopia Dula Shanko National Meteorological Agency(NMA), Addis Ababa, Ethiopia Phone: +251116639662, Mob +251911208024 Fax +251116625292, Email: Du_shanko@yahoo.com

More information

Combining Deterministic and Probabilistic Methods to Produce Gridded Climatologies

Combining Deterministic and Probabilistic Methods to Produce Gridded Climatologies Combining Deterministic and Probabilistic Methods to Produce Gridded Climatologies Michael Squires Alan McNab National Climatic Data Center (NCDC - NOAA) Asheville, NC Abstract There are nearly 8,000 sites

More information

BUREAU OF METEOROLOGY

BUREAU OF METEOROLOGY BUREAU OF METEOROLOGY Building an Operational National Seasonal Streamflow Forecasting Service for Australia progress to-date and future plans Dr Narendra Kumar Tuteja Manager Extended Hydrological Prediction

More information

April Forecast Update for Atlantic Hurricane Activity in 2018

April Forecast Update for Atlantic Hurricane Activity in 2018 April Forecast Update for Atlantic Hurricane Activity in 2018 Issued: 5 th April 2018 by Professor Mark Saunders and Dr Adam Lea Dept. of Space and Climate Physics, UCL (University College London), UK

More information

Air Force Weather Ensembles

Air Force Weather Ensembles 16 th Weather Squadron Fly - Fight - Win Air Force Weather Ensembles Evan Kuchera Fine Scale and Ensemble Models 16WS/WXN Background Meeting theme: Develop better, ensemble-derived, decision support products

More information

The benefits and developments in ensemble wind forecasting

The benefits and developments in ensemble wind forecasting The benefits and developments in ensemble wind forecasting Erik Andersson Slide 1 ECMWF European Centre for Medium-Range Weather Forecasts Slide 1 ECMWF s global forecasting system High resolution forecast

More information

Seasonal Predictions for South Caucasus and Armenia

Seasonal Predictions for South Caucasus and Armenia Seasonal Predictions for South Caucasus and Armenia Anahit Hovsepyan Zagreb, 11-12 June 2008 SEASONAL PREDICTIONS for the South Caucasus There is a notable increase of interest of population and governing

More information

Pre-Season Forecast for North Atlantic Hurricane Activity in 2018

Pre-Season Forecast for North Atlantic Hurricane Activity in 2018 Pre-Season Forecast for North Atlantic Hurricane Activity in 2018 Issued: 30 th May 2018 by Professor Mark Saunders and Dr Adam Lea Dept. of Space and Climate Physics, UCL (University College London),

More information

April Forecast Update for Atlantic Hurricane Activity in 2016

April Forecast Update for Atlantic Hurricane Activity in 2016 April Forecast Update for Atlantic Hurricane Activity in 2016 Issued: 5 th April 2016 by Professor Mark Saunders and Dr Adam Lea Dept. of Space and Climate Physics, UCL (University College London), UK

More information

National level products generation including calibration aspects

National level products generation including calibration aspects National level products generation including calibration aspects Dr. Cedric J. VAN MEERBEECK, Climatologist (cmeerbeeck@cimh.edu.bb), Adrian R. Trotman, Chief of Applied Meteorology and Climatology (atrotman@cimh.edu.bb),

More information

Seasonal drought predictability in Portugal using statistical-dynamical techniques

Seasonal drought predictability in Portugal using statistical-dynamical techniques Seasonal drought predictability in Portugal using statistical-dynamical techniques A. F. S. Ribeiro and C. A. L. Pires University of Lisbon, Institute Dom Luiz QSECA workshop IDL 2015 How to improve the

More information

Measuring the quality of updating high resolution time-lagged ensemble probability forecasts using spatial verification techniques.

Measuring the quality of updating high resolution time-lagged ensemble probability forecasts using spatial verification techniques. Measuring the quality of updating high resolution time-lagged ensemble probability forecasts using spatial verification techniques. Tressa L. Fowler, Tara Jensen, John Halley Gotway, Randy Bullock 1. Introduction

More information

Monthly Long Range Weather Commentary Issued: APRIL 1, 2015 Steven A. Root, CCM, President/CEO

Monthly Long Range Weather Commentary Issued: APRIL 1, 2015 Steven A. Root, CCM, President/CEO Monthly Long Range Weather Commentary Issued: APRIL 1, 2015 Steven A. Root, CCM, President/CEO sroot@weatherbank.com FEBRUARY 2015 Climate Highlights The Month in Review The February contiguous U.S. temperature

More information

WSWC/NOAA Workshops on S2S Precipitation Forecasting

WSWC/NOAA Workshops on S2S Precipitation Forecasting WSWC/NOAA Workshops on S2S Precipitation Forecasting San Diego, May 2015 Salt Lake City at NWS Western Region HQ, October 2015 Las Vegas at Colorado River Water Users Association, December 2015 College

More information

Evaluating Forecast Quality

Evaluating Forecast Quality Evaluating Forecast Quality Simon J. Mason International Research Institute for Climate Prediction Questions How do we decide whether a forecast was correct? How do we decide whether a set of forecasts

More information

Using an Artificial Neural Network to Predict Parameters for Frost Deposition on Iowa Bridgeways

Using an Artificial Neural Network to Predict Parameters for Frost Deposition on Iowa Bridgeways Using an Artificial Neural Network to Predict Parameters for Frost Deposition on Iowa Bridgeways Bradley R. Temeyer and William A. Gallus Jr. Graduate Student of Atmospheric Science 31 Agronomy Hall Ames,

More information

Introductory Econometrics. Review of statistics (Part II: Inference)

Introductory Econometrics. Review of statistics (Part II: Inference) Introductory Econometrics Review of statistics (Part II: Inference) Jun Ma School of Economics Renmin University of China October 1, 2018 1/16 Null and alternative hypotheses Usually, we have two competing

More information

Brian McGurk, P.G. DEQ Office of Water Supply. Contents. Overview of Virginia s Drought Assessment & Response Plan

Brian McGurk, P.G. DEQ Office of Water Supply. Contents. Overview of Virginia s Drought Assessment & Response Plan Drought Preparedness in Virginia Or, Whatcha Gonna Do When the Well, Creek, River, or Reservoir (Might) Run Dry? Rappahannock-Rapidan Regional Commission Living Lands Workshop November 18, 2014 Brian McGurk,

More information

August Forecast Update for Atlantic Hurricane Activity in 2015

August Forecast Update for Atlantic Hurricane Activity in 2015 August Forecast Update for Atlantic Hurricane Activity in 2015 Issued: 5 th August 2015 by Professor Mark Saunders and Dr Adam Lea Dept. of Space and Climate Physics, UCL (University College London), UK

More information

Bus 216: Business Statistics II Introduction Business statistics II is purely inferential or applied statistics.

Bus 216: Business Statistics II Introduction Business statistics II is purely inferential or applied statistics. Bus 216: Business Statistics II Introduction Business statistics II is purely inferential or applied statistics. Study Session 1 1. Random Variable A random variable is a variable that assumes numerical

More information

Baseline Climatology. Dave Parker ADD PRESENTATION TITLE HERE (GO TO: VIEW / MASTER / SLIDE MASTER TO AMEND) ADD PRESENTER S NAME HERE / ADD DATE HERE

Baseline Climatology. Dave Parker ADD PRESENTATION TITLE HERE (GO TO: VIEW / MASTER / SLIDE MASTER TO AMEND) ADD PRESENTER S NAME HERE / ADD DATE HERE Baseline Climatology Dave Parker ADD PRESENTATION TITLE HERE (GO TO: VIEW / MASTER / SLIDE MASTER TO AMEND) ADD PRESENTER S NAME HERE / ADD DATE HERE Copyright EDF Energy. All rights reserved. Introduction

More information

Localized Aviation Model Output Statistics Program (LAMP): Improvements to convective forecasts in response to user feedback

Localized Aviation Model Output Statistics Program (LAMP): Improvements to convective forecasts in response to user feedback Localized Aviation Model Output Statistics Program (LAMP): Improvements to convective forecasts in response to user feedback Judy E. Ghirardelli National Weather Service Meteorological Development Laboratory

More information

How far in advance can we forecast cold/heat spells?

How far in advance can we forecast cold/heat spells? Sub-seasonal time scales: a user-oriented verification approach How far in advance can we forecast cold/heat spells? Laura Ferranti, L. Magnusson, F. Vitart, D. Richardson, M. Rodwell Danube, Feb 2012

More information

Challenges of Communicating Weather Information to the Public. Sam Lashley Senior Meteorologist National Weather Service Northern Indiana Office

Challenges of Communicating Weather Information to the Public. Sam Lashley Senior Meteorologist National Weather Service Northern Indiana Office Challenges of Communicating Weather Information to the Public Sam Lashley Senior Meteorologist National Weather Service Northern Indiana Office Dilbert the Genius Do you believe him? Challenges of Communicating

More information

Calibration of ECMWF forecasts

Calibration of ECMWF forecasts from Newsletter Number 142 Winter 214/15 METEOROLOGY Calibration of ECMWF forecasts Based on an image from mrgao/istock/thinkstock doi:1.21957/45t3o8fj This article appeared in the Meteorology section

More information

Overview of Achievements October 2001 October 2003 Adrian Raftery, P.I. MURI Overview Presentation, 17 October 2003 c 2003 Adrian E.

Overview of Achievements October 2001 October 2003 Adrian Raftery, P.I. MURI Overview Presentation, 17 October 2003 c 2003 Adrian E. MURI Project: Integration and Visualization of Multisource Information for Mesoscale Meteorology: Statistical and Cognitive Approaches to Visualizing Uncertainty, 2001 2006 Overview of Achievements October

More information

Calibrating forecasts of heavy precipitation in river catchments

Calibrating forecasts of heavy precipitation in river catchments from Newsletter Number 152 Summer 217 METEOROLOGY Calibrating forecasts of heavy precipitation in river catchments Hurricane Patricia off the coast of Mexico on 23 October 215 ( 215 EUMETSAT) doi:1.21957/cf1598

More information

At the start of the talk will be a trivia question. Be prepared to write your answer.

At the start of the talk will be a trivia question. Be prepared to write your answer. Operational hydrometeorological forecasting activities of the Australian Bureau of Meteorology Thomas Pagano At the start of the talk will be a trivia question. Be prepared to write your answer. http://scottbridle.com/

More information

PRMS WHITE PAPER 2014 NORTH ATLANTIC HURRICANE SEASON OUTLOOK. June RMS Event Response

PRMS WHITE PAPER 2014 NORTH ATLANTIC HURRICANE SEASON OUTLOOK. June RMS Event Response PRMS WHITE PAPER 2014 NORTH ATLANTIC HURRICANE SEASON OUTLOOK June 2014 - RMS Event Response 2014 SEASON OUTLOOK The 2013 North Atlantic hurricane season saw the fewest hurricanes in the Atlantic Basin

More information

[NEACOF] Status Report (Survey)

[NEACOF] Status Report (Survey) [NEACOF] Status Report (Survey) Annotated Outline Specific Climate features of concerned region In North Eurasian region, wintertime temperature is mainly a result from interplay of advection of the warm

More information

ORF 245 Fundamentals of Statistics Chapter 9 Hypothesis Testing

ORF 245 Fundamentals of Statistics Chapter 9 Hypothesis Testing ORF 245 Fundamentals of Statistics Chapter 9 Hypothesis Testing Robert Vanderbei Fall 2014 Slides last edited on November 24, 2014 http://www.princeton.edu/ rvdb Coin Tossing Example Consider two coins.

More information

EL NINO-SOUTHERN OSCILLATION (ENSO): RECENT EVOLUTION AND POSSIBILITIES FOR LONG RANGE FLOW FORECASTING IN THE BRAHMAPUTRA-JAMUNA RIVER

EL NINO-SOUTHERN OSCILLATION (ENSO): RECENT EVOLUTION AND POSSIBILITIES FOR LONG RANGE FLOW FORECASTING IN THE BRAHMAPUTRA-JAMUNA RIVER Global NEST Journal, Vol 8, No 3, pp 79-85, 2006 Copyright 2006 Global NEST Printed in Greece. All rights reserved EL NINO-SOUTHERN OSCILLATION (ENSO): RECENT EVOLUTION AND POSSIBILITIES FOR LONG RANGE

More information

AN ANALYSIS OF THE TORNADO COOL SEASON

AN ANALYSIS OF THE TORNADO COOL SEASON AN ANALYSIS OF THE 27-28 TORNADO COOL SEASON Madison Burnett National Weather Center Research Experience for Undergraduates Norman, OK University of Missouri Columbia, MO Greg Carbin Storm Prediction Center

More information

Drought forecasting methods Blaz Kurnik DESERT Action JRC

Drought forecasting methods Blaz Kurnik DESERT Action JRC Ljubljana on 24 September 2009 1 st DMCSEE JRC Workshop on Drought Monitoring 1 Drought forecasting methods Blaz Kurnik DESERT Action JRC Motivations for drought forecasting Ljubljana on 24 September 2009

More information

Operational MRCC Tools Useful and Usable by the National Weather Service

Operational MRCC Tools Useful and Usable by the National Weather Service Operational MRCC Tools Useful and Usable by the National Weather Service Vegetation Impact Program (VIP): Frost / Freeze Project Beth Hall Accumulated Winter Season Severity Index (AWSSI) Steve Hilberg

More information

INFLUENCE OF THE AVERAGING PERIOD IN AIR TEMPERATURE MEASUREMENT

INFLUENCE OF THE AVERAGING PERIOD IN AIR TEMPERATURE MEASUREMENT INFLUENCE OF THE AVERAGING PERIOD IN AIR TEMPERATURE MEASUREMENT Hristomir Branzov 1, Valentina Pencheva 2 1 National Institute of Meteorology and Hydrology, Sofia, Bulgaria, Hristomir.Branzov@meteo.bg

More information

NOAA/WSWC Workshop on Seasonal Forecast Improvements. Kevin Werner, NOAA Jeanine Jones, CA/DWR

NOAA/WSWC Workshop on Seasonal Forecast Improvements. Kevin Werner, NOAA Jeanine Jones, CA/DWR NOAA/WSWC Workshop on Seasonal Forecast Improvements Kevin Werner, NOAA Jeanine Jones, CA/DWR Outline Workshop motivation Goals Agenda 2 Workshop Motivation Opportunity for application of improved seasonal

More information

A Comparison of Tornado Warning Lead Times with and without NEXRAD Doppler Radar

A Comparison of Tornado Warning Lead Times with and without NEXRAD Doppler Radar MARCH 1996 B I E R I N G E R A N D R A Y 47 A Comparison of Tornado Warning Lead Times with and without NEXRAD Doppler Radar PAUL BIERINGER AND PETER S. RAY Department of Meteorology, The Florida State

More information

Supplementary figures

Supplementary figures Supplementary material Supplementary figures Figure 1: Observed vs. modelled precipitation for Umeå during the period 1860 1950 http://www.demographic-research.org 1 Åström et al.: Impact of weather variability

More information

SRI Briefing Note Series No.8 Communicating uncertainty in seasonal and interannual climate forecasts in Europe: organisational needs and preferences

SRI Briefing Note Series No.8 Communicating uncertainty in seasonal and interannual climate forecasts in Europe: organisational needs and preferences ISSN 2056-8843 Sustainability Research Institute SCHOOL OF EARTH AND ENVIRONMENT SRI Briefing Note Series No.8 Communicating uncertainty in seasonal and interannual climate forecasts in Europe: organisational

More information

Missouri River Basin Water Management

Missouri River Basin Water Management Missouri River Basin Water Management US Army Corps of Engineers Missouri River Navigator s Meeting February 12, 2014 Bill Doan, P.E. Missouri River Basin Water Management US Army Corps of Engineers BUILDING

More information

Communicating uncertainty from short-term to seasonal forecasting

Communicating uncertainty from short-term to seasonal forecasting Communicating uncertainty from short-term to seasonal forecasting MAYBE NO YES Jay Trobec KELO-TV Sioux Falls, South Dakota USA TV weather in the US Most TV weather presenters have university degrees and

More information

Analysis on Characteristics of Precipitation Change from 1957 to 2015 in Weishan County

Analysis on Characteristics of Precipitation Change from 1957 to 2015 in Weishan County Journal of Geoscience and Environment Protection, 2017, 5, 125-133 http://www.scirp.org/journal/gep ISSN Online: 2327-4344 ISSN Print: 2327-4336 Analysis on Characteristics of Precipitation Change from

More information

Traffic Flow Impact (TFI)

Traffic Flow Impact (TFI) Traffic Flow Impact (TFI) Michael P. Matthews 27 October 2015 Sponsor: Yong Li, FAA ATO AJV-73 Technical Analysis & Operational Requirements Distribution Statement A. Approved for public release; distribution

More information

Seasonal prediction of extreme events

Seasonal prediction of extreme events Seasonal prediction of extreme events C. Prodhomme, F. Doblas-Reyes MedCOF training, 29 October 2015, Madrid Climate Forecasting Unit Outline: Why focusing on extreme events? Extremeness metric Soil influence

More information

Testing for Regime Switching in Singaporean Business Cycles

Testing for Regime Switching in Singaporean Business Cycles Testing for Regime Switching in Singaporean Business Cycles Robert Breunig School of Economics Faculty of Economics and Commerce Australian National University and Alison Stegman Research School of Pacific

More information

2.7 A PROTOTYPE VERIFICATION SYSTEM FOR EXAMINING NDFD FORECASTS

2.7 A PROTOTYPE VERIFICATION SYSTEM FOR EXAMINING NDFD FORECASTS 2.7 A PROTOTYPE VERIFICATION SYSTEM FOR EXAMINING NDFD FORECASTS Valery J. Dagostaro*, Wilson A. Shaffer, Michael J. Schenk, Jerry L. Gorline Meteorological Development Laboratory Office of Science and

More information

The Impact of Horizontal Resolution and Ensemble Size on Probabilistic Forecasts of Precipitation by the ECMWF EPS

The Impact of Horizontal Resolution and Ensemble Size on Probabilistic Forecasts of Precipitation by the ECMWF EPS The Impact of Horizontal Resolution and Ensemble Size on Probabilistic Forecasts of Precipitation by the ECMWF EPS S. L. Mullen Univ. of Arizona R. Buizza ECMWF University of Wisconsin Predictability Workshop,

More information

CLIMATE CHANGE ADAPTATION BY MEANS OF PUBLIC PRIVATE PARTNERSHIP TO ESTABLISH EARLY WARNING SYSTEM

CLIMATE CHANGE ADAPTATION BY MEANS OF PUBLIC PRIVATE PARTNERSHIP TO ESTABLISH EARLY WARNING SYSTEM CLIMATE CHANGE ADAPTATION BY MEANS OF PUBLIC PRIVATE PARTNERSHIP TO ESTABLISH EARLY WARNING SYSTEM By: Dr Mamadou Lamine BAH, National Director Direction Nationale de la Meteorologie (DNM), Guinea President,

More information