Extending logistic regression to provide full-probability-distribution MOS forecasts


METEOROLOGICAL APPLICATIONS
Meteorol. Appl. 16: 361–368 (2009)
Published online 9 March 2009 in Wiley InterScience (www.interscience.wiley.com)

Daniel S. Wilks*
Department of Earth and Atmospheric Sciences, Cornell University, Ithaca NY 14853, USA

*Correspondence to: Daniel S. Wilks, Department of Earth and Atmospheric Sciences, Cornell University, Ithaca NY 14853, USA. dsw5@cornell.edu

ABSTRACT: Statistical post-processing of dynamical forecasts, using the Model Output Statistics (MOS) approach, continues to be an essential component of weather forecasting. Even in the current era of ensemble forecasting, ensemble-MOS methods are used to transform raw ensemble forecasts into well-calibrated probability forecasts. Logistic regression has been found to be an especially useful method for this purpose for predictands, such as precipitation amounts, that are distinctly non-Gaussian. However, the usual implementation of logistic regression fits separate forecast equations for different predictand thresholds, yielding finite sets of threshold probabilities rather than full forecast probability distributions. Furthermore, these individual threshold probabilities are not constrained to be mutually consistent, so that negative probabilities may be implied for some ranges of the predictand. In this paper, logistic regressions are extended to yield full continuous, and coherent, probability distribution forecasts by including the predictand threshold itself as an additional predictor in the forecast equation. The procedure is illustrated using 6–10 day precipitation forecasts for a sample of locations in the U.S., drawn from the GFS reforecast dataset. Copyright 2009 Royal Meteorological Society

KEY WORDS: MOS; ensemble forecasts; reforecasts; generalized linear models

Received 22 October 2008; Revised 26 November 2008; Accepted 13 January 2009

1. Introduction

Dynamical forecast models are the mainstay of weather forecasting for lead times of a few hours through a few weeks. However, even at the lead times for which dynamical models are most accurate, their forecasts can be improved through statistical post-processing. Usually the Model Output Statistics (MOS) approach (Glahn and Lowry, 1972) is preferred, in which statistical forecast equations are derived that include one or more dynamical outputs as predictors. The result is amelioration of many of the deficiencies in the raw dynamical forecasts, such as conditional and unconditional biases, yielding more accurate forecasts. Most frequently linear regression is used to compute nonprobabilistic MOS forecasts, although nonlinear statistical models can also be used, and probability forecasts can also be made (e.g. Wilks, 2006a). MOS-based probability forecasts have usually been formulated as linear regressions predicting binary predictands, following the seminal work of Glahn and Lowry (1972), an approach sometimes known as REEP (Regression Estimation of Event Probabilities) (Glahn, 1985). However, this method presents some theoretical and potential practical difficulties, and a good but computationally more demanding approach is to formulate MOS probability forecasting equations as logistic regressions (e.g. Wilks, 2006a).
Logistic regression appears to have been used relatively infrequently for MOS post-processing of conventional single-integration dynamical-model forecasts (Lemcke and Kruizinga, 1988; Vislocky and Young, 1989; Sokol, 2003; Detlefsen, 2006), even though performance of logistic-regression MOS compares well with the REEP approach (Applequist et al., 2002; Sohn et al., 2005). Increasingly, forecasting practice includes representation of forecast uncertainties due to uncertainties in initial conditions through integration of ensembles of dynamical forecasts (Molteni et al., 1996; Toth and Kalnay, 1993). However, initial conditions for individual ensemble members are not chosen as random samples from the probability distribution of initial-condition uncertainty (e.g. Hamill et al., 2003; Wang and Bishop, 2003) and, in addition, ensemble forecasts suffer from the same kinds of structural model deficiencies as do conventional single-integration dynamical forecasts. Consequently a forecast ensemble cannot be regarded as a sample from the true probability distribution of forecast uncertainty (e.g. Hansen, 2002; Smith, 2006), and probability forecasts derived from forecast ensembles can be improved through MOS post-processing. Ensemble-MOS post-processing actually presents a more difficult problem than ordinary MOS forecasting, because both ordinary biases and biases in ensemble dispersion (usually, ensemble underdispersion) are to be corrected. A variety of methods has been proposed for this purpose (Gneiting et al., 2005; Hamill and Colucci, 1998; Hamill et al., 2004; Raftery et al., 2005; Roulston and Smith, 2003; Stephenson et al., 2005), and these and other ensemble-MOS methods have been compared in an idealized setting in Wilks (2006b).

Wilks and Hamill (2007) examined the performance of the best of these methods using ensembles taken from the GFS reforecast dataset (Hamill et al., 2006), concluding that nonhomogeneous Gaussian regression (Gneiting et al., 2005) generally performed best for medium-range temperature forecasts, and that logistic regression, a conventional statistical method, was generally best for daily temperature forecasts and for medium-range precipitation forecasts.

Although probabilistic MOS forecasts based on logistic regressions have been found to perform well, notable difficulties arise from the conventional approach to deriving these equations. Specifically, separate prediction equations are conventionally derived to predict probabilities corresponding to different predictand thresholds. For example, different logistic regression equations would generally be used to forecast probabilities that future precipitation will be no greater than 0, 2, 5, 10, 20 mm, etc., even though the same predictor variables (which could be, for example, ensemble mean and ensemble standard deviation) might be used in each of the forecast equations. One problem with this approach is that probabilities for intermediate predictand thresholds (e.g. 15 mm in the above example) must be interpolated from the finite collection of MOS equations. In addition, fitting separate equations for different thresholds requires estimation of a relatively large number of regression parameters in total, which may lead to poor estimates unless the available training sample is quite large. However, the most problematic consequence of separate MOS equations for different predictand thresholds is that forecasts derived from the different equations are not constrained to be mutually consistent. For example, because of sampling variations the forecast probability for precipitation at or below 20 mm may be smaller than the forecast probability for precipitation at or below 10 mm. All of these problems can be circumvented by extending the logistic regression structure to allow prediction of probabilities for all thresholds simultaneously, by including the predictand threshold itself as one of the regression predictors. In addition to providing smoothly varying forecast probabilities for any and all predictand thresholds, the approach requires fitting substantially fewer parameters as compared to many separate logistic regressions, and ensures that nonsense negative probabilities cannot be produced. This kind of extension to ordinary logistic regression is not a new concept, and indeed is an instance of the well-known statistical approach called generalized linear modeling (McCullagh and Nelder, 1989).

Section 2 outlines the use of logistic regression in the context of MOS forecasts, and the extension proposed here. Section 3 describes the ensemble forecast data used to illustrate the procedure, which are the same GFS reforecasts (Hamill et al., 2006) used by Wilks and Hamill (2007). Note, however, that the proposed structure is equally applicable to MOS post-processing of conventional single-integration dynamical forecasts. Section 4 presents representative forecast performance results, and Section 5 concludes.

2. Logistic regression

Logistic regression is a nonlinear regression method that is well suited to probability forecasting, i.e. situations where the predictand is a probability rather than a measurable physical quantity.
Denoting as p the probability being forecast, a logistic regression takes the form:

p = exp[f(x)] / (1 + exp[f(x)])   (1)

where f(x) is a linear function of the predictor variables, x,

f(x) = b_0 + b_1 x_1 + b_2 x_2 + ... + b_K x_K   (2)

The mathematical form of the logistic regression equation yields S-shaped prediction functions that are strictly bounded on the unit interval (0 < p < 1). The name logistic regression follows from the regression equation being linear on the logistic, or log-odds, scale:

ln[ p / (1 − p) ] = f(x)   (3)

Even though the form of Equation (3) is linear, standard linear regression methods cannot be applied to estimate the regression parameters because in the training data the predictand values are binary (i.e. 0 or 1), so that the left-hand side of Equation (3) is not defined. Rather, the parameters are generally estimated using an iterative maximum likelihood procedure (e.g. McCullagh and Nelder, 1989; Wilks, 2006a).

An important recent use of logistic regression has been in the statistical post-processing of ensemble forecasts of continuous predictands such as temperature or precipitation (e.g. Hamill et al., 2004; Hamill et al., 2008; Wilks and Hamill, 2007), for which the forecast probabilities pertain to the occurrence of the verification, V, above or below a prediction threshold corresponding to a particular data quantile q:

p = Pr{ V ≤ q }   (4)

In the ensemble-MOS context the primary predictor, x_1, is generally the ensemble mean, and to the extent that ensemble spread provides significant predictive information a second predictor x_2 may involve the ensemble standard deviation, either alone (Hamill et al., 2008) or multiplied by the ensemble mean (Wilks and Hamill, 2007).
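As a concrete illustration of Equations (1)–(4), the short Python sketch below fits a single-threshold logistic regression by an iterative maximum-likelihood (Newton/IRLS) procedure. The data, predictor and coefficient values are synthetic and invented for the example; this is not code from the study.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: x1 stands in for an ensemble-mean forecast (mm),
# and y = 1 where the verification fell at or below the chosen quantile q.
n = 1000
x1 = rng.gamma(2.0, 3.0, size=n)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(1.0 - 0.25 * x1)))).astype(float)

def fit_logistic(X, y, n_iter=25):
    """Iterative maximum-likelihood (Newton/IRLS) fit of ln[p/(1-p)] = b0 + b.x."""
    Xd = np.column_stack([np.ones(len(y)), X])                 # prepend intercept column
    b = np.zeros(Xd.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-np.clip(Xd @ b, -30, 30)))    # Equation (1)
        grad = Xd.T @ (y - p)                                  # gradient of log-likelihood
        hess = Xd.T @ (Xd * (p * (1.0 - p))[:, None])          # IRLS weighting
        b = b + np.linalg.solve(hess, grad)                    # Newton update
    return b

b0, b1 = fit_logistic(x1[:, None], y)
print("fitted intercept and slope:", round(b0, 3), round(b1, 3))

# Forecast probability Pr{V <= q} for a new ensemble mean of 5 mm (Equation (1))
print("Pr{V <= q | x1 = 5 mm} =", round(1.0 / (1.0 + np.exp(-(b0 + b1 * 5.0))), 3))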

To date, logistic regressions have been used for MOS post-processing by fitting separate equations for selected predictand quantile thresholds. For example, consider probability forecasts for both the lower tercile (the data value defining the boundary between the lower third and the remainder of a distribution), q_1/3, and the upper tercile, q_2/3, of the climatological distribution of a predictand. The two threshold probabilities, p_1/3 = Pr{V ≤ q_1/3} and p_2/3 = Pr{V ≤ q_2/3}, would be forecast using the two logistic regression functions ln[p_1/3 / (1 − p_1/3)] = f_1/3(x) and ln[p_2/3 / (1 − p_2/3)] = f_2/3(x). Unless the regression functions f_1/3(x) and f_2/3(x) are exactly parallel (i.e. they differ only with respect to their intercept parameters, b_0) they will cross for some values of the predictor(s) x, leading to the nonsense result of p_1/3 > p_2/3, implying Pr{q_1/3 < V < q_2/3} < 0. Other problems with this approach are that estimating probabilities corresponding to threshold quantiles for which regressions have not been fitted requires some kind of interpolation, and that fitting many prediction equations requires that a large number of parameters be estimated.

All of these problems can be alleviated if a well-fitting regression can be estimated simultaneously for all forecast quantiles. A potentially promising approach is to extend Equations (1) and (3) to include a nondecreasing function g(q) of the threshold quantile q, unifying equations for individual quantiles into a single equation that pertains to any quantile:

p(q) = exp[f(x) + g(q)] / (1 + exp[f(x) + g(q)])   (5)

or,

ln[ p(q) / (1 − p(q)) ] = f(x) + g(q)   (6)

One interpretation of Equation (6) is that it specifies parallel functions of the predictors x, whose intercepts b_0(q) increase monotonically with the threshold quantile, q:

ln[ p(q) / (1 − p(q)) ] = b_0 + g(q) + b_1 x_1 + b_2 x_2 + ... + b_K x_K = b_0(q) + b_1 x_1 + b_2 x_2 + ... + b_K x_K   (7)

The question from a practical perspective is whether a functional form for g(q) can be specified for which Equation (5) provides forecasts of competitive quality to those from the traditional single-quantile Equation (1).
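To make the unified formulation concrete, the following Python sketch (again with synthetic, invented data rather than anything from the study) stacks the training sample over a set of thresholds and fits Equation (5) as one logistic regression in which the threshold enters as an additional predictor column. For simplicity g(q) is taken proportional to q here; the form g(q) = b_2 √q adopted in Section 3 below would simply replace that column with √q.

import numpy as np

rng = np.random.default_rng(1)

# Synthetic forecast/observation pairs (mm); both are invented for the example
n = 800
x_ens = rng.gamma(2.0, 3.0, size=n)
v = rng.gamma(2.0, 3.0, size=n) + 0.3 * x_ens
thresholds = np.array([0.5, 2.0, 5.0, 10.0, 20.0])   # hypothetical quantiles q (mm)

# Stack the sample over thresholds: predictors are x_ens and g(q) (here simply q)
X = np.vstack([np.column_stack([x_ens, np.full(n, q)]) for q in thresholds])
y = np.concatenate([(v <= q).astype(float) for q in thresholds])

def fit_logistic(X, y, n_iter=25):
    Xd = np.column_stack([np.ones(len(y)), X])
    b = np.zeros(Xd.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-np.clip(Xd @ b, -30, 30)))
        b = b + np.linalg.solve(Xd.T @ (Xd * (p * (1.0 - p))[:, None]), Xd.T @ (y - p))
    return b

b0, b1, b2 = fit_logistic(X, y)
print("b0, b1 (ensemble mean), b2 (threshold):", np.round([b0, b1, b2], 3))

# One unified fit now yields a full CDF for any case; with b2 > 0 the
# probabilities are automatically non-decreasing in q, as required of Equation (5)
qs = np.linspace(0.0, 25.0, 6)
cdf = 1.0 / (1.0 + np.exp(-(b0 + b1 * 6.0 + b2 * qs)))
print(np.round(cdf, 3))

With b_2 > 0 the fitted cumulative probabilities cannot decrease as the threshold increases, which is the coherence property emphasized above.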
3. Data and unified forecast equations

Forecast and observation data sets used here are the same as those used in Wilks and Hamill (2007). Ensemble forecasts have been taken from the Hamill et al. (2006) reforecast dataset, which contains retrospectively recomputed, 15-member ensemble forecasts beginning in January 1979, using a ca. 1998 (T62, or roughly 200 km horizontal resolution) version of the U.S. National Centers for Environmental Prediction Global Forecast System (GFS) model (Caplan et al., 1997). Precipitation forecasts for days 6–10 were aggregated to yield medium-range ensemble forecasts for this lead time, through February 2005. These forecasts are available on a regular latitude–longitude grid, and nearest-gridpoint values are used to forecast precipitation at 19 U.S. first-order National Weather Service stations: Atlanta, Georgia (ATL); Bismarck, North Dakota (BIS); Boston, Massachusetts (BOS); Buffalo, New York (BUF); Washington, DC (DCA); Denver, Colorado (DEN); Detroit, Michigan (DTW); Great Falls, Montana (GTF); Los Angeles, California (LAX); Miami, Florida (MIA); Minneapolis, Minnesota (MSP); New Orleans, Louisiana (MSY); Omaha, Nebraska (OMA); Phoenix, Arizona (PHX); Seattle, Washington (SEA); San Francisco, California (SFO); Salt Lake City, Utah (SLC); and St Louis, Missouri (STL). These subjectively chosen stations provide reasonably uniform and representative coverage of the conterminous United States. Probabilistic forecasts for 6–10 day accumulated precipitation were made for the seven climatological quantiles q_0.05 (5th percentile), q_0.10 (lower decile), q_0.33 (lower tercile), q_0.50 (median), q_0.67 (upper tercile), q_0.90 (upper decile) and q_0.95 (95th percentile), estimated using the full 26 year observation data set.

The verification data were constructed from running 5-day totals of the midnight-to-midnight daily precipitation accumulations. The climatological quantiles were tabulated locally, both by forecast date and individually by verifying station, in order to avoid artificial skill deriving from correct forecasting of variations in climatological values (Hamill and Juras, 2006). For many locations and times of year, two or more of these seven quantiles of 5-day accumulated precipitation are zero, and in these cases only the single zero quantile corresponding to the largest probability was used in regression fitting and verification of forecasts. For example, if at least 10% of the climatological 5-day precipitation values for a particular location and date are zero, then both q_0.05 and q_0.10 are equal to 0 mm, but only q_0.10 and the five larger quantiles are used.

Again following Wilks and Hamill (2007), forecast equations were fitted using 1, 2, 5, 15 and 25 years of training data, and evaluated using cross-validation so that all forecasts are out-of-sample. Separate forecast equations were fitted for each day of the 26 year data period, using a training-data window of ±45 days around the forecast date. To the extent possible, training years were chosen as those immediately preceding the year omitted for cross-validation, and to the extent that this was not possible the nearest subsequent years were used. For example, equations used to forecast for 1 March 1980 using 1 year of training data were fitted using data from 15 January through 15 April 1979. These procedures were followed both for the individual logistic regressions, Equation (1), and the unified formulation in Equation (5), although as noted above only one quantile corresponding to zero accumulated precipitation was forecast and verified in any one instance. Only a single ensemble predictor, the square root of the ensemble mean, was used in the function f(x):

f(x) = b_0 + b_1 √x_ens   (8)
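The cross-validated assembly of training data described above might be sketched as follows; the year limits and calendar handling in this Python fragment are simplifying assumptions for illustration (leap days are ignored), not the study's code.

from datetime import date, timedelta

def training_dates(forecast_date, n_train_years, first_year=1979, last_year=2004,
                   half_window=45):
    """Candidate training dates: a +/- half_window day window around the forecast
    date, taken from n_train_years years, preferring the years immediately
    preceding the left-out forecast year."""
    preceding = list(range(forecast_date.year - 1, first_year - 1, -1))
    following = list(range(forecast_date.year + 1, last_year + 1))
    years = (preceding + following)[:n_train_years]

    dates = []
    for y in years:
        centre = date(y, forecast_date.month, forecast_date.day)
        dates += [centre + timedelta(days=k) for k in range(-half_window, half_window + 1)]
    return sorted(dates)

# Example from the text: 1 year of training for a 1 March 1980 forecast
window = training_dates(date(1980, 3, 1), n_train_years=1)
print(window[0], "...", window[-1])     # 1979-01-15 ... 1979-04-15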

This predictor choice yields slightly better, but overall very similar, forecasts to equations using the untransformed ensemble mean as the single predictor. Adding the ensemble standard deviation or its square root, alone or in combination with the ensemble mean, did not improve either the separate-equation or the unified forecasts, a result consistent with the medium-range precipitation forecast results reported by Hamill et al. (2004) and Wilks and Hamill (2007), although ensemble spread has been found to be a significant logistic regression predictor for shorter lead times (Hamill et al., 2008; Wilks and Hamill, 2007). Unification of the logistic regressions for all forecast quantiles was achieved using the square root of the forecast quantile as the sole predictor in the function g(q):

g(q) = b_2 √q   (9)

This choice for g(q) was entirely empirical, but yielded substantially better forecasts than did g(q) = b_2 q, and only marginally less accurate forecasts overall than those made using g(q) = b_2 √q + b_3 q. Thus, a full set of separate-equation forecasts (Equation (1)) for a given location and day required fitting as many as 14 parameters (seven equations with two parameters each), whereas the unified approach (Equation (5)) required fitting only three parameters.

4. Results

4.1. Characteristics of the individual and unified logistic regressions

Before presenting the forecast verification statistics, it is worthwhile to illustrate the gains in logical consistency and comprehensiveness that derive from using the unified logistic regression framework. Figure 1(a) shows Equation (6), evaluated at six selected climatological quantiles, for the 23 November 2001 forecast made for Minneapolis, fitted using the full 25 year training sample, which pertains to accumulated precipitation for the period 28 November – 2 December 2001. Here f(x) = b_0 + b_1 √x_ens, so that all of the regression lines are parallel, with common slope b_1 (in units of mm^(-1/2)). Here also g(q) = 0.836 √q, and the positive regression parameter b_2 = 0.836 mm^(-1/2) ensures that the regression intercepts b_0(q) (Equation (7)) produce forecast probabilities, given any ensemble mean, that are strictly increasing in q. It is thus impossible for the specified cumulative probability pertaining to a smaller precipitation accumulation threshold to be larger than that for a larger threshold. In contrast, Figure 1(b) shows the six corresponding individual logistic regressions, fitted separately for the same six climatological quantiles, using Equation (3) in each case. Here nothing constrains the six fitted equations to be mutually consistent, and indeed they clearly are not. The equations for q_0.10 and q_0.33 happen to exhibit similar slopes, as do the equations for q_0.50, q_0.67 and q_0.95, whereas these two groups of regressions are inconsistent with each other, and the equation for q_0.90 is clearly inconsistent with all of the others. As a practical matter these equations would not yield jointly nonsensical predictions for relatively small values of x_ens, but for x_ens larger than about 3 mm (the point at which the regression functions for q_0.33 and q_0.50 cross) the resulting forecast probabilities overall would be incoherent. Indeed, unless the separate logistic regression equations are exactly parallel, logically inconsistent sets of forecasts are inevitable for sufficiently extreme values of the predictor.
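The inevitability of such crossings is easy to see numerically. In the following sketch the two sets of coefficients are invented for illustration (they are not the fitted values behind Figure 1); beyond the crossing point of the two log-odds lines, the implied probability between the two thresholds becomes negative.

import numpy as np

# Hypothetical single-threshold fits ln[p/(1-p)] = b0 + b1*x for two quantiles,
# with slopes chosen so that the lines cross
b0_lo, b1_lo = -0.5, 0.60     # lower threshold, e.g. q_1/3
b0_hi, b1_hi = 0.8, 0.15      # higher threshold, e.g. q_2/3

x_cross = (b0_hi - b0_lo) / (b1_lo - b1_hi)   # where the log-odds lines cross
print("log-odds lines cross at x =", round(x_cross, 2))

def prob(b0, b1, x):
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))

for x in (1.0, x_cross, x_cross + 3.0):
    p_lo, p_hi = prob(b0_lo, b1_lo, x), prob(b0_hi, b1_hi, x)
    print(f"x = {x:5.2f}   Pr(V<=q_lo) = {p_lo:.3f}   Pr(V<=q_hi) = {p_hi:.3f}   "
          f"implied Pr(q_lo < V <= q_hi) = {p_hi - p_lo:+.3f}")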
Note that the plotted regressions in Figure 1(a) have been chosen to match the threshold quantiles for those fitted in Figure 1(b), but results in Figure 1(a) could have been plotted for any precipitation quantiles.

Figure 1. Logistic regressions plotted on the log-odds scale (ln[p/(1 − p)] versus ensemble mean, mm), for 28 November – 2 December 2001, fitted using the full 25 year training length, for Minneapolis. Forecasts from Equation (6), evaluated at selected quantiles, are shown by the parallel lines in Figure 1(a), which cannot yield logically inconsistent sets of forecasts. Regressions for the same quantiles, fitted separately using Equation (3), are shown in Figure 1(b). Because these regressions are not constrained to be parallel, logically inconsistent forecasts are inevitable for sufficiently extreme values of the predictor.

Figure 2. Probability distribution functions for accumulated precipitation (cumulative probability Pr(V ≤ q) versus threshold quantile q, mm), evaluated at selected values of the x_ens predictor, corresponding to Figure 1. The unified Equation (5), plotted in Figure 2(a), yields smooth, continuous and coherent probabilities. In Figure 2(b) results for precipitation amounts between the six fitted equations in the form of Equation (1) must be interpolated, and results for sufficiently large x_ens are necessarily logically inconsistent.

Figure 2 shows an alternative representation of the forecasts portrayed in Figure 1, namely cumulative forecast probabilities as functions of the threshold quantile, for selected values of the predictor x_ens. (In this plot the characteristic S-shape of the logistic function is distorted because the threshold quantile, rather than its square root, is plotted on the horizontal axis.) Figure 2(a), computed from the unified regressions (Equation (5)), shows that this approach allows smooth and continuous probability distributions to be specified for any value of x_ens that may be produced by the dynamical ensemble. In contrast, Figure 2(b) illustrates two shortcomings of the traditional approach of fitting separate logistic regressions for different threshold quantiles. First, Equation (1) can only be evaluated for the six threshold quantiles for which regressions have been fitted, shown as the black dots: each vertical column of black dots has been computed from one of the six fitted logistic regressions in Figure 1(b). The implied probability distribution function for other thresholds must be interpolated between these forecast points, which has been done in Figure 2(b) using straight lines. Second, these implied probability distribution functions are not non-decreasing in q for x_ens larger than about 3 mm, as could have been anticipated from the first crossing of the regression lines in Figure 1(b). For example, for x_ens = 10 mm (a relatively extreme forecast), these individual regressions jointly specify the nonsense result that Pr{0 < V ≤ 16.5 mm} < 0.

Figure 3, again pertaining to the forecast equations portrayed in Figures 1(a) and 2(a), illustrates that the unified logistic regression equation (Equation (5)) provides probabilities that vary continuously as functions of both the predictor and the threshold quantile. The figure shows cumulative probabilities for a range of values of both the ensemble mean and the threshold quantile. For any given value of the ensemble mean, the median of the forecast distribution (heavy contour) is larger than x_ens, indicating a dry bias in the GFS ensemble for this location and date, which is being corrected by this MOS regression.

Figure 3. Contour plot of cumulative forecast probabilities, Pr{V ≤ q}, according to the unified forecast Equation (5), as a function of both q and the forecast ensemble mean, again for the 28 November – 2 December 2001 forecast at Minneapolis shown in Figures 1(a) and 2(a). Forecast medians (bold contour) below the 45° diagonal reflect a dry bias in the ensemble for this location and date.

4.2. Forecast verification

Figure 4 compares Brier scores for the individually fitted regressions (Equation (1), x) and the unified fitting for all threshold quantiles jointly (Equation (5), o), for the threshold quantiles used in fitting the individual (Equation (1)) logistic regressions. (Results for q_0.05 have been omitted because q_0.10 = 0 for a large majority of the locations and dates.)
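For reference, the Brier score used in this comparison is the mean squared difference between the forecast probability and the binary outcome. The sketch below (with synthetic numbers, not data from the study) computes it along with the score for a constant climatological relative frequency, corresponding to the dashed lines in Figure 4.

import numpy as np

def brier_score(p_forecast, occurred):
    """BS = mean of (p - o)^2, with o = 1 if the event occurred and 0 otherwise."""
    p = np.asarray(p_forecast, dtype=float)
    o = np.asarray(occurred, dtype=float)
    return np.mean((p - o) ** 2)

rng = np.random.default_rng(2)
p = rng.random(10000)              # stand-in forecast probabilities
o = rng.random(10000) < p          # outcomes statistically consistent with them
print("BS, forecasts   :", round(brier_score(p, o), 4))

# Reference score for a constant climatological relative frequency, as for the
# dashed horizontal lines in Figure 4
print("BS, climatology :", round(brier_score(np.full(o.size, o.mean()), o), 4))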

Figure 4. Brier scores aggregated over the 19 forecast locations, for six climatological precipitation quantiles defined locally with respect to both time of year and geographic location, as a function of training-data length. x indicate individual logistic regressions (Equation (1)) fitted for each threshold quantile, and o indicate unified fitting (Equation (5)) for all threshold quantiles jointly. Dashed horizontal lines show Brier scores for constant climatological relative frequencies. (a) 10th percentile, (b) 33rd percentile, (c) 50th percentile, (d) 67th percentile, (e) 90th percentile, (f) 95th percentile.

As expected, all Brier scores improve (become smaller) as the training-data length is increased. For the larger training lengths, overall accuracy as measured by the Brier score is essentially equivalent for the two forecast methods. That is, no price has been paid in terms of reduced accuracy for the gains in smoothness and enforced logical consistency of the unified approach in Equation (5). Interestingly, Brier scores for the unified approach are somewhat better for the shorter training lengths, which presumably reflects the advantage of fitting fewer parameters when the available training data are limited. The number of forecasts averaged for each point in Figure 4 is very large, so that even allowing for effects of temporal and spatial correlation among the forecasts, the associated sampling variability will be very small.

Finally, Figure 5 compares reliability diagrams for forecasts of the lower (Figure 5(a)) and upper (Figure 5(b)) terciles, for individual logistic regression forecasts (Equation (1), x) and unified regressions for all quantiles (Equation (5), o), aggregated over all 19 locations, and for the full 25 year training length. Here, forecast probabilities have been rounded and binned to the nearest 0.05, and points representing fewer than 5 forecasts have not been plotted. Results for the two logistic regression approaches are quite similar for both threshold quantiles, both with respect to the conditional relative frequencies and the inset histograms indicating the frequencies with which the different forecast probabilities have been used. The apparent underforecasting for the lower probabilities in Figure 5(b) is produced mainly by forecasts for the two Pacific-coast stations Los Angeles and San Francisco, and may reflect unrepresentativeness of the nearest GFS gridpoints used to compute the ensemble mean predictors for these locations.

Figure 5. Reliability diagrams (observed relative frequency versus forecast probability) for (a) lower-tercile (Pr{V ≤ q_1/3}) and (b) upper-tercile (Pr{V ≤ q_2/3}) forecasts, for all locations pooled and the full 25 year training samples. x indicate individual logistic regressions (Equation (1)) fitted for each threshold quantile, and o indicate unified fitting (Equation (5)) for all threshold quantiles jointly. Cases with fewer than 5 forecasts are not plotted. Insets show the frequency of use of the binned forecast probabilities.
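The construction of such a reliability diagram can be sketched as follows, again with synthetic forecasts rather than the study's data: probabilities are binned by rounding, and the observed relative frequency is tabulated within each bin.

import numpy as np

def reliability_table(p_forecast, occurred, bin_width=0.05, min_count=1):
    """Round forecasts to the nearest bin_width and tabulate, for each bin,
    the number of forecasts and the observed relative frequency."""
    p = np.asarray(p_forecast, dtype=float)
    o = np.asarray(occurred, dtype=float)
    binned = np.round(p / bin_width) * bin_width
    table = []
    for centre in np.unique(binned):
        use = binned == centre
        if use.sum() >= min_count:          # sparsely used bins can be dropped
            table.append((centre, int(use.sum()), o[use].mean()))
    return table

rng = np.random.default_rng(3)
p = rng.random(5000)
o = rng.random(5000) < p                    # synthetic, well-calibrated forecasts
for centre, count, freq in reliability_table(p, o)[:5]:
    print(f"forecast bin {centre:.2f}   n = {count:4d}   observed frequency = {freq:.2f}")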

5. Summary and conclusions

In this paper, conventional logistic regressions pertaining to single predictand thresholds are extended to yield full continuous, and coherent, probability distribution forecasts for any and all thresholds, by including the predictand threshold itself as an additional predictor in the forecast equation. The procedure is illustrated using 6–10 day precipitation forecasts for 19 locations across the U.S., drawn from the GFS reforecast dataset (Hamill et al., 2006). A key advantage of the approach is that the resulting forecast probabilities are necessarily mutually consistent, in the sense that the cumulative probability for a smaller predictand threshold cannot be larger than the probability for a larger threshold, a condition that is not guaranteed if regression parameters are estimated separately for different thresholds. Indeed, unless the (log-odds) regression slopes for single-threshold logistic regressions are exactly parallel, probabilistically inconsistent forecasts will necessarily occur for sufficiently extreme values of the predictor(s). No accuracy penalty, in terms of the Brier score, was paid for these benefits, and indeed the accuracy of forecasts from the unified framework was better for smaller training samples, apparently because fewer parameters need to be estimated.

The results obtained here could almost certainly be improved upon, because the various choices faced in construction of an MOS scheme have not been exhaustively tuned. For example, Equations (5) and (6) were fitted using the arbitrarily chosen set of threshold quantiles q_0.05, q_0.10, q_0.33, q_0.50, q_0.67, q_0.90 and q_0.95. How many and which such quantiles will yield the best results could be investigated in any given instance. Similarly, the forms of the function f(x) (Equation (8)) involving the GFS ensemble mean that were tried included using √x_ens or x_ens only, whereas other authors (Hamill et al., 2008; Sloughter et al., 2007) have reported that better results were obtained with different transformations of the ensemble-mean precipitation predictor. Similarly, no exploration of the use of other predictors (e.g. quadratic functions of x_ens, predictor values from more or other GFS gridpoints, or different GFS predictors such as vorticity or precipitable water) has been attempted. Different logistic regression structures may be more appropriate for different locations and/or different times of year, and different predictor variables may be more effective for predictands other than medium-range precipitation totals. For shorter lead times, use of the ensemble standard deviation may improve the forecasts, as has been demonstrated for ensemble-MOS post-processing using conventional single-threshold logistic regression (Hamill et al., 2008; Wilks and Hamill, 2007), and incorporation of an ensemble spread predictor into f(x) would be straightforward. Different forms for the function g(q), which unifies forecasts for all quantiles in Equation (5), will also likely be required in different forecast settings.
Here, results using g(q) = b_2 √q (Equation (9)) were substantially better than those achieved using g(q) = b_2 q, and almost indistinguishably less accurate than those achieved using g(q) = b_2 √q + b_3 q (results not shown). In this last case, it was necessary to constrain the maximum-likelihood parameter-fitting procedure to ensure dp(q)/dq > 0 at each iteration. Finally, use of this extended logistic regression approach is not limited to MOS post-processing of ensemble forecasts. It is equally applicable to MOS post-processing of output from conventional single-integration dynamical forecasts, for which simple linear regressions (i.e. REEP; Glahn, 1985) are often still used, even though logistic regression can provide more accurate forecasts (Applequist et al., 2002; Sohn et al., 2005). Indeed, a modification of Equation (6) for conventional linear regression, namely p = f(x) + g(q), could also be employed in a REEP setting, although fast modern computing capabilities likely justify preference for the probabilistically consistent structure of logistic regression.

Acknowledgements

Tom Hamill and two anonymous reviewers provided valuable comments on earlier versions of this manuscript.

References

Applequist S, Gahrs GE, Pfeffer FL. 2002. Comparison of methodologies for probabilistic quantitative precipitation forecasting. Weather and Forecasting 17.
Caplan P, Derber J, Gemmill W, Hong S-Y, Pan H-L, Parrish D. 1997. Changes to the 1995 NCEP operational medium-range forecast model analysis-forecast system. Weather and Forecasting 12.
Detlefsen NK. 2006. Probability forecasts for weather-dependent agricultural operations using generalized estimating equations. Tellus A 58.
Glahn HR. 1985. Statistical weather forecasting. In: Probability, Statistics and Decision Making in the Atmospheric Sciences, Murphy AH, Katz RW (eds). Westview Press: Boulder, CO.
Glahn HR, Lowry DA. 1972. The use of Model Output Statistics (MOS) in objective weather forecasting. Journal of Applied Meteorology 11.
Gneiting T, Raftery AE, Westveld A, Goldman T. 2005. Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Monthly Weather Review 133.
Hamill TM, Colucci SJ. 1998. Evaluation of Eta-RSM ensemble probabilistic precipitation forecasts. Monthly Weather Review 126.
Hamill TM, Hagedorn R, Whitaker JS. 2008. Probabilistic forecast calibration using ECMWF and GFS ensemble reforecasts. Part II: precipitation. Monthly Weather Review 136.
Hamill TM, Juras J. 2006. Measuring forecast skill: is it real or is it the varying climatology? Quarterly Journal of the Royal Meteorological Society 132.
Hamill TM, Snyder C, Whitaker JS. 2003. Ensemble forecasts and the properties of flow-dependent analysis-error covariance singular vectors. Monthly Weather Review 131.
Hamill TM, Whitaker JS, Mullen SL. 2006. Reforecasts: an important dataset for improving weather predictions. Bulletin of the American Meteorological Society 87.
Hamill TM, Whitaker JS, Wei X. 2004. Ensemble reforecasting: improving medium-range forecast skill using retrospective forecasts. Monthly Weather Review 132.
Hansen JA. 2002. Accounting for model error in ensemble-based state estimation and forecasting. Monthly Weather Review 130.
Lemcke C, Kruizinga S. 1988. Model output statistics forecasts: three years of operational experience in the Netherlands. Monthly Weather Review 116.
McCullagh P, Nelder JA. 1989. Generalized Linear Models, 2nd Edn. Chapman and Hall: London; 511 pp.
Molteni F, Buizza R, Palmer TN, Petroliagis T. 1996. The new ECMWF ensemble prediction system: methodology and validation. Quarterly Journal of the Royal Meteorological Society 122.
Raftery AE, Balabdaoui F, Gneiting T, Polakowski M. 2005. Using Bayesian model averaging to calibrate forecast ensembles. Monthly Weather Review 133.
Roulston MS, Smith LA. 2003. Combining dynamical and statistical ensembles. Tellus Series A 55.
Sloughter JM, Raftery AE, Gneiting T, Fraley C. 2007. Probabilistic quantitative precipitation forecasting using Bayesian model averaging. Monthly Weather Review 135.
Smith LA. 2006. Predictability past, predictability present. In: Predictability of Weather and Climate, Palmer T, Hagedorn R (eds). Cambridge University Press: New York; 217.
Sohn KT, Lee JH, Lee SH, Ryu CS. 2005. Statistical prediction of heavy rain in South Korea. Advances in Atmospheric Sciences 22.
Sokol Z. 2003. MOS-based precipitation forecasts for river basins. Weather and Forecasting 18.
Stephenson DB, Coelho CAS, Balmaseda M, Doblas-Reyes FJ. 2005. Forecast assimilation: a unified framework for the combination of multi-model weather and climate predictions. Tellus Series A 57.
Toth Z, Kalnay E. 1993. Ensemble forecasting at NMC: the generation of perturbations. Bulletin of the American Meteorological Society 74.
Vislocky RL, Young GS. 1989. The use of perfect prog forecasts to improve model output statistics forecasts of precipitation probability. Weather and Forecasting 4.
Wang X, Bishop CH. 2003. A comparison of breeding and ensemble transform Kalman filter ensemble forecast schemes. Journal of the Atmospheric Sciences 60.
Wilks DS. 2006a. Statistical Methods in the Atmospheric Sciences, 2nd Edn. Academic Press; 627 pp.
Wilks DS. 2006b. Comparison of ensemble-MOS methods in the Lorenz '96 setting. Meteorological Applications 13.
Wilks DS, Hamill TM. 2007. Comparison of ensemble-MOS methods using GFS reforecasts. Monthly Weather Review 135.


More information

Five years of limited-area ensemble activities at ARPA-SIM: the COSMO-LEPS system

Five years of limited-area ensemble activities at ARPA-SIM: the COSMO-LEPS system Five years of limited-area ensemble activities at ARPA-SIM: the COSMO-LEPS system Andrea Montani, Chiara Marsigli and Tiziana Paccagnella ARPA-SIM Hydrometeorological service of Emilia-Romagna, Italy 11

More information

Locally Calibrated Probabilistic Temperature Forecasting Using Geostatistical Model Averaging and Local Bayesian Model Averaging

Locally Calibrated Probabilistic Temperature Forecasting Using Geostatistical Model Averaging and Local Bayesian Model Averaging Locally Calibrated Probabilistic Temperature Forecasting Using Geostatistical Model Averaging and Local Bayesian Model Averaging William Kleiber, Adrian E. Raftery Department of Statistics, University

More information

Predictability from a Forecast Provider s Perspective

Predictability from a Forecast Provider s Perspective Predictability from a Forecast Provider s Perspective Ken Mylne Met Office, Bracknell RG12 2SZ, UK. email: ken.mylne@metoffice.com 1. Introduction Predictability is not a new issue for forecasters or forecast

More information

P1.8 INTEGRATING ASSESSMENTS OF USER NEEDS WITH WEATHER RESEARCH: DEVELOPING USER-CENTRIC TOOLS FOR RESERVOIR MANAGEMENT

P1.8 INTEGRATING ASSESSMENTS OF USER NEEDS WITH WEATHER RESEARCH: DEVELOPING USER-CENTRIC TOOLS FOR RESERVOIR MANAGEMENT P1.8 INTEGRATING ASSESSMENTS OF USER NEEDS WITH WEATHER RESEARCH: DEVELOPING USER-CENTRIC TOOLS FOR RESERVOIR MANAGEMENT Andrea J. Ray 1, Joseph J. Barsugli 1,2, and Thomas Hamill 1 1 NOAA Earth Systems

More information

P1.8 INTEGRATING ASSESSMENTS OF USER NEEDS WITH WEATHER RESEARCH: DEVELOPING USER-CENTRIC TOOLS FOR RESERVOIR MANAGEMENT

P1.8 INTEGRATING ASSESSMENTS OF USER NEEDS WITH WEATHER RESEARCH: DEVELOPING USER-CENTRIC TOOLS FOR RESERVOIR MANAGEMENT P1.8 INTEGRATING ASSESSMENTS OF USER NEEDS WITH WEATHER RESEARCH: DEVELOPING USER-CENTRIC TOOLS FOR RESERVOIR MANAGEMENT Andrea J. Ray 1, Joseph J. Barsugli 1,2, and Thomas Hamill 1 1 NOAA Earth Systems

More information

A Comparative Study of 4D-VAR and a 4D Ensemble Kalman Filter: Perfect Model Simulations with Lorenz-96

A Comparative Study of 4D-VAR and a 4D Ensemble Kalman Filter: Perfect Model Simulations with Lorenz-96 Tellus 000, 000 000 (0000) Printed 20 October 2006 (Tellus LATEX style file v2.2) A Comparative Study of 4D-VAR and a 4D Ensemble Kalman Filter: Perfect Model Simulations with Lorenz-96 Elana J. Fertig

More information

Application and verification of ECMWF products in Austria

Application and verification of ECMWF products in Austria Application and verification of ECMWF products in Austria Central Institute for Meteorology and Geodynamics (ZAMG), Vienna Alexander Kann 1. Summary of major highlights Medium range weather forecasts in

More information

operational status and developments

operational status and developments COSMO-DE DE-EPSEPS operational status and developments Christoph Gebhardt, Susanne Theis, Zied Ben Bouallègue, Michael Buchhold, Andreas Röpnack, Nina Schuhen Deutscher Wetterdienst, DWD COSMO-DE DE-EPSEPS

More information

Comparison of nonhomogeneous regression models for probabilistic wind speed forecasting

Comparison of nonhomogeneous regression models for probabilistic wind speed forecasting Comparison of nonhomogeneous regression models for probabilistic wind speed forecasting Sebastian Lerch and Thordis L. Thorarinsdottir May 10, 2013 arxiv:1305.2026v1 [stat.ap] 9 May 2013 In weather forecasting,

More information

Application and verification of ECMWF products in Croatia

Application and verification of ECMWF products in Croatia Application and verification of ECMWF products in Croatia August 2008 1. Summary of major highlights At Croatian Met Service, ECMWF products are the major source of data used in the operational weather

More information

P 1.86 A COMPARISON OF THE HYBRID ENSEMBLE TRANSFORM KALMAN FILTER (ETKF)- 3DVAR AND THE PURE ENSEMBLE SQUARE ROOT FILTER (EnSRF) ANALYSIS SCHEMES

P 1.86 A COMPARISON OF THE HYBRID ENSEMBLE TRANSFORM KALMAN FILTER (ETKF)- 3DVAR AND THE PURE ENSEMBLE SQUARE ROOT FILTER (EnSRF) ANALYSIS SCHEMES P 1.86 A COMPARISON OF THE HYBRID ENSEMBLE TRANSFORM KALMAN FILTER (ETKF)- 3DVAR AND THE PURE ENSEMBLE SQUARE ROOT FILTER (EnSRF) ANALYSIS SCHEMES Xuguang Wang*, Thomas M. Hamill, Jeffrey S. Whitaker NOAA/CIRES

More information

ECMWF products to represent, quantify and communicate forecast uncertainty

ECMWF products to represent, quantify and communicate forecast uncertainty ECMWF products to represent, quantify and communicate forecast uncertainty Using ECMWF s Forecasts, 2015 David Richardson Head of Evaluation, Forecast Department David.Richardson@ecmwf.int ECMWF June 12,

More information

Proper Scores for Probability Forecasts Can Never Be Equitable

Proper Scores for Probability Forecasts Can Never Be Equitable APRIL 2008 J O L LIFFE AND STEPHENSON 1505 Proper Scores for Probability Forecasts Can Never Be Equitable IAN T. JOLLIFFE AND DAVID B. STEPHENSON School of Engineering, Computing, and Mathematics, University

More information

Application and verification of the ECMWF products Report 2007

Application and verification of the ECMWF products Report 2007 Application and verification of the ECMWF products Report 2007 National Meteorological Administration Romania 1. Summary of major highlights The medium range forecast activity within the National Meteorological

More information

ensemblebma: An R Package for Probabilistic Forecasting using Ensembles and Bayesian Model Averaging

ensemblebma: An R Package for Probabilistic Forecasting using Ensembles and Bayesian Model Averaging ensemblebma: An R Package for Probabilistic Forecasting using Ensembles and Bayesian Model Averaging Chris Fraley, Adrian E. Raftery Tilmann Gneiting, J. McLean Sloughter Technical Report No. 516 Department

More information

Adding Value to the Guidance Beyond Day Two: Temperature Forecast Opportunities Across the NWS Southern Region

Adding Value to the Guidance Beyond Day Two: Temperature Forecast Opportunities Across the NWS Southern Region Adding Value to the Guidance Beyond Day Two: Temperature Forecast Opportunities Across the NWS Southern Region Néstor S. Flecha Atmospheric Science and Meteorology, Department of Physics, University of

More information

The Calibration Simplex: A Generalization of the Reliability Diagram for Three-Category Probability Forecasts

The Calibration Simplex: A Generalization of the Reliability Diagram for Three-Category Probability Forecasts 1210 W E A T H E R A N D F O R E C A S T I N G VOLUME 28 The Calibration Simplex: A Generalization of the Reliability Diagram for Three-Category Probability Forecasts DANIEL S. WILKS Department of Earth

More information

EVALUATION OF NDFD AND DOWNSCALED NCEP FORECASTS IN THE INTERMOUNTAIN WEST 2. DATA

EVALUATION OF NDFD AND DOWNSCALED NCEP FORECASTS IN THE INTERMOUNTAIN WEST 2. DATA 2.2 EVALUATION OF NDFD AND DOWNSCALED NCEP FORECASTS IN THE INTERMOUNTAIN WEST Brandon C. Moore 1 *, V.P. Walden 1, T.R. Blandford 1, B. J. Harshburger 1, and K. S. Humes 1 1 University of Idaho, Moscow,

More information

Forecasting wave height probabilities with numerical weather prediction models

Forecasting wave height probabilities with numerical weather prediction models Ocean Engineering 32 (2005) 1841 1863 www.elsevier.com/locate/oceaneng Forecasting wave height probabilities with numerical weather prediction models Mark S. Roulston a,b, *, Jerome Ellepola c, Jost von

More information

Locally Calibrated Probabilistic Temperature Forecasting Using Geostatistical Model Averaging and Local Bayesian Model Averaging

Locally Calibrated Probabilistic Temperature Forecasting Using Geostatistical Model Averaging and Local Bayesian Model Averaging 2630 M O N T H L Y W E A T H E R R E V I E W VOLUME 139 Locally Calibrated Probabilistic Temperature Forecasting Using Geostatistical Model Averaging and Local Bayesian Model Averaging WILLIAM KLEIBER*

More information

ensemblebma: An R Package for Probabilistic Forecasting using Ensembles and Bayesian Model Averaging

ensemblebma: An R Package for Probabilistic Forecasting using Ensembles and Bayesian Model Averaging ensemblebma: An R Package for Probabilistic Forecasting using Ensembles and Bayesian Model Averaging Chris Fraley, Adrian E. Raftery Tilmann Gneiting, J. McLean Sloughter Technical Report No. 516R Department

More information

Lightning Data Assimilation using an Ensemble Kalman Filter

Lightning Data Assimilation using an Ensemble Kalman Filter Lightning Data Assimilation using an Ensemble Kalman Filter G.J. Hakim, P. Regulski, Clifford Mass and R. Torn University of Washington, Department of Atmospheric Sciences Seattle, United States 1. INTRODUCTION

More information

Combining Deterministic and Probabilistic Methods to Produce Gridded Climatologies

Combining Deterministic and Probabilistic Methods to Produce Gridded Climatologies Combining Deterministic and Probabilistic Methods to Produce Gridded Climatologies Michael Squires Alan McNab National Climatic Data Center (NCDC - NOAA) Asheville, NC Abstract There are nearly 8,000 sites

More information

Basic Verification Concepts

Basic Verification Concepts Basic Verification Concepts Barbara Brown National Center for Atmospheric Research Boulder Colorado USA bgb@ucar.edu May 2017 Berlin, Germany Basic concepts - outline What is verification? Why verify?

More information

Theory and Practice of Data Assimilation in Ocean Modeling

Theory and Practice of Data Assimilation in Ocean Modeling Theory and Practice of Data Assimilation in Ocean Modeling Robert N. Miller College of Oceanic and Atmospheric Sciences Oregon State University Oceanography Admin. Bldg. 104 Corvallis, OR 97331-5503 Phone:

More information

Will it rain? Predictability, risk assessment and the need for ensemble forecasts

Will it rain? Predictability, risk assessment and the need for ensemble forecasts Will it rain? Predictability, risk assessment and the need for ensemble forecasts David Richardson European Centre for Medium-Range Weather Forecasts Shinfield Park, Reading, RG2 9AX, UK Tel. +44 118 949

More information

Wind Forecasting using HARMONIE with Bayes Model Averaging for Fine-Tuning

Wind Forecasting using HARMONIE with Bayes Model Averaging for Fine-Tuning Available online at www.sciencedirect.com ScienceDirect Energy Procedia 40 (2013 ) 95 101 European Geosciences Union General Assembly 2013, EGU Division Energy, Resources & the Environment, ERE Abstract

More information

Evaluating Forecast Quality

Evaluating Forecast Quality Evaluating Forecast Quality Simon J. Mason International Research Institute for Climate Prediction Questions How do we decide whether a forecast was correct? How do we decide whether a set of forecasts

More information

Forecasting precipitation for hydroelectric power management: how to exploit GCM s seasonal ensemble forecasts

Forecasting precipitation for hydroelectric power management: how to exploit GCM s seasonal ensemble forecasts INTERNATIONAL JOURNAL OF CLIMATOLOGY Int. J. Climatol. 27: 1691 1705 (2007) Published online in Wiley InterScience (www.interscience.wiley.com).1608 Forecasting precipitation for hydroelectric power management:

More information

6.5 Operational ensemble forecasting methods

6.5 Operational ensemble forecasting methods 6.5 Operational ensemble forecasting methods Ensemble forecasting methods differ mostly by the way the initial perturbations are generated, and can be classified into essentially two classes. In the first

More information

Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran

Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran doi:10.5194/angeo-29-1295-2011 Author(s) 2011. CC Attribution 3.0 License. Annales Geophysicae Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran I.

More information

TC/PR/RB Lecture 3 - Simulation of Random Model Errors

TC/PR/RB Lecture 3 - Simulation of Random Model Errors TC/PR/RB Lecture 3 - Simulation of Random Model Errors Roberto Buizza (buizza@ecmwf.int) European Centre for Medium-Range Weather Forecasts http://www.ecmwf.int Roberto Buizza (buizza@ecmwf.int) 1 ECMWF

More information

Application and verification of ECMWF products in Austria

Application and verification of ECMWF products in Austria Application and verification of ECMWF products in Austria Central Institute for Meteorology and Geodynamics (ZAMG), Vienna Alexander Kann, Klaus Stadlbacher 1. Summary of major highlights Medium range

More information

Ensemble Copula Coupling: Towards Physically Consistent, Calibrated Probabilistic Forecasts of Spatio-Temporal Weather Trajectories

Ensemble Copula Coupling: Towards Physically Consistent, Calibrated Probabilistic Forecasts of Spatio-Temporal Weather Trajectories Ensemble Copula Coupling: Towards Physically Consistent, Calibrated Probabilistic Forecasts of Spatio-Temporal Weather Trajectories Tilmann Gneiting and Roman Schefzik Institut für Angewandte Mathematik

More information

Implementation and Evaluation of a Mesoscale Short-Range Ensemble Forecasting System Over the. Pacific Northwest. Eric P. Grimit

Implementation and Evaluation of a Mesoscale Short-Range Ensemble Forecasting System Over the. Pacific Northwest. Eric P. Grimit Implementation and Evaluation of a Mesoscale Short-Range Ensemble Forecasting System Over the Eric P. Grimit Pacific Northwest Advisor: Dr. Cliff Mass Department of Atmospheric Sciences, University of

More information

Upgrade of JMA s Typhoon Ensemble Prediction System

Upgrade of JMA s Typhoon Ensemble Prediction System Upgrade of JMA s Typhoon Ensemble Prediction System Masayuki Kyouda Numerical Prediction Division, Japan Meteorological Agency and Masakazu Higaki Office of Marine Prediction, Japan Meteorological Agency

More information