Assessing the Ensemble Spread-Error Relationship


MONTHLY WEATHER REVIEW, VOL., NO., PAGES 1-31

Assessing the Ensemble Spread-Error Relationship

T. M. Hopson
Research Applications Laboratory, National Center for Atmospheric Research, Boulder, Colorado, USA

Corresponding author: T. M. Hopson, RAL - NCAR, P. O. Box 3000, Boulder, Colorado, USA. (hopson@ucar.edu)

Abstract. With the increased utilization of ensemble forecasts in weather and hydrologic applications, there is a need for verification tools to test their benefit over less expensive deterministic forecasts. This paper examines the ensemble spread-error relationship, beginning with the ability of the Pearson correlation to verify a forecast system's capacity to represent its own varying forecast error. Considering only perfect model conditions, this work theoretically extends the results from previous numerical studies showing the correlation's diagnostic limitations: it can never reach its maximum value of one; its theoretical asymptotic value depends on the specific definitions of spread and error used, ranging from 0 up to either √(1/3) or √(2/π); and, perhaps most fatal to its utility, its theoretical limits depend on the varying stability properties of the physical system being modeled. Building from this, we argue there are two aspects of an ensemble's dispersion that should be assessed. First, and perhaps more fundamentally: is there enough variability in the ensemble's dispersion to justify the maintenance of an expensive ensemble prediction system (EPS), irrespective of whether the EPS is well-calibrated or not? To diagnose this, the factor that controls the theoretical upper limit of the spread-error correlation can be useful. Second, does the variable dispersion of an ensemble relate to a variable expectation of forecast error? Representing the spread-error correlation in relation to its theoretical limit can provide a simple diagnostic of this attribute. A context for these concepts is provided by assessing two operational ensembles: Western US temperature forecasts and Brahmaputra River flow.

1. Introduction

The development of ensemble weather, climate, and hydrologic forecasting has brought new opportunities to provide significant economic and humanitarian benefit over a single best-guess forecast (Richardson 2000; Zhu et al. 2002; Palmer 2002; among others). One potentially significant, if not fundamental, attribute of an ensemble prediction system (EPS) is its ability to forecast its own expected forecast error. This is accomplished if the EPS provides an accurate expectation of its temporally-varying errors through its temporally-varying ensemble dispersion (Molteni et al. 1996; Toth and Kalnay 1997; Houtekamer et al. 1996; Toth et al. 2003; Zhu et al. 2002; Hopson and Webster 2010). Given that one would expect larger ensemble dispersion to imply more uncertainty in the forecast ensemble mean or in any one ensemble member (and likewise, smaller dispersion to imply less uncertainty), many past approaches have used the Pearson correlation coefficient as a diagnostic for this potential EPS property, linearly correlating differing measures of ensemble spread with differing measures of forecast error. However, the conclusions drawn from the use of this metric have often been ambiguous (Barker 1991; Molteni et al. 1996; Buizza 1997; Scherrer et al. 2004). Houtekamer (1993), Whitaker and Loughe (1998), and Grimit and Mass (2007) have investigated why linear correlation may not be a conclusive metric, primarily in the context of a statistical model presented originally by Kruizinga and Kok (1988; hereafter "KK"). The above authors' analyses were done in the context of an EPS perfect forecast assumption, one in which the underlying probability distribution function (PDF) of the forecast error is known, and individual ensemble members represent random draws from this distribution, with the ensemble spread providing a measure of the expected forecast

error. Note the distinction between perfect forecast and EPS perfect forecast assumptions: the former is when the forecast is identical to the future observation; the latter is when the distribution of the EPS ensembles is statistically indistinguishable from the forecast error PDF. In the context of the KK model, these authors showed that even for a perfect EPS, the correlation between skill and spread need not be statistically significant, with the magnitude of the linear correlation depending on the day-to-day variability of spread: for verification data where there is large temporal variation in ensemble spreads, the correlation between spread and skill is at a maximum (but less than one), and in regions where the ensemble spread is more temporally uniform, the correlation is at a minimum. Grimit and Mass (2007) also numerically assessed the behavior of the spread-error correlation with the same KK model in the context of differing continuous and categorical spread and error metrics, and for ensemble systems of finite size, showing additional dependencies of the spread-skill correlation on these factors. Although conducted in the context of one particular statistical model (i.e., KK), the general conclusion one could draw from these analyses is that the linear correlation is deficient as a verification measure by virtue of its dependence on factors other than exclusive properties of EPS forecast performance. One purpose of the current paper is to elaborate on and generalize this last point by presenting some of these dependencies from a more theoretical framework for continuous spread and error measures. Among the dependencies that can affect the spread-error correlation is the fact that studies assessing the forecast spread-skill correlation have used differing definitions and combinations of measures representing spread and skill.
It is not clear how these different combinations of measures affect the theoretical limits of the correlation, and therefore how these studies might interrelate.
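As an illustration of the issue (a sketch of our own, not drawn from the studies cited; all data are synthetic), the same set of perfect-EPS forecasts yields different spread-error correlations depending on which matched metric pairing is used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic perfect-EPS verification set: the per-forecast spread sigma_psi
# is lognormally distributed, and each verifying observation is a random
# draw from N(0, sigma_psi), i.e. from the forecast PDF itself.
sigma = rng.lognormal(0.0, 0.6, 500_000)   # ensemble standard deviations
obs = rng.normal(0.0, sigma)               # verifying observations

# Two matched spread-error pairings applied to the SAME forecasts:
r_abs = np.corrcoef(sigma, np.abs(obs))[0, 1]   # (std dev, absolute error)
r_sq = np.corrcoef(sigma**2, obs**2)[0, 1]      # (variance, squared error)
```

The two correlations differ even though the forecast system is identical, previewing the metric dependence derived theoretically below.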

Here we calculate some of the theoretical limits of the correlation for different spread and error combinations, which we argue provide two generalizable metrics to test the utility of an EPS's ability to provide ensemble members with varying dispersion. In section 2 we start by presenting some of the possible continuous error and spread measures, arguing that only certain combinations of these spread and error metrics are dimensionally well-matched and should be used in conjunction. Later in the section we provide explicit calculations of theoretical simplifications of the linear correlation for four different matched spread-skill metrics. For this we also utilize the EPS perfect forecast assumption with no sampling limitations, but do not rely on a particular functional form for the distribution of ensemble spread. In section 3 we discuss the results of section 2's calculations, showing how the theoretical asymptotic limits of the spread-skill correlation can vary greatly depending on which spread-skill metrics are used, and providing the results for the KK model as one particular case study. In section 4, we discuss two metrics for assessing the utility of an ensemble's temporally varying dispersion, which were generalized from the analysis provided in section 2. In section 5, we place our analysis in the context of two particular EPS examples of spread and error, using ensemble temperature forecasts for a region of the southwest USA and ensemble river discharge forecasts for Bangladesh.

2. Calculations

In this section we present calculations to simplify the linear correlation for four pairings of continuous error and spread metrics. The purpose of these calculations is to simplify these theoretical correlations to a point where the mathematical form of the asymptotic limits becomes clear, as well as the dependencies dictating these limits. It is assumed

there are no sampling limitations and that the EPS perfect forecast assumption holds, such that for a given forecast, there is an underlying PDF from which both individual ensemble members and the associated observable (verification) are randomly drawn. As a result, the expected error of an ensemble forecast is completely determined by this PDF, and the theoretical form of the error-spread correlation reduces to only the PDF moments. To make these simplifications, without loss of generality (WLOG) we can introduce into the equation for the Pearson correlation coefficient a calculation that replaces the forecast error with its expected value; and in the case of an EPS perfect forecast, the domain of this calculation over all errors is equivalent to the forecast ensemble member PDF. This replaces the error with its expected value, proportional to a measure of ensemble spread. As well, WLOG, expectation value operations over all possible ensemble members are also done.

2.1. Notation

The population of members of an ensemble forecast is represented by Ψ, with an individual member (realization) represented by ψ. Similarly, for some measure of spread s, we represent the population of ensemble forecasts, each with a value of s, as Σ. Consider that Ψ could be viewed as the underlying (implied) PDF of an ensemble forecast at a particular time from which the ensemble members are randomly drawn. Likewise, Σ could be viewed as representing the whole set of ensemble forecasts, each with an identifiable value of associated ensemble spread, over all the times forecasts are generated. Bra-ket expectation value notation is used for the expectation value of some quantity A = A(ψ) over an ensemble population Ψ, which could be in terms of discrete variables

with probability density function P(ψ),

\langle A(\psi) \rangle_\Psi \equiv \sum_\psi A(\psi) P(\psi), \qquad (1)

or in terms of continuous variables with associated probability density function f(ψ),

\langle A(\psi) \rangle_\Psi \equiv \int_\Psi A(\psi) f(\psi)\, d\psi. \qquad (2)

The subscript (Ψ) on the brackets ⟨·⟩ specifies the population domain over which the expectation is calculated. Similarly, we define the expectation value of A = A(s) over a population of forecasts, each with defined ensemble spread s, as ⟨A⟩_Σ, and we represent the double expectation value of A = A(ψ, s) over both populations Ψ and Σ as ⟨A⟩_{Ψ,Σ}. In terms of expectation values, the Pearson correlation coefficient between a generic spread (s) and error (ε) measure is given by

r = \frac{\langle (s - \langle s \rangle_\Sigma)(\epsilon - \langle \epsilon \rangle_\Sigma) \rangle_\Sigma}{\left[ \langle (s - \langle s \rangle_\Sigma)^2 \rangle_\Sigma \, \langle (\epsilon - \langle \epsilon \rangle_\Sigma)^2 \rangle_\Sigma \right]^{1/2}}, \qquad (3)

where the population domain over which the expectation (average) is calculated is the set of ensemble forecasts Σ (with associated spread measure s). For further simplifications, as we will show below, for a given ensemble forecast with some measure of spread s, an average can also be made over the possible realizations of the observable, ⟨·⟩_{Ψ_o}, or over the population of ensemble members Ψ(s), given by ⟨·⟩_Ψ. Note that by our perfect model definition, Ψ_o ≡ Ψ.

2.2. Spread-error measures

The forecast member spread is often defined as the variance, standard deviation, or mean absolute difference of the ensemble members about the ensemble mean, or less commonly, the mean absolute difference of the ensemble members about a chosen ensemble member. In addition, we include the 4th moment of the ensemble members about the mean, which

arises in the calculations. The forecast error of an ensemble forecast is often defined in terms of the squared or absolute difference between the verification (observation) and either any one ensemble member or the ensemble mean forecast. Symbolic notation for these measures is given in Tables 1 and 2, respectively. Arguably, only certain of these error and spread measures are appropriately matched if one wants to directly relate expected error to a measure of ensemble spread. Measures that are naturally paired have a direct functional relationship relating forecast error to forecast spread, and have the same moments (physical units). Of the measures presented here, these pairings are: 1) the set of squared error measures with the variance as spread measure; and 2) the set of absolute difference error measures with either the standard deviation or mean absolute difference as spread measure. Although other error and spread measures could also be used (e.g., the rank probability skill score) to assess the forecast spread-error relationship, arguably the useful information in the ensemble spread is that it should be a statement about the expected error in the forecast, and these error and spread measures directly make this connection. For reference, Table 3 shows how the expected values of the error measures ε (column 1) can be given in terms of measures of forecast spread s (column 2) for an EPS perfect forecast (i.e., one in which the observation ψ_o is equivalent to a random draw from the forecast ensemble member PDF). These relationships are used in the calculations below. WLOG, these relationships were derived by introducing an expectation value operation over all possible observational states, and in some cases, over all possible ensemble members. Column 3 of this table shows how the expected value of error corresponds to

either the standard deviation σ_ψ or variance σ_ψ² when the forecast ensembles are normally distributed. Figure 1 provides a schematic of the correlation coefficient simplification calculation. Shown are six-member ensemble forecasts of a continuous variable ψ for three different forecast times. The ensemble members are represented by the six thin black vertical lines, with the implied PDF p(ψ; s_i) from which the members are sampled given by the bell-shaped curves. The PDF represents the forecast in the asymptotic limit of no sampling limitations. The observations corresponding to the forecasts are shown by the vertical red lines, with the ensemble mean given by the dashed vertical lines. Some measure of error ε (shown here as the distance of the observation from the ensemble mean) for each forecast is also shown, as is some measure of ensemble member spread s. In our calculations to simplify the correlation between spread s and error ε, we replace the error by its expected value, which can be calculated by performing a weighted integration of the observation over all possible values. The result is that the expected value is proportional to a measure of ensemble member spread:

\langle \epsilon \rangle_{\Psi_i} = \int_{\Psi_i(s)} \epsilon \, p(\psi; s_i)\, d\psi \propto s_i. \qquad (4)

In practice, p(ψ; s_i) does not have to be explicitly given, and the relationship of the expected value of the error to a measure of ensemble member spread can be shown either through algebraic manipulation or by inspection (see Table 3 for examples). In this example, the expected value of the error over all forecasts is then proportional to

\langle \epsilon \rangle_\Sigma \propto \frac{1}{n} \sum_{i=1}^{n} s_i. \qquad (5)
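The expected-value replacement of (4) can be checked numerically for the Gaussian entry of Table 3. The sketch below (our own illustration; the value of σ is arbitrary) confirms by Monte Carlo that, for a perfect EPS whose implied PDF is N(µ, σ), the expected absolute error of the ensemble mean is √(2/π)·σ:

```python
import numpy as np

rng = np.random.default_rng(42)

def expected_abs_error_mc(sigma, n_draws=200_000):
    """Monte Carlo estimate of <|<psi> - psi_o|> for a perfect EPS whose
    implied PDF is N(0, sigma): the verifying observation is a random
    draw from the same PDF as the ensemble members (cf. Eq. 4)."""
    obs = rng.normal(0.0, sigma, n_draws)  # observations ~ forecast PDF
    return np.mean(np.abs(obs - 0.0))      # ensemble mean is 0 here

sigma = 1.7                                # arbitrary illustrative spread
estimate = expected_abs_error_mc(sigma)
theory = np.sqrt(2.0 / np.pi) * sigma      # Table 3, normal ensembles
```

The estimate converges to the theoretical value as the number of draws grows, so the error can indeed be replaced by a quantity proportional to the spread.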

2.3. Correlation of s_abs with ε_µ and the correlation of σ_ψ² with ε_{µ²}

In this section we simplify the correlations for two specific cases: 1) (s_abs, ε_µ) and 2) (σ_ψ², ε_{µ²}). As seen in Table 3, these pairings are especially well matched since (for an EPS perfect forecast) the expectation value of the error measure is the spread measure itself (⟨ε⟩_{Ψ_o} = s). Left in terms of a generic ε and s for these two sets of spread-error measures, WLOG we can introduce into (3) an expectation value ⟨·⟩_{Ψ_o} over all possible states of the observation within each expectation value of error ⟨ε⟩_Σ over the population of forecasts Σ:

r = \frac{\langle (s - \langle s \rangle_{\Sigma,\Psi_o})(\epsilon - \langle \epsilon \rangle_{\Sigma,\Psi_o}) \rangle_{\Sigma,\Psi_o}}{\left[ \langle (s - \langle s \rangle_{\Sigma,\Psi_o})^2 \rangle_{\Sigma,\Psi_o} \, \langle (\epsilon - \langle \epsilon \rangle_{\Sigma,\Psi_o})^2 \rangle_{\Sigma,\Psi_o} \right]^{1/2}}. \qquad (6)

Noting that ⟨s⟩_{Ψ_o} = ⟨s⟩_Ψ = s and expanding,

r = \frac{\langle (s - \langle s \rangle_\Sigma)(\langle \epsilon \rangle_{\Psi_o} - \langle \epsilon \rangle_{\Sigma,\Psi_o}) \rangle_\Sigma}{\left[ \langle (s - \langle s \rangle_\Sigma)^2 \rangle_\Sigma \, \langle (\epsilon - \langle \epsilon \rangle_{\Sigma,\Psi_o})^2 \rangle_{\Sigma,\Psi_o} \right]^{1/2}}, \qquad (7)

and using ⟨ε⟩_{Ψ_o} = ⟨ε⟩_Ψ = s,

r = \frac{\langle (s - \langle s \rangle_\Sigma)(s - \langle s \rangle_\Sigma) \rangle_\Sigma}{\left[ \langle (s - \langle s \rangle_\Sigma)^2 \rangle_\Sigma \, \langle (\epsilon - \langle s \rangle_\Sigma)^2 \rangle_{\Sigma,\Psi} \right]^{1/2}}, \qquad (8)

so the correlation coefficient further simplifies to

r = \sqrt{ \frac{\langle s^2 \rangle_\Sigma - \langle s \rangle_\Sigma^2}{\langle \epsilon^2 \rangle_{\Sigma,\Psi} - \langle s \rangle_\Sigma^2} }. \qquad (9)

To simplify things further, we return to the specific metrics of cases 1) and 2). Simplifying for case 1), we have ⟨ε_µ²⟩_{Ψ_o} ≡ ⟨|⟨ψ⟩_Ψ − ψ_o|²⟩_{Ψ_o} = ⟨(⟨ψ⟩_Ψ − ψ)²⟩_Ψ ≡ σ_ψ² by definition. And for case 2), we have ⟨ε_{µ²}⟩_{Ψ_o} ≡ ⟨(⟨ψ⟩_Ψ − ψ_o)²⟩_{Ψ_o} = ⟨(⟨ψ⟩_Ψ − ψ)²⟩_Ψ ≡ σ_ψ², again by definition. In addition for case 2), ⟨ε²_{µ²}⟩_{Ψ_o} ≡ ⟨(⟨ψ⟩_Ψ − ψ_o)⁴⟩_{Ψ_o} = ⟨(⟨ψ⟩_Ψ − ψ)⁴⟩_Ψ ≡ m_4, where m_4 is the 4th moment about the mean ⟨ψ⟩_Ψ defined in Table 1. Substituting into

(9) for cases 1) and 2), we have

r = \sqrt{ \frac{\langle s_{\rm abs}^2 \rangle_\Sigma - \langle s_{\rm abs} \rangle_\Sigma^2}{\langle \sigma_\psi^2 \rangle_\Sigma - \langle s_{\rm abs} \rangle_\Sigma^2} } \qquad (10)

and

r = \sqrt{ \frac{\langle (\sigma_\psi^2)^2 \rangle_\Sigma - \langle \sigma_\psi^2 \rangle_\Sigma^2}{\langle m_4 \rangle_\Sigma - \langle \sigma_\psi^2 \rangle_\Sigma^2} }, \qquad (11)

respectively, which are now dependent only on the moments of the ensemble member spread. To simplify (10) and (11) further, we would need to impose a requirement on the distribution of the ensemble members holding for all forecasts, and specific to each case. These requirements are: for case 1), s_abs = βσ_ψ; for case 2), m_4 = α(σ_ψ²)², where α and β are constants determined by the PDF of the ensemble distribution. Note that normally-distributed ensemble members satisfy the requirements for both of these cases, where for case 1) β = √(2/π), and for case 2) α = 3. Imposing these requirements on s_abs (case 1) and on m_4 (case 2), (10) and (11) become

r = \beta \sqrt{ \frac{1 - \langle \sigma_\psi \rangle_\Sigma^2 / \langle \sigma_\psi^2 \rangle_\Sigma}{1 - \beta^2 \langle \sigma_\psi \rangle_\Sigma^2 / \langle \sigma_\psi^2 \rangle_\Sigma} } \qquad (12)

and

r = \sqrt{ \frac{1 - \langle \sigma_\psi^2 \rangle_\Sigma^2 / \langle (\sigma_\psi^2)^2 \rangle_\Sigma}{\alpha - \langle \sigma_\psi^2 \rangle_\Sigma^2 / \langle (\sigma_\psi^2)^2 \rangle_\Sigma} }, \qquad (13)

respectively.

2.4. Correlation of σ_ψ² with ε_{d²}

For the case of (σ_ψ², ε_{d²}), we have

r = \frac{\langle (\sigma_\psi^2 - \langle \sigma_\psi^2 \rangle_\Sigma)(\langle \epsilon_{d^2} \rangle_{\Psi_o,\Psi} - \langle \epsilon_{d^2} \rangle_{\Sigma,\Psi_o,\Psi}) \rangle_\Sigma}{\left[ \langle (\sigma_\psi^2 - \langle \sigma_\psi^2 \rangle_\Sigma)^2 \rangle_\Sigma \, \langle (\epsilon_{d^2} - \langle \epsilon_{d^2} \rangle_{\Sigma,\Psi_o,\Psi})^2 \rangle_{\Sigma,\Psi_o,\Psi} \right]^{1/2}}, \qquad (14)

where, WLOG, we have introduced an additional expectation value operation (⟨·⟩_Ψ) over the population of ensemble members (Ψ), performed for each forecast with its specific σ_ψ² value. This was done in addition to the expectation value operation (⟨·⟩_{Ψ_o}) over the observation population (Ψ_o), as was introduced in the previous calculation. Under the EPS perfect forecast assumption, we have ⟨ε_{d²}⟩_{Ψ_o,Ψ} = 2(⟨ψ²⟩_Ψ − ⟨ψ⟩_Ψ²) = 2σ_ψ², and the numerator simplifies to 2[⟨(σ_ψ²)²⟩_Σ − ⟨σ_ψ²⟩_Σ²]. Similarly, the denominator simplifies to [(⟨(σ_ψ²)²⟩_Σ − ⟨σ_ψ²⟩_Σ²)(⟨ε²_{d²}⟩_{Σ,Ψ_o,Ψ} − 4⟨σ_ψ²⟩_Σ²)]^{1/2}. Again using the EPS perfect forecast assumption, ⟨ε²_{d²}⟩_{Ψ_o,Ψ} ≡ ⟨(ψ − ψ_o)⁴⟩_{Ψ_o,Ψ} = 2⟨(ψ − ⟨ψ⟩_Ψ)⁴⟩_Ψ + 6⟨(ψ − ⟨ψ⟩_Ψ)²⟩_Ψ² = 2m_4 + 6(σ_ψ²)². Putting this together, (14) simplifies to

r = \sqrt{ \frac{\langle (\sigma_\psi^2)^2 \rangle_\Sigma - \langle \sigma_\psi^2 \rangle_\Sigma^2}{\langle m_4 \rangle_\Sigma / 2 + 3 \langle (\sigma_\psi^2)^2 \rangle_\Sigma / 2 - \langle \sigma_\psi^2 \rangle_\Sigma^2} }, \qquad (15)

and the correlation coefficient is now given only in terms of the moments of the ensemble member spread. To simplify the relationship further, we would need to impose a requirement on the distribution of the ensemble members holding for all forecasts. As done in the previous section, if we impose m_4 = α(σ_ψ²)², where α is a proportionality constant, then substituting for m_4 in the denominator, combining, and simplifying, we get

r = \sqrt{ \frac{\langle (\sigma_\psi^2)^2 \rangle_\Sigma - \langle \sigma_\psi^2 \rangle_\Sigma^2}{(\alpha + 3) \langle (\sigma_\psi^2)^2 \rangle_\Sigma / 2 - \langle \sigma_\psi^2 \rangle_\Sigma^2} }. \qquad (16)

For normally distributed ensembles α = 3, and we derive the same result as given in the previous section for (σ_ψ², ε_{µ²}) (case 2).

2.5. Correlation of σ_ψ and ε_µ

Finally, we consider the case of (σ_ψ, ε_µ), given by

r = \frac{\langle (\sigma_\psi - \langle \sigma_\psi \rangle_\Sigma)(\epsilon_\mu - \langle \epsilon_\mu \rangle_\Sigma) \rangle_\Sigma}{\left[ \langle (\sigma_\psi - \langle \sigma_\psi \rangle_\Sigma)^2 \rangle_\Sigma \, \langle (\epsilon_\mu - \langle \epsilon_\mu \rangle_\Sigma)^2 \rangle_\Sigma \right]^{1/2}}. \qquad (17)

To simplify this expression, we expand the denominator noting that ε_µ · ε_µ = ε_{µ²}; WLOG introduce an expectation value operation over the possible observational states (⟨·⟩_{Ψ_o}); and use ⟨ε_µ⟩_{Ψ_o} ≡ ⟨|⟨ψ⟩_Ψ − ψ_o|⟩_{Ψ_o} = s_abs and ⟨ε_{µ²}⟩_{Ψ_o} ≡ ⟨(⟨ψ⟩_Ψ − ψ_o)²⟩_{Ψ_o} = σ_ψ², which follow from the EPS perfect forecast assumption. Doing so, (17) simplifies to

r = \frac{\langle (\sigma_\psi - \langle \sigma_\psi \rangle_\Sigma)(s_{\rm abs} - \langle s_{\rm abs} \rangle_\Sigma) \rangle_\Sigma}{\left[ (\langle \sigma_\psi^2 \rangle_\Sigma - \langle \sigma_\psi \rangle_\Sigma^2)(\langle \sigma_\psi^2 \rangle_\Sigma - \langle s_{\rm abs} \rangle_\Sigma^2) \right]^{1/2}}, \qquad (18)

or

r = \frac{\langle \sigma_\psi s_{\rm abs} \rangle_\Sigma - \langle \sigma_\psi \rangle_\Sigma \langle s_{\rm abs} \rangle_\Sigma}{\left[ (\langle \sigma_\psi^2 \rangle_\Sigma - \langle \sigma_\psi \rangle_\Sigma^2)(\langle \sigma_\psi^2 \rangle_\Sigma - \langle s_{\rm abs} \rangle_\Sigma^2) \right]^{1/2}}, \qquad (19)

and again, the correlation coefficient is given only in terms of moments of the ensemble member spread. To simplify the relationship for the correlation coefficient further, we impose the same requirement on the distribution of the ensemble members holding for all forecasts as was done with (s_abs, ε_µ) above, namely s_abs = βσ_ψ (which applies for normally-distributed ensemble members, with β = √(2/π)). Using this, we obtain

r = \beta \sqrt{ \frac{1 - \langle \sigma_\psi \rangle_\Sigma^2 / \langle \sigma_\psi^2 \rangle_\Sigma}{1 - \beta^2 \langle \sigma_\psi \rangle_\Sigma^2 / \langle \sigma_\psi^2 \rangle_\Sigma} }, \qquad (20)

which is identical to the result for (s_abs, ε_µ).

3. Results of correlation analysis

One focus of this paper has been to assess the limited utility of the linear spread-error correlation as a verification measure from a theoretical perspective. In the process of doing so, we have clarified the dependencies of the correlation through calculations performed under the assumptions of an EPS perfect forecast (i.e., the observation is statistically indistinguishable from any one ensemble member) for different combinations of continuous

spread and error measures, and in the case of no sampling limitation (i.e., large ensemble size). Tables 4 and 5 show results of these calculations, and from these we make the following points:

(1) The spread-error correlation can be simplified to forms no longer explicitly dependent on the error metric, but dependent only on different moments of the ensemble member distribution and on the average value (i.e., expectation value) of these moments over the forecast verification set. This can be seen in column 2 of Table 4, for different combinations of spread (s) and error (ε) measures. To clarify, none of these simplifications explicitly depend on either how the ensemble members are distributed, or how the varying spread metrics (moments) of these distributions are distributed themselves. The dependence is instead implicit, by virtue of what the average value of these moments is when averaged over the set of all forecasts used in the verification.

(2) Because, even for a perfect forecast, the correlation remains dependent on attributes of the ensemble member distribution, these dependencies cloud the ability of the spread-error correlation to provide a diagnostic of EPS performance for an imperfect model. One would rather hope for a verification metric to at least be asymptotically constant (e.g., a value of 1.0) when tested with perfect model results. The correlation's further dependence on ensemble size additionally clouds this metric's utility (see Grimit and Mass 2007 and Kolczynski et al. for numerical studies of this issue). Although the variability of ensemble member spread over a verification set could be indicative of EPS performance, such variability also could depend on the stability properties of the environmental system being modeled. In particular, if the system being modeled is in a very stable regime, then one may expect that the distribution of ensemble spreads would

be relatively narrow, and as we argue below, this would lead to a very different result for r than if the system samples a variety of stable/unstable states (i.e., a large spread in the ensemble spreads). More to the point, one would hope that for a perfect model, a measure of forecast performance such as r would be a fixed value, and not depend on the inherent properties of the system the forecast is trying to model.

(3) If further constraints are placed on the relationship between the moments of the ensemble member distribution (column 3 of Table 4), then further simplifications can be made to the form of the correlation (column 4, Table 4), reducing to only three forms for the six combinations considered in Table 4. For the metrics with the same units as the weather variable itself, with the constraint that s_abs = βσ_ψ where β is some constant, this is given by

r = \beta \sqrt{ \frac{1 - \langle \sigma_\psi \rangle_\Sigma^2 / \langle \sigma_\psi^2 \rangle_\Sigma}{1 - \beta^2 \langle \sigma_\psi \rangle_\Sigma^2 / \langle \sigma_\psi^2 \rangle_\Sigma} }. \qquad (21)

For the two squared metrics in the table, with the constraint that m_4 = α(σ_ψ²)² where α is some constant, the two correlation expressions are

r = \sqrt{ \frac{1 - \langle \sigma_\psi^2 \rangle_\Sigma^2 / \langle (\sigma_\psi^2)^2 \rangle_\Sigma}{\alpha - \langle \sigma_\psi^2 \rangle_\Sigma^2 / \langle (\sigma_\psi^2)^2 \rangle_\Sigma} } \qquad (22)

and

r = \sqrt{ \frac{1 - \langle \sigma_\psi^2 \rangle_\Sigma^2 / \langle (\sigma_\psi^2)^2 \rangle_\Sigma}{(\alpha + 3)/2 - \langle \sigma_\psi^2 \rangle_\Sigma^2 / \langle (\sigma_\psi^2)^2 \rangle_\Sigma} }. \qquad (23)

More specifically, if the ensemble member distribution is normally distributed (satisfying β = √(2/π) and α = 3), the theoretical form of the correlation is given in column 2, Table 5, which reduces to two forms for the metrics considered. For the metrics with the same units

as the weather variable itself, this is given by

r = \sqrt{\frac{2}{\pi}} \, \sqrt{ \frac{1 - \langle \sigma_\psi \rangle_\Sigma^2 / \langle \sigma_\psi^2 \rangle_\Sigma}{1 - (2/\pi) \langle \sigma_\psi \rangle_\Sigma^2 / \langle \sigma_\psi^2 \rangle_\Sigma} }. \qquad (24)

For the squared metrics, the correlation is

r = \sqrt{ \frac{1 - \langle \sigma_\psi^2 \rangle_\Sigma^2 / \langle (\sigma_\psi^2)^2 \rangle_\Sigma}{3 - \langle \sigma_\psi^2 \rangle_\Sigma^2 / \langle (\sigma_\psi^2)^2 \rangle_\Sigma} }. \qquad (25)

What can be seen, then, is that depending on which paired metric definitions are used, one can get different correlations for the same EPS forecasts, and along with this, different values for the correlations' upper bounds, as shown below. This would allow one to artificially increase or decrease the spread-error correlation through optimal choice of metric, depending on the result desired.

(4) Examining the more general (21)-(23), and (24)-(25) specific to normally-distributed ensembles, one can see there are two governing ratios (g) that determine the value of the correlation. For the metrics with the same units as the weather variable itself (rows 1 through 4 of Table 5), the ratio is

g_1 = \langle \sigma_\psi \rangle_\Sigma^2 / \langle \sigma_\psi^2 \rangle_\Sigma = \langle \sigma_\psi \rangle_\Sigma^2 / [\langle \sigma_\psi \rangle_\Sigma^2 + \mathrm{var}(\sigma_\psi)], \qquad (26)

where var(·) represents the variance. For the squared metrics (rows 5 through 6 of Table 5), the governing ratio is

g_2 = \langle \sigma_\psi^2 \rangle_\Sigma^2 / \langle (\sigma_\psi^2)^2 \rangle_\Sigma = \langle \sigma_\psi^2 \rangle_\Sigma^2 / [\langle \sigma_\psi^2 \rangle_\Sigma^2 + \mathrm{var}(\sigma_\psi^2)]. \qquad (27)

Consider the situation where the EPS consistently generates a probabilistic forecast with similar ensemble member dispersion from one forecast to the next. In the limit as the change in the dispersion vanishes, both var(σ_ψ) → 0 and var(σ_ψ²) → 0, and g → 1 in both (26) and (27). As a result, r → 0 in (21)-(25).
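As a sketch of our own (assuming only an array of per-forecast ensemble standard deviations, here called `sigma`), the governing ratios (26) and (27) are simple moment ratios of the spread over the verification set:

```python
import numpy as np

def governing_ratios(sigma):
    """Governing ratios g1 (Eq. 26) and g2 (Eq. 27), computed from the
    per-forecast ensemble standard deviations sigma_psi over the
    verification set."""
    sigma = np.asarray(sigma, dtype=float)
    variance = sigma**2
    g1 = sigma.mean()**2 / np.mean(variance)        # <s>^2 / <s^2>
    g2 = variance.mean()**2 / np.mean(variance**2)  # <s^2>^2 / <(s^2)^2>
    return g1, g2
```

A dispersion that never varies gives g1 = g2 = 1 (and hence r → 0); widely varying dispersion drives both ratios toward 0.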

In the other extreme limit, as the EPS generates an (infinitely) wide range of ensemble dispersion, both var(σ_ψ) → ∞ and var(σ_ψ²) → ∞, and g → 0 in both (26) and (27). As a result, r → β in (21), r → √(1/α) in (22), and r → √(2/(α+3)) in (23). For normally-distributed ensemble members, r → √(2/π) in (24) and r → √(1/3) in (25). Figure 2 provides a graphic illustration of how r varies as a function of ⟨σ_ψ⟩_Σ²/⟨σ_ψ²⟩_Σ and ⟨σ_ψ²⟩_Σ²/⟨(σ_ψ²)²⟩_Σ for normally-distributed ensemble members.

(5) The more general results in Tables 4 and 5 compare well with past numerical results in the literature. Barker (1991) examined the correlation between the ensemble variance (s; row 1, Table 1) and the square error of any one ensemble member (ε; row 2, Table 2) using geopotential height anomalies from extended-range forecasts. He numerically generated a maximum correlation value of 0.58, which is the same result we derive in row 6, Table 5 (√(1/3) ≈ 0.58). Also consider a specific distribution for the standard deviation σ_ψ of the ensemble member spread. If the possible values of σ_ψ over the forecasts of interest are lognormally distributed, then r takes on the specific form given in column 5 of Table 5. Modified versions of the lognormal distribution for σ_ψ were presented earlier by KK. This distribution is given by

f(\sigma_\psi) = \frac{1}{\sigma_\psi \sigma_\Sigma \sqrt{2\pi}} \exp\!\left( -\frac{(\ln(\sigma_\psi) - \ln(\sigma_{\psi m}))^2}{2\sigma_\Sigma^2} \right), \qquad (28)

where σ_Σ is the standard deviation of the distribution of ln(σ_ψ), and σ_{ψm} is the median value of σ_ψ. (Note: for the lognormal distribution, the mean ⟨σ_ψ⟩_Σ and median σ_{ψm} are not identical but are related by ⟨σ_ψ⟩_Σ = σ_{ψm} exp(σ_Σ²/2).) For specified values of σ_{ψm} and σ_Σ, values of σ_ψ can be derived from ln(σ_ψ) = N(ln(σ_{ψm}), σ_Σ), where N(γ, δ) represents a random draw from a Normal distribution with mean γ and standard deviation δ. For

normally-distributed ensemble members, with spread metric σ_ψ and error metric ε_µ, and with σ_ψ lognormally distributed, we then have the same case explored by Houtekamer (1993), Whitaker and Loughe (1998), and Grimit and Mass (2007). For this case, the governing ratio simplifies to

g = \langle \sigma_\psi \rangle_\Sigma^2 / \langle \sigma_\psi^2 \rangle_\Sigma = \exp[-\sigma_\Sigma^2], \qquad (29)

and the correlation simplifies to the expression in column 5, row 2 of Table 5, which itself duplicates (33) of Houtekamer (1993). Note, however, that defining the specific distribution of the ensemble member spread is not important to determining the limiting behavior of the correlation, which for this case is given by column 2, row 2 of Table 5, with correlation limits of [0, √(2/π)] ≈ [0, 0.80]. This same limit was numerically estimated by Houtekamer (1993), Whitaker and Loughe (1998), and Grimit and Mass (2007).

4. Two aspects of the variation of ensemble dispersion

In this section we argue that there are two aspects of an ensemble's variation in dispersion that should be assessed. The first aspect is: do the day-to-day variations in the dispersion of an ensemble forecast relate to day-to-day variations in the expected forecast error? The second aspect is: is there enough variability in the EPS dispersion to justify the expense of generating the ensemble? We address each of these aspects in turn below. We have argued in the previous section that the Pearson correlation does not provide a definitive tool to assess the reliability of the ensemble spread-error relationship, due to the fact that even for an EPS perfect forecast, the correlation can vary widely by virtue of its dependence on factors other than exclusive properties of EPS forecast performance.
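This sensitivity can be demonstrated numerically with the lognormal (KK-type) case of the previous section. The sketch below (our own illustration; σ_Σ = 1 and the median spread of 1 are arbitrary choices) draws lognormal spreads, draws one observation per forecast from N(0, σ_ψ) under the perfect-EPS, no-sampling-limitation assumptions, and compares the simulated correlation of σ_ψ with ε_µ against the closed form obtained by inserting g = exp(−σ_Σ²) from (29) into (24):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_r(sigma_Sigma, n_forecasts=200_000):
    """Monte Carlo correlation of sigma_psi with |ensemble mean - obs|
    when ln(sigma_psi) ~ N(0, sigma_Sigma) (i.e. median spread of 1) and
    the observation is drawn from N(0, sigma_psi) (perfect EPS)."""
    sigma = np.exp(rng.normal(0.0, sigma_Sigma, n_forecasts))
    err = np.abs(rng.normal(0.0, sigma))
    return np.corrcoef(sigma, err)[0, 1]

def theoretical_r(sigma_Sigma):
    """g = exp(-sigma_Sigma^2) (Eq. 29) inserted into Eq. 24."""
    g = np.exp(-sigma_Sigma**2)
    b2 = 2.0 / np.pi
    return np.sqrt(b2 * (1.0 - g) / (1.0 - b2 * g))
```

As σ_Σ grows, `theoretical_r` approaches the upper limit √(2/π) ≈ 0.80; as σ_Σ → 0 (constant dispersion), it falls to 0, even though the EPS is perfect in both cases.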

However, this does not necessarily mean that the correlation does not still have utility in answering this question, to which we will return below. Because of the correlation's deficiencies, Wang and Bishop (2003) suggested creating bins of the spread measure of choice (in their case, ensemble variance), and then averaging the corresponding error metrics (e.g., square error of the ensemble mean) over these bins to remove statistical noise. After this bin-averaging, properly matched spread and error measures should then equate (with the removal of observation error), and a perfect EPS forecast should therefore produce points lying along a 45-degree line. As the variations in an ensemble's dispersion become less informative, the slope of this curve (binned error versus binned spread) becomes more horizontal. However, as visually informative as this approach can be, ambiguities in the EPS's error-spread reliability can arise from ambiguities in the number of bins and the number of points in each bin required for this test, especially for small verification data sets. Similarly, Wang and Bishop (2003) also argued that the rate at which the binned error metric becomes noisier as bin size (thus sample size) decreases, and the degree of kurtosis in the binned sample of errors, both provide measures of the accuracy of the EPS error variation prediction. However, both of these latter approaches rely on an assumption of Gaussianity for proper interpretation. An alternative to the Wang and Bishop approach that produces a single scalar measure of EPS error-spread reliability and requires no distributional assumptions can be created from the Pearson correlation r.
Benefits of single scalar metrics are that they can better leverage limited verification data sets, they can often provide a more objective metric for assessing EPS performance as compared to, say, graphical assessments, and they can more easily lend themselves to constructing confidence bounds. This alternative can be constructed by

reframing r relative to a perfect EPS forecast in the context of a skill score (Wilks 1995). Note that although skill scores need to be used with care, since they can be improper in certain contexts (Gneiting and Raftery 2007; Murphy 1973), they can still provide a useful relative measure of forecast system improvement. A candidate for an error-spread Pearson correlation skill score SS_r is

SS_r = \frac{r_{\rm forc} - r_{\rm ref}}{r_{\rm perf} - r_{\rm ref}}, \qquad (30)

where r_forc is the EPS spread-error correlation, r_ref is that of a reference forecast, and r_perf is that for a perfect EPS forecast. For the correlation's spread-error metrics we use the standard deviation of the ensemble (σ_ψ) and the absolute error of the ensemble mean (ε_µ), respectively. If we take the no-skill forecast as the reference forecast, such that r_ref = 0, then SS_r simplifies to

SS_r = \frac{r_{\rm forc}}{r_{\rm perf}}. \qquad (31)

For simplicity, we could also assume the perfect EPS forecast has close to normally-distributed ensemble forecasts, such that r_perf is given by (24) above. A second, and perhaps more essential, aspect of an ensemble's variation in dispersion that should be assessed is whether there is enough variability in the dispersion to begin with to justify the generation of an expensive ensemble, irrespective of whether the EPS spread-error relationship is reliable or not. Implicitly, both Wang and Bishop (2003) and Grimit and Mass (2007) also examined this issue in the context of the binned error and spread metric comparison approach discussed above. Wang and Bishop used the y-axis range as a metric (binned error metric variation); while after applying an analogue calibration approach to each bin, Grimit and Mass used gains in the rank probability

(RPS) skill score as a gauge (where the RPS of a fixed ensemble-mean error climatology was used as a reference). However, the former approach does not provide a normalized metric (thus retaining sensitivity to unit scale), and neither approach isolates the degree of variability in the ensemble's native dispersion, since both the EPS's accuracy in discerning error variability and the choice of bin size cloud this issue. One possible metric for measuring the degree of variability in the ensemble's native dispersion is to utilize the governing ratios g presented above, but in the context of a skill score, as was done with the correlation coefficient for assessing EPS error-spread reliability. Because g is calculated using only the moments of the ensemble member set, it focuses on the EPS's potential to produce dispersion variability. In terms of the governing ratio skill score SS_g, we have

SS_g = (g_forc - g_ref) / (g_perf - g_ref),   (32)

where g_forc is the EPS governing ratio, g_ref is that of a reference forecast, and g_perf is that of a perfect forecast. Considering only the governing ratio g_1 of (26), taking g_ref = 1 (i.e. no dispersion variability) and g_perf = 0 (i.e. extremely large dispersion variability), and simplifying, we then have

SS_g = 1 - g_forc = (⟨σ_ψ²⟩_Σ - ⟨σ_ψ⟩_Σ²) / ⟨σ_ψ²⟩_Σ = var(σ_ψ) / (⟨σ_ψ⟩_Σ² + var(σ_ψ)),   (33)

where ⟨·⟩_Σ denotes an average over the verification data set and var(σ_ψ) represents the variance of the ensemble member standard deviation over that set. SS_g can be viewed as a normalized, or relative, measure of how much variability there is in the ensemble's day-to-day dispersion as compared to the mean, or average, amount of this dispersion.
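As an illustrative sketch (not this paper's own code), both skill scores can be computed directly from an archive of ensemble forecasts and verifying observations; the function and variable names below are hypothetical, and the perfect-EPS correlation r_perf must be supplied externally, e.g. from the normally-distributed form of (24):

```python
import numpy as np

def spread_error_skill_scores(ensembles, observations, r_perf, r_ref=0.0):
    """Sketch of SS_r (Eqs. 30-31) and SS_g (Eq. 33).

    ensembles    : (n_forecasts, n_members) array of ensemble member values
    observations : (n_forecasts,) array of verifying observations
    r_perf       : spread-error correlation of a perfect EPS (e.g. from Eq. 24)
    """
    sigma = ensembles.std(axis=1, ddof=1)                  # ensemble spread sigma_psi
    error = np.abs(ensembles.mean(axis=1) - observations)  # abs error of mean, eps_mu
    r_forc = np.corrcoef(sigma, error)[0, 1]               # EPS spread-error correlation
    ss_r = (r_forc - r_ref) / (r_perf - r_ref)             # Eq. (30); reduces to Eq. (31) if r_ref = 0
    # Governing-ratio skill score: spread variance relative to squared mean spread
    ss_g = np.var(sigma) / (np.mean(sigma) ** 2 + np.var(sigma))  # Eq. (33)
    return ss_r, ss_g
```

With r_ref = 0 the first return value is simply r_forc / r_perf, and SS_g = 0 whenever the ensemble spread never varies over the verification set.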

5. EPS examples

In this section we show two examples of EPS forecasts to highlight some of the points made above. The first example EPS produces ensembles from a mixture of WRF and MM5 mesoscale models, using a variety of different initial conditions, outer boundary conditions, and physics packages (Liu et al. 2007), post-processed with a quantile regression approach (Hopson et al. 2010) to produce a calibrated 30-member ensemble, although in this paper we use a 19-member subset. The ensemble generates gridded temperature forecasts over the Dugway Proving Grounds of the Army Test and Evaluation Command (ATEC) outside Salt Lake City, Utah. Figure 3 shows time series and rank histograms of this EPS's out-of-sample verification set. Panel 3a shows a subset time series of 3-hr lead-time sorted ensembles (colored lines) downscaled to a meteorological station (black line) over the ATEC range, while 3b shows the out-of-sample post-processed results. Panels 3c and 3d show rank histograms of the same forecasts, respectively, with the red dashed lines showing 95% confidence bounds on the histograms (for which we could expect approximately one bin to lie outside of these bounds for a perfectly-calibrated 19-member ensemble). From the rank histograms we see significant under-dispersion (U-shaped histograms) in the pre-processed forecasts, but near-perfect calibration in the post-processed ensemble member set. Panels 3e-3h show results for 36-hr forecasts, with similar conclusions concerning the under-dispersion of the pre-processed and near-perfect dispersion of the post-processed forecasts, respectively, as for the 3-hr forecasts.

Figure 4 shows our second example of EPS forecasts: ensemble streamflow forecasts (colored lines) for the Brahmaputra River at the Bahadurabad gauging station within Bangladesh of the Climate Forecast Applications in Bangladesh
(CFAB) project for the years documented in Hopson and Webster (2010), along with observed streamflow from the Bangladesh Flood Forecasting and Warning Centre (FFWC; black line). Panels a) and e) show time series of sorted 51-member multi-model forecasts of river flow at 1- and 10-day lead-times, respectively. These forecasts were generated using ensemble weather forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF) 51-member Ensemble Prediction System (EPS; Molteni et al. 1996), near-real-time satellite-derived precipitation products from the NASA Tropical Rainfall Measuring Mission (TRMM; Huffman et al. 2005, 2007) and the NOAA CPC morphing technique (CMORPH; Joyce et al. 2004), a GTS-NOAA rain gauge product (Xie et al. 1996), and near-real-time river flow estimates from the FFWC. Panels b) and f) show the respective post-processed results of these forecasts, for which a k-nearest-neighbor (KNN) analogue approach was used. Panels c) and d) show the respective pre- and post-processed rank histograms and 95% confidence bounds (for which we could expect approximately three bins to lie outside of these bounds for a perfectly-calibrated 51-member ensemble) for the 1-day lead-time forecasts, and panels g) and h) show the same but for the 10-day forecasts. As with our first example, the rank histograms show significant under-dispersion (U-shaped histograms) in the pre-processed forecasts, but near-perfect calibration in the post-processed ensemble member set.

Utilizing the CFAB EPS 10-day lead-time streamflow forecasts post-processed with the KNN algorithm, we examine the concepts discussed in section 3. Figure 5 presents scatter plots of ensemble error versus spread using the metric pairings shown in Tables 4 and 5. The black dots are the actual error-spread data.
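The rank histograms and flat-histogram confidence bounds of the kind used here can be sketched as follows; this is an illustrative reconstruction, not the study's code, and the binomial normal-approximation bound is one common choice of confidence interval:

```python
import numpy as np

def rank_histogram(ensembles, observations):
    """Rank histogram with approximate 95% confidence bounds.

    For an m-member ensemble the observation can take m+1 possible ranks;
    a well-calibrated EPS yields a flat histogram with an expected count of
    n/(m+1) per bin.  The bounds are binomial (normal-approximation) limits
    about that flat expectation.
    """
    n, m = ensembles.shape
    # rank of each observation among its ensemble members
    ranks = (ensembles < observations[:, None]).sum(axis=1)
    counts = np.bincount(ranks, minlength=m + 1)
    p = 1.0 / (m + 1)
    half_width = 1.96 * np.sqrt(n * p * (1.0 - p))
    return counts, (n * p - half_width, n * p + half_width)
```

Under these bounds, roughly one of the 20 bins of a perfectly-calibrated 19-member ensemble, or three of the 52 bins of a 51-member ensemble, would be expected to fall outside the 95% limits, as noted above.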
The blue dots are calculated by treating the CFAB forecasts as if they were derived from an EPS perfect forecast, which is practically
done here by randomly choosing, for each day, one member to represent the verification from the set of 51 ensemble forecast members plus the observation, with the remaining 51 unchosen members treated as the ensemble forecast. Linear fits to both the actual and perfect-model data sets are included (black and blue lines, respectively). In the upper right corner of each panel are the following correlation values for the error-spread data: "ensemble r", derived from the actual forecast metrics (black dots); "perf. model r", derived from the EPS perfect forecast metrics (blue dots); "perf. gaussian r", derived from the actual forecasts' moments but using the theoretical form for normally-distributed EPS perfect forecast ensemble members (column 2, Table 5); and "theor. up. lim.", the theoretical maximum value the correlation can attain for normally-distributed ensembles (column 4, Table 5). In Figure 5 notice the positive slope of both the actual and perfect-model data in each panel: as the spread increases, the error is also more likely to be larger. But notice as well that even for large spread values of either the perfect-model (blue dots) or actual forecast data (black dots), the error can be very small; as such, the correlation is not (and cannot be) perfect (i.e. 1.0), as shown by the "ensemble r" and "perf. model r" values ranging over [0.21, 0.29] and [0.22, 0.27], respectively. The similarity of the actual and perfect-model ranges also shows that the KNN post-processing algorithm appears to have produced well-calibrated ensembles with respect to the error-spread relationship. Also notice that the "perf. gaussian r" values are quite close to the "perf. model r" values, showing that the normally-distributed ensemble member assumption is a good approximation for this data set, and thus could provide a much simpler theoretical r value to calculate (column 2, Table 5) than the resampling method used to generate "perf. model r" discussed above.
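The perfect-model resampling just described can be sketched as follows; this is a hypothetical implementation (function and variable names are my own) of the stated procedure: pool the m members with the observation, draw one of the m+1 values to act as the pseudo-verification, and keep the remaining m values as the pseudo-ensemble:

```python
import numpy as np

def perfect_model_resample(ensembles, observations, seed=None):
    """For each forecast, randomly select one value from the pooled set of
    m members plus the observation to act as the verification, and treat
    the remaining m values as the ensemble forecast."""
    rng = np.random.default_rng(seed)
    n, m = ensembles.shape
    pooled = np.concatenate([ensembles, observations[:, None]], axis=1)  # (n, m+1)
    pseudo_ens = np.empty_like(ensembles)
    pseudo_obs = np.empty(n)
    for i in range(n):
        k = rng.integers(m + 1)                  # index of the pseudo-verification
        pseudo_obs[i] = pooled[i, k]
        pseudo_ens[i] = np.delete(pooled[i], k)  # the other m values
    return pseudo_ens, pseudo_obs
```

Computing the spread-error correlation on the returned pair gives a "perf. model r"; repeating the resampling many times would also yield a sampling distribution for it.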
But also note
that the actual and perfect-model values are well below the theoretical maximum values they could attain, of √(2/π) ≈ 0.80 (panels a-d) and 1/√3 ≈ 0.58 (panels e-f), respectively, showing that the data's governing ratios (column 3, Table 5) are not at their minimum. Finally, and non-intuitively, notice the almost identical values of all the respective actual forecast correlations, even though the theoretical maximum value of panels a-d is very different from that of panels e-f.

6. Conclusions

There clearly is a need to verify the value of the 2nd moment of ensemble forecasts: if, for a particular forecast, the forecast ensemble spread is large or small, does this mean the forecast skill is diminished or increased, respectively? This paper has argued that the Pearson correlation coefficient r of forecast spread and error is not a good verification measure to directly test this relationship between ensemble spread and skill, since it depends on factors other than just forecast model performance. The important point here is that the forecast model's correlation coefficient can take on a wide range of values even for a perfectly calibrated model. What this correlation is could depend on an inherent property of the EPS (such as its resolution), but it could also depend on the variety of states available to the physical system being modeled, completely irrespective of the forecast model's performance. Given this latter dependence, we argue that the spread-skill correlation is not an adequate verification gauge of how well a variation in ensemble spread forecasts a change in forecast certainty.

These ideas were examined in the context of ensemble temperature forecasts for Utah and streamflow forecasts for the Brahmaputra River. It was shown that even for a perfect model, r depends on how one defines forecast spread and forecast skill (error); and
in Tables 4 and 5 of the previous section we also showed how the spread-error correlation r for a variety of different measures of spread and error depends on higher moments of the distribution of the ensemble spreads, which themselves should depend on the stability properties of the modeled system during the period over which the forecasts are verified (among other factors). In particular, we showed that under certain conditions the correlation depends on how much the forecast spread varies from forecast to forecast compared with its mean value, through the ratio

⟨s⟩² / ⟨s²⟩ = ⟨s⟩² / [⟨s⟩² + var(s)],   (34)

where s is some measure of forecast ensemble spread, ⟨s⟩ its mean value, and var(s) = ⟨(s - ⟨s⟩)²⟩ its variance. As this ratio approaches zero, the skill-spread correlation asymptotes to its upper value of √(2/π) or 1/√3, depending on how the skill and spread measures are defined. These theoretical results validate and generalize some of the previous numerical and theoretical findings of Barker (1991) and Houtekamer (1993), in particular (see section 2).

Because r is strongly dependent on factors other than just the skill of the forecast system, we argue that r is an unreliable verification measure of whether changes in forecast skill can be associated with changes in ensemble forecast spread. To meet the clear need for a measure that can objectively test the usefulness of the variability of the forecast ensemble spread, we propose in the second part of this paper three alternatives to the skill-spread correlation. In particular, if there is no usefulness in this 2nd moment of an ensemble forecast, then one might lose little benefit (and possibly gain) by using hindcasts to calculate a much less expensive invariant climatological error distribution (Leith 1974; Atger 1999), or by fitting a simple heteroscedastic error model (i.e. error variance
that depends on the magnitude of the variable) to use in conjunction with the ensemble-mean or control-member forecast, instead of using the full suite of forecast ensembles themselves.

References

Atger, F., 1999: The skill of ensemble prediction systems. Mon. Wea. Rev., 127.

Barker, T. W., 1991: The relationship between spread and forecast error in extended-range forecasts. J. Climate, 4.

Buizza, R., 1997: Potential forecast skill of ensemble prediction and spread and skill distributions of the ECMWF ensemble prediction system. Mon. Wea. Rev., 125.

Gneiting, T., and A. E. Raftery, 2007: Strictly proper scoring rules, prediction, and estimation. J. Amer. Stat. Assoc., 102(477).

Grimit, E. P., and C. F. Mass, 2007: Measuring the ensemble spread-error relationship with a probabilistic approach: Stochastic ensemble results. Mon. Wea. Rev., 135.

Hopson, T. M., and P. J. Webster, 2010: Operational flood forecasting for Bangladesh using ECMWF ensemble weather forecasts. J. Hydrometeor., 11.

Hopson, T., J. Hacker, Y. Liu, G. Roux, W. Wu, J. Knievel, T. Warner, S. Swerdlin, J. Pace, and S. Halvorson, 2010: Quantile regression as a means of calibrating and verifying a mesoscale NWP ensemble. Prob. Fcst. Symp., American Meteorological Society, Atlanta, GA, January 2010.
Houtekamer, P. L., 1993: Global and local skill forecasts. Mon. Wea. Rev., 121.

Houtekamer, P. L., L. Lefaivre, J. Derome, H. Ritchie, and H. L. Mitchell, 1996: A system simulation approach to ensemble prediction. Mon. Wea. Rev., 124.

Huffman, G. J., R. F. Adler, S. Curtis, D. T. Bolvin, and E. J. Nelkin, 2005: Global rainfall analyses at monthly and 3-hr time scales. Measuring Precipitation from Space: EURAINSAT and the Future, V. Levizzani, P. Bauer, and J. F. Turk, Eds., Springer, 722 pp.

Huffman, G. J., R. F. Adler, D. T. Bolvin, G. Gu, E. J. Nelkin, K. P. Bowman, Y. Hong, E. F. Stocker, and D. B. Wolff, 2007: The TRMM Multisatellite Precipitation Analysis (TMPA): Quasi-global, multiyear, combined-sensor precipitation estimates at fine scales. J. Hydrometeor., 8.

Joyce, R. J., J. E. Janowiak, P. A. Arkin, and P. P. Xie, 2004: CMORPH: A method that produces global precipitation estimates from passive microwave and infrared data at high spatial and temporal resolution. J. Hydrometeor., 5.

Kolczynski, W. C., D. R. Stauffer, S. E. Haupt, N. S. Altman, and A. Deng, 2011: Investigation of ensemble variance as a measure of true forecast variance. Mon. Wea. Rev., 139.

Kruizinga, S., and C. J. Kok, 1988: Evaluation of the ECMWF experimental skill prediction scheme and a statistical analysis of forecast errors. Proc. ECMWF Workshop on Predictability in the Medium and Extended Range, Reading, United Kingdom, ECMWF.
Leith, C. E., 1974: Theoretical skill of Monte Carlo forecasts. Mon. Wea. Rev., 102.

Liu, Y., M. Xu, J. Hacker, T. Warner, and S. Swerdlin, 2007: A WRF- and MM5-based 4-D mesoscale ensemble data analysis and prediction system (E-RTFDDA) developed for ATEC operational applications. 18th Conf. on Numerical Weather Prediction, Amer. Meteor. Soc., June 25-29, Park City, Utah.

Molteni, F., R. Buizza, T. N. Palmer, and T. Petroliagis, 1996: The ECMWF Ensemble Prediction System: Methodology and validation. Q. J. R. Meteorol. Soc., 122.

Murphy, A. H., 1973: Hedging and skill scores for probability forecasts. J. Appl. Meteor., 12.

Palmer, T. N., 2002: The economic value of ensemble forecasts as a tool for risk assessment: From days to decades. Q. J. R. Meteorol. Soc., 128.

Richardson, D. S., 2000: Skill and relative economic value of the ECMWF ensemble prediction system. Q. J. R. Meteorol. Soc., 126.

Scherrer, S. C., C. Appenzeller, P. Eckert, and D. Cattani, 2004: Analysis of the spread-skill relations using the ECMWF ensemble prediction system over Europe. Wea. Forecasting, 19(3).

Toth, Z., and E. Kalnay, 1997: Ensemble forecasting at NCEP and the breeding method. Mon. Wea. Rev., 125.

Toth, Z., O. Talagrand, G. Candille, and Y. Zhu, 2003: Probability and ensemble forecasts. Chapter 7 of Forecast Verification: A Practitioner's Guide in Atmospheric Science. John Wiley and Sons, 254 pp.
Wang, X., and C. H. Bishop, 2003: A comparison of breeding and ensemble transform Kalman filter ensemble forecast schemes. J. Atmos. Sci., 60.

Whitaker, J. S., and A. F. Loughe, 1998: The relationship between ensemble spread and ensemble mean skill. Mon. Wea. Rev., 126.

Wilks, D. S., 1995: Statistical Methods in the Atmospheric Sciences. Academic Press, 467 pp.

Xie, P. P., B. Rudolf, U. Schneider, and P. A. Arkin, 1996: Gauge-based monthly analysis of global land precipitation from 1971 to. J. Geophys. Res.-Atmos., 101(D14).

Zhu, Y., Z. Toth, R. Wobus, D. Richardson, and K. Mylne, 2002: The economic value of ensemble-based weather forecasts. Bull. Amer. Meteor. Soc., 83.


More information

A study on the spread/error relationship of the COSMO-LEPS ensemble

A study on the spread/error relationship of the COSMO-LEPS ensemble 4 Predictability and Ensemble Methods 110 A study on the spread/error relationship of the COSMO-LEPS ensemble M. Salmi, C. Marsigli, A. Montani, T. Paccagnella ARPA-SIMC, HydroMeteoClimate Service of Emilia-Romagna,

More information

Ensemble forecasting: Error bars and beyond. Jim Hansen, NRL Walter Sessions, NRL Jeff Reid,NRL May, 2011

Ensemble forecasting: Error bars and beyond. Jim Hansen, NRL Walter Sessions, NRL Jeff Reid,NRL May, 2011 Ensemble forecasting: Error bars and beyond Jim Hansen, NRL Walter Sessions, NRL Jeff Reid,NRL May, 2011 1 Why ensembles Traditional justification Predict expected error (Perhaps) more valuable justification

More information

VERFICATION OF OCEAN WAVE ENSEMBLE FORECAST AT NCEP 1. Degui Cao, H.S. Chen and Hendrik Tolman

VERFICATION OF OCEAN WAVE ENSEMBLE FORECAST AT NCEP 1. Degui Cao, H.S. Chen and Hendrik Tolman VERFICATION OF OCEAN WAVE ENSEMBLE FORECAST AT NCEP Degui Cao, H.S. Chen and Hendrik Tolman NOAA /National Centers for Environmental Prediction Environmental Modeling Center Marine Modeling and Analysis

More information

Upgrade of JMA s Typhoon Ensemble Prediction System

Upgrade of JMA s Typhoon Ensemble Prediction System Upgrade of JMA s Typhoon Ensemble Prediction System Masayuki Kyouda Numerical Prediction Division, Japan Meteorological Agency and Masakazu Higaki Office of Marine Prediction, Japan Meteorological Agency

More information

Application and verification of ECMWF products 2009

Application and verification of ECMWF products 2009 Application and verification of ECMWF products 2009 Danish Meteorological Institute Author: Søren E. Olufsen, Deputy Director of Forecasting Services Department and Erik Hansen, forecaster M.Sc. 1. Summary

More information

Downscaling in Time. Andrew W. Robertson, IRI. Advanced Training Institute on Climate Variability and Food Security, 12 July 2002

Downscaling in Time. Andrew W. Robertson, IRI. Advanced Training Institute on Climate Variability and Food Security, 12 July 2002 Downscaling in Time Andrew W. Robertson, IRI Advanced Training Institute on Climate Variability and Food Security, 12 July 2002 Preliminaries Crop yields are driven by daily weather variations! Current

More information

Towards Operational Probabilistic Precipitation Forecast

Towards Operational Probabilistic Precipitation Forecast 5 Working Group on Verification and Case Studies 56 Towards Operational Probabilistic Precipitation Forecast Marco Turco, Massimo Milelli ARPA Piemonte, Via Pio VII 9, I-10135 Torino, Italy 1 Aim of the

More information

Application and verification of ECMWF products 2010

Application and verification of ECMWF products 2010 Application and verification of ECMWF products Hydrological and meteorological service of Croatia (DHMZ) Lovro Kalin. Summary of major highlights At DHMZ, ECMWF products are regarded as the major source

More information

Computationally Efficient Dynamical Downscaling with an Analog Ensemble

Computationally Efficient Dynamical Downscaling with an Analog Ensemble ENERGY Computationally Efficient Dynamical Downscaling with an Analog Ensemble Application to Wind Resource Assessment Daran L. Rife 02 June 2015 Luca Delle Monache (NCAR); Jessica Ma and Rich Whiting

More information

The Coupled Model Predictability of the Western North Pacific Summer Monsoon with Different Leading Times

The Coupled Model Predictability of the Western North Pacific Summer Monsoon with Different Leading Times ATMOSPHERIC AND OCEANIC SCIENCE LETTERS, 2012, VOL. 5, NO. 3, 219 224 The Coupled Model Predictability of the Western North Pacific Summer Monsoon with Different Leading Times LU Ri-Yu 1, LI Chao-Fan 1,

More information

J11.5 HYDROLOGIC APPLICATIONS OF SHORT AND MEDIUM RANGE ENSEMBLE FORECASTS IN THE NWS ADVANCED HYDROLOGIC PREDICTION SERVICES (AHPS)

J11.5 HYDROLOGIC APPLICATIONS OF SHORT AND MEDIUM RANGE ENSEMBLE FORECASTS IN THE NWS ADVANCED HYDROLOGIC PREDICTION SERVICES (AHPS) J11.5 HYDROLOGIC APPLICATIONS OF SHORT AND MEDIUM RANGE ENSEMBLE FORECASTS IN THE NWS ADVANCED HYDROLOGIC PREDICTION SERVICES (AHPS) Mary Mullusky*, Julie Demargne, Edwin Welles, Limin Wu and John Schaake

More information

Monthly forecast and the Summer 2003 heat wave over Europe: a case study

Monthly forecast and the Summer 2003 heat wave over Europe: a case study ATMOSPHERIC SCIENCE LETTERS Atmos. Sci. Let. 6: 112 117 (2005) Published online 21 April 2005 in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/asl.99 Monthly forecast and the Summer 2003

More information

Multivariate Correlations: Applying a Dynamic Constraint and Variable Localization in an Ensemble Context

Multivariate Correlations: Applying a Dynamic Constraint and Variable Localization in an Ensemble Context Multivariate Correlations: Applying a Dynamic Constraint and Variable Localization in an Ensemble Context Catherine Thomas 1,2,3, Kayo Ide 1 Additional thanks to Daryl Kleist, Eugenia Kalnay, Takemasa

More information

Forecasting wave height probabilities with numerical weather prediction models

Forecasting wave height probabilities with numerical weather prediction models Ocean Engineering 32 (2005) 1841 1863 www.elsevier.com/locate/oceaneng Forecasting wave height probabilities with numerical weather prediction models Mark S. Roulston a,b, *, Jerome Ellepola c, Jost von

More information

Standardized Anomaly Model Output Statistics Over Complex Terrain.

Standardized Anomaly Model Output Statistics Over Complex Terrain. Standardized Anomaly Model Output Statistics Over Complex Terrain Reto.Stauffer@uibk.ac.at Outline statistical ensemble postprocessing introduction to SAMOS new snow amount forecasts in Tyrol sub-seasonal

More information

COMPOSITE-BASED VERIFICATION OF PRECIPITATION FORECASTS FROM A MESOSCALE MODEL

COMPOSITE-BASED VERIFICATION OF PRECIPITATION FORECASTS FROM A MESOSCALE MODEL J13.5 COMPOSITE-BASED VERIFICATION OF PRECIPITATION FORECASTS FROM A MESOSCALE MODEL Jason E. Nachamkin, Sue Chen, and Jerome M. Schmidt Naval Research Laboratory, Monterey, CA 1. INTRODUCTION Mesoscale

More information

Probabilistic Weather Prediction with an Analog Ensemble

Probabilistic Weather Prediction with an Analog Ensemble 3498 M O N T H L Y W E A T H E R R E V I E W VOLUME 141 Probabilistic Weather Prediction with an Analog Ensemble LUCA DELLE MONACHE National Center for Atmospheric Research, Boulder, Colorado F. ANTHONY

More information

Impact of Stochastic Convection on Ensemble Forecasts of Tropical Cyclone Development

Impact of Stochastic Convection on Ensemble Forecasts of Tropical Cyclone Development 620 M O N T H L Y W E A T H E R R E V I E W VOLUME 139 Impact of Stochastic Convection on Ensemble Forecasts of Tropical Cyclone Development ANDREW SNYDER AND ZHAOXIA PU Department of Atmospheric Sciences,

More information

Five years of limited-area ensemble activities at ARPA-SIM: the COSMO-LEPS system

Five years of limited-area ensemble activities at ARPA-SIM: the COSMO-LEPS system Five years of limited-area ensemble activities at ARPA-SIM: the COSMO-LEPS system Andrea Montani, Chiara Marsigli and Tiziana Paccagnella ARPA-SIM Hydrometeorological service of Emilia-Romagna, Italy 11

More information

TESTING GEOMETRIC BRED VECTORS WITH A MESOSCALE SHORT-RANGE ENSEMBLE PREDICTION SYSTEM OVER THE WESTERN MEDITERRANEAN

TESTING GEOMETRIC BRED VECTORS WITH A MESOSCALE SHORT-RANGE ENSEMBLE PREDICTION SYSTEM OVER THE WESTERN MEDITERRANEAN TESTING GEOMETRIC BRED VECTORS WITH A MESOSCALE SHORT-RANGE ENSEMBLE PREDICTION SYSTEM OVER THE WESTERN MEDITERRANEAN Martín, A. (1, V. Homar (1, L. Fita (1, C. Primo (2, M. A. Rodríguez (2 and J. M. Gutiérrez

More information

1.2 DEVELOPMENT OF THE NWS PROBABILISTIC EXTRA-TROPICAL STORM SURGE MODEL AND POST PROCESSING METHODOLOGY

1.2 DEVELOPMENT OF THE NWS PROBABILISTIC EXTRA-TROPICAL STORM SURGE MODEL AND POST PROCESSING METHODOLOGY 1.2 DEVELOPMENT OF THE NWS PROBABILISTIC EXTRA-TROPICAL STORM SURGE MODEL AND POST PROCESSING METHODOLOGY Huiqing Liu 1 and Arthur Taylor 2* 1. Ace Info Solutions, Reston, VA 2. NOAA / NWS / Science and

More information

Prediction of Snow Water Equivalent in the Snake River Basin

Prediction of Snow Water Equivalent in the Snake River Basin Hobbs et al. Seasonal Forecasting 1 Jon Hobbs Steve Guimond Nate Snook Meteorology 455 Seasonal Forecasting Prediction of Snow Water Equivalent in the Snake River Basin Abstract Mountainous regions of

More information

P 1.86 A COMPARISON OF THE HYBRID ENSEMBLE TRANSFORM KALMAN FILTER (ETKF)- 3DVAR AND THE PURE ENSEMBLE SQUARE ROOT FILTER (EnSRF) ANALYSIS SCHEMES

P 1.86 A COMPARISON OF THE HYBRID ENSEMBLE TRANSFORM KALMAN FILTER (ETKF)- 3DVAR AND THE PURE ENSEMBLE SQUARE ROOT FILTER (EnSRF) ANALYSIS SCHEMES P 1.86 A COMPARISON OF THE HYBRID ENSEMBLE TRANSFORM KALMAN FILTER (ETKF)- 3DVAR AND THE PURE ENSEMBLE SQUARE ROOT FILTER (EnSRF) ANALYSIS SCHEMES Xuguang Wang*, Thomas M. Hamill, Jeffrey S. Whitaker NOAA/CIRES

More information

Surface Hydrology Research Group Università degli Studi di Cagliari

Surface Hydrology Research Group Università degli Studi di Cagliari Surface Hydrology Research Group Università degli Studi di Cagliari Evaluation of Input Uncertainty in Nested Flood Forecasts: Coupling a Multifractal Precipitation Downscaling Model and a Fully-Distributed

More information

AN ENSEMBLE STRATEGY FOR ROAD WEATHER APPLICATIONS

AN ENSEMBLE STRATEGY FOR ROAD WEATHER APPLICATIONS 11.8 AN ENSEMBLE STRATEGY FOR ROAD WEATHER APPLICATIONS Paul Schultz 1 NOAA Research - Forecast Systems Laboratory Boulder, Colorado 1. INTRODUCTION In 1999 the Federal Highways Administration (FHWA) initiated

More information

Radar data assimilation using a modular programming approach with the Ensemble Kalman Filter: preliminary results

Radar data assimilation using a modular programming approach with the Ensemble Kalman Filter: preliminary results Radar data assimilation using a modular programming approach with the Ensemble Kalman Filter: preliminary results I. Maiello 1, L. Delle Monache 2, G. Romine 2, E. Picciotti 3, F.S. Marzano 4, R. Ferretti

More information

Accounting for the effect of observation errors on verification of MOGREPS

Accounting for the effect of observation errors on verification of MOGREPS METEOROLOGICAL APPLICATIONS Meteorol. Appl. 15: 199 205 (2008) Published online in Wiley InterScience (www.interscience.wiley.com).64 Accounting for the effect of observation errors on verification of

More information

A COMPARISON OF VERY SHORT-TERM QPF S FOR SUMMER CONVECTION OVER COMPLEX TERRAIN AREAS, WITH THE NCAR/ATEC WRF AND MM5-BASED RTFDDA SYSTEMS

A COMPARISON OF VERY SHORT-TERM QPF S FOR SUMMER CONVECTION OVER COMPLEX TERRAIN AREAS, WITH THE NCAR/ATEC WRF AND MM5-BASED RTFDDA SYSTEMS A COMPARISON OF VERY SHORT-TERM QPF S FOR SUMMER CONVECTION OVER COMPLEX TERRAIN AREAS, WITH THE NCAR/ATEC WRF AND MM5-BASED RTFDDA SYSTEMS Wei Yu, Yubao Liu, Tom Warner, Randy Bullock, Barbara Brown and

More information

Overview of the TAMSAT drought forecasting system

Overview of the TAMSAT drought forecasting system Overview of the TAMSAT drought forecasting system The TAMSAT drought forecasting system produces probabilistic forecasts of drought by combining information on the contemporaneous condition of the land

More information

Statistical post-processing of probabilistic wind speed forecasting in Hungary

Statistical post-processing of probabilistic wind speed forecasting in Hungary Meteorologische Zeitschrift, Vol. 22, No. 3, 1 (August 13) Ó by Gebrüder Borntraeger 13 Article Statistical post-processing of probabilistic wind speed forecasting in Hungary Sándor Baran 1,*, András Horányi

More information

EMC Probabilistic Forecast Verification for Sub-season Scales

EMC Probabilistic Forecast Verification for Sub-season Scales EMC Probabilistic Forecast Verification for Sub-season Scales Yuejian Zhu Environmental Modeling Center NCEP/NWS/NOAA Acknowledgement: Wei Li, Hong Guan and Eric Sinsky Present for the DTC Test Plan and

More information

The Structure of Background-error Covariance in a Four-dimensional Variational Data Assimilation System: Single-point Experiment

The Structure of Background-error Covariance in a Four-dimensional Variational Data Assimilation System: Single-point Experiment ADVANCES IN ATMOSPHERIC SCIENCES, VOL. 27, NO. 6, 2010, 1303 1310 The Structure of Background-error Covariance in a Four-dimensional Variational Data Assimilation System: Single-point Experiment LIU Juanjuan

More information

Verification of ECMWF products at the Deutscher Wetterdienst (DWD)

Verification of ECMWF products at the Deutscher Wetterdienst (DWD) Verification of ECMWF products at the Deutscher Wetterdienst (DWD) DWD Martin Göber 1. Summary of major highlights The usage of a combined GME-MOS and ECMWF-MOS continues to lead to a further increase

More information

Developing Operational MME Forecasts for Subseasonal Timescales

Developing Operational MME Forecasts for Subseasonal Timescales Developing Operational MME Forecasts for Subseasonal Timescales Dan C. Collins NOAA Climate Prediction Center (CPC) Acknowledgements: Stephen Baxter and Augustin Vintzileos (CPC and UMD) 1 Outline I. Operational

More information

LATE REQUEST FOR A SPECIAL PROJECT

LATE REQUEST FOR A SPECIAL PROJECT LATE REQUEST FOR A SPECIAL PROJECT 2016 2018 MEMBER STATE: Italy Principal Investigator 1 : Affiliation: Address: E-mail: Other researchers: Project Title: Valerio Capecchi LaMMA Consortium - Environmental

More information

Drought forecasting methods Blaz Kurnik DESERT Action JRC

Drought forecasting methods Blaz Kurnik DESERT Action JRC Ljubljana on 24 September 2009 1 st DMCSEE JRC Workshop on Drought Monitoring 1 Drought forecasting methods Blaz Kurnik DESERT Action JRC Motivations for drought forecasting Ljubljana on 24 September 2009

More information

Verification of Probability Forecasts

Verification of Probability Forecasts Verification of Probability Forecasts Beth Ebert Bureau of Meteorology Research Centre (BMRC) Melbourne, Australia 3rd International Verification Methods Workshop, 29 January 2 February 27 Topics Verification

More information

INM/AEMET Short Range Ensemble Prediction System: Tropical Storm Delta

INM/AEMET Short Range Ensemble Prediction System: Tropical Storm Delta INM/AEMET Short Range Ensemble Prediction System: Tropical Storm Delta DANIEL SANTOS-MUÑOZ, ALFONS CALLADO, PAU SECRIBA, JOSE A. GARCIA-MOYA, CARLOS SANTOS AND JUAN SIMARRO. Predictability Group Workshop

More information

Ensemble-based Data Assimilation of TRMM/GPM Precipitation Measurements

Ensemble-based Data Assimilation of TRMM/GPM Precipitation Measurements January 16, 2014, JAXA Joint PI Workshop, Tokyo Ensemble-based Data Assimilation of TRMM/GPM Precipitation Measurements PI: Takemasa Miyoshi RIKEN Advanced Institute for Computational Science Takemasa.Miyoshi@riken.jp

More information

AMPS Update June 2016

AMPS Update June 2016 AMPS Update June 2016 Kevin W. Manning Jordan G. Powers Mesoscale and Microscale Meteorology Laboratory National Center for Atmospheric Research Boulder, CO 11 th Antarctic Meteorological Observation,

More information

Multimodel Ensemble forecasts

Multimodel Ensemble forecasts Multimodel Ensemble forecasts Calibrated methods Michael K. Tippett International Research Institute for Climate and Society The Earth Institute, Columbia University ERFS Climate Predictability Tool Training

More information

Application of a medium range global hydrologic probabilistic forecast scheme to the Ohio River. Basin

Application of a medium range global hydrologic probabilistic forecast scheme to the Ohio River. Basin Application of a medium range global hydrologic probabilistic forecast scheme to the Ohio River Basin Nathalie Voisin 1, Florian Pappenberger 2, Dennis P. Lettenmaier 1,4, Roberto Buizza 2, John C. Schaake

More information

Use of medium-range ensembles at the Met Office I: PREVIN a system for the production of probabilistic forecast information from the ECMWF EPS

Use of medium-range ensembles at the Met Office I: PREVIN a system for the production of probabilistic forecast information from the ECMWF EPS Meteorol. Appl. 9, 255 271 (2002) DOI:10.1017/S1350482702003018 Use of medium-range ensembles at the Met Office I: PREVIN a system for the production of probabilistic forecast information from the ECMWF

More information

VERIFICATION OF HIGH RESOLUTION WRF-RTFDDA SURFACE FORECASTS OVER MOUNTAINS AND PLAINS

VERIFICATION OF HIGH RESOLUTION WRF-RTFDDA SURFACE FORECASTS OVER MOUNTAINS AND PLAINS VERIFICATION OF HIGH RESOLUTION WRF-RTFDDA SURFACE FORECASTS OVER MOUNTAINS AND PLAINS Gregory Roux, Yubao Liu, Luca Delle Monache, Rong-Shyang Sheu and Thomas T. Warner NCAR/Research Application Laboratory,

More information

Calibrating surface temperature forecasts using BMA method over Iran

Calibrating surface temperature forecasts using BMA method over Iran 2011 2nd International Conference on Environmental Science and Technology IPCBEE vol.6 (2011) (2011) IACSIT Press, Singapore Calibrating surface temperature forecasts using BMA method over Iran Iman Soltanzadeh

More information

Predictability from a Forecast Provider s Perspective

Predictability from a Forecast Provider s Perspective Predictability from a Forecast Provider s Perspective Ken Mylne Met Office, Bracknell RG12 2SZ, UK. email: ken.mylne@metoffice.com 1. Introduction Predictability is not a new issue for forecasters or forecast

More information

J5.8 ESTIMATES OF BOUNDARY LAYER PROFILES BY MEANS OF ENSEMBLE-FILTER ASSIMILATION OF NEAR SURFACE OBSERVATIONS IN A PARAMETERIZED PBL

J5.8 ESTIMATES OF BOUNDARY LAYER PROFILES BY MEANS OF ENSEMBLE-FILTER ASSIMILATION OF NEAR SURFACE OBSERVATIONS IN A PARAMETERIZED PBL J5.8 ESTIMATES OF BOUNDARY LAYER PROFILES BY MEANS OF ENSEMBLE-FILTER ASSIMILATION OF NEAR SURFACE OBSERVATIONS IN A PARAMETERIZED PBL Dorita Rostkier-Edelstein 1 and Joshua P. Hacker The National Center

More information

Strategy for Using CPC Precipitation and Temperature Forecasts to Create Ensemble Forcing for NWS Ensemble Streamflow Prediction (ESP)

Strategy for Using CPC Precipitation and Temperature Forecasts to Create Ensemble Forcing for NWS Ensemble Streamflow Prediction (ESP) Strategy for Using CPC Precipitation and Temperature Forecasts to Create Ensemble Forcing for NWS Ensemble Streamflow Prediction (ESP) John Schaake (Acknowlements: D.J. Seo, Limin Wu, Julie Demargne, Rob

More information

Skill prediction of local weather forecasts based on the ECMWF ensemble

Skill prediction of local weather forecasts based on the ECMWF ensemble Skill prediction of local weather forecasts based on the ECMWF ensemble C. Ziehmann To cite this version: C. Ziehmann. Skill prediction of local weather forecasts based on the ECMWF ensemble. Nonlinear

More information

A Hybrid Ensemble Kalman Filter 3D Variational Analysis Scheme

A Hybrid Ensemble Kalman Filter 3D Variational Analysis Scheme 2905 A Hybrid Ensemble Kalman Filter 3D Variational Analysis Scheme THOMAS M. HAMILL AND CHRIS SNYDER National Center for Atmospheric Research,* Boulder, Colorado (Manuscript received 15 October 1999,

More information

Estimation of Forecat uncertainty with graphical products. Karyne Viard, Christian Viel, François Vinit, Jacques Richon, Nicole Girardot

Estimation of Forecat uncertainty with graphical products. Karyne Viard, Christian Viel, François Vinit, Jacques Richon, Nicole Girardot Estimation of Forecat uncertainty with graphical products Karyne Viard, Christian Viel, François Vinit, Jacques Richon, Nicole Girardot Using ECMWF Forecasts 8-10 june 2015 Outline Introduction Basic graphical

More information

Mesoscale Predictability of Terrain Induced Flows

Mesoscale Predictability of Terrain Induced Flows Mesoscale Predictability of Terrain Induced Flows Dale R. Durran University of Washington Dept. of Atmospheric Sciences Box 3516 Seattle, WA 98195 phone: (206) 543-74 fax: (206) 543-0308 email: durrand@atmos.washington.edu

More information

Verification of intense precipitation forecasts from single models and ensemble prediction systems

Verification of intense precipitation forecasts from single models and ensemble prediction systems Verification of intense precipitation forecasts from single models and ensemble prediction systems F Atger To cite this version: F Atger Verification of intense precipitation forecasts from single models

More information

Observations and Modeling of SST Influence on Surface Winds

Observations and Modeling of SST Influence on Surface Winds Observations and Modeling of SST Influence on Surface Winds Dudley B. Chelton and Qingtao Song College of Oceanic and Atmospheric Sciences Oregon State University, Corvallis, OR 97331-5503 chelton@coas.oregonstate.edu,

More information