Verification Marion Mittermaier


MOSAC, November 2012, PAPER 17.6
Verification
Marion Mittermaier
Crown Copyright

1. Introduction

Whilst it has been relatively easy to show an increase in skill when comparing forecasts from models with horizontal resolutions coarser than 30 km to those finer than 20 km, it has been somewhat more difficult to do so when improving further from resolutions coarser than 10 km to so-called convection-permitting grid resolutions of 4 km or less (e.g. Mass et al. 2002). This has been especially true for precipitation, where the forecast detail is realistic but inaccurate. Lorenz (1969) pointed out that output from km-scale models may not be accurate at the grid scale. This inaccuracy introduces the so-called double-penalty effect: the traditional approach to forecast verification, the precise matching of forecasts to point observations, means that even small displacement errors in features are penalized, and closeness is not rewarded. This has led to a multitude of new verification methods being developed to assess precipitation spatially. Ebert (2008) made the first attempt at categorizing these new methods. One group of methods has been referred to as neighbourhood methods, where either a single observation (SO) or an observation neighbourhood (NO) is compared to a forecast neighbourhood (NF). All these new methods were primarily developed to produce objective verification results that showed the benefit of convection-permitting model forecasts, because the traditional methods and metrics failed to match subjective assessment results. Many of these new methods rely on a gridded truth, e.g. radar accumulations, which are not without issues. This will be discussed further in Section 2. For many parameters the main source of truth is still single-site synoptic observations, where representativeness remains a factor. A model grid box now represents a very small area, and for some variables, such as cloud, the model output often resembles a binary response rather than a range of values.
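The double-penalty effect and the neighbourhood idea can be illustrated with a minimal sketch (Python with NumPy; the field, threshold and window size are invented for illustration). A forecast feature displaced by one grid box scores a miss and a false alarm under exact matching, but gains partial credit from a forecast neighbourhood:

```python
import numpy as np

def neighbourhood_fraction(field, i, j, half_width, threshold):
    """Fraction of grid boxes exceeding `threshold` in the square
    (2*half_width + 1)^2 neighbourhood centred on box (i, j)."""
    rows = slice(max(i - half_width, 0), i + half_width + 1)
    cols = slice(max(j - half_width, 0), j + half_width + 1)
    return float(np.mean(field[rows, cols] > threshold))

# A rain feature forecast one box east of where it was observed:
fcst = np.zeros((5, 5))
fcst[2, 3] = 10.0                 # forecast rain
obs_point = (2, 2)                # observed rain here

# Exact grid-point matching counts a miss at (2, 2) AND a false alarm
# at (2, 3) -- the double penalty:
print(bool(fcst[obs_point] > 1.0))                  # False (a miss)
# A 3x3 forecast neighbourhood gives the near-miss partial credit:
print(neighbourhood_fraction(fcst, 2, 2, 1, 1.0))   # 1/9
```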
This was demonstrated by Mittermaier (2012) in an assessment of synoptic cloud observations and forecasts from four different MetUM configurations. Our forecasting approach is becoming increasingly probabilistic and focused on the early detection of changes of weather type and of significant events that lead to downstream impacts. Methods which can help determine whether forecasts can discriminate and detect a significant signal, and how far in advance, are therefore becoming increasingly important. Some thoughts are offered in Section 3. Finally, a short round-up of other relevant research highlights is given in Section 4.

2. Benefit of km-scale NWP

Routine verification of deterministic Numerical Weather Prediction (NWP) forecasts from the convection-permitting 4 km (UK4) and near-convection-resolving 1.5 km (UKV) configurations of the Met Office Unified Model (MetUM) has typically shown that it is hard to consistently prove that the higher-resolution, near-convection-resolving model is more skilful. The UK index has long been considered the benchmark by which we measure model performance. Six surface parameters are considered: 2 m temperature, 10 m wind speed, total cloud amount (TCA), cloud base height (CBH), visibility and hourly precipitation. The time series of the 6 individual UK index components for the UK4 vs the UKV is shown in Fig. 1. A positive (negative) bar indicates the UKV is better (worse). For temperature the trends are seasonal, with indications that the raw UKV temperatures are better in the colder months. For wind the score is uniformly negative, largely because the UKV has much more detailed structure in its wind fields, which is often interpreted as noise by a metric based on the root-mean-square error, which rewards smoothness. Until recently cloud was considerably worse in the UKV, but model upgrades have essentially turned this skill deficit around such that the differences are now marginal. The signal for CBH is also not conclusive.
For visibility there are hints that the UKV is better in the cold season, whereas for hourly precipitation the UKV forecasts verified at point locations suffered severely from a combination of representativeness and double-penalty issues.

Figure 1: History of the 6 individual UK index components comparing the UK4 and UKV.

Precipitation at least can (and should) also be assessed against a gridded truth such as radar accumulations. The realism of the spatial distribution of high-resolution precipitation forecasts is one of the key benefits when assessing forecasts subjectively. At the Met Office the spatial assessment of precipitation (using the Fractions Skill Score) has been part of our routine verification suite for some time. Mittermaier et al. (2011) summarised a 5-year assessment between the soon-to-be retired 12 km NAE and the UK4. Unfortunately the long-term monitoring of precipitation forecast skill against radar accumulations is not straightforward, as changes in the forecast bias are not the only influence on verification metrics. The radar baseline is not as stable in time as that provided by gauges. Radar hardware changes (through component failure or upgrade), gauge calibrations and other artefacts can result in considerable fluctuations. Disentangling the reasons for changes in the verification signal therefore becomes virtually impossible at times, and not all change is attributable to the model alone. Nevertheless, the comparison against radar is extremely valuable when comparing model configurations against each other in a relative sense, as they are all compared against the same observation set. The comparisons have therefore been expanded to cross-compare the UK4, UKV and the 4 km global downscaler, referred to as the UK4 extended or UK4X. Results are shown in Fig. 2. These schematics show a range of thresholds for 6-h precipitation totals for 5 accumulation windows, for validity times (VT) ranging from t+3h to t+36h. See the caption for further details. So subjectively we can determine that the UKV is better than the UK4. Objectively we can also show that the UKV precipitation forecasts are better than those of other models using a spatial verification method and a gridded truth (radar). What about the other parameters? We typically do not have gridded independent data sets to compare against, so standard synoptic observations are still the only viable means for routine assessment.

Figure 2: The coloured tables show the percentage difference of verification times at which the higher-resolution model had a better score than the lower-resolution model. That is, they show [the percentage of VTs the higher-resolution model scored better] minus [the percentage of VTs the lower-resolution model scored better]. If the lower-resolution model has more VTs with a better score, the cell is shaded red.
If the higher-resolution model has more VTs with a better score, the cell is green. Grey cells are either equally matched or not statistically significant. More strongly shaded cells are more statistically significant; bright green implies results significant at least at the 5% level.

Mittermaier (2012) describes a proposed framework for verifying a variety of surface parameters from km-scale NWP against surface observations by using forecast neighbourhoods centred on the observing site. The framework is inherently probabilistic in that it uses the Brier Score and the Ranked and Continuous Ranked Probability Scores (see Appendix), together with a range of forecast neighbourhood sizes emulating coarser MetUM configurations. The method enables a 3-way comparison of: the impact of a neighbourhood over the single nearest grid point or direct model output (DMO), quantifying the relative skill gained through using a forecast neighbourhood (double-penalty effect and/or representativeness); the relative skill between different model configurations for equivalent neighbourhoods; and the skill relative to a reference forecast, e.g. persistence. This framework represents just one way in which the representativeness and double-penalty effects can potentially be estimated. In the paper the framework is applied to consider whether the 1.5 km MetUM configuration is more skilful than the 4 km configuration. Currently the other scenarios of importance for the operational utilisation of such a framework are being assessed.

These are: a. Can the framework assist in the decision-making for model upgrades at the same horizontal resolution (the classic test vs. control scenario)? and b. Can the framework successfully compare deterministic forecasts with ensembles? It is hoped that in proving and then adopting this strategic approach, comparisons between deterministic and ensemble prediction systems (EPS) will be easy to do, enabling the optimization of post-processing to ensure optimal skill of forecast products through identifying the neighbourhood sizes that optimise skill. Incorporated in this process is statistical significance testing of the score differences, to add credibility to the results and assist in decision-making. The benefits of this strategy include:

o Providing a different perspective of deterministic forecast skill through using probabilistic metrics, treating the near-convection-resolving deterministic model forecasts probabilistically through the use of spatial fractions (probabilities).
o Providing a readiness for comparison to convective-scale ensembles, potentially providing useful guidance on the pre- and post-processing of individual ensemble members to spatial probabilities to maximize ensemble skill.
o Counteracting the effect of the spatial and temporal representativeness errors of both the single-site observations and the forecasts on verification metrics.

To illustrate one part of the new framework, Fig. 3 shows the cumulative skill benefit (deficit) of the UKV over the UK4 for December. Here DMO and equivalent neighbourhoods from the UKV and UK4 are compared. These graphs summarise all hourly output from the 36 h forecast. The 'S' denotes that the score differences are statistically significant at the 5% level. An interesting result is that for 10 m wind the UKV DMO is more skilful than the UK4 DMO when the RPS is used. This is in contrast to the skill deficit seen in Fig. 1. This result suggests that the UKV DMO places the wind speed in the correct category more frequently.
When comparing neighbourhoods of the same size, the UKV is still more skilful, but with a reduced margin as the neighbourhood size increases. From the results we conclude that:

o Any neighbourhood has, for the most part, a positive impact.
o The general behaviour of the scores as a function of neighbourhood size can provide clues as to whether the lack of skill seen at the grid scale is due to the double penalty or representativeness mismatches, or whether it is simply due to a model's inability to forecast certain event thresholds, e.g. low cloud bases, visibility, cloud amounts. Using neighbourhoods is generally beneficial for more extreme thresholds.
o Using a neighbourhood can mitigate against the double-penalty effect and representativeness mismatches by introducing or enhancing forecast skill, but a basic level of skill must be present to do so.
o A poor, relatively unskilful forecast cannot be made to appear more skilful very easily.
o It can be shown that a near-convection-resolving model with a neighbourhood is more skilful than a convection-permitting model single grid point or neighbourhood.
o The benefit is parameter dependent; e.g. for this forecast sample the temperature and wind results suggest ~15% can be ascribed to these effects. For the non-continuous parameters such as TCA a considerably larger fraction is possible.
o Failure to show skill relative to another (coarser) model using a neighbourhood would seem to suggest a more fundamental model deficiency. Clearly the UKV TCA and CBH forecasts shown here are inherently less skilful, and the difference is not simply due to the double-penalty effect.
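The site-centred part of the framework can be sketched as follows. This is a minimal illustration, not the operational implementation: the synthetic fields, event frequency and one-box displacement model are invented, and the neighbourhood fraction is used directly as a pseudo-probability in the Brier Score (Eq. 1 of the Appendix). It shows the DMO vs neighbourhood comparison in a pure double-penalty situation, where every observed event is forecast but displaced:

```python
import numpy as np

def brier(probs, outcomes):
    """Brier score: mean squared error in probability space (Eq. 1)."""
    p = np.asarray(probs, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    return float(np.mean((p - o) ** 2))

def site_probs(fields, i, j, half_width, threshold):
    """Pseudo-probability at an observing site: the fraction of grid boxes
    exceeding `threshold` in the square neighbourhood centred on the site's
    grid box (i, j).  half_width = 0 reduces to the nearest grid point (DMO)."""
    s = (slice(i - half_width, i + half_width + 1),
         slice(j - half_width, j + half_width + 1))
    return [float(np.mean(f[s] > threshold)) for f in fields]

# Synthetic sample: every observed event is forecast, but displaced by
# exactly one grid box from the site at (5, 5).
rng = np.random.default_rng(1)
offsets = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
obs, fields = [], []
for _ in range(500):
    event = rng.random() < 0.3
    f = np.zeros((11, 11))
    if event:
        di, dj = offsets[rng.integers(len(offsets))]
        f[5 + di, 5 + dj] = 2.0
    obs.append(event)
    fields.append(f)

bs_dmo = brier(site_probs(fields, 5, 5, 0, 1.0), obs)  # every event missed at the grid point
bs_nbh = brier(site_probs(fields, 5, 5, 1, 1.0), obs)  # 3x3 neighbourhood recovers partial credit
print(bs_dmo, bs_nbh)
```

The neighbourhood does not make the forecast perfect; it merely converts total misses into small but non-zero probabilities, which is exactly the "introducing or enhancing forecast skill" behaviour noted above.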

Figure 3: Cumulative skill benefit (deficit) for the UKV over the UK4 for 03Z forecast initialisations in December. Results show comparisons of DMO vs DMO and of equivalent-sized neighbourhoods.

Some important development areas remain, for example an assessment of the bias. We also want to know whether all parameters need a neighbourhood as default, and whether it is sufficient to use the same neighbourhood size for all parameters. Perhaps the neighbourhood sizes should increase with forecast lead time to represent the growth of forecast error. All these issues will be actively addressed in the next year.
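For the gridded spatial assessment discussed earlier in this section, the Fractions Skill Score of Roberts and Lean (2008) compares fractional threshold exceedances in equally sized forecast and observed neighbourhoods. A minimal sketch (the random test fields and the 3-box displacement are invented for illustration; the definition FSS = 1 - MSE/MSE_ref follows the published formulation):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(fcst, obs, threshold, n):
    """Fractions Skill Score for one field pair: compare fractions of
    threshold exceedances computed in n x n neighbourhoods."""
    bf = (fcst >= threshold).astype(float)
    bo = (obs >= threshold).astype(float)
    pf = uniform_filter(bf, size=n, mode="constant")
    po = uniform_filter(bo, size=n, mode="constant")
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

rng = np.random.default_rng(0)
obs = rng.gamma(0.5, 2.0, size=(100, 100))
fcst = np.roll(obs, 3, axis=1)      # a "perfect" field, displaced 3 boxes
print(fss(fcst, obs, 4.0, 1))       # grid scale: heavily penalised
print(fss(fcst, obs, 4.0, 9))       # 9x9 neighbourhood: much higher
```

The displaced-but-realistic forecast scores poorly at the grid scale and well once the neighbourhood exceeds the displacement, which is the property that makes the FSS suitable for monitoring km-scale precipitation against radar.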

3. Demonstrating model improvement across timescales

At the Met Office we are fortunate that the Unified Model is a truly seamless nested modelling system which is run from the (sub-)km short-range Numerical Weather Prediction (NWP) scale to the climate scale, where the resolution is O(100 km). In maintaining and developing a common model code across all these space and time scales, a strategy is needed to ensure, for example, that changes which improve climate simulations do not detrimentally affect forecast skill at the short-range NWP scale. How can one assess the relative skill whilst still factoring in that short-range NWP and decadal forecasts answer fundamentally different questions? It is as yet unclear whether we can assess the same parameters (in terms of their definition) at increasing temporal scales, or the same parameters in the same way (i.e. using the same method for all time scales). One attribute we want to assess is whether our model can reliably predict events of interest, bearing in mind that the nature of the forecast changes as we move from short- to long-range, and from deterministic to probabilistic. The design of a seamless verification approach must incorporate this change. The method should always be aligned to the question that one wishes to answer. Therefore the method must be able to adapt as the forecast definition changes, yet remain capable of assessing forecasts in a relative sense. One of the key attributes that forecasts of all types require is the ability to discriminate. Mason and Weigel (2009) and Weigel and Mason (2011) put forward a Generalized Discrimination Score (GDS) which might provide a relatively simple candidate score to fulfil this requirement. The score can be applied to dichotomous (yes-no), polychotomous (multi-category) and continuous forecasts, as well as to discrete and continuous probabilistic forecasts. This is a score we hope to explore more fully in the coming year or two.
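For deterministic continuous forecasts the GDS reduces to a two-alternative forced-choice test: the probability that a randomly chosen pair of cases with different observed outcomes is ranked in the correct order by the forecasts. A sketch of that special case only (the synthetic forecasts are invented; see Mason and Weigel 2009 for the general formulation across forecast types):

```python
import numpy as np

def gds_continuous(fcst, obs):
    """Two-alternative forced-choice discrimination for deterministic
    continuous forecasts: the proportion of case pairs with different
    outcomes that the forecasts rank correctly (ties score one half).
    0.5 = no discrimination, 1.0 = perfect, < 0.5 = perverse."""
    fcst, obs = np.asarray(fcst), np.asarray(obs)
    hits, pairs = 0.0, 0
    n = len(obs)
    for i in range(n):
        for j in range(i + 1, n):
            if obs[i] == obs[j]:
                continue  # pairs with identical outcomes are uninformative
            pairs += 1
            d = (fcst[i] - fcst[j]) * (obs[i] - obs[j])
            hits += 1.0 if d > 0 else 0.5 if d == 0 else 0.0
    return hits / pairs if pairs else np.nan

rng = np.random.default_rng(2)
obs = rng.normal(size=200)
good = obs + 0.5 * rng.normal(size=200)   # forecast correlated with outcome
noise = rng.normal(size=200)              # unrelated forecast
print(gds_continuous(good, obs))          # well above 0.5
print(gds_continuous(noise, obs))         # near 0.5
```

Because the score depends only on rankings, it is insensitive to bias and calibration and so isolates the discrimination attribute, which is what makes it attractive across time scales.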
4. Other research highlights

In addressing the question of comparing forecasts at different time scales, one of the key questions is how skilful longer-lead forecasts (which are potentially of much coarser resolution) are with respect to surface weather elements (not just circulation patterns). As a first step to bridging this gap an experimental global surface index has been formulated (J. Maksymczuk, pers. comm.) using the traditional UK index framework, measuring raw model skill. Forecasts are verified at 00 and 12Z using the following lead times: t+6, 12, 18, 24, 30, 36, 42, 48, 60, 72, 96, 120 and 144 h. For the UK version the UK index site list is used (which is QC-assured); globally we are more vulnerable to poor observations. Parameters in this index include temperature, 10 m vector wind, cloud cover, cloud base height and 24 h precipitation (same thresholds as in the current UK index for ETS scores). Figure 4 tracks the monthly and 12-month mean indices for the UK and globally. Tracking the changes in surface parameters for core model development will be important as NWP and longer time scales (including coupled atmosphere and ocean) converge, to ensure that forecasts of surface parameters are not detrimentally impacted at weather time scales. Other key highlights are:

o Novel application of spatial methods (MODE, Davis et al. 2006; FSS, Roberts and Lean 2008; SAL, Wernli et al. 2008) to satellite cloud masks/analyses and cloud cover forecasts to understand spatial biases in cloud cover (Crocker and Mittermaier 2012, Mittermaier and Bullock 2012).
o Implementing and testing the Stable Equitable Error in Probability Space (SEEPS; Rodwell et al. 2010; an ECMWF headline measure) score for 6-h precipitation in regional models as part of the EUMETNET SRNWP-V programme (North et al. 2012).

Figure 4: Global surface indices for UK and global site lists. Both monthly and 12-month running means are shown. The time series starts in April.

Appendix: Definition of scores

The Brier Score (BS; Brier 1950) is by far the most common scalar accuracy measure for two-category (event/no-event) probabilistic forecasts. By definition the BS is analogous to the mean-squared error (MSE), but where the forecast is a probability and the observation is either 0 or 1, corresponding to not observed or observed. It represents an error in probability space and is a measure of accuracy. It follows that the smaller the error, the smaller the BS (Eq. 1), the more accurate the forecast.

BS = \frac{1}{N} \sum_{i=1}^{N} (p_i - o_i)^2     (1)

The BS can also be calculated for a reference forecast, often the long-term climatological frequency of an event, which represents a constant reference forecast. Other reference forecasts can be used. This enables the calculation of a Brier Skill Score, as described in Eq. 2.

BSS = \frac{BS_{ref} - BS}{BS_{ref}}     (2)

For parameters like wind and temperature there is a clear desire to verify the distribution instead of focusing on a few thresholds, in part due to the large seasonal variations of these parameters. The Ranked Probability Score (RPS; see e.g. Epstein 1969) is defined as:

RPS = \frac{1}{K-1} \sum_{k=1}^{K} \left( CDF_{fcst,k} - CDF_{obs,k} \right)^2     (3)

The RPS is a multi-category extension of the BS over K categories. It is a squared-error score with respect to the bin into which the observed event falls. This makes it sensitive to distance, and Murphy (1970) points out that it is particularly suited to ordered variables, where being more than one category off would be significant from a forecaster's perspective, and this should be reflected in the score. The key difference is that cumulative probabilities CDF_{fcst,k} and CDF_{obs,k} are used to compute the score. The empirical distribution of the forecast values is determined through the application of a range of thresholds that should encapsulate all forecast values. Like the BS, the perfect score is 0, with the range being between 0 and 1. It measures the sum of the squared differences in cumulative probability space for a multi-category probability forecast and tends to penalize forecasts more severely when probabilities are further from the actual outcome.

The Continuous Ranked Probability Score (CRPS; e.g. Matheson and Winkler 1976), averaged over N cases, is written as:

CRPS = \frac{1}{N} \sum_{i=1}^{N} \int_{-\infty}^{\infty} \left[ P_{fcst,i}(x) - P_{obs,i}(x) \right]^2 \, dx     (4)

This determines the differences between the forecast (P_{fcst}) and observed (P_{obs}, a Heaviside function) cumulative probability distributions. A perfect forecast has a CRPS of 0. Skill scores can be calculated for both the RPS and CRPS by defining a reference forecast and using the general skill score formula, analogous to Eq. 2.
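The scores in Eqs. 1, 3 and 4 can be sketched numerically as follows. The category probabilities and the Gaussian forecast distribution are invented for illustration, and the CRPS integral is approximated by a simple Riemann sum on a uniform grid:

```python
import numpy as np
from scipy.stats import norm

def brier_score(p, o):
    """Eq. 1: mean squared difference between forecast probabilities
    and binary (0/1) outcomes."""
    return float(np.mean((np.asarray(p, float) - np.asarray(o, float)) ** 2))

def rps(fcst_probs, obs_cat):
    """Eq. 3: squared differences of forecast and observed cumulative
    probabilities over K ordered categories, normalised by K - 1."""
    cdf_f = np.cumsum(fcst_probs)
    cdf_o = (np.arange(len(fcst_probs)) >= obs_cat).astype(float)
    return float(np.sum((cdf_f - cdf_o) ** 2) / (len(fcst_probs) - 1))

def crps(fcst_cdf, obs_value, x):
    """Eq. 4 for a single case: squared difference between the forecast
    CDF and the observed Heaviside CDF, integrated over a uniform grid x."""
    heaviside = (x >= obs_value).astype(float)
    dx = x[1] - x[0]
    return float(np.sum((fcst_cdf - heaviside) ** 2) * dx)

print(brier_score([0.9, 0.1, 0.8], [1, 0, 1]))   # ~0.02

# Three-category wind forecast (calm / moderate / strong), "moderate"
# (category index 1) observed.  Distant misplaced mass is penalised more:
print(rps([0.2, 0.7, 0.1], 1))   # ~0.025
print(rps([0.1, 0.2, 0.7], 1))   # ~0.25

# CRPS of a Gaussian forecast CDF, N(10, 2^2), for an observation of 12:
x = np.linspace(0.0, 25.0, 2001)
print(crps(norm.cdf(x, loc=10.0, scale=2.0), 12.0, x))
```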

References

Brier G. 1950: Verification of forecasts expressed in terms of probability. Mon. Wea. Rev., 78, 1-3.
Crocker R. and M. Mittermaier 2012: Exploratory use of a satellite cloud mask to verify NWP models. Submitted to Meteorol. Appl.
Davis C.A., B.G. Brown and R.G. Bullock 2006: Object-based verification of precipitation forecasts. Part I: Methodology and application to mesoscale rain areas. Mon. Wea. Rev., 134.
Ebert E. 2008: Fuzzy verification of high-resolution gridded forecasts: A review and proposed framework. Meteorol. Appl., 15.
Epstein E. 1969: A scoring system for probability forecasts of ranked probabilities. J. Appl. Meteor., 8.
Lorenz E. 1969: Atmospheric predictability as revealed by naturally occurring analogues. J. Atmos. Sci., 26.
Mason S.J. and A.P. Weigel 2009: A generic forecast verification framework for administrative purposes. Mon. Wea. Rev., 137.
Mass C., D. Ovens, K. Westrick and B. Colle 2002: Does increasing horizontal resolution produce more skillful forecasts? The results of two years of real-time numerical weather prediction over the Pacific Northwest. Bull. Amer. Meteorol. Soc., 83(3).
Matheson J. and R. Winkler 1976: Scoring rules for continuous probability distributions. Manage. Sci., 22.
Mittermaier M.P., N. Roberts and S.A. Thompson 2011: A long-term assessment of precipitation forecast skill using the Fractions Skill Score. Meteorol. Appl., DOI /met.296.
Mittermaier M.P. 2012: A strategy for verifying near-convection-resolving forecasts at observing sites. Submitted to Wea. Forecasting.
Mittermaier M. 2012: A critical assessment of surface cloud observations and their use for verifying cloud forecasts. In press, QJRMS.
Mittermaier M.P. and R. Bullock 2012: Using MODE to explore the spatial and temporal characteristics of cloud cover forecasts from high-resolution NWP models. Submitted to Meteorol. Appl.
Murphy A. 1970: The ranked probability score and the probability score: a comparison. Mon. Wea. Rev., 98.
North R., M. Trueman, M. Mittermaier and M. Rodwell 2012: An assessment of the SEEPS and SEDI metrics for the verification of 6 h forecast precipitation accumulations. Submitted to Meteorol. Appl.
Roberts N.M. and H.W. Lean 2008: Scale-selective verification of rainfall accumulations from high-resolution forecasts of convective events. Mon. Wea. Rev., 136.
Rodwell M.J., D.S. Richardson, T.D. Hewson and T. Haiden 2010: A new equitable score suitable for verifying precipitation in numerical weather prediction. Quart. J. Roy. Meteor. Soc., 136.
Wernli H., M. Paulat, M. Hagen and C. Frei 2008: SAL - A novel quality measure for the verification of quantitative precipitation forecasts. Mon. Wea. Rev., 136.


More information

Assessing high resolution forecasts using fuzzy verification methods

Assessing high resolution forecasts using fuzzy verification methods Assessing high resolution forecasts using fuzzy verification methods Beth Ebert Bureau of Meteorology Research Centre, Melbourne, Australia Thanks to Nigel Roberts, Barbara Casati, Frederic Atger, Felix

More information

Assessing the spatial and temporal variation in the skill of precipitation forecasts from an NWP model

Assessing the spatial and temporal variation in the skill of precipitation forecasts from an NWP model METEOROLOGICAL APPLICATIONS Meteorol. Appl. 15: 163 169 (2008) Published online in Wiley InterScience (www.interscience.wiley.com).57 Assessing the spatial and temporal variation in the skill of precipitation

More information

Application and verification of the ECMWF products Report 2007

Application and verification of the ECMWF products Report 2007 Application and verification of the ECMWF products Report 2007 National Meteorological Administration Romania 1. Summary of major highlights The medium range forecast activity within the National Meteorological

More information

Nesting and LBCs, Predictability and EPS

Nesting and LBCs, Predictability and EPS Nesting and LBCs, Predictability and EPS Terry Davies, Dynamics Research, Met Office Nigel Richards, Neill Bowler, Peter Clark, Caroline Jones, Humphrey Lean, Ken Mylne, Changgui Wang copyright Met Office

More information

Overview of Verification Methods

Overview of Verification Methods Overview of Verification Methods Joint Working Group on Forecast Verification Research (JWGFVR) Greg Smith on behalf of Barbara Casati, ECCC Existing Verification Techniques Traditional (point-by-point)

More information

Verification of ECMWF products at the Finnish Meteorological Institute

Verification of ECMWF products at the Finnish Meteorological Institute Verification of ECMWF products at the Finnish Meteorological Institute by Juha Kilpinen, Pertti Nurmi, Petra Roiha and Martti Heikinheimo 1. Summary of major highlights A new verification system became

More information

Feature-specific verification of ensemble forecasts

Feature-specific verification of ensemble forecasts Feature-specific verification of ensemble forecasts www.cawcr.gov.au Beth Ebert CAWCR Weather & Environmental Prediction Group Uncertainty information in forecasting For high impact events, forecasters

More information

Scatterometer Wind Assimilation at the Met Office

Scatterometer Wind Assimilation at the Met Office Scatterometer Wind Assimilation at the Met Office James Cotton International Ocean Vector Winds Science Team (IOVWST) meeting, Brest, June 2014 Outline Assimilation status Global updates: Metop-B and spatial

More information

Precipitation verification. Thanks to CMC, CPTEC, DWD, ECMWF, JMA, MF, NCEP, NRL, RHMC, UKMO

Precipitation verification. Thanks to CMC, CPTEC, DWD, ECMWF, JMA, MF, NCEP, NRL, RHMC, UKMO Precipitation verification Thanks to CMC, CPTEC, DWD, ECMWF, JMA, MF, NCEP, NRL, RHMC, UKMO Outline 1) Status of WGNE QPF intercomparisons 2) Overview of the use of recommended methods for the verification

More information

Application and verification of ECMWF products 2013

Application and verification of ECMWF products 2013 Application and verification of EMWF products 2013 Hellenic National Meteorological Service (HNMS) Flora Gofa and Theodora Tzeferi 1. Summary of major highlights In order to determine the quality of the

More information

Proper Scores for Probability Forecasts Can Never Be Equitable

Proper Scores for Probability Forecasts Can Never Be Equitable APRIL 2008 J O L LIFFE AND STEPHENSON 1505 Proper Scores for Probability Forecasts Can Never Be Equitable IAN T. JOLLIFFE AND DAVID B. STEPHENSON School of Engineering, Computing, and Mathematics, University

More information

LATE REQUEST FOR A SPECIAL PROJECT

LATE REQUEST FOR A SPECIAL PROJECT LATE REQUEST FOR A SPECIAL PROJECT 2016 2018 MEMBER STATE: Italy Principal Investigator 1 : Affiliation: Address: E-mail: Other researchers: Project Title: Valerio Capecchi LaMMA Consortium - Environmental

More information

Verification of ECMWF products at the Deutscher Wetterdienst (DWD)

Verification of ECMWF products at the Deutscher Wetterdienst (DWD) Verification of ECMWF products at the Deutscher Wetterdienst (DWD) DWD Martin Göber 1. Summary of major highlights The usage of a combined GME-MOS and ECMWF-MOS continues to lead to a further increase

More information

Enhancing Weather Information with Probability Forecasts. An Information Statement of the American Meteorological Society

Enhancing Weather Information with Probability Forecasts. An Information Statement of the American Meteorological Society Enhancing Weather Information with Probability Forecasts An Information Statement of the American Meteorological Society (Adopted by AMS Council on 12 May 2008) Bull. Amer. Meteor. Soc., 89 Summary This

More information

Exploring ensemble forecast calibration issues using reforecast data sets

Exploring ensemble forecast calibration issues using reforecast data sets NOAA Earth System Research Laboratory Exploring ensemble forecast calibration issues using reforecast data sets Tom Hamill and Jeff Whitaker NOAA Earth System Research Lab, Boulder, CO tom.hamill@noaa.gov

More information

Severe storm forecast guidance based on explicit identification of convective phenomena in WRF-model forecasts

Severe storm forecast guidance based on explicit identification of convective phenomena in WRF-model forecasts Severe storm forecast guidance based on explicit identification of convective phenomena in WRF-model forecasts Ryan Sobash 10 March 2010 M.S. Thesis Defense 1 Motivation When the SPC first started issuing

More information

Convective-scale NWP for Singapore

Convective-scale NWP for Singapore Convective-scale NWP for Singapore Hans Huang and the weather modelling and prediction section MSS, Singapore Dale Barker and the SINGV team Met Office, Exeter, UK ECMWF Symposium on Dynamical Meteorology

More information

Peter P. Neilley. And. Kurt A. Hanson. Weather Services International, Inc. 400 Minuteman Road Andover, MA 01810

Peter P. Neilley. And. Kurt A. Hanson. Weather Services International, Inc. 400 Minuteman Road Andover, MA 01810 6.4 ARE MODEL OUTPUT STATISTICS STILL NEEDED? Peter P. Neilley And Kurt A. Hanson Weather Services International, Inc. 400 Minuteman Road Andover, MA 01810 1. Introduction. Model Output Statistics (MOS)

More information

Application and verification of ECMWF products in Austria

Application and verification of ECMWF products in Austria Application and verification of ECMWF products in Austria Central Institute for Meteorology and Geodynamics (ZAMG), Vienna Alexander Kann 1. Summary of major highlights Medium range weather forecasts in

More information

COMPOSITE-BASED VERIFICATION OF PRECIPITATION FORECASTS FROM A MESOSCALE MODEL

COMPOSITE-BASED VERIFICATION OF PRECIPITATION FORECASTS FROM A MESOSCALE MODEL J13.5 COMPOSITE-BASED VERIFICATION OF PRECIPITATION FORECASTS FROM A MESOSCALE MODEL Jason E. Nachamkin, Sue Chen, and Jerome M. Schmidt Naval Research Laboratory, Monterey, CA 1. INTRODUCTION Mesoscale

More information

The forecast skill horizon

The forecast skill horizon The forecast skill horizon Roberto Buizza, Martin Leutbecher, Franco Molteni, Alan Thorpe and Frederic Vitart European Centre for Medium-Range Weather Forecasts WWOSC 2014 (Montreal, Aug 2014) Roberto

More information

Predicting uncertainty in forecasts of weather and climate (Also published as ECMWF Technical Memorandum No. 294)

Predicting uncertainty in forecasts of weather and climate (Also published as ECMWF Technical Memorandum No. 294) Predicting uncertainty in forecasts of weather and climate (Also published as ECMWF Technical Memorandum No. 294) By T.N. Palmer Research Department November 999 Abstract The predictability of weather

More information

Extracting probabilistic severe weather guidance from convection-allowing model forecasts. Ryan Sobash 4 December 2009 Convection/NWP Seminar Series

Extracting probabilistic severe weather guidance from convection-allowing model forecasts. Ryan Sobash 4 December 2009 Convection/NWP Seminar Series Extracting probabilistic severe weather guidance from convection-allowing model forecasts Ryan Sobash 4 December 2009 Convection/NWP Seminar Series Identification of severe convection in high-resolution

More information

Understanding Weather and Climate Risk. Matthew Perry Sharing an Uncertain World Conference The Geological Society, 13 July 2017

Understanding Weather and Climate Risk. Matthew Perry Sharing an Uncertain World Conference The Geological Society, 13 July 2017 Understanding Weather and Climate Risk Matthew Perry Sharing an Uncertain World Conference The Geological Society, 13 July 2017 What is risk in a weather and climate context? Hazard: something with the

More information

Application and verification of ECMWF products in Croatia

Application and verification of ECMWF products in Croatia Application and verification of ECMWF products in Croatia August 2008 1. Summary of major highlights At Croatian Met Service, ECMWF products are the major source of data used in the operational weather

More information

Predictability from a Forecast Provider s Perspective

Predictability from a Forecast Provider s Perspective Predictability from a Forecast Provider s Perspective Ken Mylne Met Office, Bracknell RG12 2SZ, UK. email: ken.mylne@metoffice.com 1. Introduction Predictability is not a new issue for forecasters or forecast

More information

The ECMWF Extended range forecasts

The ECMWF Extended range forecasts The ECMWF Extended range forecasts Laura.Ferranti@ecmwf.int ECMWF, Reading, U.K. Slide 1 TC January 2014 Slide 1 The operational forecasting system l High resolution forecast: twice per day 16 km 91-level,

More information

Convective scheme and resolution impacts on seasonal precipitation forecasts

Convective scheme and resolution impacts on seasonal precipitation forecasts GEOPHYSICAL RESEARCH LETTERS, VOL. 30, NO. 20, 2078, doi:10.1029/2003gl018297, 2003 Convective scheme and resolution impacts on seasonal precipitation forecasts D. W. Shin, T. E. LaRow, and S. Cocke Center

More information

Will it rain? Predictability, risk assessment and the need for ensemble forecasts

Will it rain? Predictability, risk assessment and the need for ensemble forecasts Will it rain? Predictability, risk assessment and the need for ensemble forecasts David Richardson European Centre for Medium-Range Weather Forecasts Shinfield Park, Reading, RG2 9AX, UK Tel. +44 118 949

More information

Application and verification of ECMWF products 2012

Application and verification of ECMWF products 2012 Application and verification of ECMWF products 2012 Met Eireann, Glasnevin Hill, Dublin 9, Ireland. J.Hamilton 1. Summary of major highlights The verification of ECMWF products has continued as in previous

More information

A Scientific Challenge for Copernicus Climate Change Services: EUCPXX. Tim Palmer Oxford

A Scientific Challenge for Copernicus Climate Change Services: EUCPXX. Tim Palmer Oxford A Scientific Challenge for Copernicus Climate Change Services: EUCPXX Tim Palmer Oxford Aspects of my worldline 1. EU Framework Programme PROVOST, DEMETER EUROSIP 2. Committee on Climate Change Adaptation

More information

How ECMWF has addressed requests from the data users

How ECMWF has addressed requests from the data users How ECMWF has addressed requests from the data users David Richardson Head of Evaluation Section, Forecast Department, ECMWF David.richardson@ecmwf.int ECMWF June 14, 2017 Overview Review the efforts made

More information

Assimilation of SEVIRI cloud-top parameters in the Met Office regional forecast model

Assimilation of SEVIRI cloud-top parameters in the Met Office regional forecast model Assimilation of SEVIRI cloud-top parameters in the Met Office regional forecast model Ruth B.E. Taylor, Richard J. Renshaw, Roger W. Saunders & Peter N. Francis Met Office, Exeter, U.K. Abstract A system

More information

Application and verification of ECMWF products 2009

Application and verification of ECMWF products 2009 Application and verification of ECMWF products 2009 Danish Meteorological Institute Author: Søren E. Olufsen, Deputy Director of Forecasting Services Department and Erik Hansen, forecaster M.Sc. 1. Summary

More information

Developments towards multi-model based forecast product generation

Developments towards multi-model based forecast product generation Developments towards multi-model based forecast product generation Ervin Zsótér Methodology and Forecasting Section Hungarian Meteorological Service Introduction to the currently operational forecast production

More information

A simple method for seamless verification applied to precipitation hindcasts from two global models

A simple method for seamless verification applied to precipitation hindcasts from two global models A simple method for seamless verification applied to precipitation hindcasts from two global models Matthew Wheeler 1, Hongyan Zhu 1, Adam Sobel 2, Debra Hudson 1 and Frederic Vitart 3 1 Bureau of Meteorology,

More information

EMC Probabilistic Forecast Verification for Sub-season Scales

EMC Probabilistic Forecast Verification for Sub-season Scales EMC Probabilistic Forecast Verification for Sub-season Scales Yuejian Zhu Environmental Modeling Center NCEP/NWS/NOAA Acknowledgement: Wei Li, Hong Guan and Eric Sinsky Present for the DTC Test Plan and

More information

Linking climate change modelling to impacts studies: recent advances in downscaling techniques for hydrological extremes

Linking climate change modelling to impacts studies: recent advances in downscaling techniques for hydrological extremes Linking climate change modelling to impacts studies: recent advances in downscaling techniques for hydrological extremes Dr Hayley Fowler, Newcastle University, UK CMOS-AMS Congress 2012, Montreal, Canada

More information

Application and verification of ECMWF products in Croatia - July 2007

Application and verification of ECMWF products in Croatia - July 2007 Application and verification of ECMWF products in Croatia - July 2007 By Lovro Kalin, Zoran Vakula and Josip Juras (Hydrological and Meteorological Service) 1. Summary of major highlights At Croatian Met

More information

Heavier summer downpours with climate change revealed by weather forecast resolution model

Heavier summer downpours with climate change revealed by weather forecast resolution model SUPPLEMENTARY INFORMATION DOI: 10.1038/NCLIMATE2258 Heavier summer downpours with climate change revealed by weather forecast resolution model Number of files = 1 File #1 filename: kendon14supp.pdf File

More information

A spatial verification method applied to the evaluation of high-resolution ensemble forecasts

A spatial verification method applied to the evaluation of high-resolution ensemble forecasts METEOROLOGICAL APPLICATIONS Meteorol. Appl. 15: 125 143 (2008) Published online in Wiley InterScience (www.interscience.wiley.com).65 A spatial verification method applied to the evaluation of high-resolution

More information

Sensitivity of COSMO-LEPS forecast skill to the verification network: application to MesoVICT cases Andrea Montani, C. Marsigli, T.

Sensitivity of COSMO-LEPS forecast skill to the verification network: application to MesoVICT cases Andrea Montani, C. Marsigli, T. Sensitivity of COSMO-LEPS forecast skill to the verification network: application to MesoVICT cases Andrea Montani, C. Marsigli, T. Paccagnella Arpae Emilia-Romagna Servizio IdroMeteoClima, Bologna, Italy

More information

1. INTRODUCTION 2. QPF

1. INTRODUCTION 2. QPF 440 24th Weather and Forecasting/20th Numerical Weather Prediction HUMAN IMPROVEMENT TO NUMERICAL WEATHER PREDICTION AT THE HYDROMETEOROLOGICAL PREDICTION CENTER David R. Novak, Chris Bailey, Keith Brill,

More information

Model verification and tools. C. Zingerle ZAMG

Model verification and tools. C. Zingerle ZAMG Model verification and tools C. Zingerle ZAMG Why verify? The three most important reasons to verify forecasts are: to monitor forecast quality - how accurate are the forecasts and are they improving over

More information

Adaptation for global application of calibration and downscaling methods of medium range ensemble weather forecasts

Adaptation for global application of calibration and downscaling methods of medium range ensemble weather forecasts Adaptation for global application of calibration and downscaling methods of medium range ensemble weather forecasts Nathalie Voisin Hydrology Group Seminar UW 11/18/2009 Objective Develop a medium range

More information

The Australian Operational Daily Rain Gauge Analysis

The Australian Operational Daily Rain Gauge Analysis The Australian Operational Daily Rain Gauge Analysis Beth Ebert and Gary Weymouth Bureau of Meteorology Research Centre, Melbourne, Australia e.ebert@bom.gov.au Daily rainfall data and analysis procedure

More information

Application and verification of ECMWF products in Austria

Application and verification of ECMWF products in Austria Application and verification of ECMWF products in Austria Central Institute for Meteorology and Geodynamics (ZAMG), Vienna Alexander Kann, Klaus Stadlbacher 1. Summary of major highlights Medium range

More information

Accounting for the effect of observation errors on verification of MOGREPS

Accounting for the effect of observation errors on verification of MOGREPS METEOROLOGICAL APPLICATIONS Meteorol. Appl. 15: 199 205 (2008) Published online in Wiley InterScience (www.interscience.wiley.com).64 Accounting for the effect of observation errors on verification of

More information

Implementation and Evaluation of a Mesoscale Short-Range Ensemble Forecasting System Over the Pacific Northwest

Implementation and Evaluation of a Mesoscale Short-Range Ensemble Forecasting System Over the Pacific Northwest Implementation and Evaluation of a Mesoscale Short-Range Ensemble Forecasting System Over the Pacific Northwest Eric P. Grimit and Clifford F. Mass Department of Atmospheric Sciences, University of Washington

More information

Implementation and Evaluation of a Mesoscale Short-Range Ensemble Forecasting System Over the Pacific Northwest

Implementation and Evaluation of a Mesoscale Short-Range Ensemble Forecasting System Over the Pacific Northwest Implementation and Evaluation of a Mesoscale Short-Range Ensemble Forecasting System Over the Pacific Northwest Eric P. Grimit and Clifford F. Mass Department of Atmospheric Sciences, University of Washington

More information

Representation of model error in a convective-scale ensemble

Representation of model error in a convective-scale ensemble Representation of model error in a convective-scale ensemble Ross Bannister^*, Stefano Migliorini^*, Laura Baker*, Ali Rudd* ^ National Centre for Earth Observation * DIAMET, Dept of Meteorology, University

More information

TC/PR/RB Lecture 3 - Simulation of Random Model Errors

TC/PR/RB Lecture 3 - Simulation of Random Model Errors TC/PR/RB Lecture 3 - Simulation of Random Model Errors Roberto Buizza (buizza@ecmwf.int) European Centre for Medium-Range Weather Forecasts http://www.ecmwf.int Roberto Buizza (buizza@ecmwf.int) 1 ECMWF

More information

A new mesoscale NWP system for Australia

A new mesoscale NWP system for Australia A new mesoscale NWP system for Australia www.cawcr.gov.au Peter Steinle on behalf of : Earth System Modelling (ESM) and Weather&Environmental Prediction (WEP) Research Programs, CAWCR Data Assimilation

More information

VERFICATION OF OCEAN WAVE ENSEMBLE FORECAST AT NCEP 1. Degui Cao, H.S. Chen and Hendrik Tolman

VERFICATION OF OCEAN WAVE ENSEMBLE FORECAST AT NCEP 1. Degui Cao, H.S. Chen and Hendrik Tolman VERFICATION OF OCEAN WAVE ENSEMBLE FORECAST AT NCEP Degui Cao, H.S. Chen and Hendrik Tolman NOAA /National Centers for Environmental Prediction Environmental Modeling Center Marine Modeling and Analysis

More information

Validation of Forecasts (Forecast Verification) Overview. Ian Jolliffe

Validation of Forecasts (Forecast Verification) Overview. Ian Jolliffe Validation of Forecasts (Forecast Verification) Overview Ian Jolliffe 1 Outline 1. Introduction and history (4) 2. Types of forecast (2) 3. Properties of forecasts (3) verification measures (2) 4. Terminology

More information

The Nowcasting Demonstration Project for London 2012

The Nowcasting Demonstration Project for London 2012 The Nowcasting Demonstration Project for London 2012 Susan Ballard, Zhihong Li, David Simonin, Jean-Francois Caron, Brian Golding, Met Office, UK Introduction The success of convective-scale NWP is largely

More information

Observation requirements for regional reanalysis

Observation requirements for regional reanalysis Observation requirements for regional reanalysis Richard Renshaw 30 June 2015 Why produce a regional reanalysis? Evidence from operational NWP 25km Global vs 12km NAE ...the benefits of resolution global

More information

Application and verification of ECMWF products 2016

Application and verification of ECMWF products 2016 Application and verification of ECMWF products 2016 Met Eireann, Glasnevin Hill, Dublin 9, Ireland. J.Hamilton 1. Summary of major highlights The verification of ECMWF products has continued as in previous

More information

Verification Methods for High Resolution Model Forecasts

Verification Methods for High Resolution Model Forecasts Verification Methods for High Resolution Model Forecasts Barbara Brown (bgb@ucar.edu) NCAR, Boulder, Colorado Collaborators: Randy Bullock, John Halley Gotway, Chris Davis, David Ahijevych, Eric Gilleland,

More information

Model error and seasonal forecasting

Model error and seasonal forecasting Model error and seasonal forecasting Antje Weisheimer European Centre for Medium-Range Weather Forecasts ECMWF, Reading, UK with thanks to Paco Doblas-Reyes and Tim Palmer Model error and model uncertainty

More information

Combining Deterministic and Probabilistic Methods to Produce Gridded Climatologies

Combining Deterministic and Probabilistic Methods to Produce Gridded Climatologies Combining Deterministic and Probabilistic Methods to Produce Gridded Climatologies Michael Squires Alan McNab National Climatic Data Center (NCDC - NOAA) Asheville, NC Abstract There are nearly 8,000 sites

More information

A comparison of ensemble post-processing methods for extreme events

A comparison of ensemble post-processing methods for extreme events QuarterlyJournalof theroyalmeteorologicalsociety Q. J. R. Meteorol. Soc. 140: 1112 1120, April 2014 DOI:10.1002/qj.2198 A comparison of ensemble post-processing methods for extreme events R. M. Williams,*

More information

operational status and developments

operational status and developments COSMO-DE DE-EPSEPS operational status and developments Christoph Gebhardt, Susanne Theis, Zied Ben Bouallègue, Michael Buchhold, Andreas Röpnack, Nina Schuhen Deutscher Wetterdienst, DWD COSMO-DE DE-EPSEPS

More information

Strategy for Using CPC Precipitation and Temperature Forecasts to Create Ensemble Forcing for NWS Ensemble Streamflow Prediction (ESP)

Strategy for Using CPC Precipitation and Temperature Forecasts to Create Ensemble Forcing for NWS Ensemble Streamflow Prediction (ESP) Strategy for Using CPC Precipitation and Temperature Forecasts to Create Ensemble Forcing for NWS Ensemble Streamflow Prediction (ESP) John Schaake (Acknowlements: D.J. Seo, Limin Wu, Julie Demargne, Rob

More information

Application and verification of ECMWF products in Austria

Application and verification of ECMWF products in Austria Application and verification of ECMWF products in Austria Central Institute for Meteorology and Geodynamics (ZAMG), Vienna Alexander Kann 1. Summary of major highlights Medium range weather forecasts in

More information