VERIFICATION OF OCEAN WAVE ENSEMBLE FORECAST AT NCEP (1)

Degui Cao, H.S. Chen and Hendrik Tolman
NOAA/National Centers for Environmental Prediction, Environmental Modeling Center, Marine Modeling and Analysis Branch, Camp Springs, Maryland, USA

1. INTRODUCTION

Ensemble forecasting is a probabilistic technique that quantifies forecast uncertainty and thereby improves forecast skill. The National Centers for Environmental Prediction (NCEP) has developed and operationally implemented the Ensemble Global Ocean Wave Forecast System (EGOWaFS) (Chen, 2006). The EGOWaFS consists of one control member and ten ensemble members, all using NWW3 (Tolman et al., 2002; Tolman, 2003) as the wave model. The wind input is the 10 m wind field of the NOAA/NCEP Global Forecast System (GFS), updated every three hours. The control forecast is the current operational wave forecast, while the ten members are driven by the GFS ensemble wind fields generated with the breeding method (Toth and Kalnay, 1997); the initial wave fields of the ten members are identical to that of the control forecast. The ensemble wave forecasts are run four times per day out to 126 hours. The main outputs of the system, produced every six hours, are the ensemble mean, the spread, spaghetti diagrams and exceedance probabilities at different thresholds; they are posted on the NOAA/NCEP website (2).

Ensemble forecast theory addresses the two main uncertainties present in nonlinear numerical forecasting: prediction uncertainty and forecast uncertainty (Toth and Kalnay, 1997). Prediction uncertainty comes from the deterministic nonlinear system itself. A chaotic system is very sensitive to its initial conditions: very small perturbations of the initial conditions can generate considerably different outputs and lead to large divergence in the model forecast over a finite time, and such initial errors arise easily from data observation and analysis. Forecast uncertainty comes from the limited predictability of the numerical model. Because our understanding of natural nonlinear systems is imperfect, the model cannot represent all of their physical processes, and additional errors come from the numerical methods, the dynamical formulation and the physical parameterizations. Verification of an ensemble forecast system is therefore performed to determine to what extent the system resolves these uncertainties (Jolliffe and Stephenson, 2003).

(1) Marine Modeling and Analysis Branch Contribution Number 26. Corresponding author: Degui Cao, Degui.Cao@noaa.gov
(2) http://polar.ncep.noaa.gov/

After the EGOWaFS was set up, Chen (2006) compared the significant wave height (H_S) and the 10 m wind speed (U_10) from nearly 30 deep-water buoys with the ensemble forecast. The spread increases with forecast hour, and the ensemble forecasts of U_10 and H_S captured all of the observations while the operational forecasts missed many. Chen also evaluated the system for the storms of May through July 2004 and found the ensemble system more realistic than the deterministic one. Those conclusions, however, were drawn from only a few cases within three months, and many more ensemble outputs have been generated since. This study covers the period from June 1, 2006 to March 31, 2007; H_S and U_10 from the ten ensemble members and the control run are used for verification.

There are five sections in this paper. Section 1 is this introduction. Section 2 gives a brief description of the buoy data used in the study. Section 3 describes the conventional verification methods used: the Brier score and Brier skill score, cost-loss analysis and the relative operating characteristic. Section 4 analyzes the results for each verification score. Conclusions are drawn in Section 5.

2. BUOY DATA USED FOR VERIFICATION

Wind and wave data from 67 NOAA/NDBC buoys over the same period, June 1, 2006 to March 31, 2007, are treated as the ground truth for the verification. Fig. 1 shows the global distribution of the buoys. The hourly, quality-controlled buoy data are compared with the ensemble output each hour. The wind and wave climate data are generated from the NDBC datasets from January 1997 to December 31, 2006; they are hourly averages at each buoy location, without smoothing at larger time scales.

3. METHODOLOGY OF VERIFICATION

3.1 Brier score (BS) and Brier skill score (BSS)

The BS measures the mean squared probability error. This quadratic scoring measure for a probabilistic binary forecast is defined as (Jolliffe and Stephenson, 2003)

$$\mathrm{BS} = \frac{1}{N}\sum_{i=1}^{N}(p_i - o_i)^2 \qquad (1)$$

where N is the total number of events, p_i is the forecast probability and o_i is the observed outcome (1 if the event happened, 0 if not). For a probabilistic forecast p_i lies between 0.0 and 1.0, since an ensemble gives an uncertain forecast, unlike a deterministic forecast whose implied probability is 0 or 1. The best BS value is 0, attained by a perfect forecast system. Murphy (1973) decomposed the formula into three terms:

$$\mathrm{BS} = \frac{1}{N}\sum_{m=1}^{M} n_m (p_m - \bar{o}_m)^2 \;-\; \frac{1}{N}\sum_{m=1}^{M} n_m (\bar{o}_m - \bar{o})^2 \;+\; \bar{o}(1 - \bar{o}) \qquad (2)$$

where M is the number of probability classes, n_m is the number of forecasts in class m, p_m is the forecast probability of class m, \bar{o}_m is the observed frequency in class m and \bar{o} is the climatological frequency of the event. The three terms are the reliability, the resolution and the uncertainty. Reliability measures the difference between the forecast and observed probability distributions; its best value is 0. Resolution measures the ability to distinguish the forecast situations from the averaged observed or climate data; since this term is subtracted, larger values are better. Uncertainty measures the variability in the observed data themselves and is always non-negative; it does not depend on the forecasts. The BS is zero when the forecast is a perfect deterministic forecast.

The BSS is defined as

$$\mathrm{BSS} = \frac{\mathrm{BS}_{\mathrm{ref}} - \mathrm{BS}}{\mathrm{BS}_{\mathrm{ref}} - \mathrm{BS}_{\mathrm{perfect}}} = 1 - \frac{\mathrm{BS}}{\mathrm{BS}_{\mathrm{ref}}} \qquad (3)$$

where the reference score BS_ref may be computed from the climate data, the operational run or a different forecast system, and the second equality uses BS_perfect = 0. A positive BSS indicates skill relative to the reference (BS < BS_ref); otherwise there is no skill.
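As a concrete illustration of Eqs. (1)-(3), the sketch below computes the BS, its Murphy decomposition and the BSS. It is our own Python/NumPy illustration, not code from the original study, and names such as brier_score, members_hs and buoy_hs are hypothetical.

```python
import numpy as np

def brier_score(p, o):
    """Eq. (1): mean squared probability error.  p holds forecast
    probabilities in [0, 1], o binary outcomes (1 = event occurred)."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    return np.mean((p - o) ** 2)

def brier_decomposition(p, o, n_classes=11):
    """Murphy (1973) decomposition, Eq. (2): BS = reliability - resolution
    + uncertainty.  Probabilities are grouped into classes centred on
    0.0, 0.1, ..., 1.0, matching the eleven ranges of Section 3.2."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    n = len(p)
    classes = np.rint(p * (n_classes - 1)).astype(int)  # class index 0..10
    o_bar = o.mean()                                    # climatological frequency
    rel = res = 0.0
    for m in range(n_classes):
        in_m = classes == m
        n_m = in_m.sum()
        if n_m == 0:
            continue
        p_m = p[in_m].mean()  # mean forecast probability of class m
        o_m = o[in_m].mean()  # observed frequency in class m
        rel += n_m * (p_m - o_m) ** 2
        res += n_m * (o_m - o_bar) ** 2
    unc = o_bar * (1.0 - o_bar)
    return rel / n, res / n, unc

def brier_skill_score(bs, bs_ref):
    """Eq. (3) with BS_perfect = 0: positive values indicate skill
    relative to the reference (climatology or the operational run)."""
    return 1.0 - bs / bs_ref

# For a ten-member ensemble the forecast probability of exceeding a
# threshold is the fraction of members above it, e.g. (hypothetically):
# p = (members_hs > 2.0).mean(axis=0); o = (buoy_hs > 2.0).astype(int)
```

The per-class observed frequencies o_m computed in the decomposition are exactly the quantities plotted against the forecast probability in the reliability diagrams of Section 3.2.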

3.2 Reliability diagram

The reliability diagram plots the observed frequency against the forecast probability for each probability range; eleven ranges are used in this study (0.0, 0.1, 0.2, ..., 1.0). The diagram shows how well the predicted probabilities correspond to the observed frequencies. The diagonal line indicates perfect reliability. The no-resolution line marks the climatological frequency: at probabilities where the curve lies on it, the ensemble system cannot separate events from climatology (see Figs. 4 and 5). Points falling below the no-skill line contribute no skill to the forecast.

3.3 Cost-loss analysis

3.3.1 Economic value

The so-called decision-analytic model (Jolliffe and Stephenson, 2003) (Table 1) allows users to alter their actions according to the forecast information and their own economic situation.

                          Event: Yes                           Event: No
Action: Yes    Hit (h), mitigated loss (C + L_u)      False alarm (f), cost (C)
Action: No     Miss (m), loss (L = L_p + L_u)         Correct rejection (c), no cost (N)

Table 1. The costs and losses accrued by the use of the wave prediction, depending on the forecast action and the observed event.

If the event does not occur and the user takes no action, there is no cost, N = 0. If the event does not occur but the user takes action, there is a cost C. If the event occurs and the user takes action, there is the cost C plus the unprotected loss L_u. If the event occurs and the user takes no action, there is the unprotected loss L_u plus the protected loss L_p. The relative economic value is defined as

$$V = \frac{E_{\mathrm{climate}} - E_{\mathrm{forecast}}}{E_{\mathrm{climate}} - E_{\mathrm{perfect}}} \qquad (4)$$

where E_climate = \bar{o} L_u + \min(\bar{o} L_p, C) is the expected expense when only climatological information is used, with \bar{o} the climatological frequency of the event computed from the buoy climate data; E_forecast = h(C + L_u) + fC + m(L_p + L_u) is the expected expense of a user of the forecast system; and E_perfect = \bar{o}(C + L_u) is the minimum expense of a user given a perfect forecast system that predicts exactly the occurrence and non-occurrence of the event.
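To show how Eq. (4) produces value curves such as those in Fig. 6, here is a minimal sketch, again our own Python rather than the authors' code. economic_value evaluates Eq. (4) directly; value_curve assumes, as is standard for such plots although the paper does not spell it out, that at each cost-loss ratio the user acts whenever the forecast probability exceeds a threshold, and the threshold giving the highest value is kept. For simplicity it sets L_u = 0, so the ratio reduces to C/L_p and must lie strictly between 0 and 1.

```python
import numpy as np

def economic_value(h, m, f, o_bar, C, Lp, Lu=0.0):
    """Eq. (4).  h, m, f are the relative frequencies of hits, misses and
    false alarms from Table 1, o_bar the climatological event frequency,
    C the protection cost, Lp/Lu the protected/unprotected losses."""
    E_forecast = h * (C + Lu) + f * C + m * (Lp + Lu)
    E_climate = o_bar * Lu + min(o_bar * Lp, C)  # cheapest fixed action
    E_perfect = o_bar * (C + Lu)                 # protect only when needed
    return (E_climate - E_forecast) / (E_climate - E_perfect)

def value_curve(p, o, ratios):
    """Best achievable value at each cost-loss ratio C/Lp (with Lu = 0):
    act whenever the forecast probability p exceeds a threshold and keep
    the threshold that maximises V, i.e. the envelope plotted in Fig. 6."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    o_bar = o.mean()
    best = []
    for r in ratios:  # 0 < r < 1; with Lp = 1 the cost is C = r
        vals = []
        for t in np.unique(p):
            act = p >= t
            h = np.mean(act & (o == 1))
            f = np.mean(act & (o == 0))
            m = np.mean(~act & (o == 1))
            vals.append(economic_value(h, m, f, o_bar, C=r, Lp=1.0))
        best.append(max(vals))
    return np.array(best)
```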

3.3.2 Relative operating characteristic (ROC)

The ROC measures the ability of the forecast to discriminate between events and non-events. The ROC curve plots the hit rate against the false alarm rate for a set of varying probability thresholds, where

$$\text{hit rate} = \frac{h}{h + m}, \qquad \text{false alarm rate} = \frac{f}{f + c} \qquad (5)$$

The area under the curve, the ROC area, is a useful summary measure of forecast skill. A ROC area of 1 corresponds to a perfect forecast, which only an ideal deterministic forecast can reach; a ROC area of 0.5 means no forecast skill. A good ROC is indicated by a curve close to the upper left corner, i.e. a low false alarm rate together with a high probability of detection (see Fig. 7). The ROC can be considered a measure of potential usefulness and is a good companion to the reliability diagram.
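The ROC computation is a direct application of Eq. (5). The sketch below, again our own Python with hypothetical names, sweeps the probability thresholds, accumulates the Table 1 counts and integrates the resulting curve to obtain the ROC area quoted in Section 4.2.

```python
import numpy as np

def roc_curve(p, o, thresholds=np.linspace(0.0, 1.0, 11)):
    """Eq. (5) at each probability threshold: the event is forecast
    whenever p >= threshold.  Returns false alarm rates and hit rates
    for plotting as in Fig. 7."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    far, hr = [], []
    for t in thresholds:
        yes = p >= t
        h = np.sum(yes & (o == 1))   # hits
        m = np.sum(~yes & (o == 1))  # misses
        f = np.sum(yes & (o == 0))   # false alarms
        c = np.sum(~yes & (o == 0))  # correct rejections
        hr.append(h / (h + m))
        far.append(f / (f + c))
    return np.array(far), np.array(hr)

def roc_area(far, hr):
    """Trapezoidal area under the ROC curve: 1.0 for a perfect
    forecast, 0.5 for no skill."""
    order = np.argsort(far)
    x = np.concatenate(([0.0], far[order], [1.0]))
    y = np.concatenate(([0.0], hr[order], [1.0]))
    return np.trapz(y, x)
```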

4. RESULTS

Below, results for the various verification measures are presented. The BSS is calculated for the full five days of forecasts; the other measures are obtained for the day 5 forecast.

4.1 Capacity of the ensemble forecast system and reliability of its forecasts

In Fig. 2 the BSS is plotted for the wind and wave fields, with the climate field as the reference. The threshold values are 2 m, 4 m, 6 m and 8 m for H_S and 10 m/s, 14 m/s, 17 m/s and 20 m/s for U_10. The skill scores are rather good for H_S > 2 m and 4 m and for U_10 > 10 m/s and 14 m/s, but the system loses predictability after day 4 for the 6 m and 8 m thresholds and over the whole forecast range for U_10 > 17 m/s and 20 m/s. These results are mainly caused by a lack of sufficient observations; we therefore do not discuss H_S > 6 m and 8 m or U_10 > 17 m/s and 20 m/s in the cost-loss analysis and the reliability diagrams.

Fig. 3 shows the BSS for the wave and wind fields with the operational NCEP run as the reference. All BSS values increase with forecast day, which means the ensemble forecast has higher skill than the deterministic operational forecast.

Figs. 4 and 5 are the day 5 reliability diagrams for wave height (H_S > 2 m, 4 m) and wind speed (U_10 > 10 m/s, 14 m/s). The forecast probabilities are divided into ranges from 0 to 1.0 in increments of 0.1. The observed relative frequency is defined as the fraction of observed events among the total number of forecasts in the same probability class. The striking result in these diagrams is how low the no-resolution lines are: less than 0.1 for wave height and 0.02 for the wind fields. This shows that the ensemble wave forecast system has very high forecast capacity: it catches nearly all events that occurred. At low probabilities the system under-forecasts, and at high probabilities it over-forecasts. The points at probabilities 0.9 and 1.0 for U_10 > 14 m/s are clearly abnormal, a consequence of the scarcity of observations there.

4.2 Cost-loss analysis of the ensemble forecast system

Fig. 6 shows the relative economic value as a function of the cost-loss ratio for the ensemble wave forecast and the operational NWW3 forecast. The benefit of the ensemble forecast system is apparent in all four cases: its economic value is larger than that of the operational run at every ratio. An economic value of 0 corresponds to using climatology alone and a value of 1 to a perfect forecast system. Positive economic value is obtained over a larger range of cost-loss ratios with the ensemble forecast than with the operational forecast.

The relative operating characteristics for different thresholds give another view of the performance of the ensemble forecast system (Fig. 7). The ROC areas are over 0.85 for H_S > 2 m and U_10 > 10 m/s and reach 0.98 for H_S > 4 m and U_10 > 14 m/s.

5. DISCUSSION AND CONCLUSIONS

The ensemble forecast system has better forecast skill than the deterministic operational forecast. The EGOWaFS has good forecast capacity and catches most forecast events. The ensemble system under-forecasts at low probabilities and over-forecasts at high probabilities (Fig. 4). The sharpness diagram (histogram) in that figure clearly shows that the spread generated by the ensemble is too narrow. Furthermore, higher wind speeds are biased high and, not surprisingly, so are higher wave heights. This behavior is well known from deterministic validation of the control run; see for instance Bidlot et al. (2007), where wind speed and wave height biases are shown to drift upward with forecast time. At the five-day forecast, all such biases are generally positive. Note that the underestimation of the spread in wave heights in the EGOWaFS is not unique to this ensemble system; it is also observed in the ECMWF ensemble forecast system (see Saetra and Bidlot, 2004).

A possible explanation for the narrowness of the spread lies in the design of the ensemble. Because only the forcing is perturbed, only wind-sea perturbations are generated; swells older than five days are essentially identical in all members of the ensemble, a consequence of the identical initial conditions used in all members. Swells therefore do not contribute to the spread in the ensemble, whereas there clearly is uncertainty in the swells at the time of their generation, so swell contributions to the ensemble spread are by construction underestimated. To assess whether this explains the spurious narrowness of the ensemble wave height distribution, we intend to isolate wind seas in the ensemble and in the data and repeat the validation on wind seas only.

An additional problem with the validation study presented here is the general lack of data. First, the 10-month period considered is too short to provide a sufficient sample of extreme wave conditions. Second, the buoys used in this verification are located in the Northern Hemisphere and cannot represent the Southern Hemisphere.
Alternatively, altimeter data covering several years should be used in the validation. We intend to use altimeter data for such a validation study in the near future.

6. REFERENCES

Bidlot, J.R., J.G. Li, M. Fauchon, H.S. Chen, J.M. Lefevre, T. Bruns, D. Greenslade, F. Ardhuin, N. Kohno, S. Park and M. Gomez, 2007: Inter-comparison of operational wave forecasting systems. 10th International Workshop on Wave Hindcasting and Forecasting.

Chen, H.S., 2006: An ensemble global ocean wave forecast system. NOAA THORPEX PI Workshop, Camp Springs, MD.

Jolliffe, I.T. and D.B. Stephenson, 2003: Forecast Verification: A Practitioner's Guide in Atmospheric Science. Wiley.

Murphy, A.H., 1973: Hedging and skill scores for probability forecasts. J. Appl. Meteor., 12, 215-223.

Saetra, O. and J.R. Bidlot, 2004: Potential benefits of using probabilistic forecasts for waves and marine winds based on the ECMWF ensemble prediction system. Wea. Forecasting, 19, 673-689.

Tolman, H.L., B. Balasubramaniyan, L.D. Burroughs, D.V. Chalikov, Y.Y. Chao, H.S. Chen and V.M. Gerald, 2002: Development and implementation of wind-generated ocean surface wave models at NCEP. Wea. Forecasting, 17, 311-333.

Tolman, H.L., 2003: Treatment of unresolved islands and ice in wind wave models. Ocean Modelling, 5, 219-231.

Toth, Z. and E. Kalnay, 1997: Ensemble forecasting at NCEP and the breeding method. Mon. Wea. Rev., 125, 3297-3319.

FIG. 1. Global buoy locations used in the ensemble wave forecast verification.

FIG. 2. Brier skill scores using the wave and wind climate data as the reference.

FIG. 3. Brier skill scores using the NWW3 operational run as the reference.

FIG. 4. Day 5 reliability diagrams for wave height. Left: H_S > 2 m; right: H_S > 4 m.

FIG. 5. Day 5 reliability diagrams for wind speed. Left: U_10 > 10 m/s; right: U_10 > 14 m/s.

FIG. 6. Day 5 economic value for waves (H_S > 2 m and H_S > 4 m) and wind (U_10 > 10 m/s and U_10 > 14 m/s).

FIG. 7. Day 5 relative operating characteristic (ROC) curves for waves (H_S > 2 m and H_S > 4 m) and wind (U_10 > 10 m/s and U_10 > 14 m/s).