Verification of Probabilistic Quantitative Precipitation Forecasts over the Southwest United States during Winter 2002/03 by the RSM Ensemble System


1 JANUARY 2005 YUAN ET AL. 279

Verification of Probabilistic Quantitative Precipitation Forecasts over the Southwest United States during Winter 2002/03 by the RSM Ensemble System

HUILING YUAN
Department of Civil and Environmental Engineering, University of California, Irvine, Irvine, California

STEVEN L. MULLEN
Department of Atmospheric Sciences, The University of Arizona, Tucson, Arizona

XIAOGANG GAO AND SOROOSH SOROOSHIAN
Department of Civil and Environmental Engineering, University of California, Irvine, Irvine, California

JUN DU AND HANN-MING HENRY JUANG
NWS/NOAA/NCEP/Environmental Modeling Center, Washington, D.C.

(Manuscript received 26 March 2004, in final form 20 June 2004)

ABSTRACT

The National Centers for Environmental Prediction (NCEP) Regional Spectral Model (RSM) is used to generate ensemble forecasts over the southwest United States during the 151 days of 1 November 2002 to 31 March 2003. RSM forecasts to 24 h on a 12-km grid are produced from 0000 and 1200 UTC initial conditions. Eleven ensemble members are run each forecast cycle from the NCEP Global Forecast System (GFS) ensemble analyses (one control and five pairs of bred modes) and forecast lateral boundary conditions. The model domain covers two NOAA River Forecast Centers: the California Nevada River Forecast Center (CNRFC) and the Colorado Basin River Forecast Center (CBRFC). Ensemble performance is evaluated for probabilistic forecasts of 24-h accumulated precipitation in terms of several accuracy and skill measures. Differences among several NCEP precipitation analyses are assessed along with their impact on model verification, with NCEP stage IV blended analyses selected to represent truth. Forecast quality and potential value are found to depend strongly on the verification dataset, geographic region, and precipitation threshold. In general, the RSM forecasts are skillful over the CNRFC region for thresholds between 1 and 50 mm but are unskillful over the CBRFC region.
The model exhibits a wet bias for all thresholds that is larger over Nevada and the CBRFC region than over California. Mitigation of such biases over the Southwest will pose serious challenges to the modeling community in view of the uncertainties inherent in verifying analyses.

Corresponding author address: Huiling Yuan, Department of Civil and Environmental Engineering, University of California, Irvine, E-4130 Engineering Gateway, Irvine, CA. E-mail: yhl@hwr.arizona.edu

2005 American Meteorological Society

1. Introduction

The southwest United States is marked by highly heterogeneous topography and diverse vegetation. The complexity of topographical, hydrological, and biological scenarios poses challenges for numerical weather prediction over this region. Because accurate precipitation forecasts carry significant economic and social impacts for this fast-developing area (Pielke and Downton 2000), quantitative precipitation forecasts (QPFs) and probabilistic quantitative precipitation forecasts (PQPFs) are considered critical elements of weather prediction (Fritsch et al. 1998). Unfortunately, the skill of precipitation forecasts has improved relatively slowly (Sanders 1986) compared to that for other weather elements. Rapid loss of skill is a consequence, in part, of initial errors that saturate much faster for precipitation than for other sensible weather elements (Wandishin et al. 2001). Model errors associated with physical parameterizations also play a large role in the rapid degradation of precipitation forecasts. Numerous early studies (e.g., Epstein 1969; Leith 1974; Hoffman and Kalnay 1983; Kalnay 2003) suggested that ensemble forecasts, the running of several model forecasts for the same case, could provide more useful and more skillful weather forecasts than a single deterministic

forecast run at higher resolution. Ensemble forecasts are performed by introducing perturbations in initial conditions or in model physical processes, or by using different model configurations. One common method of producing ensemble forecasts is to run multiple members using the same model but from slightly different initial conditions for the same forecast cycle. Global ensemble forecasts have been run operationally at both the National Centers for Environmental Prediction (NCEP; Tracton and Kalnay 1993; Toth and Kalnay 1993; Toth et al. 1997) and the European Centre for Medium-Range Weather Forecasts (ECMWF; Palmer et al. 1993; Molteni et al. 1996) since December 1992. The success of global ensemble systems (e.g., Tracton and Kalnay 1993; Toth and Kalnay 1993; Mureau et al. 1993; Molteni et al. 1996) led to examination of ensemble forecasts with limited-area models at finer resolutions. It was found that the ensemble approach could also improve short-range weather forecasts, especially precipitation forecasting (Brooks et al. 1995; Du et al. 1997; Hamill and Colucci 1998; Buizza et al. 1999; Mullen et al. 1999; Stensrud et al. 1999; Hamill et al. 2000; Toth et al. 2002). Currently, NCEP runs the regional short-range ensemble forecasting (SREF; Du and Tracton 2001) system (0–63 h, starting from 0900 and 2100 UTC initial conditions), which uses the Regional Spectral Model (RSM; Juang and Kanamitsu 1994; Juang et al. 1997) and the Eta Model (Black 1994; Rogers et al. 1995) at an equivalent grid spacing of 48 km (32 km was implemented in summer 2004) over the contiguous United States (CONUS).
Motivated by the previous investigations on SREF and the operational implementation of regional ensemble forecasting at NCEP, this paper assesses twice-daily ensemble forecasts over the southwest United States using the RSM (1997 version) at a grid spacing of 12 km, much finer than the current operational ensemble configuration. Daily forecasts at 0000 and 1200 UTC with 6-h output intervals are performed for winter 2002/03 (a weak El Niño year; ggweather.com/winter0203.htm), a period during which several major winter storms, floods, and heavy snowfalls occurred over the region (Kelsch 2004). We selected a cool season for this study because runoff from wintertime precipitation provides most of the freshwater supply for the western United States (Palmer 1988; Serreze et al. 1999). The objective of this study is to examine how much a short-range ensemble system, at a model resolution typical of next-generation ensemble systems, can benefit PQPFs during a cool season. The verification focuses on evaluating 24-h PQPFs for hydrological regions. To meet the high-resolution requirements of hydrological applications, the NCEP stage IV 4-km precipitation analyses are selected as verification data. The ensemble configuration and experimental design are described in section 2. Verification procedures, differences among potential verification datasets, and their impact on assessment of forecast accuracy and skill are explained in section 3. Section 4 presents the results of PQPFs and analyzes the model performance for different hydrologic regions. Findings are summarized and recommendations for future research are addressed in section 5.

2. Overview of the RSM ensemble prediction system

The NCEP RSM ensemble forecasting system uses the RSM, 1997 version. The model can be run in two configurations: the hydrostatic version (Juang and Kanamitsu 1994) is called the RSM, while the nonhydrostatic version is called the mesoscale spectral model (Juang 2000).
Physical packages in the RSM include a surface layer with planetary boundary layer physics, shallow convection, a simplified Arakawa-Schubert convective parameterization scheme, large-scale precipitation, shortwave and longwave radiation with diurnal variation, gravity wave drag, a double-Laplacian-type horizontal diffusion to control noise accumulation, and some hydrological processes (Kanamitsu 1989). A lateral boundary relaxation is applied to reduce the noise from the lateral boundary conditions (Juang et al. 1997). The NCEP RSM ensemble system uses the breeding method to produce initial perturbations (Toth and Kalnay 1997; Kalnay 2003). The ensemble consists of 11 members: an unperturbed control forecast and five pairs of bred forecasts. Each pair of bred members consists of twin forecasts with positive and negative initial perturbations. Four meteorological variables, U, V, T, and Q (the two wind components, temperature, and humidity) at all model levels, plus mean sea level pressure (MSLP), are perturbed in the breeding process. In this study, the RSM ensemble forecasts are run to 24 h at 12-km grid spacing with 28 sigma levels over the southwest United States using the hydrostatic version (the RSM). The daily forecasts for each 0000 and 1200 UTC cycle are performed for all 151 days of the period 1 November 2002 to 31 March 2003. Initial conditions (ICs) are generated from the NCEP Global Forecast System (GFS; nwp/pcu2/avintro.htm) products. At 0000 and 1200 UTC, the GFS provides global analyses (once called AVN) at T254L64 resolution (T denotes triangular wave truncation, L denotes vertical layers) and runs a deterministic forecast to 84 h at T254L64. To maintain consistency between the ICs and the global ensemble forecasts at T126L28 that are used for the boundary conditions (BCs) of regional ensemble models, the ICs of the control run at 0000 and 1200 UTC are truncated from the T254L64 global analyses to T126L28 resolution.
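The breeding cycle described above can be sketched in a few lines. This is a minimal illustration, not the operational NCEP code: the RMS rescaling norm, the perturbation amplitude, and the array shapes are hypothetical choices made for the sketch.

```python
import numpy as np

def breed_step(perturbed_state, control_state, initial_amplitude):
    """One rescaling step of a breeding cycle (sketch).

    After each short forecast, the difference between a perturbed member
    and the control is scaled back to the initial perturbation amplitude;
    the rescaled bred mode is then re-added to the new control analysis.
    """
    diff = perturbed_state - control_state
    norm = np.sqrt(np.mean(diff ** 2))      # RMS amplitude of the bred mode
    scale = initial_amplitude / norm if norm > 0 else 0.0
    return scale * diff                     # rescaled bred perturbation

# Hypothetical usage: each pair gets +/- the same bred perturbation,
# mirroring the positive/negative twin members described in the text.
control = np.random.randn(28, 100, 100)     # e.g., 28 levels on a 100x100 grid
member = control + 0.1 * np.random.randn(28, 100, 100)
bred = breed_step(member, control, initial_amplitude=0.05)
ic_plus = control + bred
ic_minus = control - bred
```

By construction the rescaled perturbation always has the same RMS amplitude, which is what keeps the bred modes from growing without bound from cycle to cycle.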
At the initial integration time (0000 UTC 1 November 2002), ICs for the five pairs of positive and negative bred perturbations are obtained directly from five pairs of the GFS ensemble runs at T126L28 resolution. Afterward, the perturbations of the bred members are constructed from the previous 12-h RSM forecasts, so-called regional breeding (Tracton and Du 2003, manuscript submitted to Wea. Forecasting). BCs for the control run at 0000 UTC and the five pairs of bred members come from updated 6-h GFS ensemble output. The GFS ensemble forecasts for the 1200 UTC control run at T126L28 resolution are not archived at NCEP, so as a substitute the BCs of the 1200 UTC control run are truncated from the GFS deterministic global forecasts from T254L64 to T126L28 resolution.

Computational limitations always constrain the choice of domain size, grid spacing, and number of ensemble members (Hamill et al. 2001). Du and Tracton (1999) find that restricting domain size has a detrimental effect on the performance of regional ensembles through suppression of spread. They also note, however, that precipitation depends much less on domain size than other model fields (e.g., geopotential height, temperature, wind). Because the sole focus of this study is PQPF, a limited domain is chosen to allow for a 12-km grid separation, or a minimum resolvable wavelength of 24 km in the RSM, that covers the entire Southwest. The model domain contains grid points and is centered at 36.5°N, W (Fig. 1). We hereafter refer to this 12-km RSM configuration as RSM12. The grid covers the two National Oceanic and Atmospheric Administration (NOAA) RFCs over the Southwest: the California Nevada River Forecast Center (CNRFC) and the Colorado Basin River Forecast Center (CBRFC). It also includes four entire U.S. Geological Survey (USGS) hydrologic unit regions: the Upper Colorado region, the Lower Colorado region, the Great Basin region, and the California region. Model elevation is interpolated from 5′ × 5′ topographic data (equivalent to about 5 km × 5 km; Juang 2000) to the 12-km grid over the study domain.

3. Accuracy metrics and verification analyses

Multiple verification measures are needed to sample the high dimensionality of forecast quality (Murphy and Winkler 1987; Murphy 1991).
FIG. 1. The study area ( grid points, 12-km mesh) and topography (contour interval 500 m). The two River Forecast Centers (bold solid lines) are (left) the CNRFC and (right) the CBRFC; the four USGS hydrologic regions (shaded) are A, the Upper Colorado region; B, the Lower Colorado region; C, the Great Basin region; and D, the California region.

For that reason, a suite of measures appropriate for assessing

probabilistic forecasts is employed. These include the Brier skill score (BSS; Stanski et al. 1989; Wilks 1995), the ranked probability skill score (RPSS; Stanski et al. 1989; Wilks 1995), ranked histograms (RH; Anderson 1996; Hamill and Colucci 1998; Hamill 2001), attributes diagrams (AD; Stanski et al. 1989; Wilks 1995), relative operating characteristic (ROC; Stanski et al. 1989; Wilson 2000) curves, and potential economic value (PEV; Richardson 2000; Buizza 2001; Zhu et al. 2002). Details on these verification techniques can be found in textbooks (e.g., Stanski et al. 1989; Wilks 1995; Jolliffe and Stephenson 2003) and the references contained within. Confidence bounds for the verification scores are estimated using resampling procedures (Hamill 1999; Mullen and Buizza 2001).

FIG. 2. The distribution of daily gauge stations at the RFCs for the 151-day study period. The grayscale represents the percentage of days with reports available.

Distributed hydrological models often require precipitation input at resolutions finer than the RSM12 grid. Because of their fine resolution, the NCEP stage IV precipitation analyses, which are mapped onto a 4 km × 4 km national grid for the CONUS (hereafter Stage4; wwwt.emc.ncep.noaa.gov/mmb/ylin/pcpanl/stage4/), are selected to verify the QPFs and PQPFs. Stage4 provides quantitative precipitation estimates (QPEs) as 6-h accumulations valid at 0000, 0600, 1200, and 1800 UTC, whereas many NCEP datasets archive only 24-h estimates valid at 1200 UTC. Stage4 24-h QPEs are obtained from the 6-h QPEs for the 12 RFCs of the CONUS. Approximately 3000 automated hourly rain gauge observations and hourly radar estimates of precipitation are blended to generate the hourly and 6-h precipitation analyses at most RFCs.
The algorithm produces an estimate for any grid cell within approximately 50 km of the nearest gauge report or within approximately 200 km of each successfully decoded radar report. The gauge data are subjected to manual quality control (Young et al. 2000), and the radar data undergo a bias adjustment (Fulton et al. 1998). The CNRFC and CBRFC use the Mountain Mapper method with the Precipitation-Elevation Regressions on Independent Slopes Model (PRISM; Daly et al. 1994) to produce precipitation distributions that incorporate gauge data ( summary2.htm). Major sources of uncertainty in the Stage4 analyses over the southwest United States are variations in the spatial density of rain gauges (Fig. 2) and imperfect radar estimation of reflectivity over the complex terrain (Fulton et al. 1998). Another source of uncertainty is that Stage4 does not supply a mask for missing or suspect data. Although the grid cells outside the RFC domains are easy to treat as missing data in Stage4, it is difficult to discriminate between zero values and missing data over land, especially over regions with relatively few rain gauges (e.g., Nevada). As a way to examine the impact of a different QPE with a comparable resolution on forecast verification, analyses based solely on the RFC gauge data (hereafter RFC4) are used. The RFC4 analyses utilize the same 4-km grid and the same gauge-to-grid algorithm as Stage4, but the automated and manual gauge reports that are analyzed number 7000 to 8000 (Higgins et al. 2003). The input data for the RFC4 analyses are not subjected to the quality control performed at the RFCs, however. The RFC4 mask is then applied to the Stage4 analyses to create a third verification analysis (hereafter RfcS4) that allows comparison over the same set of grid points.
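The masking step that produces RfcS4 amounts to copying the RFC4 missing-data mask onto the Stage4 field so both analyses are scored over identical grid points. A toy sketch, where the 2 × 3 arrays are invented values and NaN marks missing data:

```python
import numpy as np

# Sketch: build the RfcS4 analysis by applying the RFC4 missing-data
# mask to the Stage4 field (toy 2x3 grids, precipitation in mm).
stage4 = np.array([[2.0, 0.0, 5.5],
                   [np.nan, 1.2, 0.0]])   # Stage4 QPE, NaN = off-domain
rfc4 = np.array([[1.8, np.nan, 6.0],
                 [np.nan, 1.0, 0.0]])     # gauge-only QPE with its own mask

rfcs4 = np.where(np.isnan(rfc4), np.nan, stage4)  # Stage4 values, RFC4 mask
common = ~np.isnan(rfcs4)                 # grid points usable for scoring
```

Any score computed on `rfcs4` and `rfc4` then compares the two analyses over the same set of points, which is exactly the purpose of the RfcS4 product.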
The impact of using a coarser-resolution QPE on skill can be examined by comparing verifications based on the 1/8° (~14 km) 24-h analyses (hereafter RFC14) from the NCEP Climate Prediction Center (CPC) against the Stage4 analyses. The PRISM/least squares scheme (more information available online at noaa.gov/mmb/ylin/pcpverif/scores/docs/qanda.html) is used to map the RFC daily gauges (Higgins et al. 2003) onto the 1/8° grid. The RFC14 scheme also includes a quality control step that combines gauge, radar, and satellite data. Note that the 1/8° spacing is close to the native grid spacing of the RSM forecasts.

TABLE 1. NCEP precipitation analyses. QC: quality control done at RFCs; CONUS: continental United States.

          Resolution      Data source     QC    Interval    Time (UTC)    Mask    Gauges for CONUS
RFC14     1/8° (14 km)    Radar + gauge   Yes   24 h        1200          Yes
RFC4      4 km            Gauge only      No    24 h        1200          Yes     7000-8000
Stage4    4 km            Radar + gauge   Yes   6/24 h      0000, 1200    No      ~3000

FIG. 3. The 24-h precipitation during the period 1200 UTC 8 Nov-1200 UTC 9 Nov 2002 for three datasets and the RSM forecasts (see section 3): (a) RFC14 (1/8°), (b) RFC4 (4 km), (c) Stage4 (4 km), (d) RSM member n5, (e) RSM member p5, and (f) RSM ensemble mean. The grayscale indicates the precipitation amount (mm); blank areas have no data.

Thus, 24-h precipitation valid at 1200 UTC is available for four datasets: RFC14 (1/8°, or about 14 km), RFC4 (4 km, with its own mask), Stage4 (4 km, no mask), and RfcS4 (4 km, with the RFC4 mask). The main differences between

the first three QPEs are summarized in Table 1. Figures 3a-c illustrate representative differences among the three QPEs for the 24-h period ending at 1200 UTC 9 November 2002. There are obvious local inconsistencies among the QPEs in terms of precipitation amount and masked areas. Figure 3 also shows 24-h precipitation forecasts for two ensemble members (Figs. 3d,e) and the 11-member ensemble mean (Fig. 3f); noteworthy differences are apparent between the RSM12 forecasts. Comparison of the three QPEs and the two RSM12 forecasts indicates localized differences among the QPEs that are comparable to the differences between the RSM12 QPFs. The general impression is that differences among the QPEs are smaller than differences between the ensemble members, but the QPE differences are not negligible.

The differences among the panels of Fig. 3 suggest that estimates of forecast skill can change markedly when different verification analyses are used as truth. To illustrate this point, the BSS over the entire verification domain for all 151 forecasts was computed with each of the four QPEs (Fig. 4a), with the respective sample frequencies used as the reference forecast in the skill estimate (Fig. 4b). Bilinear interpolation is used to transform model output to the analysis grids for this estimate and for all verifications that follow. The most meaningful comparisons are between Stage4 and RfcS4, to assess the impact of the masked regions; between RfcS4 and RFC4, to assess the impact of different analyses on the same 4-km grid; and between Stage4 and RFC14, to assess the impact of a coarser verification. Elimination of potentially suspect grid points from the Stage4 analyses (Stage4 vs RfcS4) has a minimal (but mostly positive) impact on skill. The skill is significantly higher for the gauge-only analyses (RFC4) than for the blended analyses at the same grid points (RfcS4) for thresholds below 20 mm.
This comparison vividly illustrates the large impact that the choice of verifying analysis can have on skill assessment. In fact, the Brier skill score for the lightest thresholds is markedly higher for the RFC14 and RFC4 analyses than for the Stage4/RfcS4 products. That the RFC14 analyses yield the highest skill is not surprising, since they do not contain a strong influence from scales too small to be resolved by the RSM12. The comparable performance of the RFC4 gauge-only analyses, on the other hand, is more surprising. Differences in performance at the small thresholds are related, in good part, to variations in sample climatology (Fig. 4b). The Stage4 and RfcS4 analyses systematically possess a lower frequency of occurrence below 25 mm than the RFC4 and RFC14 analyses; for example, note the 0.14 frequency at 1 mm for Stage4/RfcS4 versus the higher frequencies for RFC14 and RFC4. Differences of this magnitude would yield large differences in estimates of model biases. It turns out that the RSM ensemble possesses a large wet bias (next section), so use of the drier Stage4 analyses leads to a degradation in skill relative to use of the RFC4 and RFC14 analyses.

FIG. 4. The (a) Brier skill score and (b) sample climatological frequency of 24-h precipitation (1200 UTC) at eight thresholds from 1 to 75 mm during the 151 days for the four datasets (see section 3): RFC14, RFC4, RfcS4, and Stage4. Error bars in (a) give the 90% confidence interval for the Brier skill score based on the Stage4 analyses.

Many hydrological applications, such as flood forecasting, require precipitation input at fine resolutions (Droegemeier et al. 2000). For that reason, the 4-km Stage4 is selected as baseline truth for the scores that follow. The major drawbacks of using the Stage4 products are questions of QPE fidelity in regions with complex terrain and a paucity of rain gauges, the lack of a mask for missing or suspect data, and errors associated with the interpolation of 12-km model fields to the 4-km grid.
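For reference, the Brier skill score used throughout this comparison can be computed as follows. This is a generic sketch scored against sample climatology; the toy 11-member field and the 1-mm threshold are illustrative choices, not the paper's data:

```python
import numpy as np

def brier_skill_score(prob_fcst, obs_event, ref_freq):
    """Brier skill score against a reference climatology (sketch).

    prob_fcst : forecast probability of exceeding the threshold (0..1)
    obs_event : 1 if the verifying analysis exceeds the threshold, else 0
    ref_freq  : sample climatological frequency used as reference forecast
    """
    bs = np.mean((prob_fcst - obs_event) ** 2)      # Brier score
    bs_ref = np.mean((ref_freq - obs_event) ** 2)   # climatology reference
    return 1.0 - bs / bs_ref

# Hypothetical 11-member PQPF: the forecast probability at each point is
# the fraction of members at or above the threshold.
members = np.array([[0.5, 3.0, 12.0, 0.0, 7.0]] * 11) \
    + np.random.rand(11, 5)                         # toy 11 x 5 field (mm)
prob = np.mean(members >= 1.0, axis=0)              # P(24-h precip >= 1 mm)
obs = np.array([0.0, 1.0, 1.0, 0.0, 1.0])
bss = brier_skill_score(prob, obs, ref_freq=obs.mean())
```

A BSS of 1 is a perfect forecast, 0 matches climatology, and negative values are worse than climatology, which is why the reference frequency (Fig. 4b) matters so much to the scores in Fig. 4a.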
The reader should keep in mind that the verification statistics that follow would vary if different analyses were used as truth.

4. Forecast quality and skill

a. Comparison of deterministic RSM12 forecasts and NCEP operational forecasts

It is of interest to examine briefly whether the accuracy of the RSM12 precipitation forecasts is comparable to, and preferably better than, that of the NCEP suite of operational models. It is difficult to justify, a priori, running higher-resolution ensemble forecasts based solely on initial perturbations if RSM12 performance is significantly worse. NCEP continually evaluates QPF performance from its deterministic models and from manual forecasts issued by the Hydrometeorological Prediction Center (HPC), NWS Weather Forecast Offices (WFOs), and River Forecast Centers (RFCs) (e.g., Charba et al. 2003; Reynolds 2003). A detailed, quantitative comparison between the RSM12 and NCEP forecasts for the 2002/03 cool season is not possible in view of differences in the verification periods (November-March for the RSM12 versus October-March for NCEP). Nevertheless, a qualitative comparison (results not shown) of the biases, mean absolute errors, and threat scores reveals that the RSM12 control forecast possesses accuracy comparable to the Nested Grid Model (NGM), Eta Model, GFS, HPC, and CNRFC/CBRFC forecasts during the 2002/03 cool season. We believe that the comparable performance of the RSM12 configuration justifies its use for short-range ensemble PQPFs.

b. Biases

FIG. 5. Monthly averaged precipitation for the five study months. (a) The Stage4 average monthly precipitation. (b) The average monthly difference between the ensemble mean of the 0000 UTC forecasts and the Stage4 precipitation. (c) The average monthly difference of the ensemble mean between the 1200 and 0000 UTC precipitation. The top grayscale is for (a) and (b), and the bottom scale is for (c); units are mm per 30 days (month).

The average monthly precipitation for the 5-month forecast period (Fig. 5a) contains two heavy precipitation bands along the Coastal Range and the Sierra Nevada of California. Drier conditions prevailed over the Great Basin region and the Desert Southwest, with higher precipitation confined to the higher terrain of the CBRFC. Monthly precipitation varied significantly from month to month during the 2002/03 winter (not shown). California experienced widespread regions of heavy precipitation (>200 mm per month) in November and December, with much drier conditions thereafter. The CBRFC zone experienced the opposite trend: a relatively dry November-January, replaced by a wetter pattern in February and March. The difference between the 151-day average precipitation from the 0000 UTC ensembles and the Stage4 average reveals a widespread wet bias (Fig. 5b) in the RSM forecasts. The wet bias is worse in the 1200 UTC forecasts over most regions (Fig. 5c). Moreover, this tendency for wetter 1200 UTC cycles is noted in all five months (not shown).

FIG. 6. Rank histograms of 24-h precipitation for the 0000 UTC cycle (black bars) and 1200 UTC cycle (white bars) during the 151 days over the whole domain. The abscissa shows the rank of the Stage4 analysis among the Stage4 value and the 11 ensemble forecasts; the ordinate shows frequency. The horizontal dashed line denotes the frequency for a uniform rank distribution. The error bars plotted to the right of each bar give the 90% confidence intervals for 0000 and 1200 UTC, respectively.

Ranked histograms (Fig. 6) reveal distributions consistent with a wet bias. The distributions for both 0000 and 1200 UTC deviate significantly from a uniform rank distribution, the expected result for a reliable ensemble. The L shape indicates that the verification preferentially falls at the lower end of the distribution; that is, the ensemble members are too frequently wetter than the verification. The bias is more severe for the 1200 UTC cycle, with the lowest rank being 4% more populated at 1200 UTC than at 0000 UTC. Attributes diagrams also indicate that the wet bias extends to probability space for every threshold examined.
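The rank histogram of Fig. 6 is built by counting, at each grid point, how many ensemble members fall below the verifying analysis. A minimal sketch (ties between members and the analysis are ignored, and the wet-biased toy ensemble is invented for illustration):

```python
import numpy as np

def rank_histogram(ensemble, verification, n_members=11):
    """Rank of the verifying analysis within the sorted ensemble (sketch).

    ensemble     : array (n_members, n_points) of forecast values
    verification : array (n_points,) of verifying analysis values
    Returns counts for ranks 1..n_members+1. An L-shaped histogram
    (verification piling up in rank 1) indicates a wet forecast bias.
    """
    # rank = 1 + number of members strictly below the verification value
    ranks = 1 + np.sum(ensemble < verification[None, :], axis=0)
    return np.bincount(ranks, minlength=n_members + 2)[1:]

ens = np.random.rand(11, 1000) + 0.2    # toy wet-biased 11-member ensemble
ver = np.random.rand(1000)              # toy "observations"
counts = rank_histogram(ens, ver)       # the rank-1 bin is over-populated
```

A reliable ensemble would populate all 12 rank bins uniformly, which is the horizontal dashed line in Fig. 6.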
The reliability for the 0000 UTC forecasts is somewhat better than for the 1200 UTC forecasts (Fig. 7), albeit both curves lie significantly below the 1:1 diagonal for nonzero forecast probabilities (i.e., forecast probabilities are higher than observed frequencies), which indicates a conditional wet bias. Furthermore, the distributions of the forecast probabilities (inset histograms) reveal that the 1200 UTC cycle produces fewer 0% forecasts and more nonzero forecasts across the vast majority of probability values than the 0000 UTC cycle. In fact, the conditional bias at 1200 UTC is so severe that only the 0% and very highest probability forecasts make positive contributions to the Brier skill score; that is, only for those probability categories is the reliability curve closer to the 1:1 diagonal than to the no-skill line.

The Brier skill score can be decomposed into three terms: reliability, resolution, and uncertainty. The uncertainty term [f(1 − f); Fig. 7] depends solely on the sample climatological frequency f. The resolution term is positively oriented (higher values are better) and measures the ability to sort occurrences from nonoccurrences at a level above climatology; on an attributes diagram this corresponds to the squared distance between the reliability curve and the sample climatological frequency, weighted by the forecast frequency. The reliability term is negatively oriented (lower values are better) and measures the agreement between forecast probability and observed frequency of occurrence; this corresponds to the squared distance between the reliability curve and the 1:1 diagonal. Whenever the resolution term exceeds the reliability term, the forecast system is skillful with respect to sample climatology; this corresponds to the reliability curve lying farther from the horizontal climatology line than from the no-skill line. Note that forecasts based on sample climatology have perfect reliability, but they possess no ability to discriminate events.
Figure 8 shows that the resolution terms for the two cycles are virtually equivalent, but the reliability terms for the 1200 UTC forecasts are noticeably larger (worse) than those for the 0000 UTC forecasts at the 1-25-mm thresholds. Evidently the degraded skill at 1200 UTC is due solely to a larger conditional wet bias, and not to a diminished capacity to discriminate precipitation events.
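The reliability/resolution/uncertainty partition discussed above follows the standard Murphy decomposition of the Brier score, BS = REL − RES + UNC, with forecast probabilities binned into the 12 values an 11-member ensemble can produce. A sketch, not the paper's code:

```python
import numpy as np

def brier_decomposition(prob_fcst, obs_event):
    """Murphy decomposition BS = REL - RES + UNC (sketch).

    Probabilities are binned at 0/11, 1/11, ..., 11/11, the values an
    11-member ensemble can produce. Returns (rel, res, unc).
    """
    bins = np.arange(12) / 11.0
    rel = res = 0.0
    n = len(prob_fcst)
    obar = obs_event.mean()                 # sample climatological frequency f
    for p in bins:
        in_bin = np.isclose(prob_fcst, p)
        nk = in_bin.sum()
        if nk == 0:
            continue
        obar_k = obs_event[in_bin].mean()   # observed frequency in this bin
        rel += nk * (p - obar_k) ** 2       # reliability (lower is better)
        res += nk * (obar_k - obar) ** 2    # resolution (higher is better)
    unc = obar * (1.0 - obar)               # uncertainty term f(1 - f)
    return rel / n, res / n, unc
```

The BSS then equals (RES − REL)/UNC, so the system is skillful with respect to sample climatology exactly when resolution exceeds reliability, as stated above.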

FIG. 7. Attributes diagrams for 24-h precipitation at four thresholds: (a) 1, (b) 10, (c) 20, and (d) 50 mm during the 151 days over the whole domain. The horizontal dashed line denotes the sample climatological frequency f. Solid curves with black dots and white circles denote reliability at 0000 and 1200 UTC, respectively. Error bars give the 90% confidence bounds for reliability at 0000 UTC. The abscissa shows the number of the 11 ensemble members forecasting the event (probability in increments of 1/11); the ordinate shows the observed frequency (Stage4). The sloped solid line denotes a perfect forecast, and the sloped dotted line denotes no skill. The inset bar graphs show the percentage of grid points in each forecast probability category: the percentage of 0% forecasts is shown in the left bar graph, and the percentages for 1/11, 2/11, ..., 11/11 are shown in the right bar graph of each panel.

In summary, the RSM12 ensemble shows a significant wet bias that is reflected in the winter-average precipitation, in the frequency of observations that lie outside the ensemble range, and in the forecast probabilities for thresholds up to 50 mm per day. The wet bias is significantly larger for the 1200 UTC cycle than for the 0000 UTC cycle.

c. Regional and monthly verification

Comparison of the BSS for the whole domain, the two RFC districts, and the four USGS hydrological basins (Fig. 9) reveals large regional differences in skill. The CNRFC (Fig. 9b) and California (Fig. 9f) are the only regional domains with positive BSS values for the 0000 and 1200 UTC cycles over the entire range of thresholds, although the confidence intervals (CIs) for the 50- and 75-mm thresholds extend below zero. The other domains are deemed unskillful: their 90% CIs extend so far below zero that they should be considered lacking skill even where the mean value lies above it.
The regional stratification also shows a strong tendency toward smaller BSS values at 1200 UTC in every domain.

288 MONTHLY WEATHER REVIEW VOLUME 133

FIG. 8. Decomposition of the Brier skill score for the (a) 0000 and (b) 1200 UTC cycles for 151 days over the whole domain into three terms: reliability (chain line with circles), resolution (solid line with asterisks), and uncertainty (dashed line with triangles). The three terms are on a log10 scale.

The RPSS, which extends the BSS to multicategory forecasts, measures the agreement between the forecast and observed probability distributions. The RSM ensembles (Fig. 10) are most skillful (RPSS > 0) over California, followed by the CNRFC. Forecasts over the CBRFC, the Great Basin, and the Upper and Lower Colorado watersheds are unskillful. The 0000 UTC forecasts also systematically exhibit higher RPSS than the 1200 UTC forecasts. The RPSS distributions are in basic agreement with the BSS results.

The gridpoint distribution of the RPSS at 0000 UTC (Fig. 11) reveals that no more than half of the verification domain has positive skill. The most skillful regions lie along the mountain ranges and coastal regions of California and over the high terrain of the interior. Areas with the highest skill (RPSS > 0.5) are confined to the Pacific coast, the windward slopes of the Sierra Nevada, and the Mogollon Rim of central Arizona. On the other hand, forecasts over the Great Basin generally lack skill, except over the highest terrain.

BSS values over the whole domain (Fig. 12) are highest in November and December, when the monthly averaged precipitation intensified along the windward slopes in California while it was relatively dry elsewhere. California received less precipitation in February and March, while the Great Basin region and the CBRFC received heavier precipitation than earlier in the season. It appears that RSM skill is related to the distribution of precipitation, with wet conditions in California favoring high skill.

d. Discriminating ability and potential economic value

The ROC curve extends the concepts of the hit rate and false alarm rate to probability forecasts (Table 2). Because the ROC curve stratifies events according to the observations, it is insensitive to conditional biases. The area under the ROC curve provides a scalar summary of the ability of a forecast system to discriminate a dichotomous event. An area of 1.0 denotes a perfect deterministic forecast, while an area of 0.5 indicates no discriminating ability. ROC areas (Fig. 13) for the entire domain and the various hydrological regions run 0.9 or higher for thresholds of 1–75 mm. The ROC areas, unlike the bias score itself or verification scores that are sensitive to bias (BSS and RPSS), show little variation between the 0000 and 1200 UTC forecasts. Buizza et al. (1999) suggest that a ROC area of 0.7 might be considered a threshold for a useful forecast system, and that an area of 0.8 or higher indicates good discriminating ability. It has been argued that the ROC area should be computed after modeling the hit rates and false alarm rates as straight lines in normal deviate space (e.g., Wilson 2000), but a fitted curve gives even higher ROC areas than those in Fig. 13 (not shown). In any event, a straight-line ROC curve with an area above 0.9 must be considered indicative of excellent discriminating ability. Moreover, these high ROC areas indicate that the BSS and RPSS would increase markedly after removal of conditional biases through postprocessing procedures that do not reduce the resolution term.

Potential economic value (PEV) curves can be computed from the ROC curve. The PEV curves give the forecast value relative to the economic value of using climatological information alone. The simple economic model assumes that the cost of taking preventative action to protect against a loss is less than the loss from the weather event itself, and that the protective action reduces the loss to zero.
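The threshold sweep that generates a ROC curve from probability forecasts can be sketched in code. This is an illustrative implementation, not the verification code used in the study; the function name, the default probability thresholds, and the trapezoidal integration are assumptions (a straight-line fit in normal deviate space, as in Wilson 2000, would give a different area):

```python
def roc_area(probs, obs, thresholds=None):
    """Area under the ROC curve for probability forecasts of a binary event.

    At each probability threshold t, the event is 'forecast' when prob >= t.
    In the notation of Table 2, hit rate H = X/(X + Y) and false alarm rate
    F = Z/(Z + W). Assumes both event and non-event cases are present.
    """
    if thresholds is None:
        thresholds = [k / 10 for k in range(11)]   # 0.0, 0.1, ..., 1.0
    n_yes = sum(obs)
    n_no = len(obs) - n_yes
    points = [(1.0, 1.0)]                          # always act: F = H = 1
    for t in thresholds:
        hits = sum(1 for p, o in zip(probs, obs) if p >= t and o)
        fas = sum(1 for p, o in zip(probs, obs) if p >= t and not o)
        points.append((fas / n_no, hits / n_yes))  # (F, H) at this threshold
    points.append((0.0, 0.0))                      # never act: F = H = 0
    points.sort()
    # trapezoidal rule over the (F, H) points
    area = 0.0
    for (f0, h0), (f1, h1) in zip(points, points[1:]):
        area += (f1 - f0) * (h0 + h1) / 2
    return area
```

A perfectly discriminating set of forecasts gives an area of 1.0, and forecasts that never separate events from non-events give 0.5, matching the interpretation in the text.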
A value of 1.0 denotes the PEV of a perfect deterministic forecast, while 0.0 denotes the value of climatology. Forecast value varies with the sample frequency (Fig. 4b), the cost/loss (C/L) ratio, and the probability threshold at which preventive action is taken.

FIG. 9. The Brier skill scores for 24-h accumulated precipitation for the 0000 UTC (solid line with asterisks) and 1200 UTC (dashed line with dots) cycles at eight thresholds (1–75 mm) for 151 days. Seven domains are shown: (a) the whole domain, and only (b) the CNRFC, (c) the CBRFC, (d) the Great Basin region, (e) the Upper Colorado region, (f) the California region, and (g) the Lower Colorado region. Error bars give 90% confidence bounds on the BSS.

The optimal PEV is defined as the highest value among all possible probability thresholds. The PEV for 0000 UTC forecasts (Fig. 14) is positive for C/L ratios up to 0.6 at 1 mm, and gradually narrows to a smaller range of C/L ratios at 50 mm. A positive value for C/L ratios up to 0.6 indicates that some users who require quite confident forecasts could benefit from the RSM12 forecasts. However, the range of C/L ratios associated with a relatively high level of value, say 0.5 or higher, shrinks considerably, to C/L ratios of about 0.2 or less. Value curves for the 1200 UTC forecasts (not shown) are very similar to the 0000 UTC ones.

5. Summary and conclusions

The NCEP RSM was used to generate ensemble forecasts over the southwest United States during the 151 days from 1 November 2002 to 31 March 2003. RSM forecasts to 24 h on a 12-km grid were produced from 0000 and 1200 UTC initial conditions. Eleven ensemble members were run each forecast cycle, starting from NCEP GFS ensemble analyses (one control and five pairs of bred modes) and forecast lateral boundary conditions. Various verification metrics for 24-h accumulations were computed for hydrological zones using several different NCEP precipitation analyses as truth, but results for the 4-km Stage4 analyses, which allowed comparison of the 0000 and 1200 UTC forecast cycles, were emphasized.

FIG. 10. The ranked probability skill score (RPSS) for 0000 UTC (black bars) and 1200 UTC (white bars) over different hydrological regions using four thresholds (1, 10, 20, and 50 mm). From left to right: the whole domain, the CNRFC, the CBRFC, the Upper Colorado region, the Lower Colorado region, the Great Basin region, and the California region. Error bars give 90% confidence bounds for the RPSS.

The main findings of the study are as follows:

- Skill scores show significant sensitivity to the analysis used for verification. Brier skill scores for the RSM can be either skillful or unskillful depending on the analysis.
- The RSM ensemble possesses a significant wet bias. The wet bias exists over most of the domain and during both forecast cycles.
- Forecasts starting at 1200 UTC show a strong tendency for lower skill than forecasts starting at 0000 UTC.
- There are large spatial variations in skill. In general, California is a region of high skill, whereas the Great Basin is a region with little or no skill.
- The RSM ensemble is able to discern precipitation events over a wide range of thresholds. Discriminating ability is highest over California, but it is also exceptional over the individual hydrologic zones, with ROC areas on the order of 0.9. The ROC curves, and hence discriminating ability, do not show a significant variation with analysis cycle.

FIG. 11. Spatial distribution of the ranked probability skill score for 0000 UTC cycle forecasts.

Perhaps the most perplexing finding is the large difference in verification scores between the 0000 and 1200 UTC forecast cycles. The underlying cause of this difference is not known, but a couple of possibilities could be contributing factors. The 1200 UTC analyses over the Pacific Ocean, the region upwind of the model domain, correspond to a 6-h assimilation cycle during nighttime hours.
This means an absence of visible satellite data and fewer reports from ships of opportunity during the 1200 UTC cycle. The relative lack of data could produce poorer analyses at 1200 UTC, which in turn could lead to lower skill. However, the ROC areas and PEV (Figs. 13, 14) do not change appreciably between 0000 and 1200 UTC; thus it is unclear why scores insensitive to biases would not also be degraded at 1200 UTC if poorer analyses were a major factor. Spinup, together with a significant diurnal cycle of precipitation, is another possibility; it could cause a bias differential if the diurnal cycle of wintertime precipitation over virtually all of the western United States had its maximum much closer to 0000 than to 1200 UTC. However, the diurnal cycle during winter is weak over the West, where morning maxima seem to be prevalent at most stations (Wallace 1975). On the other hand, if the wet bias began during a specific portion of the diurnal cycle, it could produce differences in bias between the 0000 and 1200 UTC cycles that would be especially large in the first 24 h of the forecast. For example, if the wet bias began during a particular 6-h period, the 24-h forecasts starting at 1200 UTC could exhibit a larger bias than the 24-h forecasts starting at 0000 UTC. We note that simulations with coarser versions of the RSM exhibit a diurnal cycle in which convection initiates too early in the day (Hong and Leetmaa 1999). Whatever the source of the differences noted here, model deficiency or sampling fluctuation, the situation warrants further examination to see whether the behavior continues. The diurnal issue aside, the existence of an overall wet bias, relative to the Stage4 analyses, is undeniable. It appears in spatial distributions of time-averaged rainfall, attributes diagrams, and rank histograms.

FIG. 12. Monthly Brier skill scores for 24-h accumulated precipitation at 0000 UTC for (left) the whole domain and (right) the CNRFC at 1–75-mm thresholds. The plain solid line represents the 5-month average.

Forecast quality jumps significantly if the RSM is verified against the RFC4 analyses or the RFC14 analyses, however. The improvement comes solely from a reduction of the wet bias. The frequency of precipitation events is noticeably higher in the RFC4 and RFC14 analyses than in the Stage4 analyses for thresholds of 25 mm and smaller, precisely those thresholds for which the improvement is greatest. Evidently, the exclusion of rain gauges in the Stage4 product vis-à-vis RFC4 leads to the difference in sample frequencies. It is plausible that the practice of eliminating rain gauges in a region of sparse gauge distribution (Fig. 2) and spotty radar coverage owing to terrain blockage (Maddox et al. 2002) could be adversely affecting the Stage4 analyses, and for that reason alone we recommend that NCEP review its practice over the Intermountain West. On the other hand, the RSM wet bias could indeed be real if the Stage4 analyses are closer to the truth than the RFC4 and RFC14 analyses. In that case, the large RSM overestimation could be produced by a variety of model deficiencies, for example, insufficient representation of topography at 12 km, or parameterization schemes that have not been carefully tuned for a 12-km grid in the presence of extremely heterogeneous terrain, among many others. Unfortunately, we have no way of knowing. What is clear is that mitigation of biases and other forecast errors over the West, if they are real, will pose challenges to the modeling community in view of the uncertainties inherent in the verifying precipitation analyses.

Comparison of the spatial distributions of the rain gauge density (Fig. 2), observed precipitation (Fig. 5), and the RPSS (Fig. 11) indicates a similar pattern. Areas with dense gauge coverage and high monthly precipitation seem to coincide with the areas of high skill. The spatial correlation coefficient between the RPSS and monthly precipitation is about 0.6; the correlation between the RPSS and gauge density, though much lower at about 0.3, is significantly positive. This raises the issue of RSM skill being related to precipitation amount and data density. A positive correlation between skill and average precipitation is arguably not surprising in view of the RSM wet bias, but any significant correlation between skill and gauge density, however small, only serves to cloud interpretation of these results further. Brier and ranked probability skill scores seem to contain spatial variations too large to be attributed to differential gauge density. RSM precipitation forecasts are clearly most accurate along the windward slopes of the Sierra Nevada and the Coast Ranges of California, and to a lesser degree over the Mogollon Rim of Arizona. Skill worsens downstream of these ranges, which act as initial barriers to moisture-laden westerly flow from the Pacific Ocean. RSM performance in the interior valleys of the West seems particularly poor. It appears that upslope rains during times of wet winter flow (Fig. 11) are relatively easy for the RSM ensemble to predict, whereas precipitation forecasts over the interior are more problematic. These regional variations in skill are almost exclusively related to biases, however. Because statistical postprocessing techniques are excellent at mitigating conditional biases in probabilistic forecasts without significant deterioration of the resolution term (e.g., Hamill and Colucci 1998; Eckel and Walters 1998), this suggests that calibration of the RSM12 ensemble forecasts, interpolated to the finer 4-km Stage4 grid, has the potential to yield unbiased PQPFs that can discriminate 24-h rain events with a high level of confidence.

TABLE 2. The contingency table of forecasts and observed events. The hit rate is X/(X + Y), and the false alarm rate is Z/(Z + W).

                        Forecast
                      Yes      No       Total
  Observed   Yes      X        Y        X + Y
             No       Z        W        Z + W
             Total    X + Z    Y + W    Total

FIG. 13. Area under the relative operating characteristic curve for 24-h accumulated precipitation for the 0000 UTC (dotted line) and 1200 UTC (dashed line) cycles during 151 days over the whole domain at 1–75-mm thresholds. Error bars give 90% confidence bounds for the ROC areas.

However, the sample size and minimal training period required for precipitation calibration need to be tested for regional calibration, especially given frequent changes to the operational analysis/forecast system.

Accurate estimates of precipitation amounts at fine spatial (4 km x 4 km) and temporal resolution are a critical input for hydrologic flood and river flow forecasting models (Droegemeier et al. 2000). Current operational hydrologic models use 6-h accumulations for general flood forecasting (cnrfc/flood_forecasting.html), and they require precipitation intensity for accumulation periods as short as 30 min for flash flood forecasting (Kelsch 2002). With recent improvements in short-range precipitation forecasts generated by mesoscale ensemble systems, it is becoming feasible to consider their use to drive hydrologic models for general flood forecasting. The 4-km Stage4 analyses and RFC hourly rain gauge reports offer the opportunity to verify and calibrate precipitation forecasts near the requisite minimal temporal and spatial scales. Although we realize that the 24-h accumulation period used in this study has not yet reached the desired temporal scales for many hydrologic applications, we believe the results of this paper support the notion that the time is ripe to pursue an accelerated development of ensemble hydrometeorological prediction systems.
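As a complement to the metrics above, the optimal PEV reported in Fig. 14 can be sketched from the same contingency-table counts as the ROC (Table 2). This is a minimal sketch of the static cost/loss model described in section 4d, not the study's code; the function name and threshold set are assumptions, and the value formula follows the standard normalization between climatology (0) and a perfect forecast (1), as in Richardson (2000):

```python
def potential_economic_value(probs, obs, cost_loss, thresholds=None):
    """Optimal potential economic value (PEV) of probability forecasts.

    Static cost/loss model: protecting costs C, an unprotected event
    loses L, and protection eliminates the loss (0 < C/L < 1). The
    optimal PEV takes the best probability threshold for the given C/L.
    """
    if thresholds is None:
        thresholds = [k / 10 for k in range(11)]
    s = sum(obs) / len(obs)            # climatological event frequency
    a = cost_loss                      # C/L ratio
    e_clim = min(a, s)                 # expense of always/never protecting
    e_perf = s * a                     # expense with a perfect forecast
    best = float("-inf")
    for t in thresholds:
        hits = sum(1 for p, o in zip(probs, obs) if p >= t and o)
        fas = sum(1 for p, o in zip(probs, obs) if p >= t and not o)
        h = hits / max(sum(obs), 1)                # hit rate X/(X + Y)
        f = fas / max(len(obs) - sum(obs), 1)      # false alarm rate Z/(Z + W)
        # mean expense (in units of L) when protecting whenever prob >= t
        e_fcst = a * (f * (1 - s) + h * s) + s * (1 - h)
        best = max(best, (e_clim - e_fcst) / (e_clim - e_perf))
    return best
```

Perfectly sharp, correct forecasts give a value of 1.0; forecasts no better than climatology give 0.0, matching the scale used in Fig. 14.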
Before the goal of a skillful coupled ensemble forecast–runoff system can be routinely realized, however, comprehensive validation studies and multivariate calibration efforts must be performed for atmospheric variables that historically have not been scrutinized. Such studies must be extended to longer forecast projections and the more difficult warm season, and they should include the impact of analysis/observation uncertainty on verification. In that spirit, verification of the RSM12 ensemble for 6-h accumulations is under way and will be reported in due course.

Acknowledgments. The authors acknowledge the support of NASA EOS-IDS Grant NAG5-3460 and the NSF STC program (Agreement EAR ). The second author (SLM) also received support from ONR N . Computer resources were obtained under support of ONR N . The NCEP provided the RSM model. We thank Mr. M. Leuthold and Dr. J. E. Combariza for assisting with the configuration and maintenance of the RSM ensemble system at the University of Arizona. Dr. R. Wobus, Dr. Z. Toth, and others at NCEP provided the GSM breeding perturbation code. Y. Lin, B. A. Gordon, and others at NCEP facilitated data access. T. E. Vega, D. K. Braithwaite, J. Broermann, and E. Halper at the University of Arizona assisted with downloading data and provided coverage information for the river basins. Dr. L. J. Wilson provided the ROC curve fitting code. We also thank the two anonymous reviewers for their insightful reviews.

FIG. 14. The optimal potential economic value (negative values not shown) for 24-h precipitation at 0000 UTC for 151 days over the whole domain at four thresholds: 1 mm (dotted line), 10 mm (solid line), 20 mm (dashed line), and 50 mm (chain-dashed line).

REFERENCES

Anderson, J. L., 1996: A method for producing and evaluating probabilistic forecasts from ensemble model integrations. J. Climate, 9,
Black, T. L., 1994: The new NMC mesoscale Eta Model: Description and forecast examples. Wea. Forecasting, 9,
Brooks, H. E., M. S. Tracton, D. J. Stensrud, G. DiMego, and Z. Toth, 1995: Short-range ensemble forecasting: Report from a workshop, July. Bull. Amer. Meteor. Soc., 76,
Buizza, R., 2001: Accuracy and potential economic value of categorical and probabilistic forecasts of discrete events. Mon. Wea. Rev., 129,
——, A. Hollingsworth, F. Lalaurette, and A. Ghelli, 1999: Probabilistic predictions of precipitation using the ECMWF ensemble prediction system. Wea. Forecasting, 14,
Charba, J. P., D. W. Reynolds, B. E. McDonald, and G. M. Carter, 2003: Comparative verification of recent quantitative precipitation forecasts in the National Weather Service: A simple approach for scoring forecast accuracy. Wea. Forecasting, 18,
Daly, C., R. P. Neilson, and D. L. Phillips, 1994: A statistical-topographic model for mapping climatological precipitation over mountainous terrain. J. Appl. Meteor., 33,
Droegemeier, K. K., and Coauthors, 2000: Hydrological aspects of weather prediction and flood warnings: Report of the Ninth Prospectus Development Team of the U.S. Weather Research Program. Bull. Amer. Meteor. Soc., 81,
Du, J., and M. S. Tracton, 1999: Impact of lateral boundary conditions on regional-model ensemble prediction. Research Activities in Atmospheric and Oceanic Modelling, H. Ritchie, Ed., Rep. 28, CAS/JSC Working Group on Numerical Experimentation, WMO/TD-No. 942,
——, and ——, 2001: Implementation of a real-time short-range ensemble forecasting system at NCEP: An update. Preprints, Ninth Conf. on Mesoscale Processes, Ft. Lauderdale, FL, Amer. Meteor. Soc.,
——, S. Mullen, and F. Sanders, 1997: Short-range ensemble forecasting of quantitative precipitation. Mon. Wea. Rev., 125,
Eckel, F. A., and M. K. Walters, 1998: Calibrated probabilistic quantitative precipitation forecasts based on the MRF ensemble. Wea. Forecasting, 13,
Epstein, E. S., 1969: Stochastic dynamic prediction. Tellus, 21,
Fritsch, J. M., and Coauthors, 1998: Quantitative precipitation forecasting: Report of the Eighth Prospectus Development Team, U.S. Weather Research Program. Bull. Amer. Meteor. Soc., 79,
Fulton, R. A., J. P. Breidenbach, D.-J. Seo, D. A. Miller, and T. O'Bannon, 1998: The WSR-88D rainfall algorithm. Wea. Forecasting, 13,
Hamill, T. M., 1999: Hypothesis tests for evaluating numerical precipitation forecasts. Wea. Forecasting, 14,
——, 2001: Interpretation of rank histograms for verifying ensemble forecasts. Mon. Wea. Rev., 129,
——, and S. J. Colucci, 1998: Evaluation of Eta–RSM ensemble probabilistic precipitation forecasts. Mon. Wea. Rev., 126,
——, S. L. Mullen, C. Snyder, Z. Toth, and D. P. Baumhefner, 2000: Ensemble forecasting in the short to medium range: Report from a workshop. Bull. Amer. Meteor. Soc., 81,
——, J. S. Whitaker, and C. Snyder, 2001: Distance-dependent filtering of background error covariance estimates in an ensemble Kalman filter. Mon. Wea. Rev., 129,
Higgins, R. W., W. Shi, E. Yarosh, and R. Joyce, 2003: Improved United States Precipitation Quality Control System and Analysis. NCEP/CPC Atlas 7, 47 pp. [Available online at index.html.]
Hoffman, R. N., and E. Kalnay, 1983: Lagged average forecasting, an alternative to Monte Carlo forecasting. Tellus, 35A,
Hong, S.-Y., and A. Leetmaa, 1999: An evaluation of the NCEP RSM for regional climate modeling. J. Climate, 12,
Jolliffe, I. T., and D. B. Stephenson, 2003: Forecast Verification: A Practitioner's Guide in Atmospheric Science. Wiley and Sons, 240 pp.
Juang, H.-M. H., 2000: The NCEP mesoscale spectral model: A revised version of the nonhydrostatic regional spectral model. Mon. Wea. Rev., 128,
——, and M. Kanamitsu, 1994: The NMC nested regional spectral model. Mon. Wea. Rev., 122, 3–26.
——, S.-Y. Hong, and M. Kanamitsu, 1997: The NCEP regional spectral model: An update. Bull. Amer. Meteor. Soc., 78,
Kalnay, E., 2003: Atmospheric Modeling, Data Assimilation and Predictability. Cambridge University Press, 341 pp.
Kanamitsu, M., 1989: Description of the NMC global data assimilation and forecast system. Wea. Forecasting, 4,
Kelsch, M., 2002: COMET flash flood cases: Summary of characteristics. Preprints, 16th Conf. on Hydrology, Orlando, FL, Amer. Meteor. Soc., CD-ROM, 2.1.
——, 2004: A review of some significant urban floods across the United States. Preprints, 2004 AMS Annual Meeting Preliminary Program, Seattle, WA, Amer. Meteor. Soc., 2–3.
Leith, C. E., 1974: Theoretical skill of Monte Carlo forecasts. Mon. Wea. Rev., 102,
Maddox, R. A., J. Zhang, J. J. Gourley, and K. W. Howard, 2002: Weather radar coverage over the contiguous United States. Wea. Forecasting, 17,
Molteni, F., R. Buizza, T. N. Palmer, and T. Petroliagis, 1996: The ECMWF ensemble prediction system: Methodology and validation. Quart. J. Roy. Meteor. Soc., 122,
Mullen, S. L., and R. Buizza, 2001: Quantitative precipitation forecasts over the United States by the ECMWF ensemble prediction system. Mon. Wea. Rev., 129,
——, J. Du, and F. Sanders, 1999: The dependence of ensemble dispersion on analysis–forecast systems: Implications to short-range ensemble forecasting of precipitation. Mon. Wea. Rev., 127,
Mureau, R., F. Molteni, and T. N. Palmer, 1993: Ensemble prediction using dynamically conditioned perturbations. Quart. J. Roy. Meteor. Soc., 119,
Murphy, A. H., 1991: Forecast verification: Its complexity and dimensionality. Mon. Wea. Rev., 119,
——, and R. L. Winkler, 1987: A general framework for forecast verification. Mon. Wea. Rev., 115,
Palmer, P. L., 1988: The SCS snow survey water supply forecasting program: Current operations and future directions. Proc. 56th Annual Western Snow Conf., Kalispell, MT, Western Snow Conference,
Palmer, T. N., F. Molteni, R. Mureau, R. Buizza, P. Chapelet, and J. Tribbia, 1993: Ensemble prediction. Proc. ECMWF Seminar on Validation of Models over Europe, Vol. 1, Shinfield Park, Reading, UK, ECMWF,
Pielke, R. A., Jr., and M. W. Downton, 2000: Precipitation and damaging floods: Trends in the United States. J. Climate, 13,
Reynolds, D., 2003: Value-added quantitative precipitation forecasts: How valuable is the forecaster? Bull. Amer. Meteor. Soc., 84,
Richardson, D. S., 2000: Skill and economic value of the ECMWF ensemble prediction system. Quart. J. Roy. Meteor. Soc., 126,
Rogers, E., D. G. Deaven, and G. S. DiMego, 1995: The regional analysis system for the operational early Eta Model: Original 80-km configuration and recent changes. Wea. Forecasting, 10,
Sanders, F., 1986: Trends in skill of Boston forecasts made at MIT. Bull. Amer. Meteor. Soc., 67,
Serreze, M., M. Clark, R. Armstrong, D. McGinnis, and R. Pul-


Application and verification of the ECMWF products Report 2007 Application and verification of the ECMWF products Report 2007 National Meteorological Administration Romania 1. Summary of major highlights The medium range forecast activity within the National Meteorological

More information

The Australian Operational Daily Rain Gauge Analysis

The Australian Operational Daily Rain Gauge Analysis The Australian Operational Daily Rain Gauge Analysis Beth Ebert and Gary Weymouth Bureau of Meteorology Research Centre, Melbourne, Australia e.ebert@bom.gov.au Daily rainfall data and analysis procedure

More information

6.5 Operational ensemble forecasting methods

6.5 Operational ensemble forecasting methods 6.5 Operational ensemble forecasting methods Ensemble forecasting methods differ mostly by the way the initial perturbations are generated, and can be classified into essentially two classes. In the first

More information

Application and verification of ECMWF products 2016

Application and verification of ECMWF products 2016 Application and verification of ECMWF products 2016 Icelandic Meteorological Office (www.vedur.is) Bolli Pálmason and Guðrún Nína Petersen 1. Summary of major highlights Medium range weather forecasts

More information

The Experimental Regional Seasonal Ensemble Forecasting at NCEP

The Experimental Regional Seasonal Ensemble Forecasting at NCEP The Experimental Regional Seasonal Ensemble Forecasting at NCEP Hann-Ming Henry Juang Environment Modeling Center, NCEP, Washington, DC Jun Wang SAIC contractor under NCEP/EMC John Roads Experimental Climate

More information

Verification of intense precipitation forecasts from single models and ensemble prediction systems

Verification of intense precipitation forecasts from single models and ensemble prediction systems Verification of intense precipitation forecasts from single models and ensemble prediction systems F Atger To cite this version: F Atger Verification of intense precipitation forecasts from single models

More information

REQUIREMENTS FOR WEATHER RADAR DATA. Review of the current and likely future hydrological requirements for Weather Radar data

REQUIREMENTS FOR WEATHER RADAR DATA. Review of the current and likely future hydrological requirements for Weather Radar data WORLD METEOROLOGICAL ORGANIZATION COMMISSION FOR BASIC SYSTEMS OPEN PROGRAMME AREA GROUP ON INTEGRATED OBSERVING SYSTEMS WORKSHOP ON RADAR DATA EXCHANGE EXETER, UK, 24-26 APRIL 2013 CBS/OPAG-IOS/WxR_EXCHANGE/2.3

More information

USING GRIDDED MOS TECHNIQUES TO DERIVE SNOWFALL CLIMATOLOGIES

USING GRIDDED MOS TECHNIQUES TO DERIVE SNOWFALL CLIMATOLOGIES JP4.12 USING GRIDDED MOS TECHNIQUES TO DERIVE SNOWFALL CLIMATOLOGIES Michael N. Baker * and Kari L. Sheets Meteorological Development Laboratory Office of Science and Technology National Weather Service,

More information

A Multidecadal Variation in Summer Season Diurnal Rainfall in the Central United States*

A Multidecadal Variation in Summer Season Diurnal Rainfall in the Central United States* 174 JOURNAL OF CLIMATE VOLUME 16 A Multidecadal Variation in Summer Season Diurnal Rainfall in the Central United States* QI HU Climate and Bio-Atmospheric Sciences Group, School of Natural Resource Sciences,

More information

Convective scheme and resolution impacts on seasonal precipitation forecasts

Convective scheme and resolution impacts on seasonal precipitation forecasts GEOPHYSICAL RESEARCH LETTERS, VOL. 30, NO. 20, 2078, doi:10.1029/2003gl018297, 2003 Convective scheme and resolution impacts on seasonal precipitation forecasts D. W. Shin, T. E. LaRow, and S. Cocke Center

More information

REPORT ON APPLICATIONS OF EPS FOR SEVERE WEATHER FORECASTING

REPORT ON APPLICATIONS OF EPS FOR SEVERE WEATHER FORECASTING WORLD METEOROLOGICAL ORGANIZATION COMMISSION FOR BASIC SYSTEMS OPAG DPFS EXPERT TEAM ON ENSEMBLE PREDICTION SYSTEMS CBS-DPFS/EPS/Doc. 7(2) (31.I.2006) Item: 7 ENGLISH ONLY EXETER, UNITED KINGDOM 6-10 FEBRUARY

More information

Feature-specific verification of ensemble forecasts

Feature-specific verification of ensemble forecasts Feature-specific verification of ensemble forecasts www.cawcr.gov.au Beth Ebert CAWCR Weather & Environmental Prediction Group Uncertainty information in forecasting For high impact events, forecasters

More information

Report on EN2 DTC Ensemble Task 2015: Testing of Stochastic Physics for use in NARRE

Report on EN2 DTC Ensemble Task 2015: Testing of Stochastic Physics for use in NARRE Report on EN2 DTC Ensemble Task 2015: Testing of Stochastic Physics for use in NARRE Motivation: With growing evidence that initial- condition uncertainties are not sufficient to entirely explain forecast

More information

Five years of limited-area ensemble activities at ARPA-SIM: the COSMO-LEPS system

Five years of limited-area ensemble activities at ARPA-SIM: the COSMO-LEPS system Five years of limited-area ensemble activities at ARPA-SIM: the COSMO-LEPS system Andrea Montani, Chiara Marsigli and Tiziana Paccagnella ARPA-SIM Hydrometeorological service of Emilia-Romagna, Italy 11

More information

EMC Probabilistic Forecast Verification for Sub-season Scales

EMC Probabilistic Forecast Verification for Sub-season Scales EMC Probabilistic Forecast Verification for Sub-season Scales Yuejian Zhu Environmental Modeling Center NCEP/NWS/NOAA Acknowledgement: Wei Li, Hong Guan and Eric Sinsky Present for the DTC Test Plan and

More information

Operational Hydrologic Ensemble Forecasting. Rob Hartman Hydrologist in Charge NWS / California-Nevada River Forecast Center

Operational Hydrologic Ensemble Forecasting. Rob Hartman Hydrologist in Charge NWS / California-Nevada River Forecast Center Operational Hydrologic Ensemble Forecasting Rob Hartman Hydrologist in Charge NWS / California-Nevada River Forecast Center Mission of NWS Hydrologic Services Program Provide river and flood forecasts

More information

Allison Monarski, University of Maryland Masters Scholarly Paper, December 6, Department of Atmospheric and Oceanic Science

Allison Monarski, University of Maryland Masters Scholarly Paper, December 6, Department of Atmospheric and Oceanic Science Allison Monarski, University of Maryland Masters Scholarly Paper, December 6, 2011 1 Department of Atmospheric and Oceanic Science Verification of Model Output Statistics forecasts associated with the

More information

Ensemble forecast and verification of low level wind shear by the NCEP SREF system

Ensemble forecast and verification of low level wind shear by the NCEP SREF system Ensemble forecast and verification of low level wind shear by the NCEP SREF system Binbin Zhou*, Jeff McQueen, Jun Du, Geoff DiMego, Zoltan Toth, Yuejian Zhu NOAA/NWS/NCEP/Environment Model Center 1. Introduction

More information

4.3. David E. Rudack*, Meteorological Development Laboratory Office of Science and Technology National Weather Service, NOAA 1.

4.3. David E. Rudack*, Meteorological Development Laboratory Office of Science and Technology National Weather Service, NOAA 1. 43 RESULTS OF SENSITIVITY TESTING OF MOS WIND SPEED AND DIRECTION GUIDANCE USING VARIOUS SAMPLE SIZES FROM THE GLOBAL ENSEMBLE FORECAST SYSTEM (GEFS) RE- FORECASTS David E Rudack*, Meteorological Development

More information

Comparison of the NCEP and DTC Verification Software Packages

Comparison of the NCEP and DTC Verification Software Packages Comparison of the NCEP and DTC Verification Software Packages Point of Contact: Michelle Harrold September 2011 1. Introduction The National Centers for Environmental Prediction (NCEP) and the Developmental

More information

INVESTIGATION FOR A POSSIBLE INFLUENCE OF IOANNINA AND METSOVO LAKES (EPIRUS, NW GREECE), ON PRECIPITATION, DURING THE WARM PERIOD OF THE YEAR

INVESTIGATION FOR A POSSIBLE INFLUENCE OF IOANNINA AND METSOVO LAKES (EPIRUS, NW GREECE), ON PRECIPITATION, DURING THE WARM PERIOD OF THE YEAR Proceedings of the 13 th International Conference of Environmental Science and Technology Athens, Greece, 5-7 September 2013 INVESTIGATION FOR A POSSIBLE INFLUENCE OF IOANNINA AND METSOVO LAKES (EPIRUS,

More information

Walter C. Kolczynski, Jr.* David R. Stauffer Sue Ellen Haupt Aijun Deng Pennsylvania State University, University Park, PA

Walter C. Kolczynski, Jr.* David R. Stauffer Sue Ellen Haupt Aijun Deng Pennsylvania State University, University Park, PA 7.3B INVESTIGATION OF THE LINEAR VARIANCE CALIBRATION USING AN IDEALIZED STOCHASTIC ENSEMBLE Walter C. Kolczynski, Jr.* David R. Stauffer Sue Ellen Haupt Aijun Deng Pennsylvania State University, University

More information

120 ASSESMENT OF MULTISENSOR QUANTITATIVE PRECIPITATION ESTIMATION IN THE RUSSIAN RIVER BASIN

120 ASSESMENT OF MULTISENSOR QUANTITATIVE PRECIPITATION ESTIMATION IN THE RUSSIAN RIVER BASIN 120 ASSESMENT OF MULTISENSOR QUANTITATIVE PRECIPITATION ESTIMATION IN THE RUSSIAN RIVER BASIN 1 Delbert Willie *, 1 Haonan Chen, 1 V. Chandrasekar 2 Robert Cifelli, 3 Carroll Campbell 3 David Reynolds

More information

Linking the Hydrologic and Atmospheric Communities Through Probabilistic Flash Flood Forecasting

Linking the Hydrologic and Atmospheric Communities Through Probabilistic Flash Flood Forecasting Linking the Hydrologic and Atmospheric Communities Through Probabilistic Flash Flood Forecasting Wallace Hogsett Science and Operations Officer NOAA/NWS/Weather Prediction Center with numerous contributions

More information

ASSESMENT OF THE SEVERE WEATHER ENVIROMENT IN NORTH AMERICA SIMULATED BY A GLOBAL CLIMATE MODEL

ASSESMENT OF THE SEVERE WEATHER ENVIROMENT IN NORTH AMERICA SIMULATED BY A GLOBAL CLIMATE MODEL JP2.9 ASSESMENT OF THE SEVERE WEATHER ENVIROMENT IN NORTH AMERICA SIMULATED BY A GLOBAL CLIMATE MODEL Patrick T. Marsh* and David J. Karoly School of Meteorology, University of Oklahoma, Norman OK and

More information

Towards Operational Probabilistic Precipitation Forecast

Towards Operational Probabilistic Precipitation Forecast 5 Working Group on Verification and Case Studies 56 Towards Operational Probabilistic Precipitation Forecast Marco Turco, Massimo Milelli ARPA Piemonte, Via Pio VII 9, I-10135 Torino, Italy 1 Aim of the

More information

Lightning Data Assimilation using an Ensemble Kalman Filter

Lightning Data Assimilation using an Ensemble Kalman Filter Lightning Data Assimilation using an Ensemble Kalman Filter G.J. Hakim, P. Regulski, Clifford Mass and R. Torn University of Washington, Department of Atmospheric Sciences Seattle, United States 1. INTRODUCTION

More information

Comparison of a 51-member low-resolution (T L 399L62) ensemble with a 6-member high-resolution (T L 799L91) lagged-forecast ensemble

Comparison of a 51-member low-resolution (T L 399L62) ensemble with a 6-member high-resolution (T L 799L91) lagged-forecast ensemble 559 Comparison of a 51-member low-resolution (T L 399L62) ensemble with a 6-member high-resolution (T L 799L91) lagged-forecast ensemble Roberto Buizza Research Department To appear in Mon. Wea.Rev. March

More information

Verification of Probability Forecasts

Verification of Probability Forecasts Verification of Probability Forecasts Beth Ebert Bureau of Meteorology Research Centre (BMRC) Melbourne, Australia 3rd International Verification Methods Workshop, 29 January 2 February 27 Topics Verification

More information

12.2 PROBABILISTIC GUIDANCE OF AVIATION HAZARDS FOR TRANSOCEANIC FLIGHTS

12.2 PROBABILISTIC GUIDANCE OF AVIATION HAZARDS FOR TRANSOCEANIC FLIGHTS 12.2 PROBABILISTIC GUIDANCE OF AVIATION HAZARDS FOR TRANSOCEANIC FLIGHTS K. A. Stone, M. Steiner, J. O. Pinto, C. P. Kalb, C. J. Kessinger NCAR, Boulder, CO M. Strahan Aviation Weather Center, Kansas City,

More information

Verification of ensemble and probability forecasts

Verification of ensemble and probability forecasts Verification of ensemble and probability forecasts Barbara Brown NCAR, USA bgb@ucar.edu Collaborators: Tara Jensen (NCAR), Eric Gilleland (NCAR), Ed Tollerud (NOAA/ESRL), Beth Ebert (CAWCR), Laurence Wilson

More information

National Weather Service-Pennsylvania State University Weather Events

National Weather Service-Pennsylvania State University Weather Events National Weather Service-Pennsylvania State University Weather Events Abstract: West Coast Heavy Precipitation Event of January 2012 by Richard H. Grumm National Weather Service State College PA 16803

More information

Impact of Stochastic Convection on Ensemble Forecasts of Tropical Cyclone Development

Impact of Stochastic Convection on Ensemble Forecasts of Tropical Cyclone Development 620 M O N T H L Y W E A T H E R R E V I E W VOLUME 139 Impact of Stochastic Convection on Ensemble Forecasts of Tropical Cyclone Development ANDREW SNYDER AND ZHAOXIA PU Department of Atmospheric Sciences,

More information

LATE REQUEST FOR A SPECIAL PROJECT

LATE REQUEST FOR A SPECIAL PROJECT LATE REQUEST FOR A SPECIAL PROJECT 2016 2018 MEMBER STATE: Italy Principal Investigator 1 : Affiliation: Address: E-mail: Other researchers: Project Title: Valerio Capecchi LaMMA Consortium - Environmental

More information

5.2 PRE-PROCESSING OF ATMOSPHERIC FORCING FOR ENSEMBLE STREAMFLOW PREDICTION

5.2 PRE-PROCESSING OF ATMOSPHERIC FORCING FOR ENSEMBLE STREAMFLOW PREDICTION 5.2 PRE-PROCESSING OF ATMOSPHERIC FORCING FOR ENSEMBLE STREAMFLOW PREDICTION John Schaake*, Sanja Perica, Mary Mullusky, Julie Demargne, Edwin Welles and Limin Wu Hydrology Laboratory, Office of Hydrologic

More information

Model error and seasonal forecasting

Model error and seasonal forecasting Model error and seasonal forecasting Antje Weisheimer European Centre for Medium-Range Weather Forecasts ECMWF, Reading, UK with thanks to Paco Doblas-Reyes and Tim Palmer Model error and model uncertainty

More information

A Comparison of Tornado Warning Lead Times with and without NEXRAD Doppler Radar

A Comparison of Tornado Warning Lead Times with and without NEXRAD Doppler Radar MARCH 1996 B I E R I N G E R A N D R A Y 47 A Comparison of Tornado Warning Lead Times with and without NEXRAD Doppler Radar PAUL BIERINGER AND PETER S. RAY Department of Meteorology, The Florida State

More information

Application and verification of ECMWF products 2015

Application and verification of ECMWF products 2015 Application and verification of ECMWF products 2015 Hungarian Meteorological Service 1. Summary of major highlights The objective verification of ECMWF forecasts have been continued on all the time ranges

More information

Operational quantitative precipitation estimation using radar, gauge g and

Operational quantitative precipitation estimation using radar, gauge g and Operational quantitative precipitation estimation using radar, gauge g and satellite for hydrometeorological applications in Southern Brazil Leonardo Calvetti¹, Cesar Beneti¹, Diogo Stringari¹, i¹ Alex

More information

09 December 2005 snow event by Richard H. Grumm National Weather Service Office State College, PA 16803

09 December 2005 snow event by Richard H. Grumm National Weather Service Office State College, PA 16803 09 December 2005 snow event by Richard H. Grumm National Weather Service Office State College, PA 16803 1. INTRODUCTION A winter storm produced heavy snow over a large portion of Pennsylvania on 8-9 December

More information

1. INTRODUCTION 2. QPF

1. INTRODUCTION 2. QPF 440 24th Weather and Forecasting/20th Numerical Weather Prediction HUMAN IMPROVEMENT TO NUMERICAL WEATHER PREDICTION AT THE HYDROMETEOROLOGICAL PREDICTION CENTER David R. Novak, Chris Bailey, Keith Brill,

More information

Application and verification of ECMWF products: 2010

Application and verification of ECMWF products: 2010 Application and verification of ECMWF products: 2010 Hellenic National Meteorological Service (HNMS) F. Gofa, D. Tzeferi and T. Charantonis 1. Summary of major highlights In order to determine the quality

More information

A simple method for seamless verification applied to precipitation hindcasts from two global models

A simple method for seamless verification applied to precipitation hindcasts from two global models A simple method for seamless verification applied to precipitation hindcasts from two global models Matthew Wheeler 1, Hongyan Zhu 1, Adam Sobel 2, Debra Hudson 1 and Frederic Vitart 3 1 Bureau of Meteorology,

More information

2015: A YEAR IN REVIEW F.S. ANSLOW

2015: A YEAR IN REVIEW F.S. ANSLOW 2015: A YEAR IN REVIEW F.S. ANSLOW 1 INTRODUCTION Recently, three of the major centres for global climate monitoring determined with high confidence that 2015 was the warmest year on record, globally.

More information

Snow, freezing rain, and shallow arctic Air 8-10 February 2015: NCEP HRRR success story

Snow, freezing rain, and shallow arctic Air 8-10 February 2015: NCEP HRRR success story Snow, freezing rain, and shallow arctic Air 8-10 February 2015: NCEP HRRR success story By Richard H. Grumm National Weather Service State College, PA 1. Overview A short-wave (Fig. 1) moved over the strong

More information

NIDIS Intermountain West Drought Early Warning System January 15, 2019

NIDIS Intermountain West Drought Early Warning System January 15, 2019 NIDIS Drought and Water Assessment NIDIS Intermountain West Drought Early Warning System January 15, 2019 Precipitation The images above use daily precipitation statistics from NWS COOP, CoCoRaHS, and

More information

Predicting uncertainty in forecasts of weather and climate (Also published as ECMWF Technical Memorandum No. 294)

Predicting uncertainty in forecasts of weather and climate (Also published as ECMWF Technical Memorandum No. 294) Predicting uncertainty in forecasts of weather and climate (Also published as ECMWF Technical Memorandum No. 294) By T.N. Palmer Research Department November 999 Abstract The predictability of weather

More information

P3.11 A COMPARISON OF AN ENSEMBLE OF POSITIVE/NEGATIVE PAIRS AND A CENTERED SPHERICAL SIMPLEX ENSEMBLE

P3.11 A COMPARISON OF AN ENSEMBLE OF POSITIVE/NEGATIVE PAIRS AND A CENTERED SPHERICAL SIMPLEX ENSEMBLE P3.11 A COMPARISON OF AN ENSEMBLE OF POSITIVE/NEGATIVE PAIRS AND A CENTERED SPHERICAL SIMPLEX ENSEMBLE 1 INTRODUCTION Xuguang Wang* The Pennsylvania State University, University Park, PA Craig H. Bishop

More information

Impacts of the April 2013 Mean trough over central North America

Impacts of the April 2013 Mean trough over central North America Impacts of the April 2013 Mean trough over central North America By Richard H. Grumm National Weather Service State College, PA Abstract: The mean 500 hpa flow over North America featured a trough over

More information

Exploring ensemble forecast calibration issues using reforecast data sets

Exploring ensemble forecast calibration issues using reforecast data sets Exploring ensemble forecast calibration issues using reforecast data sets Thomas M. Hamill (1) and Renate Hagedorn (2) (1) NOAA Earth System Research Lab, Boulder, Colorado, USA 80303 Tom.hamill@noaa.gov

More information

Justin Arnott and Michael Evans NOAA National Weather Service, Binghamton, NY. Richard Grumm NOAA National Weather Service, State College, PA

Justin Arnott and Michael Evans NOAA National Weather Service, Binghamton, NY. Richard Grumm NOAA National Weather Service, State College, PA 3A.5 REGIONAL SCALE ENSEMBLE FORECAST OF THE LAKE EFFECT SNOW EVENT OF 7 FEBRUARY 2007 Justin Arnott and Michael Evans NOAA National Weather Service, Binghamton, NY Richard Grumm NOAA National Weather

More information

MULTIRESOLUTION ENSEMBLE FORECASTS OF AN OBSERVED TORNADIC THUNDERSTORM SYSTEM. PART II: STORM-SCALE EXPERIMENTS

MULTIRESOLUTION ENSEMBLE FORECASTS OF AN OBSERVED TORNADIC THUNDERSTORM SYSTEM. PART II: STORM-SCALE EXPERIMENTS MULTIRESOLUTION ENSEMBLE FORECASTS OF AN OBSERVED TORNADIC THUNDERSTORM SYSTEM. PART II: STORM-SCALE EXPERIMENTS Fanyou Kong Center for Analysis and Prediction of Storms, University of Oklahoma, Norman,

More information

Mesoscale predictability under various synoptic regimes

Mesoscale predictability under various synoptic regimes Nonlinear Processes in Geophysics (2001) 8: 429 438 Nonlinear Processes in Geophysics c European Geophysical Society 2001 Mesoscale predictability under various synoptic regimes W. A. Nuss and D. K. Miller

More information

Heavier summer downpours with climate change revealed by weather forecast resolution model

Heavier summer downpours with climate change revealed by weather forecast resolution model SUPPLEMENTARY INFORMATION DOI: 10.1038/NCLIMATE2258 Heavier summer downpours with climate change revealed by weather forecast resolution model Number of files = 1 File #1 filename: kendon14supp.pdf File

More information

An Overview of Operations at the West Gulf River Forecast Center Gregory Waller Service Coordination Hydrologist NWS - West Gulf River Forecast Center

An Overview of Operations at the West Gulf River Forecast Center Gregory Waller Service Coordination Hydrologist NWS - West Gulf River Forecast Center National Weather Service West Gulf River Forecast Center An Overview of Operations at the West Gulf River Forecast Center Gregory Waller Service Coordination Hydrologist NWS - West Gulf River Forecast

More information

DEFINITION OF DRY THUNDERSTORMS FOR USE IN VERIFYING SPC FIRE WEATHER PRODUCTS. 2.1 Data

DEFINITION OF DRY THUNDERSTORMS FOR USE IN VERIFYING SPC FIRE WEATHER PRODUCTS. 2.1 Data S47 DEFINITION OF DRY THUNDERSTORMS FOR USE IN VERIFYING SPC FIRE WEATHER PRODUCTS Paul X. Flanagan *,1,2, Christopher J. Melick 3,4, Israel L. Jirak 4, Jaret W. Rogers 4, Andrew R. Dean 4, Steven J. Weiss

More information

István Ihász, Máté Mile and Zoltán Üveges Hungarian Meteorological Service, Budapest, Hungary

István Ihász, Máté Mile and Zoltán Üveges Hungarian Meteorological Service, Budapest, Hungary Comprehensive study of the calibrated EPS products István Ihász, Máté Mile and Zoltán Üveges Hungarian Meteorological Service, Budapest, Hungary 1. Introduction Calibration of ensemble forecasts is a new

More information

A COMPREHENSIVE 5-YEAR SEVERE STORM ENVIRONMENT CLIMATOLOGY FOR THE CONTINENTAL UNITED STATES 3. RESULTS

A COMPREHENSIVE 5-YEAR SEVERE STORM ENVIRONMENT CLIMATOLOGY FOR THE CONTINENTAL UNITED STATES 3. RESULTS 16A.4 A COMPREHENSIVE 5-YEAR SEVERE STORM ENVIRONMENT CLIMATOLOGY FOR THE CONTINENTAL UNITED STATES Russell S. Schneider 1 and Andrew R. Dean 1,2 1 DOC/NOAA/NWS/NCEP Storm Prediction Center 2 OU-NOAA Cooperative

More information

Evaluation of the Version 7 TRMM Multi-Satellite Precipitation Analysis (TMPA) 3B42 product over Greece

Evaluation of the Version 7 TRMM Multi-Satellite Precipitation Analysis (TMPA) 3B42 product over Greece 15 th International Conference on Environmental Science and Technology Rhodes, Greece, 31 August to 2 September 2017 Evaluation of the Version 7 TRMM Multi-Satellite Precipitation Analysis (TMPA) 3B42

More information

Precipitation processes in the Middle East

Precipitation processes in the Middle East Precipitation processes in the Middle East J. Evans a, R. Smith a and R.Oglesby b a Dept. Geology & Geophysics, Yale University, Connecticut, USA. b Global Hydrology and Climate Center, NASA, Alabama,

More information

JP 2.8 SPATIAL DISTRIBUTION OF TROPICAL CYCLONE INDUCED PRECIPITATION AND OPERATIONAL APPLICATIONS IN SOUTH CAROLINA

JP 2.8 SPATIAL DISTRIBUTION OF TROPICAL CYCLONE INDUCED PRECIPITATION AND OPERATIONAL APPLICATIONS IN SOUTH CAROLINA JP 2.8 SPATIAL DISTRIBUTION OF TROPICAL CYCLONE INDUCED PRECIPITATION AND OPERATIONAL APPLICATIONS IN SOUTH CAROLINA R. Jason Caldwell *, Hope P. Mizzell, and Milt Brown South Carolina State Climatology

More information

2009 Progress Report To The National Aeronautics and Space Administration NASA Energy and Water Cycle Study (NEWS) Program

2009 Progress Report To The National Aeronautics and Space Administration NASA Energy and Water Cycle Study (NEWS) Program 2009 Progress Report To The National Aeronautics and Space Administration NASA Energy and Water Cycle Study (NEWS) Program Proposal Title: Grant Number: PI: The Challenges of Utilizing Satellite Precipitation

More information

Land Data Assimilation at NCEP NLDAS Project Overview, ECMWF HEPEX 2004

Land Data Assimilation at NCEP NLDAS Project Overview, ECMWF HEPEX 2004 Dag.Lohmann@noaa.gov, Land Data Assimilation at NCEP NLDAS Project Overview, ECMWF HEPEX 2004 Land Data Assimilation at NCEP: Strategic Lessons Learned from the North American Land Data Assimilation System

More information

Predictability from a Forecast Provider s Perspective

Predictability from a Forecast Provider s Perspective Predictability from a Forecast Provider s Perspective Ken Mylne Met Office, Bracknell RG12 2SZ, UK. email: ken.mylne@metoffice.com 1. Introduction Predictability is not a new issue for forecasters or forecast

More information

New Climate Divisions for Monitoring and Predicting Climate in the U.S.

New Climate Divisions for Monitoring and Predicting Climate in the U.S. New Climate Divisions for Monitoring and Predicting Climate in the U.S. Klaus Wolter and Dave Allured, University of Colorado at Boulder, CIRES Climate Diagnostics Center, and NOAA-ESRL Physical Sciences

More information

statistical methods for tailoring seasonal climate forecasts Andrew W. Robertson, IRI

statistical methods for tailoring seasonal climate forecasts Andrew W. Robertson, IRI statistical methods for tailoring seasonal climate forecasts Andrew W. Robertson, IRI tailored seasonal forecasts why do we make probabilistic forecasts? to reduce our uncertainty about the (unknown) future

More information

AN ENSEMBLE STRATEGY FOR ROAD WEATHER APPLICATIONS

AN ENSEMBLE STRATEGY FOR ROAD WEATHER APPLICATIONS 11.8 AN ENSEMBLE STRATEGY FOR ROAD WEATHER APPLICATIONS Paul Schultz 1 NOAA Research - Forecast Systems Laboratory Boulder, Colorado 1. INTRODUCTION In 1999 the Federal Highways Administration (FHWA) initiated

More information

Hydrologic Ensemble Prediction: Challenges and Opportunities

Hydrologic Ensemble Prediction: Challenges and Opportunities Hydrologic Ensemble Prediction: Challenges and Opportunities John Schaake (with lots of help from others including: Roberto Buizza, Martyn Clark, Peter Krahe, Tom Hamill, Robert Hartman, Chuck Howard,

More information

The ECMWF Extended range forecasts

The ECMWF Extended range forecasts The ECMWF Extended range forecasts Laura.Ferranti@ecmwf.int ECMWF, Reading, U.K. Slide 1 TC January 2014 Slide 1 The operational forecasting system l High resolution forecast: twice per day 16 km 91-level,

More information

Basic Verification Concepts

Basic Verification Concepts Basic Verification Concepts Barbara Brown National Center for Atmospheric Research Boulder Colorado USA bgb@ucar.edu Basic concepts - outline What is verification? Why verify? Identifying verification

More information

NOAA Earth System Research Lab, Boulder, Colorado, USA

NOAA Earth System Research Lab, Boulder, Colorado, USA Exploring ensemble forecast calibration issues using reforecast data sets Thomas M. Hamill 1 and Renate Hagedorn 2 1 NOAA Earth System Research Lab, Boulder, Colorado, USA 80303,Tom.hamill@noaa.gov; http://esrl.noaa.gov/psd/people/tom.hamill

More information