8.10 VERIFICATION OF NOWCASTS AND VERY SHORT RANGE FORECASTS

Elizabeth Ebert*
Bureau of Meteorology Research Centre, Melbourne, Australia


1. INTRODUCTION

Nowcasts and very short range forecasts differ from most NWP and public weather forecasts in a number of important ways. Nowcasts are mainly (but not exclusively) concerned with high impact weather associated with storms, with potentially very high losses associated with undetected and unwarned severe events. Nowcasts are primarily observation-based, usually derived from radar observations. Very short range model forecasts rely on the most accurate and recent observational data to specify the initial conditions. The spatial and temporal resolution of gridded nowcasts and very short range forecasts is typically much finer than that of most NWP, in an attempt to capture the extreme values experienced in high impact events. The forecasts are often relevant for a relatively small spatial domain, such as the coverage of a single radar or the limited area domain of a high resolution mesoscale model. Nowcasts are made much more frequently than most other weather forecasts to take advantage of the rapid update of radar-based information. The observational data used for verification are often asynoptic, coming from storm spotter observations, high resolution gauge and lightning networks, or from the radar analyses themselves.

These differences affect the way forecast verification is done and how the verification information is used. In particular, the frequent availability of new radar and other data for verifying recently issued nowcasts and forecasts enables the forecasts to be objectively or subjectively adjusted "on the fly" to reflect the new information.

This paper discusses several issues that are particularly important for the verification of nowcasts and very short range forecasts. It is assumed that most readers are familiar with the standard verification scores. Detailed descriptions are found in the textbooks of Wilks (1995) and Jolliffe and Stephenson (2003). An online source of information on standard and new verification methods is the Forecast Verification web site (staff/eee/verif/verif_web_page.html).

* Corresponding author address: Elizabeth Ebert, Bureau of Meteorology Research Centre, GPO Box 1289, Melbourne 3001, Australia; e.ebert@bom.gov.au

2. HIGH IMPACT WEATHER

Nowcasts and very short range forecasts tend to emphasize the prediction of high impact weather. This includes features related to storms, such as the motion and evolution of thunderstorm cells, mesocyclones and tornadoes, downbursts, damaging winds, lightning, hail, and heavy rainfall. Examples of larger scale features common to both nowcasts and NWP-based forecasts include frontal passages and the landfall of tropical cyclones.

The weather associated with severe storms is highly variable in space and time, making it difficult to make accurate detailed forecasts. The predictability is a function of scale, with the larger scales persisting much longer than the smaller scales (e.g., Germann and Zawadzki, 2002). Early in the forecast period most nowcast systems explicitly predict the motion and evolution of the weather at high resolution. Later in the forecast period, when small-scale predictability has disappeared, the goal becomes to estimate areas where significant weather is possible or likely. Probabilistic nowcast systems are beginning to be developed to explicitly account for forecast uncertainty (e.g., Germann and Zawadzki, 2004; Megenhardt et al., 2004; Seed et al., 2005).
High impact events also tend to be relatively rare, making them difficult to evaluate in a systematic manner. As a result it is more difficult to improve the forecast system in response to the biases or other errors revealed by the verification.

3. OBSERVATIONS

The observations required to verify predictions of mesocyclones, hail occurrence and size, damaging winds, and other severe weather elements are usually obtained via reports from human observers, whether they are relayed by storm spotters or assessed from damage surveys. These data are not without problems. Severe weather observations are not always reliable. Funnel cloud sightings may be mistaken for

tornadoes, maximum hail sizes may be exaggerated, times associated with the sightings may be misreported, and so on. Even good surface observations may not be used if the communication process fails. Studies have shown that, all other factors being equal, there is a greater frequency of storm reports during daytime and in more densely populated areas. With storm chasing becoming an increasingly popular pastime the population issue may become less influential. Nevertheless, it is frequently impossible to assess whether the absence of a report of severe weather is due to it not having occurred or simply not having been seen. A related factor is the potential increase in storm reports when warnings were in effect, due simply to enhanced coordination between forecasters and storm spotters or greater efforts being made to gather evidence of storm damage. This non-systematic verification artificially raises the probability of detection (POD) (Smith, 1999), and can only be addressed by putting equal effort into verifying periods when no warnings were in effect.

Observation fields for verifying threat areas such as severe weather warning polygons can be defined in different ways, depending on whether the verification is for accounting or diagnostic purposes. Traditionally, an observation of the severe weather event anywhere within the warned area (counties or polygons in the US) during the period for which the warning was valid would count as a hit for the whole area. For verification purposes Mahoney et al. (2004) use a smoothed analysis of high resolution radar observations of convection to simulate the gridded convection forecast products used by the aviation community.

Storm motion is most easily verified using radar analyses at the time for which the forecast is valid. Cell tracking algorithms identify each cell separately, so it is possible to associate a cell's forecast position with its observed position later. This assumes that the association of cells from one radar scan to the next is accurate, which is not always the case (e.g., Stumpf et al., 2004).

Precipitation occurrence and intensity can be verified against radar analyses using grid-to-grid comparison methods. Rain gauge measurements are another important source of verification data for precipitation forecasts. However, scale mismatches between forecasts at radar or model resolution and the point data represented by the gauges add some artificial error (Tustison et al., 2001). Rain gauge data can be objectively analyzed onto the forecast grid, preferably using methods such as kriging that also provide estimates of analysis error. Scale recursive filtering (Tustison et al., 2003) and conditional blending (Sinclair and Pegram, 2004) are methods for combining observational data from multiple sources (radar and gauge, for example) into gridded analyses that can be used to verify gridded rain forecasts.

The verification should take into account the observational uncertainty, although this is frequently overlooked. Observational data of poor quality should not be used for verification if it can be avoided. In many cases it is possible to quality control the storm reports by referencing them to radar analyses from the same location and time. Quantitative methods for adjusting the verification results to remove the artificial error caused by observational uncertainty are the subject of current research (e.g., Briggs et al., 2005).
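To make the grid analysis step concrete, the sketch below bins point gauge observations into the cells of a forecast grid by simple cell averaging. It is only a minimal, hypothetical stand-in for the kriging and blending methods cited above (which, unlike this sketch, also estimate the analysis error); all variable names and numbers are illustrative.

    import numpy as np

    def gauges_to_grid(gauge_x, gauge_y, gauge_rain, x_edges, y_edges):
        """Average point gauge observations within each forecast grid cell.

        A crude alternative to kriging: no error estimate, and cells
        containing no gauges are left missing (NaN).
        """
        sums, _, _ = np.histogram2d(gauge_x, gauge_y, bins=[x_edges, y_edges],
                                    weights=gauge_rain)
        counts, _, _ = np.histogram2d(gauge_x, gauge_y, bins=[x_edges, y_edges])
        analysis = np.full(sums.shape, np.nan)
        np.divide(sums, counts, out=analysis, where=counts > 0)
        return analysis, counts   # counts give a rough idea of analysis confidence

    # Hypothetical example: 200 gauges scattered over a 100 km square,
    # analyzed onto a 10 km grid
    rng = np.random.default_rng(0)
    gx, gy = rng.uniform(0, 100, 200), rng.uniform(0, 100, 200)
    rain = rng.gamma(0.5, 4.0, 200)        # skewed, rain-like amounts (mm)
    edges = np.arange(0, 101, 10)
    analysis, counts = gauges_to_grid(gx, gy, rain, edges, edges)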
4. REAL TIME AND POST-EVENT VERIFICATION

In an operational setting, feedback on nowcasts and very short range forecasts can be quite rapid as new radar scans become available. These may be used to evaluate the performance of the nowcast system while the event is occurring, allowing the forecaster to adjust his/her forecast and enabling a better understanding of the strengths and weaknesses of the nowcast system. Real time evaluation tends to be subjective in nature, based on graphical displays of forecasts and observations. Objective verification of nowcasts in near real time does not appear to be commonly performed. This should become regular practice to help forecasters better learn from their mistakes (see the excellent essay by Doswell at f/verify.html).

Real time verification may suffer from the lack of a complete set of observations, and so it is often necessary to perform a post-event verification after all of the observations are available. This enables more robust quantitative measures to be computed. It also allows a greater variety of verification methodologies to be applied, including more complex diagnostic techniques, to tease out the nature and source of the forecast errors.

Whether the verification is done in real time or post event, it is important to emphasize that no single measure adequately describes forecast errors. There is often a tendency to focus on one or two scores for administrative purposes, but summary scores generally say little or nothing about what the forecast is doing wrong. To explore the errors using a measures-oriented approach one must compute a variety of scores (CSI, POD, FAR, bias, POFD, RMSE, etc.). Exploration of the forecast errors is better done using a distributions-oriented approach that shows the joint frequency of each forecast and observed category, also known as a contingency table (Brooks and Doswell, 1996).
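As a concrete illustration of the measures-oriented scores and of the contingency table itself, the following minimal sketch (with hypothetical counts) computes several standard categorical scores from the four cells of a 2x2 table, together with a simple bootstrap confidence interval of the kind recommended next.

    import numpy as np

    def categorical_scores(hits, misses, false_alarms, correct_negatives):
        """Standard scores from a 2x2 contingency table of yes/no forecasts."""
        h, m, f, c = (float(x) for x in (hits, misses, false_alarms, correct_negatives))
        n = h + m + f + c
        expected = ((h + m) * (h + f) + (c + m) * (c + f)) / n  # chance agreement
        return {"POD":  h / (h + m),           # probability of detection
                "FAR":  f / (h + f),           # false alarm ratio
                "POFD": f / (f + c),           # probability of false detection
                "CSI":  h / (h + m + f),       # critical success index (threat score)
                "bias": (h + f) / (h + m),     # frequency bias
                "HSS":  (h + c - expected) / (n - expected)}  # Heidke skill score

    def bootstrap_ci(table, score="CSI", n_boot=1000, alpha=0.05, seed=0):
        """Percentile bootstrap interval, resampling the table's cell counts."""
        rng = np.random.default_rng(seed)
        n = int(sum(table))
        p = np.asarray(table, dtype=float) / n
        vals = [categorical_scores(*rng.multinomial(n, p))[score]
                for _ in range(n_boot)]
        return np.quantile(vals, [alpha / 2.0, 1.0 - alpha / 2.0])

    table = (25, 15, 30, 930)   # hypothetical hits, misses, false alarms, correct negatives
    print(categorical_scores(*table))
    print("95% CI for CSI:", bootstrap_ci(table))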

The reporting of confidence intervals on all verification results is highly recommended to quantify the uncertainty in the results. This is especially important when the events being predicted are relatively rare or when intercomparing the results of multiple forecast systems. A simple way to generate confidence intervals for samples that may or may not be independent is the bootstrap, or resampling, method (Kane and Brown, 2000).

5. EXACT MATCH VS "CLOSE ENOUGH"

In the case of high spatial and temporal resolution radar-based nowcasts and mesoscale models verified against radar observations, it is much more difficult to achieve a good match between forecasts and observations than at the coarser resolutions used by operational NWP. This, combined with the inherent unpredictability of high impact weather and the limited usefulness of verification feedback when events occur only rarely, means that grid-to-grid verification statistics often have very poor values even for forecasts that look qualitatively reasonable and are useful to forecasters.

This raises the question: When is it necessary for a forecast to be exactly correct in terms of location and timing in order to be useful, and when is it sufficient to be "close enough"? Forecasts for high stakes situations like Space Shuttle launches really do need to be correct in order to be considered successful. Flash flood predictions are useless if the heavy rain is forecast over the wrong catchment. Similarly, tornado forecasts must cover the affected areas so that appropriate actions can be taken. In these high stakes situations the standard deterministic verification scores such as CSI give appropriate summary information about the success of the forecasts.

Uncertainties in the observations may make an exact match difficult to achieve even for a perfect forecast. Many verification procedures apply space and time tolerances to relax the requirement for an exact match. A tolerance effectively defines what is considered "close enough" for that application. For example, Megenhardt et al. (2004) extend the forecast and observed boundaries of convective areas in the National Convective Weather Forecast v.2 (NCWF-2) by 10 km. Verification of NSSL's Warning Decision Support System (WDSS) hail detections and forecasts allows a 20 minute time window for matching with surface observations (Witt, 2000).

Other meanings of "close enough" are used in practice. When forecasters interpret high resolution output from nowcasts or models they understand that a perfect prediction is impossible, and they are usually content if the weather is predicted in about the right place and time and has about the right intensity. It is also useful to know whether model output resembles observed weather in terms of its spatial, temporal, and intensity distributions, to assess whether the model is correctly representing physical processes. New verification methods are addressing ways to evaluate "close enough" forecasts (see Section 8).

6. VALUE

A forecast has value if it helps a user make a better decision than would otherwise be possible. Baldwin and Kain (2004) investigated the behavior of accuracy and value scores for forecasts with displacement errors (predicted thunderstorm position, for example). When the losses associated with a missed event are high, it is advantageous to err on the side of caution and issue forecasts that overpredict the coverage of the event. This helps explain why the false alarm ratio continues to be high for severe weather warnings.
For small or infrequent events, as the displacement error grows it is necessary to increase the bias in order to maximize the accuracy and value according to most metrics. On the other hand, when the cost of taking preventative action based on a forecast is relatively high and displacement errors are likely, the value is maximized by underforecasting the frequency of the event. In both cases the accuracy and value do not change at the same rate, indicating that the most valuable forecast is not necessarily the most accurate one! The adjustment of the forecaster's true belief in order to maximize an accuracy or value score is known as hedging. A much better way to increase the value of a forecast is to present it in probabilistic terms and let the user base his/her decision on whatever probability threshold optimizes the value of the forecasts over many realizations. For the simple cost/loss problem (a given cost C to take mitigating action based on a forecast, and a given loss L suffered if no action was taken but the event occurred) it is possible to compute the relative value as a function of C/L (e.g., Thornes and Stephenson, 2001).

7. SKILL

A forecast has skill if it is more accurate than a reference forecast, usually an unskilled one such as persistence, climatology, or random chance. Skill measures the relative "smarts" of the forecast system and puts its accuracy into perspective. The general formulation of a skill score for any accuracy measure A is

    skill = (A_forecast - A_reference) / (A_perfect - A_reference)    (1)

For assessing the skill of nowcasts the most appropriate reference forecasts are persistence and extrapolation (also known as Lagrangian persistence).
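Expressed as code, Eq. (1) is a one-liner; the sketch below applies it to hypothetical RMSE values for a nowcast and a Lagrangian persistence reference (for RMSE the perfect value is 0).

    def skill_score(a_forecast, a_reference, a_perfect):
        """Generic skill score of Eq. (1): the fraction of the possible
        improvement over the reference that the forecast achieves."""
        return (a_forecast - a_reference) / (a_perfect - a_reference)

    # Hypothetical rain-rate RMSEs (mm/h): nowcast 1.8 vs extrapolation 2.4
    print(skill_score(1.8, 2.4, 0.0))   # 0.25, i.e. 25% of the possible improvement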

Extrapolation is usually quite a good forecast in the short range (in fact, for many nowcast systems it is the forecast) and is difficult to beat (Ebert et al., 2004).

It is also a good idea, when possible, to estimate an upper bound on the potential forecast performance. This can be done by treating the observations themselves as a competing source of forecast information, upscaling as necessary to map them to the resolution of the forecast product. This is the basis for the practically perfect hindcast approach of Kay and Brooks (2000), in which irregularly distributed observations are analyzed onto a regular grid to suggest what the perfect forecast would have looked like had the forecaster been given prior knowledge of the observations. Verification of the grid-interpolated values against the original point observations gives an estimate of the representativeness error in the observations, and provides a maximum-quality estimate against which the actual forecast performance can be compared. If the practically perfect hindcast achieves a CSI of 0.4, then a value of 0.3 for the forecast looks quite good by comparison.

Forecasters may add skill over and above that provided by the objective guidance. The objective guidance on which the forecast is based should be verified along with the official forecast to see where and whether the forecasters add skill. This assessment is particularly important for nowcasts because the forecaster often has little time to ponder the forecast in extreme weather situations. Mueller et al. (2002) found that forecasters improved NCAR Convective Storm Nowcast System forecasts by inserting more accurate boundaries in the initial conditions. On the other hand, no net skill was added by forecasters modifying storm cell tracks in the Sydney region (Ebert et al., 2005). A lack of forecaster skill may reflect the forecaster's limited ability to understand the meteorology of the situation, whether due to lack of knowledge or the inherent difficulty of the severe weather situation, or it may simply reflect the high quality of the objective guidance provided.

8. VERIFYING DETERMINISTIC FORECASTS

8.1 Match-up methods

Early in the forecast period most nowcast systems try to explicitly predict the motion and evolution of the weather, so standard verification scores for deterministic forecasts are applicable. These are based on direct matches between the forecasts and observations in space and time. It may be necessary to first interpolate the forecasts to the locations and times of site-based observations, analyze the observations to a grid, or otherwise convert from one resolution to another to obtain valid match-ups. When remapping to a common grid it is important to be aware that the choice of grid scale influences the accuracy of the verification, with a coarser grid generally giving more favorable results (Weygandt et al., 2004).

"Standard" categorical scores for summarizing the performance of dichotomous (yes/no) forecasts include the frequency bias, probability of detection (POD), false alarm ratio (FAR), critical success index (CSI, also known as the threat score), Heidke skill score (HSS), true skill score (TSS, also known as the Hanssen and Kuipers score), equitable threat score (ETS), and percent correct (PC). The fact that severe weather events are relatively rare leads to some issues in interpreting these verification scores. The first serious attempt to verify weather forecasts was made by J.P. Finley in 1884, who computed the percent of correct tornado and no-tornado forecasts made during an experimental tornado forecasting program.
He documented very high accuracies, but his readers were quick to point out that even greater accuracy would have been achieved had he predicted no-tornado at all times. This initiated a flurry of activity to develop summary scores that could not be "played", or that took into account the correct forecasts attributable to random chance (see Murphy, 1996, for a comprehensive discussion of the "Finley affair").

Doswell et al. (1990) noted that the CSI takes no account of correct forecasts of non-events in potentially hazardous weather situations, allowing this score to be optimized by over-forecasting. They recommended that the HSS be used instead for verifying rare events. Stephenson (2004) described the behavior of several commonly used summary scores as the climatological probability of the event tends to 0:

    POD → 0, FAR → 0, CSI → 0, HSS → 0, TSS → 0, ETS → 0, PC → 1

This behavior makes it difficult to get good assessments of forecast accuracy for rare events, which are often the most interesting ones. Two other scores examined, the odds ratio (OR) and the extreme dependency score (EDS), did not show the same sensitivity to the climatological probability, and so might be good candidate scores for assessing rare event forecasts. Initial investigations using the EDS to verify tornado forecasts are promising (Harold Brooks, personal communication).

Even when events are not especially rare, summary scores such as the CSI, HSS, TSS, and ETS have some potentially undesirable properties when used to verify high resolution forecasts. In the example shown in Fig. 1, all of the forecasts except (e) would score a CSI of 0, although (e) is not likely to be judged subjectively as the best forecast.

For forecasts of continuous variables like rain intensity and wind speed, standard metrics include the mean error (bias), mean absolute error (MAE), root mean square error (RMSE), and correlation coefficient.
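These continuous measures are straightforward to compute from matched forecast/observation pairs; a minimal sketch with hypothetical values:

    import numpy as np

    def continuous_scores(fcst, obs):
        """Bias, MAE, RMSE and correlation for matched forecast/obs pairs."""
        fcst, obs = np.asarray(fcst, float), np.asarray(obs, float)
        err = fcst - obs
        return {"bias": err.mean(),
                "MAE":  np.abs(err).mean(),
                "RMSE": np.sqrt((err ** 2).mean()),
                "corr": np.corrcoef(fcst, obs)[0, 1]}

    # Hypothetical matched rain rates (mm/h) at four verifying sites
    print(continuous_scores([0.0, 2.5, 10.0, 4.0], [0.5, 2.0, 14.0, 3.0]))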

[Figure 1. Hypothetical forecast (F) and observation (O) combinations, panels (a)-(e) (after Brown et al., 2004).]

For highly detailed forecasts and observations it is common to obtain poor values of these metrics if there are small offsets in space (as in Fig. 1a) or time. This is known as the "double penalty". It is exacerbated by the higher intensities that are usually present in high resolution products.

Forecasts of identifiable features such as convective regions or fronts often have location and timing errors, although these may be difficult to separate since they are strongly related. Some new methods combine location and timing errors in the verification of gridded forecasts. The acuity-fidelity approach of Marshall et al. (2004) searches for the best match between the forecasts and observations in space and time simultaneously prior to computing equivalent distance errors. The contour error map (CEM) approach developed by Case et al. (2004) to evaluate sea breeze forecasts looks for timing errors in wind transitions at gridpoints and plots these as a contour map.

8.2 Object-based methods

Most meteorologists would subjectively consider forecast (a) in Fig. 1 to be the best forecast because it has a similar size, shape, and orientation as the observations and only a small location error. As was seen earlier, however, the standard scores give it no more credit than forecasts that are clearly worse. Object-based methods attempt to capture this intuition, giving credit to better forecasts by assessing them in a more holistic way: identifying objects in the forecasts and observations, measuring their location error, and comparing properties such as size, shape, orientation, and mean intensity. For nowcast algorithms using cell tracking this can easily be done explicitly, but for gridded forecasts and observations pattern matching algorithms must be used. The contiguous rain area (CRA) method of Ebert and McBride (2000) measures errors in the location and properties of forecast rain areas, and also decomposes the total error into contributions from location, volume, and pattern error. The object-based method of Brown et al. (2004) uses sophisticated merging and matching techniques to associate objects in convection or precipitation fields prior to computing their properties. It is particularly well suited to high resolution forecasts. Object identification using cluster analysis is also being investigated (Marzban and Sandgathe, 2005).

8.3 Fuzzy and scale dependent methods

Another approach that gives credit to forecasts that are "close" to the observations is fuzzy verification. This philosophy allows forecasts to be partly correct and partly incorrect at the same time. By varying the thresholds or tolerances for "closeness" it is possible to evaluate forecast skill as a function of those thresholds. Damrath (2004) varied the spatial window over which forecast precipitation events were matched with observed events and found that the equitable threat score peaked when nearby grid boxes were also considered as potential verifying observations. Several authors use verification methods normally applied to probabilistic forecasts (see Section 9) to evaluate high resolution deterministic forecasts by essentially treating all forecasts and/or observations within a given space / time / intensity window as equally likely (Atger, 2001; Roberts, 2004; Germann and Zawadzki, 2004).
This blurs the distinction between a deterministic forecast, which is what the nowcast algorithm or high resolution model generates, and the probabilistic manner in which forecasters implicitly view the output. Indeed, Megenhardt et al. (2004) and Germann and Zawadzki (2004) generate probabilistic nowcasts in just this way.

Scale dependent verification methods examine the correspondence of the forecasts and observations as a function of spatial and/or temporal scale. The intensity-scale method of Casati et al. (2004) uses wavelet decomposition to separate the forecast and observed fields into components at various scales, then examines the correspondence between forecasts and observations at each scale. In the method of Harris et al. (2001) the multiscale statistical properties of the forecast and observed fields (power spectrum, structure function, fractal dimension) are compared to evaluate the resemblance between forecasts and observations, regardless of any direct correspondence between the two fields.
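The following sketch illustrates the neighborhood flavor of fuzzy verification, in the spirit of the fractions approach of Roberts (2004): both fields are converted to event fractions within a square window, and the fractions are compared with a Brier-type score, so a displaced but otherwise good forecast gains credit as the window widens. The skill formulation and all numbers here are illustrative assumptions, not a prescription from any of the cited methods.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def fuzzy_skill(fcst, obs, threshold, window):
        """Compare event fractions in window x window neighborhoods.

        Returns 1 - MSE(fractions) / MSE(worst case): 1 is perfect, and
        larger windows are more forgiving of small displacement errors.
        """
        f = uniform_filter((np.asarray(fcst) >= threshold).astype(float), size=window)
        o = uniform_filter((np.asarray(obs) >= threshold).astype(float), size=window)
        mse = ((f - o) ** 2).mean()
        worst = (f ** 2).mean() + (o ** 2).mean()   # forecast and obs never overlap
        return 1.0 - mse / worst

    # Toy 50x50 rain fields: the same storm, displaced 5 gridpoints in the forecast
    obs = np.zeros((50, 50)); obs[20:30, 20:30] = 10.0
    fcst = np.roll(obs, 5, axis=1)
    for w in (1, 5, 11, 21):
        print(w, round(fuzzy_skill(fcst, obs, threshold=1.0, window=w), 2))

Run on this toy case, the skill rises steadily with window size, quantifying the sense in which the displaced forecast is "close enough" at larger scales.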

9. VERIFYING PROBABILISTIC NOWCASTS

Because it is not possible to verify a probabilistic forecast of a single quantity (for example, "the probability of rain exceeding 20 mm h^-1 over Denver airport during the next 60 minutes is 30%"), a large number of samples is required to assess forecast quality. In the case of high resolution gridded nowcasts and very short range forecasts it is possible to obtain a large number of samples from a single spatial forecast, although the samples are not independent and the effective sample size is smaller.

Two widely used verification methods for probabilistic forecasts are the reliability diagram and the relative operating characteristic (ROC). These measure reliability (whether the forecasts are unbiased) and resolution (whether the forecast can separate events from non-events), respectively. The relative value plotted against C/L is also used to verify probabilistic forecasts. Summary measures include the Brier score (BS), ranked probability score (RPS), and the area under the ROC curve. Since the BS and RPS measure errors in probability space, they cannot achieve perfect values of 0 unless the forecast is a perfect deterministic forecast (which is not the point of making a probabilistic forecast!). To put them in relative terms they are often expressed as skill scores with respect to climatology. For this it is necessary to obtain estimates of the climatological distribution of the event, which is a difficult task for rare events. Other potential reference forecasts for the skill score are persistence, extrapolation, and the deterministic forecast.

Brown et al. (2004) showed that, as with deterministic forecast verification, the probabilistic verification results are very sensitive to displacement errors in the forecast. They suggested that high resolution probabilistic forecasts might also benefit from an object-based verification approach.
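A minimal sketch of the Brier score and its skill score with respect to climatology, using hypothetical probability nowcasts and outcomes:

    import numpy as np

    def brier_score(prob, occurred):
        """Brier score: mean squared error in probability space (0 is perfect)."""
        prob, occurred = np.asarray(prob, float), np.asarray(occurred, float)
        return ((prob - occurred) ** 2).mean()

    def brier_skill_score(prob, occurred, climatology):
        """Skill relative to always forecasting the climatological probability."""
        bs_ref = brier_score(np.full(len(prob), climatology), occurred)
        return 1.0 - brier_score(prob, occurred) / bs_ref

    # Hypothetical 60-minute rain probability nowcasts and outcomes (1 = occurred)
    p   = [0.9, 0.7, 0.3, 0.1, 0.2, 0.8]
    obs = [1,   1,   0,   0,   1,   1]
    print(brier_score(p, obs))                          # about 0.15
    print(brier_skill_score(p, obs, climatology=0.3))   # about 0.59: beats climatology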
10. SUBJECTIVE ASSESSMENT

Although objective verification can provide systematic assessments of forecast quality, unbiased by personal opinion, it should be supplemented with routine subjective verification. Maps, time series, and scatter plots are simple visual diagnostics for comparing forecasts with observations quickly and easily. Visual displays may also reveal characteristics of forecast performance that are hidden by the verification statistics, and so are an excellent tool for increasing understanding.

11. REFERENCES

Atger, F., 2001: Verification of intense precipitation forecasts from single models and ensemble prediction systems. Nonlin. Proc. Geophys., 8.

Baldwin, M.E. and J.S. Kain, 2004: Examining the sensitivity of various performance measures. 17th Conf. Probability and Statistics, Amer. Met. Soc., January 2004, Seattle.

Briggs, W., M. Pocernich and D. Rupert, 2005: Incorporating measurement error in skill assessment. Mon. Wea. Rev., in press.

Brooks, H.E. and C.A. Doswell III, 1996: A comparison of measures-oriented and distributions-oriented approaches to forecast verification. Wea. Forecasting, 11.

Brown, B.G., R. Bullock, C.A. Davis, J.H. Gotway, M.B. Chapman, A. Takacs, E. Gilleland, K. Manning and J. Mahoney, 2004: New verification approaches for convective weather forecasts. 11th Conf. Aviation, Range, and Aerospace Meteorology, Amer. Met. Soc., 4-8 Oct 2004, Hyannis, MA.

Casati, B., G. Ross and D.B. Stephenson, 2004: A new intensity-scale approach for the verification of spatial precipitation forecasts. Met. App., 11.

Case, J.L., J. Manobianco, J.E. Lane, C.D. Immer and F.J. Merceret, 2004: An objective technique for verifying sea breezes in high-resolution numerical weather prediction models. Wea. Forecasting, 19.

Damrath, U., 2004: Verification against precipitation observations of a high density network: what did we learn? Intl. Verification Methods Workshop, September 2004, Montreal, Canada.

Doswell, C.A. III, R. Davies-Jones and D.L. Keller, 1990: On summary measures of skill in rare event forecasting based on contingency tables. Wea. Forecasting, 5.

Ebert, E.E. and J.L. McBride, 2000: Verification of precipitation in weather systems: Determination of systematic errors. J. Hydrology, 239.

Ebert, E., L.J. Wilson, B.G. Brown, P. Nurmi, H.E. Brooks, J. Bally and M. Jaeneke, 2004: Verification of nowcasts from the WWRP Sydney 2000 Forecast Demonstration Project. Wea. Forecasting, 19.

Ebert, E., T. Keenan, J. Bally and S. Dance, 2005: Verification of operational thunderstorm nowcasts. WWRP Symposium on Nowcasting and Very Short Range Forecasting, Toulouse, France, 5-9 September 2005.

Germann, U. and I. Zawadzki, 2002: Scale dependence of the predictability of precipitation from continental radar images. Part I: Description of the methodology. Mon. Wea. Rev., 130.

Germann, U. and I. Zawadzki, 2004: Scale dependence of the predictability of precipitation from continental radar images. Part II: Probability forecasts. J. Appl. Meteorol., 43.

Harris, D., E. Foufoula-Georgiou, K.K. Droegemeier and J.J. Levit, 2001: Multiscale statistical properties of a high-resolution precipitation forecast. J. Hydromet., 2.

Jolliffe, I.T. and D.B. Stephenson, 2003: Forecast Verification. A Practitioner's Guide in Atmospheric Science. Wiley and Sons Ltd, 240 pp.

Kane, T.L. and B.G. Brown, 2000: Confidence intervals for some verification measures - a survey of several methods. 15th Conf. Probability and Statistics in Atmos. Sci., Amer. Met. Soc., 8-11 May 2000, Asheville, NC.

Kay, M.P. and H.E. Brooks, 2000: Verification of probabilistic severe storm forecasts at the SPC. 20th Conf. Severe Local Storms, Amer. Met. Soc., September 2000, Orlando, FL.

Mahoney, J.L., J.E. Hart and B.G. Brown, 2004: Defining observation fields for verification of spatial forecasts of convection. 17th Conf. Probability and Statistics, Amer. Met. Soc., January 2004, Seattle.

Marshall, S.F., P.J. Sousounis and T.A. Hutchinson, 2004: Verifying mesoscale model precipitation forecasts using an acuity-fidelity approach. 17th Conf. Probability and Statistics, Amer. Met. Soc., January 2004, Seattle.

Marzban, C. and S. Sandgathe, 2005: Cluster analysis for verification of precipitation fields. Wea. Forecasting, submitted.

Megenhardt, D.L., C. Mueller, S. Trier, D. Ahijevych and N. Rehak, 2004: NCWF-2 probabilistic forecasts. 11th Conf. Aviation, Range, and Aerospace, Amer. Met. Soc., 4-8 October 2004, Hyannis, MA.

Mueller, C.K., T. Saxen, R. Roberts and J. Wilson, 2002: Update on the NCAR Thunderstorm Nowcast system. 10th Conf. Aviation, Range, and Aerospace Meteorology, Amer. Met. Soc., May 2002, Portland, OR.

Murphy, A.H., 1996: The Finley affair: A signal event in the history of forecast verification. Wea. Forecasting, 11.

Roberts, N., 2004: Verification of the fit of rainfall analyses and forecasts to radar. Met Office Forecasting Research Tech. Rept. No. 442, 45 pp.

Seed, A., N. Bowler and C. Pierce, 2005: Stochastic Ensemble Prediction System. Heuristic Probabilistic Forecasting Workshop, May 2005, McGill University, Montreal.

Sinclair, S. and G. Pegram, 2004: Combining radar and rain gauge rainfall estimates for flood forecasting in South Africa. 6th Intl. Symposium on Hydrol. Applications of Weather Radar, 2-4 February 2004, Melbourne, Australia.

Smith, P., 1999: Effects of imperfect storm reporting on the verification of weather warnings. Bull. Amer. Met. Soc., 80.

Stephenson, D.B., 2004: Verification of rare extreme events. Intl. Verification Methods Workshop, September 2004, Montreal, Canada.

Stumpf, G.J., W.D. Zittel, A. Witt, T.M. Smith, V.K. McCoy and S. Dulin, 2004: Using a reflectivity scale filter to improve performance of the Storm Cell Identification and Tracking (SCIT) algorithm. 20th Intl. Conf. Interactive Information and Processing Systems (IIPS) for Meteorology, Oceanography, and Hydrology, Amer. Met. Soc., January 2004, Seattle.

Thornes, J.E. and D.B. Stephenson, 2001: How to judge the quality and value of weather forecast products. Meteorol. Appl., 8.
Tustison, B., D. Harris and E. Foufoula-Georgiou, 2001: Scale issues in verification of precipitation forecasts. J. Geophys. Res., 106 (D11), 11,775-11,784.

Tustison, B., E. Foufoula-Georgiou and D. Harris, 2003: Scale-recursive estimation for multisensor Quantitative Precipitation Forecast verification: A preliminary assessment. J. Geophys. Res., 108 (D8).

Weygandt, S.S., A.F. Loughe, S.G. Benjamin and J.L. Mahoney, 2004: Scale sensitivities in model precipitation skill scores during IHOP. 22nd Conf. Severe Local Storms, Amer. Met. Soc., 4-8 October 2004, Hyannis, MA.

Wilks, D.S., 1995: Statistical Methods in the Atmospheric Sciences. An Introduction. Academic Press, San Diego, 467 pp.

Witt, A., 2000: The WSR-88D hail detection algorithm: A performance update. 20th Conf. Severe Local Storms, Amer. Met. Soc., September 2000, Orlando, FL.


More information

Progress in Operational Quantitative Precipitation Estimation in the Czech Republic

Progress in Operational Quantitative Precipitation Estimation in the Czech Republic Progress in Operational Quantitative Precipitation Estimation in the Czech Republic Petr Novák 1 and Hana Kyznarová 1 1 Czech Hydrometeorological Institute,Na Sabatce 17, 143 06 Praha, Czech Republic (Dated:

More information

The development of a Kriging based Gauge and Radar merged product for real-time rainfall accumulation estimation

The development of a Kriging based Gauge and Radar merged product for real-time rainfall accumulation estimation The development of a Kriging based Gauge and Radar merged product for real-time rainfall accumulation estimation Sharon Jewell and Katie Norman Met Office, FitzRoy Road, Exeter, UK (Dated: 16th July 2014)

More information

Radius of reliability: A distance metric for interpreting and verifying spatial probability forecasts

Radius of reliability: A distance metric for interpreting and verifying spatial probability forecasts Radius of reliability: A distance metric for interpreting and verifying spatial probability forecasts Beth Ebert CAWCR, Bureau of Meteorology Melbourne, Australia Introduction Wish to warn for high impact

More information

A COMPREHENSIVE 5-YEAR SEVERE STORM ENVIRONMENT CLIMATOLOGY FOR THE CONTINENTAL UNITED STATES 3. RESULTS

A COMPREHENSIVE 5-YEAR SEVERE STORM ENVIRONMENT CLIMATOLOGY FOR THE CONTINENTAL UNITED STATES 3. RESULTS 16A.4 A COMPREHENSIVE 5-YEAR SEVERE STORM ENVIRONMENT CLIMATOLOGY FOR THE CONTINENTAL UNITED STATES Russell S. Schneider 1 and Andrew R. Dean 1,2 1 DOC/NOAA/NWS/NCEP Storm Prediction Center 2 OU-NOAA Cooperative

More information

Probabilistic Quantitative Precipitation Forecasts for Tropical Cyclone Rainfall

Probabilistic Quantitative Precipitation Forecasts for Tropical Cyclone Rainfall Probabilistic Quantitative Precipitation Forecasts for Tropical Cyclone Rainfall WOO WANG CHUN HONG KONG OBSERVATORY IWTCLP-III, JEJU 10, DEC 2014 Scales of Atmospheric Systems Advection-Based Nowcasting

More information

WWRP Recommendations for the Verification and Intercomparison of QPFs and PQPFs from Operational NWP Models. Revision 2 October 2008

WWRP Recommendations for the Verification and Intercomparison of QPFs and PQPFs from Operational NWP Models. Revision 2 October 2008 WWRP 009-1 For more information, please contact: World Meteorological Organization Research Department Atmospheric Research and Environment Branch 7 bis, avenue de la Paix P.O. Box 300 CH 111 Geneva Switzerland

More information

Monthly probabilistic drought forecasting using the ECMWF Ensemble system

Monthly probabilistic drought forecasting using the ECMWF Ensemble system Monthly probabilistic drought forecasting using the ECMWF Ensemble system Christophe Lavaysse(1) J. Vogt(1), F. Pappenberger(2) and P. Barbosa(1) (1) European Commission (JRC-IES), Ispra Italy (2) ECMWF,

More information

Tornado and Severe Thunderstorm Warning Forecast Skill and its Relationship to Storm Type

Tornado and Severe Thunderstorm Warning Forecast Skill and its Relationship to Storm Type Tornado and Severe Thunderstorm Warning Forecast Skill and its Relationship to Storm Type Eric M. Guillot National Weather Center Research Experience for Undergraduates, University of Oklahoma, Norman,

More information

Measuring the quality of updating high resolution time-lagged ensemble probability forecasts using spatial verification techniques.

Measuring the quality of updating high resolution time-lagged ensemble probability forecasts using spatial verification techniques. Measuring the quality of updating high resolution time-lagged ensemble probability forecasts using spatial verification techniques. Tressa L. Fowler, Tara Jensen, John Halley Gotway, Randy Bullock 1. Introduction

More information

Application and verification of ECMWF products 2016

Application and verification of ECMWF products 2016 Application and verification of ECMWF products 2016 Icelandic Meteorological Office (www.vedur.is) Bolli Pálmason and Guðrún Nína Petersen 1. Summary of major highlights Medium range weather forecasts

More information

A technique for creating probabilistic spatio-temporal forecasts

A technique for creating probabilistic spatio-temporal forecasts 1 A technique for creating probabilistic spatio-temporal forecasts V Lakshmanan University of Oklahoma and National Severe Storms Laboratory lakshman@ou.edu Kiel Ortega Sch. of Meteorology University of

More information

Strategic Radar Enhancement Project (SREP) Forecast Demonstration Project (FDP) The future is here and now

Strategic Radar Enhancement Project (SREP) Forecast Demonstration Project (FDP) The future is here and now Strategic Radar Enhancement Project (SREP) Forecast Demonstration Project (FDP) The future is here and now Michael Berechree National Manager Aviation Weather Services Australian Bureau of Meteorology

More information

icast: A Severe Thunderstorm Forecasting, Nowcasting and Alerting Prototype Focused on Optimization of the Human-Machine Mix

icast: A Severe Thunderstorm Forecasting, Nowcasting and Alerting Prototype Focused on Optimization of the Human-Machine Mix icast: A Severe Thunderstorm Forecasting, Nowcasting and Alerting Prototype Focused on Optimization of the Human-Machine Mix 1Cloud Physics and Severe Weather Research Section, Toronto, ON 2Meteorological

More information

P2.12 An Examination of the Hail Detection Algorithm over Central Alabama

P2.12 An Examination of the Hail Detection Algorithm over Central Alabama P2.12 An Examination of the Hail Detection Algorithm over Central Alabama Kevin B. Laws *, Scott W. Unger and John Sirmon National Weather Service Forecast Office Birmingham, Alabama 1. Introduction With

More information

The Australian Operational Daily Rain Gauge Analysis

The Australian Operational Daily Rain Gauge Analysis The Australian Operational Daily Rain Gauge Analysis Beth Ebert and Gary Weymouth Bureau of Meteorology Research Centre, Melbourne, Australia e.ebert@bom.gov.au Daily rainfall data and analysis procedure

More information

Peter P. Neilley. And. Kurt A. Hanson. Weather Services International, Inc. 400 Minuteman Road Andover, MA 01810

Peter P. Neilley. And. Kurt A. Hanson. Weather Services International, Inc. 400 Minuteman Road Andover, MA 01810 6.4 ARE MODEL OUTPUT STATISTICS STILL NEEDED? Peter P. Neilley And Kurt A. Hanson Weather Services International, Inc. 400 Minuteman Road Andover, MA 01810 1. Introduction. Model Output Statistics (MOS)

More information

Tornado Probabilities Derived from Rapid Update Cycle Forecast Soundings

Tornado Probabilities Derived from Rapid Update Cycle Forecast Soundings Tornado Probabilities Derived from Rapid Update Cycle Forecast Soundings Zachary M. Byko National Weather Center Research Experiences for Undergraduates, and The Pennsylvania State University, University

More information

12.2 PROBABILISTIC GUIDANCE OF AVIATION HAZARDS FOR TRANSOCEANIC FLIGHTS

12.2 PROBABILISTIC GUIDANCE OF AVIATION HAZARDS FOR TRANSOCEANIC FLIGHTS 12.2 PROBABILISTIC GUIDANCE OF AVIATION HAZARDS FOR TRANSOCEANIC FLIGHTS K. A. Stone, M. Steiner, J. O. Pinto, C. P. Kalb, C. J. Kessinger NCAR, Boulder, CO M. Strahan Aviation Weather Center, Kansas City,

More information

Model verification / validation A distributions-oriented approach

Model verification / validation A distributions-oriented approach Model verification / validation A distributions-oriented approach Dr. Christian Ohlwein Hans-Ertel-Centre for Weather Research Meteorological Institute, University of Bonn, Germany Ringvorlesung: Quantitative

More information

Drought forecasting methods Blaz Kurnik DESERT Action JRC

Drought forecasting methods Blaz Kurnik DESERT Action JRC Ljubljana on 24 September 2009 1 st DMCSEE JRC Workshop on Drought Monitoring 1 Drought forecasting methods Blaz Kurnik DESERT Action JRC Motivations for drought forecasting Ljubljana on 24 September 2009

More information

The Thunderstorm Interactive Forecast System: Turning Automated Thunderstorm Tracks into Severe Weather Warnings

The Thunderstorm Interactive Forecast System: Turning Automated Thunderstorm Tracks into Severe Weather Warnings 64 WEATHER AND FORECASTING The Thunderstorm Interactive Forecast System: Turning Automated Thunderstorm Tracks into Severe Weather Warnings JOHN BALLY Bureau of Meteorology Research Center, Melbourne,

More information

FLORA: FLood estimation and forecast in complex Orographic areas for Risk mitigation in the Alpine space

FLORA: FLood estimation and forecast in complex Orographic areas for Risk mitigation in the Alpine space Natural Risk Management in a changing climate: Experiences in Adaptation Strategies from some European Projekts Milano - December 14 th, 2011 FLORA: FLood estimation and forecast in complex Orographic

More information

Verification of Continuous Forecasts

Verification of Continuous Forecasts Verification of Continuous Forecasts Presented by Barbara Brown Including contributions by Tressa Fowler, Barbara Casati, Laurence Wilson, and others Exploratory methods Scatter plots Discrimination plots

More information

WARNING DECISION SUPPORT SYSTEM INTEGRATED INFORMATION (WDSS-II). PART I: MULTIPLE-SENSOR SEVERE WEATHER APPLICATIONS DEVELOPMENT AT NSSL DURING 2002

WARNING DECISION SUPPORT SYSTEM INTEGRATED INFORMATION (WDSS-II). PART I: MULTIPLE-SENSOR SEVERE WEATHER APPLICATIONS DEVELOPMENT AT NSSL DURING 2002 14.8 WARNING DECISION SUPPORT SYSTEM INTEGRATED INFORMATION (WDSS-II). PART I: MULTIPLE-SENSOR SEVERE WEATHER APPLICATIONS DEVELOPMENT AT NSSL DURING 2002 Travis M. Smith 1,2, *, Gregory J. Stumpf 1,2,

More information

A new index for the verification of accuracy and timeliness of weather warnings

A new index for the verification of accuracy and timeliness of weather warnings METEOROLOGICAL APPLICATIONS Meteorol. Appl. 20: 206 216 (2013) Published online in Wiley Online Library (wileyonlinelibrary.com) DOI: 10.1002/met.1404 A new index for the verification of accuracy and timeliness

More information

FORECASTING: A REVIEW OF STATUS AND CHALLENGES. Eric Grimit and Kristin Larson 3TIER, Inc. Pacific Northwest Weather Workshop March 5-6, 2010

FORECASTING: A REVIEW OF STATUS AND CHALLENGES. Eric Grimit and Kristin Larson 3TIER, Inc. Pacific Northwest Weather Workshop March 5-6, 2010 SHORT-TERM TERM WIND POWER FORECASTING: A REVIEW OF STATUS AND CHALLENGES Eric Grimit and Kristin Larson 3TIER, Inc. Pacific Northwest Weather Workshop March 5-6, 2010 Integrating Renewable Energy» Variable

More information

Using Convection-Allowing Models to Produce Forecast Guidance For Severe Thunderstorm Hazards via a Surrogate-Severe Approach!

Using Convection-Allowing Models to Produce Forecast Guidance For Severe Thunderstorm Hazards via a Surrogate-Severe Approach! Using Convection-Allowing Models to Produce Forecast Guidance For Severe Thunderstorm Hazards via a Surrogate-Severe Approach! Ryan Sobash! University of Oklahoma, School of Meteorology, Norman, OK! J.

More information

Predictability from a Forecast Provider s Perspective

Predictability from a Forecast Provider s Perspective Predictability from a Forecast Provider s Perspective Ken Mylne Met Office, Bracknell RG12 2SZ, UK. email: ken.mylne@metoffice.com 1. Introduction Predictability is not a new issue for forecasters or forecast

More information