A new index for the verification of accuracy and timeliness of weather warnings

METEOROLOGICAL APPLICATIONS
Meteorol. Appl. 20: 206-216 (2013). Published online in Wiley Online Library (wileyonlinelibrary.com) DOI: 10.1002/met.1404

A new index for the verification of accuracy and timeliness of weather warnings

Laurence J. Wilson (a)* and Andrew Giles (b)
(a) Atmospheric Science and Technology Division, Environment Canada, Dorval, Quebec, Canada
(b) Meteorological Service of Canada, Edmonton, Alberta, Canada
* Correspondence: L. Wilson, Atmospheric Science and Technology Division, Environment Canada, Dorval H9P 1J3, Quebec, Canada. E-mail: Lawrence.wilson@ec.gc.ca

ABSTRACT: A new scoring index is proposed for the verification of the Canadian weather warning program. Called the weather warning index (WWI), the new measure is designed to be sensitive to two attributes of warnings, their timeliness and accuracy, and to summarize the verification information into a single value for a representative set of regions of Canada and for several weather elements for which warnings are issued. Given the opposing nature of the two attributes (long lead times are likely to be associated with lower accuracy), it was necessary to balance the score carefully to keep it proper in a general sense. The design decisions for the WWI are presented and discussed, and the score computation is illustrated with sensitivity experiments using 3 years of warning forecasts and observations.

KEY WORDS forecast verification methods; high impact weather; weather warning verification

Received 15 September 2012; Revised 15 March 2013; Accepted 15 March 2013

1. Introduction: the need for a new index

In Canada, as in many other countries, weather forecasters issue warnings of impending hazardous weather conditions. Warnings are text messages which specify the time period and exact area for which the warning is valid, and give details of the hazardous conditions expected. In Canada, warnings are the responsibility of the five large regional forecast offices; each office sets its own criteria to define thresholds which must be met or exceeded for the weather condition to be considered a hazard and worthy of a warning. While some of the warning thresholds are standard across the country, variations in climatology and in the expected impact on users are taken into account in setting the thresholds. For example, the threshold defining a heavy snowfall event is lower for Vancouver than for Montreal: winter snow storms are more common in Montreal, and citizens there are generally better prepared to cope with larger storms, which reduces their impact. The Canadian warning program is described in detail in Weather and Environmental Services (WES) (2009).

Warnings differ from routine public forecasts in one respect that is important for verification: the forecaster has complete discretion over if and when to issue the warning, so the timing of the warning becomes part of the forecaster's strategy. Given the negative relationship between predictability and forecast lead time for all meteorological variables, forecasts with longer lead times are expected to be less accurate than forecasts with shorter lead times. On the other hand, if the lead time is too short, the user of the forecast will not have time to take action to mitigate the effects of the expected hazardous weather. The forecaster's challenge is to issue the warning early enough to benefit his/her user community, but late enough to expect reasonable accuracy. Given this strong relationship between lead time and accuracy of warnings, it is desirable to include lead time information in their verification.
Most verification of weather warnings has concentrated on their accuracy as measured via contingency tables and associated scores. For example, the system used in the UK uses the equitable threat score (ETS) as the main measure of accuracy (Sharpe, 2010). The warning verification system in Austria uses the ETS, the probability of detection (POD), the false alarm ratio (FAR) and the frequency bias, computed from a 4 by 4 contingency table representing the three colour-coded thresholds for severe weather: yellow, orange and red (Wittmann, 2009). When lead time is considered, it is usually treated as a separate variable and averaged over those cases for which a warning was issued and an event occurred. For example, the NOAA public web page on tornadoes reports the average tornado warning lead time in the U.S. to be 13 min (Erickson and Brooks, 2006). As far as we know, our project represents the first attempt to combine accuracy and lead time measures into a single index.

The paper is organized as follows: Section 2 describes the desirable design characteristics of the index; the index is defined and discussed in Section 3 with reference to the data available for testing; Section 4 describes the results of sensitivity tests of the index; and Section 5 presents recommendations and suggested future work to improve both the index and the database used in its computation.

2. Desirable characteristics of the verification index

The main motivation for the development of a new weather warning index at this time is the desire of Environment Canada to report regularly to the Canadian public on the quality of its weather warnings and to track changes in their quality. It was made clear from the outset that the new index must:

1. summarize the quality of the forecasts into a single number representing the entire warning program, over several weather elements and over the whole country;
2. be sensitive to the attributes to be measured, in this case accuracy and lead time, and be insensitive to attributes not to be measured;
3. be proper in the general sense;
4. be sensitive to the utility of the forecast to users;
5. not be noisy on scales of the order of a year, so that trends can be identified;
6. have a range of 0-10;
7. be positively oriented, that is, better forecasts are represented by higher values.

Some of the attributes, especially the first one, are consistent with verification scores of the administrative type, as defined for example by Stanski et al. (1990), where the desire is to summarize the verification information into a single numerical value. This score is intended for the Canadian public, and it is intended to be used to track long term trends in the quality of weather warnings.

The attribute accuracy is defined as the level of agreement between forecasts and the corresponding observations (Murphy, 1993), and is commonly measured in forecast verification activities. The timeliness attribute is measured by the lead time, the time between the issuance of the warning and the onset of the severe weather. These two attributes, and only these two, are to be measured by the new index.

In verification science, properness is a desirable attribute of all verification scores and has a strict mathematical definition. A score is said to be proper if the best score can be obtained only when the issued forecast agrees with the forecaster's true belief about the future weather. In other words, the forecaster cannot obtain a better score by issuing a different forecast from what he/she believes to be the most likely future weather. Properness has been most clearly defined for probabilistic forecasts, and mathematical proofs of this attribute have been published for different scoring rules (Wilks, 2006). The general concept of properness applies especially to our warning index because of the requirement to summarize two compensating, or opposing, forecast attributes into one index. Properness implies that the index must be balanced, in the sense that the forecaster cannot improve his/her score by systematically issuing warnings too early or too late. It also means the index must be designed so that the forecaster cannot get a perfect score without a perfect forecast. For the weather warning index proposed here, it is unlikely that properness can be evaluated mathematically, mainly because of the averaging provisions of the index, but it is essential to adhere to the general concept and make it as hard as possible for a forecaster to play the score. Finally, it should be noted that the requirement for a proper score is intended not only to reward honesty in forecasting strategies; it is also a way of ensuring that the index is sensitive only to the attributes we want to measure, and not to other factors such as differences in the characteristics of the verification sample.

The fourth desirable characteristic listed above, sensitivity to the utility of the forecast to users, applies more to the timeliness attribute than to accuracy.
For our purposes it is sufficient to assume that there is a simple and positive relationship between accuracy and utility; that is to say, a more accurate forecast is more useful than a less accurate one. For timeliness, the relationship is not as simple. It is clear that a warning that is delivered with no lead time, or after the onset of the event, will be useless, and therefore deserves a score of zero. It is also true that there is some amount of lead time beyond which additional lead time is of negligible benefit. This means that the lead time component of the index should cause the score to go to zero for zero lead time, and that there should be some upper limit placed on the credit assigned to the lead time. (In fact, it could be argued that especially long lead times might be a disadvantage, because the user might forget that the severe event is coming or forget the details of the warning.) As part of the warning verification initiative, target lead times have been established for all warning types. These are considered to be sufficient for users to be able to take appropriate action to mitigate the effects of the severe weather before it arrives, and the proposed index is keyed to these targets. In accordance with the principle that there is a maximum lead time beyond which no further benefits accrue to users, application of the proposed index requires that a maximum lead time value be set for each weather element.

The remaining requirements listed above are more straightforward. The index has been designed to summarize the verification information into a single number over a variety of forecast types by means of a simple weighted average. Averaging is also applied over a 3 year period to meet the requirement for detection of trends in forecast quality. Most scoring rules operate in the range 0-1; simple scaling can be applied to change the range to 0-10. Finally, the proposed new index and its components are positively oriented.

3. Introduction of a new index: WWI

In this section, the new index is presented and issues of its computation are discussed. Since the issuance of weather warnings is triggered by the expectation that weather conditions will exceed specific pre-defined thresholds, the most appropriate way to verify the accuracy of warnings is by means of a contingency table and its associated scores. Therefore, similar to other centres, we have based the accuracy component of the score on the contingency table and associated scores.

3.1. Definition of the index

The new score, called the Weather Warning Index (WWI), is defined for the i-th variable as follows:

$$\mathrm{WWI}_i = \mathrm{AS}_i + \frac{\mathrm{LTR}_i - 1}{2\,(\mathrm{LTR}_{i\,\mathrm{max}} - 1)}\,(1 - \mathrm{AS}_i) \quad \text{if } 1 \le \mathrm{LTR}_i \le \mathrm{LTR}_{i\,\mathrm{max}} \tag{1}$$

$$\mathrm{WWI}_i = \mathrm{AS}_i\,\mathrm{LTR}_i \quad \text{if } \mathrm{LTR}_i < 1 \tag{2}$$

where LTR_i is the lead time ratio for the i-th variable,

$$\mathrm{LTR}_i = \frac{\mathrm{LT}_i}{\mathrm{TLT}_i} \tag{3}$$

LT_i is the average lead time over the verification sample for the i-th variable, and TLT_i is the target lead time for the i-th variable as defined in the severe weather forecast program. LTR_imax is the maximum lead time ratio allowed for the i-th variable, and is presently set to 2 for all variables; that is, no further credit is given for lead times which exceed twice the target lead time. The maximum lead time restriction is applied to each case before the average lead time is computed.
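To make the piecewise definition concrete, the following sketch implements Equations (1)-(3) for a single element. It is a minimal illustration rather than the operational code, and the function and argument names are ours; the factor of 2 in the additive branch reflects the constraint, discussed below, that lead time credit can close at most half the gap to a perfect score.

```python
def wwi_component(accuracy: float, avg_lead_time: float,
                  target_lead_time: float, max_ltr: float = 2.0) -> float:
    """Per-element WWI from Equations (1)-(3).

    accuracy         -- accuracy score AS_i (the EDI), nominally in [0, 1]
    avg_lead_time    -- average lead time LT_i over the verification sample
    target_lead_time -- target lead time TLT_i for the element
    max_ltr          -- maximum lead time ratio LTR_imax (currently 2)
    """
    ltr = avg_lead_time / target_lead_time            # Equation (3)
    ltr = min(ltr, max_ltr)                           # no credit beyond the cap
    if ltr < 1.0:
        return accuracy * ltr                         # Equation (2): multiplicative penalty
    # Equation (1): additive credit, at most half the distance to a perfect score
    return accuracy + (ltr - 1.0) / (2.0 * (max_ltr - 1.0)) * (1.0 - accuracy)
```

For example, with AS = 0.8 and a lead time 50% above target (LTR = 1.5), the credit is 0.25 x 0.2 = 0.05, giving a component score of 0.85; at exactly the target lead time the component equals the accuracy score.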

[Figure 1. The WWI as a function of the value of the accuracy score, for lead time ratios (LTR) from 0 to 2 in steps of 0.2. For a lead time ratio of 1, the WWI is equal to the accuracy score value. This figure is available in colour online at wileyonlinelibrary.com/journal/met]

AS is the accuracy score, which is the new Extremal Dependency Index (EDI) (Ferro and Stephenson, 2011), given by:

$$\mathrm{AS} = \mathrm{EDI} = \frac{\ln F - \ln H}{\ln F + \ln H} \tag{4}$$

where F is the false alarm rate and H is the hit rate (see the Appendix for definitions of contingency table measures). As constituted, the score is effectively bounded at zero, but is theoretically unbounded at the upper end. (If the hit rate should be smaller than the false alarm rate, then the EDI can become negative. This is very unlikely, because the false alarm rate, which involves a large number of correct negatives, is usually small for severe weather.)

Figure 1 shows the characteristics of the score as a function of the accuracy component for different values of the lead time ratio (LTR), assuming a maximum LTR of 2. With the average lead time ratio constrained to be no higher than LTR_max, the maximum lead time benefit achievable is constrained to half the difference between the accuracy score value and the maximum value of 1. This expresses the principle that the forecaster can improve his/her accuracy score by at most half of the distance to the perfect score by improving the lead time, which is consistent with the idea that one should not be able to obtain a perfect score without a perfect forecast.

The full aggregated score is obtained by a weighted average of the scores computed for each of the separate warning variables. All the land elements are weighted by the number of occurrences of the event and given a combined weight of 80% in the overall score; the marine warning component is assigned the other 20%:

$$\mathrm{WWI} = \frac{8}{N_{il}} \sum_{il=1}^{5} n_{il}\,\mathrm{WWI}_{il} + 2\,\mathrm{WWI}_m \tag{5}$$

where the subscript il refers to the (currently) five land types of severe weather and m refers to the marine wind component; n_il is the number of events of the il-th severe weather type, and N_il is the total number of land events in the verification sample. It should be noted that the maximum allowable lead time ratio can be altered individually for specific elements, and can take on any value greater than or equal to (say) 1.5, according to the lead time beyond which no additional warning value accrues to the user. Values less than 1.5 are possible, but will cause some instability in the index as the term involving the maximum lead time ratio approaches zero.

The score is conceived as an additive measure for lead times greater than the target value; that is, the benefits of lead times greater than the target value are rewarded by adding to the score value obtained from the AS alone. For average lead times below the target, the score is multiplicative and the AS is lowered. When the average lead time equals the target value, the score is equal to the AS.
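A sketch of the accuracy score and the aggregation step may help to fix ideas; it assumes Equations (4) and (5) as written above, and the names are ours rather than from any operational system.

```python
import math

def edi(hit_rate: float, false_alarm_rate: float) -> float:
    """Extremal Dependency Index, Equation (4)."""
    H, F = hit_rate, false_alarm_rate
    return (math.log(F) - math.log(H)) / (math.log(F) + math.log(H))

def aggregate_wwi(land_scores, land_counts, marine_score) -> float:
    """Equation (5): event-weighted average of the five land components
    (total weight 8) plus the marine component (weight 2), on a 0-10 scale."""
    n_total = sum(land_counts)
    land_avg = sum(n * s for n, s in zip(land_counts, land_scores)) / n_total
    return 8.0 * land_avg + 2.0 * marine_score
```

For instance, edi(0.6, 0.02) is about 0.77, and the EDI falls to zero as the hit rate approaches the false alarm rate, matching the behaviour described above.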
3.2. Selection of the accuracy score

At first, the critical success index (CSI, or threat score) was considered as the accuracy score. This is a general and widely used score for assessing forecast accuracy in the contingency table context, but it has the undesirable property that it tends towards zero for rare events (Stephenson et al., 2010). Events subject to warnings are nearly always low frequency events (thankfully), so the use of the CSI for the warning verification program is problematic. Four new scores have recently been proposed (Stephenson et al., 2008; Ferro and Stephenson, 2011) which are specifically designed for high impact, low base rate events. Of these, one was rejected immediately on the basis that it is not sensitive to false alarms. The other three are the Symmetric Extreme Dependency Score (SEDS), the EDI defined above, and the Symmetric Extremal Dependency Index (SEDI). The SEDS and SEDI are defined as follows:

$$\mathrm{SEDS} = \frac{\ln q - \ln H}{\ln p + \ln H}$$

$$\mathrm{SEDI} = \frac{\ln F - \ln H - \ln(1 - F) + \ln(1 - H)}{\ln F + \ln H + \ln(1 - F) + \ln(1 - H)}$$

In the above equations, p is the base rate, q is the forecast frequency, H is the hit rate, and F is the false alarm rate (see the Appendix for definitions of these terms).

Tests were carried out to evaluate the behaviour of the SEDS and the SEDI in comparison with both the EDI and the CSI. The results of these tests are shown below, but the selection of the EDI was made mainly on theoretical grounds. The SEDS uses the forecast frequency, which is sensitive to the forecaster's strategy for forecasting the event, and is therefore controlled by the forecaster: higher forecast frequencies (overforecasting the event) will result in lower SEDS scores unless accompanied by higher hit rates, which is a desirable trait from a forecasting point of view. The EDI, which uses the false alarm rate instead of the forecast frequency, adopts a more after-the-fact, or user, perspective on the assessment of the forecasts. Using the hit rate (the proportion of occurrences which are hits) and the false alarm rate (the proportion of non-occurrences which are false alarms), the EDI indicates how well the forecast has been able to discriminate situations leading to the occurrence of the event from situations preceding non-occurrences of the event. The EDI is positive when the hit rate exceeds the false alarm rate, goes to zero when the hit rate equals the false alarm rate, and would be negative if the false alarm rate exceeded the hit rate. A value at or near zero means that the forecast has not been able to distinguish situations preceding the occurrence of the event from those which do not precede severe weather; such a forecast is of no real guidance to a user who wishes to make decisions on whether or not to take preventative action. Given that the verification of the warnings is focused on the utility of the forecast to users, rather than on the evaluation of specific forecasting strategies, the EDI is more in line with the goals of the warning verification program.

The SEDI is a symmetric version of the EDI in the sense that replacing the hit rate and false alarm rate for the event with the hit rate and false alarm rate for the non-event, respectively, will give the same score, but negative. The SEDI is more complicated to compute, and proved to be less sensitive to changes in the accuracy of the forecast than the EDI, so it was rejected on those grounds. It was also felt that the symmetry property is not important in this context.
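For completeness, the two rejected alternatives can be written in the same style; this is a sketch of the definitions given above, useful only for side-by-side comparison with the EDI.

```python
import math

def seds(hit_rate: float, forecast_freq: float, base_rate: float) -> float:
    """Symmetric Extreme Dependency Score: (ln q - ln H) / (ln p + ln H)."""
    H, q, p = hit_rate, forecast_freq, base_rate
    return (math.log(q) - math.log(H)) / (math.log(p) + math.log(H))

def sedi(hit_rate: float, false_alarm_rate: float) -> float:
    """Symmetric Extremal Dependency Index."""
    H, F = hit_rate, false_alarm_rate
    num = math.log(F) - math.log(H) - math.log(1 - F) + math.log(1 - H)
    den = math.log(F) + math.log(H) + math.log(1 - F) + math.log(1 - H)
    return num / den
```

With H = 0.6 and F = 0.02, for example, sedi gives about 0.80 against roughly 0.77 for the EDI, illustrating the tendency of the SEDI to give the highest values of the candidate scores.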
3.3. Data collection for the warning verification program

The data for the national warning verification program are presently collected by the five regional forecast offices located in Halifax, Montreal, Toronto, Edmonton and Vancouver; data for the Arctic are collected at Edmonton. While warnings are issued for any conditions considered to be a hazard to Canadians, the verification program is restricted to six variables: cumulative rainfall, cumulative snowfall, high winds, severe thunderstorms, freezing rain, and marine gales. A summary of the thresholds used for these variables is shown in Table 1. As shown, the thresholds vary by region, taking into account the variations of climate and the corresponding variation in what is considered extreme or a hardship in different parts of the country. The thresholds are only summarized here; further details of the exact thresholds and the regions to which they apply are given in Weather and Environmental Services (WES) (2009). All of these weather types can and do occur anywhere in Canada, with the exception that severe thunderstorms are rare in Atlantic Canada and in the far north. Two distinctly local phenomena are included, Les Suetes and Wreckhouse winds; both are funnelled downslope winds occurring on the lee sides of local mountain ranges.

Verification is carried out for a subset of the fixed public weather forecast regions, again in order to keep the data management effort to a reasonable level. As shown in Figure 2, there are 50 regions for the synoptic variables rain, snow and wind. For convective weather, a somewhat different set of regions was chosen (Figure 3), bearing in mind the availability of radar coverage and the smaller scale of the phenomenon. Marine gales are verified for the 21 regions shown in Figure 4.

Three years of data, 2009-2011, have been compiled so far. The dataset includes information on issue times and end times of the warnings, along with start times of the event and, for events such as rain storms and snow storms which take place over longer periods of time, the criterion time, defined as the time the warning threshold is surpassed. For rain and snow events, the lead time is measured with respect to this criterion time and is equal to the length of time between the warning issue time and the criterion time. For all other variables, the lead time is measured with respect to the start time of the event, the earliest time at which conditions exceeding the severe weather threshold are reported.

The matching and interpretation of the warnings and observations is carried out by the regions; each event is reported as a hit, near hit, false alarm, miss or not verified. Near hits are recorded when a warning has been issued and observed conditions reach more than 80% of the severe weather threshold. It is incorrect verification practice to allow the threshold to depend on whether or not a warning was issued; therefore near hits were counted as false alarms for the tests that were carried out. Lead times are normally defined only for hits, since the calculation requires that there be both a warning issue time and a start time defined for the event. In the present implementation of the warning verification program, lead times have been set to zero for all missed events, so that the average lead times are computed over all observed occurrences of the event. It has been argued that setting lead times to zero for misses and including these in the average is a double penalty to the forecaster, since he/she is already penalized on accuracy by logging a missed event. This practice is nevertheless scientifically acceptable; however, strengthening the penalty for missed events in this way might encourage further overforecasting of the events in an effort to limit the number of misses.
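The lead time conventions just described (criterion time for rain and snow, start time otherwise; zero lead time for misses; and the per-case cap of twice the target applied before averaging, noted in Section 3.1) can be summarized in a short sketch. The data layout here is hypothetical and chosen only for illustration.

```python
def average_lead_time(cases, target_lead_time: float, max_ltr: float = 2.0) -> float:
    """Average lead time over all observed occurrences of an event.

    cases -- one entry per observed event: None for a miss, otherwise an
             (issue_time, onset_time) pair in hours, where onset_time is the
             criterion time for rain and snow and the start time otherwise.
    """
    cap = max_ltr * target_lead_time
    lead_times = []
    for case in cases:
        if case is None:
            lead_times.append(0.0)            # missed event: lead time set to zero
        else:
            issue, onset = case
            lt = max(onset - issue, 0.0)      # a warning at or after onset earns nothing
            lead_times.append(min(lt, cap))   # cap each case at twice the target
    return sum(lead_times) / len(lead_times)
```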
3.4. Computation of the WWI: estimation of correct negatives

The WWI requires a value for the correct negatives, box d of the contingency table. This quantity is often hard to estimate because of the difficulty of establishing bounds on the location and time of a non-occurrence of the event. When nothing is happening weather-wise, questions such as 'How often is nothing happening?' and 'When does the non-event start and end?' arise and are difficult to answer objectively. While establishing the number of occurrences of the non-event may be hard, the forecast of the non-event can be assumed to be the absence of a forecast of the event. It is still necessary to decide how often the non-event is forecast over a verification period.

The problem of estimating d in the weather warning context is discussed at length in Stephenson et al. (2010). In the following paragraphs, the reasoning behind the estimates of correct negatives for the weather elements included in the WWI is described.

Table 1. A summary of weather elements, thresholds and target lead times used in the weather warning verification program.

Weather element | Threshold | Target lead time
Synoptic rainfall | >= 50 mm/24 h; > 75 mm/48 h | 12 h
  Mountains of BC | >= 100 mm/24 h |
  Winter (all rgns) | >= 25 mm/24 h |
Synoptic snowfall | >= 15 cm/12 h | 18 h
  Prairies, Arctic and Interior BC | >= 10 cm/12 h |
  S coastal BC | >= 10 cm/12 h or >= 5 cm/6 h |
Wind | >= 70 km/h and/or gusts >= 90 km/h | 12 h
  Les Suetes wind | >= 70 km/h and/or gusts >= 90 km/h |
  Wreckhouse wind | >= 80 km/h and/or gusts >= 100 km/h |
  N. Coast BC | >= 90 km/h and/or gusts >= 110 km/h |
Freezing rain | any freezing rain reports with total duration over 2 h | 6 h
  Atlantic region | total duration > 4 h |
Severe thunderstorm | wind >= 90 km/h; hail >= 20 mm diameter; rain >= 50 mm/h | 30 min
Marine gales | > 34 kt sustained wind | 18 h

[Figure 2. Land regions for which verification data are collected. There are 50 regions. This figure is available in colour online at wileyonlinelibrary.com/journal/met]

The overall goal in defining N, the total sample size, is to count the number of characteristic periods and regions for which the forecaster needs to consider the possibility of severe weather occurrence, however slight the chance of its occurrence. This means it is necessary to adopt some form of spatial and temporal discretization of the entire verification period over all regions. Spatial discretization is effectively already decided by the selection of regions and the definition of the forecast variable as the occurrence of severe weather anywhere in the region. The discretization of the time period is harder. This involves the identification of a characteristic period, or time length scale, for each phenomenon forecast. Severe rain events, snowstorms, and non-convective windstorms are driven by synoptic storms, which are deemed to have timescales of 24 h for the purposes of the WWI. Convective storms, on the other hand, operate on much smaller time and space scales; a 3 h characteristic period was chosen to discretize these events.

For all events and forecasts, if the space scale is defined by the forecast region, then steps must be taken to ensure that there is no double-counting of forecasts or observations within the forecast region. That is, the sum of a, b, c, and d for each discrete characteristic period and each region must equal 1. For example, if a severe thunderstorm develops and a warning is not issued in time, then a missed event is recorded. If, however, a warning is issued for the same storm for downstream areas in the same verification region, then a hit may also be recorded if severe weather occurs in the valid areas defined by the warning. For verification purposes, then, both a miss and a hit may have been recorded for the same 3 h period and the same verification region, in which case each is given a weight of 0.5 for verification.

[Figure 3. Regions chosen for verification of convective weather. This figure is available in colour online at wileyonlinelibrary.com/journal/met]

[Figure 4. The 21 marine regions for which verification data are collected. This figure is available in colour online at wileyonlinelibrary.com/journal/met]

Now, the total sample size N = a + b + c + d could be computed as the total number of characteristic periods in a year times the number of regions. But that would be too easy; the forecaster would get credit for not forecasting snowstorms in summer, for example. And so the next step is to estimate the period of the year when each of the phenomena could possibly occur. For rain, snow and convection, the year was divided into three periods: June, July and August for convective severe weather; April, May, September and October for heavy rain events; and November to March for snowstorms. Of course these boundaries do not apply exactly to all regions; this is not of great importance as long as the period lengths are about right on average. It was also considered that some of the severe weather types never occur in some regions; these regions were left out of the computation of the total sample size, lowering the total number of land regions considered. For convection, a characteristic length of 3 h was selected, and it was assumed that there would be a possibility of occurrence of a severe convective event during 7 of the 8 3 h periods per day. Marine winds were treated similarly to land winds, that is, with a typical synoptic storm frequency of 1 per day for all the marine regions. Gales are assumed not to occur in June, July and August, except for convectively-driven windstorms, which are covered under convection. In summary, the total number of possible cases over the 3 year test period is:

Rain: N = 40 regions x 120 days x 1/day x 3 years = 14 400
Snow: N = 45 regions x 150 days x 1/day x 3 years = 20 250
Wind: N = 50 regions x 270 days x 1/day x 3 years = 40 500
Severe thunderstorm: N = 30 regions x 90 days x 7/day x 3 years = 56 700
Marine gales: N = 20 regions x 270 days x 1/day x 3 years = 16 200

One could also think of this from a utility perspective: when Joe Q. Public gets up on a bright sunny summer day, he does not know that a severe thunderstorm could develop later in the day. Thus, a forecast of 'no severe storms today' has some value to him, since a bright sunny morning that precedes a severe thunderstorm could be identical to a bright sunny morning that does not precede a severe storm.

Freezing rain is a bit of a special case. Freezing rain requires the setup of a particular atmospheric vertical structure, with an above-freezing layer over a sub-freezing surface layer; the occurrence of precipitation is also required. These restrictive conditions mean that there are many circumstances where the forecaster would not give any thought to the freezing rain problem. Considering Joe Q. Public again, if he wakes up in the morning and it is +20 °C outside, he does not need the forecaster to tell him there will not be any freezing rain today. Thus, for freezing rain, it was decided that there could typically be 30 occasions per year x 3 years x 45 regions = 4050 situations in 3 years where the forecaster might have to give some thought to freezing rain.

For all the other weather types, it is harder to separate the trivial non-events from the less trivial ones. It is necessary to recognize that there are various synoptic situations through the year, for every variable, where the non-occurrence of severe weather is completely obvious to forecasters and users alike; for example, there are many bright sunny days in the year, in all regions, where nothing at all is happening, let alone severe weather. To help account for those obvious forecast situations, the totals above have been divided by 2, to try to bring N closer to the value representative of all situations where severe weather might have to be considered, however briefly. The factor 2 is somewhat arbitrary. The resulting totals, 7200 for rain, 10 125 for snow, 20 250 for wind, 28 350 for convection, 8100 for marine gales, and 4050 for freezing rain, are probably on the generous side.
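The arithmetic above is simple enough to reproduce directly; the sketch below recomputes the halved totals (the freezing rain count of 4050 is constructed separately and is not halved). The names and layout are ours.

```python
# (regions, days per year, characteristic periods per day) for each element
ELEMENTS = {
    "rain":         (40, 120, 1),
    "snow":         (45, 150, 1),
    "wind":         (50, 270, 1),
    "convection":   (30,  90, 7),
    "marine_gales": (20, 270, 1),
}
YEARS = 3

for name, (regions, days, per_day) in ELEMENTS.items():
    raw = regions * days * per_day * YEARS
    print(name, raw // 2)   # halved to discount the obvious non-events
# rain 7200, snow 10125, wind 20250, convection 28350, marine_gales 8100
```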
3.5. Computation of the WWI: estimating false alarms for convection

False alarms must be included for all elements in the computation of the WWI. Especially in data sparse areas, however, it is difficult to distinguish between the non-occurrence of the event and the unreported occurrence of an event in a remote area. For this reason, false alarms were omitted entirely from the severe weather database compiled by the regions. Since a false alarm count is needed to keep the scoring proper, it was decided to use an assumed frequency bias, along with the reported totals of the hits (a) and the misses (c), to calculate the false alarms (b).

Severe weather events are often associated with large economic losses, and possibly loss of life. The greater the risk in general, the more important it becomes to avoid missed events; thus it is common to overforecast the occurrence of low frequency (low base rate) high impact events. Roulston and Smith (2004) offer an interesting cost-loss analysis of the 'boy who cried wolf' fable to illustrate this issue. As they also point out, forecast frequency biases that are too high might lead to a lack of response to the warning on the part of users. However, Barnes et al. (2007) indicate that there is high tolerance among the public for high frequency bias, especially for the most severe weather types. Biases of 3, 4 and 5 were tested for the WWI; a bias of 5 has been chosen for the severe convection component of the index in the absence of actual counts of false alarms.
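Given the assumed bias, the false alarm count follows from Equation (A.5) in the Appendix; a one-line sketch:

```python
def false_alarms_from_bias(hits: int, misses: int, bias: float) -> float:
    """Equation (A.5): b = (FB - 1) a + FB c, for an assumed frequency bias FB."""
    return (bias - 1) * hits + bias * misses
```

For example, with 100 hits, 50 misses and an assumed bias of 5, the estimated false alarm count is 4 x 100 + 5 x 50 = 650, which indeed gives FB = (100 + 650)/(100 + 50) = 5.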

3.6. Computation of the WWI on the 3 year dataset

Table 2 is a summary of the data for the 3 year period 2009-2011. The hits, false alarms and misses are as reported by the regions, with near hits counted as false alarms. The total sample size N for each element is determined as outlined above; the total number of correct negatives is then found by subtracting hits, false alarms and misses from N. For most variables, the base rate is in the 2-4% range, while for severe convection it is about 1%. Marine gales are not really a rare event; the base rate is about 29%. The average lead times were calculated assuming a lead time of zero for missed events, and values more than twice the target lead time were reset to twice the target value before computation of the average. This restriction, considered relevant to users of the forecasts, did not change the averages greatly compared with those computed without the imposition of the limit. Separate sets of contingency table values are listed in Table 2 for convective events, with assumptions of frequency biases of 3, 4 and 5; the number of false alarms in each case has been calculated from the assumed bias value and the given values of hits and misses, as described above.

[Table 2. Contingency table values, base rate and average lead time (maximum LTR = 2) for the six variables included in the warning verification index. Rows: rain (criterion time, target lead time 12 h); snow (criterion time, 18 h); freezing rain (6 h); wind (start time, 12 h); convection with assumed bias = 3, 4 and 5 (30 min); and marine gales (18 h). Columns: target lead time, total events, hits a, false alarms b, misses c, correct negatives d, total n, base rate, and average lead time.]

Figure 5 shows the characteristics of the EDI, SEDS, and SEDI in comparison with the CSI for each of the six weather elements. The first characteristic to note is that the four scores are all consistent; that is, they all vary from element to element in exactly the same way. Secondly, the chosen score, the EDI, gives the second highest values for all elements, while the SEDI gives the highest values. Higher values for a given dataset are not necessarily desirable, since they leave less room for improvement of the score in future, and the score will therefore be less sensitive to improvements of a given magnitude; this is one reason the EDI was chosen instead of the SEDI. Third, the tendency of the CSI towards zero for rare events is clearly shown here: for the element with the lowest base rate, convection, the CSI values are by far the lowest, while for the not-so-rare marine gales, the CSI is nearly as large as the other scores. Part of this variation is clearly due to variations in forecast accuracy, but part of it is also due to variations in base rate, which is an undesirable side effect of this score. Fourth, the variations in accuracy for the different assumed biases of severe thunderstorm forecasts are not very large, ranging from 0.76 for a bias of 3 down to 0.72 for a bias of 5. This should translate to an even smaller effect in the overall composite score. This could be good news, for it means that random errors in the estimation of bias for convection might not have a huge impact on the index value. Note that the emphasis here is on random errors, which are sometimes positive and sometimes negative. By contrast, systematic errors, such as the systematic (or deliberate) underestimation of bias, will have a significant impact and could obscure any real trends that may occur in forecast accuracy.

[Figure 5. Comparison of CSI, SEDS, EDI and SEDI values for the 3 year dataset for each of the six weather elements.]

Now, we add the timeliness factor. Figure 6 shows the WWI computed separately for each of the weather elements that make up the full index. Values are still in the range 0-1; scaling up to the 0-10 range is carried out only when the various elements are combined into one index value. In Figure 6, the SEDS and SEDI score options are once again included for comparison in the WWI formulation. The results are entirely consistent with the results above for the accuracy alone. A few relative differences show up, mainly due to poorer performance in timeliness for convection. Timeliness for rainstorms, wind and marine wind is quite close to the target values, so there is little change from the accuracy value. The average lead time for snowstorms is less than the target value, leading to a penalty in the score. As suggested above, the differences in the score values for the different assumed biases of convective severe weather are not large.

[Figure 6. The WWI, including both timeliness and accuracy components, computed on the 3 year sample for each separate element. The range is still 0-1.]
And now for the final result. Putting all the weather elements together in an average weighted by the number of events of each type, using 80% for the land variables combined and 20% for the marine component, using an assumed bias of 5 for severe convection, setting lead times of zero for missed events, limiting each lead time to a maximum of twice the target lead time, and referencing rain storm and snow storm events to the criterion time, gives the overall index value for the 3 year dataset, with all the agreed computation methods included, for the six elements that were chosen for inclusion.

4. Sensitivity tests of the WWI

The results reported so far apply to the WWI computed over 3 years of data received from the regional offices and corrected for errors. Further examination of the database suggested possible inconsistencies among regions in the reporting and interpretation of the observations and forecasts into hits, false alarms and missed events. Since the sensitivity of the score is of interest for the detection of trends, it is instructive to estimate the effects that corrections or changes to the reporting norms might have on the score. The sensitivity is examined under three headings: rationalization of convective events for Quebec and Ontario, impact of including local outflow winds, and an assessment of properness. The sensitivity tests used the first 2 years of data, 2009 and 2010; the overall score for this period serves as the baseline for the comparisons below.

4.1. Rationalization of convective events for Quebec and Ontario

Examination of the 2 year dataset revealed that the number of severe convective events in Quebec was reported as 143, while only 20 such events were identified in Ontario during the same period. Given that the two regions are about the same size and subject to approximately the same convective weather climatology, this suggests significant systematic differences in the threshold for severe convection, or in the counting strategy, or both. It was suspected that there was double-counting of events in Quebec region (QR); it was later confirmed that Quebec uses 57 mesoscale regions to assess convective storms, rather than the smaller number of synoptic regions identified in Figure 3. To try to assess the impact of this discrepancy, the scoring for the corrected data was rerun with the assumption of the same hit rate as in QR, but the same number of events as in Ontario Region (OR). This meant reducing the total number of events in QR from 143 to 20, of which 11 are designated as hits and the other 9 as misses. The false alarms were recomputed to maintain the assumed bias of 5. The average lead time was left unchanged, which amounts to assuming that the 123 omitted cases would be representative of the whole sample, with the same average lead time. This change caused the average accuracy for convection with a bias of 5 to go up from 0.67 to 0.77. When the timeliness is included, the scores are still reduced with respect to the other variables, due to the relatively poor timeliness of only 60% of the target value. The impact on the full score is certainly not trivial: the full weighted WWI over all variables rises by more than 0.3. Therefore, systematic inconsistencies in the counting of events for verification purposes can lead to significant differences in the overall score, and should be corrected before the scores are reported.

4.2. Impact of including local outflow winds

At the request of the forecast office responsible, two regions which are subject to specific local extreme effects are included in the national dataset. For a national summary verification program, this is not a good idea unless forecast performance for these effects is representative of the country as a whole. If it is not, then the national results are likely to be skewed and will be less representative of forecast accuracy and timeliness. Regions used in the verification should be determined on the basis of the availability of reliable and representative verification data, as well as on user impact considerations such as population density. The regions which are subject to Wreckhouse and Les Suetes winds are the two small areas in the SW corner of Newfoundland and the NW tip of Nova Scotia on Figure 2.
These regions would seem to have been chosen partly because the two phenomena are relatively well forecast, rather than just because good observational data are available. To check the impact of this choice, the 2 year dataset was rerun with only wind events included. Nearly one third of the reported wind events in the 2 year period are either Les Suetes or Wreckhouse events. The impact of considering winds only is, relatively speaking, quite large: both the accuracy score and the timeliness decrease. Accuracy falls to 0.848, still quite a high score, while the average lead time goes down by almost 2 h, to 9.35 h. The WWI for winds falls to 0.661, leading to a reduction in the overall index as well. As for the Ontario-Quebec systematic differences in convection, this is quite a large systematic difference, caused by including a relatively well-understood and relatively frequent local effect in the national system. It is clear that the results without these special wind events are more representative of national wind warning accuracy and timeliness. Once enough data have been collected, it will be possible to compute stable scores specifically for the regions of responsibility of the five forecast offices, at which time it will be appropriate to include high impact local phenomena.

4.3. An assessment of properness

As outlined above, for this score, properness is concerned with balancing the two opposing component attributes, accuracy and timeliness. A forecaster who wanted to try to play the score would need to know something about the impact of his/her decisions on the score. One way to get at this is to examine the tradeoff directly, to answer the question: how much lead time does it take to offset the cost of one missed event? In other words, as the expected start time of the event approaches, should the forecaster worry about whether it is better to issue a forecast that is likely to lower his/her average lead time if it is correct, rather than to not issue a forecast and risk a missed event?

The first point to make is that the forecaster cannot really know the impact of his/her decision in advance, for two reasons. First, the average lead time is not known in advance, so he/she cannot be sure whether a correct forecast would increase or decrease it. This is the advantage of using the average: it makes the index harder to play and therefore closer to proper. Second, the accuracy score is quite non-linear: the cost of an extra missed event will depend on how many cases there are in total, which the forecaster also does not know in advance. To drive this point home, calculating the derivative of the score with respect to the hit rate yielded a rather complicated expression involving not only the hit rate itself but also the false alarm rate (not shown). Thus it would seem to be difficult to play the score.

Using typical values from the 2 year database, Table 3 illustrates this investigation. A HR of 0.6 over 200 events was chosen, along with a representative FA value. Increasing the HR by 1 event in 200 is a half-percent increase, to 0.605, with a correspondingly small change in the EDI. To cause the same change in the WWI through the lead time alone requires an increase of 0.0025 in the lead time ratio, which, for a 12 h target lead time, corresponds to a 0.03 h increase. But that is a change in the average lead time, which means that, for a single event, the lead time must change by 0.03 times 200 events, or 6 h.
Thus, the penalty associated with an extra missed event is equivalent to a change in the lead time of 6 h, for lead times less than the target value; in general, it is half the TLT. Above the target value, the credit for additional lead time is smaller (the slope of the lines is smaller in Figure 1), and so changes in lead time of greater than 6 h would be needed to offset the credit of an additional hit. The equivalency is with respect to the lead time ratio: if the target lead time is 18 h, then a single forecast would have to have a timeliness of 9 h less than the average to be equivalent in cost to one more missed event (a decrease of one event in the HR). It can be concluded that the penalty for issuing a short lead time forecast is not large enough to dissuade the forecaster from doing so. Additionally, the choice not to issue the forecast may lead to a missed event, which is penalized with a lead time of 0, a further incentive to issue the forecast as soon as the forecaster judges the event to be likely enough to warrant it.

Considering the bottom half of Table 3, here the false alarm count is increased by 1, resulting in a proportional change in the EDI. Comparing with the percent difference due to a change in HR of one event suggests that the cost of a missed event is about 5.5 times the cost of a false alarm. This is an advantage of the non-linear score: the ratio is conveniently close to the assumed bias of 5, which means the score is consistent with a forecast strategy of five times as many false alarms as missed events, for the base rates and total sample sizes considered here. Note that a linear score which uses the HR and FA would result in a much larger ratio. For example, the Hanssen-Kuipers score, which is the difference between the two, would be consistent with a ratio of about 25: the cost of 25 false alarms is about the same as the cost of one miss. That seems too extreme.

The index as it is formulated and computed quite strongly favours an overforecasting strategy, which may well be appropriate for weather of sufficient severity to warrant a warning. First, there is the relative cost mentioned in the previous paragraph. Then, with missed events being penalized both in the accuracy score and in the timeliness component (by setting the lead time to zero), they are effectively subject to a double penalty. And third, there is no cost associated with the lead time factor for false alarms, since the lead time is undefined for them. There are no specific rules to follow in setting the relative penalties of misses and false alarms. In a strict mathematical sense, unit bias is often sought, but scores which penalize one type of error less than the other are not incorrect or improper, since there is always a penalty for both types of error: the index value can never be raised by incurring more false alarms, even if they individually do not cost very much. Finally, if it were desired to adjust the balance away from false alarms, the easiest way would be to eliminate the double penalty for misses. The results reported in this section suggest that this could be done without rendering the index improper.
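The miss-versus-false-alarm asymmetry described above can be checked numerically. The contingency table below is an assumption chosen only to match the quoted HR of 0.6 over 200 events and a base rate of about 3%; the actual sample values behind Table 3 are not reproduced here, so the printed ratio (roughly 8 with these numbers) differs from the 5.5 obtained for the paper's sample, while making the same qualitative point.

```python
import math

def edi_from_table(a: int, b: int, c: int, d: int) -> float:
    """EDI computed directly from contingency table counts."""
    H = a / (a + c)          # hit rate
    F = b / (b + d)          # false alarm rate
    return (math.log(F) - math.log(H)) / (math.log(F) + math.log(H))

# Assumed counts: HR = 120/200 = 0.6, base rate = 200/6667, about 3%
a, b, c, d = 120, 130, 80, 6337

base = edi_from_table(a, b, c, d)
gain_per_hit = edi_from_table(a + 1, b, c - 1, d) - base   # one miss becomes a hit
loss_per_fa = base - edi_from_table(a, b + 1, c, d - 1)    # one extra false alarm
print(round(gain_per_hit / loss_per_fa, 1))                # cost of a miss, in false alarms
```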
5. Discussion and conclusions

This study describes a new index developed for the purpose of communicating the quality of the weather warning program to Canadians. The index measures both accuracy and timeliness of the forecasts, and summarizes the results into a single value between 0 and 10 for several weather elements and for a representative selection of Canadian forecast regions. In order to facilitate the identification of trends, it is planned to compute the index annually and take a 3 year running mean of the annual values. The index has been computed over the full 3 years of data so far available.

Sensitivity tests of the index revealed that systematic discrepancies in the reporting of even one type of forecast from region to region can result in changes in score values which are large enough to hinder the detection of trends in performance. Thus it is important to trace and eliminate such systematic variations in the data compilation at the start of the program. Although a rigorous mathematical evaluation of properness is not possible, we conclude that the index in its current form is sufficiently close to proper, and that it would be extremely difficult to systematically play the score. This is mainly due to the dependence of the computation of the score on averaged quantities, which are unknown at the forecast times.

As a new score, the EDI has not yet been widely tested. However, the results that are available suggest that our EDI results obtained from the 3 year sample are on the high side compared with values obtained by others for extreme precipitation events (e.g. Nurmi, 2010). Of course the results are not directly comparable in a quantitative way, because of differences in climatology and other aspects of the verification samples, but they are all focused on extremes, because that is what these scores are designed to verify. It seems likely that the accuracy score is giving values which are too high because of biases in the methods of identifying and reporting hits, misses and false alarms. For example, there may be a tendency to check more carefully for the occurrence of the event when a warning has been issued than when it has not (Barnes et al., 2007), and the designation of some forecasts or events as unverifiable could lead to the systematic underreporting of misses and false alarms. Finally, the concept of near hit is invoked whenever the observation at the station is 80% or more of the threshold value. Whenever this is identified in the dataset, the near hits are counted as false alarms, to keep the threshold constant and independent of the issuance of a warning. However, if near hits are coded as actual hits, then this will lead to overreporting of hits and an overstated accuracy score. To ensure that trends can be estimated, it is important that biases in the reporting and compilation of the dataset be resolved at the start of the warning verification program.

The next steps for work with the WWI include further evaluation of year-to-year variations in the score, once enough data have been accumulated. Also, once enough data have been accumulated, we will compute scores separately for the five major regions of Canada. It is also intended to add bootstrapped confidence intervals to the results as soon as possible.

[Table 3. Illustration of the relative impact of changes in HR and FA versus lead time, for typical data from the 2 year dataset. Columns: HR, FA, EDI, LTR, WWI, EDI % diff.]

Acknowledgements

The authors wish to thank Nelson Shum for his early work on verification of Canadian weather warnings, which set the stage for the present undertaking. Erik Buhler is thanked for his support of the project through numerous discussions and for making funding available under contract KM. The authors have no conflict of interest to declare.

Appendix: contingency tables

The contingency table is a summary description of all the possible combinations of forecast and observed events. For weather warnings of the exceedance of a single threshold (for example, rain accumulation over 25 mm), there are only four possibilities: either a warning was issued or it was not, and either the event occurred or it did not. The four possibilities are shown in Figure A1 in table format, as hits, false alarms, misses and correct negatives. Production of this table from a dataset of observations and warnings requires some interpretation of the data to define the four categories. Each forecast and/or observed event falls into one of the four bins of the table; the totals a, b, c, d are simply the sums of the number of times each possible result was recorded in the dataset. The proposed new index uses all four of the quantities: number of hits = a, number of false alarms = b, number of misses = c, and number of correct non-events, or correct negatives, = d.

The contingency table measures discussed in this paper are defined as follows. The hit rate (HR), or probability of detection (POD):

$$\mathrm{HR} = \frac{a}{a + c} \tag{A.1}$$

The false alarm rate (FA):

$$\mathrm{FA} = \frac{b}{b + d} \tag{A.2}$$

The critical success index (CSI), or threat score:

$$\mathrm{CSI} = \frac{a}{a + b + c} \tag{A.3}$$

The frequency bias (FB):

$$\mathrm{FB} = \frac{a + b}{a + c} \tag{A.4}$$

If the FB, a and c are known, then the false alarms b can be computed:

$$b = (\mathrm{FB} - 1)\,a + \mathrm{FB}\,c \tag{A.5}$$

The base rate (p):

$$p = \frac{a + c}{N} \tag{A.6}$$

The forecast frequency (q):

$$q = \frac{a + b}{N} \tag{A.7}$$

[Figure A1. The format of a two-category contingency table. This figure is available in colour online at wileyonlinelibrary.com/journal/met]

References

Barnes LR, Gruntfest EC, Hayden MH, Schultz DM, Benight C. 2007. False alarms and close calls: a conceptual model of warning accuracy. Weather Forecast. 22.
Erickson S, Brooks H. 2006. Lead time and time under tornado warnings. In Preprints, 23rd Conference on Severe Local Storms. American Meteorological Society: Atlanta, GA.
Ferro CAT, Stephenson DB. 2011. Extremal dependence indices: improved verification measures for deterministic forecasts of rare binary events. Weather Forecast. 26.
Murphy AH. 1993. What is a good forecast? An essay on the nature of goodness in weather forecasting. Weather Forecast. 8.
Nurmi P. 2010. Experimentation with new verification measures for categorized QPFs in the verification of high impact precipitation events: an ECMWF initiative. In Proceedings, 3rd WMO International Conference on Quantitative Precipitation Estimation and Quantitative Precipitation Forecasting and Hydrology, October 2010, Nanjing, China.
Roulston MS, Smith LA. 2004. The boy who cried wolf revisited: the impact of false alarm intolerance on cost-loss scenarios. Weather Forecast. 19.
Sharpe M. 2010. Verification of weather warnings. UK Met Office Internal Report. Met Office: Exeter, UK; 9 pp.
Stanski HR, Wilson LJ, Burrows WR. 1990. Survey of common verification methods in meteorology. World Weather Watch Technical Report No. 8, WMO/TD No. 358. WMO: Geneva; 114 pp.
Stephenson DB, Casati B, Ferro CAT, Wilson CA. 2008. The extreme dependency score: a non-vanishing measure for forecasts of rare events. Meteorol. Appl. 15.
Stephenson DB, Jolliffe I, Ferro CAT. 2010. White paper review on the verification of warnings. UK Met Office Technical Report. Met Office: Exeter, UK.
Weather and Environmental Services (WES). 2009. Public Alerting Program Hazard Criteria, QMS Manual; 21 pp. (Internal document, available from Environment Canada.)
Wilks DS. 2006. Statistical Methods in the Atmospheric Sciences, 2nd edn. Elsevier: Amsterdam; 627 pp.
Wittmann C. 2009. Evaluation of severe weather warnings at the Austrian national weather service. Working Group for the Cooperation between European Forecasters Newsletter, Vol. 14.


More information

Challenges of Communicating Weather Information to the Public. Sam Lashley Senior Meteorologist National Weather Service Northern Indiana Office

Challenges of Communicating Weather Information to the Public. Sam Lashley Senior Meteorologist National Weather Service Northern Indiana Office Challenges of Communicating Weather Information to the Public Sam Lashley Senior Meteorologist National Weather Service Northern Indiana Office Dilbert the Genius Do you believe him? Challenges of Communicating

More information

Application and verification of ECMWF products 2016

Application and verification of ECMWF products 2016 Application and verification of ECMWF products 2016 Icelandic Meteorological Office (www.vedur.is) Bolli Pálmason and Guðrún Nína Petersen 1. Summary of major highlights Medium range weather forecasts

More information

Current verification practices with a particular focus on dust

Current verification practices with a particular focus on dust Current verification practices with a particular focus on dust Marion Mittermaier and Ric Crocker Outline 1. Guide to developing verification studies 2. Observations at the root of it all 3. Grid-to-point,

More information

National Weather Service Warning Performance Associated With Watches

National Weather Service Warning Performance Associated With Watches National Weather Service Warning Performance Associated With es Jessica Ram National Weather Center Research Experiences for Undergraduates, and The Pennsylvania State University, University Park, Pennsylvania

More information

THE CRUCIAL ROLE OF TORNADO WATCHES IN THE ISSUANCE OF WARNINGS FOR SIGNIFICANT TORNADOS

THE CRUCIAL ROLE OF TORNADO WATCHES IN THE ISSUANCE OF WARNINGS FOR SIGNIFICANT TORNADOS THE CRUCIAL ROLE OF TORNADO WATCHES IN THE ISSUANCE OF WARNINGS FOR SIGNIFICANT TORNADOS John E. Hales, Jr. National Severe Storms Forecast Center Kansas City, Missouri Abstract The tornado warning is

More information

Application and verification of the ECMWF products Report 2007

Application and verification of the ECMWF products Report 2007 Application and verification of the ECMWF products Report 2007 National Meteorological Administration Romania 1. Summary of major highlights The medium range forecast activity within the National Meteorological

More information

Improving real time observation and nowcasting RDT. E de Coning, M Gijben, B Maseko and L van Hemert Nowcasting and Very Short Range Forecasting

Improving real time observation and nowcasting RDT. E de Coning, M Gijben, B Maseko and L van Hemert Nowcasting and Very Short Range Forecasting Improving real time observation and nowcasting RDT E de Coning, M Gijben, B Maseko and L van Hemert Nowcasting and Very Short Range Forecasting Introduction Satellite Application Facilities (SAFs) are

More information

The UK National Severe Weather Warning Service - Guidance Unit Perspective

The UK National Severe Weather Warning Service - Guidance Unit Perspective The UK National Severe Weather Warning Service - Guidance Unit Perspective Dan Suri, Chief Operational Meteorologist ECMWF User Workshop June 2015 Contents Who are the Guidance Unit? The National Severe

More information

CHARACTERISATION OF STORM SEVERITY BY USE OF SELECTED CONVECTIVE CELL PARAMETERS DERIVED FROM SATELLITE DATA

CHARACTERISATION OF STORM SEVERITY BY USE OF SELECTED CONVECTIVE CELL PARAMETERS DERIVED FROM SATELLITE DATA CHARACTERISATION OF STORM SEVERITY BY USE OF SELECTED CONVECTIVE CELL PARAMETERS DERIVED FROM SATELLITE DATA Piotr Struzik Institute of Meteorology and Water Management, Satellite Remote Sensing Centre

More information

Guidance on Aeronautical Meteorological Observer Competency Standards

Guidance on Aeronautical Meteorological Observer Competency Standards Guidance on Aeronautical Meteorological Observer Competency Standards The following guidance is supplementary to the AMP competency Standards endorsed by Cg-16 in Geneva in May 2011. Format of the Descriptions

More information

Forecast Verification Analysis of Rainfall for Southern Districts of Tamil Nadu, India

Forecast Verification Analysis of Rainfall for Southern Districts of Tamil Nadu, India International Journal of Current Microbiology and Applied Sciences ISSN: 2319-7706 Volume 6 Number 5 (2017) pp. 299-306 Journal homepage: http://www.ijcmas.com Original Research Article https://doi.org/10.20546/ijcmas.2017.605.034

More information

Verification of Weather Warnings

Verification of Weather Warnings Verification of Weather Warnings Did the boy cry wolf or was it just a sheep? David B. Stephenson Exeter Climate Systems olliffe, Clive Wilson, Michael Sharpe, Hewson, and Marion Mittermaier ks also to

More information

Application and verification of ECMWF products 2009

Application and verification of ECMWF products 2009 Application and verification of ECMWF products 2009 Icelandic Meteorological Office (www.vedur.is) Gu rún Nína Petersen 1. Summary of major highlights Medium range weather forecasts issued at IMO are mainly

More information

REPORT ON APPLICATIONS OF EPS FOR SEVERE WEATHER FORECASTING

REPORT ON APPLICATIONS OF EPS FOR SEVERE WEATHER FORECASTING WORLD METEOROLOGICAL ORGANIZATION COMMISSION FOR BASIC SYSTEMS OPAG DPFS EXPERT TEAM ON ENSEMBLE PREDICTION SYSTEMS CBS-DPFS/EPS/Doc. 7(2) (31.I.2006) Item: 7 ENGLISH ONLY EXETER, UNITED KINGDOM 6-10 FEBRUARY

More information

AERODROME METEOROLOGICAL OBSERVATION AND FORECAST STUDY GROUP (AMOFSG)

AERODROME METEOROLOGICAL OBSERVATION AND FORECAST STUDY GROUP (AMOFSG) AMOFSG/9-SN No. 32 22/8/11 AERODROME METEOROLOGICAL OBSERVATION AND FORECAST STUDY GROUP (AMOFSG) NINTH MEETING Montréal, 26 to 30 September 2011 Agenda Item 5: Observing and forecasting at the aerodrome

More information

Implementation Guidance of Aeronautical Meteorological Observer Competency Standards

Implementation Guidance of Aeronautical Meteorological Observer Competency Standards Implementation Guidance of Aeronautical Meteorological Observer Competency Standards The following guidance is supplementary to the AMP competency Standards endorsed by Cg-16 in Geneva in May 2011. Please

More information

Complimentary assessment of forecast performance with climatological approaches

Complimentary assessment of forecast performance with climatological approaches Complimentary assessment of forecast performance with climatological approaches F.Gofa, V. Fragkouli, D.Boucouvala The use of SEEPS with metrics that focus on extreme events, such as the Symmetric Extremal

More information

NWS Resources For Public Works

NWS Resources For Public Works NWS Resources For Public Works August 28th, 2016 Shawn DeVinny shawn.devinny@noaa.gov Meteorologist National Weather Service Twin Cities/Chanhassen, MN 1 APWA 2016 PWX 8/28/2016 National Weather Service

More information

P3.1 Development of MOS Thunderstorm and Severe Thunderstorm Forecast Equations with Multiple Data Sources

P3.1 Development of MOS Thunderstorm and Severe Thunderstorm Forecast Equations with Multiple Data Sources P3.1 Development of MOS Thunderstorm and Severe Thunderstorm Forecast Equations with Multiple Data Sources Kathryn K. Hughes * Meteorological Development Laboratory Office of Science and Technology National

More information

Atmospheric Moisture, Precipitation, and Weather Systems

Atmospheric Moisture, Precipitation, and Weather Systems Atmospheric Moisture, Precipitation, and Weather Systems 6 Chapter Overview The atmosphere is a complex system, sometimes described as chaotic in nature. In this chapter we examine one of the principal

More information

Verification of the operational NWP models at DWD - with special focus at COSMO-EU

Verification of the operational NWP models at DWD - with special focus at COSMO-EU Verification of the operational NWP models at DWD - with special focus at COSMO-EU Ulrich Damrath Ulrich.Damrath@dwd.de Ein Mensch erkennt (und das ist wichtig): Nichts ist ganz falsch und nichts ganz

More information

Application and verification of ECMWF products 2011

Application and verification of ECMWF products 2011 Application and verification of ECMWF products 2011 Icelandic Meteorological Office (www.vedur.is) Guðrún Nína Petersen 1. Summary of major highlights Medium range weather forecasts issued at IMO are mainly

More information

Operational MRCC Tools Useful and Usable by the National Weather Service

Operational MRCC Tools Useful and Usable by the National Weather Service Operational MRCC Tools Useful and Usable by the National Weather Service Vegetation Impact Program (VIP): Frost / Freeze Project Beth Hall Accumulated Winter Season Severity Index (AWSSI) Steve Hilberg

More information

Application and verification of ECMWF products 2013

Application and verification of ECMWF products 2013 Application and verification of EMWF products 2013 Hellenic National Meteorological Service (HNMS) Flora Gofa and Theodora Tzeferi 1. Summary of major highlights In order to determine the quality of the

More information

PRMS WHITE PAPER 2014 NORTH ATLANTIC HURRICANE SEASON OUTLOOK. June RMS Event Response

PRMS WHITE PAPER 2014 NORTH ATLANTIC HURRICANE SEASON OUTLOOK. June RMS Event Response PRMS WHITE PAPER 2014 NORTH ATLANTIC HURRICANE SEASON OUTLOOK June 2014 - RMS Event Response 2014 SEASON OUTLOOK The 2013 North Atlantic hurricane season saw the fewest hurricanes in the Atlantic Basin

More information

Severe Weather Watches, Advisories & Warnings

Severe Weather Watches, Advisories & Warnings Severe Weather Watches, Advisories & Warnings Tornado Watch Issued by the Storm Prediction Center when conditions are favorable for the development of severe thunderstorms and tornadoes over a larger-scale

More information

Categorical Verification

Categorical Verification Forecast M H F Observation Categorical Verification Tina Kalb Contributions from Tara Jensen, Matt Pocernich, Eric Gilleland, Tressa Fowler, Barbara Brown and others Finley Tornado Data (1884) Forecast

More information

Meteorological vigilance An operational tool for early warning

Meteorological vigilance An operational tool for early warning Meteorological vigilance An operational tool for early warning Jean-Marie Carrière Deputy-director of Forecasting http://www.meteo.fr The French meteorological vigilance procedure Context Routine working

More information

Weather forecasts and warnings: Support for Impact based decision making

Weather forecasts and warnings: Support for Impact based decision making Weather forecasts and warnings: Support for Impact based decision making Gerry Murphy, Met Éireann www.met.ie An Era of Change Climate and weather is changing Societal vulnerability is increasing The nature

More information

138 ANALYSIS OF FREEZING RAIN PATTERNS IN THE SOUTH CENTRAL UNITED STATES: Jessica Blunden* STG, Inc., Asheville, North Carolina

138 ANALYSIS OF FREEZING RAIN PATTERNS IN THE SOUTH CENTRAL UNITED STATES: Jessica Blunden* STG, Inc., Asheville, North Carolina 138 ANALYSIS OF FREEZING RAIN PATTERNS IN THE SOUTH CENTRAL UNITED STATES: 1979 2009 Jessica Blunden* STG, Inc., Asheville, North Carolina Derek S. Arndt NOAA National Climatic Data Center, Asheville,

More information

MSC Monitoring Renewal Project. CMOS 2012 Montreal, Quebec Thursday, May 31 Martin Elie on behalf of Dave Wartman

MSC Monitoring Renewal Project. CMOS 2012 Montreal, Quebec Thursday, May 31 Martin Elie on behalf of Dave Wartman MSC Monitoring Renewal Project CMOS 2012 Montreal, Quebec Thursday, May 31 Martin Elie on behalf of Dave Wartman Presentation Overview Context Monitoring Renewal Components Conclusions Q & A Page 2 Context

More information

Practical Atmospheric Analysis

Practical Atmospheric Analysis Chapter 12 Practical Atmospheric Analysis With the ready availability of computer forecast models and statistical forecast data, it is very easy to prepare a forecast without ever looking at actual observations,

More information

2016 Fall Conditions Report

2016 Fall Conditions Report 2016 Fall Conditions Report Prepared by: Hydrologic Forecast Centre Date: December 13, 2016 Table of Contents TABLE OF FIGURES... ii EXECUTIVE SUMMARY... 1 BACKGROUND... 5 SUMMER AND FALL PRECIPITATION...

More information

Basic Verification Concepts

Basic Verification Concepts Basic Verification Concepts Barbara Brown National Center for Atmospheric Research Boulder Colorado USA bgb@ucar.edu Basic concepts - outline What is verification? Why verify? Identifying verification

More information

We Had No Warning An Overview of Available Forecast Products Before and During Severe Weather Events

We Had No Warning An Overview of Available Forecast Products Before and During Severe Weather Events We Had No Warning An Overview of Available Forecast Products Before and During Severe Weather Events Two main sources for severe weather info NOAA/NWS Storm Prediction Center (SPC) Convective Outlooks

More information

COMPOSITE-BASED VERIFICATION OF PRECIPITATION FORECASTS FROM A MESOSCALE MODEL

COMPOSITE-BASED VERIFICATION OF PRECIPITATION FORECASTS FROM A MESOSCALE MODEL J13.5 COMPOSITE-BASED VERIFICATION OF PRECIPITATION FORECASTS FROM A MESOSCALE MODEL Jason E. Nachamkin, Sue Chen, and Jerome M. Schmidt Naval Research Laboratory, Monterey, CA 1. INTRODUCTION Mesoscale

More information

Spatial forecast verification

Spatial forecast verification Spatial forecast verification Manfred Dorninger University of Vienna Vienna, Austria manfred.dorninger@univie.ac.at Thanks to: B. Ebert, B. Casati, C. Keil 7th Verification Tutorial Course, Berlin, 3-6

More information

ASSESMENT OF THE SEVERE WEATHER ENVIROMENT IN NORTH AMERICA SIMULATED BY A GLOBAL CLIMATE MODEL

ASSESMENT OF THE SEVERE WEATHER ENVIROMENT IN NORTH AMERICA SIMULATED BY A GLOBAL CLIMATE MODEL JP2.9 ASSESMENT OF THE SEVERE WEATHER ENVIROMENT IN NORTH AMERICA SIMULATED BY A GLOBAL CLIMATE MODEL Patrick T. Marsh* and David J. Karoly School of Meteorology, University of Oklahoma, Norman OK and

More information

Application and verification of ECMWF products: 2010

Application and verification of ECMWF products: 2010 Application and verification of ECMWF products: 2010 Hellenic National Meteorological Service (HNMS) F. Gofa, D. Tzeferi and T. Charantonis 1. Summary of major highlights In order to determine the quality

More information

Winter. Here s what a weak La Nina usually brings to the nation with tempseraures:

Winter. Here s what a weak La Nina usually brings to the nation with tempseraures: 2017-2018 Winter Time again for my annual Winter Weather Outlook. Here's just a small part of the items I considered this year and how I think they will play out with our winter of 2017-2018. El Nino /

More information

Severe storm forecast guidance based on explicit identification of convective phenomena in WRF-model forecasts

Severe storm forecast guidance based on explicit identification of convective phenomena in WRF-model forecasts Severe storm forecast guidance based on explicit identification of convective phenomena in WRF-model forecasts Ryan Sobash 10 March 2010 M.S. Thesis Defense 1 Motivation When the SPC first started issuing

More information

EVALUATION AND VERIFICATION OF PUBLIC WEATHER SERVICES. Pablo Santos Meteorologist In Charge National Weather Service Miami, FL

EVALUATION AND VERIFICATION OF PUBLIC WEATHER SERVICES. Pablo Santos Meteorologist In Charge National Weather Service Miami, FL EVALUATION AND VERIFICATION OF PUBLIC WEATHER SERVICES Pablo Santos Meteorologist In Charge National Weather Service Miami, FL WHAT IS THE MAIN DIFFERENCE BETWEEN A GOVERNMENT WEATHER SERVICE FORECAST

More information

The Impact of Horizontal Resolution and Ensemble Size on Probabilistic Forecasts of Precipitation by the ECMWF EPS

The Impact of Horizontal Resolution and Ensemble Size on Probabilistic Forecasts of Precipitation by the ECMWF EPS The Impact of Horizontal Resolution and Ensemble Size on Probabilistic Forecasts of Precipitation by the ECMWF EPS S. L. Mullen Univ. of Arizona R. Buizza ECMWF University of Wisconsin Predictability Workshop,

More information

A Better Way to Do R&R Studies

A Better Way to Do R&R Studies The Evaluating the Measurement Process Approach Last month s column looked at how to fix some of the Problems with Gauge R&R Studies. This month I will show you how to learn more from your gauge R&R data

More information

Aurora Bell*, Alan Seed, Ross Bunn, Bureau of Meteorology, Melbourne, Australia

Aurora Bell*, Alan Seed, Ross Bunn, Bureau of Meteorology, Melbourne, Australia 15B.1 RADAR RAINFALL ESTIMATES AND NOWCASTS: THE CHALLENGING ROAD FROM RESEARCH TO WARNINGS Aurora Bell*, Alan Seed, Ross Bunn, Bureau of Meteorology, Melbourne, Australia 1. Introduction Warnings are

More information

LOCAL TSUNAMIS: CHALLENGES FOR PREPAREDNESS AND EARLY WARNING

LOCAL TSUNAMIS: CHALLENGES FOR PREPAREDNESS AND EARLY WARNING LOCAL TSUNAMIS: CHALLENGES FOR PREPAREDNESS AND EARLY WARNING HARALD SPAHN 1 1 German Technical Cooperation International Services, Jakarta, Indonesia ABSTRACT: Due to the threat of local tsunamis warning

More information

Weather Warning System in Germany. and Ideas for Developing of CAP. Thomas Kratzsch Head of Department Basic Forecasts Deutscher Wetterdienst Germany

Weather Warning System in Germany. and Ideas for Developing of CAP. Thomas Kratzsch Head of Department Basic Forecasts Deutscher Wetterdienst Germany Weather Warning System in Germany and Ideas for Developing of CAP Thomas Kratzsch Head of Department Basic Forecasts Deutscher Wetterdienst Germany Thomas.Kratzsch@dwd.de 1 Disaster Prevention in Germany

More information

77 IDENTIFYING AND RANKING MULTI-DAY SEVERE WEATHER OUTBREAKS. Department of Earth Sciences, University of South Alabama, Mobile, Alabama

77 IDENTIFYING AND RANKING MULTI-DAY SEVERE WEATHER OUTBREAKS. Department of Earth Sciences, University of South Alabama, Mobile, Alabama 77 IDENTIFYING AND RANKING MULTI-DAY SEVERE WEATHER OUTBREAKS Chad M. Shafer 1* and Charles A. Doswell III 2 1 Department of Earth Sciences, University of South Alabama, Mobile, Alabama 2 Cooperative Institute

More information

Weather in Saskatchewan. John Paul Cragg Warning Preparedness Meteorologist Environment and Climate Change Canada

Weather in Saskatchewan. John Paul Cragg Warning Preparedness Meteorologist Environment and Climate Change Canada Weather in Saskatchewan John Paul Cragg Warning Preparedness Meteorologist Environment and Climate Change Canada The Climate of Saskatchewan -35 Average January Low Temperature -30-25 -20-15 -10-5 0 5

More information

REGIONAL VARIABILITY OF CAPE AND DEEP SHEAR FROM THE NCEP/NCAR REANALYSIS ABSTRACT

REGIONAL VARIABILITY OF CAPE AND DEEP SHEAR FROM THE NCEP/NCAR REANALYSIS ABSTRACT REGIONAL VARIABILITY OF CAPE AND DEEP SHEAR FROM THE NCEP/NCAR REANALYSIS VITTORIO A. GENSINI National Weather Center REU Program, Norman, Oklahoma Northern Illinois University, DeKalb, Illinois ABSTRACT

More information

Spatial Forecast Verification Methods

Spatial Forecast Verification Methods Spatial Forecast Verification Methods Barbara Brown Joint Numerical Testbed Program Research Applications Laboratory, NCAR 22 October 2014 Acknowledgements: Tara Jensen, Randy Bullock, Eric Gilleland,

More information

Regional influence on road slipperiness during winter precipitation events. Marie Eriksson and Sven Lindqvist

Regional influence on road slipperiness during winter precipitation events. Marie Eriksson and Sven Lindqvist Regional influence on road slipperiness during winter precipitation events Marie Eriksson and Sven Lindqvist Physical Geography, Department of Earth Sciences, Göteborg University Box 460, SE-405 30 Göteborg,

More information

The benefits and developments in ensemble wind forecasting

The benefits and developments in ensemble wind forecasting The benefits and developments in ensemble wind forecasting Erik Andersson Slide 1 ECMWF European Centre for Medium-Range Weather Forecasts Slide 1 ECMWF s global forecasting system High resolution forecast

More information

Road weather forecasts and MDSS in Slovakia

Road weather forecasts and MDSS in Slovakia ID: 0030 Road weather forecasts and MDSS in Slovakia M. Benko Slovak Hydrometeorological Institute (SHMI), Jeséniova 17, 83315 Bratislava, Slovakia Corresponding author s E-mail: martin.benko@shmu.sk ABSTRACT

More information

A flexible approach to the objective verification of warnings

A flexible approach to the objective verification of warnings METEOROLOGICAL APPLICATIONS Meteorol. Appl. 23: 65 75 (2016) Published online 16 December 2015 in Wiley Online Library (wileyonlinelibrary.com) DOI: 10.1002/met.1530 A flexible approach to the objective

More information

Wind Events. Flooding Events. T-Storm Events. Awareness Alerts / Potential Alerts / Action Alerts / Immediate Action Alerts / Emergency Alerts.

Wind Events. Flooding Events. T-Storm Events. Awareness Alerts / Potential Alerts / Action Alerts / Immediate Action Alerts / Emergency Alerts. Information Updated: February of 2016 Our Alert Terms Definitions * Use exactly as seen below * Wind Events Awareness Alert - Strong Winds Potential Alert - Damaging Winds ACTION Alert - Damaging Winds

More information

Experimental Test of the Effects of Z R Law Variations on Comparison of WSR-88D Rainfall Amounts with Surface Rain Gauge and Disdrometer Data

Experimental Test of the Effects of Z R Law Variations on Comparison of WSR-88D Rainfall Amounts with Surface Rain Gauge and Disdrometer Data JUNE 2001 NOTES AND CORRESPONDENCE 369 Experimental Test of the Effects of Z R Law Variations on Comparison of WSR-88D Rainfall Amounts with Surface Rain Gauge and Disdrometer Data CARLTON W. ULBRICH Department

More information

Exercise Brunswick ALPHA 2018

Exercise Brunswick ALPHA 2018 ALPHA Exercise Brunswick ALPHA 2018 Who we are (our structure) What we do (our forecasts) How you can access the information Tropical cyclone information (basic) Overview of the products used for Exercise

More information

Guided Notes Weather. Part 2: Meteorology Air Masses Fronts Weather Maps Storms Storm Preparation

Guided Notes Weather. Part 2: Meteorology Air Masses Fronts Weather Maps Storms Storm Preparation Guided Notes Weather Part 2: Meteorology Air Masses Fronts Weather Maps Storms Storm Preparation The map below shows North America and its surrounding bodies of water. Country borders are shown. On the

More information

Predictability from a Forecast Provider s Perspective

Predictability from a Forecast Provider s Perspective Predictability from a Forecast Provider s Perspective Ken Mylne Met Office, Bracknell RG12 2SZ, UK. email: ken.mylne@metoffice.com 1. Introduction Predictability is not a new issue for forecasters or forecast

More information

TIFS DEVELOPMENTS INSPIRED BY THE B08 FDP. John Bally, A. J. Bannister, and D. Scurrah Bureau of Meteorology, Melbourne, Victoria, Australia

TIFS DEVELOPMENTS INSPIRED BY THE B08 FDP. John Bally, A. J. Bannister, and D. Scurrah Bureau of Meteorology, Melbourne, Victoria, Australia P13B.11 TIFS DEVELOPMENTS INSPIRED BY THE B08 FDP John Bally, A. J. Bannister, and D. Scurrah Bureau of Meteorology, Melbourne, Victoria, Australia 1. INTRODUCTION This paper describes the developments

More information

Allison Monarski, University of Maryland Masters Scholarly Paper, December 6, Department of Atmospheric and Oceanic Science

Allison Monarski, University of Maryland Masters Scholarly Paper, December 6, Department of Atmospheric and Oceanic Science Allison Monarski, University of Maryland Masters Scholarly Paper, December 6, 2011 1 Department of Atmospheric and Oceanic Science Verification of Model Output Statistics forecasts associated with the

More information

Communicating uncertainty from short-term to seasonal forecasting

Communicating uncertainty from short-term to seasonal forecasting Communicating uncertainty from short-term to seasonal forecasting MAYBE NO YES Jay Trobec KELO-TV Sioux Falls, South Dakota USA TV weather in the US Most TV weather presenters have university degrees and

More information

The Australian Operational Daily Rain Gauge Analysis

The Australian Operational Daily Rain Gauge Analysis The Australian Operational Daily Rain Gauge Analysis Beth Ebert and Gary Weymouth Bureau of Meteorology Research Centre, Melbourne, Australia e.ebert@bom.gov.au Daily rainfall data and analysis procedure

More information

2015 Hurricane Season Summary for Eastern Canada Impacts and Operational Notes

2015 Hurricane Season Summary for Eastern Canada Impacts and Operational Notes 2015 Hurricane Season Summary for Eastern Canada Impacts and Operational Notes John Parker Canadian Hurricane Centre, Meteorological Service of Canada April, 2016 Storms affecting Canadian territory in

More information

Implementation of global surface index at the Met Office. Submitted by Marion Mittermaier. Summary and purpose of document

Implementation of global surface index at the Met Office. Submitted by Marion Mittermaier. Summary and purpose of document WORLD METEOROLOGICAL ORGANIZATION COMMISSION FOR BASIC SYSTEMS OPAG on DPFS MEETING OF THE CBS (DPFS) TASK TEAM ON SURFACE VERIFICATION GENEVA, SWITZERLAND 20-21 OCTOBER 2014 DPFS/TT-SV/Doc. 4.1a (X.IX.2014)

More information

Local Ctimatotogical Data Summary White Hall, Illinois

Local Ctimatotogical Data Summary White Hall, Illinois SWS Miscellaneous Publication 98-5 STATE OF ILLINOIS DEPARTMENT OF ENERGY AND NATURAL RESOURCES Local Ctimatotogical Data Summary White Hall, Illinois 1901-1990 by Audrey A. Bryan and Wayne Armstrong Illinois

More information

Observations needed for verification of additional forecast products

Observations needed for verification of additional forecast products Observations needed for verification of additional forecast products Clive Wilson ( & Marion Mittermaier) 12th Workshop on Meteorological Operational Systems, ECMWF, 2-6 November 2009 Additional forecast

More information

Using Cell-Based VIL Density to Identify Severe-Hail Thunderstorms in the Central Appalachians and Middle Ohio Valley

Using Cell-Based VIL Density to Identify Severe-Hail Thunderstorms in the Central Appalachians and Middle Ohio Valley EASTERN REGION TECHNICAL ATTACHMENT NO. 98-9 OCTOBER, 1998 Using Cell-Based VIL Density to Identify Severe-Hail Thunderstorms in the Central Appalachians and Middle Ohio Valley Nicole M. Belk and Lyle

More information

Shaping future approaches to evaluating highimpact weather forecasts

Shaping future approaches to evaluating highimpact weather forecasts Shaping future approaches to evaluating highimpact weather forecasts David Richardson, and colleagues Head of Evaluation, Forecast Department European Centre for Medium-Range Weather Forecasts (ECMWF)

More information

Application and verification of ECMWF products 2017

Application and verification of ECMWF products 2017 Application and verification of ECMWF products 2017 Slovenian Environment Agency ARSO; A. Hrabar, J. Jerman, V. Hladnik 1. Summary of major highlights We started to validate some ECMWF parameters and other

More information

South African Weather Service. Description of Public Weather and Warning Services. Tshepho Ngobeni. 18 November 2013

South African Weather Service. Description of Public Weather and Warning Services. Tshepho Ngobeni. 18 November 2013 South African Weather Service Description of Public Weather and Warning Services Tshepho Ngobeni 18 November 2013 SAWS-SWFDP_PRES_18-22_Nov_2013 1 Outline Forecasting Descriptions and Processes Severe

More information

LOCAL CLIMATOLOGICAL DATA FOR FREEPORT ILLINOIS

LOCAL CLIMATOLOGICAL DATA FOR FREEPORT ILLINOIS Climatological Summary: LOCAL CLIMATOLOGICAL DATA FOR FREEPORT ILLINOIS 1905-1990 Freeport (Stephenson County) has a temperate continental climate, dominated by maritime tropical air from the Gulf of Mexico

More information

7.1 The Schneider Electric Numerical Turbulence Forecast Verification using In-situ EDR observations from Operational Commercial Aircraft

7.1 The Schneider Electric Numerical Turbulence Forecast Verification using In-situ EDR observations from Operational Commercial Aircraft 7.1 The Schneider Electric Numerical Turbulence Forecast Verification using In-situ EDR observations from Operational Commercial Aircraft Daniel W. Lennartson Schneider Electric Minneapolis, MN John Thivierge

More information

Cataloguing high impact Weather and Climate Events

Cataloguing high impact Weather and Climate Events Cataloguing high impact Weather and Climate Events Omar Baddour Chief Data Management Applications Divison World Meteorological Organisation Geneva, Switzerland 2 3 Estimating and cataloguing loss & damage

More information

UCAR Award No.: S

UCAR Award No.: S Using Lightning Data To Better Identify And Understand Relationships Between Thunderstorm Intensity And The Underlying Topography Of The Lower Mississippi River Valley UCAR Award No.: S08-68830 University:

More information

Winter Weather. National Weather Service Buffalo, NY

Winter Weather. National Weather Service Buffalo, NY Winter Weather National Weather Service Buffalo, NY Average Seasonal Snowfall SNOWFALL = BIG IMPACTS School / government / business closures Airport shutdowns/delays Traffic accidents with injuries/fatalities

More information

Verification of Probability Forecasts

Verification of Probability Forecasts Verification of Probability Forecasts Beth Ebert Bureau of Meteorology Research Centre (BMRC) Melbourne, Australia 3rd International Verification Methods Workshop, 29 January 2 February 27 Topics Verification

More information

Merging Rain-Gauge and Radar Data

Merging Rain-Gauge and Radar Data Merging Rain-Gauge and Radar Data Dr Sharon Jewell, Obserations R&D, Met Office, FitzRoy Road, Exeter sharon.jewell@metoffice.gov.uk Outline 1. Introduction The Gauge and radar network Interpolation techniques

More information

P4.479 A DETAILED ANALYSIS OF SPC HIGH RISK OUTLOOKS,

P4.479 A DETAILED ANALYSIS OF SPC HIGH RISK OUTLOOKS, P4.479 A DETAILED ANALYSIS OF SPC HIGH RISK OUTLOOKS, 2003-2009 Jason M. Davis*, Andrew R. Dean 2, and Jared L. Guyer 2 Valparaiso University, Valparaiso, IN 2 NOAA/NWS Storm Prediction Center, Norman,

More information

Kentucky Weather Hazards: What is Your Risk?

Kentucky Weather Hazards: What is Your Risk? Kentucky Weather Hazards: What is Your Risk? Stuart A. Foster State Climatologist for Kentucky 2010 Kentucky Weather Conference Bowling Green, Kentucky January 16, 2010 Perspectives on Kentucky s Climate

More information

A Preliminary Severe Winter Storms Climatology for Missouri from

A Preliminary Severe Winter Storms Climatology for Missouri from A Preliminary Severe Winter Storms Climatology for Missouri from 1960-2010 K.L. Crandall and P.S Market University of Missouri Department of Soil, Environmental and Atmospheric Sciences Introduction The

More information

Accuracy of Canadian short- and mediumrange weather forecasts

Accuracy of Canadian short- and mediumrange weather forecasts Accuracy of Canadian short- and mediumrange weather forecasts E. A. Ripley 1 and O. W. Archibold 2 1 Department of Plant Sciences, University of Saskatchewan, Canada 2 Department of Geography, University

More information

Maritime Weather Information: Automatic Reporting, A New Paradigm

Maritime Weather Information: Automatic Reporting, A New Paradigm Maritime Weather Information: Automatic Reporting, A New Paradigm Joe Sienkiewicz, NOAA/NWS Ocean Prediction Center Responsibilities under SOLAS Met Services Contracting governments Observations Limited

More information

NWS Resources For School Districts

NWS Resources For School Districts NWS Resources For School Districts January 23rd, 2017 Shawn DeVinny shawn.devinny@noaa.gov Meteorologist National Weather Service Twin Cities/Chanhassen, MN Outline Watches/Warnings/Advisories Example

More information

Application and verification of ECMWF products at the Finnish Meteorological Institute

Application and verification of ECMWF products at the Finnish Meteorological Institute Application and verification of ECMWF products 2010 2011 at the Finnish Meteorological Institute by Juhana Hyrkkènen, Ari-Juhani Punkka, Henri Nyman and Janne Kauhanen 1. Summary of major highlights ECMWF

More information

Discussion Paper on the Impacts of Climate Change for Mount Pearl. August, Darlene Butler. Planning Department. City of Mount Pearl

Discussion Paper on the Impacts of Climate Change for Mount Pearl. August, Darlene Butler. Planning Department. City of Mount Pearl Discussion Paper on the Impacts of Climate Change for Mount Pearl August, 2008 Darlene Butler Planning Department City of Mount Pearl 3 Centennial Street Mount Pearl, NL A1N 1G4 (709) 748 1022 Table of

More information

The Wind Hazard: Messaging the Wind Threat & Corresponding Potential Impacts

The Wind Hazard: Messaging the Wind Threat & Corresponding Potential Impacts The Wind Hazard: Messaging the Wind Threat & Corresponding Potential Impacts Scott Spratt Warning Coordination Meteorologist NWS Melbourne, FL David Sharp Science & Operations Officer NWS Melbourne, FL

More information

Weather Second Grade Virginia Standards of Learning 2.6 Assessment Creation Project. Amanda Eclipse

Weather Second Grade Virginia Standards of Learning 2.6 Assessment Creation Project. Amanda Eclipse Weather Second Grade Virginia Standards of Learning 2.6 Assessment Creation Project 1 Amanda Eclipse Overview and Description of Course The goal of Virginia s science standards is for the students to develop

More information