A flexible approach to the objective verification of warnings


METEOROLOGICAL APPLICATIONS, Meteorol. Appl. 23 (2016). Published online 16 December 2015 in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/met.1530

Michael A. Sharpe*
Department of Weather Science, Met Office, Exeter, UK

ABSTRACT: Warning services are popular due to their conceptually simple nature; however, a simple approach to their verification can lead to misleading results, from which it can be difficult to distinguish poor from excellent performance, and when improvement is required it is often difficult to know where efforts should be focused. A flexible, systematic approach to the verification of warnings enables the performance to be evaluated in terms of space, time, intensity and confidence. This paper describes such an approach, which has been implemented at the Met Office. Flexibility is achieved via the introduction, and separate categorization, of near-hit and compound event types, enabling the spatial, temporal and effectual accuracy to be examined and the most significant errors diagnosed. An illustrative example of this methodology is included in this paper.

KEY WORDS: verification; forecasting; marine forecasts; warnings

Received 10 February 2015; Revised 28 April 2015; Accepted 12 June 2015.

* Correspondence: M. A. Sharpe, Department of Weather Science, Met Office, Fitzroy Road, Exeter EX1 3PB, UK. E-mail: michael.sharpe@metoffice.gov.uk. This article is published with the permission of the Controller of HMSO and the Queen's Printer for Scotland.

1. Introduction

Many meteorological organizations provide warning services; indeed, the original purpose of the Met Office was to issue warnings of gales in the seas around the British Isles. Weather warnings are often issued for geographical areas rather than at specific sites or locations; these areas are typically regions, counties or sea areas for which a national met service, or other government organization, is responsible. A warning is usually issued within one of these areas whenever a weather condition is expected to exceed a predetermined set of criteria somewhere within that area, and a measurable threshold is traditionally used for the warning criteria (e.g. gust speed or rainfall accumulation).

At first glance, measuring the accuracy of a warning service appears to be a simple task; therefore, a simple 2 × 2 contingency table approach to verification is often adopted. However, in reality, verifying a warning service is a very challenging problem and a simple approach can lead to double counting of events. Indeed, it is not uncommon for a warning that is almost correct to be classified as a miss and a separate false alarm. Clearly, this effect tends to drive the wrong behaviour because, if there is any doubt, it is often better for a forecaster not to issue a warning, since a double negative score may be awarded if it is almost correct. One way in which this problem can be minimized is to adopt a flexible approach to verification, an idea conceived by Barnes et al. (2007). Flexibility, when applied correctly, reflects the true performance of a warning service better; however, care must be exercised because, when mishandled, it can be used to exaggerate the performance misleadingly. The approach described in this paper expands upon initial ideas that were first briefly mentioned in Chapter 10 of Jolliffe and Stephenson (2012).
The Met Office has recently implemented a flexible, generic warnings verification system (WVS), which introduces additional near-hit categories and examines the neighbourhood around each area, in order to classify uniquely those situations where an issued warning does not exactly correspond to an event. Every objective verification system should be designed with two main purposes in mind and, to satisfy both, statistics should ideally be generated to:
1. provide feedback to forecasters, model developers and researchers in order to drive system improvements;
2. measure the overall accuracy of a service, so that the customer is convinced of the value of the service, that it meets expectations and that it supports appropriate action by the end user.

This paper provides details of the generic functionality of the WVS and how it is used to verify warnings. Future publications are planned to outline the verification of the warning services issued by the Met Office for customers including: the Maritime and Coastguard Agency (storm, gale and coastal wind warnings); the Environment Agency for England, Natural Resources Wales and the Scottish Environmental Protection Agency (heavy rainfall alerts); the national public and emergency responder community (national severe weather warnings); the Civil Aviation Authority and the Ministry of Defence (airport warnings); Public Health England (temperature alerts); the UK Government (space weather warnings); and utilities customers (hazardous weather warnings). No attempt is made to include product-specific detail in this paper; rather, the content is left deliberately generic. Section 2 consists of a discussion of the problems associated with a simple approach to warnings verification, together with the methodology and benefits of adopting flexibility; Section 3 describes how performance measures are calculated for this approach; and, finally, Section 4 contains a real example of how the WVS is used in practice.

2. Flexible solutions to the problems associated with warnings verification

Weather warning services are popular because they are conceptually very simple to understand and relate to significant high-impact weather upon which the customer may need to act. In addition to threshold (and/or impact) information, a weather warning is composed of an issue time, a start time and an end time, where the warning period is defined as the period between the start and end times and the lead time as the period between the issue and start times.

For many warning services, a measurable weather component is forecast to exceed a predetermined threshold during the warning period, and 2 × 2 categorical verification is often used to assess the performance by classifying each event as:
a hit, if the event threshold is exceeded during a warning period;
a missed event, if the event threshold is exceeded outside of a warning period;
a false alarm, if a warning is issued but the event threshold is not exceeded; and
a non-event, whenever no warning is issued and no event occurs.
Figure 1 illustrates each of these classifications.

Figure 1. Event categories for 2 × 2 categorical verification.

If a warning service is to be accurately verified, it is essential that no events are omitted; therefore, a continuous time series of accurate truth data should be available wherever warnings can be issued. A surface observing network is often used as a truth data source, because it is usually accurate and accessible. Consequently, an event occurs if any observation within a warning area exceeds the event threshold; if this occurs during a warning period, the warning is classified as a hit; otherwise, it is classified as a missed event.

Unfortunately, various problems can occur with this simple approach, especially when events are rare or challenging to observe (both of which are often the case for warning services). Here, some of these problems are discussed in detail, together with solutions made possible when a flexible approach to verification is adopted. To preserve the generic nature of the system, it is necessary to ensure that each flex (time, intensity, space and confidence) can be applied independently, as it is inappropriate to apply every flex in the same way for every customer product. Independence ensures that the resulting verification system is able to verify many different types of warning service, including those for which certain (or indeed all) types of flex are inappropriate.

2.1. A flexible solution to the non-event problem for event-oriented verification

The problem with non-events

There are (at least) two approaches to the verification of a warning service: either a single result is awarded to each event, or a result is awarded to predetermined fixed time periods. In the fixed time period approach, the time line is divided into fixed intervals (usually determined by the frequency of the observations) and a verification result is awarded to each interval (Göber et al., 2008). Observations are often available every hour; consequently, when a warning is in force, a hit or false alarm is awarded each hour, whereas a miss or non-event is awarded during each remaining hour.
However, the majority of events last significantly longer than one hour; therefore, the fixed time interval approach is likely to grossly exaggerate the base rate (the frequency of occurrence of an observed event). Furthermore, it is not usually expected that above-threshold conditions should persist for the entire warning period; indeed, many warning services (particularly those issued by human forecasters) stipulate that, for a warning to be successful, it is sufficient for the event threshold to be exceeded at some point between the start and end time of the warning (Neal et al., 2014). In fact, a warning that is issued in a large geographical area may deliberately be designed to last longer than is necessary in its individual constituent areas, simply on practical grounds of conciseness of communication alone.

Event-oriented verification, on the other hand, awards a single result to each event, regardless of its total duration, where an event is defined as a period of time during which above-threshold conditions are experienced. As the purpose of a warning service is usually to capture events, an event-oriented verification procedure is highly desirable. However, there are two challenges associated with event-oriented verification: how to determine the total number of non-events and how to determine the start and end of each event.

A non-event is a period of time during which no warning is issued and no event occurs; although non-events appear trivial, they are critical for the calculation of some performance statistics. The number of non-events is relatively easy to calculate, even when using an event-oriented verification approach, if the warning service pre-defines the issue time and the duration of every warning. For example, a storm warning service (WMO, 2012) is issued by the Met Office on behalf of the Maritime and Coastguard Agency. Storm warnings are only issued at 0800 and 2000 (local time) and every warning endures for 24 h; therefore, by verifying the 0800 and 2000 warnings as separate services, the number of non-events may be easily determined from the number of 24 h periods during which no warning was issued and no event occurred. However, many warning services are not constrained in this way and, although the majority of these services retain a pre-defined maximum warning length (for example, warnings issued at civil aerodromes; Directorate of Airspace Policy, 2008), they are usually free to issue warnings that can start and end at any time. Clearly, it is difficult to calculate accurately the number of non-events using event-oriented verification (Stephenson et al., 2010; Jolliffe and Stephenson, 2012) when warnings are issued by this type of relatively unconstrained service.

A solution to the non-event problem

One way to tackle the non-event problem, for event-oriented verification, is to employ a minimum time period below the event threshold to separate two independent events; in this paper, this minimum period of time is referred to as a lull time. Figure 2 illustrates how a lull time is used to determine whether two consecutive event threshold breaches equate to two separate events or one long event. This figure shows three consecutive breaches of the event threshold, and the length of the lull time is represented by the double arrow in the top left-hand corner. The time period between the first and the second breach is less than the lull time, so they are treated as a single event; however, the period between the second and the third breach is greater than the lull time, so the third breach is treated as an independent event.

Figure 2. Using a lull time to distinguish separate events from continuing events.

An appropriate value for the lull time is likely to depend on a number of different factors; these may include the weather type, the location of the site/area and the size of the warning area. One way to calculate an appropriate lull time for a warning service is to examine the long-term climatology in each warning area/site and set the lull time to the mean event length. However, if there is a wide range of area sizes, or the duration of the phenomena is dependent on the geographical location, it may be appropriate to use a different lull time in each area. The Met Office issues national severe weather warnings to UK local authority areas and, traditionally, the threshold for a gale warning has been 70 mph. Using long-term climatology, the mean duration of such an event is approximately 3 h; therefore, the lull time used to verify this service has been set to 3 h.

Determining the total number of non-events is a significant challenge for event-oriented verification; one solution is to set the total number of non-events equal to the number of inter-event periods. Unfortunately, because warned-for events are usually rare, this approach leads to a typical non-event lasting many orders of magnitude longer than the longest event. However, once a lull time is defined, inter-event periods can be divided into sub-periods, where the lull time defines the duration of each sub-period. Unfortunately, the base rate is often small, so the number of non-events is often extremely large, and this significantly affects the performance statistics, making it difficult to distinguish poor from excellent performance (an effect originally highlighted by Gilbert (1884), following Finley, 1884). However, the majority of non-events represent times when there was virtually no chance that an event could occur (e.g. in Britain, gale force winds rarely occur during the summer). Consequently, it is appropriate to subdivide non-events into trivial and non-trivial non-events, where non-trivial non-events are defined as situations during which a conscious decision is required not to issue a warning, and trivial non-events are defined as occasions when no conscious decision is required.
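To make the event-grouping step concrete, the sketch below shows one way the lull-time logic could be expressed; it is an illustrative outline only, assuming observation times held as Python datetimes (the function names are hypothetical and not taken from the WVS).

from datetime import timedelta

def group_into_events(exceedance_times, lull_hours=3):
    # Merge consecutive above-threshold observation times into events:
    # two exceedances separated by less than the lull time belong to the
    # same event; a gap of at least the lull time starts a new event.
    lull = timedelta(hours=lull_hours)
    events = []
    for t in sorted(exceedance_times):
        if events and t - events[-1][1] < lull:
            events[-1] = (events[-1][0], t)   # extend the current event
        else:
            events.append((t, t))             # start a new event
    return events                             # list of (start, end) pairs

def candidate_non_events(events, period_start, period_end, lull_hours=3):
    # Divide each inter-event gap into lull-length sub-periods; each
    # sub-period is a candidate non-event, to be classed later as trivial
    # or non-trivial against a non-event threshold.
    lull = timedelta(hours=lull_hours)
    sub_periods, cursor = [], period_start
    for start, end in events + [(period_end, period_end)]:
        t = cursor
        while t + lull <= start:
            sub_periods.append((t, t + lull))
            t += lull
        cursor = max(cursor, end)
    return sub_periods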
In the absence of other information, it is appropriate to define a non-event threshold to distinguish trivial from non-trivial non-events, such that if the non-event threshold is exceeded, a non-trivial non-event occurs, and if it is not exceeded, a trivial non-event occurs. As no skill is required to identify trivial non-events, it is appropriate to use only non-trivial non-events when calculating performance. The concept of subdividing non-events is not new (Mason, 1989; Brooks, 2004); indeed, it is currently used for road icing forecast verification, where a non-event threshold of 3 °C is used to distinguish marginal nights (Thornes and Stephenson, 2001) and only these are used to calculate performance. For similar reasons, Wilson and Giles (2013) discount the months of June, July and August when verifying gale warnings in Canada. The WVS currently uses a non-event threshold of 0 mm per hour to verify the heavy rainfall alert service issued to the Scottish Environmental Protection Agency, thereby implying that heavy rainfall can only occur when it is raining.

2.2. A flexible solution to the truth data problem

The problem with truth

To avoid missing events, it is important that a continuously available source of accurate truth data is available. Observations are often available at individual sites (useful, for example, for road bridge, wind turbine or airport warnings); however, warnings are also commonly issued for geographical areas, where often only part of that area is required to exceed the event threshold, and for these services it is essential that truth data are available throughout the geographical area. A surface observing network, however, tends to have sparsely distributed sites, so there is a significant risk that many (particularly localized) events will not be observed. Consequently, the observed event base rate for an area-warning service is likely to be significantly less than the actual base rate. A warning is supposed to be issued whenever an event is forecast to occur anywhere within a geographical warning area; however, if the service is verified against a traditional observing network, a strong temptation exists to issue warnings only when events are forecast to occur close to observing sites. This strategy may seem to improve performance; however, in reality, it only benefits customers located close to an observing site. Clearly, gaps in the truth data network are likely to produce erroneous false alarms and unidentified missed events.

A solution to the truth problem

In recent years, a large volume of data, representing an approximation of the current state of the atmosphere, has become available in the form of nowcast analyses (Moseley, 2011): a blend of the latest, downscaled, high-resolution model data with very recent observations, both gridded and station-based. Run hourly, this technique currently produces nowcast analyses for most weather variables and, for the purposes of verification, gridded nowcast analyses offer a potential solution to observing network coverage problems. However, if these data are used as the truth, caution should be exercised, because nowcast analyses may contain additional intensity, timing and location errors. Therefore, although one real observation above the event threshold is usually regarded as sufficient evidence that an event has occurred, the same cannot be said for a single nowcast analysis grid point.

Certain locations are much more likely to report false positives (or negatives) than others; this is particularly true for radar-derived products (Harrison et al., 2009), where locations far from radar stations are less accurate. Furthermore, spurious echoes (for example, those caused by masts, turbines and hills) are only excluded when the probability of precipitation is small, so the larger the warning area, the larger the probability of error.

Tackling the issue of false positives requires an approximation for the confidence that an event has occurred. Clearly, this confidence will increase as more analysis points exceed the event threshold; therefore, in the absence of further information, it seems appropriate to examine the number of points exceeding this threshold. Unfortunately, the exact relationship between the confidence and the number of points that exceed the threshold is not obvious; it may depend on many factors and is unlikely to be independent of geographical location or time. Unless further information is available, a first approximation may be obtained by setting thresholds on the proportion of grid points that exceed the event threshold within the warning area; these proportions are defined here as area-confidence thresholds (c). The nowcast analysis grid is regular, so there exists an exact correspondence between the number of points and the proportion of the area; however, because areas usually vary significantly in size, it is appropriate to define c as a proportion rather than a fixed number.

Unfortunately, it is by no means obvious which value of c is correct; in fact, it is not necessarily true that c will remain fixed: it may legitimately vary from one event to the next. If c is set too high, an event that should have been classified as a hit will mistakenly be classified as a false alarm (a false negative), whereas, if c is set too low, a non-event will be mistakenly classified as a missed event (a false positive). Consequently, comprehensive performance statistics become functions of c, which are unlikely to change monotonically; rather, they are likely initially to increase with c, reach a maximum value and then decrease as c is increased further. Therefore, the WVS has adopted values of c in the range 1–5%, depending on the particular product, where, for an event to occur at c = 1%, at least 1% of observations must exceed the event threshold.

This confidence flexing tackles the problem of false positives, but it does not address the problem of false negatives, i.e. instances when the truth data are under-reporting and a hit is mistakenly diagnosed as a false alarm (or a missed event is mistakenly diagnosed as a non-event). This issue may be addressed by introducing another threshold (below the actual event threshold) into Figure 1; this low event threshold is shown in Figure 3, and it introduces the concepts of a low hit and a low miss (the other categories in Figure 3 are discussed in Section 2.3). These two new categories uniquely classify situations in which the event threshold is almost exceeded. Some of these events will be actual near-hits (or near-misses), but others may be actual events that have been incorrectly diagnosed, due to errors in the truth data source, or where the event threshold was exceeded between observing sites.

Figure 3. Event categories for flexible verification (miss, late issue miss, early hit, hit and late hit, their low-threshold counterparts, and the late issue miss, early hit, warning and late hit periods defined by the issue, start and end times).
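As an illustration of confidence flexing, the following sketch applies an area-confidence threshold c to a gridded nowcast field over a single warning area; it is a minimal outline under assumed inputs (a 2-D field and a boolean area mask) rather than a description of the WVS implementation.

import numpy as np

def area_outcome(grid_values, area_mask, event_threshold, low_threshold, c=0.05):
    # Classify one nowcast analysis field over one warning area.
    # grid_values: 2-D array of the verifying field (e.g. gust speed or
    # rainfall rate); area_mask: boolean array of the same shape, True
    # inside the warning area; c: area-confidence threshold, the minimum
    # proportion of in-area grid points that must exceed a threshold.
    values = grid_values[area_mask]
    if values.size == 0:
        return 'none'
    if np.mean(values >= event_threshold) >= c:
        return 'event'
    if np.mean(values >= low_threshold) >= c:
        return 'low event'
    return 'none'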
2.3. A flexible solution to the double-event problem

The 2 × 2 categorical verification can use only the four outcomes displayed in Figure 1, which are, unfortunately, insufficient to describe all possible outcomes; consequently, it is possible for one event to be recorded twice or more. There are three reasons why erroneous double counting may occur: intensity errors (discussed in the previous section), temporal errors and spatial errors.

The problem with temporal double-events

Figure 4 shows 3 h maximum rainfall accumulations (generated from post-processed radar data) in the Highlands of Scotland between 0000 and 1800 on 25 October 2008, when a 15 mm in 3 h event threshold was used for heavy rainfall warnings. Clearly, this threshold was first exceeded at 0200 and continued to be exceeded for the following 10 h; in this particular case, a warning was issued to start before 0200 and finish after midday, so the result was a hit. However, if the warning had instead been forecast to end before midday, the 2 × 2 categorical verification would award a hit to the event between 0200 and 1200 and an additional missed event to the period after midday. So, even though only a single event had occurred, two separate events would have been recorded. Similarly, if the warning had begun after 0200, three events would have been recorded: two missed events (one before 0200 and one after 1200) and one hit.

A solution to the temporal double-event problem

To tackle the issue of temporal double counting, it is necessary to introduce (following suggestions in Barnes et al., 2007) the additional late hit and early hit contingency table categories shown in Figure 3. These categories uniquely identify events that either begin between the issue and start time or stop after the end time.

Figure 4. Maximum 3 h rainfall accumulation (calculated from hourly post-processed radar data) within the Highlands of Scotland on 25 October 2008.

The longer the late hit period, the more late hit type events will occur and the fewer missed events will occur; therefore (where possible), the length of the late hit period should be determined through analysis, although, for the sake of consistency, it is often appropriate to set the late hit period equal to the lull time. The application of both temporal and intensity flexing produces the following additional categories:
an early low hit, which exceeds the low event threshold between the issue and start time;
a late low hit, which exceeds the low event threshold during the late hit period;
a low miss, which exceeds the low event threshold after the late hit period of one warning and before the late issue miss period of the next warning.

The late issue miss and late issue low miss categories occur as a result of the introduction of a late issue miss period. If an event has been missed, it is likely that, in an attempt to mitigate the situation, a warning will be issued to start immediately. A simple approach to verification would award credit to such a warning; however, this situation is clearly a missed event and should not normally receive credit (although it may be argued that it does, at least, draw attention to an occurring event). Therefore, each event that occurs just before the issue time of a warning is classified as a late issue miss and (in many cases) treated as a missed event. A late issue miss period is required to introduce the concept of a late issue miss, defined here as the length of time before a warning is issued during which it can safely be assumed that an event is associated with the warning that follows. The longer the late issue miss period, the greater the risk that an independently occurring miss will be incorrectly associated with an issued warning. However, for the sake of consistency, it is appropriate for the duration of the late issue miss period to be identical to the lull time.

These extra categories (shown in Figure 3) alone do not solve the double-event problem; to address it completely, it is necessary to allow two or more categories to form (uniquely classified) compound events so that, for example, an event that would normally be classified as a hit and a separate late hit is instead classified as the single compound event hit + late hit. A total of 20 meaningful compound event types are possible using the categories in Figure 3 (hit + low hit, for example, is not meaningful); these are shown in the left-hand column of Table 1 (the remaining columns are discussed in Section 3). The extra categories shown in Figure 3 allow non-standard warnings to be uniquely categorized, rather than being either harshly judged or reported as two (or more) independent events. Furthermore, the level of credit awarded to the four traditional event classifications is fixed; therefore, without the introduction of extra categories, the level of credit awarded to non-standard warning types is also fixed and cannot be altered in any way. However, by introducing the additional categories listed in Table 1, the frequency of occurrence of non-standard warnings may be easily monitored and the reward for each non-standard warning type can be set independently. Wilson and Giles (2013) recommend, in effect, that all near-hit type events should be included in a traditional category.
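The temporal categories described above can be illustrated with a short sketch; the functions below are hypothetical and simply place an exceedance time into one of the flexible timing categories, setting both the late issue miss period and the late hit period equal to the lull time, as suggested in the text. Applying the first function to every exceedance time belonging to one event and combining the resulting categories yields the compound classification.

from datetime import timedelta

def temporal_category(event_time, issue_time, start_time, end_time, lull_hours=3):
    # Place a single exceedance time relative to one warning.  The late
    # issue miss period precedes the issue time and the late hit period
    # follows the end time; both are set equal to the lull time here.
    period = timedelta(hours=lull_hours)
    if issue_time - period <= event_time < issue_time:
        return 'late issue miss'
    if issue_time <= event_time < start_time:
        return 'early hit'
    if start_time <= event_time <= end_time:
        return 'hit'
    if end_time < event_time <= end_time + period:
        return 'late hit'
    return 'miss'

def compound_classification(exceedance_times, issue_time, start_time, end_time, lull_hours=3):
    # Combine every category triggered by one event into a compound type,
    # e.g. {'hit', 'late hit'} becomes 'hit + late hit'.
    categories = {temporal_category(t, issue_time, start_time, end_time, lull_hours)
                  for t in exceedance_times}
    return ' + '.join(sorted(categories))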
However, there are two reasons why it may be appropriate to alter the reward for near-hit type events: firstly, the source of truth data may contain errors and, secondly, the customers may be relatively insensitive to particular types of forecast error. Therefore, altering the reward given to near-hit type events enables the verification system to account for confidence in the truth data and for customer values.

The problem with spatial double-events

Short isolated periods of convective activity are particularly common during a British summer; however, these episodes are particularly difficult to forecast because of their susceptibility to temporal, spatial and effectual errors. For example, in July 2013, following an extended period of warm weather, humid air was drawn from the continent, causing dramatic thunderstorms and heavy rainfall across the United Kingdom. The more severe storms resulted in flash flooding (on 23 July, an accumulation of 35.6 mm in 1 h was recorded in Nottingham). A national severe weather warning was issued for these storms; the original warning, shown in Figure 5(a), was issued at 1146 UTC on 20 July (and updated at 1235 on 22 July). However, it did not attempt to pinpoint the location; rather, it contained a blanket warning across a large area of the United Kingdom, stating 'As is common in such situations not everywhere will catch the heaviest of the storms, and some places will escape altogether.'

Figure 5. Two national severe heavy rainfall warnings for 23 July 2013: (a) issued at 1146 on 20 July and (b) issued at 1433 on 23 July.

Table 1. Warning/event categories and scores (H denotes a hit, F a false alarm, M a missed event and N a non-event) for each type of flex (strict S_S, temporal S_T, intensity S_I and flexed S_F).

Classification                       S_S           S_T  S_I           S_F
Hit                                  H             H    H             H
LateHit                              1/2F + 1/2M   H    1/2F + 1/2M   H
EarlyHit                             1/2F + 1/2M   H    1/2F + 1/2M   H
LowHit                               F             F    H             H
LateLowHit                           F             F    F             H
EarlyLowHit                          F             F    F             H
False Alarm                          F             F    F             F
Miss                                 M             M    M             M
Low Miss                             N             N    M             M
Late Issue Miss                      M             M    M             M
Late Issue Low Miss                  N             N    M             M
Hit + LateLowHit                     H             H    H             H
Hit + EarlyLowHit                    H             H    H             H
Hit + LateHit                        1/2H + 1/2M   H    1/2H + 1/2M   H
Hit + EarlyHit                       1/2H + 1/2M   H    1/2H + 1/2M   H
LateHit + EarlyHit                   2/3M + 1/3F   H    2/3M + 1/3F   H
LateHit + EarlyLowHit                1/2F + 1/2M   H    1/2F + 1/2M   H
LateHit + LowHit                     1/2F + 1/2M   H    1/2M + 1/2H   H
LateLowHit + EarlyHit                1/2F + 1/2M   H    1/2F + 1/2M   H
LateLowHit + LowHit                  F             F    H             H
EarlyHit + LowHit                    1/2F + 1/2M   H    1/2M + 1/2H   H
LowHit + EarlyLowHit                 F             F    H             H
LateLowHit + EarlyLowHit             F             F    F             H
Hit + LateLowHit + EarlyLowHit       H             H    H             H
Hit + LateHit + EarlyLowHit          1/2H + 1/2M   H    1/2H + 1/2M   H
Hit + LateLowHit + EarlyHit          1/2H + 1/2M   H    1/2H + 1/2M   H
Hit + LateHit + EarlyHit             2/3M + 1/3H   H    2/3M + 1/3H   H
LowHit + LateHit + EarlyHit          2/3M + 1/3F   H    2/3M + 1/3H   H
LowHit + LateHit + EarlyLowHit       1/2F + 1/2M   H    1/2M + 1/2H   H
LowHit + LateLowHit + EarlyHit       1/2F + 1/2M   H    1/2M + 1/2H   H
LowHit + LateLowHit + EarlyLowHit    F             F    H             H

Figure 5(b) shows the warning that was subsequently issued at 1433 on 23 July in northwest England; this warning was an attempt to pinpoint the location of the most extreme flash flooding. Unfortunately, this attempt did not succeed, as it did not include Nottingham, where the largest rainfall accumulation was reported. Figure 5 clearly illustrates the difficulty associated with issuing convective storm warnings; indeed, when using 2 × 2 categorical verification, situations like this often lead to double negative classifications (one miss and one false alarm). Consequently, forecasters are tempted to cut their losses and not issue a warning like the one illustrated by Figure 5(b); however, attempts at greater precision should be applauded in cases that might be described as a near-miss, rather than leaving a blanket widespread warning in place. Nevertheless, double counting due to spatial uncertainty will be an issue whenever the size of the warning area is relatively small compared with the spatial accuracy of the forecast, because whenever there is a spatial error, a chance exists that an event may occur outside a warned-for area. Indeed, the likelihood of this scenario increases as the size of the warning area decreases. If a warning is issued but an event occurs in a neighbouring area:
1. the warning will be categorized as a false alarm;
2. the event will be classified as a miss;
therefore, it is possible for a small spatial error to transform a hit into a double negative (a false alarm and a missed event). This issue has also been discussed in the context of high-resolution numerical model forecasts (Mittermaier and Roberts, 2010).

A solution to the spatial double-event problem

In general terms, the smaller the area, the harder it becomes to score a hit, because a slight spatial error can easily transform it into a double negative; it is easier to score a hit in a large area, where small positional errors have a less dramatic impact. Warning areas, however, are usually determined by non-meteorological criteria; consequently, their size and shape can vary greatly and are often small (compared with the spatial accuracy of the forecast).
Consequently, slight positional errors in the forecast, or in the truth data, could lead to many double negative event classifications. Therefore, to account for spatial discrepancies, it is appropriate to examine the truth data within a neighbourhood of each warning area. Figure 6 shows a 10 mm in 1 h heavy rainfall alert issued in the Environment Agency area of Shropshire on 1 October. Neither the event threshold nor the low event threshold (set at 8 mm in 1 h) was exceeded within the area; however, the event threshold was exceeded just 4 km to the south of Shropshire. Therefore, this warning was classified as a false alarm within the area and a hit at an extension of 4 km or more.

Figure 6. Example of a near spatial hit for a 10 mm in 1 h heavy rainfall alert issued in Shropshire (black); white and dark grey identify where the event threshold and low event threshold were exceeded.

Spatial flexing enables the performance to be examined relative to an extension (measured as a horizontal distance), and this information helps determine spatial accuracy. It is usually very difficult to generate accurate forecasts for particularly small areas; therefore, it may be appropriate to award some credit to near-hit events; however, large areas should not benefit from any reward.
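One simple way to realize spatial flexing against gridded truth data is a brute-force nearest-distance test, sketched below; the grid coordinates in kilometres and the reuse of the same area-confidence threshold over the extended neighbourhood are assumptions for illustration, not details taken from the WVS.

import numpy as np

def exceeds_within_extension(grid_x, grid_y, grid_values, area_mask,
                             event_threshold, extension_km, c=0.05):
    # Spatial flexing: was the event threshold reached, at area-confidence
    # c, anywhere within extension_km of the warning area?  grid_x and
    # grid_y hold grid-point coordinates in kilometres; area_mask is True
    # for points inside the (un-extended) warning area.
    in_x, in_y = grid_x[area_mask], grid_y[area_mask]
    # Distance from every grid point to its nearest in-area grid point
    # (brute force; adequate for a sketch, slow for very large grids).
    dx = grid_x[..., None] - in_x[None, None, :]
    dy = grid_y[..., None] - in_y[None, None, :]
    nearest = np.sqrt(dx ** 2 + dy ** 2).min(axis=-1)
    neighbourhood = nearest <= extension_km
    return np.mean(grid_values[neighbourhood] >= event_threshold) >= c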

Although information regarding the spatial accuracy is particularly interesting, it is inappropriate to reward spatial near-hits if the customer is sensitive to spatial errors. However, knowledge of the spatial accuracy capability of a warnings service will enable areas of an appropriate size to be established at the outset.

3. Performance measures

Particular care must be taken to ensure that each flex is correctly quantified, and this is achieved by the use of different scoring methods. It is also necessary to ensure that every event classification contributes a single score; for example, when using a scoring method that has no temporal flex (i.e. it does not recognize early hits and late hits), a hit + late hit + early hit is equivalent to one hit and two missed events. Normalization simply converts these three events into one event, by reporting one third of a hit and two thirds of a missed event.

Normalized scoring methods

Four different normalized scoring methods are possible:
1. the strict score (hereafter S_S) restricts the categories to hit, miss, false alarm and non-event, thereby penalizing every departure from the strict definition of a hit. The second column of Table 1 displays the strict score reward for each category;
2. the temporal score (hereafter S_T) extends a hit to include early hits and late hits; however, low hit types remain classified as false alarms. The third column of Table 1 displays the temporal score reward for each category;

3. the intensity score (hereafter S_I) extends a hit to include low hits; however, early and late hits remain classified as missed events. The fourth column of Table 1 displays the intensity score reward for each category;
4. the flexed score (hereafter S_F) combines the elements of S_T and S_I by using both temporal and intensity flexing. The fifth column of Table 1 displays the flexed score reward for each category.

In each case, the score awarded to each event follows immediately from the definition of the scoring method. S_S and S_F give lower and upper bounds on performance, whereas S_I and S_T give intermediate values. Uncertainty in the truth data means that it is important to calculate each score at numerous area-confidences.

The use of parameters to define a customer-oriented score

Each product may have a different verification requirement and, although it is very valuable to examine the effect of flexing on the performance, this information is often too confusing for customers, who often request a single headline score. When generating this customer-oriented score (S_C), it is often appropriate to include some reward from each flex. It is desirable for the level of reward for each flex to depend upon the specific product being verified and the priorities of the customer. Equation (1) defines S_C:

    S_C(e, c) = S_S(e, c) + p_t Δ_T(e, c) + p_i Δ_I(e, c)    (1)

where e denotes the area extension, c denotes the area-confidence, p_i and p_t are intensity and temporal flexing parameters, respectively, and

    Δ_T(e, c) = S_T(e, c) − S_S(e, c) and Δ_I(e, c) = S_I(e, c) − S_S(e, c).    (2)

Customer-oriented score: temporal flexing

In Equation (1), a proportion p_t of Δ_T(e, c) contributes towards S_C(e, c); in the majority of cases, it is appropriate to use

    p_t = (average warning period) / (average warning period + late hit period + average lead time)    (3)

as the default value for p_t, because this is the ratio of the average strict warning period to the average flexed warning period. Equation (3) directly relates any improvement to the measured performance, caused by temporal flexing, to the increase in the length of time during which a hit may be scored.

Customer-oriented score: intensity flexing

In Equation (1), a proportion p_i of Δ_I(e, c) contributes towards S_C(e, c); in the majority of cases, it is appropriate to use

    p_i = (low event threshold − non-event threshold) / (event threshold − non-event threshold)    (4)

as the default value of p_i, because this is the ratio of the difference between the low event threshold and the non-event threshold to the difference between the event threshold and the non-event threshold. Therefore, Equation (4) directly relates any improvement to the measured performance caused by intensity flexing to the increase in the range of values for which a hit may be scored.

In Table 1, low miss events and late issue low miss events are scored as missed events; therefore, because warnings are rare, it is likely that S_I(e, c) < S_S(e, c). The reason for flexing is to identify (and possibly reward) near-hit type events; it is not the intention to apply additional penalties to near-miss events; after all, near-miss events may indicate a high level of discriminatory skill. Therefore, it can be argued that every low miss/late issue low miss should be treated as a non-trivial non-event, in which case S_I(e, c) ≥ S_S(e, c).
However, if low misses and late low misses are treated as missed events, it is only appropriate to apply intensity flexing if S_I(e, c) > S_S(e, c).

Customer-oriented score: spatial flexing

If no event occurs within a warned-for area, the spatial accuracy may be ascertained by examining a neighbourhood around the area. S(0, c) awards a hit when an event occurs within the area, whereas S(e, c) awards a hit if an event occurs within e km of the area. It is appropriate that the contribution to S(e, c) made by the improvement caused by spatial flexing, defined here as p_s = p_s(a, e), satisfies p_s(a, 0) = 1, p_s(a, e) ≥ 0 and ∂p_s/∂e < 0 for all e, where a denotes the size of the warning area. Furthermore, to ensure that small areas benefit from more spatial flexing than larger areas, it is also appropriate that ∂p_s/∂a ≤ 0. An example formulation for p_s(a, e), which satisfies all these constraints, is given in the Appendix. Equation (5) shows how a generic performance statistic, denoted by S, is evaluated at an optimum extension e_o:

    S(a, e_o, c) = MAX over e = 1, 2, 3, … of [ S(a, 0, c) + p_s(a, e) (S(a, e, c) − S(a, 0, c)) ]    (5)

where S(a, e, c) denotes the value of the performance statistic when every event within e miles of a warning is verified as a hit.

Customer-oriented score: confidence flexing

Area-confidence thresholds (denoted by c) are used to evaluate warnings when errors in the truth data source are suspected. Unfortunately, it is likely that the most appropriate value for c will be event-dependent; therefore, whatever value is chosen, the resulting (1-month, 12-month or longer) performance will, almost certainly, contain incorrectly verified events. A warning service will be overly penalized if:
1. c is too low, because the number of missed events is exaggerated, or
2. c is too high, because the number of false alarms is exaggerated.
There are three obvious approaches to choosing an appropriate c:
1. give the forecaster the benefit of the doubt by assuming that the value of c giving the best performance minimizes the number of incorrectly verified events;
2. minimize dS/dc, which may result in a more accurate estimate for c, because performance that is overly sensitive to small changes in c could be an indication that c is incorrect and/or that the truth data contain gross errors;
3. simply pre-select a fixed value of c in advance.
These approaches are valid only for statistics that are generated over a long time period and involve (at least) hits, misses and false alarms. They are not valid for statistics that measure only one aspect of the performance (e.g. the hit rate or false alarm ratio), for uneventful periods or for individual events. Whatever approach is adopted, the final decision should (where possible) be made in consultation with the customer.
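Equations (1), (3) and (4) translate directly into a few lines of code; the sketch below is illustrative, with hypothetical function names, and assumes the component scores S_S, S_T and S_I have already been evaluated at the same extension and area-confidence.

def default_p_t(avg_warning_period, late_hit_period, avg_lead_time):
    # Equation (3): ratio of the average strict warning period to the
    # average flexed warning period.
    return avg_warning_period / (avg_warning_period + late_hit_period + avg_lead_time)

def default_p_i(event_threshold, low_event_threshold, non_event_threshold):
    # Equation (4): ratio of the low-threshold margin to the full margin
    # above the non-event threshold.
    return ((low_event_threshold - non_event_threshold)
            / (event_threshold - non_event_threshold))

def customer_score(s_strict, s_temporal, s_intensity, p_t, p_i):
    # Equation (1): S_C = S_S + p_t * (S_T - S_S) + p_i * (S_I - S_S),
    # with every score evaluated at the same extension e and
    # area-confidence c.
    return s_strict + p_t * (s_temporal - s_strict) + p_i * (s_intensity - s_strict)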

Performance statistics

To obtain a complete picture of the accuracy, S_S(e, c), S_T(e, c), S_I(e, c) and S_F(e, c) should be calculated for a range of values of e and c, thereby allowing performance to be examined as each flex is independently applied. To obtain a good understanding of the overall performance, it is appropriate to calculate various statistics; for deterministic warnings, these could include the equitable threat score (ETS) and/or the Symmetric Extremal Dependence Index (SEDI) (Ferro and Stephenson, 2011), together with the hit rate (POD), false alarm ratio (FAR) and false alarm rate (POFD). However, for probabilistic warnings, relative operating characteristic (ROC) plots and reliability diagrams (Jolliffe and Stephenson, 2012) are more appropriate.

4. Example

As part of its remit to the general public, the Met Office provides a National Severe Weather Warning (NSWW) Service to UK local authority areas. One of the weather types included in this service is wind gust, and Table 2 shows the verified performance of 70 mph wind gust warnings issued by operational meteorologists during 2010.

Table 2. NSWW (gales) performance results during 2010, as a function of area extension e (miles). The top section gives the number of occurrences of each category (Hit, Miss, False Alarm, LowHit, Hit + EarlyLowHit, Hit + EarlyHit, LowHit + EarlyLowHit, Late Issue Miss); the lower four sections give the resulting hit, miss and false alarm counts under each of S_S, S_T, S_I and S_F. (Numerical entries not recoverable.)

For completeness, the WVS verified these warnings using the area extensions in Table 2 for area-confidence thresholds between 1% and 5%. However, discussions with the customer led to the choice of an area-confidence threshold of 5%, together with a low event threshold of 63 mph (10% below the event threshold) and the treatment of low miss events and late low miss events as non-events. An average event length of 3 h (calculated from the wind gust climatology) was used to determine the lull time and the late hit period. Each row in the top section of Table 2 corresponds to a category in Table 1 (categories absent from the table did not occur during 2010). The numbers of hits, missed events and false alarms appear in the lower four sections of Table 2 for each of the scoring methods in Table 1. Using a non-event threshold of 28 mph (chosen to correspond to Beaufort Force 6), a total of … non-trivial non-events were recorded in the 147 county/unitary authority areas of the United Kingdom during 2010. This number was calculated by dividing the time between consecutive warnings and/or missed events into 3 h periods and classifying those that exceeded 28 mph as non-trivial non-events.

The columns of Table 2 show the results for values of e between 0 and 22 miles; they indicate that the categories most sensitive to spatial flexing are False Alarms and Hit + EarlyLowHits, which decrease from 12 to 2 and increase from 1 to 11, respectively. The decreasing number of False Alarms indicates that many events occur just outside a warning area, whereas the number of Hit + EarlyLowHits increases because an approaching gale reaches extended areas before un-extended ones. Interestingly, however, the total number of Hit + EarlyHits is unaffected, presumably because the majority of issued warnings began significantly before the event. The elements in the first column of the rows corresponding to S_S denote the starting point, prior to the application of flexing. These elements indicate that there were very few 70 mph wind gust events in 2010; indeed, there were only 16 hits, 25 missed events and 22 false alarms recorded.
Using this scoring method, any departure from the strict definition of a hit is penalized in terms of space (if the event occurs just outside the area), intensity (if the observed wind speed was just below 70 mph) and time (if 70 mph winds began just before the start or ended just after the end of the warning period). The columns of Table 2 show different area extensions; they reveal that the number of hits increases monotonically to 30 and the number of false alarms decreases monotonically to 8 as each area is extended (the number of missed events, however, is unaffected, because missed events indicate the absence of a warning). Clearly, the effect of spatial flexing on the measured performance is significant, indicating that the sizes of the warning areas are at (or beyond) the spatial accuracy of the forecast. On the other hand, the effect of temporal flexing, ascertained by comparing S_S with S_T, is minimal; this indicates that the start and end times of each warning are relatively accurate. However, the effect of intensity flexing, ascertained by comparing S_S with S_I, is more significant. This could be an indication that the warning service errs on the side of caution (by, in effect, using a warning threshold that is below 70 mph) and/or that the truth data fail to capture the most extreme gust speeds.

The effect of full flexing, ascertained by comparing S_S with S_F, is similar to that for intensity flexing, simply because temporal flexing exerts a minimal effect on the measured performance. Unfortunately, however, S_C cannot be evaluated directly, because it is not a scoring method; rather, it is applied to a derived statistic (such as the ETS, SEDI, etc.) using Equation (1). Therefore, choosing the ETS (defined as the proportion of correct forecasts, adjusted for hits expected due to random chance), expressed as

    ETS = (hits − h_r) / (hits + misses + false alarms − h_r), where h_r = (hits + misses)(hits + false alarms) / total,

gives ETS_S = 0.232, ETS_T = … and ETS_I = … when e = 0 and, using Equation (1),

    ETS_C(e, 5%) = ETS_S(e, 5%) + p_t Δ_T(e, 5%) + p_i Δ_I(e, 5%).    (6)

Figure 7 displays ETS_S, ETS_T, ETS_C and ETS_F for each area extension between 0 and 22 miles. ETS_S (denoted by black) improves significantly, from 0.232 to 0.500, as e increases to 14 miles, after which further extension causes minimal change. This indicates that the size of the areas is small compared with the accuracy of the forecast. The minimal improvement to the ETS caused by temporal flexing (denoted by grey) is invariant with respect to e; this indicates that expanding the size of each area had little effect on the measured performance of the warnings that contained a timing error. The greatest improvement to the ETS caused by intensity flexing (denoted by white) occurs when e = 0; however, increasing e from 0 to 2 causes a sudden decrease in Δ_I, which is offset by a similar increase in ETS_S. This indicates that warnings were issued in areas where strong gust speeds occurred, but stronger gust speeds (≥70 mph) were observed within 2 miles. ETS_C (the horizontal line within the white section) is calculated using Equation (6), where the customer has chosen p_i = 0.9 and p_t = 0.5; consequently, ETS_C is composed of 90% of the white section, 50% of the grey section and 100% of the black section and is, therefore, closely correlated with ETS_F.

Figure 7. The equitable threat score evaluated for c = 0.05, as a function of area extension e (miles), for all NSWW (gales) issued during 2010. The stacked columns show increasing values of ETS as a function of spatial flex, with ETS_S (black), ETS_T (grey), ETS_C (intermediate black line) and finally ETS_F.

The remaining question to answer is which value of e is the most appropriate; the answer is given by Equation (5), using the expression for p_s given by Equation (A.1). This expression is particularly useful when (as in this particular case) the sizes of the warning areas vary greatly, because this p_s relates the level of reward to the ratio of the extension to the original area size, thereby giving the same reward to a large extension of a small area as to a small extension of a large area. Using Equation (A.1) (with α = 1), ETS_C(e_o) = … (where e_o is the optimal extension) and, as ETS_C(6) = … and ETS_C(8) = 0.556, a linear relationship implies that e_o = 6.7 miles for this choice of p_s.
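For reference, the ETS calculation quoted above can be written as follows; the non-event count used in the example call is a hypothetical placeholder, because the true total is not recoverable here.

def equitable_threat_score(hits, misses, false_alarms, non_events):
    # ETS = (hits - h_r) / (hits + misses + false_alarms - h_r), where
    # h_r = (hits + misses) * (hits + false_alarms) / total is the number
    # of hits expected by chance.
    total = hits + misses + false_alarms + non_events
    h_r = (hits + misses) * (hits + false_alarms) / total
    return (hits - h_r) / (hits + misses + false_alarms - h_r)

# Strict counts at e = 0 from the 2010 example; the non-event count is an
# illustrative placeholder, not the value used in the paper.
print(equitable_threat_score(hits=16, misses=25, false_alarms=22, non_events=800))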
5. Summary

Warning services have been an integral part of many meteorological organizations since the dawn of weather forecasting in the 18th Century. Unfortunately, although conceptually simple, they often cannot be adequately verified using a traditional observing network and a simple 2 × 2 categorical approach. A lack of observations, combined with customer pressure to issue warnings in ever smaller areas, often leads to disappointing results. The flexible approach to warning verification described in this paper is one way to address this issue. Flexibility enables the sensitivity of a warnings service to be examined with respect to time, intensity, space and confidence. This is particularly valuable when the verifying truth data source contains errors, or when the demands of the customer are at (or beyond) the limit of forecast accuracy. Therefore, the Met Office has developed a generic, flexible WVS to verify both area- and site-specific warning services by uniquely categorizing every type of near-hit event. The use of gridded nowcast model analyses eliminates the coverage problems encountered when verifying area-based warnings against an observing network, and problems associated with accuracy are addressed by repeating the verification using various area-confidence thresholds. At first glance, the flexible approach to warning verification appears to involve various arbitrary choices; however, in practice, these choices enable appropriate decisions to be made for individual products, thereby enabling a truly generic approach. Therefore, when presented with a particular warning service/customer, these seemingly arbitrary choices become valuable degrees of freedom within a flexible methodology.

Appendix

When spatial flexing is appropriate, the Met Office currently uses Equation (A.1) to determine the proportion of the improvement contributing towards S_C(e_o, c):

    p_s(a, e) = 1,                              if e = 0
    p_s(a, e) = 1/2 [1 − tanh(π(2x − 1))],      otherwise    (A.1)

where x = π (a^(1/2) π^(−1/2) + e)^2 (π(αr)^2)^(−1), e is the warning area extension, r is the mean warning area radius, a is the size of the warning area and α is a parameter defined by the customer. This definition ensures that p_s(a, 0) = 1, ∂p_s(a, e)/∂e < 0 for all e, p_s(a, e) ≥ 0 and r = (1/N) Σ_i a_i^(1/2) π^(−1/2), where N is the number of warning areas. Equation (A.1) ensures that larger areas, where it is easier to score a hit, do not benefit from extension, whereas smaller areas, where it is more difficult to score a hit, do benefit; in other words, for any given extension, p_s is larger for areas smaller than the mean than for areas larger than the mean.
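A sketch of the reconstructed Equation (A.1), together with the optimum-extension search of Equation (5), is given below; because Equation (A.1) itself is reconstructed from a damaged source, this should be read as an illustration of the intended behaviour rather than the exact Met Office formulation.

import numpy as np

def p_s(area, extension, mean_radius, alpha=1.0):
    # Spatial-flexing weight following the reconstructed Equation (A.1):
    # full weight at zero extension, decaying as the extended effective
    # radius grows relative to (alpha times) the mean warning-area radius.
    if extension == 0:
        return 1.0
    x = (np.pi * (np.sqrt(area / np.pi) + extension) ** 2
         / (np.pi * (alpha * mean_radius) ** 2))
    return 0.5 * (1.0 - np.tanh(np.pi * (2.0 * x - 1.0)))

def optimum_extension_score(scores_by_extension, area, mean_radius, alpha=1.0):
    # Equation (5): maximise S(a,0,c) + p_s(a,e) * (S(a,e,c) - S(a,0,c))
    # over the available extensions.  scores_by_extension maps each
    # extension e (in the same units as mean_radius) to the statistic
    # S(a,e,c) and must include e = 0.
    s0 = scores_by_extension[0]
    return max(s0 + p_s(area, e, mean_radius, alpha) * (s - s0)
               for e, s in scores_by_extension.items())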


More information

Application and verification of ECMWF products 2013

Application and verification of ECMWF products 2013 Application and verification of EMWF products 2013 Hellenic National Meteorological Service (HNMS) Flora Gofa and Theodora Tzeferi 1. Summary of major highlights In order to determine the quality of the

More information

Guidance on Aeronautical Meteorological Observer Competency Standards

Guidance on Aeronautical Meteorological Observer Competency Standards Guidance on Aeronautical Meteorological Observer Competency Standards The following guidance is supplementary to the AMP competency Standards endorsed by Cg-16 in Geneva in May 2011. Format of the Descriptions

More information

Implementation of global surface index at the Met Office. Submitted by Marion Mittermaier. Summary and purpose of document

Implementation of global surface index at the Met Office. Submitted by Marion Mittermaier. Summary and purpose of document WORLD METEOROLOGICAL ORGANIZATION COMMISSION FOR BASIC SYSTEMS OPAG on DPFS MEETING OF THE CBS (DPFS) TASK TEAM ON SURFACE VERIFICATION GENEVA, SWITZERLAND 20-21 OCTOBER 2014 DPFS/TT-SV/Doc. 4.1a (X.IX.2014)

More information

Forecasting the "Beast from the East" and Storm Emma

Forecasting the Beast from the East and Storm Emma Forecasting the "Beast from the East" and Storm Emma Ken Mylne and Rob Neal with contributions from several scientists across the Met Office ECMWF UEF Meeting, 5-8 June 2018 Beast started 24 Feb Emma reached

More information

Model Output Statistics (MOS)

Model Output Statistics (MOS) Model Output Statistics (MOS) Numerical Weather Prediction (NWP) models calculate the future state of the atmosphere at certain points of time (forecasts). The calculation of these forecasts is based on

More information

Accounting for the effect of observation errors on verification of MOGREPS

Accounting for the effect of observation errors on verification of MOGREPS METEOROLOGICAL APPLICATIONS Meteorol. Appl. 15: 199 205 (2008) Published online in Wiley InterScience (www.interscience.wiley.com).64 Accounting for the effect of observation errors on verification of

More information

Denver International Airport MDSS Demonstration Verification Report for the Season

Denver International Airport MDSS Demonstration Verification Report for the Season Denver International Airport MDSS Demonstration Verification Report for the 2015-2016 Season Prepared by the University Corporation for Atmospheric Research Research Applications Division (RAL) Seth Linden

More information

REPORT ON APPLICATIONS OF EPS FOR SEVERE WEATHER FORECASTING

REPORT ON APPLICATIONS OF EPS FOR SEVERE WEATHER FORECASTING WORLD METEOROLOGICAL ORGANIZATION COMMISSION FOR BASIC SYSTEMS OPAG DPFS EXPERT TEAM ON ENSEMBLE PREDICTION SYSTEMS CBS-DPFS/EPS/Doc. 7(2) (31.I.2006) Item: 7 ENGLISH ONLY EXETER, UNITED KINGDOM 6-10 FEBRUARY

More information

Implementation Guidance of Aeronautical Meteorological Observer Competency Standards

Implementation Guidance of Aeronautical Meteorological Observer Competency Standards Implementation Guidance of Aeronautical Meteorological Observer Competency Standards The following guidance is supplementary to the AMP competency Standards endorsed by Cg-16 in Geneva in May 2011. Please

More information

Application and verification of ECMWF products: 2010

Application and verification of ECMWF products: 2010 Application and verification of ECMWF products: 2010 Hellenic National Meteorological Service (HNMS) F. Gofa, D. Tzeferi and T. Charantonis 1. Summary of major highlights In order to determine the quality

More information

A Comparison of Tornado Warning Lead Times with and without NEXRAD Doppler Radar

A Comparison of Tornado Warning Lead Times with and without NEXRAD Doppler Radar MARCH 1996 B I E R I N G E R A N D R A Y 47 A Comparison of Tornado Warning Lead Times with and without NEXRAD Doppler Radar PAUL BIERINGER AND PETER S. RAY Department of Meteorology, The Florida State

More information

CMO Terminal Aerodrome Forecast (TAF) Verification Programme (CMOTafV)

CMO Terminal Aerodrome Forecast (TAF) Verification Programme (CMOTafV) CMO Terminal Aerodrome Forecast (TAF) Verification Programme (CMOTafV) Kathy-Ann Caesar Meteorologist Caribbean Meteorological Council - 47 St. Vincent, 2007 CMOTafV TAF Verification Programme Project:

More information

How advances in atmospheric modelling are used for improved flood forecasting. Dr Michaela Bray Cardiff University

How advances in atmospheric modelling are used for improved flood forecasting. Dr Michaela Bray Cardiff University How advances in atmospheric modelling are used for improved flood forecasting Dr Michaela Bray Cardiff University Overview of current short term rainfall forecasting Advancements and on going research

More information

Verification of Probability Forecasts

Verification of Probability Forecasts Verification of Probability Forecasts Beth Ebert Bureau of Meteorology Research Centre (BMRC) Melbourne, Australia 3rd International Verification Methods Workshop, 29 January 2 February 27 Topics Verification

More information

Improving real time observation and nowcasting RDT. E de Coning, M Gijben, B Maseko and L van Hemert Nowcasting and Very Short Range Forecasting

Improving real time observation and nowcasting RDT. E de Coning, M Gijben, B Maseko and L van Hemert Nowcasting and Very Short Range Forecasting Improving real time observation and nowcasting RDT E de Coning, M Gijben, B Maseko and L van Hemert Nowcasting and Very Short Range Forecasting Introduction Satellite Application Facilities (SAFs) are

More information

Global Flash Flood Forecasting from the ECMWF Ensemble

Global Flash Flood Forecasting from the ECMWF Ensemble Global Flash Flood Forecasting from the ECMWF Ensemble Calumn Baugh, Toni Jurlina, Christel Prudhomme, Florian Pappenberger calum.baugh@ecmwf.int ECMWF February 14, 2018 Building a Global FF System 1.

More information

Nesting and LBCs, Predictability and EPS

Nesting and LBCs, Predictability and EPS Nesting and LBCs, Predictability and EPS Terry Davies, Dynamics Research, Met Office Nigel Richards, Neill Bowler, Peter Clark, Caroline Jones, Humphrey Lean, Ken Mylne, Changgui Wang copyright Met Office

More information

Application and verification of ECMWF products at the Finnish Meteorological Institute

Application and verification of ECMWF products at the Finnish Meteorological Institute Application and verification of ECMWF products 2010 2011 at the Finnish Meteorological Institute by Juhana Hyrkkènen, Ari-Juhani Punkka, Henri Nyman and Janne Kauhanen 1. Summary of major highlights ECMWF

More information

Extracting probabilistic severe weather guidance from convection-allowing model forecasts. Ryan Sobash 4 December 2009 Convection/NWP Seminar Series

Extracting probabilistic severe weather guidance from convection-allowing model forecasts. Ryan Sobash 4 December 2009 Convection/NWP Seminar Series Extracting probabilistic severe weather guidance from convection-allowing model forecasts Ryan Sobash 4 December 2009 Convection/NWP Seminar Series Identification of severe convection in high-resolution

More information

Complimentary assessment of forecast performance with climatological approaches

Complimentary assessment of forecast performance with climatological approaches Complimentary assessment of forecast performance with climatological approaches F.Gofa, V. Fragkouli, D.Boucouvala The use of SEEPS with metrics that focus on extreme events, such as the Symmetric Extremal

More information

National Meteorological Library and Archive

National Meteorological Library and Archive National Meteorological Library and Archive Fact sheet No. 4 Climate of the United Kingdom Causes of the weather in the United Kingdom The United Kingdom lies in the latitude of predominately westerly

More information

Severe Weather Watches, Advisories & Warnings

Severe Weather Watches, Advisories & Warnings Severe Weather Watches, Advisories & Warnings Tornado Watch Issued by the Storm Prediction Center when conditions are favorable for the development of severe thunderstorms and tornadoes over a larger-scale

More information

Verification of Space Weather Forecasts issued by the Met Office Space Weather Operations Centre

Verification of Space Weather Forecasts issued by the Met Office Space Weather Operations Centre Verification of Space Weather Forecasts issued by the Met Office Space Weather Operations Centre M. A. Sharpe 1, S. A. Murray 2 1 Met Office, UK. 2 Trinity College Dublin, Ireland. (michael.sharpe@metoffice.gov.uk)

More information

Ensemble Verification Metrics

Ensemble Verification Metrics Ensemble Verification Metrics Debbie Hudson (Bureau of Meteorology, Australia) ECMWF Annual Seminar 207 Acknowledgements: Beth Ebert Overview. Introduction 2. Attributes of forecast quality 3. Metrics:

More information

Advances in weather and climate science

Advances in weather and climate science Advances in weather and climate science Second ICAO Global Air Navigation Industry Symposium (GANIS/2) 11 to 13 December 2017, Montreal, Canada GREG BROCK Scientific Officer Aeronautical Meteorology Division

More information

Understanding Weather and Climate Risk. Matthew Perry Sharing an Uncertain World Conference The Geological Society, 13 July 2017

Understanding Weather and Climate Risk. Matthew Perry Sharing an Uncertain World Conference The Geological Society, 13 July 2017 Understanding Weather and Climate Risk Matthew Perry Sharing an Uncertain World Conference The Geological Society, 13 July 2017 What is risk in a weather and climate context? Hazard: something with the

More information

Verification of ensemble and probability forecasts

Verification of ensemble and probability forecasts Verification of ensemble and probability forecasts Barbara Brown NCAR, USA bgb@ucar.edu Collaborators: Tara Jensen (NCAR), Eric Gilleland (NCAR), Ed Tollerud (NOAA/ESRL), Beth Ebert (CAWCR), Laurence Wilson

More information

Application and verification of ECMWF products 2015

Application and verification of ECMWF products 2015 Application and verification of ECMWF products 2015 Instituto Português do Mar e da Atmosfera, I.P. 1. Summary of major highlights At Instituto Português do Mar e da Atmosfera (IPMA) ECMWF products are

More information

Montréal, 7 to 18 July 2014

Montréal, 7 to 18 July 2014 INTERNATIONAL CIVIL AVIATION ORGANIZATION WORLD METEOROLOGICAL ORGANIZATION 6/5/14 Meteorology (MET) Divisional Meeting (2014) Commission for Aeronautical Meteorology Fifteenth Session Montréal, 7 to 18

More information

Strategic Radar Enhancement Project (SREP) Forecast Demonstration Project (FDP) The future is here and now

Strategic Radar Enhancement Project (SREP) Forecast Demonstration Project (FDP) The future is here and now Strategic Radar Enhancement Project (SREP) Forecast Demonstration Project (FDP) The future is here and now Michael Berechree National Manager Aviation Weather Services Australian Bureau of Meteorology

More information

Past & Future Services

Past & Future Services Past & Future Services and Integration with Emergency Responders Graeme Leitch UKMO Public Weather Service The PWS provides a coherent range of weather information and weatherrelated warnings that enable

More information

Meteorological vigilance An operational tool for early warning

Meteorological vigilance An operational tool for early warning Meteorological vigilance An operational tool for early warning Jean-Marie Carrière Deputy-director of Forecasting http://www.meteo.fr The French meteorological vigilance procedure Context Routine working

More information

Application and verification of ECMWF products 2017

Application and verification of ECMWF products 2017 Application and verification of ECMWF products 2017 Finnish Meteorological Institute compiled by Weather and Safety Centre with help of several experts 1. Summary of major highlights FMI s forecasts are

More information

Nowcasting for the London Olympics 2012 Brian Golding, Susan Ballard, Nigel Roberts & Ken Mylne Met Office, UK. Crown copyright Met Office

Nowcasting for the London Olympics 2012 Brian Golding, Susan Ballard, Nigel Roberts & Ken Mylne Met Office, UK. Crown copyright Met Office Nowcasting for the London Olympics 2012 Brian Golding, Susan Ballard, Nigel Roberts & Ken Mylne Met Office, UK Outline Context MOGREPS-UK AQUM Weymouth Bay models Summary Forecasting System Generic Products

More information

The benefits and developments in ensemble wind forecasting

The benefits and developments in ensemble wind forecasting The benefits and developments in ensemble wind forecasting Erik Andersson Slide 1 ECMWF European Centre for Medium-Range Weather Forecasts Slide 1 ECMWF s global forecasting system High resolution forecast

More information

VERFICATION OF OCEAN WAVE ENSEMBLE FORECAST AT NCEP 1. Degui Cao, H.S. Chen and Hendrik Tolman

VERFICATION OF OCEAN WAVE ENSEMBLE FORECAST AT NCEP 1. Degui Cao, H.S. Chen and Hendrik Tolman VERFICATION OF OCEAN WAVE ENSEMBLE FORECAST AT NCEP Degui Cao, H.S. Chen and Hendrik Tolman NOAA /National Centers for Environmental Prediction Environmental Modeling Center Marine Modeling and Analysis

More information

EVALUATION AND VERIFICATION OF PUBLIC WEATHER SERVICES. Pablo Santos Meteorologist In Charge National Weather Service Miami, FL

EVALUATION AND VERIFICATION OF PUBLIC WEATHER SERVICES. Pablo Santos Meteorologist In Charge National Weather Service Miami, FL EVALUATION AND VERIFICATION OF PUBLIC WEATHER SERVICES Pablo Santos Meteorologist In Charge National Weather Service Miami, FL WHAT IS THE MAIN DIFFERENCE BETWEEN A GOVERNMENT WEATHER SERVICE FORECAST

More information

Predictability from a Forecast Provider s Perspective

Predictability from a Forecast Provider s Perspective Predictability from a Forecast Provider s Perspective Ken Mylne Met Office, Bracknell RG12 2SZ, UK. email: ken.mylne@metoffice.com 1. Introduction Predictability is not a new issue for forecasters or forecast

More information

Forecast Verification Analysis of Rainfall for Southern Districts of Tamil Nadu, India

Forecast Verification Analysis of Rainfall for Southern Districts of Tamil Nadu, India International Journal of Current Microbiology and Applied Sciences ISSN: 2319-7706 Volume 6 Number 5 (2017) pp. 299-306 Journal homepage: http://www.ijcmas.com Original Research Article https://doi.org/10.20546/ijcmas.2017.605.034

More information

Aurora Bell*, Alan Seed, Ross Bunn, Bureau of Meteorology, Melbourne, Australia

Aurora Bell*, Alan Seed, Ross Bunn, Bureau of Meteorology, Melbourne, Australia 15B.1 RADAR RAINFALL ESTIMATES AND NOWCASTS: THE CHALLENGING ROAD FROM RESEARCH TO WARNINGS Aurora Bell*, Alan Seed, Ross Bunn, Bureau of Meteorology, Melbourne, Australia 1. Introduction Warnings are

More information

Using time-lag ensemble techniques to assess behaviour of high-resolution precipitation forecasts

Using time-lag ensemble techniques to assess behaviour of high-resolution precipitation forecasts Using time-lag ensemble techniques to assess behaviour of high-resolution precipitation forecasts Marion Mittermaier 3 rd Int l Verification Methods Workshop, ECMWF, 31/01/2007 Crown copyright Page 1 Outline

More information

The WMO Global Basic Observing Network (GBON)

The WMO Global Basic Observing Network (GBON) The WMO Global Basic Observing Network (GBON) A WIGOS approach to securing observational data for critical global weather and climate applications Robert Varley and Lars Peter Riishojgaard, WMO Secretariat,

More information

At the start of the talk will be a trivia question. Be prepared to write your answer.

At the start of the talk will be a trivia question. Be prepared to write your answer. Operational hydrometeorological forecasting activities of the Australian Bureau of Meteorology Thomas Pagano At the start of the talk will be a trivia question. Be prepared to write your answer. http://scottbridle.com/

More information

Categorical Verification

Categorical Verification Forecast M H F Observation Categorical Verification Tina Kalb Contributions from Tara Jensen, Matt Pocernich, Eric Gilleland, Tressa Fowler, Barbara Brown and others Finley Tornado Data (1884) Forecast

More information

Spatial forecast verification

Spatial forecast verification Spatial forecast verification Manfred Dorninger University of Vienna Vienna, Austria manfred.dorninger@univie.ac.at Thanks to: B. Ebert, B. Casati, C. Keil 7th Verification Tutorial Course, Berlin, 3-6

More information

Comparison of the NCEP and DTC Verification Software Packages

Comparison of the NCEP and DTC Verification Software Packages Comparison of the NCEP and DTC Verification Software Packages Point of Contact: Michelle Harrold September 2011 1. Introduction The National Centers for Environmental Prediction (NCEP) and the Developmental

More information

Verification of nowcasts and short-range forecasts, including aviation weather

Verification of nowcasts and short-range forecasts, including aviation weather Verification of nowcasts and short-range forecasts, including aviation weather Barbara Brown NCAR, Boulder, Colorado, USA WMO WWRP 4th International Symposium on Nowcasting and Very-short-range Forecast

More information

SNOW COVER MAPPING USING METOP/AVHRR AND MSG/SEVIRI

SNOW COVER MAPPING USING METOP/AVHRR AND MSG/SEVIRI SNOW COVER MAPPING USING METOP/AVHRR AND MSG/SEVIRI Niilo Siljamo, Markku Suomalainen, Otto Hyvärinen Finnish Meteorological Institute, P.O.Box 503, FI-00101 Helsinki, Finland Abstract Weather and meteorological

More information

Helen Titley and Rob Neal

Helen Titley and Rob Neal Processing ECMWF ENS and MOGREPS-G ensemble forecasts to highlight the probability of severe extra-tropical cyclones: Storm Doris UEF 2017, 12-16 June 2017, ECMWF, Reading, U.K. Helen Titley and Rob Neal

More information

From Hazards to Impact: Experiences from the Hazard Impact Modelling project

From Hazards to Impact: Experiences from the Hazard Impact Modelling project The UK s trusted voice for coordinated natural hazards advice From Hazards to Impact: Experiences from the Hazard Impact Modelling project Becky Hemingway, Met Office ECMWF UEF 2017, 14 th June 2017 [People]

More information

OBJECTIVE CALIBRATED WIND SPEED AND CROSSWIND PROBABILISTIC FORECASTS FOR THE HONG KONG INTERNATIONAL AIRPORT

OBJECTIVE CALIBRATED WIND SPEED AND CROSSWIND PROBABILISTIC FORECASTS FOR THE HONG KONG INTERNATIONAL AIRPORT P 333 OBJECTIVE CALIBRATED WIND SPEED AND CROSSWIND PROBABILISTIC FORECASTS FOR THE HONG KONG INTERNATIONAL AIRPORT P. Cheung, C. C. Lam* Hong Kong Observatory, Hong Kong, China 1. INTRODUCTION Wind is

More information

Forecasting Extreme Events

Forecasting Extreme Events Forecasting Extreme Events Ivan Tsonevsky, ivan.tsonevsky@ecmwf.int Slide 1 Outline Introduction How can we define what is extreme? - Model climate (M-climate); The Extreme Forecast Index (EFI) Use and

More information

Confronting Climate Change in the Great Lakes Region. Technical Appendix Climate Change Projections EXTREME EVENTS

Confronting Climate Change in the Great Lakes Region. Technical Appendix Climate Change Projections EXTREME EVENTS Confronting Climate Change in the Great Lakes Region Technical Appendix Climate Change Projections EXTREME EVENTS Human health and well-being, as well as energy requirements, building standards, agriculture

More information

Communicating uncertainty from short-term to seasonal forecasting

Communicating uncertainty from short-term to seasonal forecasting Communicating uncertainty from short-term to seasonal forecasting MAYBE NO YES Jay Trobec KELO-TV Sioux Falls, South Dakota USA TV weather in the US Most TV weather presenters have university degrees and

More information

Monthly probabilistic drought forecasting using the ECMWF Ensemble system

Monthly probabilistic drought forecasting using the ECMWF Ensemble system Monthly probabilistic drought forecasting using the ECMWF Ensemble system Christophe Lavaysse(1) J. Vogt(1), F. Pappenberger(2) and P. Barbosa(1) (1) European Commission (JRC-IES), Ispra Italy (2) ECMWF,

More information

TIFS DEVELOPMENTS INSPIRED BY THE B08 FDP. John Bally, A. J. Bannister, and D. Scurrah Bureau of Meteorology, Melbourne, Victoria, Australia

TIFS DEVELOPMENTS INSPIRED BY THE B08 FDP. John Bally, A. J. Bannister, and D. Scurrah Bureau of Meteorology, Melbourne, Victoria, Australia P13B.11 TIFS DEVELOPMENTS INSPIRED BY THE B08 FDP John Bally, A. J. Bannister, and D. Scurrah Bureau of Meteorology, Melbourne, Victoria, Australia 1. INTRODUCTION This paper describes the developments

More information

FLORA: FLood estimation and forecast in complex Orographic areas for Risk mitigation in the Alpine space

FLORA: FLood estimation and forecast in complex Orographic areas for Risk mitigation in the Alpine space Natural Risk Management in a changing climate: Experiences in Adaptation Strategies from some European Projekts Milano - December 14 th, 2011 FLORA: FLood estimation and forecast in complex Orographic

More information

Application and verification of ECMWF products 2010

Application and verification of ECMWF products 2010 Application and verification of ECMWF products 2010 Icelandic Meteorological Office (www.vedur.is) Guðrún Nína Petersen 1. Summary of major highlights Medium range weather forecasts issued at IMO are mainly

More information

Judit Kerényi. OMSZ-Hungarian Meteorological Service P.O.Box 38, H-1525, Budapest Hungary Abstract

Judit Kerényi. OMSZ-Hungarian Meteorological Service P.O.Box 38, H-1525, Budapest Hungary Abstract Comparison of the precipitation products of Hydrology SAF with the Convective Rainfall Rate of Nowcasting-SAF and the Multisensor Precipitation Estimate of EUMETSAT Judit Kerényi OMSZ-Hungarian Meteorological

More information

Application and verification of ECMWF products 2009

Application and verification of ECMWF products 2009 Application and verification of ECMWF products 2009 Icelandic Meteorological Office (www.vedur.is) Gu rún Nína Petersen 1. Summary of major highlights Medium range weather forecasts issued at IMO are mainly

More information

National Meteorological Library and Archive

National Meteorological Library and Archive National Meteorological Library and Archive Fact sheet No. 4 Climate of the United Kingdom Causes of the weather in the United Kingdom The United Kingdom lies in the latitude of predominately westerly

More information

Verification of ECMWF products at the Finnish Meteorological Institute

Verification of ECMWF products at the Finnish Meteorological Institute Verification of ECMWF products at the Finnish Meteorological Institute by Juha Kilpinen, Pertti Nurmi and Matias Brockmann 1. Summary of major highlights The new verification system is under pre-operational

More information

DATA FUSION NOWCASTING AND NWP

DATA FUSION NOWCASTING AND NWP DATA FUSION NOWCASTING AND NWP Brovelli Pascal 1, Ludovic Auger 2, Olivier Dupont 1, Jean-Marc Moisselin 1, Isabelle Bernard-Bouissières 1, Philippe Cau 1, Adrien Anquez 1 1 Météo-France Forecasting Department

More information

138 ANALYSIS OF FREEZING RAIN PATTERNS IN THE SOUTH CENTRAL UNITED STATES: Jessica Blunden* STG, Inc., Asheville, North Carolina

138 ANALYSIS OF FREEZING RAIN PATTERNS IN THE SOUTH CENTRAL UNITED STATES: Jessica Blunden* STG, Inc., Asheville, North Carolina 138 ANALYSIS OF FREEZING RAIN PATTERNS IN THE SOUTH CENTRAL UNITED STATES: 1979 2009 Jessica Blunden* STG, Inc., Asheville, North Carolina Derek S. Arndt NOAA National Climatic Data Center, Asheville,

More information

Upscaled and fuzzy probabilistic forecasts: verification results

Upscaled and fuzzy probabilistic forecasts: verification results 4 Predictability and Ensemble Methods 124 Upscaled and fuzzy probabilistic forecasts: verification results Zied Ben Bouallègue Deutscher Wetterdienst (DWD), Frankfurter Str. 135, 63067 Offenbach, Germany

More information

Operational MRCC Tools Useful and Usable by the National Weather Service

Operational MRCC Tools Useful and Usable by the National Weather Service Operational MRCC Tools Useful and Usable by the National Weather Service Vegetation Impact Program (VIP): Frost / Freeze Project Beth Hall Accumulated Winter Season Severity Index (AWSSI) Steve Hilberg

More information

WMO Priorities and Perspectives on IPWG

WMO Priorities and Perspectives on IPWG WMO Priorities and Perspectives on IPWG Stephan Bojinski WMO Space Programme IPWG-6, São José dos Campos, Brazil, 15-19 October 2012 1. Introduction to WMO Extended Abstract The World Meteorological Organization

More information

Basic Verification Concepts

Basic Verification Concepts Basic Verification Concepts Barbara Brown National Center for Atmospheric Research Boulder Colorado USA bgb@ucar.edu May 2017 Berlin, Germany Basic concepts - outline What is verification? Why verify?

More information

CHARACTERISATION OF STORM SEVERITY BY USE OF SELECTED CONVECTIVE CELL PARAMETERS DERIVED FROM SATELLITE DATA

CHARACTERISATION OF STORM SEVERITY BY USE OF SELECTED CONVECTIVE CELL PARAMETERS DERIVED FROM SATELLITE DATA CHARACTERISATION OF STORM SEVERITY BY USE OF SELECTED CONVECTIVE CELL PARAMETERS DERIVED FROM SATELLITE DATA Piotr Struzik Institute of Meteorology and Water Management, Satellite Remote Sensing Centre

More information

NWS Resources For Public Works

NWS Resources For Public Works NWS Resources For Public Works August 28th, 2016 Shawn DeVinny shawn.devinny@noaa.gov Meteorologist National Weather Service Twin Cities/Chanhassen, MN 1 APWA 2016 PWX 8/28/2016 National Weather Service

More information

Xinhua Liu National Meteorological Center (NMC) of China Meteorological Administration (CMA)

Xinhua Liu National Meteorological Center (NMC) of China Meteorological Administration (CMA) The short-time forecasting and nowcasting technology of severe convective weather for aviation meteorological services in China Xinhua Liu National Meteorological Center (NMC) of China Meteorological Administration

More information

The Latest Science of Seasonal Climate Forecasting

The Latest Science of Seasonal Climate Forecasting The Latest Science of Seasonal Climate Forecasting Emily Wallace Met Office 7 th June 2018 Research and Innovation Program under Grant 776868. Agreement Background: - Why are they useful? - What do we

More information

Spatial verification of NWP model fields. Beth Ebert BMRC, Australia

Spatial verification of NWP model fields. Beth Ebert BMRC, Australia Spatial verification of NWP model fields Beth Ebert BMRC, Australia WRF Verification Toolkit Workshop, Boulder, 21-23 February 2007 New approaches are needed to quantitatively evaluate high resolution

More information

Application and verification of ECMWF products 2009

Application and verification of ECMWF products 2009 Application and verification of ECMWF products 2009 Hungarian Meteorological Service 1. Summary of major highlights The objective verification of ECMWF forecasts have been continued on all the time ranges

More information

Convection Nowcasting Products Available at the Army Test and Evaluation Command (ATEC) Ranges

Convection Nowcasting Products Available at the Army Test and Evaluation Command (ATEC) Ranges Convection Nowcasting Products Available at the Army Test and Evaluation Command (ATEC) Ranges Cathy Kessinger National Center for Atmospheric Research (NCAR), Boulder, CO with contributions from: Wiebke

More information

UPDATE OF REGIONAL WEATHER AND SMOKE HAZE FOR MAY 2015

UPDATE OF REGIONAL WEATHER AND SMOKE HAZE FOR MAY 2015 UPDATE OF REGIONAL WEATHER AND SMOKE HAZE FOR MAY 2015 1. Review of Regional Weather Conditions in April 2015 1.1 Inter-Monsoon conditions prevailed over the ASEAN region in April 2015. The gradual northward

More information