Automated Precipitation Detection and Typing in Winter: A Two-Year Study


Automated Precipitation Detection and Typing in Winter: A Two-Year Study

B. E. SHEPPARD AND P. I. JOE
Meteorological Service of Canada, Downsview, Ontario, Canada

(Manuscript received May 1999, in final form January 2000)

ABSTRACT

Precipitation detection and typing estimates from four sensors are evaluated using standard operational meteorological observations as a reference. All are active remote sensors radiating into a measurement volume near the sensor and measuring some property of the scattered radiation. Three of the sensors use optical wavelengths and one uses microwave wavelengths. A new analysis approach for comparison of the time series from the observer and instruments is presented. This approach reduces the mismatch of the different sampling intervals and response times of humans and instruments when observing precipitation events. The algorithm postprocesses the minutely output estimates from the sensors in a window of time around the nominal time of the human observation. The paper examines the relationship between the probability of detection and false alarm ratio for these sensors. In order to represent the trade-off between these parameters, the Heidke skill score is used as a figure of merit in comparing performance. A statistical method is presented to test for significance in differences of this score between sensors. Each of the sensors demonstrated skill in identifying and typing the precipitation. The window analysis showed improved scores compared to simultaneous comparisons. The difference is attributed to the analysis technique that was designed to approximate the observing instructions for the standard observations. The data processing algorithm that gave the highest Heidke skill score for each sensor resulted in identification scores of 79% for the microwave sensor but only 39% 4% for the three optical sensors when rain was reported by the observer. The identification score in snow was 63% for the microwave sensor and in the range of 53% 7% for the optical sensors. Use of a multiparameter algorithm with the microwave sensor improved the identification of both snow and drizzle over the report from the sensor alone.

Corresponding author address: B. E. Sheppard, Meteorological Service of Canada, 495 Dufferin St., Downsview, ON M3H 5T4, Canada. E-mail: brian.sheppard@ec.gc.ca

1. Introduction

Automated precipitation detection and typing has several applications, from replacement of manual observations in meteorological monitoring networks to integration in nowcasting systems of icing potential. Increasingly, national meteorological services are replacing the human observer with totally automated meteorological sensing systems. A weakness of those systems is the accurate and precise reporting of precipitation type. A strength is that they can report on a minutely basis, which is important for nowcasting systems, particularly in winter, where precipitation type is a crucial observation. A specific nowcasting example is the potential automated application to the holdover tables currently used to define the time period that de-/anti-icing fluids are effective after their application to aircraft prior to takeoff (D'Avirro et al. 1997). Another application is the nowcasting of the rain-snow boundary in winter. For this purpose, four present weather sensors were evaluated at a test site at Pearson International Airport in Toronto during the winters of 1995/96 and 1996/97. The four sensors tested were the Vaisala, the HSS Model 4B, the Scientific Technology Inc.
Weather Identifier and Visibility (WIV), and the Andrew Antenna Precipitation Occurrence System. Two distinct algorithms for the microwave sensor were tested, and so five sensor/algorithm combinations were studied in this paper. This latter estimate will be referred to as the /Automated Weather Observing System (AWOS) sensor for convenience. These sensors can only report a limited number of precipitation types compared to human observations. At the moment, the sensors do not report mixed precipitation types, but in some cases they can report obstruction to vision. In this paper, the standard operational human observations provided to the meteorological and aviation community are used as a reference to evaluate the ability of these sensors to automatically detect and identify precipitation types. The Environment Canada Manual of Surface Weather Observations (MANOBS) (Atmospheric Environment Service 1977) describes the standard operating procedure for observing precipitation type. The standard observations are reported nominally on the hour, but are actually made several minutes earlier. In addition to the hourly observations, special observations are made whenever precipitation starts or stops. Except for freezing precipitation, occurrences of very light intensity do not require a special observation. Also, interruptions in precipitation events of less than 5 min do not require a special observation.

The standard observations and the sensor observations differ in many ways. The reporting resolution of the sensors is on the order of one minute, or better. This difference in the temporal resolution of the observer and sensor may cause a corresponding mismatch in the start and stop times of precipitation events reported by each. The sensor will report any changes without regard to the MANOBS instruction on issuing specials described earlier. The sensors react to the weather on a minutely (or shorter) basis and do not impose a persistence requirement before reporting a change in observation. As described later, the Vaisala is an exception to this rule. There is a very large difference in the sample volumes of the observations. A human observer will take an observation that is representative of a volume on a scale of several millions of cubic meters, and the sensor observations are representative of a volume on the scale of several cubic centimeters to a few cubic meters. There is also a difference in sensitivity before an event is detected that is difficult to quantify considering the differences in human vision versus sensor optical or microwave technologies. Clearly, some data reduction or processing is needed to match the categorical observations in time. It is shown in this paper that comparing only a single observation made on the hour does not represent the performance that can be obtained in an automatic meteorological station environment. Alternatively, the evaluation of sensor performance could be based on clinical (not operational) observations made by an observer with minute resolution. The latter is very expensive to make over an extended period of time.

The performance of the automated technologies compared to current observational practices has recently been evaluated in the World Meteorological Organization (WMO) Intercomparison of Present Weather Sensors/Systems (Leroy et al. 1998). In that experiment two reference data sources were used to evaluate the sensors under test: 1) hourly and special operational supplementary aviation (SA) observations and 2) special clinical observations. In the WMO analysis using the SA observations, the reported weather type was assumed to persist until the next observation time. This may not always be true, because changes in precipitation of short duration, or occurrences of very light intensity, could occur without requiring that a special observation be reported by the observer. As discussed earlier, this could result in disagreements between the SA observation and the minutely sensor observation of two types: missed events or false alarms. Clinical observations were made on a minutely basis, but only when precipitation was expected. This adds an additional bias to the dataset. This biased dataset cannot be used to evaluate false alarms from the sensor because of the limited number of observations when no precipitation was present.
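To make the WMO-style comparison concrete, the sketch below persists each SA report until the next one to produce a minutely series, which is the assumption criticized above. This is a minimal illustration only, not the WMO analysis code; the record format of (minute, weather code) pairs and the function name are our assumptions.

```python
from bisect import bisect_right

def expand_sa_to_minutes(sa_obs, start_min, end_min):
    """Persist each SA observation (minute_of_report, weather_code) until the
    next one, producing one code per minute. sa_obs must be sorted by time;
    minutes before the first SA report are returned as None."""
    times = [t for t, _ in sa_obs]
    codes = [c for _, c in sa_obs]
    series = []
    for minute in range(start_min, end_min):
        i = bisect_right(times, minute) - 1          # latest SA report at or before this minute
        series.append(codes[i] if i >= 0 else None)  # persist that report
    return series

# Example: an hourly report of snow at minute 0 and a special ending it at minute 42
print(expand_sa_to_minutes([(0, "S"), (42, "NP")], 0, 60))
```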
In this paper we have attempted to approximate the way in which the human makes an observation according to the standard procedure. We have applied a windowing analysis strategy to attempt to better match the minutely sensor outputs to the standard human SA observations. reports within a window of time prior to the human observation are used to determine the occurrence and type of precipitation. The window acknowledges that there is uncertainty in the exact time of the observation. Precipitation occurrence requires that a defined percentage of the sensor reports within the window interval are of precipitation. This attempts to approximate the persistence requirement in the standard observing procedure. Once the occurrence criterion is met, the type is defined as the majority type in the window interval. The algorithm makes no attempt to report mixed precipitation types. The result is then compared to the human observation. This paper presents the results of applying different window times and percentages required for detection to the present weather time series reported by the five sensor/algorithm combinations. In addition, the results of the WMO analysis approach as applied to this dataset are also presented for comparison, when applicable. In this way we try to determine the best attainable agreement between the observer and sensor. Although the premise of this paper is that the human observation represents the truth, the sensor observations may be more consistent in reporting those precipitation types they are designed to measure.. Instrumentation The four sensors tested were the Vaisala, the HSS Model 4B, the, and the. All are active remote sensors radiating into a measurement volume near the sensor and measuring some property of the scattered radiation. The first three use optical radiation and the uses microwave radiation. In the latter case, additional information (temperature, humidity, and icing occurrence) from the AWOS is used with the sensor observations in a multiparameter approach. This latter algorithm is called the /AWOS sensor in this paper for convenience. In fact, some of the other sensors use additional information as well. In all cases, the analysis of the scattered optical or microwave signal is both theoretically and empirically related to the precipitation type and water equivalent rate. However, the sensors cannot identify the wide range of precipitation types nor mixed types reported by a human because of a lack of uniquely distinguishing

signatures in these limited signals. Most do not report obstruction to vision. The present weather symbols reportable by the observers and sensors are defined in Table 1. The reportable values for each instrument are also indicated in Table 1. A brief description of each sensor follows.

TABLE 1. Definition of observer SA weather codes and reportable values by sensors.* Observer codes and definitions: A, hail; BS, blowing snow; C, clear; F, fog; H, haze; IC, ice crystals; IF, ice fog; IP, ice pellets; IPW, ice pellet showers; L, drizzle; R, rain; RW, rain showers; S, snow; SG, snow grains; SP, snow pellets; SPW, snow pellet showers; SW, snow showers; T, thunder; ZL, freezing drizzle; ZR, freezing rain.
* All sensors report P for indeterminate precipitation. ** reports these parameters only in synoptic code, not in SA format.

a. Vaisala

The Vaisala, model (Lonnqvist and Nylander 99), projects light into a small sampling volume of air (of the order of cm³), analyzes the signal produced by the forward scattering from precipitation hydrometeors and suspended particles, and from this generates an estimate of precipitation type and rate and of obstruction to vision and visibility. A capacitive sensor (model DRD) measures the presence and amount of water on its surface, which provides auxiliary information in determining the type. Once detected, precipitation will continue to be reported, due to a persistence algorithm built into the sensor processing (P. Utela 1998, personal communication).

b. HSS 4B

The HSS, model 4B, also projects light into a small sampling volume of air (approximately cm³) and measures the forward scattered light (Hansen et al. 1988; van der Meulen 99). Particles passing through the beam cause a pulse at the receiver detector. The amplitude and duration of each pulse give information on the size and fall speed of each particle, respectively. From these data, the precipitation type and rate are estimated. The forward and additional backscattered measurements from the volume are also used to estimate visibility and obstruction to vision.

c. Andrew Antenna

The Andrew Antenna is a small Doppler X-band radar (Sheppard 1990). The sensor measures the Doppler signal of particles falling through a small measurement volume (of the order of a cubic meter, depending on particle size) extending to a maximum range of about m above the sensor. From this, a Doppler power density spectrum is computed and the occurrence, type, and rate of precipitation are estimated. The does not estimate obstruction to vision or visibility.

d. /AWOS

In addition, a multiparameter algorithm that uses the output, the dewpoint, and the output from a Rosemount icing detector, Model 87E (Wilson and Van Cauwenberghe 1993), is used to estimate a precipitation type. These additional sensors were part of the sensor suite at the test site. The primary reason for including these results is to evaluate the performance of the multiparameter algorithm compared to the stand-alone output.

e. STI

The Scientific Technology Inc. (STI) transmits an infrared beam through a short pathlength of atmosphere (sample volume of a few cm³). Precipitation particles falling through the beam cause scintillation in the light received by the detector (Wang et al. 1978). The frequency content of the received signal is analyzed to estimate the precipitation rate and type. The estimates visibility by measuring forward scattering of the transmitted light using a second detector. This sensor does not report obstruction to vision.
3. Performance metrics

In this section, we define the evaluation performance metrics. The sensors are evaluated on their ability to both detect the occurrence of precipitation and identify the type of precipitation. Precipitation detection is defined as any report of precipitation by the sensor when the observer is also reporting precipitation. They do not have to agree on the type. Precipitation identification is defined as an exact agreement of precipitation type. The comparative categorical scoring outcome can be represented by a contingency table (Table 2). The performance statistics that can be derived are probability of detection (POD), false alarm ratio (FAR), and Heidke skill score (HSS) (Wilks 1995; Doswell et al. 1990). These are defined as follows:

TABLE 2. Contingency table for precipitation detection parameter definitions.

                    Sensor Yes    Sensor No    Total
  Reference Yes         x             y        x + y
  Reference No          z             w        z + w
  Total               x + z         y + w        N

POD = x/(x + y),                                  (1)
FAR = z/(x + z),                                  (2)
HSS = (C - E)/(N - E),                            (3)

where C is the total number of correct reports of precipitation and of no precipitation from the sensor (C = x + w). Here, E is the expectation value for the number of correct reports that would be achieved by random guesses, with the constraint that the marginal distributions in the resultant contingency table agree with that of the actual dataset, and N is the total number of reports (N = x + y + z + w). Equation (3) can be expressed in terms of the contingency table values as

HSS = 2(xw - yz)/[y² + z² + 2xw + (y + z)(x + w)].  (4)

An HSS value of 0 corresponds to a sensor that has no skill. A sensor that is always correct has an HSS of 1, and if it shows less skill than that obtainable by chance alone, the HSS value is negative.

4. Analysis technique

a. Database

The instruments operated for two winters during different time periods. Table 3 shows the periods when valid data were acquired from the sensors. All instruments report a precipitation type each minute, except the , which reports every 5 s. For this analysis, observer and sensor reports other than precipitation, such as blowing snow, fog, haze, and thunder, were classified as no precipitation (NP). The observations of the station observers were recorded in the SA format. These observations are made on the hour or, in the case of specials, when conditions change as specified by MANOBS (Atmospheric Environment Service 1977).

b. Matching algorithm

The analysis design is to compare the data from any sensor/algorithm to any other sensor/algorithm or to the human meteorological observations. One source is defined by the user as the reference, which is compared to a second test source. Each reference observation defines a single instance for comparison to the test sensor. The user specifies a window time (WT) around the time of the reference observation in which the data from the test sensor are processed by the analysis algorithm. A single precipitation type or no precipitation is assigned by the algorithm. This value is then compared to the reference. The window is an attempt to compensate for differences in sampling times and volumes between the sensors and the observer. A second variable is the percentage of the window (POW). If the percentage of reports of precipitation is less than the POW, then they are considered to be noise, and the sensor report for the event is no precipitation. This is analogous to a detection threshold applied at the signal level for a sensor (e.g., see appendix B), but it is applied at the data level. Small POW values increase the POD but also the FAR, and large values will decrease both the POD and the FAR. This analysis approach is used to filter out noise and extremely short duration precipitation events, which exceed the signal processing detection thresholds but may be missed or considered too short to be significant by observers. That is, we attempt to approximate the persistence criteria and significance criteria of the standard observations and to partially account for the sensitivity differences. Note that we use a POW value of 1 record to indicate that only 1 record of all the records in the WT is required for detection. Once the algorithm decides that the sensor has detected precipitation, the identification of its type is computed as the majority type reported during the window time.
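The matching algorithm just described can be summarized in a short sketch. This is a minimal interpretation, not the operational code: the (minute, type) report format, the "NP" label, and expressing the POW as a fraction of the window are our assumptions.

```python
from collections import Counter

NP = "NP"  # no precipitation

def window_report(sensor_reports, obs_minute, window_min, pow_threshold):
    """Reduce minutely sensor reports in the window_min minutes up to and
    including obs_minute to one category: require that at least pow_threshold
    of the reports in the window are precipitation (use 1/len(window) to mimic
    a POW of one record), then assign the majority precipitation type."""
    window = [rtype for minute, rtype in sensor_reports
              if obs_minute - window_min < minute <= obs_minute]
    if not window:
        return NP
    precip = [rtype for rtype in window if rtype != NP]
    if not precip or len(precip) / len(window) < pow_threshold:
        return NP                                   # below the POW: treat as noise
    return Counter(precip).most_common(1)[0][0]     # majority type in the window

# Example: 10-min window, 60% POW, mostly snow with two clear minutes -> "S"
reports = [(m, "S") for m in range(1, 9)] + [(9, NP), (10, NP)]
print(window_report(reports, obs_minute=10, window_min=10, pow_threshold=0.6))
```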
This approach is a generalization of the present weather algorithms currently in use in both the Canadian AWOS and the American Automated Surface Observing System (ASOS; Nadolski and Gifford 1995). In the Canadian AWOS, the window used is the 5 min prior to reporting time and the POW is 60%. This means that there must be 3 minutely reports of precipitation from the sensor in the 5-min window before it will be reported by the AWOS present weather algorithm. The window of the algorithm used in the American ASOS is min before reporting time and the detection threshold required is %.

TABLE 3. Sensor test periods.
Nov-Apr 1996; Nov-Apr 1996; Nov-Apr 1996; Dec-Apr 1996; Feb-Apr 1996
Nov-Apr 1997; Nov-Apr 1997; Nov-Apr 1997; Nov-Apr 1997
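For reference, a minimal sketch of the detection scores in Eqs. (1)-(4), computed from the Table 2 counts with x = hits, y = misses, z = false alarms, and w = correct negatives; the function name and the example counts are ours.

```python
def detection_scores(x, y, z, w):
    """POD, FAR, and HSS from the Table 2 contingency counts:
    x = sensor yes / reference yes, y = sensor no / reference yes,
    z = sensor yes / reference no,  w = sensor no / reference no."""
    pod = x / (x + y)                                  # Eq. (1)
    far = z / (x + z)                                  # Eq. (2)
    hss = 2.0 * (x * w - y * z) / (
        y**2 + z**2 + 2 * x * w + (y + z) * (x + w)    # Eq. (4), equivalent to (C - E)/(N - E)
    )
    return pod, far, hss

# Example: 80 hits, 20 misses, 10 false alarms, 890 correct negatives
print(detection_scores(80, 20, 10, 890))
```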

FIG. 1. For the , the Heidke skill score (HSS) values as a function of the window time (WT) and the POW as represented by the symbols. The WMO analysis value of the HSS is given as a dashed line.

The analysis here differs from the operational present weather algorithms in that neither an a priori WT nor an a priori POW is assumed to be optimal for every sensor. The optimal performance for each of the sensors may differ. In addition, the analysis algorithm used in the WMO intercomparison (Leroy et al. 1998) was applied to this dataset. This approach assumed that each hourly or special SA observation persisted until the next observation time. If there were no specials during the hour, then all 60 min of sensor outputs would be compared to the human observation from the previous hour.

5. Results

The processed outputs from each sensor were compared to the reference observations made by the meteorological observer for the periods shown in Table 3 during two winters. The relationship between WT (1, 5, 10, 15, 20, 30 min) and POW (1 report, 10%, 20%, 40%, 60%, 80%, 100%), and the resultant calculated values of the POD, FAR, and HSS are computed and presented in this section. Also, a test was performed to determine the significance of the detection results.

FIG. 3. As in Fig. 1 but for the .

a. Detection

Figures 1 to 4 plot the HSS values as a function of WT for each sensor, and the symbols represent the POW. (Note the different scales on the y axis of Figs. 1-4.) The WMO analysis value of the HSS is given as the dashed line. The statistics for the WT/POW parameters that gave the largest HSS value are summarized in Table 4. For the , the HSS values at each WT decrease with increasing POW value. However, compared to the other sensors, the values at each WT are less dependent on the POW, as seen from the closer grouping of the points. This is a result of a persistence algorithm built into the sensor processing (P. Utela 1998, personal communication). Once the has reported precipitation it will continue to do so for several minutes. The HSS value is relatively constant for a POW, regardless of the WT. The best performance (HSS 77.%) is achieved for a -min WT and a POW of. For the , increasing the WT improves the maximum HSS value over that for simultaneous comparisons. The best performance (HSS 75.%) is achieved for a 30-min WT and a POW of. The HSS value for each WT increases with decreasing POW. For the , the best performance (HSS 77.%) is achieved using either a 5- or -min WT and a % POW. Except for the 5-min WT, the maximum HSS is always attained for a POW of %. Increasing the WT from simultaneous matching to 5 min improves the maximum HSS value obtained. At 30 min there is a slight decrease in the maximum HSS value obtained.

FIG. 2. As in Fig. 1 but for the . FIG. 4. As in Fig. 1 but for the .

TABLE 4. Postprocessing parameters that produce the maximum value of the Heidke skill score statistic for each sensor (rows: window (min); % or number of records in window (POW); no. of obs.; POD (%); FAR (%); HSS (%)).

The maximum HSS value (HSS 68.%) for the is obtained for the -min WT and a POW of 1 record. At each WT, the HSS value increases with decreasing POW. The maximum obtainable HSS decreases with WT longer than min.

1) TEST OF SIGNIFICANCE FOR THE HEIDKE SKILL SCORE

In order to determine whether differences in Heidke skill score values between any two test sensors are statistically significant, a technique called the moving blocks bootstrap is used (Wilks 1997). This is a nonparametric approach, which uses repeated resampling with replacement of the parent population to estimate the distribution of the HSS differences, from which a statistic can be computed to estimate the significance of the differences. The parent population is formed from data triplets of the observer-reported weather and the precipitation type reported from two sensors. A sample population is formed by a random sampling, with replacement, from the parent population, and the difference in the HSS values for the two sensors is calculated. Note that the number of samples drawn to create the sample population is equal to the number of samples in the parent population. This creates a sample population from which a single HSS difference is calculated. This process is then repeated many times to determine a distribution of the differences of HSS for the two sensors. Three hundred HSS differences are computed to estimate their distribution. From this distribution, the HSS difference can be tested for statistical significance. Because the time series of data is autocorrelated, a moving block approach was used. This technique attempts to preserve the autocorrelation structure of the parent population in the sample population by randomly selecting blocks of consecutive data triplets, each of fixed length. The selection of the length of these blocks should reflect the strength of the autocorrelation. It is assumed that successive observations are autocorrelated during a precipitation event. In this analysis a block length of 30 observations was chosen, because it represented approximately the number of observer reports in one day, after which a precipitation event would usually have terminated. The sample population consisted of N observations, in groups of 30 consecutive observations, with the initial observation of each block selected at random from the parent population. This analysis was done to compare the best HSS values produced by the and the to see if there was statistical significance to the differences for the detection of precipitation. The highest HSS value of 77.% for the , obtained for a -min WT and a POW of 1 record, is compared to the highest value of 77.% for the obtained for a 5-min WT and a POW of %. From a total population of triplets of observer, , and precipitation occurrence estimates, a distribution of 300 HSS differences was calculated and is shown in Fig. 5. From this distribution, there is a 90% confidence interval that the difference in HSS values from the two sensors is in the range

FIG. 5. Frequency and cumulative distributions of 300 differences in HSS for the and the generated using the moving block bootstrap technique.

FIG. 6.
For the precipitation detection reports, FAR versus POD for different window times represented by separate curves and different POW times as represented by symbols on each curve. The symbols represent POW values of report, % (except for the, only when WT 5 min), %, 4%, 6%, 8%, % from upper right to lower left, respectively. The larger square symbol represents the results using the WMO algorithm.

7 499 FIG. 7. As in Fig. 6, but for the..6% to.4%. Because this region includes % we cannot reject the null hypothesis that there is no difference in the skill of the two sensors as measured by the HSS parameter at a % level of significance. This procedure was repeated to evaluate the statistical significance of the differences in the HSS values for the and the during the period that they both were operating from February 996 to 3 April 997. In this case, the distribution of 3 bootstrap estimates of the HSS differences ranges from 5.% to 5.7%, and therefore the null hypothesis can be rejected. In addition to the distribution of HSS differences between sensors presented above, the distribution of the 3 bootstrap estimates of HSS values for each of three sensors that operated during the common period from February 996 to 3 April 997 was also calculated. The standard deviation of these three distributions was.3%,.3%, and.%, for the,, and, respectively. FIG. 8. As in Fig. 6 but for the. ) POD FAR RELATIONSHIP The trade-off between POD and FAR is shown in Figs. 6 to 9 for different WT represented by the different curves. Each point on a curve represents the results of increasing the POW from record in the WT (highest POD and FAR values on curve) to % (when WT 5 min this is only possible for the ), %, 4%, 6%, 8%, % of the WT. The simultaneous matching is shown as a single open diamond point. The WMO analysis value is shown as the large-square point. This form of presentation shows the dependency between the POD and FAR parameters. Greater POD is accompanied by greater FAR. For the optical sensors the relationship is approximately linear. For the there is a nonlinear increase in FAR as the POW values increase. The WMO analysis technique produces a poorer combination of POD and FAR values for all sensors, except the, than was produced by any combination of WT and POW tested here. As expected, the assumption of persistence of the no precipitation observations leads to more false alarms, and persistence of the precipitation observations leads to lower probability of detection. Because the already has a persistence algorithm built into its internal processing, the effect of the windowing algorithm used here does not improve the results as much as for the other sensors. b. Identification of precipitation type The WT and POW values that produced the highest HSS value for precipitation detection were used for each sensor in the identification evaluation. These are summarized in Table 4. Because the /AWOS algorithm has already applied a processing window, its performance was evaluated using simultaneous comparisons. The results are presented in both contingency tables (appendix A) and graphical forms Figs. to 5) for selected weather types. The observer can report many types of weather that the sensors can not (Table ). Many of these are of mixed type, which the sensors do not report and were not analyzed. Some combinations were infrequent and are not presented here. Note that classification by intensity of the precipitation (light, moderate, or heavy) is excluded from this analysis. It is important to note that in the contingency tables (appendix A) the POD, FAR, and HSS statistics now refer to the identification of precipitation type not just the detection of precipitation. For this evaluation the rows and columns of Table labeled Yes and No now refer to a report of a specific precipitation type or not, for example, rain and not rain. 
Table 5 summarizes the HSS results from the contingency tables in appendix A. FIG. 9. As in Fig. 6 but for the.
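The moving blocks bootstrap used in section 5a to test HSS differences can be sketched as follows. This is a minimal illustration under stated assumptions: the matched data are (observer, sensor A, sensor B) category triplets, only detection (precipitation versus "NP") is scored, and the block length and number of resamples follow the values quoted earlier; all names and the synthetic example are ours.

```python
import random

def hss_detection(pairs):
    """Heidke skill score for precipitation detection from (reference, report)
    category pairs, where "NP" means no precipitation (Eq. (4))."""
    x = sum(1 for ref, rep in pairs if ref != "NP" and rep != "NP")  # hits
    y = sum(1 for ref, rep in pairs if ref != "NP" and rep == "NP")  # misses
    z = sum(1 for ref, rep in pairs if ref == "NP" and rep != "NP")  # false alarms
    w = sum(1 for ref, rep in pairs if ref == "NP" and rep == "NP")  # correct negatives
    return 2.0 * (x * w - y * z) / (y**2 + z**2 + 2 * x * w + (y + z) * (x + w))

def moving_blocks_hss_diff(triplets, block_len=30, n_boot=300, seed=1):
    """Distribution of HSS differences (sensor A minus sensor B) from a
    moving-blocks bootstrap of (observer, sensor_a, sensor_b) triplets."""
    rng = random.Random(seed)
    n = len(triplets)
    diffs = []
    for _ in range(n_boot):
        sample = []
        while len(sample) < n:                       # draw whole blocks with replacement
            start = rng.randrange(n - block_len + 1)
            sample.extend(triplets[start:start + block_len])
        sample = sample[:n]                          # sample size equals the parent population
        diffs.append(hss_detection([(o, a) for o, a, _ in sample])
                     - hss_detection([(o, b) for o, _, b in sample]))
    return sorted(diffs)                             # 5th-95th percentiles give a ~90% interval

# Example with synthetic, autocorrelated data: sensor A is perfect, sensor B misses some snow
obs = (["S"] * 5 + ["NP"] * 5) * 12
b = ["NP" if i % 5 == 0 else o for i, o in enumerate(obs)]
diffs = moving_blocks_hss_diff(list(zip(obs, obs, b)))
print(diffs[15], diffs[284])                         # rough 90% confidence bounds on the difference
```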

8 5 JOURNAL OF ATMOSPHERIC AND OCEANIC TECHNOLOGY VOLUME 7 FIG.. Precipitation typing by each sensor represented by a stacked histogram for all data when the observer reported no precipitation. The bottom solid bar in each stacked histogram indicates the percentage of time that the sensor agreed with the observer. The patterned portions above this section show the precipitation types reported by the sensors that did not agree with the observer. In order to compare the performance of the four sensors and the /AWOS algorithm, a histogram presentation is used in the figures. To assist in the interpretation of these histograms note the following. R Each chart represents one or more related present weather categories as reported by the observer [e.g., no precipitation (NP)]. R Each stacked bar on the chart represents the results from a single sensor for a single observation type (e.g., reports when observer reports NP). R The bar is subdivided to show the percentage of time that the sensor reported the different precipitation types (e.g., reports P, R, and S when observer reports NP). R The patterns used in the legend are consistent and in the same order from bar to bar, and graph to graph, with the exception that the observer precipitation type is always on the bottom of the stack. R It is important to note that the number of occurrences of some weather types was small at this site. When assessing the statistical significance of each graph the number of occurrences should be noted in the No. of observations column of the contingency tables of appendix A. R A favorable comparison is when the sensor and the observer match and the corresponding lowest section of the histogram bar is large. A large percentage of the histogram bar above the lowest section indicates poor identification performance. Figure shows all data when the observer reported no precipitation. This includes observer reports of blowing snow, fog, haze, and thunder. The had the best identification of no precipitation (96%) and the the poorest (85%). In terms of HSS, the,, and had values between.75 and.8 for the identification of nonprecipitation cases, and there is little to choose between them. Figure shows all rain or rain shower observations which may or may not have been mixed with fog. The and /AWOS both identified 79% of these observations as rain, which was the best score of all the sensors. All of the optical sensors identification scores were significantly poorer in 39% to 4% range. This result is not consistent with that reported during the WMO intercomparison for these same sensors for which the discrepancy between the radar and optical sensor identification in rain was not so pronounced. It is not clear why there is such a discrepancy. In terms of HSS, the and /AWOS scores were.8 and.83, respectively while the optical sensors had values of.54 or less. Figure shows all snow or snow shower observations with or without the presence of fog. All four of the sensors identified snow correctly in the range of 53% to 7% of the observations. The had the best identification score of 7%. The /AWOS identification score was 69% which was 6% better than the stand-alone. This improvement was a result of the use the dewpoint measurement in the AWOS algorithm to convert indeterminate (P) reports from the stand-alone to snow. The had the poorest identification primarily due to the fact that it misidentified the snow as other types of frozen precipitation

9 5 FIG.. As for Fig. but for rain or rain shower observations with or without the presence of fog. (SG, IC, IP). In terms of HSS, the range of scores was from.6 () to.76 (). Figure 3 shows all drizzle or drizzle combined with fog. All sensors identified drizzle relatively poorly compared to other precipitation types. Overwhelmingly the most common misidentification was NP. The / AWOS had the best identification POD of 4%. The stand-alone never identified drizzle. Again, as a result of the use the dewpoint measurement in the AWOS algorithm, many indeterminate (P) reports were correctly converted to drizzle. However, some of the reports of P were incorrectly converted to rain and snow by the AWOS algorithm. The HSS scores were.48 and.38 for the /AWOS and, respectively, while the other sensors scores were not significantly different from chance (HSS). These are lower scores than for rain or snow which indicates greater difficulty in detecting the smaller drop size range for drizzle. The number of occurrences of freezing precipitation FIG.. As for Fig. but for snow or snow shower observations with or without the presence of fog.

10 5 JOURNAL OF ATMOSPHERIC AND OCEANIC TECHNOLOGY VOLUME 7 FIG. 3. As for Fig. but for drizzle observations with or without the presence of fog. and ice pellets was small at this site and comments are based on a small dataset. Only the and the / AWOS can report freezing precipitation. The performance of all the sensors in the detection of freezing drizzle and fog was poor (Fig. 4) and therefore the identification of this precipitation type is also poor. The and /AWOS detected some type of precipitation only 3 and 4 of the 7 occurrences, respectively. The detected none of 6 events and WIV is detected of events. For all sensors, the scores for ZR and ZRF (Fig. 5) were better than those for ZLF (Fig. 4). The /AWOS had the best identification score of 47%. The identified freezing rain 4% of the time. The corresponding HSS scores were.64 and.58 for the and, respectively. Only the had the capability of identifying IP and did so for 6 of the 8 events for which there was FIG. 4. As for Fig. but for all freezing drizzle observations combined with fog observations.

11 53 FIG. 5. As for Fig. but for all freezing rain observations. an IP component reported by the observer. All sensors detected all of the IP occurrences except the which missed of 8 events. There were only IP events while the was operating. The HSS value for the was Discussion Some relevant climatological statistics for the 6 month winter period from November to April at the Pearson International Airport test site are given in Table 6. The site experiences many precipitation types, but there is a difference in the frequency of the types and interpretation and conclusions of the overall performance of the sensors for the identification of precipitation type must be viewed in the context of the testing environment. In particular, the detection and identification of freezing precipitation must be evaluated with caution due to the low numbers of observations. The POD and FAR of sensors depend on both their internal signal processing algorithms and in this paper also on the post-output data processing algorithm. Data TABLE 5. Summary of the HSS results for precipitation identification. processing algorithms of the sensor output can be designed to produce higher POD values but always at the expense of higher FAR values. In this paper, the tradeoff between these parameters is measured by the Heidke skill score. This test score compares the performance of the sensors relative to randomness. It is assumed that the sensor reports and observations are statistically independent and that the marginal distributions of the reports and observations can be used to compute the contingency table. Other skill scores can be used with different assumptions about the nature of the contingency table but this is beyond the scope of this study (Wilks 995). The data processing windowing algorithms used here significantly improved agreement with the SA observations compared to both the persistence algorithm used by the WMO analysis (Leroy et al. 998), and simultaneous comparisons using a single sensor output at the time of the observer report. The improvement in the HSS parameter for precipitation detection was in the range of % to 5% for the, and the and less for the. The windowing algorithm attempts to simulate the human operational observing standards and therefore provide a more representative measure of the sensor s skill. The previously used anal- Type NP Rain Snow Drizzle ZL ZR IP / AWOS TABLE 6. Climatological normals 96 9 for months Nov Apr for Pearson International Airport, Toronto, Canada. Rainfall (mm) Snowfall (cm) Precipitation (mm) Days with measurable precipitation Days with freezing precipitation

12 54 JOURNAL OF ATMOSPHERIC AND OCEANIC TECHNOLOGY VOLUME 7 ysis techniques presented an overly pessimistic impression of the sensor s capability. Extensions to the windowing technique presented in this paper may improve the comparisons even further. In this study, mixed precipitation types were ignored since the sensors do not report multiple types. Additional postsensor techniques to examine the variation of reported types within the data analysis window may be beneficial but this is beyond the scope of this paper. It is evident from a comparison of the stand-alone and the /AWOS performance statistics that a multiparameter algorithm for present weather produces significant improvements in precipitation identification over single parameter measurements in drizzle and to a lesser extent in snow. All sensors already use this approach to some extent by using temperature sensors and, in the case of the, a precipitation capacitive grid. It is anticipated that improvements in performance will be obtained by incorporation of other technologies such as impact sensors for the detection of ice pellets. 7. Conclusions An observational study of automated precipitation typing using a new windowing analysis algorithm was described. In addition, the POD/FAR interdependency is quantitatively studied and summarized by using the Heidke Skill Score parameter. The moving block bootstrap technique is used to estimate the statistical significance of HSS differences between the sensors. While this is standard practice in statistical analyses, this is a new approach to the evaluation of sensors that report precipitation type. The data from the study was conducted over a two-year winter period at the Pearson International Airport in the Southern Ontario of Canada. The site is characterized by mainly rain and snow events in winter. First, this paper focussed on improving the data analysis algorithm to better approximate human observations. The main premise of this paper is that the simultaneous comparison of the standard human observations and the sensor observations is not a fair comparison due to inherent differences in sensitivity, sample volumes, and in persistence criteria. In an absolute sense, the simultaneous matching penalizes the sensor skill scores if we accept that the standard observations are correct. When the windowing analysis is done, the sensors show improved detection skill by % to 5% over previous analysis techniques. Second, all the sensors showed skill as indicated by the HSS. Based on a rank ordering of the HSS statistics, the best sensor was the /AWOS. It ranked first in four of the precipitation type categories (rain, snow, drizzle, and freezing rain) and second in another category (no precipitation). The ranked first in the no precipitation category and it was the only sensor to report ice pellets and it ranked second for the freezing rain category. The was the least skillful sensor of the set. It never ranked first in any precipitation type category. However, interpretation of these results will depend on the requirements of the user. If the user is particularly interested in the ice pellet category then, at the moment, the is the only sensor that reports this category. Third, the results show that a multiparameter approach to precipitation typing is superior to a single parameter approach and should be further exploited for operational use. Finally, there is room for improvement in all the sensors and algorithms. 
Not only are the results dependent on improving the sensor and signal processing algorithms but also the data analysis techniques. This paper presented one approach to create a data analysis technique to better approximate the standard observations. Other data analyses techniques could be developed to improve the comparisons. Additionally all the results within the window could be interpreted to create a mixed-precipitation type for the sensors. Acknowledgments. The test site at Pearson International Airport in Toronto was funded from the Transport Canada Dryden Commission Implementation Project. We are grateful to Francis Zwiers of Environment Canada for his suggestion of using the moving block bootstrap technique for the test of significance of the Heidke skill score. We thank Norman Donaldson of Environment Canada for his careful review of the paper and his helpful suggestions. We acknowledge the contribution of an anonymous reviewer who suggested the use of the Heidke skill score instead of the Critical Success Index as a more appropriate measure of the skill of the sensors. APPENDIX A Contingency Tables Each of the following contingency tables (Tables A A7) presents the sensor reports for one or more related present weather types reported by the observer. A row in the tables is the /AWOS algorithm results. It is important to distinguish that the POD, FAR, and HSS statistics presented here apply to identification of precipitation not detection, which are given in Figs. 4 and 6 9. For example, in this context POD means the probability that the sensor will identify the exact precipitation type reported by the observer. R Column gives the sensor name. R Column gives the present weather type reported by the observer. R Column 3 is the number of observations for which a sensor estimate was available. R Columns 4 3 give the number of observations for each precipitation type reported by the sensor. R POD is the fraction of observations that the sensor precipitation type matched the observer type. R FAR is the false alarm ratio when the sensor reported

13 55 TABLE A. Contingency table in no precipitation. A WX NP NP NP NP NP No. of obs. NP IC IP L P R S SG ZL ZR POD FAR HSS A WX R R R R R TABLE A. Contingency table in rain or rain showers with or without fog. No. of obs. NP IC IP L P R S SG ZL ZR POD FAR HSS A S S S S S WX TABLE A3. Contingency table for snow or snow showers with or without fog. No. of obs. NP IC IP L P R S SG ZL ZR POD FAR HSS A WX L L L L L TABLE A4. Contingency table for drizzle with or without fog. No. of obs. NP IC IP L P R S SG ZL ZR POD FAR HSS TABLE A5. Contingency table for freezing drizzle and fog. A WX ZLF ZLF ZLF ZLF ZLF No. of obs. NP IC IP L P R S SG ZL ZR POD FAR HSS A WX ZR ZR ZR ZR ZR TABLE A6. Contingency table for freezing rain with or without fog. No. of obs. NP IC IP L P R S SG ZL ZR POD FAR HSS

the precipitation type in column 2 and the observer reported no precipitation or another type. R The HSS is for identifying the precipitation type in column 2. For this evaluation the rows and columns of Table 2 labeled Yes and No now refer to a report of a specific precipitation type or not, for example, rain and not rain. R The symbols NP and P represent no precipitation and indeterminate type, respectively. R in the Tables indicates parameters that are not applicable because they are not reportable by the sensor or undefined.

TABLE A7. Contingency table for ice pellets and ice pellets with other types (WX = IP for each sensor; columns: no. of obs.; NP, IC, IP, L, P, R, S, SG, ZL, ZR; POD; FAR; HSS).

APPENDIX B

Illustrative Example of Signal Detection versus Noise Rejection

It is important to distinguish signal processing, which we delineate as that portion of the processing internal to the sensor, from data processing, which we delineate as that portion of the analysis which occurs on the sensor outputs. The signal processing distinguishes signal from noise in real time and is designed by the manufacturer and cannot generally be modified by the user of the sensor except perhaps to set the sensitivity of the detector. The is used to illustrate the fundamental difficulty for all the sensors to distinguish the signal produced by precipitation, when present, from the noise, which is always present due to sources that are internal and external to the sensor. The was chosen to illustrate the problem because it was the only sensor for which the raw data used by the internal detection algorithm were available for analysis. For the , the signal used for illustration is the backscattered power from precipitation falling through the measurement volume. This is represented by the effective radar reflectivity factor Ze (Battan 1973) expressed in units of dBZ. The noise that we want to reject is produced by various sources: the receiver electronics, external radio frequency signals, ground echoes, birds, insects, and variations in the index of refraction in the atmosphere. The noise produced by these sources, if not rejected by the signal processing algorithms, will be interpreted as valid signal from precipitation. For an ideal detection scenario (POD of 100% and FAR of 0%), the probability distribution of reflectivity from precipitation and from noise must have no overlap. In such a case, the detection threshold could be chosen so that the cumulative frequency distribution (CFD) of the measured parameter in the absence of precipitation is 100% at the threshold, that is, all noise signal levels are below the detection threshold, and the CFD in precipitation is 0% at the threshold, that is, all precipitation signals exceed the threshold. If there is an overlap in these distributions, there will be a trade-off between POD and FAR. The nature of the trade-off depends on the nature of the relative CFDs. About 79 minutely measurements of Ze (in dBZ) were classified according to whether the observer reported precipitation or no precipitation. A frequency distribution of the radar reflectivity is created, from which a CFD is computed. Figure B1 plots the CFD of Ze (in dBZ) for both cases.

FIG. B1. Cumulative frequency distributions of minutely effective radar reflectivity (Ze) values measured by the in both precipitating and nonprecipitating atmospheres as defined by the observer.
A simple detection threshold algorithm is applied to the Ze such that precipitation is reported if Ze exceeds this threshold. Three different values of the Ze detection threshold are represented graphically by the vertical lines in Fig. B1. For each threshold, the POD and FAR values are defined by the intersection of the vertical threshold line and the precipitation/no-precipitation CFD curves, respectively. The resultant HSS value can be calculated using Eq. (4). Lines for three threshold values are shown, two for arbitrarily chosen values of HSSs of 3% and 6% and one at the maximum value of 69%. Two threshold lines corresponding to the HSS value of 6% are given to illustrate that different reflectivity thresholds, and hence different POD-FAR combinations, can result in the same HSS value.
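A minimal sketch of the simple threshold detection described in this appendix is given below, assuming minutely Ze samples (in dBZ) already split by the observer's precipitation/no-precipitation classification. Counting exceedances of a threshold is equivalent to reading POD and FAR off the two CFD curves, and the HSS follows Eq. (4); the function names and the synthetic values are ours.

```python
def scores_at_threshold(ze_precip, ze_no_precip, threshold):
    """POD, FAR, and HSS for 'report precipitation if Ze exceeds threshold',
    given minutely Ze samples (dBZ) classified by the observer."""
    x = sum(1 for v in ze_precip if v > threshold)      # hits
    y = len(ze_precip) - x                              # misses
    z = sum(1 for v in ze_no_precip if v > threshold)   # false alarms
    w = len(ze_no_precip) - z                           # correct rejections
    pod = x / (x + y)
    far = z / (x + z) if (x + z) else 0.0
    hss = 2.0 * (x * w - y * z) / (y**2 + z**2 + 2 * x * w + (y + z) * (x + w))
    return pod, far, hss

def best_threshold(ze_precip, ze_no_precip, thresholds):
    """Sweep candidate thresholds and return the one maximizing the HSS."""
    return max(thresholds, key=lambda t: scores_at_threshold(ze_precip, ze_no_precip, t)[2])

# Synthetic example: noise mostly below -5 dBZ, precipitation mostly above
noise = [-20, -15, -12, -9, -7, -6, -4, -2]
precip = [-6, -3, 0, 4, 8, 12, 17, 22]
t = best_threshold(precip, noise, range(-15, 15))
print(t, scores_at_threshold(precip, noise, t))
```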

The can also measure the Doppler velocity spectra of the precipitation. The signal processing algorithm in the sensor software uses additional statistical properties of these spectra to distinguish precipitation from noise.

TABLE B1. Precipitation detection statistics for the simple threshold algorithm and for the more complex algorithm actually implemented in the internal processing software (columns: Algorithm (Simple, Complex); POD (%); FAR (%); HSS (%)).

Table B1 compares the POD, FAR, and HSS values determined from simultaneous comparisons of the observer to both the operational complex algorithm internal to the sensor and the simple detection threshold approach described in this appendix. The POD column shows that the complex detection algorithm internal to the sensor detects 5.% more observer precipitation events than the simple threshold algorithm. However, it produces .% more false alarms than the simple algorithm. The overall result is an improvement of .% in the HSS value.

REFERENCES

Atmospheric Environment Service, 1977: Manual of Surface Weather Observations (MANOBS). 7th ed. Meteorological Service of Canada, 4 pp. [Available from Meteorological Service of Canada, 495 Dufferin St., Downsview, ON M3H 5T4, Canada.]
Battan, L. J., 1973: Radar Observations of the Atmosphere. University of Chicago Press, 34 pp.
D'Avirro, J., A. Peters, M. Hanna, P. Dawson, and M. Chaput, 1997: Aircraft ground de/anti-icing fluid holdover time field testing program for the 1996/97 winter. Transportation Development Centre Publication TP 33E, 33 pp. [Available from Transportation Development Centre, Publication Distribution Services, 8 René Lévesque Blvd. West, Suite 6, Montreal, PQ H3B X9, Canada.]
Doswell, C. A., III, R. Davies-Jones, and D. L. Keller, 1990: On summary measures of skill in rare event forecasting based on contingency tables. Wea. Forecasting, 5.
Hansen, D. F., W. K. Shubert, and D. R. Rohleder, 1988: A program to improve performance of AFGL automated present weather observing sensors. Air Force Geophysics Laboratory Rep. AFGL-TR-88-39, pp. [Available from AFGL, Hanscom AFB, MA 73.]
Leroy, M., C. Bellevaux, and J. P. Jacob, 1998: WMO intercomparison of present weather sensors/systems: Canada and France. Final report. Instruments and Observing Methods Rep. 73, WMO, Geneva, Switzerland, 69 pp.
Lonnqvist, J., and P. Nylander, 99: A present weather instrument. Tech. Conf. on Instruments and Methods of Observation, Vienna, Austria, WMO/TD-No. 46.
Nadolski, V. L., and M. D. Gifford, 1995: An overview of ASOS algorithms. Preprints, Sixth Conf. on Aviation Weather Systems, Dallas, TX, Amer. Meteor. Soc.
Sheppard, B. E., 1990: The measurement of raindrop size distributions using a small Doppler radar. J. Atmos. Oceanic Technol., 7.
van der Meulen, J. P., 99: Present weather observing systems: One year of experience and comparison with human observations. Tech. Conf. on Instruments and Methods of Observation, Vienna, Austria, WMO/TD-No. 46.
Wang, T.-i, K. B. Earnshaw, and R. S. Lawrence, 1978: Simplified optical path-averaged rain gauge. Appl. Opt., 17.
Wilks, D. S., 1995: Statistical Methods in the Atmospheric Sciences. Academic Press, 464 pp.
Wilks, D. S., 1997: Resampling hypothesis tests for autocorrelated fields. J. Climate, 10.
Wilson, R. A., and R. Van Cauwenberghe, 1993: Intercomparison results of two types of freezing rain sensors versus human observers. Tech. Conf. on Instruments and Methods of Observation, Vienna, Austria, WMO/TD-No. 46, 67-7.


More information

Remote Sensing in Meteorology: Satellites and Radar. AT 351 Lab 10 April 2, Remote Sensing

Remote Sensing in Meteorology: Satellites and Radar. AT 351 Lab 10 April 2, Remote Sensing Remote Sensing in Meteorology: Satellites and Radar AT 351 Lab 10 April 2, 2008 Remote Sensing Remote sensing is gathering information about something without being in physical contact with it typically

More information

OTT PARSIVEL - ENHANCED PRECIPITATION IDENTIFIER AND NEW GENERATION

OTT PARSIVEL - ENHANCED PRECIPITATION IDENTIFIER AND NEW GENERATION OTT PARSIEL - ENHANCED PRECIPITATION IDENTIFIER AND NEW GENERATION OF PRESENT WEATHER SENSOR BY OTT MESSTECHNIK, GERMANY Authors: (1) Kurt Nemeth OTT Messtechnik GmbH & Co. KG Ludwigstrasse 16 87437 Kempten

More information

Analysing effects of meteorological variables on weather codes by logistic regression

Analysing effects of meteorological variables on weather codes by logistic regression Meteorol. Appl. 9, 191 197 (2002) DOI:10.1017/S1350482702002049 Analysing effects of meteorological variables on weather codes by logistic regression Hanna-Leena Merenti-Välimäki, Espoo-Vantaa Institute

More information

OBSERVATIONS OF WINTER STORMS WITH 2-D VIDEO DISDROMETER AND POLARIMETRIC RADAR

OBSERVATIONS OF WINTER STORMS WITH 2-D VIDEO DISDROMETER AND POLARIMETRIC RADAR P. OBSERVATIONS OF WINTER STORMS WITH -D VIDEO DISDROMETER AND POLARIMETRIC RADAR Kyoko Ikeda*, Edward A. Brandes, and Guifu Zhang National Center for Atmospheric Research, Boulder, Colorado. Introduction

More information

Stability in SeaWinds Quality Control

Stability in SeaWinds Quality Control Ocean and Sea Ice SAF Technical Note Stability in SeaWinds Quality Control Anton Verhoef, Marcos Portabella and Ad Stoffelen Version 1.0 April 2008 DOCUMENTATION CHANGE RECORD Reference: Issue / Revision:

More information

5.0 WHAT IS THE FUTURE ( ) WEATHER EXPECTED TO BE?

5.0 WHAT IS THE FUTURE ( ) WEATHER EXPECTED TO BE? 5.0 WHAT IS THE FUTURE (2040-2049) WEATHER EXPECTED TO BE? This chapter presents some illustrative results for one station, Pearson Airport, extracted from the hour-by-hour simulations of the future period

More information

Measuring solid precipitation using heated tipping bucket gauges: an overview of performance and recommendations from WMO SPICE

Measuring solid precipitation using heated tipping bucket gauges: an overview of performance and recommendations from WMO SPICE Measuring solid precipitation using heated tipping bucket gauges: an overview of performance and recommendations from WMO SPICE Michael Earle 1, Kai Wong 2, Samuel Buisan 3, Rodica Nitu 2, Audrey Reverdin

More information

138 ANALYSIS OF FREEZING RAIN PATTERNS IN THE SOUTH CENTRAL UNITED STATES: Jessica Blunden* STG, Inc., Asheville, North Carolina

138 ANALYSIS OF FREEZING RAIN PATTERNS IN THE SOUTH CENTRAL UNITED STATES: Jessica Blunden* STG, Inc., Asheville, North Carolina 138 ANALYSIS OF FREEZING RAIN PATTERNS IN THE SOUTH CENTRAL UNITED STATES: 1979 2009 Jessica Blunden* STG, Inc., Asheville, North Carolina Derek S. Arndt NOAA National Climatic Data Center, Asheville,

More information

77 IDENTIFYING AND RANKING MULTI-DAY SEVERE WEATHER OUTBREAKS. Department of Earth Sciences, University of South Alabama, Mobile, Alabama

77 IDENTIFYING AND RANKING MULTI-DAY SEVERE WEATHER OUTBREAKS. Department of Earth Sciences, University of South Alabama, Mobile, Alabama 77 IDENTIFYING AND RANKING MULTI-DAY SEVERE WEATHER OUTBREAKS Chad M. Shafer 1* and Charles A. Doswell III 2 1 Department of Earth Sciences, University of South Alabama, Mobile, Alabama 2 Cooperative Institute

More information

Observations needed for verification of additional forecast products

Observations needed for verification of additional forecast products Observations needed for verification of additional forecast products Clive Wilson ( & Marion Mittermaier) 12th Workshop on Meteorological Operational Systems, ECMWF, 2-6 November 2009 Additional forecast

More information

Verification of ensemble and probability forecasts

Verification of ensemble and probability forecasts Verification of ensemble and probability forecasts Barbara Brown NCAR, USA bgb@ucar.edu Collaborators: Tara Jensen (NCAR), Eric Gilleland (NCAR), Ed Tollerud (NOAA/ESRL), Beth Ebert (CAWCR), Laurence Wilson

More information

USE OF SATELLITE INFORMATION IN THE HUNGARIAN NOWCASTING SYSTEM

USE OF SATELLITE INFORMATION IN THE HUNGARIAN NOWCASTING SYSTEM USE OF SATELLITE INFORMATION IN THE HUNGARIAN NOWCASTING SYSTEM Mária Putsay, Zsófia Kocsis and Ildikó Szenyán Hungarian Meteorological Service, Kitaibel Pál u. 1, H-1024, Budapest, Hungary Abstract The

More information

Aviation Hazards: Thunderstorms and Deep Convection

Aviation Hazards: Thunderstorms and Deep Convection Aviation Hazards: Thunderstorms and Deep Convection TREND Diagnosis of thunderstorm hazards using imagery Contents Satellite imagery Visible, infrared, water vapour Basic cloud identification Identifying

More information

Non catchment type instruments for snowfall measurement: General considerations and

Non catchment type instruments for snowfall measurement: General considerations and Non catchment type instruments for snowfall measurement: General considerations and issues encountered during the WMO CIMO SPICE experiment, and derived recommendations Authors : Yves Alain Roulet (1),

More information

Automated Present Weather observations: a new concept for analysing the effects of meteorological variables

Automated Present Weather observations: a new concept for analysing the effects of meteorological variables Meteorol. Appl. 11, 213 219 (2004) DOI:10.1017/S1350482704001264 Automated Present Weather observations: a new concept for analysing the effects of meteorological variables Hanna-Leena Merenti-Välimäki

More information

A FIELD STUDY TO CHARACTERISE THE MEASUREMENT OF PRECIPITATION USING DIFFERENT TYPES OF SENSOR. Judith Agnew 1 and Mike Brettle 2

A FIELD STUDY TO CHARACTERISE THE MEASUREMENT OF PRECIPITATION USING DIFFERENT TYPES OF SENSOR. Judith Agnew 1 and Mike Brettle 2 A FIELD STUDY TO CHARACTERISE THE MEASUREMENT OF PRECIPITATION USING DIFFERENT TYPES OF SENSOR Judith Agnew 1 and Mike Brettle 2 1 STFC Rutherford Appleton Laboratory, Harwell Oxford, Didcot, Oxfordshire,

More information

Validation Report for Precipitation products from Cloud Physical Properties (PPh-PGE14: PCPh v1.0 & CRPh v1.0)

Validation Report for Precipitation products from Cloud Physical Properties (PPh-PGE14: PCPh v1.0 & CRPh v1.0) Page: 1/26 Validation Report for Precipitation SAF/NWC/CDOP2/INM/SCI/VR/15, Issue 1, Rev. 0 15 July 2013 Applicable to SAFNWC/MSG version 2013 Prepared by AEMET Page: 2/26 REPORT SIGNATURE TABLE Function

More information

THE DETECTABILITY OF TORNADIC SIGNATURES WITH DOPPLER RADAR: A RADAR EMULATOR STUDY

THE DETECTABILITY OF TORNADIC SIGNATURES WITH DOPPLER RADAR: A RADAR EMULATOR STUDY P15R.1 THE DETECTABILITY OF TORNADIC SIGNATURES WITH DOPPLER RADAR: A RADAR EMULATOR STUDY Ryan M. May *, Michael I. Biggerstaff and Ming Xue University of Oklahoma, Norman, Oklahoma 1. INTRODUCTION The

More information

Road weather forecasts and MDSS in Slovakia

Road weather forecasts and MDSS in Slovakia ID: 0030 Road weather forecasts and MDSS in Slovakia M. Benko Slovak Hydrometeorological Institute (SHMI), Jeséniova 17, 83315 Bratislava, Slovakia Corresponding author s E-mail: martin.benko@shmu.sk ABSTRACT

More information

Description of the case study

Description of the case study Description of the case study During the night and early morning of the 14 th of July 011 the significant cloud layer expanding in the West of the country and slowly moving East produced precipitation

More information

Glossary. The ISI glossary of statistical terms provides definitions in a number of different languages:

Glossary. The ISI glossary of statistical terms provides definitions in a number of different languages: Glossary The ISI glossary of statistical terms provides definitions in a number of different languages: http://isi.cbs.nl/glossary/index.htm Adjusted r 2 Adjusted R squared measures the proportion of the

More information

P10.3 HOMOGENEITY PROPERTIES OF RUNWAY VISIBILITY IN FOG AT CHICAGO O HARE INTERNATIONAL AIRPORT (ORD)

P10.3 HOMOGENEITY PROPERTIES OF RUNWAY VISIBILITY IN FOG AT CHICAGO O HARE INTERNATIONAL AIRPORT (ORD) P10.3 HOMOGENEITY PROPERTIES OF RUNWAY VISIBILITY IN FOG AT CHICAGO O HARE INTERNATIONAL AIRPORT (ORD) Thomas A. Seliga 1, David A. Hazen 2 and Stephen Burnley 3 1. Volpe National Transportation Systems

More information

Verification of Probability Forecasts

Verification of Probability Forecasts Verification of Probability Forecasts Beth Ebert Bureau of Meteorology Research Centre (BMRC) Melbourne, Australia 3rd International Verification Methods Workshop, 29 January 2 February 27 Topics Verification

More information

Combining Deterministic and Probabilistic Methods to Produce Gridded Climatologies

Combining Deterministic and Probabilistic Methods to Produce Gridded Climatologies Combining Deterministic and Probabilistic Methods to Produce Gridded Climatologies Michael Squires Alan McNab National Climatic Data Center (NCDC - NOAA) Asheville, NC Abstract There are nearly 8,000 sites

More information

4.5 Comparison of weather data from the Remote Automated Weather Station network and the North American Regional Reanalysis

4.5 Comparison of weather data from the Remote Automated Weather Station network and the North American Regional Reanalysis 4.5 Comparison of weather data from the Remote Automated Weather Station network and the North American Regional Reanalysis Beth L. Hall and Timothy. J. Brown DRI, Reno, NV ABSTRACT. The North American

More information

WMO Aeronautical Meteorology Scientific Conference 2017

WMO Aeronautical Meteorology Scientific Conference 2017 Session 1 Science underpinning meteorological observations, forecasts, advisories and warnings 1.6 Observation, nowcast and forecast of future needs 1.6.1 Advances in observing methods and use of observations

More information

Joseph C. Lang * Unisys Weather Information Services, Kennett Square, Pennsylvania

Joseph C. Lang * Unisys Weather Information Services, Kennett Square, Pennsylvania 12.10 RADAR MOSAIC GENERATION ALGORITHMS BEING DEVELOPED FOR FAA WARP SYSTEM Joseph C. Lang * Unisys Weather Information Services, Kennett Square, Pennsylvania 1.0 INTRODUCTION The FAA WARP (Weather and

More information

Application and verification of ECMWF products 2012

Application and verification of ECMWF products 2012 Application and verification of ECMWF products 2012 Met Eireann, Glasnevin Hill, Dublin 9, Ireland. J.Hamilton 1. Summary of major highlights The verification of ECMWF products has continued as in previous

More information

Real time mitigation of ground clutter

Real time mitigation of ground clutter Real time mitigation of ground clutter John C. Hubbert, Mike Dixon and Scott Ellis National Center for Atmospheric Research, Boulder CO 1. Introduction The identification and mitigation of anomalous propagation

More information

Remote Sensing of Precipitation

Remote Sensing of Precipitation Lecture Notes Prepared by Prof. J. Francis Spring 2003 Remote Sensing of Precipitation Primary reference: Chapter 9 of KVH I. Motivation -- why do we need to measure precipitation with remote sensing instruments?

More information

WMO SPICE. World Meteorological Organization. Solid Precipitation Intercomparison Experiment - Overall results and recommendations

WMO SPICE. World Meteorological Organization. Solid Precipitation Intercomparison Experiment - Overall results and recommendations WMO World Meteorological Organization Working together in weather, climate and water WMO SPICE Solid Precipitation Intercomparison Experiment - Overall results and recommendations CIMO-XVII Amsterdam,

More information

Assimilation of precipitation-related observations into global NWP models

Assimilation of precipitation-related observations into global NWP models Assimilation of precipitation-related observations into global NWP models Alan Geer, Katrin Lonitz, Philippe Lopez, Fabrizio Baordo, Niels Bormann, Peter Lean, Stephen English Slide 1 H-SAF workshop 4

More information

AERODROME METEOROLOGICAL OBSERVATION AND FORECAST STUDY GROUP (AMOFSG)

AERODROME METEOROLOGICAL OBSERVATION AND FORECAST STUDY GROUP (AMOFSG) AMOFSG/9-SN No. 31 22/8/11 AERODROME METEOROLOGICAL OBSERVATION AND FORECAST STUDY GROUP (AMOFSG) NINTH MEETING Montréal, 26 to 30 September 2011 Agenda Item 5: Observing and forecasting at the aerodrome

More information

Chapter 2: Polarimetric Radar

Chapter 2: Polarimetric Radar Chapter 2: Polarimetric Radar 2.1 Polarimetric radar vs. conventional radar Conventional weather radars transmit and receive linear electromagnetic radiation whose electric field is parallel to the local

More information

Ground-based temperature and humidity profiling using microwave radiometer retrievals at Sydney Airport.

Ground-based temperature and humidity profiling using microwave radiometer retrievals at Sydney Airport. Ground-based temperature and humidity profiling using microwave radiometer retrievals at Sydney Airport. Peter Ryan Bureau of Meteorology, Melbourne, Australia Peter.J.Ryan@bom.gov.au ABSTRACT The aim

More information

ADL110B ADL120 ADL130 ADL140 How to use radar and strike images. Version

ADL110B ADL120 ADL130 ADL140 How to use radar and strike images. Version ADL110B ADL120 ADL130 ADL140 How to use radar and strike images Version 1.00 22.08.2016 How to use radar and strike images 1 / 12 Revision 1.00-22.08.2016 WARNING: Like any information of the ADL in flight

More information

Use of lightning data to improve observations for aeronautical activities

Use of lightning data to improve observations for aeronautical activities Use of lightning data to improve observations for aeronautical activities Françoise Honoré Jean-Marc Yvagnes Patrick Thomas Météo_France Toulouse France I Introduction Aeronautical activities are very

More information

Verification of the operational NWP models at DWD - with special focus at COSMO-EU

Verification of the operational NWP models at DWD - with special focus at COSMO-EU Verification of the operational NWP models at DWD - with special focus at COSMO-EU Ulrich Damrath Ulrich.Damrath@dwd.de Ein Mensch erkennt (und das ist wichtig): Nichts ist ganz falsch und nichts ganz

More information

TAF and TREND Verification

TAF and TREND Verification TAF and TREND Verification Guenter Mahringer and Horst Frey, Austro Control, Aviation MET Service Linz, A-4063 Hoersching, Austria. guenter.mahringer@austrocontrol.at The TAF Verification Concept A TAF

More information

5.6. Barrow, Alaska, USA

5.6. Barrow, Alaska, USA SECTION 5: QUALITY CONTROL SUMMARY 5.6. Barrow, Alaska, USA The Barrow installation is located on Alaska s North Slope at the edge of the Arctic Ocean in the city of Barrow. The instrument is located in

More information

DATA FUSION NOWCASTING AND NWP

DATA FUSION NOWCASTING AND NWP DATA FUSION NOWCASTING AND NWP Brovelli Pascal 1, Ludovic Auger 2, Olivier Dupont 1, Jean-Marc Moisselin 1, Isabelle Bernard-Bouissières 1, Philippe Cau 1, Adrien Anquez 1 1 Météo-France Forecasting Department

More information

Convective Structures in Clear-Air Echoes seen by a Weather Radar

Convective Structures in Clear-Air Echoes seen by a Weather Radar Convective Structures in Clear-Air Echoes seen by a Weather Radar Martin Hagen Deutsches Zentrum für Luft- und Raumfahrt Oberpfaffenhofen, Germany Weather Radar Weather radar are normally used to locate

More information

7B.4 EVALUATING A HAIL SIZE DISCRIMINATION ALGORITHM FOR DUAL-POLARIZED WSR-88Ds USING HIGH RESOLUTION REPORTS AND FORECASTER FEEDBACK

7B.4 EVALUATING A HAIL SIZE DISCRIMINATION ALGORITHM FOR DUAL-POLARIZED WSR-88Ds USING HIGH RESOLUTION REPORTS AND FORECASTER FEEDBACK 7B.4 EVALUATING A HAIL SIZE DISCRIMINATION ALGORITHM FOR DUAL-POLARIZED WSR-88Ds USING HIGH RESOLUTION REPORTS AND FORECASTER FEEDBACK Kiel L. Ortega 1, Alexander V. Ryzhkov 1, John Krause 1, Pengfei Zhang

More information

ASSESMENT OF THE SEVERE WEATHER ENVIROMENT IN NORTH AMERICA SIMULATED BY A GLOBAL CLIMATE MODEL

ASSESMENT OF THE SEVERE WEATHER ENVIROMENT IN NORTH AMERICA SIMULATED BY A GLOBAL CLIMATE MODEL JP2.9 ASSESMENT OF THE SEVERE WEATHER ENVIROMENT IN NORTH AMERICA SIMULATED BY A GLOBAL CLIMATE MODEL Patrick T. Marsh* and David J. Karoly School of Meteorology, University of Oklahoma, Norman OK and

More information

8A.6 AN OVERVIEW OF PRECIPITATION TYPE FORECASTING USING NAM AND SREF DATA

8A.6 AN OVERVIEW OF PRECIPITATION TYPE FORECASTING USING NAM AND SREF DATA 8A.6 AN OVERVIEW OF PRECIPITATION TYPE FORECASTING USING NAM AND SREF DATA Geoffrey S. Manikin NCEP/EMC/Mesoscale Modeling Branch Camp Springs, MD 1. INTRODUCTION A frequent forecasting problem associated

More information

Heavier summer downpours with climate change revealed by weather forecast resolution model

Heavier summer downpours with climate change revealed by weather forecast resolution model SUPPLEMENTARY INFORMATION DOI: 10.1038/NCLIMATE2258 Heavier summer downpours with climate change revealed by weather forecast resolution model Number of files = 1 File #1 filename: kendon14supp.pdf File

More information

VALIDATION RESULTS OF THE OPERATIONAL LSA-SAF SNOW COVER MAPPING

VALIDATION RESULTS OF THE OPERATIONAL LSA-SAF SNOW COVER MAPPING VALIDATION RESULTS OF THE OPERATIONAL LSA-SAF SNOW COVER MAPPING Niilo Siljamo, Otto Hyvärinen Finnish Meteorological Institute, Erik Palménin aukio 1, P.O.Box 503, FI-00101 HELSINKI Abstract Hydrological

More information

Global Surface Archives Documentation

Global Surface Archives Documentation Global Surface Archives Documentation 1 July 2013 PO BOX 450211 GARLAND TX 75045 www.weathergraphics.com Global Surface Archives is a dataset containing hourly and special observations from official observation

More information

An Adaptive Neural Network Scheme for Radar Rainfall Estimation from WSR-88D Observations

An Adaptive Neural Network Scheme for Radar Rainfall Estimation from WSR-88D Observations 2038 JOURNAL OF APPLIED METEOROLOGY An Adaptive Neural Network Scheme for Radar Rainfall Estimation from WSR-88D Observations HONGPING LIU, V.CHANDRASEKAR, AND GANG XU Colorado State University, Fort Collins,

More information

Basic Verification Concepts

Basic Verification Concepts Basic Verification Concepts Barbara Brown National Center for Atmospheric Research Boulder Colorado USA bgb@ucar.edu Basic concepts - outline What is verification? Why verify? Identifying verification

More information

Meteorology 311. RADAR Fall 2016

Meteorology 311. RADAR Fall 2016 Meteorology 311 RADAR Fall 2016 What is it? RADAR RAdio Detection And Ranging Transmits electromagnetic pulses toward target. Tranmission rate is around 100 s pulses per second (318-1304 Hz). Short silent

More information

EAS 535 Laboratory Exercise Weather Station Setup and Verification

EAS 535 Laboratory Exercise Weather Station Setup and Verification EAS 535 Laboratory Exercise Weather Station Setup and Verification Lab Objectives: In this lab exercise, you are going to examine and describe the error characteristics of several instruments, all purportedly

More information

Categorical Verification

Categorical Verification Forecast M H F Observation Categorical Verification Tina Kalb Contributions from Tara Jensen, Matt Pocernich, Eric Gilleland, Tressa Fowler, Barbara Brown and others Finley Tornado Data (1884) Forecast

More information

DETERMINING USEFUL FORECASTING PARAMETERS FOR LAKE-EFFECT SNOW EVENTS ON THE WEST SIDE OF LAKE MICHIGAN

DETERMINING USEFUL FORECASTING PARAMETERS FOR LAKE-EFFECT SNOW EVENTS ON THE WEST SIDE OF LAKE MICHIGAN DETERMINING USEFUL FORECASTING PARAMETERS FOR LAKE-EFFECT SNOW EVENTS ON THE WEST SIDE OF LAKE MICHIGAN Bradley M. Hegyi National Weather Center Research Experiences for Undergraduates University of Oklahoma,

More information

Advances in Military Technology Vol. 3, No. 2, December Prediction of Ice Formation on Earth s surface. K. Dejmal 1 and V. Répal *2.

Advances in Military Technology Vol. 3, No. 2, December Prediction of Ice Formation on Earth s surface. K. Dejmal 1 and V. Répal *2. AiMT Advances in Military Technology Vol. 3, No. 2, December 28 Prediction of Ice Formation on Earth s surface K. Dejmal 1 and V. Répal *2 1 Department of Military Geography and Meteorology, University

More information

DEFINITION OF DRY THUNDERSTORMS FOR USE IN VERIFYING SPC FIRE WEATHER PRODUCTS. 2.1 Data

DEFINITION OF DRY THUNDERSTORMS FOR USE IN VERIFYING SPC FIRE WEATHER PRODUCTS. 2.1 Data S47 DEFINITION OF DRY THUNDERSTORMS FOR USE IN VERIFYING SPC FIRE WEATHER PRODUCTS Paul X. Flanagan *,1,2, Christopher J. Melick 3,4, Israel L. Jirak 4, Jaret W. Rogers 4, Andrew R. Dean 4, Steven J. Weiss

More information

Data Short description Parameters to be used for analysis SYNOP. Surface observations by ships, oil rigs and moored buoys

Data Short description Parameters to be used for analysis SYNOP. Surface observations by ships, oil rigs and moored buoys 3.2 Observational Data 3.2.1 Data used in the analysis Data Short description Parameters to be used for analysis SYNOP Surface observations at fixed stations over land P,, T, Rh SHIP BUOY TEMP PILOT Aircraft

More information

Current verification practices with a particular focus on dust

Current verification practices with a particular focus on dust Current verification practices with a particular focus on dust Marion Mittermaier and Ric Crocker Outline 1. Guide to developing verification studies 2. Observations at the root of it all 3. Grid-to-point,

More information

statistical methods for tailoring seasonal climate forecasts Andrew W. Robertson, IRI

statistical methods for tailoring seasonal climate forecasts Andrew W. Robertson, IRI statistical methods for tailoring seasonal climate forecasts Andrew W. Robertson, IRI tailored seasonal forecasts why do we make probabilistic forecasts? to reduce our uncertainty about the (unknown) future

More information

Operational quantitative precipitation estimation using radar, gauge g and

Operational quantitative precipitation estimation using radar, gauge g and Operational quantitative precipitation estimation using radar, gauge g and satellite for hydrometeorological applications in Southern Brazil Leonardo Calvetti¹, Cesar Beneti¹, Diogo Stringari¹, i¹ Alex

More information

The assimilation of AMSU and SSM/I brightness temperatures in clear skies at the Meteorological Service of Canada

The assimilation of AMSU and SSM/I brightness temperatures in clear skies at the Meteorological Service of Canada The assimilation of AMSU and SSM/I brightness temperatures in clear skies at the Meteorological Service of Canada Abstract David Anselmo and Godelieve Deblonde Meteorological Service of Canada, Dorval,

More information

Plan for operational nowcasting system implementation in Pulkovo airport (St. Petersburg, Russia)

Plan for operational nowcasting system implementation in Pulkovo airport (St. Petersburg, Russia) Plan for operational nowcasting system implementation in Pulkovo airport (St. Petersburg, Russia) Pulkovo airport (St. Petersburg, Russia) is one of the biggest airports in the Russian Federation (150

More information

The Montague Doppler Radar, An Overview

The Montague Doppler Radar, An Overview ISSUE PAPER SERIES The Montague Doppler Radar, An Overview June 2018 NEW YORK STATE TUG HILL COMMISSION DULLES STATE OFFICE BUILDING 317 WASHINGTON STREET WATERTOWN, NY 13601 (315) 785-2380 WWW.TUGHILL.ORG

More information

The Impact of Horizontal Resolution and Ensemble Size on Probabilistic Forecasts of Precipitation by the ECMWF EPS

The Impact of Horizontal Resolution and Ensemble Size on Probabilistic Forecasts of Precipitation by the ECMWF EPS The Impact of Horizontal Resolution and Ensemble Size on Probabilistic Forecasts of Precipitation by the ECMWF EPS S. L. Mullen Univ. of Arizona R. Buizza ECMWF University of Wisconsin Predictability Workshop,

More information

Spatial and Temporal Characteristics of Heavy Precipitation Events over Canada

Spatial and Temporal Characteristics of Heavy Precipitation Events over Canada 1MAY 2001 ZHANG ET AL. 1923 Spatial and Temporal Characteristics of Heavy Precipitation Events over Canada XUEBIN ZHANG, W.D.HOGG, AND ÉVA MEKIS Climate Research Branch, Meteorological Service of Canada,

More information

M. Mielke et al. C5816

M. Mielke et al. C5816 Atmos. Chem. Phys. Discuss., 14, C5816 C5827, 2014 www.atmos-chem-phys-discuss.net/14/c5816/2014/ Author(s) 2014. This work is distributed under the Creative Commons Attribute 3.0 License. Atmospheric

More information

Verification of Continuous Forecasts

Verification of Continuous Forecasts Verification of Continuous Forecasts Presented by Barbara Brown Including contributions by Tressa Fowler, Barbara Casati, Laurence Wilson, and others Exploratory methods Scatter plots Discrimination plots

More information

2.10 HOMOGENEITY ASSESSMENT OF CANADIAN PRECIPITATION DATA FOR JOINED STATIONS

2.10 HOMOGENEITY ASSESSMENT OF CANADIAN PRECIPITATION DATA FOR JOINED STATIONS 2.10 HOMOGENEITY ASSESSMENT OF CANADIAN PRECIPITATION DATA FOR JOINED STATIONS Éva Mekis* and Lucie Vincent Meteorological Service of Canada, Toronto, Ontario 1. INTRODUCTION It is often essential to join

More information

QualiMET 2.0. The new Quality Control System of Deutscher Wetterdienst

QualiMET 2.0. The new Quality Control System of Deutscher Wetterdienst QualiMET 2.0 The new Quality Control System of Deutscher Wetterdienst Reinhard Spengler Deutscher Wetterdienst Department Observing Networks and Data Quality Assurance of Meteorological Data Michendorfer

More information

'$1,6+0(7(252/2*,&$/,167,787(

'$1,6+0(7(252/2*,&$/,167,787( '$1,6+(7(252/2*,&$/,167,787( 7(&+1,&$/ 5(3257 )RUHFDVW YHULILFDWLRQ UHSRUW -DQXDU\ $QQD +LOGHQ 2YH.M U DQG 5DVPXV )HOGEHUJ ', &23(1+$*(1 ,6611U ; ; Preface The present report is the second in a series

More information

Description of Precipitation Retrieval Algorithm For ADEOS II AMSR

Description of Precipitation Retrieval Algorithm For ADEOS II AMSR Description of Precipitation Retrieval Algorithm For ADEOS II Guosheng Liu Florida State University 1. Basic Concepts of the Algorithm This algorithm is based on Liu and Curry (1992, 1996), in which the

More information

SPI: Standardized Precipitation Index

SPI: Standardized Precipitation Index PRODUCT FACT SHEET: SPI Africa Version 1 (May. 2013) SPI: Standardized Precipitation Index Type Temporal scale Spatial scale Geo. coverage Precipitation Monthly Data dependent Africa (for a range of accumulation

More information

WIND PROFILER NETWORK OF JAPAN METEOROLOGICAL AGENCY

WIND PROFILER NETWORK OF JAPAN METEOROLOGICAL AGENCY WIND PROFILER NETWORK OF JAPAN METEOROLOGICAL AGENCY Masahito Ishihara Japan Meteorological Agency CIMO Expert Team on Remote Sensing Upper-Air Technology and Techniques 14-17 March, 2005 Geneva, Switzerland

More information

152 STATISTICAL PREDICTION OF WATERSPOUT PROBABILITY FOR THE FLORIDA KEYS

152 STATISTICAL PREDICTION OF WATERSPOUT PROBABILITY FOR THE FLORIDA KEYS 152 STATISTICAL PREDICTION OF WATERSPOUT PROBABILITY FOR THE FLORIDA KEYS Andrew Devanas 1, Lydia Stefanova 2, Kennard Kasper 1, Sean Daida 1 1 NOAA/National Wear Service, Key West, Florida, 2 COAPS/Florida

More information

Peter P. Neilley. And. Kurt A. Hanson. Weather Services International, Inc. 400 Minuteman Road Andover, MA 01810

Peter P. Neilley. And. Kurt A. Hanson. Weather Services International, Inc. 400 Minuteman Road Andover, MA 01810 6.4 ARE MODEL OUTPUT STATISTICS STILL NEEDED? Peter P. Neilley And Kurt A. Hanson Weather Services International, Inc. 400 Minuteman Road Andover, MA 01810 1. Introduction. Model Output Statistics (MOS)

More information

Observations from Plant City Municipal Airport during the time period of interest are summarized below:

Observations from Plant City Municipal Airport during the time period of interest are summarized below: December 3, 2014 James A. Murman Barr, Murman & Tonelli 201 East Kennedy Boulevard Suite 1700 Tampa, FL 33602 RE: Case No. 166221; BMT Matter No.: 001.001007 Location of Interest: 1101 Victoria Street,

More information

5.4 USING PROBABILISTIC FORECAST GUIDANCE AND AN UPDATE TECHNIQUE TO GENERATE TERMINAL AERODROME FORECASTS

5.4 USING PROBABILISTIC FORECAST GUIDANCE AND AN UPDATE TECHNIQUE TO GENERATE TERMINAL AERODROME FORECASTS 5.4 USING PROBABILISTIC FORECAST GUIDANCE AND AN UPDATE TECHNIQUE TO GENERATE TERMINAL AERODROME FORECASTS Mark G. Oberfield* and Matthew R. Peroutka Meteorological Development Laboratory Office of Science

More information