
P1.34 SATELLITE AND NUMERICAL MODEL DATA-DRIVEN CLOUD CEILING AND VISIBILITY ESTIMATION

Richard L. Bankert* and Michael Hadjimichael
Naval Research Laboratory, Monterey, CA

Paul H. Herzegh, Gerry Wiener, and Jim Cowie
National Center for Atmospheric Research, Boulder, CO

John M. Brown
NOAA Research Forecast Systems Laboratory, Boulder, CO

1. INTRODUCTION

The Federal Aviation Administration's (FAA) Aviation Weather Research Program (AWRP) National Ceiling and Visibility (NCV) product development team is researching and developing automated cloud ceiling and visibility (C&V) products for operational users: pilots, dispatchers, controllers, etc. These products include current analyses and forecasts (up to 2 hours) of ceiling, visibility, and flight category conditions. Implementation of these new algorithms and procedures, which support the timely analysis and forecasting of C&V conditions and improve access to the resulting products, is expected to benefit air safety, flight planning, and in-flight guidance. More information on the NCV analysis and forecasting techniques can be found in Herzegh et al. (2004).

For current C&V analyses, rapid (5 minute) updates of C&V conditions will be provided in graphical form. The graphical output will draw on tools developed for the examination and analysis of METARs, TAFs, AIRMETs, and satellite, radar, and numerical model data. One current area of research is the use of Knowledge Discovery from Databases (KDD) methodology to develop algorithms that better estimate C&V where direct observations are not available. In addition to the modeling of weather phenomena driven by physical laws (verified by data), some sensible weather elements can be modeled through a data-driven approach. KDD methods allow a simple model to be developed from existing data and then applied to new data.
Within the context of the KDD approach, data mining of historical satellite, numerical model, and METAR data will uncover the data relationships needed to estimate C&V from satellite and/or numerical model data.

* Corresponding author address: Richard L. Bankert, Naval Research Laboratory, 7 Grace Hopper Ave., Monterey, CA 93943-5502; e-mail: bankert@nrlmry.navy.mil

2. BACKGROUND

For the NCV product development team, filling the spatial and temporal holes ("gap-filling") in ceiling and visibility observations is a key challenge. Previous work (Bankert et al. 2004) demonstrated the potential usefulness of cloud ceiling height estimation algorithms developed through the KDD approach. Figure 1 is an example of the real-time output generated by the Naval Research Laboratory (NRL) GOES-10 cloud ceiling algorithm. Using GOES-10 data from a 3-year period, relationships between the satellite data and METAR observations were discovered to generate this cloud ceiling height estimation algorithm. Table 1 compares METAR ceiling heights with the KDD GOES-10 estimated ceiling heights at various locations for the example in Figure 1.

Table 1. Observed cloud ceiling heights (METAR) and NRL KDD GOES-10 cloud ceiling height estimation (from Figure 1).

LOCATION         METAR CLOUD CEILING HEIGHT (FT)   NRL KDD GOES-10 CEILING HEIGHT (FT)
Santa Barbara    3                                 24
Van Nuys         8                                 95
Los Angeles      8                                 525
Long Beach       4                                 35
Santa Ana        6                                 355
Oceanside        9                                 24
San Diego        6                                 63

Data collection, data processing, data mining, and algorithm development are all part of the KDD methodology. Since the KDD-developed algorithms are ultimately used to assess cloud ceilings in real-time situations, the choice of data sources and data mining tools is very important.
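As a purely illustrative sketch of the data collection step, the three historical sources can be joined on a common station/hour key before mining; the field names below are hypothetical, not the actual database schema:

```python
def build_records(ruc_rows, goes_rows, metar_rows):
    """Join hourly RUC, GOES, and METAR values on (station, valid_hour).

    Each argument is a list of dicts carrying 'station' and 'valid_hour'
    keys (illustrative names). Only hours for which all three sources
    are present become database records.
    """
    key = lambda row: (row["station"], row["valid_hour"])
    ruc = {key(r): r for r in ruc_rows}
    goes = {key(g): g for g in goes_rows}
    records = []
    for obs in metar_rows:
        k = key(obs)
        if k in ruc and k in goes:
            # Model and satellite fields become predictors; the METAR
            # ceiling/visibility observations become the target values.
            records.append({**ruc[k], **goes[k],
                            "ceiling_ft": obs["ceiling_ft"],
                            "visibility_mi": obs["visibility_mi"]})
    return records
```

Incomplete hours are simply dropped, mirroring the requirement that every database record hold a full set of RUC, satellite, and METAR values.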

Table 2. RUC parameters stored in hourly, location-dependent database records, used to find relationships between these parameters and observed cloud ceiling (C) and visibility (V).

RH at lowest model level (C&V)
Dewpoint temperature at lowest model level (C&V)
Empirical ceiling (using lowest level T and Td) (C)
LCL (C)
Temperature at lowest model level (C&V)
u-wind component at lowest model level (C&V)
v-wind component at lowest model level (C&V)
Total wind speed at lowest model level (C&V)
Sensible heat flux at surface (C&V)
Latent heat flux at surface (C&V)
Bowen ratio (C&V)
Height of lowest model level with RH > 90% (C)
Terrain height (C&V)
Cloud base height (C)
Cloud top height (C&V)
Cloud top temperature (C&V)
Cloud / No cloud (yes/no) (C&V)
Snow cover depth (V)
Snow cover / No snow cover (yes/no) (C&V)
Height of model level with max vapor mixing ratio (C)
u-wind component in 0-30 mb AGL layer (C&V)
v-wind component in 0-30 mb AGL layer (C&V)
Total wind speed in 0-30 mb AGL layer (C&V)
Average RH in lowest 50 mb (C&V)
Temp diff (top and bottom) in lowest 50 mb (C&V)
Precipitable water (C)
Precipitable water ratio (C&V)
Richardson number (lowest 4 levels) (C&V)
PBL depth (C&V)
Vertically-averaged TKE in PBL (C&V)
Stoelinga-Warner ceiling (C)
Soil temperature (C&V)
Net longwave radiation (C&V)
Net shortwave radiation (C&V)
Stoelinga-Warner visibility (V)
Categorical rain (V)
Categorical snow (V)
Categorical freezing rain (V)
Categorical ice pellets (V)
Ground moisture availability (C&V)

Figure 1. 6 June 2004 2 UTC visible image (top) and corresponding KDD GOES-10 cloud ceiling image (bottom) of Southern California and adjacent waters, with ceiling heights estimated in feet.

3. DATA AND ALGORITHMS

Based on the success of the NRL KDD cloud ceiling research and development, C&V estimation algorithms are currently being developed and tested in a similar manner as part of the NCV product development team's research.
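Table 2's "empirical ceiling" entry is a simple parameter derived from the lowest-level temperature and dewpoint. The paper does not give its exact formulation, so the sketch below assumes one common rule of thumb: a lifted parcel condenses roughly 125 m above the surface per degree Celsius of dewpoint depression.

```python
def empirical_ceiling_m(t_c, td_c):
    """Rule-of-thumb cloud-base (ceiling) estimate in meters AGL from
    lowest-level temperature and dewpoint (deg C): roughly 125 m of
    lift per degree of dewpoint depression (t_c - td_c).
    An assumed formulation, not necessarily the one used in the RUC
    database."""
    return max(0.0, 125.0 * (t_c - td_c))
```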
Archived (from 2004) hourly Rapid Update Cycle (RUC) model and Geostationary Operational Environmental Satellite (GOES-12) data are being used to create a database of model and satellite parameters. Four geographic regions within the continental U.S. are being studied separately: Iowa, Northeast Texas, Gulf Coast (Texas to Florida), and Mid-Atlantic (Connecticut to Virginia). These areas provide a sufficient quantity and density of METAR stations to train and test the KDD-developed algorithms. Each record in the database is a set of RUC, GOES-12, and METAR parameter values for a given time at a selected METAR location. A list of

RUC parameters can be found in Table 2. GOES-12 parameters include channel and channel-differencing data. In the near future, cloud property data computed from a combination of GOES-12 and RUC data will be added.

Data mining was performed on the database to uncover relationships among the RUC and/or GOES-12 variables that balance predictive skill with model generality for the generated C&V algorithms. Classification models (represented as decision trees) were produced using the RuleQuest Research (1997-2005) C5.0 data mining tool (Quinlan, 1993); rule-based predictive models (numerical output) were produced using RuleQuest's Cubist algorithm. To evaluate the performance of these algorithms, the data for a given region is split into training and testing sets, with all records for a specific METAR station placed in either the training or the testing set. C5.0 and Cubist are applied to the training set to uncover patterns and relationships within the RUC and GOES-12 data that can be used to assess cloud ceiling and visibility.

The KDD-derived cloud ceiling algorithm is a three-step procedure that ultimately identifies and estimates the height of low cloud ceilings at a specific location:

Step 1: Yes/No ceiling classification. If a ceiling exists (yes class), proceed to Step 2.
Step 2: Low/High ceiling classification (1000 m). If the ceiling is low (below 1000 m), proceed to Step 3.
Step 3: Compute ceiling height.

Similarly, the visibility algorithm is currently a two-step process to identify and estimate low visibilities at the surface:

Step 1: Low/High visibility classification (5 mi). If visibility is low (less than 5 mi), proceed to Step 2.
Step 2: Compute visibility distance.

These algorithms are then evaluated on the testing set.

4. INITIAL RESULTS

At the time of this writing, only RUC data has populated the database to a level sufficient for performance evaluation of the proposed C&V estimation algorithms.
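The stepwise ceiling and visibility procedures described above amount to a cascade of trained models. In the sketch below, `has_ceiling`, `is_low_ceiling`, `ceiling_height`, `is_low_visibility`, and `visibility` are stand-ins for the trained C5.0 classifiers and Cubist regression models; any callables taking a record of predictors will do.

```python
def estimate_ceiling(record, has_ceiling, is_low_ceiling, ceiling_height):
    """Three-step KDD ceiling procedure for one database record."""
    if not has_ceiling(record):        # Step 1: yes/no ceiling
        return "no ceiling"
    if not is_low_ceiling(record):     # Step 2: low/high (1000 m threshold)
        return "high ceiling"
    return ceiling_height(record)      # Step 3: estimated low-ceiling height


def estimate_visibility(record, is_low_visibility, visibility):
    """Two-step KDD visibility procedure (5 mi threshold at Step 1)."""
    if not is_low_visibility(record):  # Step 1: low/high visibility
        return "high visibility"
    return visibility(record)          # Step 2: estimated visibility distance
```

A record only reaches the regression model when the upstream classifiers flag a low ceiling (or low visibility), so classification errors in the early steps bound the skill achievable in the final estimation step.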
RUC-only C&V algorithms have been developed and tested for all four regions using approximately 10 months of hourly data from 2004. For each region, selected METAR stations are used for testing and are not involved in the data mining development of the algorithm. As an example, the selected Iowa stations are shown in Figure 2.

4.1 Cloud Ceiling Height

Each of the three steps within a KDD cloud ceiling algorithm can be evaluated. For this RUC-only data, day and night data records are combined; future research that includes GOES-12 data will separate day and night records. Figure 3 graphs the performance of each region's algorithm at Step 1 (yes/no ceiling). Four types of performance measures are examined: Overall Accuracy, Probability of Detection (POD) of ceilings, False Alarm Ratio (FAR), and True Skill Score (TSS). While overall accuracy is similar (over 80%) for all regions, less skill was demonstrated in the Gulf Coast testing. These results are expected to improve with the introduction of GOES-12 data and computed cloud properties; previous work (Bankert et al. 2004) has shown that satellite data is a more reliable source for determining the existence of a cloud ceiling.

Performance measures for the Step 2 low/high ceiling classification are displayed in Figure 4. POD, FAR, CSI, and TSS are measured with respect to low ceilings, which are of most interest to the aviation community. These results demonstrate fairly high accuracy and skill for the RUC-only algorithms in classifying whether a ceiling is high or low. Once again, the algorithm developed for Gulf Coast locations showed the least skill; the Gulf Coast stations may be less homogeneous than the other regions in terms of relating RUC parameters to cloud ceiling. Figure 5 displays the performance measures for Step 3, low ceiling height estimation. Similar results were found for all four regions.
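For reference, the categorical measures reported in Figures 3, 4, and 6 are standard 2x2 contingency-table scores. The sketch below computes them from the four counts; in practice the counts would come from comparing algorithm output against the testing-set METAR observations.

```python
def contingency_scores(hits, misses, false_alarms, correct_negatives):
    """Acc, POD, FAR, CSI, and TSS from 2x2 contingency-table counts,
    where a 'hit' means the event (e.g. a low ceiling) was both
    forecast and observed."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    n = a + b + c + d
    return {
        "Acc": (a + d) / n,                 # Overall Accuracy
        "POD": a / (a + c),                 # Probability of Detection
        "FAR": b / (a + b),                 # False Alarm Ratio
        "CSI": a / (a + b + c),             # Critical Success Index
        "TSS": a / (a + c) - b / (b + d),   # True Skill Score (POD - POFD)
    }
```

TSS (also known as the Hanssen-Kuipers discriminant) rewards detection while penalizing false alarms relative to the number of non-events, so a no-skill forecast scores 0 regardless of how uneven the yes/no class distribution is.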
Each of the four regional algorithms (all three steps) developed through the data mining process can and will be further analyzed in future work to evaluate any physical meaning of the decision trees and rules that were produced. Region independence will also be examined: can one algorithm be developed for all regions?

4.2 Visibility

The initial test of a RUC-only visibility estimation algorithm suggests some skill, but improvement is needed. As with the cloud ceiling study, each step is analyzed for each region. For the classification of high/low visibility, performance

Figure 2. Iowa METAR station locations, with the testing set of METARs enclosed by a black border. All other station data are used to train the region-specific algorithm through the data mining process.

Figure 3. For each region, performance measures for cloud ceiling yes/no classification (Step 1) of the RUC-only algorithms. Acc: Overall Accuracy, POD: Probability of Detection, FAR: False Alarm Ratio, CSI: Critical Success Index, TSS: True Skill Score.
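The station-level holdout illustrated in Figure 2, in which whole stations rather than individual hours are withheld, can be sketched as follows; the `station` key and test fraction are illustrative choices, not the study's actual configuration:

```python
import random

def split_by_station(records, test_fraction=0.2, seed=0):
    """Place every hourly record for a METAR station entirely in the
    training set or entirely in the testing set, so that evaluation
    measures generalization to unseen locations rather than to unseen
    hours at already-seen locations."""
    stations = sorted({r["station"] for r in records})
    rng = random.Random(seed)
    rng.shuffle(stations)
    n_test = max(1, int(len(stations) * test_fraction))
    test_stations = set(stations[:n_test])
    train = [r for r in records if r["station"] not in test_stations]
    test = [r for r in records if r["station"] in test_stations]
    return train, test
```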

Figure 4. For each region, performance measures for cloud ceiling low/high classification (Step 2) of the RUC-only algorithms. Acc: Overall Accuracy, POD: Probability of Detection, FAR: False Alarm Ratio, CSI: Critical Success Index, TSS: True Skill Score.

Figure 5. For each region, performance measures for low ceiling height estimation (Step 3) of the RUC-only algorithms. AAE: Average Absolute Error (meters), COR: Correlation Coefficient (%).
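The continuous-output measures in Figures 5 and 7, AAE and COR, are the mean absolute error and the Pearson correlation between observed and estimated values, which can be computed as:

```python
from math import sqrt

def aae(observed, estimated):
    """Average Absolute Error over paired observed/estimated values."""
    return sum(abs(o - e) for o, e in zip(observed, estimated)) / len(observed)

def cor(observed, estimated):
    """Pearson correlation coefficient between observed and estimated
    values (multiply by 100 for the percentage form used in Figure 5)."""
    n = len(observed)
    mo = sum(observed) / n
    me = sum(estimated) / n
    cov = sum((o - mo) * (e - me) for o, e in zip(observed, estimated))
    so = sqrt(sum((o - mo) ** 2 for o in observed))
    se = sqrt(sum((e - me) ** 2 for e in estimated))
    return cov / (so * se)
```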

Figure 6. For each region, performance measures for visibility low/high classification (Step 1) of the RUC-only algorithms. Acc: Overall Accuracy, POD: Probability of Detection, FAR: False Alarm Ratio, CSI: Critical Success Index, TSS: True Skill Score.

Figure 7. For each region, performance measures for low visibility estimation (Step 2) of the RUC-only algorithms. AAE: Average Absolute Error (miles), COR: Correlation Coefficient.

measures are displayed in Figure 6. While the overall accuracy of this classification is high in all four regions, the skill for low visibility identification is relatively low, particularly for the Gulf Coast stations. Some adjustments may improve this result, including using a different threshold for separating low and high cases to create a more even class distribution (currently there are many more high cases than low), or a different end-system design (e.g., one step instead of two).

The algorithm's performance in estimating visibility distance for low visibility cases (Step 2) is presented in Figure 7. Similar error and correlation values were produced for all four regions. The addition of GOES-12 data and RUC/GOES-12 cloud properties may improve these results: an experiment using a much smaller data set of GOES-12 channel values combined with RUC data produced better performance measures than RUC data alone.

5. FUTURE WORK

The next step as this research moves forward is completing database development to include GOES-12 and cloud property data. For robust data mining, and to build confidence that the algorithms generalize to real-time data, a database of at least one year of hourly records is necessary and will be collected. More algorithm training and testing experiments are subsequently needed for all four geographic regions being studied. Each of the following items will be investigated to complete a thorough study and to find ceiling and/or visibility algorithm enhancements:
1) Train and test algorithms that use a combination of data from two or more regions, to test KDD algorithms for region independence.
2) Compare C&V results to Stoelinga-Warner (1999) C&V values (the current RUC translation algorithm for visibility) to measure relative performance.
3) Add cloud properties (liquid water path, cloud optical depth, etc.), some computed from GOES-12 alone and some from a combination of RUC and GOES-12, to the parameter list.

Some additional questions also need to be addressed:

1) For each step in either a ceiling or visibility algorithm, does it make sense to use just one of the data sources (GOES or RUC) or a combination? What criteria should be used to judge this?
2) Can these algorithms be applied to mountainous areas, or are separate algorithms needed? What is the best way to deal with the western U.S. (a different GOES satellite)?
3) What criteria will be used to determine whether any KDD algorithms demonstrate enough skill for transition to an operational setting?

Finally, further analysis of the physical reasoning implicit in the KDD algorithms may prove insightful and beneficial to future research.

ACKNOWLEDGEMENTS

The authors gratefully acknowledge the assistance of Stan Benjamin (NOAA/FSL), Ted Tsui (NRL), and other FAA NCV product development team members. This research is funded by the Federal Aviation Administration Aviation Weather Research Program (AWRP). The views expressed are those of the authors and do not necessarily represent the official policy or position of the FAA.

REFERENCES

Bankert, R.L., M. Hadjimichael, A.P. Kuciauskas, W.T. Thompson, and K. Richardson, 2004: Remote cloud ceiling assessment using data-mining methods. J. Appl. Meteor., 43, 1929-1946.

Herzegh, P.H., G. Wiener, R. Bankert, R. Bateman, B. Chorbajian, and M. Tryhane, 2004: Automated analysis and forecast techniques for ceiling and visibility on the national scale. Preprints, 11th Conference on Aviation, Range, and Aerospace Meteorology, Hyannis, MA, Amer. Meteor.
Soc., CD-ROM.

Quinlan, J.R., 1993: C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, San Mateo, CA, 302 pp.

RuleQuest Research, 1997-2005: C5.0 and Cubist. http://www.rulequest.com

Stoelinga, M.T., and T.T. Warner, 1999: Nonhydrostatic, mesobeta-scale model simulations of cloud ceiling and visibility for an East Coast winter precipitation event. J. Appl. Meteor., 38, 385-404.