Methodology for 2013 Stream 1.5 Candidate Evaluation
20 May 2013

TCMT Stream 1.5 Analysis Team: Louisa Nance, Mrinal Biswas, Barbara Brown, Tressa Fowler, Paul Kucera, Kathryn Newman, Jonathan Vigh, and Christopher Williams

Each Stream 1.5 candidate was evaluated based on three basic criteria: (1) a direct comparison between the Stream 1.5 candidate and each of last year's top-flight models, (2) an assessment of how the Stream 1.5 candidate performed relative to last year's top-flight models as a group, and (3) an evaluation of the Stream 1.5 candidate's impact on operational consensus forecasts, or a direct comparison between the Stream 1.5 candidate and the operational consensus when appropriate. The following describes the baselines used for each aspect of the evaluation, how the experimental forecasts and the operational baselines were processed, and the approaches used for each type of analysis.

Baselines

The operational models used as baselines or as components of baselines for the Stream 1.5 analysis are described in Table 1. Note that only the early version [1] of all model guidance was considered in this analysis. The operational baselines used in evaluations as top-flight models are ECMWF, GFS, and GFDL for track, and LGEM, DSHP, and GFDL for intensity. For evaluations of the Stream 1.5 candidate's impact on model consensus, the variable consensus aids TVCA and TVCE were used as the track baselines for the Atlantic and eastern North Pacific basins, respectively, and the fixed consensus, ICON, was used as the intensity baseline for both basins. The membership of the variable track consensus and the fixed intensity consensus is defined in Table 1. Note that the variable consensus requires that at least two of the members be present for a consensus forecast to be computed, whereas the fixed consensus requires that all members be present.
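The consensus membership rules described above can be sketched as follows. This is an illustrative simplification, not the MET-TC implementation; the function names and the dictionary-based interface are hypothetical.

```python
import numpy as np

ICON_MEMBERS = ["DSHP", "LGEM", "GHMI", "HWFI"]          # fixed: all required
TVCA_MEMBERS = ["EMXI", "GFSI", "EGRI", "GHMI", "HWFI"]  # variable: at least two required

def fixed_consensus(forecasts, members):
    """Average only if every member is present; otherwise no consensus."""
    values = [forecasts.get(m) for m in members]
    if any(v is None for v in values):
        return None
    return float(np.mean(values))

def variable_consensus(forecasts, members, min_members=2):
    """Average the available members, requiring at least min_members."""
    values = [forecasts[m] for m in members if m in forecasts]
    if len(values) < min_members:
        return None
    return float(np.mean(values))

# Example: intensity guidance (kt) at one lead time with HWFI missing
fcst = {"DSHP": 85.0, "LGEM": 90.0, "GHMI": 95.0, "EMXI": 88.0}
print(fixed_consensus(fcst, ICON_MEMBERS))        # None: ICON unavailable
print(variable_consensus(fcst, TVCA_MEMBERS))     # averages GHMI and EMXI
```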
Because the evaluation of each Stream 1.5 candidate is based on a homogeneous sample, the variable consensus with the Stream 1.5 candidate will actually require that at least three members be present. In other words, cases for which the Stream 1.5 candidate is available but only one member of the operational variable consensus is available are not included in a homogeneous sample, because the operational consensus would not be available for comparison.

Early Model Conversion

Modeling groups participating in the 2013 Stream 1.5 exercise submitted forecast storm properties as text files conforming to the Automated Tropical Cyclone Forecast (ATCF) file format specifications. Each modeling group generated these data, which are referred to as Tier 1 data, by applying its own method of storm tracking to its model output. The forecasts from all the dynamical models are considered late model guidance [1]. In addition, the Florida State University Multi-Model Super Ensemble (FSU-MMSE) is considered late model guidance because the members of this consensus are a combination of early and late models.

[1] The National Hurricane Center (NHC) characterizes forecast models as early or late depending on whether their numerical guidance is available to the forecaster during the forecast cycle. Models that are available shortly after they are initialized are referred to as early models. Models with run times such that the numerical guidance is not available until after the forecaster needs to release the forecast are considered late models. Early versions of late models are generated through an objective adjustment process provided by an interpolator program.

To perform the analysis in terms of early models, early model versions of the late model forecasts were generated using an interpolator package with the same functionality as the software used by the NHC. The interpolator first applies a smoother to an individual track and intensity forecast and then applies the appropriate time lag to the forecast based on when the model guidance becomes available. The time-lagged track or intensity forecast is then adjusted, or shifted, such that the new initial (zero-hour) guidance matches the analyzed position and intensity of the tropical cyclone. This adjustment is applied to all lead times for track, whereas the operational interpolator offers two adjustment methods for intensity. The first option, the full offset option, applies the same adjustment to all lead times. The second option applies the full adjustment to the time-lagged forecast out to a specified lead time tf, applies a linearly decreasing adjustment from lead time tf to lead time tn, and then applies no adjustment for the remainder of the forecast lead times. Each modeling group was asked to select the intensity offset option it felt was most appropriate for its model, including the parameters tf and tn if the variable offset option was selected. The one exception was the FSU-MMSE: based on discussions with NHC, it was decided to simply time-lag the FSU-MMSE intensity guidance without applying any adjustment to match the analyzed intensity for the zero-hour guidance. All Stream 1.5 candidates in the late model category were converted to early model versions using the assumption that their run times are short enough for the guidance to be available for the forecast cycle six hours after model initialization (i.e., the 6-h forecast is converted to 0-h).
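The offset step of the two intensity adjustment methods can be illustrated as follows. This is a simplified sketch, not the NHC interpolator itself: the smoothing step is omitted, and the function name and interface are hypothetical.

```python
import numpy as np

def early_model_offset(lead_times, lagged_fcst, analyzed_t0,
                       method="full", tf=None, tn=None):
    """Adjust a time-lagged forecast so its new 0-h value matches the analysis.

    lead_times: forecast lead times (h); lagged_fcst[0] is the new 0-h guidance.
    method="full" applies the same offset at all leads; method="variable"
    applies the full offset out to tf, ramps linearly to zero at tn, and
    applies no offset thereafter.
    """
    lead_times = np.asarray(lead_times, dtype=float)
    lagged_fcst = np.asarray(lagged_fcst, dtype=float)
    delta = analyzed_t0 - lagged_fcst[0]          # mismatch at the new zero hour
    if method == "full":
        weight = np.ones_like(lead_times)
    else:                                         # linear ramp between tf and tn
        weight = np.clip((tn - lead_times) / (tn - tf), 0.0, 1.0)
    return lagged_fcst + weight * delta

# 6-h time-lagged intensity forecast (kt); analyzed intensity 70 kt at t = 0
leads = [0, 12, 24, 36, 48]
fcst = [65.0, 72.0, 78.0, 80.0, 82.0]
print(early_model_offset(leads, fcst, 70.0, method="full"))
print(early_model_offset(leads, fcst, 70.0, method="variable", tf=12, tn=36))
```

Note that both methods reproduce the analyzed value at 0 h; they differ only in how far forward the correction is carried.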
Error Distributions

The errors associated with each forecast (Stream 1.5 and operational baselines) were computed relative to the Best Track analysis [2] using the Model Evaluation Tools Tropical Cyclone (MET-TC) package. This software was also used to generate the variable and fixed consensus forecasts with and without the Stream 1.5 candidate and to compute the errors associated with each of these consensus forecasts. The statistics for the individual cases were aggregated using a script in the R statistical language. All aggregations were done for homogeneous samples (i.e., only cases for which both the experimental and operational forecasts were available were included in the aggregation statistics). Given the distribution of errors and absolute errors at a given lead time, several parameters of the distribution were computed: mean, median, quartiles, and outliers. In addition, confidence intervals (CIs) on the mean were computed using a parametric method with a correction for first-order autocorrelation (Chambers et al. 1983; McGill et al. 1978). Only lead times and errors for which the distribution contained at least 11 samples are considered in the statistical significance (SS) discussions, because the error distribution parameters cannot be accurately estimated for smaller sample sizes. High autocorrelation reduces the effective sample size, in which case a sample size of 11 may be insufficient to accurately estimate the variability and confidence; for those samples, the minimum sample size was increased to 20. Confidence intervals are only displayed for samples where these measures could be accurately estimated. The 95% confidence level was selected as the criterion for determining statistical significance.

[2] The Best Track analysis was obtained from the NOAA Web Operations Center (ftp://ftp.nhc.noaa.gov/atcf) on 1 April.
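A common form of the first-order autocorrelation correction replaces the sample size with an effective sample size. The sketch below assumes that standard correction, n_eff = n(1 - r1)/(1 + r1), and a normal-approximation interval; the parametric method actually used may differ in detail.

```python
import numpy as np

def mean_ci_ar1(errors, z=1.96):
    """Approximate 95% CI on the mean with a lag-1 autocorrelation correction.

    The effective sample size n_eff = n * (1 - r1) / (1 + r1) replaces n,
    widening the interval when the errors are serially correlated.
    """
    x = np.asarray(errors, dtype=float)
    n = x.size
    r1 = np.corrcoef(x[:-1], x[1:])[0, 1]   # lag-1 autocorrelation
    r1 = min(max(r1, 0.0), 0.99)            # only shrink n_eff, never inflate it
    n_eff = n * (1.0 - r1) / (1.0 + r1)
    half = z * x.std(ddof=1) / np.sqrt(n_eff)
    return x.mean() - half, x.mean() + half

# Serially correlated errors widen the interval relative to the naive CI
rng = np.random.default_rng(0)
e = np.cumsum(rng.normal(size=200)) * 0.1 + rng.normal(size=200)
lo, hi = mean_ci_ar1(e)
print(lo, hi)
```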
Baseline Comparisons

The errors for some cases were substantial. Such outliers and/or large variability in the error distributions increase the likelihood that the confidence intervals for the errors of two models will overlap even if one model is consistently performing better than the other. By comparing the error differences rather than the errors themselves (i.e., using a paired test rather than a two-sample test), the variability due to difficult forecasts and large errors is removed. Hence, for criteria 1 and 3 of this evaluation (described in the first paragraph of this document), a pairwise technique was used to address the question of whether the differences between the experimental and operational forecasts are statistically significant (SS). For this technique, the absolute error of a given quantity (e.g., intensity error) for a Stream 1.5 forecast or the experimental consensus forecast is subtracted from the same metric for the operational baseline. This subtraction is done separately for each lead time of each case, yielding a distribution of forecast error differences. The parameters of this difference distribution are then computed using the same methodology applied to the error distributions for a single model or model consensus. Knowing whether substantial error differences more often favor one model or scheme over the other is a valuable piece of information when selecting numerical guidance to be included in the operational forecast process. When negative and positive error differences occur at approximately the same frequency, the median of the error difference distribution is insensitive to the size of the differences, whereas the mean error difference is sensitive to both the direction and size of the error differences. Hence, the mean error difference is used in this study to assess SS.
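The pairwise computation can be sketched as follows. This is an illustrative example, not the team's R script; by the sign convention above, positive values favor the Stream 1.5 candidate.

```python
import numpy as np

def pairwise_difference(baseline_abs_err, candidate_abs_err):
    """Paired comparison of absolute errors, one pair per (case, lead time).

    The candidate's absolute error is subtracted from the baseline's, so a
    positive mean difference means the candidate's errors are smaller on
    average. Also returns the percent improvement relative to the baseline.
    """
    base = np.asarray(baseline_abs_err, dtype=float)
    cand = np.asarray(candidate_abs_err, dtype=float)
    d = base - cand
    mean_diff = d.mean()
    pct_improvement = 100.0 * mean_diff / base.mean()
    return mean_diff, pct_improvement

# Track errors (nm) for five homogeneous cases at one lead time; note the
# paired difference is unaffected by the shared difficulty of the third case
base = [60.0, 80.0, 300.0, 45.0, 90.0]   # operational baseline
cand = [55.0, 70.0, 280.0, 50.0, 85.0]   # Stream 1.5 candidate
print(pairwise_difference(base, cand))
```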
A SS difference between the forecast verification metrics of the Stream 1.5 candidate (or the consensus with the Stream 1.5 candidate) and the operational baseline (or operational consensus) was noted when it was possible to ascertain with 95% confidence that the mean of the pairwise differences was not equal to zero. The pairwise method enables the identification of subtle differences between two error distributions that may go undetected when the mean absolute error (MAE) or root mean square error (RMSE) of each distribution is computed and the overlap of the CIs for the mean is used to ascertain differences (e.g., Lanzante 2005; Snedecor and Cochran 1980). Positive (negative) mean error differences and percent improvement values indicate the errors associated with the Stream 1.5 candidate are smaller (larger) on average than those of the operational baseline.

Comparison with Top-flight Models as a Group

To assess the Stream 1.5 candidate's performance relative to last year's top-flight models as a group, rankings of the Stream 1.5 candidate's performance with respect to the top-flight operational models were determined for each case and lead time within a homogeneous sample, where a ranking of one corresponds to the Stream 1.5 candidate having the smallest error and a ranking of four (or five for some comparisons) corresponds to the Stream 1.5 candidate having the largest error. In some cases, the Stream 1.5 error can be the same as that of one of the top-flight models, resulting in ties in the rankings. In the case of ties, the rankings are randomly assigned to each model. This approach to handling ties allows one to determine point-wise confidence intervals around the proportion of cases for each ranking. Once again, 95% confidence intervals were selected for this evaluation. The relative frequency of each ranking provides useful information about how the performance of the Stream 1.5 model relates to the performance of the top-flight operational models.
When the frequency of each ranking (first through fourth, or fifth for some comparisons) is approximately 25% of the cases for a comparison of four models and approximately 20% for a comparison of five models, then the candidate model errors are
indistinguishable from the errors of the top-flight models. A high frequency of rank one indicates the Stream 1.5 candidate is matching or outperforming the top-flight models on a regular basis, whereas a high frequency of rank four (or five for comparisons of five models) indicates the Stream 1.5 candidate is not improving upon the operational guidance. Frequencies of the error rankings were also computed in which the lowest ranking was awarded to the Stream 1.5 candidate in the event of a tie; this additional analysis was done to provide a common context with the approach used for the 2011 Stream 1.5 evaluation and to provide information regarding the frequency of such ties.

Methods for Displaying Results

Mean errors with confidence intervals

Graphs displaying the mean errors of an experimental scheme and the corresponding operational baseline as a function of lead time, including 95% confidence intervals for each mean error, were used to provide a quick visual summary of the size of the mean errors, the relationship between the mean errors for the experimental scheme and the corresponding baseline, trends with lead time, and a measure of the variability in the error distribution. For these graphs, black symbols and lines always correspond to the properties of the operational baseline errors, and red always corresponds to the experimental scheme.

Frequency of Superior Performance

To provide a quick summary of whether one model consistently outperforms the other for cases with error differences exceeding the precision of the input data, the number of cases for which the error differences for intensity (track) equaled or exceeded 1 kt (6 nm) was tallied for each lead time, keeping track of whether the error difference favored the experimental scheme or the operational baseline.
This information is displayed in terms of the percent of cases as a function of lead time, where the black line corresponds to the percent of cases favoring the operational baseline and the red line corresponds to the percent of cases favoring the experimental scheme. Confidence intervals for these plots are calculated using the standard interval for proportions. This analysis categorizes the errors; thus, the size of each error has no effect on the results once the category is determined. Furthermore, by examining the frequency rather than the magnitude of the errors, different information can be obtained. Forecasts may have similar average errors even though one forecast is frequently better than the other. Conversely, forecasts may have very different average errors even though each is best on a similar number of cases. Typically, the frequency analysis confirms the conclusions of the error magnitude analysis. When it does not, it is important to understand the forecast behavior. In this way, the frequency analysis complements the pairwise analysis and provides additional information.

Boxplots

Boxplots are used to display the various attributes of the error distributions in a concise format. Figure 1 illustrates the basic properties of the boxplots used in the Stream 1.5 candidate reports. The mean of the distribution is depicted as a star and the median as a bold horizontal bar. The 95% CIs for the median are shown as the waist, or notch, of the boxplot. Note that the notches, or CIs, generated by the R boxplot function do not include a correction for first-order autocorrelation. The outliers shown in this type of display are useful for obtaining information about the size and frequency of substantial errors or error differences.
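The standard interval for proportions used for the frequency-of-superior-performance plots can be computed as follows. This sketch assumes the normal-approximation (Wald) form of the interval.

```python
import math

def proportion_ci(k, n, z=1.96):
    """Standard (Wald) 95% interval for a proportion k/n.

    Uses the normal approximation z * sqrt(p(1-p)/n) for the half-width,
    clipped to the valid [0, 1] range.
    """
    p = k / n
    half = z * math.sqrt(p * (1.0 - p) / n)
    return max(p - half, 0.0), min(p + half, 1.0)

# Example: 34 of 60 cases favor the experimental scheme at a given lead time
lo, hi = proportion_ci(34, 60)
print(round(lo, 3), round(hi, 3))
```

Because the interval here contains 50%, neither scheme could be declared more frequently superior at this lead time.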
Figure 1: Description of the boxplot properties.

Summary tables

Tables are used to provide a concise summary of the pairwise difference analysis. Each cell in the table contains three numbers corresponding to the mean error difference (top), the percent improvement or degradation (middle), and the probability of having an error difference closer to zero than the mean error difference (bottom). A blank probability entry means the effective sample size was such that no meaningful probability statistic could be computed. Color shading is used to highlight the mean error differences that are SS, where green is used for mean error differences favoring the experimental scheme and red is used for mean error differences favoring the operational baseline. The darkness of the shading highlights the size of the percent improvement or degradation. For track, the shading thresholds are based on the Stream 1.5 selection criteria: light shading corresponds to mean track error differences that do not meet the selection criteria (< 4%), medium shading indicates mean track error differences that meet the criteria (4-5%), and dark shading indicates mean track error differences that go well beyond the criteria (>= 6%). In contrast, the selection criteria for intensity guidance do not put forth a minimum percent improvement; hence, the shading thresholds for intensity were simply selected to provide a quick visualization of basic percent-change ranges: light shading indicates mean intensity error differences with percent changes less than 5%, medium shading percent changes between 5 and 9%, and dark shading percent changes equaling or exceeding 10%. Colored fonts are used to distinguish which scheme has smaller errors for those mean error differences that do not meet the SS criteria. Figure 2 illustrates the basic properties of these summary tables.
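The track shading rules can be expressed as a small decision function. This is a hypothetical sketch of the thresholds described above; in particular, the treatment of percent changes between 5% and 6% is an assumption, since the text quotes the medium band as 4-5% and the dark band as 6% and above.

```python
def track_shading(mean_diff, pct_change, significant):
    """Return the (color, shade) pair for one summary-table cell, or None.

    Only SS mean error differences are shaded: green favors the experimental
    scheme (positive difference), red favors the operational baseline.
    Shade darkness follows the Stream 1.5 track selection criteria.
    """
    if not significant:
        return None                      # unshaded; colored font only
    color = "green" if mean_diff > 0 else "red"
    p = abs(pct_change)
    if p < 4.0:
        shade = "light"                  # does not meet the selection criteria
    elif p < 6.0:
        shade = "medium"                 # meets the criteria (assumed 4 to <6%)
    else:
        shade = "dark"                   # well beyond the criteria
    return color, shade

print(track_shading(5.2, 4.5, True))     # ('green', 'medium')
```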
Figure 2: Description of entries in summary tables and color schemes.

Rankings line plots

By examining the error rankings, model performance can be gauged based on the frequency of superior (or inferior) performance. This analysis complements the other assessments based on distributions, means, and SS error differences. Typically, the results from a ranking frequency analysis confirm the results from the other types of analyses. However, occasionally model performance shows little difference from the top-flight models in the SS tables but large differences in rank frequency. Thus, it is important to examine both. Ranks one (smallest error) through four, or five for some model comparisons (largest error), are assigned to all model errors, with ties randomly assigned. The frequency of each rank for the candidate model is displayed in a line plot by lead time. An example of this type of plot is shown in Fig. 3. Rankings one through four or five (1-4 or 1-5) are color coded and labeled with the appropriate ranking number. This display is similar to presenting a rank histogram for each lead time, but condensed into one figure. The dashed lines, color coded to match the appropriate rank, show the point-wise 95% confidence intervals around the proportion of cases for each ranking. The 25% (20%) frequency is highlighted by a solid grey line.
When a ranking frequency line lies above or below this 25% (20%) line with its CIs on the same side of the line (e.g., rank 1 and its CIs in the example lie above the 25% line for longer lead times), the results suggest the performance of the candidate model can be deemed statistically distinguishable, for better or worse, from that of the top-flight models. If all confidence intervals include the 25% (20%) line, the performance of the candidate model cannot be deemed statistically distinguishable from that of the top-flight models (e.g., the rankings in the example at 48 h). The black numbers show the frequency of the first and fourth (fifth) rankings when the candidate model is assigned the better (lower) ranking for all ties. When there are many ties, the frequencies will differ considerably (e.g., rank 4 in the example for short lead times); when there are only a few ties, the frequencies of the rankings will be quite similar.
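The ranking procedure with random tie-breaking can be sketched as follows. This is illustrative, not the evaluation team's scripts; the function name and interface are hypothetical.

```python
import numpy as np

def rank_candidate(candidate_err, topflight_errs, rng):
    """Rank (1 = smallest error) of the candidate among all compared models.

    Sorts primarily by error; a random secondary key breaks exact ties, as in
    the evaluation's random tie assignment.
    """
    errs = np.array([candidate_err] + list(topflight_errs))
    # lexsort uses the last key as the primary sort key
    order = np.lexsort((rng.random(errs.size), errs))
    return int(np.where(order == 0)[0][0]) + 1  # candidate is element 0

rng = np.random.default_rng(42)
# No ties: candidate track error 50 nm vs. three top-flight models
print(rank_candidate(50.0, [55.0, 62.0, 90.0], rng))   # rank 1
# Tie at 62 nm: the candidate receives rank 2 or 3 at random
print(rank_candidate(62.0, [55.0, 62.0, 90.0], rng))
```

Repeating the tied comparison over many cases yields the roughly even split of ranks that makes the point-wise confidence intervals on the rank proportions well defined.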
Figure 3: Sample error ranking line plot.

References

Chambers, J. M., W. S. Cleveland, B. Kleiner, and P. A. Tukey, 1983: Graphical Methods for Data Analysis. Wadsworth & Brooks/Cole Publishing Company.

Lanzante, J. R., 2005: A cautionary note on the use of error bars. J. Climate, 18.

McGill, R., J. W. Tukey, and W. A. Larsen, 1978: Variations of box plots. The American Statistician, 32.

Snedecor, G. W., and W. G. Cochran, 1980: Statistical Methods. Iowa State University Press.
Table 1: Descriptions of the operational models, including their ATCF IDs, used as baselines or components of baselines for the 2013 Stream 1.5 evaluation.

ATCF ID | Type | Description
EMXI | Global - interpolated dynamical | Previous cycle ECMWF global model, adjusted using full offset (ECMWF is only available every 12 hours, so the interpolated guidance used for evaluation is a combination of 6-h and 12-h time-lagged forecasts)
GFSI | Global - interpolated dynamical | Previous cycle Global Forecast System (GFS) model, adjusted using full offset
EGRI | Global - interpolated dynamical | Previous cycle United Kingdom Met Office (UKMET) model, automated tracker with subjective quality control applied to the tracker, adjusted using full offset (UKMET is only available every 12 hours, so the interpolated guidance used for evaluation is a combination of 6-h and 12-h time-lagged forecasts)
GHMI | Regional - interpolated dynamical | Previous cycle Geophysical Fluid Dynamics Laboratory (GFDL) model, adjusted using a variable intensity offset correction that is a function of forecast time
LGEM | Statistical-dynamical | Logistic Growth Equation Model
HWFI | Regional - interpolated dynamical | Previous cycle Hurricane Weather Research and Forecasting (HWRF) model, adjusted using full offset
DSHP | Statistical-dynamical | SHIPS with inland decay
ICON | Fixed consensus | Average of DSHP/LGEM/GHMI/HWFI; all members must be present
TVCA | Variable consensus (Atlantic) | Average of EMXI/GFSI/EGRI/GHMI/HWFI; at least two members must be present
TVCE | Variable consensus (eastern North Pacific) | Average of EMXI/GFSI/EGRI/GHMI/HWFI; at least two members must be present
Satellite Applications to Hurricane Intensity Forecasting Christopher J. Slocum - CSU Kate D. Musgrave, Louie D. Grasso, and Galina Chirokova - CIRA/CSU Mark DeMaria and John Knaff - NOAA/NESDIS Center
More informationNHC Activities, Plans, and Needs
NHC Activities, Plans, and Needs HFIP Diagnostics Workshop August 10, 2012 NHC Team: David Zelinsky, James Franklin, Wallace Hogsett, Ed Rappaport, Richard Pasch NHC Activities Activities where NHC is
More informationEnsemble Prediction Systems
Ensemble Prediction Systems Eric Blake National Hurricane Center 7 March 2017 Acknowledgements to Michael Brennan 1 Question 1 What are some current advantages of using single-model ensembles? A. Estimates
More informationTropical Cyclone Track Prediction
Tropical Cyclone Track Prediction Richard J. Pasch and David A. Zelinsky National Hurricane Center 2017 RA-IV Workshop on Hurricane Forecasting and Warning March 7, 2017 Outline Basic Dynamics Guidance
More informationCOLORADO STATE UNIVERSITY FORECAST OF ATLANTIC HURRICANE ACTIVITY FROM SEPTEMBER 25 OCTOBER 8, 2014
COLORADO STATE UNIVERSITY FORECAST OF ATLANTIC HURRICANE ACTIVITY FROM SEPTEMBER 25 OCTOBER 8, 2014 We expect that the next two weeks will be characterized by below-average amounts (
More informationTHE MODEL EVALUATION TOOLS (MET): COMMUNITY TOOLS FOR FORECAST EVALUATION
THE MODEL EVALUATION TOOLS (MET): COMMUNITY TOOLS FOR FORECAST EVALUATION Tressa L. Fowler John Halley Gotway, Barbara Brown, Randy Bullock, Paul Oldenburg, Anne Holmes, Eric Gilleland, and Tara Jensen
More informationTropical Cyclone Formation/Structure/Motion Studies
Tropical Cyclone Formation/Structure/Motion Studies Patrick A. Harr Department of Meteorology Naval Postgraduate School Monterey, CA 93943-5114 phone: (831) 656-3787 fax: (831) 656-3061 email: paharr@nps.edu
More informationPRMS WHITE PAPER 2014 NORTH ATLANTIC HURRICANE SEASON OUTLOOK. June RMS Event Response
PRMS WHITE PAPER 2014 NORTH ATLANTIC HURRICANE SEASON OUTLOOK June 2014 - RMS Event Response 2014 SEASON OUTLOOK The 2013 North Atlantic hurricane season saw the fewest hurricanes in the Atlantic Basin
More information1.2 DEVELOPMENT OF THE NWS PROBABILISTIC EXTRA-TROPICAL STORM SURGE MODEL AND POST PROCESSING METHODOLOGY
1.2 DEVELOPMENT OF THE NWS PROBABILISTIC EXTRA-TROPICAL STORM SURGE MODEL AND POST PROCESSING METHODOLOGY Huiqing Liu 1 and Arthur Taylor 2* 1. Ace Info Solutions, Reston, VA 2. NOAA / NWS / Science and
More informationHMON (HNMMB): Development of a new Hurricane model for NWS/NCEP operations
1 HMON (HNMMB): Development of a new Hurricane model for NWS/NCEP operations Avichal Mehra, EMC Hurricane and Mesoscale Teams Environmental Modeling Center NOAA / NWS / NCEP HMON: A New Operational Hurricane
More informationEarly Guidance for Tropical Weather Systems
Early Guidance for Tropical Weather Systems Understanding What s Out There Jennifer McNatt National Weather Service Material from: Dan Brown, Mike Brennan and John Cangialosi, National Hurricane Center
More information2017 Year in review: JTWC TC Activity, Forecast Challenges, and Developmental Priorities
2017 Year in review: JTWC TC Activity, Forecast Challenges, and Developmental Priorities Mean Annual TC Activity????????? Hurricane Forecast Improvement Program Annual Review 8-9 NOV 2017 Brian Strahl,
More informationJTWC use of ensemble products. Matthew Kucas Joint Typhoon Warning Center Pearl Harbor, HI
Matthew Kucas Joint Typhoon Warning Center Pearl Harbor, HI Overview Tropical cyclone track forecasting Deterministic model consensus and single-model ensembles as track forecasting aids Conveying uncertainty
More informationNWS Operational Marine and Ocean Forecasting. Overview. Ming Ji. Ocean Prediction Center National Weather Service/NCEP. CIOSS/CoRP
NWS Operational Marine and Ocean Forecasting Overview Ming Ji Ocean Prediction Center National Weather Service/NCEP CIOSS/CoRP CoRP Symposium Corvallis, OR Aug. 12-13, 13, 2008 Titanic Telegram Marine
More informationP4.1 CONSENSUS ESTIMATES OF TROPICAL CYCLONE INTENSITY USING MULTISPECTRAL (IR AND MW) SATELLITE OBSERVATIONS
P4.1 CONSENSUS ESTIMATES OF TROPICAL CYCLONE INTENSITY USING MULTISPECTRAL (IR AND MW) SATELLITE OBSERVATIONS Christopher Velden* Derrick C. Herndon and James Kossin University of Wisconsin Cooperative
More informationCOLORADO STATE UNIVERSITY FORECAST OF ATLANTIC HURRICANE ACTIVITY FROM SEPTEMBER 27-OCTOBER 10, 2018
COLORADO STATE UNIVERSITY FORECAST OF ATLANTIC HURRICANE ACTIVITY FROM SEPTEMBER 27-OCTOBER 10, 2018 We expect that the next two weeks will be characterized by above-normal amounts of hurricane activity,
More informationBasic Verification Concepts
Basic Verification Concepts Barbara Brown National Center for Atmospheric Research Boulder Colorado USA bgb@ucar.edu Basic concepts - outline What is verification? Why verify? Identifying verification
More informationExploratory Data Analysis
CS448B :: 30 Sep 2010 Exploratory Data Analysis Last Time: Visualization Re-Design Jeffrey Heer Stanford University In-Class Design Exercise Mackinlay s Ranking Task: Analyze and Re-design visualization
More informationFig P3. *1mm/day = 31mm accumulation in May = 92mm accumulation in May Jul
Met Office 3 month Outlook Period: May July 2014 Issue date: 24.04.14 Fig P1 3 month UK outlook for precipitation in the context of the observed annual cycle The forecast presented here is for May and
More informationRecent advances in Tropical Cyclone prediction using ensembles
Recent advances in Tropical Cyclone prediction using ensembles Richard Swinbank, with thanks to Many colleagues in Met Office, GIFS-TIGGE WG & others HC-35 meeting, Curacao, April 2013 Recent advances
More informationANOVA: Analysis of Variation
ANOVA: Analysis of Variation The basic ANOVA situation Two variables: 1 Categorical, 1 Quantitative Main Question: Do the (means of) the quantitative variables depend on which group (given by categorical
More informationSIXTH INTERNATIONAL WORKSHOP on TROPICAL CYCLONES. Working Group: Phillipe Caroff, Jeff Callaghan, James Franklin, Mark DeMaria
WMO/CAS/WWW Topic 0.1: Track forecasts SIXTH INTERNATIONAL WORKSHOP on TROPICAL CYCLONES Rapporteur: E-mail: Lixion A. Avila NOAA/National Hurricane Center 11691 SW 17th Street Miami, FL 33165-2149, USA
More information11A.1 PREDICTION OF TROPICAL CYCLONE TRACK FORECAST ERROR FOR HURRICANES KATRINA, RITA, AND WILMA
11A.1 PREDICTION OF TROPICAL CYCLONE TRACK FORECAST ERROR FOR HURRICANES KATRINA, RITA, AND WILMA James S. Goerss* Naval Research Laboratory, Monterey, California 1. INTRODUCTION Consensus tropical cyclone
More informationThe increasing intensity of the strongest tropical cyclones
The increasing intensity of the strongest tropical cyclones James B. Elsner Department of Geography, Florida State University Tallahassee, Florida Corresponding author address: Dept. of Geography, The
More informationVerification of Tropical Storm Track Prediction in Southeast Asia Using GFS Model
1 Verification of Tropical Storm Track Prediction in Southeast Asia Using GFS Model Honors Thesis Presented to the College of Agriculture and Life Sciences Department of Earth and Atmospheric Sciences
More informationOutlook 2008 Atlantic Hurricane Season. Kevin Lipton, Ingrid Amberger National Weather Service Albany, New York
Outlook 2008 Atlantic Hurricane Season Kevin Lipton, Ingrid Amberger National Weather Service Albany, New York Summary 2007 Hurricane Season Two hurricanes made landfall in the Atlantic Basin at category-5
More informationTropical Cyclone Forecast Assessment
Tropical Cyclone Forecast Assessment Zachary Weller Colorado State University wellerz@stat.colostate.edu Joint work with Dr. Jennifer Hoeting (CSU) and Dr. Ligia Bernardet (NOAA) June 24, 2014 Zachary
More informationImproving Tropical Cyclone Guidance Tools by Accounting for Variations in Size
Improving Tropical Cyclone Guidance Tools by Accounting for Variations in Size John A. Knaff 1, Mark DeMaria 1, Scott P. Longmore 2 and Robert T. DeMaria 2 1 NOAA Center for Satellite Applications and
More informationNOTES AND CORRESPONDENCE. Statistical Postprocessing of NOGAPS Tropical Cyclone Track Forecasts
1912 MONTHLY WEATHER REVIEW VOLUME 127 NOTES AND CORRESPONDENCE Statistical Postprocessing of NOGAPS Tropical Cyclone Track Forecasts RUSSELL L. ELSBERRY, MARK A. BOOTHE, GREG A. ULSES, AND PATRICK A.
More informationTHE IMPACT OF SATELLITE-DERIVED WINDS ON GFDL HURRICANE MODEL FORECASTS
THE IMPACT OF SATELLITE-DERIVED WINDS ON GFDL HURRICANE MODEL FORECASTS Brian J. Soden 1 and Christopher S. Velden 2 1) Geophysical Fluid Dynamics Laboratory National Oceanic and Atmospheric Administration
More informationFlorida Commission on Hurricane Loss Projection Methodology Flood Standards Development Committee. October 30, 2014
Florida Commission on Hurricane Loss Projection Methodology Flood Standards Development Committee October 30, 2014 KatRisk LLC 752 Gilman St. Berkeley, CA 94710 510-984-0056 www.katrisk.com About KatRisk
More informationAnuMS 2018 Atlantic Hurricane Season Forecast
AnuMS 2018 Atlantic Hurricane Season Forecast : June 11, 2018 by Dale C. S. Destin (follow @anumetservice) Director (Ag), Antigua and Barbuda Meteorological Service (ABMS) The *AnuMS (Antigua Met Service)
More informationHFIP Diagnostics Workshop Summary and Recommendations
HFIP Diagnostics Workshop Summary and Recommendations Agenda Summary Operational input from NHC Atmospheric diagnostics EMC, NESDIS, CSU, GFDL, ESRL New verification techniques JNT/NCAR Land surface, ocean,
More informationThe 2009 Hurricane Season Overview
The 2009 Hurricane Season Overview Jae-Kyung Schemm Gerry Bell Climate Prediction Center NOAA/ NWS/ NCEP 1 Overview outline 1. Current status for the Atlantic, Eastern Pacific and Western Pacific basins
More informationTopic 3.2: Tropical Cyclone Variability on Seasonal Time Scales (Observations and Forecasting)
Topic 3.2: Tropical Cyclone Variability on Seasonal Time Scales (Observations and Forecasting) Phil Klotzbach 7 th International Workshop on Tropical Cyclones November 18, 2010 Working Group: Maritza Ballester
More informationAnuMS 2018 Atlantic Hurricane Season Forecast
AnuMS 2018 Atlantic Hurricane Season Forecast Issued: May 10, 2018 by Dale C. S. Destin (follow @anumetservice) Director (Ag), Antigua and Barbuda Meteorological Service (ABMS) The *AnuMS (Antigua Met
More informationDevelopments towards multi-model based forecast product generation
Developments towards multi-model based forecast product generation Ervin Zsótér Methodology and Forecasting Section Hungarian Meteorological Service Introduction to the currently operational forecast production
More informationMotivation & Goal. We investigate a way to generate PDFs from a single deterministic run
Motivation & Goal Numerical weather prediction is limited by errors in initial conditions, model imperfections, and nonlinearity. Ensembles of an NWP model provide forecast probability density functions
More informationNational Hurricane Center Products. Jack Beven National Hurricane Center
National Hurricane Center Products Jack Beven National Hurricane Center Florida Governor s Hurricane Conference 11 May 2014 NHC Tropical Cyclone Products NHC provides the big picture that complements and
More informationCOLORADO STATE UNIVERSITY FORECAST OF ATLANTIC HURRICANE ACTIVITY FROM AUGUST 16 29, 2018
COLORADO STATE UNIVERSITY FORECAST OF ATLANTIC HURRICANE ACTIVITY FROM AUGUST 16 29, 2018 We expect that the next two weeks will be characterized by below-normal amounts of hurricane activity. (as of 16
More informationAnuMS 2018 Atlantic Hurricane Season Forecast
AnuMS 2018 Atlantic Hurricane Season Forecast Issued: April 10, 2018 by Dale C. S. Destin (follow @anumetservice) Director (Ag), Antigua and Barbuda Meteorological Service (ABMS) The *AnuMS (Antigua Met
More informationActive Weather Threat Halloween Week Nor easter October 28 th 31 st 2012
Active Weather Threat Halloween Week Nor easter October 28 th 31 st 2012 Prepared 1130 AM EDT Wednesday, October 24, 2012 Gary Szatkowski NOAA s NJ Forecast Office Weather.gov/phi Purpose of Briefing Briefing
More informationTSR TROPICAL STORM TRACKER LAUNCH
TSR TROPICAL STORM TRACKER LAUNCH The Old Library, Lloyd s s of London Friday 30th May 2003 10.30am - 11.30am Tropical Storm Risk (TSR) Founded in 2000, Tropical Storm Risk (TSR) offers a leading resource
More informationExploiting ensemble members: forecaster-driven EPS applications at the met office
Exploiting ensemble members: forecaster-driven EPS applications at the met office Introduction Ensemble Prediction Systems (EPSs) have assumed a central role in the forecast process in recent years. The
More informationAn Objective Algorithm for the Identification of Convective Tropical Cloud Clusters in Geostationary Infrared Imagery. Why?
An Objective Algorithm for the Identification of Convective Tropical Cloud Clusters in Geostationary Infrared Imagery By Chip Helms Faculty Advisor: Dr. Chris Hennon Why? Create a database for the tropical
More informationTropical Cyclone Report Hurricane Rafael (AL172012) October Lixion A. Avila National Hurricane Center 14 January 2013
Tropical Cyclone Report Hurricane Rafael (AL172012) 12-17 October 2012 Lixion A. Avila National Hurricane Center 14 January 2013 Rafael moved across the northern Leeward Islands as a tropical storm and
More informationApplications Development and Diagnostics (ADD) Team Summary Part 2
Applications Development and Diagnostics (ADD) Team Summary Part 2 Mark DeMaria, NOAA/NESDIS HFIP Annual Review Meeting Nov. 8-9, 2011, Miami, FL Input from: Chris Davis, James Doyle, Thomas Galarneau,
More informationDevelopment of Operational Storm Surge Guidance to Support Total Water Predictions
Development of Operational Storm Surge Guidance to Support Total Water Predictions J. Feyen 1, S. Vinogradov 1,2, T. Asher 3, J. Halgren 4, Y. Funakoshi 1,5 1. NOAA/NOS//Development Laboratory 2. ERT,
More informationScientific Documentation for the Community release of the GFDL Vortex Tracker
Scientific Documentation for the Community release of the GFDL Vortex Tracker May 2016 Version 3.7b The Developmental Testbed Center Timothy Marchok NOAA/GFDL Please send questions to: hwrf-help@ucar.edu
More informationUpgrade of JMA s Typhoon Ensemble Prediction System
Upgrade of JMA s Typhoon Ensemble Prediction System Masayuki Kyouda Numerical Prediction Division, Japan Meteorological Agency and Masakazu Higaki Office of Marine Prediction, Japan Meteorological Agency
More informationAN ANALYSIS OF TROPICAL CYCLONE INTENSITY ESTIMATES OF THE ADVANCED MICROWAVE SOUNDING UNIT (AMSU),
AN ANALYSIS OF TROPICAL CYCLONE INTENSITY ESTIMATES OF THE ADVANCED MICROWAVE SOUNDING UNIT (AMSU), 2005-2008 Corey Walton University of Miami, Coral Gables, FL INTRODUCTION Analysis and forecasts of tropical
More informationThe Atlantic Hurricane Database Reanalysis Project
The Atlantic Hurricane Database Reanalysis Project 9 November, 2015 14 th International Workshop on Wave Hindcasting and Forecasting Chris Landsea, National Hurricane Center, Miami, USA Chris.Landsea@noaa.gov
More informationAnuMS 2018 Atlantic Hurricane Season Forecast
AnuMS 2018 Atlantic Hurricane Season : August 12, 2018 by Dale C. S. Destin (follow @anumetservice) Director (Ag), Antigua and Barbuda Meteorological Service (ABMS) The *AnuMS (Antigua Met Service) is
More informationTwenty-five years of Atlantic basin seasonal hurricane forecasts ( )
Click Here for Full Article GEOPHYSICAL RESEARCH LETTERS, VOL. 36, L09711, doi:10.1029/2009gl037580, 2009 Twenty-five years of Atlantic basin seasonal hurricane forecasts (1984 2008) Philip J. Klotzbach
More informationTropical Cyclone Track Prediction
Tropical Cyclone Track Prediction Richard J. Pasch and David A. Zelinsky National Hurricane Center 2016 RA-IV Workshop on Hurricane Forecasting and Warning March 8, 2016 Outline Basic Dynamics Guidance
More informationThe role of testbeds in NOAA for transitioning NWP research to operations
ECMWF Workshop on Operational Systems November 18, 2013 The role of testbeds in NOAA for transitioning NWP research to operations Ligia Bernardet 1* and Zoltan Toth 1 1 NOAA ESRL Global Systems Division,
More informationSUPPLEMENTARY INFORMATION
SUPPLEMENTARY INFORMATION DOI: 10.1038/NCLIMATE2336 Stormiest winter on record for Ireland and UK Here we provide further information on the methodological procedures behind our correspondence. First,
More informationUsing SPSS for One Way Analysis of Variance
Using SPSS for One Way Analysis of Variance This tutorial will show you how to use SPSS version 12 to perform a one-way, between- subjects analysis of variance and related post-hoc tests. This tutorial
More information