(Regional) Climate Model Validation



(Regional) Climate Model Validation Francis W. Zwiers Canadian Centre for Climate Modelling and Analysis Atmospheric Environment Service Victoria, BC

Outline - three questions What sophisticated validation methods can be employed in evaluating regional climate? How do we quantify the value-added (or lost) due to downscaling when compared with raw GCM output? Can (dynamical) downscaling improve the simulations of extreme events over GCMs?

Question 1 What sophisticated validation methods can be employed in evaluating regional climate? Are objective skill measures, mostly developed for large-scale diagnostics, adequate for regional applications? Implicit in these questions is the suspicion that current large-scale techniques are not up to the task: they are data hungry and perhaps not powerful enough, and it is not clear they can tell us how models differ from reality (or from each other).

Question 1 What are the objectives of model assessment? Do we reject a model if it is significantly different from observations? Would we have any models left? Can we set standards for acceptable model performance? Standards are application specific; some users will have very stringent standards. It is useful to consider a biomedical science analogy: is a rabbit a suitable animal model? We need to accept a rabbit for what it is.

Question 1 What can statistics do? Identify and assess differences (model vs obs, model vs model) in the context of uncontrolled variability, relative to a specified standard (the usual standard: no difference is acceptable!). What are the tools? Local assessment (at a grid point or station), multivariate analysis, assessment of spatial variability, assessment of temporal variability, pattern correlation, and evaluation of hindcast and forecast skill.

Tools Local assessments: paired difference tests may be appropriate, e.g., to compare different kinds of nesting. What does a significant difference mean for precipitation, for large-scale fields (e.g., MSLP), or for surface temperature? Multivariate analysis requires estimates of covariance structure; spatial modelling may be necessary, but it is not easy.
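A local paired-difference test can be sketched briefly. The example below is synthetic (the series, the bias, and the seed are all invented for illustration); it simply applies a paired t-test to year-by-year model-minus-observation differences at a single grid point, in the spirit of the local assessments described above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 30-year seasonal-mean temperature series at one grid point:
# observations and a model simulation paired by year.
obs = 15.0 + rng.normal(0.0, 1.0, size=30)
model = obs + 0.5 + rng.normal(0.0, 0.5, size=30)  # invented warm bias of ~0.5 K

# Paired t-test on the year-by-year differences.
t_stat, p_value = stats.ttest_rel(model, obs)
mean_bias = np.mean(model - obs)
print(f"mean bias = {mean_bias:.2f} K, p = {p_value:.4f}")
```

Note that the paired test assumes independent differences; serial correlation between years reduces the effective sample size and makes nominal p-values overconfident, which is part of what "what does a significant difference mean?" is asking.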

Tools Assess spatial variability/covariability: EOF, EEOF, REOF, MCA (SVD), etc. These have the same problems as at large scale: simple structure is not necessarily physical structure; data sets are too small; inference tools are very limited (no rational selection rules); and EOFs adapt to the data from which they are estimated, making it difficult to compare observed with model fields. Need to do common EOF analyses.

EOFs adapt... Eigenspectrum of annual mean surface temperature from the CCCma CGCM1 control run (40S to 90N): eigenspectrum from the last 50-year chunk, and mean pseudo-eigenspectrum from the 19 remaining 50-year chunks. Graphic by Slava Kharin

Figure Caption From Presentation Notes Figure 1: This diagram illustrates the adaptation of EOFs to the sample from which they are estimated. A 1000-year control run performed with the first generation CCCma coupled model (CGCM1) was divided into 50-year chunks. EOFs and eigenvalues of annual mean surface temperature were estimated from the last 50-year chunk. The domain of analysis covers 90N to 40S, roughly mimicking the coverage of observations during the second half of the 20th century. The black bars illustrate the resulting eigenvalue distribution. The data in each of the other chunks were projected onto the EOFs obtained from the last chunk, and the variance of each of the resulting pseudo-PCs was computed for each chunk. For each EOF, the 19 pseudo-PC variances obtained in this way were averaged; the result is illustrated by the red bars. The message is that EOFs estimated from one sample explain less variance in other, independent samples from the same process. This occurs because the EOFs are tuned to best fit the specific realization of variability contained in the sample from which they were obtained. Consequently, it is not a good idea to compare samples from two climates (say, observed and climate-model simulated) by fitting EOFs to one sample (the observations) and then comparing the variance explained by each of the EOFs in that sample with the variance explained in the second (model-simulated) sample. There is a real danger that such a comparison will erroneously conclude that the second sample contains less variance in the direction of each of the EOFs, and therefore that the second climate is inferior.
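The adaptation effect can be reproduced on synthetic data. The sketch below (dimensions, covariance, and seed are all invented stand-ins for the 50-year chunks of the control run) fits EOFs on one chunk and projects an independent chunk onto them, giving the "pseudo-PC" variance fractions described in the caption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "control run": 200 time steps x 20 grid points of spatially
# correlated noise, split into two 100-step chunks (stand-ins for 50-year chunks).
cov = 0.6 ** np.abs(np.subtract.outer(np.arange(20), np.arange(20)))
data = rng.multivariate_normal(np.zeros(20), cov, size=200)
chunk_a, chunk_b = data[:100], data[100:]

# EOFs (right singular vectors) of the anomalies in chunk A.
anom_a = chunk_a - chunk_a.mean(axis=0)
_, s, vt = np.linalg.svd(anom_a, full_matrices=False)
var_explained_a = s[0] ** 2 / (s ** 2).sum()

# Project chunk B onto chunk-A EOF 1: the "pseudo-PC" variance fraction.
anom_b = chunk_b - chunk_b.mean(axis=0)
pc_b = anom_b @ vt[0]
var_explained_b = pc_b.var() / anom_b.var(axis=0).sum()

# The chunk-A fraction typically exceeds the chunk-B fraction, because the
# EOF is tuned to the sampling variability of chunk A.
print(var_explained_a, var_explained_b)
```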

Barnett (1999) variability of annual mean temperature common EOF analysis of first 100 years of 11 CMIP1 runs caution - drift not removed! Observations and 95% confidence interval

Figure Caption From Presentation Notes Figure 2: The problem described in Figure 1 is avoided by performing a common EOF analysis. Such an analysis is performed by removing the sample means from the individual samples to be intercompared, combining the resulting anomalies into a large super sample, finding the EOFs of this super sample, and then calculating the variance explained by each of the EOFs in the individual samples. This diagram illustrates the result of such an analysis using annual mean surface temperature simulated by 11 coupled models participating in CMIP1. Diagram from Barnett (1999, J. Climate).

Tools Assess temporal variability/covariability: eigen-analyses (SSA, MSSA, EEOF, CEOF, POP, etc.); time domain modelling (Box-Jenkins, Markov processes, e.g., for precipitation occurrence); classification / weather typing; frequency domain analysis. We can ask whether the model produces the observed variability on some time/space scale.
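As a minimal example of the time-domain modelling mentioned above, a first-order two-state Markov chain for precipitation occurrence is fully specified by two transition probabilities, which can be fitted by conditional frequencies. The record below is synthetic (the probabilities and seed are invented) and is generated from a known chain so the fit can be checked.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical daily wet/dry record (1 = wet), generated from a known
# first-order Markov chain so the fitted probabilities can be verified.
p_wet_given_dry, p_wet_given_wet = 0.25, 0.65
days = np.empty(5000, dtype=int)
days[0] = 0
for t in range(1, days.size):
    p = p_wet_given_wet if days[t - 1] == 1 else p_wet_given_dry
    days[t] = rng.random() < p

# Fit: transition frequencies conditional on the previous day's state.
prev, curr = days[:-1], days[1:]
p01 = curr[prev == 0].mean()  # estimated P(wet | dry)
p11 = curr[prev == 1].mean()  # estimated P(wet | wet)
print(p01, p11)
```

The same fit applied to observed and simulated occurrence series gives a compact comparison of wet/dry persistence, one aspect of temporal variability that models often misrepresent.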

Figure Caption From Presentation Notes Figure 3: An illustration of a frequency domain analysis. Illustrated is the estimated power spectrum of global annual mean temperature simulated in 9 long control runs with coupled general circulation models. The solid black curve is a corresponding estimate obtained from observations and the dotted black line is the estimate obtained after an estimate of the anthropogenic signal is removed from the data. The spectral estimates have varying equivalent bandwidths adjusted in such a way that all estimates have the same 5-95% uncertainty band. The vertical dashed lines indicate the range of time scales (10-60 years) most important for detection and attribution studies. Models that simulate significantly less variance than observed on these time scales are indicated by an asterisk. From IPCC WG1 Third Assessment Report (2001, Ch 12, Figure 12.2).
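The kind of comparison in Figure 3 can be sketched with a standard spectral estimator. The two series below are synthetic AR(1) stand-ins (coefficients and seed are invented): the "observed" series has stronger low-frequency variability than the "model" series, and the sketch compares their estimated power in the 10-60 year band singled out above.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)

# Two hypothetical annual-mean temperature series: an AR(1) "observed" series
# and a model series with weaker low-frequency (decadal) variability.
def ar1(phi, n, sigma):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

obs = ar1(0.8, 1000, 1.0)
mod = ar1(0.3, 1000, 1.0)

# Welch spectral estimates with identical segment length, hence identical
# bandwidth, for a fair comparison.
f, p_obs = signal.welch(obs, fs=1.0, nperseg=256)
f, p_mod = signal.welch(mod, fs=1.0, nperseg=256)

# Compare mean power at decadal time scales (periods of 10-60 years).
band = (f >= 1.0 / 60.0) & (f <= 1.0 / 10.0)
print(p_obs[band].mean(), p_mod[band].mean())
```

A model whose band power falls significantly below the observed estimate on these time scales would be flagged, as the asterisks in Figure 3 do.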

Tools Pattern correlation: can include time evolution; large gradients in space-time may be a problem; assess uncertainty/significance with resampling techniques and by using ensembles.

Figure Caption From Presentation Notes Figure 4: An example of the BLT diagram that can be used to intercompare pattern correlation and related information. This particular diagram (from Lambert and Boer, 2001, Climate Dynamics) intercompares the surface air temperature climatologies of models participating in CMIP1. The pink radii labeled correlation indicate the pattern correlation between observed and simulated climatologies. The horizontal scale indicates the mean squared difference between the climatologies, and the vertical scale compares the spatial variance of the model simulated climatology with the observed spatial variance. A perfect model would be located at the red dot. Dots illustrate the quality of the model simulation of the meridional structure (zonal means) of DJF mean temperature. Coloured labels indicate models without flux adjustment. Performance is uniformly good because it is easy to reproduce the gross meridional temperature structure of the observed climate. Triangles illustrate the quality of the model simulation of the eddy (pattern) structure that remains when zonal means are removed from the climatological distribution of DJF mean temperature. This statistic discriminates more effectively between models (for example, flux adjusted models tend to perform better) because correlations in this calculation are not dominated by the large (and easy to obtain) pole-to-equator temperature gradient. Note that in both cases, the ensemble mean simulation outperforms individual simulations.
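The three quantities plotted on a BLT-type diagram are easy to compute. The sketch below uses invented fields on a small unweighted grid (a real calculation would weight by grid-cell area): centred pattern correlation, the ratio of spatial variances, and the mean squared difference.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical observed and simulated climatologies on a small grid.
obs = rng.normal(0.0, 1.0, size=(10, 20))
mod = 0.8 * obs + rng.normal(0.0, 0.5, size=(10, 20))  # imperfect "model"

def blt_stats(model, observed):
    """Centred pattern correlation, spatial-variance ratio, and mean squared
    difference: the three quantities summarized in a BLT-type diagram."""
    m = model - model.mean()
    o = observed - observed.mean()
    corr = (m * o).mean() / (m.std() * o.std())
    var_ratio = model.var() / observed.var()
    mse = ((model - observed) ** 2).mean()
    return corr, var_ratio, mse

corr, var_ratio, mse = blt_stats(mod, obs)
print(f"r = {corr:.2f}, var ratio = {var_ratio:.2f}, MSE = {mse:.2f}")
```

Removing the zonal mean from both fields before computing these statistics isolates the eddy structure, which, as the caption notes, discriminates between models much more effectively than the full field.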

Figure Caption From Presentation Notes Figure 5: As Figure 4, except for DJF precipitation.

Tools Evaluate hindcast and forecast skill: seasonal forecasting; climate change detection/attribution, which turns out to be multiple linear regression,

T_t = \sum_{i=1}^{p} a_i S_{i,t} + N_t,

where a_i is the amplitude of the i-th signal.
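The detection regression can be sketched with ordinary least squares on synthetic data (the signal shapes, amplitudes, and noise level below are all invented). Real detection studies use generalized regression, weighting by an estimate of the internal-variability covariance, but the structure of the fit is the same.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical model-estimated signal patterns (p = 2, e.g., GHG and aerosol),
# each a time series of length n after space-time reduction.
n = 50
S = np.column_stack([np.linspace(0.0, 1.0, n),           # trend-like signal
                     np.sin(np.linspace(0.0, 3.0, n))])  # second signal shape
true_amp = np.array([1.2, 0.5])

# "Observations": the signals with known amplitudes plus internal variability N_t.
T = S @ true_amp + rng.normal(0.0, 0.2, size=n)

# Ordinary least squares for the amplitudes a_i in T_t = sum_i a_i S_it + N_t.
a_hat, *_ = np.linalg.lstsq(S, T, rcond=None)
print(a_hat)
```

Detection then asks whether the estimated amplitudes are significantly different from zero, and attribution asks whether they are consistent with one.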

[Figure: model-estimated signal patterns (left) and observed anomalies (right) for the decades 1946-56, 1956-66, 1966-76, 1976-86, and 1986-96]

Figure Caption From Presentation Notes Figure 6: The diagram illustrates schematically the extended 5-decade observation (on the right) and corresponding model simulated signal pattern (on the left) that are matched by generalized multiple linear regression in modern detection studies (such as Stott et al, 2000, Science). Signal uncertainty resulting from internal climate variability is reduced by decadal averaging, by averaging across an ensemble of transient simulations, and by considering only the largest global scales. The mask that appears in the animation of this diagram illustrates the much greater challenge posed by detection on the regional scale.

Summary What sophisticated validation methods can be employed in evaluating regional climate? Need to decide what constitutes an acceptable model. We have many validation methods (statistical and physical). Need to use methods that are well understood so that physical interpretation is not obscured. Can't avoid cost: long runs and ensembles are needed.

Question 3 Can downscaling improve the simulations of extreme events over GCMs? How do we demonstrate this, and what measures can be used? What should we expect? How do GCMs do? How might RCMs improve, and how might we tell?

Figure Caption From Presentation Notes Figure 7: Daily precipitation as observed at the Toronto Airport during a 2-year period and as simulated by two generations of the CCCma atmospheric general circulation model at a grid point near Toronto. The diagram demonstrates that these GCMs simulate precipitation variability that is similar to that which is observed. The more recent version of the model (AGCM3) appears to produce smaller precipitation extremes at this particular location than the earlier version of the model. Scale considerations suggest that models should simulate smaller precipitation extremes than observed, and that there should be increasing agreement as the resolution of the model increases.

Figure Caption From Presentation Notes Figure 8: As Figure 7, except for a 90-day subset of the two-year record. We see here some suggestion that the models precipitate small amounts of moisture more frequently than observed.

[Figure: observed 20-year precipitation events (mm in 24 hours; contour labels at 60 mm and 80 mm) versus CGCM1-simulated 20-year precipitation events for the present climate (1975-1995)]

Figure Caption From Presentation Notes Figure 9: Illustration of the ability of a GCM to simulate extreme precipitation. Upper panel: 20-year return values for 24-hour precipitation as estimated from Canadian station data. Lower panel: As above, except as estimated from daily precipitation amounts simulated by the first generation Canadian coupled model (CGCM1) in an ensemble of three transient change simulations (see Boer et al., 2000, Climate Dynamics) forced with observed changes in greenhouse gas concentrations and sulphate aerosol loadings. The period analysed is 1975-1995. Note that there is some (probably fortuitous) similarity between the return values of the observed and simulated climates. Extreme precipitation events are clearly undersimulated by the model on the west coast of North America (because the model has a very smooth version of the surface topography) and on the eastern seaboard.
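Return values like those mapped in Figure 9 are typically obtained by fitting an extreme-value distribution to annual maxima. A minimal sketch, using synthetic annual maxima drawn from a known Gumbel distribution (all numbers here are invented) so the fitted 20-year return value can be checked:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical 100-year series of annual maximum 24-hour precipitation (mm),
# drawn from a Gumbel distribution with known parameters.
annual_max = stats.gumbel_r.rvs(loc=40.0, scale=10.0, size=100, random_state=rng)

# Fit a GEV to the annual maxima and read off the 20-year return value:
# the level exceeded with probability 1/20 in any given year.
shape, loc, scale = stats.genextreme.fit(annual_max)
rv20 = stats.genextreme.ppf(1.0 - 1.0 / 20.0, shape, loc=loc, scale=scale)
print(f"20-year return value: {rv20:.1f} mm")
```

For the true parameters above, the 20-year return value is about 70 mm; the sampling uncertainty of the fitted value from a finite record is itself substantial, which is one reason comparisons of observed and simulated return-value maps must be made cautiously.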

[Figure: simulated 20-year precipitation events (present climate), and locations where they are significantly different from ERA15 (too wet / too dry)]

Figure Caption From Presentation Notes Figure 10: Continued assessment of the ability of a GCM to simulate observed daily precipitation extremes. The upper panel shows the global distribution of estimated 20-year return values of daily precipitation simulated by CGCM1 for present day climate. The lower panel identifies locations where CGCM1 simulated daily precipitation extremes are significantly different from daily precipitation extremes inferred from the ECMWF ERA15 reanalysis.

[Figure panels: two T42L18 AMIP2 simulations]

Figure Caption From Presentation Notes Figure 11: Continued assessment of the ability of GCMs to simulate daily precipitation extremes. Upper and lower panels illustrate estimated 20-year return values of daily precipitation from two AMIP2 simulations performed with two closely related climate models. These two models have different parameterizations of convection, and one is tempted to infer that this is the cause of the large difference. However, comparison between the upper model and an unrelated model using the same convection scheme (not shown) shows that such a conclusion would be premature. This third model has precipitation extremes behaviour that is very similar to that of the lower model above.

Question 3 What should we expect RCMs to improve? Reduction of mean bias; improved stochastic behaviour; more realistic variance; better spatial variability; better tail behaviour (i.e., extremes). What would we not expect RCMs to improve? Large-scale errors in the climate (and forced response) of the driving model (e.g., an El Niño-like response to GHG forcing).

Question 3 - can we demonstrate? Reduction of mean bias: yes. Improved stochastic behaviour: probably; one can study threshold crossing, which is less demanding of data. More realistic variance: maybe; the tests are not as powerful and the data are not as good.
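A threshold-crossing comparison of the kind mentioned above can be sketched simply. The daily precipitation series below are synthetic (wet-day probabilities, intensity scales, and seed are invented, with the "model" deliberately drizzling too often); the test compares the frequency of days above a fixed threshold.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Hypothetical 10-year daily precipitation records (mm): occurrence is a
# Bernoulli draw, intensity is exponential; the "model" rains too often.
obs = rng.exponential(4.0, size=3650) * (rng.random(3650) < 0.30)
mod = rng.exponential(3.0, size=3650) * (rng.random(3650) < 0.45)

# Threshold-crossing comparison: frequency of days above 1 mm.
threshold = 1.0
f_obs = (obs > threshold).mean()
f_mod = (mod > threshold).mean()

# Two-proportion z-test on the crossing frequencies (equal sample sizes).
n = obs.size
p_pool = (f_obs + f_mod) / 2.0
z = (f_mod - f_obs) / np.sqrt(2.0 * p_pool * (1.0 - p_pool) / n)
p_value = 2.0 * stats.norm.sf(abs(z))
print(f"obs freq = {f_obs:.3f}, model freq = {f_mod:.3f}, p = {p_value:.2g}")
```

As with the paired tests earlier, serial correlation in daily precipitation occurrence inflates the nominal significance; a blocked or resampling version of the test is safer in practice.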

Question 3 - can we demonstrate? Better spatial variability: both easy and difficult.

Figure Caption From Presentation Notes Figure 12: This animated diagram shows that some aspects of the spatial variability of RCMs relative to GCMs are easy to assess. The spatial structure seen in this snapshot from a run with the Canadian RCM is obviously superior to the kinds of spatial structure that may be seen in a typical scene for the same region from the Canadian second generation GCM. Diagram courtesy René Laprise and colleagues.

Question 3 - can we demonstrate? Better spatial variability? Is there a supporting body of literature? Many local features have remote links; can RCMs simulate these features? Better tail behaviour (i.e., extremes): maybe; we need appropriate observations, and model performance is mixed, at best.

Summary Can downscaling improve the simulations of extreme events over GCMs? Yes - they should. Can't avoid cost or uncertainty as you move into the tails. Again, this leads to questions about what constitutes acceptable model performance.