Training: Climate Change Scenarios for PEI. Training Session, April 16, 2012. Neil Comer, Research Climatologist.
Considerations: Which models? Which scenarios? How do I get information for my location? What is the uncertainty in the results? Where do I start? What about downscaling?
This Training Session: Use of GCM/RCM grid cell output from many models and scenarios. Given the choices, what do we do? Best approaches for handling the uncertainty. More detailed investigation (of a single location) would require statistical downscaling techniques. Statistical downscaling (using SDSM, LARS, ASD, etc.) is not the focus of this training.
UPEI will develop its own customized climate database * observations * climate records * trends * models * projections * extremes * sea level * erosion * one-stop * COMMUNITY INVOLVEMENT (observer network)
Future Plans: our own Regional Climate Modelling Program (10 km resolution or better). PRECIS (Providing Regional Climates for Impacts Studies) - developed at the UK Met Office Hadley Centre - can be applied anywhere to generate detailed climate change projections - long history of success worldwide (in Canada, at the University of Regina, with whom we will collaborate). WRF (Weather Research and Forecasting) model - NCAR.
The Typical Model Grid (GCM/RCM): the models provide grid-cell AVERAGED values, not a single point location value.
What do the models see here? It varies tremendously by model; there is no standard.
The same holds for a REGIONAL CLIMATE MODEL: grids vary tremendously by model; there is no standard.
Background: The models generally use 1961-1990 as their baseline period (the most recent normals period is 1981-2010). Anomalies are the DIFFERENCE between a future-period projection and the baseline. Maps can output model values OR anomalies; scatterplots output anomalies (the change) from the baseline value. Future projections tend to be averaged over standard periods as well: 2020s = 2011-2040, 2050s = 2041-2070, 2080s = 2071-2100.
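As a minimal illustration (with invented numbers), an anomaly is just the future-period average minus the baseline average:

```python
# Invented example values (deg C) for a single grid cell.
baseline_mean = 5.2       # 1961-1990 model baseline, annual mean
future_2050s_mean = 7.9   # 2041-2070 (2050s) projection, annual mean

# The anomaly is the difference between the future period and the baseline.
anomaly = future_2050s_mean - baseline_mean
print(f"2050s anomaly: {anomaly:+.1f} C")
```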
Many models are available. Other than the models themselves, what else affects our projections? Emission scenarios: SRES (B1, A1B, A2), the Special Report on Emissions Scenarios, changing in the new release to RCPs (2.6, 4.5, 6, 8.5), Representative Concentration Pathways.
So for any selected location: the model selected affects the result, and the emission scenario selected affects the result. In AR4: 24 GCMs with 2 or 3 emission scenarios each (about 75 outcomes); more in AR5. Within Canada we also have access to REGIONAL CLIMATE MODELS (RCMs): CRCM (Canadian Regional Climate Model, EC/OURANOS)? PRECIS using A2 and B2 emission scenarios? UPEI PRECIS/WRF hi-res (10 km) for the Atlantic region?
Model considerations: Newer versions of models are better than older ones. An increase in temporal and spatial resolution is preferable. Uncertainty in: 1. emission scenarios; 2. parameterization of sub-grid scale processes (cloud? ice? soil?); 3. climate sensitivity (increasing?). Nevertheless, models represent the best method available to project future climate (IPCC scientists).
Models can provide more than just averages (extreme variables): 2 m Air Temperature Range (C); Consecutive Dry Days (days); Days with Rain > 10 mm/d (days); Fraction of Annual Total Precip > 95th Percentile (%); Fraction of Time < 90th Percentile Min Temp (%); Number of Frost Days (days); Maximum Heat Wave Duration (days); Maximum 5-Day Precipitation (mm); Simple Daily Precipitation Intensity Index (mm/day); Growing Season Length (days)... or we can calculate our own indicators.
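As an example of calculating our own indicator, here is a small sketch of the Consecutive Dry Days index; the 1 mm/day dry-day threshold is an assumed convention, and the precipitation record is invented:

```python
def max_consecutive_dry_days(daily_precip_mm, dry_threshold=1.0):
    """Longest run of days with precipitation below the threshold.
    (A 1 mm/day threshold is assumed here; conventions vary.)"""
    longest = current = 0
    for p in daily_precip_mm:
        if p < dry_threshold:
            current += 1
            longest = max(longest, current)
        else:
            current = 0
    return longest

# Invented 10-day precipitation record (mm/day):
precip = [0.0, 0.2, 5.1, 0.0, 0.0, 0.4, 12.0, 0.0, 0.9, 0.0]
print(max_consecutive_dry_days(precip))  # 3
```

The same loop pattern works for any daily model output downloaded for a grid cell.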
Model Summary: Many GCMs, and more and more Regional Climate Models coming on-line (NARCCAP project). Results can vary widely between models and the emission scenario selected. Some models do better than others at reproducing the historical climate in different regions (we will see this). In complex environments (coastal, mountainous, sea ice), extra care is required (grid-cell averaging and process parameterization). Downscaling of even RCMs is likely required for some investigations. It is critical not to rely on any single model/scenario for decision-making; due diligence requires the consideration of more than a single possible outcome.
One approach: Climate Model Ensembles. Applied results with uncertainty estimates.
History of Ensembles. Definition: the consideration/combination of multi-model output. Ensembles have been used for a long time in weather forecasting; forecasters consider more than one model. Climate models are like long-term forecast models, so it makes sense to use the same methodology. For climate models this requires a lot of processing, so only recently has it become available.
Research studies support ensembles:
- IPCC-TGICA, 2007: General Guidelines on the Use of Scenario Data for Climate Impact and Adaptation Assessment, Version 2. Prepared by T.R. Carter on behalf of the Intergovernmental Panel on Climate Change, Task Group on Data and Scenario Support for Impact and Climate Assessment, 66 pp.
- Gleckler, P. J., K. E. Taylor, and C. Doutriaux (2008): Performance metrics for climate models. Journal of Geophysical Research, Vol. 113, D06104.
- IPCC Expert Meeting on Assessing and Combining Multi-Model Climate Projections, Boulder, Colorado, USA, 25-27 January 2010. http://www.ipcc.ch/pdf/supporting-material/ipcc_em_mme_goodpracticeguidancepaper.pdf
- Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, 2007. Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.), Chapter 10.5.4, The Multi-Model Ensemble Approach. http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-5-4-1.html
Ensemble considerations for practitioners: The use of a limited number of models or scenarios provides no information about the uncertainty involved in climate modelling; ensembles can help. Although each GCM represents the best effort of its modelling centre, there are biases. The use of an ensemble (mean/median) of models tends to converge to a best estimate by reducing the strong biases of single models. There are other alternatives to ensembles as well, which will be demonstrated; the choice depends on stakeholder demands.
Why the differences between models? The reason can be as simple as land/sea distribution and grid resolution. [Maps: CGCM3T47 (Canada) vs. MIROC2.3.2 (Japan)]
Why the differences? Elevation. [Maps: CGCM3T47 (Canada) vs. MIROC2.3.2 (Japan)]
More: Each model has varying degrees of complexity in its atmospheric and oceanic physics and their coupling. Sub-grid scale processes vary between models even more (e.g. snow? sea ice? soil layers? CLOUD?). ALL models must approximate (parameterize) sub-grid scale processes; e.g. Canada uses the Canadian Land Surface Scheme (CLASS), while other models are not as complex when dealing with soil levels, snow and ice. Boundary conditions and initial conditions differ as well.
Ensembles and Uncertainty: Importantly, the standard deviation of model results over each grid cell can give us an INDICATION or CHARACTERIZATION of model certainty/uncertainty. Areas of low SD = areas of higher model agreement; areas of high SD = areas of lower model agreement. An ensemble of many independent models gives us the mean projected change AND a basic indication of the projection uncertainty. Note that ensemble SD isn't a direct measure of uncertainty.
Why not a direct measure? Perhaps all models use similar imperfect code, which would lead to convergence on the wrong answer. For a given emission scenario we have, optimally, a population size of 24; is this sufficient for our estimate? Some model projections seem to be large outliers compared to others, but all models are considered in the usual (non-weighted) ensemble.
Calculation of the Ensemble: 1. Obtain monthly output data for the 24 models. 2. Regrid each model's output to a common resolution (selected as the 2.5 x 2.5 degree resolution of the NCEP re-analysis data). 3. Perform the required statistics on a grid-cell by grid-cell basis (mean, standard deviation, median, etc.). 4. Generate new gridded output from these results. Contributing models from the AR4: United States, Japan, France, Australia, U.K., Canada, Germany, Norway, China, Italy, Russia.
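The statistics in step 3 can be sketched as follows, assuming the models have already been regridded to a common grid (the toy grid size and the random anomaly values are invented for illustration):

```python
import numpy as np

# Suppose each model has already been regridded to a common grid (step 2).
# Stack the fields into one array of shape (n_models, n_lat, n_lon).
# Toy data: 24 models on a tiny 3 x 4 grid of temperature anomalies (deg C).
rng = np.random.default_rng(0)
fields = rng.normal(loc=2.5, scale=0.8, size=(24, 3, 4))

# Step 3: statistics computed grid-cell by grid-cell (axis 0 = models).
ens_mean = fields.mean(axis=0)
ens_median = np.median(fields, axis=0)
ens_sd = fields.std(axis=0)   # spread across models at each cell

# Low-SD cells indicate higher model agreement; the 0.8 cutoff is arbitrary.
agreement_mask = ens_sd < 0.8
print(ens_mean.shape, int(agreement_mask.sum()))
```

Step 4 would simply write `ens_mean`, `ens_median`, and `ens_sd` back out as gridded fields.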
Ensemble Validation: How well does the ensemble do? Validation is best done against gridded observed datasets like NCEP (National Centers for Environmental Prediction) and ERA-40 (ECMWF, European Centre for Medium-Range Weather Forecasts). There are other gridded datasets as well; in Canada, CANGRID at 50 km resolution. All gridded observational datasets take station observations and interpolate them onto a continuous grid using various techniques.
Ensemble Validation: 1971-2000 Mean Annual Temperature. [Maps: NCEP vs. Ensemble MEAN]
Ensemble Validation: 1971-2000 Mean Annual Precipitation. [Maps: NCEP vs. Ensemble MEAN]
How can we improve the Ensemble? Perhaps through VALIDATION, but there are no set metrics: Which variables? Which temporal scale? What domain area (global, continental, regional)? Historically, climate change scenarios have used the one model = one vote methodology: no model is assigned more value than another.
There is some discussion of weighting model output. Simply put, models which validate more closely against the historical baseline climate would be given more weight in the overall ensemble. Weighting the better-performing models would (ideally) provide a better projection and a better uncertainty estimate. But end-user needs differ so greatly that one metric or set of metrics is unlikely to be of use to all (maybe precipitation is needed and the temperature variable is not). Some initial literature has hinted that little is gained by this added step, so perhaps just include all models.
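A sketch of one possible weighting scheme; inverse-RMSE weights are an assumption for illustration only, and the anomalies and RMSE scores are invented:

```python
import numpy as np

# Toy example: four models' projected anomalies for one grid cell (invented),
# and each model's RMSE against the historical baseline (invented).
anomalies = np.array([2.1, 2.8, 3.5, 1.9])
rmse = np.array([0.5, 0.9, 2.0, 0.7])

# One simple (assumed) scheme: inverse-RMSE weights, normalized to sum to 1,
# so better-validating models count for more.
weights = 1.0 / rmse
weights /= weights.sum()

unweighted = anomalies.mean()
weighted = (weights * anomalies).sum()
print(f"unweighted {unweighted:.2f}, weighted {weighted:.2f}")
```

Here the weighted mean shifts toward the models with lower historical error, which is exactly the effect (for better or worse) the slide describes.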
Why little gain? Observation uncertainty. Inherent model uncertainty. Robustness of the performance metric: perhaps there are errors common to all models. Statistical limitations: not enough models, so there is a risk of being overconfident when using metrics. Studies have shown that different metrics produce different rankings of models (e.g., Gleckler et al., 2008). There is no guarantee that historical performance metrics will be a reliable gauge of future projections.
Ensemble Summary: Initially, all models should be equally considered in any study (see the full range). Perhaps the particular interest is in the extremes of the projections, not simply the mean of all models. Models may not represent truly independent estimates of the range of possibilities (common shortcomings?). Ensembles statistically outperform individual models, and there is a good scientific basis for their use. Weighting techniques for ensembles are new and, without proper justification of the technique, unlikely to add value.
Applied Training. Generating, for a single location: an Ensemble Estimate (average of all models), a Range Estimate (range of all models), and a Validated Estimate (best historical-fit models).
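These three estimates can be sketched for one location as follows; the model names, anomaly values, and best-fit model list are all invented:

```python
import numpy as np

# Invented 2050s temperature anomalies (deg C) at one grid cell, one per model,
# plus a hypothetical list of models that validated best against history.
anomalies = {
    "model_a": 2.2, "model_b": 3.1, "model_c": 1.8,
    "model_d": 2.9, "model_e": 4.0,
}
best_fit = ["model_a", "model_d"]  # hypothetical validation result

values = np.array(list(anomalies.values()))
ensemble_estimate = values.mean()                 # average of all models
range_estimate = (values.min(), values.max())     # full range of all models
validated_estimate = np.mean([anomalies[m] for m in best_fit])

print(ensemble_estimate, range_estimate, validated_estimate)
```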
We will use the CCCSN, right here at UPEI: atlantic.cccsn.ca
Visualization Page
Scatterplot Output: anomaly (change) of each model for the 2020s, 2050s and 2080s; a table of the graphed values; download of the table values in CSV; analysis from this data (all-model mean).
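A minimal sketch of computing the all-model mean from a downloaded table; the CSV column layout shown is an assumption, not the actual CCCSN export format:

```python
import csv
import io
import statistics

# Hypothetical CSV in the spirit of the downloadable table; the column
# names and values here are invented, NOT the actual CCCSN export format.
csv_text = """model,scenario,period,tas_anomaly_C
CGCM3T47,A2,2050s,3.1
MIROC,A1B,2050s,2.4
HadCM3,B1,2050s,1.9
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
values = [float(r["tas_anomaly_C"]) for r in rows if r["period"] == "2050s"]
print(statistics.mean(values))  # all-model mean anomaly for the 2050s
```

In practice you would open the downloaded file with `open(path)` instead of `io.StringIO`.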