Beyond IPCC plots Ben Sanderson
What assumptions are we making?
The Chain of Uncertainty: Heat waves. Future Emissions → Global Climate Sensitivity → Regional Feedbacks → Random Variability → Heat Wave Frequency
Future Emissions. [Figure, IPCC AR5: model-mean global surface temperature change (°C) for the high emission scenario RCP8.5 and the low emission scenario RCP2.6.]
The Chain of Uncertainty: Heat waves. Future Emissions → Global Climate Sensitivity → Regional Feedbacks → Random Variability → Heat Wave Frequency
Random variability: Deser et al. (2012)
Large ensembles can sample weather noise
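A minimal sketch with synthetic numbers (not actual large-ensemble output) of why this works: every member shares the same forced response, so the ensemble mean isolates the forced signal while the member-to-member spread estimates the internal "weather noise". The ensemble size, trend and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_members, n_years = 40, 80          # hypothetical large ensemble of 80-year runs
years = np.arange(n_years)
forced_trend = 0.03 * years          # assumed forced warming, deg C per year (toy value)

# every member shares the forced response; each adds its own realisation of weather noise
ensemble = forced_trend + rng.normal(0.0, 0.15, size=(n_members, n_years))

forced_estimate = ensemble.mean(axis=0)        # ensemble mean isolates the forced signal
noise_estimate = ensemble.std(axis=0, ddof=1)  # member spread estimates internal variability

# 30-year trends differ from member to member purely because of sampled weather noise
slopes = np.polyfit(years[:30], ensemble[:, :30].T, 1)[0]
print(f"30-yr trend: {slopes.mean():.3f} +/- {slopes.std(ddof=1):.3f} deg C/yr across members")
```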
The Chain of Uncertainty: Heat waves. Future Emissions → Global Climate Sensitivity → Regional Feedbacks → Random Variability → Heat Wave Frequency
CMIP-5: [Figure: the multi-model ensemble, comprising CanCM4, CanESM2, CAWCR-ACCESS1, CCSM4, CESM1, CNRM-CM5, CSIRO-Mk3-6-0, FGOALS-s2, GEOS-5, GFDL-CM3, GFDL-ESM2G, GFDL-ESM2M, GISS-E2-H, GISS-E2-R, HadCM3, HadGEM2-A, inmcm4, IPSL-CM5A, MIROC4h, MIROC5, MIROC-ESM, MPI-ESM-LR, MRI-CGCM3 and NorESM1-M.]
The IPCC worldview: means and confidence
What is this assuming? Truth + error: each model is the truth plus an independent error. More models: more confidence in the projection.
Another model? Indistinguishable: truth and models are draws from the same distribution. More models: better knowledge of the distribution.
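A toy numerical illustration of the two interpretations (synthetic numbers only, not taken from the talk): under "truth + error" the multi-model mean converges to the truth as models are added; under "indistinguishable" it converges to the ensemble's own centre, which need not be the truth.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 3.0                          # hypothetical "true" value of some climate quantity

for n in (5, 20, 100, 1000):
    # truth + error: every model is the truth plus an independent error
    truth_centered = truth + rng.normal(0.0, 0.5, n)
    # indistinguishable: truth and models are all draws from one distribution,
    # whose centre (3.4 here) need not coincide with the truth
    indistinguishable = rng.normal(3.4, 0.5, n)
    print(f"n={n:4d}  truth+error mean error={abs(truth_centered.mean() - truth):.3f}  "
          f"indistinguishable mean error={abs(indistinguishable.mean() - truth):.3f}")
# the first error shrinks toward zero as models are added; the second settles near 0.4
```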
The spectacular mean Reichler and Kim, BAMS (2008)
Partly due to geometry: the ensemble mean lies closer to the observations than the typical ensemble member (the Cauchy-Schwarz inequality). [Figure: ensemble members, observations and the ensemble mean.]
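Written out (a standard inequality, not specific to any one paper): at each grid point, with model values $x_i$, observation $o$ and $N$ models,

$$\Bigl(\tfrac{1}{N}\textstyle\sum_{i=1}^{N}(x_i - o)\Bigr)^{2} \;\le\; \tfrac{1}{N}\textstyle\sum_{i=1}^{N}(x_i - o)^{2},$$

which follows from the Cauchy-Schwarz (or Jensen) inequality. Averaging over grid points gives $\mathrm{MSE}(\bar{x}) \le \overline{\mathrm{MSE}}$: the ensemble mean cannot score worse than the average member, with equality only when all members agree.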
and partly due to tuning Sanderson and Knutti (2012)
Truth-centered now doesn't mean truth-centered later (Sanderson and Knutti, 2012)
Indistinguishable? OK, so the ensemble members for future projections are indistinguishable from truth, right?
Some models are better than others (but the winner depends on what you look at)
So can we find emergent constraints? Sherwood et al. (2014), Qu et al. (2014), Wenzel et al. (2014)
But correlation (on its own) is not conclusive (Caldwell et al., 2014). [Figure: randomized numbers (and ranges) plotted against actual climate sensitivity.]
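A sketch of this data-mining caveat using purely synthetic numbers (ensemble size, sensitivities and predictor count are all assumptions): with only ~25 models, screening many unrelated "observables" against climate sensitivity reliably turns up impressive-looking correlations by chance alone.

```python
import numpy as np

rng = np.random.default_rng(2)
n_models, n_screened = 25, 500                  # CMIP-sized ensemble, many candidate fields

sensitivity = rng.normal(3.2, 0.7, n_models)    # hypothetical climate sensitivities (deg C)
predictors = rng.normal(size=(n_screened, n_models))   # pure noise, no physics at all

corr = np.array([np.corrcoef(p, sensitivity)[0, 1] for p in predictors])
print(f"best |correlation| among {n_screened} random predictors: {np.abs(corr).max():.2f}")
# typically around 0.5-0.6, i.e. as strong as some published emergent constraints,
# which is why a correlation without a physical mechanism is not conclusive
```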
[Figure: family tree of climate models, 1950-2010, tracing shared ancestry among modelling centres (NCAR CCM/CCSM/CESM, GFDL, UKMO HadCM/HadGEM, ECHAM/MPI, MIROC, CNRM, GISS, CSIRO, MRI, BCC, INMCM, FGOALS/ACCESS and others).]
CMIP-5: [Figure: the same multi-model ensemble, with models linked by shared atmospheric, ocean and land code components, showing which models inherit code from one another.]
Are we overestimating confidence because models are replicated?
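A standard statistical identity (not from the slides) makes the concern concrete: if the $N$ members' errors share an average pairwise correlation $\rho$ because of replicated code, the uncertainty of the multi-model mean stops shrinking like $1/\sqrt{N}$,

$$\operatorname{Var}(\bar{x}) \;=\; \frac{\sigma^{2}}{N}\bigl(1 + (N-1)\rho\bigr) \quad\Longrightarrow\quad N_{\mathrm{eff}} \;=\; \frac{N}{1 + (N-1)\rho}.$$

With $N = 30$ models and $\rho = 0.5$, $N_{\mathrm{eff}} \approx 2$: thirty nominally independent models carry roughly the information of two.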
And are we creating artificial emergent constraints? Wenzel et al (2014)
Can we weight models to take account of interdependency?
[Figure: models scattered in the space of two observables, with the observed value marked.]
Model quality: [Figure: each model weighted by its distance to the observed value in observable space; the radius of model quality sets how quickly the weight falls off (example weights 0.1 to 0.8).]
Model independence: [Figure: each model down-weighted when other models sit nearby in observable space; the radius of model similarity sets what counts as close (example weights 0.33 to 1.0).]
Differences between CMIP mean states are much greater than those from initial conditions
Overall weight: [Figure: the product of the quality and independence weights for each model (example values 0.1 to 0.7).]
A weighting function for model quality and independence: the weight of a given model $i$ is the product of a model quality metric and a model independence metric,

$$ w_i \;=\; \underbrace{e^{-\left(d_i/D_q\right)^{2}}}_{\text{model quality}} \;\times\; \underbrace{\Bigl[\,1 + \sum_{j \neq i} e^{-\left(d_{ij}/D_u\right)^{2}}\Bigr]^{-1}}_{\text{model independence}} $$

where $d_i$ is the distance of model $i$ to the observations, $d_{ij}$ is the distance of model $i$ to another model $j$, $D_q$ is the quality scaling parameter (radius of model quality) and $D_u$ is the similarity scaling parameter (radius of model similarity).
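A compact sketch of that weighting in Python; the Gaussian form follows the equation above, but the distances, radius values and toy models below are made-up placeholders rather than CMIP numbers.

```python
import numpy as np

def model_weights(d_obs, d_pair, D_q, D_u):
    """Quality x independence weights.

    d_obs  : (n,)   distance of each model to the observations
    d_pair : (n, n) pairwise inter-model distances
    D_q    : radius of model quality (how fast the skill weight decays)
    D_u    : radius of model similarity (how close counts as a near-replicate)
    """
    quality = np.exp(-(d_obs / D_q) ** 2)
    # each model is penalised for every other model sitting close to it
    similarity = np.exp(-(d_pair / D_u) ** 2)
    np.fill_diagonal(similarity, 0.0)
    independence = 1.0 / (1.0 + similarity.sum(axis=1))
    w = quality * independence
    return w / w.sum()                       # normalise so the weights sum to 1

# toy example: models 0 and 1 are near-replicates, model 2 is skilful and independent
d_obs = np.array([0.5, 0.55, 0.4, 1.2])
d_pair = np.array([[0.0, 0.1, 0.9, 1.0],
                   [0.1, 0.0, 0.9, 1.1],
                   [0.9, 0.9, 0.0, 1.2],
                   [1.0, 1.1, 1.2, 0.0]])
print(model_weights(d_obs, d_pair, D_q=0.8, D_u=0.5))
```

The two radii are free choices: a small similarity radius only penalises near-duplicates, while a larger one down-weights any clustering of models.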
The Chain of Uncertainty: Heat waves. Future Emissions → Global Climate Sensitivity → Regional Feedbacks → Random Variability → Heat Wave Frequency
[Figure: climate sensitivity evidence from CMIP5 simulations, the instrumental period, the mean state and paleo records, and their Bayesian combination. Data from Knutti et al. (2008).]
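A toy illustration of the "Bayesian combination" step, with entirely made-up Gaussian likelihoods rather than the Knutti et al. data: independent lines of evidence about climate sensitivity are combined by multiplying their likelihoods on a common grid and renormalising.

```python
import numpy as np

ecs = np.linspace(0.5, 8.0, 751)       # grid of climate sensitivity values (deg C)
dx = ecs[1] - ecs[0]

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# hypothetical likelihoods for the separate lines of evidence (illustrative numbers only)
lines = {
    "CMIP5 simulations":   gaussian(ecs, 3.4, 0.8),
    "instrumental period": gaussian(ecs, 2.8, 1.2),
    "mean state":          gaussian(ecs, 3.2, 1.0),
    "paleo records":       gaussian(ecs, 3.0, 1.5),
}

posterior = np.ones_like(ecs)
for likelihood in lines.values():
    posterior *= likelihood            # assumes the lines of evidence are independent
posterior /= posterior.sum() * dx      # normalise to a probability density

print(f"combined best estimate: {ecs[np.argmax(posterior)]:.2f} deg C")
```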
The devil in the grid box
[Figure: the uncertain physics parameters perturbed in the ensemble, including critical relative humidity thresholds (rhcritl, rhcrith), cloud fraction, shallow and deep convection parameters (conke, c0, tau, dmpdz), and land-surface rooting parameters.]
Climate sensitivity is a function of uncertain model parameters
CMIP is an ensemble of best guesses, not a PDF (Rowlands et al., 2013)
So are PPEs (perturbed physics ensembles) the answer?
Most models look identical (many parameters do nothing of interest). There is no filter! (some ensemble members are demonstrably unlike Earth). Some models fall outside the CMIP climate sensitivity range but cannot be ruled out. Stainforth et al. (2005)
Most PPE members look the same. [Figure: the NCAR perturbed physics ensemble compared with the CMIP-3 ensemble.] Yokohata et al., Clim. Dyn. (2011)
Emergent constraints from large PPEs are statistically significant (but not robust to structural differences)
Conclusions:
The CMIP multi-model archive contains models of varying skill and interdependency.
The distribution of model errors and of inter-model distances is a rich source of information which can provide mitigating strategies.
CMIP is an ensemble of best guesses, so the resulting distribution cannot be interpreted as a PDF for future climate.
PPEs can provide additional information on possible tails but cannot replace the structural diversity of CMIP.
If it isn't truth-centered, can you make it so?
Application: Sea ice area projections (RCP8.5) Knutti, Sedlacek and Sanderson (in prep)
Models which are closer together than could occur by chance alone...
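One way to make "closer together than could occur by chance alone" operational (a sketch with synthetic fields; the field size, noise levels and 95% threshold are assumptions): compare the distance between two models' mean states with the distances between initial-condition members of a single model, which sample internal variability only.

```python
import numpy as np

rng = np.random.default_rng(3)
n_grid = 2000                           # number of grid points in the (flattened) field

# spread among initial-condition members of one model: internal variability only
members = rng.normal(0.0, 0.3, size=(10, n_grid))
internal = [np.linalg.norm(members[i] - members[j]) / np.sqrt(n_grid)
            for i in range(10) for j in range(i + 1, 10)]
threshold = np.percentile(internal, 95)     # distances below this are "weather noise" apart

def near_replicates(field_a, field_b):
    """Flag two models whose mean states are no further apart than internal variability."""
    return np.linalg.norm(field_a - field_b) / np.sqrt(n_grid) <= threshold

model_a = rng.normal(0.0, 0.3, n_grid)
model_b = model_a + rng.normal(0.0, 0.1, n_grid)   # shares most of its code with model_a
model_c = rng.normal(0.5, 0.4, n_grid)             # structurally different model
print(near_replicates(model_a, model_b), near_replicates(model_a, model_c))   # True False
```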
A process of elimination: knocking out the worst-performing and least independent models first
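A sketch of that elimination loop, reusing the hypothetical model_weights() function from the weighting sketch above: repeatedly recompute the combined quality/independence weights and drop the lowest-weighted model, so near-replicates and poor performers go first.

```python
import numpy as np

def eliminate(d_obs, d_pair, D_q, D_u, keep=5):
    """Greedily knock out the lowest-weighted model until only `keep` remain.

    Reuses the hypothetical model_weights() defined in the earlier sketch.
    """
    remaining = list(range(len(d_obs)))
    knocked_out = []
    while len(remaining) > keep:
        idx = np.array(remaining)
        w = model_weights(d_obs[idx], d_pair[np.ix_(idx, idx)], D_q, D_u)
        worst = remaining[int(np.argmin(w))]    # worst performer or closest near-replicate
        knocked_out.append(worst)
        remaining.remove(worst)
    return knocked_out, remaining               # elimination order and surviving subset
```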
But when to stop?
But when to stop? [Figure: as elimination proceeds, first replicates are removed, then poor performers, then progressively better models.]