The treatment of uncertainty in climate analysis and prediction
Myles Allen, Department of Physics, University of Oxford
myles.allen@physics.ox.ac.uk


Outcomes of an IPCC workshop on uncertainty and risk, 2004
Likelihood expresses the chance of a defined outcome in the physical world and is estimated using expert judgment. Confidence expresses the degree of understanding and/or consensus among experts and is a statement about expert judgment. (M. Manning, Adv. Clim. Change Res., 2006)

The IPCC's calibrated confidence scale (degree of confidence in a statement being correct):
Very high confidence: at least a 9 out of 10 chance
High confidence: about an 8 out of 10 chance
Medium confidence: about a 5 out of 10 chance
Low confidence: about a 2 out of 10 chance
Very low confidence: less than a 1 out of 10 chance

The IPCC's calibrated likelihood scale (likelihood of the occurrence/outcome):
Virtually certain: > 99% probability of occurrence
Very likely: > 90% probability
Likely: > 66% probability
About as likely as not: 33 to 66% probability
Unlikely: < 33% probability
Very unlikely: < 10% probability
Exceptionally unlikely: < 1% probability

Martin Manning's vision:

Risbey and Kandlikar's vision (measure of likelihood, with the justification each requires):
1. Full probability density function: robust, well-defended distribution
2. Bounds: well-defended percentile bounds
3. First-order estimates: order-of-magnitude assessment
4. Expected sign or trend: well-defended trend expectation
5. Ambiguous sign or trend: equally plausible contrary trend expectations
6. Effective ignorance: lacking or weakly plausible expectations

The reality: likelihood statements are "hard" and used by WGI; confidence statements are "fluffy" and used by WGII. Or so WGI authors would like to think. There is very little use of confidence qualifiers in the WGI report: admitting only one kind of uncertainty implies that IPCC authors have adopted either an extreme subjectivist or an extreme frequentist position, with no idea of having done so.

One approach to uncertainty analysis in AR4

And two others: expert ranges, and the spread of a multi-model ensemble.

Where did those expert ranges come from?

Where did those expert ranges come from?

Sources of uncertainty in climate forecasting:
Initial-condition uncertainty: technically irrelevant to climate forecasts, but important because the distinction between internal state and boundary conditions is fuzzy: is the Greenland ice cap part of the weather, or a boundary condition on the climate? Actually irrelevant, on any definition, beyond a few years.
Boundary-condition uncertainty: natural (solar and volcanic) forcing is poorly known, but conceptually straightforward; anthropogenic forcing (mostly greenhouse gas emissions) is all muddled up in politics, but Somebody Else's Problem.
Response uncertainty, or "model error": the hard one.

What is the aim of climate modeling? A recent Royal Statistical Society meeting suggested that probabilistic climate forecasts are intrinsically subjective: a sophisticated expression of the opinions of the forecasters. This would make climate models and climate modelers very important: our opinions would really matter. But something as important as climate prediction cannot be left to the opinions of climate modelers, particularly when there is a huge penalty for under-confidence (your forecasts are too broad to be useful, so your programme is closed) and no penalty for over-confidence (you'll be safely retired before anyone finds out you were wrong).

The simplest approach to uncertainty analysis: ensembles of opportunity

Dependence of results on ensemble

Evidence that the fit to 20th-century warming may be misleadingly good (Kiehl, 2007)

Bayesian Model Averaging approach to probabilistic climate forecasting:

P(S|y) = \frac{\int P(S|\theta)\, P(y|\theta)\, P(\theta)\, d\theta}{\int P(y|\theta)\, P(\theta)\, d\theta}

S: quantity predicted by the model, e.g. climate sensitivity. θ: model parameters, e.g. diffusivity, entrainment coefficient, etc. y: observations of model-simulated quantities, e.g. recent warming. P(y|θ): likelihood of observations y given parameters θ. P(θ): prior distribution of parameters θ. Simple models: P(S|θ) = 1 if parameters θ give sensitivity S, P(S|θ) = 0 otherwise.
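To make the formula concrete, here is a minimal numerical sketch for a hypothetical one-parameter model in which S is a deterministic function of θ, so P(S|θ) is a delta function. The parameter names, the toy relation between θ and simulated warming, and all numbers are illustrative assumptions, not the models or data discussed in this talk.

```python
# Minimal Monte Carlo sketch of the Bayesian model averaging formula:
# sample theta from P(theta), weight by the likelihood P(y|theta), and
# histogram the implied sensitivity S to approximate P(S|y).
import numpy as np

rng = np.random.default_rng(0)

F = 3.7                        # illustrative 2xCO2 forcing (W m^-2)
y_obs, sigma = 0.6, 0.1        # pseudo-observed warming and its uncertainty

# P(theta): uniform prior over a feedback-like parameter (assumption)
theta = rng.uniform(0.8, 3.0, size=100_000)

S = F / theta                  # sensitivity implied by each parameter setting
y_sim = 0.25 * F / theta       # toy stand-in for model-simulated warming

# P(y|theta): Gaussian likelihood of the observation given each model
like = np.exp(-0.5 * ((y_obs - y_sim) / sigma) ** 2)

# P(S|y): likelihood-weighted histogram of S (the theta-integrals in the
# numerator and denominator become Monte Carlo sums)
bins = np.linspace(1.0, 5.0, 41)
post, _ = np.histogram(S, bins=bins, weights=like, density=True)
print("posterior mode of S near", bins[np.argmax(post)], "K")
```

In the papers discussed here the likelihood comes from comparing simulated and observed warming patterns rather than a single number, but the structure of the calculation is the same.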

The problem with Bayesian approaches: sensitivity of results to prior assumptions. Solid line: the Forest et al (2002) distribution obtained if you start with uniform sampling of model parameters. Dashed line: the distribution obtained if you start with uniform sampling of climate sensitivity.

Murphy et al (2004): the essential method for UKCP09

The distribution P(S) that Murphy et al would have found, through their sampling of the entrainment coefficient, if the other perturbations were ineffective. Note the discontinuity at the unperturbed value and the weighting by S^-2.

Impact of removing the S^-2 weighting (solid) and the discontinuities in P(S) (dashed)

Why the standard Bayesian approach won't ever work: sampling a distribution of possible models requires us to define a distance between two models in terms of their input parameters and structure, a metric for model error. As long as models contain nuisance parameters that do not correspond to any observable quantity, this is impossible in principle: should Forest et al have sampled S, log(S) or S^-1?
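The ambiguity is easy to demonstrate: if S = F/θ, a prior uniform in θ implies a prior on S proportional to S^-2 (the Jacobian |dθ/dS|), so the same likelihood gives different posteriors depending on which variable was sampled uniformly. A sketch with purely illustrative numbers:

```python
# Sketch of the parameter-sampling ambiguity: one Gaussian likelihood in
# S combined with a prior uniform in theta = F/S versus uniform in S.
import numpy as np

S = np.linspace(1.0, 10.0, 1000)   # sensitivity grid (K)
dS = S[1] - S[0]

# One (deliberately weak) likelihood as a function of sensitivity
like = np.exp(-0.5 * ((S - 3.0) / 1.5) ** 2)

priors = {
    "uniform in S    ": np.ones_like(S),
    "uniform in theta": S ** -2.0,   # |d theta/d S| = F/S^2, up to a constant
}

for name, prior in priors.items():
    post = like * prior
    post /= post.sum() * dS                      # normalize to a density
    mean = (S * post).sum() * dS
    cdf = np.cumsum(post) * dS
    p95 = S[np.searchsorted(cdf, 0.95)]          # upper bound
    print(f"{name}: posterior mean {mean:.2f} K, 95th percentile {p95:.2f} K")
```

With these numbers the 95th percentile shifts by roughly a kelvin between the two priors: the solid-versus-dashed contrast in the Forest et al figure, in miniature.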

Why we need a different approach: there's no such thing as an observation-free prior in this problem; e.g. decisions on priors in Murphy et al were made in the light of knowledge of their implications for the result. It is very difficult to avoid the impression that investigators are subject to external pressures to adopt the "right" prior (the one that gives the answer people want). Highly informative priors obscure the role of new observations, making it very difficult to make progress (the 1.5-4.5 K problem). So what is the alternative?

A less subjective view of the task of climate modeling: our job is to use climate models to map the range of possible future climates that are consistent, at a range of levels of confidence, with the available observations: to map the likelihood of the data as a function of a forecast variable of interest. If you get the same likelihood function for a particular forecast variable, irrespective of the resolution, origin, code-base, etc. of your model, then you can reasonably hypothesize that you are done (for that forecast variable and those observations).

Forest et al (2001), Knutti et al (2002), Frame et al (2005), Hegerl et al (2006): sample parameters to give a uniform distribution in sensitivity, then weight by likelihood. (Figure: likelihood as a function of sensitivity, showing the IGG, or Isopleth of Global Goodness-of-fit.)

Equivalently, compute the average likelihood over all models that predict a given S:

L_0(S|y) = \frac{\int P(S|\theta)\, P(y|\theta)\, P(\theta)\, d\theta}{\int P(S|\theta)\, P(\theta)\, d\theta}

S: quantity predicted by the model, e.g. climate sensitivity. θ: model parameters, e.g. diffusivity, entrainment coefficient, etc. y: observations of model-simulated quantities, e.g. recent warming. The denominator is the prior predictive distribution, i.e. the P(S) given by parameter sampling with P(y|θ) = constant.

Impact of sampling nuisance parameters

Impact of sampling nuisance parameters

A more robust approach: compute the maximum likelihood over all models that predict a given S:

L_1(S|y) = \max_\theta P(S|\theta)\, P(y|\theta)

P(S|θ) picks out models that predict a given value of the forecast quantity of interest, e.g. climate sensitivity; P(y|θ) evaluates their likelihoods. The likelihood profile L_1(S|y) is proportional to the relative likelihood of the most likely available model as a function of the forecast quantity. Likelihood profiles follow the parameter combinations that cause the likelihood to fall off as slowly as possible with S: the "least favourable sub-model" approach. P(θ) does not matter: use any sampling design you like, as long as you find the likelihood maxima.
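As a sketch of how this differs from the averaged likelihood L_0 above: the snippet below computes both from the same hypothetical two-parameter ensemble, averaging the likelihood within each bin of S for L_0 and keeping only the best model in each bin for L_1. The parameter names, the toy model and the numbers are all assumptions for illustration.

```python
# Contrast L0 (bin-averaged likelihood, prior-dependent) with L1 (profile
# likelihood, prior-independent) over a toy two-parameter ensemble.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

lam = rng.uniform(0.5, 3.0, n)     # feedback-like parameter (assumption)
kap = rng.uniform(0.1, 2.0, n)     # diffusivity-like nuisance parameter

S = 3.7 / lam                      # sensitivity implied by each model
y_sim = 0.20 * S + 0.15 * kap      # toy simulated 20th-century warming
like = np.exp(-0.5 * ((0.75 - y_sim) / 0.08) ** 2)   # toy likelihood

bins = np.linspace(1.0, 7.0, 31)
idx = np.digitize(S, bins)
L0 = np.array([like[idx == i].mean() if (idx == i).any() else 0.0
               for i in range(1, len(bins))])
L1 = np.array([like[idx == i].max() if (idx == i).any() else 0.0
               for i in range(1, len(bins))])

# Approximate 95% interval from a conventional likelihood-ratio threshold
inside = L1 / L1.max() > np.exp(-0.5 * 3.84)
print("S consistent with the data at ~95%:",
      round(bins[:-1][inside].min(), 1), "to",
      round(bins[1:][inside].max(), 1), "K")
```

L_1 is insensitive to how densely each region of parameter space was sampled, which is the point: P(θ) drops out, at the price of reporting confidence intervals rather than a posterior PDF.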

Subjective elements in likelihood profiling. Aim: to find the relative likelihood of the most likely (most realistic, or least unlikely) model as a function of the forecast variable of interest. Ignore the (meaningless) number of less realistic models that you find that give a similar prediction. Evaluate likelihood only over observable quantities that are relevant to the forecast and adequately simulated by the model (including variability). Evaluate confidence intervals from pre-determined, conventional likelihood thresholds.

Back to the 21st-century forecasting scenario. Shading denotes likelihood w.r.t. 20th-century observations.

A Bayesian approach (e.g. incorporating a prior from Murphy et al, 2004) gives a tighter forecast: multiplying the likelihood P(data|S) by P(S) from Murphy et al, 2004.

But also a tighter hindcast: do we really believe our models this much?

And if some people believe models more than others: IPCC attribution methodology versus IPCC projection methodology.

Some objections to a likelihood-profiling approach to probabilistic forecasting:
Ignoring prior expectations gives misleadingly large uncertainty ranges. But attribution statements (likelihoods) have consistently had more impact than predictions: misleadingly conservative?
Decision-makers need probability distribution functions, not just confidence intervals. It is not at all clear this is true of real decision-makers: don't people really just want to know the range of futures that is consistent with what we know so far?
Likelihood profiles require such large ensembles that you can't generate them with a GCM unless you resort to crude pattern-scaling approaches.

The climateprediction.net BBC Climate Change Experiment:
HadCM3L coupled GCM, flux corrected, with the atmospheric resolution of HadCM3, lower ocean resolution, and no Iceland.
Spin-up with the standard atmosphere and 10 perturbed oceans; switch to perturbed atmospheres in 1920 and readjust the down-welling fluxes.
Transient (A1B) and control simulations to 2080, with 10 future volcanic and 5 solar scenarios.
23,000 runs completed to date (7 million model years).

Global temperatures from CMIP-3: transient minus control simulations

Global temperatures from CPDN BBC climate change experiment: first 500 runs

Evaluating a likelihood profile for 2050 temperature from the CPDN-BBC results. Start with 5-year-averaged temperature time-series over 1961-2005 for the Giorgi regions plus ocean basins (29 regions, 9 pentads). Project onto EOFs of the CPDN ensemble. Compute a weighted r^2 (Mahalanobis distance), using CMIP3 control integrations to estimate the expected model-data discrepancy due to internal variability:

r_i^2 = (y - x_i)^T C_N^{-1} (y - x_i)

y: observations. x_i: the i-th model simulation. C_N: covariance estimated from the control integrations.
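The weighted r^2 above is a standard Mahalanobis distance. Below is a self-contained sketch with synthetic stand-ins for the real fields; the array shapes, names and the screening threshold are illustrative assumptions, not the actual CPDN-BBC analysis.

```python
# Sketch of the weighted-r^2 (Mahalanobis distance) consistency test:
# y is the observed vector of regional 5-year means, x[i] an ensemble
# member, and C_N is estimated from control-run segments. Synthetic data.
import numpy as np

rng = np.random.default_rng(2)
p = 29 * 9                                # 29 regions x 9 pentads, flattened

controls = rng.normal(size=(300, p))      # control-run pseudo-segments
x = rng.normal(size=(500, p))             # pseudo ensemble members
y = rng.normal(size=p)                    # pseudo observations

# Covariance of internal variability from the control runs; in practice
# this is regularized, e.g. by first projecting onto leading EOFs
C = np.cov(controls, rowvar=False)
C_inv = np.linalg.pinv(C)                 # pseudo-inverse for stability

def r2(y, xi, C_inv):
    """r_i^2 = (y - x_i)^T C_N^{-1} (y - x_i)."""
    d = y - xi
    return d @ C_inv @ d

r2_members = np.array([r2(y, xi, C_inv) for xi in x])
r2_null = np.array([r2(y, ci, C_inv) for ci in controls])  # null distribution

# The cut used two slides below: keep members whose discrepancy is below
# the 95th percentile of the control distribution
keep = r2_members < np.percentile(r2_null, 95)
print(f"{keep.sum()} of {len(x)} members pass the consistency test")
```

In the real analysis the projection onto CPDN-ensemble EOFs reduces the dimension before the covariance is inverted; the pseudo-inverse here is a crude stand-in for that step.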

Discrepancy against the observed change over 1961-2005 versus ΔT(2050), weighting by information content

Extracting models in which r^2 < the 95th percentile of the control distribution

And we have a lot more than 500 model-versions

Conclusions: Standard Bayesian approaches face fundamental problems when applied to ensemble-based climate prediction: there is no measure of the distance between two models (it depends on arbitrary decisions about parameter definition), and priors can be tuned to give "useful" posterior PDFs. Likelihood profiling avoids this arbitrariness: tell users what range of outcomes is consistent with the data at a given confidence level. It's probably what they think we are telling them anyway.

Implications: The use of a single likelihood scale, in place of the recommended 2D likelihood-confidence maps, has hindered communication of uncertainty in WGI. The problem continues, with simple PDFs, convolving all kinds of uncertainty, increasingly in vogue for communication to decision-makers. My view: there is a strong relationship between the degree of subjectivity and the degree of confidence. Simply saying "everything is subjective" completely obscures the distinction between robust uncertainties (e.g. conventional confidence intervals) and less reliable ones (e.g. the opinions of modellers).

"It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature." (Niels Bohr)

And the people who actually did the work:
Simple modelling: Dave Frame, Chris Forest, Ben Booth, and conversations with many others.
BBC Climate Change Experiment: Carl Christensen, Nick Faull, Tolu Aina, Dave Frame, Frances McNamara, and the rest of the team at cpdn and the BBC.
Distributed computing: David Anderson (Berkeley and SETI@home) and the BOINC developers.
Paying the bills: NERC (COAPEC, e-science, KT and core), EC (ENSEMBLES and WATCH), UK DfT, and Microsoft Research.
And most important of all, the endlessly patient and enthusiastic participants, volunteer board monitors, etc. of the climateprediction.net global community.