DART Tutorial Part IV: Other Updates for an Observed Variable


DART Tutorial Part IV: Other Updates for an Observed Variable. UCAR: The National Center for Atmospheric Research is sponsored by the National Science Foundation. Any opinions, findings and conclusions or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Product of Two Gaussians

p(A|BC) = p(B|AC) p(A|C) / p(B|C) = p(B|AC) p(A|C) / ∫ p(B|x) p(x|C) dx

Ensemble filters: the prior is available as a finite sample.

[Plot: Prior Ensemble]

We don't know much about the properties of this sample. We may naively assume it is a random draw from the truth.

Product of Two Gaussians: How can we take the product of a sample with a continuous likelihood?

[Plot: Prior PDF, Prior Ensemble]

Fit a continuous (Gaussian for now) distribution to the sample.

Product of Two Gaussians: The observation likelihood is usually continuous (nearly always Gaussian).

[Plot: Prior PDF, Obs. Likelihood, Prior Ensemble]

If the obs. likelihood isn't Gaussian, the methods below can be generalized.

Product of Two Gaussians: The product of the prior Gaussian fit and the obs. likelihood is Gaussian.

[Plot: Posterior PDF, Prior PDF, Prior Ensemble, Obs. Likelihood]

Computing the continuous posterior is simple. BUT, we need a SAMPLE of this PDF.
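The Gaussian product step above has a simple closed form. A minimal numerical sketch (illustrative Python, not DART's Fortran source; the example numbers are assumptions):

```python
# Product of two Gaussians N(m1, v1) * N(m2, v2) is proportional to a
# Gaussian with variance v = 1/(1/v1 + 1/v2) and mean m = v*(m1/v1 + m2/v2).

def gaussian_product(m1, v1, m2, v2):
    """Return (mean, variance) of the (unnormalized) product of two Gaussians."""
    v = 1.0 / (1.0 / v1 + 1.0 / v2)
    m = v * (m1 / v1 + m2 / v2)
    return m, v

# Prior fit N(-1, 1) times observation likelihood N(1, 1):
m_post, v_post = gaussian_product(-1.0, 1.0, 1.0, 1.0)
# Equal variances: posterior mean is halfway between, variance is halved.
```

Note that the posterior variance is always smaller than either input variance; the observation never increases uncertainty in this Gaussian setting.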

Sampling the Posterior PDF: There are many ways to do this.

[Plot: Posterior PDF]

The exact properties of the different methods may be unclear. Trial and error is still the best way to see how they perform. They will interact with the properties of prediction models, etc.

Sampling the Posterior PDF: Just draw a random sample (filter_kind=5 in &assim_tools_nml).

[Plot: Posterior PDF, Random Sample]

Sampling the Posterior PDF: Just draw a random sample (filter_kind=5 in &assim_tools_nml).

[Plot: Posterior PDF; Random Sample, Exact Mean]

We can play games with this sample to improve (modify) its properties. Example: adjust the mean of the sample to be exact. Can also adjust the variance to be exact.

Sampling the Posterior PDF: Just draw a random sample (filter_kind=5 in &assim_tools_nml).

[Plot: Posterior PDF; Random Sample, Exact Mean and Var.]

We can play games with this sample to improve (modify) its properties. Example: adjust the mean of the sample to be exact. Can also adjust the variance to be exact.

Sampling the Posterior PDF: Just draw a random sample (filter_kind=5 in &assim_tools_nml).

[Plot: Posterior PDF; Random Sample, Exact Mean and Var.]

Might also want to eliminate rare extreme outliers. NOTE: the properties of these adjusted samples can be quite different. How these properties interact with the rest of the assimilation is an open question.
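The random-draw-then-adjust idea can be sketched as follows (an illustrative Python sketch of the filter_kind=5 style adjustment, not the DART implementation; whether "exact variance" means the population or sample estimator is an assumption here):

```python
import random
import statistics

def sample_posterior(post_mean, post_var, ens_size, exact_moments=True):
    """Draw a random sample from N(post_mean, post_var); optionally shift and
    rescale it so the sample mean and variance match the posterior exactly."""
    sample = [random.gauss(post_mean, post_var ** 0.5) for _ in range(ens_size)]
    if exact_moments:
        m = statistics.mean(sample)
        s = statistics.pstdev(sample)  # population std of the raw draw (assumed convention)
        scale = (post_var ** 0.5) / s
        # Center on the exact mean, rescale to the exact spread.
        sample = [post_mean + scale * (x - m) for x in sample]
    return sample

ens = sample_posterior(0.0, 0.5, 20)
```

The adjustment is linear, so it preserves the shape of the random draw while pinning its first two moments.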

Sampling the Posterior PDF: Construct a deterministic sample with certain features.

[Plot: Posterior PDF]

For instance, the sample could have exact mean and variance. This is insufficient to constrain the ensemble; other constraints are needed.

Sampling the Posterior PDF: Construct a deterministic sample with certain features (filter_kind=6 in &assim_tools_nml; manually adjust kurtosis).

[Plot: Posterior PDF; Kurtosis 3]

Example: exact sample mean and variance. Sample kurtosis (related to the sharpness/tailedness of a distribution) is 3, the expected value for a normal distribution. Start by assuming a uniformly spaced sample and adjusting quadratically.

Sampling the Posterior PDF: Construct a deterministic sample with certain features (filter_kind=6 in &assim_tools_nml; manually adjust kurtosis).

[Plot: Posterior PDF; Kurtosis 2, Kurtosis 3]

Example: exact sample mean and variance. Sample kurtosis 2: fewer extreme outliers, less density near the mean. Avoiding outliers might be nice in certain applications. Sampling heavily near the mean might be nice.
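The kurtosis statistic being tuned here is the fourth central moment over the squared variance (3 for a Gaussian). A small sketch of the diagnostic itself (illustrative only; the example samples are assumptions, not DART output):

```python
def sample_kurtosis(xs):
    """Fourth central moment divided by squared variance (Gaussian -> 3)."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / var ** 2

# A two-point sample {-1, 1} has kurtosis 1 (no tail mass at all);
# spreading a deterministic sample into the tails raises the kurtosis
# toward, and past, the Gaussian value of 3.
flat = sample_kurtosis([-1.0, 1.0])
uniformish = sample_kurtosis([-2.0, -1.0, 0.0, 1.0, 2.0])
```

A uniformly spaced sample has kurtosis well below 3, which is why the construction starts from uniform spacing and adjusts quadratically to push kurtosis up.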

Sampling the Posterior PDF: The first two methods depend only on the mean and variance of the prior sample.

[Plot: Prior Ensemble]

Example: suppose the prior sample is (significantly) bimodal? Might want to retain additional information from the prior. Recall that the Ensemble Adjustment Filter tried to do this (Section 1).

Sampling the Posterior PDF: The first two methods depend only on the mean and variance of the prior sample.

[Plot: Posterior PDF, Prior PDF, Prior Ensemble, Random Posterior Ensemble, Obs. Likelihood]

Example: suppose the prior sample is (significantly) bimodal? Might want to retain additional information from the prior. Recall that the Ensemble Adjustment Filter tried to do this (Section 1).

Ensemble Filter Algorithms: Ensemble Kalman Filter (EnKF) (filter_kind=2 in &assim_tools_nml).

[Plot: Prior Ensemble]

The classical Monte Carlo algorithm for data assimilation.

Ensemble Filter Algorithms: Ensemble Kalman Filter (EnKF) (filter_kind=2 in &assim_tools_nml).

[Plot: Prior Ensemble]

Again, fit a Gaussian to the sample.

Ensemble Filter Algorithms: Ensemble Kalman Filter (EnKF) (filter_kind=2 in &assim_tools_nml).

[Plot: Prior Ensemble, Obs. Likelihood]

Again, fit a Gaussian to the sample. Are there ways to do this without computing prior sample stats?

Ensemble Filter Algorithms: Ensemble Kalman Filter (EnKF) (filter_kind=2 in &assim_tools_nml).

[Plot: Prior Ensemble, Obs. Likelihood, Random Draws from Obs.]

Generate a random draw from the observation likelihood. Associate it with the first sample of the prior ensemble.

Ensemble Filter Algorithms: Ensemble Kalman Filter (EnKF) (filter_kind=2 in &assim_tools_nml).

[Plot: Prior Ensemble, Obs. Likelihood, Random Draws from Obs.]

We now have a sample of the joint prior distribution for the observation and the prior MEAN. Adjusting the mean of the obs. sample to be exact improves performance. Adjusting the variance may further improve performance. Outliers are a potential problem, but can be removed.

Ensemble Filter Algorithms: Ensemble Kalman Filter (EnKF) (filter_kind=2 in &assim_tools_nml).

For each prior mean/obs. pair, find the mean of the posterior PDF. (DART Tutorial Section 6: Slide 24)

Ensemble Filter Algorithms: Ensemble Kalman Filter (EnKF) (filter_kind=2 in &assim_tools_nml).

The prior sample standard deviation still measures the uncertainty of the prior mean estimate.

Ensemble Filter Algorithms: Ensemble Kalman Filter (EnKF) (filter_kind=2 in &assim_tools_nml).

The prior sample standard deviation still measures the uncertainty of the prior mean estimate. The obs. likelihood standard deviation measures the uncertainty of the obs. estimate.

Ensemble Filter Algorithms: Ensemble Kalman Filter (EnKF) (filter_kind=2 in &assim_tools_nml).

[Plot: Posterior PDF]

Take the product of the prior/obs. distributions for the first sample. This is the standard Gaussian product.

Ensemble Filter Algorithms: Ensemble Kalman Filter (EnKF) (filter_kind=2 in &assim_tools_nml).

The mean of the product is a random sample of the posterior: the product of random samples is a random sample of the product.

Ensemble Filter Algorithms: Ensemble Kalman Filter (EnKF) (filter_kind=2 in &assim_tools_nml).

Repeat this operation for each joint prior pair.

Ensemble Filter Algorithms: Ensemble Kalman Filter (EnKF) (filter_kind=2 in &assim_tools_nml).

The posterior sample maintains much of the prior sample structure. (This is more apparent for larger ensemble sizes.) The posterior sample mean and variance converge to exact for large samples.
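The EnKF steps above, for a single observed variable, can be sketched end to end (an illustrative perturbed-observation update in Python, not the DART source; the mean adjustment of the obs. draws follows the slide's recommendation):

```python
import random

def enkf_update(prior, obs, obs_var):
    """Perturbed-observation EnKF update for one observed variable.
    Pairs each prior member with a draw from the obs. likelihood, then
    combines each pair with the standard Gaussian product."""
    n = len(prior)
    p_mean = sum(prior) / n
    p_var = sum((x - p_mean) ** 2 for x in prior) / (n - 1)
    # Random draws from the observation likelihood, mean-adjusted to be exact:
    draws = [random.gauss(obs, obs_var ** 0.5) for _ in range(n)]
    d_mean = sum(draws) / n
    draws = [d - d_mean + obs for d in draws]
    # Standard Gaussian product, one member/draw pair at a time:
    post_var = 1.0 / (1.0 / p_var + 1.0 / obs_var)
    return [post_var * (x / p_var + d / obs_var) for x, d in zip(prior, draws)]

posterior = enkf_update([-2.0, -1.0, 0.0, 1.0, 2.0], 1.0, 1.0)
```

Because the obs. draws are mean-adjusted, the posterior sample mean lands exactly on the Kalman update of the prior mean, while individual members keep their random spread.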

Ensemble Filter Algorithms: Ensemble Kernel Filter (EKF) (filter_kind=3 in &assim_tools_nml).

[Plot: Prior Ensemble]

Can retain more correct information about non-Gaussian priors. Can also be used for the obs. likelihood term in the product (not shown here).

Ensemble Filter Algorithms: Ensemble Kernel Filter (EKF) (filter_kind=3 in &assim_tools_nml).

[Plot: Prior PDF, Prior Ensemble]

Usually, kernel widths are a function of the sample variance.

Ensemble Filter Algorithms: Ensemble Kernel Filter (EKF) (filter_kind=3 in &assim_tools_nml).

[Plot: Obs. Likelihood, Prior PDF, Prior Ensemble]

Usually, kernel widths are a function of the sample variance.

Ensemble Filter Algorithms: Ensemble Kernel Filter (EKF) (filter_kind=3 in &assim_tools_nml).

[Plot: Example Kernels (Half as Wide as Prior PDF), Obs. Likelihood]

Approximate the prior as a sum of Gaussians centered on each sample.

Ensemble Filter Algorithms: Ensemble Kernel Filter (EKF) (filter_kind=3 in &assim_tools_nml).

[Plot: Example Kernels (Half as Wide as Prior PDF), Normalized Sum of Kernels, Obs. Likelihood]

The estimate of the prior is the normalized sum of all kernels.

Ensemble Filter Algorithms: Ensemble Kernel Filter (EKF) (filter_kind=3 in &assim_tools_nml).

Apply the distributive law to take the product: the product of the sum is the sum of the products. Otherwise, the product cannot be determined analytically.

Ensemble Filter Algorithms: Ensemble Kernel Filter (EKF) (filter_kind=3 in &assim_tools_nml).

Compute the product of the first kernel with the obs. likelihood.

Ensemble Filter Algorithms: Ensemble Kernel Filter (EKF) (filter_kind=3 in &assim_tools_nml).

But we can no longer ignore the weight term in the product of Gaussians. Kernels with means further from the observation get less weight.

Ensemble Filter Algorithms: Ensemble Kernel Filter (EKF) (filter_kind=3 in &assim_tools_nml).

Continue to take products for each kernel in turn. Closer kernels dominate the posterior.

Ensemble Filter Algorithms: Ensemble Kernel Filter (EKF) (filter_kind=3 in &assim_tools_nml).

[Plot: Normalized Sum of Posteriors, Obs. Likelihood]

The final posterior is the weight-normalized sum of the kernel products. The posterior is somewhat different than for the ensemble adjustment or ensemble Kalman filter (much less density in the left lobe).

Ensemble Filter Algorithms: Ensemble Kernel Filter (EKF) (filter_kind=3 in &assim_tools_nml).

[Plot: Normalized Sum of Posteriors, Posterior Ensemble, Obs. Likelihood]

Forming a sample of the posterior can be problematic. A random sample is simple. Deterministic sampling is much trickier here (few results available).
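The kernel-by-kernel product with its weight term can be sketched as follows (an illustrative Python sketch of the filter_kind=3 idea, not DART code; the kernel width and example numbers are assumptions):

```python
import math

def kernel_filter_posterior(prior, kernel_var, obs, obs_var):
    """Prior is a sum of Gaussian kernels N(x_i, kernel_var); its product with
    a Gaussian likelihood N(obs, obs_var) is a weighted Gaussian mixture.
    Returns a list of (normalized weight, mean, variance) components."""
    posts = []
    for x in prior:
        # Standard Gaussian product for this kernel:
        v = 1.0 / (1.0 / kernel_var + 1.0 / obs_var)
        m = v * (x / kernel_var + obs / obs_var)
        # Weight term: density of obs under N(x, kernel_var + obs_var),
        # so kernels far from the observation get less weight.
        s2 = kernel_var + obs_var
        w = math.exp(-0.5 * (obs - x) ** 2 / s2) / math.sqrt(2 * math.pi * s2)
        posts.append((w, m, v))
    total = sum(w for w, _, _ in posts)
    return [(w / total, m, v) for w, m, v in posts]

mix = kernel_filter_posterior([-2.0, 0.0, 2.0], 0.5, 1.5, 1.0)
```

The normalized weights show directly how kernels nearer the observation come to dominate the posterior mixture.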

Ensemble Filter Algorithms: Rank Histogram Filter (filter_kind=8 in &assim_tools_nml).

Apply the forward operator to each ensemble member to get the prior ensemble in observation space.

[Plot: Prior density]

Goal: handle non-Gaussian priors or observation likelihoods. Low-information-content obs. must yield small increments. Must perform well for Gaussian priors. Must be computationally efficient.

Ensemble Filter Algorithms: Rank Histogram Filter (filter_kind=8 in &assim_tools_nml).

[Plot: Prior density]

Step 1: Get a continuous prior distribution density. Place 1/(ens_size + 1) mass between adjacent ensemble members. Reminiscent of the rank histogram evaluation method. (DART Tutorial Section 6: Slide 6)

Ensemble Filter Algorithms: Rank Histogram Filter (filter_kind=8 in &assim_tools_nml).

Step 1 (continued): Partial Gaussian kernels on the tails, N(tail_mean, ens_sd), with tail_mean selected so that 1/(ens_size + 1) mass is in the tail.

Ensemble Filter Algorithms: Rank Histogram Filter (filter_kind=8 in &assim_tools_nml).

Step 2: Use the likelihood to compute a weight for each ensemble member. Analogous to the classical particle filter. Can be extended to non-Gaussian obs. likelihoods.
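The weighting in Step 2 is the same operation a classical particle filter performs. A minimal sketch (illustrative Python, assuming a Gaussian obs. likelihood; not the DART source):

```python
import math

def member_weights(prior, obs, obs_var):
    """Evaluate the Gaussian likelihood at each prior member and normalize,
    analogous to classical particle-filter weighting."""
    w = [math.exp(-0.5 * (x - obs) ** 2 / obs_var) for x in prior]
    total = sum(w)
    return [wi / total for wi in w]

weights = member_weights([-2.0, -1.0, 0.0, 1.0, 2.0], 0.0, 1.0)
# Members nearest the observation carry the most weight.
```

Unlike a particle filter, the RHF does not resample from these weights directly; they feed the continuous posterior construction in Step 3.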

Ensemble Filter Algorithms: Rank Histogram Filter (filter_kind=8 in &assim_tools_nml).

Step 2 (continued): Can approximate the interior likelihood with a linear fit, for efficiency.

Ensemble Filter Algorithms: Rank Histogram Filter (filter_kind=8 in &assim_tools_nml).

Step 3: Compute the continuous posterior distribution. Approximate the likelihood with trapezoidal quadrature and take the product. (The displayed product is normalized to make the posterior a PDF.)

Ensemble Filter Algorithms: Rank Histogram Filter (filter_kind=8 in &assim_tools_nml).

Step 3 (continued): Take the product of the prior Gaussian kernel with the likelihood for the tails.

Ensemble Filter Algorithms: Rank Histogram Filter (filter_kind=8 in &assim_tools_nml).

[Plot: RHF Posterior, Prior]

Step 4: Compute updated ensemble members: 1/(ens_size + 1) of the posterior mass between each ensemble pair, and 1/(ens_size + 1) in each tail. An uninformative observation (not shown) would have no impact.

Ensemble Filter Algorithms: Rank Histogram Filter (filter_kind=8 in &assim_tools_nml).

[Plot: RHF Posterior, Prior, EAKF Posterior]

Compare to the standard ensemble adjustment Kalman filter (EAKF). In this nearly Gaussian case, the differences are small.

Ensemble Filter Algorithms: Rank Histogram Filter (filter_kind=8 in &assim_tools_nml).

[Plot: RHF Posterior, Prior, EAKF Posterior]

The rank histogram filter gets rid of a prior outlier that is inconsistent with the obs. The EAKF can't get rid of this prior outlier: the large prior variance from the outlier causes the EAKF to shift all members too far towards the observation (with the mean off the page).

Ensemble Filter Algorithms: Rank Histogram Filter (filter_kind=8 in &assim_tools_nml).

[Plot: RHF Posterior, Prior, EAKF Posterior]

Convective-scale models (and land models) have analogous behavior. Convection may fire at random locations. A subset of ensemble members will be in the right place, the rest in the wrong place. Want to aggressively eliminate convection in the wrong place.

Dealing with Ensemble Filter Errors

Sources of error:
1. Model error
2. h errors; representativeness
3. Gross obs. error
4. Sampling error; Gaussian assumption
5. Sampling error; assuming a linear statistical relation

[Diagram: assimilation cycle t_k → t_k+1 → t_k+2 with forward operators h]

Fixing 1, 2, 3 independently is HARD but ongoing. Often, ensemble filters address them together:
1-4: Variance inflation: increase prior uncertainty to give obs. more impact.
5: Localization: only let obs. impact a set of nearby state variables; often smoothly decrease the impact to 0 as a function of distance.

(DART Tutorial Section 9: Slide 3)

Model/Filter Error: Filter Divergence and Variance Inflation

1. The history of observations and the physical system => the true distribution.

[Plot: "TRUE" Prior PDF]

Model/Filter Error: Filter Divergence and Variance Inflation

1. The history of observations and the physical system => the true distribution.
2. Sampling error and some model errors lead to insufficient prior variance.
3. This can lead to "filter divergence": the prior is too confident, and obs. are ignored.

[Plot: Variance Deficient PDF, "TRUE" Prior PDF]

Model/Filter Error: Filter Divergence and Variance Inflation

1. The history of observations and the physical system => the true distribution.
2. Sampling error and some model errors lead to insufficient prior variance.
3. This can lead to "filter divergence": the prior is too confident, and obs. are ignored.

[Plot: Variance Deficient PDF, "TRUE" Prior PDF]

The naïve solution is variance inflation: just increase the spread of the prior. For ensemble member i, Inflate(x_i) = sqrt(λ)(x_i − x̄) + x̄.
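The inflation formula above is a one-liner in practice (an illustrative Python sketch, not the DART source; the example ensemble is an assumption):

```python
def inflate(ensemble, lam):
    """Variance inflation: move each member away from the sample mean so the
    sample variance is multiplied by lam; the sqrt keeps the mean unchanged."""
    m = sum(ensemble) / len(ensemble)
    return [m + (lam ** 0.5) * (x - m) for x in ensemble]

inflated = inflate([-1.0, 0.0, 1.0], 4.0)  # lam=4 doubles the spread about the mean
```

Because each member is moved linearly away from the mean, the ensemble mean is untouched while the variance scales by exactly λ.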

Model/Filter Error: Filter Divergence and Variance Inflation

1. The history of observations and the physical system => the true distribution.
2. Most model errors also lead to an erroneous shift in the entire distribution.
3. Again, the prior can be viewed as being TOO CERTAIN.

[Plot: "TRUE" Prior PDF, Error in Mean (from model)]

Model/Filter Error: Filter Divergence and Variance Inflation

1. The history of observations and the physical system => the true distribution.
2. Most model errors also lead to an erroneous shift in the entire distribution.
3. Again, the prior can be viewed as being TOO CERTAIN.

[Plot: "TRUE" Prior PDF, Error in Mean (from model), Variance Inflated]

Inflating can ameliorate this. Obviously, if we knew E(error), we'd correct for it directly.

Physical Space Variance Inflation

Inflate all state variables by the same amount before assimilation.

Capabilities:
1. Can be effective for a variety of models.
2. Can maintain linear balances.
3. Prior continues to resemble that from the first guess.
4. Simple and computationally cheap.

Liabilities:
1. State variables not constrained by observations can blow up. For instance, unobserved regions near the top of AGCMs.
2. The magnitude of λ is normally selected by trial and error.

Physical Space Variance Inflation in Lorenz 63

The observation is outside the prior: danger of filter divergence.

[Plot: observation in red, prior ensemble in green]

Physical Space Variance Inflation in Lorenz 63

After inflating, the observation is in the prior cloud: filter divergence is avoided.

[Plot: observation in red, prior ensemble in green, inflated ensemble in magenta]

Physical Space Variance Inflation in Lorenz 63

The prior distribution is significantly curved.

[Plot: observation in red, prior ensemble in green]

Physical Space Variance Inflation in Lorenz 63

The inflated prior is outside the attractor, so the posterior will also be off the attractor. This can lead to transient off-attractor behavior or model blow-up.

[Plot: observation in red, prior ensemble in green, inflated ensemble in magenta]