Ensemble Data Assimilation and Uncertainty Quantification

Jeff Anderson, National Center for Atmospheric Research

What is Data Assimilation?
Observations are combined with a model forecast to produce an analysis (the best possible estimate).

Example: Estimating the Temperature Outside
An observation has a value and an error distribution (red curve) that is associated with the instrument.

Example: Estimating the Temperature Outside
Thermometer outside measures 1C. The instrument builder says the thermometer is unbiased with +/- 0.8C Gaussian error.

Example: Estimating the Temperature Outside
Thermometer outside measures 1C. The red plot is P(T | T_o), the probability of the temperature T given that T_o was observed.

Example: Estimating the Temperature Outside
We also have a prior estimate of temperature. The green curve is P(T | C), the probability of the temperature given all available prior information C.

Example: Estimating the Temperature Outside
Prior information C can include: 1. observations of things besides T; 2. model forecasts made using observations at earlier times; 3. a priori physical constraints (T > -273.15C); 4. climatological constraints (-30C < T < 40C).

Bayes Theorem: Combining the Prior Estimate and Observation

P(T | T_o, C) = P(T_o | T, C) P(T | C) / normalization

Posterior, P(T | T_o, C): the probability of T given the observation and the prior; also called the update or analysis. Likelihood, P(T_o | T, C): the probability that T_o is observed if T is the true value, given prior information C. Prior: P(T | C).

Consistent Color Scheme Throughout the Tutorial: Green = Prior, Red = Observation, Blue = Posterior, Black = Truth.

Combining the Prior Estimate and Observation

P(T | T_o, C) = P(T_o | T, C) P(T | C) / normalization

Generally there is no analytic solution for the posterior, but a Gaussian prior and likelihood give a Gaussian posterior.

For a Gaussian prior and likelihood,

Prior: P(T | C) = Normal(T_p, σ_p)
Likelihood: P(T_o | T, C) = Normal(T_o, σ_o)

the posterior is also Gaussian:

Posterior: P(T | T_o, C) = Normal(T_u, σ_u)

with σ_u² = (σ_p⁻² + σ_o⁻²)⁻¹ and T_u = σ_u² (σ_p⁻² T_p + σ_o⁻² T_o).
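
As a quick check of these formulas, a minimal Python sketch (NumPy only); the observation value and error are from the thermometer example, while the prior N(2.0, 1.0²) is an assumed illustrative value:

```python
import numpy as np

def gaussian_posterior(t_p, sigma_p, t_o, sigma_o):
    """Combine a Gaussian prior N(t_p, sigma_p**2) with a Gaussian
    likelihood centered on observation t_o with std sigma_o."""
    var_u = 1.0 / (1.0 / sigma_p**2 + 1.0 / sigma_o**2)
    t_u = var_u * (t_p / sigma_p**2 + t_o / sigma_o**2)
    return t_u, np.sqrt(var_u)

# Observation 1C with 0.8C error (slides); assumed prior N(2.0, 1.0**2).
t_u, sigma_u = gaussian_posterior(t_p=2.0, sigma_p=1.0, t_o=1.0, sigma_o=0.8)
print(t_u, sigma_u)  # posterior is narrower than both prior and likelihood
```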

The One-Dimensional Kalman Filter
1. Suppose we have a linear forecast model L.
   A. If the temperature at time t_1 is T_1, then the temperature at t_2 = t_1 + Δt is T_2 = L(T_1).
   B. Example: T_2 = T_1 + Δt T_1.
2. If the posterior estimate at time t_1 is Normal(T_u,1, σ_u,1), then the prior at t_2 is Normal(T_p,2, σ_p,2), with T_p,2 = T_u,1 + Δt T_u,1 and σ_p,2 = (Δt + 1) σ_u,1.
3. Given an observation at t_2 with error distribution Normal(t_o, σ_o), the likelihood is also Normal(t_o, σ_o).
4. The posterior at t_2 is Normal(T_u,2, σ_u,2), where T_u,2 and σ_u,2 come from the Gaussian product formulas above.
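
A sketch of a few forecast/update cycles of this one-dimensional Kalman filter; the time step, initial posterior, and observation sequence are assumptions for illustration:

```python
import numpy as np

dt, sigma_o = 0.1, 0.8
t_u, sigma_u = 2.0, 1.0              # assumed initial posterior at t1

for t_o in [1.0, 1.2, 0.9]:          # assumed observation sequence
    # Forecast: T2 = T1 + dt*T1 advances both mean and spread (step 2).
    t_p, sigma_p = (1 + dt) * t_u, (1 + dt) * sigma_u
    # Update: Gaussian product formulas from the previous slide (step 4).
    var_u = 1.0 / (1.0 / sigma_p**2 + 1.0 / sigma_o**2)
    t_u = var_u * (t_p / sigma_p**2 + t_o / sigma_o**2)
    sigma_u = np.sqrt(var_u)
    print(f"analysis: {t_u:.3f} +/- {sigma_u:.3f}")
```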

A One-Dimensional Ensemble Kalman Filter
Represent the prior pdf by a sample (ensemble) of N values. Use the sample mean and sample standard deviation,

T̄ = (1/N) Σ_{n=1..N} T_n,   σ_T = sqrt( Σ_{n=1..N} (T_n − T̄)² / (N − 1) ),

to determine a corresponding continuous distribution, Normal(T̄, σ_T).

A One-Dimensional Ensemble Kalman Filter: Model Advance
If the posterior ensemble at time t_1 is T_1,n, n = 1, …, N, advance each member to time t_2 with the model: T_2,n = L(T_1,n), n = 1, …, N.

This is the same as advancing the continuous pdf at time t_1 to time t_2 with the model L.

A One-Dimensional Ensemble Kalman Filter: Assimilating an Observation
1. Fit a Gaussian to the sample.
2. Get the observation likelihood.
3. Compute the continuous posterior PDF.
4. Use a deterministic algorithm to adjust the ensemble: first, shift the ensemble to have the exact mean of the posterior; second, linearly contract it to have the exact variance of the posterior.
The sample statistics are then identical to those of the Kalman filter.
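
A minimal sketch of this deterministic (EAKF-style) shift-and-contract update in Python; the five-member prior and the observation values are assumptions for illustration:

```python
import numpy as np

def eakf_update_1d(ens, t_o, sigma_o):
    """Shift and linearly contract a 1-D ensemble so its sample mean
    and variance exactly match the Gaussian posterior."""
    t_p, sigma_p = ens.mean(), ens.std(ddof=1)   # fit Gaussian to sample
    var_u = 1.0 / (1.0 / sigma_p**2 + 1.0 / sigma_o**2)
    t_u = var_u * (t_p / sigma_p**2 + t_o / sigma_o**2)
    # Shift to the posterior mean, contract to the posterior spread.
    return t_u + np.sqrt(var_u) / sigma_p * (ens - t_p)

prior = np.array([0.4, 1.1, 1.9, 2.3, 3.0])      # assumed 5-member prior
posterior = eakf_update_1d(prior, t_o=1.0, sigma_o=0.8)
print(posterior.mean(), posterior.std(ddof=1))   # match Kalman posterior
```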

UQ from an Ensemble Kalman Filter
The (ensemble) KF is optimal for a linear model and Gaussian likelihood. In the KF, only the mean and variance have meaning; the variance defines the uncertainty. An ensemble allows computation of many other statistics, but what do they mean? Not entirely clear. Example: kurtosis is completely constrained by the initial ensemble; it is problem specific whether it is even defined! (See Example Set 1 in the Appendix.)

The Rank Histogram: Evaluating Ensemble Performance
Draw 5 values from a real-valued distribution and call the first 4 the ensemble members; these partition the real line into 5 bins. Call the 5th draw the truth: there is a 1/5 chance that it falls in any given bin. The rank histogram shows the frequency of the truth in each bin over many assimilations.

Rank histograms for good ensembles should be uniform (caveat: sampling noise); we want the truth to look like a random draw from the ensemble. A biased ensemble leads to a skewed histogram. An ensemble with too little spread gives a U shape, the most common behavior in geophysics. An ensemble with too much spread is peaked in the center. (See Example Set 1 in the Appendix.)
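
A small synthetic sketch of how a rank histogram is accumulated; drawing the truth and the ensemble from the same distribution (the "good ensemble" case) is the assumption here:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ens, n_trials = 4, 10000
counts = np.zeros(n_ens + 1, dtype=int)   # 5 bins for a 4-member ensemble

for _ in range(n_trials):
    draws = rng.normal(size=n_ens + 1)    # ensemble and truth, same pdf
    ens, truth = draws[:n_ens], draws[n_ens]
    rank = np.sum(ens < truth)            # bin in which the truth falls
    counts[rank] += 1

print(counts / n_trials)  # near-uniform, ~0.2 per bin, for a good ensemble
```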

Back to the One-Dimensional (Ensemble) Kalman Filter
All bets are off when the model is nonlinear or the likelihood is non-Gaussian; the quality of the estimates must be assessed for each case. Example: a weakly nonlinear 1D model, dx/dt = x + 0.4|x|x. (See Example Set 2 in the Appendix.) Ensemble statistics can become degenerate. Nonlinear ensemble filters may address this, but they are beyond the scope for today.

Ensemble Kalman Filter with Model Error
Welcome to the real world: models aren't perfect. Model (and other) error can corrupt the ensemble mean and variance. (See Example Set 3, part 1, in the Appendix.) Adapt to this by increasing the uncertainty in the prior; inflation is a common method for ensembles.

Dealing with Systematic Error: Variance Inflation
Observations + physical system => true distribution. Model bias (and other errors) can shift the actual prior, leaving the prior ensemble too certain (it needs more spread). Naïve solution: increase the spread of the prior, giving more weight to the observation and less to the prior.

Ensemble Kalman Filter with Model Error
Errors in the model (and other things) result in too much certainty in the prior. Inflation can be used to correct for this; it is generally tuned on a dependent data set. Adaptive methods exist but must be calibrated. (See Example Set 3, part 2, in the Appendix.) Uncertainty errors depend on the forecast length.
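
A minimal sketch of spread inflation: expand each member about the ensemble mean so the variance grows by a chosen factor while the mean is unchanged. The factor 1.5 mirrors the appendix exercise; the ensemble values are assumed:

```python
import numpy as np

def inflate(ens, inflation=1.5):
    """Multiply ensemble variance by `inflation`, keeping the mean fixed."""
    return ens.mean() + np.sqrt(inflation) * (ens - ens.mean())

prior = np.array([0.8, 1.0, 1.1, 1.3, 1.5])   # assumed prior ensemble
ratio = np.var(inflate(prior), ddof=1) / np.var(prior, ddof=1)
print(ratio)  # ~1.5: variance grows by the inflation factor
```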

Multivariate Ensemble Kalman Filter
So far we have an observation likelihood for a single variable. Suppose the model prior has additional variables: use linear regression to update them.

Ensemble Filters: Updating Additional Prior State Variables
Assume that all we know is the prior joint distribution and that one variable is observed. What should happen to the unobserved variable?
Compute increments for the prior ensemble members of the observed variable. Using only increments guarantees that if the observation has no impact on the observed variable, the unobserved variable is unchanged (highly desirable).
How should the unobserved variable be impacted? First choice: least squares, which is equivalent to linear regression (the same as assuming a binormal prior).

Given the joint prior distribution of the two variables, begin by finding the least squares fit. Then regress the observed-variable increments onto increments for the unobserved variable: this is equivalent to first finding the image of each increment in the joint space, then projecting from the joint space onto the unobserved priors.
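
A sketch of this regression of increments in Python; the correlated two-variable prior is assumed, and a simple shift toward an observation stands in for the observed-variable increments that a real filter update would produce:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(1.0, 1.0, size=20)            # observed-variable prior ensemble
y = 0.5 * x + rng.normal(0.0, 0.3, size=20)  # correlated unobserved variable

# Increments for the observed variable would come from the ensemble update;
# here an assumed shift toward an observation at 1.0 stands in.
dx = 0.3 * (1.0 - x)

beta = np.cov(y, x)[0, 1] / np.var(x, ddof=1)  # least squares slope of y on x
y_updated = y + beta * dx                      # regress increments onto y
print(beta, y_updated.mean())
```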

The Lorenz63 Three-Variable Chaotic Model
Two-lobed attractor; the state orbits one lobe and switches lobes sporadically. The problem is nonlinear if observations are sparse. Qualitative uncertainty information is available from the ensemble, and non-Gaussian distributions are sampled, but quantitative uncertainty requires careful calibration. (See Example Set 4 in the Appendix.)
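
For reference, the Lorenz (1963) equations with their classical parameters; the forward Euler stepping, step size, and initial state are simplifying assumptions for illustration:

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Classical Lorenz (1963) three-variable system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

state = np.array([1.0, 1.0, 1.0])   # assumed initial condition
dt = 0.01
for _ in range(1000):               # forward Euler, illustration only
    state = state + dt * lorenz63(state)
print(state)
```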

Ensemble Kalman Filters with Large Models
The Kalman filter is too costly (in time and storage) for large models; ensemble filters can be used for very large applications. Sampling error when computing covariances may become large: spurious correlations between unrelated observations and state variables contaminate results and lead to further underestimates of uncertainty. This can be compensated for by a priori limits on which state variables are impacted by an observation, called localization. Localization requires even more calibration.
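
A sketch of distance-based localization applied to the regression coefficient. DART uses a Gaspari-Cohn taper; here a Gaussian-shaped taper stands in as a simplified assumption, and the slope and increment values are illustrative:

```python
import numpy as np

def taper(distance, halfwidth):
    """Localization weight in [0, 1] that decays with distance from the
    observation (a stand-in for the Gaspari-Cohn function used in DART)."""
    return np.exp(-0.5 * (distance / halfwidth) ** 2)

beta, dx = 0.5, 1.0            # assumed regression slope and obs increment
for d in [0.0, 0.2, 1.0]:      # distances between obs and state variable
    # Remote state variables receive almost none of the increment.
    print(d, taper(d, halfwidth=0.2) * beta * dx)
```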

The Lorenz96 40-Variable Chaotic Model
Something like weather systems around a latitude circle. It produces spurious correlations for small ensembles, so a basic implementation of the ensemble filter gives bad uncertainty estimates. Both inflation and localization can improve performance, but quantitative uncertainty requires careful calibration. (See Example Set 5 in the Appendix.)
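
The Lorenz (1996) model for reference, with the standard chaotic forcing F = 8; the Euler step and perturbed initial state are illustrative assumptions:

```python
import numpy as np

def lorenz96(x, forcing=8.0):
    """Lorenz (1996): dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,
    with cyclic indexing around the 'latitude circle'."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

x = 8.0 * np.ones(40)
x[0] += 0.01                 # small perturbation to trigger chaos
for _ in range(500):         # forward Euler, illustration only
    x = x + 0.01 * lorenz96(x)
print(x[:5])
```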

Uncertainty and Ensemble Kalman Filters: Conclusions
The ensemble KF variance is an exact quantification of uncertainty only when the model is linear and perfect, the likelihood is Gaussian, and the ensemble is large enough. Too little uncertainty is nearly universal in geophysical applications, and uncertainty errors are a function of forecast lead time. Adaptive algorithms like inflation improve uncertainty estimates. Quantitative uncertainty estimates require careful calibration, and out-of-sample estimates (climate change) are tenuous.

Appendix: Matlab examples
This is an outline of the live Matlab demos from NCAR's DART system that were used to support this tutorial. The Matlab code is available by checking out the DART system from NCAR at http://www2.image.ucar.edu/forms/dart-software-download. The example scripts can be found in the DART directory DART_LAB/matlab. Some features of the GUIs for the exercises do not work due to bugs in Matlab releases 2010a and 2010b, but they should work in earlier and later releases. Many more features of these Matlab scripts are described in the tutorial files in the DART directory DART_LAB/presentation; those presentations complement and extend the material presented here.

Example Set 1: Use the Matlab script oned_model. Change the Ens. Size to 10, then select Start Free Run. Observe that the spread curve in the second diagnostic panel quickly converges to the same prior and posterior values. The rms error varies with time due to observation error, but its time mean should match the spread. Restart the exercise several times to see that the kurtosis curve in the third panel also converges immediately, but to values that change with every randomly selected initial ensemble. Also note the behavior of the rank histograms for this case: they may not be uniform even though the ensemble filter is the optimal solution, and they can in fact be made arbitrarily nonuniform by the selection of the initial conditions.

Example Set 2: Use the Matlab script oned_model. Change the Ens. Size to 10, change Nonlin. A to 0.4, then select Start Free Run. This examines what happens when the model is nonlinear. Observe the evolution of the rms error and spread in the second panel and the kurtosis in the third panel. Also observe the ensemble prior distribution (green) in the first panel and in the separate plot on the control panel (where you changed the ensemble size). In many cases the ensemble becomes degenerate, with all but one of the members collapsing while the other stays separate. Observe the impact on the rank histograms as this evolves.

Example Set 3: Use the Matlab script oned_model. Change the Ens. Size to 10, change Model Bias to 1, then select Start Free Run. This examines what happens when the model used to assimilate observations is biased compared to the model that generated the observations (an imperfect model study): the assimilating model adds 1 to what the perfect model would do at each time step. Observe the evolution of the rms error and spread in the second panel and the rank histograms. Now try adding some uncertainty to the prior ensemble by setting Inflation to 1.5; this should impact all aspects of the assimilation.

Example Set 4: Use the Matlab script run_lorenz_63 and select Start Free Run. This run without assimilation shows how uncertainty grows in an ensemble forecast in the 3-variable Lorenz model. After a few trips around the attractor, select Stop Free Run. Then turn on an ensemble filter assimilation by changing No Assimilation to EAKF. Continue the run by selecting Start Free Run and observe how the ensemble evolves. You can stop the advance again and use the Matlab rotation feature to study the structure of the ensemble.

Example Set 5: Use the Matlab script run_lorenz_96 and select Start Free Run. This run without assimilation shows how uncertainty grows in an ensemble forecast in the 40-variable model. After about time = 40, select Stop Free Run and start an assimilation by changing No Assimilation to EAKF and selecting Start Free Run. Exploring a localization of 0.2 and/or an inflation of 1.1 helps show how estimates of uncertainty can be improved by modifying the base ensemble Kalman filter.