Ensemble prediction and strategies for initialization: Tangent Linear and Adjoint Models, Singular Vectors, Lyapunov vectors


Eugenia Kalnay, Lecture 2, Alghero, May 2008

Elements of Ensemble Forecasting

It used to be that a single control forecast was integrated from the analysis (initial conditions). In ensemble forecasting, several forecasts are run from slightly perturbed initial conditions (or with different models). The spread among ensemble members gives information about the forecast errors.

How to create slightly perturbed initial conditions?
- Singular vectors
- Bred vectors
- Ensembles of data assimilation (perturbed observations, EnKF)

In another fundamental paper, Lorenz (1965) introduced (without using their current names) the concepts of the tangent linear model, the adjoint model, singular vectors, and Lyapunov vectors for a low-order atmospheric model, and their consequences for ensemble forecasting. He also introduced the concept of "errors of the day": predictability is not constant; it depends on the stability of the evolving atmospheric flow (the basic trajectory or reference state).

When there is an instability, all perturbations converge towards the fastest growing perturbation (the leading Lyapunov vector, LLV). The LLV is computed by applying the tangent linear model to each perturbation of the nonlinear trajectory.

[Fig. 6.7: Schematic of how random initial perturbations converge towards the leading local Lyapunov vector along the trajectory]
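The convergence of arbitrary perturbations towards the leading Lyapunov vector can be illustrated in a few lines of NumPy. This is a sketch, not part of the lecture: a fixed 2x2 matrix stands in for one application of the tangent linear model along a (frozen) trajectory, so its leading eigenvector plays the role of the leading Lyapunov vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed linear propagator standing in for one step of the tangent
# linear model along a frozen trajectory.  Its leading eigenvector
# plays the role of the leading Lyapunov vector.
L = np.array([[2.0, 1.0],
              [0.5, 1.0]])

# Reference direction: the leading eigenvector of L.
w, V = np.linalg.eig(L)
lead = V[:, np.argmax(np.abs(w))]
lead /= np.linalg.norm(lead)

# Start several random perturbations; repeatedly propagate and renormalize
# (renormalization keeps the perturbations small, as in breeding).
perts = rng.standard_normal((2, 5))
for _ in range(50):
    perts = L @ perts
    perts /= np.linalg.norm(perts, axis=0)

# All perturbations are now (anti)parallel to the leading direction.
align = np.abs(lead @ perts)   # |cos(angle)| with the leading vector
print(np.round(align, 6))      # all ~1.0
```

The same behavior is what makes bred vectors cheap to compute: any perturbation, propagated and rescaled, forgets its initial shape.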

Tangent linear model (TLM) and adjoint models

A nonlinear model discretized in space is a set of ODEs:

dx/dt = F(x),   x = (x_1, ..., x_n)^T,   F = (F_1, ..., F_n)^T

Discretized in time it becomes a set of nonlinear algebraic equations, e.g. with a centered implicit scheme:

x_{n+1} = x_n + Δt F( (x_n + x_{n+1})/2 )

Running the model is integrating it from time t_0 to time t:

x(t) = M(x(t_0))      (the nonlinear model)

We add a small perturbation y(t_0) and neglect terms of O(y^2):

M(x(t_0) + y(t_0)) = M(x(t_0)) + (∂M/∂x) y(t_0) + O(y(t_0)^2)
                   = x(t) + y(t) + O(y(t_0)^2)

so that

y(t) = L(t_0, t) y(t_0),   with   L(t_0, t) = ∂M/∂x

This is the tangent linear model (TLM). L is an n x n matrix that depends on x(t) but not on y(t).
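A standard way to check a TLM is to compare it against the nonlinear model: for a small perturbation y, the difference M(x + y) - M(x) should match L y to within O(|y|^2). A minimal sketch, using a hypothetical two-variable map and its Jacobian (both illustrative, not from the lecture):

```python
import numpy as np

# A toy nonlinear model step M and its tangent linear model L = dM/dx.
def M(x):
    return np.array([x[0] * x[1], x[0] + x[1] ** 2])

def L(x):
    # Jacobian dM/dx evaluated on the trajectory (the background state).
    return np.array([[x[1], x[0]],
                     [1.0,  2.0 * x[1]]])

x0 = np.array([1.0, 2.0])
y0 = np.array([1e-6, -2e-6])    # small initial perturbation

# Nonlinear difference vs tangent linear prediction: agree to O(|y|^2).
nonlin = M(x0 + y0) - M(x0)
linear = L(x0) @ y0
print(np.max(np.abs(nonlin - linear)))   # O(1e-12), i.e. O(|y0|^2)
```

This "perturbation test" is routinely used to validate hand-written or automatically generated TLM code.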

Euclidean norm of a vector: the inner product of a vector with itself,

||y||^2 = y^T y = <y, y>    (the size of y)

Define the adjoint of an operator K by

<x, K y> = <K^T x, y>

For a model with real variables, the adjoint of the TLM is simply the transpose of the TLM, L^T(t_0, t).

Now separate the interval (t_0, t) into two successive intervals, t_0 < t_1 < t:

L(t_0, t) = L(t_1, t) L(t_0, t_1)

Transposing, we get the adjoint; note that the last segment is executed first!

L^T(t_0, t) = L^T(t_0, t_1) L^T(t_1, t)
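Both properties above, the adjoint identity and the order reversal under transposition, can be checked numerically. In this sketch two random matrices stand in for the tangent linear propagators of the two successive intervals:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the TLM over successive intervals:
# L1 = L(t0, t1), L2 = L(t1, t).
L1 = rng.standard_normal((3, 3))
L2 = rng.standard_normal((3, 3))
L = L2 @ L1                     # L(t0, t) = L(t1, t) L(t0, t1)

x = rng.standard_normal(3)
y = rng.standard_normal(3)

# Adjoint identity: <x, L y> = <L^T x, y>
print(np.isclose(x @ (L @ y), (L.T @ x) @ y))   # True

# Transposing reverses the order: the last segment is executed first.
print(np.allclose(L.T, L1.T @ L2.T))            # True
```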

The adjoint of the model can also be separated into single time steps, but they are executed backwards in time, starting from the last time step at t and ending with the first time step at t_0. For low-order models, the tangent linear model and its adjoint can be constructed by repeated integrations of the nonlinear model for small perturbations, as done by Lorenz (1965) and by Molteni and Palmer (1993) with a global quasi-geostrophic model. For large NWP models this approach is too time-consuming; instead, it is customary to develop the tangent linear and adjoint codes from the nonlinear model code, following the rules discussed in Appendix B. A very nice example of TLM and adjoint code generation for the Lorenz (1963) model is given by Shu-Chih Yang:

Advanced data assimilation methods with evolving forecast error covariance: 4D-Var. Example of TLM and adjoint code with the Lorenz (1963) model. Shu-Chih Yang (with E. Kalnay)

Example with the Lorenz 3-variable model

Nonlinear model, x = [x_1, x_2, x_3]:

dx_1/dt = -p x_1 + p x_2
dx_2/dt = r x_1 - x_1 x_3 - x_2
dx_3/dt = x_1 x_2 - b x_3

Tangent linear model for δx = [δx_1, δx_2, δx_3]:

L = ∂M/∂x = [ -p        p      0   ]
            [ r - x_3  -1     -x_1 ]
            [ x_2       x_1   -b   ]

Adjoint model for δx* = [δx*_1, δx*_2, δx*_3]:

L^T = (∂M/∂x)^T = [ -p    r - x_3    x_2 ]
                  [  p   -1          x_1 ]
                  [  0   -x_1       -b   ]

The background state is needed in both L and L^T (we need to save the model trajectory). In a complex NWP model it is impossible to write this matrix form explicitly.
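The Jacobian above can be coded and checked against the nonlinear tendencies by finite differences. A sketch, assuming the usual Lorenz (1963) parameter values p = 10, r = 28, b = 8/3 (the state x is arbitrary):

```python
import numpy as np

# Lorenz (1963) tendencies and the tangent linear matrix L = dF/dx.
p, r, b = 10.0, 28.0, 8.0 / 3.0

def F(x):
    x1, x2, x3 = x
    return np.array([-p * x1 + p * x2,
                     r * x1 - x1 * x3 - x2,
                     x1 * x2 - b * x3])

def jac(x):
    # The background state x enters L (and L^T): save the trajectory.
    x1, x2, x3 = x
    return np.array([[-p,      p,    0.0],
                     [r - x3, -1.0, -x1],
                     [x2,      x1,  -b]])

x = np.array([1.0, 2.0, 3.0])
L = jac(x)                       # the adjoint is simply L.T

# Finite-difference check of the first column of L.
eps = 1e-6
e1 = np.array([1.0, 0.0, 0.0])
print(np.round((F(x + eps * e1) - F(x)) / eps, 3))   # ~ L @ e1 = [-10, 25, 2]
```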

Example of tangent linear and adjoint codes (1): use a forward scheme to integrate in time.

In the tangent linear model,

[δx_3(t+Δt) - δx_3(t)] / Δt = x_2(t) δx_1(t) + x_1(t) δx_2(t) - b δx_3(t),

or, forward in time,

δx_3(t+Δt) = δx_3(t) + [x_2(t) δx_1(t) + x_1(t) δx_2(t) - b δx_3(t)] Δt

We will see that in the adjoint model the above line becomes, backward in time,

δx*_3(t) = δx*_3(t) + (1 - b Δt) δx*_3(t+Δt)
δx*_2(t) = δx*_2(t) + (x_1(t) Δt) δx*_3(t+Δt)
δx*_1(t) = δx*_1(t) + (x_2(t) Δt) δx*_3(t+Δt)
δx*_3(t+Δt) = 0

(Try an example in Appendix B (B.1.15).)

Example of tangent linear and adjoint codes (2): use a forward scheme to integrate in time.

Tangent linear model, forward in time:

δx_3(t+Δt) = δx_3(t) + [x_2(t) δx_1(t) + x_1(t) δx_2(t) - b δx_3(t)] Δt

Writing this statement in terms of all the active variables,

[δx_3(t+Δt)]   [ 0   x_2(t)Δt   x_1(t)Δt   1 - bΔt ] [δx_3(t+Δt)]
[δx_1(t)   ] = [ 0      1          0          0    ] [δx_1(t)   ]
[δx_2(t)   ]   [ 0      0          1          0    ] [δx_2(t)   ]
[δx_3(t)   ]   [ 0      0          0          1    ] [δx_3(t)   ]

We have to write every statement over all the active variables; we then transpose it to get the adjoint model.

Example of tangent linear and adjoint codes (3)

Tangent linear model, forward in time:

δx_3(t+Δt) = δx_3(t) + [x_2(t) δx_1(t) + x_1(t) δx_2(t) - b δx_3(t)] Δt

[δx_3(t+Δt)]   [ 0   x_2(t)Δt   x_1(t)Δt   1 - bΔt ] [δx_3(t+Δt)]
[δx_1(t)   ] = [ 0      1          0          0    ] [δx_1(t)   ]
[δx_2(t)   ]   [ 0      0          1          0    ] [δx_2(t)   ]
[δx_3(t)   ]   [ 0      0          0          1    ] [δx_3(t)   ]

Adjoint model: the transpose of the tangent linear model, backward in time, executed in reverse order:

[δx*_3(t+Δt)]   [   0        0  0  0 ] [δx*_3(t+Δt)]
[δx*_1(t)   ] = [ x_2(t)Δt   1  0  0 ] [δx*_1(t)   ]
[δx*_2(t)   ]   [ x_1(t)Δt   0  1  0 ] [δx*_2(t)   ]
[δx*_3(t)   ]   [ 1 - bΔt    0  0  1 ] [δx*_3(t)   ]

Example of tangent linear and adjoint codes (4)

δx_3(t+Δt) = δx_3(t) + [x_2(t) δx_1(t) + x_1(t) δx_2(t) - b δx_3(t)] Δt

Adjoint model: the transpose of the tangent linear model, backward in time, executed in reverse order:

[δx*_3(t+Δt)]   [   0        0  0  0 ] [δx*_3(t+Δt)]
[δx*_1(t)   ] = [ x_2(t)Δt   1  0  0 ] [δx*_1(t)   ]
[δx*_2(t)   ]   [ x_1(t)Δt   0  1  0 ] [δx*_2(t)   ]
[δx*_3(t)   ]   [ 1 - bΔt    0  0  1 ] [δx*_3(t)   ]

In the adjoint model the tangent linear line above becomes, backward in time:

δx*_3(t) = δx*_3(t) + (1 - bΔt) δx*_3(t+Δt)
δx*_2(t) = δx*_2(t) + (x_1(t)Δt) δx*_3(t+Δt)
δx*_1(t) = δx*_1(t) + (x_2(t)Δt) δx*_3(t+Δt)
δx*_3(t+Δt) = 0
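The tangent linear statement and its adjoint can be written as two small functions and checked against the explicit 4x4 matrix form above. A sketch (the background state values x_1 = 1, x_2 = 2 and the time step are illustrative):

```python
import numpy as np

# Background state at time t (from the saved trajectory) and constants;
# b is the usual Lorenz (1963) parameter, dt is illustrative.
b, dt = 8.0 / 3.0, 0.01
x1, x2 = 1.0, 2.0

def tlm_line(dx1, dx2, dx3):
    """The single tangent linear statement, forward in time."""
    dx3_new = dx3 + (x2 * dx1 + x1 * dx2 - b * dx3) * dt
    return dx1, dx2, dx3_new

def adj_line(ax1, ax2, ax3, ax3_new):
    """Its adjoint: transposed statements, executed in reverse order."""
    ax3 = ax3 + (1.0 - b * dt) * ax3_new
    ax2 = ax2 + (x1 * dt) * ax3_new
    ax1 = ax1 + (x2 * dt) * ax3_new
    ax3_new = 0.0
    return ax1, ax2, ax3, ax3_new

# Explicit matrix over the active variables (dx3(t+dt), dx1, dx2, dx3).
A = np.array([[0.0, x2 * dt, x1 * dt, 1.0 - b * dt],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

# The adjoint code lines reproduce the action of A^T.
ain = np.array([0.5, -1.0, 2.0, 3.0])   # (ax3_new, ax1, ax2, ax3) on input
ax1, ax2, ax3, ax3_new = adj_line(ain[1], ain[2], ain[3], ain[0])
print(np.allclose(A.T @ ain, [ax3_new, ax1, ax2, ax3]))   # True
```

Note that the adjoint lines update the adjoint variables in the reverse order of the tangent linear statement, and zero out δx*_3(t+Δt) at the end, exactly as on the slide.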

There are automatic compilers to create tangent linear and adjoint models (e.g., TAMC), but it is still a long and painful process. Creating and maintaining an adjoint model is the main disadvantage of 4D-Var. On the other hand, the adjoint provides tremendous possibilities:
- 4D-Var
- Sensitivity of the results to any parameter or initial condition (e.g., sensitivity of forecasts to observations, Langland and Baker 2004)
- And many others
The Ensemble Kalman Filter allows doing the same with the nonlinear model ensemble, without the need for an adjoint.

Singular vectors: the theory is too long, so we'll just give a basic idea (see section 6.3 of the book).

Remember the TLM: y(t_1) = L(t_0, t_1) y(t_0), where L(t_0, t_1) evolves the perturbation from t_0 to t_1.

Singular Value Decomposition (SVD):

U^T L V = S,   with   U U^T = I,   V V^T = I

S = [ σ_1   0    .    0  ]
    [ 0    σ_2   .    0  ]
    [ .     .    .    .  ]
    [ 0     0    .   σ_n ]

S is a diagonal matrix whose elements are the singular values of L.

Left-multiplying by U: L V = U S, i.e.,

L (v_1, ..., v_n) = (σ_1 u_1, ..., σ_n u_n)

where v_i and u_i are the column vectors of V and U: the v_i are the initial singular vectors and the u_i are the final singular vectors. Thus

L v_i = σ_i u_i

The initial SV gets multiplied by its singular value when L is applied forward in time.

Right-multiplying by V^T: U^T L = S V^T, and transposing, L^T U = V S, i.e.,

L^T (u_1, ..., u_n) = (σ_1 v_1, ..., σ_n v_n),   so that   L^T u_i = σ_i v_i

The final SV gets multiplied by its singular value when L^T is applied backward in time.

From these we obtain

L^T L v_i = σ_i L^T u_i = σ_i^2 v_i

so that the v_i are the eigenvectors of L^T L.

Conversely, the final SVs u_i are the eigenvectors of L L^T:

L L^T u_i = σ_i L v_i = σ_i^2 u_i
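The whole chain of relations above can be verified numerically. In this sketch a random matrix stands in for the tangent linear propagator; note that NumPy's SVD returns L = U S V^T, which is the same convention as U^T L V = S on the slides:

```python
import numpy as np

rng = np.random.default_rng(2)
L = rng.standard_normal((4, 4))    # stand-in for the TLM propagator

# numpy returns L = U S V^T, i.e. U^T L V = S.
U, s, Vt = np.linalg.svd(L)
v1, u1, s1 = Vt[0], U[:, 0], s[0]  # leading initial/final SV, singular value

print(np.allclose(L @ v1, s1 * u1))            # L v_i = sigma_i u_i
print(np.allclose(L.T @ u1, s1 * v1))          # L^T u_i = sigma_i v_i
print(np.allclose(L.T @ L @ v1, s1**2 * v1))   # v_i eigenvector of L^T L
print(np.allclose(L @ L.T @ u1, s1**2 * u1))   # u_i eigenvector of L L^T
```

In operational practice the full SVD is never formed: only the leading eigenvectors of L^T L are computed iteratively, which is why both the TLM and its adjoint are required.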

SV summary and extra properties
- In order to get SVs we need the TLM and the adjoint models. The leading SVs are obtained by the Lanczos algorithm (expensive).
- One can define an initial and a final norm (size); this gives flexibility and arbitrariness.
- The leading initial SV is the vector that will grow fastest (starting with the smallest initial norm and ending with the largest final norm).
- The leading SVs grow initially faster than the Lyapunov vectors, but by the end of the period they look like LVs (and bred vectors ~ LVs).
- The initial SVs are very sensitive to the norm used; the final SVs look like LVs ~ BVs.

Jon Ahlquist's theorem

One can define an initial and a final norm (size); this gives flexibility and arbitrariness. Given a linear operator L, a set of arbitrary vectors x_i, and a set of arbitrary nonnegative numbers s_i arranged in decreasing order, Ahlquist (2000) showed how to define a norm such that s_i and x_i are the i-th singular value and singular vector of L. Because anything not in the null space can be a singular vector, even the leading singular vector, one cannot assign a physical meaning to a singular vector simply because it is a singular vector. Any physical meaning must come from an additional aspect of the problem. Said another way, nature evolves from initial conditions without knowing which inner products and norms the user wants to use.

Comparison of SVs and BVs (LVs) for a QG model

[Fig. 6.7: Schematic of how random initial perturbations converge towards the leading local Lyapunov vector along the trajectory]

All perturbations (including singular vectors) end up looking like local Lyapunov vectors (i.e., like bred vectors). These are the local instabilities that make forecast errors grow.

Two initial and final BVs (24 hr); contours: forecast errors, colors: BVs. Note that the BVs (colors) have shapes similar to the forecast errors (contours).

Two initial and final SVs (24 hr, vorticity norm); contours: forecast errors, colors: SVs. With an enstrophy norm, the initial SVs have large scales (low vorticity). By the end of the optimization interval, the final SVs look like BVs (and LVs).

Example of initial and final singular vectors using enstrophy/streamfunction norms (QG model, 12-hour optimization; courtesy of Shu-Chih Yang). The initial SVs are very sensitive to the norm; the final SVs look like bred vectors (or Lyapunov vectors).