4D-Variational Data Assimilation (4D-Var)

4DVAR, according to the name, is a four-dimensional variational method. 4D-Var is actually a direct generalization of 3D-Var to handle observations that are distributed in time. The cost function is the same, provided that the observation operators are generalized to include a forecast model that will allow a comparison between the model state and the observations at the appropriate time. 4D-Var seeks the initial condition such that the forecast best fits the observations within the assimilation interval.

Cost function for 4DVAR

Let x(t_i) = M_i[x(t_0)] represent the (nonlinear) model forecast that advances the state from the initial time t_0 to the current time t_i. Assume the observations distributed within a time interval (t_0, t_n) will be used. The cost function includes a term measuring the distance to the background at the beginning of the interval, and a summation over time of the cost function for each observational increment computed with respect to the model integrated to the time of the observation:

J(x(t_0)) = \frac{1}{2} (x(t_0) - x^b(t_0))^T B^{-1} (x(t_0) - x^b(t_0)) + \frac{1}{2} \sum_{i=0}^{N} (H_i(x_i) - y_i^o)^T R_i^{-1} (H_i(x_i) - y_i^o)    (6.1)

where N is the number of observational vectors y_i^o distributed over time. The control variable (the variable with respect to which the cost function is minimized) is the initial state of the model at the beginning of the time interval, x(t_0), whereas the analysis at the end of the interval is given by the model integration from the solution, x(t_n) = M_n[x(t_0)]. In this sense, the model is used as a strong constraint, i.e., the analysis solution has to satisfy the model equations.
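To make (6.1) concrete, here is a minimal sketch (not part of the original notes) of evaluating the 4D-Var cost function for a toy setup: the forecast model M is replaced by a simple periodic shift, the observation operator H is the identity, and the function names are invented for this illustration.

```python
import numpy as np

def model_step(x):
    # Toy stand-in for the forecast model M: advect the periodic state
    # by one grid point per step.
    return np.roll(x, 1)

def cost_4dvar(x0, xb, Binv, obs, Rinv):
    # Evaluate J(x(t0)) as in (6.1): background term at t0 plus a sum
    # over observation times. `obs` is a list of (step, y_o) pairs,
    # sorted by step; H is taken as the identity for simplicity.
    dxb = x0 - xb
    J = 0.5 * dxb @ Binv @ dxb
    x = x0.copy()
    step = 0
    for t_i, y_o in obs:
        while step < t_i:            # advance the model to time t_i
            x = model_step(x)
            step += 1
        d = x - y_o                  # H(x_i) - y_i^o with H = I
        J += 0.5 * d @ Rinv @ d
    return J
```

By construction J = 0 when the control variable reproduces both the background and all observations exactly, and J > 0 otherwise.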

In other words, 4D-Var seeks the initial condition such that the forecast best fits the observations within the assimilation interval. The fact that the 4D-Var method assumes a perfect model is a disadvantage since (for example) it will give the same credence to older observations at the beginning of the interval as to newer observations at the end of the interval.

4DVAR tries to use all observations in the assimilation time interval as well as possible. The following is an example of how information is propagated in time by a simple advection model, so that the observations can be compared with the first guess.

A variation in the cost function when the control variable x(t_0) is changed by a small amount \delta x(t_0) is given by

\delta J = J[x(t_0) + \delta x(t_0)] - J[x(t_0)] = \sum_i \frac{\partial J}{\partial x_i} \delta x_i + O(\delta x^2) = \left( \nabla_{x(t_0)} J \right)^T \delta x(t_0)    (6.2)

where the gradient of the cost function, \nabla_{x(t_0)} J, with components \partial J / \partial x_j(t_0), is a column vector. Iterative minimization schemes require the estimation of the cost function gradient with respect to the control variables. Similar to the 3DVAR case, the steepest descent method is the simplest scheme, in which the change in the control variable after each iteration is chosen to be opposite to the gradient: x(t_0) \leftarrow x(t_0) - \alpha \nabla_{x(t_0)} J. Other, more efficient methods, such as conjugate gradient or quasi-Newton, also require the use of the gradient. To solve this minimization problem efficiently, we need to be able to compute the gradient of J with respect to the elements of the control variable.
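The gradient relation (6.2) and the steepest-descent update can be checked on a small quadratic cost J = (1/2) x^T A x, for which \nabla J = A x is known exactly. This is an illustrative sketch only (fixed step size \alpha, no line search):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])             # symmetric positive definite

def J(x):
    return 0.5 * x @ A @ x             # J = (1/2) x^T A x

def gradJ(x):
    return A @ x                       # exact gradient of the quadratic form

# Finite-difference check of one gradient component, as in (6.2):
x = np.array([1.0, -2.0])
eps = 1e-6
e0 = np.array([1.0, 0.0])
fd = (J(x + eps * e0) - J(x)) / eps
assert abs(fd - gradJ(x)[0]) < 1e-4

# Steepest descent: each step is opposite to the gradient.
alpha = 0.2
for _ in range(100):
    x = x - alpha * gradJ(x)
```

For this A (eigenvalues about 1.38 and 3.62), a fixed step of 0.2 contracts the error every iteration, so x converges to the minimizer x = 0.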

Calculation of the gradient of the 4DVAR cost function

As we saw earlier, given a symmetric matrix A and a functional J = \frac{1}{2} x^T A x, the gradient is given by \nabla_x J = A x. If J = \frac{1}{2} y^T A y and y = y(x), then (by applying the chain rule)

\nabla_x J = \left( \frac{\partial y}{\partial x} \right)^T A y    (6.3)

where \partial y / \partial x, with entries (\partial y / \partial x)_{kl} = \partial y_k / \partial x_l, is a matrix. We can write (6.1) as J = J_b + J_o, and from the rules discussed above, the gradient of the background component of the cost function, J_b = \frac{1}{2} (x(t_0) - x^b(t_0))^T B^{-1} (x(t_0) - x^b(t_0)), with respect to x(t_0) is given by

\nabla_{x(t_0)} J_b = B^{-1} (x(t_0) - x^b(t_0)).    (6.4)

This is the same as in the 3DVAR case.

Before we proceed to determine the gradient of the observation term, let's first define the Tangent Linear Model and its Adjoint.

A linear tangent model (or tangent linear model, TLM, as it is often called) is obtained by linearizing the model about the nonlinear trajectory of the model between t_0 and t_1, so that if we introduce a perturbation in the initial conditions, the final perturbation is given by

x(t_1) + \delta x(t_1) = M[x(t_0) + \delta x(t_0)] = M[x(t_0)] + L\, \delta x(t_0) + O(\|\delta x\|^2).    (6.5)

The tangent linear model L is a matrix that transforms a small initial perturbation at t_0 into the final perturbation at t_1. The TLM equation is then \delta x(t_1) = L\, \delta x(t_0).

For example, suppose the nonlinear advection equation \frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = 0 is integrated using the forward-in-time, centered-in-space finite difference scheme (this scheme is actually computationally unstable; it is given here for illustration purposes only):

u_j^{n+1} = u_j^n - \alpha\, u_j^n (u_{j+1}^n - u_{j-1}^n).

The TLM for the perturbation \delta u is (neglecting the higher-order nonlinear term)

\delta u_j^{n+1} = \delta u_j^n - \alpha \left[ \delta u_j^n (u_{j+1}^n - u_{j-1}^n) + u_j^n (\delta u_{j+1}^n - \delta u_{j-1}^n) \right]
              = \left[ 1 - \alpha (u_{j+1}^n - u_{j-1}^n) \right] \delta u_j^n - \alpha u_j^n\, \delta u_{j+1}^n + \alpha u_j^n\, \delta u_{j-1}^n

where \alpha = \Delta t / (2 \Delta x). Therefore, in matrix form, \delta u^{n+1} = L_n\, \delta u^n, where L_n has entries

(L_n)_{j,j} = 1 - \alpha (u_{j+1}^n - u_{j-1}^n), \quad (L_n)_{j,j+1} = -\alpha u_j^n, \quad (L_n)_{j,j-1} = +\alpha u_j^n,

and all other entries zero. Since \delta u^{n+1} = L_n\, \delta u^n, we have

\delta u^n = L_{n-1}\, \delta u^{n-1} = L_{n-1} L_{n-2}\, \delta u^{n-2} = \cdots = L_{n-1} L_{n-2} \cdots L_1 L_0\, \delta u^0.

If there are several steps in a time interval (t_{i-1}, t_i), the tangent linear model that advances a perturbation from t_0 to t_i is given by the product of the tangent linear model matrices that advance it over each step:

L(t_0, t_i) = L_{i-1}\, L(t_0, t_{i-1}) = L_{i-1} L_{i-2} \cdots L_1 L_0.    (6.6)

The adjoint model, defined as the transpose of the linearized forward model, is given by

L^T(t_0, t_i) = \left[ L_{i-1}\, L(t_0, t_{i-1}) \right]^T = L_0^T L_1^T \cdots L_{i-2}^T L_{i-1}^T.    (6.7)

Eq. (6.7) shows that the adjoint model advances a perturbation backwards in time, from the final to the initial time, because the rightmost L^T in the equation is that of the final time interval (from t_{i-1} to t_i), the leftmost L^T is that of the first time interval (from t_0 to t_1), and the matrix product is evaluated in right-to-left order.
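A sketch of this scheme, its TLM L_n, and the adjoint is below (periodic boundary conditions assumed, which the notes leave unspecified; function names invented). The key properties to verify are that L δu approximates the difference of two nonlinear integrations, and the adjoint identity ⟨L δu, w⟩ = ⟨δu, L^T w⟩ for arbitrary δu and w.

```python
import numpy as np

def step(u, alpha):
    # Nonlinear FTCS step: u_j^{n+1} = u_j^n - alpha * u_j^n (u_{j+1}^n - u_{j-1}^n),
    # with alpha = dt / (2 dx) and periodic boundaries. (An unstable scheme,
    # used here only to illustrate the linearization, as in the notes.)
    return u - alpha * u * (np.roll(u, -1) - np.roll(u, 1))

def tlm_matrix(u, alpha):
    # Tangent linear model L_n: the Jacobian of one step about the state u^n,
    # with rows indexing the output perturbation and columns the input.
    n = len(u)
    L = np.zeros((n, n))
    for j in range(n):
        L[j, j] = 1.0 - alpha * (u[(j + 1) % n] - u[(j - 1) % n])
        L[j, (j + 1) % n] = -alpha * u[j]
        L[j, (j - 1) % n] = +alpha * u[j]
    return L
```

The transpose `L.T` is the adjoint of one step; products of transposes, applied right to left, give the adjoint over a whole interval as in (6.7).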

Review of matrix calculus: the derivatives of vector functions

Let x, y, z be vectors of lengths n, m and l, respectively:

x = (x_1, x_2, \ldots, x_n)^T, \quad y = (y_1, y_2, \ldots, y_m)^T, \quad z = (z_1, z_2, \ldots, z_l)^T.

Derivative of a scalar with respect to a vector: if y is a scalar,

\frac{\partial y}{\partial x} = \left( \frac{\partial y}{\partial x_1}, \ldots, \frac{\partial y}{\partial x_n} \right)^T.

Derivative of a vector with respect to a scalar: if x is a scalar,

\frac{\partial y}{\partial x} = \left( \frac{\partial y_1}{\partial x}, \frac{\partial y_2}{\partial x}, \ldots, \frac{\partial y_m}{\partial x} \right).

The derivative of the vector y with respect to the vector x is an n \times m matrix:

\frac{\partial y}{\partial x} =
\begin{pmatrix}
\partial y_1 / \partial x_1 & \partial y_2 / \partial x_1 & \cdots & \partial y_m / \partial x_1 \\
\partial y_1 / \partial x_2 & \partial y_2 / \partial x_2 & \cdots & \partial y_m / \partial x_2 \\
\vdots & \vdots & & \vdots \\
\partial y_1 / \partial x_n & \partial y_2 / \partial x_n & \cdots & \partial y_m / \partial x_n
\end{pmatrix},

i.e., (\partial y / \partial x)_{ij} = \partial y_j / \partial x_i.

When z = z(y(x)), the derivative of the vector z with respect to the vector x has entries

\left( \frac{\partial z}{\partial x} \right)_{ik} = \frac{\partial z_k}{\partial x_i} = \sum_{j=1}^{m} \frac{\partial y_j}{\partial x_i} \frac{\partial z_k}{\partial y_j},

so that

\frac{\partial z}{\partial x} = \frac{\partial y}{\partial x} \frac{\partial z}{\partial y},

which is the chain rule for the derivative of a vector with respect to a vector: the n \times l matrix \partial z / \partial x is the product of the n \times m matrix \partial y / \partial x and the m \times l matrix \partial z / \partial y.

Definitions that are not consistent with our equations: when z = z(y(x)), some texts instead define the derivative of a vector with respect to a vector as the Jacobian, i.e., as the l \times n matrix with entries (\partial z / \partial x)_{ki} = \partial z_k / \partial x_i, which is the transpose of our definition. In that convention the chain rule reads

\frac{\partial z}{\partial x} = \frac{\partial z}{\partial y} \frac{\partial y}{\partial x},

with the factors in the opposite order.
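The chain rule in the notes' convention can be sanity-checked numerically. The sketch below builds the derivative matrices by finite differences, with (\partial y / \partial x)_{ij} = \partial y_j / \partial x_i, for a pair of arbitrary smooth maps (all function names invented for this example):

```python
import numpy as np

def f(x):   # y = f(x): R^2 -> R^3
    return np.array([x[0] * x[1], np.sin(x[0]), x[1] ** 2])

def g(y):   # z = g(y): R^3 -> R^2
    return np.array([y[0] + y[2], y[1] * y[2]])

def D(func, x, m):
    # Derivative matrix in the notes' convention:
    # (d func / d x)_{ij} = d func_j / d x_i, built by central differences.
    n = len(x)
    J = np.zeros((n, m))
    eps = 1e-7
    for i in range(n):
        e = np.zeros(n)
        e[i] = eps
        J[i, :] = (func(x + e) - func(x - e)) / (2 * eps)
    return J

x = np.array([0.7, -1.2])
dy_dx = D(f, x, 3)                     # n x m = 2 x 3
dz_dy = D(g, f(x), 2)                  # m x l = 3 x 2
dz_dx = D(lambda t: g(f(t)), x, 2)     # n x l = 2 x 2
```

The product dy_dx @ dz_dy should match dz_dx, confirming the ordering in the chain rule above.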

The gradient of the observational term in the cost function,

J_o = \frac{1}{2} \sum_{i=0}^{N} (H_i(x_i) - y_i^o)^T R_i^{-1} (H_i(x_i) - y_i^o),

is more complicated because x_i = M_i(x(t_0)). If we introduce a perturbation to the initial state, then \delta x_i = L(t_0, t_i)\, \delta x(t_0), so that

\delta (H_i(x_i) - y_i^o) = H_i\, \delta x_i = H_i\, L(t_0, t_i)\, \delta x(t_0).    (6.8)

As indicated by (6.8), the matrices H_i and L(t_0, t_i) are the linearized Jacobians \partial H_i / \partial x and \partial M_i / \partial x, respectively. Therefore the gradient of the observation cost function is given by (remember \nabla_x J = (\partial y / \partial x)^T A y)

\nabla_{x(t_0)} J_o = \sum_{i=0}^{N} L^T(t_0, t_i)\, H_i^T R_i^{-1} (H_i(x_i) - y_i^o).    (6.9)

Equation (6.9) shows that every iteration of the 4D-Var minimization requires the computation of the gradient, i.e., computing the increments (H_i(x_i) - y_i^o) at the observation times t_i during a forward integration, multiplying them by H_i^T R_i^{-1}, and integrating these weighted increments back to the initial time using the adjoint model.

Denote

d_i = H_i^T R_i^{-1} (H_i(x_i) - y_i^o),

which we call the weighted observational increment for observations at time t_i. Since parts of the backward adjoint integration are common to several time intervals, the summation in (6.9) can be arranged more conveniently:

\nabla_{x(t_0)} J_o = \sum_{i=0}^{N} L^T(t_0, t_i)\, d_i
  = d_0 + L_0^T d_1 + L_0^T L_1^T d_2 + \cdots + L_0^T L_1^T \cdots L_{N-1}^T d_N
  = d_0 + L_0^T \left( d_1 + L_1^T \left( d_2 + \cdots + L_{N-2}^T \left( d_{N-1} + L_{N-1}^T d_N \right) \right) \right).

Assume, for example, that the interval of assimilation is from 00Z to 12Z and that there are observations every 3 hours (Fig. 5.6), so that N = 4 and the increments are d_0, \ldots, d_4.

[Figure 5.6: Schematic of the computation of the gradient of the observational cost function for a period of 12 hours, with observations every 3 hours (at 00Z, 03Z, 06Z, 09Z and 12Z, giving increments d_0, ..., d_4) and the adjoint model (L_0^T, ..., L_3^T) that integrates backwards within each 3-h interval.]
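That the factored backward accumulation equals the direct sum in (6.9) is easy to verify numerically, with random matrices standing in for the L_i and random vectors for the weighted increments d_i:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 5, 4
Ls = [rng.standard_normal((n, n)) for _ in range(N)]   # L_0 ... L_{N-1}
ds = [rng.standard_normal(n) for _ in range(N + 1)]    # d_0 ... d_N

# Direct sum (6.9): sum_i L(t0, ti)^T d_i, with L(t0, ti) = L_{i-1} ... L_0.
grad_direct = ds[0].copy()
L_prod = np.eye(n)
for i in range(1, N + 1):
    L_prod = Ls[i - 1] @ L_prod        # accumulate L(t0, ti)
    grad_direct += L_prod.T @ ds[i]

# Factored form: a single backward pass, d_0 + L_0^T (d_1 + L_1^T ( ... )).
acc = ds[N].copy()
for i in range(N - 1, -1, -1):
    acc = ds[i] + Ls[i].T @ acc

assert np.allclose(grad_direct, acc)
```

The backward pass applies each L_i^T once, which is why only one adjoint integration is needed rather than N of them.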

Procedure of 4DVAR minimization

1) First integrate the (nonlinear) forward model and save the nonlinear trajectory (i.e., the model state at every time step) in the assimilation window. This 4D state is needed for defining L^T and for calculating the observational increments (this nonlinear trajectory plays the role of x^b in the observation term of 3DVAR).

2) Compute the weighted observation increments d_i = H_i^T R_i^{-1} (H_i(x_i) - y_i^o).

3) The adjoint model L^T(t_i, t_{i+1}) = L_i^T applied to a vector advances it from t_{i+1} to t_i. In our case, this vector is d_{i+1}, or the combination of the d_j's at times later than t_i. This vector is the adjoint variable that the adjoint equation advances.

4) Then we can write (6.9) as

\nabla_{x(t_0)} J_o = d_0 + L_0^T \left( d_1 + L_1^T \left( d_2 + \cdots + L_{N-1}^T d_N \right) \right).    (6.10)

5) From (6.4) plus (6.10) we obtain the gradient of the cost function, and the minimization algorithm is then called to appropriately modify the control variable x(t_0).

6) After this change, a new forward integration and new observational increments are computed, and the process is repeated until convergence.

Therefore, each minimization iteration involves one forward integration of the nonlinear prediction model and one backward integration of the adjoint model. Because everything in the adjoint model is reversed in order, and the definition of the adjoint operator requires the values of the variables in the forward model (even within the individual steps of a single time step), many recalculations are often involved in order to restore the values of these variables (unless they are saved during the forward integration step); the adjoint model is often 2-3 times as expensive as the forward model. But still, only one backward integration is needed instead of N of them, as Eq. (6.9) might suggest. This is so because of the linear nature of Eq. (6.9).
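Steps 1)-6) can be sketched end-to-end for a toy linear model, for which the TLM L is the model matrix itself and H = I. Plain steepest descent stands in for the conjugate-gradient or quasi-Newton minimizer, and perfect (noise-free) observations are used, so this is an idealized twin experiment, not a realistic implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 4, 3
theta = 0.4
M = np.eye(n)                                  # toy linear (rotation) model;
M[0, 0] = M[1, 1] = np.cos(theta)              # for a linear model the TLM L
M[0, 1], M[1, 0] = -np.sin(theta), np.sin(theta)   # is M itself

xt = rng.standard_normal(n)                    # true initial state
ys, x = [], xt.copy()
for _ in range(N):                             # perfect obs at t_1 .. t_N, H = I
    x = M @ x
    ys.append(x.copy())

xb = xt + 0.5 * rng.standard_normal(n)         # background state
Binv = np.eye(n)
Rinv = np.eye(n)

def grad_J(x0):
    # Step 1: forward integration, saving the trajectory.
    traj = [x0]
    for _ in range(N):
        traj.append(M @ traj[-1])
    # Step 2: weighted observation increments d_i = H^T R^-1 (H(x_i) - y_i^o).
    ds = [Rinv @ (traj[i + 1] - ys[i]) for i in range(N)]
    # Steps 3-4: a single backward adjoint pass, as in (6.10).
    acc = ds[N - 1]
    for i in range(N - 2, -1, -1):
        acc = ds[i] + M.T @ acc
    grad_o = M.T @ acc                         # obs start at t_1, so one more L^T
    return Binv @ (x0 - xb) + grad_o           # add the background gradient (6.4)

# Steps 5-6: steepest descent, repeated until convergence.
x0 = xb.copy()
for _ in range(500):
    x0 = x0 - 0.1 * grad_J(x0)
```

At convergence the gradient vanishes and the analysis lies between the background and the (perfectly observed) truth.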


Incremental Form of 4DVAR

4D-Var can also be written in an incremental form, with the cost function defined by

J(\delta x(t_0)) = \frac{1}{2}\, \delta x(t_0)^T B^{-1}\, \delta x(t_0) + \frac{1}{2} \sum_{i=0}^{N} \left( H_i\, L(t_0, t_i)\, \delta x(t_0) - d_i \right)^T R_i^{-1} \left( H_i\, L(t_0, t_i)\, \delta x(t_0) - d_i \right)    (6.11)

where the observational increment is now defined as d_i = y_i^o - H_i(x^f(t_i)). This follows because

H_i(x_i) - y_i^o = H_i(x^f(t_i) + \delta x(t_i)) - y_i^o \approx H_i(x^f(t_i)) + H_i\, L(t_0, t_i)\, \delta x(t_0) - y_i^o = H_i\, L(t_0, t_i)\, \delta x(t_0) - d_i,

which is plugged into (6.1) to give (6.11).

Within the incremental formulation, it is possible to choose a simplification operator S that solves the problem of minimization in a lower-dimensional space for w than that of the original model variables x:

w = S x.

S is meant to be rank deficient (as would be the case, for example, if a lower-resolution spectral truncation or a low-resolution grid was used for w than for x). In the latter case, w can be x at every other grid point; in this (two-dimensional) case, w is a vector about 1/4 the length of x.
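A minimal one-dimensional sketch of a simplification operator: S subsamples every other grid point, and a generalized inverse S^{-I} interpolates linearly back to the fine grid (in 1D, w is about half the length of x rather than the 1/4 quoted above for two dimensions; the function names are invented):

```python
import numpy as np

def S(x):
    # Simplification operator: keep every other grid point (rank deficient).
    return x[::2]

def S_inv(w, n):
    # Generalized inverse S^{-I}: linear interpolation from the coarse grid
    # (points 0, 2, 4, ...) back to the n-point fine grid.
    coarse = np.arange(0, n, 2)
    fine = np.arange(n)
    return np.interp(fine, coarse, w)

x = np.linspace(0.0, 1.0, 9)      # a smooth (here exactly linear) fine field
w = S(x)                          # coarse-grid control variable
x_back = S_inv(w, len(x))
```

S S^{-I} is the identity on the coarse grid, while S^{-I} S only approximates the identity on the fine grid; for a linear field the interpolation happens to be exact.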

After the minimum of the problem is obtained for J(w), the increment is mapped back, x = x^g + S^{-I} w, and a new outer iteration at the full model resolution can be carried out (Lorenc 1997). Here x^g is the current high-resolution guess, and the generalized inverse operator S^{-I} can be a spatial interpolation from the coarser-resolution grid of w to the higher-resolution grid of x. In essence, the gridded fields associated with the high-resolution trajectory are first interpolated to a coarser-resolution grid; then a cost function defined on this coarse grid is minimized using an iterative minimization algorithm that involves forward and backward integrations of the TLM and adjoint models in every iteration. When an analysis increment is found that minimizes this cost function, it is interpolated to the higher-resolution grid and added to the initial condition estimate of the previous outer iteration. Starting from this updated initial condition, a nonlinear forward integration is performed on the high-resolution grid to produce a more accurate nonlinear trajectory. This trajectory is again interpolated to the coarse-resolution grid, and the nonlinear model is linearized around this new trajectory to obtain more accurate TLM and adjoint models. Another outer iteration loop, which continues the minimization 'inner loops', then begins.

The iteration process can also be accelerated through the use of preconditioning, a change of control variables that makes the cost function more spherical, so that each iteration can get closer to the center (minimum) of the cost function (e.g., Parrish and Derber 1992; Lorenc 1997).

4DVAR versus 3D analysis methods (3DVAR, OI)

When compared to a 3-D analysis algorithm in a sequential assimilation system, the (strong-constraint) 4D-Var has the following characteristics:

- It works under the assumption that the model is perfect. Problems can be expected if model errors are large.

- It requires the implementation of the rather special L^T operators, the so-called adjoint model. This can be a lot of work if the forecast model is complex.

- In a real-time system, it requires the assimilation to wait for the observations over the whole 4D-Var time interval to be available before the analysis procedure can begin, whereas sequential systems can process observations shortly after they are available. This can delay the availability of the analysis.

- If the analysis x^a is used as the initial state for a forecast, then by construction of 4D-Var one is sure that the forecast will be completely consistent with the model equations and the four-dimensional distribution of observations until the end of the 4D-Var time interval (the cut-off time). This makes intermittent 4D-Var a very suitable system for numerical forecasting.

- (Standard strong-constraint) 4D-Var is an optimal (subject to a number of assumptions) and relatively efficient (relative to, e.g., the traditional Kalman filter or weak-constraint 4DVAR) assimilation algorithm over its time period, thanks to a theorem that says: the evaluation of the 4D-Var observation cost function and its gradient, J(x) and \nabla J(x), requires one direct model integration from time t_0 to t_n and one suitably modified adjoint integration made of the transposes of the tangent linear model time-stepping operators. In fact, it is the application of this procedure that made 4DVAR practical.

4DVAR versus Kalman filter

The most important advantage of 4D-Var is that, if we assume that the model is perfect and that the a priori error covariance at the initial time B is correct, it can be shown that the 4D-Var analysis at the final time is identical to that of the extended Kalman filter (Lorenc 1986; Daley 1991). This means that implicitly 4D-Var is able to evolve the forecast error covariance from B to the final time (whereas the Kalman filter evolves B explicitly). Unfortunately, this implicit covariance is not available at the end of the cycle, and neither is the new analysis error covariance. In other words, 4D-Var is able to find the BLUE (Best Linear Unbiased Estimate) but not its error covariance. To mitigate this problem, a simplified Kalman filter algorithm has been proposed to estimate the evolution of the analysis errors in the subspace of the dynamically most unstable modes (Fisher and Courtier 1995; Cohn and Todling 1996). 4DVAR is much cheaper than the conventional Kalman filter. 4DVAR is, however, comparable in cost to the ensemble-based Kalman filter, which uses a forecast ensemble to estimate the evolving forecast error covariance.
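This equivalence can be illustrated numerically for a linear model and linear H: the 4D-Var cost is then exactly quadratic, so its minimum solves a linear system, and the analysis evolved to the end of the window can be compared against a Kalman filter (perfect model, zero model-error covariance Q) that assimilates the same observations sequentially. The sizes and matrices below are arbitrary toy choices:

```python
import numpy as np

rng = np.random.default_rng(3)
n, N = 3, 4
Mmat = 0.5 * rng.standard_normal((n, n)) + np.eye(n)   # linear model M
H = np.eye(n)
B = np.eye(n)
R = 0.5 * np.eye(n)
xb = rng.standard_normal(n)
ys = [rng.standard_normal(n) for _ in range(N)]        # obs at t_1 .. t_N

# 4D-Var: with a linear model, J is exactly quadratic in x(t0), so the
# minimizer solves (B^-1 + sum G_i^T R^-1 G_i) x0 = B^-1 xb + sum G_i^T R^-1 y_i,
# where G_i = H L(t0, ti) = H M^i.
Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)
A, rhs = Binv.copy(), Binv @ xb
G = np.eye(n)
for i in range(N):
    G = Mmat @ G                                       # L(t0, t_{i+1})
    A += (H @ G).T @ Rinv @ (H @ G)
    rhs += (H @ G).T @ Rinv @ ys[i]
x0a = np.linalg.solve(A, rhs)
xNa_4dvar = G @ x0a                                    # analysis evolved to t_N

# Kalman filter over the same window (perfect model, Q = 0).
x, P = xb.copy(), B.copy()
for i in range(N):
    x, P = Mmat @ x, Mmat @ P @ Mmat.T                 # forecast step
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
    x = x + K @ (ys[i] - H @ x)                        # analysis update
    P = (np.eye(n) - K @ H) @ P

assert np.allclose(xNa_4dvar, x)                       # identical at t_N
```

Both methods compute the exact Gaussian posterior mean of the state at t_N under the same linear, perfect-model assumptions, which is why the two estimates coincide to machine precision.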