The Maximum Likelihood Ensemble Filter as a non-differentiable minimization algorithm
QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY
Q. J. R. Meteorol. Soc. 134 (2008). Published online in Wiley InterScience.

The Maximum Likelihood Ensemble Filter as a non-differentiable minimization algorithm

Milija Zupanski,a* I. Michael Navonb and Dusanka Zupanskia
a Cooperative Institute for Research in the Atmosphere, Colorado State University, Fort Collins, USA
b School of Computational Science and Department of Mathematics, Florida State University, Tallahassee, USA

ABSTRACT: The Maximum Likelihood Ensemble Filter (MLEF) equations are derived without the differentiability requirement for the prediction model and for the observation operators. The derivation reveals that a new non-differentiable minimization method can be defined as a generalization of the gradient-based unconstrained methods, such as the preconditioned conjugate-gradient and quasi-Newton methods. In the new minimization algorithm the vector of first-order increments of the cost function is defined as a generalized gradient, while the symmetric matrix of second-order increments of the cost function is defined as a generalized Hessian matrix. In the case of differentiable observation operators, the minimization algorithm reduces to the standard gradient-based form. The non-differentiable aspect of the MLEF algorithm is illustrated in an example with a one-dimensional Burgers model and simulated observations. The MLEF algorithm has a robust performance, producing satisfactory results for tested non-differentiable observation operators. Copyright 2008 Royal Meteorological Society

KEY WORDS unconstrained minimization; ensemble data assimilation

Received 25 September 2007; Revised 24 January 2008; Accepted 25 March 2008

1. Introduction

The maximum likelihood ensemble filter (MLEF) is an ensemble data assimilation algorithm based on control theory (Zupanski, 2005; Zupanski and Zupanski, 2006).
The MLEF is a posterior maximum likelihood approach, in the sense that it calculates the optimal state as the maximum of the probability density function (PDF), while most of the ensemble data assimilation methodologies used in meteorology and oceanography are based on the minimum variance approach (e.g. Evensen, 1994; Houtekamer and Mitchell, 1998; Bishop et al., 2001; Whitaker and Hamill, 2002; Anderson, 2003; Ott et al., 2004). The maximum of the PDF is found by an iterative minimization of the cost function derived from a multivariate posterior PDF. The iterative minimization is an important component of the MLEF since it provides practical means for finding the nonlinear analysis solution. The process of minimization produces both the most likely state and the associated uncertainty. The MLEF was successfully tested in applications with various weather prediction and related models, such as the Korteweg–de Vries–Burgers model (Zupanski, 2005; Zupanski and Zupanski, 2006), the Colorado State University (CSU) global shallow-water model (Zupanski et al., 2006; Uzunoglu et al., 2007), the Large-Eddy Simulation (LES) model (Carrio et al., 2008), the National Aeronautics and Space Administration GEOS-5 column precipitation model (Zupanski et al., 2007b), and the CSU Lagrangian Particle Dispersion Model (LPDM) (Zupanski et al., 2007a). In all those applications, a nonlinear conjugate-gradient method (e.g. Gill et al., 1981) was used for minimization of the cost function. As with all other unconstrained gradient-based minimization algorithms, the nonlinear conjugate-gradient method requires the cost function to be at least twice differentiable. The first derivative of the cost function is required for the gradient, and the second derivative, or its approximation, is required for the Hessian preconditioning.

* Correspondence to: Milija Zupanski, Cooperative Institute for Research in the Atmosphere, Colorado State University, 1375 Campus Delivery, Fort Collins, CO, USA. ZupanskiM@cira.colostate.edu
Unfortunately, the differentiability requirement is not necessarily satisfied in applications to realistic problems. In particular, the physical processes related to clouds and precipitation typically include non-differentiable operators. For example, it is known that cumulus convection parametrization in weather and climate introduces a significant discontinuity in the first and higher-order derivatives (e.g. Verlinde and Cotton, 1993; Zupanski, 1993; Tsuyuki, 1997; Xu and Gao, 1999; Zhang et al., 2000). A similar discontinuity problem can be identified for observation operators as well. In satellite radiance assimilation, for example, a forward model for all-weather conditions is a non-differentiable observation operator. This follows from the fact that, depending on the value of the state vector (i.e. cloudy or clear), various forms of the forward operator will be chosen. For example, the cloud property
model and the gas extinction model are only included in the presence of clouds, leading to different formulations of the forward operator in the presence of clouds and without clouds (Greenwald et al., 2002). Other non-differentiable operator examples can be found whenever a weather regime defined by the state vector defines different forms of the observation operator. Common methods for solving non-differentiable (non-smooth) minimization are based on sub-differentials and bundle algorithms (Clarke, 1983; Lemarechal and Zowe, 1994; Nesterov, 2005). Bundle algorithms were tested in optimal control problems of flow with discontinuities (Homescu and Navon, 2003) using the PVAR software (Luksan and Vlcek, 2001), and also in variational data assimilation (Zhang et al., 2000) using the bundle algorithm of Lemarechal (1977). An explicit knowledge of the minimization space (e.g. its basis or span-vectors), available in ensemble data assimilation, creates an opportunity to exploit alternative means for non-differentiable minimization without the need to define gradients, subgradients, or their approximations. Such an approach will be pursued here. In this paper we address the differentiability requirement for the cost function by presenting an alternative derivation of the MLEF. For the first time, the validity of the Taylor series expansion is not assumed, thus the differentiability of the cost function is not required. Since no limitation of using the first- or second-order Taylor formula approximation is imposed, the analysis and forecast ensemble perturbations are not restricted to be small. Under these relaxed conditions, the MLEF is formulated as a nonlinear filtering algorithm that allows non-differentiable models and observation operators. An important consequence of this derivation is that the optimization algorithm used within the MLEF can now be viewed as a non-differentiable minimization algorithm.
For differentiable functions the minimization reduces to standard gradient-based algorithms, with an implicit Hessian preconditioning. In particular, the MLEF algorithm is presented as a non-differentiable generalization of the nonlinear conjugate-gradient and the BFGS quasi-Newton algorithms (Gill et al., 1981; Luenberger, 1984). In order to illustrate the potential of the non-differentiable minimization used in the MLEF, a one-dimensional Burgers model simulating a shock wave is employed. Unlike in previous MLEF applications, we include a challenging non-differentiable observation operator with discontinuities in the function and in all its derivatives. The paper is organized as follows. The new MLEF derivation under relaxed conditions is presented in section 2, and the resulting minimization algorithms in section 3. In section 4 we describe the experimental design, model and observations, and present the results; the conclusions are drawn in section 5.

2. Non-differentiable MLEF formulation

Let the state space be denoted R^{N_S}, where N_S denotes its dimension, and let x be a state vector. We refer to the set of state vectors {x_i; (i = 1, ..., N_E)} as ensembles, and to the space E = R^{N_E} of dimension N_E as an ensemble space. In order to begin ensemble data assimilation, the initial state vector and its uncertainty need to be specified. Let the initial state vector be denoted x^0, and let the initial N_S × N_E square-root error covariance be denoted P_0^{1/2} : E → R^{N_S}, with columns {p_i^0; (i = 1, ..., N_E)}. The initial state vector and the initial square-root error covariance define a set of initial conditions

x_i^0 = x^0 + p_i^0;  (i = 1, ..., N_E).  (1)

2.1. Prediction

The predictive step of the MLEF (and any other filter) addresses the means of transporting the uncertainty span-vectors from the current analysis time to the next analysis time. A nonlinear dynamical model M : R^{N_S} → R^{N_S} transports the state vector according to

x^t = M(x^{t-1}),  (2)

where t − 1 and t refer to the current and the next analysis times, respectively.
Note that the model error is neglected in Equation (2) to simplify the derivation. In order to keep the notation manageable, we omit the time index in the remainder of the paper, unless suggested otherwise. The forecast increment resulting from the ith analysis increment is

Δ_i x^f = x_i^f − x^f = M(x_i^a) − M(x^a) = M(x^a + p_i^a) − M(x^a),  (3)

where the superscripts a and f refer to analysis and forecast, respectively. The vectors {p_i^a; (i = 1, ..., N_E)} represent the columns of the square-root analysis error covariance. After defining p_i^f = Δ_i x^f, the square-root forecast error covariance is

P_f^{1/2} = [p_1^f, p_2^f, ..., p_{N_E}^f];  p_i^f = M(x^a + p_i^a) − M(x^a),  (4)

where P_f^{1/2} : E → R^{N_S} is an N_S × N_E matrix with columns {p_i^f; (i = 1, ..., N_E)}. Equation (3) represents the transport of uncertainty span-vectors in time by nonlinear model dynamics. The MLEF forecast step allows the nonlinear model operator and large analysis increments to be included without typical restrictions, such as linearity and differentiability. For small analysis increments, however, the forecast-error covariance formulation (4) reveals that the forecast step of the MLEF is closely related to the Kalman filters (e.g. Jazwinski, 1970), and to the SEEK filter (Pham et al., 1998; Rozier et al., 2007).
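In code, the forecast step (Equations (3)–(4)) amounts to one model run per ensemble member plus one for the central state. A minimal Python sketch; the toy model `M` and all function names here are illustrative, not from the paper:

```python
import numpy as np

def mlef_forecast(M, x_a, Pa_sqrt):
    """MLEF forecast step, Equations (3)-(4).

    M       : nonlinear model propagator (no differentiability assumed)
    x_a     : analysis state vector, shape (N_S,)
    Pa_sqrt : square-root analysis error covariance, shape (N_S, N_E)
    Returns the forecast x^f and the square-root forecast covariance P_f^{1/2},
    whose i-th column is the full nonlinear increment M(x^a + p_i^a) - M(x^a).
    """
    x_f = M(x_a)
    Pf_sqrt = np.stack([M(x_a + p) - x_f for p in Pa_sqrt.T], axis=1)
    return x_f, Pf_sqrt

# Toy non-differentiable "model" standing in for M
M = lambda x: np.abs(x) + 0.1 * x**2
x_f, Pf_sqrt = mlef_forecast(M, np.array([1.0, -2.0]), 0.5 * np.eye(2))
```

Because the increments are full nonlinear differences, no tangent-linear or adjoint version of `M` is needed, which is exactly the point of Equation (3).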
2.2. Analysis

The analysis is corrected in the subspace defined by the forecast-error covariance matrix (Jazwinski, 1970). Using Equation (4) one can define the analysis correction subspace

A = span{p_1^f, p_2^f, ..., p_{N_E}^f}.  (5)

Then, an arbitrary vector x − x^f ∈ A can be expressed as a linear combination

x − x^f = w_1 p_1^f + w_2 p_2^f + ... + w_{N_E} p_{N_E}^f = P_f^{1/2} w;  w = (w_1, w_2, ..., w_{N_E})^T ∈ E.  (6)

The transformation (6) links the analysis correction subspace with the ensemble space, and shows that P_f^{1/2} : E → A. Until now, the differentiability of the dynamical model M was not required, and no specific assumption about the probability distribution of the analysis or forecast increments was necessary. In the analysis, however, some assumptions will be required. Often assumed, as done here, is that the probability distributions of the initial condition errors and the observation errors are Gaussian. The novelty is that the commonly used differentiability assumption will be relaxed, thus a more general formulation of the MLEF analysis solution will be derived. Note that it is possible to relax the Gaussian assumption and develop a non-Gaussian data assimilation framework (e.g. van Leeuwen, 2003; Abramov and Majda, 2004; Haven et al., 2005; Fletcher and Zupanski, 2006a,b), but this will not be pursued here in order to simplify the presentation.

Cost function

In the MLEF, the optimal set of coefficients {w_i : i = 1, ..., N_E} is obtained by maximizing the posterior conditional probability. In practice this is achieved by an iterative minimization of a cost function (e.g. Lorenc, 1986)

J(x) = (1/2) (x − x^f)^T P_f^{-1} (x − x^f) + (1/2) [y − H(x)]^T R^{-1} [y − H(x)],  (7)

where R : O → O is the observation error covariance, O = R^{N_O} is the observation space, N_O is the dimension of O, y ∈ O is the observation vector, and H : R^{N_S} → O is a nonlinear and/or non-differentiable observation operator.
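Equation (7) is a standard quadratic-in-residual cost and is direct to evaluate; a sketch (the argument names are ours):

```python
import numpy as np

def cost(x, x_f, Pf_inv, y, H, R_inv):
    """Cost function J(x) of Equation (7).

    Pf_inv and R_inv are the inverse forecast- and observation-error
    covariances; H is the (possibly non-differentiable) observation operator.
    """
    dx = x - x_f          # background departure x - x^f
    dy = y - H(x)         # observation departure y - H(x)
    return 0.5 * dx @ Pf_inv @ dx + 0.5 * dy @ R_inv @ dy

J = cost(np.array([1.0, 0.0]), np.zeros(2), np.eye(2),
         np.array([2.0]), lambda x: np.array([x[0] + x[1]]), np.eye(1))
```

Note that only evaluations of H appear; no derivative of H is required at this stage, which is what makes the increment-based derivation below possible.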
Since the matrix P_f is defined using ensemble forecast increments, the minimization of the cost function will involve a search in the analysis correction subspace. Let us consider an increment of the cost function, ΔJ(x) = J(x + Δx) − J(x), for Δx ∈ A. In principle, the minimization of ΔJ(x) is equivalent to minimizing J(x) (e.g. Luenberger, 1984). Direct substitution of x + Δx in Equation (7) results in

J(x + Δx) = J(x) + (Δx)^T P_f^{-1} (x − x^f) − [H(x + Δx) − H(x)]^T R^{-1} [y − H(x)] + (1/2) (Δx)^T P_f^{-1} (Δx) + (1/2) [H(x + Δx) − H(x)]^T R^{-1} [H(x + Δx) − H(x)].  (8)

Note that for a differentiable operator H, the expansion (8) reduces to

J(x + Δx) = J(x) + (Δx)^T P_f^{-1} (x − x^f) − [(∂H/∂x) Δx]^T R^{-1} [y − H(x)] + (1/2) (Δx)^T P_f^{-1} (Δx) + (1/2) [(∂H/∂x) Δx]^T R^{-1} [(∂H/∂x) Δx] + O(‖Δx‖³),  (9)

which is equivalent to a second-order Taylor series expansion of J(x) in the vicinity of x. One can note that the Taylor expansion (9) has a remainder O(‖Δx‖³), due to neglecting the higher-order nonlinear terms (where ‖·‖ denotes a norm). On the other hand, the expansion (8) does not have a remainder since the use of total increments accounts for all higher-order nonlinear terms. Therefore, formula (8) may be viewed as a generalization of the Taylor expansion of J. This apparent similarity can be used to define a generalization of the gradient vector and the Hessian matrix that could be used in minimization, to include the nonlinear and non-differentiable operator H. Since the analysis correction subspace is already defined (Equation (5)), the increments Δx in the direction of the known span-vectors Δ_i x^f = p_i^f (i = 1, ..., N_E) are considered. By comparing Equations (8) and (9), one can identify the ith component of a generalized first derivative of J, denoted Δ_i J,

Δ_i J(x) = (Δ_i x^f)^T P_f^{-1} (x − x^f) − [H(x + Δ_i x^f) − H(x)]^T R^{-1} [y − H(x)],  (10)

and the (i, j)th element of a generalized second derivative of J, denoted Δ²_{i,j} J,

Δ²_{i,j} J(x) = (Δ_i x^f)^T P_f^{-1} (Δ_j x^f) + [H(x + Δ_i x^f) − H(x)]^T R^{-1} [H(x + Δ_j x^f) − H(x)],  (11)
where {Δ²_{i,j} = Δ_i(Δ_j) : i, j ≤ N_E}. If we define an N_O × N_E observation perturbation matrix Z : E → O,

Z(x) = [z_1(x), z_2(x), ..., z_{N_E}(x)];  z_i(x) = R^{-1/2} [H(x + p_i^f) − H(x)],  (12)

the generalized first and second derivatives are

∇_G J(x) = P_f^{-1/2} (x − x^f) − (Z(x))^T R^{-1/2} [y − H(x)],  (13)

∇²_G J(x) = I + (Z(x))^T Z(x).  (14)

Note that the generalized first derivative is an N_E-dimensional vector, and the generalized second derivative is an N_E × N_E matrix, i.e. both are defined in ensemble space E. Equations (13) and (14) are not approximations to the true derivatives since all nonlinear terms are included in the matrix Z(x). In the absence of better terminology, the term derivative is used only to indicate that for a differentiable cost function and for small perturbations p_i^f, Equations (13) and (14) would reduce to finite-difference approximations of directional derivatives. The similarity of the generalized gradient (Equation (13)) with the generalized gradient in the subgradient method (e.g. Zhang et al., 2000), [∂J(x)]^T Δ_i x^f, reveals that the formulation adopted here does satisfy the requirement for the subgradient, i.e.

J(x + Δ_i x^f) ≥ J(x) + [∇_G J(x)]_i,

since ∇²_G J(x) ≥ 0 for all Δ_i x^f. However, in our formulation the increments Δ_i x^f are included in the nonlinear perturbation of the observation operator through Equation (12), and thus cannot be separated into the subgradient and the perturbation. In other words, our method includes the minimization space span (or basis) vectors as inseparable components of the generalized derivative definition. Since the MLEF formulation employs forward finite differences (i.e. increments), it is interesting to compare the MLEF derivatives (Equations (13)–(14)) with the finite-difference approximations of directional derivatives. For example, the finite-difference representation of the first derivative in the direction Δ_i x is J(x + Δ_i x) − J(x).
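Equations (12)–(14) involve only function evaluations, so they translate directly into ensemble-space code. In the sketch below the first term of Equation (13) is evaluated with a pseudo-inverse standing in for P_f^{-1/2} of a rectangular square-root matrix; that reading, and all names, are our assumptions for illustration:

```python
import numpy as np

def generalized_derivatives(H, x, x_f, Pf_sqrt, R_inv_sqrt, y):
    """Generalized gradient (Eq. (13)) and Hessian (Eq. (14)) in ensemble space.

    H needs only to be evaluable, not differentiable. Pf_sqrt holds the
    forecast span-vectors p_i^f as columns; R_inv_sqrt is R^{-1/2}.
    """
    Hx = H(x)
    # Observation perturbation matrix Z(x), Equation (12): full nonlinear increments
    Z = np.stack([R_inv_sqrt @ (H(x + p) - Hx) for p in Pf_sqrt.T], axis=1)
    # Generalized gradient, Equation (13); pinv plays the role of P_f^{-1/2}
    grad = np.linalg.pinv(Pf_sqrt) @ (x - x_f) - Z.T @ (R_inv_sqrt @ (y - Hx))
    # Generalized Hessian, Equation (14): symmetric, positive definite by construction
    hess = np.eye(Z.shape[1]) + Z.T @ Z
    return grad, hess

H = lambda x: x**2                     # toy observation operator
x = x_f = np.array([1.0, 2.0])
grad, hess = generalized_derivatives(H, x, x_f, 0.5 * np.eye(2), np.eye(2), H(x))
```

At x = x^f with perfect observations (y = H(x)) the generalized gradient vanishes, while the Hessian I + Z^T Z remains well conditioned, which is what makes the preconditioning of the next subsection possible.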
Since the finite-difference approximation to derivatives is a linear approximation, the critical requirement for the finite-difference approximation is that the perturbation Δ_i x is small (Wang et al., 1995). If this is satisfied, one can see that there is equivalence between the first derivative in the MLEF formulation, the finite-difference approximation to the first derivative, and the exact first derivative (if it exists). For large perturbations, however, there is no equivalence between these various formulations of the first derivative. This is exactly the situation in which the MLEF is expected to perform well, since it does not include any restrictions regarding the size of perturbations. For the second derivative there is also a similar equivalence for very small perturbations, but no equivalence for large perturbations. Therefore, in general the MLEF is not equivalent to the finite-difference approximation to derivatives. Only for small perturbations, when linear approximations are valid, is the MLEF formulation conveniently reduced to standard directional derivatives, including the finite-difference approximation.

Generalized Hessian preconditioning

A common starting point of minimization is the state vector x = x^f (corresponding to w = 0 in Equation (6)), since it represents the best knowledge of the dynamical state prior to taking into account the observations. Since the optimal preconditioning is defined as an inverse square-root Hessian matrix (e.g. Axelsson and Barker, 1984), one can utilize Equation (14) to define Hessian preconditioning as a change of variable

w = [∇²_G J(x^f)]^{-1/2} ζ = [I + (Z(x^f))^T Z(x^f)]^{-1/2} ζ,  (15)

where ζ ∈ E is the control vector of dimension N_E, and Z(x^f) is obtained by substituting x = x^f in Equation (12).
As explained in Zupanski (2005), and equivalent to the procedure used in the ensemble transform Kalman filter (ETKF; Bishop et al., 2001), one can perform an eigenvalue decomposition of (Z(x^f))^T Z(x^f) to obtain [∇²_G J(x^f)]^{-1/2} = V (I + Λ)^{-1/2} V^T, where V is the eigenvector matrix and Λ is the eigenvalue matrix of (Z(x^f))^T Z(x^f). Note that the MLEF transformation calculates a symmetric square-root matrix, corresponding to the ETKF transform with simplex improvement (Wang et al., 2004; Wei et al., 2006). By combining Equations (6) and (15), one obtains the generalized Hessian preconditioning in state space, in the form of the change of variable

x − x^f = G^{-1/2} ζ;  G^{-1/2} = P_f^{1/2} [I + (Z(x^f))^T Z(x^f)]^{-1/2}.  (16)

The matrix G^{-1/2} : E → R^{N_S} is an N_S × N_E matrix, and it represents the inverse of the square-root generalized Hessian matrix estimated at the initial point of minimization. Once the Hessian preconditioning is accomplished, one can begin with the calculation of preconditioned generalized gradients. An iterative minimization produces ζ_k at iteration k according to ζ_k = ζ_{k-1} + α_{k-1} d_{k-1}, where α_{k-1} ∈ R and d_{k-1} ∈ E are the step-length and the descent direction at the (k − 1)th iteration, respectively. Using the change of variable (16), the state vector at the kth minimization iteration is related to the control vector as

x_k = x^f + G^{-1/2} ζ_k.  (17)
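The change of variable (15)–(16) can be realized with a dense eigendecomposition, which is cheap because it acts in the N_E-dimensional ensemble space; a sketch (function names are ours):

```python
import numpy as np

def precondition(Pf_sqrt, Z_f):
    """Generalized Hessian preconditioning, Equations (15)-(16).

    Builds the symmetric inverse square root V (I + L)^{-1/2} V^T from the
    eigenpairs of Z(x^f)^T Z(x^f), then maps it into state space:
    G^{-1/2} = P_f^{1/2} [I + Z^T Z]^{-1/2}.
    """
    lam, V = np.linalg.eigh(Z_f.T @ Z_f)              # lam >= 0, V orthonormal
    inv_sqrt = V @ np.diag((1.0 + lam) ** -0.5) @ V.T  # symmetric square root
    return Pf_sqrt @ inv_sqrt, inv_sqrt

Z_f = np.array([[3.0**0.5, 0.0], [0.0, 0.0]])          # toy Z(x^f)
G_inv_sqrt, T = precondition(np.eye(2), Z_f)
```

Using the symmetric (V ... V^T) root, rather than V (I + Λ)^{-1/2} alone, is what the text above identifies with the ETKF transform with simplex improvement.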
The preconditioned generalized gradient at the kth minimization iteration is obtained by employing the Hessian preconditioning formulation (16) and evaluating (13) at x_k:

∇_G J(x_k) = [I + (Z(x^f))^T Z(x^f)]^{-1} ζ_k − (Z(x_k))^T R^{-1/2} [y − H(x_k)],  (18)

where Z(x_k) is obtained by substituting x = x_k in Equation (12).

Analysis error covariance

In order to complete the non-differentiable formulation of the MLEF, an analysis (e.g. posterior) error covariance matrix, quantifying the uncertainties of the analysis, is required. The equivalence between the inverse Hessian at the optimal point and the posterior error covariance (e.g. Fisher and Courtier, 1995; Veersé, 1999) is exploited in the MLEF algorithm. A more detailed examination of the relation between the inverse Hessian and the analysis error covariance in nonlinear problems can be found in Gejadze et al. (2007). Since the generalized Hessian in ensemble space is given by Equation (14), the analysis (posterior) error covariance in ensemble space is defined as

(P_w)^a = (∇²_G J(x^a))^{-1} = [I + (Z(x^a))^T Z(x^a)]^{-1},  (19)

where Z(x^a) is obtained by substituting x = x^a in Equation (12). The analysis error covariance in state space can be obtained by utilizing the change of variable (6) to define the true and optimal state vectors, x^t and x^a, as x^t − x^f = P_f^{1/2} w^t and x^a − x^f = P_f^{1/2} w^a, where w^t and w^a are the true and optimal control vectors in ensemble space, respectively. Then, the error of the state vector is related to the error of the control vector in ensemble space according to

x^a − x^t = P_f^{1/2} (w^a − w^t).  (20)

By taking the mathematical expectation of an outer product of (20), and utilizing (19), one obtains the analysis error covariance in state space

P^a = P_f^{1/2} [I + (Z(x^a))^T Z(x^a)]^{-1} P_f^{T/2}.  (21)

As suggested in section 2.2, the columns of the square-root analysis error covariance, denoted P_a^{1/2}, are used to define the initial perturbations for the ensemble forecast.
Then, the matrix P_a^{1/2} can be written in column form as

P_a^{1/2} = [p_1^a, p_2^a, ..., p_{N_E}^a];  p_i^a = (P_f^{1/2} [I + (Z(x^a))^T Z(x^a)]^{-1/2})_i.  (22)

The matrix P_a^{1/2} is an N_S × N_E matrix. In principle, instead of the relation (19) for the inverse generalized Hessian in ensemble space, one could use the BFGS inverse Hessian update (e.g. Veersé, 1999), or some other estimate of the inverse Hessian at the optimal point (e.g. Gejadze et al., 2007). The expression (19) is currently used in the MLEF algorithm.

3. Non-differentiable minimization algorithms

In this section, two non-differentiable minimization algorithms generalized using the derivation from section 2 will be formulated. The first algorithm is the generalized nonlinear conjugate-gradient minimization algorithm, presented for both the Fletcher–Reeves and the Polak–Ribière formulations (e.g. Luenberger, 1984).

Algorithm 1 (Generalized nonlinear conjugate-gradient)
  Choose starting point x_0 = x^f, ζ_0 = 0, and define d_0 = −(∇_G J)_0 = −∇_G J(x_0); k ← 0;
  repeat
    Update ζ_{k+1} = ζ_k + α_k d_k and x_{k+1} = x^f + G^{-1/2} ζ_{k+1}, where α_k minimizes J(x^f + G^{-1/2} ζ_{k+1});
    Compute (∇_G J)_{k+1} = ∇_G J(x_{k+1});
    Set d_{k+1} = −(∇_G J)_{k+1} + β_k d_k, where
      β_k = (∇_G J)_{k+1}^T (∇_G J)_{k+1} / [(∇_G J)_k^T (∇_G J)_k]  (Fletcher–Reeves), or
      β_k = [(∇_G J)_{k+1} − (∇_G J)_k]^T (∇_G J)_{k+1} / [(∇_G J)_k^T (∇_G J)_k]  (Polak–Ribière);
    k ← k + 1;
  until convergence.

The second minimization algorithm is the generalized BFGS quasi-Newton algorithm developed from the differentiable form (Nocedal, 1980; Liu and Nocedal, 1989). The limited-memory formulation is a straightforward extension, obtained by discarding some terms in the inverse Hessian (e.g. Nocedal and Wright, 1999).
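Both generalized schemes run entirely in the N_E-dimensional ensemble space, so they can be sketched compactly. Below, the Fletcher–Reeves branch of Algorithm 1 with a crude backtracking line search standing in for the paper's (unspecified) simple line search, and the BFGS inverse-Hessian recursion of the quasi-Newton scheme in its equivalent one-step form H_{k+1} = υ_k^T H_k υ_k + ρ_k s_k s_k^T (with H_0 = I this unrolls to the explicit product form). All function names and tolerances are ours:

```python
import numpy as np

def fletcher_reeves(J, gradJ, zeta0, alpha0=1.0, max_iter=500, tol=1e-10):
    """Generalized nonlinear conjugate gradient (Algorithm 1, FR branch)."""
    zeta, g = zeta0.copy(), gradJ(zeta0)
    d = -g
    for _ in range(max_iter):
        a, J0 = alpha0, J(zeta)
        while J(zeta + a * d) > J0 and a > 1e-12:   # backtracking line search
            a *= 0.5
        zeta = zeta + a * d
        g_new = gradJ(zeta)
        if g_new @ g_new < tol:
            break
        beta = (g_new @ g_new) / (g @ g)            # Fletcher-Reeves beta_k
        d, g = -g_new + beta * d, g_new
    return zeta

def bfgs_update(Hk, s, ygrad):
    """One step of the inverse-Hessian recursion of the quasi-Newton scheme."""
    rho = 1.0 / (ygrad @ s)
    v = np.eye(len(s)) - rho * np.outer(ygrad, s)   # v_k = I - rho_k y_k s_k^T
    return v.T @ Hk @ v + rho * np.outer(s, s)

# Quadratic test problem: J = 0.5 z^T A z - b^T z, minimizer A^{-1} b = [1, 0.5]
A, b = np.diag([1.0, 4.0]), np.array([1.0, 2.0])
zeta = fletcher_reeves(lambda z: 0.5 * z @ A @ z - b @ z,
                       lambda z: A @ z - b, np.zeros(2))
s = np.array([1.0, 1.0])
H1 = bfgs_update(np.eye(2), s, A @ s)
```

On a quadratic the BFGS update satisfies the secant condition H_{k+1} y_k = s_k exactly, which is a convenient sanity check for any implementation of the recursion.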
Algorithm 2 (Generalized quasi-Newton)
  Choose starting point x_0 = x^f, ζ_0 = 0, and define d_0 = −(∇_G J)_0 = −∇_G J(x_0); k ← 0;
  repeat
    Update ζ_{k+1} = ζ_k + α_k d_k and x_{k+1} = x^f + G^{-1/2} ζ_{k+1}, where α_k minimizes J(x^f + G^{-1/2} ζ_{k+1});
    Compute (∇_G J)_{k+1} = ∇_G J(x_{k+1});
    Set s_k = ζ_{k+1} − ζ_k and y_k = (∇_G J)_{k+1} − (∇_G J)_k;
    Set ρ_k = 1 / (y_k^T s_k) and υ_k = (I − ρ_k y_k s_k^T);
    Compute H_{k+1} = υ_k^T υ_{k-1}^T ··· υ_0^T υ_0 ··· υ_{k-1} υ_k + υ_k^T ··· υ_1^T ρ_0 s_0 s_0^T υ_1 ··· υ_k + ··· + υ_k^T ρ_{k-1} s_{k-1} s_{k-1}^T υ_k + ρ_k s_k s_k^T;
    Set d_{k+1} = −H_{k+1} (∇_G J)_{k+1};
    k ← k + 1;
  until convergence.

The above minimization algorithms show that the MLEF could be used as a stand-alone non-differentiable
minimization algorithm in applications other than ensemble data assimilation. In principle, there are two possible means to perform minimization using the MLEF: (i) if relevant directions are known, the MLEF can be used as a reduced-rank minimization algorithm in the subspace spanned by the relevant directions, and (ii) if relevant directions are not known, one can define a basis representing the full space and use it to perform a regular, full-rank minimization with the MLEF. If there are means to define the set of relevant directions, computational savings due to the reduced-rank formulation would make this option advantageous. On the other hand, the full-rank option (ii) is a straightforward extension of the standard conjugate-gradient and quasi-Newton algorithms, thus it may be easier to apply in principle. In this paper we chose option (i), since the columns of the square-root forecast-error covariance represent the relevant directions. In the above formulations we did not specify the line-search algorithm. Although this is an important aspect of non-differentiable minimization, in our current implementation a simple line-search method is used (e.g. Derber, 1989), which involves one function evaluation per minimization iteration. The same line-search method was used in all experiments. The computational cost of the MLEF minimization is N_E + 2 function evaluations per minimization iteration. This estimate includes the gradient and the Hessian calculations. The computational cost increases almost linearly with the number of ensembles. As for other ensemble data assimilation algorithms, parallel processing can significantly increase the speed of the MLEF minimization since the communication between processors is almost negligible. On the other hand, the computational cost of the gradient-based minimization depends mostly on the cost of the gradient calculation. In our implementation of the gradient-based method, the cost per iteration is 2N_E + 2, i.e.
about two times more than the MLEF cost. In meteorological applications, the adjoint model (e.g. LeDimet and Talagrand, 1986) is typically used to compute the gradient. Its cost per iteration is about 2–5 function evaluations, depending on the complexity of the operator. This means that the cost per minimization iteration is 4–7 function evaluations. In typical operational meteorological applications (without the Hessian calculation) there are about … minimization iterations, making the cost of minimization about … function evaluations. For similar complexity of the problem, the MLEF minimization requires only one minimization iteration. This approximate calculation makes the cost of the gradient-based and MLEF minimization similar for an ensemble size of …. With the Hessian calculation included, the number of iterations would be smaller, but the total cost would likely increase. One should keep in mind, however, that the actual cost would strongly depend on the complexity of the problem and on the computational capabilities.

4. Experimental design and results

Two nonlinear observation operators will be tested, a quadratic and a cubic operator. In addition, each of the operators will have a differentiable and a non-differentiable form. The differentiable observation operators are defined as

(a) H(x) = x²;  (b) H(x) = x³,  (23)

while the corresponding non-differentiable observation operators are defined as

(a) H(x) = x² for x ≥ 0.5, −x² for x < 0.5;
(b) H(x) = x³ for x ≥ 0.5, −x³ for x < 0.5.  (24)

The non-differentiable operators given by Equation (24) are shown in Figure 1. Although they may appear relatively simple, both observation operators do have discontinuities in the function and its derivatives. It is interesting to note that the quadratic non-differentiable operator has a more pronounced discontinuity than the cubic operator. This makes the use of the quadratic non-differentiable operator more challenging for minimization.
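The non-differentiable forms in Equation (24) and the relative jump sizes at x = 0.5 can be checked directly; the sign of the lower branch is reconstructed here as negative, consistent with the jump visible in Figure 1 (an assumption noted in the code):

```python
import numpy as np

def H_quad(x):
    """Non-differentiable quadratic operator, Equation (24a);
    lower-branch sign assumed negative."""
    return np.where(x >= 0.5, x**2, -x**2)

def H_cubic(x):
    """Non-differentiable cubic operator, Equation (24b);
    lower-branch sign assumed negative."""
    return np.where(x >= 0.5, x**3, -x**3)

# Jump of each function across the discontinuity at x = 0.5
jump_quad = H_quad(np.array(0.5)) - H_quad(np.array(0.5 - 1e-9))
jump_cubic = H_cubic(np.array(0.5)) - H_cubic(np.array(0.5 - 1e-9))
```

The quadratic jump (≈ 0.5) is twice the cubic one (≈ 0.25), matching the remark above that the quadratic non-differentiable operator is the harder case for minimization.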
The opposite is expected for differentiable operators, since the quadratic operator is less nonlinear than the cubic operator.

Figure 1. Non-differentiable observation operators: (a) quadratic (full line) and (b) cubic (dashed line). The discontinuous point at x = 0.5 represents a discontinuity in the function and its first and second derivatives.

The prediction model we use is the one-dimensional Burgers equation (Burgers, 1948)

∂u/∂t + u ∂u/∂x = ν ∂²u/∂x²,  (25)

where ν is a viscosity coefficient and u is the velocity. This equation is often used in fluid dynamics for
simulation of nonlinear waves, shock formation, and turbulence. Equation (25) is solved numerically using centred space differences and a Lax–Wendroff time-integration scheme (Fletcher, 1991). In this paper we use the Burgers equation to simulate a propagating shock wave (Akella and Navon, 2006). The dimension of the state vector is 81. We conduct a twin model experiment, in which the prediction from one set of initial conditions is defined as truth (denoted TRUE), while the prediction from a different set of initial conditions is defined as experimental (denoted EXP). In our case the EXP initial conditions are defined as a 40-time-step old TRUE forecast. The TRUE and EXP initial conditions are shown in Figure 2. The figure indicates that the velocity values are typically between 0 and 1, and that the forecast lag is about 20 grid points. Also, one can notice a steep velocity gradient that is simulating a shock wave. The observations are created by adding Gaussian random perturbations to the TRUE model first guess, H(x^true), with zero mean and standard deviations of … (when using the quadratic observation operator) and … (when using the cubic observation operator). Random perturbations are added at each grid point, implying that there are 81 observations. The time frequency of observations is 20 model time steps, which also defines the length of the data assimilation cycle. We create observations during 20 data assimilation cycles, but the most relevant adjustments happen during the first 5–10 data assimilation cycles. All data assimilation experiments are done with 4 ensemble members, without error covariance localization and/or inflation. The initial ensemble perturbations are defined using lagged (time-shifted) forecasts that correspond to the EXP model run, centred about the initial time of data assimilation cycle No. 1. This approach employs the so-called ergodic hypothesis, in which the time differences are used to represent the spatial differences.
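The paper cites centred differences with a Lax–Wendroff scheme (Fletcher, 1991) but does not spell out the discretization, so the following Richtmyer two-step Lax–Wendroff treatment of the advective term with explicit centred diffusion is only an illustrative stand-in; the grid size 81 matches the text, while ν, Δt, the boundary treatment and the initial ramp are our choices:

```python
import numpy as np

def burgers_step(u, dt, dx, nu):
    """One time step of Equation (25): Richtmyer two-step Lax-Wendroff for
    u u_x = (u^2/2)_x, plus explicit centred diffusion.
    Boundary values are held fixed (illustrative choice)."""
    f = 0.5 * u**2
    # half-step values at cell interfaces j+1/2
    uh = 0.5 * (u[:-1] + u[1:]) - 0.5 * dt / dx * (f[1:] - f[:-1])
    fh = 0.5 * uh**2
    un = u.copy()
    un[1:-1] = (u[1:-1] - dt / dx * (fh[1:] - fh[:-1])
                + nu * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
    return un

N = 81                                         # state dimension used in the paper
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
u = 0.5 * (1.0 - np.tanh((x - 0.3) / 0.05))    # smooth ramp steepening into a shock
for _ in range(40):
    u = burgers_step(u, dt=0.4 * dx, dx=dx, nu=0.005)
```

With velocities between 0 and 1 the choice Δt = 0.4 Δx keeps the advective CFL number at 0.4 and the diffusion number ν Δt / Δx² well below the explicit stability limit.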
This initial set-up creates dynamically balanced perturbations, thus a less noisy initial error covariance. In all other data assimilation cycles, the ensemble perturbations are updated using Equation (22).

Figure 2. The truth (TRUE) and the experimental (EXP) initial conditions before data assimilation. The true state (dashed line) was used to initiate the true forecast from which the observations are created, while the initial conditions in the experiment are defined as a 20-point shift of the truth (solid line), obtained as a lagged forecast.

In order to test the non-differentiable and/or nonlinear minimization performance, for each of the two observation operators we conduct two data assimilation experiments using the Fletcher–Reeves nonlinear conjugate-gradient algorithm described in the previous section: (i) with generalized Hessian preconditioning and generalized gradient (Equations (16) and (18), respectively), and (ii) with regular derivatives. The first experiment is denoted MLEF, since this is the standard form of the MLEF algorithm, and the second experiment is denoted GRAD to reflect its gradient-based characteristics. The regular (e.g. GRAD) derivatives are obtained by employing a linear approximation

(∂H/∂x) p_i^f ≈ H(x + p_i^f) − H(x)

in the definition of the observation perturbation matrix (Equation (12)), i.e. by using

z_i(x) = R^{-1/2} (∂H/∂x) p_i^f

in Equations (16) and (18). The difference between the observation operator gradients in the MLEF and the GRAD formulations comes from the higher-order terms in the Taylor expansion, assuming that the function H(x) is differentiable. Let us consider the cubic observation operator (Equation (23b)) as an example. By direct substitution, one can see that the difference between the MLEF and the GRAD formulations is 3x(p_i^f)² + (p_i^f)³. For nonlinear problems, the perturbation p_i^f can be large, thus making the difference between the gradients large.
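For the differentiable cubic operator the gap just quoted is exact: (x + p)³ − x³ − 3x²p = 3xp² + p³, so the full increment used by the MLEF and the linearized GRAD increment separate as soon as p is not small. A quick check:

```python
import numpy as np

x, p = 0.4, 1.0
H = lambda u: u**3                 # differentiable cubic operator, Equation (23b)

full_increment = H(x + p) - H(x)   # what the MLEF uses (Equation (12))
linearized = 3.0 * x**2 * p        # what GRAD uses: (dH/dx) p
gap = full_increment - linearized  # higher-order terms 3 x p^2 + p^3
```

For |p| ≪ 1 the gap shrinks like O(p²) and the two formulations agree, which is the small-perturbation equivalence noted in section 2.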
The difference becomes even more relevant for non-differentiable operators near the discontinuous point. Let us consider an example with x = 0.4 and p_i^f = 1.0 (such that the perturbed operator crosses the discontinuity), again using the cubic observation operator as an example. Following from the above considerations and from Equation (24b), the MLEF gradient at the point x = 0.4 is equal to H(x + p_i^f) − H(x) = (1.4)³ − (0.4)³ = 2.68, while the GRAD gradient is equal to (∂H/∂x) p_i^f = −3x² p_i^f = −3 × (0.4)² × 1.0 = −0.48. This implies that the MLEF and the GRAD would have gradients in the opposite directions! The experimental results will indicate which of the two formulations has an advantage. Before we show the results, however, it should be noted that for the cubic observation operator, the maximum allowed size of the control variable adjustment had to be restricted
in the GRAD experiment, in order to prevent the minimization divergence. In the MLEF minimization this was not necessary, presumably because the algorithm does not rely on small perturbations in the Taylor expansion.

4.1. Non-differentiable observation operator

The results of the two algorithms over all 20 data assimilation cycles, using the non-differentiable observation operators (Equation (24)), are presented in terms of the analysis root-mean-squared (RMS) errors in Figure 3. Knowing the true state, x^true, the analysis RMS error is calculated as

RMS = [ (1/N_S) Σ_{i=1}^{N_S} (x_i^a − x_i^true)² ]^(1/2).    (26)

For both observation operators the analysis RMS error in the MLEF experiment is smaller than in the GRAD experiment. For the quadratic observation operator, this advantage is less obvious than for the cubic observation operator. This could be related to the more pronounced discontinuity noted for the quadratic non-differentiable observation operator (e.g. Figure 1). For the cubic observation operator, the MLEF analysis errors are 2-3 times smaller than the GRAD errors during the first several cycles, eventually reaching a 7 times smaller value in later cycles. However, all experiments eventually achieve negligible errors when the shock wave exits the right boundary of the integration domain.

An example of the difficulties of the GRAD estimation of the gradient is illustrated in Figure 4. The directional gradients for the cubic non-differentiable operator and for ensemble member 2 are shown in the first minimization iteration of the first data assimilation cycle. A vertical dashed line, separating the two regions of the velocity U, indicates the discontinuous point. In the region where U < 0.5, the GRAD creates a gradient of the opposite sign to the MLEF gradient, as suggested in our discussion at the beginning of this section.
In the region where U > 0.5, gradients often change sign, sometimes having an opposite sign as well, and the GRAD gradient is generally of larger magnitude than the MLEF gradient. The linear approximation used in GRAD creates an apparent disadvantage of the gradient-based method, as implied by the larger analysis RMS errors (Figure 3).

More details of the performance can be seen in Figure 5, which shows the velocity analysis differences between the MLEF and GRAD experiments for the cubic non-differentiable observation operator. Only the first four data assimilation cycles are shown, since the most important velocity adjustment occurs during these cycles. One can see that the analysis errors are systematically smaller in the MLEF experiment. It is also interesting to note that the analysis errors become more localized as new observations are assimilated, indicating the self-localizing characteristic of the MLEF. By cycle 4, the MLEF achieves a five times smaller maximum analysis error than the GRAD experiment. Since the only difference between the two experiments is in the minimization procedure, this indicates a superior performance of the MLEF non-differentiable minimization.

In order to further examine the impact of the MLEF non-differentiable minimization algorithm, the cost function and the gradient norm in the first data assimilation cycle are shown in Figures 6 and 7, for the quadratic and for the cubic non-differentiable operators, respectively. The gradient norm is defined as ‖g‖ = (gᵀg)^(1/2), where g denotes

Figure 3. Analysis root-mean-squared (RMS) error in the non-differentiable observation operator examples: (a) quadratic, and (b) cubic: MLEF results (solid line), and the gradient-based method results (GRAD, dashed line).

Figure 4. Directional gradient in the GRAD (dashed line) and the MLEF (solid line) experiments. The discontinuous point at model value U = 0.5 is denoted by a vertical dotted line.
The encapsulated text boxes define the values of the model variable U.
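The opposite-sign behaviour discussed above, and visible in Figure 4, can be reproduced with a toy discontinuous operator. Since Equation (24b) itself is not reproduced in this excerpt, the jump below is an assumed, illustrative one:

```python
def H_jump(x):
    # assumed discontinuous cubic: x^3 with an illustrative jump of -4 at x = 0.5
    # (a stand-in for the paper's Equation (24b), which is not reproduced here)
    return x ** 3 if x < 0.5 else x ** 3 - 4.0

x, p = 0.4, 1.0  # the perturbation carries x + p across the discontinuity
mlef_gradient = H_jump(x + p) - H_jump(x)  # finite-difference (generalized) increment
grad_gradient = 3.0 * x ** 2 * p           # one-sided linearized slope, blind to the jump
# the finite difference sees the jump and changes sign; the linearization does not
assert mlef_gradient < 0.0 < grad_gradient
```

The generalized increment samples the operator on both sides of the discontinuity, while the linearized derivative only ever sees the smooth branch containing x.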
the gradient. Note that the gradient norm refers to the generalized gradient norm in the case of the MLEF. The results are shown for the first 20 minimization iterations, since the later iterations did not bring any relevant change.

Figure 5. The state vector analysis errors x^a − x^t in the first four data assimilation cycles, for the MLEF (solid line) and the GRAD (dashed line) experiments: (a) after cycle No. 1, (b) after cycle No. 2, (c) after cycle No. 3, and (d) after cycle No. 4.

For the quadratic operator, the cost functions in the MLEF and the GRAD experiments become almost the same (Figure 6(a)). The gradient norm (Figure 6(b)), however, shows a better convergence of the MLEF minimization, with the gradient norm decreasing by several orders of magnitude. One can note the irregular behaviour of the cost function, with several jumps, as well as of the gradient norm. One can see that the cost function jumps match the gradient norm jumps, suggesting that the GRAD minimization has difficulties due to the gradient estimation. For the cubic operator, the GRAD cost function decreased by one order of magnitude, while the MLEF cost function decreased by more than three orders of magnitude (Figure 7(a)). The gradient norm indicates a serious problem in the GRAD minimization, without an obvious reduction, while in the MLEF minimization the gradient norm was reduced by almost five orders of magnitude (Figure 7(b)).

4.2. Differentiable observation operator

One would expect that both experiments, especially the GRAD minimization, would perform better if the non-differentiability of the observation operator were removed.
In order to test this assumption, and to further examine the differences between the MLEF and GRAD minimization algorithms, we repeated similar experiments to those in section 4.1, except using the cubic differentiable observation operator given by Equation (23b). As in section 4.1, we concentrate on the minimization performance in the first data assimilation cycle. The results of the MLEF and GRAD minimization algorithms are shown in Figure 8, in terms of the cost function and the gradient norm. One can see somewhat similar differences as in Figure 7, again indicating a superior MLEF minimization performance. A closer inspection of the cost functions in Figures 7(a) and 8(a) shows that differentiability did indeed help. The MLEF minimization produced faster convergence than in the non-differentiable operator case, and the shape of the cost function decrease indicates a well-behaved minimization without the difficulties observed in cycles 2-9
of Figure 7(a). The cost function becomes flat after only 3-4 iterations, while in the non-differentiable case 9-10 iterations were necessary. A careful inspection of Figures 7 and 8 indicates that the GRAD cost function shows a better performance than in the non-differentiable case, mostly by continuing to decrease throughout the iterations, rather than reaching saturation as in the non-differentiable case. The gradient norms also show some improvement compared to the non-differentiable case. In the GRAD minimization, there is a slight overall decrease of the gradient norm, compared with the gradient norm increase during iterations in the non-differentiable case. In the MLEF minimization, there is also a larger decrease of the gradient norm, by almost one more order of magnitude.

5. Conclusions

A new derivation of the MLEF algorithm is presented. It is shown that the same final equations as in the original formulation can be obtained without assuming differentiability and linearity of the prediction model and observation operators, as would typically be done using the Taylor expansion. In order to generalize the nonlinear conjugate-gradient and quasi-Newton minimization algorithms, we introduced a generalized gradient and a generalized Hessian as non-differentiable equivalents of the standard gradient and Hessian. For a linear and differentiable operator H, the generalized gradient and Hessian formulations reduce to directional first and second derivatives in the direction of pᵢ (e.g. Gill et al., 1981). An implicit inclusion of higher-order nonlinear terms in the non-differentiable MLEF algorithm is important for nonlinear observation operators, being more accurate, but also by allowing larger perturbations to be included in the minimization.
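The point that only the gradient (and preconditioning) computation changes can be made concrete: in a Fletcher-Reeves loop the gradient enters only as a callable, so a generalized (finite-difference) gradient can be swapped in without touching the iteration itself. A minimal sketch on a toy quadratic cost, with an illustrative fixed step in place of the paper's line search (names are assumptions, not from the paper):

```python
def fletcher_reeves(x, grad, iters=20, step=0.1):
    # standard Fletcher-Reeves nonlinear conjugate gradient; 'grad' may return
    # either a regular derivative or a generalized (finite-difference) gradient
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(iters):
        x = [xi + step * di for xi, di in zip(x, d)]
        g_new = grad(x)
        beta = sum(gi * gi for gi in g_new) / max(sum(gi * gi for gi in g), 1e-30)
        d = [-gi + beta * di for gi, di in zip(g_new, d)]
        g = g_new
    return x

# toy cost J(x) = 0.5 * sum(x_i^2), minimized with a generalized one-sided
# finite-difference gradient (no analytic derivative is ever formed)
cost = lambda x: 0.5 * sum(xi * xi for xi in x)
eps = 1e-6
fd_grad = lambda x: [(cost(x[:i] + [x[i] + eps] + x[i + 1:]) - cost(x)) / eps
                     for i in range(len(x))]
x_min = fletcher_reeves([1.0, -2.0], fd_grad)
```

The convergence diagnostic plotted in Figures 6-8 is the usual ‖g‖ = (gᵀg)^(1/2), applied to whichever gradient, regular or generalized, the callable returns.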
Therefore, the MLEF system has a potential to work with challenging prediction models and observation operators encountered in geophysical and related applications, in principle non-differentiable and/or nonlinear functions of the state vector.

Figure 6. Minimization in the quadratic non-differentiable observation operator example: (a) cost function, and (b) gradient norm: MLEF results (solid line), and the gradient-based method results (GRAD, dashed line).

Figure 7. As Figure 6, but in the cubic non-differentiable observation operator example.

The data assimilation results with two minimization algorithms, one being the MLEF and the other being a gradient-based minimization, indicate a clear
advantage of the MLEF algorithm. For both the differentiable and non-differentiable observation operators the MLEF was advantageous, showing a robust performance.

Two minimization algorithms based on the MLEF are schematically presented: the generalized nonlinear conjugate-gradient and the generalized quasi-Newton minimization algorithms. The algorithmic advantage of the generalized minimization methods is that no changes to the original unconstrained minimization algorithms are necessary. Only the calculation of the generalized gradient and of the generalized Hessian preconditioning is changed, to reflect the non-differentiable and nonlinear character of the new method.

The presented MLEF minimization algorithm is directly applicable to ensemble data assimilation methods, and to medium-size minimization problems with dimensions of up to O(10^4). For high-dimensional operational applications, for example within variational data assimilation methods, there are possible extensions of the generalized minimization algorithm using multi-grid methods and decomposition into local domains. The particular strategy would depend on the actual minimization problem and the computational environment.

In future work we plan to examine the MLEF performance as a non-differentiable minimization algorithm in more complex applications, such as the assimilation of cloud and microphysics observations, inherently nonlinear and potentially non-differentiable.

Figure 8. As Figure 7, but for the differentiable observation operator.

Acknowledgements

This work was supported by the National Science Foundation Collaboration in Mathematical Geosciences Grants ATM and ATM, and by the NASA Precipitation Measurement Mission Program under Grant NNX07AD75G.
Our gratitude is also extended to the National Center for Atmospheric Research, which is sponsored by the National Science Foundation, for the computing time used in this research.

References

Abramov R, Majda AJ. Quantifying uncertainty for non-Gaussian ensembles in complex systems. SIAM J. Sci. Stat. Comput. 26:
Akella S, Navon IM. A comparative study of the performance of high resolution advection schemes in the context of data assimilation. Int. J. Numer. Meth. Fluids 51:
Anderson JL. A local least squares framework for ensemble filtering. Mon. Weather Rev. 131:
Axelsson O, Barker VA. Finite-element solution of boundary-value problems: Theory and computation. Academic Press.
Bishop CH, Etherton BJ, Majumdar SJ. Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Mon. Weather Rev. 129:
Burgers JM. A mathematical model illustrating the theory of turbulence. In Advances in Applied Mechanics 1. Academic Press: New York.
Carrio GG, Cotton WR, Zupanski D, Zupanski M. Development of an aerosol retrieval method: Description and preliminary tests. J. Appl. Meteorol. Climatol. In press.
Clarke FH. Optimization and Non-smooth Analysis. John Wiley & Sons: New York.
Derber JC. A variational continuous assimilation technique. Mon. Weather Rev. 117:
Evensen G. Sequential data assimilation with a nonlinear quasigeostrophic model using Monte-Carlo methods to forecast error statistics. J. Geophys. Res. 99(C5):
Fisher M, Courtier P. Estimating the covariance matrices of analysis and forecast errors in variational data assimilation. Tech. Memo. 220. ECMWF: Reading, UK.
Fletcher CAJ. Computational Techniques for Fluid Dynamics: Fundamental and General Techniques. Springer: Berlin.
Fletcher SJ, Zupanski M. 2006a. A data assimilation method for lognormally distributed observation errors. Q. J. R. Meteorol. Soc. 132:
Fletcher SJ, Zupanski M. 2006b. A hybrid multivariate normal and lognormal distribution for data assimilation. Atmos. Sci. Lett.
7:
Gejadze I, Le Dimet FX, Shutyaev V. On error covariances in variational data assimilation. J. Num. Anal. Math. Model. 22:
Gill PE, Murray W, Wright MH. Practical Optimization. Academic Press: San Diego.
Greenwald TJ, Hertenstein R, Vukicevic T. An all-weather observational operator for radiance data assimilation with mesoscale forecast models. Mon. Weather Rev. 130:
Haven K, Majda A, Abramov R. Quantifying predictability through information theory: Small-sample estimation in a non-Gaussian framework. J. Comput. Phys. 206:
Homescu C, Navon IM. Optimal control of flow with discontinuities. J. Comput. Phys. 187:
Houtekamer PL, Mitchell HL. Data assimilation using an ensemble Kalman filter technique. Mon. Weather Rev. 126:
Jazwinski AH. Stochastic Processes and Filtering Theory. Academic Press: San Diego.
LeDimet FX, Talagrand O. Variational algorithms for analysis and assimilation of meteorological observations: Theoretical aspects. Tellus 38A:
Lemarechal C. Bundle methods in non-smooth optimization. In Proceedings of the IIASA Series 3, Lemarechal C, Mifflin R. (eds.) Pergamon Press: Oxford, UK.
Lemarechal C, Zowe J. A condensed introduction to bundle methods in nonsmooth optimization. In Algorithms for Continuous Optimization, Spedicato E. (ed.) Kluwer Academic Publishers: Dordrecht, The Netherlands.
Liu DC, Nocedal J. On the limited memory BFGS method for large scale optimization. Math. Programming 45:
Lorenc AC. Analysis methods for numerical weather prediction. Q. J. R. Meteorol. Soc. 112:
Luenberger DL. Linear and Non-linear Programming. Addison-Wesley.
Luksan L, Vlcek J. Algorithm 811: NDA: Algorithms for Nondifferentiable Optimization. ACM Trans. Math. Software 27:
Nesterov Y. Smooth minimization of non-smooth functions. Math. Program. 103:
Nocedal J. Updating quasi-Newton matrices with limited storage. Math. Comput. 35:
Nocedal J, Wright SJ. Numerical Optimization. Springer-Verlag: New York.
Ott E, Hunt BR, Szunyogh I, Zimin AV, Kostelich EJ, Corazza M, Kalnay E, Patil DJ, Yorke JA. A Local Ensemble Kalman Filter for atmospheric data assimilation. Tellus 56A:
Pham DT, Verron J, Roubaud MC. A singular evolutive extended Kalman filter for data assimilation in oceanography. J. Marine Sys. 16:
Rozier D, Birol F, Cosme E, Brasseur P, Brankart JM, Verron J. A reduced-order Kalman filter for data assimilation in physical oceanography. SIAM Rev. 49:
Tsuyuki T. Variational data assimilation in the tropics using precipitation data. Part III: Assimilation of SSM/I precipitation rates. Mon. Weather Rev. 125:
Uzunoglu B, Fletcher SJ, Zupanski M, Navon IM. Adaptive ensemble size inflation and reduction. Q. J. R. Meteorol. Soc. 133:
van Leeuwen PJ. A variance-minimizing filter for large-scale applications. Mon. Weather Rev. 131:
Veersé F. Variable-storage quasi-Newton operators as inverse forecast/analysis error covariance matrices in variational data assimilation. INRIA Research Report /29/84/PDF/RR-3685.pdf.
Verlinde J, Cotton WR. Fitting microphysical observations of non-steady convective clouds to a numerical model: An application of the adjoint technique of data assimilation to a kinematic model. Mon. Weather Rev. 121:
Wang X, Bishop CH, Julier SJ. Which is better, an ensemble of positive/negative pairs or a centered spherical simplex ensemble? Mon. Weather Rev. 132:
Wang Z, Navon IM, Zou X, LeDimet FX. A truncated Newton optimization algorithm in meteorological applications with analytic Hessian/vector products. Comput. Opt. Appl. 4:
Wei M, Toth Z, Wobus R, Zhu Y, Bishop CH, Wang X. Ensemble Transform Kalman Filter-based ensemble perturbations in an operational global prediction system at NCEP. Tellus 58A:
Whitaker JS, Hamill TM. Ensemble data assimilation without perturbed observations. Mon. Weather Rev. 130:
Xu Q, Gao J. Generalized adjoint for physical processes with parameterized discontinuities. Part VI: Minimization problems in multidimensional space. J. Atmos. Sci. 56:
Zhang S, Zou X, Ahlquist J, Navon IM, Sela JG. Use of differentiable and non-differentiable optimization algorithms for variational data assimilation with discontinuous cost functions. Mon. Weather Rev. 128:
Zupanski D. The effects of discontinuities in the Betts-Miller cumulus convection scheme on four-dimensional variational data assimilation in a quasi-operational forecasting environment. Tellus 45A:
Zupanski M. Maximum Likelihood Ensemble Filter: Theoretical aspects. Mon. Weather Rev. 133:
Zupanski D, Zupanski M. Model error estimation employing an ensemble data assimilation approach. Mon. Weather Rev. 134:
Zupanski M, Fletcher SJ, Navon IM, Uzunoglu B, Heikes RP, Randall DA, Ringler TD, Daescu DN. Initiation of ensemble data assimilation. Tellus 58A:
Zupanski D, Denning AS, Uliasz M, Zupanski M, Schuh AE, Rayner PJ, Peters W, Corbin KD. 2007a. Carbon flux bias estimation employing Maximum Likelihood Ensemble Filter (MLEF). J. Geophys. Res.
112: D17107, DOI: /2006JD.
Zupanski D, Hou AY, Zhang SQ, Zupanski M, Kummerow CD, Cheung SH. 2007b. Applications of information theory in ensemble data assimilation. Q. J. R. Meteorol. Soc. 133:
More informationCHAPTER 1: INTRODUCTION. 1.1 Inverse Theory: What It Is and What It Does
Geosciences 567: CHAPTER (RR/GZ) CHAPTER : INTRODUCTION Inverse Theory: What It Is and What It Does Inverse theory, at least as I choose to deine it, is the ine art o estimating model parameters rom data
More informationRATIONAL FUNCTIONS. Finding Asymptotes..347 The Domain Finding Intercepts Graphing Rational Functions
RATIONAL FUNCTIONS Finding Asymptotes..347 The Domain....350 Finding Intercepts.....35 Graphing Rational Functions... 35 345 Objectives The ollowing is a list o objectives or this section o the workbook.
More informationNumerical Methods - Lecture 2. Numerical Methods. Lecture 2. Analysis of errors in numerical methods
Numerical Methods - Lecture 1 Numerical Methods Lecture. Analysis o errors in numerical methods Numerical Methods - Lecture Why represent numbers in loating point ormat? Eample 1. How a number 56.78 can
More informationOBSERVER/KALMAN AND SUBSPACE IDENTIFICATION OF THE UBC BENCHMARK STRUCTURAL MODEL
OBSERVER/KALMAN AND SUBSPACE IDENTIFICATION OF THE UBC BENCHMARK STRUCTURAL MODEL Dionisio Bernal, Burcu Gunes Associate Proessor, Graduate Student Department o Civil and Environmental Engineering, 7 Snell
More informationEffect of random perturbations on adaptive observation techniques
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS Published online 2 March 2 in Wiley Online Library (wileyonlinelibrary.com..2545 Effect of random perturbations on adaptive observation techniques
More informationComparisons between 4DEnVar and 4DVar on the Met Office global model
Comparisons between 4DEnVar and 4DVar on the Met Office global model David Fairbairn University of Surrey/Met Office 26 th June 2013 Joint project by David Fairbairn, Stephen Pring, Andrew Lorenc, Neill
More informationLeast-Squares Spectral Analysis Theory Summary
Least-Squares Spectral Analysis Theory Summary Reerence: Mtamakaya, J. D. (2012). Assessment o Atmospheric Pressure Loading on the International GNSS REPRO1 Solutions Periodic Signatures. Ph.D. dissertation,
More informationA new structure for error covariance matrices and their adaptive estimation in EnKF assimilation
Quarterly Journal of the Royal Meteorological Society Q. J. R. Meteorol. Soc. 139: 795 804, April 2013 A A new structure for error covariance matrices their adaptive estimation in EnKF assimilation Guocan
More informationProbabilistic Model of Error in Fixed-Point Arithmetic Gaussian Pyramid
Probabilistic Model o Error in Fixed-Point Arithmetic Gaussian Pyramid Antoine Méler John A. Ruiz-Hernandez James L. Crowley INRIA Grenoble - Rhône-Alpes 655 avenue de l Europe 38 334 Saint Ismier Cedex
More informationRelative Merits of 4D-Var and Ensemble Kalman Filter
Relative Merits of 4D-Var and Ensemble Kalman Filter Andrew Lorenc Met Office, Exeter International summer school on Atmospheric and Oceanic Sciences (ISSAOS) "Atmospheric Data Assimilation". August 29
More informationA Gaussian Resampling Particle Filter
A Gaussian Resampling Particle Filter By X. Xiong 1 and I. M. Navon 1 1 School of Computational Science and Department of Mathematics, Florida State University, Tallahassee, FL 3236, USA 14 July ABSTRACT
More informationKalman Filter and Ensemble Kalman Filter
Kalman Filter and Ensemble Kalman Filter 1 Motivation Ensemble forecasting : Provides flow-dependent estimate of uncertainty of the forecast. Data assimilation : requires information about uncertainty
More informationData Assimilation: Finding the Initial Conditions in Large Dynamical Systems. Eric Kostelich Data Mining Seminar, Feb. 6, 2006
Data Assimilation: Finding the Initial Conditions in Large Dynamical Systems Eric Kostelich Data Mining Seminar, Feb. 6, 2006 kostelich@asu.edu Co-Workers Istvan Szunyogh, Gyorgyi Gyarmati, Ed Ott, Brian
More informationTwo-step self-tuning phase-shifting interferometry
Two-step sel-tuning phase-shiting intererometry J. Vargas, 1,* J. Antonio Quiroga, T. Belenguer, 1 M. Servín, 3 J. C. Estrada 3 1 Laboratorio de Instrumentación Espacial, Instituto Nacional de Técnica
More informationObjectives. By the time the student is finished with this section of the workbook, he/she should be able
FUNCTIONS Quadratic Functions......8 Absolute Value Functions.....48 Translations o Functions..57 Radical Functions...61 Eponential Functions...7 Logarithmic Functions......8 Cubic Functions......91 Piece-Wise
More informationUncertainty Analysis Using the WRF Maximum Likelihood Ensemble Filter System and Comparison with Dropwindsonde Observations in Typhoon Sinlaku (2008)
Asia-Pacific J. Atmos. Sci., 46(3), 317-325, 2010 DOI:10.1007/s13143-010-1004-1 Uncertainty Analysis Using the WRF Maximum Likelihood Ensemble Filter System and Comparison with Dropwindsonde Observations
More informationAnalysis sensitivity calculation in an Ensemble Kalman Filter
Analysis sensitivity calculation in an Ensemble Kalman Filter Junjie Liu 1, Eugenia Kalnay 2, Takemasa Miyoshi 2, and Carla Cardinali 3 1 University of California, Berkeley, CA, USA 2 University of Maryland,
More informationScattering of Solitons of Modified KdV Equation with Self-consistent Sources
Commun. Theor. Phys. Beijing, China 49 8 pp. 89 84 c Chinese Physical Society Vol. 49, No. 4, April 5, 8 Scattering o Solitons o Modiied KdV Equation with Sel-consistent Sources ZHANG Da-Jun and WU Hua
More informationFeasibility of a Multi-Pass Thomson Scattering System with Confocal Spherical Mirrors
Plasma and Fusion Research: Letters Volume 5, 044 200) Feasibility o a Multi-Pass Thomson Scattering System with Conocal Spherical Mirrors Junichi HIRATSUKA, Akira EJIRI, Yuichi TAKASE and Takashi YAMAGUCHI
More informationWeight interpolation for efficient data assimilation with the Local Ensemble Transform Kalman Filter
QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY Published online 17 December 2008 in Wiley InterScience (www.interscience.wiley.com).353 Weight interpolation for efficient data assimilation with
More informationSensitivity analysis in variational data assimilation and applications
Sensitivity analysis in variational data assimilation and applications Dacian N. Daescu Portland State University, P.O. Box 751, Portland, Oregon 977-751, U.S.A. daescu@pdx.edu ABSTRACT Mathematical aspects
More information8. INTRODUCTION TO STATISTICAL THERMODYNAMICS
n * D n d Fluid z z z FIGURE 8-1. A SYSTEM IS IN EQUILIBRIUM EVEN IF THERE ARE VARIATIONS IN THE NUMBER OF MOLECULES IN A SMALL VOLUME, SO LONG AS THE PROPERTIES ARE UNIFORM ON A MACROSCOPIC SCALE 8. INTRODUCTION
More information(Extended) Kalman Filter
(Extended) Kalman Filter Brian Hunt 7 June 2013 Goals of Data Assimilation (DA) Estimate the state of a system based on both current and all past observations of the system, using a model for the system
More information4D-Var or Ensemble Kalman Filter? TELLUS A, in press
4D-Var or Ensemble Kalman Filter? Eugenia Kalnay 1 *, Hong Li 1, Takemasa Miyoshi 2, Shu-Chih Yang 1, and Joaquim Ballabrera-Poy 3 1 University of Maryland, College Park, MD, 20742-2425 2 Numerical Prediction
More informationTLT-5200/5206 COMMUNICATION THEORY, Exercise 3, Fall TLT-5200/5206 COMMUNICATION THEORY, Exercise 3, Fall Problem 1.
TLT-5/56 COMMUNICATION THEORY, Exercise 3, Fall Problem. The "random walk" was modelled as a random sequence [ n] where W[i] are binary i.i.d. random variables with P[W[i] = s] = p (orward step with probability
More informationObservation Impact Assessment for Dynamic. Data-Driven Coupled Chaotic System
Applied Mathematical Sciences, Vol. 10, 2016, no. 45, 2239-2248 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ams.2016.65170 Observation Impact Assessment for Dynamic Data-Driven Coupled Chaotic
More informationJ1.3 GENERATING INITIAL CONDITIONS FOR ENSEMBLE FORECASTS: MONTE-CARLO VS. DYNAMIC METHODS
J1.3 GENERATING INITIAL CONDITIONS FOR ENSEMBLE FORECASTS: MONTE-CARLO VS. DYNAMIC METHODS Thomas M. Hamill 1, Jeffrey S. Whitaker 1, and Chris Snyder 2 1 NOAA-CIRES Climate Diagnostics Center, Boulder,
More informationGlobal Weak Solution of Planetary Geostrophic Equations with Inviscid Geostrophic Balance
Global Weak Solution o Planetary Geostrophic Equations with Inviscid Geostrophic Balance Jian-Guo Liu 1, Roger Samelson 2, Cheng Wang 3 Communicated by R. Temam) Abstract. A reormulation o the planetary
More informationRelationship between Singular Vectors, Bred Vectors, 4D-Var and EnKF
Relationship between Singular Vectors, Bred Vectors, 4D-Var and EnKF Eugenia Kalnay and Shu-Chih Yang with Alberto Carrasi, Matteo Corazza and Takemasa Miyoshi ECODYC10, Dresden 28 January 2010 Relationship
More informationAddressing the nonlinear problem of low order clustering in deterministic filters by using mean-preserving non-symmetric solutions of the ETKF
Addressing the nonlinear problem of low order clustering in deterministic filters by using mean-preserving non-symmetric solutions of the ETKF Javier Amezcua, Dr. Kayo Ide, Dr. Eugenia Kalnay 1 Outline
More informationDART_LAB Tutorial Section 2: How should observations impact an unobserved state variable? Multivariate assimilation.
DART_LAB Tutorial Section 2: How should observations impact an unobserved state variable? Multivariate assimilation. UCAR 2014 The National Center for Atmospheric Research is sponsored by the National
More informationLecture 8 Optimization
4/9/015 Lecture 8 Optimization EE 4386/5301 Computational Methods in EE Spring 015 Optimization 1 Outline Introduction 1D Optimization Parabolic interpolation Golden section search Newton s method Multidimensional
More informationGaussian Process Regression Models for Predicting Stock Trends
Gaussian Process Regression Models or Predicting Stock Trends M. Todd Farrell Andrew Correa December 5, 7 Introduction Historical stock price data is a massive amount o time-series data with little-to-no
More informationdesign variable x1
Multipoint linear approximations or stochastic chance constrained optimization problems with integer design variables L.F.P. Etman, S.J. Abspoel, J. Vervoort, R.A. van Rooij, J.J.M Rijpkema and J.E. Rooda
More informationQuarterly Journal of the Royal Meteorological Society !"#$%&'&(&)"&'*'+'*%(,#$,-$#*'."(/'*0'"(#"(1"&)23$)(4#$2#"( 5'$*)6!
!"#$%&'&(&)"&'*'+'*%(,#$,-$#*'."(/'*0'"(#"("&)$)(#$#"( '$*)! "#$%&'()!!"#$%&$'()*+"$,#')+-)%.&)/+(#')0&%&+$+'+#')+&%(! *'&$+,%-./!0)! "! :-(;%/-,(;! '/;!?$@A-//;B!@
More informationSmoothers: Types and Benchmarks
Smoothers: Types and Benchmarks Patrick N. Raanes Oxford University, NERSC 8th International EnKF Workshop May 27, 2013 Chris Farmer, Irene Moroz Laurent Bertino NERSC Geir Evensen Abstract Talk builds
More informationOn the Efficiency of Maximum-Likelihood Estimators of Misspecified Models
217 25th European Signal Processing Conerence EUSIPCO On the Eiciency o Maximum-ikelihood Estimators o Misspeciied Models Mahamadou amine Diong Eric Chaumette and François Vincent University o Toulouse
More informationTotal Energy Singular Vectors for Atmospheric Chemical Transport Models
Total Energy Singular Vectors for Atmospheric Chemical Transport Models Wenyuan Liao and Adrian Sandu Department of Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, VA
More informationImplications of the Form of the Ensemble Transformation in the Ensemble Square Root Filters
1042 M O N T H L Y W E A T H E R R E V I E W VOLUME 136 Implications of the Form of the Ensemble Transformation in the Ensemble Square Root Filters PAVEL SAKOV AND PETER R. OKE CSIRO Marine and Atmospheric
More informationR. E. Petrie and R. N. Bannister. Department of Meteorology, Earley Gate, University of Reading, Reading, RG6 6BB, United Kingdom
A method for merging flow-dependent forecast error statistics from an ensemble with static statistics for use in high resolution variational data assimilation R. E. Petrie and R. N. Bannister Department
More informationStrong Lyapunov Functions for Systems Satisfying the Conditions of La Salle
06 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 49, NO. 6, JUNE 004 Strong Lyapunov Functions or Systems Satisying the Conditions o La Salle Frédéric Mazenc and Dragan Ne sić Abstract We present a construction
More informationAssessing Potential Impact of Air Pollutant Observations from the Geostationary Satellite on Air Quality Prediction through OSSEs
Assessing Potential Impact of Air Pollutant Observations from the Geostationary Satellite on Air Quality Prediction through OSSEs Ebony Lee 1,2, Seon Ki Park 1,2,3, and Milija Županski4 1 Department of
More informationUnconstrained optimization
Chapter 4 Unconstrained optimization An unconstrained optimization problem takes the form min x Rnf(x) (4.1) for a target functional (also called objective function) f : R n R. In this chapter and throughout
More informationToday. Introduction to optimization Definition and motivation 1-dimensional methods. Multi-dimensional methods. General strategies, value-only methods
Optimization Last time Root inding: deinition, motivation Algorithms: Bisection, alse position, secant, Newton-Raphson Convergence & tradeos Eample applications o Newton s method Root inding in > 1 dimension
More informationPractical Aspects of Ensemble-based Kalman Filters
Practical Aspects of Ensemble-based Kalman Filters Lars Nerger Alfred Wegener Institute for Polar and Marine Research Bremerhaven, Germany and Bremen Supercomputing Competence Center BremHLR Bremen, Germany
More informationThe Local Ensemble Transform Kalman Filter (LETKF) Eric Kostelich. Main topics
The Local Ensemble Transform Kalman Filter (LETKF) Eric Kostelich Arizona State University Co-workers: Istvan Szunyogh, Brian Hunt, Ed Ott, Eugenia Kalnay, Jim Yorke, and many others http://www.weatherchaos.umd.edu
More information