Comparison of Ensemble Data Assimilation methods for the shallow water equations model in the presence of nonlinear observation operators


M. Jardak
Laboratoire de Météorologie Dynamique, École Normale Supérieure, Paris, France

I. M. Navon
Department of Scientific Computing, Florida State University, Tallahassee, FL, USA

M. Zupanski
Cooperative Institute for Research in the Atmosphere, Colorado State University, 1375 Campus Delivery, Fort Collins, CO, USA

March 24, 2013, submitted to Tellus A

Corresponding author: mjardak@lmd.ens.fr

Abstract

A new comparison of three frequently used sequential data assimilation methods, illuminating their strengths and weaknesses in the presence of linear and nonlinear observation operators, is presented. The ensemble Kalman filter (EnKF), the particle filter (PF) and the maximum likelihood ensemble filter (MLEF) were implemented, and the spectral shallow water equations model in spherical geometry was employed using the Rossby-Haurwitz wave no. 4 test case as initial condition. The numerical tests conducted reveal that all three methods perform satisfactorily in the presence of a linear observation operator for 1- to 15-day model integrations, whereas the EnKF, even with the Evensen modification [Evensen 2003] for the nonlinear observation operator, failed in all tested metrics. The particle filter PF and the hybrid filter MLEF both performed satisfactorily in the presence of highly nonlinear observation operators, with a slight advantage in terms of CPU time to the MLEF method.

1 Introduction

Sequential data assimilation fuses observations of the current (and possibly past) state of a system with predictions from a mathematical model (the forecast) to produce an analysis, providing the best estimate of the current state of the system. Central to the concept of sequential data assimilation is the propagation of a flow-dependent probability density function (pdf), given an estimate of the initial pdf. In sequential estimation, the analysis and forecasts can be viewed as probability distributions. The analysis step is an application of the Bayes theorem. Advancing the probability distribution in time is, in the general case, done by the Chapman-Kolmogorov equation, but since this is unrealistically expensive, various approximations operating on representations of the probability distributions are used instead. If the probability distributions are normal, they can be represented by their mean and covariance, which gives rise to the Kalman filter (KF).
However, due to the high computational and storage overheads the KF requires, an approximation based on Monte-Carlo ensemble calculations has been proposed by [Evensen 1994], [Evensen and Van Leeuwen 1996], [Burgers et al. 1998] and [Houtekamer and Mitchell 1998]. The method is essentially a Monte-Carlo approximation of the Kalman filter which avoids explicitly evolving the covariance matrix of the state vector. A second type of EnKF consists of the class of square root filters of [Anderson and Anderson 2003]; see also [Bishop et al. 2001]. A number of variants of the standard EnKF have been proposed, such as Kalman ensemble square root filters (KSRF), the singular evolutive extended Kalman (SEEK) filter, and the less common singular evolutive interpolated Kalman (SEIK) filter; these can be found in the paper of [Tippett et al. 2003], see also the paper of [Nerger et al. 2005] where the aforementioned filters were reviewed and

compared. However, they will not be studied here, as our purpose is rather to investigate differences between the EnKF, PF and MLEF in the presence of nonlinear observation operators. The PF methods are also known as sequential Monte-Carlo (SMC) methods or Bayesian filters. The SMC methods are an efficient means of tracking and forecasting dynamical systems subject to both process and observation noise. Applications include robot tracking, video or audio analysis, and general time series analysis; see [Doucet et al. 2001]. These methods use a large number N of random samples, named particles, to represent the posterior probability distributions. The particles are propagated over time using a combination of sequential importance sampling and resampling steps. Resampling for the PF is used to avoid the problem of degeneracy of the algorithm, that is, the situation where all but one of the importance weights are close to zero. The performance of the PF algorithm can be crucially affected by a judicious choice of resampling method; see [Arulampalam et al. 2002] for a listing of the most used resampling algorithms. The PF suffers from the curse of dimensionality, requiring computations that increase exponentially with dimension, as pointed out in the recent work of [Bengtsson et al. 2008] and [Bickel et al. 2008] and finally explicitly quantified by [Snyder et al. 2008]. They indicated that unless the ensemble size is greater than exp(τ²/2), where τ² is the variance of the observation log-likelihood, the PF update suffers from a collapse in which, with high probability, only a few members are assigned a posterior weight close to one while all other members have vanishingly small weights. Recently, it has been proven by Van Leeuwen [[Van Leeuwen 2010], [Van Leeuwen 2011]] that the above claims of [Bengtsson et al. 2008], [Bickel et al. 2008] and [Snyder et al. 2008] are in error when the proposal densities are taken into consideration.
The MLEF filter of [Zupanski 2005]; [Zupanski and Zupanski 2006] is a hybrid between the 3-D variational method and the EnKF. It maximizes the likelihood of a posterior probability distribution, hence its name. The MLEF belongs to the class of deterministic ensemble filters, since no perturbed observations are employed. As in variational and ensemble data assimilation methods, the cost function is derived using a Gaussian probability density function framework. Like other ensemble data assimilation algorithms, the MLEF produces an estimate of the analysis uncertainty (e.g. the analysis error covariance). In this paper data assimilation experiments are performed and compared using the ensemble Kalman filter EnKF as a two-parameter mean and covariance estimation method, the particle filter PF as a full pdf estimation method, and the maximum likelihood ensemble filter MLEF as a two-parameter pdf mode and covariance estimation method. Therefore, the filters used in this study cover the fundamental characteristics of most of the filters present in the literature. These methods were tested on a spectral shallow water equations model using the

Rossby-Haurwitz wave no. 4 test case for both linear and nonlinear observation operators. The spectral shallow water equations model in spherical geometry of Williamson [Williamson 1992] and [Jakob et al. 1995] is used to generate the true solution and the forecasts. To improve EnKF analysis errors and avoid ensemble errors that generate spurious corrections, the covariance localization investigated by [Houtekamer and Mitchell 1998, 2001] is incorporated. To improve both the analysis and the forecast results, the forecast ensemble solutions are inflated from the mean, as suggested in [Anderson and Anderson 1999] and reported in [Hamill et al. 2001]. Since resampling is a crucial step for the PF method, the systematic, multinomial and residual resampling methods [Arulampalam et al. 2002], [Doucet et al. 2001] and [Nakano et al. 2007] were tested. The paper is structured as follows. Section 2 presents the spectral shallow-water equations in spherical coordinates model and the numerical methods used for its solution. In section 3 we present algorithmic details of each of the data assimilation methods considered. In section 4 we present the numerical results obtained and discuss them for both the linear and nonlinear observation operators in several commonly used error metrics. In particular, the Talagrand diagrams (skill or score test), see [Wilks 2005], the early work of Murphy [Murphy 1971 and 1993], [Talagrand et al.] and the interpretation of [Hamill 2001], together with the root mean square error (RMSE), are employed to detect the spread of the ensemble members, while the RMSE is used to track filter divergence [Houtekamer 2005]. Finally, section 5 is reserved for the summary and conclusions.

2 Shallow-Water equations in spherical geometry

The shallow water equations are a set of hyperbolic partial differential equations that describe the flow below a pressure surface in a fluid.
The equations are derived by depth-integrating the Navier-Stokes equations; they rely primarily on the assumptions of constant density and hydrostatic balance. The shallow-water equations in spherical geometry are given by

\[
\frac{\partial u}{\partial t} + \frac{u}{a\cos\theta}\frac{\partial u}{\partial\lambda} + \frac{v}{a}\frac{\partial u}{\partial\theta} - \frac{\tan\theta}{a}\,uv - fv = -\frac{g}{a\cos\theta}\frac{\partial h}{\partial\lambda}
\]
\[
\frac{\partial v}{\partial t} + \frac{u}{a\cos\theta}\frac{\partial v}{\partial\lambda} + \frac{v}{a}\frac{\partial v}{\partial\theta} + \frac{\tan\theta}{a}\,u^2 + fu = -\frac{g}{a}\frac{\partial h}{\partial\theta}
\]
\[
\frac{\partial h}{\partial t} + \frac{u}{a\cos\theta}\frac{\partial h}{\partial\lambda} + \frac{v}{a}\frac{\partial h}{\partial\theta} + \frac{h}{a\cos\theta}\left[\frac{\partial u}{\partial\lambda} + \frac{\partial (v\cos\theta)}{\partial\theta}\right] = 0
\]

where V = u i + v j is the horizontal velocity vector (with respect to the surface of the sphere), gh is the free surface geopotential, h is the free surface height and g is the gravitational acceleration. f = 2Ω sin θ is the Coriolis parameter and Ω is the angular velocity of the earth. θ denotes the angle of latitude, µ = sin θ, λ the longitude, and a is the radius of the earth.

One of the major advances in meteorology was the use by [Rossby 1939] of the barotropic vorticity equation with the β-plane approximation to the sphericity of the Earth, and the deduction of solutions reminiscent of some large-scale waves in the atmosphere. These solutions have become known as Rossby waves, see [Haurwitz 1940]. While the shallow water equations do not have corresponding analytic solutions, they are expected to evolve in a way similar to the Rossby-Haurwitz (R-H) waves, which explains why the latter have been widely used to test shallow water numerical models since the seminal paper of [Phillips 1959]. Following the research work of [Hoskins 1973], Rossby-Haurwitz waves with zonal wave numbers less than or equal to 5 are believed to be stable. This makes the R-H zonal wave no. 4 a suitable candidate for assessing the accuracy of numerical schemes, as was evident from its being chosen as a test case by [Williamson et al. 1992] and by a multitude of other authors. It has been numerically shown that the R-H wave no. 4 breaks down into more turbulent behavior after long-term numerical integration, as recently discovered by [Thuburn and Li 2000] and also by [Smith and Dritschel 2006]. The initial velocity field for the Rossby-Haurwitz wave is defined as

\[
u = a\omega\cos\varphi + ak\cos^{r-1}\varphi\,(r\sin^2\varphi - \cos^2\varphi)\cos(r\lambda),
\qquad
v = -akr\cos^{r-1}\varphi\,\sin\varphi\,\sin(r\lambda).
\tag{1}
\]

The initial height field is defined as

\[
h = h_0 + \frac{a^2}{g}\left[A(\varphi) + B(\varphi)\cos(r\lambda) + C(\varphi)\cos(2r\lambda)\right]
\tag{2}
\]

where A(φ), B(φ) and C(φ) are given by

\[
A(\varphi) = \frac{\omega}{2}(2\Omega+\omega)\cos^2\varphi + \frac{k^2}{4}\cos^{2r}\varphi\left[(r+1)\cos^2\varphi + (2r^2 - r - 2) - 2r^2\cos^{-2}\varphi\right]
\]
\[
B(\varphi) = \frac{2(\Omega+\omega)k}{(r+1)(r+2)}\cos^{r}\varphi\left[(r^2+2r+2) - (r+1)^2\cos^2\varphi\right]
\]
\[
C(\varphi) = \frac{k^2}{4}\cos^{2r}\varphi\left[(r+1)\cos^2\varphi - (r+2)\right]
\]

Here, r represents the wave number, h_0 is the height at the poles, ω gives the strength of the underlying zonal wind from west to east, and k controls the amplitude of the wave.

3 Sequential Bayesian Filter: theoretical setting

This section applies to the three sequential data assimilation methods discussed herein; one can also see the survey of Diard [Diard et al. 2003] and the overview of [Cappé et al. 2007]. The sequential Bayesian filter consists of a large number N of random samples advanced in time by a stochastic evolution equation to approximate the probability densities. In order to analyze and make inferences about the dynamic system, at least a model equation along with an observation operator are required. Generically, the stochastic filtering problem is a dynamic system that assumes the form

\[
\dot{x}_t = f(t, x_t, v_t),
\qquad
y_t = h(t, x_t, n_t).
\tag{3}
\]

The first equation of (3) is the state equation or system model, the second is the observation equation. The vectors x_t and y_t are respectively the state and observation vectors, and the state and observation noises are represented by v_t and n_t respectively. The discrete-time counterpart of the system (3) reads

\[
x_k = f_k(x_{k-1}, v_{k-1}),
\qquad
y_k = h_k(x_k, n_k).
\tag{4}
\]
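The discrete system (4) can be sketched as a short simulation. The scalar map f_k, the quadratic observation operator h_k, the noise levels and the horizon below are illustrative assumptions, not the paper's shallow water model:

```python
import numpy as np

# Minimal simulation sketch of the discrete stochastic system (4).
rng = np.random.default_rng(0)

def f(x):
    # system model f_k (illustrative choice)
    return x + 0.1 * np.sin(x)

def h(x):
    # nonlinear observation operator h_k (illustrative choice)
    return x ** 2

n_steps, q_std, r_std = 50, 0.05, 0.1
x = np.empty(n_steps)   # states x_k
y = np.empty(n_steps)   # observations y_k
x[0] = 1.0
y[0] = h(x[0]) + rng.normal(0.0, r_std)
for k in range(1, n_steps):
    x[k] = f(x[k - 1]) + rng.normal(0.0, q_std)   # state equation
    y[k] = h(x[k]) + rng.normal(0.0, r_std)       # observation equation
```

The filters below all estimate x_k from the noisy record y_{1:k} generated by such a system.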

The deterministic mapping f_k : R^{n_x} × R^{n_d} → R^{n_x} is a possibly non-linear function of the state x_{k−1}, {v_k, k ∈ N} is an independent identically distributed (i.i.d.) process noise sequence, and n_x, n_d are the dimensions of the state and process noise vectors, respectively. Likewise, the deterministic mapping h_k : R^{n_x} × R^{n_n} → R^{n_z} is a possibly non-linear function, {n_k, k ∈ N} is an i.i.d. observation noise sequence, and n_x, n_n are the dimensions of the state and observation noise vectors, respectively. Let y_{1:k} denote the set of all available observations y_i up to time t = k (i.e. y_{1:k} = {y_i, i = 1, …, k}). From a Bayesian point of view, the problem is to recursively calculate some degree of belief in the state x_k at time t = k, taking different values, given the data y_{1:k} up to time t = k. Equivalently, the Bayesian solution would be to calculate the pdf p(x_k | y_{1:k}). This density encapsulates all the information about the state vector x_k that is contained in the observations y_{1:k} and the prior distribution of x_k. Assume the required pdf p(x_{k−1} | y_{1:k−1}) at time k−1 is available. The prediction stage uses the state equation (4) to obtain the prior pdf of the state variable at time k via the Chapman-Kolmogorov equation

\[
p(x_k \mid y_{1:k-1}) = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid y_{1:k-1})\, dx_{k-1},
\tag{5}
\]

where the probabilistic model of the state evolution, p(x_k | x_{k−1}), is readily obtained from the state equation (4) and the known statistics of v_{k−1}. As an observation y_k becomes available at time t = k, the prior pdf can be updated via the Bayes rule

\[
p(x_k \mid y_{1:k}) = \frac{p(y_k \mid x_k)\, p(x_k \mid y_{1:k-1})}{p(y_k \mid y_{1:k-1})},
\tag{6}
\]

where the normalizing constant

\[
p(y_k \mid y_{1:k-1}) = \int p(y_k \mid x_k)\, p(x_k \mid y_{1:k-1})\, dx_k
\tag{7}
\]

depends on the likelihood function p(y_k | x_k), defined by the measurement or observation equation (4) and the known statistics of n_k. The relations (5) and (6) form the basis of the optimal Bayesian solution.
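For a scalar state, the recursion (5)-(6) can be carried out directly on a discretized state space. The following sketch assumes illustrative linear dynamics, Gaussian noises and a uniform grid; none of these choices come from the paper:

```python
import numpy as np

# Grid-based sketch of the Bayesian recursion: the prediction stage applies
# the Chapman-Kolmogorov integral (5), the update stage applies the Bayes
# rule (6) with the normalizer (7).
grid = np.linspace(-5.0, 5.0, 401)
dx = grid[1] - grid[0]

def gauss(z, s):
    return np.exp(-0.5 * (z / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# transition density p(x_k | x_{k-1}) for x_k = 0.9 x_{k-1} + v, v ~ N(0, 0.3^2)
trans = gauss(grid[:, None] - 0.9 * grid[None, :], 0.3)

def predict(post):
    # Chapman-Kolmogorov (5): prior = integral of transition times posterior
    prior = trans @ post * dx
    return prior / (prior.sum() * dx)

def update(prior, y, r_std=0.5):
    # Bayes rule (6): likelihood p(y_k | x_k) for y = x + n, n ~ N(0, r^2)
    post = gauss(y - grid, r_std) * prior
    return post / (post.sum() * dx)   # division by the normalizer (7)

post = gauss(grid, 1.0)               # initial pdf
for y in [0.4, 0.1, -0.2]:            # incoming observations (illustrative)
    post = update(predict(post), y)
```

The grid cost grows exponentially with the state dimension, which is exactly why the ensemble approximations described next are used instead.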
This recursive propagation of the posterior density is only a conceptual solution; one cannot, in general, obtain an analytical solution. Solutions exist only in a very restrictive set of cases, like that of the Kalman filter (namely, when f_k and h_k are linear and both v_k and n_k are Gaussian). We now turn to describing in detail each of the three filters used.

3.1 Particle Filters

Particle filters, see [Doucet et al. 2000], [Doucet et al. 2001], [Arulampalam et al. 2002], [Berliner and Wikle 2007a] and recently the review paper of [Van Leeuwen 2009], approximate the posterior densities by a population of states. These states are called particles. Each of the particles has an assigned weight, and the posterior distribution can then be approximated by a discrete distribution which has support on each of the particles. The probability assigned to each particle is proportional to its weight. See, for instance, [Metropolis and Ulam 1944], [Gordon et al. 1993], [Doucet et al. 2001] and [Stuart 2010]. The different PF algorithms differ in the way the population of particles evolves and assimilates the incoming observations. A major drawback of particle filters is that they suffer from sample degeneracy after a few filtering steps. Recently, it has been proven by Van Leeuwen [[Van Leeuwen 2010], [Van Leeuwen 2011]] that such sample degeneracy does not take place when the proposal densities are taken into consideration. In this 2-D plus time problem, the common remedy is to resample the prior pdf whenever the weights focus on a few members of the ensemble. Here we use several strategies, namely Systematic Resampling (SR), Residual Resampling (RR) and Multinomial Resampling (MR); for complete details on the resampling techniques see Gordon et al. [Gordon 1993] and also [Berliner and Wikle 2007a and 2007b]. The three resampling methods were tested, all yielding similar results, but only the SR method was retained since it exhibited the fastest performance. The SR algorithm generates a population of equally weighted particles to approximate the posterior at some time k. This population of particles is assumed to be an approximate sample from the true posterior at that time instant. The PF algorithm proceeds as follows:

Initialization: The filter is initialized by drawing a sample of size N from the prior pdf.
Preliminaries: Assume that {x_{k−1}^i}_{i=1,…,N} is a population of N particles, approximately distributed as an independent sample from p(x_{k−1} | y_{1:k−1}).

Prediction: Sample N values, {v_k^1, …, v_k^N}, from the distribution of v_k. Use these to generate a new population of particles, {x_{k|k−1}^1, x_{k|k−1}^2, …, x_{k|k−1}^N}, via the equation

\[
x_{k|k-1}^i = f_k(x_{k-1}^i, v_k^i).
\tag{8}
\]

Filtering: Assign to each x_{k|k−1}^i a weight q_k^i, calculated as

\[
q_k^i = \frac{p(y_k \mid x_{k|k-1}^i)}{\sum_{j=1}^N p(y_k \mid x_{k|k-1}^j)}.
\tag{9}
\]

This defines a discrete distribution which, for i ∈ {1, 2, …, N}, assigns probability mass q_k^i to the element x_{k|k−1}^i.

Resampling: Resample independently N times, with replacement, from the distribution obtained in the filtering stage. The resulting particles, {x_k^i}_{i=1,…,N}, form an approximate sample from p(x_k | y_{1:k}).

The method outlined above can be justified as follows. If the particles at time t = k−1 were an i.i.d. sample from the posterior at time t = k−1, then the prediction stage just produces an i.i.d. sample from the prior at time t = k. The filtering stage can be viewed as an importance sampling approach to generating an empirical distribution which approximates the posterior. The proposal density is just the prior p(x_k | y_{1:k−1}), and as a result of the Bayes formula we obtain

\[
p(x_k \mid y_{1:k-1}, y_k) \propto p(x_k \mid y_{1:k-1})\, p(y_k \mid x_k),
\tag{10}
\]

and the weights are proportional to the likelihood p(y_k | x_k). As N tends to infinity, the discrete distribution which has probability mass q^i at the point x_{k|k−1}^i converges weakly to the true posterior. The resampling step is crucial to particle filters. It is used to generate equally weighted particles aimed at avoiding the problem of degeneracy of the algorithm, that is, avoiding the situation where all but one of the weights are close to zero. The resampling step modifies the weighted approximate density p(x_k | y_k) to an unweighted density p̂(x_k | y_k) by eliminating particles having low importance weights and multiplying particles having high importance weights. Formally,

\[
p(x_k \mid y_k) = \sum_{i=1}^N q^i\, \delta(x_k - x_k^i)
\tag{11}
\]

is replaced by

\[
\hat{p}(x_k \mid y_k) = \sum_{i=1}^N \frac{1}{N}\, \delta(x_k - x_k^{i*}) = \sum_{i=1}^N \frac{n_i}{N}\, \delta(x_k - x_k^i),
\tag{12}
\]

where n_i is the number of copies of particle x_k^i in the new set of particles {x_k^{i*}}. For an extensive review of particle filtering see [Van Leeuwen 2009].
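The prediction, filtering and resampling steps above can be sketched as a bootstrap particle filter with systematic resampling. The scalar model, observation operator, noise levels and ensemble size are illustrative assumptions, not the paper's shallow water configuration:

```python
import numpy as np

# Bootstrap particle filter sketch: prediction (8), weighting (9),
# systematic resampling (SR) to an equally weighted sample (12).
rng = np.random.default_rng(1)
N = 500

def f(x):
    # system model (illustrative)
    return 0.9 * x + 0.2 * np.sin(x)

def h(x):
    # nonlinear observation operator (illustrative)
    return x ** 2

def systematic_resample(weights):
    # SR: one uniform draw, stratified over N equal slots
    positions = (rng.uniform() + np.arange(N)) / N
    return np.searchsorted(np.cumsum(weights), positions)

particles = rng.normal(0.0, 1.0, N)      # initialization: sample the prior pdf
for y in [0.8, 0.5, 0.9]:                # incoming observations (illustrative)
    # prediction: push each particle through the stochastic model, eq. (8)
    particles = f(particles) + rng.normal(0.0, 0.1, N)
    # filtering: weights proportional to the likelihood, eq. (9)
    w = np.exp(-0.5 * ((y - h(particles)) / 0.3) ** 2)
    w /= w.sum()
    # resampling: equally weighted particles approximating p(x_k | y_{1:k})
    particles = particles[systematic_resample(w)]
```

Note that the nonlinear h enters only through the likelihood, which is why the PF needs no linearization of the observation operator.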

3.2 The Ensemble Kalman Filter

The ensemble Kalman filter (EnKF) was first proposed by Evensen [Evensen 1994] and further developed by [Burgers et al. 1998] and [Evensen 2003, Evensen 2007]. For the current status and the potential of the EnKF we refer to [Kalnay 2009]. It is related to particle filters in the sense that a particle is identical to an ensemble member. The EnKF is a sequential filter method, which means that the model is integrated forward in time and, whenever observations are available, these are used to reinitialize the model before the integration continues. The EnKF originated as a version of the Extended Kalman Filter (EKF) of Jazwinski [Jazwinski 1970] and Bucy [Bucy 1965] for large problems. The classical KF [Kalman 1960] method is optimal in the sense of minimizing the variance only for linear systems and Gaussian statistics. Similar to the particle filter method, the EnKF stems from a Monte Carlo integration of the Fokker-Planck equation governing the evolution of the pdf that describes the prior, forecast, and error statistics. In the analysis step, each ensemble member is updated according to the KF scheme, with the covariance matrix replaced by the sample covariance computed from the ensemble. However, the EnKF presents two potential problems, namely: 1) the EnKF assumes that the analysis error is a linear function of the innovation; 2) the EnKF assumes that it can correctly estimate the first two moments of the posterior distribution. If assumption 1) is wrong, then so is 2). Let p(x) denote the Gaussian prior probability density distribution of the state vector x with mean µ and covariance P,

\[
p(x) \propto \exp\left(-\frac{1}{2}(x-\mu)^T P^{-1}(x-\mu)\right).
\]

We assume the data y to have a Gaussian pdf with covariance R and mean Hx, where H is the so-called observation matrix, related to the observation operator h of equation (4), and where the value Hx represents what the value of the data y would be in the absence of observation errors.
Then p(y | x) is the probability (likelihood) of the observations y given that the truth is the model state x, and is of the form

\[
p(y \mid x) \propto \exp\left(-\frac{1}{2}(y - Hx)^T R^{-1}(y - Hx)\right).
\]

According to the Bayes theorem, the posterior probability density follows from the relation

\[
p(x \mid y) \propto p(y \mid x)\, p(x).
\tag{13}
\]

There are many variants of implementing the EnKF, of varying computational efficiency; in what follows we employ the standard formulation of the EnKF for linear and nonlinear

observation operators with covariance localization. See [Evensen 1994, Burgers et al. 1998, Mandel 2006, Mandel 2007 and Lewis et al. 2006]; also see [Nerger et al. 2005] and Sakov and Oke [Sakov 2008]. The implementation of the standard EnKF may be divided into three steps, as follows:

Setting and matching: Let

\[
X = [x_1, \ldots, x_N]
\tag{14}
\]

be an n_x × N matrix whose columns are a sample from the prior distribution, N being the number of ensemble members. Form the ensemble mean

\[
\bar{X} = \frac{1}{N}\, X\, \mathbf{1}_N,
\tag{15}
\]

where 1_N ∈ R^{N×N} is the matrix in which each element is equal to 1. Define the ensemble perturbation matrix

\[
X' = X - \bar{X},
\tag{16}
\]

and set the R^{n_x×n_x} ensemble covariance matrix

\[
C = \frac{X' X'^T}{N-1}.
\tag{17}
\]

Sampling: Generate

\[
Y = [y_1, \ldots, y_N],
\tag{18}
\]

an n_y × N matrix whose columns are replicas of the measurement vector y plus a random vector from the normal distribution N(0, R). Form the R^{n_y×n_y} measurement error covariance

\[
R_e = \frac{Y' Y'^T}{N-1},
\tag{19}
\]

where Y' is the matrix of the observation perturbations.

Updating: Obtain the posterior X_p by linear combinations of members of the prior ensemble,

\[
X_p = X + C H^T (H C H^T + R)^{-1}(Y - HX).
\tag{20}
\]

The matrix

\[
K = C H^T (H C H^T + R)^{-1}
\tag{21}
\]

is the Kalman gain matrix. Since R is always positive definite (as it is a covariance matrix), the inverse (HCH^T + R)^{-1} exists. The covariance of the posterior obeys

\[
C_p = C - K\left[H C H^T + R\right]K^T.
\tag{22}
\]

In the case of nonlinear observation operators, a modification of the above algorithm is advised. As advised by Evensen [Evensen 2003], and following his notation, let x̂ be the augmented state vector made of the state vector and the predicted (nonlinear, in this case) observation vector,

\[
\hat{x} = \begin{pmatrix} x \\ H(x) \end{pmatrix}.
\tag{23}
\]

Define the linear observation operator Ĥ by

\[
\hat{H}\begin{pmatrix} x \\ y \end{pmatrix} = y,
\tag{24}
\]

and carry out the steps of the EnKF formulation in the augmented state space, with x̂ and Ĥ instead of x and H. Superficially, this technique appears to reduce the nonlinear problem to the previous linear observation operator case. However, whilst the augmented problem, involving a linear observation operator, is a reasonable way of formulating the EnKF, it is not as well-founded as the linear case, which can be justified as an approximation to the exact and optimal KF. It has been argued by Kepert [Kepert 2004] that the above EnKF modification has peculiar properties which render it far more useful than other forms. Kepert's major concerns related to the performance of the EnKF when (a) the number of ensemble members was less than the number of observations, and (b) the approximation to the observation error covariance R given by (19), rather than R itself, was used in the gain matrix. Houtekamer's EnKF avoids this problem by using the best available estimate of R rather than equation (19). To prevent the occurrence of filter divergence, usually due to background-error covariance estimates from a small number of ensemble members, as pointed out in [Houtekamer and Mitchell 1998], the use of covariance localization was suggested. Mathematically, covariance localization increases the effective rank of the background error covariances.
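The perturbed-observation update (14)-(21), together with a Schur-product localization of the sample covariance as introduced below, can be sketched in a few lines of numpy. The grid size, ensemble size, linear H (every fourth point observed), correlation length and noise levels are all illustrative assumptions:

```python
import numpy as np

# Sketch of the standard perturbed-observation EnKF analysis step with
# Gaussian covariance localization; equation numbers refer to the text.
rng = np.random.default_rng(2)
n, N, m = 40, 20, 10                  # state dim, ensemble size, obs dim

X = rng.normal(0.0, 1.0, (n, N))      # prior ensemble, columns = members (14)
H = np.eye(n)[::4]                    # observe every 4th grid point (illustrative)
R = 0.25 * np.eye(m)
y = rng.normal(0.0, 1.0, m)           # one measurement vector

Xm = X.mean(axis=1, keepdims=True)    # ensemble mean (15)
Xp = X - Xm                           # ensemble perturbations (16)
C = Xp @ Xp.T / (N - 1)               # sample covariance (17)

# Gaussian correlation rho(D) = exp(-(D/l)^2) on grid-point distance (26)
d = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
rho = np.exp(-(d / 10.0) ** 2)
Cl = rho * C                          # Schur (elementwise) product, as in (25)

Y = y[:, None] + rng.multivariate_normal(np.zeros(m), R, N).T  # perturbed obs (18)
K = Cl @ H.T @ np.linalg.inv(H @ Cl @ H.T + R)                 # localized gain
Xa = X + K @ (Y - H @ X)              # posterior ensemble (20)
```

For a nonlinear H(x), the augmented-state device of (23)-(24) would replace the matrix product H @ X by the ensemble of predicted observations.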
See the work of [Gaspari and Cohn 1999], and also [Hamill and Snyder 2000, 2006] and [Ehrendorfer 2007]. Covariance localization consists of multiplying, point by point, the covariance estimate from the ensemble with a correlation function that is 1.0 at the observation location and zero beyond some prescribed distance. Mathematically, to apply covariance localization, the Kalman gain

\[
K = C H^T (H C H^T + R)^{-1}
\]

is replaced by a modified gain

\[
\hat{K} = [\rho \circ C]\, H^T \left(H [\rho \circ C] H^T + R\right)^{-1},
\tag{25}
\]

where ρ ∘ C denotes the Schur product (the Schur product of matrices A and B is a matrix D of the same dimension, where d_ij = a_ij b_ij) of a correlation matrix ρ with local support and the covariance model generated by the ensemble. Various correlation matrices have been used; for the present model, we used the usual Gaussian correlation function

\[
\rho(D) = \exp\left[-\left(\frac{D}{l}\right)^2\right],
\tag{26}
\]

where l is the correlation length, here l = 2 length units. In addition, the covariance inflation of [Anderson and Anderson 1999] with the inflation factor r = 1.1 has been employed.

3.3 The Maximum Likelihood Ensemble Filter

The Maximum Likelihood Ensemble Filter (MLEF) proposed by Zupanski [Zupanski 2005], and Zupanski and Zupanski [Zupanski 2006], is a hybrid filter combining the 3-D variational method with the EnKF. It maximizes the likelihood of the posterior probability distribution, which justifies its name. The MLEF does not use the tangent linear model; it uses the full non-linear model, which is one of the major advantages of the MLEF method. The method comprises three steps: a forecast step concerned with the evolution of the forecast error covariances, an analysis step based on solving a non-linear cost function, and an updating step.

Forecasting: In the absence of model errors, the forecasting step consists of evolving the square root analysis error covariance matrix through the ensembles. The starting point is the evolution equation of the discrete Kalman filter described in Jazwinski [Jazwinski 1970],

\[
P_f^k = M_{k-1,k}\, P_a^{k-1}\, M_{k-1,k}^T,
\tag{27}
\]

where P_f^k is the forecast error covariance matrix at time k and M_{k−1,k} is the linearized forecast model (e.g., Jacobian) from time k−1 to time k. Since P_a^{k−1} is a positive matrix for any k, equation (27) can be factorized and written as

\[
P_f^k = \underbrace{\left(M_{k-1,k}\,(P_a^{k-1})^{1/2}\right)}_{(P_f^k)^{1/2}} \left(M_{k-1,k}\,(P_a^{k-1})^{1/2}\right)^T.
\]

where (P_a^k)^{1/2}, and similarly (P_f^k)^{1/2}, is of the form

\[
(P_a^k)^{1/2} =
\begin{pmatrix}
p_1^{k,1} & p_1^{k,2} & \cdots & p_1^{k,N} \\
p_2^{k,1} & p_2^{k,2} & \cdots & p_2^{k,N} \\
\vdots & \vdots & & \vdots \\
p_n^{k,1} & p_n^{k,2} & \cdots & p_n^{k,N}
\end{pmatrix},
\tag{28}
\]

where, as usual, N is the number of ensemble members and n the number of state variables. The entries p_j^{k,i} are obtained by calculating the square root of P_a^k. Using equation (28), the columns of the square root forecast error covariance matrix (P_f^k)^{1/2} can then be obtained: for a given k and for each 1 ≤ i ≤ N, they satisfy

\[
\begin{pmatrix}
x_1^{k,i} \\ x_2^{k,i} \\ \vdots \\ x_n^{k,i}
\end{pmatrix}
= M_{k-1,k}
\begin{pmatrix}
x_1^k + p_1^{k,i} \\ x_2^k + p_2^{k,i} \\ \vdots \\ x_n^k + p_n^{k,i}
\end{pmatrix}.
\tag{29}
\]

The vector x^k = (x_1^k, x_2^k, …, x_n^k)^T is the analysis state from the previous assimilation cycle, found from the posterior analysis pdf as presented in [Lorenc 1986].

Analyzing: The analysis step of the MLEF involves solving a non-linear minimization problem. As in Lorenc [Lorenc 1986], in the absence of model error the associated cost function is defined in terms of the forecast error covariance matrix and is given by

\[
\mathcal{J}(x) = \frac{1}{2}(x - x_b)^T \left(P_f^k\right)^{-1}(x - x_b) + \frac{1}{2}\left[y - H(x)\right]^T R^{-1}\left[y - H(x)\right],
\tag{30}
\]

where y is the vector of observations, H is the non-linear observation operator, R is the observational error covariance matrix and x_b is the background state given by

\[
x_b = M_{k-1,k}(x^{k-1}).
\tag{31}
\]

Through a Hessian preconditioner we introduce the change of variable

\[
(x - x_b) = (P_f^k)^{1/2}\,(I + O)^{-T/2}\,\xi,
\tag{32}
\]

where ξ is the vector of control variables, O is referred to as the observation information matrix and I is the identity matrix. The matrix O is given by

\[
O = (P_f^k)^{T/2} H^T R^{-1} H (P_f^k)^{1/2} = \left(R^{-1/2} H (P_f^k)^{1/2}\right)^T \left(R^{-1/2} H (P_f^k)^{1/2}\right),
\]

where H is the Jacobian matrix of the non-linear observation operator H evaluated at the background state x_b. Let Z be the matrix whose columns are

\[
Z = [z^1, \ldots, z^N], \qquad z^i = R^{-1/2} H\, p^{k,i}.
\]

From the approximations

\[
z^i \approx R^{-1/2}\left[H(x + p^{k,i}) - H(x)\right]
\tag{33}
\]

and

\[
O \approx Z^T Z,
\tag{34}
\]

one can use an eigenvalue decomposition of the symmetric positive definite matrix I + O to calculate the inverse square root matrix necessary for the updating step. It is worth mentioning that the approximation (33) is not necessary, and a derivation of the MLEF not involving (33) has been developed in Zupanski et al. [Zupanski 2008].

Updating: The final step of the MLEF is to update the square root analysis error covariance matrix. In order to estimate the analysis error covariance at the optimal point, the optimal state x_opt minimizing the cost function J given by (30) is substituted into

\[
(P_a^k)^{T/2} = (P_f^k)^{T/2}\left(I + O(x_{opt})\right)^{-T/2}.
\tag{35}
\]
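The eigenvalue route to the inverse square root of I + O can be sketched as follows. The random Z below is an illustrative stand-in for the columns R^{-1/2}[H(x + p^i) − H(x)], not the paper's operator:

```python
import numpy as np

# Sketch of the eigendecomposition used for (I + O)^{-T/2} in the MLEF
# preconditioning (32) and update (35), with O approximated by Z^T Z as in (34).
rng = np.random.default_rng(3)
m, N = 30, 10                          # obs dim, ensemble size (illustrative)
Z = rng.normal(0.0, 0.3, (m, N))       # stand-in for R^{-1/2} H (P_f)^{1/2}

A = np.eye(N) + Z.T @ Z                # I + O, acting in ensemble space
lam, V = np.linalg.eigh(A)             # A is symmetric positive definite
A_inv_sqrt = V @ np.diag(lam ** -0.5) @ V.T   # symmetric inverse square root

# sanity check: A_inv_sqrt @ A @ A_inv_sqrt should recover the identity
err = np.abs(A_inv_sqrt @ A @ A_inv_sqrt - np.eye(N)).max()
```

Since A lives in the N-dimensional ensemble space rather than the state space, the decomposition stays cheap even when the model state is large.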

3.4 Differences between the EnKF and the MLEF

A first difference between the EnKF and the MLEF is that the latter uses the central forecast as the best estimate and defines the covariance around that state, while the EnKF uses the ensemble mean. A second difference is that the MLEF uses a 3D-Var method for the update, while the EnKF uses some linearization. More differences between the MLEF and other EnKF methods have been presented in the work of Carrassi et al. [Carrassi 2009] in an application to a chaotic system. Moreover, to avoid any confusion or ambiguity as to any equivalence between the ensemble transform Kalman filter (ETKF) of Bishop et al. [Bishop 2001] and the MLEF, we list the following major differences between them. Let p^{k,i} be the i-th column of the square root of the analysis error covariance matrix, and suppose that the sum of the column vectors p^{k,i} is zero. In this case, the corresponding column of the square root of the forecast error covariance matrix in the EnKF is given by

\[
p^{k,i} = \frac{1}{\sqrt{N-1}}\left[M_{k-1,k}\left(x^a + \sqrt{N-1}\, p^{k,i}\right) - M_{k-1,k}(x^a)\right],
\tag{36}
\]

whereas in the MLEF it is given by

\[
p^{k,i} = M_{k-1,k}\left(x^a + p^{k,i}\right) - M_{k-1,k}(x^a).
\tag{37}
\]

The MLEF employs a control deterministic forecast (as variational methods do), while the ETKF employs an ensemble forecast mean. Although they are the same for an assumed Gaussian PDF, there is no proof yet that they yield identical results in a reduced-rank (small ensemble) situation, or with nonlinear prediction models.

The analysis error covariance is different, and this constitutes the most important difference. In the ETKF the forecast error covariance is calculated as a statistical sample of ensemble perturbations, implying the factor 1/(N−1). In the MLEF there is no such assumption, and the formula follows the idea of the Kalman filter forecast step. The impact of the factor 1/(N−1) is not trivial, since it implies that the analysis error covariance is completely different.
For instance, let us assume for a moment that both the ETKF and the MLEF yield the same ensemble forecast perturbations

$$p_i^f = x_i^f - \bar{x}^f, \qquad i = 1, \ldots, N, \qquad (38)$$

where the index $i$ represents the ensemble member and $\bar{x}^f$ is a control forecast or ensemble mean. While the ETKF and MLEF forecast error covariances are

$$P_f^{ETKF} = \frac{1}{N-1}\sum_{i=1}^{N} p_i^f (p_i^f)^T, \qquad P_f^{MLEF} = \sum_{i=1}^{N} p_i^f (p_i^f)^T,$$

the ETKF analysis error covariance is

$$P_a^{ETKF} = \frac{1}{N-1}\,(P_f^{ETKF})^{1/2}\left[I + \frac{1}{N-1}O\right]^{-1}(P_f^{ETKF})^{T/2} \qquad (39)$$

and the MLEF analysis error covariance is

$$P_a^{MLEF} = (P_f^{MLEF})^{1/2}\left[I + O\right]^{-1}(P_f^{MLEF})^{T/2}, \qquad (40)$$

where $O$ has been defined earlier in equation (34). This is the most important difference, since the square root of $P_a$ is used to define the initial ensemble perturbations for the next cycle, and the difference amplifies after several data assimilation cycles. There is no empirical multiplicative factor that can make the two identical, unless the forecast error covariance from the MLEF is used in the ETKF.

The ETKF is a minimum variance method, while the MLEF is a maximum likelihood (of the posterior PDF) method. This in itself is a fundamental difference. It should be obvious for non-Gaussian PDFs (especially skewed PDFs such as the log-normal PDF), but it could be relevant for nonlinear problems even when (incorrectly) assuming a Gaussian PDF.

Minimization: in the case of a nonlinear observation operator, one minimization iteration of the MLEF is not identical to the ETKF one. In fact, the MLEF uses a line search (which typically produces a step length much smaller than 1), while the ETKF implies using a step length equal to 1 (quadratic cost function). This produces a different analysis increment, and thus can also impact the dynamical balance of the analysis.

4 Numerical results

4.1 Model set-up

As in [Williamson 1992, 1997, 2007] and [Jakob et al. 1995], the grid representation of any arbitrary variable $\phi$ is related to the following spectral decomposition

$$\phi(\lambda,\mu) = \sum_{m=-M}^{M}\sum_{n=|m|}^{N(m)} \phi_{m,n}\, P_{m,n}(\mu)\, e^{im\lambda}, \qquad (41)$$

where the $P_{m,n}(\mu)e^{im\lambda}$ are the spherical harmonic functions [Boyd 2001] and $P_{m,n}(\mu)$ stands for the associated Legendre polynomial. $M$ is the highest Fourier wavenumber included in the east-west representation, and $N(m)$ is the highest degree of the associated Legendre polynomials for longitudinal wavenumber $m$. The coefficients of the spectral representation (41) are determined by

$$\phi_{m,n} = \frac{1}{2\pi}\int_{-1}^{1}\int_{0}^{2\pi} \phi(\lambda,\mu)\, e^{-im\lambda}\, P_{m,n}(\mu)\, d\lambda\, d\mu. \qquad (42)$$

The inner integral represents a Fourier transform,

$$\phi_m(\mu) = \frac{1}{2\pi}\int_{0}^{2\pi} \phi(\lambda,\mu)\, e^{-im\lambda}\, d\lambda, \qquad (43)$$

which is evaluated using a fast Fourier transform routine. The outer integral is evaluated using Gaussian quadrature on the transform grid,

$$\phi_{m,n} = \sum_{j=1}^{J} \phi_m(\mu_j)\, P_{m,n}(\mu_j)\, \omega_j, \qquad (44)$$

where $\mu_j$ denotes the Gaussian grid points in the meridional direction and $\omega_j$ is the Gaussian weight at point $\mu_j$. The meridional grid points are located at the Gaussian latitudes $\theta_j$, which are the $J$ roots of the Legendre polynomial, $P_J(\sin\theta_j) = 0$. The numbers of grid points in the longitudinal and meridional directions are determined so as to allow the unaliased representation of quadratic terms,

$$I \geq 3M + 1, \qquad J \geq (3N+1)/2,$$

where $N$ is the highest wavenumber retained in the latitudinal Legendre representation, $N = \max N(m) = M$.

The pseudospectral method of [Orszag 1969, 1970] and [Eliassen et al. 1970], also known as the spectral transform method in the geophysical community, has been used to tackle the nonlinearity. In conjunction with the spatial discretization described above, two semi-implicit time steps have been used for the initialization of the time discretization. Because of the hyperbolic type of the shallow water equations, the centered leapfrog scheme

$$\frac{\phi_{m,n}^{k+1} - \phi_{m,n}^{k-1}}{2\Delta t} = F(\phi_{m,n}^{k}) \qquad (45)$$

has been invoked for the subsequent time steps. After the leapfrog time-differencing scheme is used to obtain the solution at $t = (k+1)\Delta t$, a slight time smoothing (Asselin filter) is applied to the solution at time $k\Delta t$,

$$\bar{\phi}_{m,n}^{k} = \phi_{m,n}^{k} + \alpha\left[\phi_{m,n}^{k+1} - 2\phi_{m,n}^{k} + \bar{\phi}_{m,n}^{k-1}\right], \qquad (46)$$

replacing the solution at time $k\Delta t$. It reduces the amplitude of a frequency $\nu$ by a factor $1 - 4\alpha\sin^2(\nu\Delta t/2)$.

Even though the flow considered in this study is very simplified and has nearly linear dynamics, shallow-water flows on the sphere provide a baseline against which to evaluate the effectiveness of the assimilation and give some insight into the behavior of data assimilation schemes. We start from an ensemble of 50 initial conditions generated from a 1% random perturbation around the mean of the velocity, geopotential, divergence and vorticity fields. In figure 2 one member of the ensemble of geopotential fields is depicted. The ensemble of initial conditions is then evolved forward (with no data assimilation). After 18 hours of time integration this results in a pattern breakdown of the R-H wave no. 4 test case, easily noticed from the expected velocity and geopotential fields presented in figure 3. As a consequence, the estimated error variance would grow unreasonably rapidly.
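The Gaussian-quadrature projection (44) can be sketched for the $m = 0$ (ordinary Legendre) case using numpy's Gauss-Legendre nodes; the grid size $J = 8$ here is an arbitrary illustrative choice, not the model's resolution.

```python
import numpy as np

# Gauss-Legendre nodes (the Gaussian latitudes mu_j) and weights w_j.
J = 8
mu, w = np.polynomial.legendre.leggauss(J)

def legendre_coeff(phi_vals, n):
    """Project grid values phi(mu_j) onto P_n via Gaussian quadrature,
    normalized so that phi = sum_n phi_n P_n for polynomials of degree < J."""
    Pn = np.polynomial.legendre.Legendre.basis(n)(mu)
    return (2 * n + 1) / 2 * np.sum(phi_vals * Pn * w)

# Example: phi = P_2(mu) should project to coefficient 1 on n = 2 only,
# since the quadrature is exact for polynomials of degree <= 2J - 1.
phi = np.polynomial.legendre.Legendre.basis(2)(mu)
```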
This is the motivation behind the use of shallow-water flows on the sphere as a testbed for data assimilation schemes.

4.2 EnKF set-up and numerical results

In all our numerical experiments a time step of $\Delta t = 600$ sec has been employed. The observations were provided at a frequency of one set of observations every $36\Delta t$ (i.e., every 6 hours).
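The leapfrog step (45) with the Asselin filter (46) of section 4.1 can be sketched on a scalar toy oscillation $d\phi/dt = i\omega\phi$; the parameter values below are illustrative, not the model's.

```python
import numpy as np

# Leapfrog + Asselin filter on d(phi)/dt = i*omega*phi, whose exact
# solution has |phi| = 1 at all times. alpha is the Asselin coefficient.
omega, dt, alpha, nsteps = 1.0, 0.01, 0.05, 1000

def F(phi):
    return 1j * omega * phi

phi_old = 1.0 + 0j                        # filtered solution at step k-1
phi = phi_old * np.exp(1j * omega * dt)   # exact value at step k (start-up)
for _ in range(nsteps):
    phi_new = phi_old + 2 * dt * F(phi)                     # leapfrog (45)
    phi_old = phi + alpha * (phi_new - 2 * phi + phi_old)   # Asselin (46)
    phi = phi_new
```

The filter leaves the physical mode almost untouched (the amplitude stays close to 1) while damping the spurious computational mode that pure leapfrog would otherwise sustain.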

4.2.1 Impact of the number of observations and the number of ensemble members

The impact of the number of observations follows the findings of [Fletcher and Zupanski 2008]. In fact, there exists an observation-number threshold beyond which the RMSE for all the fields (namely the components of the velocity and the geopotential) coincide. In our tests this threshold was attained at around 912 observations at each observation time. The observational grid is a subset of the rhomboidal spectral truncation (48x38) and consists of observations distributed at every grid point in longitude and at every second point in latitude, as presented in figure 1.

The impact of the number of ensemble members on the results has also been examined. We have conducted several experiments with different numbers of ensemble members, namely 50, 100, 200 and 300. Our results reveal that employing only 100 ensemble members was sufficient to successfully perform the data assimilation, and that using a higher number of ensemble members had no impact on the ensuing results.

4.2.2 Linear observation operator case - description of the EnKF results

In figure 2 we present a 1% random perturbation around the mean of the geopotential field. This perturbed field, along with the perturbed velocity, divergence and vorticity fields, will serve as initialization for the observations. In figure 4 we present an overview of the results obtained using 10 days of EnKF data assimilation time integration. The linear observation operator H(u) = u has been employed, and 100 ensemble members were used, with a 1% random perturbation around the mean accounting for the standard deviation of the model error covariance matrix. In the presence of 1% and 3% random perturbations around the mean respectively, the RH wave no. 4 test case preserves its global characteristic shape for more than 3 days of data assimilation time integration. This shows the resilience of the RH wave no.
4 test case to random perturbations in the presence of a linear observation operator.

4.2.3 Nonlinear observation operator case - description of the EnKF results

Here we employ H(u) = u^2 as a representative nonlinear observation operator (we have also tested H(u) = u^4; results not shown). We use 912 observation points, whose distribution in space and time is identical to that used in the linear case. The EnKF for the nonlinear observation operator H(u) = u^2 begins to deteriorate at around 5 days of data assimilation time integration. By 10 days of data assimilation time integration a full filter divergence takes place, as can be seen in the RMSE and Talagrand diagrams.
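For reference, here is a generic stochastic (perturbed-observation) EnKF analysis step, a sketch under simplifying assumptions rather than the paper's exact implementation; evaluating the operator H on each ensemble member is the standard device for handling a nonlinear observation operator (the "Evensen fix" referred to in the abstract). All names and shapes are illustrative.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """One perturbed-observation EnKF analysis step.

    X: n x N state ensemble; y: observation vector; H: (possibly
    nonlinear) observation operator applied member by member;
    R: observation error covariance."""
    n, N = X.shape
    HX = np.column_stack([H(X[:, i]) for i in range(N)])
    Xp = X - X.mean(axis=1, keepdims=True)      # state perturbations
    Yp = HX - HX.mean(axis=1, keepdims=True)    # observation-space perturbations
    Pxy = Xp @ Yp.T / (N - 1)                   # cross covariance
    Pyy = Yp @ Yp.T / (N - 1) + R               # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)                # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - HX)                     # updated (analysis) ensemble
```

With a linear H this reduces to the classical stochastic EnKF update; with a nonlinear H the gain is only an approximation, which is the root of the divergence discussed in this section.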

We should point out that in the sequel of this paper, when producing the rank scores, we compare the truth with the ensemble forecast members. Figure 5 displays the corresponding Talagrand diagram, also known as the rank score. Contrary to the linear observation operator case, where a uniform ensemble repartition is observed in the Talagrand diagram, the nonlinear case displays a weakly U-shaped ensemble repartition. This result characterizes the onset of filter divergence. In figure 6 we present the true state, the EnKF analysis for one ensemble member and the expected analysis geopotential fields for 5 days of data assimilation time integration. This illustration confirms the validity of the Talagrand diagram: a deterioration of the typical shape of the RH wave no. 4 test case and the asymmetry (i.e., the first and last bins of the Talagrand diagram are overpopulated) are noticeable.

Figure 7 groups the Talagrand diagrams and RMSE corresponding to different values of the random perturbation around the mean, namely 1%, 3% and 6% perturbations. The Talagrand diagrams now display a non-symmetrical U shape, characteristic of filter divergence as well as bias. An increase in the values of the frequencies follows the increase in the perturbation percentage. The RMSE of the geopotential field, presented in the same figure, increases accordingly with the perturbation.

In figures 8 and 9 we present the true state, the EnKF analysis for one ensemble member and the expected forecast and analysis geopotential fields for a 10-day EnKF data assimilation window with 1% and 3% perturbations around the mean. The same nonlinear observation operator H(u) = u^2 has been employed. The results obtained point to a total loss of shape and a symmetry deterioration of the RH wave no. 4 test case. We also used the nonlinear observation operator H(u) = sign(u)u^2; such an operator is known not to lead to bimodal observation likelihoods.
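The rank score used throughout this section can be computed with a generic sketch: the rank of the verifying truth within each sorted ensemble, accumulated into N + 1 bins.

```python
import numpy as np

def rank_histogram(truth, ens):
    """Talagrand diagram: count where the truth ranks among the N
    ensemble members over T verification instances.

    truth: shape (T,); ens: shape (T, N). Returns N + 1 bin counts.
    A flat histogram indicates a well-calibrated ensemble; a U shape
    indicates an under-dispersive (diverging) one."""
    T, N = ens.shape
    ranks = np.sum(ens < truth[:, None], axis=1)   # rank in 0 .. N
    return np.bincount(ranks, minlength=N + 1)
```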
Again, the filter divergence is plausible, as can easily be seen from figure 10, where the expected velocity and geopotential fields are depicted.

We should point out the difference between our setup and those used in the work of Buehner et al. [Buehner 2010a], [Buehner 2010b] and in the work of Hamill et al. [Hamill 2011], where the EnKF appears to perform reasonably well. All our observations have the same degree of nonlinearity, while in the aforementioned references the observation operators are mostly weakly nonlinear. Satellite radiances are included only for almost linear clear-sky radiances (done in all operational weather centers, with only one, quite recent, exception at the ECMWF). The nonlinearity comes mainly from scattering, which is included in cloudy radiances only, so true nonlinearity as used in our work has not been employed in [Buehner 2010a], [Buehner 2010b] and [Hamill 2011].

New efforts to mitigate the problem of high nonlinearity in the EnKF have been undertaken. The recent work of Kalnay and collaborators, [Kalnay priv. comm. 2008], [Kalnay and Yang 2008] and [Kalnay and Yang 2010], suggests an approach involving outer loops of minimization, borrowed from variational data assimilation practice. They introduce

an empirical convergence criterion within a nonstandard iterative minimization scheme. Common to both mentioned algorithms is that they treat each iteration of the minimization as a quadratic sub-problem and thus rely on the traditional Kalman filter linear analysis update equation in each iteration.

4.3 PF set-up and numerical results

As in the previous (EnKF) section, all our numerical experiments employ a time step of $\Delta t = 600$ sec. The observations are sampled once every 6 hours, i.e., at a frequency of one observation every 36 time steps.

4.3.1 The impact of the number of particles and of the resampling strategy

In view of recent results in the literature ([Van Leeuwen 2009], [Snyder et al. 2008]), the issue of choosing an adequate number of particles is of paramount importance. Since no proposal densities are considered, and in order to avoid filter degeneracy, we follow the suggestion of Van Leeuwen [Van Leeuwen 2009] regarding the choice of a suitable number of particles, namely that the number of particles should not be less than the number of degrees of freedom of the simulated system. We tested three cases, namely 100, 150 and 190 particles, as compared to the number of modes in the latitude-longitude directions for the spherical-spectral shallow-water model. We tested three resampling techniques, namely residual, systematic and multinomial resampling. All of these methods performed equally well, with a slight edge to the systematic resampling technique as far as computational efficiency is concerned ([Doucet et al. 2001], [Arulampalam et al. 2002], [Van Leeuwen 2009]). A Talagrand-like diagram is introduced here, where instead of taking ensemble members we take the particles; we then iterate over the number of bins (190 particles, 51 bins).
Similar to the ensemble framework, this diagram can be a useful tool to detect systematic flaws of a particle filter prediction system.

4.3.2 Linear observation operator case

The linear observation operator H(u) = u has been employed for the particle filter case; we do not show the related results. However, we observe almost identical results to those obtained for the EnKF linear observation operator case; in fact the R-H wave no. 4 test case preserves its global characteristic shape, with minor changes occurring in the centers of the recirculation zones, at around 10 days of data assimilation time integration.
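The systematic resampling favored in section 4.3.1 for its efficiency can be sketched as follows (a generic version, assuming the particle weights are normalized to sum to 1):

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: a single uniform draw and N evenly spaced
    pointers through the cumulative weights.

    Returns the indices of the particles to keep (duplicates allowed)."""
    N = len(weights)
    cs = np.cumsum(weights)
    cs[-1] = 1.0                                   # guard against round-off
    positions = (rng.random() + np.arange(N)) / N  # one draw, N pointers
    return np.searchsorted(cs, positions)
```

Compared with multinomial resampling, only one random number is drawn per resampling step, which is the source of the computational edge noted above.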

4.3.3 Nonlinear observation operator case

We consider the impact of a nonlinear observation operator for the particle filter (PF) when H(u) = u^2. The results are displayed in figure 11, where the true state and the PF analysis for one particle geopotential fields for 1% and 3% perturbations around the mean are depicted. After 10 days of data assimilation time integration, the R-H wave no. 4 test case preserves its global characteristic shape, and no distortion or cell pattern breakdown can be detected. These results should be contrasted with our EnKF findings of filter divergence for the same data assimilation time integration. Apart from the bias (i.e., the first and last bins) of the U-shaped diagrams shown in figure 12, we see a uniformity of the frequency repartition, indicating that the probability distribution has been well sampled. The rank score also shows under-dispersive ensembles; this means that the number of particles is too small even if the mean converged.

In figure 13 we present the time evolution of the RMSE over 10 days of data assimilation time integration of the PF for 1% and 3% perturbations around the mean. The striking result is that the RMSE variation remains relatively small, in contrast to the EnKF case, where the corresponding RMSE increases quasi-exponentially. The effect of the degree of nonlinearity of the observation operator in the PF case has also been examined. As figure 14 shows, a stronger nonlinearity (i.e., H(u) = exp(u)) has no substantial effect on the RMSE. Indeed, the RMSE envelope is not affected despite the presence of oscillations characteristic of the PF filter.

4.4 MLEF set-up and results

Only 30 ensemble members were sufficient for carrying out the MLEF runs, while the number of cycles was set to 7. The iterative nonlinear minimization method employed for the cost function minimization was the Fletcher-Reeves nonlinear conjugate gradient algorithm.
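The Fletcher-Reeves minimization named above can be sketched generically; a toy quadratic stands in for the MLEF cost function, and the backtracking line search, restart rule and iteration counts are illustrative choices, not the paper's implementation.

```python
import numpy as np

def fletcher_reeves(J, gradJ, x0, iters=200):
    """Minimal Fletcher-Reeves nonlinear CG with a backtracking line search."""
    x, g = x0, gradJ(x0)
    d = -g
    for _ in range(iters):
        step = 1.0
        # Armijo backtracking: shrink the step until sufficient decrease.
        while J(x + step * d) > J(x) + 1e-4 * step * np.dot(g, d):
            step *= 0.5
            if step < 1e-12:
                return x
        x_new = x + step * d
        g_new = gradJ(x_new)
        beta = np.dot(g_new, g_new) / np.dot(g, g)   # Fletcher-Reeves ratio
        d = -g_new + beta * d
        if np.dot(g_new, d) >= 0:    # restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x
```

The line search typically accepts step lengths well below 1, which is precisely the MLEF/ETKF distinction discussed in section 3.4.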
In each of the MLEF data assimilation cycles, 5 iterations are performed to obtain the analysis. The standard deviation of each of the Gaussian state and observation noises was taken as a 1% random perturbation around the mean. Neither localization nor inflation was used within the ensemble part of this method. We observe results similar to those obtained for the EnKF and the PF in the linear observation operator case, namely that the global characteristic shapes of the filtered solution are preserved. The striking resemblance between the PF and MLEF results demonstrates the successful implementation of both filters. This result is further confirmed as the degree of nonlinearity of the observation operator is increased from H(u) = u^2 to H(u) = e^u. Our numerical experiments are presented in figure 15. As presented in figure 16, we observe that, apart from the bias (i.e., the first and last bins) of the U-shaped diagram, the rank score also shows under-dispersive ensembles; this means that the number of ensemble members is

small even if the mean converged. The RMSE values presented in figure 17 are bounded, and the nonlinearity of the observation operator has no effect on the RMSE; consequently the MLEF filter predicts the true state solution well, independently of the linearity of the observation operator.

5 Summary and conclusions

In this paper, a comparison was carried out between three ensemble data assimilation (EnsDA) methods for a spherical nonlinear shallow water equations model discretized via a spectral method and tested on the Rossby-Haurwitz wave no. 4. The Monte Carlo version of the EnKF, the particle filter (PF) with several resampling strategies and, finally, as a representative of hybrid filters, the maximum likelihood ensemble filter (MLEF) were tested in the presence of both linear and highly nonlinear observation operators. The nonlinear observation operators replicate and stand as a proxy for more realistic processes such as cloud, aerosol and precipitation processes, as well as remote sensing (e.g., satellite and radar) observations. While the typical EnsDA analysis equation assumes linearity and Gaussianity for the EnKF (thus fundamentally preventing this EnsDA method from extracting maximum information from nonlinear observations), we have also tested a Bayesian filter, the particle filter, whose analysis equation is nonlinear, as well as the MLEF hybrid filter, which accommodates nonlinear observation operators as well as non-Gaussianity.

The results for the linear observation operator lead to the conclusion that the three above-mentioned sequential data assimilation filters are comparable and yield satisfactory results in this case. Amongst the different error metrics used to assess filter divergence, we chose to retain the Talagrand diagram, also known as the rank score, and the usual RMSE statistics.
Results of the Talagrand diagram for the EnKF in the presence of nonlinear operators show that the onset of filter divergence occurs at about 5 days of data assimilation using the R-H wave no. 4 test case. We conclude that the EnKF with Evensen's fix for nonlinearity fails to converge for nonlinear observation operators. In contrast, the other two filters (PF and MLEF) handled the nonlinearity of the observation operators, as well as non-Gaussianity, well. These results are of course a function of our particular implementation of the EnKF, PF and MLEF, respectively, and of the number of particles/ensemble members, the number of observations, and the particular model tested. Regarding computational efficiency, the MLEF was found to be faster than the EnKF, while the PF proved to be the most time consuming. The EnKF version for the nonlinear observation operator H(u) = u^2 displays a signifi-


Ensemble forecasting and flow-dependent estimates of initial uncertainty. Martin Leutbecher Ensemble forecasting and flow-dependent estimates of initial uncertainty Martin Leutbecher acknowledgements: Roberto Buizza, Lars Isaksen Flow-dependent aspects of data assimilation, ECMWF 11 13 June 2007

More information

A HYBRID ENSEMBLE KALMAN FILTER / 3D-VARIATIONAL ANALYSIS SCHEME

A HYBRID ENSEMBLE KALMAN FILTER / 3D-VARIATIONAL ANALYSIS SCHEME A HYBRID ENSEMBLE KALMAN FILTER / 3D-VARIATIONAL ANALYSIS SCHEME Thomas M. Hamill and Chris Snyder National Center for Atmospheric Research, Boulder, Colorado 1. INTRODUCTION Given the chaotic nature of

More information

Variational data assimilation

Variational data assimilation Background and methods NCEO, Dept. of Meteorology, Univ. of Reading 710 March 2018, Univ. of Reading Bayes' Theorem Bayes' Theorem p(x y) = posterior distribution = p(x) p(y x) p(y) prior distribution

More information

Accelerating the spin-up of Ensemble Kalman Filtering

Accelerating the spin-up of Ensemble Kalman Filtering Accelerating the spin-up of Ensemble Kalman Filtering Eugenia Kalnay * and Shu-Chih Yang University of Maryland Abstract A scheme is proposed to improve the performance of the ensemble-based Kalman Filters

More information

Accepted in Tellus A 2 October, *Correspondence

Accepted in Tellus A 2 October, *Correspondence 1 An Adaptive Covariance Inflation Error Correction Algorithm for Ensemble Filters Jeffrey L. Anderson * NCAR Data Assimilation Research Section P.O. Box 3000 Boulder, CO 80307-3000 USA Accepted in Tellus

More information

4DEnVar. Four-Dimensional Ensemble-Variational Data Assimilation. Colloque National sur l'assimilation de données

4DEnVar. Four-Dimensional Ensemble-Variational Data Assimilation. Colloque National sur l'assimilation de données Four-Dimensional Ensemble-Variational Data Assimilation 4DEnVar Colloque National sur l'assimilation de données Andrew Lorenc, Toulouse France. 1-3 décembre 2014 Crown copyright Met Office 4DEnVar: Topics

More information

The Impact of Background Error on Incomplete Observations for 4D-Var Data Assimilation with the FSU GSM

The Impact of Background Error on Incomplete Observations for 4D-Var Data Assimilation with the FSU GSM The Impact of Background Error on Incomplete Observations for 4D-Var Data Assimilation with the FSU GSM I. Michael Navon 1, Dacian N. Daescu 2, and Zhuo Liu 1 1 School of Computational Science and Information

More information

Convergence of Square Root Ensemble Kalman Filters in the Large Ensemble Limit

Convergence of Square Root Ensemble Kalman Filters in the Large Ensemble Limit Convergence of Square Root Ensemble Kalman Filters in the Large Ensemble Limit Evan Kwiatkowski, Jan Mandel University of Colorado Denver December 11, 2014 OUTLINE 2 Data Assimilation Bayesian Estimation

More information

Gaussian Process Approximations of Stochastic Differential Equations

Gaussian Process Approximations of Stochastic Differential Equations Gaussian Process Approximations of Stochastic Differential Equations Cédric Archambeau Dan Cawford Manfred Opper John Shawe-Taylor May, 2006 1 Introduction Some of the most complex models routinely run

More information

4D-Var or Ensemble Kalman Filter? TELLUS A, in press

4D-Var or Ensemble Kalman Filter? TELLUS A, in press 4D-Var or Ensemble Kalman Filter? Eugenia Kalnay 1 *, Hong Li 1, Takemasa Miyoshi 2, Shu-Chih Yang 1, and Joaquim Ballabrera-Poy 3 1 University of Maryland, College Park, MD, 20742-2425 2 Numerical Prediction

More information

Handling nonlinearity in Ensemble Kalman Filter: Experiments with the three-variable Lorenz model

Handling nonlinearity in Ensemble Kalman Filter: Experiments with the three-variable Lorenz model Handling nonlinearity in Ensemble Kalman Filter: Experiments with the three-variable Lorenz model Shu-Chih Yang 1*, Eugenia Kalnay, and Brian Hunt 1. Department of Atmospheric Sciences, National Central

More information

Sensor Fusion: Particle Filter

Sensor Fusion: Particle Filter Sensor Fusion: Particle Filter By: Gordana Stojceska stojcesk@in.tum.de Outline Motivation Applications Fundamentals Tracking People Advantages and disadvantages Summary June 05 JASS '05, St.Petersburg,

More information

Methods of Data Assimilation and Comparisons for Lagrangian Data

Methods of Data Assimilation and Comparisons for Lagrangian Data Methods of Data Assimilation and Comparisons for Lagrangian Data Chris Jones, Warwick and UNC-CH Kayo Ide, UCLA Andrew Stuart, Jochen Voss, Warwick Guillaume Vernieres, UNC-CH Amarjit Budiraja, UNC-CH

More information

Abstract 2. ENSEMBLE KALMAN FILTERS 1. INTRODUCTION

Abstract 2. ENSEMBLE KALMAN FILTERS 1. INTRODUCTION J5.4 4D ENSEMBLE KALMAN FILTERING FOR ASSIMILATION OF ASYNCHRONOUS OBSERVATIONS T. Sauer George Mason University, Fairfax, VA 22030 B.R. Hunt, J.A. Yorke, A.V. Zimin, E. Ott, E.J. Kostelich, I. Szunyogh,

More information

Data Assimilation for Dispersion Models

Data Assimilation for Dispersion Models Data Assimilation for Dispersion Models K. V. Umamaheswara Reddy Dept. of Mechanical and Aerospace Engg. State University of New Yor at Buffalo Buffalo, NY, U.S.A. venatar@buffalo.edu Yang Cheng Dept.

More information

Ensemble variational assimilation as a probabilistic estimator Part 1: The linear and weak non-linear case

Ensemble variational assimilation as a probabilistic estimator Part 1: The linear and weak non-linear case Nonlin. Processes Geophys., 25, 565 587, 28 https://doi.org/.594/npg-25-565-28 Author(s) 28. This work is distributed under the Creative Commons Attribution 4. License. Ensemble variational assimilation

More information

Numerical Weather prediction at the European Centre for Medium-Range Weather Forecasts (2)

Numerical Weather prediction at the European Centre for Medium-Range Weather Forecasts (2) Numerical Weather prediction at the European Centre for Medium-Range Weather Forecasts (2) Time series curves 500hPa geopotential Correlation coefficent of forecast anomaly N Hemisphere Lat 20.0 to 90.0

More information

Data assimilation in high dimensions

Data assimilation in high dimensions Data assimilation in high dimensions David Kelly Courant Institute New York University New York NY www.dtbkelly.com February 12, 2015 Graduate seminar, CIMS David Kelly (CIMS) Data assimilation February

More information

LARGE-SCALE TRAFFIC STATE ESTIMATION

LARGE-SCALE TRAFFIC STATE ESTIMATION Hans van Lint, Yufei Yuan & Friso Scholten A localized deterministic Ensemble Kalman Filter LARGE-SCALE TRAFFIC STATE ESTIMATION CONTENTS Intro: need for large-scale traffic state estimation Some Kalman

More information

A Non-Gaussian Ensemble Filter Update for Data Assimilation

A Non-Gaussian Ensemble Filter Update for Data Assimilation 4186 M O N T H L Y W E A T H E R R E V I E W VOLUME 138 A Non-Gaussian Ensemble Filter Update for Data Assimilation JEFFREY L. ANDERSON NCAR Data Assimilation Research Section,* Boulder, Colorado (Manuscript

More information

Data assimilation with and without a model

Data assimilation with and without a model Data assimilation with and without a model Tyrus Berry George Mason University NJIT Feb. 28, 2017 Postdoc supported by NSF This work is in collaboration with: Tim Sauer, GMU Franz Hamilton, Postdoc, NCSU

More information

New variables in spherical geometry. David G. Dritschel. Mathematical Institute University of St Andrews.

New variables in spherical geometry. David G. Dritschel. Mathematical Institute University of St Andrews. New variables in spherical geometry David G Dritschel Mathematical Institute University of St Andrews http://www-vortexmcsst-andacuk Collaborators: Ali Mohebalhojeh (Tehran St Andrews) Jemma Shipton &

More information

Stochastic Collocation Methods for Polynomial Chaos: Analysis and Applications

Stochastic Collocation Methods for Polynomial Chaos: Analysis and Applications Stochastic Collocation Methods for Polynomial Chaos: Analysis and Applications Dongbin Xiu Department of Mathematics, Purdue University Support: AFOSR FA955-8-1-353 (Computational Math) SF CAREER DMS-64535

More information

Relationship between Singular Vectors, Bred Vectors, 4D-Var and EnKF

Relationship between Singular Vectors, Bred Vectors, 4D-Var and EnKF Relationship between Singular Vectors, Bred Vectors, 4D-Var and EnKF Eugenia Kalnay and Shu-Chih Yang with Alberto Carrasi, Matteo Corazza and Takemasa Miyoshi 4th EnKF Workshop, April 2010 Relationship

More information

Bayesian Statistics and Data Assimilation. Jonathan Stroud. Department of Statistics The George Washington University

Bayesian Statistics and Data Assimilation. Jonathan Stroud. Department of Statistics The George Washington University Bayesian Statistics and Data Assimilation Jonathan Stroud Department of Statistics The George Washington University 1 Outline Motivation Bayesian Statistics Parameter Estimation in Data Assimilation Combined

More information

Lagrangian Data Assimilation and its Applications to Geophysical Fluid Flows

Lagrangian Data Assimilation and its Applications to Geophysical Fluid Flows Lagrangian Data Assimilation and its Applications to Geophysical Fluid Flows by Laura Slivinski B.S., University of Maryland; College Park, MD, 2009 Sc.M, Brown University; Providence, RI, 2010 A dissertation

More information

Data assimilation; comparison of 4D-Var and LETKF smoothers

Data assimilation; comparison of 4D-Var and LETKF smoothers Data assimilation; comparison of 4D-Var and LETKF smoothers Eugenia Kalnay and many friends University of Maryland CSCAMM DAS13 June 2013 Contents First part: Forecasting the weather - we are really getting

More information

New Fast Kalman filter method

New Fast Kalman filter method New Fast Kalman filter method Hojat Ghorbanidehno, Hee Sun Lee 1. Introduction Data assimilation methods combine dynamical models of a system with typically noisy observations to obtain estimates of the

More information

Robust Ensemble Filtering With Improved Storm Surge Forecasting

Robust Ensemble Filtering With Improved Storm Surge Forecasting Robust Ensemble Filtering With Improved Storm Surge Forecasting U. Altaf, T. Buttler, X. Luo, C. Dawson, T. Mao, I.Hoteit Meteo France, Toulouse, Nov 13, 2012 Project Ensemble data assimilation for storm

More information

Ensemble Kalman Filter

Ensemble Kalman Filter Ensemble Kalman Filter Geir Evensen and Laurent Bertino Hydro Research Centre, Bergen, Norway, Nansen Environmental and Remote Sensing Center, Bergen, Norway The Ensemble Kalman Filter (EnKF) Represents

More information

Analysis Scheme in the Ensemble Kalman Filter

Analysis Scheme in the Ensemble Kalman Filter JUNE 1998 BURGERS ET AL. 1719 Analysis Scheme in the Ensemble Kalman Filter GERRIT BURGERS Royal Netherlands Meteorological Institute, De Bilt, the Netherlands PETER JAN VAN LEEUWEN Institute or Marine

More information

Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter

Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter arxiv:physics/0511236 v1 28 Nov 2005 Brian R. Hunt Institute for Physical Science and Technology and Department

More information

In the derivation of Optimal Interpolation, we found the optimal weight matrix W that minimizes the total analysis error variance.

In the derivation of Optimal Interpolation, we found the optimal weight matrix W that minimizes the total analysis error variance. hree-dimensional variational assimilation (3D-Var) In the derivation of Optimal Interpolation, we found the optimal weight matrix W that minimizes the total analysis error variance. Lorenc (1986) showed

More information

A nested sampling particle filter for nonlinear data assimilation

A nested sampling particle filter for nonlinear data assimilation Quarterly Journal of the Royal Meteorological Society Q. J. R. Meteorol. Soc. : 14, July 2 A DOI:.2/qj.224 A nested sampling particle filter for nonlinear data assimilation Ahmed H. Elsheikh a,b *, Ibrahim

More information

Bayesian Inverse problem, Data assimilation and Localization

Bayesian Inverse problem, Data assimilation and Localization Bayesian Inverse problem, Data assimilation and Localization Xin T Tong National University of Singapore ICIP, Singapore 2018 X.Tong Localization 1 / 37 Content What is Bayesian inverse problem? What is

More information

Data Assimilation: Finding the Initial Conditions in Large Dynamical Systems. Eric Kostelich Data Mining Seminar, Feb. 6, 2006

Data Assimilation: Finding the Initial Conditions in Large Dynamical Systems. Eric Kostelich Data Mining Seminar, Feb. 6, 2006 Data Assimilation: Finding the Initial Conditions in Large Dynamical Systems Eric Kostelich Data Mining Seminar, Feb. 6, 2006 kostelich@asu.edu Co-Workers Istvan Szunyogh, Gyorgyi Gyarmati, Ed Ott, Brian

More information

Aspects of the practical application of ensemble-based Kalman filters

Aspects of the practical application of ensemble-based Kalman filters Aspects of the practical application of ensemble-based Kalman filters Lars Nerger Alfred Wegener Institute for Polar and Marine Research Bremerhaven, Germany and Bremen Supercomputing Competence Center

More information

Four-dimensional ensemble Kalman filtering

Four-dimensional ensemble Kalman filtering Tellus (24), 56A, 273 277 Copyright C Blackwell Munksgaard, 24 Printed in UK. All rights reserved TELLUS Four-dimensional ensemble Kalman filtering By B. R. HUNT 1, E. KALNAY 1, E. J. KOSTELICH 2,E.OTT

More information

Par$cle Filters Part I: Theory. Peter Jan van Leeuwen Data- Assimila$on Research Centre DARC University of Reading

Par$cle Filters Part I: Theory. Peter Jan van Leeuwen Data- Assimila$on Research Centre DARC University of Reading Par$cle Filters Part I: Theory Peter Jan van Leeuwen Data- Assimila$on Research Centre DARC University of Reading Reading July 2013 Why Data Assimila$on Predic$on Model improvement: - Parameter es$ma$on

More information

State and Parameter Estimation in Stochastic Dynamical Models

State and Parameter Estimation in Stochastic Dynamical Models State and Parameter Estimation in Stochastic Dynamical Models Timothy DelSole George Mason University, Fairfax, Va and Center for Ocean-Land-Atmosphere Studies, Calverton, MD June 21, 2011 1 1 collaboration

More information

A Moment Matching Ensemble Filter for Nonlinear Non-Gaussian Data Assimilation

A Moment Matching Ensemble Filter for Nonlinear Non-Gaussian Data Assimilation 3964 M O N T H L Y W E A T H E R R E V I E W VOLUME 139 A Moment Matching Ensemble Filter for Nonlinear Non-Gaussian Data Assimilation JING LEI AND PETER BICKEL Department of Statistics, University of

More information

How is balance of a forecast ensemble affected by adaptive and non-adaptive localization schemes?

How is balance of a forecast ensemble affected by adaptive and non-adaptive localization schemes? 1 How is balance of a forecast ensemble affected by adaptive and non-adaptive localization schemes? Ross Bannister Stefano Migliorini, Laura Baker, Ali Rudd NCEO, DIAMET, ESA University of Reading 2 Outline

More information

A Comparison of Error Subspace Kalman Filters

A Comparison of Error Subspace Kalman Filters Tellus 000, 000 000 (0000) Printed 4 February 2005 (Tellus LATEX style file v2.2) A Comparison of Error Subspace Kalman Filters By LARS NERGER, WOLFGANG HILLER and JENS SCHRÖTER Alfred Wegener Institute

More information

Smoothers: Types and Benchmarks

Smoothers: Types and Benchmarks Smoothers: Types and Benchmarks Patrick N. Raanes Oxford University, NERSC 8th International EnKF Workshop May 27, 2013 Chris Farmer, Irene Moroz Laurent Bertino NERSC Geir Evensen Abstract Talk builds

More information

Review of Covariance Localization in Ensemble Filters

Review of Covariance Localization in Ensemble Filters NOAA Earth System Research Laboratory Review of Covariance Localization in Ensemble Filters Tom Hamill NOAA Earth System Research Lab, Boulder, CO tom.hamill@noaa.gov Canonical ensemble Kalman filter update

More information

Inter-comparison of 4D-Var and EnKF systems for operational deterministic NWP

Inter-comparison of 4D-Var and EnKF systems for operational deterministic NWP Inter-comparison of 4D-Var and EnKF systems for operational deterministic NWP Project eam: Mark Buehner Cecilien Charette Bin He Peter Houtekamer Herschel Mitchell WWRP/HORPEX Workshop on 4D-VAR and Ensemble

More information

Lecture 2: From Linear Regression to Kalman Filter and Beyond

Lecture 2: From Linear Regression to Kalman Filter and Beyond Lecture 2: From Linear Regression to Kalman Filter and Beyond Department of Biomedical Engineering and Computational Science Aalto University January 26, 2012 Contents 1 Batch and Recursive Estimation

More information

Kalman filtering and friends: Inference in time series models. Herke van Hoof slides mostly by Michael Rubinstein

Kalman filtering and friends: Inference in time series models. Herke van Hoof slides mostly by Michael Rubinstein Kalman filtering and friends: Inference in time series models Herke van Hoof slides mostly by Michael Rubinstein Problem overview Goal Estimate most probable state at time k using measurement up to time

More information

2D Image Processing (Extended) Kalman and particle filter

2D Image Processing (Extended) Kalman and particle filter 2D Image Processing (Extended) Kalman and particle filter Prof. Didier Stricker Dr. Gabriele Bleser Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz

More information

9 Multi-Model State Estimation

9 Multi-Model State Estimation Technion Israel Institute of Technology, Department of Electrical Engineering Estimation and Identification in Dynamical Systems (048825) Lecture Notes, Fall 2009, Prof. N. Shimkin 9 Multi-Model State

More information

Sequential Monte Carlo Samplers for Applications in High Dimensions

Sequential Monte Carlo Samplers for Applications in High Dimensions Sequential Monte Carlo Samplers for Applications in High Dimensions Alexandros Beskos National University of Singapore KAUST, 26th February 2014 Joint work with: Dan Crisan, Ajay Jasra, Nik Kantas, Alex

More information

Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density

Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density Simo Särkkä E-mail: simo.sarkka@hut.fi Aki Vehtari E-mail: aki.vehtari@hut.fi Jouko Lampinen E-mail: jouko.lampinen@hut.fi Abstract

More information

Finite-size ensemble Kalman filters (EnKF-N)

Finite-size ensemble Kalman filters (EnKF-N) Finite-size ensemble Kalman filters (EnKF-N) Marc Bocquet (1,2), Pavel Sakov (3,4) (1) Université Paris-Est, CEREA, joint lab École des Ponts ParisTech and EdF R&D, France (2) INRIA, Paris-Rocquencourt

More information

DATA ASSIMILATION FOR FLOOD FORECASTING

DATA ASSIMILATION FOR FLOOD FORECASTING DATA ASSIMILATION FOR FLOOD FORECASTING Arnold Heemin Delft University of Technology 09/16/14 1 Data assimilation is the incorporation of measurement into a numerical model to improve the model results

More information

Relationship between Singular Vectors, Bred Vectors, 4D-Var and EnKF

Relationship between Singular Vectors, Bred Vectors, 4D-Var and EnKF Relationship between Singular Vectors, Bred Vectors, 4D-Var and EnKF Eugenia Kalnay and Shu-Chih Yang with Alberto Carrasi, Matteo Corazza and Takemasa Miyoshi ECODYC10, Dresden 28 January 2010 Relationship

More information

Lecture 7: Optimal Smoothing

Lecture 7: Optimal Smoothing Department of Biomedical Engineering and Computational Science Aalto University March 17, 2011 Contents 1 What is Optimal Smoothing? 2 Bayesian Optimal Smoothing Equations 3 Rauch-Tung-Striebel Smoother

More information

A Gaussian Resampling Particle Filter

A Gaussian Resampling Particle Filter A Gaussian Resampling Particle Filter By X. Xiong 1 and I. M. Navon 1 1 School of Computational Science and Department of Mathematics, Florida State University, Tallahassee, FL 3236, USA 14 July ABSTRACT

More information

Reducing the Impact of Sampling Errors in Ensemble Filters

Reducing the Impact of Sampling Errors in Ensemble Filters Reducing the Impact of Sampling Errors in Ensemble Filters Jeffrey Anderson NCAR Data Assimilation Research Section The National Center for Atmospheric Research is sponsored by the National Science Foundation.

More information

Adaptive ensemble Kalman filtering of nonlinear systems

Adaptive ensemble Kalman filtering of nonlinear systems Adaptive ensemble Kalman filtering of nonlinear systems Tyrus Berry George Mason University June 12, 213 : Problem Setup We consider a system of the form: x k+1 = f (x k ) + ω k+1 ω N (, Q) y k+1 = h(x

More information

Data assimilation : Basics and meteorology

Data assimilation : Basics and meteorology Data assimilation : Basics and meteorology Olivier Talagrand Laboratoire de Météorologie Dynamique, École Normale Supérieure, Paris, France Workshop on Coupled Climate-Economics Modelling and Data Analysis

More information