CONVERGENCE OF THE SQUARE ROOT ENSEMBLE KALMAN FILTER IN THE LARGE ENSEMBLE LIMIT


CONVERGENCE OF THE SQUARE ROOT ENSEMBLE KALMAN FILTER IN THE LARGE ENSEMBLE LIMIT

EVAN KWIATKOWSKI AND JAN MANDEL

Department of Mathematical and Statistical Sciences, University of Colorado Denver

Abstract. Ensemble filters implement sequential Bayesian estimation by representing the probability distribution by an ensemble mean and covariance. Unbiased square root ensemble filters use deterministic algorithms to produce an analysis (posterior) ensemble with prescribed mean and covariance, consistent with the Kalman update. This includes several filters used in practice, such as the Ensemble Transform Kalman Filter (ETKF), the Ensemble Adjustment Kalman Filter (EAKF), and a filter by Whitaker and Hamill. We show that at every time index, as the number of ensemble members increases to infinity, the mean and covariance of an unbiased ensemble square root filter converge to those of the Kalman filter, in the case of a linear model and an initial distribution of which all moments exist. The convergence is in $L^p$ and the convergence rate does not depend on the model dimension. The result holds in infinite-dimensional Hilbert space as well.

Key words. Data assimilation, $L^p$ laws of large numbers, Hilbert space, ensemble Kalman filter

AMS subject classifications. 60F25, 65C05, 65C35

1. Introduction. Data assimilation uses Bayesian estimation to incorporate observations into the state of a model of a physical system, called the forecast, to get a better estimate, called the analysis. The resulting analysis is used to initialize the next run of the model, producing the next forecast, which is subsequently used in the next analysis, and the process thus continues. Data assimilation is widely used, e.g., in geosciences [15].

The Kalman filter represents probability distributions by the mean and covariance. It is an efficient method when the probability distributions are close to Gaussian. However, in applications, the dimension of the state is large, and it is not feasible to even store the state covariance exactly. Ensemble Kalman filters are variants of the Kalman filter in which the state probability distribution is represented by an ensemble of states, and the state mean and covariance are estimated from the ensemble [10]. The dynamics of the model, which could be nonlinear in this case, are applied to each ensemble member to produce the forecast estimate. Simulation studies have shown that ensemble filters are able to handle nonlinear dynamics and high-dimensional state spaces efficiently.

The major differences between ensemble Kalman filters are in the way the analysis ensemble is produced from the forecast and the data. The analysis ensemble can be formed in a stochastic manner or in a deterministic manner. The purpose of this paper is to examine ensemble Kalman filters that use a deterministic method to produce an analysis ensemble with exactly the desired statistics. Such filters are called unbiased square root filters, because the ensemble mean equals the prescribed mean, and computation of the analysis ensemble to match the prescribed ensemble covariance leads to taking the square root of a matrix. This class includes several filters used in practice, such as the Ensemble Transform Kalman Filter (ETKF) [4, 13, 28], the Ensemble Adjustment Kalman Filter (EAKF) [2], and a filter by Whitaker and Hamill [29]. Criteria necessary for an ensemble square root filter to be unbiased are discussed in [20, 27].
The algebra of square root ensemble filters relies on the use of the sample covariance, which does not support localization by tapering [12], but localization can be achieved by other means, such as sequential assimilation of observations locally [2, 13].

An important question for ensemble filters is a law of large numbers as the size of the ensemble grows to infinity. In [19, 21], it was proved independently, for the version of the ensemble Kalman filter with randomized data from [6], that the ensemble mean and covariance converge to those of the Kalman filter as the number of ensemble members grows to infinity. Both analyses obtain convergence in all $L^p$, $1 \le p < \infty$, but the convergence results are not independent of the space dimension. The paper [21] relies on the fact that the ensemble members are exchangeable and uses uniform integrability, which does not provide a rate of convergence, while [19] uses a mean field approach and stochastic inequalities for the random matrices and vectors entry by entry to obtain convergence with the usual Monte Carlo rate $1/\sqrt{N}$.

Here, we show that at every time index, as the number of ensemble members increases to infinity, the mean and covariance of an unbiased ensemble square root filter converge to those of the Kalman filter, in the case of a linear model and an initial distribution of which all moments exist. The convergence is in all $L^p$, with the usual rate $1/\sqrt{N}$, the constant does not depend on the model state dimension, and the result holds in the infinite-dimensional case as well. The constants in the estimates are constructive and depend only on the model and the data, namely the norms of the model operators and of the inverse of the data covariance, and of the vectors given. The analysis builds on some of the tools developed in [19], and extends them to obtain bounds on the operators involved in the formulation of square root ensemble filters, independently of the state space and data dimension, including the infinite-dimensional case. The square root ensemble filters are deterministic, which avoids technical complications associated with data perturbation in infinite dimension. Convergence of ensemble filters with data perturbation independent of dimension, including infinite dimension, will be studied elsewhere. All of these estimates hold for arbitrary but fixed time; for estimates of the behavior of the ensemble Kalman filter with time, see [16].

The main idea of the analysis is simple: by the law of large numbers, the sample mean and the sample covariance of the initial ensemble converge to the mean and covariance of the background distribution. Every analysis step is a continuous mapping of the mean and the covariance, and the convergence in the large ensemble limit follows. The analysis quantifies this argument.

The paper is organized as follows: in Section 2 we introduce the notation and review some background concepts. Section 3 contains the statement of the Kalman filter and the unbiased square root filter, and shows some basic properties, which are needed later. In Section 4, we derive the $L^p$ laws of large numbers needed for the convergence of the statistics of the initial ensemble. In Section 5, we show the continuity properties of the transformation of the statistics from one time step to the next. Finally, Section 6 presents the main result.

2. Notation and preliminaries. We will work with random elements with values in a Hilbert space $V$. Readers interested in finite dimension only can think of random vectors in $V = \mathbb{R}^n$. All notations and proofs are presented in a way that applies in $\mathbb{R}^n$ as well as in a Hilbert space.

2.1. Finite-dimensional case.
Vectors $u \in \mathbb{R}^n$ are columns, and the inner product is $\langle u, v\rangle = u^\mathsf{T} v$, where $u^\mathsf{T}$ denotes the transpose. We will use the notation $[V]$ for the space of all $n$ by $n$ matrices. For a matrix $M$, $M^\mathsf{T}$ denotes the transpose, and $\|M\|$ stands for the spectral norm. We will also need the Hilbert-Schmidt norm (more commonly called the Frobenius norm) of matrices, induced by the corresponding inner product of two $n$ by $n$ matrices,
$$\|A\|_{HS} = \Big(\sum_{i,j=1}^n a_{ij}^2\Big)^{1/2} = \langle A, A\rangle_{HS}^{1/2}, \qquad \langle A, B\rangle_{HS} = \sum_{i,j=1}^n a_{ij} b_{ij}.$$

The Hilbert-Schmidt norm dominates the spectral norm,
$$\|A\| \le \|A\|_{HS}, \qquad (2.1)$$
for any matrix $A$. The notation $A \ge 0$ means that $A$ is symmetric and positive semidefinite; $A > 0$ means positive definite. For $Q \ge 0$, $X \sim N(u, Q)$ means that the random vector $X$ has the normal distribution on $\mathbb{R}^n$ with mean $u$ and covariance matrix $Q$.

For vectors $u, v \in \mathbb{R}^n$, their tensor product is the $n$ by $n$ matrix
$$u \otimes v = u v^\mathsf{T} \in [V].$$
It is easy to see that
$$\|u \otimes v\|_{HS} = \|u\| \, \|v\|, \qquad (2.2)$$
because
$$\|u \otimes v\|_{HS}^2 = \sum_{i=1}^n \sum_{j=1}^n (u_i v_j)^2 = \sum_{i=1}^n u_i^2 \sum_{j=1}^n v_j^2 = \|u\|^2 \|v\|^2.$$

2.2. Hilbert space case. Readers interested in finite dimension only should skip this section. In the general case, $V$ is a separable Hilbert space equipped with the inner product $\langle u, v\rangle$ and the norm $\|u\| = \langle u, u\rangle^{1/2}$. The space of all bounded operators from a Hilbert space $U$ to a Hilbert space $V$ is $[U, V]$, and $[V] = [V, V]$. For a bounded linear operator $A \in [U, V]$, $A^* \in [V, U]$ denotes the adjoint operator, and $\|A\|$ is the operator norm. The Hilbert-Schmidt norm of a linear operator on $V$ is defined by
$$\|A\|_{HS} = \langle A, A\rangle_{HS}^{1/2}, \qquad \langle A, B\rangle_{HS} = \sum_{n=1}^\infty \langle A e_n, B e_n\rangle,$$
where $\{e_n\}$ is any complete orthonormal sequence; the values do not depend on the choice of $\{e_n\}$. An operator $A$ on $V$ is called a Hilbert-Schmidt operator if $\|A\|_{HS} < \infty$, and $HS(V)$ is the space of all Hilbert-Schmidt operators on $V$. The Hilbert-Schmidt norm again dominates the spectral norm, so (2.1) holds, and $HS(V) \subset [V]$. The importance of $HS(V)$ for us lies in the fact that $HS(V)$ is a Hilbert space, while $[V]$ is not. The notation $A \ge 0$ for an operator $A \in [V]$ means that $A$ is symmetric, $A = A^*$, and positive semidefinite, $\langle Au, u\rangle \ge 0$ for all $u \in V$. The notation $A > 0$ means here that $A$ is symmetric and bounded below, that is, $\langle Au, u\rangle \ge \alpha \|u\|^2$ for all $u \in V$ and some $\alpha > 0$. In particular, if $A > 0$, then $A^{-1} \in [V]$. An operator $A \ge 0$ is trace class if $\operatorname{Tr} A < \infty$, where $\operatorname{Tr} A$ is the trace of $A$, defined by $\operatorname{Tr} A = \sum_{n=1}^\infty \langle A e_n, e_n\rangle$. If $A \ge 0$ is trace class, then $A$ is Hilbert-Schmidt, because $\|A\|_{HS} \le \operatorname{Tr} A$.
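The norm inequalities (2.1) and (2.2) are easy to check numerically in the finite-dimensional case. The following Python/NumPy sketch is an illustration added here, not part of the original text; it verifies both relations on random data.

```python
# Numerical spot check of (2.1) and (2.2) in R^n; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))

# (2.1): the Hilbert-Schmidt (Frobenius) norm dominates the spectral norm.
assert np.linalg.norm(A, 2) <= np.linalg.norm(A, "fro") + 1e-12

# (2.2): ||u (x) v||_HS = ||u|| ||v|| for the tensor product u v^T.
u, v = rng.standard_normal(n), rng.standard_normal(n)
assert np.isclose(np.linalg.norm(np.outer(u, v), "fro"),
                  np.linalg.norm(u) * np.linalg.norm(v))
```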

For vectors $u, v \in V$, their tensor product $u \otimes v \in [V]$ is now a mapping defined by
$$u \otimes v : w \in V \mapsto u \langle v, w\rangle,$$
and the proof of (2.2) becomes
$$\|x \otimes y\|_{HS}^2 = \sum_{i=1}^\infty \|(x \otimes y) e_i\|^2 = \sum_{i=1}^\infty \|x\|^2 \langle y, e_i\rangle^2 = \|x\|^2 \sum_{i=1}^\infty \langle y, e_i\rangle^2 = \|x\|^2 \|y\|^2,$$
from Bessel's equality. The mean of a random element, $E(X) \in V$, is defined by $\langle E(X), y\rangle = E(\langle X, y\rangle)$ for all $y \in V$. The mean of the tensor product of two random elements is defined by $\langle E(X \otimes Y) w, y\rangle = E(\langle X, y\rangle \langle Y, w\rangle)$ for all $w, y \in V$. Covariance is defined below in (2.3). The covariance of a random element $X$ exists when the second moment $E(\|X\|^2) < \infty$, and the covariance is a trace class operator. On the other hand, if $Q \ge 0$ is trace class, the normal distribution $N(u, Q)$ can be defined as a probability measure on $V$, consistently with the finite-dimensional case. Just as in the finite-dimensional case, if $X \sim N(u, Q)$, then $X$ has all moments $E(\|X\|^p) < \infty$, $1 \le p < \infty$. See [8, 17, 18] for further details.

2.3. Common definitions and properties. The rest is the same regardless of whether $V = \mathbb{R}^n$ or $V$ is a general Hilbert space. To unify the nomenclature, matrices are identified with the corresponding operators of matrix-vector multiplication. The covariance of a random element $X$ in $V$ is defined by
$$\operatorname{Cov}(X) = E((X - E(X)) \otimes (X - E(X))) = E(X \otimes X) - E(X) \otimes E(X), \qquad (2.3)$$
if it exists. For $1 \le p < \infty$, the space of all random elements $X$ with values in $V$ and finite moment $E(\|X\|^p) < \infty$ is denoted by $L^p(V)$, and it is equipped with the norm $\|X\|_p = (E(\|X\|^p))^{1/p}$. If $1 \le p \le q < \infty$ and $X \in L^q(V)$, then $X \in L^p(V)$ and $\|X\|_p \le \|X\|_q$. If $X, Y \in L^2(V)$ are independent, then $\operatorname{Cov}(X, Y) = 0$, and if in addition one of them has zero mean, then $E(\langle X, Y\rangle) = 0$.

The following lemma will be used repeatedly in obtaining $L^p$ estimates. It is characteristic of the present approach that higher-order norms are used to bound lower-order norms.

Lemma 2.1 ($L^p$ Cauchy-Schwarz inequality). If $U, V \in L^{2p}(V)$ and $1 \le p < \infty$, then
$$\big\| \|U\| \, \|V\| \big\|_p \le \|U\|_{2p} \|V\|_{2p}.$$

Proof. By the Cauchy-Schwarz inequality in $L^2(\mathbb{R})$, applied to the random variables $\|U\|^p$ and $\|V\|^p$, which are in $L^2(\mathbb{R})$,
$$\big\| \|U\| \, \|V\| \big\|_p^p = E(\|U\|^p \|V\|^p) \le \big(E(\|U\|^{2p})\big)^{1/2} \big(E(\|V\|^{2p})\big)^{1/2} = \Big( \big(E(\|U\|^{2p})\big)^{1/2p} \big(E(\|V\|^{2p})\big)^{1/2p} \Big)^p = \|U\|_{2p}^p \|V\|_{2p}^p.$$
Taking the $p$-th root of both sides yields the desired inequality.
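Lemma 2.1 can be illustrated by a small Monte Carlo experiment. The following sketch (ours, assuming Gaussian samples) compares the $L^p$ norm of the product $\|U\| \, \|V\|$ with the product of the $L^{2p}$ norms.

```python
# Monte Carlo check of the L^p Cauchy-Schwarz inequality (Lemma 2.1).
import numpy as np

rng = np.random.default_rng(1)
p, n, m = 3, 4, 200_000                       # moment order, dimension, samples
U = rng.standard_normal((m, n))
V = rng.standard_normal((m, n))
nU, nV = np.linalg.norm(U, axis=1), np.linalg.norm(V, axis=1)

lhs = np.mean((nU * nV) ** p) ** (1 / p)      # || ||U|| ||V|| ||_p
rhs = (np.mean(nU ** (2 * p)) ** (1 / (2 * p))
       * np.mean(nV ** (2 * p)) ** (1 / (2 * p)))  # ||U||_2p ||V||_2p
print(lhs, "<=", rhs)                          # holds up to sampling noise
```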

An ensemble $X_N$ consists of random elements $X_i$, $i = 1, \dots, N$. The ensemble mean is denoted by $\bar X_N$ or $E_N(X_N)$, and is defined by
$$\bar X_N = E_N(X_N) = \frac{1}{N} \sum_{i=1}^N X_i. \qquad (2.4)$$
The ensemble covariance is denoted by $Q_N$ or $C_N(X_N)$, and is defined by
$$Q_N = C_N(X_N) = \frac{1}{N} \sum_{i=1}^N (X_i - \bar X_N) \otimes (X_i - \bar X_N) = E_N(X_N \otimes X_N) - E_N(X_N) \otimes E_N(X_N), \qquad (2.5)$$
where $X_N \otimes X_N = [X_1 \otimes X_1, \dots, X_N \otimes X_N]$. For $A, B \in [V]$, $A \le B$ means that $A$ and $B$ are symmetric and $B - A \ge 0$. It is well known from spectral theory that $0 \le A \le B$ implies $\|A\| \le \|B\|$.

3. Definitions and basic properties of the algorithms. In the Kalman filter, the probability distribution of the state of the system is described by its mean and covariance. We first consider the analysis step, which uses the Bayes theorem to incorporate an observation into the forecast state and covariance to produce the analysis state and covariance. The system state $X$ is an $\mathbb{R}^n$-valued random vector. Denote by $X^f \in \mathbb{R}^n$ the forecast state mean, $Q^f \in \mathbb{R}^{n \times n}$ the forecast state error covariance, and $X^a \in \mathbb{R}^n$ and $Q^a \in \mathbb{R}^{n \times n}$ the analysis state mean and covariance, respectively. The observation data vector is $d \in \mathbb{R}^m$, where $d - HX \sim N(0, R)$, with $H \in \mathbb{R}^{m \times n}$ the linear observation operator, and $R \in \mathbb{R}^{m \times m}$ the observation error covariance. Given the forecast mean and covariance, the analysis mean and covariance are chosen as the best fit to the data in the sense of least squares, so that for all $X \in \mathbb{R}^n$,
$$J(X) = (X - X^f)^\mathsf{T} (Q^f)^{-1} (X - X^f) + (d - HX)^\mathsf{T} R^{-1} (d - HX) = (X - X^a)^\mathsf{T} (Q^a)^{-1} (X - X^a) + \mathrm{const}. \qquad (3.1)$$
The quantities that create the equality between the two formulations of the least squares functional $J(X)$ are the Kalman filter analysis mean and covariance,
$$X^a = X^f + K(d - H X^f), \qquad (3.2)$$
$$Q^a = Q^f - Q^f H^* (H Q^f H^* + R)^{-1} H Q^f = Q^f - K H Q^f, \qquad (3.3)$$
where
$$K = Q^f H^* (H Q^f H^* + R)^{-1} \qquad (3.4)$$
is the Kalman gain matrix. See, e.g., [13, Sec. 2.1] or [1, 14, 26]. For future reference, we introduce the operators
$$K(Q) = Q H^* (H Q H^* + R)^{-1}, \qquad (3.5)$$
$$B(X, Q) = X + K(Q)(d - HX), \qquad (3.6)$$
$$A(Q) = Q - Q H^* (H Q H^* + R)^{-1} H Q \qquad (3.7)$$
$$\phantom{A(Q)} = Q - K(Q) H Q, \qquad (3.8)$$
which evaluate the Kalman gain, the analysis mean, and the analysis covariance, respectively, in the Kalman filter equations (3.2)-(3.4).
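The statistics (2.4)-(2.5) and the operators (3.5)-(3.8) translate directly into code. The following NumPy sketch is our illustration (names such as ens and kalman_gain are not from the paper); it forms the covariances densely, which is only feasible for small state dimensions.

```python
# Sketch of the ensemble statistics (2.4)-(2.5) and the operators
# K, B, A of (3.5)-(3.8); dense linear algebra, small dimensions only.
import numpy as np

def ensemble_mean(ens):
    """E_N(X_N): mean of the N columns of the n-by-N array ens, as in (2.4)."""
    return ens.mean(axis=1)

def ensemble_cov(ens):
    """C_N(X_N) as in (2.5), with the 1/N factor (not 1/(N-1))."""
    dev = ens - ensemble_mean(ens)[:, None]
    return dev @ dev.T / ens.shape[1]

def kalman_gain(Q, H, R):
    """K(Q) = Q H* (H Q H* + R)^{-1}, equation (3.5)."""
    return Q @ H.T @ np.linalg.inv(H @ Q @ H.T + R)

def analysis_mean(X, Q, H, R, d):
    """B(X, Q) = X + K(Q)(d - H X), equation (3.6)."""
    return X + kalman_gain(Q, H, R) @ (d - H @ X)

def analysis_cov(Q, H, R):
    """A(Q) = Q - K(Q) H Q, equation (3.8)."""
    return Q - kalman_gain(Q, H, R) @ H @ Q
```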

We are now ready to state the complete Kalman filter for reference. The superscript $(k)$ denotes quantities at time step $k$. However, to simplify notation, we drop the time superscript $(k)$ when we are concerned with one step only.

Algorithm 3.1 (Kalman filter). Suppose that the model $M^{(k)}$ at each time $k > 0$ is linear, $M^{(k)}(X) = M^{(k)} X + b^{(k)}$, and the initial mean $X^{(0)}$ and covariance $B = Q^{(0),a}$ of the state are given. At time $k$, the analysis mean and covariance from the previous time $k - 1$ are advanced by the model,
$$X^{(k),f} = M^{(k)} X^{(k-1),a} + b^{(k)}, \qquad (3.9)$$
$$Q^{(k),f} = M^{(k)} Q^{(k-1),a} M^{(k)*}, \qquad (3.10)$$
followed by the analysis step, which incorporates the observation $d^{(k)}$, where $H^{(k)} X^{(k)} - d^{(k)}$ has zero mean and covariance $R^{(k)}$, and gives the analysis mean and covariance
$$X^{(k),a} = B^{(k)}(X^{(k),f}, Q^{(k),f}), \qquad (3.11)$$
$$Q^{(k),a} = A^{(k)}(Q^{(k),f}), \qquad (3.12)$$
where $B^{(k)}$ and $A^{(k)}$ are defined by (3.6) and (3.8), respectively, with $d$, $H$, and $R$ at time $k$.

In many applications, the state dimension $n$ of the system is large, and computing or even storing the exact covariance of the system state is computationally impractical. Ensemble Kalman filters address this concern. Ensemble Kalman filters use a collection of state vectors, called an ensemble, to represent the distribution of the system state. This ensemble will be denoted by $X_N$, comprised of random elements $X_i$ in $V$, $i = 1, \dots, N$. The ensemble mean and ensemble covariance, defined by (2.4) and (2.5), are denoted by $\bar X_N$ and $Q_N$, respectively, while the Kalman filter mean and covariance are denoted without subscripts, as $X$ and $Q$.

There are many ways to produce an analysis ensemble corresponding to the Kalman filter algorithm. Given a forecast ensemble $X_N^f$, ensemble square root filters produce an analysis ensemble $X_N^a$ such that (3.11)-(3.12) are satisfied for the sample mean and covariance, in the sense that
$$\bar X_N^a = B(\bar X_N^f, Q_N^f) = \bar X_N^f + K(Q_N^f)(d - H \bar X_N^f), \qquad (3.13)$$
$$Q_N^a = A(Q_N^f) = Q_N^f - K(Q_N^f) H Q_N^f. \qquad (3.14)$$

The initial ensemble $X_N^{(0),a}$ is generated around the initial state $X^{(0)}$ as a sample from a distribution with mean $X^{(0)}$ and covariance $B$. The matrix $B$ is called the background covariance. The background covariance does not need to be stored explicitly. Rather, it is used in a factored form [5, 11], and only multiplication of $L$ times a vector is needed,
$$B = L L^*, \qquad X_i^{(0)} = X^{(0)} + L Y_i, \quad Y_i \sim N(0, I). \qquad (3.15)$$
For example, a sample covariance of the form (3.15) can be created from historical data. To better represent the dominant part of the background covariance, $L$ can consist of approximate eigenvectors for the largest eigenvalues, obtained by a variant of the power method [15]. Another popular choice is $L = TS$, where $T$ is a transformation, such as the FFT or a wavelet transform, which requires no storage of the matrix entries at all, and $S$ is a sparse matrix [9, 24, 25]. The covariance of the Fourier transform of a stationary random field is diagonal, so even an approximation with a diagonal matrix $S$ is useful, and computationally very inexpensive.
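For concreteness, here is a sketch (ours, with hypothetical names) of the initial ensemble generation (3.15); note that only products of $L$ with vectors are used, so $B$ is never formed.

```python
# Initial ensemble from the factored background covariance B = L L^T, (3.15).
import numpy as np

def initial_ensemble(x0, L, N, rng=None):
    """Columns X_i^(0) = x0 + L Y_i with Y_i ~ N(0, I), so the ensemble
    is a sample from a distribution with mean x0 and covariance L @ L.T."""
    rng = rng or np.random.default_rng()
    Y = rng.standard_normal((L.shape[1], N))
    return x0[:, None] + L @ Y
```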

Starting from the initial ensemble, the unbiased square root ensemble filter algorithm proceeds as follows.

Algorithm 3.2 (Unbiased square root ensemble filter). Suppose that the initial ensemble $X_N^{(0),a}$ at time $k = 0$ is given. At time $k > 0$, the analysis ensemble $X_N^{(k-1),a}$ is advanced by the model,
$$X_i^{(k),f} = M^{(k)}(X_i^{(k-1),a}), \quad i = 1, \dots, N, \qquad (3.16)$$
resulting in the forecast ensemble $X_N^{(k),f}$ with the ensemble mean and covariance
$$\bar X_N^{(k),f} = E_N(X_N^{(k),f}), \qquad Q_N^{(k),f} = C_N(X_N^{(k),f}).$$
The analysis step creates an ensemble $X_N^{(k),a}$ which incorporates the observation $d^{(k)}$, where $d^{(k)} - H^{(k)} X^{(k)}$ has zero mean and covariance $R^{(k)}$. The analysis ensemble is constructed (in a manner to be determined by the specific method) to have the ensemble mean and covariance given by
$$E_N(X_N^{(k),a}) = B^{(k)}(\bar X_N^{(k),f}, Q_N^{(k),f}), \qquad (3.17)$$
$$C_N(X_N^{(k),a}) = A^{(k)}(Q_N^{(k),f}), \qquad (3.18)$$
where $B^{(k)}$ and $A^{(k)}$ are defined by (3.6) and (3.8) with $d$, $H$, and $R$ at time $k$.

The Ensemble Adjustment Kalman Filter (EAKF, [2]), the filter by Whitaker and Hamill [29], and the Ensemble Transform Kalman Filter (ETKF, [4]) and its variants, the Local Ensemble Transform Kalman Filter (LETKF, [13]) and its revision [28], satisfy properties (3.17) and (3.18). See also [20, 27]. These algorithms satisfy the conditions in the following lemma, which describes their basic properties; an implementation sketch of one such analysis step follows the lemma.

Lemma 3.3. Suppose that for each $k \ge 1$, the model $M^{(k)}$ is linear, $M^{(k)}(X) = M^{(k)} X + b^{(k)}$. Then the ensemble mean and covariance
$$\bar X_N^{(k),a} = E_N(X_N^{(k),a}), \qquad Q_N^{(k),a} = C_N(X_N^{(k),a}), \qquad k = 0, 1, 2, \dots, \qquad (3.19)$$
of the ensembles $X_N^{(k),a}$ generated by Algorithm 3.2 satisfy
$$\bar X_N^{(k),f} = M^{(k)} \bar X_N^{(k-1),a} + b^{(k)}, \qquad (3.20)$$
$$Q_N^{(k),f} = M^{(k)} Q_N^{(k-1),a} M^{(k)*}, \qquad (3.21)$$
$$\bar X_N^{(k),a} = B^{(k)}(\bar X_N^{(k),f}, Q_N^{(k),f}), \qquad (3.22)$$
$$Q_N^{(k),a} = A^{(k)}(Q_N^{(k),f}). \qquad (3.23)$$

Proof. For any $k$, from the definition of the model $M^{(k)}$,
$$X_i^{(k),f} = M^{(k)} X_i^{(k-1),a} + b^{(k)}, \quad i = 1, \dots, N, \qquad (3.24)$$
and (3.20) and (3.21) follow. Now (3.22) and (3.23) follow from the definitions of the ensemble mean and covariance in (3.19), and from the properties (3.17) and (3.18) of the construction of the analysis ensemble.
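The implementation sketch announced above: one analysis step in the spirit of the ETKF with the symmetric square root transform, which produces an ensemble with exactly the statistics (3.17)-(3.18). This is our illustration under the paper's conventions (covariance with the $1/N$ factor), not code from the paper; the function name etkf_analysis is hypothetical.

```python
# One ETKF-style analysis step realizing (3.17)-(3.18); a sketch.
import numpy as np

def etkf_analysis(ens_f, H, R, d):
    """Return an analysis ensemble whose sample mean and covariance
    (with the 1/N factor of (2.5)) equal B(x_f, Q_f) and A(Q_f)."""
    n, N = ens_f.shape
    x_f = ens_f.mean(axis=1)
    A = ens_f - x_f[:, None]          # forecast deviations; A @ ones = 0
    Qf = A @ A.T / N                  # forecast sample covariance

    K = Qf @ H.T @ np.linalg.inv(H @ Qf @ H.T + R)   # Kalman gain (3.5)
    x_a = x_f + K @ (d - H @ x_f)     # prescribed analysis mean (3.13)

    S = H @ A
    M = np.eye(N) + S.T @ np.linalg.solve(R, S) / N  # symmetric, > 0
    w, V = np.linalg.eigh(M)
    T = V @ np.diag(w ** -0.5) @ V.T  # symmetric square root of M^{-1}
    # Since S @ ones = 0, T @ ones = ones: the transform preserves the
    # zero mean of the deviations, which makes the filter unbiased, and
    # A T T^T A^T / N = A(Q_f) by the Sherman-Morrison-Woodbury identity.
    return x_a[:, None] + A @ T
```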

Using Lemma 3.3, we can now compare the transformations of the ensemble mean and covariance in Algorithms 3.1 and 3.2 in the case of a linear model:

Kalman filter:
$$X^{(k),f} = M^{(k)} X^{(k-1),a} + b^{(k)}, \qquad Q^{(k),f} = M^{(k)} Q^{(k-1),a} M^{(k)*},$$
$$X^{(k),a} = B^{(k)}(X^{(k),f}, Q^{(k),f}), \qquad Q^{(k),a} = A^{(k)}(Q^{(k),f}).$$

Unbiased square root ensemble filter:
$$\bar X_N^{(k),f} = M^{(k)} \bar X_N^{(k-1),a} + b^{(k)}, \qquad Q_N^{(k),f} = M^{(k)} Q_N^{(k-1),a} M^{(k)*},$$
$$\bar X_N^{(k),a} = B^{(k)}(\bar X_N^{(k),f}, Q_N^{(k),f}), \qquad Q_N^{(k),a} = A^{(k)}(Q_N^{(k),f}). \qquad (3.25)$$

It is clear from (3.25) that the Kalman filter mean and covariance and those of the unbiased square root ensemble filter evolve in the same way. Therefore, all that is needed for the convergence of the unbiased square root ensemble filter is the convergence of the initial ensemble mean and covariance, $\bar X_N^{(0),a} \to X^{(0)}$ and $Q_N^{(0),a} \to B$ as $N \to \infty$, and the continuity of the operators $A^{(k)}$ and $B^{(k)}$, in a suitable statistical sense. Similarly as in [19, 21], we will work with convergence in all $L^p$, $1 \le p < \infty$.

4. $L^p$ laws of large numbers. To prove convergence in $L^p$ of the initial ensemble mean and covariance to the mean and covariance of the background distribution, we need the corresponding laws of large numbers.

4.1. $L^p$ convergence of the sample mean. The $L^2$ law of large numbers for $X_1, \dots, X_N \in L^2(V)$ i.i.d. is classical,
$$\|E_N(X_N) - E(X_1)\|_2 \le \frac{1}{\sqrt N} \|X_1 - E(X_1)\|_2 \le \frac{2}{\sqrt N} \|X_1\|_2. \qquad (4.1)$$
The proof relies on the expansion (assuming $E(X_1) = 0$ without loss of generality)
$$\|E_N(X_N)\|_2^2 = E\Big( \Big\langle \frac{1}{N} \sum_{i=1}^N X_i, \frac{1}{N} \sum_{k=1}^N X_k \Big\rangle \Big) = \frac{1}{N^2} \sum_{i=1}^N \sum_{k=1}^N E(\langle X_i, X_k\rangle) = \frac{1}{N^2} \sum_{i=1}^N E(\langle X_i, X_i\rangle) = \frac{1}{N} E(\langle X_1, X_1\rangle) = \frac{1}{N} \|X_1\|_2^2, \qquad (4.2)$$
which yields $\|E_N(X_N)\|_2 \le \|X_1\|_2 / \sqrt N$. To obtain $L^p$ laws of large numbers for $p \ge 2$, the equality (4.2) needs to be replaced by the following.

Lemma 4.1 (Marcinkiewicz-Zygmund inequality). If $1 \le p < \infty$ and $Y_i \in L^p(V)$, $E(Y_i) = 0$ and $E(\|Y_i\|^p) < \infty$, $i = 1, \dots, N$, then
$$E\Big( \Big\| \sum_{i=1}^N Y_i \Big\|^p \Big) \le B_p \, E\Big( \Big( \sum_{i=1}^N \|Y_i\|^2 \Big)^{p/2} \Big), \qquad (4.3)$$
where $B_p$ depends on $p$ only.

Proof. For the finite-dimensional case, see [23] or [7, p. 367]. In the infinite-dimensional case, note that a separable Hilbert space is a Banach space of Rademacher type 2 [3, p. 159], which implies (4.3) for any $p \ge 1$ [30, Proposition 2.1]. All infinite-dimensional separable Hilbert spaces are isometric, so the same $B_p$ works for any of them.

The Marcinkiewicz-Zygmund inequality allows us to prove a variant of the weak law of large numbers in $L^p$ norms, similarly as in [7, Corollary 2, page 368]. Note that the Marcinkiewicz-Zygmund inequality does not hold in Banach spaces in general, so it is important that $V = \mathbb{R}^n$ or $V$ is a separable Hilbert space, as assumed throughout.

Theorem 4.2. Let $X_k \in L^p(V)$ be i.i.d. and $p \ge 2$. Then,
$$\|E_N(X_N) - E(X_1)\|_p \le \frac{C_p}{\sqrt N} \|X_1 - E(X_1)\|_p \le \frac{2 C_p}{\sqrt N} \|X_1\|_p, \qquad (4.4)$$
where $C_p$ depends on $p$ only.

Proof. If $p = 2$, the statement becomes (4.1). Let $p > 2$, and consider first the case $E(X_1) = 0$. By Hölder's inequality with the conjugate exponents $\frac{p}{p-2}$ and $\frac{p}{2}$,
$$\sum_{i=1}^N \|X_i\|^2 = \sum_{i=1}^N 1 \cdot \|X_i\|^2 \le \Big( \sum_{i=1}^N 1^{p/(p-2)} \Big)^{(p-2)/p} \Big( \sum_{i=1}^N \big( \|X_i\|^2 \big)^{p/2} \Big)^{2/p} = N^{(p-2)/p} \Big( \sum_{i=1}^N \|X_i\|^p \Big)^{2/p}. \qquad (4.5)$$
Using the Marcinkiewicz-Zygmund inequality (4.3) and (4.5),
$$E\Big( \Big\| \sum_{i=1}^N X_i \Big\|^p \Big) \le B_p \, E\Big( \Big( \sum_{i=1}^N \|X_i\|^2 \Big)^{p/2} \Big) \le B_p \, E\Big( \Big( N^{(p-2)/p} \Big( \sum_{i=1}^N \|X_i\|^p \Big)^{2/p} \Big)^{p/2} \Big) = B_p N^{p/2 - 1} \sum_{i=1}^N E(\|X_i\|^p) = B_p N^{p/2} E(\|X_1\|^p), \qquad (4.6)$$
because $\frac{p-2}{p} \cdot \frac{p}{2} = \frac{p}{2} - 1$, and the $X_i$ are identically distributed. Taking the $p$-th root of both sides of (4.6) yields
$$\Big\| \sum_{i=1}^N X_i \Big\|_p \le B_p^{1/p} N^{1/2} \|X_1\|_p,$$
and the first inequality in (4.4) follows with $C_p = B_p^{1/p}$, after dividing by $N$. The general case when $E(X_1) \ne 0$ follows from the triangle inequality.
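The $1/\sqrt N$ behavior in (4.1) and (4.4) is visible in a few lines of simulation (our sketch; Gaussian samples assumed): the root mean square error of the sample mean, multiplied by $\sqrt N$, stays roughly constant.

```python
# Monte Carlo illustration of the O(1/sqrt(N)) rate in (4.1).
import numpy as np

rng = np.random.default_rng(2)
n, reps = 3, 200
for N in (100, 400, 1600, 6400):
    errs = [np.linalg.norm(rng.standard_normal((n, N)).mean(axis=1))
            for _ in range(reps)]
    rmse = float(np.sqrt(np.mean(np.square(errs))))
    print(N, rmse, rmse * np.sqrt(N))   # last column is approximately sqrt(n)
```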

4.2. $L^p$ convergence of the sample covariance. For the convergence of the ensemble covariance, we use the $L^p$ law of large numbers in the Hilbert-Schmidt norm, because $HS(V)$ is a separable Hilbert space, while $[V]$ is not even a Hilbert space. Convergence in the norm of $[V]$, the operator norm, then follows from (2.1). See also [19, Lemma 22.3] for a related result using entry-by-entry estimates.

Theorem 4.3. Let $X_1, \dots, X_N \in L^{2p}(V)$ be i.i.d. and $p \ge 2$. Then,
$$\big\| \, \|C_N(X_N) - \operatorname{Cov}(X_1)\|_{HS} \big\|_p \le \Big( \frac{2 C_p}{\sqrt N} + \frac{4 C_{2p}^2}{N} \Big) \|X_1\|_{2p}^2, \qquad (4.7)$$
where $C_p$ is a constant which depends on $p$ only; in particular, $C_2 = 1$.

Proof. Without loss of generality, let $E(X_1) = 0$. Then
$$C_N(X_N) - \operatorname{Cov}(X_1) = \big( E_N(X_N \otimes X_N) - E(X_1 \otimes X_1) \big) - E_N(X_N) \otimes E_N(X_N), \qquad (4.8)$$
where $X_N \otimes X_N = [X_1 \otimes X_1, \dots, X_N \otimes X_N]$. For the first term in (4.8), the $L^p$ law of large numbers (4.4) in $HS(V)$ yields
$$E\big( \|E_N(X_N \otimes X_N) - E(X_1 \otimes X_1)\|_{HS}^p \big)^{1/p} \le \frac{2 C_p}{\sqrt N} E\big( \|X_1 \otimes X_1\|_{HS}^p \big)^{1/p} = \frac{2 C_p}{\sqrt N} E\big( \|X_1\|^{2p} \big)^{1/p} = \frac{2 C_p}{\sqrt N} \|X_1\|_{2p}^2,$$
using (2.2). For the second term in (4.8), we use (2.2) again to get $\|E_N(X_N) \otimes E_N(X_N)\|_{HS} = \|E_N(X_N)\|^2$, and the $L^p$ law of large numbers in $V$ yields
$$E\big( \|E_N(X_N) \otimes E_N(X_N)\|_{HS}^p \big)^{1/p} = E\big( \|E_N(X_N)\|^{2p} \big)^{1/p} = \|E_N(X_N) - 0\|_{2p}^2 \le \Big( \frac{2 C_{2p}}{\sqrt N} \|X_1\|_{2p} \Big)^2.$$
It remains to use the triangle inequality for the general case.
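A companion simulation for Theorem 4.3 (again our sketch, not from the paper): the Hilbert-Schmidt error of the ensemble covariance also decays like $1/\sqrt N$, in agreement with the leading term of (4.7).

```python
# Monte Carlo illustration of Theorem 4.3 for the sample covariance.
import numpy as np

rng = np.random.default_rng(3)
n, reps = 3, 200
Cov = np.diag([3.0, 2.0, 1.0])           # true covariance of X_1
L = np.sqrt(Cov)                         # factor with Cov = L @ L.T
for N in (100, 400, 1600, 6400):
    errs = []
    for _ in range(reps):
        X = L @ rng.standard_normal((n, N))
        dev = X - X.mean(axis=1, keepdims=True)
        errs.append(np.linalg.norm(dev @ dev.T / N - Cov, "fro"))
    rmse = float(np.sqrt(np.mean(np.square(errs))))
    print(N, rmse, rmse * np.sqrt(N))    # last column stays O(1)
```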

5. Continuity estimates. A fundamental part of our analysis is a set of continuity estimates for the operators $A$ and $B$, which bring the forecast statistics to the analysis statistics. Our general strategy is to first derive pointwise estimates, which apply to every realization of the random elements separately. In the next section, we will integrate them to get the corresponding $L^p$ estimates. We will prove the following estimates for general covariances $Q$ and $P$, with the state covariance and sample covariance of the filters in mind.

5.1. Pointwise bounds. The first estimate is the continuity of the Kalman gain (3.4) as a function of the forecast covariance, in the next lemma and its corollary. See also [19, Proposition 22.2].

Lemma 5.1. If $R > 0$ and $P, Q \ge 0$, then
$$\|K(Q) - K(P)\| \le \|Q - P\| \, \|H\| \, \|R^{-1}\| \big( 1 + \min\{\|P\|, \|Q\|\} \, \|H\|^2 \, \|R^{-1}\| \big).$$

Proof. Since $R > 0$ and $P, Q \ge 0$, $K(Q)$ and $K(P)$ are defined in (3.5). For $A, B \ge 0$, we have the identity
$$(I + A)^{-1} - (I + B)^{-1} = (I + A)^{-1} (B - A) (I + B)^{-1}, \qquad (5.1)$$
which is easily verified by multiplication by $I + A$ on the left and $I + B$ on the right, and
$$\|(I + A)^{-1} - (I + B)^{-1}\| \le \|B - A\|, \qquad (5.2)$$
which follows from (5.1) using the inequalities $\|(I + A)^{-1}\| \le 1$ and $\|(I + B)^{-1}\| \le 1$, which hold because $A, B \ge 0$. Now write
$$(H Q H^* + R)^{-1} = R^{-1/2} (R^{-1/2} H Q H^* R^{-1/2} + I)^{-1} R^{-1/2},$$
using the symmetric square root $R^{1/2}$ of $R$. By (5.2) with $A = R^{-1/2} H Q H^* R^{-1/2}$ and $B = R^{-1/2} H P H^* R^{-1/2}$, we have that
$$\|(H Q H^* + R)^{-1} - (H P H^* + R)^{-1}\| \le \|R^{-1/2}\|^2 \, \|R^{-1/2} H (Q - P) H^* R^{-1/2}\| \le \|Q - P\| \, \|H\|^2 \, \|R^{-1}\|^2. \qquad (5.3)$$
Using the fact that $H Q H^* + R \ge R$, we have
$$\|(H Q H^* + R)^{-1}\| \le \|R^{-1}\|. \qquad (5.4)$$
Using (5.3), (5.4), and the definition of the operator $K$ from (3.5), we have
$$K(Q) - K(P) = Q H^* (H Q H^* + R)^{-1} - P H^* (H P H^* + R)^{-1}$$
$$= Q H^* (H Q H^* + R)^{-1} - P H^* (H Q H^* + R)^{-1} + P H^* (H Q H^* + R)^{-1} - P H^* (H P H^* + R)^{-1}$$
$$= (Q - P) H^* (H Q H^* + R)^{-1} + P H^* \big( (H Q H^* + R)^{-1} - (H P H^* + R)^{-1} \big),$$
so that
$$\|K(Q) - K(P)\| \le \|Q - P\| \, \|H\| \, \|R^{-1}\| + \|P\| \, \|H\| \, \|Q - P\| \, \|H\|^2 \, \|R^{-1}\|^2 = \|Q - P\| \, \|H\| \, \|R^{-1}\| \big( 1 + \|P\| \, \|H\|^2 \, \|R^{-1}\| \big).$$
Swapping the roles of $P$ and $Q$ yields
$$\|K(Q) - K(P)\| \le \|Q - P\| \, \|H\| \, \|R^{-1}\| \big( 1 + \|Q\| \, \|H\|^2 \, \|R^{-1}\| \big),$$
which completes the proof.

A pointwise bound on the Kalman gain follows.

Corollary 5.2. If $R > 0$ and $Q \ge 0$, then
$$\|K(Q)\| \le \|Q\| \, \|H\| \, \|R^{-1}\|. \qquad (5.5)$$

Proof. Use Lemma 5.1 with $P = 0$.

Corollary 5.3. If $R > 0$ and $Q \ge 0$, then
$$\|B(X, Q)\| \le \|X\| + \|Q\| \, \|H\| \, \|R^{-1}\| \big( \|d\| + \|H\| \, \|X\| \big). \qquad (5.6)$$

Proof. By the definition of the operator $B$ in (3.6), the pointwise bound on the Kalman gain (5.5), and the triangle inequality,
$$\|B(X, Q)\| = \|X + K(Q)(d - HX)\| \le \|X\| + \|Q\| \, \|H\| \, \|R^{-1}\| \big( \|d\| + \|H\| \, \|X\| \big).$$

The pointwise continuity of the operator $A$ also follows from Lemma 5.1.

Lemma 5.4. If $R > 0$ and $P, Q \ge 0$, then
$$\|A(Q) - A(P)\| \le \|Q - P\| \Big( 1 + \|H\|^2 \, \|R^{-1}\| \big( \|Q\| + \|P\| + \|H\|^2 \, \|R^{-1}\| \, \|P\| \, \|Q\| \big) \Big).$$
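A quick randomized test of the bound in Lemma 5.1 (our illustrative sketch, with random symmetric positive semidefinite $P$ and $Q$):

```python
# Randomized spot check of the Kalman gain continuity bound, Lemma 5.1.
import numpy as np

rng = np.random.default_rng(4)
n, m = 4, 2
H = rng.standard_normal((m, n))
R = np.eye(m)

def nrm(A):
    return np.linalg.norm(A, 2)                # spectral norm

def gain(Q):
    return Q @ H.T @ np.linalg.inv(H @ Q @ H.T + R)   # K(Q), (3.5)

def spd():
    Z = rng.standard_normal((n, n))
    return Z @ Z.T                             # random P, Q >= 0

P, Q = spd(), spd()
c = nrm(H) ** 2 * nrm(np.linalg.inv(R))
lhs = nrm(gain(Q) - gain(P))
rhs = nrm(Q - P) * nrm(H) * nrm(np.linalg.inv(R)) * (1 + min(nrm(P), nrm(Q)) * c)
print(lhs <= rhs)                              # True
```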

Proof. Since $R > 0$ and $P, Q \ge 0$, it follows that $K(Q)$, $K(P)$, $A(Q)$, and $A(P)$ are defined. From the definition of $A$ in (3.8), Lemma 5.1, and Corollary 5.2,
$$A(Q) - A(P) = (Q - K(Q) H Q) - (P - K(P) H P) = Q - P + K(P) H P - K(Q) H Q$$
$$= Q - P + K(P) H P - K(P) H Q + K(P) H Q - K(Q) H Q,$$
so that
$$\|A(Q) - A(P)\| \le \|Q - P\| + \|Q - P\| \, \|K(P)\| \, \|H\| + \|K(Q) - K(P)\| \, \|H\| \, \|Q\|$$
$$\le \|Q - P\| + \|Q - P\| \, \|P\| \, \|H\|^2 \, \|R^{-1}\| + \|Q - P\| \, \|H\|^2 \, \|R^{-1}\| \big( 1 + \|P\| \, \|H\|^2 \, \|R^{-1}\| \big) \|Q\|$$
$$= \|Q - P\| \Big( 1 + \|H\|^2 \, \|R^{-1}\| \big( \|Q\| + \|P\| + \|H\|^2 \, \|R^{-1}\| \, \|P\| \, \|Q\| \big) \Big).$$

Remark 5.5. The choice of $\|P\|$ in $\min\{\|P\|, \|Q\|\}$ in the proof of Lemma 5.4 was made to preserve symmetry. Swapping the roles of $P$ and $Q$ in the proof gives a sharper, but more complicated, bound
$$\|A(Q) - A(P)\| \le \|Q - P\| \Big( 1 + \|H\|^2 \, \|R^{-1}\| \big( \|Q\| + \|P\| + \|H\|^2 \, \|R^{-1}\| \min\{\|P\|^2, \|P\| \, \|Q\|, \|Q\|^2\} \big) \Big).$$

Instead of setting $P = 0$ above, we can get a well-known sharper pointwise bound on $A(Q)$ directly from (3.7). The proof is written in a way suitable for our generalization.

Lemma 5.6. Let $R > 0$ and $Q \ge 0$. Then
$$0 \le A(Q) \le Q, \qquad (5.7)$$
and
$$\|A(Q)\| \le \|Q\|. \qquad (5.8)$$

Proof. By the definition of the operator $A$ in (3.7),
$$A(Q) = Q - Q H^* (H Q H^* + R)^{-1} H Q \le Q,$$
because $Q H^* (H Q H^* + R)^{-1} H Q \ge 0$. It remains to show that $A(Q) \ge 0$. Note that for any $A$, and since $R > 0$, $A^* A + R \ge A^* A$, so
$$(A^* A + R)^{-1/2} A^* A (A^* A + R)^{-1/2} \le I.$$
Since, for $B \ge 0$, $B \le I$ is the same as the spectral radius $\rho(B) \le 1$, and, for any $C$ and $D$, $\rho(CD) = \rho(DC)$, we have finally
$$A (A^* A + R)^{-1} A^* \le I. \qquad (5.9)$$
Using (5.9) with $A = Q^{1/2} H^*$ gives
$$Q^{1/2} H^* (H Q H^* + R)^{-1} H Q^{1/2} \le I,$$
and, consequently,
$$Q H^* (H Q H^* + R)^{-1} H Q \le Q,$$
which gives $A(Q) \ge 0$. Since $\|B\| = \rho(B)$ for any symmetric $B$, (5.8) follows from (5.7).
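Lemma 5.6 can also be spot-checked numerically (an illustrative sketch with random data): the eigenvalues of $A(Q)$ and of $Q - A(Q)$ are nonnegative up to roundoff.

```python
# Spot check of Lemma 5.6: 0 <= A(Q) <= Q for a random SPD covariance Q.
import numpy as np

rng = np.random.default_rng(5)
n, m = 5, 3
H = rng.standard_normal((m, n))
R = np.eye(m)
Z = rng.standard_normal((n, n))
Q = Z @ Z.T
AQ = Q - Q @ H.T @ np.linalg.inv(H @ Q @ H.T + R) @ H @ Q   # A(Q), (3.7)
print(np.linalg.eigvalsh(AQ).min() >= -1e-10)      # A(Q) >= 0
print(np.linalg.eigvalsh(Q - AQ).min() >= -1e-10)  # A(Q) <= Q
```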

13 and, consequently QH (HQH + R) 1 HQ Q, which gives A(Q) 0. Since B = ρ (B) for any symmetric B, (5.8) follows from (5.7). The pointwise continuity of operator B follows from Lemma 5.1 as well. Lemma 5.7. Let R > 0, and Q, P 0. Then, B(X, Q) B(Y, P ) X Y ( 1 + H 2 R 1 Q ) + (5.10) Q P H R 1 (1 + P H 2 )( d HY ). Proof. Estimating the difference and using the pointwise bound on the Kalman gain (5.5) and the pointwise continuity of the Kalman gain from Lemma 5.1, K(Q)(d HX) K(P )(d HY ) Using the triangle inequality, we have which is (5.10). = K(Q)(HY HX) + (K(Q) K(P ))(d HY ) K(Q) H X Y + K(Q) K(P ) d HY Q H R 1 H X Y + Q P H R 1 (1 + P H 2 )( d + HY ). B(X, Q) B(Y, P ) = X + K(Q)(d HX) (Y + K(P )(d HY )) X Y + Q H R 1 H X Y Q P H R 1 (1 + P H 2 )( d + HY, 5.2. L p bounds. Using the pointwise estimate on the continuity of A, we can now estimate continuity in L p. We will need the result only with one of the arguments random and the other one constant (i.e., non random), which simplifies its statement and proof. This is because the application of these estimates will be the ensemble sample covariance, which is random, and the state covariance, which is constant. Lemma 5.8. Let Q be a random operator such that Q 0 almost surely (a.s.), let P 0 be constant, and let R > 0. Then, for all 1 p <, where A(Q) A(P ) p (1 + c P ) Q P p + c(1 + c P ) Q 2p Q P 2p, (5.11) c = H 2 R 1. (5.12) Proof. From Lemma 5.4, the triangle inequality, Lemma 2.1, and recognizing that P is constant, A(Q) A(P ) p Q P (1 + c Q + c P + c 2 Q P ) p Q P p + c Q Q P p + c P Q P p + c 2 Q P Q P p Q P p + c Q 2p Q P 2p + c P Q P p + c 2 P Q 2p Q P 2p = (1 + c P ) Q P p + c(1 + c P ) Q 2p Q P 2p. 13

Instead of setting $P = 0$ above, we get a better bound on $\|A(Q)\|_p$ directly.

Lemma 5.9. Let $Q$ be a random operator such that $Q \ge 0$ a.s., and let $R > 0$. Then, for all $1 \le p < \infty$,
$$\|A(Q)\|_p \le \|Q\|_p. \qquad (5.13)$$

Proof. From (5.8), it follows that $E(\|A(Q)\|^p) \le E(\|Q\|^p)$.

Using the pointwise estimate on the continuity of the operator $B$, we estimate its continuity in $L^p$. Again, we keep the arguments of one of the terms constant, which is all we will need, resulting in a simplification.

Lemma 5.10. Let $X$ be a random element and $Q$ a random operator, and let $Y$ and $P$ be constant. Let $Q \ge 0$ a.s., $P \ge 0$, and $R > 0$. Then,
$$\|B(X, Q) - B(Y, P)\|_p \le \|X - Y\|_p + \|H\|^2 \, \|R^{-1}\| \, \|Q\|_{2p} \|X - Y\|_{2p} + \|Q - P\|_p \, \|H\| \, \|R^{-1}\| \big( 1 + \|P\| \, \|H\|^2 \, \|R^{-1}\| \big) \big( \|d\| + \|H\| \, \|Y\| \big).$$

Proof. Applying the $L^p$ norm to the pointwise bound (5.10), using the triangle inequality, recognizing that $P$ and $Y$ are constant, and applying the Cauchy-Schwarz inequality (Lemma 2.1) to the rest, we get
$$\|B(X, Q) - B(Y, P)\|_p \le \big\| \|X - Y\| \big( 1 + \|H\|^2 \|R^{-1}\| \|Q\| \big) + \|Q - P\| \|H\| \|R^{-1}\| \big( 1 + \|P\| \|H\|^2 \|R^{-1}\| \big) \big( \|d\| + \|H\| \|Y\| \big) \big\|_p$$
$$\le \|X - Y\|_p + \|H\|^2 \|R^{-1}\| \big\| \|Q\| \, \|X - Y\| \big\|_p + \|Q - P\|_p \|H\| \|R^{-1}\| \big( 1 + \|P\| \|H\|^2 \|R^{-1}\| \big) \big( \|d\| + \|H\| \|Y\| \big)$$
$$\le \|X - Y\|_p + \|H\|^2 \|R^{-1}\| \|Q\|_{2p} \|X - Y\|_{2p} + \|Q - P\|_p \|H\| \|R^{-1}\| \big( 1 + \|P\| \|H\|^2 \|R^{-1}\| \big) \big( \|d\| + \|H\| \|Y\| \big).$$

6. Convergence of the unbiased square root filter. By the law of large numbers, the sample mean and the sample covariance of the initial ensemble converge to the mean and covariance of the background distribution. Every analysis step is a continuous mapping of the mean and the covariance, and the convergence in the large ensemble limit follows. The theorem below quantifies this argument.

Theorem 6.1. Assume that the state space $V$ is a finite-dimensional or separable Hilbert space, the initial state, denoted $X^{(0)}$, has a distribution on $V$ such that all moments exist, $E(\|X^{(0)}\|^p) < \infty$ for every $1 \le p < \infty$, the initial ensemble $X_N^{(0)}$ is an i.i.d. sample from this distribution, and the model is linear, $M^{(k)}(X) = M^{(k)} X + b^{(k)}$ for every time $k$. Then, for all $1 \le p < \infty$ and all $k$, the ensemble mean $\bar X_N^{(k),a}$ and covariance $Q_N^{(k),a}$ from the unbiased square root ensemble filter (Algorithm 3.2) converge to the mean $X^{(k),a}$ and covariance $Q^{(k),a}$ from the Kalman filter (Algorithm 3.1), respectively, in $L^p$ as $N \to \infty$, with the convergence rate $1/\sqrt N$. Specifically,
$$\|\bar X_N^{(k),a} - X^{(k),a}\|_p \le \frac{a_p^{(k)}}{\sqrt N}, \qquad \|Q_N^{(k),a} - Q^{(k),a}\|_p \le \frac{b_p^{(k)}}{\sqrt N}, \qquad (6.1)$$
for all $N = 1, 2, \dots$, where the constants $a_p^{(k)}$ and $b_p^{(k)}$ depend only on $p$, $k$, on the norms $\|M^{(l)}\|$, $\|H^{(l)}\|$, $\|(R^{(l)})^{-1}\|$, $\|d^{(l)}\|$, $l \le k$, and on $X^{(0)}$ and $Q^{(0),a}$.

Proof. Base step. For $k = 0$ and $p \ge 2$, (6.1) follows immediately from the $L^p$ laws of large numbers (4.4) and (4.7). For $1 \le p < 2$, it is sufficient to note that the $L^p$ norm is dominated by the $L^2$ norm.

We will use $\mathrm{const}(p, k)$ to denote a generic constant which depends on $p$, $k$, on the norms of the various constant (non-random) inputs and operators in the problem, and on the background distribution, but not on the dimension of the state space. Recall that $X^{(0)} = X^{(0),a}$, $\bar X_N^{(0)} = \bar X_N^{(0),a}$, $Q^{(0)} = Q^{(0),a}$, and $Q_N^{(0)} = Q_N^{(0),a}$.

A priori bounds. From (5.8), (3.10), and (3.21), it follows that
$$\|Q^{(k),a}\| \le \|Q^{(k),f}\| = \|M^{(k)} Q^{(k-1),a} M^{(k)*}\| \le \|M^{(k)}\|^2 \|Q^{(k-1),a}\|, \qquad (6.2)$$
$$\|Q_N^{(k),a}\|_p \le \|Q_N^{(k),f}\|_p = \|M^{(k)} Q_N^{(k-1),a} M^{(k)*}\|_p \le \|M^{(k)}\|^2 \|Q_N^{(k-1),a}\|_p. \qquad (6.3)$$
Because $Q^{(0),a}$ is a constant and the sequence $\|Q_N^{(0),a}\|_p$, $N = 1, 2, \dots$, is bounded independently of $N$ due to (6.1) with $k = 0$, we have for all $1 \le p < \infty$,
$$\|Q^{(k),f}\| \le \mathrm{const}(k), \qquad \|Q_N^{(k),f}\|_p \le \mathrm{const}(p, k). \qquad (6.4)$$
To estimate the norm of $\bar X_N^{(k),a} = B(\bar X_N^{(k),f}, Q_N^{(k),f})$, we use
$$\|\bar X_N^{(k),f}\| = \|M^{(k)} \bar X_N^{(k-1),a} + b^{(k)}\| \le \|M^{(k)}\| \, \|\bar X_N^{(k-1),a}\| + \|b^{(k)}\|, \qquad (6.5)$$
and the estimate on the operator $B$ from (5.6), which gives
$$\|\bar X_N^{(k),a}\| = \|B(\bar X_N^{(k),f}, Q_N^{(k),f})\| \le \|\bar X_N^{(k),f}\| + \|Q_N^{(k),f}\| \, \|H^{(k)}\| \, \|(R^{(k)})^{-1}\| \big( \|d^{(k)}\| + \|H^{(k)}\| \, \|\bar X_N^{(k),f}\| \big). \qquad (6.6)$$
Combining (6.4)-(6.6), we get
$$\|\bar X_N^{(k),a}\|_p \le \mathrm{const}(p, k). \qquad (6.7)$$

Induction step. Let $k \ge 1$ and assume that (6.1) holds with $k - 1$ in place of $k$, for all $N > 0$ and all $1 \le p < \infty$. Then, comparing (3.10) and (3.21), we have
$$\|Q_N^{(k),f} - Q^{(k),f}\|_p = \|M^{(k)} Q_N^{(k-1),a} M^{(k)*} - M^{(k)} Q^{(k-1),a} M^{(k)*}\|_p \le \|M^{(k)}\|^2 \|Q_N^{(k-1),a} - Q^{(k-1),a}\|_p, \qquad (6.8)$$
and from Lemma 5.8,
$$\|Q_N^{(k),a} - Q^{(k),a}\|_p = \|A(Q_N^{(k),f}) - A(Q^{(k),f})\|_p \le \big( 1 + c^{(k)} \|Q^{(k),f}\| \big) \|Q_N^{(k),f} - Q^{(k),f}\|_p + c^{(k)} \big( 1 + c^{(k)} \|Q^{(k),f}\| \big) \|Q_N^{(k),f}\|_{2p} \|Q_N^{(k),f} - Q^{(k),f}\|_{2p}, \qquad (6.9)$$
where $c^{(k)} = \|H^{(k)}\|^2 \|(R^{(k)})^{-1}\|$. Combining (6.2)-(6.9), we obtain
$$\|Q_N^{(k),a} - Q^{(k),a}\|_p \le \frac{\mathrm{const}(p, k)}{\sqrt N}.$$
For the convergence of the ensemble mean to the Kalman filter mean, we have
$$\|\bar X_N^{(k),f} - X^{(k),f}\|_p = \|M^{(k)} (\bar X_N^{(k-1),a} - X^{(k-1),a})\|_p \le \|M^{(k)}\| \, \|\bar X_N^{(k-1),a} - X^{(k-1),a}\|_p, \qquad (6.10)$$
from (3.9) and (3.20). By Lemma 5.10,
$$\|\bar X_N^{(k),a} - X^{(k),a}\|_p = \|B(\bar X_N^{(k),f}, Q_N^{(k),f}) - B(X^{(k),f}, Q^{(k),f})\|_p$$
$$\le \|\bar X_N^{(k),f} - X^{(k),f}\|_p + \|H\|^2 \|R^{-1}\| \, \|Q_N^{(k),f}\|_{2p} \|\bar X_N^{(k),f} - X^{(k),f}\|_{2p} + \|Q_N^{(k),f} - Q^{(k),f}\|_p \, \|H\| \, \|R^{-1}\| \big( 1 + \|Q^{(k),f}\| \, \|H\|^2 \|R^{-1}\| \big) \big( \|d\| + \|H\| \, \|X^{(k),f}\| \big),$$
which, together with (6.4) and (6.8), yields
$$\|\bar X_N^{(k),a} - X^{(k),a}\|_p \le \frac{\mathrm{const}(p, k)}{\sqrt N}.$$
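To close, here is an end-to-end sketch of the convergence claimed in Theorem 6.1 on a small linear model (our illustration, reusing the hypothetical helpers initial_ensemble and etkf_analysis from the sketches in Section 3): the error of the ensemble analysis mean against the Kalman filter analysis mean, scaled by $\sqrt N$, remains bounded as $N$ grows.

```python
# Numerical illustration of Theorem 6.1: the square root ensemble filter
# tracks the Kalman filter with error O(1/sqrt(N)); single realization,
# so the scaled error fluctuates but stays O(1).
import numpy as np

rng = np.random.default_rng(6)
n, m, steps = 4, 2, 5
Mmat = 0.9 * np.eye(n)                 # linear model M^(k), constant in k
b = 0.1 * np.ones(n)
H = rng.standard_normal((m, n))
R = np.eye(m)
x0, L = np.zeros(n), np.eye(n)         # background mean and factor, B = I
ds = [rng.standard_normal(m) for _ in range(steps)]   # fixed data sequence

for N in (50, 200, 800, 3200):
    x, Q = x0.copy(), L @ L.T          # Kalman filter statistics (Alg. 3.1)
    ens = initial_ensemble(x0, L, N, rng)   # ensemble filter (Alg. 3.2)
    for d in ds:
        x, Q = Mmat @ x + b, Mmat @ Q @ Mmat.T          # (3.9)-(3.10)
        K = Q @ H.T @ np.linalg.inv(H @ Q @ H.T + R)
        x, Q = x + K @ (d - H @ x), Q - K @ H @ Q       # (3.11)-(3.12)
        ens = Mmat @ ens + b[:, None]                   # (3.16)
        ens = etkf_analysis(ens, H, R, d)               # (3.17)-(3.18)
    err = np.linalg.norm(ens.mean(axis=1) - x)
    print(N, err, err * np.sqrt(N))
```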

Acknowledgement. This research was partially supported by the U.S. National Science Foundation under a DMS grant.

REFERENCES

[1] Brian D. O. Anderson and John B. Moore, Optimal Filtering, Prentice-Hall, Englewood Cliffs, N.J., 1979.
[2] Jeffrey L. Anderson, An ensemble adjustment Kalman filter for data assimilation, Monthly Weather Review, 129 (2001), pp. 2884-2903.
[3] Aloisio Araujo and Evarist Giné, The Central Limit Theorem for Real and Banach Valued Random Variables, Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons, New York-Chichester-Brisbane, 1980.
[4] Craig H. Bishop, Brian J. Etherton, and Sharanya J. Majumdar, Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects, Monthly Weather Review, 129 (2001), pp. 420-436.
[5] Mark Buehner, Ensemble-derived stationary and flow-dependent background-error covariances: Evaluation in a quasi-operational NWP setting, Quarterly Journal of the Royal Meteorological Society, 131 (2005), pp. 1013-1043.
[6] Gerrit Burgers, Peter Jan van Leeuwen, and Geir Evensen, Analysis scheme in the ensemble Kalman filter, Monthly Weather Review, 126 (1998), pp. 1719-1724.
[7] Yuan Shih Chow and Henry Teicher, Probability Theory. Independence, Interchangeability, Martingales, Springer-Verlag, New York, second ed., 1988.
[8] Giuseppe Da Prato, An Introduction to Infinite-Dimensional Analysis, Springer-Verlag, Berlin, 2006.
[9] Alex Deckmyn and Loïk Berre, A wavelet approach to representing background error covariances in a limited-area model, Monthly Weather Review, 133 (2005), pp. 1279-1294.
[10] Geir Evensen, Data Assimilation: The Ensemble Kalman Filter, Springer, 2nd ed., 2009.
[11] M. Fisher and P. Courtier, Estimating the covariance matrices of analysis and forecast error in variational data assimilation, ECMWF Research Department Tech. Memo 220, 1995.
[12] Reinhard Furrer and Thomas Bengtsson, Estimation of high-dimensional prior and posterior covariance matrices in Kalman filter variants, J. Multivariate Anal., 98 (2007), pp. 227-255.
[13] B. R. Hunt, E. J. Kostelich, and I. Szunyogh, Efficient data assimilation for spatiotemporal chaos: a local ensemble transform Kalman filter, Physica D: Nonlinear Phenomena, 230 (2007), pp. 112-126.
[14] Andrew H. Jazwinski, Stochastic Processes and Filtering Theory, Academic Press, New York, 1970.
[15] Eugenia Kalnay, Atmospheric Modeling, Data Assimilation and Predictability, Cambridge University Press, 2003.
[16] D. T. B. Kelly, K. J. H. Law, and A. M. Stuart, Well-posedness and accuracy of the ensemble Kalman filter in discrete and continuous time, arXiv:1310.3167, 2014.
[17] Hui Hsiung Kuo, Gaussian Measures in Banach Spaces, Lecture Notes in Mathematics, Vol. 463, Springer-Verlag, Berlin, 1975.
[18] Peter D. Lax, Functional Analysis, Pure and Applied Mathematics (New York), Wiley-Interscience [John Wiley & Sons], New York, 2002.
[19] François Le Gland, Valérie Monbet, and Vu-Duc Tran, Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Dan Crisan and Boris Rozovskii, eds., Oxford University Press, 2011.

[20] David M. Livings, Sarah L. Dance, and Nancy K. Nichols, Unbiased ensemble square root filters, Phys. D, 237 (2008), pp. 1021-1028.
[21] Jan Mandel, Loren Cobb, and Jonathan D. Beezley, On the convergence of the ensemble Kalman filter, Applications of Mathematics, 56 (2011), pp. 533-541.
[22] Józef Marcinkiewicz, Collected Papers, edited by Antoni Zygmund, with the collaboration of Stanisław Łojasiewicz, Julian Musielak, Kazimierz Urbanik, and Antoni Wiweger, Instytut Matematyczny Polskiej Akademii Nauk, Państwowe Wydawnictwo Naukowe, Warsaw, 1964.
[23] J. Marcinkiewicz and A. Zygmund, Sur les fonctions indépendantes, Fund. Math., 28 (1937). Reprinted in [22].
[24] I. Mirouze and A. T. Weaver, Representation of correlation functions in variational assimilation using an implicit diffusion operator, Quarterly Journal of the Royal Meteorological Society, 136 (2010), pp. 1421-1443.
[25] Olivier Pannekoucke, Loïk Berre, and Gérald Desroziers, Filtering properties of wavelets for local background-error correlations, Quarterly Journal of the Royal Meteorological Society, 133 (2007), pp. 1687-1700.
[26] Dan Simon, Optimal State Estimation: Kalman, H-infinity, and Nonlinear Approaches, John Wiley and Sons, 2006.
[27] Michael K. Tippett, Jeffrey L. Anderson, Craig H. Bishop, Thomas M. Hamill, and Jeffrey S. Whitaker, Ensemble square root filters, Monthly Weather Review, 131 (2003), pp. 1485-1490.
[28] Xuguang Wang, Craig H. Bishop, and Simon J. Julier, Which is better, an ensemble of positive-negative pairs or a centered spherical simplex ensemble?, Monthly Weather Review, 132 (2004), pp. 1590-1605.
[29] J. S. Whitaker and T. M. Hamill, Ensemble data assimilation without perturbed observations, Monthly Weather Review, 130 (2002), pp. 1913-1924.
[30] Wojbor A. Woyczyński, On Marcinkiewicz-Zygmund laws of large numbers in Banach spaces and related rates of convergence, Probab. Math. Statist., 1 (1980), pp. 117-131 (1981).


Forecasting and data assimilation

Forecasting and data assimilation Supported by the National Science Foundation DMS Forecasting and data assimilation Outline Numerical models Kalman Filter Ensembles Douglas Nychka, Thomas Bengtsson, Chris Snyder Geophysical Statistics

More information

Ergodicity in data assimilation methods

Ergodicity in data assimilation methods Ergodicity in data assimilation methods David Kelly Andy Majda Xin Tong Courant Institute New York University New York NY www.dtbkelly.com April 15, 2016 ETH Zurich David Kelly (CIMS) Data assimilation

More information

Quantifying observation error correlations in remotely sensed data

Quantifying observation error correlations in remotely sensed data Quantifying observation error correlations in remotely sensed data Conference or Workshop Item Published Version Presentation slides Stewart, L., Cameron, J., Dance, S. L., English, S., Eyre, J. and Nichols,

More information

On the concentration of eigenvalues of random symmetric matrices

On the concentration of eigenvalues of random symmetric matrices On the concentration of eigenvalues of random symmetric matrices Noga Alon Michael Krivelevich Van H. Vu April 23, 2012 Abstract It is shown that for every 1 s n, the probability that the s-th largest

More information

Sequential Monte Carlo Samplers for Applications in High Dimensions

Sequential Monte Carlo Samplers for Applications in High Dimensions Sequential Monte Carlo Samplers for Applications in High Dimensions Alexandros Beskos National University of Singapore KAUST, 26th February 2014 Joint work with: Dan Crisan, Ajay Jasra, Nik Kantas, Alex

More information

A Moment Matching Particle Filter for Nonlinear Non-Gaussian. Data Assimilation. and Peter Bickel

A Moment Matching Particle Filter for Nonlinear Non-Gaussian. Data Assimilation. and Peter Bickel Generated using version 3.0 of the official AMS L A TEX template A Moment Matching Particle Filter for Nonlinear Non-Gaussian Data Assimilation Jing Lei and Peter Bickel Department of Statistics, University

More information

Lecture Note 1: Probability Theory and Statistics

Lecture Note 1: Probability Theory and Statistics Univ. of Michigan - NAME 568/EECS 568/ROB 530 Winter 2018 Lecture Note 1: Probability Theory and Statistics Lecturer: Maani Ghaffari Jadidi Date: April 6, 2018 For this and all future notes, if you would

More information

Implications of the Form of the Ensemble Transformation in the Ensemble Square Root Filters

Implications of the Form of the Ensemble Transformation in the Ensemble Square Root Filters 1042 M O N T H L Y W E A T H E R R E V I E W VOLUME 136 Implications of the Form of the Ensemble Transformation in the Ensemble Square Root Filters PAVEL SAKOV AND PETER R. OKE CSIRO Marine and Atmospheric

More information

Estimating observation impact without adjoint model in an ensemble Kalman filter

Estimating observation impact without adjoint model in an ensemble Kalman filter QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY Q. J. R. Meteorol. Soc. 134: 1327 1335 (28) Published online in Wiley InterScience (www.interscience.wiley.com) DOI: 1.12/qj.28 Estimating observation

More information

Performance of ensemble Kalman filters with small ensembles

Performance of ensemble Kalman filters with small ensembles Performance of ensemble Kalman filters with small ensembles Xin T Tong Joint work with Andrew J. Majda National University of Singapore Sunday 28 th May, 2017 X.Tong EnKF performance 1 / 31 Content Ensemble

More information

The Matrix Reloaded: Computations for large spatial data sets

The Matrix Reloaded: Computations for large spatial data sets The Matrix Reloaded: Computations for large spatial data sets Doug Nychka National Center for Atmospheric Research The spatial model Solving linear systems Matrix multiplication Creating sparsity Sparsity,

More information

Large Sample Properties of Estimators in the Classical Linear Regression Model

Large Sample Properties of Estimators in the Classical Linear Regression Model Large Sample Properties of Estimators in the Classical Linear Regression Model 7 October 004 A. Statement of the classical linear regression model The classical linear regression model can be written in

More information

ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES. 1. Introduction

ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES. 1. Introduction Acta Math. Univ. Comenianae Vol. LXV, 1(1996), pp. 129 139 129 ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES V. WITKOVSKÝ Abstract. Estimation of the autoregressive

More information

The Matrix Reloaded: Computations for large spatial data sets

The Matrix Reloaded: Computations for large spatial data sets The Matrix Reloaded: Computations for large spatial data sets The spatial model Solving linear systems Matrix multiplication Creating sparsity Doug Nychka National Center for Atmospheric Research Sparsity,

More information

Handling nonlinearity in Ensemble Kalman Filter: Experiments with the three-variable Lorenz model

Handling nonlinearity in Ensemble Kalman Filter: Experiments with the three-variable Lorenz model Handling nonlinearity in Ensemble Kalman Filter: Experiments with the three-variable Lorenz model Shu-Chih Yang 1*, Eugenia Kalnay, and Brian Hunt 1. Department of Atmospheric Sciences, National Central

More information

Cramér-Rao Bounds for Estimation of Linear System Noise Covariances

Cramér-Rao Bounds for Estimation of Linear System Noise Covariances Journal of Mechanical Engineering and Automation (): 6- DOI: 593/jjmea Cramér-Rao Bounds for Estimation of Linear System oise Covariances Peter Matiso * Vladimír Havlena Czech echnical University in Prague

More information

(Extended) Kalman Filter

(Extended) Kalman Filter (Extended) Kalman Filter Brian Hunt 7 June 2013 Goals of Data Assimilation (DA) Estimate the state of a system based on both current and all past observations of the system, using a model for the system

More information

SPECTRAL AND MORPHING ENSEMBLE KALMAN FILTERS AND APPLICATIONS

SPECTRAL AND MORPHING ENSEMBLE KALMAN FILTERS AND APPLICATIONS SPECTRAL AND MORPHING ENSEMBLE KALMAN FILTERS AND APPLICATIONS Jan Mandel, Jonathan D. Beezley, Loren Cobb, Ashok Krishnamurthy, University of Colorado Denver Adam K. Kochanski University of Utah Krystof

More information

Data Assimilation: Finding the Initial Conditions in Large Dynamical Systems. Eric Kostelich Data Mining Seminar, Feb. 6, 2006

Data Assimilation: Finding the Initial Conditions in Large Dynamical Systems. Eric Kostelich Data Mining Seminar, Feb. 6, 2006 Data Assimilation: Finding the Initial Conditions in Large Dynamical Systems Eric Kostelich Data Mining Seminar, Feb. 6, 2006 kostelich@asu.edu Co-Workers Istvan Szunyogh, Gyorgyi Gyarmati, Ed Ott, Brian

More information

Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter

Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter arxiv:physics/0511236v2 [physics.data-an] 29 Dec 2006 Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter Brian R. Hunt Institute for Physical Science and Technology

More information

Model Uncertainty Quantification for Data Assimilation in partially observed Lorenz 96

Model Uncertainty Quantification for Data Assimilation in partially observed Lorenz 96 Model Uncertainty Quantification for Data Assimilation in partially observed Lorenz 96 Sahani Pathiraja, Peter Jan Van Leeuwen Institut für Mathematik Universität Potsdam With thanks: Sebastian Reich,

More information

Ensemble Kalman Filter based snow data assimilation

Ensemble Kalman Filter based snow data assimilation Ensemble Kalman Filter based snow data assimilation (just some ideas) FMI, Sodankylä, 4 August 2011 Jelena Bojarova Sequential update problem Non-linear state space problem Tangent-linear state space problem

More information

ON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES

ON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES Volume 10 (2009), Issue 4, Article 91, 5 pp. ON THE HÖLDER CONTINUITY O MATRIX UNCTIONS OR NORMAL MATRICES THOMAS P. WIHLER MATHEMATICS INSTITUTE UNIVERSITY O BERN SIDLERSTRASSE 5, CH-3012 BERN SWITZERLAND.

More information

The Local Ensemble Transform Kalman Filter and its implementation on the NCEP global model at the University of Maryland

The Local Ensemble Transform Kalman Filter and its implementation on the NCEP global model at the University of Maryland The Local Ensemble Transform Kalman Filter and its implementation on the NCEP global model at the University of Maryland Istvan Szunyogh (*), Elizabeth A. Satterfield (*), José A. Aravéquia (**), Elana

More information

Abstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization relations.

Abstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization relations. HIROSHIMA S THEOREM AND MATRIX NORM INEQUALITIES MINGHUA LIN AND HENRY WOLKOWICZ Abstract. In this article, several matrix norm inequalities are proved by making use of the Hiroshima 2003 result on majorization

More information

A new Hierarchical Bayes approach to ensemble-variational data assimilation

A new Hierarchical Bayes approach to ensemble-variational data assimilation A new Hierarchical Bayes approach to ensemble-variational data assimilation Michael Tsyrulnikov and Alexander Rakitko HydroMetCenter of Russia College Park, 20 Oct 2014 Michael Tsyrulnikov and Alexander

More information

Quantifying observation error correlations in remotely sensed data

Quantifying observation error correlations in remotely sensed data Quantifying observation error correlations in remotely sensed data Conference or Workshop Item Published Version Presentation slides Stewart, L., Cameron, J., Dance, S. L., English, S., Eyre, J. and Nichols,

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Comparison of Ensemble Kalman Filters Under Non-Gaussianity. and Peter Bickel. Chris Snyder

Comparison of Ensemble Kalman Filters Under Non-Gaussianity. and Peter Bickel. Chris Snyder Generated using version 3.0 of the official AMS L A TEX template Comparison of Ensemble Kalman Filters Under Non-Gaussianity Jing Lei and Peter Bickel Department of Statistics, University of California,

More information

arxiv: v2 [math.pr] 27 Oct 2015

arxiv: v2 [math.pr] 27 Oct 2015 A brief note on the Karhunen-Loève expansion Alen Alexanderian arxiv:1509.07526v2 [math.pr] 27 Oct 2015 October 28, 2015 Abstract We provide a detailed derivation of the Karhunen Loève expansion of a stochastic

More information

Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter

Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter arxiv:physics/0511236 v1 28 Nov 2005 Brian R. Hunt Institute for Physical Science and Technology and Department

More information

Smoothers: Types and Benchmarks

Smoothers: Types and Benchmarks Smoothers: Types and Benchmarks Patrick N. Raanes Oxford University, NERSC 8th International EnKF Workshop May 27, 2013 Chris Farmer, Irene Moroz Laurent Bertino NERSC Geir Evensen Abstract Talk builds

More information

Lagrangian Data Assimilation and Manifold Detection for a Point-Vortex Model. David Darmon, AMSC Kayo Ide, AOSC, IPST, CSCAMM, ESSIC

Lagrangian Data Assimilation and Manifold Detection for a Point-Vortex Model. David Darmon, AMSC Kayo Ide, AOSC, IPST, CSCAMM, ESSIC Lagrangian Data Assimilation and Manifold Detection for a Point-Vortex Model David Darmon, AMSC Kayo Ide, AOSC, IPST, CSCAMM, ESSIC Background Data Assimilation Iterative process Forecast Analysis Background

More information

A Comparison of Error Subspace Kalman Filters

A Comparison of Error Subspace Kalman Filters Tellus 000, 000 000 (0000) Printed 4 February 2005 (Tellus LATEX style file v2.2) A Comparison of Error Subspace Kalman Filters By LARS NERGER, WOLFGANG HILLER and JENS SCHRÖTER Alfred Wegener Institute

More information

The Local Ensemble Transform Kalman Filter and its implementation on the NCEP global model at the University of Maryland

The Local Ensemble Transform Kalman Filter and its implementation on the NCEP global model at the University of Maryland The Local Ensemble Transform Kalman Filter and its implementation on the NCEP global model at the University of Maryland Istvan Szunyogh (*), Elizabeth A. Satterfield (*), José A. Aravéquia (**), Elana

More information

A mechanism for catastrophic filter divergence in data assimilation for sparse observation networks

A mechanism for catastrophic filter divergence in data assimilation for sparse observation networks Manuscript prepared for Nonlin. Processes Geophys. with version 5. of the L A TEX class copernicus.cls. Date: 5 August 23 A mechanism for catastrophic filter divergence in data assimilation for sparse

More information

Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence)

Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence) Math 413/513 Chapter 6 (from Friedberg, Insel, & Spence) David Glickenstein December 7, 2015 1 Inner product spaces In this chapter, we will only consider the elds R and C. De nition 1 Let V be a vector

More information