Adaptive Localization: Proposals for a high-resolution multivariate system
Ross Bannister, HRAA, December 2008, January 2009, Version 3


1. The implicit Schur product
2. The Bishop method for adaptive localization (ECO-RAP)
3. Element-by-element evaluation of (10)
4. Calculation of the localized covariances
5. Notes for evaluating (22) for a structure function
6. Limiting cases
7. Suggested algorithm with adaptive localization
8. Suggested algorithm with static localization only
9. Adaptive localization with a major simplification
References

1. The implicit Schur product

The Schur product is used in ensemble-based data assimilation to remove long-range correlations:

  P_L^f = P_R^f ∘ Ω,    (1)

where P_L^f and P_R^f are the localized and raw forecast error covariance matrices respectively and Ω is the localization matrix. These matrices are of size 5n × 5n (where n is the total number of grid points for each of the five parameters, ψ, χ, p, θ and q [what about w?]) and so we don't have the ability to store them explicitly. In the ensemble Kalman filter, P_R^f is represented by its square root (i.e. the 5n × K matrix of ensemble perturbations, each divided by √(K - 1)). Assuming that Ω is also in its square-root (5n × L) form, then

  P_R^f = P_R^{f 1/2} P_R^{f T/2} = [1/(K - 1)] ∑_{k=1}^{K} x_k x_k^T,    (2)

  Ω = Ω^{1/2} Ω^{T/2} = [1/(L - 1)] ∑_{l=1}^{L} ω_l ω_l^T.    (3)

In the last line, the square root of Ω is also considered to be comprised of new effective ensemble members, ω_l, each divided by √(L - 1). For Ω to be a correlation matrix, each component of the ω_l must have a variance of unity. Substituting (2) and (3) into (1) and then writing the i,j-th element of P_L^f gives

  P_L^f = (P_R^{f 1/2} P_R^{f T/2}) ∘ (Ω^{1/2} Ω^{T/2}),    (4)

  (P_L^f)_{ij} = (P_R^{f 1/2} P_R^{f T/2})_{ij} (Ω^{1/2} Ω^{T/2})_{ij}    (5)

  (P_L^f)_{ij} = ∑_{p=1}^{K} (P_R^{f 1/2})_{ip} (P_R^{f 1/2})_{jp} ∑_{q=1}^{L} (Ω^{1/2})_{iq} (Ω^{1/2})_{jq}    (6)

       = ∑_{p=1}^{K} ∑_{q=1}^{L} [(P_R^{f 1/2})_{ip} (Ω^{1/2})_{iq}] [(P_R^{f 1/2})_{jp} (Ω^{1/2})_{jq}].    (7)

Equation (7) shows that the localized forecast error covariance matrix is effectively made up of approximately KL ensemble members instead of just K. The effective ensemble members that give rise to the localized covariances can be written as

  x̃_{pq} = x_p ∘ ω_q,    (8)

where x̃_{pq} is the effective ensemble member comprising the vector Schur product of raw ensemble member x_p (√(K - 1) times the pth column of P_R^{f 1/2}) with ω_q (√(L - 1) times the qth column of Ω^{1/2}).

2. The Bishop method for adaptive localization (ECO-RAP)

Bishop and Hodyss [1] proposed the following form for Ω^{1/2}:

  Ω^{1/2} = \overline{C_K^Q E Λ^{1/2}}.    (9)

Here C_K is a 5n × 5n correlation matrix calculated from the K ensemble members (see below), and E Λ^{1/2} is a 5n × L matrix. E performs an inverse Fourier transform per parameter, and Λ^{1/2} performs scale-dependent filtering. The Q superscript in (9) indicates an element-by-element raising of power (a Schur power), where Q is even. The overbar denotes a normalization so that Ω becomes a correlation matrix. This involves setting the sum of squares of each row of Ω^{1/2} to unity.

The localization gains its adaptive property through the C_K matrix. If it were not for C_K, (9) would be the square root of a static and homogeneous correlation matrix. The issues of this problem are the following.

1. Determination of (i) the set of spectral modes in the horizontal, (ii) the set of vertical modes in the vertical for each model quantity, (iii) an appropriate spectrum, Λ, and (iv) a choice of L, the number of modes retained after truncation.
2. Efficient determination and action of C_K^Q.

For reference, (9) has the following multivariate form (followed by a specification of the dimensions of each matrix):

  Ω^{1/2} = \overline{C_K^Q E Λ^{1/2}} = \overline{C_K^Q blockdiag(E_ψ Λ_ψ^{1/2}, E_χ Λ_χ^{1/2}, E_p Λ_p^{1/2}, E_θ Λ_θ^{1/2}, E_q Λ_q^{1/2})},    (10)

  [5n × L] = [5n × 5n] [5n × L].
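Before discussing the two parts of (10), here is a concrete (if tiny) illustration of (9) in code: an ensemble correlation matrix C_K is raised to an even Schur power, multiplied by a static E Λ^{1/2} made of low-wavenumber Fourier modes with a decaying spectrum, and normalized row-wise. This is a minimal single-parameter, 1-D sketch; the grid size, ensemble, spectrum and value of Q are illustrative assumptions, not values proposed for the real system.

```python
# Minimal 1-D, single-parameter sketch of the ECO-RAP square root (9):
# Omega^{1/2} = normalise( C_K^Q E Lambda^{1/2} ).
# All sizes, the ensemble and the spectrum are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, K, L, Q = 40, 24, 10, 2          # grid points, ensemble size, retained modes, Schur power

# Raw ensemble perturbations (zero mean) and the ensemble correlation matrix C_K
X = rng.standard_normal((n, K))
X -= X.mean(axis=1, keepdims=True)
P_R = X @ X.T / (K - 1)
sig = np.sqrt(np.diag(P_R))
C_K = P_R / np.outer(sig, sig)

# Static part E Lambda^{1/2}: lowest L (co)sine modes with a decaying spectrum
r = np.arange(n) / n
E = np.column_stack([np.cos(2 * np.pi * (q // 2 + 1) * r) if q % 2 == 0
                     else np.sin(2 * np.pi * (q // 2 + 1) * r) for q in range(L)])
Lam_half = np.diag(np.exp(-0.5 * (np.arange(L) / 3.0) ** 2))

# Schur power (element-wise), multiply, then normalise each row (the overbar in (9))
W = (C_K ** Q) @ E @ Lam_half                  # 5n x L in general; n x L here
Omega_half = W / np.linalg.norm(W, axis=1, keepdims=True)

Omega = Omega_half @ Omega_half.T              # implied localisation matrix
print(np.allclose(np.diag(Omega), 1.0))        # unit diagonal, as required of a correlation matrix
```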

The [5n × L] part of the right-hand side of (10) is the static localization. It imposes no multivariate localization modulation; it limits the univariate lengthscales of each variable. Localization associated with the multivariate part of the problem is handled by the adaptive matrix, C_K^Q.

3. Element-by-element evaluation of (10)

Equation (10) has a high operation count. In the HRTM there are n = 360 × 288 × 70 grid points, and with the five variables this means that there are 5n variables in total. Clearly special attention must be paid towards the efficiency of the problem and any approximations that can be made should be made. Let (C_K^Q E Λ^{1/2})_{ip,k} be column k and field position i for parameter p of C_K^Q E Λ^{1/2}:

  (C_K^Q E Λ^{1/2})_{ip,k} = ∑_{j,p'} (C_K^Q)_{ip,jp'} E_{p',j,k} (Λ_{p'}^{1/2})_{k,k},    (11)

where i, j go from 1 to n and p, p' run over each parameter (ψ, χ, p, θ, q). The matrix C_K (a correlation matrix found from the ensemble members) has the following form:

  C_K = Σ^{-1} P_R^f Σ^{-1}    (12)

     = [1/(K - 1)] ∑_{k=1}^{K} Σ^{-1} x_k x_k^T Σ^{-1},    (13)

where Σ is the diagonal standard deviation matrix. Element i, j between parameters p, p' is

  (C_K)_{ip,jp'} = [1/(K - 1)] ∑_{k=1}^{K} (x_k)_i^p (x_k)_j^{p'},    (14)

and

  (C_K^Q)_{ip,jp'} = [(C_K)_{ip,jp'}]^Q.    (15)

The normalization in (9) and (10) (i.e. the overbar) means that the localization matrix has to be calculated row-wise. Normalization gives the matrix Ω^{1/2}.

4. Calculation of the localized covariances

The localized covariance element ip, i'p' is, from (7),

  (P_L^f)_{ip,i'p'} = σ_i^p σ_{i'}^{p'} [1/((K - 1)(L - 1))] ∑_{k=1}^{K} ∑_{l=1}^{L} (x_k ∘ ω_l)_{ip} (x_k ∘ ω_l)_{i'p'}

       = σ_i^p σ_{i'}^{p'} [1/((K - 1)(L - 1))] ∑_{k=1}^{K} ∑_{l=1}^{L} (x_k)_{ip} (ω_l)_{ip} (x_k)_{i'p'} (ω_l)_{i'p'}.    (16)

(x_k)_{ip} and (x_k)_{i'p'} are readily available; (ω_l)_{ip} and (ω_l)_{i'p'} are not. The relationship between the columns of Ω^{1/2} and the ω_l is

  (ω_l)_{ip} = √(L - 1) (Ω^{1/2})_{ip,l},    (17)

where Ω^{1/2} is to be written in terms of its components (9). The overbar on (9) can be dealt with by a factor μ_{ip}, which normalizes:

  (Ω^{1/2})_{ip,l} = μ_{ip} (C_K^Q E Λ^{1/2})_{ip,l},    (18)

where

  μ_{ip} = [ ∑_l (C_K^Q E Λ^{1/2})_{ip,l}^2 ]^{-1/2}.    (19)
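The relationship between the effective members ω_l and the implied Schur product can be checked numerically. The sketch below (single parameter, 1-D, with an arbitrary valid Ω^{1/2} standing in for (18)) confirms that the double sum (16) over k and l reproduces the explicit element-wise product (P_R^f ∘ Ω)_{ij}; the x_k inside the sum are taken as perturbations normalized by their standard deviations, consistent with the σ factors in front of (16). All sizes are illustrative assumptions.

```python
# Check of (16)-(18) on a tiny 1-D, single-parameter example: a localised covariance
# element computed from effective members omega_l equals the Schur product (P_R o Omega).
# Ensemble, grid size and the stand-in Omega^{1/2} are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, K, L = 30, 24, 8
X = rng.standard_normal((n, K)); X -= X.mean(axis=1, keepdims=True)
P_R = X @ X.T / (K - 1)
sig = np.sqrt(np.diag(P_R))
Xn = X / sig[:, None]                          # unit-variance perturbations (divided by sigma)

# Any valid square root of a correlation matrix will do for this check; here a
# row-normalised random n x L factor stands in for (18)-(19).
W = rng.standard_normal((n, L))
Omega_half = W / np.linalg.norm(W, axis=1, keepdims=True)   # mu_ip applied row-wise
Omega = Omega_half @ Omega_half.T

omega = np.sqrt(L - 1) * Omega_half            # effective localisation members, eq (17)

i, j = 3, 17                                   # the element ip, i'p' to test
lhs = sig[i] * sig[j] / ((K - 1) * (L - 1)) * np.sum(
    np.outer(Xn[i] * Xn[j], omega[i] * omega[j]))          # eq (16): double sum over k and l
rhs = P_R[i, j] * Omega[i, j]                  # explicit Schur product, eq (1)
print(np.isclose(lhs, rhs))                    # True
```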

Combining (17), (18) and (11) gives

  (ω_l)_{ip} = √(L - 1) μ_{ip} ∑_{j,p''} (C_K^Q)_{ip,jp''} E_{p'',j,l} (Λ_{p''}^{1/2})_{l,l}.    (20)

Substituting (20) into (16) gives an expression for the localized covariances in terms of quantities that are known:

  (P_L^f)_{ip,i'p'} = [1/(K - 1)] ∑_{k=1}^{K} (x_k)_{ip} (x_k)_{i'p'} ∑_l ( μ_{ip} ∑_{j,p''} (C_K^Q)_{ip,jp''} E_{p'',j,l} (Λ_{p''}^{1/2})_{l,l} ) ( μ_{i'p'} ∑_{j,p''} (C_K^Q)_{i'p',jp''} E_{p'',j,l} (Λ_{p''}^{1/2})_{l,l} ).    (21)

This summation has to be arranged so that it can be evaluated in the most efficient way, allowing for evaluation of the μ coefficients:

  (P_L^f)_{ip,i'p'} = [1/(K - 1)] ( ∑_{k=1}^{K} (x_k)_{ip} (x_k)_{i'p'} ) μ_{ip} μ_{i'p'} ∑_l ( ∑_{j,p''} (C_K^Q)_{ip,jp''} E_{p'',j,l} (Λ_{p''}^{1/2})_{l,l} ) ( ∑_{j,p''} (C_K^Q)_{i'p',jp''} E_{p'',j,l} (Λ_{p''}^{1/2})_{l,l} ).    (22)

From (19) and (11),

  μ_{ip} = [ ∑_l ( ∑_{j,p''} (C_K^Q)_{ip,jp''} E_{p'',j,l} (Λ_{p''}^{1/2})_{l,l} )^2 ]^{-1/2}.    (23)

In Sec. 9 we consider a major simplification of these equations that is considered to make way for their efficient evaluation for large systems. For now, though, we consider the exact form of the equations.

5. Notes for evaluating (22) for a structure function

For a structure function, i' and p' will both be fixed. The j summations that appear in the above may be evaluated on a reduced-resolution grid (e.g. every 10 points). The summations ∑_{j,p''} (C_K^Q)_{ip,jp''} E_{p'',j,l} (Λ_{p''}^{1/2})_{l,l} appear in (22) and in (23). Store these for all l for each i, p to allow (23) to be evaluated.

6. Limiting cases

Choosing Q → ∞ leads to C_K^Q → I (i.e. only elements that are identically unity will survive the Schur power). This is equivalent to the case with no adaptive localization. Choosing Q = 0 is non-physical: it will set each non-zero matrix element in C_K^Q to unity.
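A short numerical check of these two limits, using an arbitrary 3 × 3 correlation matrix (purely illustrative):

```python
# Numerical illustration of the limiting cases of the Schur power discussed above.
# The correlation matrix is an arbitrary illustrative example.
import numpy as np

C = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, -0.4],
              [0.2, -0.4, 1.0]])

print(np.round(C ** 100, 3))   # large even Q: off-diagonal |c| < 1 elements vanish, approaching I
print(C ** 0)                  # Q = 0: every (non-zero) element is set to unity -- non-physical
```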

Note a fundamental difference between the conventional and the Schur matrix products. For the conventional matrix product

  AB = C,    (24)

setting B to the identity matrix will leave A = C. For the Schur matrix product

  A ∘ B = C,    (25)

setting all elements of B to 1 will leave A = C.

Exploring the case when there is no adaptive localization, Q → ∞, (10) becomes

  Ω^{1/2} = \overline{E Λ^{1/2}} = \overline{blockdiag(E_ψ Λ_ψ^{1/2}, E_χ Λ_χ^{1/2}, E_p Λ_p^{1/2}, E_θ Λ_θ^{1/2}, E_q Λ_q^{1/2})},    (26)

which is block diagonal in parameter. Considering only one parameter, (26) gives the following square root:

  (Ω^{1/2})_{iq} = μ_i E_{iq} Λ_{qq}^{1/2},    (27)

where

  μ_i = [ ∑_l (E_{il} Λ_{ll}^{1/2})^2 ]^{-1/2}.    (28)

This means that Ω from (27) with (3) gives

  Ω_{ij} = ∑_q (Ω^{1/2})_{iq} (Ω^{T/2})_{qj}    (29)

       = μ_i μ_j ∑_q E_{iq} E_{jq}^* Λ_{qq},    (30)

where * means complex conjugate (we add this here because the illustration below makes use of a complex Fourier transform). In 1-D, (30) becomes

  Ω_{ij} = μ_i μ_j ∑_q exp[i k_q (r_i - r_j)] Λ_{qq},    (31)

where k_q is the qth wavenumber and r_i is the position of the ith grid point. If L covers the complete spectrum and Λ_{qq} is constant (broad localization in spectral space) then orthogonality gives

  Ω_{ij} = μ_i^2 δ_{ij},    (32)

meaning that this Schur product will be diagonal and will completely localize in real space. With a narrower localization in spectral space, i.e. Λ_{qq} → 0 with increasing q (qualitatively similar to a smaller L), the localization in real space will be broader.
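The 1-D behaviour described by (27)-(32) can be reproduced directly. The sketch below (illustrative grid size and spectra) builds Ω from complex Fourier modes: a flat spectrum over the full set of modes gives a near-delta (complete) localization, while a spectrum that decays with wavenumber gives a broad localization function.

```python
# 1-D illustration of (27)-(32): the static localisation implied by complex Fourier
# modes with (i) a flat spectrum over the full set of modes and (ii) a spectrum that
# decays with wavenumber. Grid size and spectra are illustrative assumptions.
import numpy as np

n = 64
r = np.arange(n)
k = 2.0 * np.pi * np.arange(n) / n                       # wavenumbers k_q
E = np.exp(1j * np.outer(r, k)) / np.sqrt(n)             # E_{iq}, complex Fourier modes

def omega_row(lam, i=n // 2):
    W = E * np.sqrt(lam)                                 # E Lambda^{1/2}
    mu = 1.0 / np.sqrt(np.sum(np.abs(W) ** 2, axis=1))   # eq (28)
    Wn = mu[:, None] * W                                 # normalised square root, eq (27)
    return np.real(Wn @ Wn.conj().T)[i]                  # eqs (29)-(31)

q = np.arange(n)
flat = omega_row(np.ones(n))                             # constant Lambda_qq, full spectrum
broad = omega_row(np.exp(-0.5 * (np.minimum(q, n - q) / 4.0) ** 2))   # Lambda_qq -> 0 with q

print(np.round(flat[n // 2 - 2: n // 2 + 3], 3))         # ~[0, 0, 1, 0, 0]: complete localisation
print(np.round(broad[n // 2 - 2: n // 2 + 3], 3))        # broad: neighbours remain correlated
```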

7. Suggested algorithm with adaptive localization

Costs for each loop are specified at the end of the loop, first for the case when no efficiencies are used and then for when efficiencies are used. Numerical terms in brackets are for the specific model domain (n = 360 × 288 × 70 grid points and 5 parameters), assuming for now that L = 50. (A small-scale transcription of this recipe in code is given after the Sec. 8 listing below.)

1. Calculation of the right-hand bracketed term in (22)
2. Loop round j, p''
3. Evaluate χ(j, p'') = (C_K^Q)_{i'p',jp''}   [cost: K (= 24)]
4. End loop (j, p'')   [cost: 5Kn, or 5Kn/100 (≈ 9 × 10^6) with efficiencies]
5. α(:) = 0
6. β = 0
7. Loop round l
8. Loop round j, p''
9. α(l) = α(l) + χ(j, p'') E_{p'',j,l} (Λ_{p''}^{1/2})_{l,l}
10. End loop (j, p'')   [cost: 5n, or 5n/100 with efficiencies]
11. β = β + α^2(l)
12. End loop (l)   [cost: 5Ln, or 5Ln/100 (≈ 1.8 × 10^7) with efficiencies]
13. μ_{i'p'} = 1/√β
14. Loop around destination points in the structure function
15. Loop round i, p
16. Calculation of the left-hand bracketed term in (22)
17. γ = 0
18. Loop round k
19. γ = γ + (x_k)_{ip} (x_k)_{i'p'}
20. End loop (k)   [cost: K (= 24)]
21. Calculation of the middle bracketed term in (22)
22. Loop round j, p''
23. Evaluate χ(j, p'') = (C_K^Q)_{ip,jp''}   [cost: K (= 24)]
24. End loop (j, p'')   [cost: 5Kn, or 5Kn/100 (≈ 9 × 10^6) with efficiencies]
25. ε(:) = 0
26. β = 0
27. Loop round l
28. Loop round j, p''
29. ε(l) = ε(l) + χ(j, p'') E_{p'',j,l} (Λ_{p''}^{1/2})_{l,l}
30. End loop (j, p'')   [cost: 5n, or 5n/100 with efficiencies]
31. β = β + ε^2(l)
32. End loop (l)   [cost: 5Ln, or 5Ln/100 (≈ 1.8 × 10^7) with efficiencies]
33. μ_{ip} = 1/√β

34. Structure function for i, p can be evaluated - see (33)
35. End loop (i, p)   [cost: 5n(K + 5n(1 + K + L)) (≈ 10^17), or 5n(K + 5n(1 + K + L)/100) (≈ 10^15) with efficiencies]

  (P_L^f)_{ip,i'p'} = γ μ_{ip} μ_{i'p'} [1/(K - 1)] ∑_{l=1}^{L} ε(l) α(l).    (33)

8. Suggested algorithm with static localization only

Without the adaptive localization the problem becomes considerably simpler. In this case (22) and (23) become

  (P_L^f)_{ip,i'p'} = [1/(K - 1)] ( ∑_{k=1}^{K} (x_k)_{ip} (x_k)_{i'p'} ) ∑_l ( μ_{ip} E_{p,i,l} (Λ_p^{1/2})_{l,l} ) ( μ_{i'p'} E_{p',i',l} (Λ_{p'}^{1/2})_{l,l} ),    (34)

  μ_{ip} = [ ∑_l (E_{p,i,l} (Λ_p^{1/2})_{l,l})^2 ]^{-1/2}.    (35)

1. Calculation of the right-hand bracketed term in (34)
2. β = 0
3. Loop round l
4. α(l) = E_{p',i',l} (Λ_{p'}^{1/2})_{l,l}
5. β = β + α^2(l)
6. End loop (l)   [cost: L (= 50)]
7. μ_{i'p'} = 1/√β
8. Loop around destination points in the structure function
9. Loop round i, p
10. Calculation of the left-hand bracketed term in (34)
11. γ = 0
12. Loop round k
13. γ = γ + (x_k)_{ip} (x_k)_{i'p'}
14. End loop (k)   [cost: K (= 24)]
15. Calculation of the middle bracketed term in (34)
16. β = 0
17. Loop round l
18. ε(l) = E_{p,i,l} (Λ_p^{1/2})_{l,l}
19. β = β + ε^2(l)
20. End loop (l)   [cost: L (= 50)]
21. μ_{ip} = 1/√β
22. Structure function for i, p can be evaluated - see (33)
23. End loop (i, p)   [cost: 5n(K + L)]
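To make the loop structure of the Sec. 7 listing concrete, here is a direct transcription of (22), (23) and (33) for a tiny single-parameter 1-D system, cross-checked against the explicit Schur product (1). The grid size, ensemble, modes and spectrum are illustrative assumptions, and the loops are kept explicit rather than vectorized so as to mirror the listing.

```python
# Transcription of the Sec. 7 structure-function recipe (eqs (22), (23), (33)) for a
# tiny 1-D, single-parameter system, checked against the explicit Schur product.
# All sizes and the spectrum are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, K, L, Q = 32, 24, 12, 2
X = rng.standard_normal((n, K)); X -= X.mean(axis=1, keepdims=True)
P_R = X @ X.T / (K - 1)
C_KQ = np.corrcoef(X) ** Q                                  # (C_K)^Q, eqs (14)-(15)

r = np.arange(n) / n
E = np.column_stack([np.cos(2 * np.pi * (q // 2 + 1) * r) if q % 2 == 0
                     else np.sin(2 * np.pi * (q // 2 + 1) * r) for q in range(L)])
lam_half = np.exp(-0.25 * np.arange(L))                     # (Lambda^{1/2})_{l,l}

ip_src = 10                                                  # the fixed point i', p'
alpha = np.array([np.sum(C_KQ[ip_src] * E[:, l]) * lam_half[l] for l in range(L)])
mu_src = 1.0 / np.sqrt(np.sum(alpha ** 2))                  # eq (23) for i', p'

P_L_row = np.empty(n)
for i in range(n):                                          # loop over destination points
    gamma = np.sum(X[i] * X[ip_src])                        # left-hand bracketed term
    eps = np.array([np.sum(C_KQ[i] * E[:, l]) * lam_half[l] for l in range(L)])
    mu = 1.0 / np.sqrt(np.sum(eps ** 2))
    P_L_row[i] = gamma * mu * mu_src / (K - 1) * np.sum(eps * alpha)   # eq (33)

# Cross-check against the explicit localisation (1) with Omega built as in (9)
W = C_KQ @ (E * lam_half)
Whalf = W / np.linalg.norm(W, axis=1, keepdims=True)
Omega = Whalf @ Whalf.T
print(np.allclose(P_L_row, (P_R * Omega)[:, ip_src]))       # True
```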

9. Adaptive localization with a major simplification

In Sec. 7 we considered an algorithm for the brute-force evaluation of (22) and (23) for the evaluation of localized covariances, and in Sec. 8 we considered the limiting case where the localization was static. Neither of these approaches is useful for large systems (the algorithm in Sec. 7 is prohibitive and the algorithm in Sec. 8 is inadequate for many purposes). Here we consider a simplification to the covariance formulae that may be usable and useful.

First, recap the equations that are to be evaluated: the localized covariance matrix elements from (16),

  (P_L^f)_{ip,i'p'} = σ_i^p σ_{i'}^{p'} [1/((K - 1)(L - 1))] ∑_{k=1}^{K} ∑_l (x_k)_{ip} (ω_l)_{ip} (x_k)_{i'p'} (ω_l)_{i'p'},    (16)

the localization members from (17),

  (ω_l)_{ip} = √(L - 1) (Ω^{1/2})_{ip,l},    (17)

and the elements of the localization matrix from (18),

  (Ω^{1/2})_{ip,l} = μ_{ip} (C_K^Q E Λ^{1/2})_{ip,l},    (18)

where, from (19),

  μ_{ip} = [ ∑_l (C_K^Q E Λ^{1/2})_{ip,l}^2 ]^{-1/2}.    (19)

These are straight copies of equations previously given in this document. In [2] it is suggested that considerable efficiency savings can be made in the evaluation of (18) (and hence of (16)) in the case of adaptive localization if the matrix C_K^Q is approximated by one that has separable structure functions. This is now explored.

The analysis is centred on the evaluation of (C_K^Q E Λ^{1/2})_{ip,l}, which is one of the most expensive parts of the calculation:

  (C_K^Q E Λ^{1/2})_{ip,j} = ∑_{i'p'} (C_K^Q)_{ip,i'p'} E_{p',i',j} (Λ^{1/2})_{jj}.    (36)

Now consider the case when rows of C_K are approximated by separable functions. Since index i represents all three dimensions in space, this step requires a change of notation. Let a given i represent a unique combination of x, y, z and let i' represent x', y', z'. Then (C_K)_{ip,i'p'} may be written as

  (C_K)_{ip,i'p'} = C_K(x, y, z, p; x', y', z', p').    (37)

Assuming separable functions means that (C_K)_{ip,i'p'} is approximated by

  (C_K)_{ip,i'p'} = C_K(x, y, z, p; x', y', z', p') ≈ C_K(x, y, z, p; x', p') C_K(x, y, z, p; y', p') C_K(x, y, z, p; z', p'),    (38)

i.e. the row associated with x, y, z, p is a function of x', y', z', p' and is written as the product of three functions: one a function of x', p', another a function of y', p' and another a function of z', p'. This is separable in x', y', z'-space. Note that, unfortunately, C_K written in this way is not guaranteed to be symmetric (as is required for a correlation matrix), but it is assumed that this is not vital for localization, as is presumably the case in [2].
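To illustrate what (38) asks of C_K, the sketch below approximates one row of a 3-D correlation function by a product of three 1-D factors. The factors used here are simply axis cross-sections through the row's base point (one possible choice, assumed for illustration and not necessarily the construction intended in [2]). The approximation is exact when the underlying function is separable (axis-aligned) and incurs an error when it is not; a non-zero "tilt" mixing x and y introduces the non-separability.

```python
# Sketch of the separable-row idea in (38): approximate one row of a 3-D correlation
# function by the product of three 1-D factors (axis cross-sections through the base
# point). This construction and all sizes are illustrative assumptions.
import numpy as np

nx, ny, nz = 20, 16, 12
x, y, z = np.arange(nx), np.arange(ny), np.arange(nz)
X3, Y3, Z3 = np.meshgrid(x, y, z, indexing="ij")
x0, y0, z0 = 9, 7, 5                                        # the base (row) point

def corr(dx, dy, dz, tilt=0.0):
    # Gaussian correlation; tilt != 0 mixes x and y, making the function non-separable
    return np.exp(-0.5 * ((dx + tilt * dy) ** 2 / 9.0 + dy ** 2 / 16.0 + dz ** 2 / 4.0))

for tilt in (0.0, 0.6):
    row = corr(X3 - x0, Y3 - y0, Z3 - z0, tilt)             # exact row of C_K
    fx = corr(x - x0, 0.0, 0.0, tilt)                       # cross-section along x'
    fy = corr(0.0, y - y0, 0.0, tilt)                       # cross-section along y'
    fz = corr(0.0, 0.0, z - z0, tilt)                       # cross-section along z'
    approx = fx[:, None, None] * fy[None, :, None] * fz[None, None, :]
    err = np.max(np.abs(approx - row))
    print(f"tilt={tilt}: max error of separable row = {err:.3f}")   # 0 when separable
```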

The separable form (38) is useful if columns of E are also separable, which they are under the planned formulation. In the same notation as used above, and noting that j (not j') is a wavevector index representing (k_j^x, k_j^y, k_j^z), E_{p',i',j} may be written

  E_{p',i',j} = f_{p'}^x(k_j^x, x') f_{p'}^y(k_j^y, y') f_{p'}^z(k_j^z, z'),    (39)

where f_{p'}^x(k_j^x, x'), f_{p'}^y(k_j^y, y') and f_{p'}^z(k_j^z, z') are orthogonal functions (trigonometric in the horizontal and EOFs in the vertical). Separability is useful because it makes evaluation of (36) cost effective, as follows:

  (C_K^Q E Λ^{1/2})_{ip,j} = ∑_{i'p'} (C_K^Q)_{ip,i'p'} E_{p',i',j} (Λ^{1/2})_{jj}

    = ∑_{p'} ∑_{x'} ∑_{y'} ∑_{z'} C_K^Q(x, y, z, p; x', p') C_K^Q(x, y, z, p; y', p') C_K^Q(x, y, z, p; z', p') f_{p'}^x(k_j^x, x') f_{p'}^y(k_j^y, y') f_{p'}^z(k_j^z, z') Λ^{1/2}(k_j^x, k_j^y, k_j^z)

    = Λ^{1/2}(k_j^x, k_j^y, k_j^z) ∑_{p'} [ ∑_{x'} C_K^Q(x, y, z, p; x', p') f_{p'}^x(k_j^x, x') ] [ ∑_{y'} C_K^Q(x, y, z, p; y', p') f_{p'}^y(k_j^y, y') ] [ ∑_{z'} C_K^Q(x, y, z, p; z', p') f_{p'}^z(k_j^z, z') ],    (40)

where notational changes have been made for compatibility with the recent discussion. Remember that i is shorthand for x, y, z and j is shorthand for k_j^x, k_j^y, k_j^z. The 3-D integral in (36) has been replaced by three integrals over each dimension (plus parameters) in (40). At the resolution of the HRTM and with five parameters, this reduces the operation count from 5n ≈ 3.6 × 10^7 to just 5 × (360 + 288 + 70) = 3 590. This is about 10^4 times more efficient.

Putting together (16), (17), (18), (19) and (40) gives

  (P_L^f)_{ip,i'p'} = [1/(K - 1)] ∑_{k=1}^{K} ∑_l (x_k)_{ip} (Ω^{1/2})_{ip,l} (x_k)_{i'p'} (Ω^{1/2})_{i'p',l}

    = μ_{ip} μ_{i'p'} [1/(K - 1)] ∑_{k=1}^{K} ∑_l (x_k)_{ip} (C_K^Q E Λ^{1/2})_{ip,l} (x_k)_{i'p'} (C_K^Q E Λ^{1/2})_{i'p',l}

    = μ_{ip} μ_{i'p'} [1/(K - 1)] ∑_{k=1}^{K} (x_k)_{ip} (x_k)_{i'p'} ∑_l Λ^{1/2}(k_l^x, k_l^y, k_l^z) ∑_{p''} [ ∑_{x''} C_K^Q(x, y, z, p; x'', p'') f_{p''}^x(k_l^x, x'') ] [ ∑_{y''} C_K^Q(x, y, z, p; y'', p'') f_{p''}^y(k_l^y, y'') ] [ ∑_{z''} C_K^Q(x, y, z, p; z'', p'') f_{p''}^z(k_l^z, z'') ]
      × Λ^{1/2}(k_l^x, k_l^y, k_l^z) ∑_{p''} [ ∑_{x''} C_K^Q(x', y', z', p'; x'', p'') f_{p''}^x(k_l^x, x'') ] [ ∑_{y''} C_K^Q(x', y', z', p'; y'', p'') f_{p''}^y(k_l^y, y'') ] [ ∑_{z''} C_K^Q(x', y', z', p'; z'', p'') f_{p''}^z(k_l^z, z'') ],    (41)

where

  μ_{ip} = [ ∑_l ( Λ^{1/2}(k_l^x, k_l^y, k_l^z) ∑_{p''} [ ∑_{x''} C_K^Q(x, y, z, p; x'', p'') f_{p''}^x(k_l^x, x'') ] [ ∑_{y''} C_K^Q(x, y, z, p; y'', p'') f_{p''}^y(k_l^y, y'') ] [ ∑_{z''} C_K^Q(x, y, z, p; z'', p'') f_{p''}^z(k_l^z, z'') ] )^2 ]^{-1/2},    (42)

and similarly for μ_{i'p'} with (x, y, z, p) replaced by (x', y', z', p').
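The operation count quoted above is easy to verify in code: for separable row and mode factors, the triple sum over (x', y', z') in (36) collapses to a product of three 1-D sums as in (40). The factor arrays below are random stand-ins (illustrative assumptions) with the grid dimensions used in the listings that follow; with five parameters the counts become 5n ≈ 3.6 × 10^7 versus 3 590 per output element, as stated in the text.

```python
# Verification of the factorisation in (40): a brute-force 3-D sum versus the product
# of three 1-D sums. The factor arrays are random stand-ins (illustrative assumptions).
import numpy as np

rng = np.random.default_rng(3)
nx, ny, nz = 360, 288, 70                 # the grid dimensions used in the listings below

cx, cy, cz = rng.random(nx), rng.random(ny), rng.random(nz)        # separable C_K^Q row factors
fx, fy, fz = rng.standard_normal(nx), rng.standard_normal(ny), rng.standard_normal(nz)  # f^x, f^y, f^z

# Brute force: sum the integrand over every point of the 3-D grid (nx*ny*nz terms)
full = ((cx * fx)[:, None, None] * (cy * fy)[None, :, None] * (cz * fz)[None, None, :]).sum()

# Factorised, as in (40): three 1-D sums (nx + ny + nz terms)
fast = (cx * fx).sum() * (cy * fy).sum() * (cz * fz).sum()

print(np.isclose(full, fast), nx * ny * nz, nx + ny + nz)   # True 7257600 718
```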

The suggested algorithm is now given for this case. Costs for each loop are specified at the end of the loop; numerical terms in brackets are for the specific model domain (n = 360 × 288 × 70 grid points and 5 parameters). Assume for now that L = 50.

1. Calculation of the term in (41) for the (fixed) i', p' (i' denotes a particular x', y', z')
2. Loop round p''
3. Loop round x''
4. Evaluate χ_{p''}^x(x'') = C_K^Q(x', y', z', p'; x'', p'')   [cost: K (= 24)]
5. End loop (x'')   [cost: 360K (= 8 640)]
6. Loop round y''
7. Evaluate χ_{p''}^y(y'') = C_K^Q(x', y', z', p'; y'', p'')   [cost: K (= 24)]
8. End loop (y'')   [cost: 288K (= 6 912)]
9. Loop round z''
10. Evaluate χ_{p''}^z(z'') = C_K^Q(x', y', z', p'; z'', p'')   [cost: K (= 24)]
11. End loop (z'')   [cost: 70K (= 1 680)]
12. End loop (p'')   [cost: 3 590K (= 86 160)]
13. α(:) = 0
14. β = 0
15. Loop round l
16. Loop round p''

17. α_x = 0
18. Loop round x''
19. α_x = α_x + χ_{p''}^x(x'') f_{p''}^x(k_l^x, x'')
20. End loop (x'')   [cost: 360]
21. α_y = 0
22. Loop round y''
23. α_y = α_y + χ_{p''}^y(y'') f_{p''}^y(k_l^y, y'')
24. End loop (y'')   [cost: 288]
25. α_z = 0
26. Loop round z''
27. α_z = α_z + χ_{p''}^z(z'') f_{p''}^z(k_l^z, z'')
28. End loop (z'')   [cost: 70]
29. α(l) = α(l) + α_x α_y α_z
30. End loop (p'')   [cost: 3 590]
31. α(l) = α(l) Λ^{1/2}(k_l^x, k_l^y, k_l^z)
32. β = β + α^2(l)
33. End loop (l)   [cost: 3 590L (= 179 500)]
34. μ_{i'p'} = 1/√β
35. Loop around destination points in the structure function
36. Loop round i, p (i denotes a particular x, y, z)
37. Calculation of the static term in (41)
38. γ = 0
39. Loop round k
40. γ = γ + (x_k)_{ip} (x_k)_{i'p'}
41. End loop (k)   [cost: K (= 24)]
42. Calculation of the term in (41) for the (variable) i, p
43. Loop round p''
44. Loop round x''
45. Evaluate χ_{p''}^x(x'') = C_K^Q(x, y, z, p; x'', p'')   [cost: K (= 24)]
46. End loop (x'')   [cost: 360K (= 8 640)]
47. Loop round y''
48. Evaluate χ_{p''}^y(y'') = C_K^Q(x, y, z, p; y'', p'')   [cost: K (= 24)]
49. End loop (y'')   [cost: 288K (= 6 912)]
50. Loop round z''
51. Evaluate χ_{p''}^z(z'') = C_K^Q(x, y, z, p; z'', p'')   [cost: K (= 24)]
52. End loop (z'')   [cost: 70K (= 1 680)]
53. End loop (p'')   [cost: 3 590K (= 86 160)]
54. ε(:) = 0
55. β = 0

56. Loop round l
57. Loop round p''
58. ε_x = 0
59. Loop round x''
60. ε_x = ε_x + χ_{p''}^x(x'') f_{p''}^x(k_l^x, x'')
61. End loop (x'')   [cost: 360]
62. ε_y = 0
63. Loop round y''
64. ε_y = ε_y + χ_{p''}^y(y'') f_{p''}^y(k_l^y, y'')
65. End loop (y'')   [cost: 288]
66. ε_z = 0
67. Loop round z''
68. ε_z = ε_z + χ_{p''}^z(z'') f_{p''}^z(k_l^z, z'')
69. End loop (z'')   [cost: 70]
70. ε(l) = ε(l) + ε_x ε_y ε_z
71. End loop (p'')   [cost: 3 590]
72. ε(l) = ε(l) Λ^{1/2}(k_l^x, k_l^y, k_l^z)
73. β = β + ε^2(l)
74. End loop (l)   [cost: 3 590L (= 179 500)]
75. μ_{ip} = 1/√β
76. Structure function for i, p can be evaluated - see (33)
77. End loop (i, p)   [cost: 5n(K + 3 590(K + L)) ≈ 10^13]

This cost can be reduced by looping only round those i that are in the same plane as i'. Instead of multiplying by n, the multiple in the last line is then the number of points in that plane, and the reduced cost is ≈ 2 × 10^10.

References

[1] Bishop C.H., Hodyss D., Ensemble covariances adaptively localized with ECO-RAP. Part 1: tests on simple error models, submitted to Tellus.
[2] Bishop C.H., Hodyss D., Ensemble covariances adaptively localized with ECO-RAP. Part 2: a strategy for the atmosphere, submitted to Tellus.
