Performance of ensemble Kalman filters with small ensembles
1 Performance of ensemble Kalman filters with small ensembles

Xin T. Tong (joint work with Andrew J. Majda)
National University of Singapore
Sunday 28th May, 2017

X.Tong EnKF performance 1 / 31
2 Content

- Ensemble Kalman filter (EnKF).
- High-dimensional challenges for EnKF.
- Low-rank scenario: with a low effective dimension p, EnKF reaches its proclaimed performance if the ensemble size K > Cp for a constant C.
- Sparse/localized scenario: EnKF with domain localization and a stable localized structure reaches its proclaimed performance if the ensemble size K > C_L log d for a constant C_L.
3 Filtering

Signal-observation system with random coefficients:

  Signal:      X_{n+1} = A_n X_n + B_n + ξ_{n+1},   ξ_{n+1} ~ N(0, Σ_n)
  Observation: Y_{n+1} = H X_{n+1} + ζ_{n+1},       ζ_{n+1} ~ N(0, σ_o² I_q)

Goal: estimate X_n based on Y_1, ..., Y_n.

- A_n, B_n, Σ_n, σ_o, H: sequences of given matrices and vectors.
- A_n can be unstable at times.
4 Weather forecast

  Signal:      X_{n+1} = A_n X_n + B_n + ξ_{n+1}
  Observation: Y_{n+1} = H X_{n+1} + ζ_{n+1}

Weather forecasting:
- Signal: atmosphere and ocean, governed by a PDE.
- Observations: weather stations, satellites, sensors, ...
- Main challenge: high dimension d.
5 Kalman filter

Use Gaussian conditional distributions: X_n | Y_{1...n} ~ N(m_n, R_n).

Forecast step:
  m̂_{n+1} = A_n m_n + B_n,   R̂_{n+1} = A_n R_n A_n^T + Σ_n.

Assimilation step: apply the Kalman update rule
  m_{n+1} = m̂_{n+1} + G(R̂_{n+1})(Y_{n+1} - H m̂_{n+1}),   R_{n+1} = K(R̂_{n+1}),
where
  G(C) = C H^T (σ_o² I_q + H C H^T)^{-1},   K(C) = C - G(C) H C.

Complexity: O(d³).

Posterior at t = n  --forecast-->  Prior + Obs at t = n+1  --assimilate-->  Posterior at t = n+1
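The forecast and assimilation steps above can be sketched in a few lines (a minimal NumPy illustration; the function name and interface are mine, not from the talk):

```python
import numpy as np

def kalman_step(m, R, A, B, Sigma, H, sigma_o, y):
    """One Kalman filter cycle: forecast, then the Kalman update rule.

    Uses the gain G(C) = C H^T (sigma_o^2 I_q + H C H^T)^{-1} and the
    posterior covariance map K(C) = C - G(C) H C from the slide.
    """
    # Forecast step: push the mean and covariance through the dynamics
    m_hat = A @ m + B
    R_hat = A @ R @ A.T + Sigma
    # Assimilation step: Kalman gain and update
    q = H.shape[0]
    G = R_hat @ H.T @ np.linalg.inv(sigma_o**2 * np.eye(q) + H @ R_hat @ H.T)
    m_new = m_hat + G @ (y - H @ m_hat)
    R_new = R_hat - G @ H @ R_hat  # K(R_hat)
    return m_new, R_new
```

The O(d³) cost comes from the d-by-d matrix products and the inverse.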
6 Sampling + Gaussian

Monte Carlo: use samples to represent a distribution:
  X^{(1)}, ..., X^{(K)} ~ p,   (1/K) Σ_{k=1}^K δ_{X^{(k)}} ≈ p.

Use an ensemble {X_n^{(k)}}_{k=1}^K to represent N(X̄_n, C_n):
  X̄_n = (1/K) Σ_k X_n^{(k)},   C_n = (1/(K-1)) Σ_k (X_n^{(k)} - X̄_n)(X_n^{(k)} - X̄_n)^T.
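The ensemble mean and covariance above translate directly into code (a small helper of my own for illustration):

```python
import numpy as np

def ensemble_moments(X):
    """Mean and unbiased covariance of an ensemble X with shape (d, K).

    X_bar = (1/K) sum_k X^(k),
    C     = 1/(K-1) sum_k (X^(k) - X_bar)(X^(k) - X_bar)^T.
    """
    K = X.shape[1]
    x_bar = X.mean(axis=1)
    dev = X - x_bar[:, None]          # deviations from the ensemble mean
    C = dev @ dev.T / (K - 1)
    return x_bar, C
```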
7 Ensemble Kalman filter

Forecast step:
  X̂_{n+1}^{(k)} = A_n X_n^{(k)} + B_n + ξ_{n+1}^{(k)},   Ĉ_{n+1} = Ŝ_{n+1} Ŝ_{n+1}^T / (K-1).

Assimilation step:
  X_{n+1}^{(k)} = X̂_{n+1}^{(k)} + G(Ĉ_{n+1})(Y_{n+1} - H X̂_{n+1}^{(k)} + ζ_{n+1}^{(k)}).

Gain matrix: G(C) = C H^T (σ_o² I_q + H C H^T)^{-1}.

Complexity: O(K² d).

Posterior at t = n  --forecast (Monte Carlo)-->  Prior + Obs at t = n+1  --assimilate-->  Posterior at t = n+1
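Putting the two steps together, one EnKF cycle with perturbed observations might look like this (a sketch under my own naming; the talk does not prescribe an implementation):

```python
import numpy as np

def enkf_step(X, A, B, Sigma, H, sigma_o, y, rng):
    """One EnKF cycle for an ensemble X of shape (d, K): Monte Carlo
    forecast, then assimilation with perturbed observations."""
    d, K = X.shape
    q = H.shape[0]
    # Forecast: each member gets an independent dynamics-noise draw
    X_hat = A @ X + B[:, None] + rng.multivariate_normal(np.zeros(d), Sigma, K).T
    # Ensemble covariance C_hat = S S^T / (K - 1) from the spread matrix S
    S = X_hat - X_hat.mean(axis=1, keepdims=True)
    C_hat = S @ S.T / (K - 1)
    # Gain built from the ensemble covariance (O(K^2 d) overall when q is small)
    G = C_hat @ H.T @ np.linalg.inv(sigma_o**2 * np.eye(q) + H @ C_hat @ H.T)
    # Assimilation: perturbed observations, one noise draw per member
    zeta = sigma_o * rng.standard_normal((q, K))
    return X_hat + G @ (y[:, None] - H @ X_hat + zeta)
```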
8 Application and theory of EnKF

Pros and cons:
- K = 100 ensemble members can forecast d = 10^6 dimensional systems.
- Fast weather forecasting and oil-reservoir management: K² d << d³.
- No systematic theoretical understanding: why does it work?

Literature (focused on showing the ensemble version (X̄_n, C_n) converges to (m_n, R_n)):
- Requires K -> ∞ (Mandel, Cobb, Beezley 11).
- Fixed d, sufficiently large K, ||A|| < 1 (Del Moral, Tugaut 16).
- Fixed K, well-definedness: E||X_n^{(k)}||² < ∞ (Law, Kelly, Stuart 14).
- Fixed K, boundedness: sup_n E||X_n^{(k)}||² < ∞ (Tong, Majda, Kelly 15).

Missing: performance analysis with K << d.
11 Challenges from high dimension

Can an ensemble of size K represent uncertainty of dimension d?

- Spurious correlation in high dimension. Suppose X_n^{(k)} ~ N(0, I_d) i.i.d.; by the Bai-Yin law, the ensemble covariance C_n = Σ_{k=1}^K (X_n^{(k)} - X̄_n)(X_n^{(k)} - X̄_n)^T / (K-1) satisfies ||C_n - I_d|| ≈ √(d/K) with large probability.
- Rank deficiency: rank(C_n) <= K - 1, so C_n can be seen as a block matrix with a nonzero block of dimension K - 1 and a zero block of dimension d - K + 1.
- Instability of the dynamics: Ĉ_{n+1} = A_n C_n A_n^T + Σ_n. What if span(C_n) does not cover the expanding directions?
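The spurious-correlation and rank-deficiency points are easy to see numerically (a quick experiment of my own, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 500, 50                       # dimension far larger than ensemble size
X = rng.standard_normal((K, d))      # K i.i.d. N(0, I_d) samples
C = np.cov(X, rowvar=False)          # ensemble covariance, normalized by K-1

# Spurious correlation: operator-norm error of order sqrt(d/K), far from 0
err = np.linalg.norm(C - np.eye(d), ord=2)

# Rank deficiency: at most K - 1 nonzero eigenvalues
rank = np.linalg.matrix_rank(C)
```

Here `err` comes out an order of magnitude above zero, while `rank` is at most K - 1 = 49 even though d = 500.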
12 Heuristic answers

For high-dimensional numerical problems,
- low-rank structure,
- sparse structure
can be exploited for efficient computation.

In EnKF, efficient performance comes with
- a low effective dimension;
- a localized covariance structure.
13 Low Effective Dimension
14 Low effective dimension

Assume there is an effective dimension p < K << d: most uncertainty lies in p directions, the others below a threshold ρ, so C_n is approximately a p-dimensional leading block with all remaining entries of size O(ρ).

How is the filter ensemble attracted to this subspace? Lorenz system propagation. (Video: MIT AeroAstro)
15 EnKF variant for enhanced fidelity

Techniques to enhance fidelity:

- Spurious correlation in high dimension: project onto the p principal directions of K(Ĉ_{n+1}); the leftover directions are assumed to carry error strength ρ.
- Rank deficiency: additive inflation ρ I_d,
    C_n^ρ = ρ I_d + Σ_{k=1}^K (X_n^{(k)} - X̄_n)(X_n^{(k)} - X̄_n)^T / (K-1),
  which is block-diagonal with blocks C_n + ρ I_{K-1} and ρ I_{d-K+1}; the under-represented directions are assumed to carry error strength ρ.
- Instability of the dynamics: increase the noise strength,
    X_{n+1}^{(k)} = A_n X_n^{(k)} + ξ_{n+1}^{(k)},   ξ_{n+1}^{(k)} ~ N(0, Σ_n^+ + Σ_n),
  where the added noise Σ_n^+, built from ρ A_n A_n^T + Σ_n and (ρ/r) I_d, reflects the system instability. Also multiply the covariance by a ratio: Ĉ_n -> r Ĉ_n.
18 Main result: low effective dimension

Theorem (Majda, T. 16). Suppose the system has a low effective filter dimension p. Then there is a variant of EnKF and a constant C such that the EnKF reaches its proclaimed performance if the sample size K > Cp.

Remaining questions:
- What is the effective dimension?
  - Intuitively, the number of eigenvalues of C_n above ρ.
  - C_n depends on the random sampling realization.
  - Practically, it can be checked while EnKF is running.
  - Theoretical a-priori conditions for a small effective dimension?
- How to describe EnKF performance?
22 Kalman filter reference

Consider another signal-observation system:
  Signal:      X'_{n+1} = r A_n X'_n + B_n + ξ'_{n+1},   ξ'_{n+1} ~ N(0, Σ'_n)
  Observation: Y'_{n+1} = H X'_{n+1} + ζ'_{n+1},         ζ'_{n+1} ~ N(0, I_q)

This is an inflated version of the original system, with Σ'_n = r² Σ_n^+ + r² ρ I_d.

(X'_n, Y'_n) has its own Kalman filter (m'_n, R'_n), with covariance update
  R'_{n+1} = K(r² A_n R'_n A_n^T + Σ'_n).
25 Low effective dimension

(X'_n, Y'_n) has its own Kalman filter (m'_n, R'_n), with covariance update
  R'_{n+1} = K(r² A_n R'_n A_n^T + Σ'_n).
Such an update often has a stationary solution R'_n (Bougerol 93), and C_n <= R'_n with large probability.

Assumption (low effective dimension). The original system has effective dimension p if
- the covariance sequence R'_n has at most p eigenvalues above ρ;
- Rank(Σ_n^+) <= p;
- A_n A_n^T + Σ_n / ρ has at most p eigenvalues above 1/r.

Assumption (uniform observability).
- A_n^{-1}, A_n, Σ_n, R_n^{-1}, R_n are all bounded in operator norm.
- The m-step observability Gramian satisfies Σ_{k=1}^m A_{k,1}^T H_k^T H_k A_{k,1} >= c_m I_d.
26 Proclaimed performance

EnKF estimates X_n by X̄_n = (1/K) Σ_k X_n^{(k)}. Error: e_n = X_n - X̄_n, with covariance E e_n e_n^T.

EnKF estimates its own performance by the ensemble covariance C_n^ρ. Can it capture the error covariance, E C_n^ρ >= E e_n e_n^T?

Quantification via the Mahalanobis distance ||v||_C² = v^T C^{-1} v:
  ||e_n||²_{C_n^ρ} = e_n^T [C_n^ρ]^{-1} e_n.
Idealistic case: if e_n ~ N(0, C_n^ρ), then E ||e_n||²_{C_n^ρ} = d.
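The idealistic statement, E ||e_n||²_C = d when e_n ~ N(0, C), is a standard chi-squared fact and can be checked by simulation (my own sanity check, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
M = rng.standard_normal((d, d))
C = M @ M.T + np.eye(d)              # a fixed positive-definite covariance
L = np.linalg.cholesky(C)
C_inv = np.linalg.inv(C)

# Draw e ~ N(0, C); then e^T C^{-1} e is chi-squared with d degrees of freedom
E = L @ rng.standard_normal((d, 200000))
maha = np.einsum('ik,ij,jk->k', E, C_inv, E)   # squared Mahalanobis norms
```

The sample mean of `maha` is close to d = 5, as the slide's idealistic case predicts.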
28 Main theorem

Theorem (Majda, T. 16). Suppose the signal-observation system is uniformly observable with m steps and has effective dimension p. Then for any c, there are C, F, D_F, M_n such that, if K > Cp,
  E ||e_n||²_{C_n^ρ} <= r^{-n/6} E F(C_0) ||e_0||²_{C_0} + (2m + M_n) d,
with the constants bounded by
  F(C) <= D_F exp(D_F log³ ||C||),   lim sup_n M_n <= (1 + c) / (1 - r^{-m/6}).
30 Corollary: accuracy

System with frequent, accurate observations:
  Signal:      X_{n+1} = A_n X_n + B_n + ε ξ_{n+1},   ξ_{n+1} ~ N(0, Σ_n)
  Observation: Y_{n+1} = H X_{n+1} + ε ζ_{n+1},       ζ_{n+1} ~ N(0, σ_o² I_q)

Corollary: the ensemble covariance scales as C_n = O(ε), and
  E ||e_n||² <= ε (D_R + ρ) [ r^{-n/6} E F(ε^{-1} C_0) ||e_0||²_{C_0^ρ} + (2m + M_n) d ].
31 Localized covariance
32 Local interaction

High dimension often comes from dense grids, and interaction is often local (PDE discretization).

Example: the Lorenz 96 model,
  dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,   i = 1, ..., d.

Information travels along interactions and is dissipated. In linear systems, X_{n+1} = A_n X_n with matrices of short bandwidth.
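The Lorenz 96 right-hand side above is one line with periodic indexing (a standard discretization; the RK4 stepper is my own addition for completeness):

```python
import numpy as np

def lorenz96_rhs(x, F=8.0):
    """dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F, indices mod d."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt, F=8.0):
    """One classical Runge-Kutta step for the Lorenz 96 ODE."""
    k1 = lorenz96_rhs(x, F)
    k2 = lorenz96_rhs(x + 0.5 * dt * k1, F)
    k3 = lorenz96_rhs(x + 0.5 * dt * k2, F)
    k4 = lorenz96_rhs(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

The uniform state x_i = F is a fixed point; perturbing it and stepping forward shows how local interactions propagate information around the ring.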
33 Sparsity: local covariance

[Figure: correlation structure of the Lorenz 96 model.]

- Correlation depends on information propagation.
- Correlation decays quickly with distance.
- The covariance is localized with a structure Φ, e.g. Φ(x) = ρ^x:
    |[Ĉ_n]_{i,j}| <= Φ(|i - j|),
  where Φ(x) ∈ [0, 1] is decreasing. The distance can be general.
34 Covariance localization

- Spurious correlation may exist between far-away components.
- Localization: simply ignore far-away correlations.
- Implementation: Schur product with a mask,
    [Ĉ_n ∘ D_L]_{i,j} = [Ĉ_n]_{i,j} [D_L]_{i,j},
  and use Ĉ_n ∘ D_L to describe the uncertainty.
- [D_L]_{i,j} = φ(|i - j|), with a radius L:
  - Gaspari-Cohn matrix: φ(x) = exp(-4x²/L²) 1_{|i-j| <= L}.
  - Cutoff/banding matrix: φ(x) = 1_{|i-j| <= L}.
- Localization also resolves the rank deficiency.
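The two masks on the slide are simple to construct; localization is then just the entrywise product (here with ordinary |i - j| distance, though as the slide notes the distance can be general):

```python
import numpy as np

def cutoff_mask(d, L):
    """Cutoff/banding mask: [D]_{ij} = 1 if |i - j| <= L, else 0."""
    i = np.arange(d)
    return (np.abs(i[:, None] - i[None, :]) <= L).astype(float)

def gaussian_taper_mask(d, L):
    """Smooth taper phi(x) = exp(-4 x^2 / L^2) 1_{x <= L}, as on the slide.

    (A Gaspari-Cohn-style mask; the classical Gaspari-Cohn function is a
    piecewise polynomial, this exponential form follows the slide.)"""
    i = np.arange(d)
    dist = np.abs(i[:, None] - i[None, :]).astype(float)
    return np.exp(-4.0 * dist**2 / L**2) * (dist <= L)

def localize(C, D):
    """Schur (entrywise) product: use C * D in place of C."""
    return C * D
```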
35 Domain localization

Two types of localized EnKF (LEnKF): domain localization and covariance tempering.

Domain localization with radius l (assume H is a partial observation matrix):
- Use the information in I_i = {j : |i - j| <= l} to update component i:
    C_i = P_{I_i} C P_{I_i},   G_i(C) = C_i H^T (σ_o² I_q + H C_i H^T)^{-1},
    G_L(C) = Σ_i e_i e_i^T G_i(C).
- Update rule:
    X_n^{(k)} = X̂_n^{(k)} + G_L(Ĉ_n)(Y_n - H X̂_n^{(k)} + ζ_n^{(k)}).
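A direct (unoptimized) reading of the domain-localized gain, looping over components and assembling G_L row by row; the helper name is mine:

```python
import numpy as np

def domain_localized_gain(C, H, sigma_o, l):
    """Domain-localized gain: row i of G_L is row i of G_i(C), where
    G_i uses only the window I_i = {j : |i - j| <= l}."""
    d = C.shape[0]
    q = H.shape[0]
    G = np.zeros((d, q))
    for i in range(d):
        idx = [j for j in range(d) if abs(i - j) <= l]
        P = np.zeros((d, d))
        P[idx, idx] = 1.0                       # projection onto the window I_i
        Ci = P @ C @ P                          # windowed covariance C_i
        Gi = Ci @ H.T @ np.linalg.inv(sigma_o**2 * np.eye(q) + H @ Ci @ H.T)
        G[i] = Gi[i]                            # keep only row i
    return G
```

As a sanity check, with a diagonal covariance and full observations the localization changes nothing, since each component only interacts with itself.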
36 Advantage with localization

Intuitively, ignoring long-distance covariance terms reduces the sampling difficulty and the necessary sample size.

Theorem (Bickel, Levina 08). Let X^{(1)}, ..., X^{(K)} ~ N(0, Σ), and denote C = (1/K) Σ_{k=1}^K X^{(k)} (X^{(k)})^T and ||D_L||_1 = max_i Σ_j |[D_L]_{i,j}|. There is a constant c such that for any t > 0,
  P(||C ∘ D_L - Σ ∘ D_L|| > ||D_L||_1 t) <= 8 exp(2 log d - cK min{t, t²}).

- This indicates that K ~ ||D_L||_1² log d is the necessary sample size.
- ||D_L||_1 is independent of d; e.g. for the cutoff/banding matrix [D_cut^L]_{i,j} = 1_{|i-j| <= L}, ||D_cut^L||_1 <= 2L + 1. We take L = 4l in our application.
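A small experiment illustrating the theorem's message: with K << d, the localized estimate C ∘ D_L tracks Σ ∘ D_L much better than the raw sample covariance tracks Σ (my own demo, using an exponentially decaying truth Φ(x) = 0.5^x):

```python
import numpy as np

rng = np.random.default_rng(2)
d, K, L = 200, 30, 5
i = np.arange(d)
Sigma = 0.5 ** np.abs(i[:, None] - i[None, :])   # localized truth covariance
X = rng.multivariate_normal(np.zeros(d), Sigma, K)
C = np.cov(X, rowvar=False)                      # raw sample covariance
D = (np.abs(i[:, None] - i[None, :]) <= L).astype(float)  # cutoff mask

err_raw = np.linalg.norm(C - Sigma, ord=2)           # no localization
err_loc = np.linalg.norm(C * D - Sigma * D, ord=2)   # after the Schur product
```

With only K = 30 samples in d = 200 dimensions, `err_loc` comes out well below `err_raw`: the mask discards exactly the entries dominated by sampling noise.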
39 Main result: localized EnKF

Theorem (T. 17). Suppose the system coefficients have bandwidth l, and the LEnKF ensemble covariance admits a stable localized structure. Then for any δ > 0, the LEnKF reaches its proclaimed performance with high probability:
  (1/T) Σ_{n=1}^T P( E_S[ê_n ê_n^T] <= (1 + δ)(Ĉ_n ∘ D_cut^{4l} + ρ I_d) ) >= 1 - D_0/T - D_1 δ,
if the sample size K > D_{l,δ} log d. Here E_S denotes the expectation conditioned on the realization of the sampling noise.

Stable localized structure: with a local structure function Φ, e.g. Φ(x) = λ^x,
  |[Ĉ_n]_{i,j}| <= M_n Φ(|i - j|),   Σ_{n=1}^T E M_n <= T M.
42 LEnKF inconsistency

There is an intrinsic inconsistency in LEnKF when using Ĉ_n ∘ D_L as the covariance.

Target covariance by the Bayes formula:
  (I - G_L(Ĉ_n) H)[Ĉ_n ∘ D_L](I - G_L(Ĉ_n) H)^T + σ_o² G_L (G_L)^T.

LEnKF implementation:
  X_n^{(k)} = X̂_n^{(k)} + G_L(Ĉ_n)(Y_n - H X̂_n^{(k)} + ζ_n^{(k)}),
whose average ensemble covariance is
  C_n ∘ D_L = [(I - G_L(Ĉ_n) H) Ĉ_n (I - G_L(Ĉ_n) H)^T + σ_o² G_L (G_L)^T] ∘ D_L.

The difference comes from commuting the localization with the Kalman update. This was previously investigated numerically by Nerger (2015); the inconsistency can lead to error growth.
43 When is the inconsistency small?

When localization is applied, the covariance is assumed localized: given a localized structure Φ, find M_n so that |[Ĉ_n]_{i,j}| <= M_n Φ(|i - j|).

Interestingly, when D_L is D_cut^{4l}, the localization inconsistency is bounded by C M_n Φ(2l). If 2l is large and Φ(x) = λ^x, this difference can be controlled: a localized covariance leads to a small localization inconsistency.

Therefore we need M_n to be a stable sequence: Σ_{n=1}^T E M_n <= T M.
45 When is covariance localized?

Practical perspective: it is simply assumed, or numerically checked.

Theoretical perspective: does the covariance localize for a given stochastic system?
- Linear systems: the covariance can be computed.
- Nonlinear systems: difficult, e.g. Lorenz 96.
- LEnKF: difficult, since the assimilation step is nonlinear.

We prove it under strong conditions: weak local interaction, strong dissipation, and sparse observations (for simplicity). The localization also scales with the noise strength.

[Figure: covariance at t = 0, 1, 2, 3.]
47 Summary

EnKF performs well with a small sample size if
- there is a low effective dimension, or
- there is a localized covariance structure.
48 References

- Performance of ensemble Kalman filters in large dimensions. To appear in CPAM. arXiv:
- Performance analysis of local ensemble Kalman filter. To appear on arXiv.
- Rigorous accuracy and robustness analysis for two-scale reduced random Kalman filters in high dimensions. arXiv:

Links and slides can be found at mattxin.

Thank you!
State and Parameter Estimation in Stochastic Dynamical Models Timothy DelSole George Mason University, Fairfax, Va and Center for Ocean-Land-Atmosphere Studies, Calverton, MD June 21, 2011 1 1 collaboration
More informationEnsemble Kalman filter assimilation of transient groundwater flow data via stochastic moment equations
Ensemble Kalman filter assimilation of transient groundwater flow data via stochastic moment equations Alberto Guadagnini (1,), Marco Panzeri (1), Monica Riva (1,), Shlomo P. Neuman () (1) Department of
More informationOn the convergence of (ensemble) Kalman filters and smoothers onto the unstable subspace
On the convergence of (ensemble) Kalman filters and smoothers onto the unstable subspace Marc Bocquet CEREA, joint lab École des Ponts ParisTech and EdF R&D, Université Paris-Est, France Institut Pierre-Simon
More informationSystematic strategies for real time filtering of turbulent signals in complex systems
Systematic strategies for real time filtering of turbulent signals in complex systems Statistical inversion theory for Gaussian random variables The Kalman Filter for Vector Systems: Reduced Filters and
More informationEnsemble square-root filters
Ensemble square-root filters MICHAEL K. TIPPETT International Research Institute for climate prediction, Palisades, New Yor JEFFREY L. ANDERSON GFDL, Princeton, New Jersy CRAIG H. BISHOP Naval Research
More informationBayesian Statistics and Data Assimilation. Jonathan Stroud. Department of Statistics The George Washington University
Bayesian Statistics and Data Assimilation Jonathan Stroud Department of Statistics The George Washington University 1 Outline Motivation Bayesian Statistics Parameter Estimation in Data Assimilation Combined
More informationAn Efficient Ensemble Data Assimilation Approach To Deal With Range Limited Observation
An Efficient Ensemble Data Assimilation Approach To Deal With Range Limited Observation A. Shah 1,2, M. E. Gharamti 1, L. Bertino 1 1 Nansen Environmental and Remote Sensing Center 2 University of Bergen
More informationPractical Aspects of Ensemble-based Kalman Filters
Practical Aspects of Ensemble-based Kalman Filters Lars Nerger Alfred Wegener Institute for Polar and Marine Research Bremerhaven, Germany and Bremen Supercomputing Competence Center BremHLR Bremen, Germany
More informationData assimilation with and without a model
Data assimilation with and without a model Tyrus Berry George Mason University NJIT Feb. 28, 2017 Postdoc supported by NSF This work is in collaboration with: Tim Sauer, GMU Franz Hamilton, Postdoc, NCSU
More informationEnsembles and Particle Filters for Ocean Data Assimilation
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Ensembles and Particle Filters for Ocean Data Assimilation Robert N. Miller College of Oceanic and Atmospheric Sciences
More informationAccepted in Tellus A 2 October, *Correspondence
1 An Adaptive Covariance Inflation Error Correction Algorithm for Ensemble Filters Jeffrey L. Anderson * NCAR Data Assimilation Research Section P.O. Box 3000 Boulder, CO 80307-3000 USA Accepted in Tellus
More informationDATA ASSIMILATION FOR FLOOD FORECASTING
DATA ASSIMILATION FOR FLOOD FORECASTING Arnold Heemin Delft University of Technology 09/16/14 1 Data assimilation is the incorporation of measurement into a numerical model to improve the model results
More informationEfficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter
Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter arxiv:physics/0511236 v1 28 Nov 2005 Brian R. Hunt Institute for Physical Science and Technology and Department
More informationData assimilation as an optimal control problem and applications to UQ
Data assimilation as an optimal control problem and applications to UQ Walter Acevedo, Angwenyi David, Jana de Wiljes & Sebastian Reich Universität Potsdam/ University of Reading IPAM, November 13th 2017
More informationEstimation of State Noise for the Ensemble Kalman filter algorithm for 2D shallow water equations.
Estimation of State Noise for the Ensemble Kalman filter algorithm for 2D shallow water equations. May 6, 2009 Motivation Constitutive Equations EnKF algorithm Some results Method Navier Stokes equations
More informationThe Ensemble Kalman Filter:
p.1 The Ensemble Kalman Filter: Theoretical formulation and practical implementation Geir Evensen Norsk Hydro Research Centre, Bergen, Norway Based on Evensen, Ocean Dynamics, Vol 5, No p. The Ensemble
More informationStochastic Spectral Approaches to Bayesian Inference
Stochastic Spectral Approaches to Bayesian Inference Prof. Nathan L. Gibson Department of Mathematics Applied Mathematics and Computation Seminar March 4, 2011 Prof. Gibson (OSU) Spectral Approaches to
More informationSuperparameterization and Dynamic Stochastic Superresolution (DSS) for Filtering Sparse Geophysical Flows
SP and DSS for Filtering Sparse Geophysical Flows Superparameterization and Dynamic Stochastic Superresolution (DSS) for Filtering Sparse Geophysical Flows Presented by Nan Chen, Michal Branicki and Chenyue
More informationComparisons between 4DEnVar and 4DVar on the Met Office global model
Comparisons between 4DEnVar and 4DVar on the Met Office global model David Fairbairn University of Surrey/Met Office 26 th June 2013 Joint project by David Fairbairn, Stephen Pring, Andrew Lorenc, Neill
More informationBayesian Estimation of Input Output Tables for Russia
Bayesian Estimation of Input Output Tables for Russia Oleg Lugovoy (EDF, RANE) Andrey Polbin (RANE) Vladimir Potashnikov (RANE) WIOD Conference April 24, 2012 Groningen Outline Motivation Objectives Bayesian
More information4. DATA ASSIMILATION FUNDAMENTALS
4. DATA ASSIMILATION FUNDAMENTALS... [the atmosphere] "is a chaotic system in which errors introduced into the system can grow with time... As a consequence, data assimilation is a struggle between chaotic
More informationLocalization and Correlation in Ensemble Kalman Filters
Localization and Correlation in Ensemble Kalman Filters Jeff Anderson NCAR Data Assimilation Research Section The National Center for Atmospheric Research is sponsored by the National Science Foundation.
More informationLARGE-SCALE TRAFFIC STATE ESTIMATION
Hans van Lint, Yufei Yuan & Friso Scholten A localized deterministic Ensemble Kalman Filter LARGE-SCALE TRAFFIC STATE ESTIMATION CONTENTS Intro: need for large-scale traffic state estimation Some Kalman
More informationNonlinear Observers. Jaime A. Moreno. Eléctrica y Computación Instituto de Ingeniería Universidad Nacional Autónoma de México
Nonlinear Observers Jaime A. Moreno JMorenoP@ii.unam.mx Eléctrica y Computación Instituto de Ingeniería Universidad Nacional Autónoma de México XVI Congreso Latinoamericano de Control Automático October
More informationAt A Glance. UQ16 Mobile App.
At A Glance UQ16 Mobile App Scan the QR code with any QR reader and download the TripBuilder EventMobile app to your iphone, ipad, itouch or Android mobile device. To access the app or the HTML 5 version,
More informationFiltering Sparse Regular Observed Linear and Nonlinear Turbulent System
Filtering Sparse Regular Observed Linear and Nonlinear Turbulent System John Harlim, Andrew J. Majda Department of Mathematics and Center for Atmosphere Ocean Science Courant Institute of Mathematical
More informationA Stochastic Collocation based. for Data Assimilation
A Stochastic Collocation based Kalman Filter (SCKF) for Data Assimilation Lingzao Zeng and Dongxiao Zhang University of Southern California August 11, 2009 Los Angeles Outline Introduction SCKF Algorithm
More informationState Space Models for Wind Forecast Correction
for Wind Forecast Correction Valérie 1 Pierre Ailliot 2 Anne Cuzol 1 1 Université de Bretagne Sud 2 Université de Brest MAS - 2008/28/08 Outline 1 2 Linear Model : an adaptive bias correction Non Linear
More informationEfficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter
arxiv:physics/0511236v2 [physics.data-an] 29 Dec 2006 Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter Brian R. Hunt Institute for Physical Science and Technology
More informationConvergence of Square Root Ensemble Kalman Filters in the Large Ensemble Limit
Convergence of Square Root Ensemble Kalman Filters in the Large Ensemble Limit Evan Kwiatkowski, Jan Mandel University of Colorado Denver December 11, 2014 OUTLINE 2 Data Assimilation Bayesian Estimation
More informationBrian J. Etherton University of North Carolina
Brian J. Etherton University of North Carolina The next 90 minutes of your life Data Assimilation Introit Different methodologies Barnes Analysis in IDV NWP Error Sources 1. Intrinsic Predictability Limitations
More informationMorphing ensemble Kalman filter
Morphing ensemble Kalman filter and applications Center for Computational Mathematics Department of Mathematical and Statistical Sciences University of Colorado Denver Supported by NSF grants CNS-0623983
More informationConditions for successful data assimilation
Conditions for successful data assimilation Matthias Morzfeld *,**, Alexandre J. Chorin *,**, Peter Bickel # * Department of Mathematics University of California, Berkeley ** Lawrence Berkeley National
More informationExtended Kalman Filter Tutorial
Extended Kalman Filter Tutorial Gabriel A. Terejanu Department of Computer Science and Engineering University at Buffalo, Buffalo, NY 14260 terejanu@buffalo.edu 1 Dynamic process Consider the following
More informationSmoothers: Types and Benchmarks
Smoothers: Types and Benchmarks Patrick N. Raanes Oxford University, NERSC 8th International EnKF Workshop May 27, 2013 Chris Farmer, Irene Moroz Laurent Bertino NERSC Geir Evensen Abstract Talk builds
More informationLocalization in the ensemble Kalman Filter
Department of Meteorology Localization in the ensemble Kalman Filter Ruth Elizabeth Petrie A dissertation submitted in partial fulfilment of the requirement for the degree of MSc. Atmosphere, Ocean and
More informationSequential Monte Carlo Methods for Bayesian Computation
Sequential Monte Carlo Methods for Bayesian Computation A. Doucet Kyoto Sept. 2012 A. Doucet (MLSS Sept. 2012) Sept. 2012 1 / 136 Motivating Example 1: Generic Bayesian Model Let X be a vector parameter
More informationEnhancing information transfer from observations to unobserved state variables for mesoscale radar data assimilation
Enhancing information transfer from observations to unobserved state variables for mesoscale radar data assimilation Weiguang Chang and Isztar Zawadzki Department of Atmospheric and Oceanic Sciences Faculty
More informationOptimal control and estimation
Automatic Control 2 Optimal control and estimation Prof. Alberto Bemporad University of Trento Academic year 2010-2011 Prof. Alberto Bemporad (University of Trento) Automatic Control 2 Academic year 2010-2011
More informationEnsemble Kalman Filters for WRF-ARW. Chris Snyder MMM and IMAGe National Center for Atmospheric Research
Ensemble Kalman Filters for WRF-ARW Chris Snyder MMM and IMAGe National Center for Atmospheric Research Preliminaries Notation: x = modelʼs state w.r.t. some discrete basis, e.g. grid-pt values y = Hx
More informationRevision of TR-09-25: A Hybrid Variational/Ensemble Filter Approach to Data Assimilation
Revision of TR-9-25: A Hybrid Variational/Ensemble ilter Approach to Data Assimilation Adrian Sandu 1 and Haiyan Cheng 1 Computational Science Laboratory Department of Computer Science Virginia Polytechnic
More information9 Multi-Model State Estimation
Technion Israel Institute of Technology, Department of Electrical Engineering Estimation and Identification in Dynamical Systems (048825) Lecture Notes, Fall 2009, Prof. N. Shimkin 9 Multi-Model State
More informationPerturbation of system dynamics and the covariance completion problem
1 / 21 Perturbation of system dynamics and the covariance completion problem Armin Zare Joint work with: Mihailo R. Jovanović Tryphon T. Georgiou 55th IEEE Conference on Decision and Control, Las Vegas,
More informationDynamic System Identification using HDMR-Bayesian Technique
Dynamic System Identification using HDMR-Bayesian Technique *Shereena O A 1) and Dr. B N Rao 2) 1), 2) Department of Civil Engineering, IIT Madras, Chennai 600036, Tamil Nadu, India 1) ce14d020@smail.iitm.ac.in
More informationLecture 2: From Linear Regression to Kalman Filter and Beyond
Lecture 2: From Linear Regression to Kalman Filter and Beyond January 18, 2017 Contents 1 Batch and Recursive Estimation 2 Towards Bayesian Filtering 3 Kalman Filter and Bayesian Filtering and Smoothing
More informationData assimilation with and without a model
Data assimilation with and without a model Tim Sauer George Mason University Parameter estimation and UQ U. Pittsburgh Mar. 5, 2017 Partially supported by NSF Most of this work is due to: Tyrus Berry,
More informationRobotics 2 Target Tracking. Kai Arras, Cyrill Stachniss, Maren Bennewitz, Wolfram Burgard
Robotics 2 Target Tracking Kai Arras, Cyrill Stachniss, Maren Bennewitz, Wolfram Burgard Slides by Kai Arras, Gian Diego Tipaldi, v.1.1, Jan 2012 Chapter Contents Target Tracking Overview Applications
More informationThe Kalman Filter. Data Assimilation & Inverse Problems from Weather Forecasting to Neuroscience. Sarah Dance
The Kalman Filter Data Assimilation & Inverse Problems from Weather Forecasting to Neuroscience Sarah Dance School of Mathematical and Physical Sciences, University of Reading s.l.dance@reading.ac.uk July
More informationRobotics 2 Target Tracking. Giorgio Grisetti, Cyrill Stachniss, Kai Arras, Wolfram Burgard
Robotics 2 Target Tracking Giorgio Grisetti, Cyrill Stachniss, Kai Arras, Wolfram Burgard Linear Dynamical System (LDS) Stochastic process governed by is the state vector is the input vector is the process
More informationFundamentals of Data Assimilation
National Center for Atmospheric Research, Boulder, CO USA GSI Data Assimilation Tutorial - June 28-30, 2010 Acknowledgments and References WRFDA Overview (WRF Tutorial Lectures, H. Huang and D. Barker)
More informationOptimal Localization for Ensemble Kalman Filter Systems
Journal December of the 2014 Meteorological Society of Japan, Vol. Á. 92, PERIÁÑEZ No. 6, pp. et 585 597, al. 2014 585 DOI:10.2151/jmsj.2014-605 Optimal Localization for Ensemble Kalman Filter Systems
More informationNonlinear stability of the ensemble Kalman filter with adaptive covariance inflation
Nonlinear stability of the ensemble Kalman filter with adaptive covariance inflation arxiv:1507.08319v1 [math.pr] 29 Jul 2015 Xin T Tong, Andrew J Majda and David Kelly July 31, 2015 Abstract The Ensemble
More informationA Moment Matching Particle Filter for Nonlinear Non-Gaussian. Data Assimilation. and Peter Bickel
Generated using version 3.0 of the official AMS L A TEX template A Moment Matching Particle Filter for Nonlinear Non-Gaussian Data Assimilation Jing Lei and Peter Bickel Department of Statistics, University
More informationObjective localization of ensemble covariances: theory and applications
Institutionnel Grand Public Objective localization of ensemble covariances: theory and applications Yann Michel1, B. Me ne trier2 and T. Montmerle1 Professionnel (1) Me te o-france & CNRS, Toulouse, France
More informationOn a Data Assimilation Method coupling Kalman Filtering, MCRE Concept and PGD Model Reduction for Real-Time Updating of Structural Mechanics Model
On a Data Assimilation Method coupling, MCRE Concept and PGD Model Reduction for Real-Time Updating of Structural Mechanics Model 2016 SIAM Conference on Uncertainty Quantification Basile Marchand 1, Ludovic
More informationReview of Covariance Localization in Ensemble Filters
NOAA Earth System Research Laboratory Review of Covariance Localization in Ensemble Filters Tom Hamill NOAA Earth System Research Lab, Boulder, CO tom.hamill@noaa.gov Canonical ensemble Kalman filter update
More information