Bayesian Inverse Problems, Data Assimilation and Localization


1 Bayesian Inverse Problems, Data Assimilation and Localization. Xin T. Tong, National University of Singapore. ICIP, Singapore 2018.

2 Content: What is a Bayesian inverse problem? What is data assimilation? What is the curse of dimensionality? Localization for algorithms.

3 Bayesian formulation
Deterministic inverse problem: given $h$ and data $y$, find the state $x$ such that $y = h(x)$. Tikhonov regularization solves instead
$\min_x \|y - h(x)\|_R^2 + \|x\|_C^2$,
where $R, C$ are PSD matrices and $\|x\|_C^2 = x^T C^{-1} x$.
Bayesian inverse problem: prior $x \sim p_0 = N(0, C)$; observe $y = h(x) + v$ with $v \sim N(0, R)$, so $y \sim p_l(y \mid x)$. Bayes' formula gives the posterior distribution
$p(x \mid y) = \frac{p_0(x)\, p_l(y \mid x)}{\int p_0(x)\, p_l(y \mid x)\, dx} \propto p_0(x)\, p_l(y \mid x)$.
The goal is to obtain $p(x \mid y)$.

5 Connection to deterministic IP
Posterior density: $p(x \mid y) \propto p_0(x)\, p_l(y \mid x) \propto \exp\big(-\tfrac{1}{2}\|x\|_C^2 - \tfrac{1}{2}\|y - h(x)\|_R^2\big)$.
Maximum a posteriori (MAP) estimator: $\max_x p(x \mid y) \iff \min_x \|y - h(x)\|_R^2 + \|x\|_C^2$.
The deterministic IP finds the MAP, but $p(x \mid y)$ can contain much richer information: it quantifies the uncertainty in $x$ given the data $y$.

7 Uncertainty quantification (UQ)
Estimate the accuracy and robustness of the solution: find all the local minima, i.e. the likely states. Risk: what is the chance a hurricane hits a city? Streaming data and estimation on the fly. Cost: more complicated computation.

8 Monte Carlo
Direct method: can we simply compute $p(x \mid y)$? It involves quadrature (integration) and PDE solves, and is not feasible if $\dim(x) > 5$, while practical problems have much larger $\dim(x)$. One exception: if $p_0$ is Gaussian and $h$ is linear, the Kalman filter gives $p(x \mid y)$ as an explicit Gaussian.
Monte Carlo: generate samples $x^k \sim p(x \mid y)$, $k = 1, \dots, N$, and use sample statistics as an approximation. Local minima: clusters of samples. Sample variance: how uncertain are the estimators? Risk: what percentage of samples lead to bad outcomes?
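For instance, once posterior samples are available, the sample statistics mentioned above are one-liners. A minimal sketch in Python/NumPy (`samples` is a stand-in for an N x d array of draws from $p(x \mid y)$; the event in the last line is purely illustrative):

```python
import numpy as np

samples = np.random.default_rng(0).standard_normal((10_000, 3))  # stand-in for draws from p(x|y)
posterior_mean = samples.mean(axis=0)          # point estimate of x
posterior_var = samples.var(axis=0, ddof=1)    # sample variance: uncertainty of each component
risk = np.mean(samples[:, 0] > 2.0)            # fraction of samples in a "bad outcome" event
```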

10 Markov chain Monte Carlo (MCMC)
Given a target distribution $p(x)$, generate a sequence of samples $x^1, x^2, \dots, x^N$.
Standard MCMC steps: generate a proposal $x' \sim q(x^k, \cdot)$ and accept it with probability $\alpha(x^k, x') = \min\{1,\ p(x')\, q(x', x^k) / [p(x^k)\, q(x^k, x')]\}$.
Popular choices of proposals, with $\xi^k \sim N(0, I_n)$:
RWM: $x' = x^k + \sigma \xi^k$.
MALA: $x' = x^k + \tfrac{\sigma^2}{2} \nabla \log p(x^k) + \sigma \xi^k$.
pCN: $x' = \sqrt{1 - \beta^2}\, x^k + \beta \xi^k$.
Also emcee and Hamiltonian MCMC. Here $\sigma, \beta$ are tuning parameters.
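As a concrete illustration of these steps, here is a minimal random walk Metropolis sampler (a sketch, not from the slides; `log_p` and the step size `sigma` are user-supplied):

```python
import numpy as np

def rwm(log_p, x0, sigma, n_steps, rng=np.random.default_rng(0)):
    """Random walk Metropolis: propose x' = x + sigma*xi, accept with prob. min(1, p(x')/p(x))."""
    x = np.array(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    lp = log_p(x)
    for k in range(n_steps):
        prop = x + sigma * rng.standard_normal(x.size)   # RWM proposal
        lp_prop = log_p(prop)
        if np.log(rng.uniform()) < lp_prop - lp:          # Metropolis accept/reject
            x, lp = prop, lp_prop
        samples[k] = x
    return samples

# Example: sample the isotropic Gaussian N(0, I_n)
n = 10
chain = rwm(lambda x: -0.5 * np.sum(x**2), np.zeros(n), sigma=2.38 / np.sqrt(n), n_steps=5000)
```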

11 How does MCMC work in high dimensions?
Sample the isotropic Gaussian $p = N(0, I_n)$. Measure of efficiency: the integrated autocorrelation time (IACT), which measures how many iterations are needed to obtain an uncorrelated sample.
[Figure: IACT versus dimension for RWM, MALA, emcee and Hamiltonian MCMC; the IACT of all of them increases with dimension.]

12 Where is the problem?
Consider RWM and assume $x^k = 0$. Propose $x' = \sigma \xi^k$; it is accepted with probability $\approx \exp(-\tfrac{1}{2}\sigma^2 \|\xi^k\|^2) \approx \exp(-\tfrac{1}{2}\sigma^2 d)$.
If we keep $\sigma = 1$, proposals are essentially never accepted once $d > 20$. If we want a constant acceptance rate, we need $\sigma = d^{-1/2}$, but then $x' = x^k + \sigma \xi^k$ is highly correlated with $x^k$.
Similarly, MALA needs $\sigma \sim d^{-1/3}$ and Hamiltonian MCMC $\sigma \sim d^{-1/4}$. Is it possible to break this curse of dimensionality?
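This scaling is easy to check numerically. A small sketch under the setting above (target $N(0, I_d)$, current state $x^k = 0$): with fixed $\sigma = 1$ the average acceptance probability collapses as $d$ grows, while $\sigma = d^{-1/2}$ keeps it roughly constant.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_acceptance(d, sigma, n_trials=2000):
    """Average acceptance prob. of proposals x' = sigma*xi from x = 0, target N(0, I_d):
    alpha = min(1, exp(-0.5 * sigma^2 * ||xi||^2))."""
    xi = rng.standard_normal((n_trials, d))
    log_alpha = -0.5 * sigma**2 * np.sum(xi**2, axis=1)
    return np.exp(np.minimum(0.0, log_alpha)).mean()

for d in [1, 10, 100, 1000]:
    print(d, mean_acceptance(d, 1.0), mean_acceptance(d, d**-0.5))
```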

15 How to beat the curse of dimensionality?
Gaussian structure: if the target is $p \approx N(m, \Sigma)$, sample directly via $X = m + \Sigma^{1/2} Z$; approximate the region near the MAP with a Gaussian, giving a high dimensional Gaussian plus a low dimensional non-Gaussian part.
Low effective dimension: components have diminishing uncertainty/importance, e.g. the Fourier modes of a PDE solution; function-space MCMC (Stuart 2010).
[Video: Lorenz system propagation, MIT AeroAstro.]

16 Data assimilation
Sequential inverse problem: $x_0 \sim p_0$,
Hidden: $x_{n+1} = \Psi(x_n) + \xi_{n+1}$,  Data: $y_n = h_n(x_n) + v_n$.
Estimate $x_n$ based on $y_1, \dots, y_n$. Bayesian approach: find $p(x_n \mid y_1, \dots, y_n)$, using an online update from $p(x_n \mid y_1, \dots, y_n)$ to $p(x_{n+1} \mid y_1, \dots, y_{n+1})$ without storing $y_1, \dots, y_n$.
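This online update can be written as a two-step recursion; the following standard decomposition (not spelled out on the slide) makes the forecast and assimilation steps explicit:

```latex
% Forecast step (Chapman-Kolmogorov), then assimilation step (Bayes):
p(x_{n+1} \mid y_{1:n}) = \int p(x_{n+1} \mid x_n)\, p(x_n \mid y_{1:n})\, dx_n,
\qquad
p(x_{n+1} \mid y_{1:n+1}) \propto p_l(y_{n+1} \mid x_{n+1})\, p(x_{n+1} \mid y_{1:n}).
```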

17 Weather forecast
Hidden: $x_{n+1} = \Psi(x_n) + \xi_{n+1}$,  Data: $y_n = h(x_n) + v_n$.
$x_n$: atmosphere and ocean state on day $n$. $\Psi$: a PDE solver mapping yesterday to today. $h$: partial observations made by satellites.
Main challenge: high dimension, $d \sim 10^6$.

18 Bayesian IP application
Inverse problem with streaming data: $x_0 \sim p_0$,
Hidden: $x_{n+1} = x_n \equiv x_0$,  Data: $y_n = h_n(x_0) + v_n$.
Estimate $x_n$ based on $y_1, \dots, y_n$.
Application: subsurface detection. $x_0$: subsurface structure; $y_n$: data gathered in the $n$-th experiment.

19 Kalman filter
Linear setting: Hidden: $x_{n+1} = A_n x_n + B_n + \xi_{n+1}$,  Data: $y_n = H x_n + v_n$.
Use Gaussians: $x_n \mid y_{1:n} \sim N(m_n, R_n)$.
Forecast step: $\hat m_{n+1} = A_n m_n + B_n$, $\hat R_{n+1} = A_n R_n A_n^T + \Sigma_n$.
Assimilation step: apply the Kalman update rule
$m_{n+1} = \hat m_{n+1} + G(\hat R_{n+1})(y_{n+1} - H \hat m_{n+1})$, $R_{n+1} = K(\hat R_{n+1})$,
where $G(C) = C H^T (R + H C H^T)^{-1}$ and $K(C) = C - G(C) H C$.
Cycle: posterior at $t = n$ → forecast → prior + observation at $t = n+1$ → assimilate → posterior at $t = n+1$.
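In code, one forecast/assimilation cycle of this Kalman filter reads roughly as follows (a minimal NumPy sketch; all system matrices are assumed given):

```python
import numpy as np

def kalman_step(m, P, A, B, Sigma, H, R, y):
    """One Kalman filter cycle.
    Forecast:    m_hat = A m + B,  P_hat = A P A^T + Sigma.
    Assimilate:  G = P_hat H^T (R + H P_hat H^T)^{-1},
                 m_new = m_hat + G (y - H m_hat),  P_new = P_hat - G H P_hat."""
    m_hat = A @ m + B
    P_hat = A @ P @ A.T + Sigma
    S = R + H @ P_hat @ H.T                # innovation covariance
    G = P_hat @ H.T @ np.linalg.inv(S)     # gain G(P_hat)
    m_new = m_hat + G @ (y - H @ m_hat)
    P_new = P_hat - G @ H @ P_hat          # K(P_hat)
    return m_new, P_new
```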

20 Nonlinear filtering
Main drawbacks of the Kalman filter: it is difficult to deal with nonlinear dynamics/observations, and the computation is expensive, $O(d^3)$.
Particle filter: can deal with nonlinearity; tries to generate samples $x_n^k \sim p(x_n \mid y_1, \dots, y_n)$; prohibitively expensive, $O(e^d)$.
Ensemble Kalman filter: a combination of two ideas, representing Gaussian distributions with samples and using an ensemble $\{x_n^k\}_{k=1}^K$ to represent $N(\bar x_n, C_n)$ with
$\bar x_n = \frac{1}{K}\sum_k x_n^k$,  $C_n = \frac{1}{K-1}\sum_k (x_n^k - \bar x_n)(x_n^k - \bar x_n)^T$.

23 Ensemble Kalman filter
Forecast step: $\hat x_{n+1}^k = \Psi(x_n^k) + \xi_{n+1}^k$, $\hat y_{n+1}^k = h_{n+1}(\hat x_{n+1}^k) + \zeta_{n+1}^k$.
Assimilation step: $x_{n+1}^k = \hat x_{n+1}^k + G_{n+1}(y_{n+1} - \hat y_{n+1}^k)$.
Gain matrix: $G_{n+1} = S_X S_Y^T (R + S_Y S_Y^T)^{-1}$, with $S_X = \frac{1}{\sqrt{K-1}}[\hat x_{n+1}^{(1)} - \bar x_{n+1}, \dots, \hat x_{n+1}^{(K)} - \bar x_{n+1}]$ and $S_Y$ built likewise from the $\hat y_{n+1}^k$.
Complexity: $O(K^2 d)$.
Cycle: posterior ensemble $x_n^{(k)}$ at $t = n$ → forecast (Monte Carlo) → prior ensemble $\hat x_{n+1}^{(k)}$ + observation at $t = n+1$ → assimilate → posterior ensemble $x_{n+1}^{(k)}$.
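A minimal sketch of the step above (perturbed-observation EnKF in NumPy; the model map `Psi`, observation map `h`, model-noise square root `Sigma_sqrt` and observation covariance `R` are user-supplied placeholders):

```python
import numpy as np

def enkf_step(X, y, Psi, h, Sigma_sqrt, R, rng=np.random.default_rng(0)):
    """X: d x K ensemble at time n; returns the analysis ensemble at time n+1."""
    d, K = X.shape
    # Forecast: x_hat^k = Psi(x^k) + xi^k, y_hat^k = h(x_hat^k) + zeta^k
    Xf = np.column_stack([Psi(X[:, k]) for k in range(K)]) + Sigma_sqrt @ rng.standard_normal((d, K))
    Yf = np.column_stack([h(Xf[:, k]) for k in range(K)])
    Yf = Yf + np.linalg.cholesky(R) @ rng.standard_normal(Yf.shape)
    # Spread matrices S_X, S_Y: deviations from the ensemble mean, scaled by 1/sqrt(K-1)
    SX = (Xf - Xf.mean(axis=1, keepdims=True)) / np.sqrt(K - 1)
    SY = (Yf - Yf.mean(axis=1, keepdims=True)) / np.sqrt(K - 1)
    # Gain G = S_X S_Y^T (R + S_Y S_Y^T)^{-1}, then x^k = x_hat^k + G (y - y_hat^k)
    G = SX @ SY.T @ np.linalg.inv(R + SY @ SY.T)
    return Xf + G @ (y[:, None] - Yf)
```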

24 Success of EnKF
Successful applications to weather forecasting: system dimension $d \sim 10^6$, ensemble size $K \sim 10^2$. Complexity of the Kalman filter: $O(d^3) = O(10^{18})$. Complexity of EnKF: $O(K^2 d) = O(10^{10})$. No computation of gradients is needed.
EnKF also finds applications in oil reservoir management, Bayesian IP, and deep learning.

25 EnKF in high dimension
The theoretical curse of dimensionality comes from another angle: the covariance of $x$ is estimated by the sample covariance
$\bar x = \frac{1}{K}\sum_k x^k$,  $\hat C = \frac{1}{K-1}\sum_k (x^k - \bar x)(x^k - \bar x)^T$.
If $x^k \sim N(0, \Sigma)$, then $\|\hat C - \Sigma\| = O(\sqrt{d/K})$ (Bai-Yin's law), which is large when the sample size $K$ is much smaller than the dimension $d$.
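This $O(\sqrt{d/K})$ behaviour is easy to observe numerically. A small sketch with $\Sigma = I_d$, so the true covariance is known exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
K = 50                                          # small ensemble size
for d in [10, 100, 1000]:
    X = rng.standard_normal((K, d))             # x^k ~ N(0, I_d), one sample per row
    C_hat = np.cov(X, rowvar=False)             # sample covariance (1/(K-1) normalization)
    err = np.linalg.norm(C_hat - np.eye(d), 2)  # spectral-norm error
    print(d, err, np.sqrt(d / K))               # error grows roughly like sqrt(d/K)
```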

26 Local interaction
High dimension often comes from dense grids, and the interaction is often local, as in a PDE discretization.
Example: the Lorenz 96 model
$\dot x_i(t) = (x_{i+1} - x_{i-2})\, x_{i-1} - x_i + F$, $i = 1, \dots, d$.
Information travels along the interactions and is dissipated.
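For reference, the Lorenz 96 right-hand side and a simple RK4 integrator (a minimal sketch; the forcing `F`, the dimension and the step size are free parameters):

```python
import numpy as np

def lorenz96_rhs(x, F=8.0):
    """dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F, with cyclic indices."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt, F=8.0):
    k1 = lorenz96_rhs(x, F)
    k2 = lorenz96_rhs(x + 0.5 * dt * k1, F)
    k3 = lorenz96_rhs(x + 0.5 * dt * k2, F)
    k4 = lorenz96_rhs(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: spin up a d = 40 state
x = 8.0 + 0.01 * np.random.default_rng(3).standard_normal(40)
for _ in range(1000):
    x = rk4_step(x, 0.01)
```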

27 Sparsity: local covariance
[Figure: correlation matrix of the Lorenz 96 model.]
Correlation depends on information propagation and decays quickly with distance. The covariance is localized with a structure $\Phi$, e.g. $\Phi(x) = \rho^x$:
$[\hat C]_{i,j} \approx \Phi(|i - j|)$,
where $\Phi(x) \in [0, 1]$ is decreasing. The notion of distance can be general.

28 Covariance localization
Implementation: Schur (entrywise) product with a mask, $[\hat C \circ D^L]_{i,j} = [\hat C]_{i,j} [D^L]_{i,j}$; use $\hat C \circ D^L$ instead of $\hat C$ as the covariance.
$[D^L]_{i,j} = \phi(|i - j|)$, with a localization radius $L$.
Gaspari-Cohn matrix: $\phi(x) = \exp(-4x^2/L^2)\, 1_{|i-j| \le L}$. Cutoff matrix: $\phi(x) = 1_{|i-j| \le L}$.
Application in EnKF: use $\hat C_n \circ D^L$ instead of $\hat C_n$. Essentially: assimilate information only from near neighbors.
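A minimal sketch of these masks and the Schur product in NumPy (the Gaussian-shaped mask follows the slide's formula, used here as a simple stand-in for the full Gaspari-Cohn taper):

```python
import numpy as np

def cutoff_mask(d, L):
    """Cutoff matrix: [D^L]_{ij} = 1 if |i - j| <= L, else 0."""
    idx = np.arange(d)
    return (np.abs(idx[:, None] - idx[None, :]) <= L).astype(float)

def gaussian_mask(d, L):
    """Slide's Gaspari-Cohn-style mask: [D^L]_{ij} = exp(-4|i-j|^2 / L^2) 1_{|i-j| <= L}."""
    idx = np.arange(d)
    dist = np.abs(idx[:, None] - idx[None, :])
    return np.exp(-4.0 * dist**2 / L**2) * (dist <= L)

def localize(C_hat, D):
    """Schur (entrywise) product: use C_hat * D in place of C_hat."""
    return C_hat * D
```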

29 Advantage with localization
Theorem (Bickel, Levina 08): let $x^1, \dots, x^K \sim N(0, \Sigma)$ with sample covariance $\hat C = \frac{1}{K}\sum_{k=1}^K x^k (x^k)^T$. There is a constant $c$ such that for any $t > 0$,
$P\big(\|\hat C \circ D^L - \Sigma \circ D^L\| > \|D^L\|_1 t\big) \le 8 \exp\big(2\log d - cK \min\{t, t^2\}\big)$.
This indicates that $K \gtrsim \|D^L\|_1^2 \log d$ is the necessary sample size. $\|D^L\|_1 = \max_i \sum_j |D^L_{i,j}|$ is independent of $d$; e.g. for the cutoff/banding matrix $[D^L_{\mathrm{cut}}]_{i,j} = 1_{|i-j| \le L}$, $\|D^L_{\mathrm{cut}}\|_1 \approx 2L$.
Theorem (T. 18): if EnKF has a stable localized covariance structure, the required sample size is of order $\log d$.

31 From EnKF to MCMC?
Localization for EnKF: update only in small local blocks; it works when the covariance has a local structure. How can localization be applied to MCMC? How do we update $x_i$ component by component?
Gibbs sampling implements this idea exactly. Write $x^i = [x_1^i, x_2^i, \dots, x_m^i]$, where each block $x_k^i$ can be of dimension $q$, so $d = qm$. Then:
Generate $x_1^{i+1} \sim p(x_1 \mid x_2^i, x_3^i, \dots, x_m^i)$.
Generate $x_2^{i+1} \sim p(x_2 \mid x_1^{i+1}, x_3^i, \dots, x_m^i)$.
...
Generate $x_m^{i+1} \sim p(x_m \mid x_1^{i+1}, x_2^{i+1}, \dots, x_{m-1}^{i+1})$.

33 How efficient is Gibbs?
First test with $p = N(0, I_d)$: every conditional is $N(0, I_q)$, so each block update $x_j^{i+1} \sim p(x_j \mid \text{rest}) = N(0, I_q)$ is drawn independently of the other blocks.
Gibbs naturally exploits the independence between components and works efficiently regardless of the dimension. What about components with sparse/local dependence?

35 Block tridiagonal matrices
Local covariance matrix $C$: $[C]_{i,j}$ decays to zero quickly as $|i - j|$ becomes large.
Localized covariance matrix $C$: $[C]_{i,j} = 0$ when $|i - j| > L$, i.e. $C$ has bandwidth $2L$. We will see that "local" is a perturbation of "localized".
Choosing the block size $q = L$ in $x^i = [x_1^i, x_2^i, \dots, x_m^i]$ makes $C$ block tridiagonal.

36 Localized covariance
Theorem (Morzfeld, T., Marzouk): apply the Gibbs sampler with block size $q$ to $p = N(m, C)$ and suppose $C$ is $q$-block-tridiagonal. Then the distribution of $x^k$ converges to $p$ geometrically fast in all coordinates, and $x^k$ can be coupled with a sample $z \sim N(m, C)$ such that
$E\|C^{-1/2}(x^k - z)\|^2 \le \beta^k d\,(1 + \|C^{-1/2}(x^0 - m)\|^2)$,
where the rate $\beta < 1$ depends only on the condition number of $C$.
Localized covariance + mild conditions ⇒ dimension-free convergence.

37 Localized precision
Theorem (Morzfeld, T., Marzouk): apply the Gibbs sampler with block size $q$ to $p = N(m, C)$ and suppose the precision matrix $\Sigma = C^{-1}$ is $q$-block-tridiagonal. Then the distribution of $x^k$ converges to $p$ geometrically fast in all coordinates, and $x^k$ can be coupled with a sample $z \sim N(m, C)$ such that
$E\|C^{-1/2}(x^k - z)\|^2 \le \beta^k d\,(1 + \|C^{-1/2}(x^0 - m)\|^2)$,
where again the rate $\beta < 1$ depends only on the condition number of $C$.
Localized precision + mild conditions ⇒ dimension-free convergence.

38 Covariance vs. precision
Why consider both localized covariance and localized precision? A lemma of Bickel & Lindner shows: localized covariance + mild conditions ⇒ local precision, and localized precision + mild conditions ⇒ local covariance. We will see that "local" is a perturbation of "localized".
For the computation of the Gibbs sampler, a localized precision matrix $\Omega$ is superior:
$x_j^{k+1} \sim N\Big(m_j - \Omega_{j,j}^{-1}\sum_{i<j}\Omega_{j,i}(x_i^{k+1} - m_i) - \Omega_{j,j}^{-1}\sum_{i>j}\Omega_{j,i}(x_i^k - m_i),\ \Omega_{j,j}^{-1}\Big)$.
When $\Omega$ is sparse, most terms vanish, meaning fast computation.
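The conditional update above translates directly into code. A minimal sketch of a Gibbs sweep for $N(m, \Omega^{-1})$ with block size $q = 1$ (dense NumPy for clarity; with a banded $\Omega$ each inner product involves only $O(L)$ nonzero terms):

```python
import numpy as np

def gibbs_sweep(x, m, Omega, rng):
    """One Gibbs sweep for N(m, Omega^{-1}), block size q = 1:
    x_j | rest ~ N( m_j - Omega_jj^{-1} sum_{i != j} Omega_ji (x_i - m_i),  Omega_jj^{-1} )."""
    for j in range(x.size):
        r = Omega[j] @ (x - m) - Omega[j, j] * (x[j] - m[j])   # sum over i != j
        mean_j = m[j] - r / Omega[j, j]
        x[j] = mean_j + rng.standard_normal() / np.sqrt(Omega[j, j])
    return x

# Example: tridiagonal (localized) precision, d = 200
d = 200
Omega = np.diag(2.0 * np.ones(d)) + np.diag(-0.5 * np.ones(d - 1), 1) + np.diag(-0.5 * np.ones(d - 1), -1)
x, m, rng = np.zeros(d), np.zeros(d), np.random.default_rng(4)
for _ in range(100):
    x = gibbs_sweep(x, m, Omega, rng)
```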

40 Generalization
Gibbs works for Gaussian sampling with localized covariance or precision. What about the Bayesian inverse problem
$y = h(x) + v$, $v \sim N(0, R)$, $x \sim p_0 = N(m, C)$?
If $h$ is linear, $p$ is also Gaussian and Gibbs is directly applicable. What to do when $C$ etc. are not localized but only local?

41 Metropolis within Gibbs
Add Metropolis steps to incorporate the likelihood information:
Generate $x_1' \sim p_0(x_1 \mid x_2^i, x_3^i, \dots, x_m^i)$ and accept it as $x_1^{i+1}$ with probability
$\alpha_1(x_1^i, x_1', x_{2:m}^i) = \min\Big\{1,\ \frac{\exp(-\frac{1}{2}\|y - h(x')\|_R^2)}{\exp(-\frac{1}{2}\|y - h(x^i)\|_R^2)}\Big\}$, where $x' = (x_1', x_{2:m}^i)$.
Repeat for blocks $2, \dots, m$.
When $h$ has a dimension-free Lipschitz constant, $\|y - h(x')\|_R^2 - \|y - h(x^i)\|_R^2$ is independent of the dimension, so the acceptance rate is dimension independent. This should give fast convergence, though a proof is unclear.
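A minimal sketch of one such block update (the block prior-conditional sampler `sample_prior_conditional`, the observation map `h` and the weighting `R_inv` are placeholders to be supplied for a concrete problem):

```python
import numpy as np

def mwg_block_update(x, block, y, h, R_inv, sample_prior_conditional, rng):
    """Propose the block from the prior conditional, accept/reject with the likelihood ratio
    alpha = min{1, exp(-0.5 ||y - h(x')||_R^2) / exp(-0.5 ||y - h(x)||_R^2)}."""
    x_prop = x.copy()
    x_prop[block] = sample_prior_conditional(x, block, rng)   # x_block' ~ p0(x_block | rest)

    def neg_half_misfit(z):
        r = y - h(z)
        return -0.5 * r @ R_inv @ r

    log_alpha = neg_half_misfit(x_prop) - neg_half_misfit(x)  # prior proposal cancels in the ratio
    if np.log(rng.uniform()) < log_alpha:
        return x_prop
    return x
```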

42 Localization
Often $\Omega$ and $h$ are local: $[\Omega]_{i,j}$ decays to zero quickly as $|i - j|$ increases, and $[h(x)]_j$ depends significantly on only a few $x_i$. Fast sparse computation is possible when these parameters are localized (exactly sparse).
Localization: truncate the near-zero terms, $\Omega \to \Omega^L$, $h \to h^L$. We call MwG with localization l-MwG.
Theorem (Morzfeld, T., Marzouk 2018): the perturbation to the inverse problem is of order
$\max\big\{\|\Omega - \Omega^L\|_1,\ \|(H - H^L)(H - H^L)^T\|_1\big\}$, where $\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^n |A_{i,j}|$.

43 Comparison with function-space MCMC
Function-space MCMC: the discretization refines while the domain stays constant; the number of observations is constant; the effective dimension is constant; low-rank priors; low-rank prior-to-posterior update; solved by dimension reduction.
MCMC for local problems: the domain size increases while the discretization stays constant; the number of observations increases; the effective dimension increases; high-rank, sparse priors; high-rank prior-to-posterior update; solved by localization.

44 Example I: image deblurring
Truth: $x \sim N(0, \delta^{-1} L^{-2})$, where $L$ is the Laplacian. Defocus observation: $y = Ax + \eta$, $\eta \sim N(0, \lambda^{-1} I)$. The dimension is large, $O(10^4)$.

45 Example I: image deblurring (continued)
The precision matrix is sparse; the effective dimension is large.

46 Example II: Lorenz 96 inverse problem
Truth: $x_0 \sim p_0$, where $p_0$ is the Gaussian climatology. Forward map $\Psi_t : x_0 \mapsto x_t$, with $\dot x_i = (x_{i+1} - x_{i-2})\, x_{i-1} - x_i + 8$. Observe every other coordinate of $x_t$: $y = H(\Psi_t(x_0)) + \xi$.

47 Summary
Bayesian IP provides more information for UQ and is often solved by MCMC algorithms. Data assimilation (DA): Bayesian IP with dynamical systems. EnKF: an efficient solver for high dimensional DA. Localization: a useful technique in high dimensional settings.

48 References
Inverse problems: a Bayesian perspective, A. M. Stuart, Acta Numerica.
Localization for MCMC: sampling high-dimensional posterior distributions with local structure, arXiv preprint.
Performance analysis of the local ensemble Kalman filter, to appear in J. Nonlinear Science.
Thank you!
